Factually! with Adam Conover - Why Self-Driving Cars Aren’t Coming Any Time Soon with Dr. Missy Cummings

Episode Date: June 23, 2021

Tesla and other automakers have convinced the public that fully automated vehicles are just around the corner. But what if … they aren't? Dr. Missy Cummings, AI researcher and director of the Humans and Autonomy Laboratory at Duke, joins Adam to detail the massive gap between Silicon Valley's promises and the technology's limitations, and to explain the real benefits that might come when we use AI to enhance human capability rather than replace us. Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Transcript
Starting point is 00:00:00 You know, I got to confess, I have always been a sucker for Japanese treats. I love going down to Little Tokyo, heading to a convenience store, and grabbing all those brightly colored, fun-packaged boxes off of the shelf. But you know what? I don't get the chance to go down there as often as I would like to. And that is why I am so thrilled that Bokksu, a Japanese snack subscription box, chose to sponsor this episode. What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds. Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Starting point is 00:00:29 Plus, they throw in a handy guide filled with info about each snack and about Japanese culture. And let me tell you something, you are going to need that guide because this box comes with a lot of snacks. I just got this one today, direct from Bokksu, and look at all of these things. We got some sort of seaweed snack here. We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
Starting point is 00:01:15 chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this is so much fun. You got to get one of these for themselves and get this for the month of March. Bokksu has a limited edition cherry blossom box and 12 month subscribers get a free kimono style robe and get this while you're wearing your new duds, learning fascinating things about your tasty snacks. You can also rest assured that you have helped to support small family run businesses in Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight to your door.
Starting point is 00:01:45 So if all of that sounds good, if you want a big box of delicious snacks like this for yourself, use the code factually for $15 off your first order at Bokksu.com. That's code factually for $15 off your first order on Bokksu.com. I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything. Hello, everyone. Welcome to Factually. I'm Adam Conover. You know, before we get started, I just want to tell you about a new project that I got that is out right now. It is a narrative comedy podcast called Edith, all about Woodrow Wilson's wife, Edith Wilson, who, incredibly, not many people know this story, actually practically ran the American government after Woodrow had a stroke. It's an incredible story from history. This is a very funny satirical podcast about that untold story.
Starting point is 00:02:49 It was created and written by Gonzalo Cordova and Travis Helwig, two original Adam Ruins Everything writers and very good friends of mine, incredibly funny guys. And it stars Rosamund Pike and me. That's right, I'm in the show with Rosamund. We had a great time. Actually, we never met. We recorded it at home in our closets during the pandemic.
Starting point is 00:03:10 But it came out fantastically. And it is out now wherever you get your podcasts. Look up Edith with an exclamation point or go to EdithPodcast.com. Check it out. Now, on to today's episode. Check it out. Now, on to today's episode. You know, the pace of innovation in the tech sphere in my lifetime has been so swift that we have come to expect that basically anything is possible, right? I mean, now that you can summon a hamburger from a little rectangle in your hand, you literally have powers they did not have on Star Trek. It is difficult to believe that, you know, anything couldn't happen, right? Basically, any promise the tech industry makes us, we tend to swallow, even though over and over again, those promises turn out to be wrong.
Starting point is 00:03:55 Like the iPhone, very impressive. But so much of the time, in reality, the biggest promises made to us by our technological innovators and by even people writing in the media don't come to pass. We are, as a species, incredibly bad at predicting our future, despite the fact that we feel we know what is just around the corner. You know, there's this term that's been going around on the Internet for a while called retrofuturism. Maybe you've heard of it. It's the idea of looking at the vision that Americans had in the 50s of what the future would be like. You know, you can picture it flying cars, monorails, video phones, you know, very Jetson stuff.
Starting point is 00:04:35 And we like to look at it and we like to laugh and say, ha ha, that didn't come to pass. People back then were so dumb. But today we make almost the exact same mistake. We are constantly swallowing these visions of the future given to us by people who don't really know. Like, let me give you one example that that really jumped out at me when it happened. Remember Google Glass, those dumb looking glasses with a little like piece of glass on the side that that Google said we're going to take over the world. We'd all be wearing them. Everyone bought this like the release of Google Glass was huge news. People literally thought that we would be walking around with these things on our face, despite the fact that
Starting point is 00:05:15 there was no real use for them. No one could describe what you were actually supposed to do with these things other than maybe see a little, I don't know, number in the corner of your vision telling you how many emails you had. I guess you could take photos with them. They were released. They sucked. Nobody wanted them and they ceased to exist. And that was 10 years ago. And yet we are still seeing so many articles about, oh, the future of AR, of augmented reality is coming, despite us still having no idea what we would actually use these things for in our daily lives. Another great example of this is VR. VR has been constantly touted as the future of entertainment that we'd all be living inside VR simulations. It dates back to
Starting point is 00:05:56 the 80s. And then, of course, in the last 10 years, there's been an immense resurgence of interest in VR with Oculus and the HTC Vive and all these things. But unfortunately, the VR revolution has not come to pass. I mean, you can buy the headsets, but in my case, I currently have two of them collecting dust in a box in my closet because wearing them makes me nauseous and there's nothing good to do on them. You can play one fun game for half an hour and then you're like, I feel sick. I never want to do this ever again in my fucking life.
Starting point is 00:06:29 Now, that's not to say that these things will never come to pass, that they'll never go mainstream. After all, it took video chatting decades from conception to eventually become reality. But the fact remains that everyone who up until now has been beating their drum saying these things are about to become huge and transform society. All those people have been wrong. And yet, despite their terrible track record, we end up believing them anyway. We want to believe that there is an incredible future right around the corner that's going to transform our world, despite the fact that we're almost always wrong. The examples I just quoted, not that big a deal, right? I mean, they're just consumer tech products. Who cares? Why not believe a little bit if you're let down?
Starting point is 00:07:09 No big whoop, right? Well, unfortunately, this same pattern also applies to bigger transformations in our society. For instance, let's talk about self-driving cars. People today believe, they believe as a matter of faith that fully self-driving cars are just around the corner and that we need to prepare for that future. Companies are telling us that this future is on our doorstep, but it still doesn't exist. Tesla advertises a mode called autopilot, but that mode is only supposed to be used with a fully attentive driver, which means that a lot of people believe that the car can do things that it can't really do because of that halo effect that makes us want to believe that technology is capable of more than it is. And if you look at the broader landscape of self-driving, you start to see a lot of companies
Starting point is 00:08:01 sucking up venture capital money and a lot of press attention, but that they've produced nothing but a couple of cool pilot projects and a bunch of blown deadlines, not to mention some unfortunate deaths. When it comes to something as important and dangerous as our transportation system, we need to have a lot more skepticism about what we're willing to swallow as being just around the corner. And no one articulates that better than our guest today. Her name is Missy Cummings. She's an engineering professor who directs Duke's Humans and Autonomy Lab. She is an AI researcher, and she makes a compelling case that the claims made by the proponents of self-driving cars and the companies that claim to be producing them
Starting point is 00:08:46 are vastly overblown and do not reflect the actual technology that we have available to us or the technology that's coming in the next couple years. She makes the case very compellingly that this entire industry is making promises that it simply cannot keep and that the vision of the future that a lot of people have is not going to come to pass at least anytime soon. So look, with all that being said,
Starting point is 00:09:11 let's get to the interview so you can hear from it herself. Please welcome Missy Cummings. Missy, thank you so much for being here. Thanks so much for having me. Let's jump right into it. There's so much talk right now about how we're going to be getting self-driving cars in the next couple of years. You're a skeptic on this topic. Tell me why. Well, I'm a skeptic because I do this research every day in my lab at Duke University. And so I see the realities of what is and is not possible. And I'm not saying there's no chance that self-driving is going to come in the next couple of years, but the way it will come is going to be substantially in smaller
Starting point is 00:09:51 markets and many limited applications as compared to the dream that everyone has to jump in your car, jump in the backseat and tell the car to take you to Vegas. And this dream is really widespread. I mean, the dream is literally being sold by some of the automakers. Tesla, you can, they currently charge you more if you want to have the ability to get this dream of self-driving later. So it's very much part of their entire marketing strategy. And it's very much, the public truly believes it. Like people are planning their calendars around the fact that they believe they're going to have that ability
Starting point is 00:10:29 in the next couple of years. Why do you think the reality is going to look so different? What are, what are, what are we all missing? Well, the, the big problem is centered on what we call perception problems. So perception systems in self-driving cars or flying cars or really any system that uses artificial intelligence, the perception systems are the long pole in the tent, meaning that everything relies on them working correctly. But the problem is that we really still do not know what we're doing when it comes to artificial intelligence and quote unquote seeing. Computer vision systems are still extremely brittle, meaning that they can not work in unexpected ways. And the best illustration I can have your listeners go look at is there's a YouTube video of a Waymo One car that had to be rescued when it got stuck by a single orange
Starting point is 00:11:34 construction cone. I saw this. Yeah, it freaked the car out. And eventually the rider had to be rescued by humans. So we're still learning a lot. And I tell people if we're still doing basic research on a technology like computer vision systems, it's probably not ready for commercialization. Yeah. I mean, I'm so struck by, I saw that video and I've seen lots of videos of self-driving cars and they're sort of like vision overlaid over what, you know, we're looking at the camera and it's what they're identifying it as. And it'll be like it'll draw a box around a person and be like person, person, person, tree, person, person. You can see the brittleness that you describe is is like extremely apparent. And also it strikes me as like what to what to do with that information is then even more
Starting point is 00:12:26 difficult that once it like this video is very striking it like the car literally what it sees the orange cone and it stopped it like pulls in the wrong direction. It just sort of entirely doesn't know how to its entire reasoning system ground to a halt because it saw something that it extremely briefly couldn't identify. That's correct. And there are other ways, other things, events that we've had that will give you that same information. So one of the issues that we're really worried about from a cybersecurity perspective is this idea of passive hacking.
Starting point is 00:13:03 The way that these convolutional neural nets work, these are the algorithms that power a computer vision system. You have to, quote unquote, teach them using potentially millions of images. So for example, to make sure that the car understands what a stop sign is, I have to show it a thousand different examples of stop sign or a million different examples of a stop sign. But if it sees a stop sign with a half inch of snow on it and it's never seen that before and has not been trained to see it, then it won't know what that is. So that orange construction cone, it just appeared in a slightly different configuration than it was trained against. I'm quite sure that Waymo did train their cars to see the cone. But the problem with passive hacking is if we know that there are these vulnerabilities,
Starting point is 00:13:55 then people can use them to exploit the vulnerabilities. And so Mobileye went out and put just a piece of black tape on a 35 mile per hour stop sign. It extended the three out just a little bit. And the Tesla came by and interpreted the sign as 85 miles per hour. Wow. And started to speed up to achieve that. Yeah. Yeah. So that's bad.
Starting point is 00:14:28 Yeah. Yeah. So that's bad. Yeah. I mean, I, I've seen lots of examples of, you know, on the internet viral videos of, you know, here's a, here's a mask that you can wear to fool facial recognition cameras like this sort of, it's like a video game. If I understand the way a video game works, having grown up with video games, I can sort of, oh, here's how the game logic works. If I do this, I'll be able to trick it. It's like a very common thing that humans do once we understand technological systems. Of course, we'll start doing that once, if we have self-driving cars roaming around the streets, people will understand how to sort of like manipulate their mental models. And that's bad. If you're in a car, if you're in a self-driving car, if someone's able to change what your car does simply by baffling it with, you know, some unusual stimuli, that's,
Starting point is 00:15:15 yeah, that's really bad. I mean, are we just looking to do something that is too difficult here. I mean, I understand why AI researchers would think, hey, this isn't so hard. Like driving is a system with rules, especially in the United States where we have a very rule following traffic culture. We've got signage everywhere. The signage is universal. I understand why that would seem like an easy problem to solve. But then whenever I watch one of these videos, I'm like baffled by how complex the job actually is. Like I was watching one in which a car, I think it was a Tesla pulled up behind, like in the middle of the street, there were some people loading a van with, you know, they had double parked and they were loading a van with some stuff briefly. And if I'm a human driver, I'm like, OK, I understand who these people are, what they're doing. I understand that they might suddenly
Starting point is 00:16:07 walk into the street, but probably pretty slowly. So I'll just give them a wide berth. I'll wait, you know, like there's a whole social dimension to this that is extremely complex. And I looked at it and went, how could a Tesla possibly understand all of those things. Like this is a novel situation. And it, but we, as we don't understand how humans, how the human mind processes such a thing that well. So are we being like incredibly hubristic to think that we can design a machine that can, that can handle all this? Yeah, well, I'm a futurist. I mean, I'm an academic professor, and it's my job to look in a crystal ball and try to start working on technologies five to 25 years before you might ever see them. So, you know, I wear two hats here. The first hat is I'm very optimistic that we will create new technologies that are beneficial to society or and even beneficial to companies' wallets.
Starting point is 00:17:08 to society or and even beneficial to companies' wallets. But I have to wear another hat that says, okay, but we need to be realistic. And academics, I think, bear responsibility to notify the public when perhaps technology is kind of escaping the barn a little too quickly before it's been properly vetted. And indeed, I think that's what's happened here is that we've got some basic research technologies that have been overhyped by Silicon Valley and by some other academics because people want to capitalize on this big VC startup culture. So I think there was a big craze about six, eight years ago where people started jumping into the self-driving space. And this goes to the Silicon Valley mantra of fake it till you make it. And I think we're seeing this still with Tesla. It is just in
Starting point is 00:18:01 the Silicon Valley ethos to over-promise and then hope that your technology development can move along very quickly to live up to your promises. And I'm not really even blaming people for that. You know, if we're talking about cell phone apps, I'm good with that. You can over-promise because it's just my cell phone app. But I think what we're seeing is the fake it till you make it culture is bleeding over into safety critical technologies. And it's just simply not going to work. And it is true. I just wonder.
Starting point is 00:18:31 I mean, there are so many, just like you, I hear you say, like, there are people who really believe that it's going to happen tomorrow. There are people who believe that. There are very smart people who believe that. There are very smart people who believe it and who should know better. And what that tells me is the psychology of wanting to believe that something magical can happen is so strong that it makes people who should otherwise know better completely divorce themselves from reality. But you can't blame the public for for wanting to believe it, especially because, you know, the tech industry has transformed our lives in so many beneficial ways over the last 30 years. You know, the the you know, the the smartphone, the iPhone is a thing from the future.
Starting point is 00:19:20 And we've you know, that was what it was like when we all received it. And it's transformed our society in many beneficial ways, some non-beneficial ways as well. But, you know, our expectations have been justifiably raised in a lot of ways. But yeah, I mean, you're right. It feels to me irresponsible when you're making these promises in an arena where, OK, people die in in transit every day. People die on our roads. And I don't want to overhype you. I think there's a danger in overhyping these cases where, you know, someone died in a Tesla self-driving car. I, you know, certainly I'm of the opinion that their their marketing is overhyped in an irresponsible way. But do you share that worry that, hey, these some of these companies are making specific marketing decisions that are leading to unsafe outcomes?
Starting point is 00:20:10 Oh, absolutely. In the Missy Cummings blame tree, I'd like to tell you that I'd like to be, everyone to be blameless. My personality profile is an ENTJ, so I just run around judging everyone. My personality profile is an ENTJ, so I just run around judging everyone. I apologize if it's not deserved. So, you know, I fault the car companies the most because they are making huge claims. And, you know, and I fault the public the least because if you're being told that your car has a full self-driving chip and, you know, how many of us read the fine print? No one, you know, it's easy to see how people can develop these incorrect mental models about what their car can do. Uh, but you know, while I blame the companies, number one, I think that the government bears a huge responsibility, too, because they have the power to step in and stop these technologies or at least kind of rein them in a little bit.
Starting point is 00:21:17 Indeed, in Europe, Tesla cannot market its autopilot as autopilot. It cannot call any system full self-driving. So in other parts of the world where Tesla operates, they have been reined in and we don't see as many egregious accidents in other parts of the world that we do in America, right? So, you know, but there's plenty of blame to go around. I do think that that users need to understand that their cars need their attention but we've all we also know from years and years of aviation research that if automation performs pretty good that that's actually more dangerous than if it were to perform terribly because if you if your system is driving almost all the time, it does a great job.
Starting point is 00:22:09 And only every now and then it will fail at whatever task. That sets you up to be complacent. And it sets you up to look at that cell phone. We're all so easily distracted. And indeed, in this last year where accidents have gone way up, despite the fact that people have driven fewer miles, I suspect that this is actually the beginning of a larger trend that is not just related to COVID, where we are just increasingly becoming multitaskers and we feel that we're so bored, even if we're just driving, that it're so easy to look at your phone. So
Starting point is 00:22:45 easy to try to get that information that you seek. And that's what is so, you know, the thing that frustrates me the most about, again, these high profile cases where someone dies in a Tesla that was an autopilot and Tesla itself has said, oh, this guy was using his iPad when he crashed. this guy was using his iPad when he crashed. And to me, it's like, this is, yeah, he was doing what you promised your software would allow him to do,
Starting point is 00:23:10 which was take his mind off the, what is the point if not to allow him to be distracted? What's the point of putting basically an iPad in the, I've been, I've been in a Tesla with someone who's had the self, the autopilot on, on the freeway and they're dicking around with the iPad. And I'm like,
Starting point is 00:23:24 this, this feels very dangerous to me. And they're like, nah, it's going to be fine. Like, and that's why they put, that's why they made it an entertainment center. And, you know, as opposed to taking some amount of responsibility for the fact that, you know, Hey, we, you know, maybe there's a gap or, you know, maybe the, because in a lot of cases, you know, there, these are cases in which autopilot has, you know, perhaps steered someone into a median or, or made an error that led to a death that required that user interruption. Our human desire to trust technology happens so quickly.
Starting point is 00:23:55 Like it's the it's the phenomenon of, you know, someone following their GPS directions into a lake. Right. That sort of like famous version of this. Or I don't know, my favorite example is I read an anecdote once from race directors, from people who run marathons who like that now after everyone is wearing a Fitbit, every time they have a marathon, someone email, like dozens of people email the race director and says, your race actually was too long or too short because my Fitbit said it wasn't 26.2, it said it was 27 miles. And these people don't, it doesn't occur to them to think, oh, the $50 GPS tracker I have on my wrist maybe isn't 100% accurate. They think, no,
Starting point is 00:24:35 it is accurate and it's the professional who's wrong. And that is very human to always do that. And that seems very dangerous in this case to me. very human to always do that. And that seems very dangerous in this case to me. Yes, I think humans can be predisposed to overtrusting technology, but certainly in my own research that spans different domains and different applications, kind of universally, we see young people or younger people like you trust the technologies way more older people like me who are curmudgeonly you know there is there's a great place for curmudgeons in the world and i've got news for you you'll get there too um i think i think i qualify now i think it is amazing though that how much trust and faith younger people, they're just digital natives.
Starting point is 00:25:27 Yeah. So this is a world that is much more familiar to them. And we see a lot of overtrust of, you know, especially a car technology. And I do want to point out, you know, going back to the blame game, I don't want to leave anyone blameless here. We've picked a lot on Tesla in this conversation. But I think the new big area that I'm concerned with is what all other car manufacturers are starting to call their technologies that are similar to Tesla's autopilot. So, you know, just look at any car and they'll start billing their new hands-free capability. So, you know, hands, calling your technology hands-free and then telling people they have to pay perfect attention.
Starting point is 00:26:27 You know, it's maybe not quite as egregious as calling your system autopilot, but still using the phrase, you can be hands-free while driving our car is setting people up to be mind-free. And so I think that this is, it's a problem because we, we will believe what the, you know, if you're telling me I can drive hands-free, great. Even though there's a camera theoretically monitoring what I'm doing. But if you're still giving people up to 30 seconds on the freeway to be hands-free, you'll get mind-free after two seconds that their hands leave the steering wheel. So I think we're perhaps in the most dangerous point of history in driving where we're in this quasi automated role, we will be so much better once we get to automation everywhere, including in the cars and in the infrastructure. But for right now, you know, I think we're almost moving back to
Starting point is 00:27:17 the days where before we had taillights because, you know, it's just a wild, wild west out there. If we're going to have all this automation, all these incorrect mental models, you know, it's just a wild, wild west out there. We're going to have all this automation, all these incorrect mental models, you know, some people hands-free, lots of people mind-free, you know, it's, I think what this is begging for, and I definitely think this is what's going to happen, is there's going to be more regulation. Yeah, but it depends on what the regulation is, you know, I mean, my understanding is that NHTSA, our government agency that regulates these things, has been trying to get ahead of the industry, trying to say, OK, this is going to happen. Let's help make it happen in a way that to me certainly doesn't seem like they are. They're keeping a really, really close eye on it in the way that I would like to. How do you feel about about that piece of it? Well, it's we're also in a weird space where we just had a changeover of administrations. So it's I think it's too early to start pointing fingers at the Biden administration.
Starting point is 00:28:22 But, you know, we can look back at what happened under the Trump administration. And there was never a director ahead, the administrator of the National Highway Traffic Safety Administration. So basically, the agency that was responsible for safety and driving was, you know, there was an absent leader for the last four years. And I think that that has, you know, is a big reason why we're seeing the vacuum that we're seeing now. And we can compare ourselves again to Europe where Europe has been much more hands-on in terms of regulatory and by hands-on, I don't necessarily mean they
Starting point is 00:29:00 haven't introduced a lot of regulation as much as they've introduced things like you can't advertise your car as autopilot. You can't advertise full self-driving. Right. So and so I think that there's a lot that we can learn from other countries. But we are also seeing I read an article the other day where currently 30 Tesla accidents are now under investigation by NHTSA. So I can't imagine that with those 30 accidents under investigation, something's going to happen. There'll be some recommendation that comes out as a result of that. Well, let's, you know, I feel like we moved very quickly into the social consequences.
Starting point is 00:29:40 I actually want to ask you more about the AI piece of it and how the technology that we have, especially around perception, differs from our imagination of how it works. Let's talk more about what is missing from AI when it comes to self-driving that these systems are simply incapable of that humans do very easily. systems are simply incapable of that humans do very easily. So what I'm about to say applies broadly to all AI, not just driving AI. Fundamentally, what we're missing from a science perspective is replicating human judgment and reasoning. So when we try to build computer vision systems, we are equating two
Starting point is 00:30:26 cameras that can give you stereo vision, just like your eyes can. So that's how they're doing it from a sensor perspective. Then the quote unquote brain for these systems is this huge huge collection of neural net models that say, if I see this thing, then I kick off a set of rules that go along with it. And I've trained using millions of images, I've trained the car to recognize these different objects and potentially different situations. But because the situations can be rapidly evolving and can be incredibly dynamic, that causes problems for the vision system, which is not really connected to a brain like humans have. I think the greatest creation of all time is the brain vision system because we have a constant feedback loop that's telling us how to
Starting point is 00:31:25 not just classify the world, but how to have imagination about what might happen. And that allows us, that predictive ability allows us to avoid a lot of bad situations. One of the key elements missing from self-driving systems today, bicyclists, boy, I mean, you're in real trouble if you're on a bicycle still around a self-driving car because bicycles can become obscured so quickly. And while we can see, a human can see a bike dodge behind maybe a FedEx truck and go up on the sidewalk and we can project that idiot on the bike, he's going to go up on the sidewalk and then he's going to come back and then get back in the car in the road in front of me. So I know to watch out for that.
Starting point is 00:32:11 We just can't do that in in a self-driving system. Maybe you could try to do that with both the vision systems and maybe prediction. and maybe prediction, but the number of ways that that could happen, the number of different presentations combined with weird sun angles can cause this to be, you know, a very unsolvable problem by automation, AI in the timeframes that you would have to solve it. Yeah. I mean, coming back again to that point of so much of our knowledge is social of what's going to happen. This is certainly the case driving around L.A. I drive around L.A. a lot less than I used to. I've talked on the show before about how I kind of quit driving, started taking a lot more public transit, but I did drive here for many years. And like driving in L.A. is almost like weaving through a subway station in New York city. Like there's
Starting point is 00:33:05 so much going on. You're aware of ideally at what, you know, every single car might do. And so, you know, you're literally looking at a car that might be parked to you on the side and going, ah, that, that car just parked, this person might open their door. And so I shouldn't get too close or, you know, uh, yeah, you know, this, this bike lane is blocked and this bicyclist is going to have to move around me. Or even just the thing of, you know, when you're, yeah, you know, this this bike lane is blocked and this bicyclist is going to have to move around me or even just the thing of, you know, when you're when you're driving next to a bicyclist and you feel a little more nervous and you drive a little more carefully because you see an unshielded person next to you and you've got a social response to them. Like there's all of this stuff that we're bringing from our the rest of our lives of knowing the way that people operate,
Starting point is 00:33:44 knowing seeing a person on the curb and being able to tell from their body language whether it looks like they're about to cross the street or not. Like, once you start thinking of it that way, yeah, there's so much going into this that it seems very... I'm not saying an artificial intelligence could never have all of that, but it starts to look like one of the most difficult things to model, not one of the easiest.
Starting point is 00:34:10 Right. Making reasonable guesses. And I tell people that that is truly important on so many levels. It is what is going to be the inability to put a lot more autonomy in medical applications. For example, real robots doing surgery, we aren't going to be able to get there for a long, long, long time in medical applications, not in my lifetime, because human anatomy can be so different from person to person that we just can't have an AI system that could have a good enough perception system to be able to cope with all that uncertainty. And that's really what it comes down
Starting point is 00:34:51 to is that the more uncertainty that you have in the world, the worse AI performs. It's funny because I do like to come out to LA every now and then, but as like every other person who visits LA, the traffic drives me bananas. I would actually tell you that's actually that the traffic jams of LA are a perfect place to do it where I think the only really good application of level three autonomy in cars is the slow crawl, right? So there it's called the traffic jam pilot and Honda in Japan is starting to roll this out where the car can drive itself under slow conditions and you can do whatever you want while the car creeps along in slow traffic. And then once the speed gets to some predefined level, the speed picks up again and you have to take over the car. So indeed, I think L.A. could be
Starting point is 00:35:45 served very well by that kind of technology. You know, the caveat to that is Audi did try to roll that out in Europe about a year or two ago, and they had to the system folded. It failed and it failed. They weren't able to roll it out because of legal and liability restrictions. And so what that tells me is that Audi just was not able to get their system to perform at high enough for liabilities. And, and, you know, they haven't said, but I suspect the real problem was the human handover. So yes, the car can crawl along in very slow traffic, but what happens if the car is doing that in some traffic jams in LA, I know I'd fall asleep. Yeah. And then, and then the car would just,
Starting point is 00:36:30 it would come to a stop and it wouldn't move until you woke up. And that would be a, you know, like that only would create more of a problem. Like you had people falling asleep on the 405, you know? So, you know, so there are some bigger ramifications. And I think that this variability in human behavior is it can be a big mystery to car manufacturers. Yeah, it's I mean, to a certain extent, it almost just seems like this fundamental mismatch of what we're trying to do. fundamental mismatch of what we're trying to do. Like, you know, automation seems wonderful if you're able to construct sort of an automated arena for the automation to work in. You know, if you're able to construct a factory that is everything is sort of separate, you know, we've got robots moving around on little tracks or whatever, you know, our human streets are like
Starting point is 00:37:22 designed to be anti-automated. We made a decision to not go with trains, which run on tracks, according to predictable schedules that keep human, keep the average person away from the machine, you know, and you step on it, you're allotted time. Instead, we decided let's build streets where everyone gets a machine that they get to use willy nilly, according to the whims of that they have at the moment, right? And because that's what we wanted. And now we're going to try to insert automated machines into literally the messiest possible human system where you've got tens of thousands of individual people doing things basically randomly based on like, oh my God, look, there's a Boba place. Let me
Starting point is 00:38:04 screech into the parking lot real quick. You know, we're just we're just like being being messy people. And that's the thing that we're trying to automate. That seems like kind of a kind of a big mismatch to me on a fundamental level. Well, I think your intuition is spot on. You know, if you want to be my graduate student now, you pass the first gate. Wow. I think you're right. I think it's absolutely correct that there is just a fundamental mismatch. Now, that doesn't mean we can't have nothing.
Starting point is 00:38:35 I do think that the slow speed shuttles that travel less than 25 miles per hour going short geofenced areas is a good idea. And the applications you would see there are last mile delivery. So I do think that you can get very safe systems that are doing slow speed deliveries in very well mapped areas that don't require a lot of upkeep to have to keep remapping them. And that's one of the other problems is, you know, for these cars to be able to work anywhere all the time, they have to have very detailed maps. And the labor cost to keeping up your maps, to keep them up to date,
Starting point is 00:39:18 to make sure that they were as detailed as they needed to be. I used to live in Boston. Boy, the orange cones would pop up in unexpected places all the time. You couldn't do that for real systems because you'd have to map all those systems in detail and then upload it to the cloud, make sure all the cars got it. So I just think that there's some infrastructure penalties that people weren't thinking about, There are some infrastructure penalties that people weren't thinking about, things that for last mile delivery don't become cost prohibited. But I also think we'll see maybe slow speed shuttles and indeed Vegas, you can do this.
Starting point is 00:39:59 You can take a self-driving shuttle from McCarran, the airport, down to the Strip. Great application. You could layer in some additional infrastructure in lights, you know, along the roadways. It's expensive, but if you're only having to lay in additional infrastructure for a few miles, then that makes sense. So I do think we're going to get something out of the self driving crazy bonanza that we've been seeing for the last six, eight years. I do think that within the next one to two years, you're going to see a lot of long faces. Yeah. Okay, well, your focus on infrastructure
Starting point is 00:40:30 actually gives me a question I really want to ask you, but we got to take a really quick break. We'll be right back with more Missy Cummings. I don't know anything. I don't know anything. Okay, we're back with Missy Cummings. So one of my worries that I have about this, you were talking about infrastructure that, you know,
Starting point is 00:40:54 for instance, they can build in Vegas to make a particular trip work better. One of the concerns I have, based on the way I've seen our societal conversation about self-driving cars shape up, is the rest of us, the rest of society being perhaps built too much around self-driving cars. You mentioned the, you know, the ability of people to, you know, put a piece of masking tape on a sign in order to fool a self-driving car. Certainly I can imagine, you know, people wearing different outfits, stepping into traffic could, could baffle a self-driving car. Certainly, I can imagine, you know, people wearing different outfits stepping into traffic could baffle a self-driving car. And I've, I actually even saw
Starting point is 00:41:31 an editorial, I can't remember where, I saw it about a year ago, but it was proposing, hey, maybe we should, in order to make the world easier for the poor self-driving car, because the technology is not really up to being a part of this human system where people can step out into traffic at any time and where there's lots of chaos and stuff. Maybe we should fence off our streets, make it more difficult for people to cross the street at an intersection, you know, make jaywalking physically impossible. Maybe we should, you know, basically turn every street into a boring company tunnel where, you know, it's just that we're reducing the amount of things that can happen on the street to make it easier for the cars. And that sounds bad to me, that I can imagine, I can easily imagine a world where
Starting point is 00:42:17 rather than making a transportation system that is more pedestrian friendly. I want to reduce deaths. Right. But I don't want to do that by making a world that disadvantages pedestrians even more and makes it even harder to get around unless you happen to be sitting in the backseat of a self-driving car, because now, you know, we've turned every street in Manhattan into a subway track that you are not allowed to cross. Do you have that concern? Oh, yes, indeed. These issues are what basically turned me in from a regular, stuffy professor of technology to a one-woman Don Quixote-esque attack against all the windmills of self-driving. But literally, I was in a meeting with a group of people that I do research here in North Carolina, and we were listening to people from the North Carolina Department of Transportation
Starting point is 00:43:21 start laying out plans for how they were going to literally, they were going to tear down some building garages and they were thinking about making parking lots on the edges of the downtown area because the idea is that your self-driving car would come in, drop you off, and then it would go to its self-driving car penalty box, wherever that was in some, you know, out of the, out of the way place. And it would wait there for you and, and, you know, start really planning, doing serious urban planning for no parking for cars. And, and first of all, while I would love to see fewer cars on the road, I am a big biker. I'm a big pedestrian. while I would love to see fewer cars on the road, I am a big biker. I'm a big pedestrian.
Starting point is 00:44:12 I would love to have more accessible public transportation. I was askance that my taxpayer dollars were about to start going into a future that I knew was not coming. So that's when I started engaging people and started to become a very mouthy broad, I'm sure people would say. But I just couldn't take it anymore. I couldn't take it because I work on these systems every day. I tell people this all the time, and I'm 100% dead serious. I would never get into a car, a self-driving car, than any of my students ever programmed. Because I know how, first of all, just the basic problems with convolutional neural nets and then all the mistakes that programmers can make in the development of these cars.
Starting point is 00:44:55 And there's just a phenomenal lack of testing to make sure that they're safe enough. So yeah, I absolutely think that people should not be having, you certainly shouldn't be spending taxpayer dollars to start trying to build the city that you think is going to come when self-driving cars get here in whatever year anyone has promised because it's not coming. coming. And I think that people, this is where, and I've pointed a lot of the fingers and I've made a lot of enemies by saying both at the federal and all state levels, these governments don't have anybody on their staffs that know what they're talking about when it comes to self-driving and AI in general. It's a bigger problem that we have in the country that anyone who's any good at AI goes to, you know, the sirens on the shore in Silicon Valley. And all the really top people are working strictly for the commercial market.
Starting point is 00:45:55 And it's just a supply and demand problem. people who are moving into governments at the federal or state level who can flag these problems and start to understand what's what is an overhype what is a fake it till you make a promise and what's real and what's the real timeline yeah i mean if you look at the success these companies have of going into a particular municipality and say hey we're gonna let's let's do a trial here let's i mean i think elon musk's Company is a different example, but it's the same phenomenon of, hey, here's a very wealthy person who speaks very
Starting point is 00:46:32 confidently about what the future is and these are civil servants who are like, yeah, I don't know. I'm not an AI researcher. They're not hearing the other side. So the vision, the problem that you just laid out, though, is actually even more realistic than the one I laid out. The you just laid out, though, is actually even more realistic than the one I laid out. The one I laid out is like, if we have self-driving cars, what
Starting point is 00:46:49 if we design our cities too much for them? You're saying, no, we're going to build infrastructure that it's not even going to connect to anything that exists because the technology is simply not there at all. So I'm curious, what do you see as, you know, we have all these companies saying, hey, this technology is five years away. It's five years away. What do you think is actually going to happen? So I think in the next five to 10 years, you will see the slow speed driverless shuttles. I have a few former students who work at a company called Neuro in the Bay Area.
Starting point is 00:47:20 So I think Neuro's model, I think Amazon's jumped in this game. So I think slow speed robotic last mile delivery will become a reality. You'll see more and more of these shuttles in limited areas. And maybe, you know, at a stretch in the southwest where there is no snow that can fall on a stop sign and confuse the you know you'll still you'll see some very limited robo taxi applications like uh waymo has i think the real question about the robo taxi problem is whether or not they can ever make it scalable and make a profit. So one of the hidden costs in these systems that people don't really realize and where I spend a lot of time in my research are these remote operation centers. So you have to have, you know, there is no such
Starting point is 00:48:18 thing as any autonomous vehicle ever in our world. You have to have humans overseeing them at some capacity. So when the Waymo car got stuck by the orange cone, there was a whole team. There was likely probably five to seven people who were involved in orchestrating the rescue of the passenger out of the backseat of the Waymo one car. You know, it's funny.'s uh it would make a hilarious tiktok uh meme um but i think i i think i actually saw it on a clip of it on tiktok like yeah well you know i mean it's just you know the fact of the matter is is that going to be scalable like every time a car gets uh stuck by a or orange cone or a stop sign that's you know somebody accident or didn't accidentally somebody spray painted something on the front of it.
Starting point is 00:49:06 And then you have to send out, you have to engage five to seven people to rescue the passenger. That's just not scalable. And so it's, I do think that it is still, you know, yet to be determined whether or not if the cost of the remote operation center, where you have to pay people a lot more because their skill sets are a lot higher really outweighs the cost of having
Starting point is 00:49:33 drivers. Yeah. I mean, it's so funny the way you describe that. Like, yeah, my understanding is for this Waymo trial in Arizona, it's like they've got the Waymo cars self-driving around, but they've also got what a fleet of just people in a van at all times that the cars are out like a mile away so that whenever one gets stuck, they can go nudge it. And it makes it look like basically a glorified Roomba, you know, that like,
Starting point is 00:49:59 Hey, you have a Roomba. It sure. It uses AI quotation marks to, to clean your living room. But you know that like it's going to get stuck under the couch and you got to go get it. You have to be one of my on the Waypoint podcast, which I love. They describe this once as being you have to be a Roomba foreman
Starting point is 00:50:15 when you have a Roomba. You have to keep an eye on the Roomba and make sure that it actually gets things clean and, you know, doesn't fall down the stairs or whatever. Maybe one of these companies comes up with one of these services. You know, Tesla says, OK, you can, doesn't fall down the stairs or whatever. Maybe one of these companies comes up with one of these services, like, you know, Tesla says, okay, you can turn on full self-driving, but it's only in what, a city where they've got a rapid response team and you're paying $100 a month in order to have the privilege of being able to call them and have them rescue you at any time. Like, this is not what, you know, this is not what we were promised. It starts to go very far from that. Now, you see, you and I could cook up a whole new business, right?
Starting point is 00:50:50 Because if they do charge you, we could come in and charge you in addition. You know, we're like, don't pay them $100, pay us $50 a month. And, you know, we can just track your cell phone. And whenever you're in, you know, it does raise this question about what kind of derivative technologies and companies and services with this technology spring up. You know, I do think that one of the interesting things looking forward to more and more autonomy in cars are, you know, are we going to have to have more driver education? Are you going to have to have adult driving schools? What's that going to have to have more driver education? Are you going to have to have adult driving schools? What's that going to look like? You know, if I used to be a fighter pilot for the Navy and I had to go to two years of flight school to learn how to fly these things,
Starting point is 00:51:34 you know, these cars are starting to become that level of advanced in terms of use and application. of use and application. So what are we going to do in the future when your car, you know, significantly exceeds your cognitive abilities? And the other area that I think is interesting that people don't realize, even in Arizona, this is a big deal. The sensors on the car must be kept clean because if you, you know, it just, you know, your eyes, if you get dirt in your eyes, you can't see. And so dust accumulates pretty quickly on these vision sensors and even the LIDAR. So now is there a whole new world of little tiny windshield wipers? Are we going to start seeing some kind of development? Like, are you going to have groups of people instead of windshield people like you see in New York City or people with special claws just to clean your vision sensors and your LIDAR to make sure that it's when whatever scenario you end up with dust and dirt and snow and what have you?
Starting point is 00:52:36 And once you put it that way, the number of things that can go wrong, the brittleness of the system, because it is so complex, is massive. And that's disconcerting in its own way, because we can end up, you know, I mean, you're an AI researcher, so I apologize if I'm putting words in your mouth, but I've heard AI researchers say one of the weird things about these systems is that when something goes wrong, you often don't know what it is because you don't fully understand the system that you've created because you've trained it on all this data. You've grown this neural network. These crashes can happen and it can be unclear exactly what the automated system did and why. Was the sensor dirty? Did it misidentify something? Like what is the deeper problem? We do not know how convolutional neural nets do what they do.
Starting point is 00:53:28 And indeed, when I'm, I just gave a talk to my robotics brethren at a conference and I call it, I call it the dark magic. So, you know, magic is wonderful because you know that there's some trick behind it, but you don't know what that trick is, but you know that the magician has full knowledge of the trick. So magic works and magic is delightful because you know, you're being conned, but you know, you're okay with it and you don't want to dig too deep because you know, the, that the magician knows, knows all.
Starting point is 00:54:05 AI is a dark magic because not only do you not know how it's reasoning, neither does the magician, right? I think that's the real problem is we've got, there's dark magic in AI and it seems magical. And if it seems magical to the people who created it, I'm just telling you, you know, run for the hills because if we can't figure it out and we don't know, and that's why Waymo has not been more successful. And it's why all companies have not had successful self-driving companies because they're constantly surprised in ways that they have no knowledge of how to fix. Same thing for that orange cone. I can tell you the Waymo crowd was scratching their heads over that one for a long time.
Starting point is 00:54:51 Yeah, it was such a wild video. Well, how much of this that we're seeing from these companies is coming from the VC culture? Like, to me, a lot of it looks like, you know, when Uber was still working on self-driving cars, it seemed like a whole bunch of the point of the over-promising was, hey, if we just make the case that there's a lot of investors who believe that this is gonna transform transportation
Starting point is 00:55:19 and whoever wins is gonna be, you know, the new standard oil of transportation and monopolize the whole industry. And so we simply going to be, you know, the new standard oil of transportation and monopolize the whole industry. And so we simply need to make people believe that we are going to have this transformational technology. It doesn't matter whether or not we do it. We just need to like sort of keep the con going. That's how it often looks to me. Does it look that way to you ever? Yeah, that's part of the fake it till you make it culture. So in the last 10 years, one of the side hustles I do is occasionally a VC will call me up and ask
Starting point is 00:55:56 me to either look over some documents or occasionally go on a site visit to company X that they want, they're thinking about investing and they want me to tell them whether or not the company is legit. You know, do they have sound science? What is the likelihood they can deliver on the promises that they're making to the VC? So, I mean, I can't tell you who all these companies and VCs were, but I can tell you that 100% of the time I told the VCs that technology, the technology promise was not good. It was going to fail that the startup company did. They just didn't have any basic science on their hands. And 100% of the time, the VCs ignored me 100% of the time they invested against my recommendation. And 100% of the time, the VCs ignored me. 100% of the time, they invested against
Starting point is 00:56:46 my recommendation. And 100% of the time, I was right. That must be, I'm not going to say that that makes you happy to be in that position, but it must be a little bit satisfying to have that. Well, you know, it's in the dark world of AI consulting when no one listens to you. Yeah, I mean, of course, I'd love to tell you that I'm not an I told you so, but I think we already established that I'm a judger. So I definitely judge you as a dumbass. I'm sorry, I don't know if I can say that on the radio.
Starting point is 00:57:24 You know, if I'm telling you sorry. You can say whatever you like. Well, if I'm telling you something and I've spent my whole professional career doing this, and then you want to go against my recommendation, man, that's not my fault. Right. So, but it does worry me. And again, this is one of the reasons why I increasingly get vocal, more vocal every year, because I don't want my taxpayer dollars invested in this. I actually want to know when people are in full self-driving mode when I'm on the road and they're near me, because I think that, you know, they're not just putting themselves at risk. They're putting a bunch of other people at risk. And last year, I think was the first year
Starting point is 00:57:59 that we started seeing Tesla accidents that caused the deaths of other people, not just the driver. So we are starting to get to an area where people, and this is where regulation will eventually finally kick in because if the companies, look, the VCs are there to make some serious cash, right? I get what they're saying. And, you know, it is true. The last couple of years I've had fewer VCs contact me because in the end they know what I'm going to tell them and they don't want to know the truth. So this goes back to this kind of like magical thinking of, you know, if I believe strong enough in it, then it will come true. And I see all of the quote unquote progress that these companies are claiming, not just Tesla, but Waymo.
Starting point is 00:58:48 You know, there are other companies, the other car manufacturers. Oh, wait, if you think if you think it's crazy now, the flying cars are coming, you know, so we're just going to double down on some of the difficulties that we're having with autonomous systems. some of the difficulties that we're having with autonomous systems. So, you know, I appreciate the desire. And of course, as a researcher, I want this money to keep pouring in. But I think we need to start making a more clear, crisp threshold between what is still research and what is really ready for public use. Yeah. Well, look, we've been speaking so negatively for most of this conversation. You are, however, doing your own work on AI. I'd love to know, you know, what what is the positive vision that you have about what AI could actually be used for in transportation or in anything else that is responsible, you know, could save lives rather than hurt them
Starting point is 00:59:48 and that it wouldn't be overhyped? If you could go set AI policy for NHTSA and for Tesla and make it all work right, what could it actually do to benefit us? I am a big proponent of humans plus autonomy instead of humans versus autonomy, right? I don't think we should be worried so much about replacing people as we should about combining the relative strengths and weaknesses of both humans and autonomy. And so, for example, one really cool project I worked on that recently ended was a flying co-pilot robot. So the idea was, is that in the world today, we actually have way more planes than we do pilots. We need pilots.
Starting point is 01:00:35 Even with COVID, there's still a shortage of pilots. But we have a lot of older aircraft that would be, it would just be cost prohibitive to try to retrofit them to be digital aircraft so that they could have some kind of autopilot. So you we with a Boeing company called Aurora Flight Sciences, we built a robot arm that could listen to you and talk back to you. And it could do everything in the cockpit in terms of basic flying. It could grab the yoke. It could flip switches. And its whole job was to relieve your workload. So if you needed to get up and go to the bathroom,
Starting point is 01:01:18 you could tell the co-pilot to take control, and it would. And it would call you if anything became a problem. Or if you're in the goo and what that means is you're doing an approach and you're completely in the clouds and the rain is bad. If you're completely focused on the task of flying, you could have it make a radio call for you, right? So this is a great idea of how do we balance humans and autonomy to work together. I think Toyota has a great concept and it's guardian concept. So instead of trying to replace the human driver with autonomy, try to keep the human driver from doing dumb things,
Starting point is 01:01:55 running off the road when they're talking on the cell phone, maybe, you know, some lane keep assist, but in ways that, you know, sometimes that doesn't, some cars can have what I call ping pong, you know, you bounce between the lanes. So, you know, more effective lane assist to keep drivers from, you know, running off the road, emergency braking assistance, I think is really important. And we work a lot in our lab about about recently we have a project where we developed a drone listening system for rogue drones. So, you know, usually I work on how to make drones better, but in this case, we've been working on how to alert prisons, for example, turns out drones drop in contraband into prisons. It's a huge problem wow so yeah right drugs guns
Starting point is 01:02:48 this is great you just like opened up a whole topic for another episode this i'm like hold on a second wow really that's wild okay i i accept it i'll look into this later i i accept that this is happening that's wild yeah it's i, it's a worldwide problem. And so. You can't prisons can't afford the expensive radar based drone detection systems, you know, that cost hundreds of thousands of dollars. So we invented basically a three hundred dollar. It looks like a little Alexa puck and it listens. It can listen for drones and it warns you when a drone is nearby. And it's got a convolutional neural net in it and it's not right all the time. And so there's a collaboration between the system notifying the person.
Starting point is 01:03:39 The person can tell the system whether or not it is a drone sound or not. And then the system can learn over time. And so, you know, these understanding when is the right place to use autonomy, how to balance it with what humans can do, especially the human ability to reason under uncertainty, that is where the real strength is going to lie in the next 30 years. Yeah, it's like, I mean, we need to remember that technology is a tool for us to use. And insofar as it allows us to do a better job at what we do, I love the example of a pilot who is still going to be in control of flying the plane,
Starting point is 01:04:18 but could use a little bit of assistance from a system that they fully understand. As opposed to, I mean, tell me if this fits into your critique. Uh, cause I, you know, I don't understand enough about flying to, to know if this quite fits, but the problem with the Boeing 737 maxes where, which my understanding was that the reason those crashes happened was something, some automated system that was causing the plane to do something that the pilots were not ready for, uh, that there was like a mismatch between their understanding of what the plane was not ready for, that there was like a mismatch between their understanding of what the plane was doing. And it was some, do I have that right?
Starting point is 01:04:51 That's correct. And so that's a bad version of that because the pilot in question was like, holy shit, what's going on? Did the wrong thing as a result? They were like fighting an automated system that caused the plane to crash versus them remaining the expert, them remaining in control, them, them understanding everything that's happening in the machine and having assistance from the, uh, from, from AI or from, uh, a human built machine rather than, uh, rather than being disrupted by it or taken aback by it or surprised by it, uh, you know know, having help rather than taking your eyes off the road. Yeah, that's exactly correct. And I think the whole Boeing 737 MAX situation was especially
Starting point is 01:05:35 egregious because Boeing at one time in the past was considered the world leader on developing collaborative automation between humans and machines in flying. And this just goes to show you how it can be a slippery slope. Companies want to believe that if they can just get rid of pilots, they would save a ton of money. But I see this everywhere. I mean, you name me an industry, and I can promise you I've had a conversation with someone in that industry who came to me and said, I want to get rid of everyone in the fast food restaurant. How do I do that? I want to get rid of drivers. I want to get rid of pilots. I want to get rid of people in manufacturing settings. So it is that siren on the shore: you want to believe that you can get rid of people to increase your bottom line.
Starting point is 01:06:33 And I think that sometimes the lure of that is so powerful that it causes companies to forget the knowledge that they had. And in the case of Boeing, no company knew the risks better than Boeing's human factors division. But that being said, they were still overridden in the design of that system. Wow. And I mean, this desire to get rid of people, first of all, Andrew Yang ran on the premise of what you're talking about, that these companies are going to do this and therefore we just need to give people money, because everyone's going to lose their job to automation. Why do we need to envision a future that way? Why not envision a future where humans are still part of the equation, but their lives are better and easier? Where, you know, they're able to do their jobs more effectively. They're
Starting point is 01:07:17 able to, you know, care for themselves, care for others, using artificial intelligence as a tool, right? Why not, you know, a version of AI, of self-driving cars, that centers us behind the wheel, doesn't have us sitting in the back on iPads, and doesn't eliminate every cab driver in the world, but instead makes transportation better for people without people having to lose their jobs? Yeah, I totally agree. And, you know, I mean, I don't mean to start dissing Andrew Yang, who is a fellow nerd. Right. So I appreciate his nerdy approach to the world. But his statement that, you know, we're going to automate everything anyway is a very common
Starting point is 01:08:07 opinion among CEOs, the C-suite, leaders of companies and government organizations. And it just tells me, it's kind of like a hidden IQ test for me, because it tells me they don't understand autonomy and automation at all. Because if they really knew what they were talking about, they would realize they're not going to get that. We're nowhere near that, for all the reasons that self-driving cars are not going to be here in the next five years in the way that they're being sold to us. So I do think the world of collaborative automation, regardless of what everybody's saying, eventually is going to be clear to people. It may happen slowly, you know, one domain at a time, but I tell taxi cab
Starting point is 01:08:54 drivers all the time, do not get worried. You know, there are still so many problems that we actually need to figure out, and the change will come incrementally. We're not going to wake up one day and have self-driving cars. Recently, right before COVID, I went to Peru and I was in Cusco. And in Peru, people just park on the sidewalks. Dogs are everywhere. You know, it's kind of like in Italy. It can be a free-for-all.
Starting point is 01:09:27 And some places in India, where you've got cows and bikes and trikes and cars. Oh, no. Oh, no. This is completely out of the realm of self-driving cars for now. So, you know, people are safe. But what we need are government leaders who actually understand what they're talking about, instead of spouting off and saying things like that, even when they're smart nerds. Well, I can't thank you enough for coming on to give us a little reality check. It's exactly what we love to do on this show.
Starting point is 01:09:57 Where can people find out more about you and your work? You just Google Missy Cummings Duke and you'll get my website, and I've got every paper there. There are so many papers there. If you're having a problem with insomnia, just go to my website. They're pretty dense papers, but there are also a lot of papers there that are written for the general public, because I believe that public education is critical. You know, clearly I need to sit down with Andrew Yang and have a conversation with him, among many other people. And I do do that quite a bit. You know, I make it a personal goal to make sure that I brief people in power, to make sure that they understand what is and what is
Starting point is 01:10:36 not possible. Yeah. Well, maybe you could run for president on the opposite platform of this is not going to happen as soon as everyone thinks it is. Really, really appreciate you being here, Missy. Thank you so much. Thanks for having me. Well, thank you once again to Missy Cummings for coming on the show. I hope you enjoyed that conversation as much as I did. Gave me a lot to think about. Hey, if you want to support the show, visit our custom bookstore at factuallypod.com slash books, where you can buy the books that some of our incredible guests have written. And when you do that, you'll be supporting not just the show, but also your local bookstore because it is through bookshop.org. That is it for us this week on Factually. I want to thank our producers,
Starting point is 01:11:17 Chelsea Jacobson and Sam Roudman, Andrew Carson, our engineer, Andrew WK for our theme song, the fine folks at Falcon Northwest for building me the incredible custom gaming PC that I'm recording this very episode on. You can find me online at AdamConover.net or at Adam Conover wherever you get your social media. Thank you so much for listening, and we'll see you next time on Factually. That was a HeadGum podcast.
