Command Line Heroes - Robot as Vehicle

Episode Date: December 14, 2021

Self-driving cars are seemingly just around the corner. These robots aren’t quite ready for the streets. For every hyped-up self-driving showcase, there’s a news story about its failure. But the good news is that we get closer every year. Alex Davies steers us through the history of autonomous vehicles. Alex Kendall maps the current self-driving landscape. And Jason Millar takes us under the hood of these robots’ computers to better understand how they make decisions. If you want to read up on some of our research on self-driving cars, you can check out all our bonus material over at redhat.com/commandlineheroes. Follow along with the episode transcript.

Transcript
Starting point is 00:00:00 New horizons, new ways of living, wondering, searching, exploring, far distant views. Welcome, welcome. Futurama exhibit right this way. See the automobile of the future. By 1960, your vehicle will drive itself. The 1939 World's Fair in New York City was crazy optimistic about the future of tech. Five million people visited the Futurama exhibit, where General Motors was laying out a utopian vision of what they called magical motorways: a web of enormous highways that would weave American cities together. And by 1960, cars would be automated. That was the promise. We're still waiting for those self-driving cars.
Starting point is 00:00:53 But that's because getting to Futurama isn't just about improving our vehicles. It's about realizing a robot revolution. I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. Let's get one thing straight. Yes, self-driving cars are robots. They make independent decisions. They navigate the real world. They might not look like robots in the movies, like R2-D2, but that's something we've seen over and over again this season. The robots we imagined rarely line up with the robots in real life.
Starting point is 00:01:33 For our season finale, we've saved a topic with more hype than any other in the world of robotics right now. The self-driving vehicle. For nearly a century, we've been promised cars that drive themselves. And as we work through the last miles on this journey, we're learning just how transformative a robot revolution will be. The first cars in the 19th century were called horseless carriages. That was the big breakthrough. A vehicle with no horses required.
Starting point is 00:02:12 The upside was obvious. No manure on the street. And a lot more, quote-unquote, horsepower under the hood. But there was a downside, too. You got rid of a sentient being, the horse, which was helping you stay out of danger. People think, oh, shoot, we don't have a horse anymore, so I now have to be the driver. Alex Davies is the author of Driven, the race to create the autonomous car. And as he tells it, we lost something kind of amazing when we started driving horseless carriages.
Starting point is 00:02:45 Of course, you would have someone at the reins telling the horse, go left, go right. But if they fell asleep or fell out of the seat, the horse isn't going to walk into a wall or off a cliff, which is very much what you get when you have a human-driven car. So from the very dawn of the automobile industry, there's something missing. That extra brain with an awareness for danger. And that's why, early on, we started trying to bring back an external intelligence. Some of the first examples of self-driving cars you see showed up in the 20s and the 30s, although self-driving is probably a generous description. Some of these
Starting point is 00:03:26 were actually radio-controlled vehicles. So you would have someone in another car nearby sending electronic signals to the vehicle to tell it to accelerate or brake or turn left or right. Those were sometimes called phantom cars. Bit of a cheat. You've still got a human driver, they're just not in the car. In the 1950s and 60s, people started experimenting with putting magnets underneath the pavement in highways. And the idea was that the car could follow the magnetic signals and it would be similar to a train running on tracks. Okay, slightly more autonomous. But still, if it's basically a train running on a track, then why not just take the train?
Starting point is 00:04:12 And besides, they weren't about to rip up every mile of highway in the United States and bury magnets underneath. Norman Bel Geddes, the industrial designer hired by General Motors for the Futurama exhibit at the World's Fair, imagined that by 1960, we'd be zipping across whole countries at the push of a button. Maybe we'd be chauffeured by robots. One way or another, the driving experience would be roboticized. But Geddes was wrong. By 1960, we weren't anywhere near his vision.
Starting point is 00:04:48 He was right about the need for robotic aid, though. Driving has always been a dangerous part of our lives. Today, worldwide, there are around 1.3 million fatalities from car crashes every year. Almost all of them are caused by human error. And that's something robots could help solve. Our self-driving solution got a bit closer in the 1980s, when powerful computers weren't taking up whole rooms anymore. You could fit them into something, say, the size of a car. The first thing I think you would call a self-driving car came out of Carnegie
Starting point is 00:05:27 Mellon University. In the 1980s, they started a program called the Navigational Laboratory, what they abbreviated to NAVLAB, and they created a whole series of vehicles. NAVLAB 1 was different from most robots that had been built before it, because it was a moving vehicle that made its own decisions about how to drive, and it was the first robot that was big enough that people could actually sit inside it. So, a faster computer, a smaller computer. These are key if you want to build a self-driving car. But there's something else, too. Researchers get serious about the idea of artificial intelligence. A paradigm shift in the kind of computing that we want to build into our
Starting point is 00:06:12 robots. Which is really one of the main things that drives the idea behind self-driving cars today is the idea that this is not a machine that follows very simple commands like follow the magnets or respond to radio controls that are given by a nearby human. The idea is that the computer can use tools like cameras and radars and laser sensors and maps to understand its surroundings independent of what a human thinks and then can make its own decisions. Pioneering work on artificial intelligence began in the 50s and 60s. But it wasn't until pretty recently, not until the 90s, that our computers became powerful enough to bring this idea to life. Autonomous car projects started to gain some traction
Starting point is 00:07:05 as processing power sped up. And pretty soon, it was time for the rubber to meet the road. In the early 2000s, DARPA became very interested in autonomous vehicles. It seemed like a technology that would make life far safer for members of the military. They weren't just going to fund it via research, though. They wanted to kind of crowdsource things. They decided to have a race, and everybody was invited to submit their own vehicle.
Starting point is 00:07:37 Only rule? No drivers allowed. Bring it to the Mojave Desert in March of 2004, and we'll race them 150 miles across the desert. Car that gets there first wins a million dollars. Engineers joined, veterans from BattleBots, amateurs, lots of different folks. They all came to the Mojave to prove they'd cracked the secret of self-driving cars. And it was this amazing disaster. Like Davies said, it was a 150-mile race. And the vehicle that did the best, an entry from Carnegie Mellon,
Starting point is 00:08:17 made it about seven miles. It got stuck on a rock and melted its tires off while trying to get unstuck. Not exactly a success, but DARPA didn't see it this way. People were excited about self-driving vehicles. People were engaged. So DARPA re-ran the contest a year later. And this time, the prize would be $2 million. At the 2005 Grand Challenge, five vehicles finished the entire race and you get a genuine race between Carnegie Mellon and Stanford and all of these teams that couldn't make it out of the starting gate in 2004 are going dozens or hundreds of miles in 2005. And that's where you get this huge success. DARPA follows it up in 2007.
Starting point is 00:09:06 This time with obstacles and an urban setting. The grand challenges prove not just that the self-driving vehicle is possible, but that it's inevitable. Because by the end of 2007, you've got people like the founders of Google live watching these races thinking, I can take this idea and I can run with it. We're not in some Futurama fantasy anymore. The self-driving car is about to become an industry. Hey there. Nice day we're having. You in the market for a car? So you've just arrived at a dealership. You're there to buy a car. You're not too concerned with make or model, though.
Starting point is 00:09:58 What you want to know is how much autonomy you can hand over to your new wheels. There are five levels to choose from. Level one, basic driver assistance. Your car might have cruise control, for example, giving you a bit of a break with highway driving. Level two, partial automation. Your car is going to help with steering, braking, and accelerating, but just in certain situations.
Starting point is 00:10:26 Level three, conditional automation. Now it can handle driving on the highway up to 50 miles per hour. And if you initiate a lane change, the system can help you weave between cars. Then something interesting happens when we jump to level four. Now you're at what's called high automation. The car is completely aware of its surroundings and can basically drive itself. It can drive on highways up to 100 miles per hour. It's changing lanes all on its own. And as for you, you can start to focus on other things. Your car will let you know if it gets in a situation that requires your attention. And then, finally, there's level five, full automation. You can fall asleep if you like. You're not needed at all.
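For the technically curious, the five levels just described can be sketched as an ordered type. This is only an illustrative shorthand in Python, not any official SAE API; the names and the attention rule are our own:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The five driving-automation levels as described in the episode
    (level 0, no automation at all, is omitted here as in the narration)."""
    DRIVER_ASSISTANCE = 1       # e.g. cruise control on the highway
    PARTIAL_AUTOMATION = 2      # steering/braking help in certain situations
    CONDITIONAL_AUTOMATION = 3  # handles highway driving; human stays ready
    HIGH_AUTOMATION = 4         # drives itself; may still request attention
    FULL_AUTOMATION = 5         # you can fall asleep; you're not needed

def driver_must_stay_alert(level: AutomationLevel) -> bool:
    # The dividing line the episode draws: only at levels 4 and 5 is no
    # human required to be responsible for controlling the vehicle.
    return level < AutomationLevel.HIGH_AUTOMATION
```

Using IntEnum means the levels compare and order like plain integers, which matches how the industry talks about "moving past level three."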
Starting point is 00:11:13 This is top-of-the-line automation. Levels one to three are things you may have experienced already. Tesla has its Autopilot feature, and Cadillac has Super Cruise. But over in England at the autonomous driving startup Wayve, CEO Alex Kendall has been trying to deliver those last two levels of automation since founding his company in 2017. Autonomous driving is what I'd call L4 or L5. And that's really where there is no human required to be responsible or liable for controlling the vehicle, but an autonomous algorithm or robot makes those decisions and commands the mobility of that vehicle. Moving past level three into complete autonomy could
Starting point is 00:11:56 make things a lot safer for drivers, because by hovering around three, we're in a tricky middle spot where the driver feels like they don't need to pay attention, but the robotic car really isn't ready to take full control. It can be very easy to become distracted and there's been some well-documented examples where that has led to unfortunately some damaging accidents. So for those reasons, I think it is quite challenging and that's why making the jump to a fully autonomous system is so much more important to really realize the benefits of autonomy. But making that jump is not going to happen without extraordinary investments. A lot of people are only starting to understand the scope of this project. It's one of the great technology challenges of our generation.
Starting point is 00:12:41 I'd actually compare it to the space race, what we saw in the 1960s about getting humanity to the moon. Now, if you think about the Apollo program that ended up succeeding in 1969, landing humans on the surface of the moon, that was a program that in today's dollars cost about 300 billion dollars to get through, and it was hugely complex in terms of technology. Now, in terms of self-driving, we've seen probably a hundred billion dollars of investment in this program, and we're quite far into what I would consider what we need to do to get this to market, but the progress has certainly been humbling. If Kendall's right and the self-driving car needs
Starting point is 00:13:21 the same kind of investment as the Apollo mission then funding is certainly one roadblock on the way to market. But the cost of the end product is a sticking point, too. A fully automated car could cost customers a hundred grand more than an ordinary car. But also, and maybe this is the biggest roadblock, it's just incredibly complex to get to level four or five on that autonomy ladder. We're talking about a billion lines of code with a complexity greater than a jet plane's. What we haven't yet seen is what I call embodied intelligence, which is taking this artificial intelligence, taking deep learning out of simulation,
Starting point is 00:14:03 out of software, and putting it in the real world and giving it a physical interface to society. What Kendall calls embodied intelligence is the final hurdle. The challenge of getting machine learning to be able to cope with all the countless real-world circumstances that we can't necessarily tell it about in advance. We've been able to teach our car how to drive and do lane following, how to go through roundabouts, give way intersections, traffic light intersections, double lane roads, and we're now starting to learn some of the more complex behaviors that are needed in urban driving. Now to learn on that curriculum, you need an incredibly powerful platform, one that can take
Starting point is 00:14:41 petabytes of driving data, of video driving data, and understand where these different scenarios are, how it should present data to the model to be able to learn, and also statistically validate that your behavior is safe. Kendall says they're getting there, moving into those final two levels of autonomous driving. But this final mile may be the hardest. In the real world, robotic cars will have to understand not just the rules of the road, but random changes in the environment too. And most difficult of all, the varying and unpredictable behavior of human beings. Let's think of the journey towards self-driving like a trip from LA to New York. How far on that journey have we come?
Starting point is 00:15:27 Alex Davies puts it this way. We'd probably be somewhere in that incredibly long line of cars trying to get onto the George Washington Bridge right now. The work we've done in the last couple of decades, especially in the last 15 years, is all of the easy miles. It's all of that stuff where you're just cruising across I-80 going 70 miles an hour without another car in sight. The really hard work all comes at the end. Self-driving cars are out there, already on the roads.
Starting point is 00:16:14 They're just not exactly what Futurama promised back at the World's Fair in the 30s. One large pepperoni, one medium cheese lovers. In Houston, Texas, they're delivering pizzas. In the UK, the government is looking to legalize hands-free systems. It's a start. We're creeping toward level five. And as we tackle that last mile, we have a decision to make. What kind of programming is going to get us there?
Starting point is 00:16:46 Machine learning, like the kind Wayve relies on, is often seen as the holy grail for self-driving cars. But what does all that black box number crunching mean when we're talking about robots that travel 100 miles an hour? It's an interesting ethical question. Jason Millar is a professor at the University of Ottawa researching the ethics of robotics. You have no way of stepping through line by line and understanding like this is exactly why it's doing this. This is why it's seeing these types of objects and so on and so forth. There's a lack of transparency there. Naturally, machine learning is hugely attractive and a crucial part of making automation happen. If you have the right kind of data and you're feeding the right kind of data into the right kind of model, it's going to be much more efficient to get a car to, say, navigate a
Starting point is 00:17:31 roundabout using that approach than it would be if you tried to line by line define and code all the rules that have to be satisfied in order to navigate a roundabout. Millar points out that we often don't know why machine learning works. We just know that it does. And that concerns people who want to be sure that robots like self-driving cars always have our best interests at heart. Thankfully, machine learning is not the only tool we can use. It's possible to pair it with more traditional, rule-oriented programming, too. You can use more traditional types of programming and very clearly define the rules that a vehicle is going to abide by or the types of driving characteristics that it's going to have. In which case, you have quite a bit of transparency, at least in terms of how the vehicle will behave and what rules it's following. With machine learning and these kind of black box approaches, like the neural net approaches to doing coding, we don't have that.
Starting point is 00:18:35 A slick piece of machine learning doesn't necessarily tell us why it arrives at a certain behavior. And that matters because efficiency and robustness from an engineering perspective don't translate directly into trust and trustworthiness from a public regulatory perspective. So a few transparent pieces of rules-based programming on top of that machine learning can go a long way to engender trust when these cars go out into the wider world.
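Here's a minimal, hypothetical sketch of what that pairing could look like in Python. Every name and threshold is invented for illustration: a black-box learned policy proposes a maneuver, and one explicit, human-readable rule can veto it.

```python
MIN_GAP_METERS = 10.0  # hypothetical hard rule: never maneuver this close to another car

def learned_policy(gap_to_lead_car_m: float) -> str:
    """Stand-in for an opaque ML model that proposes a maneuver.
    In a real system this would be a neural network we can't step through."""
    return "change_lane" if gap_to_lead_car_m < 25.0 else "keep_lane"

def safe_action(gap_to_lead_car_m: float) -> str:
    proposed = learned_policy(gap_to_lead_car_m)
    # The rule below is the transparent part: a regulator can read and audit
    # this one condition, unlike the weights behind learned_policy.
    if proposed == "change_lane" and gap_to_lead_car_m < MIN_GAP_METERS:
        return "slow_down"  # conservative, rule-defined fallback
    return proposed
```

The point isn't the specific numbers; it's that the veto logic stays legible in a way the learned policy isn't.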
Starting point is 00:19:06 For example, you might use machine learning to let a vehicle teach itself lane changing, but then have an explicit rule that limits how close you get to other cars. And so if you're starting with these kinds of abstract principles, the reason you would do that is to signal to people and to regulators or whoever you're trying to get to trust the system that, look, this system has certain principles designed into it that align with your expectations in terms of an ethical system. Dancing between machine learning and more transparent programming could be a sweet spot where the amazing robotic future gets welcomed into our everyday lives. So whether we get there via machine learning or rule-oriented programming or some mix of the two, we wanted to finally get an answer to a question you might be asking. When am I going to get my own self-driving car?
Starting point is 00:20:05 When do regular people get to snooze in the driver's seat? Our experts all sort of told us the same thing. That's the wrong question. Right. Think bigger. The question isn't, when do I get my self-driving car? The question is, how are robotic vehicles going to transform, well, everything? That's what's so exciting about robots. Their agency, their ability to interact with the larger world, invites us to think at the biggest scale we can imagine. I mentioned earlier,
Starting point is 00:20:41 when automobiles first showed up, everybody just called them horseless carriages. They were still comparing everything to a 19th century technology. Same thing happens when we talk about self-driving cars today. We're imagining a driver-free experience. But we're not imagining how the whole paradigm of transportation can change. Here's Alex Davies one last time. Just as Horseless Carriage was very limited in that it didn't think about all of the things that the car could ultimately do, that it could become this inspiration for art and a version of art
Starting point is 00:21:18 itself, and it could drive the creation of the American suburbs, and it could create entirely new sports and ways of moving around the world, I think we don't know very much yet about what the autonomous vehicle can do. Untying the carriage from the horse allowed for a century's worth of innovations, and where we are right now in the progression of the self-driving car is untying the car from the human driver. Experts told us that someday we may move about in fleets of vehicles owned by the city or companies. And our groceries might travel in autonomous vehicles way more often than people do. Now take that kind of change
Starting point is 00:22:07 and try applying it to everything. The point is, cities and daily life are going to be remade, not just by autonomous vehicles, but by robots that are currently spinning in a piece of simulation software waiting to be born. Life is about to change in ways we're only
Starting point is 00:22:27 beginning to comprehend. Whether we're talking transportation or healthcare or economics, the change is just as radical as when cars remade the 20th century. And that's kind of awesome. It's one of our generation's greatest engineering challenges. And if we manage to overcome all those computational and theoretical and psychological barriers, there's no telling how far our robots could take us. All season, we've been trying to separate robot facts from robot fiction. The old hype about what robots can be was often wildly wrong.
Starting point is 00:23:11 Hey, get me a beer, would you? My pleasure. But it did propel innovation. And our innovations have delivered a robot reality that's just as fantastic. This season, we've discovered robots that have already become essential co-workers, making jobs safer and more productive. Others offer companionship or replace parts of the human body.
Starting point is 00:23:36 And as you've just heard, they're on the verge of remaking the whole field of transportation too. It's all happening because robotics has massively opened up over the past few decades. Thanks to simulation software like Gazebo, or open source projects like ROS, or competitions like the DARPA Challenge, whole new crowds of command line heroes are joining the field. And personally, I can't wait to see what they dream up next. I'm Saron Yitbarek, and that's it for Season 8 of Command Line Heroes. But Season 9 is already in
Starting point is 00:24:15 the works. Subscribe wherever you get your podcasts, and you won't miss an episode. Until then, keep on coding. Well, first, models aren't one-size-fits-all. You have to fine-tune or augment these models with your own data, and then you have to serve them for your own use case. Second, one-and-done isn't how AI works. You've got to make it easier for data scientists, app developers, and ops teams to iterate together. And third, AI workloads demand the ability to dynamically scale access to compute resources. You need a consistent platform, whether you build and serve these models on-premise or in the cloud or at the edge. This is complex stuff, and Red Hat OpenShift AI is here to help. Head to redhat.com to see how.
