Command Line Heroes - Robot as Threat

Episode Date: November 30, 2021

When a robot goes bad, who is responsible? It’s not always clear if the user or the manufacturer is liable when a robot leaves the lot. Human behavior can be complex—and often contradictory. Asking machines to interpret that behavior is quite the task. Will it one day be possible for a robot to have its own sense of right and wrong? And barring robots acting of their own accord, whose job is it to make sure their actions can’t be hijacked? AJung Moon explains the ethical ramifications of robot AI. Ryan Gariepy talks about the levels of responsibility in robotic manufacturing. Stefanie Tellex highlights security vulnerabilities (and scares us, just a little). Brian Gerkey of Open Robotics discusses reaching the high bar of safety needed to deploy robots. And Brian Christian explores the multi-disciplinary ways humans can impart behavior norms to robots. If you want to read up on some of our research on robots as threats, you can check out all our bonus material over at redhat.com/commandlineheroes. Follow along with the episode transcript.

Transcript
Starting point is 00:00:00 Are you still struggling to keep those pesky pieces of paper together? No more, my friend. Introducing the Paperclip Maximizer Bot 3000, a robot whose sole purpose is to produce as many paperclips as possible. The future of office supplies has never been so bright. We're interrupting this broadcast to bring you updates on the catastrophe playing out downtown. It looks like the paperclip maximizer has torn apart most of the city's buildings. It's repurposing them into piles of well-made paperclips.
Starting point is 00:00:34 I'm told that the company's founders left town. When it comes to robots, even the most innocent of intentions can go awry. They obey the letter of the law, but not the spirit. A Roomba might try to vacuum up your cat, for example. Making sure robots don't cause harm has become a crucial field of research. And figuring out who is responsible for what, as robots become more a part of our lives, is more difficult than you might imagine. When a machine has some measure of autonomy, like a lot of robots do, is the manufacturer responsible for its actions?
Starting point is 00:01:18 Is the user? Could a robot be held responsible? I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. All season, we've been tracking the fast-evolving field of robotics, and this time we're asking, what happens when good robots go bad? Who's responsible for their actions? And who do we blame when a Paperclip Maximizer Bot 3000 decides to destroy the city? We'll come back to that disaster scenario, an interesting thought experiment by philosopher Nick Bostrom. But first, we need to grapple with some immediate worries, because questions about robotic responsibility are already here, and the stakes are high. So, who's responsible when robots do harmful things?
Starting point is 00:02:20 If I cut my hand while preparing dinner, I'm not going to blame the company that made my knife. But robots are different. Sometimes robots have a degree of autonomy. Sometimes their inherent wiring controls their decisions. And that means responsibility in the world of robotics is a lot more confusing. Our search for robot responsibility begins with the folks who make them, the manufacturer. What responsibility might they bear even after the robot's been sold? No one's really teaching me what I'm supposed to do, what I can or cannot build using the powers I have. AJung Moon is an assistant professor at McGill University.
Starting point is 00:03:07 She studies the ethical consequences of AI and robotics. Moon says her students are engaged by these questions in a way that previous generations might not have been. They're pushing to understand the multifaceted responsibilities of this field, and the role manufacturers play when designing robots is especially blurry. They don't have a lot of legal responsibilities to make sure that everything that they put out the door is used for ethical reasons and purposes.
Starting point is 00:03:38 Fact is, it's almost impossible to know how somebody will use a robot once it's sold. People constantly come up with innovative ways to use robots. The New York Police Department, for example, purchased robotic dogs and repurposed them to help with police work. They didn't exactly weaponize their robots, but they did use them on patrols and in some perceived dangerous situations. That got a lot of people anxious. New uses for technologies are often
Starting point is 00:04:06 positive. They can move things forward. But manufacturers that are wary of unplanned uses could revisit their user agreements. The fact that these machines can make certain quote-unquote decisions or behave in a particular way in context that the designer hasn't necessarily hard-coded into the system or has thought through fully, that allows for a little bit of uncertainty to be built into how users interact with the system. A user agreement might include a promise that you won't use the robot to harm a human, or won't allow the robot to be easily hacked. Both those things are easier said than done. And the more powerful the robot, the more specific that contract needs to get.
Starting point is 00:04:55 For example, back in 2014, Clearpath Robotics released a statement saying, we are building these field robots that can be used underwater, above ground, and so forth. And it has a lot of military use, and it continues to have military clients. But they've recognized that retrofitting these systems to become, quote unquote, killer robots, or robots that are weaponized, is not good for the society. So they've essentially made it so that it is their responsibility to communicate that their clientele would not be using the technologies for those particular killer robot purposes. Moon has recently been purchasing robots for her lab,
Starting point is 00:05:40 so she's seen a few user agreements lately. Clearpath's contractual language about ethical boundaries was some of the most direct out there. Why the use of such strong language for their robots? I'm Ryan Gariepy, and I am the CTO and co-founder of Clearpath Robotics and OTTO Motors. Clearpath created the Husky robot, an all-terrain four-wheeler about the size of a dog. They've made other animal-named robots for research, too. A dingo, a jackal, a moose.
Starting point is 00:06:15 And they work with some heavy-duty robotics companies, places like Boston Dynamics and Universal Robots. So Ryan Gariepy knows the field. He sees his robots used by new startups and corporate innovation programs. A seafaring robot patrols the ocean looking for dangerous algae blooms. Another might be used to haul material at a mining site. And the scope of all that work means Gariepy can't know exactly how his robots are going to be used. We have to do a degree of know-your-customer research because these robots can be repurposed for some fairly harmful roles if they do get into the wrong hands.
Starting point is 00:06:57 Partly, that connection with customers is about screening, making sure customers have the right intentions. But it's also about education. Robotics might be the technology that has the widest gap in the real world between perception and reality. We don't want to release technology to people who are going to hurt themselves with it. Gariepy says that as a manufacturer, he has three areas of responsibility. There are legal responsibilities for sure, things like export control. And then there's a responsibility to his customers to make sure they're set up to succeed. But what's really interesting is his third area of responsibility.
Starting point is 00:07:41 I think we do have a responsibility to society. And that's where we've engaged with governments in the past. That's where you'll find that we've been very outspoken on the need to regulate some more extreme uses, such as lethal autonomous weapon systems. So we do have these areas because, you know, in the end, a manufacturer can only control the technology that they develop so far. So it's important to engage with the rest of society and inform people about, you know, where they should be worried and where they shouldn't be worried. Ultimately, the manufacturer can educate the public, can push for good policy. But the power to manage a sold robot is going to be limited. If you find out that someone is using a product that you have sold them and that they have title to, you know, legally there's nothing that you can do.
Starting point is 00:08:34 And it doesn't make any sense to claim that you can do things that legally you cannot. If you've sold them an asset, it's theirs, right? That's the current laws that we operate within. And yet, Gariepy points out that, as a manufacturer, you're not totally powerless. However, if they did mislead you about the purpose of their assets, then you are, of course, free to withdraw support. And that's something I hadn't thought about. One thing we've learned this season is that the robot revolution is in large part a software revolution. And that means somebody who just buys a robot is in the long run going to be tied in a way to software providers. And that makes it a little more difficult for anybody to go rogue. Folks will always find ways to hack, to modify, to repurpose. It's a core part
Starting point is 00:09:28 of technology's history. And AJung Moon says those modifications will keep happening with robots too. But that doesn't mean we give up on creating safe systems. It's really interesting to discover new ways that these robots and AI systems are impacting our lives, our behaviors, our decisions. But that should be coupled with more directed decisions on what are some of the harms that can result from these technologies being deployed and how do we prevent that. To really prevent harm, it's not enough to have responsible manufacturers. The users are being tasked as well. And that makes the question of robot responsibility even more complicated.
Starting point is 00:10:20 Hello from the hackers. Stefanie Tellex is an associate professor at Brown University. Her lab makes autonomous collaborative robots, and she's also very interested in robot vulnerability. One day, a couple years ago, Tellex learned something that intrigued her. I learned that it was possible to scan the whole internet for a particular port. That seemed incredibly important to her. Because, as we described earlier in the season,
Starting point is 00:10:50 there's a standard operating system for robotics called ROS, the Robot Operating System. And ROS uses a particular port, number 11311. What all this means is that Tellex could search the internet for any ROS users who weren't protecting their system. And I know from my own personal experience that we are not being particularly careful to secure our robots. And if we can scan the whole internet, we just have to do it as soon as we can because it's going to be awesome. So they scanned the internet for ROS ports. They found a lot of them. And what's key here is they found folks using ROS with sort of a
Starting point is 00:11:31 wide open door. There were dozens and dozens of people running ROS with no firewall at all. They found a da Vinci robot, for example. This is the kind they use for surgeries. It had a ROS interface and essentially no security. We didn't actually try to move the robots. We didn't actually read any sensor data off of them because we felt like that was like too far to be ethical. But they could have. We could have and a bad actor could have, we think. Turns out one of the robots running ROS that Tellex found was owned by a colleague of hers. We found your robot. We'd like to do a proof-of-concept attack on it.
Starting point is 00:12:10 So they agreed on a time. And we're going to try to read the sensor data, and we're going to try to move the robot. And then we did. So we actually were able to read sensor data, read the camera. We were able to make it speak. We would play sound out of the speakers. So this is just to show that, like, if you're running ROS and you haven't secured the port, anybody on the internet can publish and subscribe
Starting point is 00:12:32 and they can read your sensors and they can move your actuators. Tellex finds that researchers are especially careless with their robot security. In fact, she even found one of her own robots hanging out on the internet unprotected. Her lab had opened a ROS port and just left it open by accident.
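To make the stakes concrete, here's a minimal sketch, not from the episode, of what an exposed ROS 1 master gives away. The ROS 1 master is an unauthenticated XML-RPC service that listens on port 11311 by default, so anyone who can reach that port can ask it for every publisher, subscriber, and service on the robot. The address below is hypothetical; only point something like this at robots you own or have explicit permission to probe.

```python
# A minimal sketch (not from the episode) of why an exposed ROS 1 master is risky.
# The ROS 1 master is an XML-RPC server on port 11311 by default, with no
# authentication, so anyone who can reach the port can enumerate the robot.
import xmlrpc.client

ROBOT_HOST = "192.0.2.10"  # hypothetical address of a robot exposing its ROS master

master = xmlrpc.client.ServerProxy(f"http://{ROBOT_HOST}:11311")

# getSystemState is part of the documented ROS Master API. It returns every
# publisher, subscriber, and service currently registered with this master.
code, status, (publishers, subscribers, services) = master.getSystemState("/curious_client")

if code == 1:  # 1 means success in the ROS Master API
    print("Topics with publishers:", [topic for topic, _ in publishers])
    print("Topics with subscribers:", [topic for topic, _ in subscribers])
    print("Services:", [name for name, _ in services])
```

From there, publishing to a command topic or subscribing to a camera feed is only a few more lines, which is exactly the open door Tellex's scans kept turning up.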
Starting point is 00:12:58 Tellex feels that running these scans and closing off attack vectors is going to become a regular part of good robot maintenance. I really want to find my buddies' robots at other universities and give them crap about how they're insecure. Like, that seemed really fun to me. Maybe I'm a bad person, but like, I wanted to do that. And I also thought, wow, this is something that I didn't know. I bet a lot of other roboticists don't know it as well. Tellex says that everybody using internet-connected robots
Starting point is 00:13:30 should be doing a certain amount of basic security maintenance. Whether we're using robots for research or managing robots at a factory, we each have our own share of responsibility. And that, it turns out, is by design. ROS, the Robot Operating System, is primarily maintained by Open Robotics. But as Tellex discovered, they count on users to secure robots at the network level. Brian Gerkey, the co-founder of Open Robotics, explains why. We made the decision early on that we're not security experts. We didn't want to invent some security mechanism and get it wrong.
Starting point is 00:14:09 We didn't want to make the wrong guesses about what the threat models are. And so we made it very clear from the beginning, we're building a system that can connect to a network, and we didn't build any security in. And so what you need to do as a user is take appropriate precautions and use modern networking tools to wrap this system so that it's not exposed on a network where it can have threats. Truth is, hackers are always going to be there. Some can mess around with industrial robots
Starting point is 00:14:38 already. And sometimes they can hack into the microphones and cameras of robots in your home. Any robot that's connected to the internet is potentially vulnerable, but there are precautions that users can take. Remember the PR2, one of the robots we talked about earlier in the season? It runs on ROS, and Gerkey, who worked with PR2, says, That robot, the only way to connect to it was through a VPN or virtual private network. And we put that out there as an example of how to deploy a robot that runs ROS. And I'd say that if you look at deployments of robots from the many, many companies who are using ROS 1 in
Starting point is 00:15:18 production, they're following a very similar model. Open Robotics has since created ROS 2, where they're trying to embed some security into a distributed system. But users still have to configure things to their own needs and take on an active role. With ROS 2, you can actually enable security at the application level. We would still advise you not to expose your ROS 2-based robot directly to the Internet. I mean, frankly, that's bad practice for any device anywhere. Basically, no device should be directly exposed to the Internet without incredibly high levels of security applied, which are generally not applied to most devices.
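As a rough illustration of that user-side responsibility, here's a sketch of a ROS 2 node that refuses to start unless SROS2 security appears to be configured. The environment variable names follow current SROS2 documentation but vary between ROS 2 distributions, and the startup check itself is a hypothetical deployment policy, not something ROS 2 enforces for you.

```python
# A sketch (mine, not Open Robotics') of the "security is the user's job" model
# described above. SROS2 reads its settings from environment variables; the
# names below follow current SROS2 documentation but differ across distros.
import os
import sys

import rclpy
from rclpy.node import Node


def security_configured() -> bool:
    # ROS_SECURITY_ENABLE turns DDS security on, ROS_SECURITY_STRATEGY=Enforce
    # makes nodes fail closed, and ROS_SECURITY_KEYSTORE points at the keys
    # generated with the sros2 tooling.
    return (
        os.environ.get("ROS_SECURITY_ENABLE", "").lower() == "true"
        and os.environ.get("ROS_SECURITY_STRATEGY") == "Enforce"
        and bool(os.environ.get("ROS_SECURITY_KEYSTORE"))
    )


def main() -> None:
    # A hypothetical deployment policy: never bring the robot's nodes up unless
    # the operator has switched on authentication and encryption.
    if not security_configured():
        sys.exit("Refusing to start: SROS2 security is not configured.")

    rclpy.init()
    node = Node("guarded_talker")
    node.get_logger().info("Security looks configured; starting node.")
    rclpy.spin(node)


if __name__ == "__main__":
    main()
```

The point is less the specific check than the division of labor: the middleware ships the hooks, and the person deploying the robot decides to turn them on.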
Starting point is 00:15:56 Gerkey is aware of Stefanie Tellex's work exposing those vulnerable ROS users. And he's not totally surprised by it. He remembers a time when someone got locked out of a building, so he got on his laptop, connected to the Wi-Fi network in the building, which required credentials to access, so it wasn't open to the public, but then used that connection to get onto one of the robots and then drive the robot over to the door and push the door from the inside and let him in. So it was actually a robot-mediated breakout. That example, of course, is a bit like somebody taking over your Roomba. Not too big a worry. But what if the same kind of hack allowed someone to take over a car or a piece of heavy
Starting point is 00:16:55 machinery at a factory? Suddenly, you've got a major problem. We've all heard stories about hackers getting into IoT devices. But the difference with internet-connected robots is that they're moving, they're manipulating the world, and that instantly becomes more serious. There's certainly a higher bar whenever you deploy something out in the world. And to meet that higher bar, we're going to explore one final level of responsibility. We've already looked at the manufacturer's role and the user's role, but what about the robot itself? Can we hold robots responsible for their own actions? Science fiction authors have spun up robot disaster scenarios ever since the word robot was first coined. There have been fantasies of Skynet and a laundry list of robot rebellions.
Starting point is 00:17:55 But it was a philosopher called Nick Bostrom who described a subtler and maybe more likely disaster. Must make paper clips. Paper clips. Bostrom suggested that a super intelligent, goal-oriented robot would consume everything in sight, including the planet's resources, in order to accomplish its one goal.
Starting point is 00:18:32 It goes about turning the entire planet into paperclips, including all of the human beings and everything that we hold dear. Brian Christian is telling us about this thought experiment. He's an author and researcher at UC Berkeley and has become a leading thinker on the future of tech and how it impacts humanity. It offers us a different vision than the Skynet Terminator vision. In this case, it is not necessarily that the AI takes on a set of goals that are contrary to our own, but rather it's trying in earnest, you know, in good faith to do exactly what we asked it to do. Hmm. So how do we avoid getting turned into paperclips? It's a harder problem than you
Starting point is 00:19:13 might think. If we give a machine a goal and then let it run all on its own, we've got to be absolutely positive that goal doesn't interfere with other goals like human safety. In other words, how do we give robots a sense of responsibility toward not just a narrow goal, but the larger interests of humanity? How do we make sure that our robots are pursuing the things we truly desire? We might try the machine learning route, giving robots fewer explicit instructions, letting them learn through countless examples all on their own. But that has its problems, too. It turns out to be extremely difficult to make sure that the system is learning exactly the thing that you have in mind when you taught it and not something else.
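A toy sketch, entirely ours rather than Bostrom's or Christian's, shows how literally an optimizer treats the objective it's handed. The resource names are made up; the point is that anything the objective doesn't explicitly protect is fair game.

```python
# A toy illustration (not Bostrom's own model) of a literal-minded optimizer.
# The resource pools are made up; everything the objective does not explicitly
# protect gets converted into paperclips.

def run_optimizer(resources, may_consume):
    """Greedily turn every permitted resource pool into paperclips."""
    paperclips = 0
    for name in list(resources):
        if may_consume(name):
            paperclips += resources.pop(name)
    return paperclips, resources

world = {"factory_steel": 10_000, "forest": 500_000, "city": 2_000_000}

# Objective as stated: "make as many paperclips as possible."
clips, leftover = run_optimizer(dict(world), may_consume=lambda name: True)
print(clips, leftover)   # 2510000 {}  -- the whole world is now paperclips

# Objective as intended, with the constraint we never wrote down.
clips, leftover = run_optimizer(dict(world), may_consume=lambda name: name == "factory_steel")
print(clips, leftover)   # 10000 {'forest': 500000, 'city': 2000000}
```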
Starting point is 00:20:02 So what's the solution? Christian suggests that we could avoid the paperclip maximizer robot by moving our thinking beyond the single-mindedness of corporate goals and thinking about the goals of multiple disciplines at once. I think it's really going to take a pretty holistic approach to solving this problem. There has been an increasing awareness dawning on people in the computer science and machine learning community that they really need to address these problems in dialogue with people in other fields, people who have disciplinary expertise, whether
Starting point is 00:20:38 it's doctors in the medical context or, you know, people with a criminal justice expertise. And a lot of these problems, I think, exist at the boundaries between those disciplines. Aligning a robot's goals, not just with the goals of those who created the robot, but also with the goals of people who actually interact with that robot, people whose lives are touched by that robot, is one level of work. And then, in addition, we may need to move beyond simply shoveling millions of data points at our robots. We may need to find some entirely new way to help robots learn what really matters to human life. There's been a lot of work by the theoretical computer science community around, can we avoid having to translate all of the things that we want into this numerical form?
Starting point is 00:21:32 Might there be other ways to impart our norms, our values, our desires into a system? Can we, in other words, give robots a sense of right and wrong? Instead of uploading every single example of what is considered right and wrong? It's a problem as sprawling as the field of robotics itself. But that's the next level of robot responsibility. Translating our real, complex, messy values and desires. And it means, in addition to making manufacturers and users responsible for robots' behavior, we need to start giving robots a sense of responsibility that's all their own.
Starting point is 00:22:20 You might have noticed in this episode, there are a lot of stakeholders. You've got the manufacturer, you've got the user, you've got the robot itself. And that's the point, really. Designing a robot future that works for everybody means bringing everybody to the table. The more that robots move through our lives, the higher the stakes get. We're forced to think about who gets to offer input and who gets to help design our robotic future. Next time, it's our season finale.
Starting point is 00:22:55 We're looking at the robot revolution that's been rolling toward us for over a century. The self-driving car. I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. Edge computing goes way beyond the connected gadgets you might be picturing. A fridge with a Wi-Fi connection is one thing. A robotic vehicle that's sorting packets and using AI to plan its route through the warehouse, that's something else entirely. At that level of complexity,
Starting point is 00:23:35 you've got software in the cloud, software in the warehouse, software in the robot. How would you even manage an update without a common system? This is where Red Hat's edge solutions come in. We simplify and streamline operations from the cloud to the farthest edge across all kinds of devices and use cases. Because everything should just work everywhere. Find out more at redhat.com slash edge.
