CppCast - Robotics Development

Episode Date: October 19, 2016

Rob and Jason are joined by Jackie Kay from Marble to discuss the use of C++ in the robotics industry and some of the unique challenges in robotics development. After spending her childhood wanting to become a novelist, Jackie switched over from writing stories to writing code during college. She graduated from Swarthmore College in 2014 with a Bachelor's in Computer Science and went on to work at the Open Source Robotics Foundation for two years, supporting Gazebo, a physics simulator for robotics R&D, and ROS, an open source application framework for robotics development. She recently started as an early employee at Marble in San Francisco, a startup working on autonomous delivery. Jackie was a speaker at CppCon 2015 and 2016 and a volunteer at C++Now 2016, and frequently attends the Bay Area ACCU meetups. Her hobbies include rock climbing, travelling, and reading (books, not just blog posts).

News
- What does "Modern C++" really mean
- The "unsigned" Conundrum
- C++ Variadic templates from the ground up

Jackie Kay
- @jackayline
- Jackie Kay's GitHub
- Jackie Kay's website

Links
- ROS (Robot Operating System)
- ROS 2
- Gazebo (Robot simulation)
- Gazebo's Bitbucket Repository
- Caffe - Deep Learning Framework
- TensorFlow - Machine Intelligence Library
- Marble
- CppCon 2016: Jackie Kay "Lessons Learned From An Embedded RTPS in Modern C++"
- Code examples from "Lessons Learned From An Embedded RTPS in Modern C++"
- Work-in-progress implementation of DDS/RTPS

Sponsor
- Backtrace

Transcript
Starting point is 00:00:00 This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building. Get to the root cause quickly with detailed information at your fingertips. Start your free trial at backtrace.io slash cppcast. And by Meeting C++, the leading European C++ event for everyone in the programming community. Meeting C++ offers five tracks with seven sessions and two great keynotes. This year, the conference is on the 18th and 19th of November in Berlin. Episode 75 of CppCast with guest Jackie Kay, recorded October 19th, 2016. In this episode, we discuss the meaning of modern C++.
Starting point is 00:00:57 Then we talk to Jackie Kay, C++ developer and roboticist from Marble. Jackie tells us about some of the unique challenges in robotics development. Welcome to episode 75 of CppCast, the first podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how are you doing today? All right, Rob, how about you? I'm doing good. Did you see the... This is completely off topic.
Starting point is 00:01:41 Did you see the new Guardians of the Galaxy trailer today? I did not. I did see that it was released. Twitter was talking about it and everything. We could pause right now and you could go watch it for a minute maybe. I have watched all of the Marvel movies in
Starting point is 00:01:56 order, just for the record. And then when the last one came out, the last big one, was it Age of Ultron? No. Yeah, Age of Ultron. Yes. Yep. The last big one we went to was Civil War. Yeah, it was Captain America. Yes, that was a very good one. But before Age of Ultron, we actually went back and re-watched all of the movies in order just to make sure we weren't missing anything. But no, I have not seen
Starting point is 00:02:24 the Guardians of the Galaxy Volume 2 trailer yet. It's definitely worth watching. I'm super pumped for the new movie. It's six months away, so I'm not sure what I'm going to do. Well, there's Doctor Strange before that. You have to go watch it.
Starting point is 00:02:36 That comes out in like three weeks. Right, right. Anyway, at the top of our episode, I'd like to read a piece of feedback. This week we got a bunch of tweets about last week's episode with Kenny Kerr on the show, and I guess some of the CppCast fans
Starting point is 00:02:51 likened Kenny Kerr to Shaft. We got this tweet saying, "Kenny Kerr is one bad... shut your mouth. Never pass up an opportunity to get Kenny's take." So that was some good feedback there. I like that username, just for the record: Fox Mulder. Yeah, that comes from Fox Mulder. Yep. So we'd love to hear your thoughts about the show as well. You can always reach out to us on Facebook or Twitter, or email us at feedback@cppcast.com. And don't forget to leave us reviews on iTunes, and I really mean that last one. Actually, I checked for iTunes reviews; we haven't had one since January, so please go on iTunes and give us a review. Wow. So maybe we just don't have any Apple users left. That seems unlikely. No, that's true. I know
Starting point is 00:03:36 that we have at least one person who works for apple who listens to the podcast there you go well joining us today is jackie k after spending her childhood wanting to become a novelist jackie There you go. a physics simulator for robotics R&D and ROS, an open source application framework for robotics development. She recently started as an earlier employee at Marble in San Francisco, a startup working on autonomous delivery. Jackie was a speaker at CppCon 2015 and 2016 and a volunteer at C++ Now 2016 and frequently attends the Bay Area ACCU meetups. Her hobbies include rock climbing, traveling, and reading books, not just blog posts. Jackie, welcome to the show. It's great to be here. Thank you. All right. So what, what piqued your interest that moved you from writing stories to writing code? Oh, well, it pays a lot better. And it, it's also really nice. It's a bit more concrete to
Starting point is 00:04:42 write a program and see the logic come together. Whereas when you write a story, it's being kind of subjectively judged as an artistic piece. But I talk to a lot of coders who think their code is also judged subjectively in a kind of artistic way. So it all comes full circle. That's a good point. You know, I almost said, well, at least with writing stories, you don't have to worry about syntax, but that's not really fair. Right, yeah.
Starting point is 00:05:12 People just aren't going to read something that's not grammatically correct, even in this day and age, yeah. Yeah. Okay, well, we have a couple articles to go through, Jackie. Feel free to comment on any of these, and then we're going to start talking to you about robotics, okay? Sure. Okay, so the first one is a blog from Jens Weller at Modern C++, or Meeting C++, and the article is, What Does Modern C++ Really Mean? And it was a pretty interesting article where it kind of goes into the history of the Modern C++
Starting point is 00:05:44 name and where it started. I know prior to reading this, I just, I definitely just thought of modern C++ as meaning C++ 11, 14, and now 17. Yeah. How about you, Jason and Jackie, what'd you think? Well, I was a, go ahead, Jackie. Well, it's funny for me because having recently graduated relative to some other people we've had on the show, I learned in C++03 and I didn't really consider myself a C++ programmer until after I learned 11. And I think the modern standard has made the language a lot more accessible for people.
Starting point is 00:06:30 Since overall, and I don't know if it's the standard or the style, the language is a lot more powerful if you leverage some generics or if you think about a more modern way of writing code. So this is very important to me. But again, because I don't have any professional experience in non-modern C++, really, I have a limited perspective on this. You know, you said something. You said it made the language more accessible,
Starting point is 00:06:58 and it's not something I'd ever really thought about before, but simply iterating over a standard container pre-C++11 took a lot of knowledge. And now it doesn't. Deducing the type of an iterator, for a beginner programmer
Starting point is 00:07:16 trying to write out the type of an iterator of a standard map and not using auto to deduce that is already a huge obstacle. Yeah, yeah maps a bad one right okay uh this next one is the unsigned conundrum and this is on the bulldozer blog i know we've talked about a couple of his blog posts before and he's referring to a talk that john called did a lightning talk at CppCon 2016.
Starting point is 00:07:45 And if you're going to watch just one lightning talk, I'd highly recommend watching John Kolb's lightning talk on unsigned integers. It was very, very entertaining to watch. And in this article, he goes into some of the things that John brought up, one of which being if you compare an signed and unsigned integer, it actually has some unexpected results. And he goes into the why of why that happens. You know, my only comment against John's argument here is any decent compiler is going to generate warnings on these things. Yes. Yep. Yeah. Any decent compiler is going to generate warnings on these things. Yes. Yep. Yeah, so if you're paying attention to your warnings,
Starting point is 00:08:27 you shouldn't run into this, but I know I've seen plenty of code bases where these warnings come up and are being ignored. Yeah, well. Yeah, kind of from a perfectionist point of view. Well, most cases when i use integer types i want to use an unsigned type because i'm counting something that has that that i'm counting something i'm using a natural number and uh when i when i'm doing math and when i when i compute something over the real numbers i want to use negative floating point numbers or doubles.
Starting point is 00:09:10 But I can't think of a compelling case for using a signed integer type besides avoiding problems like this. Yeah, yeah. Unless I'm actually doing math, I math i agree right on negative integers for some reason yeah yeah it's an interesting point okay uh this next one is c++ variadic templates from the ground up and this is on cppisland.com and if you're not too familiar with variadic templates, this is a really great blog post that kind of walks you through an example that becomes much easier to do if you have variadic templates at your disposal. Yes. Yeah. So I'm not sure there's much more to say on this one other than I'd highly recommend it, especially if you're not familiar with the variadic templates feature of C++.
Starting point is 00:10:05 I would like to comment that it specifically focuses on building up recursive variadic template instantiations. And that is almost always not what you want to do because it adds so much compiler overhead. There's almost always a non-variant a non-recursive solution with variatics okay so just for someone to think about just let me say jackie yeah uh i thought of a few kind of additions i would make to this blog post if i were writing it it's absolutely a good intro i think but there's stuff like stuff like, well, if you're in 14, you could use auto function return type deduction, and then you could deduce the type of your summation based on, actually type promotion rules, based on what the types of your arguments
Starting point is 00:11:03 were to the sum function. And I also, I thought it was kind of weird. This is a total nitpick, but I thought it was kind of weird that the base case only took one argument. It was like you're taking the sum of just one number. So you can't add one number to nothing. I don't know. It's true. Yeah, so you would have done the base case as, yeah. What would you have done the base case as two zero?
Starting point is 00:11:29 Yeah. So I actually, I was bored at work. So I wrote up a little, how I would have done this one using C++ 14. And, uh, um,
Starting point is 00:11:41 I also did the base case would, uh, accepted like template T, uh, template type name t type name s um and two arguments and that way if you tried to write sum of 0.1 uh it would be a compiler okay um yeah okay okay well jackie let's start talking about C++ and robotics. And maybe a good question to start out with is, what exactly are C++ robotics developers doing? You know, what are we talking about when we're talking about robots and working with robots? That's a great question. Because it's very broad. Sure. uh robotics is is a relatively new field people have
Starting point is 00:12:29 been automating things for a very long time people have been automating machines for a long time basically since computers were invented people have wanted to talk to hardware uh but i think the idea of having a mobile autonomous system uh system that learns and adapts to the environment, it's been pretty recent that we could have that realized. And that's partially, there's a lot cheaper to buy sensors, the kinds of sensors that you would need to make an autonomous system like cameras and laser scanners. In 2009, DARPA, the US Defense Research Agency, poured in a lot of money into the first autonomous vehicles challenge. And we're sort of seeing that the ripple effects of that now in the autonomous vehicles industry basically exploding in the last few years. But kind of grounding this in the technical sense of things, I think of C++ being important in robotics for two reasons. One is to develop firmware for
Starting point is 00:13:42 small sensors and actuators, basically small devices that are going to compose the blocks of a robot. And then I see C++ as being very important to process large amounts of data in an intelligent, adaptive way. So if you're going to do machine learning or inference, statistical inference based on large data sets from a high-resolution camera, you're going to want to do that in the most performant way possible. And you're probably also going to want a very expressive API to do that because you're doing complex math. And I think C++ is the best language for that combination of power and expression.
Starting point is 00:14:24 I'm curious, you said writing firmware for sensors, and that's not something I've ever thought of. But so is there some level of pre-processing that's done on our sensors? There are sensors that are used in robotics today? Absolutely. Yeah. So the pipeline of how that goes is you have an electronic component that's converting analog information to digital information so a camera is converting light to pixels
Starting point is 00:14:54 and you uh you want to get that uh digital information off of a very small microcontroller that is accessing, you know, digital IO pins connected to your ADC or analog digital converter. And so you probably want to have a camera module with a small microcontroller and then plug that plugs into a computer over USB. So you're going to want firmware that accesses the IOPins and has some USB protocol to send out over this unified interface something to your main control unit. What kinds of processes are used there these days? Oh, it depends. And yeah, so I guess going back to the software firmware side of things, there might be firmware on board that does, in the camera case,
Starting point is 00:15:55 some preprocessing, like auto white balance is one thing, and that's managing channel levels in the image before sending it off to the computer. So, yeah, it depends on the complexity. So, you know, you could do a bunch of research and find answers to that question. STM32 is very popular, I think, for that class um if you're if you're looking for like specific names um let's see uh i kind of need to have like wikipedia up could you tell us a little bit about um the type of robotics projects that you've worked on personally uh you said you're working at Marble Robotics?
Starting point is 00:16:46 Yeah, so going kind of a historical timeline of this, when I was in college, I did a lot of kind of educational products or projects, basically working on this. I was a TA for a class where we had basically Roombas without a vacuum and an Xbox Connect. And that was doing very basic color segmentation and line following. I wrote some path planning code at an internship for a lab that was interested in sending rovers to the moon. So that was very, very different and also very high level. Basically,
Starting point is 00:17:35 it was a lot of graph search algorithms and thinking about like cost maps based on the temperatures that you would experience on the moon since there's no atmosphere. Over basically the last two years, I worked on more simulation when I worked on Gazebo, and I worked on some code for a semi-humanoid robot. Basically, it had two six six degree of freedom arms and that's basically saying that it had uh it has arms that have the kinematic workspace of a human uh they can it can kind of hit uh points in 3d at any orientation in within a certain bounding box. So flash forward a little bit.
Starting point is 00:18:28 These days, I'm working on basically an autonomous land vehicle. It's not a car in that it's not something that someone could hop in and drive in, but it does operate outside. And the outdoors-indoor distinction, I think, is pretty important because that does inform what hardware is going to go on the robot, what kinds of sensors, and it also informs the scale of the problem. So a robot that operates outside,
Starting point is 00:19:00 it's going to have a much larger map and representation of the world than something that operates inside in a small building or even just one room. And this all eventually maps down to hardware because if you need to store a large representation of the world, you either need to have a much larger disk, or you need to be talking out to a cloud server and basically pipelining the map representations that you're going to get. Like, give me, you know, this portion of the map, which I'm going to be in, if I'm operating over like miles and miles of an area. Whereas if you're just operating in a finite like warehouse workspace,
Starting point is 00:19:49 you can just store that on disk or even in memory. What in the world is the failure mode there? If you're uplinked to your cloud goes down and your autonomous vehicle is driving across some terrain. Yeah, that's a great question. Yeah. So, uh, uh, robots with cloud interfaces, it's kind of a new thing. Uh, and it's very important because as I said, you're going to want to pull data from the cloud eventually. Um, but luckily, uh, and, and there are people who have these teleoperation interfaces, uh, where you are kind of like a video game. You have some controller, you're moving a robot around, you have a video screen, uh, streaming camera data, uh, live in real time. And that's a challenging problem in itself because you need really good fast compression.
Starting point is 00:20:46 And also, you know, you're going to have a lossy link because Internet access, wireless access, not very good. You're probably going to need 4G or LTE connection, and that's why we need autonomy, because the robot needs to continue operating, even if it doesn't have a human operator or like a certain level of information. So some possible solutions, you can kind of, you know, think out what would be the sensible thing to do. Hopefully, you have some representation of where you came from. So there's a concept called odometry, which is basically the robot looks at its motor information. It measures that these are the motor commands that I've accepted since I started up. And if I reverse those, theoretically, I could go back to where I started from. So that's what a robot might do if it needs to go back home for some reason.
Starting point is 00:21:54 Odometry, though, is also subject to error. And anything you do in robotics, you get sensor data, and it's probably going to be noisy and random and have some element of error to it. And so most, a lot of the software we write is managing that error in some sensible way. Interesting. I have done a little bit of work having to deal with sensors on microcontrollers, and it just reminded me of like my set of heuristics for, because I knew what kind of data I was expecting to get, I just basically ended up making this if block. I don't even know how big it was. That was like, I'm going to massage the data
Starting point is 00:22:33 to be something like I expect it to be. I guess you probably have to do the same kind of things. On a code level, there's some interesting hacks that we do. So there's some very sophisticated mathematical methods for dealing with this kind of thing. So there's filtering techniques where you say, I'm going to model my sensor noise as a Gaussian, because a bell curve is like the nicest probability distribution people can think of. And maybe you have a data sheet for your sensor that says, this is the variance of a measurement that you get. And you'd say, all right, I know the variance and I expect this to be sampled
Starting point is 00:23:18 from a Gaussian. So if it's a little bit off, that's okay. And so randomness is very important. And actually a lot of those filters are implemented using some kind of like Monte Carlo techniques. So generating fast random numbers and actually random numbers, not pseudo random numbers, is important because a lot of our algorithms for inference run on those. Where do you get your actual random numbers from then? I got to ask. Currently, I use the standard random number library. Okay. Yeah, it's pretty good. So one useful technique for testing is
Starting point is 00:24:02 if you have a lot of random numbers in your code, you're going to have, and you have tests, you're probably going to have non-deterministic tests, right? So you can either kind of massage the boundaries of your test to say this is within some acceptable limit, my result is within this variance. Or you could fix the random seed, which is to make it deterministic, essentially. And testing is very important in robotics. Again, it's different from a lot of other software disciplines and something people haven't really written in the book of how to do reliable testing in robotics yet, I think, or I haven't read it.
Starting point is 00:24:52 Generally, you know, you have to do unit testing on a code correctness level. So applying unit testing, applying, I like to use sanitizers or static analysis you could always use more of that then checking kind of an overall algorithmic correctness checking if your code works on recorded data and if it works live and benchmarking is also hugely important
Starting point is 00:25:23 because you need to hit some real-time requirement in robotics in all the interesting applications. Basically, all of your sensors come in at some fixed rate and there's kind of a rough way of designing the timing requirements for a robotic system where you look at your slowest sensor and you make sure that the algorithms that depend on that sensor data aren't going to lag that update rate. Uh, so for example, again, to ground this in reality, uh, let's say you have a laser scanner. A laser scanner is a sensor that shoots out laser beams, uh, to obstacles in the, in the environment. And it measures how long it takes for those laser beams to come back to the laser scanner. Right. time tells you, because you know the speed of
Starting point is 00:26:27 light, that time tells you how far away an obstacle was at that point. So you get a point cloud back, point cloud being this 3D representation of your world. Laser scanners are super cool. So let's say you want to write an algorithm that figures out where you are in a global map based on the point cloud you get in a particular time step. That point cloud is going to be relative to where you are right now. And your map has some fixed origin, like zero, zero. You know, let's say it's where you booted up the robot. I don't know. So you have to process hundreds of thousands of points and compare it to a map that has like millions of points.
Starting point is 00:27:18 And you have to do that in 100 milliseconds. That's the kind of scale of the problems that we're working on in robotics. So do you end up taking advantage of multi-processing on the processing? I mean, do you have multi-core CPUs that you're working with? Yeah. So again, this is what I would call desktop robotics computing, not embedded robotics computing. Absolutely. Parallelism is really helpful. A lot of the libraries that we use leverage OpenMP in some way.
Starting point is 00:27:55 Parallelism is also really important for machine learning. So another really recent craze is GPU deep learning. NVIDIA is all about this, for example. And they're actually doing really awesome things i think uh with the some of the single board computers they're putting out so uh basically when you when you're training so when you're doing machine learning you have two steps you want to train your neural neural network or your convolutional neural network or whatever on a big data set. And that can be slow, but that's a massively parallel problem. And then after training, you have this model with a certain set of weights.
Starting point is 00:28:46 Usually what we do in deep learning is you specify some layout of nodes in your network. And then you learn a set of weights on the edges. And that's kind of like your activation function for your neurons or whatever metaphor you want to use. And then you import that in some format to your production code. And then you have your testing phase where you want to be really, really fast to compute the expected output based on the inputs that you present to it. And so I could talk a bit about, again, I'm not a developer. I've never been a developer on a machine learning framework, but I've been using a few.
Starting point is 00:29:35 What a lot of, for kind of the sake of convenience, a lot of these frameworks will export the trained model to some serialized format. So even like some of the really state-of-the-art ones, we'll use something like Protobuf. I think TensorFlow and Cafe are two big frameworks. I'll serialize it, and then when you want to train it, or sorry, when you're done training and you want to actually use your neural network,
Starting point is 00:30:05 you deserialize it, you load everything up in RAM, and then you process your camera data in real time. And that testing step when you're actually using your network is also a highly parallel problem because you're feeding pixels through a network of individual nodes with known connections between them. So, yeah, in summary, there are innovations in parallel computing, GPU computing, are like pivotal in robotics right now. So I've never used a neural net personally. I have, you know, some vague ideas for how they're used,
Starting point is 00:30:52 but maybe just like in the most basic, like explain like I'm five kind of, what kinds of things are you training and how are you using it in your robot? Of course. Robots. Well, neural networks are the most useful for when you have a category that computers and math don't really understand. So, you know, you look at a picture and you say, is this a cat or a cheeseburger? And you don't want to hard code
Starting point is 00:31:30 a cat class or a cheeseburger class necessarily and say in your cat class it has to have two ears which have these polygons and these eyes in the image. let's say you want to
Starting point is 00:31:48 look at a bunch of images and uh mad you know magically figure out from these images uh a function and that function when presented with any any other image that could have been from your original set or from just like a picture you take, will tell you the likelihood that that picture contains a cat. And the way that's implemented usually is you have a graph, and that graph accepts at the, or it's a multilayer graph. It has a connection of edges and nodes. At the very top, it has a connection to every pixel in your image. It has some input. And basically when you train it, you have a known labels of images.
Starting point is 00:32:46 So you have someone sit down with hundreds of images and you say, this has a cat, this doesn't have a cat. This one has a cat, this one has a cheeseburger, this one doesn't have a cheeseburger, or whatever. So that step of operating on a labeled data set is very important. So then machine learning is how you get from this labeled that has a specific set of weights to the edges that can output a label from an image. Okay. Okay. Okay. That makes sense, I think. Maybe that wasn't an explanation for a five-year-old, but for a computer scientist.
Starting point is 00:33:46 Right. I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors. Backtrace is a debugging platform that improves software quality, reliability, and support by bringing deep introspection and automation throughout the software error life cycle. Spend less time debugging and reduce your mean time to resolution by using the first and only platform to combine symbolic debugging, error aggregation, and state analysis. At the time of error, Bactres jumps into action, capturing detailed dumps of application and environmental state. Bactres then performs automated analysis on process memory and executable code
Starting point is 00:34:21 to classify errors and highlight important signals such as heap corruption, malware, and much more. This data is aggregated and archived in a centralized object store, providing your team a single system to investigate errors across your environments. Join industry leaders like Fastly, Message Systems, and AppNexus that use Backtrace to modernize their debugging infrastructure. It's free to try, minutes to set up, fully featured with no commitment necessary. Check them out at backtrace.io slash cppcast. You mentioned the difference between embedded robotics development and desktop robotics development. What exactly do you mean by that difference? Yeah, I've noticed I've started developing this language with myself and other, basically, robotics developers, where I say things like desktop and embedded and frontend and backend, which is a less well-defined distinction.
Starting point is 00:35:16 So I'd say a desktop robotics developer is somebody who's writing code on an operating system. Okay. A lot of robotics, larger systems that have to do a lot of computing and have to run a lot of processes, people will generally choose a desktop class computer running Linux. Um, some Ubuntu is very popular because of the software ecosystem, but, uh, something that can manage processes and threads and devices, uh, cause it's very important that you have, uh, the kind of development convenience of, I want to plug in this USB device and I want my computer to recognize that it is a USB device. And that's functionality that an operating system provides. Then a very important, but on the other side, a very important developer group in robotics is
Starting point is 00:36:18 people who are developing firmware. And it's actually very important to me that the people who are making robots happen in some way, who are writing applications to deploy automation, and the people who are writing firmware for sensors and motors communicate. Because we, being the desktop robotics developers, have to use those APIs, and we have to choose the right abstractions to work with those sensors. So why is that important? Well, there's a lot of software that we use in desktop robotics development that wraps some device. So, you know, you have different types of cameras. And those cameras have different parameters. And they have different firmware APIs for these different cameras. But it's really convenient to have a wrapper that uses the same API call,
Starting point is 00:37:33 even if you switch out different cameras. But if those APIs differ in some really subtle way, that could introduce a very subtle bug into the higher level code so the design of those systems is very important that that the expectations are set uh so that everyone has a good day and doesn't run into a subtle api bug no so you are a desktop robotics developer, you're saying, basically, right? Yes. Okay. Yeah. There's some cases where we get to, in my current job,
Starting point is 00:38:21 get to have some fun debugging or writing lower-level code. Okay. Specifically, there's some stuff I'm not allowed to talk about in my job. It's very common in the Bay Area tech scene to keep many secrets. But speaking generally,
Starting point is 00:38:37 in my previous work at OSRF, I communicated a lot with small startups using our software, with research labs, and sometimes with external providers of sensors. So it's interesting to see the whole ecosystem that way. So I'm curious, since you're running on Linux, and the operating system presumably is going to introduce its own level of non-determinism because, you know, it's doing its own things in the background.
Starting point is 00:39:11 But you said that you have at least soft real-time requirements. How do you deal with that? How do you write your C++ so that it meets these real-time requirements that you have and interacts with the operating system in a way that they're not conflicting? I know it's kind of a big question, but... Absolutely. Yeah, operating on Linux, the underlying threading mechanism that you're using is Pthreads. Mm-hmm.
Starting point is 00:39:39 So the... Okay, so the hardcore real-time requirements in big autonomous systems often... Okay, back up. So the simple answer is just: let the operating system handle it. Most of the time it's fine, because you're not running a lot of other applications on your computer. You're not running Firefox and streaming music at the same time as your robot's operating. But sometimes that's really not enough, and you need some kind of more complex thread scheduling. Maybe you're going to install RT Preempt, for example, a kernel patch that allows you to have preemptible real-time threads
Starting point is 00:40:31 and set thread priority. But then there's this kind of software ecosystem that people use for the convenience of developing these applications. So I used to work on ROS, R-O-S, or "Ross," as some people call it, which is basically this callback-mechanism, message-passing middleware library that hides a lot of the threading from the user. So people use ROS because it has a lot of helpful introspection tools. And it kind of came out of an era, a pre-C++11 era, where writing concurrent code was kind of hard,
Starting point is 00:41:21 writing generic code was hard, so it provided these mechanisms for code generation and for process and thread management. But the disadvantage of using that is that you can't set thread priority, which is very important in a real-time system because the threads are handled by this API framework that you're using. So what do you do? Well, if you're relying on ROS and you have robotics applications that are using that,
Starting point is 00:41:54 you're going to have to choose another library if you have really hard real-time constraints. If you are running on Linux, on vanilla Linux, you probably need to use RT Preempt or Xenomai, which is another Linux variant that has basically a Linux user space and a hard real-time kernel. Or there are other solutions like that to offer better performance. But yeah, really, it's about benchmarking and then identifying where the bottleneck is actually coming from. So anecdotally, I'll jump into a story here from the robotics world. A few years ago, I guess two years ago now, there was the DARPA Humanoid Robotics Challenge. And this was another contest put on by the U.S. government to fund research labs to get autonomous humanoid robots walking and doing essentially an obstacle course where they would have to complete different tasks. These humanoid bipedal robots would have to walk, get in a car, and drive a car.
Starting point is 00:43:17 So not only is it a self-driving car, it's a robot-operated car. Get out of the car, open a door, turn a valve, get a screwdriver and cut a hole in a wall, in drywall, walk up steps, and walk over kind of cinder-block debris. So basically it's simulating a disaster scenario. Right. If we have a nuclear meltdown or a massive earthquake or something, we're going to deploy robots instead of humans to rescue people. Sounds like a great idea. Well, it turns out making robots walk is really hard. And that's because if you think about the dynamics of the system, a walking robot is very top-heavy. And that's, in physics and
Starting point is 00:44:08 dynamics, that's the inverted pendulum problem, where you have a big mass swinging on top of a fixed bottom. And you basically have a lot of torque because of your large mass and the length from the bottom. So that's an inherently unstable system, and you need really fast control in order to make a robot walk and balance dynamically. So you need fast control. You basically need to have software that reacts to sensor data in a one kilohertz update loop.
Starting point is 00:44:47 So the team that did the best with a... okay. So many of the winning teams cheated and had wheels. They built robots that had legs, but could kind of squat down and get on wheels, which is totally cheating. But not really, because it's a more sensible engineering design. But the team that did the best on a non-wheeled robot was running a real-time kernel. They were running Linux with the RT Preempt patch, and they were also running
Starting point is 00:45:27 Java. Yeah. This... I'm still, like, really upset about this. But they weren't using OpenJDK; they were using some, like, expensive
Starting point is 00:45:43 proprietary real-time JVM implementation. I don't know all the details about that. I'm not a Java expert. But from my understanding is you can't get the performance that they got using the default open
Starting point is 00:45:59 Java garbage collector. So, fun facts. Data points. And also, the DARPA Robotics Challenge was a research project. It's hard to say if you can extrapolate data points about the entire industry and all applications from that. Yeah. Okay, so you gave a talk at CppCon. Do you want to tell us a little bit about that? I did, yes. You're very busy at CppCon, actually. Yes, CppCon was a lot of fun. It's a great conference. So my talk was born out of, again, this motivation to connect robotics networking on desktop and on embedded and have, like, one magical library that was cross-platform between, you know, desktop computing and bare-metal embedded computing. And this was motivated, actually, by my previous job. One of the projects that I worked on was called ROS 2.
Starting point is 00:47:35 And it was, it's essentially... I've noticed, actually, after C++11 came into popularity, a lot of people are talking about this "thing number two" of our old project. The Boost community sometimes talks about Boost 2, which is: let's have a big refactoring of Boost libraries using the modern standard. Probably someone from Boost is going to correct me based on what I said. Anyway, so ROS 2 is,
Starting point is 00:48:12 we have this industry and ecosystem of robotics application developers who use this library called ROS that is pre-C++11 and has some questionable design decisions. So let's rewrite the whole thing. And one of the ideas was the underlying middleware will be this thing called DDS. And DDS is just a standard for a distributed message passing protocol.
Starting point is 00:48:45 And it was designed for robotics and the Internet of Things. Basically because you want, in this situation of having lots of devices, you want to pass messages across many different nodes connected on the network. You don't want to have one central server because that could be a point of vulnerability. Basically, if one part of the system goes down in a highly distributed network,
Starting point is 00:49:17 you don't want to have one central broker. And it might not make sense to establish one of your devices as a central broker. Anyway, so that's DDS. Like C++, it's this kind of committee-ratified standard. It's put out by the Object Management Group. And it's already used in various kinds of IoT applications. And another goal of ROS 2 was to be able to run natively on embedded, bare-metal environments.
Starting point is 00:49:56 Okay, the problem with this premise was that there's no open-source implementation of DDS that runs on embedded devices. So this was maybe a bit of a project-planning oversight. Because one of the things that's important to ROS, this project and community, is that open source is really cool. And open-source libraries allow people to move faster and prototype faster, because there's less financial overhead in using an open-source library. And bugs are often caught and fixed faster, because people, when they encounter a bug, can go in and patch it,
Starting point is 00:50:44 and then they can pull requests that pack patch back to master uh so okay so there was no open source implementation of dds uh so i thought wouldn't it be great if somebody implemented that and uh not only implemented that, but did it using C++14, which is a great standard that has generic lambdas and auto return type deduction, etc. So I started this. It was a lot of fun. But it also it opened up a lot of questions. And my CppCon talk was going over basically some techniques I thought would be interesting for this library, benchmarking them, and then essentially complaining about poor support for certain parts of the C++ standard that are not supported on bare metal ARM or the kinds of environments that I was targeting. Sorry.
Starting point is 00:51:56 So if you want me to get into some of the higher points from that talk right now... Yeah, yes. I'm kind of curious, just, like, maybe an example of some part of the standard library that could not work on bare-metal ARM. That sounds like a particularly interesting question to me personally.
Starting point is 00:52:19 Yeah. Yeah, definitely. So, uh, okay. So there's the, uh, arm, no EABI tool chain, which is really cool.
Starting point is 00:52:28 It's the basically cross-compilation environment for targeting ARM, like mostly the ARM Cortex MX series when you have no OS. And in my talk, I cited a particular page in the documentation where it says the compiler has support for almost every part of the C++11 standard, except for standard thread. Okay. And that makes sense, right? Because standard thread is... It's going to use some native implementation of threads, which if you're on Linux or OSX, it's pthreads.
Starting point is 00:53:20 I'm less familiar with Windows, but there's the pretty awesome Windows thread runtime thing, whatever and essentially standardizing standard thread for an environment
Starting point is 00:53:37 where you don't have an operating system doesn't make any sense because you don't have an operating system to manage threads for you and generally if you don't have an operating system to manage threads for you. Right. And generally, if you want to have some concurrency on a bare metal embedded system, you pull in lightweight threads from some real-time operating system. So, for example, in FreeRTOS, which is the real-time operating system I looked into the most,
Starting point is 00:54:17 you can pull in threads, you can pull in really tasks, is what they call them, which are very lightweight threads, and they also have support for coroutines. These coroutines are not exactly the coroutines that people talk about that are on the roadmap for future C++. So essentially, another fun fact is the ARM toolchain provides this thing called CMSIS. And it's another, you know, this world is full of all these really opaque acronyms. But it's like wrapper convenience layer over different operating system,
Starting point is 00:54:58 real-time operating systems. So you would think, oh, maybe ARM could implement this wrapper between different real time operating systems that has an abstraction for threads then
Starting point is 00:55:15 couldn't standard thread just be a wrapper over that but still when you use CMSIS or however people pronounce it, since I've never heard someone talk about it in real life besides me, you have to specify in the API which RTOS you're using,
Starting point is 00:55:40 and it's a part of your build tool chain which RTOS you're using. So it's like the customization points aren't all there uh for specifying a threading arc like just a threading architecture when you don't have an operating system um so so that makes sense um certainly uh but i still feel like compilers, the ARM tool chain could talk to vendors who implement real-time operating systems and say, look, guys, what if we had some preprocessor option that says, I'm using FreeRTOS, and that makes standard thread available. And that standard thread is a wrapper over FreeRTOS threads,
Starting point is 00:56:30 which actually have more functionality than standard thread. But maybe someone could, and sometimes people do this with Pthreads, maybe someone could add some FreeRTOS things to set, API calls to set thread priority or something. But then they also use standard thread in their code and that uses REI and kind of like nicer C++ syntax that people are used to. So I don't know, a lot of ideas.
Starting point is 00:57:04 Yeah. Maybe one question to end on is: we've talked a lot about different C++ features used in robotics. Is there anything you don't use? Like exceptions, for example. I know video games often don't use them. Do you find yourself using them in robotics? Yeah. Okay. So again... okay. So robotics has a very high dependency tree to do a lot of the complex tasks that we do. And a lot of the legacy code that we use, or that I'm using, does use exceptions. And some of these libraries use exceptions in really terrible places, or where I wouldn't necessarily use an exception at all. So it's kind of bad. I always think that you can implement error handling in a more elegant way than using exceptions, or in a different way.
Starting point is 00:58:17 And there's also some libraries that are very useful, that I would never go a day without, that do not use exceptions. So one of my favorites, which I'll plug here, is Eigen, which is an expression-template library for matrix operations, or linear algebra. And I mean,
Starting point is 00:58:40 yeah, it's mostly header-only and providing data types and operations between those uh and and like because it's a math library um it's very pure and and the error handling is usually a compiler that's very nice um but uh and in terms of parts of C++ that most people in robotics don't use, and I wish they used more, adoption of the modern standards is still lagging a lot. And it's something I advocate for a lot. But even the really cutting edge machine learning libraries are essentially pre C++ 11 code bases that still use boost thread for that kind of thing.
Starting point is 00:59:31 So a part of my mission, I think, or what I'd like to do, is to encourage cross-pollination between this community, the C++ community, and robotics. Because I think that adopting the modern standard, and a kind of more powerful way of writing in this really powerful language, is going to accelerate the forward progress of the robotics world. Okay, well, Jackie, thank you so much for your time today. Where can people go and find you online? Well, I have a very, very minimal website at jackieok.com.
Starting point is 01:00:14 That's my little initial. And I'm on Twitter as jackaline. That will be in the show notes, but it's like a weird portmanteau of my first name and my last name so I tweet occasionally that's probably the best place to find me okay thanks for coming on the show yeah thanks so much guys thanks for joining us
Starting point is 01:00:35 thanks so much for listening in as we chat about C++ I'd love to hear what you think of the podcast please let me know if we're discussing the stuff you're interested in or if you have a suggestion for a topic I'd love to hear about that too of the podcast? Please let me know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, I'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com. I'd also appreciate if you like CppCast on Facebook and follow CppCast on Twitter. You can also follow me at Rob W. Irving and Jason at Leftkiss on Twitter. And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.
