CppCast - Autonomous UAS
Episode Date: October 7, 2021

Rob and Jason are joined by Brandon Duick and Billy Sisson from Exyn Technologies. They first discuss the upcoming CppCon hybrid conference and a new tuple library for C++20. Then they talk to Brandon and Billy about the autonomous UAS/drone software they work on at Exyn Technologies.

News:
- CppCon 2021 Program Announced
- Tuplet: A Lightweight Tuple Library for Modern C++
- Span should have a converting constructor from initializer_list

Links:
- Exyn Technologies Careers
- Exyn - Autonomy Level 4
- First Dog to Fly a Drone
- ExynAI - Modular Autonomy for Mission Critical Data

Sponsors:
- PVS-Studio Learns What strlen is All About
- PVS-Studio podcast transcripts
Transcript
Episode 320 of CppCast with guests Brandon Duick and Billy Sisson, recorded October 6, 2021.
The sponsor of this episode of CppCast is the PVS-Studio team.
The team promotes regular usage of static analysis and the PVS-Studio static analysis tool. In this episode, we talk about CppCon and a tuple library.
Then we talk to Brandon Duick and Billy Sisson from Exyn Technologies.
Brandon and Billy talk to us about their autonomous UAS software. Welcome to episode 320 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
All right, Rob, how are you doing?
Doing okay. We are getting closer to CppCon. You getting excited?
I don't know if I'm excited yet, because I'm still trying to figure out if I'm excited about TechTown, because there's still no news from Norway there. Although I do want to comment, and I know you and I were just chatting about this right before we started, I am, as far as I know, the only on-site CppCon class, I'm definitely the only pre-conference on-site CppCon class, that is still moving forward.
And I do have students, so it is highly likely that the class will still go forward.
If you're thinking about coming to CppCon and you want to go to an in-person class, come sign up for mine.
Definitely.
It's the Practical Performance Practices class. I've mentioned that on the show before. This is a new class for me.
Awesome. Well, I hope it goes well. I hope a couple more people sign up.
Yeah. Okay, well, at the top of every episode, I'd like to read a piece of feedback. We got
this tweet from Pejman, who we had on the show a couple months ago, I believe. And he wrote, shout out to Jason and Rob for encouraging me to submit during our CppCast chat back in June.
Now I need to scramble and figure out how to prepare for this talk.
I never thought about that.
Well, he did great on the show.
I'm sure his talk will be great.
Yeah, that's, yeah.
Although we all are feeling that if you're paying attention on Twitter right now,
lots of speakers who are thinking about
both TechTown and CppCon are like,
wait a minute,
I actually have to get this thing done
in the next week, whatever.
Okay, well, we'd love to hear your thoughts
about the show.
You can always reach out to us
on Facebook, Twitter, or email
at feedback at cppcast.com.
And don't forget to leave us a review on iTunes or subscribe on YouTube.
Joining us today is Brandon Duick.
Brandon is a software engineer and roboticist who really enjoys problem solving for complex systems.
Brandon is the director of system software at Exyn, where he leads and contributes to a wide variety of projects.
He is particularly interested in simultaneous localization and mapping.
Recently, he has been working on the calibration methods that ensure the Exyn systems can register LiDAR sensor measurements
with the highest possible accuracy.
Brandon holds a BS and MS in electrical engineering
from the University of Pennsylvania.
Brandon, welcome to the show.
Thank you.
What the heck is simultaneous localization and mapping?
It's commonly called SLAM.
It's a very popular algorithm domain in
robotics. So you can imagine if you're a robot that knows nothing about the world around you,
and you turn on and you have to simultaneously figure out both where you are in that environment
and also map that environment at the same time. Sounds like virtually every adventure game ever
written, pretty much. You've woken up and you don't remember who you are.
Now we have to...
I was trying to do my dramatic radio voice entry to the show.
Right.
And also joining us today is Billy Sisson.
Billy is the director of motion planning at Exyn, with prior experience at United Technologies, now Raytheon.
He studied computer science and systems in undergrad at Rensselaer Polytechnic
with focus on robotics, controls, and electrical engineering.
He's been programming professionally using C++ for 12 years.
At Exyn, Billy focuses on the autonomy of the robot.
How does the robot build a searchable map of the environment?
How does the robot efficiently generate safe paths from A to B?
And how does the robot explore the environment in a way that maximizes information gain?
Billy, welcome to the show. Hey, great to be here.
That's, well, we're going to talk about all this stuff in a few minutes, I guess,
but there's so much to unpack in both y'all's bios. Yeah, definitely.
Well, like Jason said, we'll definitely be talking a whole lot more about robots and drones, but first we just have a couple news articles to discuss,
so feel free to comment on any of these guys. Okay. Sounds good. Okay, so first one, CppCon 2021. The main
program has been announced. And yeah, you can now see all the talks that will be happening.
There's more details about exactly how the hybrid event is going to work. Yeah, so looks like it should be a good conference.
I'm excited.
I am also excited because I am speaking at 11 a.m. on Monday morning.
That means I get to actually go to this conference.
Very good.
Usually John shuffles me to the last day,
but I managed to get in before that this time.
Nice.
Do either of you,
have you ever attended a CppCon before?
I've never attended a,
in person.
I've definitely watched some of the talks after the fact.
Yeah.
We were supposed to attend it this year,
but you know,
things get in the way.
I got to say,
there's a lot of interesting talks going on,
like right out of the gate,
thinking about programs algebraically, sum types and products.
And it's interesting that there's both online and in-person keynotes, but they'll both be broadcasted and stuff.
So is Bjarne Stroustrup's talk considered the keynote there, or is that just a talk?
It is the opening keynote, one of them is. But then he's also giving another one on Monday. He's giving two talks on Monday, which I don't believe he's ever done at CppCon. And so I know a couple of the people who are competing with him at 2 p.m. on Monday,
and they're not necessarily thrilled about that.
Yeah, it's a rough time slot.
Okay, next thing we have is a library on GitHub.
This is Tuplet, a lightweight tuple library for modern C++.
And this is a header-only library,
and it looks like it's pretty easy to use.
Single include file, yeah.
Yeah, and the examples look quite nice.
Like, you can just get your tuple item by passing, like, the tuple and then, you know, within brackets, one or two or whatever.
Yeah.
I got to say that that overload of the index operator, that's some cool stuff right there.
It's a neat trick.
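For listeners curious how that index-operator trick works, here is a toy sketch, assuming C++17 or later. The names and the user-defined literal here are our own illustration, not tuplet's actual implementation:

```cpp
#include <cstddef>
#include <initializer_list>
#include <type_traits>

// A two-element toy tuple. The key idea: operator[] takes a
// std::integral_constant, so the "index" is a distinct type and each
// overload can return a differently typed element.
template <class A, class B>
struct toy_tuple {
    A first;
    B second;
    constexpr A& operator[](std::integral_constant<std::size_t, 0>) { return first; }
    constexpr B& operator[](std::integral_constant<std::size_t, 1>) { return second; }
};

// A user-defined literal so 0_ic is integral_constant<size_t, 0>.
template <char... Cs>
constexpr auto operator""_ic() {
    constexpr std::size_t n = [] {
        std::size_t v = 0;
        for (char c : {Cs...}) v = v * 10 + static_cast<std::size_t>(c - '0');
        return v;
    }();
    return std::integral_constant<std::size_t, n>{};
}

int main() {
    toy_tuple<int, double> t{1, 2.5};
    t[0_ic] = 42;        // selects the int element at compile time
    double d = t[1_ic];  // selects the double element
    return (t[0_ic] == 42 && d == 2.5) ? 0 : 1;
}
```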
So the thing that jumped out at me on this is, with CppCast,
it kind of always comes back to ABI breaks.
That section there at the bottom.
Oh, I missed that.
What is it with ABI breaks on the bottom?
So he has a section right before the benchmarking section
where he's saying, okay, well, can this implementation ever become the standard?
And of course, it'll be an ABI break to try to make that kind of change.
Yeah. Oh, okay. This is interesting. Oh, sorry. Go ahead, Brandon.
I was just wondering, I didn't get far enough into it to figure out, it seems like C++20 features are kind of what are required to make this implementation work. I know it's possible to do a similar implementation in C++17, but I don't know how much of this requires C++20.
I had a little bit of a moment as I was reading this, because I'm like, oh, the only way that they did this is if it was like, because I did a toy tuple implementation myself.
Okay, so, all right, I'll back up.
A lot of what the author here complains about, that tuple is not trivially copyable and stuff like that, comes down partially to the fact that tuple is typically implemented as recursive inheritance. So you've got, you know, like this ridiculous inheritance hierarchy that's built when you create a tuple with one of the standard implementations. And I'm reading this thinking, well, the only way that the author could have done this is if they didn't use recursion. And that was something that I did as an exercise on my own, like three or four years ago, just to see if it was possible to make tuple without recursive templates. And so, yeah, that same technique is definitely possible in C++17, and can get you at least most of the way there anyhow. But it really made me sad. I hadn't thought about the fact that tuple is not trivially copyable when it contains only trivial types.
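A rough sketch of that non-recursive technique, assuming C++17; this is an illustration of the idea described here, not Jason's or any standard library's actual code:

```cpp
#include <cstddef>
#include <type_traits>
#include <utility>

// One flat "leaf" base per element, all at the same inheritance depth,
// instead of a recursive chain. With no user-declared special member
// functions, the tuple stays trivially copyable when its elements are.
template <std::size_t I, class T>
struct leaf { T value; };

template <class Seq, class... Ts>
struct flat_tuple_impl;

template <std::size_t... Is, class... Ts>
struct flat_tuple_impl<std::index_sequence<Is...>, Ts...> : leaf<Is, Ts>... {};

template <class... Ts>
struct flat_tuple : flat_tuple_impl<std::index_sequence_for<Ts...>, Ts...> {};

// get<I>: overload resolution slices out exactly the right base class.
template <std::size_t I, class T>
constexpr T& get_leaf(leaf<I, T>& l) { return l.value; }

template <std::size_t I, class... Ts>
constexpr auto& get(flat_tuple<Ts...>& t) { return get_leaf<I>(t); }

static_assert(std::is_trivially_copyable_v<flat_tuple<int, double>>,
              "flat layout keeps trivial element types trivially copyable");

int main() {
    flat_tuple<int, double> t{{{1}, {2.5}}};
    get<0>(t) += 41;
    return (get<0>(t) == 42 && get<1>(t) == 2.5) ? 0 : 1;
}
```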
I did see that there was an ACCU talk
on how C++20 can simplify the tuple implementation,
but I didn't get that far.
All right. I didn't get down there either.
Yeah. Okay.
And then this last blog post we have is,
std::span should have a converting constructor
from initializer_list.
There's a post on Arthur O'Dwyer's blog.
Jason, you want to tell us about this one?
We haven't talked.
I mean, Arthur was publishing an article like twice a day for a while there, right?
Yeah.
We haven't put one of his up here in a while.
Yeah.
I mean, it's just the fact that if you have a function that takes a standard span, you can't pass an initializer list to it. It's pretty much just that simple. And so he's making the argument for a converting constructor that would allow you to do that easily. Certainly seems like that would be a good addition. Makes sense.
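The problem boils down to a two-line example; this is a minimal sketch, assuming a C++20 compiler (std::span has no initializer_list constructor as of C++20):

```cpp
#include <span>

// A function taking the "parameter-only" view type.
int sum(std::span<const int> values) {
    int total = 0;
    for (int v : values) total += v;
    return total;
}

int main() {
    int arr[] = {1, 2, 3};
    sum(arr);           // fine: span converts from an array
    // sum({1, 2, 3});  // does not compile: no initializer_list ctor
    return 0;
}
```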
The other thing that kind of jumped out at me, though, is he says, when string_view was adopted as the parameter-only replacement for const string&, we suffered through a few years of people worrying about the proverbial newbie writing dangling-reference bugs like this.
And he gives an example.
And it's interesting that in Arthur's mind, this is just, oh, that's a solved problem.
We just train programmers to not use StringViews as local objects.
But I would say there's a considerable number of people that don't have that mindset.
And I've just had discussions about this on Twitter a couple weeks ago.
I don't mean to out myself as one of those people with those mindsets,
but in our domain, we do things like returning blocks of matrices
all the time. Enforcing this parameter-only design constraint is a little, I don't know how I feel about it. I feel like there should be something that's supported about it in terms of, like, lifetime extensions or whatnot.
I mean, I totally agree with you. I was someone that was saying, wait, what do you mean you're not allowed to have a local string_view object? Because if you were passed a string_view object and then you want to do, like, parsing over that and shrink it and grow it and do whatever, take some subsection of that and return it back out,
you're going to have local string_view variables.
No question.
So I agree with you just for the record.
It's one reason I found it interesting that Arthur thought this was a completely solved problem.
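For context, the dangling bug the post alludes to, and the local-variable pattern defended here, in a minimal sketch:

```cpp
#include <string>
#include <string_view>

std::string make_name() { return "temporary"; }

int main() {
    // The proverbial newbie bug: the temporary string dies at the end
    // of this statement, so sv dangles immediately.
    std::string_view sv = make_name();
    (void)sv;  // any real use here would be undefined behavior

    // The perfectly reasonable local-variable pattern: a view into a
    // string that outlives it, shrunk during parsing.
    std::string line = "key=value";
    std::string_view view = line;
    view.remove_prefix(view.find('=') + 1);  // view is now "value"
    return view == "value" ? 0 : 1;
}
```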
OK, well, Brandon and Billy, we've had some episodes a long time ago, I think, about robotics.
It's definitely been quite a while.
I don't think we've ever specifically talked about like UAS and drones.
So could one of you just start off by telling us a little bit about what you work on at Exyn?
Sure.
So at the highest level, a lot of the
problems are going to be pretty common across all robotic systems. So the SLAM, simultaneous
localization mapping that we already referred to, motion planning and motion control. But from there,
there's a couple of interesting things about our company and the aerial system that presents some
really interesting challenges. So the first of these is that we decided early on
to provide a fully infrastructure-free solution.
So this means that we don't rely on GPS
to help us solve that SLAM problem.
It means that we don't rely on communications
back to a base station to...
So, for example, other systems might rely on getting
kind of continual operator inputs
to help control where the system is going,
or they might offload some computing from a small aerial vehicle, try to offload that to the base station, so that you can solve really math and computation intensive problems. Right? So because we decided to be fully infrastructure-free, we have to be able to do all these things on the system, which has to be lightweight, which means it has small compute. So if you kind of take those two constraints, now you can start to get
into some of the unique and interesting problems that we face. So probably one of the best ones,
I'll go back to the operator input here. If we're sending our system around the corner where you
can't see what it's seeing at the time, but you need it to kind of come back to you with a... Oh, so I should back up a little bit.
Our main use case is to provide maps of environments that operators can't reach, can't access.
So here I'm now getting back to the problem.
So Billy has done a lot of really interesting work here in terms of exploration.
So you send this robot into a new environment that has no prior map of whatsoever.
How can it efficiently and optimally explore that environment, deciding where to go,
when, and come back to you with a full coverage map of that space?
Right. So you sent us some videos right before the show, and I did watch some of those,
like videos of this drone flying around in a cave system or mine shaft or something like that,
exploring it and being able to provide mapping as it goes.
What are some of the real-world use cases where you're using drones to do that sort of thing?
So that mining use case is our main revenue driver at this point.
So we have a lot of mining customers that are using our system right now in the field underground.
The first use case that comes up is, these mines will blast areas of the mine, at which point these are no longer areas that can be safely accessed by humans. But they still need to know,
okay, well, what does this environment look like after I've blasted? So the purpose of our system
is they can kind of set up from a safe distance away,
put the robot down. We'll talk a little bit more about this, but effectively point it at this area
that they need to survey, hit the go button on our system. It'll fly. Usually in this case, it's a pretty short flight, maybe three to five minutes. It'll fly and explore and map that space and come back with a very high resolution, very accurate point cloud map. So that'll give you the ability to visualize and fully perceive what the environment looks like.
Just out of curiosity, is that where the LiDAR comes in at that point? I saw you mentioned that
in your bio. Yes. So the LiDAR is our primary sensing pipeline. Obviously, we also use an
inertial measurement unit as well to complement the localization pipeline.
Interesting.
The LiDAR generates the map for us.
So you use gyroscopes or something for your inertial measurement?
Is that what you're talking about?
Yep.
So accelerometers and gyroscopes are the sensors there.
Okay.
Now I'm going to completely go off on a tangent here, but are you using, like, the tiny, like, solid-state kinds of things, or more like higher end?
Yes, solid-state things.
And those provide enough accuracy to give you some awareness of your movement?
So all it's got to do is, it's only got to give you enough accuracy for a little bit of time, right? Because the LiDAR is scanning,
in our case at like 3-ish
hertz or something. So
3 hertz, the robot knows
fairly well how much it's moved
since the last time it scanned. So
all the IMU has to do is predict the
movement for that short
300 milliseconds of time.
And it turns out it's pretty good
at doing that,
especially with the caliber of IMUs we tend to use.
So to add a little bit there,
so the reason why you can't just take those 3 hertz LiDAR measurements is because that's not going to give you high-resolution enough state information to feed into our control pipeline.
And another reason is because, for the LiDAR to actually work, you need to use the higher rate information from the IMU to undistort the, so there's a lot of depth you have to try to figure out how to navigate here, but a LiDAR measurement, it's not just like you get a snapshot of the entire world every 300 milliseconds. It's doing a continuous scan.
Oh, I guess, okay.
Yeah, so the vehicle is almost always moving over the course of that time, so you need to use that inertial information to undistort those measurements, so that you can then feed it into the, to accurately register your scans to your map.
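To make the undistortion idea concrete, here is a minimal sketch, assuming per-return timestamps and a queryable IMU-propagated pose; all the types and the pose_at function are invented for illustration, and this is not Exyn's code:

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Sensor pose in the map frame: rotation (row-major 3x3) + translation.
struct Pose {
    double R[3][3];
    Vec3 t;
    Vec3 apply(const Vec3& p) const {
        return { R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + t.x,
                 R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + t.y,
                 R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + t.z };
    }
};

struct RawReturn {
    Vec3 point;    // in the sensor frame at the instant it was measured
    double stamp;  // seconds, somewhere within the ~300 ms sweep
};

// Stand-in for IMU propagation: a real system integrates gyro/accel
// measurements to estimate the pose at time t; identity here.
Pose pose_at(double /*t*/) {
    return Pose{{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {0, 0, 0}};
}

// Transform every return into the common map frame using the pose at
// its own timestamp, so motion during the sweep no longer smears the
// cloud.
std::vector<Vec3> undistort(const std::vector<RawReturn>& sweep) {
    std::vector<Vec3> out;
    out.reserve(sweep.size());
    for (const RawReturn& r : sweep)
        out.push_back(pose_at(r.stamp).apply(r.point));
    return out;
}

int main() {
    std::vector<RawReturn> sweep{{{1, 0, 0}, 0.0}, {{0, 1, 0}, 0.15}};
    return undistort(sweep).size() == 2 ? 0 : 1;
}
```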
Okay, so if you could actually like,
in a theoretical world,
if you could snapshot at each location,
then you could just line up the point clouds and carry on with your life.
Exactly.
But you can't.
Unfortunately.
Well, I guess maybe it is fortunate because that gives us a job.
Right.
I've actually bought one of the tiny I2C 9-axis sensors with solid-state gyroscope, accelerometer, whatever, and I've yet to actually play with it. So this is where my curiosity comes from there.
Sure. It might be worth describing for our listeners what LiDAR is and how exactly this helps you in the situation as well.
I guess I'll take this one. So it's, I always forget the, it's laser, help me, Billy.
Laser. Man, I actually don't know what it stands for. It's like laser range sensor. That's what it is. Oh, man, this is embarrassing.
Light detection and ranging is what I see on Google.
Light detection. I forgot the middle.
Okay, so there's lots of different LiDAR packages, but essentially all these things are doing is using a laser to get a very precise range measurement. So we're talking, for the sensor we're using, it's about a three centimeter standard deviation on a laser measurement, ranging anywhere from one meter to 100 meters out. And the particular LiDAR we use is very popular in the autonomous vehicle space, so it's used in a lot of autonomous cars. And the simplest way to describe it is 16 laser beams on an axis that are spinning around. So it's taking a 360 degree field of view, with 16 beams vertically ranging from minus 15 degrees to plus 15 degrees. Over the course of a pretty short time window, it's giving you what ends up being 16 times 900, so about 14,000 measurements over that cylinder. So you can imagine that gives you pretty dense point information to work with, to feed into that SLAM pipeline that we've talked about before.
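As a back-of-the-envelope sketch of that geometry (frame conventions and numbers here are illustrative, not the sensor vendor's actual driver code):

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

struct Point { double x, y, z; };

// One laser return (range, azimuth, beam elevation) -> Cartesian point
// in the sensor frame.
Point to_point(double range_m, double azimuth_rad, double elevation_rad) {
    const double horiz = range_m * std::cos(elevation_rad);
    return { horiz * std::cos(azimuth_rad),
             horiz * std::sin(azimuth_rad),
             range_m * std::sin(elevation_rad) };
}

int main() {
    // 16 beams (-15 to +15 degrees) and roughly 900 azimuth steps per
    // revolution gives the quoted 16 * 900 = ~14,400 returns.
    const int beams = 16, steps = 900;
    std::printf("returns per revolution: %d\n", beams * steps);

    Point p = to_point(10.0, 0.0, 15.0 * kPi / 180.0);
    std::printf("10 m return on the +15 degree beam: z = %.2f m\n", p.z);
    return 0;
}
```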
Okay. So, you know, you talked a little bit about what makes your platform different
from some of the other platforms, how you're doing absolutely everything on the drone.
So do you want to tell us a little bit about those constraints in more detail? Like, how much compute can you fit on these drones?
So this is the point we're trying to improve, actually. The compute platform we have on there right now is, it's basically an Intel NUC, if you're familiar with that. It's approximately equivalent to a laptop. It's a little bit beefier than that. So, I mean, one of the critical constraints of this is it's a CPU-only platform. So we're pretty much running all of our algorithms, all of our processing, all of our drivers on a CPU. And, oh, man, how much does the NUC itself weigh? It's like 250 grams or something like that, out of a total max payload capacity for the platforms we fly of like 1.2 kilograms. So already compute is using up a lot of the payload. It uses up a lot of the battery. Like, the compute itself doesn't use up the battery, it's the weight of the compute that uses the battery, which is kind of interesting when you think
about it. So the compute constraints,
it means that we're basically limited to, what, four cores plus hyperthreading
processing power, 32 gigs of RAM, which is
actually quite a bit when you think about it. We actually don't currently run
up against the
compute constraints
as much until you want
to start investigating algorithms that
run better on GPUs, which is
something that we really want to extend
to. I know there's a lot of popular platforms for that, for the, what's it called, it's NVIDIA's thing.
Jetson. TX2.
Yeah, the Jetson TX2.
Yeah, the TX2 and the Xavier,
those platforms.
Those are little tiny embedded GPU core things, right?
Yep.
Relatively speaking, yeah.
They're actually about the same size as the NUC.
Oh, okay.
At least the Xavier, the TX2 is actually quite a bit smaller.
But yeah, it's those compute
and those are the compute constraints we run up against.
Another major constraint, too, is battery life.
The drone has to carry its own power source,
which limits it to like 20, 25 minutes of flight time.
So you've got to get all that stuff done in 25 minutes
from takeoff to landing.
Just make a hybrid model with a gas-powered charger on board. Consider it. Has anyone ever actually done that with, like, a small gas-powered RC motor or something like that? Is that actually a thing? Or, you know, does that sound crazy?
Well, even more, because I remember from a few years ago there were some videos of
some people playing around with jet engines.
So our quad rotor, right? That's our system.
I think some people put jet engines instead of each of the four props.
Oh, yeah.
An interesting approach, let's say.
Probably doesn't have quite as fast of feedback as you expect from a brushless motor.
Slow response time for sure.
Right. But, I'm sorry, so going back to what you were asking, Rob, about the compute constraints. So actually, this is the
area that Billy kind of specializes in. Probably the part of our stack that's been optimized the most to run efficiently is the motion planning part of our software stack. So I'll
just give a little bit of a high level context and Billy can kind of
jump into some of what he's done,
but you can imagine for an aerial system we're exploring in three dimensional
space,
as opposed to many ground robots that might just be planning over a two
dimensional surface.
So that means that we actually have to keep track of a much larger searchable
map space.
And then you can,
you can also imagine that searching for optimal paths or optimal routes
through that three-dimensional space can be very computationally costly. So again, I'll pass over
to Billy here. So he's kind of the subject matter expert for our grid or cell-based map representation
and then also for the search algorithms that we run over that map.
Yeah, so mapping, planning, this kind of stuff,
it's a strange fusion between trying to optimize the code as much as possible
and also trying to make it do as little work as possible.
So we tend to take this hierarchical approach to the problem
where you sort of think about it like, as you're driving from city to city, you think about the highways that you're driving along, right? You don't think about, like, tree A and tree B and car A that you have to avoid along the way. Like, there's this notion in our planning stack of coarse global planning. It does so over a very simplified representation of the world. So this is what I mean about, like, reduce the work that you do. And then you also, in addition, have to solve the problem of, okay, I have to avoid things densely, right? I have to dodge tree A at some point. So cool.
So this is where the second... Sorry?
I said it's more like skiing than driving.
I'm just...
Sorry.
Actually, we need to factor in dynamics to an extent.
So this local level of planning actually goes beyond the 3D realm too because it's also planning in terms of velocities and the momentum of the vehicle.
It knows that it can't stop on a dime, for example.
So when it's trying to plan, when it's trying to maintain the flight speeds
that we advertise we maintain,
it has to know that it can't turn sharply. It has to slow
down or it has to go into a banked turn
to actually enter corridors at speeds and stuff like that.
So that's a lot of, I'll be honest,
it's a lot of like magic secret sauce
once you get too far below that.
But it's really, I got to be honest,
in terms of optimizing this thing,
it's all about making the algorithm do less work
rather than making the code run faster.
If there's a distinction that can be made there.
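To illustrate "make the algorithm do less work" in miniature: plan long-range routes over a coarsened grid and leave the dense dodging to a local planner. This is a 2D toy using BFS for brevity; the real problem is 3D with dynamics, and none of this is Exyn's planner:

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct Grid {
    int w, h;
    std::vector<std::uint8_t> occ;  // 1 = occupied, row-major
    std::uint8_t at(int x, int y) const { return occ[y * w + x]; }
};

// Coarsen by factor k: a coarse cell is occupied if any fine cell in
// its k*k block is. Long-range search now touches k*k fewer cells.
Grid coarsen(const Grid& g, int k) {
    Grid c{g.w / k, g.h / k, {}};
    c.occ.assign(static_cast<std::size_t>(c.w) * c.h, 0);
    for (int y = 0; y < c.h * k; ++y)
        for (int x = 0; x < c.w * k; ++x)
            if (g.at(x, y)) c.occ[(y / k) * c.w + (x / k)] = 1;
    return c;
}

// Shortest path length on the coarse grid (BFS); a local planner then
// handles the dense "dodge tree A" part along this route.
int coarse_path_cost(const Grid& c, int sx, int sy, int gx, int gy) {
    std::vector<int> dist(static_cast<std::size_t>(c.w) * c.h, -1);
    std::queue<int> q;
    dist[sy * c.w + sx] = 0;
    q.push(sy * c.w + sx);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        const int cur = q.front(); q.pop();
        const int x = cur % c.w, y = cur / c.w;
        if (x == gx && y == gy) return dist[cur];
        for (int i = 0; i < 4; ++i) {
            const int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= c.w || ny >= c.h) continue;
            const int n = ny * c.w + nx;
            if (c.occ[n] || dist[n] >= 0) continue;
            dist[n] = dist[cur] + 1;
            q.push(n);
        }
    }
    return -1;  // unreachable
}

int main() {
    Grid g{8, 8, std::vector<std::uint8_t>(64, 0)};  // all free
    Grid c = coarsen(g, 2);                          // 4x4 coarse grid
    return coarse_path_cost(c, 0, 0, 3, 3) == 6 ? 0 : 1;
}
```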
So I appreciate you don't want to go too much into secret sauce.
But do you have like a pre-computed physics model on board or are you taking everything dynamically in case there's like a draft in the mine shaft or something like that?
This kind of, I can talk about that. This kind of goes down to the fundamentals of control. There's elements of what we call feedforward, where the robot knows what trajectory it's going to be taking, roughly how fast it has to, or, speed's not part of it, how much it has to be accelerating at a certain point in time to actually hit this trajectory. And then there's this element of feedback,
disturbance rejection. When a gust of wind blows, it only has to correct for that little bit off of
the trajectory that it already knows that it's following, right? So that's how, when you sum
those two things together, that's how it reacts to the real-world events.
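A minimal sketch of that feedforward-plus-feedback sum, with illustrative gains and types; this is the textbook structure, not Exyn's controller:

```cpp
struct State { double pos, vel; };

struct Gains { double kp, kd; };

// Commanded acceleration = planned acceleration (feedforward) plus a
// PD correction toward the planned state (disturbance rejection). A
// wind gust shows up as tracking error, and only that error is fought.
double control(const State& actual, const State& planned,
               double planned_accel, const Gains& g) {
    const double feedforward = planned_accel;
    const double feedback = g.kp * (planned.pos - actual.pos)
                          + g.kd * (planned.vel - actual.vel);
    return feedforward + feedback;
}

int main() {
    const Gains g{4.0, 2.0};
    const State plan{1.0, 0.5}, actual{0.9, 0.5};  // blown slightly off course
    const double u = control(actual, plan, 0.2, g);  // 0.2 + 4*0.1 = 0.6
    return u > 0 ? 0 : 1;
}
```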
So, okay, then another thing that I was just thinking about,
like, I'm trying to imagine, like, how constrained and or dangerous
are the environments, dangerous to the life of the drone
that you're flying through.
And, like, do you ever have to take into account and say, like,
well, in this corner, you know, I'm going to be getting, like,
swash from the rotors or something else that'll make it too difficult
for me to make this passage.
So there's a couple of heuristic ways
that we approach this.
So first of all,
you got to start with reasonable assumptions, right?
Like if somebody hops into the environment
and chucks a baseball at you,
there's no way that you're going to react, right?
Right, sure.
So we do make certain assumptions on, like, how static the environment is. I believe we call it the pseudo-static assumption. If a change happens, we see it happen, and it's not that quick. So we also have a piece of the stack at the high level, this is actually part of the exploration. The robot knows, when it's exploring, when it's flying, not to fly into a blind spot.
It actually would be really dangerous for it to do so.
It doesn't know what's in that blind spot. If it's going three meters a second
into what could potentially be a wall, that's a disaster waiting to happen.
So it keeps track of
the areas that it's seen.
It knows the areas, it knows the boundary between the seen and the unseen.
It's always constructing a plan to see that boundary rather than fly into the unknown
directly.
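That seen/unseen boundary idea is the standard frontier-exploration pattern; here is a toy version over a 2D grid, for illustration only, not Exyn's algorithm:

```cpp
#include <vector>

enum class Cell { Free, Occupied, Unknown };

struct Map {
    int w, h;
    std::vector<Cell> cells;  // row-major
    Cell at(int x, int y) const { return cells[y * w + x]; }
};

// "Frontier" cells are free cells adjacent to unknown ones: the
// boundary between the seen and the unseen. Planning to observe these,
// instead of flying into unknown space, keeps the robot out of blind
// spots.
std::vector<int> frontiers(const Map& m) {
    std::vector<int> out;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int y = 0; y < m.h; ++y)
        for (int x = 0; x < m.w; ++x) {
            if (m.at(x, y) != Cell::Free) continue;
            for (int i = 0; i < 4; ++i) {
                const int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= m.w || ny >= m.h) continue;
                if (m.at(nx, ny) == Cell::Unknown) {
                    out.push_back(y * m.w + x);
                    break;
                }
            }
        }
    return out;
}

int main() {
    // 3x1 map: free | free | unknown -> only the middle cell is a frontier.
    Map m{3, 1, {Cell::Free, Cell::Free, Cell::Unknown}};
    return frontiers(m).size() == 1 ? 0 : 1;
}
```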
How many, uh, how many of these drones did you lose during the development of the
algorithm?
Um, so this is actually an interesting
point.
More than I'm comfortable stating,
however, less
than if we had a simulator.
Right? So
a ton of algorithm
development happens in simulation before
anyone reaches a piece of hardware.
So we've already
sussed out a majority of the things that can cause a crash
by the time we leave the simulation framework.
So you have a full simulation framework that you could, what, time step,
replay, figure out what went wrong, all that stuff?
Cool.
And now you mentioned that logging is just super important for these things.
It's actually one of the most critical frameworks on the device.
It's not just logging terminal
output, but logging sensor information,
communications between
modules and the software stack.
All of that stuff is
logged into this giant
bag of data on
the device that you can play back.
The developers can play
back and see what went wrong or what went right.
Usually it's a lot of what went right.
We also often use the sensor inputs from previous log data to,
if we're making a change to a critical algorithm,
we can actually re,
we can test that change on the log data from previous flights.
In the simulation.
Yep.
So,
so almost everything will be tested in simulation first. Uh, but there,
I mean, there's still points where it's just, you get more information or at least additional
information from testing those algorithms on real world data. So, uh, this is maybe,
maybe an area we'll get into more later in the discussion here, but we do own our entire code base, so you might think that we don't have to worry too much about backwards compatibility. However, since we have a trove of log data, we have to actually keep backwards compatibility in mind, just because we want to make sure that we can continue to test our algorithms on our log data.
How, like, proven and tested is your physics simulation in your simulated environment? Or, maybe more directly, what I'm trying to ask is, do you use your real-world log data to ever refine your physics model for your simulation?
In terms of the quadrotor dynamics that we simulate, they're actually fairly basic, fairly textbook.
Okay.
Because you don't really need to go too much further beyond that to simulate the autonomous behavior that accurately. I'll comment there are some efforts to go deeper into that,
but Justin probably doesn't want me talking about that.
So the other thing that you can refine rather than the dynamics of the system
is the maps that it's simulating within.
And we have done this in the past, where we take a scan of a mine that it's flown in, and fit surfaces to it, and just stick that into the simulation.
And so one of the things Brandon brought up a couple of minutes ago was rerunning software modules on log data. This is
fantastic for open loop,
as we call it, pieces of the system
where it's just like data
in, data out, that kind of thing.
But you do have to modify the simulation
if you want to test closed loop pieces
of the system, like the planning stack.
Okay. The sponsor of this episode of CppCast is the PVS-Studio team.
They developed the PVS Studio Static Code Analyzer.
The tool helps find typos, mistakes, and potential vulnerabilities in code.
The earlier you find an error, the cheaper it is to fix,
and the more reliable your product releases are.
PVS Studio is always evolving in two directions.
The first is new diagnostics.
The second is basic mechanics,
for example, data flow analysis. The PVS-Studio Learns What strlen is All About article describes one of these enhancements. You can find the link in the podcast description.
Such articles allow you to take a peek into the world of static code analysis.
Can we talk a little bit more about the C++ software itself? Like, are you using any popular open source libraries for your software stack?
Oh, for sure. We love open source. So the autonomy stack on its own, I mean, Boost is like a critical piece of it, everything from, like, the graph to the coroutine library.
I think I'm missing some things.
Another couple of popular mathematical libraries
are, like, Eigen, Ceres, OpenCV.
And then there's communication stuff like Protobuf and ZeroMQ
that we make use of.
Wow.
What about the flight control software itself?
Like I've got a friend who does hobbyist drone things
and he's always hacking on his drone
with one of these open source flight control systems.
So to this point,
we've designed it such that the output of our system
is a thrust, roll, pitch, yaw command.
So we own everything above thrust, roll, pitch, yaw.
Okay.
So you can imagine you have some trajectory that you're trying to plan over. Everything between there and outputting the thrust, roll, pitch, yaw is our software. But then we do rely on lower-level flight controllers. So what you'd be referring to there is PX4 or ArduPilot, which are the two most popular open source flight controllers. I mean, they do provide some of that higher-level functionality that we've taken over ourselves, but we do rely on those implementations for the low-level flight control. So that's converting our thrust, roll, pitch, yaw into individual motor commands.
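That last conversion is the textbook "mixer" step; here is a sketch for an X-configuration quadrotor. Sign conventions vary by airframe, and this is the standard idea, not PX4's or ArduPilot's exact code:

```cpp
#include <algorithm>

struct Setpoint { double thrust, roll, pitch, yaw; };

struct Motors { double m[4]; };

// Each motor command is a signed combination of collective thrust and
// the roll/pitch/yaw torque demands.
Motors mix(const Setpoint& s) {
    Motors out{};
    // front-left, front-right, rear-right, rear-left (one convention)
    out.m[0] = s.thrust + s.roll + s.pitch - s.yaw;
    out.m[1] = s.thrust - s.roll + s.pitch + s.yaw;
    out.m[2] = s.thrust - s.roll - s.pitch - s.yaw;
    out.m[3] = s.thrust + s.roll - s.pitch + s.yaw;
    for (double& v : out.m) v = std::clamp(v, 0.0, 1.0);  // saturate
    return out;
}

int main() {
    Motors hover = mix({0.5, 0.0, 0.0, 0.0});  // pure thrust: all equal
    return (hover.m[0] == 0.5 && hover.m[2] == 0.5) ? 0 : 1;
}
```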
So you're just hooking up to the radio pins, the RX pins on the flight control module or whatever and saying, this is the inputs I want to feed into you.
Yeah, you could visualize it like that.
Like the autopilot that we write is controlling the vehicle like a person would.
In terms of your commanding, it's tilt, it's collective thrust.
We do actually command it at a much lower level, like an API, using, like, ArduPilot's or PX4's API.
Okay. So it is actually, I think we are actually sending serial messages to the thing. But, I say, so not, like, literally taking over the electrical connections that would be coming in from a radio module or something. There's a serial interface. I don't actually know how any of this works. I just know that I know people who play with that.
So I guess it's like an add-on that can be put onto many different types of drones.
You're not making the drones yourself?
Yeah, actually, one of the things we sell it as is platform agnostic, right? So I mentioned PX4 and ArduPilot, but we also support some closed source autopilots like DJI's.
Okay. Oh, okay.
I mean, they all tend to accept
the same commands, like most
under-actuated
vehicles like a quadrotor, even
like helicopters or coaxial copters,
they're controlled using the same
inputs. It's like thrust and
rotation.
Right.
Any fixed-wing aircraft or other vehicles that you support?
That's in the realm of much harder.
One of the neat things about quadrotors is that they can stop.
Right.
Fixed-wing is the biggest thing where they fall out of the sky if you stop.
Right.
They can glide.
So, right now, again going back to revenue drivers, the aerial system for mining is our primary product. We are starting to branch into just a mapping-only system. So this is now, instead of being controlled by our software, it's just something that an operator will carry around, or maybe mount on a vehicle, and just basically take only that SLAM part of our system and use that to map out an environment, where it is at least accessible by the person.
And then we are, this is still pretty early stages, but we're also looking at
moving our autonomy onto ground vehicles. And I mean
longer term, we see ourselves as an autonomy software company.
So we want to be able to provide
either our software, or our software in a payload, that can make any vehicle fully autonomous.
Cool. Very cool.
Go ahead, Jason.
Oh, I was just going to say, sorry.
I mean, I just, I do enjoy flying things,
although it's been a little while
since I've actually flown anything.
All my toys are over here.
So I just keep thinking about these things.
But it was like maybe, I don't know, 12 or 14 years ago, I was in the Cave of the Winds National Park.
I think it's Cave of the Winds or Wind Cave National Park up in South Dakota, where they're saying, the park rangers were saying they think that it's the largest cave
system in the world, but they haven't been able to map the whole thing. And I'm like, 14 years ago, we're only a couple years away from someone just being able to fly a drone down this thing and map the whole cave system out. And so I'm curious, like, has anyone approached you about using your tool for scientific applications also?
I know you said that mining is your main driver right now, but, like, scientific research or archaeological mapping or anything like that.
So I have to admit here that Billy was the third employee, basically, and I was the tenth.
So back at that time, we were a lot more customer facing or outward facing in terms of all the different vendors we were taking on.
But we've grown to 60 people over the last four years and have a little less exposure now than I did before.
But I know recently we were, I don't think it was a mine, but there was some sort of like, there was some sort of collapse or safety issue recently somewhere here in the U.S.
And our systems were called in to explore it, so they could basically assess the structural integrity of whatever damage was done.
I don't have the details, again, because it was just something I kind of peripherally saw. As far as direct science applications,
I'd almost bet that we've had some questions come in,
but I can't speak to any specifically.
I'm not sure if you remember anything, Billy.
No, sadly nothing in addition.
That's cool.
Are you able to keep up with the latest C++ or are you using C++ 17 or 20?
Oh, man.
So C++17 all the way. I was so happy when we switched to it for structured bindings. Oh my God. You have no idea how many maps I iterate over. It just made my life so much better. But we definitely have eyes for C++20.
I gotta be honest, there's a lot of package management stuff that makes this upgrade
difficult. So you guys have talked about it before.
We recently switched to Conan as our package
management solution. I'm definitely not one of the more skilled
people we have on that, but it's starting to help us solve the problem.
We're kind of hoping that over time
it's going to reduce the pain of making changes like this.
Going back to the compute platform,
are you running on an operating system
or is it right on the hardware?
How are you running exactly?
It's running on an Ubuntu server edition as a base operating system.
It's good enough.
A stable Linux, good enough.
And long-term LTS.
Right.
Does LTS cause any problems for you for staying up to date with your compilers?
Honestly, we actually want to decouple ourselves from this. This is like part of the package management sort of endeavor that we're going down. It's just, given the pace at which C++ evolves, it's driving me nuts that our version of GCC throws a warning if you don't, so if you're iterating over a map, we're getting a warning for not using the key, and that extra line of code I have to add drives me nuts.
Yeah, they fixed that. When did they fix that?
I do remember hearing that as soon as we're able to upgrade our GCC version, that's resolved.
But I know that our version is still complaining.
Yeah, so for the sake of our listeners,
if you do a structured binding
in an older GCC and don't use
all of the destructured elements,
then you get an unused variable warning.
In a newer GCC, as
long as you use at least
one of the destructured elements,
you don't get that warning anymore.
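The situation in question, for reference, assuming an older GCC, is a map iteration that uses only the value:

```cpp
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts{{"a", 1}, {"b", 2}};
    int total = 0;
    for (const auto& [key, value] : counts) {
        // Older GCC: an unused-variable warning fires on `key` even
        // though `value` is used; the usual silencing line was a cast:
        (void)key;
        total += value;
    }
    return total == 3 ? 0 : 1;
}
```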
Yep.
Yeah, that's awesome.
Interesting tip. Don't mix
structured bindings with lambda captures.
It's not good.
At least in our version of GCC.
I mean, C++20
fixed some of those issues
as well, too. There was a language
standard change involving structured bindings
and lambda captures also.
That's actually fixed in the next version?
Oh, man.
I think that's right.
You're more up on this stuff than we are.
I guess you just lit a fire under our effort
to make sure that we can switch to the next standard.
Now I feel compelled to double-check that,
because that's totally relevant.
So I was curious, you were talking about how important data logging is.
And I don't know, like what the risk of vehicle destruction is.
Like, is it so important that you have to do, like, a flush on every write to the log, kind of thing, just in case, like, the device blows up
and you hope to recover the data off of it or what?
So this is a good subject to bring up.
So I'll say off the bat that this is an area we need to improve on.
But we have gone as far as even just messing with the operating system settings to even ensure that the OS is flushing from RAM into our SSD as frequently as possible.
Because in earlier versions where we weren't doing that, we could see up to, I think the default setting actually on our Ubuntu server is like 30 seconds.
So you can imagine if you're troubleshooting incidents and you've lost the last 30 seconds of that log,
that it's pretty difficult to find out what's happening.
As far as the actual log recorder, Billy, can you comment if we're taking any,
in other words, at the software level aside from the OS level, if we're doing anything?
In terms of using, oh man, what's that flag, O_DIRECT or something like that,
I don't think we'd go to that level.
I think there was like, there's a realization we came to with like Linux and solid state
drives and the interaction there.
There's just like this, there's this bit of caching you just can't do away with for whatever
reason.
I think it's like on the SSD itself.
I was never able to narrow it down before I had to move on to something else.
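For reference, a POSIX-level version of "flush on every write" looks roughly like this; the file path and record format are made up, and whether the durability is worth the throughput cost is exactly the trade-off being discussed:

```cpp
#include <fcntl.h>
#include <unistd.h>

#include <cstring>

// Append one record and force it down to the device, so a crash loses
// at most the record in flight rather than ~30 s of page-cached data.
bool append_record(int fd, const void* data, std::size_t len) {
    const char* p = static_cast<const char*>(data);
    while (len > 0) {
        const ssize_t n = ::write(fd, p, len);
        if (n < 0) return false;
        p += n;
        len -= static_cast<std::size_t>(n);
    }
    return ::fdatasync(fd) == 0;  // flush file data (not metadata) to disk
}

int main() {
    const int fd = ::open("/tmp/flight.log",
                          O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return 1;
    const char msg[] = "imu: accel = [0.0, 0.0, -9.8]\n";
    const bool ok = append_record(fd, msg, std::strlen(msg));
    ::close(fd);
    return ok ? 0 : 1;
}
```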
This is kind of exposing sort of your question before, Rob,
about moving from operating, running on an OS to running on kind of bare metal.
This is definitely an area as we, I mean, we are still a startup.
60 people is a lot more than we started with, but still not too big of a company.
So, I mean, as we kind of hire more people and mature our product,
this would be definitely something we'd be looking at.
Right.
It makes me think about IncludeOS,
which unfortunately I think has been a canceled project now.
Remember that one, Rob?
Yeah, I do.
I didn't realize that was canceled, though.
That's too bad.
The look on Billy's face is that he does not know what IncludeOS is.
Is that accurate?
Maybe read it on Reddit a while ago,
but elaborate.
A C++ operating system kernel
that you can literally pound include
the operating system, and then when you're
done compiling, you have a standalone
bootable binary
that has your C++
thing in it.
That was a very interesting project.
But yeah, I think they stopped development
on it a couple of years ago now.
I could be wrong.
I kind of feel like the containerization
craze may have made it a little bit obsolete.
Maybe.
Okay, Lambda capture and storage class
specifiers of structured bindings,
I believe is the feature from C++20 that you want,
which is supported in GCC 10. There you go. That's nice. My life just got so much better.
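In miniature, the change that matters here (reference capture of a structured binding is P1381, accepted for C++20 and listed as implemented in GCC 10):

```cpp
#include <utility>

int main() {
    auto [a, b] = std::pair{1, 2};
    // Ill-formed on older compilers; OK with C++20's P1381 support:
    auto sum = [&a, &b] { return a + b; };
    return sum() == 3 ? 0 : 1;
}
```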
I was wondering, between all the logging you're doing and just generating these point clouds,
how much disk space does that wind up taking over your 20, 25 minutes
of flight time?
Oh, that's interesting.
A full 25-minute log takes about 20 gigs.
It's like a gigabyte per minute, I believe.
Something around there.
It's a little unfair, though, because, I mean, right now, this isn't a limiting factor
for us, right?
Our customers are okay with kind of logging a lot of information for us, and we're intentionally logging as much as possible, again, given kind of where we are in our product life cycle. There are definitely things that we can choose to turn off from a logging perspective, or you can log just the most compressed versions of different parts of our data streams to reduce that size. And there actually is an open software ticket, just related to some of the customer experience things, to reduce that file size, so it's quicker to transfer information off the robot and get our map right from the robot into their mine management software as fast as possible.
So there is a ticket I've been bugged about to act on.
But yeah, I mean, Billy's right.
As of now, we really are logging a lot of data.
Just use like H.264 or something to compress it, right?
That should be fine.
For our video streams, maybe.
Actually, we do compress our video streams like that.
I will comment, though, that the gigabyte per minute thing
is to generate a log file that's useful for developers.
Okay.
The customer-facing one, the raw sensor stream,
it's like less than, it's a lot lower than that.
I'm not going to quote a number right now.
Sure.
But that's, you're talking like the LiDAR mapping data
that the customer's going to care about.
Right.
Interesting.
I mean, as an example, I mean, this may be a little too detailed,
but the raw data for the point clouds
is pretty well compressed.
But then we actually store
our software level representation
of the point cloud
because it just makes it much easier to,
if someone wants to kind of play back and visualize,
even the customer, let's say,
would play back the log
and visualize the cloud from that flight
so they can kind of remember which flight it was,
or maybe they were just looking for some real-time information on what the robot just observed.
So we include that uncompressed, well, not as compressed version of the point cloud in the log data.
Visualize the point cloud as it's being created, you're saying?
Or, in other words, right after. So you might do a flight, and, okay, two things. So I did mention before that we don't rely on the base station, but there is a tablet that the operator uses to control the system. And as long as there is wireless connectivity, they're able to observe what the robot's seeing in real time.
So they can see the point cloud that way.
But then what I was also getting at is,
okay, so let's say the robot did a five, ten minute flight and comes back.
You can, via our log interface on the tablet,
you can actually just go back and play back that log
and see real time what the robot observed.
Neat.
And you said you capture video also?
Yes, so we have versions of our product that have a first-person video camera on it.
For our underground systems, we have to have a lot of active lighting
so you can actually get anything out of that.
So in addition to the LiDAR point cloud kind of thing, you said some versions have a camera.
Do you try to capture other color data or environmental data that might be useful to the client?
So this is something that's that's discussed a lot.
I'll admit that right now the point cloud map is the main thing.
Again, the first-person video as well.
That LiDAR map also gives you intensity information,
which is useful in some contexts.
So that tells you what was the relative intensity of the reflection.
And you can often see things about the environment
using that intensity information.
Is that how like aerial LiDAR that sees through trees,
does it rely on that
kind of intensity information this is a little different this is more about the um the reflective
properties of the actual surface that's being illuminated uh okay i guess there is a little
bit of applicability there um okay particularly like determine materials that type of thing
uh yeah so they're okay so like uh i mean colors right like a white wall
versus a black wall there's some information to be gained there um okay but uh more interesting
than that we do have a lot of customers that have asked about other types of sensors so um
we haven't made too many inroads in this yet but uh we spoke with some companies that that manage
nuclear facilities and um i forget what, a dosimeter, right?
I think that's the name of the sensor that detects levels of radioactivity.
So having, in addition to generating a structural or point cloud type map,
you would also envision having localized readings of radioactivity in the site being managed.
Radiation map.
Yep, exactly.
And gas sensors for underground mines has been mentioned as well.
So this is something we're looking at and trying to kind of expand our offering.
But I mean, even right now, there's still so many improvements to be made just in terms
of the base mapping product and so much interest from the customers that that's been our primary
area of focus.
If you don't mind, I'd like to ask one other question about your data logging. Does it ever become a problem, like a bottleneck in the system, where you're like, we have to figure out, we have to spin off a thread to write these data logs or whatever, because it's causing a bottleneck here?
So the way that's actually, interjected, not interjected,
like the way that it works in the system is,
I briefly touched on this before,
but the system is organized like a modular architecture
with like messages flying in between each module.
So the logger itself is just another module
that subscribes to everything in its own process
and is only responsible for just like dumping that to disk.
Oh, okay.
So you've already offloaded that to another process, I see.
As much as we can.
It doesn't use all that much CPU, to be honest.
Okay.
Okay.
So that's actually, at least in terms of kind of software design, software architecture, that's an area that we haven't really got into.
But we do have our own, and I'll admit that we've debated the merits of this decision a little bit internally, but we have our own internal software framework, for our execution framework, configuration management system, message passing. So that's something that we've
put a lot of development effort into, and it is a pretty decent
abstraction. So obviously we're using this framework for our robotics purposes,
but it does provide a pretty general use modular execution framework.
So you can configure a series of software modules that,
so you could run like a couple software modules that might be passing
messages on one process and then have another process that's just running a single module.
And you can kind of, again, via pretty general abstraction,
configure a set of modules to run it in a plugin-style architecture.
And you were asking before about software libraries.
So our message passing system is built on
ZeroMQ. And we're using Google Protobuf for our message definitions.
So we are leveraging open source libraries that we're building on, but it is our own
message passing framework that supports publish, subscribe, request, reply.
We also have broadcast replies built into that request reply framework. So we are solving
some, even though we are a robotics company, we definitely have some more traditional software
problems that we have to solve because of the fact that we're maintaining this execution
framework. I'd like to add that in addition to like the two libraries you mentioned,
ASIO from Boost, you can imagine with messages flying around,
request replies flying around,
that asynchronous programming paradigms
are super important to us.
So we make heavy use of ASIO and coroutines
and stuff like that.
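A stripped-down picture of the "logger is just another module" idea from earlier, using the cppzmq wrapper over the ZeroMQ library they mention; the endpoint, framing, and file name are invented for illustration, not Exyn's framework:

```cpp
#include <zmq.hpp>

#include <cstdint>
#include <fstream>

int main() {
    zmq::context_t ctx;
    zmq::socket_t sub(ctx, zmq::socket_type::sub);
    sub.connect("tcp://127.0.0.1:5555");   // where the modules publish
    sub.set(zmq::sockopt::subscribe, "");  // subscribe to every topic

    std::ofstream bag("flight.bag", std::ios::binary | std::ios::app);
    while (bag) {
        zmq::message_t msg;
        if (!sub.recv(msg, zmq::recv_flags::none))
            continue;  // interrupted; try again
        // Length-prefixed framing so the bag can be replayed later.
        const auto len = static_cast<std::uint32_t>(msg.size());
        bag.write(reinterpret_cast<const char*>(&len), sizeof len);
        bag.write(static_cast<const char*>(msg.data()), msg.size());
        bag.flush();  // keep the on-disk log close to real time
    }
}
```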
Have you done any research
into what those coroutines would look like
if you could move to C++20's coroutines.
Just the other day he was, I think, right?
Yeah, on Monday.
Yeah, that was a weekend project for me.
I got to say, C++20 coroutines, they're great,
although a bit cryptic to get into.
Yeah, the library feature, excuse me,
the language feature was shipped without a library component to help.
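A taste of what "cryptic" means in practice: even a trivial generator needs a hand-written promise type, since C++20 shipped the language feature without a std::generator (that arrives later). A minimal sketch:

```cpp
#include <coroutine>

template <class T>
struct generator {
    // The compiler looks for this nested type and calls these hooks.
    struct promise_type {
        T current;
        generator get_return_object() {
            return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T v) { current = v; return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };

    std::coroutine_handle<promise_type> h;
    explicit generator(std::coroutine_handle<promise_type> h) : h(h) {}
    generator(const generator&) = delete;
    ~generator() { if (h) h.destroy(); }

    bool next() { h.resume(); return !h.done(); }
    T value() const { return h.promise().current; }
};

generator<int> iota(int n) {
    for (int i = 0; i < n; ++i) co_yield i;
}

int main() {
    auto g = iota(3);
    int sum = 0;
    while (g.next()) sum += g.value();
    return sum == 3 ? 0 : 1;  // 0 + 1 + 2
}
```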
So I did see that there was a CppCon talk, From Problem to Coroutine: Reducing IO Latency. I haven't gotten as far into C++20 coroutines as Billy has.
I haven't got as far into C++20 coroutines as Billy has.
I felt guilty after he brought it up
and started doing a little bit of reading.
But at least the abstract from that talk sounded pretty compelling.
It does.
It sounds exactly applicable to what you're doing.
All right.
Well, I think we're running a little low on time.
Is there anything else you want to tell our listeners about
before we let you go?
And is Exyn looking to hire more C++ developers?
Yeah.
So I mentioned that we've grown a lot.
We're still growing. We're
expecting, again, to kind of double in size
over the next year. Wow.
We hope for a lot of those. Billy and I
definitely are hoping for a lot of those hires to be
in the software engineering space.
And actually, so right now we're specifically looking for more software-engineering-focused C++ developers.
So one of the questions actually that we had discussed is on our robot,
everything running on the robot is C++.
So we did recently introduce a web UI front end just to start and stop the
captures for that mapping-only system that I talked about.
But other than that, everything running directly on the robot is C++.
So going back to what I was saying, we're specifically right now trying to hire a software engineer
to introduce a little bit more of a good software engineering focus
into some of the robotics development that we're doing.
I would like to pluralize your statement, Brandon.
We're looking to hire software engineers.
So maybe we can provide a link.
Yes, we'll provide that in the show notes.
Okay.
I think it's pretty easy to get to just from exyn.com.
Join us.
And where is the company located?
And are you interested in hiring remote developers as well?
So we're located in Philly, in Philadelphia.
We do have some remote developers already.
It depends a little bit on the role. So one of the cool things about our job, actually, is we have a flight space right in our office. So we have developers working in some areas, some developers, like, working right on the side of that flight space. So you'll have drones zipping back and forth in the flight space, protected by nets, but zipping back and forth. So depending on
kind of how hands-on the particular role might be,
then there might be a preference for being on site.
But there are definitely lots of roles where remote is also possible.
Do either one of you have one of these drones in arm's reach
so that you could just, for the viewers who watch this later,
be enticed to come play with your toys?
Sadly, we are both working from home right now.
Do not have one at home,
but we can make sure to get something in the show notes.
Okay. Well, Brandon and Billy,
it was great talking to you both today. Thanks for coming on the show.
Thank you guys. It was great to meet you.
Thanks so much for listening in
as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to
feedback at cppcast.com.
We'd also appreciate if you can like
CppCast on Facebook and follow
CppCast on Twitter.
You can also follow me @robwirving
and Jason @lefticus on Twitter.
We'd also like to thank all our patrons who help support the show through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode was provided by podcastthemes.com.