Programming Throwdown - Building a Robotics Software Platform with Abhay Venkatesh
Episode Date: August 23, 2021

You've seen the dancing Boston Dynamics dogs, Honda's ASIMO greeting people at malls, and the half-court-shooting robot at the Olympics, among other awe-inspiring robot stories that nowadays are getting increasingly more common. But equally fascinating, especially for us programmers, is the amount of programming and structure needed to make sure these robots work as intended. In this episode, we talk with Abhay Venkatesh, Software Engineer at Anduril Industries, about Platforms for Robotics (PFRs), and the intricacies happening inside these mechanical wonders.

This episode touches on the following key topics and ideas:

00:00:24 Introduction
00:01:10 Introducing Abhay Venkatesh
00:03:00 What robotics is as a field or practice
00:07:18 Platform for Robotics (PFRs)
00:10:07 OODA loop
00:12:27 What makes up a Platform for Robotics?
00:14:17 Raspberry Pi
00:15:30 Nvidia Tegra
00:17:17 Edge computing
00:19:29 Telemetry
00:22:06 Ad: SignalWire, a next-gen video collaboration platform
00:23:30 Real-time constraints and safety challenges
00:28:31 Formal verification and defensive programming
00:32:28 Operating systems in robotics
00:34:27 Nix and reproducible hermetic builds
00:37:52 Key aspects in robotics software development
00:41:14 Deployment
00:46:24 Simulation
00:48:51 Google testing pyramid
00:52:01 Actuators
00:55:27 Future of PFRs
01:02:49 Farewells

Resources mentioned in this episode:

Companies
Anduril Industries: https://www.anduril.com/
Nvidia: https://www.nvidia.com/en-us/
Boston Dynamics: https://www.bostondynamics.com/

Tools
Arduino: https://www.arduino.cc/
Raspberry Pi: https://www.raspberrypi.org/
Nvidia Tegra: https://developer.nvidia.com/tegra-development
NixOS: https://nixos.org/
Docker: https://www.docker.com/
Bazel: https://bazel.build/

Our sponsor for this episode is SignalWire: https://signalwire.com/
Use code THROWDOWN for $25 in developer credit.

Abhay's website: https://abhayvenkatesh.com/
Abhay on Twitter: https://twitter.com/AbhayVenkatesh1

If you've enjoyed this episode, you can listen to more on Programming Throwdown's website: https://www.programmingthrowdown.com/
Reach out to us via email: programmingthrowdown@gmail.com
You can also follow Programming Throwdown on Facebook | Apple Podcasts | Spotify | Player.FM
Join the discussion on our Discord
Help support Programming Throwdown through our Patreon
Transcript
Hey everybody, we're here with another
exciting interview. Today, we have Abhay, who's a software engineer at Anduril Industries. Go ahead
and say hello to us, Abhay, and tell us a little bit about your current role there. Yeah, thanks
for having me on the podcast, Jason and Patrick. I'm Abhay. I'm a software engineer at Anduril
Industries, and I work on platform infrastructure,
mainly focusing on aspects of simulation, deployment, and so forth. Before that,
I did a lot of work in our autonomy area of expertise and did some foundational work there,
and also have a background in perception and machine learning.
You said so many buzzwords already right up front. You know,
that gets a lot of people very excited. I mean, I think there's a couple topics that I think are very
motivational to people early on when they're getting into programming, or at least they were
for me. Maybe now people are excited by like, you know, websites and web apps and stuff. But for me,
it was always building games and building robots. Right? Yeah, I guess to the point of buzzwords, I often like to say we do everything but blockchain
at Anduril.
So we really do have that diversity of expertise.
And certainly robotics is one of the areas that we are particularly excited about.
I'm particularly excited about.
And yeah, you know, the first time I started working on robotics was at Stanford, where we had this fun project of, well, having this sort of chair bot. The problem space was to study how humans interact with various robotic furniture and how you can sort of imagine those kinds of interactions. And I sort of worked on building this chair robot that was literally, you know, a chair with a Roomba attached to it.
And you could control it using an iPad and try to like run your experiments that way.
So yeah, it's a fun space.
I mean, my first thought is like you're programming the chair to move out from underneath someone
when they try to sit on it as a prank.
But I assume that's not what you were doing.
Yeah, I cannot deny that that never happened. But certainly, it was a lot around what an acceptable approach is. What are the sort of acceptable interactions that would be socially pleasing or, you know, not aggressive? Thinking about how robots should interact with humans in general and trying to build a design model around that
was sort of the research goal. And yeah, so at least that was my foray into robotics and
certainly have taken it a lot more places from there. Very cool. So for a second here,
I think most people kind of know the word robot or even know
of robots, right? So you mentioned like Roomba, people think of sort of like androids. So I mean,
maybe in like your mind, not necessarily like a definition. I mean, maybe that's kind of too
boring. But like, what is it that makes robotics as like a field or a practice? Like what makes
it different? What makes it robotics? And then for you, like what makes it something
that you're excited to work on?
Yeah, yeah.
I guess I think of robotics in a pretty broad sense.
So usually, like you mentioned, people think of Roomba
or even more in the broader sense,
people think of like some sort of robot,
like C-3PO or R2-D2.
And it's like somehow it has a body and a physical state.
But robotics can be much broader than that.
Really, even just a sensor with a control loop,
meaning that it interacts with the environment
and gains data from the environment.
You could think of that really as a robot.
And in that broad sense, robots are all around us. Even your fridge has
a robotic component to it. And, you know, even your microwave or certainly your phone. So yeah,
that's how I really think about it. And if you think about it in that broad sense, then you can
start making sense of really what robots are and what they do for us and the value of them.
Are there things about programming robots like that, that you found in your work that
are sort of different or unique to, I mean, I guess for me, like hearing you say that
it involves the physical world and sort of interacting with it in some way, measuring it,
affecting it, and having some control loop for that. I mean, is that the essence of what makes
it sort of different? Or do you think there's something more there? Like before you're mentioning your chair
bot and how like there's a human component that once you have something interacting in the world,
you're sort of interacting with humans as well. Like what do you find particularly engaging? Or
is there something that you find particularly engaging about that? Yeah, I guess there are a
few differences. The main thing I would say is that interactive component with the environment.
So you can contrast it against like building a database where it's a lot less effectual
in the sense that you don't necessarily directly interface with the human.
You're usually serving requests for other pieces of software.
Whereas usually when you're making robots, a sensor or
some sort of actuator, it does have that closer control loop or interaction loop with the
environment. And that sort of is, I would say quite different, mostly because interacting with
the real world is really on a higher dimension almost. So when you're building a, let's say a
drone and you have to
make it fly around, it's kind of funny in the beginning, at least because when I was first
writing control loops, you would think that the robot would behave in a fairly straightforward
manner, but it almost never does. And it does take some getting used to in terms of building a robust,
robotic system and making it impervious to all the various
states that could be possible in a complex environment. Oh, impervious. Oh, I like that.
Yeah. I mean, I've done a little bit of hobbyist dabbling. And I think this thing you're pointing
out, right, that it's very different from my day-to-day programming to work on something which
has mechanical failures,
mechanical limits. You know, there's like, you have to think about, it's not just I'm at value
one and I want to be at value two. You have to think about how does a transition between,
I guess we're calling it sort of states. Like I have a servo motor, a servo motor doesn't move
instantaneously. So if I tell it to go to 90 degrees and then to 50 degrees and then to 70
degrees, well, it may still have only just
started moving. So you got to think about sort of like the progression of time and the limits of
the system. Yeah, I also find that very, I don't know, like a different challenge, I guess, than
like you point out working with a database or an application. Yeah, yeah, certainly. I think maybe you're getting at this notion of hysteresis,
which I learned when building the systems
where there's like a lag in the system
and you want to like counteract that.
Yeah, building some sort of like mechanism
for applying that is pretty useful in general.
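To make that concrete, here is a minimal C++ sketch of the servo example Patrick describes: a slew-rate limit, so the code never assumes the actuator has already reached the last setpoint. The numbers (a 5-degree-per-tick limit, the 90/50/70 targets) are invented purely for illustration.

```cpp
#include <algorithm>
#include <cstdio>

// Minimal sketch: a slew-rate-limited command, so the software never assumes
// the servo teleports to the new setpoint. All numbers are made up.
double slewLimit(double commanded, double previous, double maxDegPerStep) {
    double delta = std::clamp(commanded - previous, -maxDegPerStep, maxDegPerStep);
    return previous + delta;
}

int main() {
    double position = 90.0;                // assume the servo starts at 90 degrees
    const double kMaxDegPerStep = 5.0;     // hypothetical actuator limit per control tick
    for (double target : {50.0, 70.0}) {   // the 90 -> 50 -> 70 example from the discussion
        position = slewLimit(target, position, kMaxDegPerStep);
        std::printf("commanded %.0f, actually at %.0f\n", target, position);
    }
}
```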
Yeah, we've never really talked,
I don't think in depth or gone into about control loops.
I don't know that I want to go there now.
That's going to end up being a whole topic unto itself, but you know,
control of something. And then I think you were talking as well about,
you know, sort of sensors and measuring and, you know,
observing in order to have a sort of a control loop.
And I think that's also a really interesting field. You know,
I don't know how much, how much you've gotten into it,
but just sort of even thinking about when you observe the world,
it's pretty noisy because the world itself is
noisy because your sensors have noise and they have limits and, you know, knowing those as well
so that your model of how the world works is accurate to what's actually happening.
Yeah. Yeah. I think that does get into this notion of, I guess, like a platform or a software
platform in which you can have a place to ingest all this
data from sensors and track them. So when you have a robotic system, there's all this kind of
telemetry that you must track and sort of, you know, maybe collect health data from that telemetry
or use the telemetry itself to do some sort of higher-order planning. And that's kind of where, you know, having, like, a broader, let's say, platform that contains tracking systems, data ingestion systems, and so forth can be pretty valuable,
especially when you're, you know, building, say, simpler robotics applications that are
not necessarily very highly complex on, let's say, actuation, but are more complex on the side of making sense of
the kind of information they are producing. Interesting. Oh, there's another like very
densely packed sentence, I guess, or sort of comment. So yeah, what you're saying is, you have sensors, and you're collecting information, then you're, you know, sort of doing some understanding of that information.
And I guess when you were sort of saying a platform or a framework for handling these
things, that there are some parts of it and some flows which are common between applications.
Is that sort of what you're getting at?
Yeah, yeah, absolutely.
So like you can sort of think of a lot of these like sensor platforms, like let's say you have a smart home
or you have some sort of security camera or you even have like a robot or like, let's say a drone
that's flying around. There's a lot of commonalities between these kinds of robots. At the
end of the day, it really is about, you know, getting the information from them, storing it somewhere, performing analytics on it, and then also maybe controlling them and sending, you know, commands to these various
things. So it's kind of like, you can think of it in that sense, which is, yeah, that's sort of the
commonalities, I guess. And you do actually end up seeing a lot of these in many different
application areas these days, from self-driving cars to drone delivery.
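A rough sketch of that commonality in C++. Every name here (InMemoryPlatform, TelemetryRecord, and so on) is hypothetical; the point is just the shape that keeps showing up: ingest telemetry, store it, run some analytics, and send commands back.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the commonality being described: whatever the robot is,
// the platform ends up ingesting telemetry, storing it, analyzing it, and sending
// commands back. Names and values are invented for illustration.
struct TelemetryRecord {
    std::string robotId;
    std::int64_t timestampMs;
    double batteryVolts;   // stand-in for "some sensor reading"
};

struct Command {
    std::string robotId;
    std::string action;    // e.g. "return_to_base"
};

class InMemoryPlatform {
public:
    void ingest(const TelemetryRecord& r) { history_[r.robotId].push_back(r); }  // get + store

    // "Analytics": flag robots whose latest battery reading looks low.
    std::vector<Command> analyze(double lowVolts) const {
        std::vector<Command> out;
        for (const auto& [id, records] : history_) {
            if (!records.empty() && records.back().batteryVolts < lowVolts) {
                out.push_back({id, "return_to_base"});
            }
        }
        return out;
    }

private:
    std::map<std::string, std::vector<TelemetryRecord>> history_;
};

int main() {
    InMemoryPlatform platform;
    platform.ingest({"drone-1", 1000, 11.1});
    platform.ingest({"drone-2", 1000, 14.8});
    for (const auto& cmd : platform.analyze(12.0)) {
        std::cout << "send " << cmd.action << " to " << cmd.robotId << "\n";  // command step
    }
}
```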
So a little bit of an aside,
but when you were saying that, I was triggered.
Someone introduced me to some concept a bit ago,
and it's been kind of interesting to hear
in a broader context,
because I kind of think about it
in the terms of control loops,
because I guess that's where I first
sort of stumbled across it.
But this is the OODA loop.
Have you heard of this before?
No, what is that?
Okay, so the OODA loop, I guess,
is something that,
I believe it was a United States Air Force colonel who came up with it, which is basically like a way for humans to sort of tackle problems. I believe he was working with fighter pilots
and thinking about the decision-making process.
And so he came up with this term, this OODA loop.
So it's four steps,
the four letters of OODA, O-O-D-A. It's observe, orient, decide, act. So he has it as this, you know, continual process that's running in your brain. You first observe, then you orient yourself, you decide what you're going to do, and then you act. Then you observe like the output of that action, right? And so hearing you describe this, this is somewhat of an aside,
but coming from, I guess what we call that meat space,
like humans and the Air Force and having this OODA loop.
And then I've heard it applied to business, right?
And making business decisions.
And then here, I'm reminded of it again,
when you sort of talk about robotics
and about how you want to get this flow set up
and this cycle set up.
No, yeah, that's actually, now that you mentioned it, I have basically heard something exactly
like that.
So it does come to mind now.
And that action loop is, I think, at the crux of these kinds of robotic systems, especially
because both they have to make decisions in the real world, but also humans can make decisions
based on the information collected from these,
let's say, robots or sensors.
And yeah, it's observe, orient, decide, act, if I got that right.
And yeah, you can sort of break those down or map those into the various components
of, let's say, a robotics system or platform where, yeah, you ingest data
and you collect it and maybe you display it in
some part of a web UI of your platform.
You know, the user orients themselves and then maybe they find some information they
want to act on and then they can sort of, you know, send the command back to the robot
and get their work done.
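As a toy illustration of that mapping, here is a tiny OODA-shaped loop in C++. The sensors, thresholds, and actions are all made up; the structure, observe, orient, decide, act, repeat, is the part that carries over.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// A toy OODA-shaped control loop. Everything here is invented just to show the shape.
struct Observation { double distanceToObstacle; };
enum class Action { Continue, Stop };

Observation observe() { return {5.0}; }                                // read sensors (stubbed)
double orient(const Observation& o) { return o.distanceToObstacle; }   // fuse into a world model
Action decide(double distance) { return distance < 1.0 ? Action::Stop : Action::Continue; }
void act(Action a) { std::cout << (a == Action::Stop ? "stop\n" : "continue\n"); }

int main() {
    for (int tick = 0; tick < 3; ++tick) {              // a real loop would run continuously
        act(decide(orient(observe())));
        std::this_thread::sleep_for(std::chrono::milliseconds(100));  // loop rate
    }
}
```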
So, yeah, that's absolutely, I think, the right analogy when thinking of these
kinds of systems. I'm debating with myself whether to go through the rest of the things you say,
or to lift up a level and talk about all the different components of a platform,
because you're already starting to allude to some. So you mentioned telemetry, the UI display
component, trackers. I'm tempted to go into each of these, but I think
that might become a little long. So we were mentioning a platform. So to you in your mind,
like you work with these platforms and sort of the commonality between this and the software
that drives it. What are some of the, I don't know if there, if you have like a canonical sort of,
there's these five components or just like, what are the most common ones that you see come up
that are useful across these things? But maybe could you talk a little to, like, what are the normal pieces that would make up sort of, like, a robotics platform? Absolutely, yeah. I guess there certainly are some commonalities. Before I describe them, I do want to, I guess, maybe sort of clarify what we even mean by this platform notion. Because in my head, a platform is something that enables applications.
And with each application you sort of introduce or enable with this platform, future applications
are made cheaper or easier to integrate.
Let me give you maybe an example of what a platform might be. So like normally like in the software world,
you can think of, let's say, the iPhone as a platform,
which is, you know, it enabled the consumer internet
and sort of each application that was added to the App Store
made future applications, let's say, easier
because, you know, Apple could extend its functionality,
the APIs it exposed to app developers,
and you also had this ecosystem effect.
And over time, you had this explosion of apps on the App Store.
So that's, I guess, what I roughly mean by platform.
Does that make sense?
Yeah, maybe I'll take a little bit of an aside here,
and I'll tell it back to you, and you tell me if it makes sense.
So I mentioned dabbling in electronics, like electronics and robotics stuff, as just a hobbyist. So for me, one of the things that was really interesting is, even though I was a programmer and I started very early, predating some of the more modern stuff, you would have to go and find a C compiler for an ATmega part, an ATmega8 or whatever, right? And it was some strange platform. It had very low-level libraries. It was very hard to write for that chip. Then the Arduino and the Raspberry Pi came out. And so to me, when I hear you say that, what you're describing, is that something like an Arduino, where it is a hardware set, but in actuality, it's a library that gives you your sort of input point?
It gives you libraries for taking in sensor data, for actuating sort of motors and whatever.
And you actually can run it on several different, not just Arduino platforms themselves, but even other processors will adapt that library because like you were saying, it speeds up people's ability to get in
and do integrations and make applications cheaper because so much of the lifting is done for you.
Absolutely. Yeah, that's exactly what I mean by platforms. And Arduino certainly qualifies for
that. There is like a similar platform in the hardware world called NVIDIA Tegra, which has
basically enabled, I think single-handedly enabled, the sort of edge perception or edge IoT revolution that's currently happening. And certainly it's for the same reasons you mentioned. They do a lot of heavy lifting for
you. They give you libraries with which you can implement perception algorithms.
And you also get the board obviously with the GPU attached to it. And you can literally slap
it on a robot, and now you have edge machine learning, edge computer vision. So that is the power of platforms. So I actually don't know anything about that. Can you speak to that? So it's, like, NVIDIA, so I assume it means it has a sort of general-purpose GPU processing setup on it. Is that the idea? Yeah, that's basically it, I think. So NVIDIA has these boards that you can use.
And I guess the killer app there is it's a board with the GPU on it.
And the idea is that it's something that you can slap on at the edge.
So it's pretty small.
It's compact.
It's low power.
And yeah, it runs Linux.
And it comes with, you know, yeah, a general purpose
GPU on which you can run all the fancy machine learning and computer vision algorithms that
you would want.
So, and the promise of that again is you don't have to, let's say, if you have frames or
camera video that you have on your robot or sensor, the promise of NVIDIA Tegra is that
you don't have to ship those frames or video
off to the cloud and run inference on the cloud.
You can actually do it on the edge,
which can yield performance improvements,
you know, much better latency on inference and so forth.
Okay, that's probably worth a callout as well here. So the word edge isn't an edge between two nodes, but edge is, like, the frontier of the thing doing the observations.
Yes, exactly.
So there's, as you say, the edge compute
and then there's cloud compute.
And that's the contrast here where you run,
cloud is in a data center somewhere in a big warehouse,
whereas the edge is, it's near,
it's at the edge of where it's actually happening, so to speak.
And you have compute that is sort of spread out across the environment rather than...
Yeah, so that's the idea here.
So that's pretty interesting.
I guess in my experience, I've mostly encountered what you're calling edge compute, but edge
compute as only edge compute, like there's no cloud component.
So are you saying is that for some sort of robotics applications,
I guess that makes sense, is sort of cooperating,
I don't know what you would call them, robots, agents that interact through.
So some data is done on the edge and in the local thing,
but then some is pushed to the cloud and then data is pushed back out
and shared across other?
Yeah, absolutely.
I think that division of labor is quite powerful. And the reason for this
is that there are applications where there are hard limits. So the NVIDIA Tegra platform I mentioned
was, you know, you get four gigs of RAM and even the GPU is not necessarily, you can't run the
largest models there. So it has pretty hard limits in terms of what you can do. So for example, if you want to
do batch data processing, that's probably not the right place to do something like that. But you can
ingest data from all your edge sensors and maybe have a fleet of edge sensors that are sort of, like, percolating through the environment, doing all their work. Maybe they're in a warehouse, maybe in a
construction site and so forth. Maybe you do want to do analytics. You want to understand what
they're doing. You want to get like maybe a report of, you know, how is my fleet doing as a whole?
So those kinds of functions, you can have a data ingestion system to sort of get all that data,
put it in the cloud and, you know, and run batch data analytics and all the, you know,
all the fun stuff you can do in the cloud that wouldn't otherwise be possible at the edge.
So that kind of division of labor is quite powerful when you're building a robotics platform.
Okay, so I guess you brought it up earlier, but I guess we bring it up here again. So telemetry.
So the, I guess, in my mind, that's sort of like the recording or log of what happened.
Sort of in, I'll call it a robot, because I don't know better. But what happened in a robot, or even like in a race car, you know, some of the telemetry is streamed, but some of it may exceed the bandwidth.
And so it's recorded locally.
And so what you're saying is there's also a component here when you get to this at like a large enough scale or commercial scale where you want to do further processing and aggregation of those streams across.
OK, yeah, that makes sense.
All right. Yeah, yeah.
Yeah, that's that's kind of the idea.
It's funny you mentioned racing
because I think that Formula One teams
are getting into this.
So they are using, I think,
Palantir did like a release on this
where they're using their software on Formula One.
So Formula One teams are getting their telemetry
and using Palantir's sort of big data analytics tools
to analyze that. So that's exactly like the application that we are starting to see
with this new sort of IoT or sensor evolution that's going on right now.
Yeah, I saw this the other day. I just Googled it while you were talking because I wanted to get
some reasonable number. But an F1 car, they're saying has over 300 sensors. And it does something
like just transmission from cars to the pits is over a million like data points per second.
So I mean, even if each of those is only a byte, you're still at like a megabyte per second of data, and they're clearly not one byte each. Yeah, that's crazy. I'm not a big F1 fan, but I imagine they know all sorts of
things about the car. Yeah, no, exactly. And I guess like going to the point of the, you know,
bandwidth limitations, I often think about this a lot as well. You know,
when you have an edge network, it's not as ideal as having a data center wired connection, fiber, gigabit, so forth,
where you can literally ship all that data. Often you're running on LTE or other kinds of
lower bandwidth, less reliable links. And the amount of data that can be generated in theory
is pretty large. So you do have this question of how do you process all the telemetry that you might see, let's say,
on an F1 car in an efficient manner. And that in itself can be a pretty interesting data processing
challenge. Or, yeah, it's a question of how do you efficiently get the useful information you need
without, you know, hitting into hard limits of bandwidth and so forth.
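One common answer, sketched very loosely here in C++, is to summarize at the edge and only ship the summary over the constrained link. The one-window min/max/mean summary and the sample values are invented for illustration; a real system would pick what to keep based on the application.

```cpp
#include <cstdio>
#include <vector>

// Sketch: summarize high-rate telemetry at the edge and uplink only the summary,
// so the constrained link (LTE, etc.) never sees the raw sample stream.
struct Sample { double value; };

struct Summary {
    double min, max, mean;
    int count;
};

Summary summarize(const std::vector<Sample>& window) {
    Summary s{window.front().value, window.front().value, 0.0,
              static_cast<int>(window.size())};
    double total = 0.0;
    for (const auto& x : window) {
        s.min = x.value < s.min ? x.value : s.min;
        s.max = x.value > s.max ? x.value : s.max;
        total += x.value;
    }
    s.mean = total / window.size();
    return s;
}

int main() {
    // Pretend these are one second's worth of high-rate samples collected on the edge.
    std::vector<Sample> window = {{1.0}, {1.2}, {0.9}, {1.5}, {1.1}};
    Summary s = summarize(window);
    // Only this small record goes over the link; raw samples stay in local storage.
    std::printf("uplink: min=%.2f max=%.2f mean=%.2f n=%d\n", s.min, s.max, s.mean, s.count);
}
```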
Today's sponsor for Programming Throwdown is SignalWire.
SignalWire is a pretty awesome company that allows developers to use multiple languages to call their APIs and deliver low latency video and audio technology.
So imagine if you're building an application or a website and you want to host an interactive event like a charity event that they supported for the American Cancer
Society, where they're able to have multiple rooms, people interacting in the rooms, like a
video conference call, but like way more tailored to your specifications and so much more flexibility
and the APIs that enable you to do that. They're already being used by large TV studios,
film companies, Fortune 500.
These are all things
that have definitely been battle tested.
And today we are happy to have them
as a sponsor of Programming Throwdown.
Yeah, SignalWire provides expert support
from the real OGs of software-defined telecom.
These are the original geeks of that technology.
SignalWire is a complete unified platform
for integrating video as well as voice
and messaging capabilities into any app.
You could try it today at SignalWire.com
and use code THROWDOWN for $25 in developer credit.
So you go to SignalWire.com
and use the code THROWDOWN at SignalWire.com today to receive $25 in developer credit.
Now back to our episode.
So talking about hard limits there and maybe a bit of my background.
I mean, I think like I start to when you say there's difference between sort of edge compute and cloud compute, and I start to think about decisions that need to be made under a given timeline.
Right. So we're talking about like control loops, right? There's hard real time limits if you want certain performance out of
your loop. So like when you start to mix in these sort of like not guaranteed bandwidth streams and
cloud compute, are you able to still do any sort of like real time guarantees? Or is it sort of
become a much softer thing?
Yeah, yeah, that's an interesting point, and something I've run into firsthand when building these complex robotic systems. There are a few different points there I noted, but to riff on the, you know, guaranteed delivery: I would say that was one of the primary concerns for me when I was building, like, actual robotic systems that do fly and do actual things. And it's something I would always have at the back of my head whenever I'm writing, let's say, a for loop that acts on external data.
It's like I have to think about, you know, what if this data does not actually arrive?
What if I miss measurements that would have been otherwise critical in my decision making? So one of the patterns that I ended up adopting over time is
sort of focusing a lot on idempotence and sort of writing my code or structuring my code in a way
in which I would get almost like eventual behavior. So eventually, my robot would do certain things. So it would
tend towards things rather than depending on exact delivery of messages, if that makes sense.
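A minimal sketch of that idea: the loop acts on the last known measurement while it is fresh, and falls back to a conservative default when it goes stale, rather than assuming every message arrives. The 500 ms staleness budget and the throttle numbers are invented.

```cpp
#include <cstdio>
#include <optional>

// Sketch of "don't depend on exact delivery": use the last measurement if it's
// fresh enough, otherwise fall back to a safe default instead of chasing stale data.
struct Measurement { double altitudeMeters; long timestampMs; };

double decideThrottle(const std::optional<Measurement>& last, long nowMs) {
    const long kMaxAgeMs = 500;   // invented staleness budget
    if (!last || nowMs - last->timestampMs > kMaxAgeMs) {
        return 0.3;   // safe hover-ish default: don't chase a target we can't see
    }
    // Very crude proportional step toward a 10 m target altitude.
    double error = 10.0 - last->altitudeMeters;
    return 0.3 + 0.02 * error;
}

int main() {
    std::optional<Measurement> last = Measurement{8.0, 1000};
    std::printf("fresh data: throttle %.2f\n", decideThrottle(last, 1200));  // uses measurement
    std::printf("stale data: throttle %.2f\n", decideThrottle(last, 2500));  // falls back
    std::printf("no data:    throttle %.2f\n", decideThrottle(std::nullopt, 2500));
}
```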
Yeah, I mean, I guess for like, maybe just to kind of illustrate it for other people. So when
you're building a video game, people know like, oh, you're trying to hit a certain frames per
second, say it's 30 frames per second, or 60 frames per second. And so various parts, maybe this isn't obvious, but various parts of the
system get budgets, right? So the AI for path planning for all of the bad guys, right? That
has like a budget for how long it can take to run. There's a, how long it can take for you to do
determining all the polygons you need to render. Everything gets a budget because you have a
timeline. But for things like robotics,
the difference is instead of dropping a frame of video
or having a stutter, which is not ideal,
but that's not very serious. Whereas if you're flying a quadcopter and your motors need to have a certain signal
sent at a certain rate
to control how fast the props are spinning,
if you miss one of those,
the issue is that like the system isn't going to behave like you thought it would. Like there is a
how fast you need to respond to an input from either the vehicle itself or from a human
controller if they're in the loop for the system to behave as you've sort of modeled it. So I guess in some systems, you get these deadlines,
which are very serious to the operation.
That was super vague.
But like, if you imagine a robot arm,
you know, moving around the world,
and if it sees a person step into its path,
if there isn't a guarantee that you can see with your camera
or sensor that the person is now in the danger zone
and stop the robot within 100 milliseconds. If you end up with a garbage collection running
during that time and you end up with a 50 millisecond delay, then all of a sudden you
can't guarantee the safety of humans around that robot. No, I think that's exactly, that's a pretty
big concern when you're building robots, especially those that interact with the real world and are near humans and or are doing similarly like life critical stuff.
So I guess one pattern that shows up when you're trying to build systems like that is you want to have, let's say like a safety layer. So you want to actually break up your system into say the functional layer and the safety layer.
And you want to keep these pieces quite like decoupled.
And the safety layer is something that kicks in
whenever let's say your telemetry goes off
or whatever, if it thinks that something is off,
that it's not receiving the information
it needs to be receiving,
the safety layer should kick in
and hopefully you have like a path to
say safe exit. And I would say this seems like a probably a pattern that you see across drones,
robotic arms in manufacturing, or even probably self-driving cars where you want to like safely
stop if your sensor systems are malfunctioning or your telemetry is off and so forth. So, yeah, I found that pattern pretty useful in general.
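A bare-bones sketch of that decoupling: a watchdog that only tracks how long it has been since the functional layer checked in, and triggers the safe exit when the gap grows too large. The 300 ms budget and the return-to-base stub are invented for illustration.

```cpp
#include <cstdio>

// Sketch of a decoupled safety layer: a watchdog that only watches the time since
// the functional layer (or its telemetry) last checked in.
class Watchdog {
public:
    explicit Watchdog(long timeoutMs) : timeoutMs_(timeoutMs) {}

    void feed(long nowMs) { lastFeedMs_ = nowMs; }   // functional layer calls this when healthy

    bool tripped(long nowMs) const { return nowMs - lastFeedMs_ > timeoutMs_; }

private:
    long timeoutMs_;
    long lastFeedMs_ = 0;
};

void safeExit() { std::puts("safety layer: return to base / safe stop"); }

int main() {
    Watchdog watchdog(300);          // invented 300 ms budget
    watchdog.feed(1000);
    if (watchdog.tripped(1200)) safeExit();   // still healthy, nothing happens
    if (watchdog.tripped(1600)) safeExit();   // no feed for 600 ms -> safety layer kicks in
}
```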
Yeah, I guess like these topics get pretty involved, like how you sort of guarantee safety,
especially if you start talking about various certification organizations who want to, you
know, safety certify something or medical equipment like a pacemaker, not only like having it be safe,
but proving that it'll be safe becomes quite an involved process, I guess.
No, absolutely. You know, maybe that's a whole topic of its own, formal verification.
That could be an interesting area. I haven't looked too much into it. But yeah, certainly,
like you would have to prove that your software will work under these, in these constraints
and show them your source code and so forth.
Yeah, I haven't personally been through that process so far, but yeah, I imagine it's a
tough problem.
At least, you know, one thing I really do like is very defensive programming when I build some of these systems: default to doing nothing, or, you know, landing if I'm running a drone, for example, or returning to base, which is the other concept in drones. But yeah,
so always having that default fallback pattern helps you have that kind of defensiveness without,
you know, requiring formal verifications. Yeah, yeah, I've had incidental contact with some of this a couple of times.
I guess formal verification
where you sort of have, like, a provably correct, or at least for one definition, like, a provably correct way of coding or design spec.
And you have certain axioms
and you guarantee those axioms are met.
I've actually never come across a system that
tried to tackle the problem that way, although I know people do do it. It's an interesting field.
But yeah, what I've seen more common is, I guess, sort of similar to what you're doing,
this defensive programming, which is you could talk about things like C or C++ coding standards
and what you're allowed to do. So one of the easiest to understand ones is it's very common if you're using,
I'll use C++, that's what I'm familiar with.
If you're using the C++ standard library, right?
And you have an STL vector
and you're inserting something,
it's actually allocating as you're inserting,
it's allocating extra and moving stuff around,
it's doing a lot of work.
And if you're doing that inside of a loop,
you can end up with a lot of performance issues
because when the memory goes to be freed,
what the allocator does or doesn't do at that time period
isn't very easy to reason about.
Or what if you ran out of memory because you didn't do it?
So one of the techniques there in this sort of,
you mentioned defensive programming,
is do all of your allocations up front.
So figure out what you're
going to need, do them at the very beginning. That way the system either fails to start up
or once it starts up, you sort of know no more allocations. And I've seen that be done even to
the point where the allocator is effectively turned off after like some phase of the startup cycle.
And anything that tries to allocate is like guaranteed to not work.
Yeah. Yeah. I think C++ even makes it pretty easy for you with the reserve method. I'm not sure
you're familiar with that. But yeah, you can like reserve space before you even add to the vector.
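A tiny illustration of both points in C++, reserve up front and then keep the control loop allocation-free; the 1024-sample budget is an invented number.

```cpp
#include <cstdio>
#include <vector>

// Sketch of the allocate-up-front pattern: size everything during initialization,
// then avoid growth (and therefore allocation) inside the control loop.
int main() {
    std::vector<double> samples;
    samples.reserve(1024);              // one allocation, up front, before the loop starts

    for (int tick = 0; tick < 1024; ++tick) {
        samples.push_back(tick * 0.1);  // stays within reserved capacity: no reallocation
    }
    std::printf("capacity %zu, size %zu\n", samples.capacity(), samples.size());
    // In a stricter setup you might also fail loudly if anything tries to allocate
    // after this initialization phase, as discussed above.
}
```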
So certainly, that's a pretty good pattern in C++ defensive programming. But you could also do
defense not necessarily on the, let's say, code level, but you could also do it on a process level, which is, let's say, more coarse, but easier to manage. You can keep track of how much memory each process is consuming at any given time, and you can sort of, like, manage the processes running on your compute board
in a way that you just kill, let's say, the application processes whenever they're going off the rails for whatever reason, so that the backup or the safety processes have enough memory so that the drone can return home or the robot can safely reset to a safe state.
So I also do end up thinking a lot on these various levels of abstraction of the system
where you kind of separate it out.
So it's maybe like a shortcut or easier way to build more systems, but the alternative
is to be super careful with how your C++ is structured, which can be pretty hard. And C++ is a beast of its own. Well, yeah, let's not get into that.
Yeah, yeah. So you bring up another interesting point there. So managing, you know, processes, and sort of, like, the approach of making sure that things stay within their limits, I guess that's an interesting thing too. So when we talk about something like Tegra, and you mentioned, like, oh, running Linux and having that kind of stuff, is it, in your
experience? Like, I mean, is it mostly very familiar, like Linux operating systems with
all the sort of programmer ergonomics you would normally see related to that? Or are a lot of the
times you're interacting with devices that have more specialized operating systems?
Right. No, that's a good question. I would say the main difference is actually the architecture,
which is maybe not that different these days since you have ARM on Mac, but the NVIDIA Tegra
platform is an ARM thing. And normally your Linux running in cloud or on your desktop is an x86
architecture. So that ends up being like
probably the biggest difference in terms of the development environment that is exposed to people.
That being said, you also do want to be careful in terms of how you sort of structure your operating system on the robot itself. And by that, I mean, you know, one of the intricacies of compiling a program on Linux is that there's
no guarantee that you get the same artifact when you compile it on, let's say, your cloud
computer or your desktop.
And they could be the same architecture, but, you know, the intricacies of like your package
management and all the paths you have set up on your Linux, those can basically mean
that the compiled artifacts are completely different.
And it could mean that there are intricacies like the memory management problems or even CPU problems you can run into from that.
So that is, I would say, a big area of challenge.
And we do have, there are actually pretty good tools these days to deal with that.
And I don't know if you want to go into that, but there is this thing called Nix.
I don't know if you've heard of it, but it does solve this kind of problem.
So I guess what you're talking about here, and maybe the word is like doing reproducible or hermetic builds so that you know that if every engineer does the same thing.
So I've heard of people using sort of Docker to tackle the challenge, but you were mentioning this Nix.
So what is Nix?
Yeah, yeah.
So I think Nix is, again, like, I think their goal is to solve precisely
the problem you mentioned, which is you want to have a hermetic, reproducible build, so when two programmers are building an artifact, it results in the same artifact. And you should be able to prove that, because you can hash the binary or the build artifact, and they should have the same hash, and you can compare those and so forth. Nix tackles this problem by, you know... Nix is a programming
language that allows you to build things.
And, you know, you can sort of, like, specify: hey, this is my source code. These are the build instructions.
These are the inputs to my build.
So if you need CMake or other, you know, you need Bazel or whatever to build your things,
you can specify those inputs. And then what it outputs is an artifact with a hash.
You could build anything. You could build a text file using Nix. So you could say,
build instruction, just copy paste this text into a text file and the output is build.txt.
But the advantage is that you run the same, let's say it's called a derivation, that's the concept, you specify a derivation, which is the blueprint of a build. And the key bit here is when you run this derivation
on two computers, two developers
running the same derivation,
they get the exact same output.
And if they don't, they can verify this
by comparing the hash.
So that is sort of the key feature it enables.
Interesting.
Yeah, so I guess maybe people have run this.
I've seen this also come up as people doing Python stuff
run into, it just will grab whatever you installed
to your system and you don't know what dependencies,
so you send it to someone else
and they're like, this doesn't work.
So then people use sort of like virtual environments
with pip and a requirements file that specifies I want this version of the
library to be used. And I guess what you're saying is so Nix is able to do that for more than just,
you know, a single application, but actually for the operating system itself.
Yeah, yeah, I'm very impressed that you're able to pick that up. So that's exactly the
idea here, which is that you don't necessarily have to have
your build target as a text file or a binary, but you can literally build an operating system in
this manner. You mentioned sort of Python virtual environments and, you know, yeah, you can specify
requirements.txt, but then say you want to set up a Python virtual environment that is actually
non-trivial. Like I actually struggle a lot with getting a virtual environment going and installing my requirements.txt for whatever reason. But Nix actually lets you
have all of this defined in a set of files. There's even a notion of modules. So you can
decompose your operating system into various modules. And you can literally have a repository
that describes an operating system. and that lets you reproducibly
build these operating systems. And it's a huge deal, I think, especially when you're building
these resource-constrained edge robots where you really do care about what are the big artifacts,
you really do care about the limited resources you have in an operating system. And having that
reproducibility basically lets you eliminate the myriad of bad
variables you could have when debugging an application. So I think it's a really powerful tool.
Well, I'm going to maybe segue a bit, I guess. So we've been talking about a little bit of,
I guess, sort of like the very low level parts, right? So like the operating system,
real time constraints. But I think I sort of,
that's sort of my background. So I'm interested in talking about that. But I don't want to lose the context you were giving earlier, where you were saying sort of thinking about across multiple
different kinds of robots, multiple instances of the same robot and thinking about as a platform
and more than just, you know, an individual, you were saying like an edge agent,
but like across more.
And then you were also mentioning stuff
like even doing sort of like image recognition on the data,
you know, the kinds of other bigger tasks that you might do.
So when you think for yourself
and you were sort of describing this platform
and things that might go in it,
what are some of the other things
besides just, like, sort of configuring a single robot and its communication that would go into that platform?
No, absolutely. Yeah, we did take a digression there. So thanks for redirecting us. It was a
long digression, for sure. But yeah, I think there's a few big components. And I'm not sure
I think of them as necessarily being unique to robotics. But let's say there's a common through line, where events happen in real time.
So anything that happens in real time can be sort of thought of in this way, or these components of the platforms that I will describe, you could, in theory, apply them.
So I think there's a few key components. I think operating systems, which we already discussed,
is certainly one key component
when you're thinking about,
let's say, deploying a fleet of robots.
So you do want to own the systems
and you want to own the installation of those.
So getting to installation of software or artifacts,
there's the notion of deployment,
which is, you can think of it as the D in CI/CD.
Yeah, the main concern is, okay, you're building software, you have iteration cycles, developers
are constantly making updates, you know, in the sort of new age of development, as opposed
to the old age of enterprise software, where you would maybe release a CD or something,
or a single build artifact once you have done coding for months.
These days, we don't do programming that way.
We are interested in continuous integration,
continuous deployment.
So deployment becomes a key aspect
of running these sensor platforms.
And yeah, the other aspects are simulation,
which is sort of simulating the robotic fleet
in the cloud environment.
There's data infrastructure, which is, you know, these are data generating systems.
How do you ingest those?
How do you perform analytics on those?
How do you, you know, run image detection like you were mentioning earlier?
And finally, there's sort of like a networking component, which is these robots have to
communicate over a network and you can't wire them.
I mean, even if you did wire them, that would still be a network.
So there's always a communications networking aspect.
And finally, there's building clients and APIs for these robots.
And what are some of the ideal ways in which you can structure that?
So I would say these five or six areas are roughly key or core components of building such a platform. And you will see these patterns across pretty much any robotics or IoT or sensor platform
you see, whether it's from Amazon or Microsoft or any other company, really.
So I'm going to try my best to go one by one to them and sort of like give us a chance
to talk about maybe each one. So the deployment,
I guess like for me,
I think about the Mars rovers and,
no,
there's a problem.
We don't know what's wrong.
And like,
we need to run a build and send it.
I guess I'll talk about simulation here as well.
So there's some issue.
The Mars rover stops moving.
Some poor guy or girl has to show up to the office and like,
figure out how not to strand a multi-billion-dollar, like, piece of equipment on another planet. But eventually all that
happens, and someone has to deploy the binary. Right, right. No, that's exactly the challenge.
It's funny that you mentioned that because I was having the exact same conversation,
with a colleague who did work on similar stuff in the past. So it's eerily accurate, your description.
And certainly in the Mars case,
the challenge is quite complex
because I think it's not,
like the communication itself is not instantaneous.
So it takes a few minutes or something, 30 minutes,
I think, if I'm remembering correctly,
to get like some packets from Earth to Mars.
And certainly like, you know, they don't
necessarily have the challenges of continuous integration, continuous deployment where,
you know, we have a fleet of coders who are releasing software every day or every week, but certainly the question of patching bugs is quite important because when you have these
robot or sensor fleets, they are basically
out there in the environment and they are there for their lifetime unless you send a field service
agent to go and fix it. In the case of Mars, that's a logistical nightmare, as can be well
understood. Call up Elon, I guess. You have to call up Elon or Bezos these days if they're
taking a flight. But yeah, so I think deployment sort of does solve that
problem for you. And it is a big area or a deep area of thinking about how do you deliver software
to your fleet? So I guess like two questions I have there. So like the first we were talking
a little bit about like compartmentalizing your system and having like a safety part and then
like the normal part and monitoring each other. But then I guess for deployment as well, like there's always a risk that if you're doing
continuous deployment, like sure you tested it, but that there could be something different or
some glitch or a bit gets flipped because whatever neutrino flies through and like swaps a bit in
your robot, like how, how do you handle the sort of like risk of over deploying and accidentally shipping
something that's bad? Yeah, yeah. I think a key aspect of building good deployment software is
building good rollback software or undeploying your software. Yeah. So when you make a software
update onto your edge, you have to be ingesting that telemetry we previously mentioned, and you have to have well-designed health checks
that will catch bugs for you and automatically roll back.
Deployment for robotics is slightly different
from deployment for cloud
because cloud is always connected.
You always know that there's going to be a network link.
It's easy to roll back.
You don't even have to have, let's say,
rollback artifacts present locally. In the cloud, I mean, you can just resend the latest artifacts or the previous artifacts that you wanted
to roll back to.
Whereas when you're doing deployment at the edge, you need to ensure that whenever you're
making a software update, you have the previous rollbacks locally available and ready to go.
And you have an operating system that can do that for you.
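One way that can look, sketched loosely here, is an A/B slot scheme: the previous known-good build stays on the device, and a failed health check flips back to it without any help from the cloud. The version strings and the health check below are invented stand-ins.

```cpp
#include <cstdio>
#include <string>

// Sketch of keeping the previous artifact locally and rolling back without the cloud.
// An A/B slot scheme is one common way to do this; all names here are invented.
struct Device {
    std::string slotA = "v1.4.2";   // known-good build
    std::string slotB;              // candidate build goes here
    bool bootFromB = false;
};

bool healthCheck(const std::string& version) {
    return version != "v1.5.0-bad";   // stand-in for real checks (telemetry flowing, sensors up...)
}

void update(Device& d, const std::string& candidate) {
    d.slotB = candidate;
    d.bootFromB = true;                       // try the new build
    if (!healthCheck(d.slotB)) {
        d.bootFromB = false;                  // flip back to the local known-good slot,
        std::puts("health check failed, rolled back to slot A");   // no network needed
    } else {
        std::puts("update healthy, staying on slot B");
    }
}

int main() {
    Device drone;
    update(drone, "v1.5.0-bad");
    update(drone, "v1.5.1");
}
```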
And Apple does this pretty well, I guess, where if there's some sort of bug, or let's say you turn off your phone in the middle of an update, it will revert back to a
functional condition. So you do have to think about those kinds of things when you are deploying at
the edge and having that rollback ready and health checks. So yeah, I guess that makes sense.
And then having them on the edge
and being able to roll back,
and you were talking about
not having constant communication.
So I'm curious,
is there some sort of negotiation that occurs
about when a robot wants to take an update,
like when it's a good time to do it?
Or is that something that's sort of just pre-programmed?
It's like, oh, at night when it's plugged into the charger.
Yeah, no, that's an interesting question, right?
I guess, like I mentioned, health checks.
And if you don't have the network link,
how do you know that health checks are gone off?
So it's certainly you could maybe, you know,
have a pattern where, let's say in a drone,
maybe it has a life cycle.
Let's consider the example of a drone delivery system
where the drones, let's say, fly around,
make the deliveries and return to base at the end of the day.
So you could maybe have your software update process
kind of planned around those frequent landings
where it's relatively safer or a better time to deliver the updates.
But at the same time, when you build the rollback mechanisms,
they cannot necessarily only live at the cloud.
You do have to have some of those
living on the edge itself.
And like the robot has to be able to figure out on its own
whether its latest software is working for it or not.
And it cannot rely on the cloud for that update.
And then I guess, you know,
we've been talking about making sure things work,
and you mentioned simulation.
I mean, I think people think about writing unit tests
and maybe integration tests for their software,
but how is, like, simulation for sort of robot systems
different than maybe what people are used to?
Yeah, absolutely.
So I would say, I guess, if you're building a cloud application
or database system, you don't necessarily need a simulation component in your testing. But with robots, say, you have a heterogeneity
of robots. Let's say you have different versions of robots, different versions of sensors. Maybe
they have different hardware components. So just those few variables lead to, let's say,
a combinatorial explosion in configurations. So your software is running in a heterogeneity
of environment, in a much more diverse set of environments.
And just that piece itself increases the complexity of testing and deploying your software.
So you do need some notion of simulation or simulating your hardware devices in the cloud,
which adds that extra layer of testability. So before you deploy, you have some level of
confidence versus no level of confidence.
Yeah. One question about this is really, really interesting. The whole idea of using simulation
as a test and what comes to my mind is like when you write a unit test, you know, it's,
it's, let's say you're testing the addition operator. And so you do two plus four and then
you verify that it's six, right? And so the verification, like, you know what the right answer should be. And even if it's something more complicated, you can estimate it
pretty easily, or you can have like some common sense reasoning. But if it's a simulation, like,
you know, fly this drone or drive this autonomous car down the street. I mean, obviously, there's
extreme cases, you know, you fall off a cliff
or something, but, but in general, like, how do you really know if, if the test passed, you know,
like what the fidelity was of the, cause it's, it's not a binary thing. Like, how do you know
the quality of the test? And are you doing some kind of like self-supervised thing? Like
if I can predict what's going to happen next, then maybe that is success.
Even if what happens next is bad. Like, can you walk us through that? I think it's so interesting.
No, absolutely. I think you do hit the nail on the head on the complexity of that kind of problem. So at least my model for thinking about something like this is I think of it a lot more
like almost like UI testing and less like unit testing or backend testing.
And the reason for this is like you mentioned,
you have a robot and then it has to do stuff
in the environment and it may not be deterministic.
So you may do different things and that might be valid.
How do you verify that?
So it really does increase the complexity of your tests.
And certainly when you're building
an automated testing system,
a key thing to keep in mind is flakiness and how flaky the test is. So when I think about testing
these robotic systems, at least I always go back to the Google testing pyramid. I'm not sure you're
familiar with that concept. Maybe I can briefly mention what it is. Yeah, I've never heard of it.
Okay. Yeah. So I guess the testing pyramid is essentially a pyramid. And it has like three layers. At the bottom is like the thickest part of the pyramid
is sort of your unit tests. And that basically means that you want to have most of your tests
concentrated in unit tests. The center part of the pyramid is what they call integration tests,
which in their definition,
it really is integration of multiple units.
So if in a program you have, let's say, two functions and they both are combined in a
certain way to achieve some goal and you test that end-to-end within your program, that
is an integration test.
And finally, there's end-to-end testing, which is the testing of the entire system.
In a web application, that would mean testing the database, the UI, and whatever middleware you might have,
and testing that whole workflow.
And I guess the key part here is that
you want to have most of your tests concentrated at the bottom,
at the set of unit tests,
and you want to have fewer integration tests,
and you want to have bare minimum end-to-end tests.
And the main reasoning or main logic here is that end-to-end tests. And the main reasoning or main logic here is that
end-to-end tests are flaky. There's a lot more going on. There's a lot more complexity going on
there. And just thinking about the testing pyramid helps you even restructure your code so that it's
more testable at the unit test level and does not necessarily have to rely on end-to-end testing.
So I think it's a pretty powerful framework
when you think about how should you structure a test
and how should you even structure,
let's say something like a simulation test.
Yeah, if that makes sense.
Yeah, that makes sense.
Cool, thanks for explaining that.
Yeah, I guess the way that applies,
at least in my head for simulation
is that we have these behaviors that robots can do.
Let's say like you have a drone delivery
trip and you want to make that trip and you want to verify aspects of that in an
automated fashion, you would focus on testing the bare minimum of that.
So maybe you test all the message passing of the system, or you maybe test some
terminal states that the system might have, but you would probably sort of maybe
try and like relax the constraint of the drone delivery.
Maybe it doesn't necessarily test the exact path
that it takes.
So you can maybe just say,
okay, we're going to test the beginning state
and the sink state
and ignore everything in between.
And that sort of like leads to a lot more robust tests
that are not necessarily flaky.
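A toy version of that kind of test: assert on the known initial state and the terminal state of a simulated delivery, and deliberately say nothing about the path in between. The simulator and the state names here are invented stand-ins.

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of "test the beginning state and the sink state, ignore the middle."
// simulateDelivery is a stand-in for a real simulator; the states are invented.
std::vector<std::string> simulateDelivery() {
    // The exact intermediate path may vary from run to run and is deliberately not asserted on.
    return {"docked", "takeoff", "cruise", "avoid_obstacle", "land", "delivered"};
}

int main() {
    std::vector<std::string> trace = simulateDelivery();

    assert(!trace.empty());
    assert(trace.front() == "docked");      // known initial state
    assert(trace.back() == "delivered");    // terminal (sink) state we actually care about
    // No assertions about the middle of the trace: that keeps the test meaningful but not flaky.

    std::puts("simulation test passed");
}
```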
Nice.
So also, I guess one of the things
that I'm curious about is like
simulation and how do you know? So if you're ultimately going to be on some device, which has
what would you call it, like an actuator, a motor, something like that? Do you end up running
a test where you try to see like, does the motor actually move? Yeah, yeah, that's a good question.
I think at least one of my models for robotics these days is that we actually find that the robotics, the actuators or the mechanical engineering, is actually quite sophisticated. And it has sort
of been proven out over a long period of time. And it's often the fact that like those are,
at least in the application domains I'm familiar with, like drones, those are actually like not
the biggest concern when it comes to building these robotic
applications. So most of the time you can actually rely on the mechanical components being fairly
reliable, mostly because we're kind of in this curve where, you know, we have been building
super fancy mechanical robotics. I mean, you can see the Boston Dynamics robots. I mean,
they have so many actuators, I don't even know how many, right? I mean, in terms of the complexity of actuators and their efficacy, we're very late in the innovation stage, where they're really highly reliable.
And that is one of the reasons
I'm personally excited
about robotic software platforms,
because I think if you think about the tooling
or the various building blocks,
like Arduino, like NVIDIA
Tegra, these are actually very new.
So we are at, let's say, the earliest stages of the S-curve on the software side of things.
Therefore, this has sort of, at least in my personal experience, made me focus a lot more
on testing the software components itself.
And just saying, you know, the hardware guys have done an awesome job, and I trust the hardware completely, and have a lot, lot less trust in the software at the moment.
You remind me like, you know, at first it's like, oh, how do you, what do you mean you just trust?
But I think it's funny, or at least I've run across it a couple of times with really new programmers, where they'll do something and then they'll check it twice. So like in their code, they'll do, you know, like Jason was pointing out before, one plus one equals two, and then they'll write a check to say if it's equal to two, and then inside the check they'll write it again, like, if it equals two. And then you ask them, is it going to change between these? Like, how would it change? Well, I don't know, right? And it's like, well, yes, but at some level, if you can't trust the computer to not modify random values,
then your whole program doesn't make any sense.
So when you say this about motors, yeah, I mean, there's always a line.
You're ultimately trusting that the CPU is repeatable
and the instructions it's running, right?
We don't normally test that.
And so, yeah, there always is some line where you don't test below.
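The pattern being described looks something like this small sketch:

```cpp
int main() {
  int sum = 1 + 1;
  if (sum == 2) {
    // Nothing between the two checks can change 'sum', so re-checking the
    // same value adds no information. At some point you have to trust the
    // machine underneath you.
    if (sum == 2) {
      return 0;
    }
  }
  return 1;
}
```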
Yeah, yeah. And certainly, I'm sure the CPU has hiccups; there's no 100% system in the real world. There's no lines, no triangles, right? These geometric objects are ideal objects, like Platonic ideals. So there's no 100% working computing system, but it just works
most of the time, or sufficiently enough that we don't have to think about it. Although, to be honest, a funny thing I did run into is that on the ARM computing platform, the guarantees are much weaker. So if you're writing concurrent code and you're doing lock-free concurrency, there are actually much weaker guarantees about the order in which memory operations become visible. So maybe sometimes that trust breaks down
even on the computing side.
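To make that concrete, here is a generic sketch (not code from the episode) of a lock-free flag-and-data handoff. On a weakly ordered architecture like ARM, the relaxed version lets the reader observe the flag before the data; release/acquire ordering restores the guarantee:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<bool> ready{false};
int payload = 0;

void producer() {
  payload = 42;
  // Relaxed store: on ARM the write to 'ready' may become visible to the
  // other core before the write to 'payload' (x86's stronger model usually
  // hides the bug). The fix: ready.store(true, std::memory_order_release).
  ready.store(true, std::memory_order_relaxed);
}

void consumer() {
  // Pairing an acquire load with a release store would create a
  // happens-before edge; with relaxed ordering there is no such edge,
  // so the read of 'payload' below can legally observe 0.
  while (!ready.load(std::memory_order_relaxed)) { /* spin */ }
  std::printf("payload = %d\n", payload);
}

int main() {
  std::thread t1(producer);
  std::thread t2(consumer);
  t1.join();
  t2.join();
  return 0;
}
```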
Yeah, I mean, all of the out-of-order execution stuff they had at Intel that led to the Spectre vulnerabilities and all of that. Yeah, it turns out none of us know what the CPU is doing, or at least it's not doing what we thought it should.
But cool, I wanna make sure we get
a little bit of time to talk about,
so we talked about most of the elements.
I think there were a couple we left off, but I want to make sure we have a few minutes to talk about sort of what comes next. So you were describing sort of having a platform where, you know, the robots' data gets brought in, somehow you deal with this heterogeneity, creating, you know, I imagine you didn't really talk about this, but databases where the data is stored and can be analyzed and sort of gone over, which leads to the ability to refine behavior and improve things by doing traditional big data analytics. But beyond that, you know,
I don't know, what is your thought? Like, I have some stuff, but I don't want to like
bias the conversation. So I guess I'll let you go first. Like, what do you think comes next? So
like, there's more and more robots, you mentioned, like drone delivery, Boston Dynamics, like,
cars, like, I think we are seeing more robots. Like, that's inevitable at this point. Like,
what do you think comes next? Yeah, absolutely. I think it's an interesting question. When I think
about sort of, let's say, technological progress in general, I'm often reminded of
Isaac Newton's line on this, which is, if I have seen farther, it is by standing on
the shoulders of giants.
And similarly, I think we have technological progress when we stand on the shoulders of
platforms.
And certainly there's like, you can think of these in layers.
So we have the Linux layer, we have the operating system layer, we have the networking layer,
we have the hardware layer.
And like we mentioned earlier about trust, like when we make progress, we trust each
layer of the system incrementally.
Similarly, so when we have these platforms, I think we are not quite there where we can
say we have a mature set of robotics platforms on top of which we can
easily build applications. But once we do have some of these mature platforms, in which it is very easy to build applications, you have higher-level APIs. One of the things I've been fantasizing about is that you just literally have a declarative behavior specification system and everything else is abstracted away from you.
You don't have to think about the systems.
You don't have to think about the deployment or the simulation.
All of that is like somehow provided to you.
If you have this kind of, let's say, platform, you can build higher order applications.
You can do, I don't know, things like much better fleet management for robots, where they do actually, you know, optimize their paths, they do share tasks, collaborative autonomy. I would say that's one big area that gets enabled this way: if you have, let's say, a fleet of robots, they're not necessarily doing work independently or individually, but collaboratively. So they are sharing data, and you can imagine a higher order of applications.
And, you know, this gets mentioned a lot, but I do think you can also start building really powerful learning systems that automatically, you know, learn behaviors. And once you have all that infrastructure in place, you can start realizing some of the hopes we have for reinforcement learning and so forth.
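One way to picture the kind of declarative behavior specification Abhay is fantasizing about (purely a hypothetical sketch, not an existing platform API) is an application that only states goals and lets the platform handle deployment, simulation, and execution:

```cpp
#include <string>
#include <vector>

// Purely hypothetical sketch of a declarative behavior specification: the
// application states *what* a fleet should achieve, and a mature robotics
// platform would handle everything underneath.
struct Waypoint {
  double lat = 0.0;
  double lon = 0.0;
  double alt_m = 0.0;
};

struct BehaviorSpec {
  std::string name;
  std::vector<Waypoint> deliver_to;  // goals, not flight paths
  int fleet_size = 1;                // the platform picks which robots fly
  bool share_observations = true;    // enable collaborative autonomy
};

int main() {
  BehaviorSpec spec;
  spec.name = "grocery-delivery";
  spec.deliver_to = {{37.42, -122.08, 30.0}};
  spec.fleet_size = 3;
  spec.share_observations = true;
  // In the imagined platform, handing 'spec' off is all the application does.
  return spec.deliver_to.empty() ? 1 : 0;
}
```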
Yeah.
I guess that's kind of like, that makes sense.
Like moving up the abstraction ladder, let's call it, and getting higher and higher order. So most people don't worry about programming in assembly anymore, or at least do it rarely,
you know, we keep moving to higher levels.
And then I think this thing you talked about, reinforcement learning, I guess, is one
of the things that I always think about, but it is probably, well, Jason will probably chime in, but it feels like it's maybe in a little bit of a perpetual future. You see things like, was it OpenAI that did Gym, where it's like, oh, here's a bunch of video games with standardized input. You were talking about simulation. I mean, if you have enough fidelity in your simulation, in theory you can put agents in there, give them a motivation or not, and have them figure out what it means to sort of move around in that world and how to optimize themselves. And, you know, yes, it gets very sci-fi, people write about this, but hopefully they don't turn a Terminator on us.
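The standardized interface Patrick is gesturing at is Gym's Python API; sketched generically here in C++ (all types are hypothetical stand-ins, not Gym itself), the loop it popularized is: reset the environment, let an agent act, observe, and collect reward until done:

```cpp
#include <vector>

// Generic sketch of the agent/environment loop that frameworks like
// OpenAI Gym standardized. All names here are hypothetical stand-ins.
struct Observation { std::vector<double> values; };
struct Action { int id = 0; };

struct StepResult {
  Observation obs;
  double reward = 0.0;
  bool done = false;
};

class SimEnv {
 public:
  Observation reset() { steps_ = 0; return Observation{}; }
  StepResult step(const Action& a) {
    ++steps_;
    // A real simulator would advance physics here and score the action.
    return {Observation{}, a.id == 0 ? 1.0 : 0.0, steps_ >= 100};
  }
 private:
  int steps_ = 0;
};

int main() {
  SimEnv env;
  Observation obs = env.reset();
  double total_reward = 0.0;
  bool done = false;
  while (!done) {
    Action action{};  // an agent or policy would choose this from 'obs'
    StepResult r = env.step(action);
    obs = r.obs;
    total_reward += r.reward;
    done = r.done;
  }
  return total_reward > 0.0 ? 0 : 1;
}
```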
Jason, what are your thoughts about that? Yeah, so actually I think robotics and reinforcement learning both have the same
phenomena. And I don't know if maybe you gave me this, this soundbite, but feel free to take
credit back for it. But people say, you know, it's called a robot until it works reliably,
then it's called an appliance, right? So, so like your washing machine isn't a robot anymore. Your
refrigerator isn't a robot anymore, but you know, at the time they were these mechanical, you know, aberrations, right?
And they didn't work well.
And so that was the right time to call them robot.
And I feel like reinforcement learning is actually falling through the same sort of
situation, where it's like, it's reinforcement learning until it works reliably. Then it's like control theory,
you know? And so, and so, yeah, I wonder if reinforcement learning is always going to be
just the word for the thing that doesn't work yet and control theory and, and, you know, I don't
know, bandits and all these other words will be used for all the things that are already
established. But yeah, I mean, I think we're making tremendous progress in both areas, robotics and RL. And I don't know if we'll see killer robots, because I think on the whole human-values side we're still really far behind. But yeah, I think that, you know, getting a robot to climb the stairs of 90% of houses in the country, that would be really powerful. Or even just fold the laundry
of 90% of articles of clothes would be amazing. And yeah, I wonder how far we are away from that.
I mean, as someone who has no background in robotics, I can't tell if we're a year away
from that or two decades away. Yeah, yeah, I think there's a lot there, I would say. And I love that you mentioned robot as almost a pejorative term, where you don't want robots, you want stuff that works, which is pretty funny, I guess. And
I think there's some truth to that. My theory on this is that it's kind of like, I think people
both overestimate and underestimate scale at the same time. So on the one hand, people believe that,
you know, if you have big data, you can predict everything,
you can know everything, that the AI can know more about you than you know about yourself. So there's certainly that belief. But on the other hand, people underestimate the power of scale in the sense that there's something that does really change. There's a step
function change that does happen once you do have that scale.
And at least my theory of what has happened in machine learning was that initially it was kind of a toy and, you know, neural networks were kind of a joke.
But then you did have maturity of systems.
You know, Google had these really mature big data systems.
Then it did start working.
You know, it does really work for recommendation and search and so forth.
But of course, with pretty hard limits at the same time.
So my view on this is, again, going back to the software platforms, like once we have
these robotic software platforms established and sort of permeating the world, that's when
you get the higher order learning systems.
And that's when they stop being robots and start being, you know, things that work.
Yeah, that makes a ton of sense.
Yeah, Abhay, thanks so much.
I mean, this has been an awesome interview.
So I know you have a website.
We'll put it in the show notes
and your Twitter handle.
Is there any other thing
you want to sort of like talk about
or tell people to do or visit or read
or, you know, anything you want to kind of say?
No, I think that that sounds like a great, great spot to stop. So yeah, I do want to thank you for
having me on the show. It's been a really awesome experience.
All right. I want to thank everyone for, I guess, tuning in. That's outdated. For downloading or
streaming the podcast. I thank Abhay for, you know, coming on and talking to us about
robotics platforms.
Thank you to all our Patreons who help make all this possible.
If you would like to become a Patreon, you can visit patreon.com slash programmingthrowdown.
And we have so many great people writing in to us, telling us stuff and helping us out.
And there's been a lot of enthusiasm for the sort of more frequent podcasts.
So I hope all of you are staying safe and healthy and we'll see you next time. See you later.
Music by Eric Barndollar. Programming Throwdown is distributed under a Creative Commons Attribution-ShareAlike 2.0 license. You're free to share, copy, distribute, and transmit the work, and to remix and adapt the work, but you must provide attribution to Patrick and I, and share alike in kind.