Software at Scale - Software at Scale 33 - Drone Engineering with Abhay Venkatesh
Episode Date: September 28, 2021
Abhay Venkatesh is a Software Engineer at Anduril Industries where he focuses on infrastructure and platform engineering.
Apple Podcasts | Spotify | Google Podcasts
We focus this episode on drone engineering - exploring the theme of "If I wanted to start my own technology project/company that manages drones, what technology bits would I need to know?" We discuss the commoditization of drone hardware, the perception stack, testing and release cycles, simulation software, software invariants, defensive software architecture, and wrap up with discussing the business models behind hardware companies.
Highlights
1:56 - Are we getting robot cleaners (other than Roomba) anytime soon?
5:00 - What should I do if I want to build a technology project/company that leverages drones? Where should I be innovating?
7:30 - What does the perception stack for a drone look like?
13:30 - Are drones/robots still programmed in C++? How is Rust looked at in their world?
18:30 - What does software development look like for a company that deploys software on drones? What are the testing/release processes like?
20:30 - How are simulations used? Can game engines be used for simulations to test drones? Interestingly - since neural networks perceive objects and images very differently from how brains do it, adapting drone perception to work on a game engine is actually really hard.
26:30 - Drone programming can be similar to client-side app development. But you have to write your own app store/auto-update infrastructure. Testing new releases manually is the largest bottleneck in releases.
30:00 - Defensive programming for drones - how do you ensure safety? What is the base safety layer that needs to be built for a drone? "Return to Base" logic - often separated out into a different CPU.
33:00 - How do hardware businesses look different from traditional SaaS businesses?
38:00 - What are some interesting trends in hardware that Abhay is excited about?
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev
Transcript
Welcome to Software at Scale, a podcast where we discuss the technical stories behind large software applications.
I'm your host, Utsav Shah, and thank you for listening.
Hey, welcome to another episode of the Software at Scale podcast.
Joining me today is Abhay Venkatesh, who is a software engineer at Anduril Industries, a defense contractor. Thank you for joining me. Thanks for having me. Super excited to be on the podcast.
Yeah, so maybe you can get started with just in general, like how did you get interested in
robotics, right? So Anduril does a bunch of stuff with drones, and I don't know how drones work. I
just think they're super cool. So what got you interested in that? Yeah, absolutely. So I guess I've always been passionate about how we can use software or how
can humans work with software and what is the benefit that software provides humans.
So I ran into this guy, J.C.R. Licklider, who was this computer scientist in the 60s.
And he had this theory that there's going to be something called human computer symbiosis,
which is essentially like humans learning from computers or humans benefiting from computers
and computers benefiting from humans.
So it goes both ways.
And that really interested me because I thought that would be a way in which somehow automation
or the future would develop.
And that's why I got into HCI, human-computer interaction, which is probably the closest field to that idea of human-computer symbiosis. And that was my first foray into robotics, because the HCI lab at Wisconsin also had a huge robotics focus; it was called HRI, I guess. And I was working on collaborative robotics, collaborative industrial robotics, doing tasks with robots in an industrial setting. And yeah, that was sort of my foray into
this space. And since then it's been a big rabbit hole going deeper and deeper.
So since you have a lot of familiarity with the robotics space,
are robot cleaners coming anytime soon? Or is that just like a distant reality?
Yeah, yeah. I think that's one of the things you usually first come across, like if you watch the Jetsons, or the Roomba, the meme robot, which, I think, is actually a serious engineering company, and it's a serious engineering problem. But yeah, it's kind of funny, because we would expect robots to be helping us out in the home. But over time, what I've learned is that it's actually the opposite: robots in the real world are much harder than robots in the digital realm. So if AI replaces human labor, it's probably going to be in the knowledge realm before the physical realm.
So that's one intuition that I didn't originally have, but over time, given the difficulty
of building software in the real world, it's kind of my impression.
That's how I would sequence it at least.
And you also did a bunch of AI research, right?
So what got you interested in that?
Yeah.
So the AI research piece was again going off of that human-computer symbiosis thing. I started off with human-computer interaction, and that was a lot about how humans can benefit from computers. But symbiosis is this two-way relationship where the machine should also be getting better with interactions from humans. So yeah, I learned that human-computer interaction is really valuable in understanding how we can better build interfaces or build software that is easy to use, but it didn't necessarily allow me to build software that was, I don't know, the next generation, or push human-computer symbiosis forward.
And that's why I got into machine learning as a way to actually have the computer
learn from humans and the computer to get better. And yeah, that was sort of my motivation for getting started. And Wisconsin is a serious research school; they have labs for pretty much everything, and I was able to get into machine learning at one of the labs there. And yeah, it was a blast.
So maybe you can tell us at a higher level what Anduril does.
I know it's a defense company, so you can't share all the secrets, but we'd love to know some details.
I think at a high level, what I can say is that Anduril is a defense company that builds hardware and software products. And the way I think about it is that it's about taking the improvements made in the consumer product space in the last 20 years and applying those to defense, so applying the Silicon Valley mindset to defense, because the status quo in defense is like big contractors,
like Boeing
and Lockheed Martin, and they don't necessarily have a Silicon Valley way of doing things.
And what I mean by that is moving fast, having a high autonomy culture, having rapid iteration
cycles, things like that is sort of our strategy to succeeding in the defense space.
Let's say that I'm a software engineer
and I want to build a drone company. Where would I be looking first? What should I be learning
first? Or where is the area of innovation that I should be thinking about? Yeah, absolutely.
There's a few different components to that. One is building the drone itself.
And I would say that's pretty commoditized at this point.
There's probably thousands of drone manufacturers.
I would focus a lot more on the software side.
So there's a perception stack that you would have to become really good at, which is how
can the drone move around in the world, learn about its environment and so forth. And the other stack would be sort of the platform stack of how you can monitor your drone.
You know, you have to build an application. The drone has to do some work.
If you're applying the drone in agriculture, for example, you want to be able to see how the drone is performing in the agricultural application.
So you want to be monitoring and so forth. So I would say those are the two areas, like building a strong perception stack and building a strong platform stack that can
integrate a commodity drone, really, like you can just buy any drone off the shelf and integrate that drone into some useful application. I would say those are the main areas that you
would have to learn. That's like a high level. I mean, and then like, how do you dig into those areas
is something I'd be happy to chat a bit more about.
Yeah, it seems very similar to commodity servers, and whatever Google started 15, 20 years ago: before, everybody would try to buy specialized hardware, but then they tried to build software that works on anything.
And are you thinking that's similar to how you should approach robotics or drones, for example?
Yeah, yeah, I've actually thought about that analogy quite a bit, especially since the work I do is also about building a large set of drones in the cloud, and thinking about this transition from specialized to commodity hardware. The way I think about it, at least, is that it's the point at which the hardware is becoming good enough, and general-purpose enough, that the main benefit or advantage comes from the software going forward.
So once you start to see a commoditization in the mechanics or mechanical engineering
of any kind of computer system, that's when you basically
need to say, okay, that's the point at which now we need to be building software and not
necessarily improving the form factor of the drone itself or the hardware itself.
Maybe then let's talk about perception.
Do you get a stream of images continuously that you can do some kind of ML on?
Is that the idea?
Yeah, yeah.
So there's computer vision, and computer vision becomes a big part of building perception.
I haven't thought about that too much, but it's also somehow similar to that pattern
of commoditization, where you get a very good general purpose sensing tool. It's an image stream and then you can apply
software to get order-of-magnitude improvements. So you really just have pixel images, which, first of all, are lossy, because it's an inverse problem: you go from a 3D world to a 2D image, so you're losing information. But usually you can work around that with smarter perception algorithms, machine learning, and so forth. That's been the trend there, and with that you can do a lot, really.
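As a small aside to make that lossiness concrete, here is a standard pinhole-projection sketch in Python (generic textbook math, not any particular drone's camera model): two different 3D points can land on the same pixel, so depth can't be recovered from a single image alone.

```python
def pinhole_project(point_3d, focal_length=1.0):
    """Project a 3D point (x, y, z) in camera coordinates onto the 2D image
    plane. Depth is divided out, which is exactly the information you lose."""
    x, y, z = point_3d
    return (focal_length * x / z, focal_length * y / z)

# Two different 3D points land on the same pixel, so the 2D-to-3D inverse
# problem is ambiguous without extra cues (motion, stereo, learned priors).
assert pinhole_project((1.0, 2.0, 4.0)) == pinhole_project((2.0, 4.0, 8.0))
```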
And then there are two questions: where do you do the processing, and how do you distribute the various kinds of perception workloads you might want to do? There's some stuff you want to do on
the device. And there's obviously a benefit of having a large repository of data where you can
do big data analytics or longer range monitoring. So there's going to be a divide of distribution
and computation, mostly because there's a scarcity of compute resource. You can't run a cloud server on your drone,
so you're going to have some limitation there. These days there are pretty good edge AI compute platforms, from NVIDIA and Intel, that let you run inference at the edge,
and they're getting better all the time. You can run machine learning on your iPhone these days.
So this stuff is getting better all the time, and it will probably continue to get better.
But yeah, right now you can run Edge inference.
It's not necessarily going to be the best models.
They're going to have frame rate limitations, so you wouldn't necessarily be able to run the fastest stuff.
But you can definitely do flight critical stuff. So like you mentioned already,
the drone may lose connection with its home base or internet or whatever. You ideally want the
drone to be as self-sufficient as possible. So you would distribute the critical computations,
like localization, for example, onto the drone. And then you would have a back channel, let's say on a lower priority stream to the big
data center or whatever, where you can do longer range stuff processing.
Maybe you're in a construction application where, say, you're going to map out your construction site.
So you wouldn't necessarily build a 3D photogrammetry on the drone itself.
You would do the photogrammetry on a cloud,
but you would let the drone do localization
and you would stream on a back channel,
a backhaul link to your cloud whenever that's safe or available.
And that's one way to think about it.
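To illustrate the split being described, here is a minimal Python sketch; the `localize`, `update_flight_controller`, and `Link` names are hypothetical stand-ins, not any real drone SDK. Flight-critical localization always runs on board, while frames queue up for the cloud and only drain when the backhaul link is available.

```python
import queue

class Link:
    """Stub backhaul link; a real one might be LTE or a radio mesh."""
    def is_up(self) -> bool:
        return False  # assume the link is down by default
    def send(self, frame) -> None:
        pass

frame_backlog: queue.Queue = queue.Queue()  # low-priority upload backlog

def localize(frame):
    """Stub: on-board visual localization (flight-critical, stays on the drone)."""
    return (0.0, 0.0, 10.0)

def update_flight_controller(pose) -> None:
    """Stub: feed the estimated pose to the flight controller."""

def on_camera_frame(frame, link: Link) -> None:
    # Flight-critical work always runs locally, even with zero connectivity.
    update_flight_controller(localize(frame))
    # Heavy work (e.g. photogrammetry) is deferred to the cloud: queue the
    # frame, and drain the backlog only while the backhaul link is available.
    frame_backlog.put(frame)
    while link.is_up() and not frame_backlog.empty():
        link.send(frame_backlog.get())
```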
Yeah, that makes a lot of sense.
The drone itself can just do the ML to navigate things
and do object detection and all that.
But then in your
construction example, they would be sending back these frames or pictures of what it's seeing,
and then something else can basically recreate the building in that case. That is pretty interesting.
You can do a lot with that, especially if you have a large set of drones. I mean,
that's like, you know, it's a data source. It's like a way you can do big data
stuff once you have that, because now you have a rich real world data source over which you can do,
I don't know, like any kinds of machine learning, AI analytics applications.
Yeah, you'd have to build some pretty interesting technology to stream all of those images back, and to also stream the coordinates and stuff, I guess, where the drone exactly is, so that you can actually stitch back that image. Yeah, that sounds like a tough problem.
Yeah, that's an interesting area of work as well.
And yeah, it's cool that you brought that up because getting networking right is a huge
deal for any kind of real-world robotics.
And for the same reason you mentioned: we need a way to prioritize messages. We would also have to rethink the way we design our applications, because in a data center, most of the time, you're not really worried about the network, depending on the kind of application. Some databases require you to assume the network is going to fail, but we don't usually think about that as much. But when you're building a distributed robotics application over a network that can be problematic, it can be a huge problem.
And there's also different kinds of links you can choose. You can choose LTE, but you can also pick up a radio and run a network over that.
So there's options there as well.
And there are companies building mesh networking, decentralized networking technologies, where you can run a mesh network without necessarily depending on a cell phone tower.
So there's lots of cool technologies getting developed
in this space right now. Well, you just made me realize that managing a network of satellites,
maybe like Starlink, is remarkably similar to managing a network of drones.
Yeah, it's a very similar problem. I agree. And yes, SpaceX is a pretty cool
enabler of these kinds of applications.
Now you have a network anywhere globally.
It's still a little bit nascent.
They don't necessarily have everything very well set up right now, but over time I have confidence that you'll be able to very easily build robotics applications without worrying about the network, at least not the link layer or the physical layer. Then we just have to figure out the application semantics, Layer 7 of the OSI model.
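A minimal sketch of that message-prioritization idea, with made-up message classes rather than any real protocol: flight-critical traffic always preempts bulk imagery on a constrained link.

```python
import heapq

class PrioritizedLink:
    """Drain flight-critical messages before bulk data on a constrained link."""
    CRITICAL, TELEMETRY, BULK = 0, 1, 2  # lower number = sent first

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-priority messages stay FIFO

    def enqueue(self, priority: int, payload: bytes) -> None:
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1

    def next_to_send(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

link = PrioritizedLink()
link.enqueue(PrioritizedLink.BULK, b"image-frame-0042")
link.enqueue(PrioritizedLink.CRITICAL, b"low-battery-alert")
assert link.next_to_send() == b"low-battery-alert"  # critical preempts bulk
```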
That's interesting.
I want to ask you one question that's a little deeper before we get into that idea of just
robotics as a platform.
But just out of curiosity, as an engineer, I'm thinking: do you all still write C and C++ to run on the robots?
Has that changed at all?
Or is that just the de facto standard still?
Yeah, unfortunately, it is still the case.
There are some things that just stick around. It's kind of like the Lindy effect or something.
The more it's been around, the more it's going to be around.
There's a similar version in language where once English spreads or whatever, how do you
get rid of it?
It's a similar thing.
C++ is just always going to be around, I think.
Go has done a pretty good job actually on the server-side backend. These days, lots of companies are moving off of C++ for building servers, high-performance servers, but in robotics, C++ is still the gold standard. And that comes with its own set of tooling challenges, because, at least the way I think about C++, it's a multi-paradigm language that can do pretty much anything. And as a consequence, it can be really painful to deal with, because it in theory supports any kind of application, which means it's nearly unusable at times, at least.
So yeah, that's a little bit of a bummer.
Yeah. Hopefully someday the next generation of companies decides to rewrite all their robots in Rust.
And like, do you know of any efforts like, you know, in the open source or anything,
people actually thinking of that?
So, yeah, we actually considered using Rust, and our whole issue with it was that we thought the tooling ecosystem was not mature enough, the open source community and so on. That was one concern.
And the other concern is like,
how do we hire engineers who are familiar with this language?
And it's just like, there's a lot of costs associated with it.
Like you need to get people comfortable using it, you need to get the tooling right.
You need to have a good ecosystem where you can just get a library easily.
You need to have other people using it.
So it takes a while to get all those going.
And then I think the genius of Go is that it's so stupidly easy to use. You learn it in a weekend, and the tooling is just ridiculously simple.
There's no style or language wars. They just gave you a formatter and now you just have to use this
formatter. Nobody's fighting over that anymore. So I think you need those kinds of bootstraps
to succeed in the language ecosystem.
And we personally tried Rust and for those reasons we couldn't go forward with it. And
C++ was the best choice. Yeah, that makes sense. I don't know if you're familiar with that recent
controversy. It's not a controversy, but Rust's handling of out of memory errors is a little interesting.
That's why the kernel developers are not okay with it, because it just panics and aborts.
Ideally, at least in the kernel, you should be able to control what happens when malloc
fails, basically.
So they're working on that right now.
Yeah.
This is a trade-off between usability and flexibility.
Exactly. When they first developed it, they were like, who cares if malloc fails, just kill the program. But that's clearly not okay in the kernel.
Yeah. Let's maybe walk through that idea of robotics as a platform, because I've been intrigued by that, and I just had to go on this programming language tangent first. You made me think about AWS as this platform such that you don't have to worry about servers anymore.
But today, if I want to start a robotics company, I can buy hardware, but I have to build all of this perception stuff and everything on my own still.
How do you think that's going to change in the future?
Yeah, that's a good point.
And I've thought about that as well. And I think Amazon, Microsoft,
all these companies have tried to do something similar, but they don't necessarily provision
you a set of like a fleet of drones. So the hardware management piece is still an unsolved
problem. And it's a little bit different than cloud because in the cloud case, you can provision
the servers anywhere. Like it's going to be in some server farm and it doesn't matter. Whereas in the drone case, the drone has to be closer to
the actual location. That being said, I can actually see a future where you have some
provider that provides drones as a service, as a platform, and developers can just provision them. And there's ways in which they
only fly between a certain zone or there's safe flight zones, there's safety features.
There's a lot of infrastructure work that needs to happen and nobody's doing it necessarily.
And there are some companies that are better positioned to do it, but I don't think anybody's
directly tackling that problem of just having drones available as a platform for use for
developers to build on.
Yeah, it's an interesting space to think about if you're interested in building a startup.
Yeah, I think that makes sense.
And how does software development work in an organization that has to run a bunch of drones?
Do you still have that kind of divide of tooling people?
What does the release process look like?
Let's say I've written a bunch of new software,
I've modified my application,
how do I test my changes?
How do I make sure the next deploy is not going to break things?
Yeah, that's definitely a high level of complexity in the robotics world.
Because in cloud software, it's all in the cloud and you can easily build basic CI, CD
processes.
You have a CI job in your repo and then maybe you'll have some integration testing environment
and both of those pass.
And usually that's fine, I guess. Maybe some companies have something fancier than that, but that's a good baseline.
But in the robotics realm, you have two new or hard steps. One is you need to simulate
all this stuff. So this robot is in a physical world, it's flying around, it's a drone, it's
flying around in the real world. Other kinds of robots are even more complicated; a self-driving car is another good example. That's even more complicated because there are pedestrians, all sorts of nonsense you can run into. So you need simulation. And the other piece is hardware integration, because your software has to run on robotic hardware, and it's not necessarily as commoditized, going back to our earlier conversation. On the deployment side, you don't have EC2 that you can just program and get on demand to scale up or scale down. You need to build all that manually and custom-fit it to your specific robot. So those two components complicate things a lot, and you have to think about how you get all those things right, depending on your robotics application.
Okay, so let's talk about simulations, right? Like, simulations aren't perfect,
but how do you go about building a simulation that makes sense?
Yeah, it's a broad question. And there's so many ways to think about it. And depending on the application domain, it will probably differ.
One strategy is to use kind of like a testing mentality, a testability mentality.
I don't know if you're familiar with this concept, but there's a testing pyramid where
you typically have the bulk of your test cases, unit tests, and you have the center of the pyramid
is integration tests. The top of the pyramid is end-to-end tests. You would think about simulation
in a similar manner. You want to have extremely coarse simulation at the lowest level, which is just API mocking or something: you would build APIs for your robot and you would mock them. In the middle layer, maybe you would add a little bit more fidelity to the simulation. Maybe you would have some physics in the loop: not necessarily realistic physics, but somewhat workable physics. In the case of a drone, maybe you can project, using basic physics models, where the position of the drone will be, given its velocity and position right now.
So you can do that kind of stuff.
And then at the top of the pyramid would be like really high fidelity simulation.
You would do physics modeling, you would build high fidelity models.
Maybe you would do computer vision.
Maybe you would generate synthetic worlds from a game engine to augment your perception.
So that's one way to tackle it.
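A toy illustration of the bottom two layers of that pyramid, with a hypothetical drone API (the names are invented for the example); the "physics" is just the kinematic projection described above, position plus velocity times dt.

```python
import unittest
from dataclasses import dataclass

# Lowest layer: mock the drone's API and test logic against the mock.
class FakeDrone:
    """Crude API mock: records commands instead of flying anything."""
    def __init__(self):
        self.commands = []

    def goto(self, x, y, z):
        self.commands.append(("goto", x, y, z))

# Middle layer: workable (not realistic) physics for integration tests.
@dataclass
class State:
    position: tuple
    velocity: tuple

def step(state: State, dt: float) -> State:
    """Project the next position from velocity alone: p' = p + v * dt."""
    p = tuple(pi + vi * dt for pi, vi in zip(state.position, state.velocity))
    return State(position=p, velocity=state.velocity)

class TestLayers(unittest.TestCase):
    def test_mocked_api(self):
        drone = FakeDrone()
        drone.goto(1, 2, 3)
        self.assertEqual(drone.commands, [("goto", 1, 2, 3)])

    def test_kinematics(self):
        s = step(State((0.0, 0.0, 10.0), (1.0, 0.0, 0.0)), dt=2.0)
        self.assertEqual(s.position, (2.0, 0.0, 10.0))

if __name__ == "__main__":
    unittest.main()
```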
Overall, it's a pretty hard problem.
And there are companies that have put decades and billions of dollars into this, like Waymo. It's some ridiculous infrastructure.
Yeah.
Do people use game engines for maybe the highest-fidelity stuff, or something similar to a game engine, ultimately?
Yeah, that's always been an interesting problem. And I think the main benefit with a game engine, at least, is that you don't need the game; you can use the physics model from it. But one of the benefits, at least with the latest Unreal Engine and so forth, is that you can get photorealistic environments, and those environments are always getting better, and they're really good already.
And using that to augment your perception algorithms is in theory useful, although it's
in practice really hard to make work. And maybe this is just an artifact of the way neural networks work: if something looks similar to you, it doesn't necessarily look similar to the neural network. Our brain does a lot of optimizations that make two things look similar, but the model can still get confused.
So it's a hard transfer learning problem.
Yeah, that's fascinating, right? You might think it's easier to train a drone through a game engine, but because it's a neural net... wow, that's something I've never had to think about, and it's really, really interesting. So Waymo and all of these other companies, and I don't know if Tesla does any simulations like this, they just seem to roll out new versions the standard way, rolled out to one percent, five percent, ten percent, rolled out to employees first. But somebody like Waymo has to just build custom-purpose simulations that work with their internal systems and test on that. That smells like a startup, like another startup idea right there. But I don't know how big; I didn't think the market for something like Scale AI would be so big, just labeling for self-driving cars, but, oh, billion-dollar company. So you never know, right? This just seems like a hard problem that everybody who has to build a drone company has to think about at some point.
Yeah.
There is a company around this actually.
It's called Applied Intuition.
I don't know if you've heard of it.
Oh, interesting.
They just build simulation infrastructure
for self-driving cars.
So it's exactly that.
Too late, unfortunately for you.
Let me look that up on Crunchbase and see when they were founded.
Hopefully I'm not more than five years late.
If it was like 2015 or before,
then I'm still thinking too much in the past. Yeah. You said the next step after your
simulation is actually testing it on hardware, right? Does that involve like just humans rolling
out the new software on that hardware, making it go around in circles like a drone, for example?
Like what does it mean? Yeah. That is a good question.
There's certainly one human component to it.
We still need people to move the drone to a test site or something.
I guess the way FAA regulations work in the United States, at least, is that you just can't fly anywhere. You need to be smart about where you fly those things. You can buy a warehouse, and with one of those you can just fly stuff within the warehouse, or you can get some large private property or rent space from its owners. So yeah, getting that physical component going is a real challenge. And that might be connected to the platform idea from earlier, where you have all of that
managed for you somehow, and it's easy to do that.
But yeah, that's one component, which is managing the test site itself and getting the drone
to fly.
The other component is maybe more around the lines of continuous integration, continuous
deployment, which is how do you get your software installed on the appropriate targets?
And how do you manage the software at the edge on the devices?
How do you do things like rollback?
And it's probably precedent for some of that already with companies like Apple, which are
essentially doing something like this, where you have edge deployment,
hardware deployment, rollback, and so forth. But that's the other piece, I guess, you would have
to build today because it's all commoditized and you will probably have to fit your particular
developer use case.
Okay, so what I'm imagining is that you have some kind of auto-update infrastructure on each drone that's checking: is there a new package, is there a rollback package? So it's just like deploying client software, as you said, like phones and desktops, and that's the CI/CD part. What do the release cycles look like? Let's say I want to roll out drone version 2.0, how long does that generally take at a small company or a large company?
Probably the biggest bottleneck is the physical testing side.
Just coordinating the test site can be insanely hard.
And just availability of test sites, again, is another problem.
You have to find a test site.
At most software startups, you don't have to find anything; you just have to find a room and coders. Here, somehow, you have to find a test site.
Coordinating all of that, that becomes the bottleneck in your release cycle.
So getting that out of your release cycle, minimizing that, thinking about your software
architecture and reducing the exposure of your software architecture to the test site is critical, basically.
And that's where, again, simulation comes into play.
The goal is to get to cloud release cycles, but in practice, a real production release
can take weeks because you're just blocked on test time. And depending on the scale of the company, that can get harder and harder over time, and it depends on the kind of robot as well.
Do you have QA engineers who basically just use the drone in a certain set of scenarios, like a bunch of scenarios that the drone has to pass with new software or something like that? I don't even know how you would do something like that.
That would be the goal, which is you have a suite of scenarios. The analogy actually that I've learned over time is it's closer to UI testing than back-end testing for this reason,
because the space of interactions is so large that it's hard to really systematically unit
test it or integration test it. So that's why probably you would need QA testers,
which is kind of how front-end testing is also done most of the time, is because it's hard to
unit test because the space of interaction is so large. And yeah, that's one approach. The other approach is to build a better simulation, or do the Tesla thing and somehow find a hack to get people using your product as testers. Maybe if you're a consumer drone company, you would do rolling updates, and then you would use the consumers as testers, which is what Tesla does, I suppose: you just use the drivers as testers. That's another hack, but otherwise you'd have to do it manually.
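To make the auto-update idea from a few minutes ago concrete, here is a minimal sketch of the check/install/rollback decision a drone-side updater might make; the manifest URL and its fields are invented for illustration, not any real fleet's infrastructure.

```python
import json
import urllib.request

# Hypothetical manifest endpoint; a real fleet would run its own infrastructure.
MANIFEST_URL = "https://updates.example.com/drone/manifest.json"

def check_for_update(current_version: str) -> tuple[str, str]:
    """Fetch the release manifest and decide what this drone should do next."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as resp:
        manifest = json.load(resp)
    if current_version in manifest.get("bad_releases", []):
        return ("rollback", manifest["last_good"])  # pull back a bad release
    if manifest["latest"] != current_version:
        return ("install", manifest["latest"])      # new release available
    return ("noop", current_version)
```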
How do you, I guess, code a drone? I'm just asking questions as I think of them, because I'm learning so much. How do you code a drone in a way that is defensive? I'm guessing there are probably layers that make sure it doesn't hit a human by mistake, similar to the three laws of robotics or whatever from iRobot.
Is there a general architecture that you've learned
after working on this for so long that works?
Thinking in layers is the right approach.
And it again depends on the kind of robot, I guess. If you're building, like, I think Elon rolled out a humanoid robot, so for that one you might need Asimov's laws of robotics, like don't kill people or whatever. So you would need that first layer: we don't kill someone. For drones, it's usually that you would build an RTB system, a return-to-base system, first, which is: you want to always have the drone just find its way back home if anything goes wrong. If literally anything goes wrong, it should have a hard-coded path home, or a smarter way back home. So that's number one in terms of defensive programming.
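A toy sketch of that base safety layer: a return-to-base watchdog that engages whenever the mission layer stops proving liveness. The heartbeat mechanism, `autopilot` interface, and home coordinates are all invented for the example.

```python
import time

HOME = (47.6205, -122.3493)  # hard-coded home position (made up)

class ReturnToBaseWatchdog:
    """Last-resort safety layer: if the mission layer stops proving liveness,
    abandon the mission and fly home. Nothing above this layer overrides it."""

    def __init__(self, heartbeat_timeout_s: float = 5.0):
        self.timeout = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called periodically by healthy mission/perception code."""
        self.last_heartbeat = time.monotonic()

    def should_return(self) -> bool:
        return time.monotonic() - self.last_heartbeat > self.timeout

    def engage(self, autopilot) -> None:
        """Simplest possible recovery: fly straight back to base."""
        autopilot.goto(*HOME)
```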
And then you can build layers on top of that, depending on your behavior.
That's like on the logic level.
You also want to be thinking similarly on the programming level itself.
You don't want to be writing code that will run into a buffer overflow or allocate too much memory. So you do want to layer the application as well. You can containerize your services on the drone to isolate the resources they use, for example, so you can ensure that there are always enough resources available for the flight computer. You can go a step farther and separate the flight computer out: just get different computers on your drone. Computers are getting smaller these days, so you get different computers for each function, and then you put the critical stuff on one and the perception on another. So you can layer even on a hardware level. As many layers as possible, to hedge the risk of something going wrong as much as possible.
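One way to sketch that resource-isolation idea on a POSIX flight computer, capping a hypothetical perception process so a leak there can't starve the flight-critical software; a real system would more likely use containers or cgroups directly, and the service name here is a stand-in.

```python
import resource
import subprocess

def run_perception_capped(max_address_space_mb: int = 512) -> subprocess.Popen:
    """Launch a hypothetical perception service in its own process with a hard
    address-space cap, so a leak in perception can't starve the flight stack."""
    def set_limits():
        cap = max_address_space_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (cap, cap))  # POSIX-only
    # "perception_service.py" is a stand-in for the real perception process.
    return subprocess.Popen(["python3", "perception_service.py"],
                            preexec_fn=set_limits)
```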
Yeah, I think that makes a lot of sense. I guess it just makes sense to containerize on one computer, or have a different computer. These are the same tricks, I guess, that we learn in regular software engineering, just applied in a place where you need to be much more concerned about everything. That makes a lot of sense. This seems less like a black box to me now.
Now, let's talk about running a hardware business, right?
Like let's say I'm running a robotics business.
I have experience, or at least I've been in a couple of, you know, just standard SaaS
companies like enterprise SaaS or whatever.
And I kind of know now, roughly, what you need. Not everything, for sure, but you know, you need a support team,
Not everything, not for sure, not that, but you know, you need like a support team,
you maybe need a sales team,
you maybe need a customer success team.
Okay, these are all the different components
to run an enterprise SaaS business, right?
Like what are some components you need to run
like a hardware business, right?
Like I'm sure every business is different
and everything has its own, you know, nuances and stuff,
but like how would you go about even thinking about that?
Like, or like what are some things that are just obvious after you think about it for five minutes?
I would say there's two different kinds of hardware
businesses. One is the hardware businesses that are actually hardware businesses; that would maybe be Apple in the older days. These days it's a little bit confusing to me because they have a services component as well, but Apple in the older days was a pure hardware business. The other set of businesses are hardware businesses that are really software businesses. And those two tend to have different kinds of business models. In the first kind, you would need to crack a rapid enough
hardware release cycle.
See what went wrong with Fitbit: nobody wants a new Fitbit every three months. Apple, with the iPhone, maybe they intentionally screw with your iPhone and it just gets slow somehow
over time.
They force you to buy a new iPhone.
So they figured that out.
So there's a constant revenue stream. The other set of businesses are more... I've actually learned
to model it as the hardware is more customer acquisition cost. It's a way to get your
software in somebody's hands. Let's say you're building Nest, for example. You would allocate the spend on hardware production as customer acquisition; it's spend to get the product into people's houses.
And then you would have a service model, which is back to SaaS or back to subscription, which
is now you would get constant updates, they keep getting better, there'll be AI, there'll
be all these better features over time.
And if you don't subscribe, I guess your software will stop working or something.
And once you do that, it started looking more like a SaaS business in terms of how you think
about it, at least from the business modeling side.
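A back-of-the-envelope version of that model with invented numbers, treating the hardware subsidy as customer acquisition cost and the subscription as the recurring revenue that pays it back:

```python
def months_to_recoup(hardware_cost, sale_price, monthly_sub, gross_margin=0.8):
    """Months of subscription revenue needed to recover a hardware subsidy."""
    subsidy = max(hardware_cost - sale_price, 0)  # device sold at or below cost
    return subsidy / (monthly_sub * gross_margin)

# e.g. a $250 device sold for $200 with a $10/month service at 80% margin:
print(months_to_recoup(250, 200, 10))  # -> 6.25 months
```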
Well, yeah, when I think about other examples that plug into that, like Alexa: you buy the hardware, but then it's really the thing that talks to the cloud and understands what you're talking about that's providing value. And it keeps getting better and you just subscribe to it and all that.
Yeah, that's good and also a little sad at the same time, that everything is converging to SaaS, but it just makes sense.
Yeah, that is always a big worry of mine, that everything becomes SaaS and now I'm subscribing to everything. I'm subscribing to my food or something, which is a little bit scary, but that's how the business model works, I guess.
That's like DashPass, and all of those things, right? So it is happening. We have a liquid food subscription, which actually exists; I forget the name. Soylent?
Soylent, yeah, which is, I guess, a liquid food subscription. So it has happened. It's too late. Everything is SaaS.
Yeah.
If you were like the CTO of like your next robotics company,
what are the first set of roles that you would hire for?
Like who would you look for in terms of like engineering?
Like not specifically, but what are the like skill sets and stuff?
Who would you be looking for?
Like what would you be searching on LinkedIn?
This will obviously be a little bit biased
perspective because there's not necessarily
one way to go about these things.
So I can give my biased perspective
on what I think is valuable.
On the hardware
side, I would just look for
really solid technicians
who can build shit
predictably, reliably, on time.
They have a strong record of just producing physical stuff.
On the software side, my bias is towards just hiring scrappy, high-energy engineers and
giving them as much autonomy as possible. So hiring the highest-caliber, highest-energy engineers who are just ready to go crazy at your company. And I would unleash them, essentially, in your company.
So this would be like the early days hiring strategy
where you don't necessarily have any product.
You're trying to get something out the door.
You just have to get the most
scrappy people who can just...
I've seen some crazy people, I've met some crazy people who are just willing to work,
I don't know.
And not necessarily because they are forced to work, but they just love building stuff.
So that kind of really high-energy, scrappy, love-building-stuff mentality, and getting them on a high-autonomy plane and getting them compounding as soon as possible, would be my focus. And it's really hard these days because those people are usually getting snapped up by 20 different companies, so it's a tough situation in terms of hiring. But those are the people I would be looking for.
Okay, yeah, I think that makes sense. And maybe as a wrap-up question: what are some advancements in the robotics world, in the hardware world? You're talking about better chips and stuff. Recently I got an M1 Mac, which is pretty cool; I didn't know that Macs didn't need to overheat, so I've been pretty impressed.
Yeah, that sounds like a life changer. I would get that, because my laptop is burning my lap.
Yeah, laptops don't need to, you know; they don't ship this one with a fan just because it doesn't need one. It's just so fast, it's perfect. So what are some advancements that you're excited about, that are happening right now and you think could actually make a difference, or where you think something big is going to happen 10 years from now? What do you think?
So there's two main trends that I'm mainly excited about, and I would like to have people
work on it.
I would like to work on it.
One trend is the hardware specialization trend, which is you're going to get more and more
specialized chips.
You're hitting limits in Moore's law; you can't just increase transistor count anymore, so you hit limits of general-purpose computing. We've already hit limits on that. So what will happen is you're going to get more specialized computing chips like the M1, a more specialized chip on which you can still do general-purpose computation. But it makes a trade-off: it trades performance for coolness, for energy use, basically. So you will get lots of specialized hardware.
So that will be a huge trend, which is you're going to have different kinds of hardware
and getting really good at dealing with different kinds of hardware, programming them, building
systems from them is going to be really key and valuable. The other trend I'm really excited about is the increasing size of neural networks.
And I'm positive about those unlocking new opportunities in robotics.
And so the intuition on this is, there's been like AI hype for I don't know how long, and
that's always been the thing.
But I think something changes with scale. So you have
a large enough neural network. I don't know if it's going to become God or something. I do know
that it's going to do things that it wouldn't be able to do in the past. And my general model has
been if you can solve an AI problem in a toy fashion, like you can define it as a toy problem,
you can eventually scale it up with enough
compute to solve that problem in a reliable manner.
So I'm really excited about that because you can start thinking about building real life
agents that are more intelligent, that can do more stuff, build more general purpose
robotics.
And to add to that, if you have solid platform infrastructure, which is what we discussed, then you would have all the necessary ingredients to actually get robots into the world: actually get that cleaner robot, or the robot that would help manage your house and make your food.
So yeah, I think those are the things.
Thank you so much for joining me. I think it was a lot of fun, and I certainly learned a lot.
Yeah, this was a lot of fun. Thanks, guys. Bye.
Bye.