Screaming in the Cloud - Episode 54: Rethinking the Robot: How AWS Robotics is helping shape the future of domestic and commercial robotics
Episode Date: April 3, 2019

Some of the highlights of the show include:
- The benefits of RoboMaker in code deployment
- How cloud computation frees up local resources
- Using machine learning to improve robot reaction
- How great a name RoboMaker is
- Amazon's commitment to the enduring API

Links:
- https://aws.amazon.com/robomaker/
- http://www.ros.org
- https://aws.amazon.com/deepracer/
Transcript
Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud,
thoughtful commentary on the state of the technical world,
and ridiculous titles for which Corey refuses to apologize.
This is Screaming in the Cloud. Well, AWS, back the f*** up. Backups are incredibly easy. Restores, however, are absolutely not.
You want to find out that your backups work well in advance, instead of the way most of us do: when they don't work quite right, immediately after you really, really, really needed them to work correctly.
Check them out at n2ws.com. That's N2WS.com. Thanks to them for supporting this ridiculous podcast.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined by Roger Barga, General Manager of AWS Robotics.
Roger, welcome to the show.
Thank you.
So starting at the very beginning, what would you say that RoboMaker does exactly?
So RoboMaker tries to remove all the undifferentiated heavy lifting that a robotics application developer has to do from the moment they try to start their project with multiple team members, making sure everybody has the exact same development environment, offering them a good runtime to actually run on their robot, and then offering them simulation as a way to test their application in 3D or 2D environments,
and also complementing the software with cloud services powered by AWS,
which I believe the cloud is going to be one of the most powerful resources
a robotics application developer has access to.
But it also allows these developers, once they've built their application,
tested it through simulation, maybe running hundreds of simulations to test their robot in different environments, the ability to publish their application to a robot anywhere in the world and manage hundreds or thousands of robots in a fleet.
So it really provides end-to-end application development support for robotics.
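To make that workflow a bit more concrete, here is a rough sketch of what kicking off one of those simulation runs looks like in code, using the boto3 RoboMaker client. The ARNs, bucket, package, and launch-file names below are placeholders for illustration, not details from the episode.

```python
# Hypothetical sketch: launch a RoboMaker simulation job with boto3.
# All ARNs, names, and durations are illustrative placeholders.
import boto3

robomaker = boto3.client("robomaker", region_name="us-west-2")

response = robomaker.create_simulation_job(
    maxJobDurationInSeconds=3600,              # stop the job after an hour
    iamRole="arn:aws:iam::123456789012:role/MyRoboMakerRole",
    outputLocation={                           # where logs and artifacts land
        "s3Bucket": "my-robomaker-output",
        "s3Prefix": "warehouse-sim",
    },
    simulationApplications=[{
        "application": "arn:aws:robomaker:us-west-2:123456789012:simulation-application/my-sim-app/1",
        "launchConfig": {
            "packageName": "warehouse_simulation",  # ROS package to launch
            "launchFile": "simulation.launch",
        },
    }],
)

print("Simulation job ARN:", response["arn"])
```

In practice you would kick off many of these in parallel, one per test scenario, which is the "hundreds of simulations" idea described above.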
Where does a service like that come from? I guess, what sort of pain did
you see in the industry that made you decide, yeah, this is a service that we should bring to
market? I'm trying to imagine a conversation that ends with, you know what would really help this
problem? That's right, a whole bunch of robots. Now, in my business, I don't have any of those
needs. I also live in fairly gentrified San Francisco, where giant piles of robots solve remarkably few problems in my life and introduce a whole bunch
more. Obviously, I am probably not the target market for this. Who is?
Yeah. So there's a number of innovative companies that are right now exploring what can be done with
automation and robotics. And we view robotics as a very general term. It's anything that can sense, compute, and take action.
So a Coke machine can be a robot.
A dishwasher can be a robot.
So are the Kivas that are running around in our fulfillment centers.
And so we started talking to many of these startups, and many of the developers within Amazon Robotics, who are building and deploying robots.
We said, where do you spend your time?
What's tough about this?
Where are the hard parts that really don't add value to your robot?
And this is how we started to understand the product definition for RoboMaker.
Because what we saw is these developers spend 80, 90% of their time on tasks that add absolutely no unique value to what the robot they're trying to build will actually do.
They have to set up dev environments, set up simulation, manage the machines associated with that, and deal with very clunky mechanisms to update their robots, let alone
manage them once they're put into production. And this is actually what became the product
definition for AWS RoboMaker. All of which makes sense, but how does the cloud enter into this?
Yeah. So one of the things, if you look at actually where a robot spends its compute power and what resources it has available to it, you quickly find that most of the power that's spent on a robot is computing functions locally on the robot, which in fact could be shifted to the cloud, allowing more power to be utilized by the robot for movement and interacting with its environment. When you step back even further and think about how could the cloud be used to capture data about all the robots in production to detect trends? How could the cloud
be used to coordinate the activity and orchestrate the activity of a group of robots in your house
or a fulfillment center? Then you start to see the value that a cloud can bring not only to an
individual robot, but someone who's trying to actually build a business and optimize a business with a collection of robots.
I am in no way, shape, or form a roboticist.
But to my naive view, I would imagine that if you have a robot, as I guess society generally conceptualizes a robot,
the fact that you have motors that provide locomotion, potentially there's a vacuum on it,
maybe it has a bunch of articulated arms that do different things. It seems to me that the power requirements
to power those engines are almost on a different order of magnitude than what it takes to power a
CPU or a disk or RAM. So from where I sit, it seems like having compute on device isn't really
something that moves the needle in any meaningful way.
Is that naive of me?
It is.
It turns out, for a lot of robots we actually looked at, over 50% of their power was spent processing imagery coming in through the cameras and processing data coming in from the sensors, especially for things like navigation or creating SLAM maps, which are computationally intensive. When we can stream the data coming off of a LIDAR or a camera to the cloud, do the computationally intensive mapping, object recognition, and route planning up in the cloud, and push simple instructions down to the motors, you can save over half of the battery power. That's to say nothing of the fact that a developer who's trying to build an affordable robot does not have to put expensive compute hardware on it. A customer we've been working with, whom I'll talk about, puts very affordable, low-power chips on the robot, because they can push that compute capability into the cloud and spread the cost of what they're paying for across thousands of robots, rather than putting it on each and every robot. It brings
the total cost down. And this is really important because we see a world where there could be
hundreds of robots that we get to work with throughout our house, throughout our businesses.
And that cost has to be low for them to provide value for the company that's running them.
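As a rough illustration of that offloading pattern, here is a minimal sketch of the robot-side half: pushing raw sensor frames into a Kinesis stream so the heavy mapping and recognition work can run in the cloud. The stream name, robot ID, and payload shape are assumptions made for the example, not details from the episode.

```python
# Hypothetical sketch: stream LIDAR frames off the robot into Kinesis,
# so SLAM and object recognition can run in the cloud instead of on-device.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")

ROBOT_ID = "warehouse-bot-17"          # placeholder robot identifier
STREAM_NAME = "robot-sensor-frames"    # placeholder Kinesis stream

def publish_lidar_frame(ranges):
    """Send one scan to the cloud; the robot keeps only a thin control loop."""
    record = {
        "robot_id": ROBOT_ID,
        "timestamp": time.time(),
        "ranges": ranges,              # raw distances from the LIDAR
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=ROBOT_ID,         # keeps one robot's frames ordered
    )

# Example usage with a fake scan:
publish_lidar_frame([1.2, 1.3, 0.9, 4.5])
```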
When you're talking about a commercialized robot, something that a company generally tends to sell for a fixed fee and then it has capabilities that may be cloud empowered.
Do you find that the economic story winds up changing as a result?
Instead of a fixed bill of materials for a robot that's out the door, now you effectively have cloud services, which are historically pay-for-use, which means that over the life cycle of the device you have a different economic model than existed previously. And do you find customers are okay with that?
Indeed, we've found this in cloud computing in general, where customers can amortize the cost and the investment of a piece of software not on a per-robot basis, but over actual usage. And they find the economics actually
work out in their favor. Not to mention the fact that they're sharing information. If you just
think about robots navigating throughout your house or fulfillment center, each one of those
has information about its local environment, which it can share up to the cloud. If another robot needs to enter that part of the warehouse it no longer needs to spend
the compute power to understand what the map looks like. It can simply borrow from one of its
neighbors who's been there previously and utilize that map, saving compute power for everybody. So again, it's not just about sharing compute power, it's about sharing the information and knowledge that each one is gathering. It's also exciting when we think about machine learning.
Let's say we put a machine learning model and install it on the robot for navigation,
and it bumps into a wall.
It can actually send that little bit of information, the one or two errors that it made, up to the cloud.
And if the customer has hundreds or thousands of these robots,
each one making one or two mistakes, they now have a large corpus they can use to retrain the model
and push a more intelligent model back down to the robot the next day. So RoboMaker effectively empowers the
hard slash interesting parts of building what most of us think of as a robot, not the actual
assembly line pieces of constructing things in hardware. That is correct. And in fact, I'd even
argue that when you look at the problems roboticists have to solve, they are genuinely hard problems, and some of the work is actually pioneering. So to ask them to spend 90% of their time doing this undifferentiated heavy lifting is really slowing innovation in this field. We can give them that time back so they can do that innovative work, build the custom hardware that's going to make their robot special, and take advantage of the ecosystem and services that we're providing.
For some of us, making fun of various AWS service names has almost become a sport.
And I take a look at RoboMaker and what it does, and it is a shining example of an awesome name.
It's very descriptive. It's catchy. It fits in a single tweet, which in some cases is hard to get
to. It's so well named, I almost have to assume that someone fought against it when it was first
proposed. Was this the first name for the service you were considering, or did you
have a more contentious discussion? Naming is taken very seriously here at AWS.
Names are important. A name will shape how a customer thinks about a service. It can shape what people think they can do with a service. And I have to admit, I'm really bad at naming. I'm a very pragmatic individual. I came forward with some very simple names for the service. Then, with our leaders, we stepped back as a group, and your five ideas turn into a list of 500, and you get to think about the merits and see what your peers think of them from their experience.
It turns into a journey, but also a heck of a large number of meetings.
When it's done, you actually look back and go, yeah, that's a great name.
Why didn't I think of that in the first place?
Speaking from personal experience, it is many orders of magnitude easier to make fun of
a name than it is to come up with a good one.
Naming is an art.
And as much fun as I have with tearing down the very hard work of other people in that
context, it's hard.
There's no great way to get there.
Changing gears slightly, there's been a lot of talk about ROS or however it's pronounced.
R-O-S.
I read, I don't speak.
What is that?
ROS, yes. You know, researchers 10 or 12 years ago realized that research in robotics was being slowed down by the very problem that I described for industrial applications of robots: that they too were repeating the same undifferentiated heavy lifting to build a robot so they could actually publish their thesis and do that last little bit of interesting work. So the community got together and said, let's actually build an open source academic runtime for robots. It's not an operating system, it's a message-passing bus. Think about a sensor
that actually senses something and it puts what it senses on a message bus with a topic. And again,
robots sense, compute, and act. So if there's a compute node that needs to process information
from that sensor, it subscribes to that topic, does the processing.
If it needs to move a motor, it puts a message down the line with another topic.
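That sense/compute/act loop over topics is easiest to see in code. Below is a minimal sketch using ROS 2's Python client library, rclpy; the topic names, message types, and the deliberately trivial "policy" are my own choices, purely for illustration.

```python
# Minimal ROS 2 node illustrating the publish/subscribe pattern described above:
# subscribe to a sensor topic, do some compute, publish a motor command.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan     # the "sense" message
from geometry_msgs.msg import Twist       # the "act" message


class SimpleAvoider(Node):
    def __init__(self):
        super().__init__("simple_avoider")
        # Subscribe to the laser scanner topic (sense).
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)
        # Publish velocity commands to the motor controller topic (act).
        self.cmd_pub = self.create_publisher(Twist, "/cmd_vel", 10)

    def on_scan(self, scan: LaserScan):
        # Compute: drive forward unless something is closer than half a meter.
        cmd = Twist()
        closest = min(scan.ranges) if scan.ranges else float("inf")
        if closest > 0.5:
            cmd.linear.x = 0.2
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(SimpleAvoider())


if __name__ == "__main__":
    main()
```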
And what's happened over the years is that academic institutions, researchers have been contributing to this ecosystem of ROS packages for different actuators, different sensors, different computational tasks like navigation.
And it's been picked up by industry as well.
There's thousands of companies that are using ROS today for commercial applications of robots.
And what's been happening over the last year is industry has been saying,
let's advance this from a research platform to an industrial strength, open source platform for robotics,
where the code has been verified, tested,
hardened, performance has been improved. And that's the effort called ROS2, which we're proud
to be part of. And this is much larger than Amazon or AWS itself. This is a community or
industry-wide effort. Yes. Not unlike what happened with Linux, a number of companies have stepped up and said, we have a vested interest in making the best industrial-strength runtime for robots.
It's open source, community supported, and community driven.
And each one of the companies in the technical steering committee
for ROS2, of which Amazon was one of the founding members,
is actively contributing source code, designs, and code reviews,
and reaching out to the broader open source community of startups,
asking for their feedback, their input as well to define and build ROS2.
Which brings us to the topic of open source, which has a bunch of different directions it can go in.
But let's start at the beginning.
What are you doing that is open source?
You said you're part of a larger effort.
How does that manifest? So we, as part of our membership of the Technical Steering Committee,
and just what we feel is our responsibility to the open source effort behind ROS2,
we have engineers who are actively contributing to ROS2 code.
In fact, in the most recent release of ROS2, Crystal, which just happened back in December, roughly 40% of the code and designs that were contributed came through my team. We have a number of engineers who are not only contributing source code to ROS2, but also reviewing designs and helping support the community through forums, playing a very active role. And this is what we
expect every company in the ROS2 technical steering committee to do as well. Because
this together is how we're going to make this a successful open source platform for robotics.
What are you doing, I guess, that's different, as far as the open source world goes these days?
Yeah. So Amazon and AWS have played very active roles in open source software, and not in all cases have we been so visibly active and vocal. In this case, we are. We actually came out as a public member of the technical steering committee. We're helping actively drive discussions, getting feedback from the community, and doing so in a very visible manner. And we think this is important, because if you're a startup or another company thinking about investing in ROS, you need to see the names of the companies who are standing behind it and hear about the contributions they're making. So can we
trust our business with this? And what's been fantastic since the announcement of both
the launch of RoboMaker,
but also the ROS2 initiative
and the companies,
specifically Amazon behind it,
a number of robotics companies
have approached us and said,
we're now moving to ROS2.
We see that Amazon's behind it.
We see the RoboMaker support,
and this is going to be
community supported and led.
Let's go ahead and port our robots
over to ROS2.
One thing that strikes me
as a bit of a strange tangent off to the idea of running robots that are cloud connected is,
as wonderful as the idea is of offloading all of the compute, all of the different heavy lifting
that they wind up doing to a third party provider that's somewhere in a data center far, far away.
What about latency slash, I guess, safety-sensitive concerns here?
The easy example is an autonomous vehicle, which is a whole separate kettle of wax.
But we're waiting on an API error, and there's a timeout, and we're doing exponential backoff.
Meanwhile, you are hurtling toward the bay and wanting not to drive into that. Do you find that there are use cases
for which there is no substitute for on-device computation? Or is there a fallback mode? I mean,
how do you envision this? Absolutely. And we can talk about what customers have done,
including our own robots in our own fulfillment centers. We also follow trends and anticipate when we see trends unfolding. 5G is unfolding, which will give us ubiquitous connectivity for devices around the world with low latency, so we see connectivity increasing and latencies falling. But let's talk about where we are today. If you want to have a highly interactive session with your robot and you cannot afford that latency, RoboMaker allows you to actually deploy code, including machine learning models,
onto the robot itself for low latency interaction.
And then you can partition the work
where if I can actually handle high latency,
if I want to tell my robot to go actually get me something
out of the refrigerator,
I'm okay if it takes 100 milliseconds latency
for that command to actually get up to the cloud,
translate it to commands, and back to my robot.
So good engineers will actually partition the work that needs to happen on the robot versus what can actually be offloaded to the cloud.
And again, this is that interesting question of programming the edge, what role the cloud will play, and how you partition your work, which makes it such an interesting problem.
And again, companies that need safety critical processing will put that on board on the robot,
prioritize those messages, prioritize that processing,
and actually offload other processing capabilities to the cloud.
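Here is one way that partitioning might look in practice: the safety-critical stop stays on the robot as a tight local check, while the latency-tolerant "go get me something" request goes up to the cloud, in this sketch via the Lex runtime. The bot name, alias, and thresholds are made up for the example.

```python
# Hypothetical sketch of partitioning work between the robot and the cloud.
# Safety-critical: handled locally, no network round trip.
# Latency-tolerant: a spoken request sent to an Amazon Lex bot in the cloud.
import boto3

lex = boto3.client("lex-runtime", region_name="us-west-2")

SAFETY_STOP_DISTANCE_M = 0.3   # placeholder threshold


def local_safety_check(lidar_ranges):
    """Runs on the robot every control cycle; must never wait on the cloud."""
    return min(lidar_ranges) < SAFETY_STOP_DISTANCE_M  # True -> stop motors now


def cloud_interpret_command(utterance: str):
    """Okay to take ~100 ms: ship the utterance to a Lex bot for intent parsing."""
    response = lex.post_text(
        botName="HomeRobotBot",        # placeholder bot
        botAlias="prod",               # placeholder alias
        userId="robot-001",
        inputText=utterance,
    )
    return response.get("intentName"), response.get("slots")


# Example: the fast path and the slow path coexist.
if local_safety_check([0.25, 1.4, 2.0]):
    print("Obstacle too close -- stopping motors locally.")
intent, slots = cloud_interpret_command("get me a drink from the refrigerator")
print("Cloud-parsed intent:", intent, slots)
```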
When this service was announced at Midnight Madness at reInvent last year, it was fascinating
in that it was sort of out there, not directly tied to other services.
And what was also neat about this is that it was announced as being generally available.
This was not an "in preview, coming later, apply for access" thing. It was there, ready to go, that evening, because what you absolutely want people to do is build industrial robots at two o'clock in the morning the day it's released, because that could not possibly go wrong. I've never known Amazon to
release a service that did not already have active customers using it.
You're not generally a company that says, hey, we built this thing.
We're super proud of it.
Maybe someone will use this.
You aren't the throw a bunch of stuff at the wall and see what sticks company.
You have customers actively using this on launch day.
Do you have any you can talk about?
I do. And in fact, even from the very
inception of the project, because we do work backwards from the customer, we reached out to customers
that are running robots in production as soon as we had our PRFAQ written to get their input,
to get their guidance and prioritization of features and really dive deep with them in
actual use cases. We later onboarded those customers into an advisory board and actually a beta program. So we were working with customers months before our actual launch,
including working with customers who were ready to go into production. We worked with NASA JPL
to port their open source rover to RoboMaker and to Ross. One of my favorite customers because of
the nature of the robot and how they're using the cloud is Leah by Robot Care Systems. Leah is a walker robot for the elderly or the disabled.
It's not something you would think of as a robot, but it's running Ross. It computes,
it senses, it takes action. And in the case of Leah, they actually have added our cloud services
for Polly and for Lex so that the customer can actually call the walker to them
from across the room with their voice. Leah will respond, come to the patient, interact with the
patient in the most natural manner through voice, but they're also streaming telemetry off of the
walker through Kinesis Data Services so they can actually understand the gait of the patient,
their walking rate, how much activity they've done. Doctors can have dashboards that monitor their patient. If they feel the patient's recovering, they can
actually build predictive models with that data and predict when the patient's going to recover, or detect if there's a negative trend and they need to intervene and take action.
It's changed their business. It's changed the value prop they offer to their customers. And
it's a great example of how RoboMaker, the cloud, can actually complement robots
in houses.
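For the voice-response side of a robot like that, the Polly call is about this small. The voice, phrasing, and output handling here are my own illustrative choices, not details of how Robot Care Systems built Leah.

```python
# Hypothetical sketch: have the robot answer the patient out loud using Amazon Polly.
import boto3

polly = boto3.client("polly", region_name="us-west-2")

def speak(text: str, out_path: str = "/tmp/reply.mp3"):
    """Synthesize a short spoken reply and write it where the audio player can find it."""
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna",              # any available Polly voice
    )
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())
    return out_path

speak("I'm on my way over to you now.")
```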
When someone is looking through the vast, vast, vast list of various AWS services, and
they come across RoboMaker, which, again, props to the name, that's evocative.
And let's say they're a new grad.
They've graduated from college yesterday, and now they're figuring out what they want
to do with their career.
I'm told it doesn't quite work that way anymore,
but let's pretend that, oh, wait, you mean I need to get a job?
Here we are.
And they see that.
Is there an easy on-ramp for this service
for someone who is puttering around at home for fun?
Is there a DeepRacer-style equivalent or DeepRacer itself?
Is this something that is going to be useful to someone who is not part of a larger organization?
Or do you generally need to already have a number of prerequisites before this starts to add value?
Yeah.
First off, when one looks at ROS and the educational materials that are available, they immediately have access to this.
And in fact, the University of Cambridge in the UK is using RoboMaker right now to teach their robotics class. We have an
educational outreach program that includes 15 universities and higher-education institutions that are using RoboMaker to teach robotics to their students. In addition to that, once you launch RoboMaker and open it up, lo and behold, you'll find that it's actually used to train DeepRacer to learn how to race around a track. We have about a half dozen sample applications today, with more soon to come, where we have the source code and we walk you through how we built the application. And in fact, the TurtleBot, which is the most widely used robot for education and for hobbyists, all of these applications run on that robot. So a customer can actually deploy one to the robot in their living room, see it execute the program, change the program, and see the TurtleBot's behavior change as well.
So we've tried to make a number of resources available, and we have more to come, which I can talk about later.
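And the "deploy it to the robot in your living room" step is itself just an API call against a registered fleet. A rough sketch with boto3 follows; the fleet and application ARNs, version, and package names are placeholders.

```python
# Hypothetical sketch: push a new robot application out to a RoboMaker fleet.
import uuid
import boto3

robomaker = boto3.client("robomaker", region_name="us-west-2")

response = robomaker.create_deployment_job(
    clientRequestToken=str(uuid.uuid4()),   # idempotency token
    fleet="arn:aws:robomaker:us-west-2:123456789012:deployment-fleet/living-room-fleet/1",
    deploymentApplicationConfigs=[{
        "application": "arn:aws:robomaker:us-west-2:123456789012:robot-application/hello-turtlebot/1",
        "applicationVersion": "1",
        "launchConfig": {
            "packageName": "hello_turtlebot",   # ROS package on the robot
            "launchFile": "deploy.launch",
        },
    }],
)

print("Deployment job ARN:", response["arn"])
```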
With the understanding that forward-looking statements, et cetera, et cetera, if we take a look back at some of the early launches of AWS, where an awful
lot of what was announced made absolutely zero sense. You're an online bookstore. Why are you
announcing a queuing service or this thing called an object store that none of us had ever heard of?
And now, a decade and change later, we're looking back on that and seeing, okay, yeah, this was used to build an awful lot of transformative, amazing
things. And it's never quite clear how much of the world today you folks saw coming back when
this stuff was launched. So in the context of RoboMaker, do you have a vision 10 years out
from now or however long it is where we're going to be looking back and this was the most obvious thing
in the world to build, but needed to get to a certain place. And now it empowers something
transformative and grand, or is this effectively aimed at today's customer requirements or both?
Yeah. I think that's part of Amazon's culture of being customer obsessed and invent and simplify.
I can assure you that every feature of RoboMaker was derived from talking with customers with actual real pain points today, both within the company, but also outside the company.
And we're already consulting with companies now.
Now that we've built this service, what other new features can we add for you to enable you to do more with it?
So again, I think the reason these services become more valuable, more viable, is not because of how they started, but how they evolved working with customers. As their needs evolve, as their requirements evolve, as new applications of robots, in our case, evolve, we will evolve with them. A common refrain from Amazon is that collectively, as a company,
you are, and I quote, willing to be misunderstood for long periods of time.
If you look right now at the feedback you've gotten since launch,
how people are using this service, how people are talking about your service,
how do you see that RoboMaker is potentially being misunderstood today? Yeah. So developers have not had access to cloud services to take advantage of, both for fleet management and for augmenting the capabilities of their robot. So it is foreign to roboticists who have not had access to this capability, and we do find ourselves leading a dialogue with them about how we use
cloud services to coordinate the robots in our fulfillment centers, how other companies
are using cloud services to program and control robots that are out in space hurtling towards
new planets. And so it is a little bit of an education of what the possibilities are,
but then also listening to what new services we should build.
So I do believe that's the most interesting space.
It's that partitioning of functionality between the edge and the cloud and how it can complement
their capabilities.
One of the more, I guess, signature attributes of AWS has been that when you wind up launching
a service, even if it's one that doesn't seem to make sense, doesn't wind up seeming to have a market,
it never gets turned off.
And every service you launch has customers
to my understanding, but
APIs are almost perceived as promises
from you folks. I feel like
I can wind up taking this recording of our
conversation and archive it, and
in 50 years, my descendants will be able to listen
to it, and they may laugh
at an awful lot of how naive the conversation was, etc., etc.
But that service is still going to be there.
There are very few companies I would take that bet on, particularly in the technology space.
But it seems to me that whenever something goes GA from AWS, I have remarkably little hesitation in recommending that people build their business on top of that service.
The counterpoint to that is APIs are forever, for better or worse.
Are you starting to see ways for the API to evolve?
Have you gotten to a point, and you don't need to be specific on this,
where now that you've seen, even in the few months that it has gone GA,
that you would have made different decisions in how the service is interacted with, how it interacts with other services, or alternately, are you seeing ways
to expand this far beyond where it is today and start embracing other AWS or third-party services
that at launch you hadn't really considered using? So we don't have any crystal balls that tell us
how an API is going to hold up over time, but we do know... I was hoping I could borrow it if you did.
But we do know we have customer trust and customers will actually take a dependency
on our API, build their application on our API. And we can't have the hubris to think that we
can simply change an API and break those customers. So we try to think very deeply and very carefully about the functionality of an API: is it as simple as possible? Is it as cross-cutting as possible?
Because you can always add new APIs
with different functionality over time,
but you never want to deprecate an API
for the fear of breaking potential customers.
So there's a thought process that goes in there,
but there's also an obligation to keep the API as it is.
You can always add new APIs with new functionality.
And again, a lot of that is, if you start with customers and beta programs, you know they're deriving value from it.
You know that API is going to continue to add value in the ecosystem.
That doesn't mean we're not going to add more as we see additional ways of exposing functionality in a simpler or more powerful form for our customers.
But there is that commitment that we will continue to support the APIs we have exposed. As you take a look across the landscape of other AWS services, at launch, you mentioned that there were a bunch of very high level, very forward thinking services that RoboMaker integrated with and also CloudTrail.
And I'm wondering if you take a look across the ecosystem of various AWS services, are you seeing opportunities to integrate with different services that weren't necessarily there at first?
And there are some ridiculous answers to that.
Yeah, we want to make sure that the robot can speak appropriately to Cost Explorer.
Sounds like something that not a lot of people would be clamoring for.
And then, of course, I tend to make no predictions about anything AWS does.
There's nothing I'm saying that will never happen.
For all I know, there's a huge customer that you can't tell me about that's already doing a lot of work with robots and Cost Explorer, though I can't
imagine what that would look like. So obviously we were very excited about integrating RoboMaker
with Polly and Lex so customers could have a more natural interaction with it. Pleasantly surprised
to find out later that CloudWatch turns out to be one of the most commonly used services because
customers want to know what the heck is going on in my robot. Where is my robot at? And so when you start
to see services like that, that expose meaningful value, you start looking at others. SNS, which would allow me to stream messages off my robot, or maybe send a notification to my robot, is one we're looking at right now. The ability to actually put an agent on a robot and update the operating system on the robot, much as you would an EC2 instance, is something we're seeing demand for. So again, we start to think about the pragmatic nuts and bolts
about actually managing a robot, where it's at, what its telemetry is. We see new services that
we're going to be integrating over time. I was pleasantly surprised to see, when we launched alongside DeepRacer, where another team is using reinforcement learning to train a car, the interest and response we've gotten from companies that say,
I'd love to use reinforcement learning to actually train my robot to do new behaviors,
and we need a deeper and richer integration with that.
So again, I think in the fullness of time,
we'll be both building new services for fleet management,
but integrating even more AWS services into robots.
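To give a concrete flavor of the CloudWatch point above, emitting "what the heck is going on in my robot" as custom metrics is only a couple of calls. The namespace, metric names, and values are invented for the sketch.

```python
# Hypothetical sketch: publish robot health telemetry as CloudWatch custom metrics.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

def report_health(robot_id: str, battery_pct: float, wheel_temp_c: float):
    cloudwatch.put_metric_data(
        Namespace="Robots/Fleet",              # placeholder namespace
        MetricData=[
            {
                "MetricName": "BatteryLevel",
                "Dimensions": [{"Name": "RobotId", "Value": robot_id}],
                "Value": battery_pct,
                "Unit": "Percent",
            },
            {
                "MetricName": "WheelMotorTemperature",
                "Dimensions": [{"Name": "RobotId", "Value": robot_id}],
                "Value": wheel_temp_c,
                "Unit": "None",                # CloudWatch has no Celsius unit
            },
        ],
    )

report_health("robot-042", battery_pct=87.0, wheel_temp_c=41.5)
```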
CloudWatch is one of those, I guess, personal hobby horses I have.
But credit where due, that service has been evolving rapidly over the last few months,
and it's modernizing at a very interesting rate.
A lot of the challenges historically that were there are no longer there now,
and I'm sure even fewer by the time this episode airs.
So I want to be very clear that was a joke.
That was not an actual criticism of the service.
One interesting aspect of this is the idea that you mentioned
with Leah, the robot that walks around and integrates with various other services. It
seems like this is almost a straight shot play for some of the various Alexa services out there as
well, where this winds up being able to empower different modes of interaction with existing
things, both around the home as well as in the workplace. It feels to me like, and I can't even articulate how, but this is a glimpse of a future where working on a computer no longer looks like sitting there typing into a terminal or an editor. It starts to look a lot more like a conversation, where you give a series of instructions and things start happening in the real world.
It feels like a number of things I've never spent a lot of time going into on the AWS side that
interface with the real world, things that I try not to deal with as best as possible.
IoT is an example of this as well, where it starts to hint at a future I can start to see the edges
of, but can't quite figure out what that's going to look like. Yeah, it is. Again, if we think about robots in their most general sense,
they sense, they compute, and they act. And how many devices do we have to interact with today
that do that for us? And think about the interface we have. I'm confounded by my dishwasher. I can
spend a half hour trying to get the darn thing to actually do the right load. What if I could walk up to it and tell it exactly what kind of load I wanted to run and what time I wanted it to start? And I could do the
same with other appliances throughout my house, which in fact are robots. What it's really
surfacing is not necessarily going to replace developers, but a more natural way of interacting with these devices, which are in fact robots. And again, it's about how we interact with our edge devices, and how we program and manage them. So I think that's an exciting future.
I would absolutely agree with that assessment.
One thing I will point out,
and I expect you won't have anything meaningful to share with me,
you are not the GM of RoboMaker.
You are the GM of AWS Robotics.
And on the one hand, I feel like this might wind up being a story
similar to Ground Station, which is in its own category called Satellite. Either there's about to be a whole lot of
interesting space releases, or it just didn't really fit into any other existing categories.
Is this an area that you see as ripe for expansion? Or is this
more or less a, well, we didn't really know where else to put the robot thing and we're done?
We think this is an area of great innovation and great opportunity for the years ahead.
So much like a naming exercise for a product, we applied the same naming exercise for our service and our team,
not wanting to be locked into any single definition of what the team does or owns.
Thinking in the fullness of time, there will be other services; we're already talking and thinking about, and validating with customers, what those services might be.
But it's clearly a new category of emerging technology,
so we should be prepared to build and manage services
for our customers that are building robots
out in the real world.
Thank you so much for taking the time
out of your day to speak with me.
This is an exciting space
and I'm very interested to see what comes next.
It's been fun talking to you today.
Thank you.
Thanks very much.
Roger Barga, General Manager of AWS Robotics.
I'm Corey Quinn, and this is Screaming in the Cloud. This has been a HumblePod production.
Stay humble.