Screaming in the Cloud - Episode 6: The Robot Uprising Will Have Very Clean Floors
Episode Date: April 18, 2018

How many of you are considered heroes? Specifically, in the serverless Cloud, Twitter, and Amazon Web Services (AWS) communities? Well, Ben Kehoe is a hero. Ben is a Cloud robotics research scientist who makes serverless Roombas at iRobot. He was named an AWS Community Hero for his contributions that help expand the understanding, expertise, and engagement of people using AWS.

Some of the highlights of the show include:

- Ben's path to becoming a vacuum salesman
- History of Roomba and how AWS helps deliver current features
- Roombas use AWS Internet of Things (IoT) for communication between the Cloud and robot
- Boston is shaping up to be the birthplace of the robot overlords of the future
- AWS IoT is serverless and features a number of pieces in one service
- Robot uprising of clean floors
- AWS Greengrass, which deploys runtimes and manages connections for communication, should not be ignored
- Creating robots that will make money and work well
- Roomba's autonomy to serve the customer and meet expectations
- Robots with Cloud and network connections
- Competitive Cloud providers were available, but AWS was the clear winner
- Serverless approach and advantages for the intelligent vacuum cleaner
- Future use of higher-level machine learning tools
- Common concern of lock-in with AWS
- Changing landscape of data governance and multi-Cloud
- Preparing for migrations that don't happen or change the world
- Data gravity and saving vs. spending money

Links:

- Ben Kehoe on YouTube
- AWS
- AWS Community Hero
- AWS IoT
- Ben Kehoe on Twitter
- iRobot
- AWS Greengrass
- Shark Cat
- Medium
- Boston Dynamics
- AWS Lambda
- AWS SageMaker
- AWS Kinesis
- Google Cloud Platform
- Spanner
- Kubernetes
- Digital Ocean
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is generously sponsored
by DigitalOcean. I would argue that every cloud platform out there biases for different things.
Some bias for having every feature you could possibly want offered as a managed service at
varying degrees of maturity. Others bias for, hey, we heard there's some money to be made in the cloud space. Can you give us some of it? DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of
mine who are avid DigitalOcean supporters about why they're using it for various things, and they
all said more or less the same thing. Other offerings have a bunch of shenanigans with root access and IP addresses. DigitalOcean
makes it all simple. In 60 seconds, you have root access to a Linux box with an IP. That's a direct
quote, albeit with profanity about other providers taken out. DigitalOcean also offers fixed price
offerings. You always know what you're going to wind up paying this month, so you don't wind up
having a minor heart issue when the bill comes in.
Their services are also understandable without spending three months going to cloud school.
You don't have to worry about going very deep to understand what you're doing.
It's click button or make an API call, and you receive a cloud resource.
They also include very understandable monitoring and alerting.
And lastly, they're not exactly what I would call small time. Over 150,000 businesses are using
them today. So go ahead and give them a try. Visit do.co slash screaming, and they'll give
you a free $100 credit to try it out. That's do.co slash screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Hello and welcome to Screaming in the Cloud.
I'm Corey Quinn.
Joining me today is Ben Kehoe, who is currently a cloud robotics research scientist at iRobot, in many circles better known as the Roomba company.
Welcome to the show, Ben.
Hi. Glad to be here. So you've been involved in the AWS ecosystem
for a fair bit of time.
In fact, I believe a year or two ago,
you were named an AWS Community Hero.
That's right.
What is that?
So a Community Hero is,
it's a program that AWS has to recognize people
who are contributing to the community around AWS. So expanding
the understanding, the expertise, the engagement of people with AWS. And it's exciting. You know, I really like facilitating people's understanding of AWS and their interactions with AWS, and amplifying their voices so that AWS hears the masses more
clearly. Gotcha. So did that come as an outgrowth of your work at iRobot? Did it come through your
work on other projects? I mean, how did that, what's your phone just ring one day and, hi,
it's Amazon. We've got this thing we'd like to talk you into doing. Yeah, well, it was pretty
much that. And I think the seed of it was the interaction with AWS at iRobot, where we transitioned our robot fleet to use AWS IoT as our cloud connection mechanism. It grew out from that with my Twitter account
and my interactions with them
and my interactions with the rest of the community,
both other serverless users especially,
and just the broader Twitter community around AWS.
Gotcha. So help me through this a little bit. You're effectively at this point known as a
cloud-slash-serverless guy, which makes sense. But when I ran into you at reInvent last year,
you convinced me to go ahead and buy a Roomba. I did this as a favor to you. I figured I'd try it,
it wouldn't work, and I would return
it quickly. Instead, it serves two purposes. One, my floor is far cleaner than it ever was before
I had this thing, so it's become indispensable. And secondly, as an added bonus, it terrorizes
my awful little dog every time it starts with a little chime. She starts barking and invariably
goes to the wrong part of the house because she's cute, not smart. So what I'm trying to understand is how do you go from
being this cloud slash serverless guy to tying that back to the robot vacuum that I don't have
to think about? Well, first of all, I'm glad that you decided to purchase the Roomba. Our CEO is a roboticist and talks about having to learn in the early 2000s to become a vacuum salesman. Serverless cloud stuff is a big part of what I do.
There's also robotics in there.
And there's also like smart home IoT things in there.
So it's kind of a mishmash of a lot of stuff.
But, you know, in undergraduate, I was physics and math and I worked as a theatrical carpenter.
I worked for a big enterprise IT contractor for a while.
And then I went to grad school for robotics.
And I started out there doing unmanned aerial vehicles, which are a different kind of cloud robot.
And then halfway through that, the funding got cut.
This is a thing that happens to grad students.
And I switched to starting to think about how could we leverage cloud computing to enable robots to do more and better things.
And then finished my PhD in 2014, came to iRobot in the midst of a transition period for our connected robot at the time.
And that helped transition into the AWS and serverless realm.
Okay, which makes sense.
But credit to your salesmanship, you know a sucker when you see one.
So you sold me on one of the upper line of Roomba devices, and that's great.
The Roomba 980, yep.
That would be the one.
However, Roomba's been around for 10 years.
15, actually. My apologies, even better.
So back in 2003, AWS wasn't a thing. So there's obviously been a series of, I guess, evolutionary
steps as these things continue to evolve. What did they look like originally? And what, I guess,
what do they do now that they didn't once upon a time? And how has AWS helped with that?
Yeah, so there's a few different pieces.
One is that, you know, when you look at a connected robot, you can get telemetry back from it. And
I mean, I don't know about you, have you ever sent in one of those registration cards where they
have a little survey about how you use your robot or any product? Yes, I have. You really have. You
were like the first person I've ever met. At one point, they offered this raffle that I'm pretty sure didn't exist for some company.
So I did that a couple of times.
So you are a sucker.
Yeah, absolutely.
Okay, okay.
Say something authoritatively, and I will do exactly what you tell me to do.
Yeah.
So most people are not like that.
And so for a long time, we've had passionate users who care about their robots.
And we know that they lasted a long time, but we never knew how long.
We never knew, are people using them in the ways we expect?
Are our batteries sized correctly for the size of people's houses?
And when you start having a connected robot, you can start to get that information back and understand better how your users are interacting with your product and then make it better for them.
So that's one aspect. The other aspect is, if you look at a Roomba, it doesn't have a screen on it, but you want to be able to program it to do the things you want it to do: schedule it, change settings about how it cleans, view the information that it's generating. Without a screen on the robot, you can't do that. Setting
a schedule on one of the non-connected robots is an exercise in pushing a lot of
combinations of four different buttons. With a connected robot, you just open up your app
and you're using that screen and you can have a nice experience right there.
So these are the benefits that come with a connected robot. And we launched our first connected robot, the 980, in 2015, and we're now
connected through the whole line. The benefit of the high-end ones is that they also perform
systematic navigation. So they use robotics algorithms that allow the robot to tell where it's been
and where it's going, and that helps it map out the space,
which again, because you have that cloud connection,
you can show that map to the user.
And we're now, you know,
we've just announced a beta around
showing you the Wi-Fi signal strength that it sees
as it moves around your house.
So you can identify dead spots.
All of that is enabled by cloud connection.
Wonderful.
I'll even take it a step further,
and I already made sure the one in this room was set on mute, but I very rarely play with the app anymore. Ignoring the schedule,
I even will just sometimes say, Alexa, tell the Roomba to start cleaning, and that works.
And a fun fact about that, this is one of my favorite pieces, is so we use AWS IoT to communicate
from the cloud to the robot and vice versa. And the connection there is
low enough latency that when you say that, the Roomba will start playing its little noises before
Alexa even responds. Fascinating. So I just said that, well, granted, Alexa is on mute in my office,
but the Roomba is sitting right next to me and it did not start playing when I said that. That would
have been hilarious if it had. So there is still that communication that has to happen, or is there
more to it than that that I'm not seeing? Well, I'm saying, no, I'm saying that Alexa tells us
that you want your Roomba to start. Ah, okay. We're able to deliver that down to the robot
faster than, after we return, Alexa packages up its text-to-speech, delivers it down to the device, and plays it back to you.
And that's thanks to AWS IoT.
Oh, I see what you're saying.
Yes, it does start playing before Alexa starts responding to me.
I was very confused for a second there.
I understand that these are little robots, but I have trouble imagining the use case for putting microphones in it. You'd wind up with a combination of the vacuum noise itself and, as previously mentioned,
my obnoxious barking dog who follows it around barking angrily. It's great. She weighs less
than the Roomba does, which really is just, it's a wonderful experience for everyone involved.
Yeah. So I guess you're based in Boston, correct? That's correct.
Wonderful. So I feel like there's an opportunity there because I believe Boston Dynamics is also based there,
which given the name. That's true. So it seems like Boston is really shaping up to be the
birthplace of the robot overlords of the future. Well, yeah, I mean, robotics happens primarily
in two centers in the US: Boston and the Bay Area.
Boston Dynamics is here, and they make very slick robots.
I don't think you'll see them taking over the world anytime soon.
They're very good mechanically, and they're good at very specific things, which is true of all robots.
They're good at very narrow use cases, and you put them outside of that, and they tend to fall over.
Yes, I've had mixed results attempting to release the Roomba into the wild.
Experiments continue to be ongoing.
So you mentioned AWS IoT a few minutes ago.
And I historically have a background on the web app side of things.
I can talk about EC2 until I'm blue in the face for my sins. But I don't know much about what AWS's IoT offering is. In a nutshell, what is that?
So AWS IoT is actually a number of different pieces in one service. It's a PubSub system that delivers messages over MQTT and WebSockets.
And it's a rules engine that can take messages that are in that PubSub system
and deliver them out to Kinesis or Lambda or other web endpoints.
There's some asynchronous communication mechanisms and storage involved in there.
And then it also involves authentication and authorization mechanisms for connecting devices to the internet. Because unlike all the rest of AWS services, your device, your IoT device, is probably not going to have the same kind of AWS credentials that even a, say, Cognito user might have.
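To make the PubSub piece concrete: AWS IoT routes messages by MQTT topic, and subscriptions use the standard MQTT wildcards, `+` for exactly one topic level and `#` for everything remaining. Below is a simplified sketch of that matching logic; the topic names are invented, and it glosses over some edge cases in the MQTT spec (for instance, `#` also matching the parent level):

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if an MQTT topic matches a subscription filter.

    '+' matches exactly one topic level; '#' (valid only as the last
    filter level) matches all remaining levels.
    """
    flevels = topic_filter.split("/")
    tlevels = topic.split("/")
    for i, level in enumerate(flevels):
        if level == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(tlevels):
            return False         # filter is longer than the topic
        if level != "+" and level != tlevels[i]:
            return False         # literal level mismatch
    return len(flevels) == len(tlevels)
```

With semantics like these, a backend could subscribe to a filter such as `robots/+/status` and receive status messages from every robot while ignoring other telemetry topics.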
Okay.
And that's, of course, just our connectivity layer.
Behind that is where we build all our application logic.
And AWS IoT itself is serverless, right?
There are no provisioning knobs to tweak on it.
You pay for what you use.
But then our application that sits behind it is also completely serverless.
And I think we're up to 30 AWS services in production now.
That's how many AWS services we use to deliver our production applications.
That's over a third of them.
And given how far out into the woods some of those services are, that's almost impossible to wrap my head around.
It keeps life interesting. No, and it's one of those things as well where if I take a step back and imagine when I was a kid what the future would look like, I might have predicted a robot uprising, potentially. But I would never have guessed, for example, that the robot uprising will have very clean floors.
It was one of those things where, oh, a robot and cleaning the floor was never something that made sense.
I mean, until I owned one of these, it sounded like a ridiculous thing for people with too much money.
But it works.
It's one of the more astonishing and, I say, pleasant consumer experiences I've had, where it over-delivered
above my expectations in virtually every regard. Well, thank you. And that's no small feat. I tend
to be relatively demanding and cynical as a personal failure mode. Oh, I was going to say,
you know, we have some Easter eggs in our Alexa skill, where you can ask, you know, you can ask Roomba to give the cat a ride,
you know, in homage to Shark Cat. And you can ask, I'm forgetting the other ones. There's a number of
them. I wanted one of them to be Alexa, ask Roomba to take over the world. And for the response to
be, I'm sorry, I can't do that yet. But unfortunately, that didn't make the final cut. Generally, PR and legal tend to want to weigh in
on things like that. Yeah. So one question I have for you is a service that I think the world is
taking relatively lightly, or at least ignoring it for the most part. And that is AWS Greengrass.
For those listeners who aren't aware, that fundamentally acts as deploying Lambda runtimes to edge devices. And by edge, I mean out in the world, not at CloudFront POPs, where you effectively get a full Lambda runtime environment and it can execute Lambda functions in response to certain triggers on embedded devices. That's actually only half the story because the other half of
Greengrass is a local MQTT broker that helps the communication between those lambdas on the device
and between devices, as well as up to the cloud. So if you're using Greengrass with AWS IoT,
Greengrass can manage your connection to AWS IoT sort of transparently
to all the code that's running on your device. And having that pub-sub system locally also enables you to make all of the Lambda code that you're writing to run locally run event-based. Okay. Is that something today that the Roombas are using themselves? For example,
if I have three Roombas in my house, do they start communicating between each other? Do they start,
and is that powered by Greengrass or is that not? So we don't use Greengrass. Okay.
Greengrass is an interesting, I think the real power of Greengrass is that it enables a company
that has a lot of experience in cloud development
to move on to devices without a lot of the pain of learning how to deal with devices.
And iRobot doesn't have that problem. We know how to do devices. It enables some firmware update
use cases that are interesting.
It helps you package up your code and send it down. It gives you that familiar code environment, that execution environment that you have cloud-side, and you can reflect that on devices.
And then it helps, especially in cases where you have a gateway.
So there's a notion of a Greengrass core, and that's the MQTT broker
that provides the communication with the cloud. And then you can have multiple Greengrass devices
that are talking with this core and sharing the communication mechanism between it.
So that's sort of the master, the broker, that core device, which is helpful when you have that sort of star topology among
multiple devices. If you look at something like multiple robots in a home, you need each robot
to be fully autonomous and operating in federation rather than as a star topology. There's not one
leader or gateway. And so Greengrass doesn't support that use case today as robustly as it
does some of these other use cases.
Got you. So to that end, as mentioned, you're a 15-year-old company. I'm assuming that-
We are a 25-year-old company.
Oh, my apologies. It took 10 years to get the robots shipping.
It took 10 years to figure out what robots were going to make money.
So for the first 10 years of iRobot's existence, it produced a number of different robots from space exploration to underwater to oil well robots, going down oil wells to determine what's wrong with stuff.
There's some really creepy dolls that got made.
And then in 2001, our defense business produced the PackBot,
which was our first broadly successful robot for bomb disposal.
And so that made us some money.
And then the Roomba came out in 2002, and then that was also successful.
And between then, that sort of powered the company for a long time as we explored other businesses and such.
In the past couple of years, we've spun out the defense business and become exclusively focused on consumer. But in the history of iRobot, it's also just like my long and winding journey.
iRobot has taken a long and winding journey to the point we're at today.
Okay. Back 15 years ago, when the vacuum robots started showing up, AWS wasn't a thing. I'm
assuming that there was another cloud provider,
there was a data center build-out,
or were the robots back then entirely self-contained?
So Roombas were not connected until 2015.
Okay.
And Roombas have always been entirely self-contained.
And so even today with the cloud connection,
if the cloud goes down,
it's going to run on the schedule
that you set it to. If, you know, it's in the middle of a firmware update and you press the
clean button, it ditches that and goes and cleans for you. So it's always going to have that autonomy
to serve the customer in the way that the customers expect. Now in that, you know, in the time that iRobot has been around, there are networked robots. And even
we had telepresence robots that used a cloud connection for that telepresence and remote
driving capabilities. And so there's a lot of learning that we had around what happens when
you give a robot a network connection. And then in the lead up to the launch in 2015, we started developing that capability and taking
that learning to our Roomba products. In 2015, the landscape was slightly different than it is
today. But I know that in 2015, it was brand new. Oh, absolutely. But here in 2018,
if you're selecting a cloud provider, AWS is not necessarily a slam dunk anymore. There are a
number of very competitive offerings from the other major players in this space. Did you folks
look seriously at other providers, or was it always AWS was the clear winner?
So this was at launch of our first connected Roomba.
This story is also slightly complicated.
At launch of our first connected Roomba, we had a full solution IoT cloud provider who was sort of a turnkey solution that managed the communication, the firmware update,
all of the pieces for us. But it wasn't going to scale.
We found that out. It wasn't going to scale to the volumes that we need because we sell
a lot of robots. And it didn't have the extensibility.
So doing an Alexa integration with it would have been very difficult.
And so in 2015, we determined that we're going to move off of them
and went through a selection process for that connectivity layer.
And we landed there with AWS IoT.
And we also knew that we wanted to start to own the application
so that we would own the extensibility of it.
And we knew we wanted to build that on AWS because in 2015 and even today, the range of AWS
offerings gives them an advantage, whether it's bigger or smaller than it used to be
as when we were looking at it. In 2015, Lambda was very new. Serverless itself was brand new. The serverless framework was still called JAWS. But we decided in building this that it wasn't in our interest to have to build, learn to build, maintain, own, deploy server-based infrastructure for an elastic cloud IoT application to support the volumes of robots that we sell.
And therefore, we decided to go all in on serverless
and say, we're going to build this around AWS IoT
and AWS Lambda and pull in services
and figure out how to make that work for us.
And that's been enormously successful
in both keeping the size of our teams, the costs, the development
time. All of that has been really benefited by deciding to go serverless.
Right. And that's what I find interesting about iRobot's position on a lot of these things.
I can envision the use case for IoT pretty easily. And conversely, on the serverless side of the spectrum,
I can see using Lambda functions to do some processing.
My podcast and my newsletter both are powered
by an obnoxious array of Lambda functions now.
I can see using it to inject static headers into CloudFront
because there is, in fact, no God.
And we have to do that dynamically instead of
statically like a reasonable CDN. I digress. But I still have a hard time wrapping my head around
the use case of a serverless approach to what is effectively an incredibly smart vacuum cleaner.
Can you distill that down a little bit? Sure. So if we're looking at processing when a cleaning mission finishes, the robot sends up a little report through AWS IoT, and that goes into, say, a Kinesis stream with a Lambda reading from it.
And when it gets that, it can store that in a place that the app can find and also dispatch a push notification to you if you've opted into those to tell you,
hey, look, your cleaning's done.
And doing that means that nowhere in there are we running Kafka to process those messages.
We're not using even RDS to store those reports.
And we definitely don't need
some auto-scaling EC2 application
to read off and glue that logic together.
It's a Lambda that contains basically just AWS SDK calls
dictating our business logic
of what we want to do with this piece of information.
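The mission-report flow described above can be sketched as a small Lambda handler reading from a Kinesis stream. This is an illustration, not iRobot's actual code: the report schema and field names are invented, and the side effects (storing the summary, dispatching the push notification) are left as comments so the glue logic stays self-contained and testable:

```python
import base64
import json

def parse_mission_report(payload: bytes) -> dict:
    """Pull out the fields the app needs from a raw mission report.

    The schema here (robot_id, duration_min, area_cleaned_m2) is
    invented for illustration, not iRobot's actual format.
    """
    report = json.loads(payload)
    return {
        "robot_id": report["robot_id"],
        "duration_min": report["duration_min"],
        "area_cleaned_m2": report["area_cleaned_m2"],
    }

def lambda_handler(event, context=None):
    """Entry point for a Lambda reading mission reports off Kinesis.

    A real deployment would also write each summary somewhere the app
    can find it (e.g. a data store) and dispatch a push notification;
    those AWS SDK calls are elided here.
    """
    summaries = []
    for record in event["Records"]:
        # Kinesis delivers record payloads base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        summaries.append(parse_mission_report(payload))
    return summaries
```

The point of the architecture is that this handler is the whole deployment unit: no stream consumers to host, no queue brokers to patch, just the business logic.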
Okay, by having that on-demand,
there's an obvious economic win
by not
having a bunch of idle servers sitting around. Is there any other advantage that a serverless
platform brings to the table? Oh, yeah. So, I mean, there's the direct cost of your AWS bill,
which, you know, can go either way, right? You avoid paying for idle capacity, but at the same time, there are use cases where you can be highly optimized and use EC2 in a way that your bill would be lower than Lambda.
But the hidden cost is your operations burden.
How many people do you need to deploy and run this? You know, with the millions of robots we sell a year, we only need, you know, single digit FTE operations to manage that entire application that handles all of the data and functionality that our connected robots do.
Which would not be possible if we had a server-based architecture.
We would need a lot more people to make sure that everything was going smoothly and that all of our servers
were patched, et cetera. And then on top of that, the development time becomes very low
once you get good at it. Because all the code you're writing is just the bits to glue your
infrastructure together and the code that just says, I want this to move here and this other thing to move here,
you're writing so little code and it's directly feature-based rather than some
infrastructural notions that don't relate to what you're doing as a business. It means you can churn
out those features very quickly. Which makes an awful lot of sense for a number of use cases. It's fascinating to me, not just the variety of use cases that Lambda and its brethren get put to,
but how these use cases tend to cross into so many different areas of technology and of different types of platforms
that just would not have occurred to me until someone mentions, hey, this is this thing that we're doing.
So looking a little bit to the future, within that list of 38 services, are you starting to play with any of the higher-level machine learning tools?
Yeah, I mean, I think we're looking at, you know, whenever an AWS service release, the question is, oh, is it useful to us?
Can we use it? And when you look at a service like SageMaker for developing machine learning models, it certainly is attractive in reducing the overhead and the amount of infrastructure you need to own. I'm actually even interested, so you can bring
your own. SageMaker includes a lot of functionality for machine learning algorithms, for training machine learning algorithms.
But you can also bring your own algorithm, where you provide a Docker container, and then it will run that on all the data you input, which makes SageMaker interesting as a bulk processing tool, especially
when you look at its hyperparameter optimization. So if you need to run something on a combinatoric
piece, you can just use that to help farm out all of the different things that you need to do.
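The combinatoric fan-out mentioned here is essentially a grid expansion: every combination of hyperparameter values becomes its own training job. A minimal pure-Python sketch of that step (SageMaker's actual tuning service works differently, and also supports smarter search strategies; this only shows the shape of the fan-out):

```python
from itertools import product

def expand_grid(param_grid: dict) -> list:
    """Expand lists of hyperparameter values into one config per job.

    Each combination becomes a separate training-job configuration,
    which is the fan-out a managed tuner performs on your behalf.
    """
    keys = sorted(param_grid)
    return [dict(zip(keys, combo))
            for combo in product(*(param_grid[k] for k in keys))]
```

For example, `expand_grid({"lr": [0.1, 0.01], "depth": [3, 5]})` yields four job configurations, one per combination.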
So in addition to looking at the on-label uses of a given AWS service, we're always looking at, is there something else? Could we use
it for another gap that we have where we have a pain point and we could make this service,
you know, bend it to our will to perform this other task? Speaking of, a common concern that
is raised by companies that are doing interesting things in the entire cloud space is often the idea of lock-in gets raised.
With your level of AWS services, I get the sense that it almost doesn't matter what other cloud providers do or even what AWS does.
It feels to me, based on the story that you've told, that you're locked
into AWS come hell or high water. Is that accurate? And if so, is that a concern?
So I think if you're looking at something like machine learning,
the primary lock-in that you get with any cloud provider is data gravity. And so if you consider running a given service
on one cloud provider and hooking it to a different service
on another cloud provider,
you're paying for the bandwidth cost
to send the data between them and the latency.
And I think that alone is a big obstacle
to multi-cloud architectures being economically and functionally viable.
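As a back-of-envelope illustration of data gravity: the rate below is an assumed internet-egress price for illustration only, not a quote from any provider, but it shows how cross-cloud bandwidth cost scales linearly with the data you move:

```python
def monthly_egress_cost(gb_per_month: float, usd_per_gb: float = 0.09) -> float:
    """Rough cost of shipping data between cloud providers.

    The default $0.09/GB is an assumed egress rate; real pricing is
    tiered and changes over time.
    """
    return gb_per_month * usd_per_gb
```

At that assumed rate, streaming 10 TB a month between providers runs on the order of $900, before you account for the latency penalty of the round trip.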
So I think it's more about making sure, you know, if you're evaluating, if you're looking at SageMaker versus some of the machine learning services that Google has, right, its suite of deep learning training. You can evaluate both,
well, what's going to be in isolation the best performing? What's going to get me the best model?
What's the easiest thing? And then you look at, well, what's the cost going to be to hook this
all together? And then you have to weigh those two things. I don't think anybody gets locked in as, we should use it just because it's here, as opposed to what's the total cost of ownership of using something that's outside of your primary cloud vendor.
At the same time, I don't think lock-in is so bad. The way that cloud pricing works
between the big cloud providers,
it's much more public.
And so it's subject to market pressures
in a way that enterprise software agreements
in the past haven't been.
And your ability to get your data in and out
is kind of up to you, right?
You can store it in whatever format you want.
You can make it portable.
I think the cloud events specification that's coming out of the Cloud Native Computing Foundation
is going to help with interchange of information between cloud providers,
which I think ameliorates the primary concern of cloud lock-in, which is
these services only work with all these other services. And so I'm not going to be able to,
you know, if I'm using Kinesis, I can only use Lambda to process it. And with the cloud event
spec, you know, in theory, you'll be able to ship those off and run Azure functions based on your Kinesis stream.
In reality, I don't think anyone's actually going to do that, but it will make people sleep better at night.
Because while I believe that the fact is that lock-in is not that big a deal, the fact that people worry about lock-in is itself a big deal and needs to be addressed.
Does that make sense?
Very much so.
I've been accused at various times, in some cases by the same people, of being an AWS
partisan to the point of being a fanboy, whereas I've also been nominated for the position
of AWS community villain.
So there's sort of a spectrum on that.
My approach has always been that once you pick a vendor,
it's somewhat alarmist and unnecessary
to avoid tying into the higher level functions.
As long as you have a theorized exodus strategy,
you're mostly fine.
Now, that does mean that, for example, if you're
building your entire application architecture around something like GCP Spanner, which is a
world-spanning, ACID-compliant database, which, as far as I can tell, works on magic, that is a form
of lock-in in the sense of, if you have to leave GCP for some reason, there is no clear-cut strategy to get out of that environment.
Yeah, at the same time, the changing landscape around data governance means that these world-spanning
databases, I think their utility is somewhat limited. But I completely agree. And you look at
the sort of alternative, which is, we'll build an abstraction layer over it, and then you could move to Microsoft Cosmos DB,
which is a similar global database.
The problem with doing that is that you lose
the particular aspects of the individual services
that make them special and make them powerful.
So I could make an abstraction for NoSQL databases
that would allow you to use DynamoDB
or Google's Cloud NoSQL and Azure DocumentDB,
but you wouldn't get to use global secondary indices,
which are a really powerful feature of DynamoDB.
And so you're limited to the least common denominator, which is not very good.
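A sketch of that least-common-denominator problem: if a portability layer exposes only the operations every NoSQL store supports, then a query on a non-key attribute (the kind of lookup a DynamoDB global secondary index would serve) degrades to a full scan. The class and field names here are invented for illustration:

```python
class PortableKV:
    """A least-common-denominator key-value store: only the operations
    every NoSQL provider offers (put, get by key, full scan)."""

    def __init__(self):
        self._items = {}

    def put(self, key, value):
        self._items[key] = value

    def get(self, key):
        return self._items.get(key)

    def scan(self):
        return list(self._items.values())

def find_by_attribute(store, attr, value):
    """Without secondary indexes, a query on a non-key attribute
    degrades to a full O(n) scan of the table."""
    return [item for item in store.scan() if item.get(attr) == value]
```

With an indexed store, that lookup would be a single cheap query; behind the abstraction, it touches every item.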
And you're just hamstringing yourself for a contingency that nobody ever faces, right?
So you never see people out there saying, we went multi-cloud and it saved our butts.
Or people saying, we didn't go multi-cloud, and it really bit us, and we learned our lesson. You hear people talk about it very differently. I do see multi-cloud in a couple of scenarios. One is
where there was a decision made at the outset to keep everything lowest common denominator,
if you will. And as a result, all of the higher level services are more or less closed off to
these shops. They tend to run on the things that are available everywhere.
Instances, load balancers, object storage, and a few other bits and bobs that generally tend to do a one-to-one mapping.
The other scenario where I see it is where there was a migration at one point, say from AWS to GCP or vice versa.
And the original plan was to move everything, but it turns out a couple
things are really hard to move for not a lot of benefit. So they plant a flag halfway
through, declare multi-cloud victory, and then move on to things that actually move the needle on
their business. I see more effort being put into preparing for a theoretical migration and
maintaining an agnostic layer than I've ever
seen into an actual migration, because they generally don't happen and they're generally
not world-changing. That's exactly what I'm saying. I think we're agreed on that. Yeah.
That it's kind of like, you know how there's a very good argument for why
cow tipping is not possible, right?
People talk about teenage pranksters tipping cows, but there's no videos of it on YouTube,
and therefore it's not possible.
And I find that to be very compelling, right?
Nobody's talking about it, and so it's probably not happening.
There may be some companies out there that succeeded and just don't want to talk about it. But on the other hand, people who do successful migrations like Spotify
get trumpeted, right? Google's out there saying, look, Spotify moved from AWS to GCP,
and it was absolutely great. So in those cases where somebody actually succeeded or needed to
do this, I think you would be hearing about people.
It's a story that I think people want to exist,
but I think you're right.
In practice, it doesn't.
The analogy I've always liked was,
it's astonishing how UFO sightings plunged
right around the time that everyone started carrying
a high-definition camera in their pocket.
Same argument, right?
It's a strange and different world out there,
and I think that companies are still trying to find their way. Historically, in data centers,
it was a lot easier to be agnostic because you're buying utilities that have become commoditized.
I don't care who my power vendor is. I don't care who my bandwidth provider is.
If one of them displeases me, migrating is not that difficult. Whereas in higher levels,
it falls apart. Well, and if you're on Kubernetes, right, you can get Kubernetes in a lot of different places,
and that makes you very portable. But the question is always, is that portability actually worth it
versus going further down the serverless spectrum where you're using higher level services
and doing less undifferentiated heavy lifting?
Which also gets back to your point earlier of data gravity,
where it's, yes, you can save 20 cents an hour on a workload
by having it done on a different provider,
but that workload has to siphon in three terabytes of data from another provider.
So you save 20 cents and spend dozens of dollars
to move the data where it needs to be.
That tends to be an economic non-starter in many cases as well.
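The data-gravity math above can be sketched in a few lines. The prices here are illustrative assumptions, not quoted rates (cross-provider egress is commonly priced on the order of cents per gigabyte), using the figures from the conversation: 20 cents an hour of compute savings against three terabytes of data pulled in from another provider:

```python
# Back-of-the-envelope data-gravity economics. All prices are assumed,
# illustrative figures, not actual provider rates.
compute_savings_per_hour = 0.20   # $/hour saved by running elsewhere
egress_price_per_gb = 0.09        # assumed cross-provider egress rate, $/GB
data_to_move_gb = 3 * 1024        # three terabytes, expressed in GB

# One-time cost to siphon the data across providers.
egress_cost = data_to_move_gb * egress_price_per_gb

# How long the hourly savings take to pay back that one-time cost.
hours_to_break_even = egress_cost / compute_savings_per_hour

print(f"One-time egress cost: ${egress_cost:.2f}")
print(f"Hours to break even: {hours_to_break_even:.0f}")
```

Under these assumed numbers the move costs roughly $276 up front and takes nearly 1,400 hours of savings to break even, which is why "save 20 cents, spend dozens of dollars" tends to be a non-starter.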
Yeah.
So once again, you are Ben Kehoe.
I'll put your Twitter handle into the show notes.
Where else can people go to learn more about you?
About me?
Well, most of the talks that I give are posted on YouTube.
You can search my name on YouTube and find me. I've got some posts on Medium under my Twitter handle, where I write about where I think we're at in serverless, where I think we're going, and what I think we don't have yet, which is one of the big things I like to talk about.
Oh, being a futurist is a terrific business.
If you're right, you're hailed as a visionary.
If you're wrong, no one ever calls you on it.
It's true.
I complain less. I'm more, this is what we don't have today.
And however it turns out later, as long as it solves the problem, I'm happy.
Well, thank you very much. Last question. What did you name your Roomba?
What did I name my Roomba? Well, I have a very small apartment, and I don't actually run a Roomba in it. But I can share that the most popular name is Rosie.
Ah, the Jetsons reference.
Yeah.
Wonderful. Well, thank you so much for your time. This has been
Screaming in the Cloud. This has been Ben Kehoe and I'm your host, Corey Quinn. Thanks so much.