Screaming in the Cloud - Episode 38: Must be Willing to Defeat the JSON Heretics
Episode Date: November 30, 2018

Do you understand how tabs work? How spaces work? Are you willing to defeat the JSON heretics? Most people understand the power of the serverless paradigm, but need help to put it into a useful form. That's where Stackery comes in to treat YAML as an assembly language. After all, no one programs processors like they did in the '80s with raw assembly routines, and no one programs with C anymore. Everyone is using a higher-level scripted or other programming language. Today, we're talking to Chase Douglas, co-founder and CTO of Stackery, which is serverless acceleration software where levels of abstraction empower you to move quickly. Stackery has an intricate binding model that gives you a visual representation - at a human logical level - of the infrastructure you defined in your application.

Some of the highlights of the show include:

- Stackery builds infrastructures using best practices for security in applications of all kinds
- What's a VPC? A way to put resources into a cloud account that aren't accessible outside of that network; anything in that network can talk to each other
- Lambda layers let developers create one Git layer that includes multiple pieces of functionality and put it in all functions for consistency and management
- Git is an open-source amalgam of different programming languages that has grown and changed over time, and it has its own build system
- Stackery created a PHP runtime for Lambda; you don't want to run your own runtime - leave that up to a cloud service provider for security reasons
- Should you refactor existing Lambda functions to leverage layers? No; if you had to rebuild everything you already built and re-architect everything just to use serverless, the technology wouldn't take off
- Many companies find serverless to be useful for their types of workloads; about 95% of workloads can effectively be engineered on a serverless foundation
- Trough of Disillusionment on the Gartner Hype Cycle: Stackery wants to re-engage and help people who have had challenges with serverless
- Is DynamoDB considered serverless? Yes, because it's got global replication
- Puritanical (being able to scale down to zero) and practical approaches to the definition of serverless

Links: Stackery, JSON, AWS Lambda, Aurora Serverless, Data API, Hype Cycle, Secrets Manager, YAML, S3, GitHub, GitLab, AWS CodeCommit, Node.js, WordPress, re:Invent, Ruby on Rails, Kinesis Streams, DynamoDB, Docker, Simon Wardley, Datadog
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode is sponsored by Datadog. Datadog is a monitoring
and analytics platform that integrates with more than 250 different technologies, including AWS,
Kubernetes, Lambda, and Slack. They do it all. Visualizations, APM, and distributed tracing.
Datadog unites metrics, traces, and logs all into one platform so that you and your team can get full visibility into your infrastructure and applications.
With their rich dashboards, algorithmic alerts, and collaboration tools, Datadog can help your team learn to troubleshoot and optimize modern applications.
If you give it a try, they'll send you a free t-shirt.
I've got to say I love mine. It's comfortable, and my toddler points at it and yells,
DOG! every time that I wear it. It's endearing when she does it, and I've been told I need to
leave their booth at re:Invent when I do it. To get yours, go to screaminginthecloud.com
slash datadog. That's screaminginthecloud.com slash d-a-t-a-d-o-g. Thanks to Datadog for
their support of this podcast.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by Chase Douglas, co-founder and CTO of Stackery.
Welcome to the show.
Hey, Corey.
Glad to be here.
So it's been a busy week so far, to put it lightly.
But before we dive too far into that, let's talk a little bit about Stackery. First, what's a Stackery? Yeah, great question. Stackery is serverless acceleration
software. We really come in and we provide filling for all the gaps in the software development
lifecycle as you build your serverless applications. And this goes beyond just the
things that you need to build your first Lambda function, build your first API. It's great at that,
but it goes into how do you begin to use that smorgasbord of AWS services? How do you piece
them together into real-world applications? How do you collaborate with your team so that they all can work together
with a high level of velocity? The great thing about serverless is as a technology,
you can build and ship things faster than ever before. And when you have a team that
actually has the right tools to do that, it's amazing how quickly you ship
just the most incredible features,
the most incredible products using this technology.
That's a reasonable introduction to it.
That said, in the interest of full disclosure,
I've been playing around a little bit
with Stackery over the past few weeks,
and I've got to say, it's interesting.
It's not quite clear to me, I guess,
from a perspective of looking at the ecosystem around this entire space,
where, not to call you a framework, but there are something on the order of 15 different frameworks
that wind up wrapping around Lambda functions and serverless applications
that all purport to make it something a human being might be able to work with.
What is it that, I guess, makes Stackery different from the direction that most of these things are going in?
Yeah, so Stackery sort of sits on top of the framework. So today we import serverless application models, SAM applications from AWS.
We import serverless framework applications from serverless.com.
We really help with everything above that framework level. So as soon as you start to
use these frameworks, you might start out with an API gateway that is connected to a function
to do this little hello world app. But when you realize that, well, what I actually need to
do is I need to have this function in my API be able to connect to my database that resides inside
of my VPC. And oh, by the way, when I'm in production it's this VPC and it's this database. But when I'm in a development environment, I just want to
spin up new versions of those. How do I manage the passwords to access that database? You've heard
of some of the new integrations with Secrets Manager this week. How do you parameterize things across your environments?
And then all of a sudden you end up with this gigantic template
that you thought you were doing a simple serverless application.
Now this is 1,000 lines or more of straight YAML.
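For context, the "simple serverless application" stage being described looks something like the minimal SAM template below (resource names and paths are invented for illustration, not from the episode); the VPC wiring, environment parameters, and secrets handling are what balloon it toward a thousand lines:

```yaml
# A minimal sketch of a SAM application: one API route wired to one function.
# Names (HelloFunction, src/hello) are illustrative only.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs8.10
      Handler: index.handler
      CodeUri: src/hello
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```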
And this is what leads to people tweeting about how they've become
YAML engineers, and usually not very happy about it. Job descriptions, must understand how tabs
work, must understand how spaces work, must be willing to defeat the JSON heretics. Yes, exactly.
So, you know, it's this thing where I think a lot of people understand this is totally powerful,
this serverless paradigm, but I need something to help me corral this into a useful form.
So we come in and we treat that YAML as sort of assembly language. No one programs processors
today like they did back in the 80s with raw assembly
routines. And no one even programs with C anymore. Everyone is in a higher level,
scripted or otherwise programming language. Meanwhile, someone writing an actual assembly
has a single tear fall down their cheek as they listen to this.
I'm sorry. When you figure out the assembly instruction for uploading to S3,
please let me know. But it's all about levels of abstraction. Because the levels of abstraction
really are what empower you to move quickly. So what Stackery does is it takes your frameworks, it ingests that, and it understands it.
It has an extremely intricate binding model to then give you a visual representation at a human logical level of your infrastructure that you've defined in your application. So when we were talking about that app that has a function talking to a database
within a VPC, a VPC alone is like at least 20 different resources under the covers. There's
the VPC itself, but then there's subnets and route tables and gateways and this and that.
I still feel like you could wind up taking people who've been working with this stuff for 10 years,
sit them down in front of a whiteboard and say, draw out all the moving parts in how a VPC network ties together, and still only see about a 20% pass rate.
Personally, I think I might be able to get a good 7 out of 10 of the various moving parts, but it's not something that I would be confident about to the point where I would stake my life on it.
Right.
Which means I'm, of course, vulnerable to someone who's incredibly convincing and lying.
Well, we try to play it as we help you
build up your infrastructure
so that it is using best practices.
The things that are straight out of the AWS,
they've got their cookbooks of how to build
the best practices with security,
applications of all kinds of types.
I'm not sure what the...
There's a proper name.
I don't have it off the top of my head.
But as we went through,
we were an advanced tier AWS partner.
We had to make sure that we followed every one of their guidelines.
And they talk about how to do some of these things,
how to provision VPCs in the right way to ensure that traffic is contained
and your databases aren't accessible from the internet, and so on.
So we encapsulate all of that into something simple. At a human level, people tend to understand
what's a VPC.
Well, it's a way of putting resources into my cloud account that aren't going to be accessible
outside of this network.
But anything I put in that network can talk to each other.
And so that's what we at Stackery try to do.
We understand, we ingest, we manipulate all of this YAML goo
and turn it into an interface that humans understand
using a visual diagramming tool.
We had one of our customers,
they sent us a message one day.
They said, we realized we needed to whiteboard something,
some new feature of their product
and how they were going to implement it.
And then they realized that it was faster for them
to drop into Stackery, drag some new resources around,
wire them up,
than it was to actually get out the markers and start marking on the whiteboard.
They looked at their existing whiteboard, which was full of other stuff, and they're like,
it's not even worth erasing this. Let's just drop into Stackery and just wire it up,
and then we'll be done with it. So that's the speed at which we enable
the customers using Stackery to build their serverless applications.
All of which makes a fair bit of sense and is definitely something that I think that as we start talking to companies that are dealing with this at a larger, more, shall we say, process mature level, is a definite need.
That said, that's probably not the most interesting thing to talk about today.
Yeah, probably not.
Let's talk about Lambda Layers.
Yeah, we're super excited about Lambda Layers.
We are a launch partner of this new expansion of Lambda.
It really falls into two parts. The first part is,
as a Lambda function developer, there are times when I want to have a bunch of common code
that is used and accessed by a bunch of functions in my application. So a great example is in Stackery itself,
as we help you manage your templates and your applications,
those are all stored in various Git repositories,
whether they're on GitHub, GitLab, in AWS CodeCommit.
So what we do is in each of our backend functions,
they need to be able to run the Git commands.
So we compiled a little version of all the git commands that we need for our backend, and we include that
in every one of these functions. Well, now with layers, we can create one git layer that includes
all this functionality and just slap that in to all of our functions. This helps us in two
ways. The first is it gives us consistency and a nice management mechanism for all of our functions.
But also it means that we don't have to worry and consider about how we're packaging two different types of code in our functions.
Git is an open-source C, Perl.
It's actually like an amalgam of different programming languages under the hood.
It's kind of interesting how it works.
But it's built one way.
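As a sketch of the mechanism (not Stackery's actual code): layer contents are unpacked under /opt at function start, and the Lambda environment puts /opt/bin on the function's PATH, which is why a bundled git binary like the one described here becomes callable from every function the layer is attached to. The helper below just mirrors that lookup for illustration:

```python
# Illustrative sketch: a layer-shipped binary (such as a compiled git)
# lands under /opt, Lambda puts /opt/bin on PATH, and the function then
# resolves it like any other tool on the search path.
import os
import shutil

def find_tool(name, search_path=None):
    """Resolve a tool the way a function would after a layer is mounted."""
    path = search_path or os.environ.get("PATH", "")
    return shutil.which(name, path=path)
```
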
I assume that Git was a text-based, massively multiplayer online RPG.
But that might be my old bias creeping in there.
Yeah, I mean, it's amazing how much it has grown
and changed over time.
And it's a gargantuan effort.
One of the technological wonders of the world.
The final boss is super hard.
Yeah, right.
It's got its own build system. And
I don't want to deal with that when my functions are all written in Node.js and I just want to run
npm install and get them off and running. So it helps with that. But the second part of the whole
layers, Lambda layers functionality, which is really interesting, is that you can create your own runtime now.
So one of the things that we've done as a launch partner, we went and created the thing that people have been asking the most for out of Lambda since the beginning of time.
I guarantee that this is by far the most needed functionality, and that is
a PHP runtime. Hold your laughter. I tried very hard to steer away from language bigotry,
but there are days, I have to say, and that comes not from being able to code my way out of a paper
bag in any of them, but from painful years of
experience trying to run various different, shall we say, interestingly constructed applications in
a wide variety of languages in challenging production environments. The lesson I take
from all of this is that everything is terrible. That is true. That is true. A lot of people,
they like to rag on different languages, but it's really a means to an end.
And when we look at the proliferation of things like WordPress and other PHP applications, one shouldn't discount the value that that's provided to our society.
So one of the cool things is we've been able to build a PHP runtime for Lambda using a layer.
We published it publicly so anyone can go out
and use it for their own applications.
And it operates like a traditional PHP web server. When a request comes in and you
route it from API gateway to this Lambda function, it's going to interpret your PHP files in the same
way that if you were running your PHP web server at home, it would do. And then it sends it right back out the door. So there's this possibility now,
although this is a kernel, a seed of a runtime, there's the possibility now that all of those PHP
applications that we've got on our servers as monoliths, in other forms, we can now start to think about, oh, maybe we can
start to upgrade this, bring it into, you know, the modern serverless land, break it down.
It's exciting what's now possible with this new Lambda runtimes capability.
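To make the "bring your own runtime" idea concrete, here is a hedged Python sketch of the event loop every custom runtime bootstrap implements. A real bootstrap talks HTTP to the Lambda Runtime API (GET .../2018-06-01/runtime/invocation/next, then POST .../invocation/{request_id}/response); here those calls are injected as plain functions so only the shape of the loop is shown:

```python
# Sketch of the loop a custom runtime (like the PHP layer described above)
# must run. fetch/respond stand in for HTTP calls to the Runtime API.
def runtime_loop(next_invocation, handle, send_response, max_events):
    """Fetch an event, run the handler (e.g. a PHP interpreter), reply, repeat."""
    for _ in range(max_events):
        request_id, event = next_invocation()  # GET  runtime/invocation/next
        result = handle(event)                 # hand the event to user code
        send_response(request_id, result)      # POST runtime/invocation/{id}/response
```

A real bootstrap loops forever; max_events exists only so the sketch terminates.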
What I'm trying to wrap my head around, and maybe this is a naive question,
you have to excuse me.
I had to leave partway through the keynote
when they started talking about this stuff
because my brain was full.
Again, I am not the sharpest tool in the shed some days
when it comes to these things.
But it seems to me that running
different language runtimes in Lambda,
well, yes, that's valuable.
Yay, I can run my COBOL monstrosity inside of Lambda functions now, and it lasts 15 minutes,
or whatever the time is now. I bet there are many banks out there who would be very happy to play
with that. Absolutely. But that feels like it's a one and done victory, as opposed to the other
components of layers, which really feels like more of an ongoing win as native serverless applications continue to grow and evolve. And I'm not sitting here
trying to say that there's no value to supporting additional runtimes. If there's one thing that we
can always count on Amazon to do besides giving things ludicrous names, it's meeting customers
where they are. And customers do have that need. But I'm curious, from my perspective, as I start looking into this more and more,
it's less about running this Ruby thing inside of Lambda and more about
being able to go ahead and address
the ongoing workflow story of solving the problems of shared
dependencies between a wide variety of Lambda functions, which until now
I have to confess has been terrible, the amount of code reuse in my various Lambda functions is,
to be frank with you, shameful. Yeah, yeah. I mean, the runtime piece is kind of exciting and
interesting, and everyone gravitates towards their specific thing that they've always wanted to do.
But at the end of the day, you actually, for the most part, don't
want to be running your own runtime. You want to leave that up to the cloud service provider
so that they're making sure that the Node.js you're using always has the latest security
patches and the same for every other language. And so that really hits at the, you know, while the bring your own runtime aspect is powerful and interesting,
the real key here is much more around providing these paradigms that really enable people to, you know, build the applications with confidence, with consistency, with best practices.
It's really exciting to see Amazon continue
to push the envelope in all of these ways.
So you are effectively a subject matter expert in this.
You've built an entire product company
around making process maturity something attainable,
not just for enterprises,
but for people with relatively small-scale
service applications like I have.
Right.
To the point where it's not just fit for purpose
for enormous companies where you need a team of 50
to wind up deploying it,
but also to wind up bringing this to a point
where I can do this as a part of my workflow
and not hate it.
It makes sense and it makes me go faster, not slower.
So you're in a somewhat privileged position
to answer this, despite the fact that,
yes, I can walk down the hall and get 50,000 people to weigh in.
So do I refactor all of my existing Lambda functions to leverage this?
Do I just do it one by one as I start updating those in the natural fullness of time?
Or do I just squash all of my previous code into one giant commit, label it legacy code, like the joke goes, and start fresh from here forward,
and just treat this as something I use for Greenfield.
I love the approach of squash it all into a legacy layer
and just wash your hands of it.
That sounds like the dream realized.
No, actually, one of the things that I think serverless has gotten right,
and somewhat out of necessity
is that if you have to rebuild everything that you already built once before, you're
re-architecting everything just to use serverless, then that technology is not going to take
off.
No one is going to sit there and say, I want to completely rewrite everything that I already did
for the past 10 years just because serverless is the hot new technology.
But instead, serverless really enables the techniques that,
one of the patterns is called the strangler pattern
where you take a monolith and your goal is, over time, to strangle it down and pick pieces off of it.
If it's an API monolith, you're picking off one route at a time and re-implementing it in a different paradigm. So you might take your monolithic Ruby application, or Rails application, I should say,
and then you start to take a couple of routes at a time and move it into a serverless function.
And now with the news today that you've got Ruby as a full-fledged runtime,
you're able to do that even easier without having to rewrite your code.
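Sketched in SAM terms (resource names invented for illustration), strangling an API monolith means carving one route out of the proxy and pointing it at a function, leaving every other route on the monolith untouched:

```yaml
# Illustrative strangler-pattern step: /orders is peeled off to a new
# function while the rest of the API keeps hitting the monolith.
Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: ruby2.5
      Handler: orders.handler
      CodeUri: src/orders
      Events:
        OrdersRoute:
          Type: Api
          Properties:
            Path: /orders
            Method: any
```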
I think with layers, you'll see people do the same thing,
is that where they have a need to share a bunch of code,
the next time that they're going to need to go and update that
across all of their functions, they're going to go and say,
it's time that we go and use this new layers approach.
Because the alternative is, when I need to go update that one line of code in all this shared
stuff, and now I need to go and copy that all around among all my hundreds of functions,
that becomes extremely tedious. So I think it's a matter of this continual evolution
of the way that we do software.
And as new techniques become available,
we adopt them as it makes sense.
Well, to that end, whenever I find myself leaving the,
I guess, tech coast, for lack of a better term,
and I can tell I've left because I'll sit down
and talk to companies about what
they do. And they have these ridiculous things like business models and a sense of profit and
trying to build something sustainable and employees. And they respond very earnestly
without a condescending accent. So I can tell, oh, I'm out of the bay. And to a number of these companies,
serverless isn't really a thing yet. Or if it is, it's a toy that they're playing with in some small
Skunkworks project, which I get. I mean, if you have a massive thing that generates billions of
dollars of revenue, maybe being the first person on your block to try the new technology isn't
really in your top 10. So what I'm trying to figure out as I think about
this is, if I put myself into a place where I was a few years back, by which I mean not that long
ago, and I no longer have anything running in Lambda, and I'm approaching it for the first time,
today I learned that there is a thing called Lambda that isn't part of a Greek alphabet or
on the name of a fraternity or sorority somewhere. Great. Awesome. Does layers change the way that I
approach Lambda from a day one perspective? Yeah. So I first take a little bit of exception to
the premise that it's only the startups, the newer
businesses or the newer product lines that are adopting serverless. There's a lot of the industry,
both small and large, that have found that serverless is key to where they're going.
You've got companies like Coca-Cola, Nordstrom, Matson, which is a gigantic shipping company,
container shipping in the original sense of the word.
And these are companies that people would not have thought are your tech leaders.
They're not on the same tech pedestal as Apple and Google and Facebook. And yet, they're the ones finding that
serverless is extremely useful for their types of workloads, whether it's spiky traffic,
they're finding it's faster to ship products of all kinds. So there's really a wide spectrum of people using this stuff.
Now, I do think that layers as well, it adds another important key to this tool belt that enables people to dive in more fully.
It's kind of the recurring theme of all of serverless since it started out in 2014 here at re:Invent,
when Lambda was announced.
The term serverless wasn't even coined yet.
And over time, there's been this evolution of thought of,
oh, well, what can this really be used for?
And it's an expansion of, oh, if we hook up API Gateway to it,
now we've got a great way of creating scalable APIs. Oh, and if we hook up Kinesis streams to it, now we've got a great way of streaming and consuming data in a scalable
fashion. And oh, if we hook up DynamoDB as streams to it, and oh, if we hook up SQS queues, and oh,
if we hook up all these different things, it expands the capabilities of what
you can do with serverless. I would venture that at this point about 95% of workloads out there
can effectively and positively be engineered on a serverless foundation, and that's
really exciting stuff.
I think that it's probably going to creep more and more
towards 98%, 99% the next year or two.
And we're going to see a sea change
as people move to managed services.
So they're no longer having to run clusters
of Docker containers and clusters of databases. I mean, we even see this now with
Aurora Serverless. It's amazing technology in that it's this one thing that everyone kind of
assumed, there's no way to make that horizontally scalable. And while it's not exactly horizontally scalable in the same way that DynamoDB is,
it's still providing a capability for easy scaling up and down for a type of technology that people just didn't even really try to scale in that way before it was released earlier this year.
Absolutely. Now, you started that by saying you took exception
to the idea of enterprises
not playing with serverless and it only being the realm of startups. First, if you take exception to
that, better go catch it. Meanwhile, somewhere,
there's some developer commuting right now who's driving to work and laughing so hard at that
lame joke, they almost ram a bridge abutment. My apologies, other people on the road.
So I agree with your premise that companies are investigating this in very interesting ways. In
fact, this may be one of those weird progressive technologies where enterprises adopt it faster
than some startups do. But when I talk to companies who are doing this, and this may be my own bias
based upon who I'm speaking with,
they tend to be replacing a lot of back-end
processes, things like cronjobs, things that
require instantiation and
then go away. But if it fails or
is delayed, it's not user-facing
in the traditional sense.
That small back-office
market that only has
a percentage of the world's GDP running on top of it.
Yeah.
Right.
So I do absolutely agree with what you're saying.
And I absolutely don't want to come across as if I'm saying that this is just something for the cool kids of tech.
Sure, sure.
So I agree with that wholeheartedly. What I'm trying to wrap my head around is, I guess, getting away from my own historical prejudices regarding Lambda.
By which I mean, I still think of Lambda as being constrained to five minutes.
I still think of it as, oh, that's right, it supports something that isn't Node.
And I still think of it in terms of a very limited subset of its current day
features, just because those constraints from my first introduction to it, and even now that those
constraints have been relaxed, lifted, and expanded significantly, I don't have to, I still find it
hard to come to it from a new perspective. And I wonder how much that shades my thinking about
what's possible with this. Yeah, you know, there's a
couple of things to unpack here. The first is, you talked about use cases that a lot of people have
for serverless and lambdas, which is that background batch processing offline, you know,
not in production throughput traffic scenarios. And that's definitely a huge win for serverless,
but it's also an onboarding step for organizations
where they start to play around with it
and they get comfortable with this idea of,
I don't need to know about the server
that this is running on.
I don't need to know what that is
to just do this little batch script.
And it's extremely powerful for individual developers when they know they just want to
run this tiny little thing once a day maybe, but in the past they've been held up in
infrastructure procurement. They didn't have a server to go put this on. It's simple. It's like
it's a cron script.
I can run it on my laptop.
It doesn't use any resources.
But if I don't have someone giving me a server to run it on,
then I'm still blocked.
So it starts to tease into people like,
oh, wow, I can do these amazing things that I never was able before.
Just as a developer or as a DevOps practitioner, even ops people can start
to actually cross into the development roles a little bit, the DevOps roles. There's like a
meshing of these roles that serverless enables. But the second thing that you were getting at is,
how do I rethink what serverless is and what it means as it's changing over time?
And this is something that we're focused on as well. There are unfortunate realities that certain
people who jumped all in on serverless in the very early days may have done so without realizing
all of the sharp edges, both in tooling and in capabilities, that have been
smoothed out now. So now at this point, those same people might have an amazing experience
using serverless tech, but they swore it off two years ago. And so we're working to re-engage with
those people to be able to understand where
they had challenges and, you know,
ensure that they've got a great re-onboarding process. There's that,
you know, famous Gartner, what's it called? That graph, the,
what is it called? The graph of disillusionment.
The trough of disillusionment. Yes, yes, yes.
The hype curve, wasn't it? Yes, the hype curve.
I'll link to that in the show notes.
Yeah.
Yes, the Gartner hype cycle. Thank you. I'll link that in the show notes for sure.
Yeah. And serverless certainly has had a lot of hype behind it,
especially early on. Everyone could see the possibility of this,
but it wasn't quite easy or possible to realize all that it was purely capable of two or three
years ago. And so we started on this hype curve. And I think that there are some people who are already heading towards that trough of disillusionment.
We at Stackery, one of our goals is to help catch people when they're starting to get
disillusioned because their existing tools and processes are breaking down, and help
them get across that trough into the zone where actually we're extremely productive and happy with
this. And it's everything that we hoped it would be for our use cases. That's what we try and do
day in and day out for our customers. Okay. One more question for you that I'm sure will
absolutely get both of us thrown out of any conference for
the next two years based upon the fact that someone's going to disagree with this. And it
sounds like a trivia question, but it's not. Do you think that DynamoDB counts as serverless?
That's a really interesting question. I tend to think of serverless as meaning managed servers. I do not need to figure out
how these servers are provisioned, how they are managed from a security perspective.
Obviously, I have to figure out IAM credentials and permissions, which thankfully Stackery handles
for me. But I don't have to worry about operating system patches.
I don't have to worry about, is this spread out across availability zones? Even with DynamoDB,
you've got global replication. So is this even accessible at proper latencies where I want it
to be accessible around the world? That, to me, means serverless.
Now, there are some people who want serverless to also mean that it's pay-per-use, not upfront
provisioned.
And I'll take and leave based on what I fundamentally understand about how a technology works.
I would love unicorns and magic ponies in my backyard every day,
but at the same time, that's not what's available in the real world.
As an engineer with a computer engineering background,
I understand what is DynamoDB, how it works, what is Aurora Serverless, how it works at a foundational level.
I don't understand all the magic and tricks that AWS puts in place to make it work as well as it does,
but I still understand how that data is stored, how it's sharded, how it's brought up.
And that leads me to understand why it has to be provisioned
throughput, why it has to be scaling that is on demand with a certain amount of latency
built in and hysteresis. So to me, I see it as AWS has taken this technology as far as it's humanly possible,
and they will continue to break down the barriers.
But I hate to kind of fault them for the fact that,
well, this is how database technology works,
and to rail on them like, oh, DynamoDB is not actually serverless.
I had a back-and-forth discussion about this with Simon Wardley
in a situation where he was far too polite to tell me to go away
and stop bothering him.
The trick, always do this in public.
And the conclusion that he, sorry, not the conclusion,
but the line that he took that I can't really get out of my head
because I think it was very poignant, is that
it doesn't quite
qualify on the simple grounds
that it cannot scale down
to zero. You're always
paying for one write or
read unit of capacity.
And that's a tiny cost
that doesn't wind up changing
any of the economics of anything other than
the smallest toy problem.
But it's still a cost.
You're always going to pay for storage, sure.
But conversely, I'm not paying for a Lambda function when it's not running.
Aurora Serverless stops.
I'm not paying instance hours for it.
With DynamoDB, I'm always paying something, regardless of how quiet or non-trafficked I make the table.
And on the one hand, that does feel like it's pedantry.
On the other, I can't shake the feeling
that there is something poignant there.
And maybe I'm just running from lack of sleep
due to re:Invent week here, but that's where I sit.
I'm going to throw this back at you and ask,
do you think Aurora Serverless is serverless
simply because it can scale to zero,
even though in most real-world usages, no one's going to let it because it has too long of a cold start?
So does the fact that it theoretically can scale to zero, is that the important part?
Or is it important that it has to be able to scale to zero and scale up immediately?
That's sort of the question I run into.
It's scaling up, but it's also that, if I'm not running it or putting traffic to it, the
assumption that I pay zero for it is, I think, a fundamental tenet of it, with the
obvious caveat of, yes, storage will cost money. I don't expect people to store my data and not
charge me for it proportionately. That's fair. But I'm talking about compute and network perspective,
where I don't have to pay for something that is not seeing active use at this instant.
Right.
And I think that is part of the fundamental tenet of event-driven computing. In my head, I disagree.
And the reason I disagree is simply that
if a valid use case
was that I needed to have thousands of DynamoDB tables,
and thus I needed them to scale down to zero
because the cost of having thousands of read and write capacity units,
since every table must have at least one of each, is problematic.
If that was a valid use case, I would totally agree.
But I tend not to think that this use case makes sense. The closest
you might get is if I've got a thousand developers in an organization
and they've got DynamoDB with autoscaling
turned on and a minimum of one capacity unit
for reading and writing. And every one of those developers has their own
environment where they've provisioned this table.
That gets close to this use case.
Yet even still, if you've got 1,000 developers,
the percentage overhead of every one of them
having their own single capacity unit tables
versus their salary and benefits is minuscule.
So to me, it seems unnecessary to put that restriction in place.
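The back-of-the-envelope math behind that overhead argument can be sketched in a few lines. The per-unit prices below are assumptions (roughly the published us-east-1 provisioned-capacity rates at one point in time, and they do change), not authoritative figures:

```python
# Rough sketch of the "1,000 developers, each with a minimum-capacity
# DynamoDB table" overhead argument. Prices are illustrative assumptions
# (approximate us-east-1 provisioned-capacity rates), not authoritative.

HOURS_PER_MONTH = 730
WCU_PER_HOUR = 0.00065   # assumed price per write capacity unit-hour
RCU_PER_HOUR = 0.00013   # assumed price per read capacity unit-hour

def min_table_cost_per_month(tables: int) -> float:
    """Monthly cost of `tables` tables, each provisioned at the
    autoscaling floor of 1 WCU and 1 RCU (storage excluded)."""
    per_table = (WCU_PER_HOUR + RCU_PER_HOUR) * HOURS_PER_MONTH
    return tables * per_table

print(f"1 table:      ${min_table_cost_per_month(1):.2f}/month")
print(f"1,000 tables: ${min_table_cost_per_month(1000):.2f}/month")
```

Under these assumed rates, a single floor-provisioned table runs well under a dollar a month, and even a thousand of them land in the hundreds of dollars per month, which is the "minuscule versus salary and benefits" point being made above.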
That's fair. And to be clear, DynamoDB scales down to, I think, costing, what is it,
two bucks a month. It truly is who-cares money
unless you have 10,000 of them
and I guess to some extent that's where it starts to
be concerning to me, it's not the one-offs
it's the idea that I can scale out something truly massive
and do one of these for every version of a service
but if that starts to incur cost across the board
it very shortly turns into something where I can't treat it quite the same way.
Right. And maybe that's an edge case. Maybe that is so ludicrously down the path that it's not even worth having the conversation. But again, it's one of those things that I start thinking about,
and I can't get out of my head now, for which I once again blame Simon Wardley. Yeah, it feels to me like
there's a puritanical approach to the definition of what is serverless, and there's a practical
approach. And I would say that, yep, the puritanical definition of serverless should include
that aspect of being able to scale down to zero. But the practical aspect, the practical definition of serverless,
I don't see that as being a necessary part of that definition. I will tell you what I do wish,
you know, if I'm going to throw out an Amazon wish list, I would love to have a serverless-ish elastic network gateway for my Lambdas when they're running in a VPC.
That's, to me, and obviously everyone's got their own pet need and everything,
that's the one service I would love to have, especially since they don't have tiered offerings for sizes of NAT gateways.
A lot of people need to access their existing resources,
and this really hits at some of the enterprise use cases that we have people come and talk to us about.
I want to strangle this monolith, the database is in this VPC,
put my functions in there. There's a lot of people who complain about,
you should never put a Lambda in a VPC. It has all kinds of overhead, which is not actually
true if you manage it correctly. But there's still that cost of the NAT gateways that is problematic for people.
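The pattern being described, a function placed in a VPC to reach a database, which then needs a NAT gateway for outbound traffic, looks roughly like this. All resource names and IDs here are hypothetical placeholders, and the CloudFormation-style resources are sketched as a plain Python dict rather than a real template:

```python
# Hedged sketch of the "Lambda in a VPC" pattern discussed above,
# as CloudFormation-style resources in a plain Python dict.
# Every name and ID is a hypothetical placeholder; only the shape matters.

template = {
    "MyFunction": {
        "Type": "AWS::Lambda::Function",
        "Properties": {
            # Placing the function in private subnets lets it reach the
            # database that lives inside the VPC...
            "VpcConfig": {
                "SubnetIds": ["subnet-private-a", "subnet-private-b"],
                "SecurityGroupIds": ["sg-app"],
            },
        },
    },
    # ...but outbound internet access (and many AWS APIs, absent VPC
    # endpoints) then requires a NAT gateway, which bills per hour
    # whether or not traffic flows -- the fixed cost complained about above.
    "MyNat": {
        "Type": "AWS::EC2::NatGateway",
        "Properties": {
            "SubnetId": "subnet-public-a",
            "AllocationId": "eip-allocation-placeholder",
        },
    },
}
```

The design point is that the NAT gateway is a flat, always-on charge attached to an otherwise pay-per-use function, which is exactly why a tiered or serverless-ish gateway is on the wish list here.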
So when it comes down to it, I'm hoping that maybe in the next year, next re:Invent, even more of these services are serverless.
I think you're right.
And I think that we're definitely seeing that trend. In other words, you don't see significant time out of keynotes devoted to baseline undifferentiated services anymore.
At most, you'll see a few things here or there talking about, yes, we've added the 1800th instance family,
but I don't think that's what's interesting from a perspective of the future of computing.
And I think that for a keynote at an event like this, it always has to be forward-looking and aspirational.
And from that perspective, I think they nailed it.
Yeah, I agree. I agree.
Yeah, it's the breadth of services that AWS has, the amazing use cases of what you can do on the cloud. At this point, it practically is limitless.
It's more a matter of what do they make easier from here on out?
Because it's all possible. And that's what's really exciting, that I can do everything in
the world.
Netflix is all on Amazon.
And of course, they've been that way for years.
But if Netflix can do it, if all these other companies can do it, you can do it too without
having your own servers sitting in your room.
It's mind-boggling the power at your fingertips. It really is. I think it's unlocking
an entire world of possibility for companies that, until very recently, would not have had
the capability of even dabbling with this. Now I can spin things up in the course of an afternoon
that boggle the imagination. Or let's be honest, I could if I were better at working with computers.
There's still no service for that.
And like Werner always says, there's no compression algorithm for experience.
Yep, it's so true.
Well, thank you so much for taking time out of a very busy week
to speak with me today.
Yep, I'm glad to do so.
It's exciting, it's fun.
There's nothing like being at re:Invent.
There really isn't. And for those who have not had the pleasure of experiencing Las Vegas for
a solid week with 50,000 of your closest friends, it's something that should be experienced once
and never again. This is my second year, and I'm wondering at this point why I have made the choices that I've made.
Yeah. Did you bring along all the medicinal kits that they... What I'm doing, this is my thing:
all those shots and all those pills they give you when you're going to third-world countries,
I've asked for the same thing to go to Las Vegas this week.
That would have made an awful lot of sense. I forgot to get my malaria pills.
Yep, exactly.
Some of those buffets are no joke.
Nope, nope.
Thank you once again for taking the time to speak with me today.
Chase Douglas, co-founder and CTO of Stackery.
I'm Corey Quinn, and this is Screaming in the Cloud.
This has been this week's episode of Screaming in the Cloud. You can
also find more Corey at
screaminginthecloud.com or wherever
fine snark is sold.