Screaming in the Cloud - Going Serverless with AJ Stuyvenberg
Episode Date: September 18, 2019
About AJ Stuyvenberg: Aaron Stuyvenberg (AJ) is a Senior Engineer at Serverless Inc, focused on creating the best possible serverless developer experience. Before Serverless, he was a Lead Engineer at SportsEngine (an NBCUniversal company). When he's not busy writing software, you can find him skydiving, BASE jumping, biking, or fishing.
Links Referenced:
Twitter: @astuyve
Serverless.com
Serverless Blog
Transcript
Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud. This week's episode is sponsored by X-Team, a company focused on building first-class remote work environments. I've got to say, I'm pretty skeptical of remote work environments,
so I got on the phone with these folks
for about half an hour,
and let me level with you.
I've got to say,
I believe in what they're doing,
and their story is compelling.
If I didn't believe that,
I promise you I wouldn't say it.
If you'd like to work for a company
that doesn't require you to live in San Francisco,
take my advice and check out X-Team.
They're hiring both developers and DevOps engineers.
Check them out at the letter x-team.com slash cloud.
That's x-team.com slash cloud to learn more.
Thank you for sponsoring this ridiculous podcast.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by AJ
Stuyvenberg, a senior engineer at Serverless Inc. Welcome to the show, AJ.
Thank you, Corey. Thanks for having me. So we've had Austin Collins, the founder and
CEO, if I'm not mistaken, of Serverless Inc. on the show before, but enough has
changed since then that it's time to have a different conversation, ideally with a different person. So we've at least validated now that two people
work at serverless.com. That's correct. And there are at least two of us.
Excellent. We have not yet proven that you folks stay longer than 15 minutes,
but that's a serverless joke. Ba-dum tss.
Love it. So let's start at the very beginning. What do you do
at Serverless Inc? Yeah. So I am a senior platform engineer and I'm working on some of our new
features that we launched on July 22nd that we call the Serverless Framework Dashboard.
It's sort of a sister product that launched alongside our Serverless Framework CLI that,
you know, everyone kind of knows and loves already.
And the goal is really to offer a full lifecycle serverless application experience.
Gotcha.
Let's start at the beginning.
I'm sure you've told the story enough, so I'm going to take a whack at it about what
the serverless framework does.
And please correct me when I get things hilariously wrong.
Once upon a time, we would build Lambda functions
and things like it in AWS
or their equivalents in other lesser cloud providers.
And you would wind up seeing
that it was incredibly painful and manual to do yourself.
There were a lot of boilerplate steps,
a lot of things that were finicky
and you'd tear your hair out.
Then we would sort of evolve to the next step,
at least in my case,
of writing terrible bash scripts
that would do this for us. From there, we iterated a step further and thought, okay, now I could do a
little bit of this with Python, and hey, there's this framework. Now I write in my favorite
configuration language, YAML, where I wind up effectively giving only a few lines telling it
what to do. This interprets it in Node, the worst of all languages, and then spits out
effectively under the hood CloudFormation, then applies it with no interaction from me
into my AWS environment. Have I nailed the salient historical points?
Yeah, I think you did, along with your beautiful commentary as well.
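[Editor's note: for readers who haven't seen one, a minimal serverless.yml along the lines Corey describes might look like the sketch below. The service name, handler, and path are illustrative, not from the episode; running `serverless deploy` is what packages the function, generates the CloudFormation stack, and applies it with no further interaction.]

```yaml
# serverless.yml -- a few lines of YAML that the framework
# expands into CloudFormation and deploys for you.
service: my-newsletter             # hypothetical service name

provider:
  name: aws
  runtime: python3.7
  region: us-east-1

functions:
  render:
    handler: handler.render        # handler.py, function render()
    events:
      - http:                      # wires up API Gateway for you
          path: render
          method: get
```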
Well, thank you. And that's effectively what this did a year ago when I finally made the switch,
saw the light, etc, etc. My ridiculous newsletter, for example, has about a dozen of these things all
tied together under the hood for the production pipeline. And hey, using serverless make this a lot
easier. But everything I've described so far, first, has sort of been in a bit of stasis for a little while. And secondly, is entirely open source and available to the community.
So first, have I just been not paying attention
or has there been a bit of a lull in feature releases until recently?
That's a great question.
We've been making a lot of headway, supporting a lot of different runtimes
just in general, along with supporting
lots of new features that AWS has launched.
So specifically, you know,
I would point to the recent launch of EventBridge.
Almost two weeks after that was launched,
we actually had support for it
inside of the serverless framework.
As of the time of this recording,
it's been out for about a month
and there is still no CloudFormation support.
That's correct.
We had to implement it using a custom CloudFormation resource.
Because everything is terrible.
Yeah, and to get something done
in life, you have to suffer a little bit.
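[Editor's note: a sketch of the eventBridge event type AJ mentions, as it appears in serverless.yml. Because CloudFormation had no native EventBridge support at the time, the framework provisioned this through a custom CloudFormation resource under the hood. The bus name and pattern values here are illustrative assumptions.]

```yaml
functions:
  tracker:
    handler: handler.track
    events:
      - eventBridge:
          eventBus: custom-saas-events     # hypothetical bus name
          pattern:                         # match EC2 state changes
            source:
              - aws.ec2
            detail-type:
              - EC2 Instance State-change Notification
```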
Well, NDAs are super important. Whenever you're building
something at AWS, you sign one
that agrees that you won't tell the rest of the
world what you're doing. Then you start building a new
service and you sign a second NDA
saying that you won't breathe a word of what you're building
to either the CloudFormation or the tagging teams.
That doesn't make any sense to me, but I don't work there.
I can't tell you.
None of that is actually true, but that's my head narrative of why things come out without
baseline support for these things.
It does feel like that sometimes.
And a lot of what we've done over the last year is really try and support the breadth
of the services, not only inside AWS, but elsewhere. Because what we found is to make a compelling offering on a
framework level, we really have to have everything that people want to do, right? If people end up
going back to that sort of bash script, you know, deploy pipeline kind of world you described
earlier, we've really failed, right? So in that time that maybe some people perceive as us going
dark, we've really been working on supporting a lot of different services and making improvements
to the framework that maybe aren't as big and splashy as a Hacker News-style product launch.
Okay. I would also point out that I am the absolute worst kind of customer in that I'm
not really a customer at all. Everything I've been using the serverless framework for is freely available as open source. I pay you nothing and I complain incredibly loudly. I'm like the internet
brought to life. Absolutely. So with that in mind, what effectively is the business model here?
At what point do people start paying you? I imagine it's not an out of the goodness of
their heart situation. And I don't think that the investors behind you folks have suddenly decided to turn this
into the weirdest form of philanthropy we've ever seen.
Yeah, that's a great question.
So alongside the framework and the open source contributions we've made over the last year,
we've also been really hard at work on this dashboard product.
And that's what we actually do sell.
There are commercial offerings.
It is completely free to try.
Free up to 1 million
invocations. We'll track them. We'll give you all sorts of insights into what your services are
doing. You'll be able to use things like our secrets functionality, which will allow you to
encrypt secrets and then decrypt them at runtime so you can pass them between your services without
actually having them floating around in plain text or in a Git repo. You can use Safeguards, which is this policy-as-code framework.
It allows you to control what your team can and can't do,
like which regions you can use, which AWS accounts you can deploy to,
when you can deploy, et cetera.
All of this stuff is completely free to use,
but we do have paid plans, and that's where we do make money.
So after you go past a million invocations in a month,
we'll charge $10 per million invocations and $99 per seat.
And then we do have enterprise plans that are available, which allow you to run this
entire thing on your own cloud infrastructure.
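[Editor's note: a hypothetical sketch of how dashboard-managed secrets get referenced from serverless.yml, so nothing sensitive lives in the repo. The `${secrets:...}` variable syntax and the secret name below are illustrative assumptions, not taken from the episode.]

```yaml
provider:
  name: aws
  environment:
    # Resolved from the dashboard's encrypted store at deploy time,
    # so the value never sits in the repo in plain text.
    STRIPE_KEY: ${secrets:stripeKey}   # hypothetical secret name
```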
Awesome.
Okay, so let's break down some of the releases in a bit of, I guess, order that they were
released.
Correct me if I get any of this wrong.
The big one that really caught my attention was that the serverless framework is apparently
now going full lifecycle, offering things around testing, deployments,
monitoring and observability, and probably a few pieces I'm missing.
Yeah, that's completely correct. On July 22nd, we announced this kind of new and expanded
serverless framework, which includes the real-time monitoring, the testing, the secrets management,
other security features. They live inside the serverless framework dashboard, which is kind of integrated into
this serverless framework CLI that we already know. And again,
this dashboard is completely free for you to use up to a
million invocations per month.
Gotcha. So what else? So the way that I always thought of
the serverless framework is: I would run effectively a
wrapper command around a whole bunch of stuff deep under the
hood.
And it would package up my Lambda functions.
It would deploy things appropriately.
It would wire up API gateway without forcing me to read the documentation for API gateway,
which was at the time impenetrable.
It's like a networking Swiss army knife.
It can do a whole bunch of different things.
And the documentation was entirely in Swiss German.
So you'd sort of get it right by getting it wrong a bunch of times first.
But now it's added a bunch of capabilities that go beyond just pushing out functions and keeping them updated.
What have you added?
Yeah, absolutely.
So the biggest thing would be the kind of monitoring and observability capabilities
we've added on this dashboard.
So we'll get you insights into things like, hey, a brand new error was just detected.
Here's the full stack trace
pointing to where the error was thrown.
Here's how many times it was thrown
in the last X amount of time.
Here are the complete reconstructed logs from Lambda.
It kind of allows you to immediately diagnose
and describe the issue to your coworkers
or yourself to go off and patch and figure out
and solve the problem.
Those types of insights are sort of available
kind of in aggregate where you're able to see,
okay, so like, let's say for example,
during the average week,
I might do 5,000 invocations per day.
And then one day I might do 10,000 or 100,000 invocations.
We'll trigger an automated insight that says,
hey, this function is now doing a lot more invocations than it was doing previously.
This might be something you want to look into.
So it's sort of the full lifecycle of your application, more than just the packaging,
more than just the configuration of your services that you're interacting with inside of your
favorite cloud provider, but also bringing it all together in an experience that, you know,
someone who's not necessarily traditionally familiar with serverless would be able to
understand and grapple with.
Understood.
So if I take a look across the ecosystem now,
I think the biggest change is that historically,
I would use the serverless framework to package these things up
and get my applications up and running.
I'd use a different system entirely for the CI/CD stuff that I was doing.
I would pay a different vendor to handle the monitoring and observability into it.
And now it seems like you're almost making a horizontal play across the ecosystem.
What's the reasoning behind that?
Yeah, that's a great question.
So we think we're in the best position to offer the best experience using the serverless framework.
We don't think that anyone should be forced to cobble together their own solution using multiple providers or writing their own custom log pipeline to do analytics or any of
the sort. We think that we should be able to offer something compelling, out of the box, and easy.
After all, that's kind of the serverless promise. Get up and running, very little configuration,
scale to zero, scale to infinity, pay per execution. And that's the type of thing we're
trying to bring to the entire lifecycle of your app.
Because once it's running in production,
you need more than simply easy access
to different services and an easy way
to package and deploy your application.
You need to monitor it, right?
You need to handle secret management.
You need to make sure that proper safeguards are followed
and things are done according to your company or your group's policies.
And you need to be able to keep an eye on things.
And that's what we're trying to do.
We're trying to be the one-stop shop for all things serverless.
Gotcha.
It's interesting because historically, in order to get all these things done responsibly, with best of breed,
you had to string together a multi-vendor strategy, where you have a bunch of different companies doing the
different components for you, and then tying that all together into something that vaguely resembled
some kind of cohesive narrative. Now it seems like that's no longer the case.
Yeah, absolutely. You know, the downside of sort of that approach and experience is that you end up with this really sort of fragile ecosystem surrounding your application.
And these applications don't live in a vacuum. They have to interact with other services, other applications.
So to have this sort of really immense configuration alongside of it simply to monitor your applications isn't really a solution anymore. So we needed this way to have one place to go and look and
see what is my service? What is my application? What is my function doing at this time? And,
you know, why is it broken? Let me get there and fix it quickly.
Right. And that tends to lead to some interesting patterns where you effectively have to pass
through a whole bunch of different tooling in order to get insight into it, which I guess raises the real question I've got. Again, this is not a sponsored
episode. You're here because I invited you here. Huzzah. Nice of me, wasn't it? But it also means
that it's not a sales pitch. So you get to deal with the fun questions. Namely, if I'm going to
effectively pick a single vendor and go all in with them for all of my serverless needs,
why wouldn't I go with, for example, AWS themselves
if that's what I'm doing?
I mean, they have services that do this.
They have services that do everything up to
and including talking to satellites in orbit.
So if I'm going to wind up going down that strategy,
why pick you instead of them?
Great question.
So the answer is simply that we think we offer
the best experience on top of the serverless framework that you're already using.
You know, we understand everything that's going on in that serverless YAML file that you're configuring.
If you have multiple serverless apps, we are understanding how they're talking across things like API Gateway or SQS or SNS.
So it's a lot simpler for us to give you, as the customer, a perspective of your application that mirrors how you understand it, and not simply a bunch of little services linked together.
Now I think there's competing offerings all over the map here.
And if you still want to go through the joy of creating your own log pipeline or all of
your own metrics or ingest system or monitoring or what have you, you still can.
The serverless framework is still completely open source. You're free to do that. But if you're looking for one
place to get up and running quickly, to get started and get your code out the door to
production as simply as possible, I think we offer the best solution there.
Gotcha. I got to say that as much as I like to challenge you on this, I obviously agree. I've
been using you folks for a while now. So what came after the full lifecycle release?
There was something else.
Yeah, just a couple of weeks after that, we finally announced Serverless Components,
which is sort of a new take on using serverless services in your entire application ecosystem.
The idea is you should be able to deploy an entire serverless use case,
like a blog or a registration system, a payment processing app, or an entire full stack page.
You should be able to do that on top of whatever you're doing in the cloud
without ever managing that configuration, right?
That's kind of the vision behind components, right?
And the idea is that you can define these use cases,
these serverless use cases as components,
and you interact with them in a way that you would be familiar with
if you are using React, for example.
Gotcha.
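[Editor's note: a sketch of what a component-based serverless.yml might look like under the model AJ describes — declaring a use case and its inputs rather than the raw resources. The component name and input shapes below are illustrative assumptions.]

```yaml
# serverless.yml under the components model: declare the use case,
# not the underlying resources.
name: my-blog

website:
  component: "@serverless/website"   # hypothetical published component
  inputs:
    code:
      src: ./dist                    # static build output to deploy
```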
Didn't you release something that was called serverless components a year or so ago?
I think it went into beta officially a year or so ago, and then we finally released it as GA.
Okay.
So is it fair to view this as effectively, I need a portion of an app to do a thing?
Maybe it's an image resizer, which is a classic canonical serverless example. And normally you might consider using something like AWS's serverless application repo,
but maybe you don't hate yourself enough to wind up using SAM CLI instead. So this winds up meaning
you don't have to make that choice. Yeah, you can sort of pick and choose what aspects of serverless
use cases you want. And like you had said, the image resizer is like a super, super common example.
But there's so much more than that, right?
You know, if you want to run a really, really simple monolith on top of your application, you can.
There are examples for how to do this where you might have
There are examples for how to do this where you might have like a,
what we would call like a mono-Lambda structure
where you have a REST API that's routed under the root domain, right? And this
entire application can simply be deployed with one command using serverless components.
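[Editor's note: the mono-Lambda structure AJ describes is commonly expressed in the framework with a catch-all API Gateway route, along these lines; the handler name is illustrative.]

```yaml
functions:
  app:
    handler: app.handler           # one function serving the whole REST API
    events:
      - http: 'ANY /'              # the root of the domain
      - http: 'ANY /{proxy+}'      # every other route, proxied through
```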
Gotcha. So as you look at this across the board, what inspired you folks to build
this out? What customer pain was not being met by other options?
Yeah, that's a great question.
I think the biggest was reuse, right?
When we talk about developer practices, things like SOLID, what we really wanted to do is
reduce the coupling between aspects of your software.
And we're trying to do the same thing for serverless use cases. You might have that image resizer as part of
one application, in one aspect or one area of your microservices
architecture, but you're going to want to use it somewhere else, likely. And sometimes that means
either redeploying it, or other times it means simply passing around a route to do that.
Either way with components you can package these things up in a really easy to use way, include them just
like you would any other piece of code, right? And then inject it into your service. And I think
that's where this really came from. That was kind of the inspiration. So how much of what you've
built out as a part of serverless components is, I guess, tied to your enterprise offering versus available to the larger community?
Yeah, it's 100% open source right now.
There are a few last steps we have to complete before we'll tie it into the enterprise offering.
Like I said, we did just launch it.
However, I don't think that road will be very long.
Some of the things you've said are compelling.
At various times in my evolution of what I've built, it would have been useful, for example, to have a payment gateway that I could have dropped in rather than having to build my own.
Totally.
For better or worse, I'm not irresponsible enough to try rolling my own payment system or shopping cart or crypto, so I smiled, nodded, and paid someone else to make all that problem go away.
But there's something valuable about being able to take what other people have built and had audited and done correctly
and just drop it in place. Precisely. And that value extends to more than simply, you know,
the use case inside of like that generic, you know, image resizer or payment gateway,
but put yourself in a position in a larger corporate environment where you might have
several teams working together and let's pretend that their application has specific, you know,
API contracts that
kind of bind the different services together.
And you want to just deploy that middleware layer anywhere you want inside of your application.
You could write that as a component and then simply share it.
So you have one source of truth for that sort of interoperability and you can share that
between all the different teams.
And now it's very simple to get started instead of kind of each team implementing their own flavor,
which I'm sure you've experienced
at different parts in your career.
So something else you launched recently
was called Safeguards.
What is that?
Great question.
So Safeguards is a feature
that is built into our serverless dashboard.
What it is really is a policy as code framework,
which means you can define different policies
for your serverless applications in code. Now, we include several for free that you can try out. Some really simple
examples: you can whitelist, you know, AWS regions you can deploy to; you can whitelist
specific accounts that you can deploy to. You could restrict things like, for example, people
often wildly overprovision IAM roles with wildcards.
So we can easily enforce things like no wildcards in your IAM roles.
Is that done via config rules, service control policies, something else?
It's done by actually ingesting your serverless.yaml file.
So because we understand what you're trying to do, and we're the ones, like our framework is responsible
for translating your YAML into CloudFormation,
and then we can actually use safeguards to digest that
and, you know, appropriately allow or deny those configuration changes.
But it's more than just configuration management.
It also allows you to control when your application could
or couldn't be deployed.
For example, if you're one of the many groups that has a no deploying
on Friday policy, or you say, you know, no deploying on Friday afternoon.
Careful, say that three times and you'll wind up summoning Charity Majors to yell at you.
I personally believe that we should deploy forever and always, right? As frequently as
possible. But I understand that some people don't.
Well, that depends too. Are we talking about code that you trust or code that someone else wrote?
Absolutely. I mean, we're at the size in Serverless Inc., thankfully, where the answer is both.
We have seven people who have been working on this serverless dashboard offering.
So the group is small and the knowledge is tribal at this point.
But we're still growing fast and we're making lots of good changes as we go.
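[Editor's note: a hypothetical sketch of the kinds of Safeguards policies discussed above. Actual policies are configured through the dashboard; the policy names and config shapes below are illustrative only, not the product's exact schema.]

```yaml
# Hypothetical policy list -- real Safeguards live in the dashboard.
safeguards:
  - title: Allowed regions
    safeguard: allowed-regions            # whitelist deploy regions
    config:
      - us-east-1
      - us-west-2
  - title: No IAM wildcards
    safeguard: no-wild-iam-role-statements
  - title: No Friday afternoon deploys
    safeguard: restricted-deploy-times
    config:
      - time: friday                      # illustrative config shape
```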
This week's episode is sponsored by Chaos Search.
If you've ever tried managing Elasticsearch yourself, you know that it's of the devil.
You have to manage a series of instances. You have to potentially deal with a managed service.
What if all of that went away? Chaos Search does that. It winds up taking the data that lives in
your S3 buckets and indexing that and providing an Elasticsearch compatible API.
You don't have to manage infrastructure.
You don't have to play stupid slap and tickle games
with various licensing arrangements.
And fundamentally, you wind up dealing
with a better user experience for roughly 80% less
than you'll spend on managing actual Elasticsearch.
Chaos Search is one of those rare companies where
I don't just advertise for them, I actively recommend them to my clients because fundamentally
they're hitting it out of the park. To learn more, look at chaossearch.io. Chaos Search is,
of course, all in capital letters because despite chaos searching, they cannot find
the caps lock key to turn it off. My thanks to Chaos Search for sponsoring this ridiculous podcast.
How do you wind up building something like this, I guess, in the shadow of AWS?
Because they kind of cast a large one where this started gaining traction,
and then it felt like they realized what was going on, shrieked,
decided they were going to go in their own direction,
and started trying to launch the SAM CLI, which despite repeated attempts,
I can't make heads or tails of.
And it still feels to me, at least, like it is requiring too much boilerplate and it doesn't
make the same intuitive level of sense that the serverless framework does.
That's just my personal opinion, but it seems to be one that's widely shared.
You take a look at the rest of the stuff that you're offering, and they are building offerings
around that stuff as well.
At some point, does it feel like you have diverged from them in a spiritual alignment
capacity?
I think we diverged from the very beginning.
Our goal is to let you build serverless applications on whatever cloud provider you want.
We support AWS, we support Azure, we support Google Cloud Platform,
and we support IBM OpenWhisk.
So that's something that SAM is never going to compete with
on a philosophical level.
They would not build tools for their competitors.
And that's where I think it's kind of
the ideological separation of the two.
Yes, but not for nothing.
I mean, that's valuable in a tool, sure.
But at the same time, how many people do you really see
using the serverless framework
and then deploying from the same repository, for example, into multiple providers?
Yeah, that's a great question.
I haven't personally seen it.
I would expect that that will probably come up a lot more as different vendors kind of
continue to either dominate or introduce new features that people want to use.
Obviously, it's all about capabilities, right?
It's all about using services that these vendors provide. And that's something that I think we have the most
compelling offering on right now. Now, your question was, why would you build something like
this in the shadow of AWS? The answer was: we needed it. We weren't getting enough from
the array of services to do what we needed to do. So, you know, Serverless Inc. is a big believer
in dogfooding our own product.
Our entire dashboard application
is built using the Serverless framework.
A lot of aspects of our development
are monitored using the Serverless dashboard.
So we're using it every day,
and we think that that sort of mentality
really can put us a step in front.
I would agree with you.
And I think there is value in a tool being able to speak to anything.
As far as any individual customer, I get the sense that they probably don't care.
For example, I care profoundly about your support for AWS functions, but I don't use
serverless technologies from other providers.
So I could not possibly care less about the state of your support for those things.
I feel like it's one of those things that matters in the aggregate, but on the individual customer level, it's pretty far
down the list of things anyone cares the slightest about. Yeah, and concerns about that type of thing
vary depending on who you are, right? Like a developer or an individual contributor like
yourself doesn't care about a service they're not using, but a chief information officer really does
care if they have the capability to move aspects of their serverless
application from one vendor to the other if needed.
So it really depends on the target audience.
Gotcha.
So, next, normally the way that one contributes to an open source project is they open issues
on github, which is how I insist upon pronouncing it.
But I don't have to do that because I have a podcast.
Instead, I'm going to give you a litany of complaints
about serverless for you to address now.
That's right, it's ambush hour.
Let's do it, I'm ready.
All right, for starters, I have to use npm
to get it up and running,
which exposes a lot of things under the hood, namely npm.
Does it require npm in the first place?
Great question.
Right now, our framework is published on NPM.
We are experimenting with publishing binaries on our own.
Now, in theory, I could wind up just rolling it myself without ever touching NPM and just
use the JavaScript and compile it manually, but that sounds like something a fool would
do.
It does sound like something a fool would do, yes.
We are, like I said, trying to work through a point where you can download this binary
on your own.
Right. Because invariably, I find that everything wants different versions of NPM,
so I have to use NVM to manage NPM versions, and now I'm staring down at a sad abyss that
annoys me. I want to be able to do things like brew install serverless, or I don't know,
apt-get install serverless. Or if I'm using Windows, I just go home and cry for a while,
then I get a Linux box, and then I can yum install serverless. Yeah, absolutely. I think we see that vision too.
And like I said, that's been on the roadmap. And that's one of the things we're really working
towards is being able to do binary drop-in installations of our framework.
Okay, next complaint. It feels like it is fighting ideologically with SAM, AWS's Serverless
Application Model. Part of this is SAM's complete inability to articulate what it's for in any understandable capacity.
You read the documentation.
You are more confused than when you started.
This feels like it's an artifact of AWS's willingness to be misunderstood for long periods of time and that being interpreted as license to mumble.
Yeah, I mean, I'm not going to comment necessarily on your interpretation of SAM.
But a big part is buying into sort of the ethos and the vision of the tool you're using,
right?
Like our vision is to let you just deploy use cases simply and really focus on writing
your business logic in the form of a Lambda.
You should not be responsible for going out and trying to figure out how to wire your
Lambda up to API gateway or SNS or SQS.
That's not something that any developer wants to spend their time on.
And that's what we're trying to do.
We're trying to abstract away the configuration of these services and let you as a developer focus solely on the experience of building your business logic.
Fair enough.
Next complaint.
It seems like you try to be all things
to all supported Lambda runtimes.
So there is of course the whole story
of running your own custom layer,
which generally is not a best practice if you don't have to,
but it does definitely feel
like there are favorites being played.
For example, it is way easier for me to build a function
in Python than it is in COBOL, which is probably as it should be.
But do you find that the experience is subpar
for other less widely deployed languages?
Yeah, it really depends.
Clearly, if you look into the large ecosystem
of serverless plugins that are available,
you'll notice a trend towards things like Python and Node.js.
I think that reflects just the reality of the world we're in right now in the modern web development age. If you do really want COBOL, I mean, I know the head maintainer of the
serverless framework and we can talk with them about it, but I don't expect it to get much
traction because I don't think it's really being demanded. Fair enough. Last time I played with this in significant depth
in the wake of the Capital One overscoped IAM role issue,
it was, you could set an IAM policy,
sorry, an IAM role within a service
and it would apply to all functions in that service.
But scoping that down further
on an individual function basis,
for example, that function needs to be able
to write to DynamoDB,
but none of the others do, was painful. I'd have to wind up rolling a fair bit of custom
CloudFormation myself. So I just shrugged and overscoped IAM roles because I'm reckless.
Is that still the case or has that changed and I just never noticed?
So all of the functions take IAM role statements, so you can actually scope IAM access control at an
individual level. But at the same time, that still creates a role. It doesn't prevent you from
inheriting it in another function. Cloud security is a really tricky thing, and with the Capital One
breach and all, we see that it routinely gets misconfigured. That's kind of a big part
of what we're trying to do around safeguards: to define these roles and limit their access.
But for your specific question, the answer is you can. Right underneath the name of the
function, it takes a parameter called iamRoleStatements, which then takes an array of IAM role
statements: effect allow, action whatever, resource whatever.
Fair enough.
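[Editor's note: in the core framework's syntax, iamRoleStatements lives under provider and applies to every function in the service; per-function scoping as AJ describes it has typically been done with the community serverless-iam-roles-per-function plugin. A minimal sketch, with an illustrative table ARN:]

```yaml
provider:
  name: aws
  iamRoleStatements:              # shared by every function in the service
    - Effect: Allow
      Action:
        - dynamodb:PutItem        # one action, one table -- no wildcards
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/my-table
```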
Another one is I need to use the universe of plugins to do a lot of common things, which tends to cause a few problems.
One, I have to wind up installing them and then NPM shrieks whenever it can't find them
and I get to go back into NodeHell, which isn't really your fault.
But then the quality of some of those plugins is uneven.
There are plugins out
there that let me integrate with CloudFront and Route 53 for domain management, but they in turn
then say, oh, we're going to update your CloudFront distribution. Not going anywhere for a while,
grab a Snickers. And that's painful, for example, when you're doing this as part of a CICD pipeline
because you pay by the minute in CodeBuild. So that feels like it's one of those,
I never quite know whether I can trust a plugin
as something that is a first class citizen
or something that someone beat together
in the middle of the night.
Is there any curation around that?
That's a really good question, Corey.
And the answer is yes.
If you go to serverless.com slash plugins,
we have a full plugin directory that you can search.
There are checkboxes for certified and approved and community.
So they're kind of different levels.
It starts at community, then there's approved and certified.
And that would be what I'd suggest going to as your first resource to kind of determine-
And the ones without the check marks, install Bitcoin miners?
I can't guarantee that.
I haven't read the code, but it is open source and I would encourage you to do that.
Excellent.
Encourage me to what?
Read the code or install Bitcoin miners on other people's systems?
Read the code and then funnel the Bitcoin funds to me.
Thank you.
Absolutely.
Turns out, as we've said on this show before, it is absolutely economical to mine Bitcoin
in the cloud.
The trick is to use someone else's account to do it.
Yeah, it's even trickier with Lambda.
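[Editor's note: a sketch of wiring in one of the community plugins Corey alludes to for domain management. serverless-domain-manager is a real plugin that handles custom domains via CloudFront and Route 53, though the exact config keys shown here should be checked against its docs.]

```yaml
plugins:
  - serverless-domain-manager      # installed via npm alongside the framework

custom:
  customDomain:
    domainName: api.example.com    # illustrative domain
    createRoute53Record: true      # wire up DNS automatically
```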
So one of my personal favorite things to make fun of is naming things.
And this is no exception as well.
First, the fact that you called it serverless at all.
Are we talking about the architectural pattern?
Are we talking about the serverless framework?
Are we talking about the company?
And it's sort of very challenging to disambiguate that from time to time.
First, awesome SEO juice.
But on the other side, it feels like that tends to cause a fair bit of
customer confusion. Yeah. Austin touched on that actually in your first episode with him,
and I would echo his sentiments that- But he hasn't renamed the company since,
so we're touching on it again. I think we're really pushing Serverless
Inc. to brand the actual company, and serverless to define the framework, would be my answer to
that question. Understood. And that's fair. We're not done with naming yet.
You have plugins and you have components and it's going to become increasingly
challenging, at least for me to keep straight, which does which.
Am I the only person that's seeing issues with that?
Uh, I guess, overlap between which side of the fence one of those things would go
on? Or is that something that is ultimately designed to be aligned along the same axis?
I don't think you're the only one confused about that.
I think that'd be a stretch to say.
Serverless components are really about reusing serverless use cases, right?
And serverless plugins are really about enhancing the serverless framework to do other things
on top of the open source offering.
So that would be how I would kind of delineate between the two. That's fair and understood. So I think that really runs out my list of things to complain
about. What haven't I complained about that I really should?
I think we're all still waiting for AWS Lambda VPC cold start time to go down. I don't know
about you, but I come from a very relational database background using MySQL or Postgres.
And right now-
No, my relational database of choice remains Route 53.
Oh, wow.
That's one option.
You do send out-
You can use things that are not databases as databases, and it's a lot of fun and scares
people away from asking further questions.
It's true.
Anything's a metadata service if you try hard and believe in yourself.
Exactly.
One of the things that I would really like to see out of the serverless ecosystem is
a reduction in the cold start time of AWS Lambda functions inside of a VPC.
That would really allow us to start to utilize all the services that Amazon includes, things
like RDS, right, databases, your relational aspects, relational databases that you can't
use right now.
You're kind of stuck to using HTTP implementations. Obviously, we've seen Jeremy
Daly's blog post about Aurora getting a lot better over the last year. And I think it's a great step.
But ultimately, I think, for me, the biggest thing that I'd love to be able to interact with inside
of a serverless application is a relational database. I think that's kind of the last big
piece before all
the services that developers are frequently using become available. Things like Redis
or Memcached or Postgres can actually be utilized in an efficient way because right now that
cold start time is just killer.
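[Editor's note: the VPC attachment that triggered the cold-start penalty AJ describes is a few lines of config in serverless.yml; the IDs below are placeholders.]

```yaml
functions:
  query:
    handler: handler.query
    vpc:                             # attaching to a VPC is what incurred
      securityGroupIds:              # the painful cold starts at the time
        - sg-0123456789abcdef0       # placeholder IDs
      subnetIds:
        - subnet-0123456789abcdef0
```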
Understood. One last question I have for you around this, and it's a bit of a doozy, I suppose, is if I take a look across the ecosystem, and as a cloud economist, I tend to see a fair bit of this.
There doesn't seem to be any expensive problem around serverless technologies, if we restrict that definition to functions, yet.
Sure, S3, you can always store more data there and it gets really expensive. DynamoDB if you're not careful. But Lambda functions always seem to be a little on the strange side as far as no one cares about
the cost. For example, last month, my Lambda bill for all the stuff I do was 27 cents before credits.
And if you take a look at other companies, whenever you see hundreds of dollars in Lambda,
you're seeing many thousands or more in EC2 usage. Are there currently expensive cost side problems in the world of, let's say,
Lambda functions? Yeah, that's a really good question. And I'll actually answer that in a
couple of ways. The first, we should admit that compute is a commodity at this point. Would you
agree? Absolutely. Right. So like any commodity, the providers are finding more and more efficient ways to provide it.
Lambda is sort of the natural evolution of that provision.
Previously AWS was selling their EC2 instances, virtualized instances on top of machines,
but they were guaranteeing a certain amount of memory, a certain amount of CPU power,
really like a certain amount of compute was being sold to you. Lambda takes that
a step further by not guaranteeing you anything and just saying we'll run your function when it
gets called, which allows them to really pack more of these Lambda functions and runtimes into
smaller servers, really, at the end of the day. I mean, we say serverless, but somewhere down there
are servers; I just don't care about them. So from that standpoint, you're correct in saying that compute bills are generally cheap.
Now, the expensive part depends on which services you interact with and how they're set up.
I've read several blog posts about people getting burned by ridiculous DynamoDB and API Gateway bills.
There's been a popular blog post that made the circuit discussing how you can save 30 or 60% of your AWS bill by switching
from API Gateway to Application Load Balancer, which I think is all true. I think a lot of
people getting burned on the cost front comes from not recognizing essentially what they're
provisioning or not necessarily using the correct data model or data access pattern for their use
case. That being said, it makes sense that your Lambda bill will be cheaper than your EC2
bill for the most part, right?
Your EC2 bill is like your house with the air conditioning running all day long versus
your Lambda bill is more like just stepping into your car with the air conditioning running
and then turning it off when you're done.
It's night and day.
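[Editor's note: to put rough numbers on the analogy at 2019 list pricing: a 128 MB function running 100 ms per invocation for a million invocations a month costs about $0.20 in requests plus roughly $0.21 in compute (1M × 0.1 s × 0.125 GB ≈ 12,500 GB-seconds at $0.0000166667 each), around $0.41 total, while even a t3.micro left running all month comes to several dollars.]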
It absolutely is. The concern that I have is
that it's always challenging to wind up convincing a company that's spending
under $300 a month on their Lambda bill to spend at least as much, if not
more, on the tooling around Lambda. I was using a monitoring product for a while
that would tell me in big letters in the dashboard that this month's
Lambda bill is 22 cents.
That is seven cents higher than it was last month.
Maybe look into this.
Yeah.
How about I spend my time doing literally anything else because it's more valuable than
that, and it continually almost eroded the value proposition I was getting.
I was thrilled to pay more for that than I was for my Lambda functions, but the focus
on cost and cost optimization in
that scenario felt like a hell of an anti-pattern.
Yeah, absolutely. I mean, there's a price for your time, and seven cents seems like
it's a little cheap.
A little bit.
That being said, scale to zero and pay-per-execution are what serverless is all about. Only paying
for what you use is what it's all about. And I think we're at that point now where it's really enlightening for people that have been
AWS customers for five, 10 years who are used to paying hundreds of dollars in compute bills
to see new services cost pennies.
Now, is that worth an alert in your inbox?
I don't know.
I would guess that a person in your position doesn't read too much email anyway.
I email 15,000 people a week.
I assure you,
I read more than you'd think. Oh man, that sounds awesome. People have opinions on the internet.
Yeah, and they have to be heard. And that's why we follow you on Twitter, Corey.
Exactly. Wait, people read that? That was a write-only medium.
No, no, you'd be shocked. It's a multiplexing system. One to many.
Okay. So if people want to learn more about serverless or what you're up to
in particular for some godforsaken reason, where can they find you? Yeah, you can check out what
we're doing at www.serverless.com. You can catch us on GitHub, github.com slash serverless. If you
have any interest in following me whatsoever, I don't recommend it for the same reason you don't
recommend people follow you. You can follow me on Twitter; reach out. Well, thank you so much for taking the time to
speak with me today. I appreciate it. Absolutely, Corey. Thanks for having me. Of
course. If you're listening to this show and you love it, please give us a
positive rating on iTunes. If you're listening to this show and can't stand
it, please give us a positive rating on iTunes. I'm Corey Quinn.
This has been AJ Stuyvenberg.
This is Screaming in the Cloud.
This has been this week's episode of Screaming in the Cloud.
You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.
This has been a HumblePod production.
Stay humble.