PurePerformance - 064 Serverless by Design with Danilo Poccia
Episode Date: June 18, 2018
We got to chat with Danilo Poccia (@danilop), Global Serverless Evangelist at Amazon Web Services, on how to best leverage serverless and its new principles to speed up bringing new features to market. We learn about event-driven architectures, continuous deployment into production leveraging canary and linear deployments, as well as how to automate testing when pushing your serverless code through CI/CD. Also – did you know that you can run all your Lambda tests locally on your machine? Check out AWS SAM ( https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html ) and SAM Local ( https://docs.aws.amazon.com/lambda/latest/dg/sam-cli-requirements.html ) for more information. Make sure to check out Danilo's Serverless by Design website ( https://sbd.danilop.net/ ) where you can visually create your end-to-end serverless architecture and get a CloudFormation template to stand up this environment in your AWS account.
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everybody, and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always with me is my co-host Andy Grabner.
Hello Andy.
Hey Brian, how's it going?
Good, you know I gotta say, the last two podcasts we recorded with Donovan, they were in the afternoon for me at least,
which was awesome. I'm back to nine o'clock, 9am recording and still waiting for that coffee to
kick in. So it's, uh, it's back to the old grind of the recording schedule. Um, but of course we're
recording early because we have, you know, guests all over the world, which is really, really
awesome. Um, how's everything going with you, Andy, before we get to the guests?
Anything momentous in your life?
Well, we just came off Cloud Foundry Summit here in Boston,
which was very interesting, talking with people in that space.
And I just also got off from another call with Donovan,
because he promised us yesterday at the recording of the podcast
that he's going to build integrations with TFS and Dynatrace and the unbreakable pipeline. And we just had a meeting and he
brought in one of his guys and it seems we are, this is going full steam ahead. So that's
pretty cool.
That's really awesome. We won't mention that because we're talking to a different...
Oh yeah, that's true. We're talking. So today, now let's switch over. Well, I want to let our guest introduce himself, but what we both just learned is what this strange top-level domain LU stands for, and how it is possible that our guest today, an evangelist on serverless technology with Amazon, ended up with an LU domain. And with that I want to hand it over to Danilo.
And hopefully I got this name right, but please pronounce.
I'm sure you know how to pronounce it best.
But Danilo, can you please introduce yourself to the audience and then we jump right into
the topic, which is serverless, because that's your passion.
And we want to enlighten our listeners.
Hey, hey, thank you.
Thank you so much for inviting me.
So, yeah, my name is Danilo.
You pronounced it perfectly.
And as you said, I'm an evangelist in AWS, and I'm specifically focused on serverless technologies.
I really enjoy the idea of building applications without having to think about servers.
And the LU address, it comes because when I started working for AWS, almost six years ago now, I relocated to Luxembourg.
It's where the European headquarters of Amazon are.
And then my email address stuck.
And for people that may not know, that are not that geographically versed, Luxembourg is a real country in Europe.
It sits between Belgium,
France, and Germany. Yeah, it's really an interesting place. In 30
kilometers you can change country and language, so it's a nice place to visit. And as I was saying
before, in my head it's just filled from kilometer to kilometer with banks.
But I know I'm wrong – that's just what it is in my head.
And by the way,
30 kilometers is about 20 miles.
That's insane.
You're right.
So Danilo, I think the reason why we reached out
to you: by the time this episode
airs, you will have spoken
at DevOne, our conference in Linz.
And I think the topic that you've been speaking about was
serverless by design. And the other thing that you are doing, and depends on when this airs,
we may just be close to it. You're also going to be at the first AWS user group meetup in Linz,
Austria in June.
And I'm assuming that serverless is again
going to be your topic.
So can you tell us a little bit about
what you're advocating right now?
What are the key messages that you want to bring across
for developers when it comes to serverless?
What do people do wrong?
What do they have to know when they get started?
I'm really attracted by this topic because when I started building my first application without servers – now we
use the term serverless, but at the time there was no such term –
in 2014, 2015, I really saw the speed, the velocity of building something and having it in production
very quickly. So I think that
the good side of serverless is that there are no dependencies. So you can just use
one of the languages that are supported by the platform without any specific dependencies.
I think the thing that people should probably dive deeper into is the event-driven model.
Yeah. And I think this is actually something that I keep bringing up. So for me, serverless is kind of new, but I'm also very
excited. I wrote my first Node.js-based Lambda function early January, and the way I always said it,
the biggest challenge that I had – and I didn't know about this in the beginning – is that I learned software
engineering in the 90s.
So I went to high school with a focus on software engineering.
And we learned, you know, monolithic application development.
And when I wrote my first Lambda function, it was actually early this January, I thought, okay, I need to break down this big problem, where I used to write one big function, into smaller functions.
And that kind of made sense.
But then, and I think this is what you're alluding to here,
I made the big mistake of having my first function,
the one that solved the first part of the problem, call the second function,
and the second function called the third function,
and the third function called the fourth function,
all waiting on each other without really embracing event-driven programming where I will basically pass state
from one function via, I don't know, a DynamoDB table, via an SNS topic, something like that,
to the next function that will potentially pick it up, right?
And I think that was very hard for me to understand.
And I think I made a lot of mistakes in the beginning.
Yeah, yeah, probably sequentially triggering functions
is not really useful.
But no, the idea is really to, so when you create a function,
so functions are probably one of the most common examples
of serverless applications.
So you create these functions in the cloud,
they are triggered so that the business logic
in this function is triggered by some event.
And the idea is that you can link your business logic
to how the data is changing.
So for example, if a user of a mobile app
is uploading a document or a picture,
then this will trigger a Lambda function
that can process this document or this picture and maybe extract information and then index this information
in a database. As you said, a database, maybe a relational database or a database like DynamoDB.
And then this can trigger maybe another function that can maybe put this new information in
relation with other data that is in the database and create maybe an index or more structured content and so on.
So that's the idea of chaining business logic with events
that in some perspective can be business events,
like a user uploaded something,
a new user has registered to your service, and so on.
Yeah, and so basically the state, obviously,
the state of your transaction is passed between the function
through, as you said, some type of medium.
It could be a database, and that makes perfect sense.
And then you have all your events that then trigger other functions.
So again, coming back to my own experience – I think I started writing my first Node.js functions, my set of Lambdas, in the wrong way. Is there a place for people that want
to start with serverless, where they should go, where they can learn some of these best practices?
Yeah, there are lots of examples that you can find, and one thing that we did at
AWS is to create a repository, it's called the Serverless Application Repository, where people
can publish and share their applications.
So if you want to create something that is maybe
a little bit more complex than a single function,
like one example that I love is the classical URL shortener.
In this case, that is complex enough to bring some meanings,
but simple enough to build it very quickly.
So there's a user that shared that on the serverless application repository,
and you can go there and you can look at the code.
It's open source code.
You can see the parameters.
And lots of best practices are already there,
so you can really start using this to kickstart your project,
and then you can customize.
And you know what, Andy, I'll even make a plug because I was thinking about
the same topic here that
you're going into.
So I'll make a plug for your AWS
pipeline
repository up in GitHub. What's the name of that one?
It's the AWS
DevOps tutorial. Yeah, so people look for
that one too because I think going back
to, we recently had Josh
Long on the show,
and one of the points he was making is instead of trying to tell people about stuff, obviously,
we're doing things in voice right now, but instead of trying to present people with slides,
he shows people code. And even for me, the concept of serverless, I understood conceptually,
but it still was a bit of an abstract idea until I actually saw the code written in your function in that repo.
So if you're going to the repos that Danilo is mentioning
or if you go to Andy's, you'll actually just see it's literally just some code.
And I think that really just makes it really clear,
but it also gives you an idea then how this all gets done.
So I'll also plug yours as well there as, you know, maybe not best practices, because you're still learning the best practices, Andy.
But I think you already learned a ton of lessons with it.
And I'm sure your function is pretty solid by now.
I don't mean to knock it.
I just know you tell the stories about the things you went through.
But, yeah, I think taking a look at an actual function, you know, clarifies it tremendously.
And, Danilo, what's the repository you're speaking of?
Is that something we can find or you can send a link to so that we can provide that to our listeners?
Yeah, yeah.
It's, so in the AWS website, there's a serverless section.
And in the serverless section, there's the link, because this is a public repository that you can just browse. And then if you are logged in in the console, you…
And I apologize in advance for asking you this,
but do they make you dress up as Sam the squirrel when you go speak at places?
No, the maximum is maybe to have a sticker or a T-shirt with the squirrel.
But we love Sam.
You don't have to dress up as Sam then.
Okay, good, good.
Andy, I don't know if you saw that. There's a squirrel mascot for serverless.
No, I didn't see that.
Oh, yeah.
Yeah, check that out.
So Danilo, I got a question for you.
So your talk is about serverless by design.
What are the things that you can tell developers
besides learning event-driven programming?
What else is there to make sure you really leverage serverless the right way, with all the cool things that come with it – you know, being
flexible to deploy into production? I'm sure there are certain things where we want
to promote canary releases and all that stuff. Can you give us an overview
of what other design principles there are to really leverage serverless? Yeah, definitely.
So my talk there is really about Serverless by Design.
It's a personal project that I built.
It's open source in my GitHub account.
And it's a web interface that you can use to design
an event-driven architecture graphically.
Basically, I map an event-driven architecture to a directed graph, for the
mathematicians in the audience. And then this directed graph is translated into a template that
you can use to build the first version of your application. So you can literally design: there is
an S3 bucket where users can upload. I want to create an API gateway where users can interact
from a mobile application and this triggers a function.
And then you define the flow, and then you can output the template that can be used to build up the application.
And this is probably another important best practice.
As soon as you go past one, two, three functions, the best way to keep the overall serverless application together is to use a
template. So we created the serverless application model. It's an open source model that you can use
to define the functions, the relationship between the functions, the triggers for the functions.
And it's just a YAML file that will describe this with that syntax, and then you can pass it to a
tool like CloudFormation that will build everything up. And this is the initial deployment.
As you said, when you create a serverless application, we have lots of small components;
in a way you are adopting microservices, with smaller functions that are triggered by events.
And if you want to do an update,
we now have this possibility to just with a few lines
in this template to force a canary or a linear update.
So you can say, do a canary deployment for 10 minutes.
And then if everything works,
roll this out to all your users,
or you can do a more gradual deployment,
like a linear deployment,
where every two minutes
or every 10 minutes you add 10% of your users,
until everybody has the new version.
And this is something that we found out works very well
when you have a large application, because you can test whatever you want in a synthetic environment.
But only when you go in production, you can really see what happens.
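In a SAM template, the canary and linear options Danilo mentions are expressed with the `DeploymentPreference` property. A hedged sketch – the function name, handler, and runtime here are placeholders, not from the episode:

```yaml
Resources:
  MyFunction:                      # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      AutoPublishAlias: live       # publish a new version and shift the alias
      DeploymentPreference:
        # Send 10% of traffic to the new version, wait 10 minutes, then
        # shift the rest -- or use a type like Linear10PercentEvery10Minutes
        # to add 10% of traffic every 10 minutes instead.
        Type: Canary10Percent10Minutes
```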
Cool. Yeah, I think I just found your – so it's sbd.danilop.net, right?
That's the serverless by design.
Yes, that's the implementation. It's actually working completely in the browser, so you can just download it and use it. It's just a static website,
running completely on the JavaScript side, in the browser. Cool – serverless!
Yeah, that's pretty cool. But you should probably connect it then and upload some of these templates or blueprints to a repository.
That would be cool.
That would be nice.
Actually, if you go to the help section, there are a few examples.
They are there more to show the possibilities than to be really meaningful.
But there's like how to set up a simple API or something.
Yeah.
Oh, pretty cool.
Yeah, basically, what I like about this tool is that I built it in three days.
And of course, there was more work on patching and stuff.
But the first day was to choose the libraries.
And 95% of the things that it does,
it's built on top of other libraries –
like the network model, the graphical user interface.
I just configure a library to do that.
Yeah, cool.
Hey, I got a couple of other questions for you.
So thanks for that.
I mean, that helps.
You told us that event-driven programming
is something that we need to understand
and event-driven architectures.
From a developer experience perspective,
and I remember when I started, right?
You start with a function,
then you start with another function.
Then I figured out there's certain code
that I want to share between these functions.
And then I put it into its own sub,
I mean, I was using Node.js.
So what are the best practices here?
What are the things people need to figure out
once you grow from a few functions
to let's say many functions
that make up a complex application?
How do you deal with managing all this code?
How do you share code components?
How does that all work?
Can you give us a little more detail on that
and some developer best practices maybe?
Yeah, definitely.
So working with lots of customers,
I saw that, of course, every developer has their own peculiarities.
But I think when you go past two, three, five functions, the idea of having a templating process like SAM or the Serverless Framework really helps, because you have everything written down in files in a schematic format.
And for example, with SAM, what you have to do to package and deploy your source code, starting from the template –
it's two lines.
And, for example, if you use my web tool, when you build the template, I suggest to you these two lines of code.
And then you can use whatever tool you want.
So if you have your own tools for building
continuous integration, continuous deployment, you can just add these two lines as script
to do the packaging and the deployment of the new version. Or you can use tools like
CodePipeline, CodeBuild and CodeDeploy that we built in AWS based on our internal tools.
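The "two lines" Danilo refers to might look like the following with the SAM CLI of the time. This is a sketch; the bucket, stack, and file names are placeholders, and exact flags depend on your SAM CLI version:

```shell
# Package the code, upload it to S3, and rewrite the template to point there...
sam package --template-file template.yaml \
            --s3-bucket my-deployment-bucket \
            --output-template-file packaged.yaml
# ...then create or update the CloudFormation stack from the packaged template.
sam deploy --template-file packaged.yaml \
           --stack-name my-serverless-app \
           --capabilities CAPABILITY_IAM
```

These are also the two script steps you would drop into a CI/CD tool such as CodeBuild or CodePipeline.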
Cool. Hey, and when you just talk about building
and deploying and testing, I mean, when Brian and I,
we have a big history in testing and we are investing a lot
in educating people on how to do proper CI CD
and baking also monitoring into CI CD.
When it comes to the pipeline,
do you, what's the best practice there?
Are you deploying your Lambda function once and then you open it up through a certain API gateway to your tests that you run against it?
Meaning tests would be triggering the events that trigger the functions?
Or do we have special testing frameworks for actually testing these Lambda functions?
Or how does that work?
How does automated testing work in a CI CD world?
So for serverless applications,
we have all these modular functions.
So what we suggest is to first build
a very well-defined unit testing,
maybe using a test-driven approach.
So for every function,
you should know the possible input cases and the outputs of each function.
And we also provide a tool that you can use to generate the events,
because the syntax that you can expect when a file is uploaded from Amazon S3,
for example, is not something you can remember by heart.
So we have a tool, it's called SAM Local,
that you can use to generate these synthetic
events. And you can also use that to run your functions locally. We provide you Docker containers
that reproduce the environment of Lambda so that you can run your functions locally on your
laptop. It supports everything where you can run Docker, so Linux, Windows, or Mac.
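The local workflow Danilo describes might look like this with the SAM CLI. A hedged sketch – the bucket, key, and function names are placeholders, and the exact subcommand flags vary between SAM CLI versions:

```shell
# Generate a synthetic S3 "put" event and save it to a file...
sam local generate-event s3 put --bucket my-bucket --key photo.jpg > event.json
# ...then invoke the function locally inside a Lambda-like Docker container.
sam local invoke MyFunction --event event.json
```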
And in this way, you can do unit testing locally. Then what I suggest
usually is if there is a build phase, put your unit testing there. And then depending on the
complexity of your application, you can have one or more test environments where you test
synthetic transactions that are not based on single function anymore but will cross
multiple functions and you probably have some database that you can always start with the
same data and reproduce the synthetic transaction. And in this way you can go a step further and
test the integration of everything. And for the final step, I always suggest to do
canary or linear deployments in production
because with microservices in general
and also with serverless,
it's simply impossible to test every use case
in a synthetic environment.
And the suggestion that I usually give there
is the same that I got from Netflix, for example,
is test your business metrics, not just your infrastructure metrics.
So if you want to see if a canary deployment is working, find the one, two, maybe three –
not more – business metrics that tell you if your application is working or not.
So for example, for Netflix, the number of plays per second –
so the number of times that people
start playback of a video –
is their master metric.
If they see that this value is changing
with some statistical significance
compared to what they expect,
then probably they broke something with the release.
And then you can use these canary frameworks
to roll back immediately to the previous version.
And you can also automate this with alarms, for example.
Yeah, yeah, that's pretty cool.
And that's also something that we are advocating
and where, you know, obviously we come from it
with our monitoring background,
we support Canary releases
where we can actually monitor your regular traffic
and then we also know what is the traffic
that goes to your canary
and then we give it a metric split up
and we can compare it
and that makes a lot of sense.
I like what you said.
Infrastructure is one piece, monitoring, right?
How is your infrastructure doing?
But more importantly is
what is the actual impact on the end user?
And that's very important to understand these metrics.
Can you remind me again, though?
Maybe you've mentioned it earlier, but for our listeners out there, the Canary frameworks that you have, is this from an AWS perspective?
Do you provide something within AWS?
Is this something that is built, for instance, in the API gateway that allows routing a certain part of traffic to the Canary versus the regular version?
Or is there an additional Canary framework or linear deployment framework that people have to use?
Oh, of course, people can use different frameworks, but there are two main approaches.
One is in the API gateway.
So this works, of course, for a web API. And you can
do a canary deployment. And this is really for monitoring even long term. So you can keep the
canary taking maybe 5% or 10% of your traffic live, not just for a few minutes. You can even
keep it for a few days. And then you can also do A/B testing, so you can compare how the main population of users is behaving compared to the smaller population.
So normally the rule there is: use the smallest number of users for your canary deployment
that can make a significant impact, and you can grab some metrics out of them. So usually a few percentage points is okay.
So this is for web APIs in the API gateway.
And normally I suggest this not just for testing,
it's more even for comparing how the new version
is working for your users.
So are they enjoying the new feature or not?
And then we have something built within Lambda
and that's what we usually call safe deployments.
And it's basically automating deployments
with versions and aliases.
So if you use Lambda, for every function
you can define multiple versions, and versions are immutable.
So there's version one, version two,
version three of your function.
And then you can have an alias like production
that links to one of those versions.
So production is version two,
and I'm working maybe on a new version.
What we added last year is the possibility
to have an alias that can link
two different versions with weights.
So you can say production is now 95% version two
and 5% is version three.
So a few users will see the new version of the function.
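The weighted alias Danilo describes can be set up directly with the AWS CLI. A sketch under assumptions – the function and alias names are placeholders, and the routing-config shorthand may differ slightly by CLI version:

```shell
# Publish the current code as a new immutable version...
aws lambda publish-version --function-name my-function
# ...then keep the "production" alias on version 2 while routing
# 5% of invocations to version 3.
aws lambda update-alias --function-name my-function --name production \
    --function-version 2 \
    --routing-config 'AdditionalVersionWeights={"3"=0.05}'
```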
And this is at Lambda level. Of course, this is useful, but it's difficult to orchestrate.
So we integrated the management of this with CodeDeploy and with SAM. So now basically in a
template, you can just say all the functions of my serverless application do a canary deployment with, as
I said, like 10% of the users for 30 minutes or something like that.
And then CodeDeploy will automatically create the new version, will shift the traffic from
the old version to the new versions.
And you can add monitoring to that. So you can say, if some alarm has been raised,
or if a monitoring function that I can define –
one that will monitor the environment before and after the release –
senses something wrong, then you automatically roll back.
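In a SAM template, that rollback-on-alarm behavior extends the `DeploymentPreference` shown for an `AWS::Serverless::Function`. A hedged fragment – the alarm and hook function names here are made up for illustration:

```yaml
      DeploymentPreference:
        Type: Canary10Percent30Minutes
        Alarms:
          # Roll back automatically if this CloudWatch alarm fires
          # during the deployment (hypothetical alarm resource).
          - !Ref MyErrorsAlarm
        Hooks:
          # Hypothetical validation Lambdas run before and after
          # traffic shifting; a failure triggers a rollback.
          PreTraffic: !Ref PreTrafficCheckFunction
          PostTraffic: !Ref PostTrafficCheckFunction
```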
That's pretty cool.
Andy, I wanted to ask a question, unless you had any follow-up on that topic?
I just have one quick one. So if you're doing, let's say, 10% on the canary, is this sticky sessions? Is there session stickiness, meaning the same user gets exposed to the same version? Or is it just random round robin?
No, it's completely random round robin, and it's also at the function level. So if a user wants to complete a transaction – that's where tools like Dynatrace
can help, to trace across different microservices. So if you go through five functions, for each function,
you can go on the old or the new version.
So it's really a mixed up test.
And in a way for testing, I think it's good
because you can test all the possible permutations
and one of the main characteristics of microservices is
that you can independently deploy each function or each
service from the others. So you're really testing that. If you want to test a coordinated set of
microservices together, then Canary releases with the API gateway is the way to go because
there you can say, okay, there is this new version with all the dependencies. And then
if a user goes on the new version, they will see only the
new version of all the functions of the microservices that build up my application.
Cool.
So I wanted to ask Andy, I forget who it was that was on. I think it was you and I talking. See,
this is what happens when it's early morning recording for me. My brain doesn't quite work.
But we were speaking, I believe we were speaking to somebody recently who was talking about
they skipped over containers and went straight to serverless.
Was that, do you recall that, Andy?
I think it was Fender.
Wasn't it Fender?
Oh, yes.
That's it.
That's it.
Okay.
So, yeah, it was an interesting topic.
As they were going to the cloud, they said, all right, you know, we see everyone's on
containers, but we see serverless. We're going to just go straight for serverless. And what I wanted to
find out from you is, I think, and I might be wrong, this is why I asked you the expert,
obviously. Whenever somebody is breaking down or creating a
microservices application, what guidance would you have, if any, to give to people to say,
certain things maybe you should
run in a container and certain things should maybe run serverless? Is there a decision point to say,
don't even bother doing this in serverless, go ahead and run that
in a container?
Oh well, we've seen lots of different workloads on serverless.
And normally what I suggest to customers, especially if they want to build something that is completely new, is: write your serverless applications so that they are portable.
So the idea is just to apply the old-style architectural patterns to the new architectures that we now have with containers
and serverless.
So in this case, it's the adapter model.
So if you want to build something that runs as a function, you can just decouple the
event wrapper –
the part that is managing the event for the function – from your business logic, and
then you can add a web interface, for example,
to your business logic and the same function can run
in Lambda or can be dockerized in a container.
And, for example, I just published an article
on Medium about that – about how you can write the same code base
and it can run and be tested, even automatically,
on containers and functions.
So technically that's my suggestion
so that you don't force yourself to go only
in one direction.
But of course, with serverless,
we've seen some common use cases where it's very easy
to see the advantages.
So whenever you're building a web application
or a mobile backend, even an IoT backend,
I don't know if you have one of those Roomba devices,
but the Roomba devices,
they are connected with AWS using IoT,
and they use Lambda function
for interacting with the backend.
So mobile and IoT backends is a great use case.
We've also seen data processing –
strangely enough, it's something that
can really be interesting in a serverless architecture,
because you can really scale from zero to 1000
or more functions working in parallel on your data.
And then you scale down to zero in a matter of seconds.
And this is something that is quite interesting.
There's a customer from the U.S.
that probably you know better than me.
It's called Fannie Mae.
It's the Federal National Mortgage Association.
They do Monte Carlo simulations on their mortgage data
using Lambda functions,
and they can scale up and down
in a way that they really enjoy.
And that's another interesting use case.
Then we have chatbots.
So probably the integration with Alexa,
with Amazon Alexa was probably one of the drivers,
but we see lots of other platforms as well,
using serverless architecture for building chatbots
and to link the logic of
the chatbot with maybe the interaction with the physical world, for example, when you
want Alexa to do something.
And another strange use case that I think is really interesting, because usually it's
really at cost zero, is all those old-style automation scripts that we have in our
infrastructures to automate backups, moving data, cleaning up storage –
those things that you put into some crontab so they are executed.
So I've seen a lot, especially in enterprise contexts,
people starting to adopt serverless, replacing the scripts with functions
that can be triggered with a cron schedule.
And since there's a huge free tier with Lambda,
you really don't pay anything.
And you solve one of those problems of the scripts –
that sometimes they don't run
because they are installed on one physical server
and you don't know why.
And using Lambda, you can put them in high availability
and you don't pay anything for that.
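A cron-replacement function like the ones Danilo describes can be declared in a SAM template with a Schedule event. A hedged sketch – the resource name, handler, and schedule are placeholders:

```yaml
Resources:
  NightlyCleanup:                   # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: cleanup.handler
      Runtime: nodejs8.10
      Events:
        Nightly:
          Type: Schedule
          Properties:
            Schedule: cron(0 3 * * ? *)   # every day at 03:00 UTC
```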
That's pretty cool.
Actually, this was one of my use cases. I used
Lambda in my DevOps scenario for basically linking different tools, right? I'm basically executing
remediation actions. So what you said earlier about executing certain batch jobs to clean up things or
to restart services – that's stuff that I built into Lambda. The other example that Brian brought up – so we talked with Fender,
and I believe Fender is one of your, at least in the U.S.,
I know they have been invited to the AWS Lofts in San Francisco
to speak about their architecture.
And we have him on podcast, and he was basically saying,
and Brian, correct me if I'm wrong,
but I think he was saying they run their complete infrastructure
for $80 a month on serverless
and another $80 for EC2 because
they still have a couple of, let's say, monolithic pieces that they just run on EC2.
And that was just fantastic. And then we were all wondering, how does
Amazon actually make money with these companies? Because $80
doesn't get you.
It's obviously not a whole lot.
I mean, it's fascinating.
Right?
It's amazing.
I mean, it's not an extremely large application.
It's a smaller thing that they're running.
It's not the entire Fender property,
but it is still a significant application set.
And yeah, the fact that they're running it for $80.
And that kind of ties in, Andy, to the biggest part of performance, right? I mean, obviously everyone likes response time
as a key indicator of performance, but that goes doubly for serverless, because
however long it's running is how much you're paying. So it becomes even that much more important.
And it's funny that you don't have to think about, well, what's the CPU utilization? It's just a different mindset, I guess.
I have one question on lock-in.
So a lot of the times when I talk with people,
they say, well, we don't know if we should go
with a certain vendor on serverless
because we fear a lock-in.
How do you counter that argument?
Because you mentioned earlier, right,
you can abstract all of your code,
and there's frameworks out there that allow you to port things over.
But I remember when I wrote my Lambda functions, there was a certain lock-in
because I was using certain AWS libraries to interact with the rest of the AWS ecosystem.
So what's the recommendations you give to enterprise companies that say, we want to
go serverless, but we also want to keep our door open to potentially move it to other
vendors?
Yeah.
So as I said, from a code base perspective, you can decouple the integration with the
event platform that triggers your function from the business logic.
And you can have the same code base running,
for example, in Docker containers and Lambda functions.
So that's my top suggestion.
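One way to sketch that decoupling (the event shape is an S3-style notification; the function names and the "resize" task are illustrative, not a specific AWS API): the Lambda handler only unpacks the event, while the business logic is a plain function you could just as well call from a container-based worker.

```python
def resize_image(bucket, key):
    """Pure business logic: no Lambda types, no event shapes.
    This is the part you could reuse unchanged in a Docker container
    or a plain script."""
    # Real work would happen here; we just return a description of it.
    return f"resized {bucket}/{key}"

def lambda_handler(event, context):
    """Thin adapter: translate the triggering event (here, an S3-style
    notification) into plain arguments for the business logic."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append(resize_image(s3["bucket"]["name"], s3["object"]["key"]))
    return results
```

Moving to another platform then only means rewriting the thin adapter, not the business logic.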
On the other side, you get lots of things for free
by using a serverless platform.
So for example, if you deploy a function on Lambda,
on AWS Lambda, it will be automatically scalable.
So you don't need to set up a load balancer.
You don't need to configure auto scaling.
From a security perspective,
you give permission to each Lambda function to do something
like you can read data from this S3 bucket.
You can write to this database table and so on.
So you also get this security control,
very granular as part of the platform.
You get versioning, deployments, and so on.
So what I think is that, of course, if you change platform,
you will need to rebuild all those features
because they were just there.
But that's probably the main advantage
because especially small companies,
when they start like a new startup building something,
they can go out creating a prototype
and then the prototype is almost ready for production
because security is built in, availability is built in,
scalability is built in.
If you want to leave the platform,
then you will need to re-implement all
those features.
Those are great points, obviously. Yeah, cool. Danilo, is there
anything else that you typically give developers when you talk with them, or people that
make decisions on serverless, yes or no? Anything else that we have not covered today? Because I think the goal of this podcast is
to tell people more about, you know,
if serverless and when serverless,
then, you know, best practices,
like as you said, serverless by design.
Anything else we missed?
Oh, of course there's lots of different topics,
but I think that the key idea for me for serverless is not just cost
reduction, like you said, but it's more velocity and agility in development.
I think someone once on Twitter was saying it's easier to build a prototype on Lambda
than to discuss it on Twitter.
So that's the main point. No, it's really a way to build something that is like 90% ready to go into maybe a controlled production environment.
And that's the idea.
So don't waste time maybe trying to set up some complex environment to support your idea.
Just start and build it. And because we have a lot of enterprise customers out there
that obviously deal with their existing legacy applications.
And what we see is people are now exploring different models
on how serverless should be used in that respect.
It's either building some new capability with serverless
and then talking with the, let's say, legacy backend.
Another approach that I've seen is if you know that you have a monolith, and you know
there are certain features within that monolith that you would like to kind of innovate
on faster, but you're constrained by the fact that it is within a monolith.
I've seen, and have you seen this as well, people rebuilding certain features with Lambda,
so with serverless,
and then eventually kind of fading out
the monolithic piece of it?
Is that another interesting approach?
Yeah, definitely.
This is part of the overall process
of peeling the onion of the monolith.
So you start by removing some features
from the monolithic application,
replacing those features with something that is smaller,
faster, more granular,
and Lambda function is definitely an option.
So I've seen that done a lot.
And for example, like I remember,
the integration of serverless applications
with legacy applications is something that
we were focusing on with the new features
that we launched last year.
So for example, you can connect your Lambda function
to your virtual network on AWS,
or even with your on-premise network
using tools like Direct Connect.
So you can have network visibility
from a Lambda function of your legacy application.
And then you can also now control scalability.
Because we often say with serverless
that you can now scale from zero to 1,000 concurrent
executions in a few seconds.
But if your legacy application is connected
to those functions, maybe it doesn't like it very much.
And so you can now control the concurrency at the function level.
So for example, if you have a function that connects
to your on-premise relational database,
you can say, I don't want to have more than 100
concurrent connections because otherwise
that can overload my database.
And then you can manage maybe how these functions are invoked
and absorb any delay with exponential backoff
or these kinds of strategies.
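The backoff strategy Danilo mentions can be sketched in a few lines (the delays, cap, and retry count here are illustrative; production code would also add jitter, and the AWS SDKs have their own built-in retry support):

```python
import time

def backoff_delays(max_retries, base=1.0, cap=30.0):
    """Exponential backoff: 1s, 2s, 4s, ... capped so a retry storm
    doesn't hammer the downstream legacy database."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def call_with_backoff(fn, max_retries=5, sleep=time.sleep):
    """Invoke fn, retrying with exponential backoff on failure."""
    for delay in backoff_delays(max_retries):
        try:
            return fn()
        except Exception:
            sleep(delay)
    return fn()  # final attempt; let any exception propagate
```

The sleep function is injectable so the retry behavior can be tested without actually waiting.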
But it's something that we did last year,
especially for enterprise customers.
And I've seen now interesting use cases in the area.
For example, Coca-Cola,
they built like a serverless component
to really absorb the speed of a new application
against their legacy consumer database.
So I think this is US only, but with Coca-Cola,
you can go to a vending machine and get credits when you get drinks.
And they had the problem that these credits on a mobile app
would go out of sync from what they had inside their database.
And this was because they were just reading from the consumer database too quickly.
So in this case, they used a state machine,
which is another feature that we have
as part of AWS Step Functions.
The state machine would just wait for a few seconds
before reading the updated value
from the consumer database.
And in this way, they integrated the new fast application
that is for the mobile app
with their slower legacy stuff.
Okay, that's cool. Yeah, and Andy, another
example I saw recently with a commerce application is where there's these
background jobs running, right? It's more of a monolithic application, but this I
thought was a really good use for it, where periodically, either once a day or
once an hour, a job would run that would re-index the catalog, the inventory catalog.
Instead of running that on the actual application, so that every time it runs,
it's suddenly stealing CPU and memory away from the application that's trying to run and serve the customers,
and instead of putting that into a container that's going to be running all the time,
but only executing once every, you know,
however many hours or minutes or whatever, it was a perfect candidate. They moved
that into a serverless function, where it just runs, and it saved them a bunch. But it was really,
really just novel. Like, you know, when you're talking in terms of looking for those ideal
candidates to get started with, it could even be something like a background job, right?
Yeah, exactly. Cool. All right. Brian, what do you think, should we sum it up?
Let's do it. Let's summon the Summerator.
Let's summon the Summerator. So Danilo, the way this always works, it's just, obviously,
summerator let's summon the summerator so danilo the way this always works it's just uh obviously
with the background of me being austrian and the determinator and her and uh and the Governator. We have the Summerator.
So, well, thank you so much for educating us on serverless by design, what this means.
I think I learned a lot today. First of all, there's the SAM project that we can check out. Also, the Serverless by Design project that you built, where we can visually design our complete end-to-end serverless architecture.
And then it actually creates a CloudFormation template that helps me stand up everything. I think I learned a lot about how with serverless in general, the real benefit, I think, is speed of innovation, allowing people to
innovate faster on individual features or functions that they want to push out, exposing
them to the outside world using canary releases.
We learned that there's both on the API gateway capabilities to do canary releases, but also
on the Lambda side itself, where you have the different versions and you can use the
alias mapping.
So that was very interesting.
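As a sketch of that alias-based canary: a Lambda alias can split traffic between two published versions through a routing configuration. The alias name, version numbers, and 10% weight below are just examples, and the actual boto3 call is shown commented out since it needs real credentials and published function versions.

```python
def canary_routing_config(stable_version, canary_version, canary_weight):
    """Build update_alias arguments that send canary_weight of traffic to
    the new version and the rest to the stable one."""
    if not 0.0 < canary_weight < 1.0:
        raise ValueError("canary weight must be strictly between 0 and 1")
    return {
        "Name": "live",  # hypothetical alias name
        "FunctionVersion": stable_version,
        "RoutingConfig": {
            "AdditionalVersionWeights": {canary_version: canary_weight}
        },
    }

# With real credentials you would apply it roughly like this:
# import boto3
# client = boto3.client("lambda")
# client.update_alias(FunctionName="my-function",
#                     **canary_routing_config("3", "4", 0.1))
```

Shifting the weight toward 1.0 over time (and then promoting the new version) gives the linear/canary deployment behavior discussed earlier in the episode.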
Coming back to the very beginning where I struggled
with my first adoption of serverless,
we all have to understand really how event-driven
architectures have to work, because otherwise you just run
into the classical mistake that I made of just calling
different functions sequentially,
and obviously that doesn't help anybody.
There's more information out there, obviously, Danilo,
that you put out, and there's more information,
I assume, on the AWS website.
Also, the serverless repository is a great way
to see what type of serverless use cases are out there,
some implementations that people can look into.
And we also have our own Lambda functions out there. If you go
to the AWS DevOps tutorial
on GitHub, you'll see that.
Thank you so much for being part of
DevOne in Austria. And also
thanks for joining us
for our first AWS meetup in Linz,
Austria, in June.
Yeah, that's it. And yeah, thank
you so much. It's been great.
Great conversation.
It's been my pleasure.
So thank you so much.
And if anyone listening to this podcast builds something, just ping me on Twitter.
It would be great for me to know.
And what's your Twitter handle?
It's Danilop.
I'm very boring.
I'm Danilop everything.
It's even my email at Amazon.com.
All right. Well, we'll have the link to your Twitter handle in there,
but we will not put a link to your email, because then crawlers will get it and start spamming you.
Thank you.
I have nothing extra to add.
I think Andy did a great summary. I do want to see if we can possibly encourage
your
superiors to make you dress up as
the squirrel when you present.
Actually,
Tim Wagner, our
director for
the API Gateway and Lambda, he
was dressed up as a squirrel at
reInvent. And I think you can find
pictures.
Well, then you're off the hook.
Thank you for taking your time to be able to talk with us today.
We really appreciate it.
And to all of our listeners, if you have any other questions,
please contact Danilo.
You can also send them to us,
especially if you have topic ideas for another show
or some topics you would like us to discuss and dive deeper into.
We'd love to get some feedback.
Anything else from anybody else or are we all good here?
All right. Well, thank you very much. Have a great day, everybody.
Thank you.
Thank you. Bye.