PurePerformance - So you think you should Serverless? Things to know before you do with Sebastian Vietz!
Episode Date: August 26, 2024
Has one of the decision makers in your organization decided that you have to go "all in on technology X" because they saw a great presentation at a conference or got a great sales pitch from a vendor? If that is the case, then this episode is for you, and you should forward it to those decision makers.
Sebastian Vietz, Director of Reliability Engineering and host of the Reliability Enablers Podcast, shares his thoughts on considerations when picking a technology like serverless. We discuss the importance of knowing limits, best-fit architectural patterns, and things that should influence your technology decisions! Being aware of cold starts, a 20,000 concurrent request limit, or 512 MB being an ideal size for Lambda are just some of the things we can all learn from Sebastian.
Additional links we discussed:
Sebastian's LinkedIn: https://www.linkedin.com/in/sebastianvietz/
Reliability Podcast: https://podnews.net/podcast/ibe8k
More things on serverless: https://serverlessland.com/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always I have with me my co-host Andy Grabner.
Andy, how are you doing today?
Well, I just want to tell you that your prediction was right.
Yeah, I figured it might be. I didn't know if you'd be able to resist.
Either you wanted to prove my prediction wrong and you'd do nothing, or you're like, no, I'm going to
do it.
So for everyone listening, I am
officially a fortune teller. I can see into
the future. I predicted Andy
would mock me during the intro like he always
does. And he
did. So I'm glad of it. It makes
my day. Not always.
Sometimes. It's just funny.
Well, of course it's funny. I'm not coming down on you.
I think it's,
I expect it.
It's almost predictable
at this point, though.
But I'm proud to say.
Hey, if it's predictable,
then the question is,
are you still a fortune teller
or is it just too easy
because it's happening
most of the time?
I say,
I knew you were going
to ask me that, too.
So I'm going to say
I'm a fortune teller
or I can see the future.
The good news is you can probably hear I'm a little
bit more with it today.
I'm officially off my painkillers from my collarbone accident.
Unfortunately
for our listeners, that means I might have more dumb
jokes during the show. We'll have to see where it takes us.
But speaking of the show, Andy, should we just bypass all this and get into it?
Yeah, I think we should.
And we've at least made our guest laugh once or twice already.
And without further ado, I want to welcome Sebastian Vietz on the show.
And thank you so much for being here. Actually, I could say, Servus, Grüß dich, wie geht's? (Hello, how are you doing?)
Exactly, yes. Pretty good, everything is okay with me.
I just started my vacation, just for the week.
So it's perfect timing.
Awesome.
And I think we just freaked out Brian because he thought maybe we'd just go all the way in German.
No, I might bail at that, Brian.
So don't worry, it's not going to happen. Well, welcome to the show.
Hey, Sebastian, thank you so much. Back in June, on my last trip to the U.S., I went to North Carolina, and you actually came down from Canada, where you live. We had a meeting, we then had dinner, and a lot of great dinner conversations.
I want to let you introduce yourself,
but I just want to quickly highlight here something
that I find really interesting when I read
and look at your LinkedIn profile.
It says, an enabler of people,
reliability engineering advocate, podcast host,
and a CNO, the chief naming officer,
which I find really hilarious.
So, Sebastian, for the people that have never heard about you,
because maybe they have never heard your podcast, but will from now on,
because the link to your podcast will be in our description.
Who are you? What drives you? What motivates you?
Besides finding, obviously, a lot of interesting names.
And not all of those names are my creation.
Actually, the last one, the chief naming officer,
was one of my peers in my current company
that gave that one to me
because I have a tendency to want to clarify things
and make things clear.
So I don't like acronyms a whole lot.
So I'm trying to come up with names that are simple and clear for everybody so that they understand what we're talking about.
So that's kind of where that chief naming officer idea came from.
Yeah, I'm Sebastian.
I have been in technology now for longer than I want to admit on this podcast, long enough, I think, to have developed a few opinions about technology,
especially in the last, I want to say,
10 years, reliability engineering.
So that's my passion.
That's where my heart is.
That's also where I focus my sort of private passion, together with my co-host, Ash Patel. We're doing the Reliability Enablers
podcast together, if anybody's interested. We're very much focused on reliability engineering and
adjacent disciplines, trying to provide a training platform, a teaching platform, trying to educate people on reliability engineering and its value.
So that's kind of what we're focusing on when we're doing our podcast. And in my profession
at the moment, people call me a director of reliability engineering for Compass Digital,
which is part of a larger organization called the Compass Group.
And the teams that I support with my reliability engineering team are predominantly product
engineering teams that build hospitality technology.
So things that you would find in cafeterias, in large offices, or in locations like universities, right?
Or in large event spaces, you know, everything these days,
we all know it is enabled and facilitated through technology.
And so that hospitality technology is what these product engineering teams are creating
that we then hand over to our customers.
And they're using those apps, websites,
backend systems within their locations
in order to facilitate basically the serving of food.
I just want to leave it here for a moment.
I think that's enough of an introduction.
Yeah.
You basically make sure that everybody gets the food and the drinks that they need at the time when they need it, obviously, when they order it.
It's really interesting.
And thanks for the background.
Also, thanks for the training platform.
I think this is something, folks, if you're listening and now you're really curious about the podcast and also what else Sebastian is providing, check out the description of the blog post.
Not the blog post, this is going to be a podcast, obviously, and follow up on this.
Sebastian, when we were sitting back at the dinner, we had a long, long conversation.
I think we said, hey, we need to get on the podcast together.
And then we said, you actually, you know, we talked a lot about serverless technology.
And I think one of the things you actually said is,
there's so much promise out there with serverless
and all of these vendors, they're saying,
just bring your domain knowledge.
This is all you need to know.
We care about all the rest.
And then you said it would be interesting to do a podcast
on how much an engineer really needs to know about the underlying infrastructure and other components. Because it sounds really nice coming from a marketing team, it sounds really nice on paper or on a PowerPoint presentation or on a website, but it's not always that easy.
And so I would really like you to dive in into that topic, because I know you're really
passionate about that
and you have, as you said, built a couple of opinions, I'm sure based on real-life experience, on the promise of serverless and what people need to know about it before they jump in and just buy whatever people sell them that serverless is all about.
Yeah, it's near and dear to my heart at the moment
because I'm right in the middle of it, very exposed to the topic because at my organization,
we embrace serverless quite a bit. But we obviously have gone past the marketing
spiel, past the marketing message, and we've seen what the reality looks like when you're embracing that topic.
So that's what I want to talk a little bit about.
Is it really that easy and that simple as various companies make it sound?
And I'm of the belief that it's, again, one of those shiny objects, right?
You have to be very careful that when you embrace serverless,
that you're not going in blindly and making big assumptions
as to the complexity, or lack thereof, as it's being sold to many,
that you will encounter.
Because the reality will strike very soon where it's not
as easy as it sounds.
It never really is.
So people shouldn't be too surprised, right?
But my hands-on experience at the moment tells me that there are a couple of things that
everybody needs to consider and cannot forget just because
somebody tells you, yeah, you don't have to worry about infrastructure all that much anymore because
we are now taking care of that for you. And we can dive a little bit deeper into what that actually
means. But I want to highlight then a few things that people and companies and teams that want to embrace that set of technologies
really need to look out for as they're going down that path.
Well, actually, let's go into this because it's interesting that you said
that you obviously went past this.
How many years have you been embracing that kind of architectural pattern
or whatever you want to call it?
So I joined my most recent organization
and I landed in an organization where they had already,
for the better part of like five years,
embraced the serverless ecosystem, if you will.
And so part of the reason why I ended up in the organization
was because they had a bunch of those challenges that I hope we get to talk about a little bit.
And so now you end up potentially needing within your organization a group that I would be building, like a group of reliability engineers, because you ended up potentially hitting a hard wall, right? You
get to a point where, hmm, didn't appreciate that I had to put so much effort into serverless,
you know, didn't appreciate I had to learn so much about the underlying infrastructure
in order to take full advantage of that serverless offering or not fall into some of the common pitfalls.
So there is a little bit more to it.
And if you are too blinded by the topic
or don't invest early on in the right level of training
for all your engineers
so that they have a bit of a decent baseline
of what, for example, AWS's service catalog looks like
in terms of the serverless technologies
and how they're actually working high level under the hood
so that when you embrace it,
when you actually want to take advantage of those technologies,
that you're going in with the right mindset. Because the mindset here is very different for
serverless, right? You're not going in the traditional way and you're not building out
a web server, you don't build out a backend system and then attach a database to it.
It's different how you need to think about
the coding practices that you need to now implement
for yourself as an individual engineer
or for your team or for a larger group of teams.
So that's why I'm saying training,
not to be underestimated,
bring in some of the subject matter experts that actually know how these serverless technologies work under the hood.
And really learn how to apply these serverless technologies and know very early on where the boundaries are.
What is it really good at?
And what is it not good at?
Where are you better served going back
to your traditional compute model,
and then choose that instead
because there are limitations.
Like to every technology,
there's limitations to what you can do
and accomplish with serverless as well.
So just go in with open eyes, be aware that it's not as simple as people make it sound,
and learn a little bit about the underpinning technology that makes it possible.
Just enough to make educated decisions when you're deciding,
I take this new feature or this new application that I want to build,
and I build it in a mostly serverless fashion or I build it in a traditional fashion or I choose a hybrid between the two.
But then knowing which pieces are going to be the serverless pieces and which pieces are going to be the more traditional pieces that I need to add in order to make my application, my feature work the way my business would expect it
and the way my customers would expect it.
I think that last point is really important.
I see all too often we come across customers, or you hear about other people too,
who were moving to 100% serverless, right? There's this idea that, you know,
I think it stems from the idea of
if you go from, let's say, bare metal
to VMs to containers,
there's a lot of, you move everything
for the most part.
Even when you get to Kubernetes,
a lot of that is you're going to
pretty much move everything to that.
That is an underlying data center infrastructure
to those.
With serverless,
it's more about it serves a purpose.
And I think way too many people fall in the same trap
of we're going to do everything serverless
because we're going to learn it.
We're going to put everything serverless, right? Except maybe our database, right?
But it's that lack of understanding,
lack of learning the underpinnings, lack of, I love when you
said learning the limitations. What is serverless good for? What is it
not good for? That gets people trapped into this idea
of, we're going to move everything, we're not going to consider anything else, because we want to be fancy, we want to be able to claim we're a hundred percent serverless shop, right? You don't get a badge of honor for moving everything to serverless. You get the badge of honor for having a fantastic performing piece of, you know, your enterprise software, whatever you're serving to your customers.
And that road can lead to a lot of danger
because not only now are you saying,
I want to move everything to serverless
without understanding the limitations,
which could hurt you on that side,
but you're also then ignoring the implications
of potentially the other components that you need to run.
You need to consider security when you're making these decisions.
You need to consider performance, observability when you're making these decisions.
You need to consider how easy it is to be able to do canary releases or whatever.
And I'm not saying serverless has any ticks against this on this stuff, but all these other factors you're ignoring, and these are factors that have to go into the plan, the architecture for what you're going to do as a business, as a whole.
Go big picture.
Find what you could do, but you need that understanding first, which is, I think, the big point you were driving at. You need to bring in some people from the outside if you don't have them in-house, or you need to spend time learning first.
Andy, it's funny because, to me, it ties everything back to Hibernate, right?
I thought about the same thing, yeah.
So the classic problem with Hibernate,
it's a database caching proxy or something, whatever.
I don't know the right way to categorize it.
An ORM, an object-relational mapper.
Yeah, but it's like, let's just get this, toss it in,
and it'll do what it does, right?
And that's when you end up with all the N plus one queries,
all the other problems that occur
because you haven't learned what you're doing first.
And unlike a hobby, right, where you can take time, mess things up, experiment, no harm, no foul.
You're going to learn great things from that.
But when you're running this in a business, it's a whole different ballgame, right?
And it'll be really great to see when people, you know, slow down a little bit to make these considerations.
Anyway, I'm reiterating what you said at this point. But I think that's a
very important point that you made there.
Sorry, and just to add to what Brian said, maybe we take a moment to go back to the one point that you made about architecture, because that speaks to how well you actually design with serverless. And again, one other promise that comes along with serverless very often is,
you don't have to do that anymore.
You don't have to put that thought in upfront
because things are now so easy
and you just stitch them together and then boom,
they just work and they scale magically, right?
So capacity analysis, performance,
everything happens magically.
You don't
have to worry about those things anymore. Reality couldn't be farther from the truth,
right? So it's really important that the traditional way of thinking about designing
technology ecosystems, those rules have not changed just because you're choosing this
particular group of technologies that happens to work a little differently.
And if it's well understood,
then maybe you get to spend a little bit more time
on product engineering versus your infrastructure considerations.
But don't go into that and think that some of those things
that we've done traditionally,
like think about observability by design, performance, capacity,
testability, maintainability, sustainability,
all of those abilities that you want to make sure they're covered
before you run into this,
that you still need to put that effort in.
You can't not do that because otherwise you end up in a chaotic situation
that you will not like.
I promise you that.
Let's talk about some of these limitations and give people a real example of the architecture. But before that, I just want to make one more comment about something I've seen, whether it's serverless or other technologies: hypes are often driven by decision makers.
And correct me if I'm wrong,
but I've seen decision makers going to conferences
or going to events,
hearing from their peers,
from their competitors
or from other organizations.
They go on stage and say,
we are fully serverless,
or we are all in on Kubernetes. And now we're so much better. We saved so much cost. And then somebody takes this and says, wow, they did this, they went all serverless, they saved 80% of the costs. Let's do the same thing. And they just force it down the throat of their organization, not realizing that what one organization went through is not applicable to every organization.
And I think, and this is true for any type of hype, whether it's in technology or in
other areas, I'm sure, in our lives.
But I guess this could lead to something that Brian said, when we sometimes run into organizations
that have an all-serverless strategy or an all-Kubernetes strategy
or an all-XYZ strategy.
Sometimes these decisions are driven
by a hype that they see from other people
that were successful with it,
or at least they say they were successful with it on stage,
and then just follow that blindly.
Can I just jump in there?
Because I love this.
Because this is very true. The funny thing is, the same
people don't actually talk a whole lot about the challenges that they encounter along
the way. What they also often fail to mention is that the way we went about it, and the reasons why we went that way, is very contextual. For our business context
and for our customers
and for what we wanted to do
in terms of our growth trajectory,
that path made sense,
including all the challenges
and pitfalls and learnings.
We worked through all of this.
None of the stuff often appears
in sort of these public stories
that people are telling, right?
You both probably watch a lot of this content too, just like myself, right?
And I always wonder, what are you not saying?
What are you not telling me, right?
What are some of the things where you were really grinding,
where you had to like bang your head against the wall
and you just couldn't make progress for a month or two,
right? That's not usually part of the success story that people are telling.
Sorry, go ahead. Yeah. No, I was actually saying maybe we should now take this as an opportunity
to actually tell people about the things they don't hear from those conference talks that only show the shiny side of
the coin.
I think it should be part of the honest reflection, right? If you tell a success story, also tell everybody at least a few of the challenges that you encountered. Like heartfelt challenges, where you really banged your head, and where you had moments in your journey where you thought, you know, did we take the right path here? Did we really just get caught up again in some of the hype cycles that we swore we would not get caught up in, but it happened anyway? And so now we are really wondering,
did we take the right path?
Did we overextend ourselves?
Did we make a bunch of assumptions
as part of our organization
that all of our engineers
are perfectly capable of doing this,
going to Kubernetes or going to serverless?
But we haven't really invested
in training all that much.
How are these engineers that are already there
supposed to know all those things, right?
Because traditionally, we haven't worked with these technologies, right?
So that type of knowledge doesn't just appear out of thin air.
You actually have to invest time and effort into it,
train up everybody that you want
to make part of that journey, right? And again, I don't always hear a lot about these sort of
hurdles and challenges that people have to encounter. There's no way that you don't,
right? But they don't become public as part of these success stories.
That's why, Sebastian, let's take this as an opportunity now.
Obviously, at the company that you work for, Compass, you are very much invested in serverless,
and I assume you're still there because obviously it allows you to become successful as an organization
because you build great software products
that help the hospitality industry.
But what are some of the things that you say,
man, maybe this would be something
I would have liked to know before.
This is something that others also need to run into
when using serverless technology.
Oh, yeah.
That's a great starting point for me
to get all, you know, rambling.
So first thing,
so just for clarification,
we operate predominantly
in the AWS ecosystem.
So use their serverless offerings, right?
And we are very committed
to Lambda functions.
Okay.
So you go into this conversation around serverless technologies,
and that includes Lambda functions, and how you use them.
If you make the mistake of taking a group of traditional engineers,
software engineers, that are very, in their mindset,
very used to working the traditional way
with traditional compute systems, right?
And you do not lay a good foundation
for them to understand what Lambda functions are,
what they're not,
what they are compared to your traditional compute systems,
you will inevitably end up in a scenario where you are mimicking your
traditional compute system and trying to replicate that within Lambda functions.
And what you end up with is a bunch of monolithic Lambda functions. Lambda functions that do a
whole lot, way more than they were ever designed to do.
So you stuff in as many functions, you have functions within functions.
Because the engineers haven't really thought about how do I take a piece of code that has maybe 10 functions and separate them, right? And now I built a discrete piece of technology,
which is a function,
and just have that developed and deployed, right?
And have these functions now interact with each other,
but each function is its own entity
as opposed to part of one class structure, right?
So if you go in not having set the baseline of what a Lambda function is, or what any functional piece of serverless technology is, a step function being another one.
It's different, but again, you need a different mindset to think about
those things. And if you are not laying a good groundwork for your engineers
to appreciate,
functions are different than your classical
programming approach.
Right?
So,
do not go into
Lambda functions
hoping for them to perform
and hoping for them
to fulfill their promise
if you are
taking your classical
programming approach
and apply it to Lambda functions. You're going to end up with a bunch of misery because these Lambda functions, they're supposed to be lightweight.
If you make them fat, they're not going to be lightweight. And then you end up spending way more time on the underpinnings, sort of the infrastructure that makes Lambda functions work, than you could have ever imagined, right?
You lose a lot of performance,
you spend a lot of time in performance tuning,
you're trying to tweak the underlying ecosystem
to a degree that it was never designed for, and on top
of it, you don't actually
have the same level of control
over that underlying ecosystem.
By design, you don't,
right? As opposed to,
you know, deploying a container.
You have a little bit more control on how
that runs, where it runs, how much
compute you allocate towards it, how much memory you allocate towards it. With serverless, you don't necessarily have the same level of control.
You don't have the same knobs and levers that you can adjust for these serverless technologies.
So you end up between a rock and a hard place, where you're trying to embrace something serverless,
you haven't changed necessarily your mindset or your programming approach towards it,
and you have less control over that serverless technology. And so it leaves you really limited
in terms of how much you can tweak that and how much performance tuning you can do.
So that's just my initial point: if you don't pay attention to this, you will have challenges.
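To make that concrete, here is a minimal sketch, in Python, of the kind of small, single-purpose Lambda handler Sebastian is contrasting with a monolithic one. This is not code from the episode or from Compass; the handler and field names are hypothetical, and the point is simply that one handler does one discrete job and hands the result onward instead of bundling a whole class structure into a single function.

```python
import json

def validate_order_handler(event, context):
    """Hypothetical handler that does exactly one thing: validate an incoming order."""
    order = json.loads(event.get("body", "{}"))

    if not order.get("items"):
        # Reject early; no pricing, payment, or notification logic lives here.
        return {"statusCode": 400, "body": json.dumps({"error": "order has no items"})}

    # Hand the validated order to the next step (via an event, queue, or Step
    # Function) rather than calling the next Lambda synchronously from here.
    return {"statusCode": 200, "body": json.dumps({"orderId": order.get("id"), "valid": True})}
```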
And Sebastian, what I've seen, and I remember my first encounter
with a serverless application, and I saw the same thing.
Somebody basically tried to break their monolithic application
into individual serverless functions that were all calling each other,
but one function calling the other, waiting for that.
So instead of really thinking about how can you break down a process
into individual substeps and then connecting them through events,
because in the end, serverless, and correct me if I'm wrong,
really what we talk about is event-driven programming, because you need to understand
how can you break a bigger problem that you would have in a monolithic fashion,
implemented as a request-response architecture, where the request may take a second,
maybe 10 seconds, maybe 30 seconds, where you do a lot of computing in one big monolith.
You need to break this into smaller pieces
and you need to also figure out
what is the state between the different pieces, and then there comes an event that is then driving the execution of the next function.
So I think, and this is also,
I know we jokingly said,
I think when we were not yet recording,
but we both started our career in IT many, many years ago, many, many moons ago.
And we probably have never learned, at least I have not learned about event-driven architecture and event-driven software engineering when I went to high school where I learned software engineering.
We did it the traditional way.
And so for me, this was also a hard thing to change my mindset.
How would I break down
this big problem
that I implemented
in a, let's say,
Java class
with a thousand lines of code,
maybe in 50 functions
or 50 methods?
How would I solve
the same problem
by breaking it
into smaller pieces
and having individual workers
work on that particular state of that problem and then hand off the state with events.
So I think it's a mindset shift.
It's an architectural change.
And only if you do that, I believe you can really leverage the power that serverless
can bring you if you use it right.
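As a rough illustration of what Andy describes, here is a sketch, assuming an AWS setup with EventBridge, of a handler that does its small piece of work and then hands the state onward as an event instead of invoking the next Lambda function directly. The bus name, source, and detail type are invented for the example; in practice this could just as well be SQS, SNS, or Step Functions.

```python
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    # Do this step's small piece of work...
    order = json.loads(event.get("body", "{}"))

    # ...then publish the resulting state as an event. Whatever is subscribed to
    # this event (another Lambda, a Step Function, a queue) picks it up next,
    # so this function never waits on downstream processing.
    events.put_events(
        Entries=[{
            "Source": "orders.validation",      # hypothetical source name
            "DetailType": "OrderValidated",     # hypothetical event type
            "Detail": json.dumps({"orderId": order.get("id")}),
            "EventBusName": "orders-bus",       # hypothetical bus name
        }]
    )
    return {"statusCode": 202}
```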
Yeah, and you say something else very important there.
I don't necessarily think, and from what I'm seeing through my hiring,
I don't necessarily think what is being taught in colleges and universities has changed all that much.
It's not like now the people that are graduating, coming out, and they understand serverless technologies,
or they have been exposed to event-driven design, which is a different architectural pattern.
You really have to think about how you're implementing that feature, that product. And again, if you're trying to apply your traditional compute thinking or programming models to that paradigm, you're going to be miserable, right? So,
this is almost like a call out to, you know, all colleges and universities, you know,
you got to bring your A-game, you got to ramp it up a little bit. You got to expose yourself to some of those newer technologies as well.
And I know it's hard for universities and colleges to always be up to speed with what's most recent and most progressive in the industry.
But we will have these challenges for at least another five years, until these educational institutions have caught up with us.
And in the meantime, I don't know what we're going to do, hire a bunch of reliability engineers
maybe to solve for some of these reliability issues that you will inevitably encounter
as you're designing with a new paradigm,
but with people that understand predominantly a traditional way of programming
and building applications and functions.
My mind is racing right now because, first of all,
again, another shout out for everybody that is creating educational content for this particular domain.
And thanks to Sebastian, obviously, you are creating some educational material on reliability and resiliency. For me, coming back to reliability on Lambda functions,
and also maybe this goes hand in hand with some of the limitations.
I remember in the early days of Lambda,
and I assume this has changed a little bit,
but in the early days, a Lambda function had to be finished
within a certain amount of time.
I don't know, was it 30 seconds? Was it 60 seconds?
I think there was some type of run time limitation
that AWS gave you.
Also, a Lambda function could only use so much memory or CPU.
So basically that quote-unquote container or run time
you were running in was very constrained and limited,
which also meant you had limitations
on what you could even do.
Because if you would need to, I don't know, process a gigabyte of data
to come up with some result because you're aggregating over it,
then maybe a Lambda function is not the right thing.
Because if the Lambda function is constrained to be only running in 512 megabytes,
then you have to think about different ways to solve this problem,
either with a different architecture overall, like maybe you just put it in the container or you stick
with your monolith.
Nothing is wrong with a monolith if it is more efficient in solving a certain problem.
Or you need to break this bigger problem, like crunching through a gigabyte of data
at once, break it into smaller pieces, and then kind of work on it step by step.
And then maybe, you know,
Lambda functions or serverless would still work.
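One way to picture Andy's chunking idea is a small fan-out sketch: instead of crunching the whole dataset in one constrained Lambda, a coordinator enqueues chunk descriptions and many small workers process them in parallel. The queue URL, chunk size, and message shape below are hypothetical, not anything discussed on the show.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/data-chunks"  # hypothetical

def fan_out(object_key, total_rows, chunk_size=50_000):
    """Send one SQS message per chunk; each message can trigger a worker Lambda."""
    entries = []
    for i, start in enumerate(range(0, total_rows, chunk_size)):
        entries.append({
            "Id": str(i),
            "MessageBody": json.dumps({
                "key": object_key,
                "start_row": start,
                "end_row": min(start + chunk_size, total_rows),
            }),
        })
        if len(entries) == 10:  # send_message_batch accepts at most 10 entries
            sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
            entries = []
    if entries:
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
```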
But Sebastian, what are the current limitations that exist, at least in the world of serverless, that you know of, with you being primarily on the AWS tech stack?
So there are a number of things
that people need to think about.
So for example, memory and virtual CPUs are directly related.
So what you actually cannot do with Lambda functions, for example,
you cannot assign a certain number of virtual CPUs to it, not directly.
You do that through your memory allocation. So the more memory you choose, the more virtual CPUs are being assigned to you. There's a number out there where
when you get to like about 1.7 gigabytes of memory, that is approximately one full virtual CPU
that is allocated towards your Lambda function.
And so you can go smaller, all the way down to 128 megabytes, which I think is still the default, but do not use 128.
It's useless.
There's so many troubles with that default configuration.
It should start at 256, better yet 512.
And after that, the performance gains are actually quite negligible.
So that's one limitation.
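For reference, the memory knob Sebastian describes is a single setting on the function; CPU scales with it (roughly one full vCPU at around 1,769 MB). Here is a small sketch using boto3 that bumps a function from the 128 MB default to 512 MB; the function name is made up.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation, which also raises the CPU share proportionally.
lambda_client.update_function_configuration(
    FunctionName="validate-order",  # hypothetical function name
    MemorySize=512,                 # in MB; Lambda allows 128 up to 10,240
)
```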
The other one is concurrency.
For everybody that doesn't know, a Lambda function only ever takes in one request at a time.
So if you want to process two requests at a time,
you spin up two Lambda functions.
A thousand, you spin up
a thousand Lambda functions.
Right?
And there's a limit to that.
20,000
for one AWS account.
So that's your limit.
If you want to process
more than 20,000 concurrent requests, you need to rethink, you need to re-architect, right? You need to figure out what a better compute model is, for example a container, because a container doesn't necessarily have the same limitations. So then you start to think about,
okay, I have all these limitations, so what can I do about it? There are a bunch of different
add-ons that AWS has added over the years. Provisioned concurrency, for example. You want to be setting up your Lambdas maybe ahead of time, because one thing that people really don't think about is that there is a startup time
to a function, right? Which is called the cold start. And so when you are getting a
request coming in and it sits and waits in a queue, it only waits in the queue because it waits for a Lambda function to be provisioned and to be ready to serve that request.
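As a sketch of the knobs Sebastian is referring to, these are the two boto3 calls for reserved and provisioned concurrency; the function name, alias, and numbers are hypothetical examples, not values from the episode.

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap how much of the account-wide concurrency pool this one function may use.
lambda_client.put_function_concurrency(
    FunctionName="checkout-api",            # hypothetical function name
    ReservedConcurrentExecutions=200,
)

# Keep a number of execution environments initialized ahead of spiky traffic,
# so those requests do not pay the cold-start penalty. This requires a
# published version or an alias (here, an alias called "live").
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-api",
    Qualifier="live",
    ProvisionedConcurrentExecutions=50,
)
```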
And so especially in certain scenarios where you have spike loads, you get into trouble really, really fast, especially taking that example that you were speaking about, Andy, earlier,
when you have functions
that are directly calling each other,
which is sort of an anti-pattern, if you will,
to the serverless architecture
or to the event-driven architecture.
Because what you're doing
is you're basically compounding
the response time of the individual Lambda functions.
So say you get a request that comes in from a mobile app or from a web application, and you want to service that in under 500 milliseconds.
If your Lambda function is too big, if your Lambda function calls another Lambda function that calls another Lambda function,
if you haven't allocated enough memory
to that Lambda function
or to any of those Lambda functions,
you end up with higher than desired response times
very, very quickly.
You end up in the second range very, very quickly,
which is oftentimes undesired,
especially for mobile applications
or web applications.
All of those things, bits and pieces
that matter and they compound
each other if you're
again,
not going through the traditional
critical thinking exercises
up front where
you're saying, this is my use case.
And for this use case,
Lambdas or step functions
or some other serverless technologies
could be a good fit, right?
But go through that analysis critically
and also admit to yourself
when Lambda functions
and other serverless technologies
are not a good fit, right?
So that's why I also don't often like,
I think this was something that you, Brian, said earlier,
where people are proclaiming they're 100% serverless,
or people are proclaiming they're 100% on Kubernetes.
But what if the use case that you're trying to solve
doesn't fit?
Doesn't fit serverless, doesn't fit Kubernetes.
Shouldn't there be room to sort of not be 100% serverless
or not be 100% Kubernetes,
but allow for leniency and for freedom
to choose a more appropriate piece of compute,
for example, that doesn't fit the mold,
that is not Kubernetes, that is not serverless. What if I need to choose a container? What if I need to choose an EC2 instance?
Should I not do this just because somebody said we are all serverless now?
No. And that's coming from a reliability dude that is all about, you know, simplicity and not-so-complex systems,
right? But I'm also a fan of performance and availability at the same time. And I'm a fan
of critical thought. And I'm also a fan of if it fits the use case, then choose that appropriate piece of technology. Don't be
beholden to the umbrella term,
Kubernetes, serverless, whatever. Don't be
beholden to that and try to fit a square peg through a round hole just because that's what you're dealing with
on a daily basis, right?
I took
a note. This could be the title
of this podcast. Don't just
Lambda because you can.
Do it because it fits the best.
Do it because it's the best fit for your
use case. Maybe that's something.
I was thinking of an old movie. There was
an old movie called Lambada, the
Forbidden Dance.
And we could change it to Lambda, the Forbidden Dance.
It's funny, too, because as we're talking about this, I'm looking at the mess of cables on my desk.
So I have two monitors.
I have all these peripherals and junk.
And you always see people talking about, oh, I'm going to declutter.
I'm going to get all my wires either hidden or get things all wireless, right?
And, you know, every time I see this, I'm like, is there a way I can do that?
And I'm sure I can take a hack approach of trying to do this, right?
But again, certain things, latency is going to come into the peripherals.
There's going to be certain things, like even if you're thinking about Bluetooth headphones, you're going to have a quality drop in your sound.
So if I were to tackle this project, and this goes I think for anything, any project you're going to tackle, you first have
to look at what is your end goal. Define your end goal.
Understand all the components in your system.
And then figure out what you can replace with
in this case the Lambda, right, or what might be a candidate for getting rid of a wire in my case. But if you just go willy-nilly, right, I mean, even USB cables have a length limit, right? So if I were going to hide them all under the desk and channel them through this, I might start making ridiculously long connections just to hide it without getting the benefit.
And I think that's the loss. There's so much excitement about the new things.
I think we talked about this, Andy, when Gene came on a long time ago, when there were all those talks, I forget the conference, but it was always that the failure stories are much better stories than the success stories.
For those ones that you're talking about, Sebastian, about all the great things we've done, if the story focused a lot more like, hey, guess what?
We accomplished this awesome thing.
I'm going to focus my speech on the insane hurdles
we had and how we overcame them. That would be a far more thrilling
speech or presentation than just hearing the sunny day side
of we set out for this goal and we made it. Luke Skywalker blew up the Death
Star. Well, let's talk about all the steps in between that made it really, really
difficult. And that's what's missing from it, right? Everyone wants to be the hero,
but a hero's journey is very, very detailed and has a lot of trials in between that get you to
that point. Anyhow, sorry, pontificating there a little bit, but I'll just...
No, it's true, right? But that's where, if we were to share more honestly like that, right, across the industry, people would take away from those things way more than from the shiny stories, right? I take nothing away from those, just maybe an awe moment.
But for me personally, it impacts me zero, right?
Because I haven't learned anything.
I could take nothing away from that speech. Talk about those challenges and hurdles and obstacles and pitfalls and, you know, bad cultural habits that really held us back and really made it difficult for us to get from point A to point B, right? Because everybody struggles through those, right? And everybody would want some assistance with those if they had the choice, right?
Because that's the other part.
That's the German in me where I'm just thinking so many times in my career and so many times in my life, like, why do we reinvent the wheel so many times?
Like, so many lessons have already been learned but not shared.
So, so many others need to go and learn that lesson too.
To me, I find this mind boggling.
This is such a waste of time and effort.
I got in trouble at home for that.
Like, you know, my daughter is going through, you know, exploring life.
She's just about to be 15, and she's getting into some hobbies I've been into.
And I get yelled at,
like,
let me figure it out on my own.
I'm like,
but I,
I see what you're trying to do.
I've learned all these mistakes and I'm trying,
like,
I can shortcut you through that so that you can start off further down the
path.
Right.
And then even my wife was like,
you got to let her learn this stuff.
I'm like,
but it kills me.
Like I've learned it.
Can I share it so that we as humans can progress further?
You know,
like let's just move on from that.
Don't,
don't put your hand in the fire.
Right.
You're going to get burned.
Trust me.
Like just go on.
But yeah,
I totally get it.
You want to get people past those same mistakes you made. But I think, you know, not to make this philosophical, I think this is part of just the human condition, right? Someone can tell you not to put your hand in the fire, but until you do it yourself, you don't know why.
You know, to also kind of confirm what you guys are saying, I remember one of my most, at least from my perspective, successful conference talks I ever did was about the top performance problems that we have seen in distributed architectures. It basically talked about six or seven different patterns we've observed by looking at observability data from our clients.
And we just talked about the things that they didn't do well so that people could learn and to avoid them.
And I think it's similar to some of the things that you just said, Sebastian, right?
It's about knowing the limitations, knowing what not to do, knowing when not to use serverless. If you listen to it, or if the person that you tell it to, teach it to, listens to it, they can avoid a big mistake.
I remember, early on, it was my second year at Dynatrace.
I used back then AppMon for an application that was built on top of SharePoint.
They were building a very large
transactional, like a financial service application on top of SharePoint, using
SharePoint tables and lists back then to store transactional data.
And there was a big, very big mistake from an architectural perspective.
They built it for two years.
And then we looked at, back then, a PurePath, a distributed trace,
and I told them this is never going to scale.
The underlying technology was not built for this.
SharePoint was not meant for this.
And then they had to rebuild everything.
But, and I think to your point, Sebastian,
would they have taken the upfront time and investment
to educate themselves about what are technologies
that would fulfill their requirements, then they would have made a better choice upfront.
Yeah, that's why I'm also a big proponent of sitting back and waiting past the hype
cycles, like latest and greatest, like large language models and all their possibilities
and like where they could all be applied, right?
I'm quite comfortable and I think more, especially technology leaders or technology decision
makers are well served to sit back and wait, let the dust settle and see what actually sticks and remains around.
You don't need to be at the head of the development at all times
because A, there's a lot of information that is not present yet.
There's a lot of learnings that haven't taken place yet.
And you are ill-equipped to put your company or your team on a path that you don't know yet whether or not it's going to pan out.
Because the technology itself hasn't stabilized yet necessarily, or you haven't necessarily had enough time to absorb it all, to educate yourself, to figure out, okay, now I'm educated,
how do I now relay all that education towards the people that need to do this day to day?
And sort of that whole like training and learning aspect again, where if you're just following the
shiny object, and then you're just declaring, this is what we're going to do going forward. Which also goes back to, I think, Brian, you said that early on, like that black and white
thinking, like it's this or that, right? I'm really not a big fan of it. I'd rather have
a few select options within my organization that we have vetted really well, that we know how to educate people around.
We know how to put those options together,
potentially in a hybrid version, right?
But they cover a fair amount of our use cases.
And it's okay that we have multiple options, right?
It's okay that it's not just one and we have to fit all our use cases
into that one technology option. We have at least
two, maybe three, right? And we know how they work really well.
And then that level of complexity or
yeah, these multiple options are perfectly
fine to entertain
from an organizational perspective.
I like the idea that you're talking about not being at the vanguard of it, right?
I grew up in, you know, 80s and all,
and Vietnam movies were all the thing, right?
So I watched a lot of those,
and every time they put someone on point, the person who's going to be in the lead of the platoon walking, what always happens to that person? They're the one that steps on the landmine, right? So you want to be a few people back, so that when you get to the point, you can then be successful. Yeah, just thinking about being blown up by trying to use new technology, but
it happens.
It happens all the time.
People, you know, not literally, but figuratively, get blown up by trying to be the first on the new technology.
Cause you're the guinea pig.
I mean, again, think about when the first iPhone came out or any new model car, right?
Don't buy the, don't buy the new model car.
Wait for the second or third year of it.
And that's exactly what I do.
I've been buying the same vehicle for many, many years now.
And every time they bring out a new iteration,
that's not the one I'm going to get.
I'm getting the one after.
Because all the little kinks and all the little idiosyncrasies
that the first model had in the second generation,
they're sorted.
It's not error-free.
Nothing ever is.
But I have assurances just by the approach that the second version is going to be better than the first.
So I don't need to be there right out of the gates.
Maybe that speaks to other people, early adopters.
But depending on your organization
and your size and your maturity and the impact of your business towards the larger community,
it just makes sense to be a little bit more careful when choosing these types of approaches
and technologies. I'm not working for a startup, right?
Maybe for a startup, that's more appropriate, right?
Because you want to see where this goes.
You want to differentiate yourself.
You want to build a business out of it.
And maybe if your business model was chosen wisely,
if the technology does take off,
you're the first one on the ground
and you're the progressive ones that now gets to go and sell a bunch of software to other people.
Because you've been there on day one.
But when you're working in a more mature organization that already has an established customer base, wants to grow a little bit further, wants to challenge your competitors a little bit more,
I would be more careful around choosing some of these early technologies.
And I'm not saying serverless necessarily is in that stage still.
But being more careful around being an early adopter and just proclaiming outright,
we are going to be this now and we are going to be all AI or all serverless or all this or that.
Because you have a bit of a business responsibility, you have a customer responsibility,
and you have a bit of a social responsibility
that the stuff that you're building works
right
hey Sebastian it's amazing how time flies
when you have conversations like this
but it's been I also want to make sure that we are
giving people a reason why they should listen to your podcast and go to your training platform. Because if we let you give away all of your insights here, well, we just want to make sure they consume the other content too.
I would like to kind of recap a little bit because I think a couple of things that became clear to me today.
Whatever technology you use, right,
you want to understand the limitations
and also the strengths,
especially for technology shifts like serverless
and the same is true for containers.
You want to make sure that you understand the architectural pattern where it fits best. As you said,
it doesn't make sense to take people that only build monolithic applications and don't give
them any chance to educate them on, let's say, event-driven architectures and then
assume they will just leverage serverless the best. I think that's important.
I also liked some of the stats you gave us. I had no clue about the 20,000 concurrent requests. And I think if, let's say, concurrency and high speed and performance is a big topic, you want to make sure you do your upfront load testing and performance testing to see how much you can really get out of that engine that you are potentially picking.
And to everyone that is ever presenting at a conference or an internal presentation,
think about it.
People learn more from others' mistakes than from just hearing the final success story. And if they don't hear the tough parts along the way, they may repeat them and then wonder, what's wrong, what did I do wrong? It would have been better if they had known from the start.
Sebastian, did we miss anything? Any final thoughts?
No, I think I liked the summary very much. And obviously, there's more to
serverless technology than we had the time here to elaborate on. But yeah, if we can just leave
everybody with sort of that one thought to not forget about your basic architectural design considerations, right?
Really take the time, sit down,
figure out what your use case really is,
make some educated assumptions
around what the performance level is supposed to be
that you're trying to accomplish with the new feature,
the new app, the new service, right?
And then think critically,
regardless of your organizational tendency, think critically about what is the best technology that
services that use case the best. And then be bold enough and be brave enough to present it
to the audience that needs to hear it.
That, you know, we really thought critically about those different choices that we have in front of us that, for example, AWS provides us.
And we really think that, you know, this is a use case where serverless is not the appropriate solution, right?
We think this compute system, this container system is better suited.
Or vice versa.
Affirm some of the assumptions that are existing in your organization.
This is a really good use case here right now.
If you want to create an example within the organization when to actually choose a serverless technology,
this is one that you can go with.
This is one that really fits well
into the serverless technology offerings
that some of these vendors have.
And now we have a guidepost
for everybody, right?
Whichever thing you're proposing,
but now you have a guidepost
because now you say,
this is the use case.
I'm choosing this set of technologies for it.
And here are the reasons why we did this.
Nice.
Brian.
I'm speechless.
Yes, absolutely.
Sebastian, I can't thank you enough.
I think this was a really fascinating episode. It's funny because, you know, it was promoted as you don't have to think, right?
Just do it, right?
But that's obviously not the case.
And I think it's great to have reminders to everybody to make these considerations.
So really, really appreciate you coming on to share all this with us.
I want to thank everyone for listening as well.
And I hope you
all found this as fascinating as I know Andy
and I did. And Andy,
I was thinking when you did your summary, I haven't called you
the Summarator in a long time, so
the Summarator has returned.
I used to intersperse Arnold
Schwarzenegger quotes sometimes with Andy
just to bust his chops, but that's too much
work when I'm doing these, so I gave up on that. Laziness. Laziness, cut that out.
Thank you so much for being on. Thanks to our listeners. Thanks to everybody.
Thanks to so much cool technology existing.
It'll be interesting to see. I think the most interesting thing to see
in the future is how things like universities and colleges
are going to start adapting to some of
this, and also if there's going to be a pattern towards non-college-based education that's going to be more like trade schools. Obviously, getting a degree may or may not be important in your life, but then going to a specific trade school, right, for different types of platform engineering, whether it's going to be Lambdas or Kubernetes or other things coming out, to help with that hiring process.
That would be actually another good, interesting conversation is how do you hire people for
serverless, right?
We won't get into that right now, obviously, but if you're a shop and you're looking into
that and you've had that, how do you go about finding people and hiring them?
And are there things people can do for that?
But that's a whole different topic.
Anyhow, I'm rambling again.
This is because I'm no longer on my meds, so all my speech is coming out now.
So thanks, everybody.
Until next time.
Bye-bye.
Thank you.