PurePerformance - 008 A Cloudy Story: Why You Should Worry About Performance in PaaS vs IaaS or Containers
Episode Date: July 18, 2016

The initial idea of the Cloud has long become commodity – which is IaaS. Containers are the current hype but still require you to take care of correctly configuring the container that will run your code. Mike Villiger (@mikevilliger) – a veteran and active member of the cloud community – explains why it is really PaaS that should be on top of your list, and why monitoring performance, architecture and resource consumption is more important than ever in order for your PaaS adventure not to fail.

Related article: http://www.it20.info/2016/03/the-incestuous-relations-among-containers-orchestration-tools/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome back to Pure Performance.
This marks episode number eight, so we're all happy for that.
I am Brian Wilson and as always, to the left of me or to the right or virtually in the ether is my co-host, Andy Grabner. Hello, Andy.
Hey.
Well, it's actually a good question.
Left, right, or in this case, across the pond,
I'm sitting in lovely England.
Oh, it's really good.
Believe it or not, it's sunny.
Oh.
Wow.
Crikey.
Should we do the whole show in –
I can sometimes do a pretty good English accent. I just can't do it right now.
I was going to do the whole show in an accent, but I'm just not even hitting it right now.
So forget it.
Forget it, mate.
It's Australian.
See, I don't even know what's going on.
You know why?
Because it's 9:25 in the morning for me right now.
That's why.
There you go.
Hey, who else do we have on the line today?
Who is the guest of honor? We have someone who used to be a colleague of mine at Dynatrace. By title we were on equal footing, although he was always much more brilliant and bright than me. But now he's recently gotten a promotion, so he's a big fancy higher-up. So welcome to the show, Mr. Mike.
Hey there. Hi.
Michael, why don't you tell us a teeny bit about yourself there?
So my Dynatrace story is a long and entangled one.
Obviously, as some of you might know if you go off and Google my name and my relation to Dynatrace,
I was actually a customer for a couple of years.
I've spoken at nearly every Perform conference. I've participated
in one way or another. And, you know, obviously, after coming on board, I worked with Brian for a
couple of years in our product specialist group. And now I'm working in strategic business
development, kind of working with a lot of our cloud partners.
Cloud partners. So, Mike, I think I just recently saw you, and maybe I'm jumping too far
ahead, but I saw you presenting at the Cloud Foundry Summit. Is that right? Yes, yes. So,
at the Cloud Foundry Summit, I was able to give a lightning talk about performance metrics driven
continuous integration and continuous delivery, which is a topic that's
near and dear to your heart. Is it not, Andy? It is. I love it. I love it. And I like it that
you bring it into, you know, obviously into the cloud era. And that is also the topic,
I think, that we want to talk about today, right? Correct. Although the sun is out
in jolly old England,
it is going to be a cloudy day.
Although Andy,
is it really going to be a cloudy day or is it going to be more of a virtualization day?
I don't know.
It's a good question.
So in the preparation of this podcast, and you can see that we are actually trying to be funny here, we started talking about: so what do we talk about today?
And cloud, obviously, is a big topic.
And then I challenged Mike, and I said, hey, Mike, if you want to talk about cloud, if I hear cloud, I think about EC2, compute, storage.
And Mike, you said, what did you say?
So I said that we kind of take IaaS for granted nowadays.
I mean, we basically just, the fact of the matter is that everybody's going to be using cloud infrastructure in one way or another, whether that be, you know, on-prem via VMware and OpenStack
or in the cloud via EC2 or Azure, that really the IaaS is just a fundamental building block of how
we build software nowadays. And to a certain extent, I don't even feel like it's even worth
mentioning anymore, since it's just a given that it's going to be part of our infrastructure.
So it's commodity.
Correct.
And there are a lot of cool new acronyms now.
So Mike has been mentioning IaaS, and he's not just saying a bad word from the South.
It's IaaS, which is infrastructure as a service.
And I bring these up because there's a lot of these now, right?
We have IaaS, PaaS.
What's the – is there another, Mike, I think?
Well, there's –
There's always been software as a service, right?
Yeah, software as a service.
Pretty much the marketing folks will generally try to apply the "as a service" to probably any primary acronym in the tech industry.
Coffee as a service.
Yes. Right. Coffee as a service. You know, talking to the New Stack guys – the New Stack guys have been going to conferences with a pancake printer. So I kind of talked to them and said, well, does that make that pancakes as a service?
See, that's a hot idea that you got there.
So, yeah, so infrastructure as a service, that's basically what everyone knows as EC2 and all that, where you're just getting the computer or the server.
And everything else is up to you.
So there are a lot more offerings now and a lot more exciting things that we can delve into and use. Although my personal feeling is that a lot of this is still a little bit in its infancy – it's definitely progressed a lot further than where it was – there's a lot of cool stuff happening now, and it's just going to get cooler and cooler as time goes on. So it's really good to be aware of all this and think about how it can all be leveraged and pulled into your systems.
Cause yeah, there's a lot of cool stuff going on.
So we today we're going to be talking about two of these components, right?
Correct.
And would you like to – so yeah, let's start with the idea of containers, right? This falls into which section?
So we talk about containers kind of in the context of virtualization, right?
Although the interesting thing is that containers are really kind of fundamentally operating at a different level than virtualization. They're actually going to be closer to the bare metal of the machine than a virtual machine, unless, of course, you're running your containers in a virtual machine, which, of course, is the way a lot of folks actually deliver those.
But, you know, when we start talking about containers,
you know, the elephant in the room in the container space is going to be Docker.
And as we start to talk about Docker, the main thing you're trying to do there is kind
of provide an immutable, repeatable environment in which your software is going to execute.
And so how does this differ then from a traditional Amazon EC2 image? I mean, same thing, right? I have an image, I can run it 50,000 times if I want to, and every time I start it up it's the same new instance.
So, yeah, well, the idea is that, you know, Docker is going to be a lot more lightweight than a full-fledged VM. So you're going to be able to scale a few more Docker containers in an
environment versus VMs.
You know, like the same piece of bare metal, you might be able to get,
say, you know, 32 VMs on it, and you might be able to run, you know,
60 Docker containers in that same environment.
Right?
Because the whole idea is that you don't have to repeat all those fundamental operating system components.
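The 32-versus-60 comparison above is easy to sanity-check with some back-of-envelope arithmetic. A quick sketch – all the sizing numbers are purely assumed for illustration, chosen only to land near the figures quoted:

```shell
# Hypothetical density math: the same bare-metal host fits more
# containers than VMs because containers don't duplicate the guest OS.
host_ram_mb=131072        # assume a 128 GB host
app_ram_mb=2048           # assume the app itself needs ~2 GB
vm_os_overhead_mb=2048    # assume ~2 GB of full guest-OS overhead per VM
container_overhead_mb=0   # containers share the host kernel

vms=$(( host_ram_mb / (app_ram_mb + vm_os_overhead_mb) ))
containers=$(( host_ram_mb / (app_ram_mb + container_overhead_mb) ))

echo "VMs per host:        $vms"
echo "Containers per host: $containers"
```

With these assumed numbers the host fits 32 VMs but 64 containers – the per-VM operating system overhead is exactly what the container model avoids.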
And then you're relying on Docker's use of – and this is actually something that people don't fully grasp – the fact that Docker is actually taking advantage of components of the Linux kernel that have actually been there for a very long time.
And Docker is actually fundamentally replicating some elements that were in, you know, other
operating systems, you know, in the early 90s.
And so I think basically the main benefit you mentioned in the beginning is speed, speed to delivery, and also being more resource friendly, or I'm not sure what the best word is, more resource optimized. So you can run more containers faster, meaning bringing them up faster on the same hardware.
Yeah, that's awesome. And that's what I've heard, and correct me if I'm wrong, but is it still the case that Docker is still seen as more for pre-prod testing, CI/CD, or is it ready for primetime production?
Well, it's definitely ready for primetime. But some of the benefits of containerization are going to be more readily apparent in some of the pre-prod environments, you know, simply because of the whole concept of providing that immutable, unchanging environment that you're able to bring up very quickly and then tear down equally as quickly. So some of those benefits are actually really great in the pre-prod part of your SDLC. But there are definitely – I can't think of any names off the top of my head – but I mean, there's definitely a lot of Docker in prod
where you're seeing, you know, very large scale web properties
that are running in the containers that are, you know,
basically scaled out into the tens of thousands of containers.
And in fact, some of our customers are doing that.
Yeah, I know.
And basically, I mean, I think what,
I mean, the real benefit comes
if your application architecture,
obviously, you know, is built for that.
I mean, it doesn't make sense
if you have your monolith
and then you just run it in 50,000 Docker containers
because it's still a monolith
and it's just going to run into a lot of issues.
So, but new architectures that leverage the fact that they have small components
that can be spun up very fast and that can basically failover very fast,
new instances that also come up very fast, that's perfect, right?
For that, Docker is just perfect because you can run these instances super fast,
super efficient, whether it's in pre-prod or in prod.
Correct.
Correct.
That's exactly correct.
And then the whole idea then is that the runtime environment between pre-prod and prod is going to be identical.
Right.
And that's a really great component of it.
I don't know if anyone listening has experimented with Docker at all, but what I've seen in my few experiments so far is, you know, we keep
talking about the speed, and just to put that in perspective,
if you go on to, let's
say you provision a VM,
EC2, Azure, wherever you might want to put it,
which is, you know, as Andy said,
you might have an image that you're just going to deploy to that.
You're still talking about, what, like, maybe
five to fifteen minutes to get
that instance up,
whereas Docker, you're, you know,
as long as you have a place to deploy your container,
it's what, a minute or less even?
It's super, super fast.
Typically, it's very close to the time it takes
for your process to actually start.
Yeah, it's ridiculously fast.
And then the awesome thing about that,
especially in these pre-production environments,
is you don't necessarily have to have your testing environments up and running and maintained all the time, right? Development will have everything going and just hand off your new container. You just instantiate that, throw it up, start hitting it, and then take it down.
You know,
One of the biggest problems we used to have in my old job was maintaining that environment, you know. And a lot of that comes into the same thing – you could say the same story if you have an image that you're going to deploy on a VM, you know, the same kind of benefit. But the speed and ease that Docker provides with that, it's just really, really beautiful.
And the fun part about it, at least for me, because Mike keeps pushing me in this direction,
and although I've resisted a little just because it's too much to wrap my head around sometimes,
but it's really fun getting back onto the command line.
Yes.
You deploy this.
You look really cool.
People come by your desk, and you have a terminal open, and you're typing in commands and making things happen.
Obviously, this can all be automated.
If you're in that testing environment,
you can even have scripts be part of your script
before your test executes.
So it's really, really fun in that sense,
and it kind of brings you back to getting to the heart of computing,
which is moving a little bit more away from the GUI stuff
and getting into it.
Although I think Docker has a GUI interface now, don't they?
Yeah, well, when you look at the Docker ecosystem, there are many orchestration tools, right? Docker orchestration tools like Kubernetes, and things like Marathon and Mesos – those are tools that are there to help you deploy containers across multiple machines and to actually, you know, then work with things like your Docker Compose files, which are, you know, how you bring up multiple interrelated containers. So obviously, you know, there are a number of different UIs to kind of help make things easier. But, you know, when you're in that pre-prod phase, or especially in the dev phase, you're going to be interacting with that CLI pretty frequently.
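The Compose files Mike mentions are just YAML describing a set of interrelated containers. A minimal hypothetical sketch, using the Compose v2 format of that era – the service names and the `my-org/web-frontend` image are made up for illustration:

```yaml
# docker-compose.yml – hypothetical two-service app
version: "2"
services:
  web:
    image: my-org/web-frontend:latest   # made-up application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this in place, `docker-compose up` brings both containers up together, and `docker-compose down` tears them down again.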
And, you know, going back to what you said about kind of getting back to the origins of computing, if you will.
You know, my wife is always asking me if I'm in the Matrix.
Right. Because, you know, you're watching all the text kind of scrolling by, all the stuff that's going out to standard out with some of the things that you're doing there.
So, well, according to Elon Musk – I don't know if you saw it recently, and I didn't read the article in depth – his theory is that we're all part of somebody else's computer game, like the Sims or something.
Yeah. So, yep. Basically, you know, given the advance of computing and things like that, it's inevitable that we'll have the ability to do that ourselves at some point. And then, given that fundamental construct, in turn it's almost inevitable that we are in fact, you know, inside somebody else's simulation at this point. And that's actually a very long-standing philosophical idea.
Some scientists actually, I don't recall their names.
You know, folks can actually Google this offline.
But there were a couple of papers that were actually written within the past two years
where a couple of folks have actually disproved that idea. Like they somehow came up with a proof to prove that we were not, in fact, running in a simulation.
Well, that's because if I punch you in the nose, you'll feel it and you'll bleed. Kind of a theory.
Well, I mean, if you remember in the Matrix, you know, you could actually punch somebody in the Matrix and they bleed in real life.
Yeah, but how do I know they even exist?
Anyway, I got two points now.
First of all,
would we deploy the whole thing
in the cloud?
Second of all,
how can you offline Google? That's something that I never heard about before. You said folks can Google this offline. How can you Google offline?
Anyway, maybe coming back
to the cloud and Docker topic.
This reminds me of why
Steve Jobs took acid.
Okay.
So here's one thing that strikes me, Mike.
You mentioned three new terms, and I'm sure there's 20,000 more: Mesos, Kubernetes, and Marathon.
And there are so many new tools and so many new frameworks and so many new stacks. It feels overwhelming, at least for me sometimes, because it seems every week there's yet another thing that is so cool and sexy right now, and I'm not sure what is really going to stay and what is hype. What do you say? I mean, obviously Docker is going to stay, I guess, it's not going to go away. What about some of the other frameworks and tools and whatever you want to call them that you just mentioned?
Are they all going to be there?
Is this something that we need to read up on?
Or is this something we just wait another year and see what's still left?
And what's your take on it?
So I read a really great article the other day, and the URL would be really too
long to read within the context of a podcast and expect somebody to type it in. But somebody
really described the Docker ecosystem as incestual. And we'll put a link on the podcast page if you
can get to it. All right, that'll be great then. So they basically described the whole kind of container ecosystem
as incestual because all of this,
so you have the fundamental construct of the container,
and that's Docker, right?
Then you have these varying mechanisms
for distributing Docker containers across environments,
and that's your runtime engine.
And that's where you have things like Kubernetes.
But then you have competing frameworks like Mesos.
So Mesos is a workload orchestration engine
that grew out of big data projects.
So it was originally intended for, you know,
supporting analytics jobs that would fire off and disappear.
And then they started adding the capability
to support long running jobs.
And now in that type of context,
something like a Docker container running a web app
is actually thought of as just a long running job.
Right. And then so basically this article that I read was talking about how all these tools exist and they all run with or on top of each other. And the fact of the matter is that even if you feel like you understand how these tools interact with each other, if you blink, somebody will have figured out a way to flip that around.
So let's blow everybody's minds real quick. So I have a really well-respected friend, one of the best system architects I ever worked with back when I was a customer.
And he left the organization that we were both working at.
And through circuitous means, he's now effectively a system engineer at Intel.
And what he's doing at Intel is running clusters
and running clusters that support container workloads.
Now, they were originally running Mesos,
and now they've actually migrated to Kubernetes.
And in talking with him,
I basically wanted to get his take on that.
I'm like, so what's your take
on just how completely ridiculous the container ecosystem is right now?
And he basically pointed me to something called Stackanetes.
All right. Now, what Stackanetes is, is it's a project to run the OpenStack infrastructure components as containers orchestrated by Kubernetes.
Yeah, you're blowing our minds.
So I honestly, like, I feel like if I could, you know,
have a crystal ball and figure out, you know,
where the container ecosystem is in three years, I'd be using a very
large sum of money to invest in various organizations. And in three years, I wouldn't
be talking to you guys again because I'd be sitting on a beach enjoying a tropical cocktail.
So I think it's almost impossible to tell where those container ecosystems are going to go. Personally, my take on it is that the container ecosystem will eventually become a commodity like IaaS. And then we're going to basically be concerned with platform as a service.
Ah, okay.
Yeah, that's a great segue, because that's basically also what you were advocating in our preparation for the talk. We were basically saying, well, the whole Docker community, or Docker, is basically stealing the limelight from what you believe is really what we should focus on, which is platform as a service. So now, I'm sure we could probably talk more about Docker and containers
and whatever they are called,
but I guess we should really then focus on platform as a service
because this is the way it seems,
and obviously we trust you and your expertise,
and you see the stuff all the time out there. We should probably now talk more about the next acronym, PaaS.
But hold on, before we get to PaaS, it's time for our trivia.
Oh yeah. So, Mike, and just to remind other people in case you haven't been listening to the whole history of our show – I mean, it's only eight episodes, so you really can go back.
And we know we're the most compelling podcast out there, so you should be devoting all of your time to us.
Anyway, we do have a trivia component to our show.
The winners get their name put up on the podcast page at www.dynatrace.com.
It's what we call a no prize.
Mike, were you ever a comic book reader?
Off and on.
Okay.
Off and on.
So I'm super uber nerdy, but for whatever reason, comic books were never my thing.
I was basically too busy either creating or breaking the internet.
Okay.
Well, we kind of stole the idea of the prize from Marvel Comics.
They had the no prize, NO prize, which would get an empty envelope.
So this is the K-N-O-W prize.
See?
Nice and clever.
So anyway, you basically just get your name up on the site as the winner.
And this week's question, or this episode's question, is from Andy.
And it's a very alluring question, at least for me at 9:45 in the morning.
It's a hoppy question.
It's a weedy question.
It's a yeasty question.
So here it is. I think all the cool companies now, and I think including Dynatrace because we are a cool company, are offering things like, you know, beer. What a great invention. Beer for the employees – not, obviously, at nine o'clock in the morning instead of coffee to make them drunk and produce some strange code, no. But on Wednesdays, Thursdays, and Fridays in our Waltham office in Massachusetts, we have two taps of craft beer.
And thanks to our colleague, Jeffrey, who is actually very well acquainted with the craft beer business and has a lot of contacts.
He's actually doing also beer festivals in the Boston area.
So if you're in the Boston area, you should check that out.
Now, here's the thing.
The trivia question, which you cannot Google.
Or Bing.
I'll play your role this time.
Or Bing, whatever your favorite search engine.
So here's the trivia question.
How many gallons of beer has the Dynatrace Waltham office consumed in the last 12 months knowing
that the beer taps are open Wednesdays, Thursdays, and Fridays after 3 p.m.
Two taps in the last 12 months, okay?
So you can't Google it.
Has it been open the last 12 months or how many months?
It has been open.
So what's the – we would take the average.
Let's say – actually, let's do it that way.
Let's take the average because the total might be – let's take the average number of gallons per month in the last 12 months.
And is the winner going to be the closest person without going over or is it just going to be closest period?
Is it kind of like prices, right?
Closest period.
Okay. So no bidding a dollar or one gallon.
Okay, exactly. Cool. And if you know the answer, then how does this work?
Oh, you will tweet your answer with hashtag #PurePerformance to @Dynatrace, or you can always, if you want, email it in to pureperformance@dynatrace.com.
Especially if you don't want to give away your guess, basically, right?
Yeah, because then someone might do like one above you.
Because it's a very compelling prize, right?
So we know people are going to be really, really on top of it.
So that is the question.
And you can always use those avenues to contact us anytime you want.
Who's typing there?
I hear some typing.
Nobody.
Yeah, that's Mike.
Says nobody.
Mike is working while we're doing this, everybody.
That's the kind of guy he is.
He's keeping the Matrix alive.
I'm in the cloud.
He is in the cloud.
Hey, talk about working.
Can we go back to PaaS?
Exactly.
That's where I was going.
You stole my thunder.
I'm so sorry.
That's all right.
I missed you.
All right.
We're back to PaaS.
Go.
So PaaS.
Mike.
Yes.
Platform as a service.
I'm going to babble a little bit more because I think it's a topic that I'm extremely passionate about.
And it's something that I find, I don't want to say oddly compelling, but I find it really compelling.
And the history is really interesting because we look at kind of the development of the public cloud.
And I'm going to kind of blow people's minds here for a second. You know, Microsoft is not always thought of, at least within the past decade or so, as really leading innovation in an extremely forward-thinking way. But when we look at the growth and development of Azure, Microsoft was actually ahead of the game, because Microsoft provided Azure as a PaaS environment first. And look at Heroku, a PaaS that eventually ended up getting bought by Salesforce for a pretty penny.
So some of these early successful cloud companies were actually doing PaaS before they did IaaS.
And was that because the world was not yet ready for PaaS?
I think that's definitely the case.
So what some of those environments were fundamentally trying to do is really where we're looking to go again now, currently or in the future. And that's – we look at containers kind of stealing the limelight right now. And some of these PaaS environments, you know, for example, Cloud Foundry – Cloud Foundry is one of the environments that I've been doing a lot of work in, since they're very largely prevalent in the enterprise market. And, you know, basically, containers are actually a core component of the PaaS. So your applications are still running in containers. But the whole idea
here is, when you're running a container, you're still responsible for configuring the Tomcat
runtime that's going to exist in that container or whatever other
runtime might be present in that container. You're still going to be playing with the JVM heap
settings and blah, blah, blah. So you're still responsible for defining the platform.
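To make that distinction concrete, here's a hypothetical Dockerfile for such a container – every line of runtime configuration (base image, heap and GC settings, ports) is still yours to get right; the `shop.war` artifact name is made up:

```dockerfile
# Made-up example: with a plain container, you own the runtime, not just the code
FROM tomcat:8-jre8

# JVM heap and GC settings are still your responsibility
ENV CATALINA_OPTS="-Xms512m -Xmx1024m -XX:+UseG1GC"

# Deploy your application into the Tomcat you configured yourself
COPY target/shop.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

In a PaaS, all of this file disappears: you push the code, and the platform supplies and configures the equivalent of every line above.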
Now, when we look at platform as a service, what we're actually doing with platform as a service is we're not, we don't care about the platform anymore.
Our, our platform as a service is, you know, defining, configuring, standing up that platform
for us.
All we have to be concerned with is writing our code.
So it's basically just allowing your business to focus on exactly what it is that makes your business money.
And you're not – most of our customers are in the business of running their business logic, whether that be an e-commerce platform, a banking system, whatever that might be. That is the core focus of their business, not Tomcat, right?
Yeah.
But I want to challenge you now on this.
So here's my challenge.
Here's my problem with the whole thing.
If I'm a developer and I develop my code, then when I look at my Tomcat, about 80% or even 90% of the code that actually runs in Tomcat is not my own. I write 10% of the code; 90% is out of my control. But at least I control how I configure Tomcat, because I know how to configure Tomcat correctly, because I know what load is coming in and I know how my application consumes memory. So if I am also giving up the control of optimizing my platform to the PaaS provider, then my dependency on external components is not only 90%, but now 95%.
So I understand what you're saying, and I think PaaS is beautiful for, especially I
think a lot of companies where software engineering is not their first, let's say, discipline.
As you say, the insurance companies of the world, even though I'm sure there's a lot
of great engineering teams there, but a lot of enterprises, they are not software companies, yet they have to become a software-defined business, and therefore PaaS is beautiful.
But it's also, I think, very dangerous because you are giving up a lot of control, and you're trusting an entity that does not know how to optimally run your code.
Am I right or am I wrong? So you are exactly correct. And in my opinion, I'm going to actually
turn that whole statement on its head. Because in my personal experience, I have yet to meet a
developer that actually knows how to run Tomcat in any kind of meaningful way.
You know, this coming from, you know, a very, very large environment.
You know, we were in the very beginnings of this podcast,
we were actually talking about, you know, some of the test environments
and things like that that we worked with.
And my pre-prod environment was 175 physical machines.
You know, and my pre-prod environment
was probably somewhere on the order of like 6,000 plus cores.
So, and, you know, across that environment,
there was, you know, it was extremely difficult
to define consistent Tomcat configurations.
And as a, you know, as a performance engineer,
I was the one that was testing, tuning, and tweaking
Tomcat in order to get the code to run. The developers literally had no idea. So when we
look at what a PaaS does for us, now you look at the key thing to remember here is that when you
look at something like a Cloud Foundry, it's an opinionated platform. And what that means is you're it's,
it's basically their way or the highway.
Okay.
So,
so,
so you're,
you're basically,
you know,
you're,
you're basically paying your,
your,
your enterprise vendor money because you're going to entrust them to provide a valid configuration
for those platform components. Now, you can, you know, most of the time there are ways to,
you know, kind of adjust things as necessary. You know, Java ops are still, you know, configurable
via certain ways and things like that. But the whole idea is that you're no longer setting up a virtual machine.
You're no longer installing Tomcat.
And in fact, now you don't even necessarily need to really worry about managing configuration.
Right.
And when I say managing configuration, I'm saying the deployment and configuration of your APM agents, the configuration of your database connections.
So all of that stuff is being abstracted out as a service.
And if your application needs to consume a database, if your application needs to consume
an APM agent, you're going to run one command
to bind the application to the service that it needs to consume. And then the platform handles
all of the under the covers manipulation to, you know, enable your application to use that
database. Now, obviously, when you're building your application, you're going to be using something like Spring
in order to be able to consume those configuration settings
in a meaningful way.
But quite frankly, those are the kinds of things
that you should be doing anyways.
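As a rough sketch of the workflow Mike describes – the app and service names here are made up, and the exact service offerings and plans depend on your Cloud Foundry installation's marketplace:

```shell
# Push the app; the platform builds the container and
# configures the runtime (Tomcat, JVM, etc.) for you.
cf push my-shop-app

# Create a database service instance from the marketplace,
# then bind it to the app – one command each.
cf create-service p-mysql 100mb shop-db
cf bind-service my-shop-app shop-db

# Restage so the bound credentials (injected via the
# VCAP_SERVICES environment variable) take effect.
cf restage my-shop-app
```

Inside the app, something like Spring Cloud Connectors reads VCAP_SERVICES and wires up the DataSource, so no connection string ever lives in your code.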
Yeah, yeah.
So that's interesting.
That means, coming actually back to one of the things we talked about in the beginning, we have a lot more as a service. You just said monitoring as a service, data storage as a service, whatever as a service. So you bind that to your app, and the sole purpose is really to make businesses more efficient by allowing them to focus on what they want to solve, which is the business problem.
Correct, correct.
So I want to interject with something, because this is perhaps probably one of the best, most thoughtful things that I've ever heard.
And this is something that Onsi Fakhouri, I believe one of the CTOs at Pivotal, had basically kind of coined as a haiku about Cloud Foundry.
And the haiku is as follows.
Here is my source code.
Run it on the cloud for me.
I do not care how.
So now I want to segue over and challenge you with another thing, which obviously is dear to all of our hearts, which is performance.
So that means I can write crappy code, and excuse my language,
I can write crappy code and still give it to Cloud Foundry, and they just make it work, and they just make it scale, and they just make it do wonderful things.
Is that what it is, or what's the price I pay?
No.
So the thing is that this is something that had kind of percolated through the Twitterverse. And I do believe that I've actually received the original information from you, Andy. But when we look at these pipelines across these new environments,
if you deploy crap, all you're going to end up with is just a thousand instances of crap
once you get it through these platforms.
So you're still fundamentally concerned with a lot of the same paradigms
that we've been concerned with for
years. It's just as we start to move away from, you know, being dependent upon developers to define,
you know, environmental configurations, as we start to abstract that out,
you know, you talked about, you know, being responsible for your code, you know, as we start
to move to, you know, microservices and things like that,
and we start to use Spring, Spring Boot, Spring Cloud, your developers are literally only writing
the code to represent your business logic. So everything else is being abstracted out and being
handled by the platform. And if you implement those things in
an incorrect way within your business logic, all you're going to do is going to waste a lot of time
and resources by requiring your platform to run, you know, perhaps orders of magnitude more instances
of a crappy application than you should have, you know, run otherwise. And basically the price is what you see at the end of the month or however frequently
they bill you at the end of the billing cycle.
And you can see, oh my God, it's not $1,000, it's $10,000 to run the whole thing.
Correct.
Correct.
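As a back-of-the-envelope illustration of that end-of-billing-cycle surprise, here is a rough sketch. The instance counts and the hourly rate are assumed numbers for illustration, not anyone's actual pricing.

```python
def monthly_cost(instances, hourly_rate, hours_per_month=730):
    """Rough monthly bill: instances x hourly rate x hours in a month."""
    return instances * hourly_rate * hours_per_month

# Assumed figures: an efficient app needs 2 instances at $0.07/hour,
# while a wasteful build needs an order of magnitude more instances.
efficient = monthly_cost(2, 0.07)    # roughly $102/month
wasteful = monthly_cost(20, 0.07)    # roughly $1,022/month
print(round(efficient, 2), round(wasteful, 2))
```

The code itself is trivial; the point is that the bill scales linearly with the instance count the platform has to run to keep a crappy application afloat.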
And so basically it kind of brings us back to one of the things that you talk about a lot, Andy, and that I've actually seen firsthand in my own life.
So I like to talk about this as well. We're moving through release cycles significantly quicker than we ever had in the past. We need to be looking at these metrics that can actually destroy
the environment, or just result in really crazy, completely untenable monthly bills, by
looking at metrics as early in the life cycle as we can.
Right?
Because we don't have the luxury of a four-, six-, eight-hour performance test
before every release goes out to prod anymore. That's just not the way we do business anymore.
Yeah. Let me circle back for a minute here. One of the common
performance issues we see a lot, and Andy brings this one up a lot as well, is
perhaps not having enough threads, your max thread count
and all that. How does this get handled now in a PaaS situation? Because obviously
there are things we can do in the code to try to optimize, and maybe use fewer threads or share them,
but there's a wall, and it could be something as simple as the performance solution
being to increase the number of max threads on that instance.
How does something like a PaaS handle that? Are they just going to throw another instance at you?
Or is there a control to say, okay, no, we'll give you more threads on that instance?
So a lot of those things are configurable fundamentally as you really dive into the
guts of how those things are kind of put together.
I mean, because the reality of the situation is that there's a Tomcat or a WAS instance under the covers. But typically,
what you're relying on here, once again, since you are using something like a container,
right, under the covers, you know, the PaaS is actually managing that container environment for
you. You know, what you're looking to do here is you're looking to actually scale the environment
out by increasing the number of app instances. So you're designing your application in such a way
that there's, you know, minimal persistence and that persistence is actually handled outside of
your application runtime. And the whole idea then is that as your application needs to grow,
you're going to scale that application out horizontally. And then the ideal situation
is that you can scale the fundamental building blocks of the application
independently.
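The trade-off Mike describes, adding app instances rather than tuning one container's thread pool, can be sketched as simple arithmetic. The 200-thread figure below is just an assumed Tomcat-style default, not a platform guarantee.

```python
def total_concurrency(instances, max_threads_per_instance):
    """In a scale-out model, capacity grows by adding app instances,
    not by tuning a single container's thread pool."""
    return instances * max_threads_per_instance

def instances_needed(target_concurrency, max_threads_per_instance):
    """App instances the platform would run to cover a target
    concurrency, using ceiling division."""
    return -(-target_concurrency // max_threads_per_instance)

# Assumed Tomcat-style default of 200 worker threads per instance:
print(total_concurrency(3, 200))    # capacity of 3 instances: 600
print(instances_needed(1500, 200))  # instances for 1500 concurrent requests: 8
```

This only works, as Mike notes, if persistence is kept outside the application runtime so that any instance can serve any request.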
So, coming back to, it's amazing.
I mean, I think amazing things will happen, and I think we will see a lot of software-defined businesses that may not have, let's say, well, actually, they don't want to spend the resources on figuring out everything on their own.
They just want to build the business logic.
They can do a lot of cool things.
They can fulfill what I would always say as the promise of continuous delivery,
the promise of DevOps deploying fast and building scalable apps.
But the critical thing is what we have always been promoting is monitoring every single change that you're pushing through the pipeline.
And now, not only looking at performance,
which has always been one thing that
APM tools, and we at Dynatrace, have done pretty well. But it's not only about performance; it's
about architectural validation. It's about how many database queries do you make, how often
do I call the service, but also how many resources do I consume, meaning how many CPU cycles, how much
memory do I allocate, how many log statements do I write to disk, and how many bytes do I send over the wire?
Because if we monitor all of these metrics early on, then we can immediately say, hey,
this code commit that developer Joe is trying to commit is going to increase CPU consumption
by 10%, which means we need 10%,
or the platform will eventually provision 10% more
compute power underneath the hood
to compensate for that, right?
And so we need to stop this early on,
and that's why we are such a big proponent
of making sure that you do metrics-driven continuous delivery, looking at the metrics
early on.
Because if you just start monitoring in production and you find out, ooh, the latest deployment
is now eating up our margin for our platform that we built.
We're actually not making money anymore, but we paid too much.
Then it's too late.
And you may need to revert a lot of changes that the engineers did that they pushed in
the last sprint that are hard to revert, that mean a lot of effort.
So metrics-driven continuous delivery.
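A metrics-driven gate like the one Andy describes could be sketched roughly like this. The metric names and the 10% threshold are illustrative, not any particular tool's API.

```python
def check_regressions(baseline, build, threshold=0.10):
    """Compare a build's metrics against a baseline and flag any
    metric that regressed by more than `threshold` (default 10%).
    Returns {metric: (baseline_value, new_value)} for violations."""
    violations = {}
    for metric, base_value in baseline.items():
        new_value = build.get(metric, base_value)
        if base_value > 0 and (new_value - base_value) / base_value > threshold:
            violations[metric] = (base_value, new_value)
    return violations

# Hypothetical per-transaction metrics captured in CI:
baseline = {"cpu_ms": 120, "db_queries": 3, "bytes_sent": 4096}
candidate = {"cpu_ms": 138, "db_queries": 3, "bytes_sent": 4100}  # +15% CPU
print(check_regressions(baseline, candidate))
```

A pipeline would fail the build whenever the returned dictionary is non-empty, stopping developer Joe's 10%-more-CPU commit before the platform silently provisions more compute to absorb it.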
And Brian, we talked about this as well.
I think in episode three, if I remember correctly, we talked about performance testing and continuous
delivery, performance testing and continuous integration.
But it's more than just looking at performance from a response time perspective. It is very much
also about resource consumption and, in the end, a cost perspective. Because if you are
allowing developers, and excuse me, hopefully I don't offend anybody, but if you allow developers
that don't really have a notion, that don't know what's actually going on in the architecture
or in the platform itself,
and they just write code,
if they don't know what cost
impact that code change actually has,
we allow them to deploy changes
that potentially destroy the whole business
model of the organization.
Yeah, I think,
back on that episode, a lot of that was in the context of infrastructure as a service. We talked a lot about how this stuff does come back to bite you in your bill at the end of the month, right? And I think whether you're talking IaaS or PaaS or any of the others, you still have to monitor all the same metrics
that we've been talking about,
because it's still going to end up billing to something.
Monitor all of the things.
Yeah, it doesn't change what you have to pay attention to, right?
So in this case, it's not like,
oh, we're using platform as a service now.
We can start ignoring certain components.
Not at all.
In fact, you know, I guess they're all equal.
You have to always be monitoring those because there's going to be a charge there somewhere.
Yeah.
And I think what I want to stress, and I know there are a lot of tools out there in our space:
if a feature that I'm implementing as a developer is performing in sub-milliseconds, but I increase that with a
code change, if I just increase it by 50 percent, and I know that this
feature is used a million times every hour, that's going to be a lot of impact. And that's why it's
so important, and that's the stuff that we stress all the time.
That's why we are so hard on the fact that we have every single transaction, all the time.
We have our PurePath technology, where we trace fast, slow, failing, good transactions.
But what we really identify is regressions from code commit to code commit.
And these regressions can be, yes, performance related, but more so, especially in the PaaS
world, I believe, and Mike, if I get you right,
more so about resource consumption changes, and
also architectural changes.
And call pattern changes.
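Andy's earlier arithmetic, a sub-millisecond feature regressing by 50% while being called a million times an hour, can be made concrete with assumed numbers (the 800-microsecond baseline is an illustration, not a measured value):

```python
baseline_us = 800                               # assumed sub-millisecond baseline
regressed_us = baseline_us + baseline_us // 2   # a 50% regression: 1200 us
calls_per_hour = 1_000_000

added_us = (regressed_us - baseline_us) * calls_per_hour
added_cpu_seconds = added_us / 1_000_000
print(added_cpu_seconds)   # 400.0 extra seconds of busy time per hour
```

A change that looks harmless per call turns into hundreds of seconds of extra work every hour, which is exactly the kind of compounding a per-commit regression check is meant to catch.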
I wrote some blogs
about bad
call patterns in
microservice architectures, the N plus one query problem
that we traditionally saw between app server and database, then later saw between the
browser and the front-end web server.
And now we see it between caller and callee in a microservice architecture, where you end
up, because of a bad coding practice, instead of making one smart web service call to the back end,
making the N plus one query call pattern,
where, depending on your result set, you
make 10, 100, 1,000 calls,
and that adds up and then costs in the end.
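The N plus one pattern Andy describes can be sketched with a toy call counter; `fetch_one` and `fetch_many` are hypothetical stand-ins for a remote service client, not any real API.

```python
class CallCounter:
    """Stand-in for a remote service client; counts round trips."""
    def __init__(self):
        self.calls = 0

    def fetch_one(self, item_id):
        self.calls += 1
        return {"id": item_id}

    def fetch_many(self, item_ids):
        self.calls += 1
        return [{"id": i} for i in item_ids]

ids = list(range(100))

# N+1 anti-pattern: one remote call per item in the result set.
naive = CallCounter()
items = [naive.fetch_one(i) for i in ids]
print(naive.calls)   # 100 round trips

# The "one smart call" alternative: batch the whole result set.
svc = CallCounter()
items = svc.fetch_many(ids)
print(svc.calls)     # 1 round trip
```

Each round trip carries network latency and platform cost, so the gap between 100 calls and 1 call is what shows up in both response time and the bill.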
Yeah, one of the things that always amazes me,
and I actually personally, Andy, I love,
well, I don I love, well, I don't,
I don't love that observation, but I find it almost embarrassing that the more new pieces of technology we have, the more we keep seeing people make the same mistakes again,
right? We learn through a mistake with two pieces of architecture talking to each other.
And then we deploy two new pieces of architecture, and somehow we've completely forgotten what we learned about the two other pieces.
Yeah, that's human nature, right?
I just find that fundamentally, I don't know if I want to say baffling, but it's something that's always kind of amazed me,
and it's something that I've always, you know,
taken on the road with me when I go talk to people,
is, you know, we have all these new pieces of technology,
but we're still making the same mistakes
that we've been making for the past 20 years.
And that's why we'll never be on a Starship Enterprise
exploring deep space.
We'll end up destroying ourselves instead
because we make the same mistakes over and over.
Yeah, they'll have used fundamentally wrong metrics
for measuring distance.
They'll use inches in one part of the system
and centimeters and millimeters in another.
Oh, they will never have to. Come on, really?
We are just about out of time.
I do want to bring up, though, that Andy's point about the few milliseconds, or sub-milliseconds, really turns this into a call-out to the performance engineers, right?
Because, and again, I don't mean this in any kind of an offensive way, but if we're dumbing down development to just strictly the business logic even more, that's going to mean developers
paying a lot more attention just to that business logic. There's a chance they might not
be looking at the bigger picture as much. And in their CI environments, they might not notice or think
about a 50% increase in a sub millisecond response time. And this is where it
comes to those performance teams to really, you know, make sure they're looking at this data,
because they're now going to be the gatekeeper, because they're going to see that that function
is called 500,000 times in a second, which then adds up that piece. So it's really, you know,
a wake-up call to say: besides testing and looking at this stuff across the life cycle,
the performance teams really have to dive in deep on these metrics
and track them, to be able to translate it and bring it back. Because they're going to be the
ones that see this under load, that see the real impact of this, and
can prevent it from going out into production.
And so I'm going to kind of leave on an interesting statement that I made at our Perform conference when I was talking to the guys from PerfBytes.
Really?
Not to mention an alternative podcast in the context of yours.
PerfBytes is responsible for us starting this one.
Oh, awesome.
Our good friend Mark Tomlinson pushed us to do this.
Sweet.
So I was talking to the PerfBytes guys,
and one of the things that I mentioned is at that point in time,
last year, there was a quote kind of floating around
that it was inhumane to not do DevOps
because by following the DevOps best practices and things like that
for a lot of these organizations and folks, you're actually starting to give people
their personal time back, right? And I went on record saying,
so we can kind of start to accept as a given that it's inhumane to not do DevOps,
and then my take on that was that it's inhumane to do DevOps without performance metrics in your pipeline.
Because you're just compounding all of those issues that you would have had in a more traditional environment.
You're taking that same death by paper cut, and now you're applying that to your releases as well.
And I know that in my personal history, having formerly been out in the wild, when I had 13 different agile teams all filtering builds into me, in order to scale my efforts and my expertise, I had to teach people to fish.
And then additionally, if I had a dollar for every build that I received in my history of being a performance engineer,
if I had a dollar for every build that I received that was just fundamentally garbage and was easily identified as such in the script
recording phase, I probably wouldn't need to be working anymore.
You know, going back to that comment that I made earlier about, you know, possibly enjoying
a cocktail on the beach for the remainder of my existence.
You know, if you have some sort of, you know, release that has a 120 millisecond non-functional requirement
for a key transaction,
and when you're recording a script
and it's taking 12,000 milliseconds,
the old excuse from the developer saying,
oh, well, I thought it would work better in your environment,
that just doesn't fly anymore
because every single thing that we've done as a part of, you know, implementing DevOps correctly
is to try and ensure that those environments are as identical as possible. You know, there's been
a lot of talk lately even about, you know, doing things to make test data significantly more
replicated across environments and things like
that. So those old excuses just don't fly anymore. And those old problem patterns or anti-patterns,
as we like to call them, are just significantly more impactful, because, once again,
those old standard problems keep happening. And the problem patterns then extend themselves, so that death by paper cut is actually happening with your releases as well.
So if you waste an hour of time on every release and you have fifteen hundred releases, well, you've just wasted fifteen hundred hours, and you've almost wasted an entire man-year.
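Mike's closing arithmetic roughly checks out; assuming an 8-hour working day and about 230 working days a year (both assumed figures):

```python
hours_wasted_per_release = 1
releases = 1500
total_hours = hours_wasted_per_release * releases

# Assumed: 8 working hours/day, ~230 working days/year.
working_hours_per_year = 8 * 230   # 1840
fraction_of_a_year = round(total_hours / working_hours_per_year, 2)
print(total_hours, fraction_of_a_year)
```

Fifteen hundred wasted hours is on the order of four fifths of a full working year, so "almost an entire man-year" is a fair characterization.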
Wow.
Excellent.
The only thing I'd counter to that is it's not going to give people their life back.
They'll just get more work in the pipeline.
It won't be as stressful, but they'll –
Hey, I want to say, Mike, wow, I think we need to actually invite you back to another show.
And I'm sure there's a lot of topics we can find to talk about.
Brian, do you want to or should I start with my wrap-up, what I learned?
Yeah, I kind of – I think I did mine before Mike.
So please go ahead.
Okay. So I think what I just learned is that PaaS is the next big thing,
even in case it is not already the big thing. And what I just learned is that we allow a lot
of developers to focus on their business logic, which is great, but that performance monitoring,
but extending it from response time to resources,
to costs, to architectural patterns,
is more important than ever
because you're going to kill all your benefits later on
if you're making mistakes.
And if you're not monitoring with every commit,
then it might be too late and it might be
too cumbersome to actually fix the stuff that you committed days or weeks ago. So that's why
I encourage everyone to also read up a little bit and watch some of the videos that I put on YouTube
on the Online Performance Clinic channel. It's bit.ly/dttutorials. I have a session on metrics-driven continuous delivery where I actually cover how this works
in a Dynatrace environment.
We'll also show how you can get Dynatrace for free,
registering for the free trial,
and then using it as a personal license for life
to actually make sure
that you're not committing any bad code changes.
I think that we need to start where it starts at the beginning,
and that's the IDE and the developer.
We want to empower developers with detecting regressions on these metrics.
So please have a look at what we have to say.
All right.
I'll put a link to that resource you mentioned on the page.
And what you just said just kind of brought
a thought in my head. It's the same old new, right? It's everything we're talking about,
the metrics, same metrics, everything else is just something new on top of. So whether it's PAS
or whatever the next new thing is going to be, definitely it's important and interesting to
learn what those new components are and how
they're being utilized and what benefits they can offer. But you still have to monitor and manage it
the same way. At least so far, nothing has really changed the fundamentals of
what developers and performance teams have to do, and even operations, to ensure
that it is a cost-effective solution and a
really good performing one.
I concur.
All right. And Andy, are there going to be, besides on the dance floor, any appearances after July 19th?
Yeah, I hope so. I hope the DevOps Days in Boston and Chicago, that's on my list.
And I believe in September we have, what do we have in September?
I think it's StarWest, if I'm not mistaken. Or is StarWest in October already?
I believe it is. Oh, JavaOne is coming up in September.
That's quite a way out. And StarWest, first week of October.
So I'll be at the SpringOne Platform conference in Las Vegas.
Obviously, the speaking slot or whatever isn't necessarily ironed out yet,
but Dynatrace is going to have a presence there.
Excellent.
All right, and if there's nothing else then,
I will say goodbye for Episode 8.
Thank you, everybody, for listening.
Thanks for having me on, guys.
Thank you for being on, Mike. It was a pleasure. Great conversation today.
Bye-bye. Goodbye, everyone.