PurePerformance - 027 Essential things to know about Kubernetes, Docker, Mesos, Swarm, Marathon
Episode Date: January 30, 2017. Eric Wright (@discoposse) is a “veteran” and expert when it comes to virtualization and cloud technologies. He introduces us to the field of containers and container orchestration, the vendors in the space, the pros and cons, and the key capabilities he thinks have to be considered when evaluating the next-generation virtualization platform for your enterprise. If you want to learn more, check out his podcast - http://gcondemand.podbean.com/ - as well as his publications on https://turbonomic.com/author/eric-wright/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance.
I think it's about our third episode of 2017.
So Andy, how are you?
Good, good.
I think we officially made it to 2017, even though we already had two episodes aired.
But they were, I believe, we can tell the audience recorded before the new year.
So I'm really good.
I got a couple of days off.
I hope you as well.
Yes, I had about a little over a week off.
It was nice.
Hard coming back.
Yeah.
And for me, too, especially, it's been my first year of being employed here as a U.S. employee.
And remembering the days as a European employee where it was a little easier to take a little more time off over Christmas.
But nevertheless, I enjoyed it a little more intensively.
And here we are.
And now you get to travel with mufflers across overseas.
We'll leave it at that.
That's true.
Oh, yeah.
I told you that story. Yeah. That's right. But we'll keep that a secret.
We keep that a secret. Yeah.
Good. So today, Eric is the one I actually want to introduce, because he's been sitting there quietly listening to our, you know, funny stories. Brian, is it okay if I introduce him?
Oh, please do.
I'm sure Eric is laughing so loud he had to put himself on mute.
I'm keenly sitting in the background trying to make sure that I don't jump in.
I forget when I'm used to hosting a podcast.
I've got to wait until it's my turn this time.
We still haven't introduced you officially, so you just jumped in.
That's right.
Wow.
So please, please go back to your basement, which is where I know you are.
So audience, I'm really happy that we have Eric Wright today with us.
I was fortunate enough last year, I believe, to be on one of Eric's podcasts, talking about DevOps, talking about performance.
And Eric, now officially, I want to introduce you to our podcast, Pure Performance.
And maybe some of our listeners already know you, maybe some don't.
So please introduce yourself.
Who are you and why do you believe we invited you for the show?
All right.
Well, thank you very much for having me on the show,
Brian and Andy. It's always a pleasure. Good to meet the Pure Performance crew. This is kind of
nice. My name is Eric Wright. I'm usually known as at Disco Posse. That's the kind of the easiest
way to track me down on Twitter. And I'm the technology evangelist for Turbonomic,
formerly VM Turbo. It's like still close enough that I would like to say formerly VM Turbo
for the folks that are in the vendor space and they know who we are.
And I've kind of dabbled in the IT space for a long time,
been a blogger for a couple of years at discoposse.com.
Now, of course, I write a lot for our On Technology blog on the Turbonomic site.
And I think we're here to talk about the year of containers.
We've gotten over the year of VDI.
It's time to talk about the year of container orchestration, isn't it?
Mm-hmm.
You know what?
Yeah, I think.
Yeah, go ahead.
I was just going to say, speaking of Turbonomic and all, if any of your co-workers are listening that Andy and I both know, hello to you all.
We know about half the company.
Yeah, it is an incredibly small world in technology. And I'm always amazed when you go out, especially, you know, talking to friends at different community events, and especially because obviously you guys are used to being out in the virtual community as well as physically out at events and such. And we end up meeting each other at all these amazing places, but yet never in our hometowns or anything. So the best we can ever do is, we're finally home and we're all on microphones, you know, from our basements doing podcasts together.
So it's nice to be home.
One day we'll actually catch up in person for a beer together.
Yes.
Yes, yes.
So I have a question now because I actually want to understand why you say it is the year of the containers.
Wasn't that last year?
That's what I thought too, yeah.
Just like the year of VDI, it's always the year of VDI, right?
Or the year of Linux on the desktop. And, you know, the most recent rage, I think, is going to be: when are we finally going to arrive at this elusive year of containers?
And I think we've kind of gotten there in a funny way, where people know what it is and you don't have to explain it as much, because we've evolved to the point where we say, all right, we understand what a container construct is; now how do we figure out how to deploy and manage them? And then there's all this other care and feeding around the environment itself.
So I think, you know, VMs took a while to introduce, and virtualization is now obviously rather broadly accepted. Cloud is also fairly broadly accepted. But now we
have this container construct. We're like, all right, we kind of get what it is. We're not as
concerned about the security wrapped around it as much. You know, no one, you know, brings up the
classic thing of like, well, I hear you have root access from every process.
No, no, no. We've we've gotten past those points.
And now we've hit the point where we said, OK, how do we do it as a continuous momentum in our infrastructure?
Deploying it, you know, having a visibility and an awareness of where it is. And then, of course, how's it acting?
The old 1-800-HOWS-MY-DRIVING for my container environment.
How do we know what the heck it's even doing once we actually deploy it out there?
I always tell people, if you thought VM sprawl was scary, wait until you see containers, because they're much lighter and way easier to lose track of.
Yeah, and I think I agree with you somewhat on the idea of the container,
because, Andy, if you think back to also the concept of DevOps
and when we were talking to Gene Kim, there was the curve and the early adopters.
Everyone's talking about DevOps, but it's kind of like the early adopters are running it.
And I think there's a whole bunch of talk and a whole bunch of action around containers last year,
but it's still probably in much more of the early adopter phase.
We're getting a lot of great stories about their usage and the deployments,
and more people are starting to go there.
But I think maybe you might be on to something as this might be the year where it finally goes from,
I don't want to say the unicorn status, but from the early adopters to that bigger, broader rollout.
Yeah.
Yeah, the proverbial crossing of the chasm, right?
That's the piece.
And I mean, I see a lot of it from the ops side.
And I mean, Andy, what do you think?
Like, you're obviously super focused, you know, on the development side of things. And so your camp is probably much earlier to the front of the line on this stuff.
Like you've probably been involved in active deployments a lot more than I would have been just because that's kind of more where the community and where the focus has been.
But do you see now a big difference compared to, say, a year or even two years ago?
Yeah, so I totally agree. First of all, coming back to one thing that Brian said: yes, if you think back to DevOps adoption, Gene Kim said even last year in November that only two percent of companies have been adopting DevOps best practices, even though we've been hyping this topic for years now.
And so I think the same is true, obviously, for containers.
Now, Eric, to your point, you're right.
I've been seeing containers, especially on Docker,
being hugely adopted in the pre-prod environment.
Why?
Because we can solve some amazing problems with it,
right? Whether it is parallelization of test execution to speed up your pipeline when you
have to execute hundreds and thousands of tests. And if you can spawn them up in Docker containers
in parallel and do some crazy things with it, this obviously speeds up. Then also, you know,
using containers just to spawn up and down environments that you need to have access to for a very short time.
And in a pre-prod environment, where it's kind of sandboxed, all of these concerns that you brought up earlier are not really concerns anymore.
Because in the end, you know, you use it to get your job done faster in a more automated way.
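A minimal sketch of the parallel test fan-out described above, using the Docker SDK for Python; the image name, test script, and shard flags are hypothetical placeholders, not anything specific to the setups discussed here.

    # Sketch: run a test suite as parallel, short-lived Docker containers.
    # Assumes a local Docker daemon and the Docker SDK for Python (pip install docker).
    # "example/test-runner:latest" and "./run-tests.sh" are hypothetical.
    import docker

    client = docker.from_env()
    shards = [f"--shard={i}" for i in range(4)]  # split the suite into 4 chunks

    containers = [
        client.containers.run(
            "example/test-runner:latest",       # hypothetical test image
            command=["./run-tests.sh", shard],  # hypothetical test entrypoint
            detach=True,                        # start every shard in parallel
        )
        for shard in shards
    ]

    results = [c.wait() for c in containers]    # block until each shard finishes
    for c in containers:
        c.remove()                              # throw the environments away again
    print("all green" if all(r["StatusCode"] == 0 for r in results) else "failures")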
And I believe and I agree with you, we are seeing now more and more swapping over from pre-prod into prod.
And I think one of the reasons, at least from my perspective, could actually be DevOps or continuous delivery.
Because the premise of continuous delivery is that you are delivering something, and no matter in which environment
you deliver it and deploy it,
whether it's dev, test, staging, or prod,
it should be using the same tools,
the same technology.
And in this case,
it would be the same container technology
that you already use in pre-production, right?
So, but it took a little longer
because obviously there's also other people involved
that have to then manage not only one Docker container or 10 Docker containers that you need in a pre-prod environment, but thousands of servers, of infrastructure.
And now you're introducing this new, very dynamic component that might be a little scary for some people. But we've seen, from a Dynatrace perspective, a huge uptick in the last year in container technology in production, in the operations world.
So then I think that's where we have the fun right now, where we look at the first Docker containers, and Docker and container almost become interchangeable, just because they obviously led the charge, in the same way VMware did for virtualization. You know, they obviously own a significant hunk of the market; however, there are many other players in it, so, you know, Docker, rkt, LXC. But obviously Docker kind of led the charge, and we see people that have done a lot of Docker Compose, a lot of standalone, just docker run.
And then we get to the next piece where we think, oh, how do we deal with a whole bunch of them?
And how do we think about the operations side and the actual broad container scheduling platforms?
And this is where the fun part comes in now.
And maybe you want to start. And Brian, what are your thoughts as well?
What is the this next step where we said, OK, we get containers, we're seeing value,
but now how do we carefully operate them in production at a larger scale? And that means we need true orchestration infrastructure.
And then we can kind of dig in on each of the individual ones and see who's maybe going
to be the one that we're talking about a year from now as the one that won this battle.
So I think what I just learned is that you obviously, as you said in the beginning, you
don't even know if you are a guest or actually running the show here.
You're perfectly pivoting over from us asking you the questions, because we see you as the expert, back towards us, saying, hey, what do you think?
Perfect.
So I think, as you said, you obviously bring up the perfect questions here. People now need to think about how we can use not only 10 containers in a pre-prod environment,
but potentially hundreds or thousands in a production environment.
How do we monitor?
And especially, how does monitoring change?
I think this is a big challenge for most of, at least for the traditional operations teams that I interact with,
where for the last, I don't know, 20, 30 years, we told them,
hey, you need to look at metrics.
You need to look at dashboards.
You build your dashboards.
You define your SLAs, and then you watch these dashboards.
And if something goes wrong, you know what to do depending on your runbook.
All of a sudden, we have applications that work differently because you cannot just create
dashboards for five machines anymore because it's five now, but in 10 minutes, it might
be 50 and then maybe 500.
And then five minutes later, it's down to two.
And using traditional monitoring approaches, I believe, doesn't really work anymore.
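To make that concrete, here is a small sketch, assuming a Kubernetes cluster and the official Python client, of asking the platform how many instances exist right now rather than pinning a dashboard to a fixed host list; the namespace and label are hypothetical.

    # Sketch: discover the current instance count at query time instead of
    # hard-coding "five machines" into a dashboard.
    # Assumes the Kubernetes Python client (pip install kubernetes) and a valid
    # kubeconfig; the namespace and label selector are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(
        namespace="shop",              # hypothetical namespace
        label_selector="app=checkout", # hypothetical service label
    )
    running = [p for p in pods.items if p.status.phase == "Running"]
    print(f"checkout currently has {len(running)} running pods")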
So, I mean, I know I'm pivoting a little bit because I'm also touching a little bit
in how we monitor applications that run in these very dynamic environments. You were more talking
about how do we also operate and monitor, I guess, the orchestration infrastructure behind all of
that. And now I want to actually pivot over back to you, Eric, and actually understand from your
side, as again, I consider you as the expert here,
I know there's multiple options out there for people
when they look about container orchestration.
Could you give us and also our audience a little overview
on kind of the major options that have crystallized themselves
over the last couple of months?
What are the options?
Because some people may still be looking into what they should evaluate. So what are the options out there? What are the pros and cons, especially when it comes to how we scale this stuff?
Oh, absolutely. This is the fun part, where we get to be wrong in three years, and someone's going to yell at me for being a quote-unquote pundit for talking about products that I think are going to win the battle.
But there's the key players that we have in the market today, obviously.
So, you know, we talk about Docker and we talk about containers.
That became a hand in hand thing because Docker was the one that kind of led the charge.
And then the funny thing is that the one I think is leading, and the most well-known, is Kubernetes on the orchestration side of things.
But yet they also weren't the first one to market with container orchestration.
And I would liken it to this: what happened with LXC in comparison to Docker is what Mesos is to Kubernetes.
So if we think about these two major open source platforms that have been around for quite a while,
the Mesos project is very cool, very interesting container orchestration,
has its own scheduler, baked in its own algorithms,
which are actually fairly interesting algorithms in the way that it does placement and sizing and such.
But there's some idiosyncrasies and some limitations to it, obviously.
Kubernetes kind of won because it has that Google thing attached to it.
And, you know, when you run something at Google scale, then everybody in the world says, well, clearly it must be very cool.
Then at the same time, Google also has the
tendency to run something very neat, and then suddenly just shut it down. You know, whether
it's Google Buzz, Google, you know, whatever, there's like a handful of products that they've
run that everybody loves. And then it just goes away. What was interesting, though, with Kubernetes
is, again, a container orchestration framework and really much more.
But at its core, it's container orchestration.
And then they've open sourced it.
And it comes from the learnings of about a 10-year period where we had Borg, which is this big monstrous thing that actually powers the entire Google systems. So all of their world is scheduled
and managed by Borg, which is this monstrous and appropriately named central orchestration
and scheduling system. Then they created Omega because they said, let's try and do Borg,
but without the mistakes that we made with Borg, because it's a live production system.
And so they created this Omega environment,
which was a little bit leaner, a little bit faster. And, you know, obviously they,
the languages changed along the way and some learnings came in. And then from there,
this kind of third generation came out, which was Kubernetes, which, you know, from all of the
tales that I've seen are that it's, you know, like, like I said, the child of the other two saying, here's the kind of small framework that will give us a good starting point to launch from.
And then they open sourced it, which is very cool.
So those were kind of the two big ones that we talk about.
And then, of course, with Docker, we have Docker Swarm.
Docker themselves as a project and as a company have their own platform. And Docker Swarm,
obviously, it makes sense. It's going to work really well with Docker. And so of those three,
these are kind of like the, these are the podium players of the container orchestration frameworks.
And Kubernetes leads out the charge again, because it's cross-platform.
So you can run it with LXC.
You can run it with Docker.
You can run it with Rocket.
You can run it on any number of other frameworks, which is actually pretty cool.
It doesn't necessarily even need to be just straight up container only.
It can plug into other hypervisors and such.
There's really a lot more potential flexibility.
And then, again, Docker Swarm is focused on working with Docker. So I see it's cool, but it's also fairly limited in that way. And I think that's the one.
And then Mesos, like I said, I'm a huge fan. We see what's happened with DCOS, which is
a GUI-driven management framework, which has another scheduler in it called Marathon.
Like you can see what happens.
We're talking about kind of the top three and you're already getting into like this
handful of other opportunities and options.
And I think this is where it becomes tricky from the operations side, because developers
just want to say, look, I know what I need in my containers, and I just need a way to deploy them in my own environment. And then it becomes: how does an operations
team give them that same capability, but also deal with monitoring around visibility, the
infrastructure itself, the performance of both the underlying infrastructure and the applications
that are running inside of these
environments. And that's this neat sort of space where obviously folks like Dynatrace are doing
really, really cool stuff. And Turbonomic, we've been doing a lot around adding ourselves in to be
able to help these scheduler frameworks and replace their native schedulers to make better
intelligent decisions around performance. So yeah, yeah, there you go.
That's the longest way to get to the top three, which is Kubernetes, Mesos, and Docker Swarm.
And truthfully, there's actually two others which are interesting.
And, you know, one I'll say is Nomad, which comes out of the HashiCorp ecosystem, which is very
cool.
I'm a big fan of pretty much anything that comes out of the HashiCorp environment.
So Nomad is interesting.
The only thing I worry about is that it will become the fourth player, if not further down,
just because Kubernetes has kind of captured all of our eyes.
And then the other one is, I'll say it in the nicest way,
I'm a little worried that VMware has gone out on their own and created this Photon controller and this Photon scheduler.
And it may be because at some point they've got proprietary underpinnings that are going to make it perform specifically around their ecosystem,
which is why they didn't grab onto a Mesos, a Kubernetes, or another open source platform.
But that's – so those are two interesting ones to watch.
And we'll see how those actually show up, you know, as we talk.
We'll do this again in January of 2018.
And then we'll see where everybody is.
But I think it's still going to be Kubernetes, Mesos, and Docker leading the charge.
In terms of those, a question I had, or I guess two questions, sort of.
You were talking about the Borg and Omega going into Kubernetes,
and you kind of hinted at a little bit of fear of Google does what Google wants,
so you're kind of left to their whims.
Do you kind of feel at this point they would take something like Kubernetes and just suddenly swap it out or pull the rug out from everybody in place of something else that they favor?
Or do you think at this point, since it's open source and I'm sure a lot of people are running it on their own and modifying it, that it's safe and it'll have a life of its own regardless of what Google ends up doing internally? And then the
second question kind of is, you mentioned the VMware one, I think you said it was Photon,
which ones of these, if any, it almost sounds like VMware might be one, but which ones of these,
if any, would run a risk of locking you in, you know, backing you into a corner where suddenly all of your other options are much more limited from that point forward?
Oh, those are good questions.
Definitely. The first one is the Kubernetes piece, right?
Is there a worry that Kubernetes will, you know, go in a direction that we don't
necessarily like or go away? Yeah. And obviously, I think it won't go away because it's got such a
broad community. Google is clearly the largest contributor at this point, but Mirantis is
jumping in. They're doing a ton of stuff, CoreOS. There's literally a dozen key players that are
putting all of their bets in.
Red Hat is doing their OpenShift, PaaS stuff.
That's a different beast unto itself.
But again, on top of Kubernetes as an orchestration platform.
So you've got a lot of folks that are going to make sure that it lives beyond.
And it's not going to – I don't think it's going to fall by the wayside. And what's interesting about what Google is going to learn from it is Google wins by everything that
we as a community do with their products. And that's what's interesting is they may have a
product, like I said, whether it's one of their internal social platforms and such, they'll run
it for a while.
A whole bunch of people get on it.
But in our world, a whole bunch of people is, you know, 250,000 people are using it. In their world, that's like, eh.
I guess there's a couple of people using it, so we'll give them three weeks to get out.
But Kubernetes now has become so much of a standalone platform approach that there's no way
they can reel it back in. And what they will learn from it, even if it goes in odd directions,
I think they're definitely going to continue to learn from how it's being used. And they can kind
of steer it a little bit. If I were to look at one that I am concerned about, it's Docker Swarm and Docker in general.
We've seen this battle recently or for the last couple of years, really, since Docker incorporated.
Like Docker as a company has got to create something that's going to generate revenue and whatnot.
So they're going to create enterprise platforms wrapped around their underlying infrastructure.
They're going to use stuff like Docker Swarm.
And we've seen it where somebody said, you know, hey, let's make it work better for like the open spec with Rocket as an example.
That was one thing that kind of highlighted it. So Alex Polvi and the CoreOS team said, hey, you know, we want to make it so that you can use runc, you can use a different app spec, and it will still be able to run with Docker as the core component. And it got refused.
And then it became this big thing of like, whoa, wait a minute. I
thought it was open source. Well, it's open source like it's a democracy: somebody eventually has a little more weight on their vote. You know, we're all created equal, just some are more equal than others. So they have this corporate responsibility now where they feel
they know where it's going and they need to keep it on the rails.
And then the result of that is that it now goes counter to where a good hunk of the community wanted to take it, which created this battle around appc, runc, and these other frameworks.
Now, Docker has since learned.
They've opened the doors a bit. They've obviously opened up a few more of their platforms, portions of the platform, and they've kind of learned that they have to align a bit more with how the rest of the world is doing stuff.
It's kind of that Apple thing of you're holding it wrong.
Like, no, no, you made it wrong.
If everybody is holding it wrong, then we know what the problem is.
So that's that piece of it.
Now then on the second part, we talked about lock-in.
We've got this elusive thing of the lock-in with a platform.
I do think that Kubernetes, again, is going to win just because of velocity. I think Mesos, and potentially their native scheduler,
if you use Marathon with DCOS or if you use whatever it's going to be,
them as a front end and as the consumption layer for container orchestration,
they're probably the best in the same way that Betamax was the best technology.
But VHS, whether right or wrong, it won that battle.
The difference is that Betamax won't go away.
Mesos is all over the place.
We've got huge companies using it.
DCOS, Mesos, and Mesosphere, and different corporate iterations and different product
integrations.
I mean, Azure is using it as their container platform, for goodness sake.
So if Microsoft Azure containers are running using a Mesos framework,
I think it's fair to say that it's going to stick.
So when we look at stuff like Photon from VMware,
this is an old open source battle, right?
And you guys have probably seen this too.
At the Interop keynote last year, Jono Bacon was there, Colin McNamara, and Sean Roberts, from Walmart Labs now.
And they're really great, like huge open source advocates.
And the greatest quote I heard was, just putting your code on GitHub doesn't make it open source.
That open source is a movement.
It's a momentum.
And it's a community effort.
So, and I looked at Photon, and literally 100% of the contributions are from VMware staffers, which is normal, what you see at the start.
But compare that against everything else
that we're talking about.
And we've got massive communities
that are wrapped around it.
So if you put your eggs in the basket of Photon as a consumption layer: if they were smart, and I think they're doing it, they're going to make it so that you can consume it with the same API structure that Kubernetes has to offer, which, again, just screams to me,
why in goodness name didn't you just use Kubernetes? If you think you're going to
profit from your container orchestration, then good luck and may your God go with you,
because it's a tough nut to crack out in the financial market for sure.
But I guess the reason why they do it is because they just want to be one vendor
that provides everything as much as possible, right?
Yeah, there's definitely –
Those are the reasons.
There's the branding piece of it.
There's the –
And again, there's something underneath it.
Like there's a reason why they've done something a certain way.
And in two years, you'll be like, oh, you magnificent buggers.
Now I know why you did it that way.
This is what it is.
So clearly there's something at a layer underneath that's giving them a need in order to – they have to build their own platform in order to integrate it.
And I hope that it's not just hubris of accepting that something else is better. And you see that in
a lot of other environments, you know, in software infrastructure or software defined infrastructure,
right, that everybody wants to be the one single source solution for everything. But look at Red Hat: they're using a ton of different other platforms as part of their ecosystem, and they widely accept it.
Like, hey, look, if you're going to run Kubernetes, then have at it.
Run it in our – we've got a distribution of it that we support.
Turbonomic, that's the thing we said.
You want Kubernetes?
You want Mesos?
Whatever you want to choose.
I'm not going to tell you that you need to do something over here in order to make it work well. We made the choice that we created integrations with all of the active
frameworks so that we knew it has to be customer choice. It has to be consumer choice. And that's
that dangerous thing. So yeah, we'll see. Like I said, we'll see in a year. I applaud VMware for going into more open source worlds with a lot of their other platforms. They've done some more stuff in recent years, obviously, but they also stopped doing it in other ways. That's a whole podcast unto itself.
And so I have a question now. Obviously there are multiple options, but it's clear what your tendency is. Now, one of the biggest challenges that I could see potentially is: do we trust these schedulers? Do we trust these platforms to take care of a lot of underlying resources, to schedule everything, and to do everything in a perfect way,
so that at the end, not only do we have a dynamically scaling system
that scales depending on end user demand, but that it's also efficient enough.
And I know you and your company, you are obviously providing your own algorithms that are, I'm sure, better than what comes out of the box.
But still, is there anything that operations teams or enterprises can do when they look at these different options and figure out what's the best option for them, beyond obviously the technology? Because in the end it comes down to: do we walk down a path where we're not losing money in the end because we picked something that is not very efficient?
Yeah, that's a great one. See, let's go back a few years to virtualization.
We put our trust in virtualization at some point over the course of the last decade. No one even questions virtualization first in most companies that are of a reasonable size.
And it's not a detractor to any company that can't
make that choice for whatever reason. But no one questions it. They're like, all right, well,
maybe I can't afford it and we only need one server. Okay, that's fine. Fair enough.
Then it becomes the cloud thing, right? So this whole battle of, is it cloud first? And you'll see pundits all over the world. My favorite thing is, you know, anybody who says something like, if you're not building your entire infrastructure public cloud only, then clearly you're wasting your time. And then you'll take a look at their profile and, oh, look at that, you happen to work for a public cloud company. No wonder. Of course they're going to be evangelical about their platform and their approach to it.
And the reason why containers really are going to be widely accepted is because they now enable complete heterogeneity underneath, because what happens is this new
abstraction layer means the cloud companies are happy, the virtualization companies are happy, the bare metal people that are going to run it directly on bare metal,
also, you know, perfectly happy. So how do we look at this trust layer? And then the
performance thing, plain and simple, the solution right now for most of these environments in order
to solve the performance problem is to fire more containers at it or fire
more resources at it. And that's like if you take nine women and you give them one month to produce
a baby. There's certain things that you can't throw capacity at to solve. And this is where
we're going to start to bump into it. We're about to screw up containers the way that we screwed up virtual machines in the first P to V stages. So we're going to deploy extremely large containers
that are monolithic applications inside them. We're not going to see any value in it,
and we're going to blame the container orchestration environment.
How many applications have you seen where someone says they do some terrible application coding,
and then they say, oh, well, that's because you're using Java.
Like, well, no, maybe it's because you actually don't know what you're doing.
Then you could have written it in Go and it would have run faster and gotten to the failure a little faster, right?
So we think of it as operations teams.
You know, we've first got to stand up the infrastructure and be comfortable getting it up and running.
Then the piece is around, you know, scaling that infrastructure and being able to do day two management.
Now, if you really look at it, the ability to run container orchestration as a platform for consumption by everybody is so much more flexible than virtual machine infrastructure. But what it means is that you're going to have to think a lot about
networking. You're going to think about security, not in the like, you know, PID1 root level access
to the hypervisor and bare metal, but in the sense of identity and access management.
And then again, you know, once you even just solve the tooling issues, then you're now going to realize that you've got a performance problem and that you have a cost problem. Because if you don't
understand performance for your applications, then you're going to be running, A, you know,
too much physical infrastructure on-premises, or B, you know, too much cloud infrastructure,
which, you know, despite what they say about it being cheaper, it's the cloud companies that tell you it's cheaper, or consultants who get paid by cloud companies.
So, like, this is this real neat phase we're in, where, like, first, you know, you pick
one to bet on, and then go with it.
And like I said, if you're going to choose one, I'm going to say Kubernetes is the one we're going to be talking a lot about for many, many years.
Mesos will be a nice second place player.
It's like being second place to AWS. You'll never win the gold medal, but it's not a bad place to be if you're second place to AWS. Who's going to be third? I think Docker is
definitely the one, obviously, because when we talk containers, we talk Docker Swarm.
So if you're going to be a betting person, where operations teams and development teams need to look is this: they need to become acutely aware of how their applications perform inside of these constructs.
Because the more you abstract them, the further away you get from being able to truly measure and being able to affect performance inside those
application layers. And of course, again, I talk about the pundits who only talk about stuff that their company does, and it sounds like I'm only talking about stuff that my company does. But I mean, I've been talking about performance since I've been a virtualization admin, for far too many years. And you know, it's way too easy to just fire it out there.
Like I said, we're going to see tons of containers trying to solve a problem
by just going, ah, put in the replication controller, set it for 30.
If 30 won't do it, set it for 40.
We'll see how it goes.
No, no, no.
Let's actually look at what the real problem is.
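For context, here is roughly what that reflex looks like against a Kubernetes API, as a sketch with hypothetical names (and a Deployment standing in for the older replication controller): one call, forty copies, and no more insight into the actual problem.

    # Sketch: "solving" a performance problem by raising the replica count.
    # Assumes the Kubernetes Python client and a Deployment named "checkout" in
    # namespace "shop"; both names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # One patch and the scheduler runs 40 copies of the same (possibly
    # inefficient) container; capacity thrown at the problem, not a fix.
    apps.patch_namespaced_deployment_scale(
        name="checkout",
        namespace="shop",
        body={"spec": {"replicas": 40}},
    )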
So basically, I mean, and you're speaking out of my heart, I would say.
I mean, one thing you mentioned is, yes, we can throw as much hardware as we want on it, but we won't solve the problem.
What we really need to figure out as an industry in the future is how we can build efficient software on whatever technology is available. And I think the ability to monitor not only applications and the performance to the end user, but really the resource consumption, and that includes obviously virtual and physical hardware, whether it's I/O or whether it's storage or network, and then basically taking this information and bringing it back to the
engineering teams and say, well, you gave me an application that I can scale endlessly
in my virtual infrastructure, in my container infrastructure.
But guess what?
It costs me five times as much as before when we had this big monolith and it also kind
of scaled a little bit at least.
And so I totally agree with you.
And that's why monitoring is obviously so important and breaking it down to not only apps.
We talk about features.
We talk about services.
And then giving this feedback to the engineers and the architects and say,
well, we give you all this great new technology,
but now you actually need to start building applications
that are really efficiently
using these capabilities that we have here and not just relying on the underlying infrastructure
to scale everything. Yeah. And Andy, that also even goes to not just building efficient code
to cut down costs of scale, but as I think we've talked in the past about building features,
monitoring not just your performance, but monitoring your feature usage so that you know if there's this feature that runs on five services out there that no one's using, you can just cut it, you know, cut it out and save that cost.
It might not be a huge amount, but, you know, it all ties into that whole idea of monitoring not just the efficiency of the code but also the business value based on the usage.
Exactly.
Adaptive provisioning and orchestration depending on real usage patterns.
And if you then architect your application correctly, so that you have small individual services moving towards microservices, and you have picked the architecture correctly, then you actually have the ability to scale your infrastructure much better, to provide exactly the number of resources required to deliver that type of service to the people that need these services, but not over-provision it just because you think you need to provide these resources.
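As one concrete way to express that, here is a sketch, with hypothetical names and thresholds, of a CPU-based Kubernetes autoscaler that lets the replica count follow measured demand between a floor and a ceiling; tying scaling to business or feature usage would need a metrics pipeline beyond this.

    # Sketch: let the platform adapt replica counts to real usage instead of
    # over-provisioning up front.
    # Assumes the Kubernetes Python client; the deployment name, namespace,
    # bounds, and CPU target are hypothetical examples.
    from kubernetes import client, config

    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="checkout-hpa", namespace="shop"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="checkout"
            ),
            min_replicas=2,                        # enough for the people who need the service
            max_replicas=20,                       # a ceiling, not a default
            target_cpu_utilization_percentage=70,  # scale on measured demand
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="shop", body=hpa)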
Eric, I got one more question to you because I know we're running kind of late on time.
Now, what do you think is the best approach if an enterprise looks into these technologies?
Is it smart for the ops team, who in the end become the service provider to the application teams, to pick and choose their technology stack depending on what makes their life easier in the future? Or should it be the application teams that tell the operations teams what they need? How does this work? What's the best approach when evaluating and figuring this out as an enterprise?
That's a great question, you know, because it furthers the DevOps thing of where they're so far apart.
You know, the reason why we have trouble merging them together is in the answer to how you define DevOps; it depends on who you ask. DevOps to a developer is finding a better way to get the operations team out of my way so I can get code out there faster. And from the operations side, DevOps is finding a way to get those damn developers to write better code so it doesn't
break in production. And for container orchestration, it is a great way to bridge that.
And I feel it's an operations responsibility. And the reason is, as a developer, it's probably much easier to change platforms. Because of the
abstraction that we present, whether it's Kubernetes, whether it's going to be, you know,
Marathon on DCOS, or just the raw Mesos scheduler, or whether it's Docker Swarm, or whatever it's going
to be, that abstraction layer is very easy for developers to switch gears on.
And then they are still concentrating on code.
And, you know, the containerization kind of took away a lot of the stuff that they had to worry about.
So I think the operations team needs to know that, one, they're comfortable with the vendor who's going to support it.
And I say this importantly because you are not alone in this. And if you are alone,
you better have a lot of engineers behind you. Because if we're running on cloud or virtualization, whatever it is, there's most likely a vendor attached to it; you're not running raw KVM, except maybe like the OpenStack-on-KVM folks that have engineering teams that support their infrastructure. They've got that knowledge and the comfort of controlling
and managing the tooling of the infrastructure. So if you're an enterprise consumer,
go to your partners. Take a look around at who, number one, in your competitive space,
in your vertical, in your business, is using similar platforms. They're probably doing
something similar to your business.
So why wouldn't you take a look at what their infrastructure looks like as a way to stay
competitive?
And then, of course, like I said, who are the vendors that you already have relationships with, and are they comfortable with some kind of container orchestration system?
Most likely, they're already working with, you know, a Red Hat or, you know, like an
enterprise vendor that you pay an annual 20%
year over year fee to, to keep your infrastructure up and running and keep the service and support
alive. And that's where I think VMware is going to do fairly well as they get people working with Photon and such, just because they've got the audience;
it's a captive audience. But as a, like I said, as an enterprise consumer, just make sure that you've got something to fall back on.
But make sure, even if you're a Windows admin, people have to know and understand scripting.
They have to understand working in CLIs.
They have to understand working at a much lower level than they're used to working. There may be GUI representations of what the container environment looks like on Windows when it comes to, like, Azure Stack running containers on-prem.
But you've got to know what's running inside it much more so than we did when we just trusted Hyper-V, trusted VMware, trusted a lot of these platforms. And so definitely, like I said, look to who else you can call
on when you need to have it certified, because you want to make sure it's a major player that
you're comfortable with and you have an existing relationship with. Because, yeah, like I said,
Mirantis has now become a Kubernetes company, the same way they were the OpenStack company. They were, what do they call them, pure-play OpenStack. Now they are pure-play Kubernetes.
No surprise. They saw the writing on the wall and said the OpenStack business isn't going to
keep returning as strong as they needed it to. So time to get into the Kubernetes business.
And Intel, CoreOS, all these other companies, they have folks that are willing to support container orchestration stacks in your environment alongside with you, which is kind of cool.
Wow. Cool.
Hey, well, Brian, I can tell you something. Today, well, I was very happy that we had Eric on the call and that he kind of took charge and explained a lot of the stuff that I definitely don't feel like an expert in, definitely not as much of an expert as Eric. I also know that there's still a lot of stuff out there to learn for all of us, especially for those enterprises that will make a decision soon on which horse they bet on.
But that's basically it.
I like what you said at the end, where you said, you know,
in the end it's the responsibility of operation teams to figure out what it is that they need to pick.
And I would be very happy, Eric, to definitely have you back at a later point in time and then see whether your predictions came true or what we have in a couple of months or maybe in a year from now.
And that's going to be very interesting.
Yeah.
Any final words, Brian?
Yeah.
You know, one thing you mentioned, it's not something really to talk about, I guess, because I think it's something for us to more sit back and watch.
Besides being the year of the container, I was just recently reading an article that Microsoft Security put out about the latest attempts to hack into virtual cloud machines and use those for massive-scale attacks.
And you had mentioned security around containers and all.
So, you know, we just had that recent IoT thing, where all the security cameras were used to do the DDoS attack against that whole large swath of the Internet a few months ago.
It'll be interesting to see how security with containers
and this whole virtualized world plays out in the next year or so, and if somebody does get some sort of a massive attack
based on getting in there. But yeah, Eric, this has been very, very eye-opening. Learned a lot
today. So thank you very much. And Andy, I would like to point out how good his voice sounded because he's using a microphone and not a headset.
I'm going to keep pushing that on you, Andy.
I'll buy Andy a microphone.
No, he's got one.
Actually, I have a microphone, yeah, but I hardly ever keep it with me.
That's my problem.
He uses it for his field recordings when he does the Pure Performance Cafe.
Eric, a shout-out to your child in the background there.
Yeah.
Special guest star.
That's the only downside to a good microphone.
It also picks up what's going on a whole floor away.
And the other thing, I know why Andy probably doesn't travel, doesn't have his microphone everywhere he goes.
As a frequent traveler with an eight-inch cylindrical aluminum tube in my carry-on, let me tell you, I get a whole lot of love from TSA every time I go through that X-ray machine. It's filled with wires and it's an aluminum tube; it's a fast way to get a little extra pat-down every time.
And I also want to give a shout-out to our friends over there, Skitty and the Legend.
If they happen to listen to this episode, they'll know who they are by those names.
But anyhow, yeah, excellent, excellent having you on here.
Do you have any final thoughts before you go ahead and give another plug to your podcast and your blog and all?
Oh, yeah.
No, this is great, you know, and again, for the folks that are listening in, obviously you're probably more familiar with what's going on on the Dynatrace side. You know, your team's doing a lot of great stuff. I love your push into being very open about how to adopt things, with really great walkthroughs on the blogs there. And just, you know, look to the industry for people that are walking just slightly ahead of you, because they've already stepped into the puddle first. So, you know, I hope that I can be that for some folks who are listening in. And, of course, if they want to reach out,
you can always find me on Twitter. I'm Disco Posse. I blog at the On Technology blog.
So if you go to turbonomic.com forward slash blog, I write there fairly regularly as well as on discoposse.com.
And I am an occasional podcaster on the GC On Demand.
So if you go to GCondemand.io, we actually had a little bit of a holiday break there and I've got a few that are queued up.
So we'll have some neat stuff. And again, I love the conversational style with you guys here. It's been a lot of fun, and I look forward to a chance to come on again, and we can catch up in a couple of months and see how things have gone.
Excellent.
Cool.
Well, thank you very much. And everybody, until next time, this is Brian and Andy.
This is Andy.
Yeah.
Thank you.
Thank you, everybody.
And don't forget, you can follow us at, what is it,
pure underscore DT at Twitter.
And you can email us any feedback, show ideas,
or anything else at pureperformance at dynatrace.com.
Thank you all very much.
Goodbye.