PurePerformance - Keptn: Shipping and running cloud native apps with Alois Reitbauer
Episode Date: April 15, 2019
How many different continuous delivery pipelines do you have in your organization? Do you have dedicated teams that keep them up-to-date and constantly extend them with new tool integrations? Have you already built in capabilities for shadow, dark, blue/green or canary deployments? Is auto-mitigation and self-healing already on your internal pipeline roadmap? Sounds like a lot of manual work? Keptn (@keptnProject) – an open source enterprise-grade framework for shipping and running cloud-native applications – is going to eliminate the manual efforts in building, maintaining and extending pipelines. Alois Reitbauer, Head of the Dynatrace Innovation Lab, gives us the background on how Keptn evolved, which cloud native best practices are implemented as core capabilities, how to contribute to this project, and gives us a glimpse into where the journey is going. Visit the about page, join the community, and make sure to deploy Keptn on your own Kubernetes clusters by simply following the step-by-step guides.
https://keptn.sh/
https://twitter.com/keptnProject
https://keptn.sh/about/
https://keptn.sh/docs/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson, and as always with me is Andy Grabner, my co-host.
Hey, Andy, how are you doing?
Hey, Brian, I'm doing pretty awesome.
It's a new day.
Whoa, whoa.
You sound really different, Andy.
What's going on?
Well, maybe I finally got through puberty and my voice broke. I don't know. Or maybe it is because I finally followed your advice and hooked up my
Zoom handy recorder to my laptop. And I think now the audio quality should be much, much better.
A lot better. This sounds amazing, Andy. And you mentioned there's a, when we were talking about your voice change,
you said puberty, which is what we talk about here.
You said in Austria you have a specific word for the voice change itself.
Stimmbruch, yeah, Stimmbruch.
And it's, well, I use Google Translate and it says voting break,
which doesn't really make sense.
It's more like voice break, your voice breaks, right? I
mean, that's what it is. Oh yeah. I don't think we have a term for that over here in the United
States, or at least in my experience of the language, but you sound wonderful. It's glad
to finally have you, the most knowledgeable host on the show, sounding like a professional. So
congratulations to you, Andy. Thank you so much. Let's hope that the quality of the words
that come out of my mouth
at least stays the same
or maybe improves as well.
Speaking of words,
I think it's not just the two of us talking today.
We also have an honorable guest
who happens to be
the one and only guest host that we have had so far. And as I said earlier, correct?
Years ago, years ago. It's been about two years. And I think he's also my boss, that's why I have to
talk very nicely now. The reason probably why nobody else wanted to be a guest host until now is
because nobody can live up to the expectations that people now have of guest hosts on Pure Performance. Of course.
But I don't want to slime even more now.
I will just pass over the word to Alois Reitbauer, who is hopefully with us also on his Zoom.
So, Alois, are you there with me?
I'm here.
Hello, Alois.
So, does Andy get a raise now for all those kind words?
No, actually, he could get a raise for his work.
He's doing great work, so that's fine.
But not just for sliming.
And he's got a much better voice now.
And we have made so much, I think, advertisement for Zoom
that Zoom might actually pay him now.
Exactly.
And this is not the Zoom video sharing,
but it's Zoom, the company that makes microphones.
Yeah, it's really cool. It's a handy little
portable microphone in case anyone's curious.
So Alois, can you, I'm going to play
Andy's role here for a second. Can you give us a
although people know and
have been waiting for you to come back on the air as
a guest host. I know probably for two years people
have been standing by like, where is he? Where is he?
For those who are not in that
category, can you
remind everybody who you are and, you know,
give us a little bit about you, tell us a little bit about yourself? Absolutely. So for that one
person out there who doesn't know who I am, just joking. I'm Alois Reitbauer. I've been with
Dynatrace for, I think, almost 13 years now, in various roles. And in my role, I was leading the Innovation Lab, I was working with
a lot of our technology partners and driving a lot of our innovation projects. So the Davis project
in the beginning, or even before that, working with Andy on the Ajax Edition. Also doing a lot of our
cloud related work. And most recently, what we do here at Dynatrace
around autonomous cloud management,
and even more recently,
about this new project called Captain
that we're going to talk about today.
Yeah, so what...
Before we go in there, though,
hold on, Andy, before we go in there,
two things I wanted to say is,
number one, I still get people,
every once in a while,
I still hear someone asking,
is the Ajax edition still available?
So you got a lasting impression on that.
So awesome.
So you said you're 13 years.
Andy, I believe today is your 11th year anniversary, you said before?
Exactly, yeah.
And actually, it's amazing how long both of us have been with the company.
I will obviously never be able to catch up to Alois.
But yeah, it's been a good ride.
And Alois and I have always worked together.
And even before Dynatrace, we worked together.
So we've been sharing the professional life for, I think, more than 20 years now.
Wow.
And I'm coming up on my eighth year.
So if that tells you something about how awesome it is to work at Dynatrace. So technically,
almost 30 years of Dynatrace knowledge on this podcast. That's pretty awesome. Yeah, so Alois, as you
said, you know, recently, and this is really the main reason why we brought you on the show,
is you launched the initiative around Captain. And now, for those people that have not yet seen Captain or heard about Captain,
do you want to first of all quickly tell us a little bit about Captain,
also how people can find it, and then we go into the –
I want to learn more about what is Captain, why did you build it,
or why do we build it, and which problems does it solve?
But maybe let's get started with the name and the initial thought on why Captain. Yeah, I want to start a bit earlier. I want to start really
in about July last year where, as you know, Andy, we all got together and we were
taking all of our examples that we've built at Dynatrace, whether it was the unbreakable pipeline, it was self-healing,
and a lot of other things that we used around making applications more autonomous,
more self-healing, and living this cloud-native thought process.
And we gathered in this nice hotel near the Linz office,
put all of this together into kind of like a workshop,
and got some of our smartest minds at Dynatrace there and share our ideas around what we back then thought okay this is going to be
a new initiative at Dynatrace which we called Hanabas Cloud Management which was really
taking the best practice we had around no ops and everything about what we learned from our customers.
That's why there are perform conferences and put this into a cohesive program.
This is a program that evolved over the last half year or so.
And our goal was really to have best practices, have a training course.
A lot of these concepts, also what we launched at our perform conference this year was autonomous
cloud management.
So what does this have all to do with Captain?
What we learned out there is we realized that a lot of implementations around continuous delivery and autonomous operations when talking to our customers kind of look the same, but not really.
So we thought, okay, as we're going to implement this,
we will do a lot of tasks and we will be repeating things over and over.
And at the same time,
we will build very complex setups
with lots and lots of tools together,
which led to an open source project,
Captain, that can be found under
keptn.sh. Yes, for everybody out there who doesn't know, you can buy
an .sh domain. The only caveat of this: some virus scanners might actually pick
them up as a virus. That's the only downside, but we thought it's a really
cool idea. And what Captain is, in one sentence: Captain is a control plane for continuous delivery
following our best practices of the unbreakable pipeline
and autonomous operations and self-healing.
So all buzzwords in one sentence.
And I do want to mention that it is K-E-P-T-N dot S-H.
Yes, because it's the German phonetic writing of the word captain.
Oh, nice, nice.
And just because you mentioned the S-H, what the heck is an S-H website?
I mean, obviously you have the allusion to the shell script, right?
But what does S-H even stand for in terms of websites?
I actually don't know.
I just saw you can buy an .sh website and I simply did it.
I guess there must be some country out there, right?
Or is it one of these new top-level domains?
Yes.
And before we go on, though, I do want to point out that
just even on the About page on the site,
that this is not a Dynatrace product, right? Specifically, as you worded it
on the website, Captain is an open source, enterprise-grade framework for shipping and running cloud-
native applications, curated by Dynatrace, right? So the project is originating with a bunch of
people at Dynatrace, but it is not quite a product. It's more of an open source project. Is that correct?
Yeah, that's true. So it's not that it's a product. We still have our core product, which is Dynatrace.
And Captain is really an open source framework
that allows people to use Dynatrace.
And everything you can do with the APIs
and even beyond in an open source fashion.
And just to complete this, I just Googled this.
So .sh is St. Helena.
Okay.
Learned something new, a new geography lesson. Look at that.
So, believe it or not, by the way, that top-level domain was introduced
in 1997. So we could have done this for almost, wow, over 20 years. Yeah. So, Alois, I mean, obviously,
I know I've been part of that journey over the last year or so.
I mean, almost a year.
You said July is when we first got together.
But I believe what you said almost at the very end now is for me the most critical thing.
When we went out to our customers and also internally, because we started also building our own projects and tools, containerized using Kubernetes.
And then we all figured out that it seems we're all trying to solve the same problems by coming up with a new way of doing continuous delivery, by taking existing tools, by building
quality gates in there.
And it felt like when talking internally with our different teams, every team kind of built their own little thing.
And then we talked with our customers and they were also saying, well, we kind of containerized our apps now and we know that Kubernetes is the way to go.
But for continuous delivery, for doing blue-green deployments, it seems we still need to build either our own tooling or mesh together the toolings. And I think this is what I love about Keptn,
because you're taking away the pain of having to come up with your own tooling that actually
gets your containers in an ordered orchestrated fashion through the different stages of your
pipeline all the way into production. And this is what I love about it. And this is where I think
where we see a lot of value in it
and hopefully the world will see a lot of value
because we, and I believe you phrased it that way,
people can finally focus on building code again
and not having to maintain pipelines.
Yeah, and I think that there's a lot of problems there today
when you look at how people build pipelines specifically.
A lot of this is like ad hoc code.
It's really integration code that's scattered across a number of tools.
So on average, I think, when we looked at some of the implementations
at customers, it's seven-plus tools that you're integrating.
It's not test-driven development that people are using.
It's just, oh, that stuff works great.
And then you have a new release of your deployment tool
and you have a new release of your bug tracking tool
and suddenly stuff breaks.
And nobody at some point knows why stuff breaks,
especially as you're very successful in the beginning
integrating two tools.
You start with three and you start with more complex workflows
and you keep going like this
and you invest a lot of time and resources into it.
And at some point,
you end up with a lot of code
that more or less becomes
your next level legacy code,
but it's not application code.
You realize that you have
actually built an application,
but the application in this case
is your continuous delivery
and automated orchestration pipeline.
And with Captain,
we want to massively simplify it
and also ensure that things are maintainable.
Plus, to your point, Andy, also make it quicker.
So my vision is really I want to take, say, an Nginx image
from Docker Hub and then simply deploy it
and have a three-stage pipeline with testing,
with self-healing, everything
out of the box without having to care about anything.
And I think that's what I mean.
We will go into some of the details, especially in another episode we'll be recording with
Dirk Wallerstorfer, who is kind of leading the technical innovation team here that is
actually building the core of Captain. But what he just said is, you know, taking a container and then letting Captain figure
out how to push it through the individual stages, what type of testing strategy to execute
on it, and then also validating or evaluating a deployment.
We have some cool concepts that we've taken over from the year-long work we've done with
integrating Dynatrace in CI/CD, which metrics to look at.
And instead of kind of hard coding this into plugins every time for every different CI
solution out there, I like the approach of Captain where the deployment
validation is just a part of the orchestrated workflow. So no matter in which stage your
container will be deployed, after the executed tests, then Captain will start the evaluation
of the deployment and will then either promote
or stop the promotion of that container, depending on how well you score.
And I think, you know, we got obviously not only our own experience factored into this
part of Captain, which we call Pitometer, but there's also a lot of other companies
out there have kind of solved the problem.
But I think we took kind of the best of the best
and looked at things like Kayenta,
how they have been doing build evaluation and stuff like that.
But that's what I really like about it.
And as a developer, I don't have to care about this anymore
because it will just happen automatically for me.
Yeah, exactly.
And that's also something you really don't want to care about.
But it's not only about not having to care about it.
And you've been part of a lot of these conversations
with customers as well. When you talk to them,
very often a deployment pipeline
is something that's specific to one application.
And the customers who we usually talk to have hundreds of applications, which means they have
hundreds of individual pipelines.
And what Captain can really do is standardize this or at least have it descriptive in a
way that you can simply, even for existing applications, bring them on the same pipeline
process, which is also important for a lot of companies who have regulatory requirements that they know if an application
is shipped via a specific process that they can easily deploy it into production.
Maybe one important thing to mention here, we kind of did it implicitly, but Captain
is really built for cloud native applications.
So our focus is really on Kubernetes-based applications
and also applications using service meshes,
in our case, specifically Istio.
So we focused on these modern applications,
first and foremost.
While there are some people who we know
who might also want to use it at some point
for more legacy apps,
but a lot of the concepts we built
into Captain really only work well in cloud native applications. So I think the ideal point for a lot
of customers, also people out there, you don't need to be a Dynatrace customer to use Captain,
because although we want you to use it with Dynatrace, you could theoretically use it without,
is when you really think about, oh, I'm migrating my application to containers now, or I'm re-architecting my
application for microservices and moving to Kubernetes. So I wanted to ask, you actually
mentioned something I was going to ask you specifically about is how much of a requirement or how dependent is this on Dynatrace?
And you just stated you don't have to use Dynatrace with it.
So from my understanding of a lot of the projects we've done in the past with these pipelines,
it seems like the Dynatrace component is part of the automation of analyzing the performance and pushing forward, rolling back, correct, or even the self-healing part where Captain is going to be able to tap into, at least from the Dynatrace point of view, the API, pull in metrics, evaluate whether or not the build is good, push forward, roll back.
Is that the crux of the Dynatrace integration? So then what you're saying is
if you did have another tool that has that capability, you could obviously customize
the pipeline to work with another tool? You can. And the idea is that theoretically,
you can run everything just using open source tools. It will just depend on the use cases that
are supported.
So there will be, for example, a simple evaluation component as part of Pitometer available with the open source release that can compare your response times, for example, against well-defined
thresholds.
But obviously with the AI solution, you can do way more complicated things.
So eventually you will see, especially
for more complex applications, you're going to need a more powerful solution. But technically
you can, especially to play around with it, to get to know Captain, you don't have to
rely on Dynatrace as a solution there. Although we obviously want you to use it.
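To make that quality-gate idea a bit more concrete, here is a minimal sketch of the kind of threshold evaluation described above: measure a few metrics after the test run and only promote if they stay within well-defined limits. This is not the actual Pitometer code; the metric names, limits, and function are assumptions for illustration only.

```python
# Hypothetical sketch of a Pitometer-style quality gate: compare measured
# metrics against well-defined thresholds and decide whether to promote.
# Metric names and limits are illustrative, not the real Pitometer schema.

THRESHOLDS = {
    "response_time_p95_ms": 500,  # fail if the 95th percentile exceeds 500 ms
    "error_rate_percent": 1.0,    # fail if more than 1% of requests error out
}

def evaluate_deployment(measured: dict) -> bool:
    """Return True (promote) only if every metric stays within its threshold."""
    for metric, limit in THRESHOLDS.items():
        value = measured.get(metric)
        if value is None or value > limit:
            print(f"Gate failed: {metric}={value} (limit {limit})")
            return False
    print("All gates passed, promoting the artifact to the next stage")
    return True

# Example: metrics pulled from a monitoring API after the test run
evaluate_deployment({"response_time_p95_ms": 420, "error_rate_percent": 0.3})
```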
Yeah. And I just bring that up so listeners understand this is not just a Dynatrace integration. This is more of
that full-on pipeline kind of piece that you're building out here. And this is something that,
I think, according to, you know, Andy, I believe it was, I'm not sure if it was when we were talking
to Nikki, or, who was it before, Priyanka? Priyanka. Priyanka, yeah. You were bringing it up, how
we're really looking to kind of turn this into something that can be used
enterprise-wide by a lot, a lot of people.
That's the only reason I'm bringing that up. Also because we don't like to
only talk about Dynatrace on the podcast, but also just to make clear
that it's not APM
specific. This is going to solve much larger,
be a much larger piece of your pipeline
than just the APM side.
Yeah, so I described it in the beginning.
So what we really started to call Captain,
and not just us, also other people who we explained it to,
is really a control plane for continuous
delivery and the unbreakable delivery, but also for automated operations.
And this is actually nothing that we really invented. To be honest, this is something that
comes from the networking space. So the networking space, if you look at a networking space,
like a long time ago, you had all of these different
hardware appliances that you were running, and each had their own web interface, which
is great.
But once you're running lots of them, it's not that great anymore.
So you want to separate what was there called the control plane from the data plane.
And if you're on the top, obviously, of the control plane, you would define what your
application is, like your networking stack or your networking environment.
And if you look conceptually, like at maybe 100,000 feet, a network, a computer network,
is very similar to continuous delivery pipeline because what they're doing,
they're ensuring that certain content, in the case of a network, packets,
gets shipped to a well-defined destination
via intermediaries, where they're checked
for different criteria, like do all packets
that leave also arrive,
do we need to do retransmissions,
you might have security checks,
you might have NATing rules
where you're changing the content of stuff, and so forth.
And the same is pretty much true when you look at continuous delivery. You're shipping
an artifact, in this case, not a network packet, to an ultimate goal, which is production,
through a number of steps, where some transformations occur, where checks occur.
And we believe if it works for a very large networking environment, this concept of
separating the control
plane from the actual data plane or the execution plane, then it should also work very well for
continuous delivery and automated operations. And I think that's actually, I think it's the
first time I listened to this explanation from the network side. And I think it's fascinating,
especially now with the decoupling. And this is something we need to stress here: the decoupling means,
I mean, Captain itself is an event-driven framework where, like you said, Alois, if a
new artifact comes in that you want to push from inception of the artifact, when you
first see it all the way to production,
the new
artifact will trigger an event, which
then causes Captain
to then execute
certain actions. But actually, Captain doesn't execute
actions. Captain only
issues another event, like,
hey, there's a new artifact, and
we need to deploy it into a certain
environment, using, let's say, a shadow deployment or a dark deployment or blue-green. And then we
need to have somebody executing the tests. And what I find so nice with Captain is
that we decoupled it so that the actors are actually services that can be implemented, right? And Alois, correct me if I'm wrong,
but in the current version of Captain,
we have services, for instance, for GitHub,
where we can make GitHub updates
because we follow the GitOps approach.
We have services for Jenkins to execute Jenkins pipelines.
We have a service that is listening
for a Docker registry notification.
And this also allows us to then provide easily
new integrations with other tools
that companies out there in the large enterprises
actually use because not everybody is using Jenkins
or GitHub or the standard Docker
registry.
People are using Bitbucket.
They're using QuickBuild.
They're using other tools.
And the nice thing about Captain is we're not locking them in, but we are actually allowing
them to pick some of the tools that they already have integrated.
Maybe for certain things, they want to move over
to a new pipeline anyway, then I'm pretty sure we have the service that integrates with
that piece or with that tool.
But I think large portions are also now completely covered, especially orchestration by Captain.
And I think this is what I love about it.
Yeah, so what we really built into Captain is all those best practices for cloud-native deployment options you mentioned.
Dark deploys, shadow deployments, blue-green deployments.
Obviously, we also support direct deployments into a stage.
This comes out of the box, but the way it comes out of the box is we define those messages and those message flows, and you plug in the tools that you want to use. It's not just us
not locking in people, it's also people not locking in themselves, which you did in the past, because
what you built in the past was a direct API-to-API integration, and very often you then maybe even
ran your own gateway in some way or another, connecting, I don't know, your Docker
container registry directly to GitHub,
with a small service that you run in between.
And with Captain, everything is really using a standardized messaging format. So in the background, Captain is running on Knative services.
So Captain itself is running in Kubernetes.
And it uses cloud events, well-defined events for everything that's happening
throughout continuous delivery,
but also everything that happens through an automated operations process, like for self-healing
and so forth.
So all of these events are well-defined, but the only thing you need to do, you have to
describe what you want to do in a certain situation, and then you create your own Knative
service that then obviously can talk via a proprietary API to your product.
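As a rough illustration of that idea, the sketch below shows what such a custom service could look like at its core: a small HTTP endpoint that receives CloudEvent-style JSON messages and reacts to the event types it subscribes to. The event type string and payload fields are assumptions for illustration, not the exact Keptn 0.2 event schema, and a real service would run as a Knative service inside the cluster.

```python
# Minimal sketch of a custom Captain-style service: an HTTP endpoint that
# receives CloudEvent-like JSON messages and reacts to the ones it cares
# about. Event type and payload fields below are assumed, not the real schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # React only to the event type this service subscribes to.
        if event.get("type") == "sh.keptn.events.deployment-finished":  # assumed name
            service = event.get("data", {}).get("service", "unknown")
            print(f"Triggering tests for {service} via the team's own test tool API")
            # ...here you would call the proprietary API of your tool of choice...
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), EventHandler).serve_forever()
```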
And the other nice thing is that
you can reuse best practices from others.
So assume that there is a company out there
and you really love the way they deploy their code,
that they build their pipelines,
they have the testing procedures,
and you absolutely love the way they do it.
But there is one tool that you have to use
within your company that's not part of this pipeline. So you can't use it, right? So you'd
have to rebuild more or less everything from scratch. With Captain, you have a lot of flexibility
because this control plane is actually built around two core concepts. One is what we call shipyard
files. What a shipyard file really defines is which stages you have
for a pipeline and how each stage is supposed to behave. So in my shipyard
file, I would define: I have a dev stage, and my approach is I want to do a dark
deploy, then I want to do a functional test, then I want to do a blue-green deploy, and then I want to run a performance test,
and then I want to promote it. And this is what I can define for each and every
step or for each and every pipeline
stage. And currently we're working on the
second core concept, which technically is already part of Captain today.
It's just a bit cumbersome to implement right now, which is
uniforms. So a uniform now defines
which tool you want to use for certain events, which event
subscribers you're more or less using. So I say, for my pipeline,
I want to use, for example, the built-in GitOps
provider, or I want to use Weaveworks.
I want to use Spinnaker as my deployment automation tool,
or I'd like to use XL Release.
And you can go on like this.
And what you now get is complete flexibility
in the way you build your continuous delivery pipelines,
but also how you build your operations automation.
So whether on the operations side,
you want to use Ansible for self-healing, or again, you want to use Spinnaker
maybe in certain situations, or you want to use ServiceNow processes that you're running,
you have all this flexibility, and you describe this in your uniform files. And the combination
of uniforms and shipyard files suddenly gives you full flexibility. So I can take your shipyard file, combine it with my uniform.
I can follow your process best practices and just use different tools,
which might even be valuable if it's just one company
where one project uses one set of tools,
another project uses another set of tools,
but still you use the same pipeline definition.
And at the same point, if you have a certain set of tools
and you want to change the way they're orchestrated
or interacting with each other,
you can change the shipyard file.
So for some, say, more front and center,
more modern applications,
you might have a much less strict release process than for your core
applications. You might define like one is just running maybe on one stage and you deploy directly
into production and experiment really quickly, while the other has a four-stage pipeline, but you're
going to use the same tools. This is the flexibility you would get with shipyard files. So the combination of both gives you the most flexible way to build an unbreakable delivery
and also operations automation solution that you can imagine, without having any hard-wired
connections.
You can exchange pieces on the fly if you want to switch from three stages to four stages,
you could do it.
If you want to replace one tool with another one, you could do it as well.
That is pretty cool. Now, as I think of it, so Alois, correct me if I got this wrong,
but I think I got it right. If we decide as a company, all of our projects that are mission critical have to have a three-stage pipeline.
But then I let my individual teams pick which tools they prefer, maybe where they have experience, where they have already maybe built some artifacts.
So I can say I define a shipyard file that says dev stage prod.
In dev, I do my API tests and functional tests.
In stage, I do my performance tests.
And in prod, obviously, I do my blue-green deployment.
Then I can define this shipyard file.
I give it to all of my teams that are building the mission-critical apps.
And then they can, through the uniform, say,
well, this team is deploying this using AWS CodeDeploy because they're running on AWS.
The other team is using Ansible or something else to deploy.
But the configuration stays the same.
And thanks to the uniform option, I just automatically swap out the tools that will be kind of triggered by Captain when
a new artifact is there and it has to get promoted into a stage when the tests have
to be executed.
So I could even replace testing tools.
Some may say for API tests, I use JMeter.
Some may have Neotys.
And if I understand this correctly, this really allows me, and this is where the enterprise
grade actually comes in, I can, at the enterprise level, define my pipelines, and through the uniforms I
can automatically say how these pipelines get executed, who is acting upon the different
events. And this is pretty amazing.
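To illustrate the split Andy just walked through, here is a toy model of the two concepts in plain code: one shared definition of what each stage does, and a per-team mapping of which tools act on those events. The real shipyard and uniform files are declarative configuration files, not code, and the field names below are illustrative assumptions only.

```python
# Toy model of the shipyard/uniform split discussed above, using Andy's example.
# The real Captain files are declarative; names and structure here are assumed.

shipyard = {  # what happens in each stage, shared across all teams
    "dev":     {"deployment": "direct",     "tests": ["api", "functional"]},
    "staging": {"deployment": "blue-green", "tests": ["performance"]},
    "prod":    {"deployment": "blue-green", "tests": []},
}

uniform = {  # which tool acts on which kind of event, chosen per team
    "deployment": "aws-codedeploy",  # another team might put "ansible" here
    "tests":      "jmeter",          # or "neotys", and so on
}

def handle_new_artifact(stage: str) -> None:
    """Sketch of the control plane: walk the stage definition, let the configured tools act."""
    plan = shipyard[stage]
    print(f"[{stage}] deploy with {uniform['deployment']} ({plan['deployment']})")
    for test in plan["tests"]:
        print(f"[{stage}] run {test} tests with {uniform['tests']}")
    print(f"[{stage}] evaluate the results, then promote or hold back")

handle_new_artifact("dev")
```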
And think about it.
In some cases, you might even have to change the tools simply because of the very nature of the application
that you're building.
You might define that you need a performance test
in your staging environment.
You might have to run it for a couple of hours
until you promote something.
But it's a significant difference
if you're testing an API-based application
versus you want to test an application that's running in the browser and also want to test part of the code that's running in the browser.
So you might have to exchange which tool that you're using, not just by replacing your entire tool chain, but by the very nature of the application that you're running and also for other tools along the line.
So the way I see it, it allows you to build opinionated pipelines.
So you can be very opinionated about what you want your pipelines to look like,
but still there's a lot of flexibility.
It's not locking you into a certain set of tools
or into a very rigid process.
And also the opinionated pipeline is something that you define as the end user.
It doesn't come with Captain per se.
Captain itself is very open in what it allows you to do. The only thing where you have an
opinionated piece in Captain is really how we have these event flows for shadow deployment,
for blue-green deployment, for functional tests, for performance tests, and so forth.
I think that opinionated piece is important too, because we've been hearing more and more lately that you really need to put guardrails around your pipeline so that people just don't go all over the place, where it's going to get unmanageable and unwieldy. So I really, really love the idea
that Captain itself is not opinionated, but you can build it, your instance to be opinionated so
that you force people into those pens to keep everything kind of running smooth. I do also
want to point out, you're talking about a lot of this really great stuff. And on the website, again, it's K-E-P-T-N dot S-H.
There are a lot of examples and walkthroughs
on some of these things,
performance as a service, production deployments,
runbook automation, where you even show
how to set it up and integrate it
with a free instance of ServiceNow.
So if a lot of, what can I do with Captain?
Captain, I keep saying Captain, right?
You all keep saying Captain.
Anyway, if a lot of what you can do with Captain isn't completely clicking, you can, and I'm going to do this myself, obviously,
you can go to the Captain website and go through the walkthroughs.
They have all the example applications.
There's even some videos, right? So I think you've all really done a great job
in helping people get started
and get the basis and foundation of understanding
of how to make this all work.
So kudos to you for that as well.
There's one more thought that I just had, Alois,
and I think this is also something we've seen
with the people we've worked with.
Now taking, let's say, an enterprise, they have 100 projects,
and that means they probably have dedicated resources now
that need to maintain all these pipelines.
And let's say some new regulation happens that forces this enterprise
to add an additional check
into all of the delivery pipelines
before they deploy it into prod or before they do something.
Now in the old days, before Captain,
that would mean you need to spend
a lot of engineering resources
to update all of your pipelines
and all the different sorts of tools,
and then figure out where's the right place and how do I integrate these tools. So for instance,
the security check tool. And with Captain, I just change or update the uniform, I guess.
And then I say, hey, I now want to also invoke the security tool every time I deploy an artifact into a certain stage or after the
tests are executed in that stage or as part of the deployment validation. Yeah, even not just
like long-term changes, it's like even about short-term changes. We talked to a lot of customers
who have something like lockdown periods. I could even get this flexibility on the fly. So I'm assuming I'm a big e-commerce shop.
And throughout the year, I'm totally fine with my people experimenting.
So I put most of my application on self-healing and working with blue-green deployments,
A-B tests, and let them do pretty much whatever they want to within certain variables, obviously. But then the day comes
closer for
Black Friday and Cyber
Monday, I start to see this
differently. So I'm just updating my
shipyard file so that now every deployment
that makes it into production
requires my manual approval,
for example via a Jira ticket or a ServiceNow
workflow. And after
this one week, I'm changing it back.
These are all use cases that nobody ever thought about before
because they couldn't do it.
That would mean touching a massive amount of continuous delivery
and automation code.
With Captain, these are use cases where you can say,
yeah, let's do this.
Let's give people more flexibility, remove flexibility
here a bit as we
go. Because otherwise,
the implementation of this change will take
longer than the timeframe that you want
to use it for. So there's also
some quick experimentation that you can get
out of the usage
of Captain for how you want to ship and
deliver applications.
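Staying with that Black Friday example, the point is that a lockdown becomes a one-line change to the stage definition rather than a rewrite of automation code. Here is a tiny sketch of the idea, with assumed field names rather than the actual file format:

```python
# Sketch of the lockdown idea: flip one field in the stage definition instead
# of touching pipeline code. The "approval" field name is an assumption.

prod_stage = {"deployment": "blue-green", "approval": "automatic"}

def enter_lockdown(stage: dict) -> None:
    """Before Black Friday: require a human sign-off for every production deployment."""
    stage["approval"] = "manual"

def leave_lockdown(stage: dict) -> None:
    """After the lockdown week: go back to fully automated promotion."""
    stage["approval"] = "automatic"

enter_lockdown(prod_stage)
print(prod_stage)  # {'deployment': 'blue-green', 'approval': 'manual'}
```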
That's completely then changing the way how you would, for instance,
also POC new tools that you want to embed in your pipelines.
I just know that just like last week,
I talked with one of our teams in Detroit and they are just going through a
bake-off between different vendors that are providing solutions as part of the CICD pipeline.
And in order for them to do the proof of concept right now,
they basically need to completely reintegrate this tool
and the first tool and the second and the third tool
to really figure out how it really works in some of their pipelines.
So they need to touch every pipeline, need to bake it in.
With Captain, this would just mean, well, I just change the uniform and update it through
the Captain CLI.
And then all of my pipelines that are using that uniform, or that updated uniform,
will now all of a sudden use tool X versus Y.
You can even go further than this.
The way Captain is built, even with the CLI,
is that everything is a cloud event.
You can even have one environment just issue
as a promotion step another Captain command,
like a new artifact command.
Suddenly you can have these tools running in parallel,
so you can deploy those three environments in parallel,
and you just have the Captains talk to each other.
So you could run these three tools in parallel and, for example, check which one is, for
example, finding vulnerabilities that the other doesn't, which one is faster, which
one leads to the better results, and which one reacts quicker if we think about runbook
automation.
So it gives you full flexibility.
And the nice thing is it's flexibility that you can control because everything is code, everything is centrally deployed
and version controlled.
And also other people know how to use those tools.
But I usually like to challenge people if they tell me
they have this great team for continuous delivery,
call it the productivity engineering team
or automation engineering or whatever they call these teams.
I asked them, okay, so how long would it take you to get me all the code that you need for
every automation from a deployment and operations perspective?
Like find all of it or draw it here on the map, how it works for application X, Y, Z.
Usually this takes quite a while to figure it out.
It sounds easy
in the beginning, but when you really ask them to get all the artifacts, they have a hard time doing
so. But having everything version controlled, it's as well-defined as the rest of the software
development process, which I think is very important. That's pretty cool. Hey, I know we
have a second. I just want to remind listeners,
we will have another episode on Captain with Dirk
where we dive a little deeper into the architecture of Captain,
talking about the events, how the workflows actually look like,
and how people can obviously contribute to Captain itself,
how they can test it out.
But Alois, maybe, because this is something we should tell people:
how can people, besides going to keptn.sh, how can people contribute?
How can we make sure that we are actually building the right things
at the right time?
Can we do a shout-out and say, hey, this is what we need.
This is what we hope
the community will give to us. What can we tell them? Yeah, so when
we started Captain, from the very beginning we really wanted to have like a poster child
open source project for Dynatrace, so we followed all the best practices. There are bi-weekly
community meetings, there was actually one today, where
we've shown some of those things. Everything is handled via GitHub, so you can reach us by just
dropping an issue on GitHub, which people do. There's a Slack channel that you can interact with
if you have questions that usually go deeper than what you'd be able to file as an issue.
And ideally, what you do is you just give Captain 0.2, which is our current pre-release version,
a try for your application,
and then you just simply engage with us.
If you have a great use case and you want to share it
in one of our community meetings, you're more than welcome.
Just let us know, hey, I've done this
and just want to share my experience. Feel more than free to do it.
Obviously, you can file bugs if you have some, and I'm sure we do have some because there's no
software without bugs out there. But the best way is really getting started with the latest
pre-release version and interact with us via all of these channels. So obviously with Slack,
you have the whole development team there that you can interact with.
If you want to build a custom service right now,
I mentioned in the beginning,
this is not yet as convenient as we see it
to be in a couple of months from now.
I would really recommend you directly get in touch with us
so that we can help you to bootstrap this activity
and save you a number
of or give you back a couple of hours that you would otherwise maybe use to get things
up and running.
And also there's a nice page.
If you go to captain.sh, so K-E-P-T-N.S-H, and then click on the about link, there's
a nice list of the email, the Slack channel, the Twitter
account, and
also the link to the community meetings.
So that's for
people that want to learn more.
And by the way, Alois, and also at the time of the
recording, it's April 1st, and things
are moving fast.
This will probably air soon,
but as I said,
if you listen to this at a later point in time,
then make sure that you are double-checking which state Captain is currently in.
There's a lot of great things that happen, as Alois said, on a biweekly basis.
So every other week, there's going to be the community updates,
and innovation is fast within
Captain as well. So there's a lot of cool stuff that is happening, and the more
people that contribute, the better it is for all of us. So, Andy, yeah, I think, I
don't know, do we need a Summarator on today's show? I think it kind of,
yeah, yeah, you kind of did one there. Yeah. Maybe, I think a summary would be.
I want to do one more,
one summary attempt though,
because I mean, obviously,
Alois and I have been working
on this for a while
and we kind of think,
know what the problems are that we solve,
but to kind of summarize it,
we believe that a lot of organizations
that are now moving towards
cloud native, while Kubernetes is great and Kubernetes solves a lot of problems in how to
manage containers, it does not solve the continuous delivery challenge. And I think this is what
Keptn addresses. Keptn is, as we said in the podcast, an opinionated framework where you can build your pipelines.
And thanks to the combination of shipyard files, which define how your pipeline should look like,
and the uniforms, which defines which tools you want to use, you're completely flexible
of how your individual pipelines in your individual projects and on your individual platforms that you're using should look like.
I think what I also love about Captain is it includes all of these best practices that
we have seen within our organization, within our customers, but also in the industry, like
automated deployments in multi-stage environments, supporting deployment models like blue-green, shadow,
dark, canary.
And it also has quality front and center.
That means every time an artifact gets deployed into a stage, we make sure, A, we don't break
that stage thanks to all these deployment models.
And B, we execute the right tests.
And C, we automatically validate the deployment so that we only promote good code into the
next stage.
And I think that's really cool.
And thanks to the work that Alois and Dirk and the team and hopefully now the rest of
the global Kubernetes CNCF community is doing,
Captain is so easy to use. Check out keptn.sh,
walk through the use cases that are already out there,
and see it for yourself, how you can deploy it and how you can
especially then onboard your own services and see
how it automates and orchestrates your pipelines.
I just want to mention one thing here.
It's obviously not just for the deployment,
but also for the self-healing piece,
which I think is key too,
because a lot of people ask us,
how do you build a self-healing application
or how do I automate some of my operational tasks?
And this is where Captain can help you too.
So we should mention this as well.
It's not just, in quotation marks, the delivery,
but also keeping things running once they are in production,
which I think differentiates the Captain project
from some of the other approaches out there
that focus on a delivery-only model.
And once something is in prod,
there's not a lot that the framework
of the project offers you
to keep your applications running.
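To give a rough feel for what that operations side can look like, here is a minimal sketch of an automated remediation handler: a problem event comes in and is mapped to whichever runbook action has been configured for it. The event fields, problem types, and action names are all assumptions for illustration, not the actual Keptn event schema.

```python
# Minimal sketch of the self-healing idea: map an incoming problem event to a
# configured remediation action. Field names and actions below are assumed.

REMEDIATIONS = {  # runbook mapping: problem pattern -> automated action
    "response_time_degradation": "roll back to the previous version",
    "failure_rate_increase":     "scale out the affected service",
}

def on_problem_event(event: dict) -> None:
    """React to a problem notification the way the configured runbook says to."""
    problem = event.get("problem_type", "unknown")
    action = REMEDIATIONS.get(problem)
    if action:
        print(f"Self-healing: {action} for {event.get('service')}")
        # ...here the corresponding remediation event would go to Ansible, ServiceNow, etc...
    else:
        print(f"No automated remediation for '{problem}', notifying the on-call team")

on_problem_event({"problem_type": "failure_rate_increase", "service": "cart-service"})
```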
And I think that's an important point.
Thanks for bringing that up, Alois,
because a lot of people are interested,
I think, in the self-healing aspect.
But there's always that,
well, how do I find time
to build that into my pipeline?
And how do I find time to do X, Y, and Z with it?
And this is going to go a long way in simplifying that.
So that's a really excellent point.
Well, thank you very much, Alois, for joining us today.
I think it'll be interesting someday to see if you can ever be a repeat guest host.
But in the meantime, you and your team are doing awesome work
with Captain. And Andy, I know you're a part of this project partially, or
I know you work a lot closer with it. I'm going to definitely start
going through some of the walkthroughs myself, Andy, so I'll probably be bugging you
like, I can't get this to work because I'm an idiot. But I'll let you know.
Alois, any final thoughts?
If you had to, you know,
is there any pie-in-the-sky vision for Captain
that you hold dear or something that you would really love
to see happen with it?
I think we all share the same vision here
with building cloud-native
applications, that at one
day we as developers just
really deploy code and really don't have
to worry about anything that happens
across all stages and also
in
operations,
except obviously some really weird
and hardcore use cases.
But we really can focus on just shipping applications,
having them run, now also helping themselves heal and manage.
A vision that started with Kubernetes where it helped us a lot,
especially at the infrastructure level,
but now also more and more on the application layer
and eventually also on the business layer.
Because what we obviously could do as well,
we didn't talk about this today,
but you could also say, if my conversion rates go down,
I want to do a rollback in production.
So Captain goes a bit further there.
So ideally maybe in five years from now,
if you're a young developer
and you're deploying your first application into production
and that's managing itself,
you see it more the way you see your car engine today. You
know, you understand kind of how it works, but you don't have to do any of the magic, you don't
have to assemble it or build it. You can really focus on the really interesting parts. And I think
that's the bigger vision here: focusing more and more on the interesting parts and not so much on
the keeping-the-lights-on pieces, and obviously make the world a better place.
Can't we all just get along?
All right.
Well,
thank you very much for being on.
And as always,
if anybody else has any questions,
comments,
or topics,
ideas,
they'd like to have us explore on the show,
or if you want to be a guest yourself,
you can reach out to us at pure underscore
DT on Twitter, or you can send an old-fashioned email to pureperformance@dynatrace.com.
Definitely hope everyone checks out the Captain website, K-E-P-T-N dot S-H. As we mentioned a
few times, there are some really great walkthroughs and example applications to try it out with,
and I'll be really interested to see what people start doing with this over
time.
So thank you all for,
for joining us today.
Yeah.
Thank you for having me.
Thank you.
See ya.
All right.
Thanks.
Bye.
Bye everyone.