PurePerformance - 082 Adopting Cloud-Native in Enterprises with Priyanka Sharma
Episode Date: March 18, 2019
Is Cloud Native just a synonym for Kubernetes? How do you make sense of the sea of tools and frameworks that pop up daily? What can we learn from others that made the transformation, and most of all: where do we start? We got answers to all these and many more questions from Priyanka Sharma (@pritianka), Director of Alliances at GitLab and Governing Board Member at the CNCF (Cloud Native Computing Foundation). In her work, Priyanka has seen everything from small startups to large enterprises leveraging cloud-native technology, tools, and mindset to build, deploy, and run better software faster. She advises starting incrementally, and whatever you do in your transformation, always focus on: visibility (which leads to transparency), ease of collaboration (which increases productivity and creativity), and setting guardrails (which ensures you stay compliant and avoids common pitfalls). We ended the conversation around the idea of needing "cloud-native-aware developers" who can follow best practices or standards such as those promoted by the CNCF or open source projects such as keptn.sh.
https://twitter.com/pritianka
https://www.cncf.io/
https://keptn.sh/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always with me is my co-host Andy Grabner.
Hey Andy, how are you doing today?
Hey, well, actually good. Well, actually, why do I say "actually good" every time we have a conversation, right? It would be sad otherwise. That's why I feel good.
Everything good.
You know, that just put a thought in my head about, like, sad Andy. You sitting in your office with a tear rolling down your cheek. And yeah, I'm going to have to figure out how to get a picture of sad Andy. I love — I think it's a terrible, tragic concept, sad Andy, but I think it would be awesome. I think in the Slack channels or something else, if somebody did something that was not right to do, it would be good to put a picture of sad Andy as a reaction. So I'll have to work on that. Thank you for the idea, Andy.
Anyway, Andy, so we're recording a podcast, eh?
Yeah.
And it seems that, even though I think we're not that funny, somebody finds it funny.
And maybe that person just wants to jump in right away and introduce herself.
Who is laughing in the background here?
Well, hi to both of you.
This is Priyanka Sharma.
I serve as director of cloud native at GitLab.
GitLab is the first single application for the entire DevOps lifecycle.
Some of you may have heard of it.
And yeah, I'm the person laughing in the background. What can I say? You guys are really funny. I don't hear that very often. I try to be
funny, but most people say, hey, you have dad jokes, and that's it.
Yeah, I'm one of those people who are big fans of dad jokes. So here we are.
Hey, thanks for jumping on the show. And I think before we kicked off the recording, we just had a quick chat, and I was reminded of when we met — we met at re:Invent 2018 in November. I saw you on stage presenting there in the expo area, and after that, I was really excited about what you showed: how you are promoting cloud-native software engineering and how that's supposed to be done. And I know a lot of people came over to you, asking questions — what is this all about, and how does this really work? So I wanted to have you on the podcast to talk about what cloud native really means, especially for large enterprise organizations, which I believe a lot of our listeners are working for or representing.
Because cloud native, obviously, besides the whole marketing topic,
the marketing term, there's really a lot to it.
But there's also, I think, a lot of questions,
especially for enterprises that need to figure out what does it mean for them.
And so I want to actually throw it to you.
What do you see out there?
What gets people excited about cloud-native?
And what are the challenges that they run into?
What's happening?
Sure, absolutely.
So, cloud native — when you hear the term these days and big enterprises are discussing it, I notice two emotions.
One is excitement and the other is trepidation.
Now, the excitement comes from all the promise that cloud native holds, all the stories we've heard of digital transformation, of much faster delivery, of being responsive to the market, all those great things, right?
And obviously, any good leader would want that for his or her organization.
So that's the excitement piece.
The trepidation comes from knowing that most large enterprises are managing unwieldy, bulky software. And they know their processes may be a little solidified over
the last decades, if not years. So the trepidation comes from, well, this cloud-native thing sounds
really great. It means I can stay more responsive to my business and affect the bottom line
positively. But can I really get there? How do I get there? And many of us, as we've gone through the journey, have made mistakes along the way and learned from them. But sometimes people can feel burnt. So I think that's, in a nutshell, what I'd say people are experiencing, based on what my conversations have been like.
And would you say — so I think, obviously, cloud native,
I don't know who coined the term
and how long it's been around,
but obviously there have been companies doing cloud native
before we had the fancy term of cloud native, right?
As you said, these are the people that tried out this new set of technology and then figured out what works and what doesn't.
And so, in your own experience,
and obviously you are representing a company,
you're working for GitLab,
I assume you have to consider yourself one of the early adopters of that technology,
learning from experience,
making the mistakes that others have made as well or are about
to do.
Is it a correct assumption that you were one of the first early adopters, and that you're now also bringing a product to market that helps people build software in a more modern way?
Yes.
So GitLab is definitely a cloud-native company.
Just like every cloud-native company, we are on a journey towards cloud native. I think it's a process and a mindset, as opposed to: okay, I did A, B, and C, and now I'm cloud native. You know what I mean? And going through that journey ourselves has really been useful in developing the empathy that we have towards our customers as they go through
the process. Because often, if we look back at the history of Cloud Native, everything starts with
Cloud Compute, right? And the coming of AWS, I guess, 10 years ago now or something, was what
kicked off this movement where you didn't
need your own servers anymore.
No server farm necessary to build a big business.
And with that paradigm shift came an opportunity to build software differently because suddenly
provisioning was just faster and easier.
And those technologies,
that way of building technology where you can move faster,
you can break out pieces
and empower developers to run quickly,
that is the core of the cloud-native way, right?
Now, before everybody got excited about it,
the large web-scale companies
were definitely the trailblazers here, right?
When you think about the Googles, the Netflix,
and the Twitters of the world.
And that's where we noticed
we get a lot of our experts from today
because these people built cloud natively
before cloud and cloud native was a thing.
And so there's a lot we can learn from them.
However, their experiences are different from that of enterprises out there.
Just the scale is different.
The expertise is different.
And so GitLab having gone through the cloud native journey ourselves, it helps us be really empathetic towards the end user.
And I think that's why we have over 100,000 organizations using and contributing to GitLab.
Can you repeat that again? How many organizations did you say use and contribute?
More than 100,000 organizations use, and many of them contribute to, GitLab.
That's quite an achievement, I would say. Wow, that's a pretty big number.
Yeah, thank you.
So that's great. Now, you walked through the transformation. We at Dynatrace also had our transformation, which we've been talking about in several episodes here, and people that follow what we've been doing know our transformation: we went from classical enterprise, six-month delivery cycles to deploying constantly in production. We run both in the cloud, but also on-premise for those customers that still want us on-premise.
So the big question that people always ask us — and now I want to throw it to you; you kind of alluded to this earlier — is: if I'm a big enterprise company and I have a lot of legacy applications and a legacy mindset, how do I get started? What's the best way to get into a cloud-native mindset?
What's your proposal?
Yeah, absolutely. So this question actually reminds me of a panel I did at KubeCon Seattle last year with end users — enterprise leaders from T-Mobile,
Lyft, Delta, and CVS. And that was the question there, like, how do you get started? What is the
best way to move quickly, learn, and keep going? And really, it comes down to the advice, and I'm
channeling those folks here right now. The advice is that start incrementally.
Don't get overwhelmed by all the various things that could be cloud native, all the various tools you could possibly get.
Sit back and think through your strategy.
There is no rush.
There is no reason to get this done today in a hasty manner versus tomorrow in a well done manner.
So the first challenge, I would say, if someone were starting today specifically, is that you look at the landscape — it's extreme, all the tools that are available for the cloud-native way of operations.
There's a lot.
I mean, you just need to go to any KubeCon or any AWS event and look at the number of vendors that are displaying over there. Many sound exactly the same as their next neighbor.
And we're at this time of peak confusion, where all these open source projects, all these companies, are offering ways to make it easier to operationalize a cloud-native workflow.
But there's so many.
It's like the paradox of choice.
And I think that can be really overwhelming for companies.
Also, in most large companies, as an enterprise leader, you don't know exactly what every team is doing. And many have set up their own toolchains in some kind of cloud-native way of development,
somehow, maybe it's half-baked, maybe it's well done, but different teams within your org will
be at different levels of maturity. So my personal opinion is that cloud native is critical as a business objective, because it's going to make you competitive in the market — if you're not shipping fast and reducing cycle time, you will not survive. But at the same time, because operationalizing cloud native is different from operationalizing large monoliths, so many projects and companies came to the fore that it's now more confusing what the right thing to do is.
So in this world, I think as a large company, it behooves enterprises to think through what
kind of tooling they want to put in place.
This is because, ultimately, for cloud-native workflows to work right, you need a system that allows for visibility, which means it's easy for any team member to know what's going on in a certain project without having to ask or check with a lot of people — they should just be able to click, click, and know. And then there's the ease of collaboration: people shouldn't be waiting for handoffs. That's sort of the whole point of cloud native, right? That people can operate asynchronously. And so the tooling will need to enable that. And then there's a third piece, which is that you need unified guardrails, so that people aren't stressed, when they're shipping fast, about accidentally breaking some rules or shipping code that may not be fully baked.
So there's these three elements.
I'll repeat again.
The visibility that teammates need of what's going on in the entire company
or organization, the ease of collaboration,
which is necessary to truly be cloud native and unblock people.
And then there's the third, which is that you need to set the right guardrails for the entire company, so there's no confusion and people feel empowered to move quickly.
So I recommend enterprise leaders think through these three key things they need, and then set up their tooling. Now, a challenge they'll probably face is
that different teams have different tools already in place. And so there does need to be a little
bit of a recon exercise of what's going on and how do we streamline all this. Everyone's culture
is different. So I wouldn't be prescriptive here on what they should do. However, there's many different tactics people have taken.
Some people have had internal hackathons where teams convince each other of which tooling
is best.
Some people establish tool teams that have a set of guidelines.
There are many ways to do this.
And knowing your organization, you'll be able to decide what fits your culture best.
And so that's the
consolidation of your tool chain piece. And then finally, the third piece is really training your
workforce. Because I'm sure you guys have probably seen this at Dynatrace as well, with your customers: different engineering divisions and teams will have different levels of comfort with cloud native. And some folks need more training
than others. And the answer is not to forget about them, but rather to harness their energy.
And so I've, again, channeling that panel at KubeCon, the folks at T-Mobile and Delta gave
particularly good examples of running dojos in their organization online and in person where people can uplevel
their skills. A really interesting thing that I think it was Brendan A. from T-Mobile who mentioned,
he's like, the one thing you have to become okay with, even when you provide all kinds of training,
is that there'll be some people who don't want to learn this new way. It's not for them.
And the best you can do is explain the critical importance and value, and how it will up-level their career.
But you have to understand
that some folks don't want to do it.
And you have to make
the right business decisions accordingly
while supporting people
as much as you can.
Yeah, I think you made
an interesting point there too.
I'm going to try to extrapolate
a little from what you said there.
One of the things I hear often when people talk about cloud native and this whole modern CI/CD pipelining is the idea of providing autonomy to these small development teams — that each development team should be able to do whatever they want to get out what they want.
However, when we start talking enterprise-grade, as you mentioned, there need to be some guardrails, some organization of what is used, because all the different teams, at some point or some juncture, are going to have to fit in together. That means the tooling is going to have to be compatible. And I love the idea of the hackathon to show off which tools are best, and then you try to get people together. Because we have a similar situation — and Andy, correct me if it's changed — I remember when we talked to Anita last, the way we had it is: Anita and her team run the entire pipeline and decide on all the tooling. Anytime someone new comes on board, or a new team starts working on a new project: here's the tooling, here's how you do everything. But if they have a specific requirement that is not covered by the tool set, they might say, hey, this tool would cover it. That team will take it, review it, figure out how to integrate it, and there's a process behind it. Because, again, we're talking enterprise — this might not apply to a startup, something small where you really just need to crank things out. But once you start getting to that level, you still want the idea of autonomy, some of the autonomy to explore, but a controlled autonomy, let's say.
Yeah, and you know, if you think about it,
what is the purpose of the autonomy?
The purpose of autonomy is to unleash the creativity,
to help people be free to ship application logic,
to ship code as fast as they want to as you know,
and build the coolest things.
Autonomy to do that is going to come with a little bit of standardization on
the tooling because that'll make your shipping faster.
The reviews will go faster. Everybody will be on the same page.
The collaboration will be easier. So it all really works.
It feels counterintuitive, really, when you begin to think about it, but some standardization on your toolchain actually enables the autonomy.
I mean, there's a lot of — well, thanks, first of all, for a lot of great points on how to get started. So, you mentioned KubeCon and the panel. I assume if people want to watch this — I think KubeCon is one of the conferences that post all the recordings, probably on YouTube or somewhere — so we should put a link in the podcast proceedings later on for people who want to watch it.
Absolutely. There's a video, and there's also a blog post. So it's very well covered.
So let me ask you a question. Now, if we look at enterprises — obviously, we all would like companies to consolidate their tools, and basically consolidate down to the tools that we offer, right? In your case, it's GitLab; in our case, it's Dynatrace. It would be a perfect world for us vendors. But the reality always looks different, right? The reality
looks like there's so many tools
out there, certain ones can be consolidated, certain
ones can't be consolidated. This is where integration
then obviously fits in.
How do you integrate, though? I'm not sure if you have an answer for this, but how do we go about integrating tools that have completely different mindsets — I'm not sure what the right word for this is — tools that have been built for completely different architectures, to support completely different processes, a completely different mindset?
Can we integrate all the tools or are there certain scenarios where you say, well,
you know, certain things
cannot be brought over
the new cloud native way.
You better stay where you are.
And then we build the new cool stuff
with cloud native.
And then maybe we find a middle way
where we have some integration portal
or whatever you want to call it.
So my question to you is: how can we integrate these, let's say, more legacy environments and legacy tools with the new, cool cloud-native stuff? Is there a good
answer? Can we do it for all? Or are there certain things where you say, well, maybe not,
maybe you better stay where you are? Great question. So I'll answer it from the perspective that I know, which is the GitLab perspective.
I will also speak to, so my background, just so you know, is in observability.
So before GitLab, I spent a lot of time working on the open tracing project with the CNCF.
So I'm big into tracing.
So I will address it from the GitLab perspective and then also a few thoughts from that observability mindset.
So I hear you — some things, you know, how can you bring it all into one nice, perfect, gift-wrapped thing? And the answer is: it's not so easy, right?
Otherwise, everybody would have done it already and we'd be on to the next.
What we have noted — as you know, GitLab focuses on being the single application for the entire DevOps lifecycle. And our customers include startups, but also really large organizations, where, by nature, a bunch of their software development has in the past been legacy, and some of it continues to be that today, right? And there's value in developing everything in one place with GitLab, and then adding in the two or three other tools that may be necessary.
So, for example, let's say someone is a heavy Jira shop.
They don't have to use GitLab's planning,
even though they could, and they would probably like it.
But they can integrate that,
and now the ticketing is all in sync
with your version control and CI/CD, et cetera, et cetera.
So it doesn't matter whether the ticketing
is about a legacy system or a modern system.
The point is it's all in one place.
And so those key starting points, right?
Like where is your planning happening?
Where is your version control?
Is it connected?
And your CI/CD, which are really the building blocks, right?
These are, I wouldn't call them sharp tools.
I would call them mini ecosystems that you need
when you're doing software development.
So connecting these big pieces, I think,
takes people a long way
in that vision I told you,
which is of visibility, efficiency, and guardrails.
So that would be the first thing
that I do believe that the large pieces
can be put together.
Now, beyond that, there comes the story
of like very specific use cases,
particularly when it's post-production,
such as monitoring or for debugging and all that.
Now I'll put my observability and open tracing hat on.
And from that world, I would say that it's true
that there might be some particular problem
that, let's say, you can only solve with tracing, right?
Like you need to understand the set of transactions
and you have traces for it.
And let's say they lie in Dynatrace
or they lie in, I don't know, Jaeger Traces,
which is an open source project by Uber.
In that scenario, I would say,
if that is the only thing that is solving your problem,
keep it, but keep an eye on when consolidation happens.
The one thing to know is that this space is moving really fast
in terms of that consolidation story.
So two years ago, there wasn't much from the cloud providers at all around traces, for example — or at least it wasn't at all popular. And then within the last year: AWS X-Ray, Stackdriver Trace from Google, also OpenCensus from Google — a lot of things have come out, right?
So everybody's trying to create their like fortress of tools and make you commit to them.
But so consolidation will be easier with each passing month.
But if there's a specific tool that's solving a very specific problem and doing it really well, keep it, you know, and maybe only the new people are using it.
And maybe the legacy folks still rely most heavily on logs.
That doesn't mean they should stop using logs, because it's working well for them, and efficiency is important. But the slow transition towards a consolidated thing that will work for more people than not is, I think, the guiding light.
So, honestly, I didn't know that you were that engaged with observability and OpenTracing. We've just joined that movement as well, because we are contributing to it, and because we also believe that, in the world we're moving towards, we need open standards for tracing and monitoring through all different types of stacks, right? I mean, as you said, you may end up having some Dynatrace agents,
some New Relic agents, some Jaeger,
some whatever it is.
And just because you have
one team that decided
for this tool versus another tool,
this should not be
the reason why you don't have your
end-to-end visibility, which you obviously need, right?
As I said earlier, the key thing is visibility.
And if you don't get visibility, if it's broken,
then this is something we need to fix.
It's a problem, exactly.
Yeah, I know.
I've spent a lot of time in observability.
Now, I might not be as current as you are, since lately my involvement has mostly been with the Jaeger project
in terms of documenting end user use cases and stuff.
But yes, I'm very familiar.
And I know your CTO, Alois, does a lot to further the standardization.
There's a lot going on.
And just for the sake of the end users,
I really hope we all start consolidating the ecosystem fast.
Pretty cool.
Hey, I want to go back to — I mean, cloud native, for me, when I hear it, it's also kind of synonymous with "we're developing something on Kubernetes," right? I mean, at least that seems to be the case.
I mean, that's fair.
That's fair.
Now, I love Kubernetes more and more, the more I play with it, but I also have a challenge with it. Like, today I was deploying one of our open source projects on EKS.
And for that, I provisioned an EKS cluster.
And I used a Terraform script.
And now it's running.
And I had to add a couple of nodes to it.
And in the beginning, I didn't know how large the nodes need to be.
I deployed the app.
It didn't work.
I added larger EC2 instances until eventually the app actually ran.
And then I said, from a development perspective, it's fine.
Now I can write my code.
I can run my pipelines.
I deploy.
But what I really lose and actually come back to the visibility is I lose the visibility or the knowledge what
actually happens behind the scenes and how I as a developer can actually now perfectly
write an application that perfectly leverages the resources underneath.
So it seems we are building all this with cloud native, with all these abstraction layers, where it's really super easy to write a new microservice and deploy it, and the platform magically takes care of it. And maybe I have some pipelines that, you know, at least do some security checks, some functional checks, maybe some performance checks.
But I feel,
and Brian, this goes back to the recording we had last week.
So last week we had a recording, and this obviously would have aired probably two weeks
before this one airs — with a gentleman, Conrad, who was explaining the differences between the memory management of the JVM and the .NET runtime. And he said — because he's an expert on .NET — that Microsoft and the .NET team want developers to become performance-aware developers, so they understand how the platform is actually handling memory management, and therefore write optimized code, so that your code can run optimally on the underlying platform.
Now, to kind of make my point: do we have something for Kubernetes, something for cloud native, where we say — so that you not only write a microservice that is deployed super fast and is out there — this is the way you write your cloud-native apps, so that they, A, obviously adhere to security and performance, but also really leverage the underlying resources in an optimal way?
Because I fear that if we don't know what's actually happening within the frameworks and within these clusters, we may end up writing cool software fast.
But in the end, we're all kind of surprised about the costs that we have.
Because I was really surprised by how large the EC2 instances were that I had to add to my Kubernetes cluster to get our very simple app running.
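To make that sizing surprise concrete, here's a rough back-of-the-envelope sketch — the pod sizes and node capacities are made-up numbers, not anything from the episode. Given the CPU/memory requests of your pods and the allocatable capacity of a node, you can estimate how many nodes a cluster needs before trial-and-error deploys reveal it:

```python
def nodes_needed(pods, node_cpu_m, node_mem_mi):
    """Estimate node count via first-fit-decreasing bin packing.

    pods: list of (cpu_millicores, memory_MiB) resource requests.
    node_cpu_m / node_mem_mi: allocatable capacity of one node.
    """
    nodes = []  # remaining [cpu, mem] per provisioned node
    for cpu, mem in sorted(pods, reverse=True):  # biggest pods first
        for node in nodes:
            if node[0] >= cpu and node[1] >= mem:
                node[0] -= cpu
                node[1] -= mem
                break
        else:  # no existing node fits -> provision another
            nodes.append([node_cpu_m - cpu, node_mem_mi - mem])
    return len(nodes)

# Twelve pods, each requesting 500m CPU / 512 MiB, on nodes with
# ~2000m / 4000 MiB allocatable (hypothetical figures; real allocatable
# capacity is what's left after kubelet and system reservations).
pods = [(500, 512)] * 12
print(nodes_needed(pods, 2000, 4000))  # CPU-bound: 4 pods per node -> 3
```

The point of the sketch is only that the arithmetic is knowable up front: if you know the requests and the allocatable capacity, the instance bill stops being a surprise.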
Right, right. I see what you're saying. So I'll speak to it from my perspective — I do not claim to be an expert in this particular area. I mean, getting developers to be performance-aware is awesome. That's the whole point of DevOps, right? But I would say, getting them to think about application performance is a big win in itself; getting them to also start thinking about all the compute details may be a bit of a big task, right?
Because, I mean, I'm sure you folks have the experience too that just the move from dev to DevOps, when it comes to application performance — and that's something Dynatrace deals with regularly — is in itself a challenge. So then, when you add in people having to think through the compute issues and all that, beyond the regular budgeting guidelines, et cetera, it's like one more level of overhead for developers.
And the question is: is that doable, right? So that's one point I would like to make. And the answer may be, yes, it's totally doable and everyone's doing it, and I just don't know.
That could totally be it, but that was my reaction to that thought. Now, in terms of standardizing — again, everything comes back to standardizing — I think service meshes can really help. So, as I'm sure you know, things like Envoy — Envoy is not a full service mesh, they say, but it's a reverse proxy; it just doesn't have the control plane.
Anyway — so, like Linkerd, Istio, et cetera — the whole point of using something like these is to standardize the way the networking happens for your services, and then also to provide a layer of tooling that you want consistent across all the services that are ever generated.
Right. And so those can be utilized to use some best practices when it comes to,
you know, what kind of usage different services are taking, et cetera.
And it again goes back to the story of what tooling you can use to create those guardrails — which will not topple your bill, and not make you spend all your money on one service that someone ran over a weekend for fun. So I think it goes back to the standardization story, because asking every developer to remember XYZ, PQR will be just too challenging. Of course — because, as I was discussing,
the Kubernetes landscape has new tools every second.
So since I last said that statement,
there might be a perfect tool that's out there.
I'm not sure of that.
But yeah, another thing I would say: at least when you do your development in a kind of integrated environment, like GitLab, when people provision clusters and run them, at least they have full visibility — how many pods did I start? What's going on? — all within the environment where you're keeping your version control and your source code. So that accessibility definitely helps.
But that's my answer to your question, which I know may not be the best one.
No, no, that's good.
Yeah, and Andy, I wanted to get in on that one as well. There are a couple of things we've seen being done, and a couple of things we've heard from other guests. Number one is the idea where, if you are using — you mentioned OpenTracing earlier — or tools like Dynatrace or anything, there's all this robust API where developers can, as part of their code check-in, have certain metrics and components that they're pulling in with their code executions. Whether they're writing something to run a compare or a diff, or someone else is handling it, that can just be part of that process you talk about — standardized upon, where these performance metrics are collected with every check-in.
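As a sketch of that check-in instrumentation idea — the decorator, the metric name, and the in-memory store here are hypothetical illustrations, not a real Dynatrace or OpenTracing API — each function can record a timing metric that the pipeline later compares:

```python
import time
from collections import defaultdict
from functools import wraps

METRICS = defaultdict(list)  # metric name -> list of samples (ms)

def timed(metric_name):
    """Record the wall-clock duration of each call under metric_name."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS[metric_name].append((time.perf_counter() - start) * 1000)
        return wrapper
    return decorator

@timed("checkout.duration_ms")  # hypothetical metric name
def checkout(items):
    return sum(items)  # stand-in for real application logic

checkout([5, 10, 20])
print(METRICS["checkout.duration_ms"])  # one timing sample recorded
```

In a real setup the samples would be exported to a tracing or monitoring backend rather than kept in a dict; the shape of the idea is the same.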
You could even take it further, like some of the pipelines Andy has built — you did the awesome one in AWS CodePipeline, Andy, and you've done it in Jenkins and some others — where it takes the performance metrics from a test run and spins up a Lambda function to run a diff, to figure out: was there an improvement or a degradation?
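That diff step can be sketched as a small comparison of the current test run against a baseline — the metric names and the 10% threshold are made up for illustration; a real pipeline, Lambda or otherwise, would pull both sets from its test results:

```python
def compare_runs(baseline, current, tolerance=0.10):
    """Return metrics that regressed more than `tolerance` vs. baseline.

    Assumes lower-is-better metrics (response times, error counts).
    """
    regressions = {}
    for name, base_value in baseline.items():
        value = current.get(name)
        if value is not None and value > base_value * (1 + tolerance):
            regressions[name] = (base_value, value)
    return regressions

baseline = {"login_p95_ms": 120.0, "checkout_p95_ms": 300.0}
current = {"login_p95_ms": 118.0, "checkout_p95_ms": 390.0}  # 30% slower
print(compare_runs(baseline, current))  # {'checkout_p95_ms': (300.0, 390.0)}
```

A gate like this is the "degradation" half; an "improvement" report is the same comparison with the inequality flipped.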
But then there are also things like — and Andy, you might remember a little better than I do — when we spoke to Goranka Bjedov from Facebook. Remember the capacity-planning idea, where the developers had to check in, with their code, some performance metrics that the code had to meet — as part of the success/failure criteria for whether their code would be allowed to stay in production? And when they started pushing it in, they did that slow rollout, and as it was going, if it didn't meet those metrics, the capacity team would reject the build.
And again, this is all sort of like signs of
maturity, right? You're not going to have this as a very small startup, but this is where, when you
start those standardizations, when you add in those guardrails that you're talking about,
you're approaching that enterprise. Or, if we go back to, what was it, the
Spartans, the Romans, and who was the one in the middle, Andy, when we talked to Emily?
The Mongols, right?
The three different, well, so we had Emily, editing Emily, yeah.
And she was talking about how there's three types of companies. You have your startup, which is kind of like your Spartans.
The Mongols are that mid-level, not quite startup, but not quite enterprise.
And then the Romans, you know, this is like talking from an
army point of view, that's your enterprise side. Whereas the Roman army kind of
operated as a bunch of small little armies all pulled in together. But when you get to that level of
maturity, there are definite ways you can add all these things in and start making it continuous.
But anyhow.
No, I 100% agree, especially like performance metrics. Because in this conversation we've spanned a few things: one was cost, the other
was performance, and they're totally related. They are related, yes. But
you would do different things, I think, to manage both. And so from the performance perspective,
I think just standardizing what the metrics and the alerting states are, there's like a whole science behind that.
And I'm sure you guys are more than familiar with this, being Dynatrace, about how to manage your alert states, how to make sure that the right metrics are captured and then alerted on.
And then where do you pull in traces?
Where do you pull in logs, et cetera, comes in.
At GitLab, it's very much like, as I said,
it's a single application.
So when you deploy to Kubernetes from GitLab,
you automatically get a bunch of metrics for free,
which is very nice.
Some people utilize service meshes
to standardize their observability story.
Some people have, you know, a system working independently.
But the key is, however you do it, make sure you do it.
Because this is something that Brian mentioned earlier.
So one of the things we've been promoting over the last couple of months is the concept of monitoring as code or performance as code or quality as,
you know, maybe you want to call it observability as code
or quality gates as code.
So the idea is if I'm a developer or a team
and I'm developing a new feature, a new service,
then I should also write down in code or in configuration,
what are the metrics that are important for me,
maybe from the business perspective, like, later on, adoption rate?
What are the metrics from a performance perspective?
What are the resource constraints or the resource metrics?
Like how costly should my code be?
And then our idea is that we write these down in a spec file and a config file.
It gets version controlled with your configuration, with the other configuration
files that define your service definition, whatever else you have.
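To make the idea concrete, here is a hypothetical sketch of what such a "monitoring as code" spec might hold, shown as a Python structure for illustration. In practice this would likely be a version-controlled YAML or JSON file next to the service definition, and every field name here is made up for the sketch:

```python
# Illustrative performance spec a developer might check in with their service.
# Field names ("metrics", "source", "max", "track_only") are invented for this
# example; real specs (e.g. in keptn or similar tools) define their own schema.
perf_spec = {
    "service": "checkout",
    "metrics": [
        {"name": "response_time_p95_ms", "source": "monitoring", "max": 500},
        {"name": "error_rate", "source": "monitoring", "max": 0.01},
        {"name": "memory_mb", "source": "infrastructure", "max": 256},
        # A business metric the team wants tracked after release, not gated on:
        {"name": "adoption_rate", "source": "business", "track_only": True},
    ],
}

def checked_metrics(spec):
    """Metrics the pipeline should enforce as quality gates (not just track)."""
    return [m["name"] for m in spec["metrics"] if not m.get("track_only")]

print(checked_metrics(perf_spec))
```

The point of keeping this next to the source code is exactly what is described above: the spec is versioned together with the service, so the pipeline always knows which metrics matter for the revision it is deploying.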
And then as I push my code through the different pipeline stages, my pipeline makes sure that
hey, here's a new service.
Andy the developer wants me to check these five metrics.
So let me reach out to my monitoring tools to get these metrics.
Now Andy or Brian, his performance expert says, we have some guardrails here.
It should not consume more than this amount of memory or it should not be slower than
this.
So let me act, you know, kind of enforce quality gates based on these metrics and so on and
so forth.
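The quality-gate step described here can be sketched as a small check the pipeline runs after fetching the measured values from a monitoring tool. This is a minimal, hypothetical sketch, assuming the guardrails are simple upper bounds; the names are illustrative only:

```python
def evaluate_quality_gate(guardrails, measured):
    """Compare measured values against upper-bound guardrails.

    Returns (passed, violations) so the pipeline can fail the build
    and report which guardrail was broken.
    """
    violations = []
    for metric, limit in guardrails.items():
        value = measured.get(metric)
        if value is None:
            violations.append(f"{metric}: no measurement available")
        elif value > limit:
            violations.append(f"{metric}: {value} exceeds limit {limit}")
    return (len(violations) == 0, violations)

# Guardrails a performance expert might set; measurements would come from
# the monitoring tool the pipeline reaches out to.
guardrails = {"memory_mb": 256, "response_time_ms": 500}
measured = {"memory_mb": 300, "response_time_ms": 420}
passed, violations = evaluate_quality_gate(guardrails, measured)
print(passed, violations)
```

A real gate would of course pull the measured values via the monitoring tool's API and feed the result back into the pipeline's pass/fail status, but the core decision is this simple comparison.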
And in case something happens in production, then, hey, I probably want to alert
whoever team is responsible for the overall business.
Right.
So I think what we have been discussing
and promoting through different
implementations is a concept of monitoring as code.
We also call it performance signature.
There's different terms that we've used.
So we are actually trying to reach out to companies like you or people like you to really figure out how we can standardize that
because I think this is another standard we need:
a standardized way to trace and track
anything related around performance, resource consumption, costs,
anything that could either impact the end user
in a negative way or the business in a negative way.
Yeah, so that's really interesting.
I mean, I think, again, going back to what is cloud native
and all of that, the whole idea of shifting left, right?
Like DevOps and maybe DevSecOps even
is that you involve the developers more in this process of building right.
And so conceptually, 100% agree.
Now, in terms of finding the right standardization, I'd love to learn more just because.
So the way we've seen it work really well from a GitLab perspective is that a lot can happen
through the CI/CD pipelines. GitLab CI is really popular partially because of that reason:
it's really cloud native friendly and fast, which is nice. And the idea is that then
a company can decide what is their set of guardrails around this. Now, when it
comes to standardization, are you thinking of standardizing way beyond, like, the
W3C wire protocol stuff that's happening? Or are you talking about specifically: do A, B, C, D, and E, and
do it through a service mesh, or put this in your CI/CD pipeline? What level of
standardization are you talking about, I guess?
No, we really want to work on a... That's the idea.
And I've also presented this
by the time of this recording.
It was two weeks ago
at an event in France
with our friends from Neotys.
It's really...
We want to join forces
with vendors, with companies,
because we know there's a lot of enterprises out there that already thought about how can we also write
this type of configuration down as code and use it either for automatically setting up
production monitoring or enforcing quality gates around performance.
So what we want is really a standard specification that as a developer of, let's say, cloud native microservice that I specify.
And then as I push my microservice through the different stages, my pipeline can ask my service, hey, what metrics are important for you?
What are your performance requirements? What are your cost requirements? Because I,
as a pipeline, I know which tools to ask to get these answers. And then, for instance,
enforce quality gates or send an alert. Right. So we really want to work
on a standard that we can then also contribute back to the Cloud Native Computing Foundation and really
say this is the way you are developing
cloud native microservices.
And this is where I came to my point earlier where I said, maybe we need to come up with
a, this is a way how you become a cloud native aware developer.
This is what you need to do.
Not only write cool code in a cool language, but you really have to also think about the specification that tells the pipeline that enforces all these guardrails on what to look for, what to learn and when to stop or recycle your service.
Got it. No, that makes a lot of sense now that you explained it.
So as you know, CI/CD is a big part of our offering, and so we're big believers in utilizing pipelines to do these kinds of things.
I'm not sure if you've seen GitLab Auto DevOps, which is a best practice pipeline, basically, which gets you all these metrics out of the box and all that.
So I wouldn't go as far as to call it a standardization.
It's more a best practice that you can edit as you want. But that's kind of been our initial take at exactly what you're talking about.
And so I personally, not just as a GitLabber, but also because I'm on the board of the Cloud
Native Computing Foundation, I would say that some kind of, I wouldn't, I would hesitate to
call it a standard. I would probably call it a best practice guide that speaks to how
you can, you know,
have a checklist of things
that happen in your pipeline
to call it cloud native
would be super cool.
And since we do this
auto DevOps thing already,
we'd probably be in a good place
to start talking about it with you.
Yeah, we also have,
this is something,
so we've been,
the team that I'm working with at Dynatrace,
the Strategic Alliance Innovation Team,
we just released what we call KEPTN.
I'm not sure if you heard about KEPTN.
It's spelled K-E-P-T-N; keptn.sh is the website.
And it's also, as you said, like a best practice and also a framework behind how to deploy cloud native applications. And
the specification I was talking about, whether it's a spec or whether it's going to be a best practice,
that remains to be seen, is also part of it.
Very cool.
Yeah, we should definitely talk further about this.
And so this is something that Dynatrace has developed
and you hope to get contributors and then contribute to the foundation?
Exactly. That's the idea.
Gotcha. And it's in the framework.
Okay, deployable, blah, blah, blah.
And it runs through the CI/CD pipeline, right?
Exactly. That's the idea.
I mean, we just released it.
So by the time this airs, it might be a couple of weeks old already.
But definitely a project we need to collaborate on.
And I think this is why we need to join forces.
Because in the end, what we all want, obviously, we all want to do business. But in the end, what we really
want is we want to make sure that these enterprise organizations that are currently completely
confused and overwhelmed by what's out there, I think we want to give them more guidance and more
tooling, as you said, as well, tooling and best practices so that their journey towards cloud native
becomes an enjoyable one and not a nightmare.
That's a very nice way of putting it.
I really like that.
But yes, totally open to collaborating.
So it looks like, obviously,
since you're going to donate it to CNCF
as open source,
which is a key thing for GitLab, as you know.
But yeah, I'm happy to talk more about it.
And like, you know, if you need to be put in touch
with the right, some folks on the product side,
I can do that as well.
Yeah. Thank you.
Cool.
Well, let's see.
You know, we talked about cloud native for enterprises.
Hopefully we could shed a little more light
on what this all means
and especially kind of getting started tips,
because we know there's a lot of enterprise customers or companies out there
that have to adopt cloud native, as you mentioned earlier many times,
just for competitive reasons, right?
You need to move faster.
Is there anything else that we should cover
before we kind of wrap it up?
I don't think so.
I think this was a really fun conversation.
We touched on various topics.
I really enjoyed myself.
I hope I didn't bore you guys to death.
No, not at all.
And I really have to say,
everybody who can get a chance to see you live on stage
when you're doing one of your demos and your talks,
it was really a pleasure watching you on stage
and seeing what modern development can really look like.
So that's why we reached out initially.
And I'm very happy that you joined us today.
Absolutely.
I had a wonderful time.
Thank you so much for reaching out to me.
And I think you have a really interesting way of doing this podcast.
And so I've just had a good time here.
So I think this is great.
Please keep it up.
I will be listening to more of your podcast too.
Great, great.
So Andy, do you want to do a quick summary there?
Yeah, even though I think
we did a quick,
a short summary already,
but of course we stick
with what we always do
and we summon the Summarator,
right, as we call it.
Yes.
Absolutely, yeah.
Let's do it.
Yeah, go on.
Yeah, sure.
So, I mean,
what I learned today
and hopefully many out there
learned that cloud native
is just something
we cannot ignore. It is
the enabler of the future of software engineering, software delivery, and software operations.
And there's a lot of excitement out there, obviously, for people that first see what's
possible. Now, some people may then drown in the realization that it's not as easy for
the environment that they're living in right now,
maybe if they are large enterprises
and have to deal with legacy applications
and maybe also with a legacy mindset,
which can even be more of a problem
than even legacy technology.
But thanks to what we learned today,
there's obviously ways to get started, right?
I think a great piece of advice is start incrementally.
Don't try to boil the ocean,
but find your individual projects
where you're starting incrementally.
I think three big interesting pieces of advice
that I heard today is when you start down the journey,
always make sure, and I'm quoting now,
you look at visibility, right?
You have to have visibility into what's going on.
You have to focus on ease of collaboration
because that's basically the big enabler.
And also make sure you have to set the right guardrails.
And guardrails can obviously be set
in many different ways and forms.
But when we talk about CI, CD pipelines,
it's quality gates and also process guardrails.
But I think these are great pieces of advice.
And the last piece I want to kind of reiterate, because I like this a lot, we briefly talked
about the purpose of autonomy, because everybody wants to become more autonomous.
And obviously, it is an autonomy that has to live within the guardrails.
But what I really liked is, and I'm not sure if I wrote it down correctly
or if I remember it correctly, but if you
really give autonomy back to your engineering teams,
you really enable them, and you actually spur the creativity.
Because only if you give them autonomy do they start experimenting, they start becoming creative.
And with that, they are going to build features maybe that nobody ever thought about.
And therefore, obviously, you know, supporting your business and bringing your business to the next level.
And there's probably much more that we talked about today.
But I was really thankful that we had you on the show today.
Thank you so much.
Thank you so much.
Thank you so much for having me.
And if anyone's interested in continuing the conversation,
I'm always on Twitter, to a fault.
So you can find me.
It's twitter.com slash P for Parrot, R for Russia, I for India, T for Tango,
I for India, A for Apple, N for Nancy, K for kangaroo, A for Apple. So it's pritianka.
I can never get my name. So I just go with that username everywhere. So yeah, please find me on Twitter. We can catch up there. And thank you once again to Andy and Brian for having me on the show.
I had a wonderful time. Excellent. And just one last question. If Andy said it's great to catch
you speaking and demonstrating.
Do you have any other public appearances coming up, maybe late March, April, May timeframe?
Yeah, absolutely. Thanks for asking.
So there's a possibility I will be at GrafanaCon in the next week or so with a really cool panel.
But that's up in the air, but highly likely.
And then at the Open Source Leadership Summit, I'll be speaking about
serverless. And I'll be doing the same, actually
not speaking, but doing a tutorial around serverless both at OSCON and Velocity.
So those are the confirmed things so far. Awesome.
Well, thanks a lot for coming on the show today. If anybody else has
any questions, comments, you can find us at Pure underscore DT on Twitter,
or you can send an old-fashioned email
at pureperformance at dynatrace.com.
I love your feedback.
And if anybody wants to be a guest,
let us know and we'll try to arrange it.
Thanks, everyone.
And I hope you have a good rest of your day.
Bye-bye.
Bye-bye. Bye-bye.