PurePerformance - Dynatrace for web-scale operations with Guido Deinhammer
Episode Date: January 29, 2019
Guido Deinhammer talks about how to make the most of our latest innovations and enable you to automate and manage your operations at web scale. He shares insights from his session on management zones, API integrations, deployment automation, and best practices using our open AI.
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance!
Hello from Dynatrace Perform 2019 in Las Vegas. I'm Andy Grabner and this is Up Close and Personal with Product Management on Pure Performance.
I want to introduce you to my guest today, Guido Deinhammer. Hey Guido.
Hey Andy, great to see you.
Yeah, it's the end of day one at Perform. Actually day two, depending on how you see it, because day one was kind of a HOT day. Were you part of the HOT day?
I was involved in preparing one of the sessions around web scale.
Just my favorite topic. I'm just coming out of the Dynatrace for web-scale operations session right now with Wolfgang, my colleague.
Yeah, cool.
I want to ask you more about this in a minute, but first of all, for people that don't know you: what's your role at Dynatrace?
I'm chief product officer at Dynatrace. I'm specifically responsible for product delivery, so I'm involved in many of our large accounts and large deployments. Web scale is one of my main responsibilities.
And that's obviously why you also had a session about web-scale operations with Dynatrace.
Exactly, yes.
So unfortunately not everybody in the world that
is listening to this can be at Perform but we want to give them a quick overview of
what they missed and maybe encourage them to also look at the recording
because I believe all the sessions have been recorded. So what were the main
takeaways? What did you guys talk about?
We were talking today about how you can actually leverage Dynatrace at large scale, in a web-scale environment. So that means, you know, how do you roll out the agent? Now, that's the easy part, right? You can say, I can roll out a thousand agents in half a day, but that's really where the fun begins. Afterwards, you have to actually
think about, you know, how do you leverage management zones to make sure your teams can focus on their responsibilities? How do you define your host groups so that you maintain a certain strict order?
That is very important if you have a large environment.
And also, of course, you have to make the decision,
should you use a single large environment
or do you want to split your deployment across multiple Dynatrace environments?
And if you do that, how can you maintain some level of visibility across environments? There are a couple of great things on our roadmap for that as well, but that's just the deployment aspect of it.
Wolfgang was talking specifically about how to leverage our new configuration
API so you can basically make sure you don't test your
new configuration in production. You actually want to make sure you have
proper staging environments for configuration. How do you migrate
configuration from here to there? How do you migrate configuration across
environments? And with our REST API we have great possibilities to do that.
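A minimal sketch of what such a configuration migration could look like, assuming illustrative environment URLs and API tokens and using the auto-tags endpoint of the configuration API as one example; exact endpoints and payload fields depend on your Dynatrace version:

```python
import requests

# Hypothetical environments -- replace URLs and tokens with your own.
STAGING = {"url": "https://staging.example.com/e/STAGING-ENV", "token": "STAGING_TOKEN"}
PROD = {"url": "https://prod.example.com/e/PROD-ENV", "token": "PROD_TOKEN"}

def export_auto_tags(env):
    """Read all auto-tag rules from one environment via the configuration API."""
    headers = {"Authorization": f"Api-Token {env['token']}"}
    listing = requests.get(f"{env['url']}/api/config/v1/autoTags", headers=headers)
    listing.raise_for_status()
    tags = []
    for stub in listing.json()["values"]:
        detail = requests.get(f"{env['url']}/api/config/v1/autoTags/{stub['id']}", headers=headers)
        detail.raise_for_status()
        tags.append(detail.json())
    return tags

def import_auto_tags(env, tags):
    """Apply exported auto-tag rules to another environment."""
    headers = {"Authorization": f"Api-Token {env['token']}", "Content-Type": "application/json"}
    for tag in tags:
        # Drop environment-specific fields before creating the rule in the target.
        body = {k: v for k, v in tag.items() if k not in ("id", "metadata")}
        resp = requests.post(f"{env['url']}/api/config/v1/autoTags", headers=headers, json=body)
        resp.raise_for_status()

if __name__ == "__main__":
    import_auto_tags(PROD, export_auto_tags(STAGING))
```

The same export/import pattern applies to other configuration endpoints, which is what makes a staged configuration workflow practical.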
Same thing around dashboarding. If you have, for example, a couple of dozen Dynatrace environments, you want to make sure you can roll out dashboards across all of your environments that have consistent terminology and are easy to understand for all the teams using Dynatrace. With our dashboard API that's just been released, that's really easy to do, straightforward, and a great benefit for everyone using multiple environments or even just having a proper staging process for new dashboards.
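A minimal sketch of pushing one dashboard definition to several environments, assuming hypothetical environment URLs and tokens and a dashboard JSON kept in version control; the dashboards endpoint and response shape should be checked against your API version:

```python
import json
import requests

# Hypothetical list of environments that should all receive the same dashboard.
ENVIRONMENTS = [
    {"url": "https://team-a.example.com/e/ENV-A", "token": "TOKEN_A"},
    {"url": "https://team-b.example.com/e/ENV-B", "token": "TOKEN_B"},
]

def push_dashboard(env, dashboard):
    """Create a dashboard in one environment via the dashboards API."""
    headers = {"Authorization": f"Api-Token {env['token']}", "Content-Type": "application/json"}
    dashboard = {k: v for k, v in dashboard.items() if k != "id"}  # let the target assign its own id
    resp = requests.post(f"{env['url']}/api/config/v1/dashboards", headers=headers, json=dashboard)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with open("ops-dashboard.json") as f:  # illustrative file name for the versioned dashboard
        dashboard = json.load(f)
    for env in ENVIRONMENTS:
        print(push_dashboard(env, dashboard))
```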
And apart from that, of course, we also talked about the openness of our new AI. I mean, if you monitor large-scale environments, one important aspect is, of course, getting specific events and also specific metrics that are perhaps not automatically collected by Dynatrace because they are coming from proprietary systems that are equally critical to your business and that you need to monitor just as closely.
And with AI 2.0, Dynatrace can really incorporate all types of information into its root cause analysis logic.
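A minimal sketch of feeding an external signal into that analysis, assuming a deployment event sent from a hypothetical in-house tool and attached to hosts carrying an illustrative 'payment-gateway' tag; the event type, required fields, and attach rules are assumptions to verify against the events API documentation for your version:

```python
import requests

ENV_URL = "https://myenv.example.com/e/MY-ENV"  # hypothetical environment URL
API_TOKEN = "MY_TOKEN"                          # token with permission to ingest events

def send_deployment_event(name, version):
    """Push a deployment event from an external tool so it can be correlated
    with anomalies on the affected hosts during root cause analysis."""
    body = {
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": name,
        "deploymentVersion": version,
        "source": "in-house-deploy-tool",
        # Attach the event to all hosts carrying the (illustrative) 'payment-gateway' tag.
        "attachRules": {
            "tagRule": [{
                "meTypes": ["HOST"],
                "tags": [{"context": "CONTEXTLESS", "key": "payment-gateway"}],
            }]
        },
    }
    headers = {"Authorization": f"Api-Token {API_TOKEN}", "Content-Type": "application/json"}
    resp = requests.post(f"{ENV_URL}/api/v1/events", headers=headers, json=body)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(send_deployment_event("payment-gateway release", "1.4.2"))
```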
That's pretty cool.
Well, there's a lot of things that you just mentioned.
I have a couple of questions because they're very interesting.
So first of all, one topic that I love, because I hear it a lot from my customers when I talk about continuous integration and continuous delivery, is the concept of configuration as code. So what you mentioned earlier with our configuration API, basically moving configuration pieces from one environment to another, ideally testing it earlier. I think I talked with Wolfgang and he said you can extract configuration from one environment, store it as config files, and version control it in your Git.
That's exactly what Wolfgang demoed actually in the session.
So he showed a Python script where you can extract all your configuration from Dynatrace.
He checked it into GitHub, checked it out again, applied it to another environment.
So all those things were shown live, so if anyone is interested in that, just check out the recording. It was a great live demo by Wolfgang.
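A minimal sketch of the configuration-as-code flow described here, assuming the export helper from the earlier sketch and an already initialized local Git checkout; file names and repository layout are purely illustrative:

```python
import json
import subprocess
from pathlib import Path

def save_and_commit(tags, repo_dir="dynatrace-config"):
    """Write exported configuration objects to JSON files and commit them,
    so every change to the monitoring configuration is versioned and reviewable."""
    repo = Path(repo_dir)  # assumed to be an existing Git working copy
    repo.mkdir(exist_ok=True)
    for tag in tags:
        path = repo / f"autotag-{tag['name']}.json"
        path.write_text(json.dumps(tag, indent=2, sort_keys=True))
    subprocess.run(["git", "add", "."], cwd=repo, check=True)
    subprocess.run(["git", "commit", "-m", "Export Dynatrace configuration"], cwd=repo, check=True)
```

From there, applying a reviewed commit to another environment is the import step shown earlier.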
And now I also understand why you call your team technical product managers, because these guys are not just product managers in the sense of coming up with nice PowerPoints; they're actually hands-on, writing Python scripts, doing Git pushes. I mean, it's amazing.
Absolutely. I mean, yes, we have to do the PowerPoints as well, but we would all love
to be developers. At heart, most of us are developers, have been developers for a long
time. And if there's a chance to program something, we jump on it typically.
So I want to go back to the first thing you brought up. I believe you said that at large scale, sometimes there's a decision: do you run in one big environment, or do you run in multiple environments? Also, I guess at some point there's a decision whether you run SaaS or Managed, but I think those are different considerations.
Can you give me maybe a quick example of why would somebody, for instance, go with multiple
Dynatrace tenants versus putting everything into one tenant and using management zones?
Great question.
I mean, with management zones, certainly we have a concept that allows different teams,
even if they share the same environment, to focus on their responsibilities.
So that's something we released at the last Perform in Vegas. We announced it, and since then it has been very successful; we have hundreds of customers using this feature very successfully now.
But if you have very large environments, there is another aspect: if you have, say, 10 teams working on the same environment, there are shared configuration items. Some configuration is global. And if you want to ensure that people don't get in each other's way when many teams own configuration, then that's certainly something where you should consider going with multiple Dynatrace environments.
Another practical thing is the number of hosts you're monitoring. If you're monitoring 10,000 hosts, for example, splitting this up into multiple environments is usually a good thing, mainly because the more entities and hosts we monitor, the more you get a basically exponential increase in complexity. To break this down, it's certainly worthwhile to consider splitting this into multiple environments.
When you do this, the question is how can you maintain a single pane of glass across
all your environments?
We're actually working on a couple of things that allow you to have a dashboard showing information from different Dynatrace environments, and also allow you to trace transactions across environments. So if you have a backtrace, for example, and you look at a transaction, you want to see: hey, where has this been coming from, which other environment did this transaction come from? You can just click there, drill down, and you see the continuing backtrace in the other environment, which helps you diagnose the full problem. But transactional visibility, if you think about it, is typically not the main issue for splitting things into multiple environments, because there'll be a different team responsible for, say, the root cause.
I mean, if you're responsible for one environment, your root cause is basically the other team.
It's important for you to give them the information of, hey, this was the transaction,
that's how you continue it, and you can actually send them the link to the backtrace for them to diagnose it.
So that's a great game changer in setting up Dynatrace with multiple environments.
That's pretty cool, because we do the end-to-end tracing, yet for the individual teams I only give them the visibility that they need or are allowed to see. I think that's important, again, from a permission perspective.
Let's say we're two teams, Guido, you and I. I know I'm impacted.
The root cause is you.
I don't see the details because I'm not allowed to see your details,
but I can send you the link and you'll say,
okay, Andy, of course I will fix it because I am a technical product manager.
Exactly, yes, yes, yes.
Now, that's exactly what we're trying to solve with this.
It's not released yet, but we're working on this and we'll have it over the next couple
of months.
Cool.
It's actually nice. Everybody else I had to ask at the end. So give us a glimpse into the future, because typically you don't want to talk too much about the future.
Well, a little bit, yes, but we don't want to be nailed down on dates.
Good, so to close this out:
Is there anything else that you want
to share with the people that are thinking about large-scale deployment of
Dynatrace? Maybe something that is already coming?
Basically, from a functionality perspective, I think we've covered most of it. Also, specifically if you have, say, hybrid-cloud or multi-cloud environments, then of course there are a couple of specific tracks for that: how do you roll out Dynatrace most effectively in Kubernetes environments, in Cloud Foundry environments? So all those things are of course covered. Our session was really specifically about organizing things and how you leverage the AI in large-scale environments.
And the other thing: if someone is really planning on deploying Dynatrace at large scale, make sure you involve Services, make sure you involve Dynatrace ONE.
Even though rolling out Dynatrace is very easy and very quick,
if you do it very large-scale, you want to do it right the first time.
Exactly. Cool. Hey, Guido, thank you. I think it's the evening of day one or two, however you want to see it. At the end of Tuesday, I think there's a party or something going on. We should probably walk over and have some beers over there.
That's a good point. Let's do that. Thanks Andy.
Thank you. For Pure Performance, I'm Andy Grabner.