PurePerformance - Managing hybrid complexity with Kurt Aigner
Episode Date: January 31, 2019
Kurt Aigner gave a session about managing hybrid system complexity, from the cloud to the mainframe and everything in between. He shares a few notes and tips in this discussion.
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance!
Hello from Dynatrace Perform 2019 in Las Vegas. I'm Andy Grabner and this is Up Close and Personal with product management on Pure Performance.
I want to introduce you to a special guest, a good friend of mine and product manager as I just said, Kurt Aigner.
Hey, Kurt.
Hey, Andy. How are you doing?
Good, good. I think for both of us here in Vegas, it's been an exhausting time already.
Hot day on Monday, now the breakouts.
Exciting.
Exciting too, but also exhausting. So Kurt, you just came out of your session, I believe, and you talked about... what did you talk about? What was your session about?
Yeah, the session was about managing hybrid complexity, from the cloud to the mainframe and everything in between.
Okay, well, that's a broad, broad field, everything in between. What were the highlights?
Yeah, the highlights were actually to double down on the fact that everyone in IT is overwhelmed by what's going on in the public cloud and the private cloud, and by how fast things evolve there. We all know that the pace is picking up and it's getting faster and more complex. But what's often forgotten is that you still have stuff running in enterprises. Most of our customers still have enterprise service buses, messaging systems and the mainframe on premises, really running workloads which are highly critical to them and which this cloud stuff really relies on. That's where the data comes from and that's where the actions are processed as well.
So that's interesting. I mean, obviously I know,
because I know some of our customers still use the mainframe. But maybe some of our listeners are not aware that there's still a lot of mainframe out there, especially, as you said, powering very critical applications that make sure our day-to-day lives actually work.
Think credit card payments, obviously; if you think about any financial service, there are still a lot of mainframes, and obviously they need to be monitored.
So what are the... Obviously, the listeners here were probably, unfortunately, not in your session, but there is a recording. So what are the things you presented today where you say, hey, these are the things I want to make sure every listener takes home with them, and that maybe encourage them to also watch the recording?
Yeah, sure. So
actually, what I did is I really showed what the in-between to the mainframe looks like. So we talked about things like IBM Integration Bus, typical active metrics, or things like DataPower, and so on and so forth. So there's actually a ton of technology still in place: appliances and things which are accessing the mainframe, and other services still running on-premises.
So that was more or less the first part, where we showed that Dynatrace, very famous for all the cloud technologies, as you know, can do its job there as well, to really provide this end-to-end experience, plus the whole AI with fault domain isolation back to the back end. As soon as you have a blind spot within your environment, you really have a very high risk that you miss a very important thing. And that's what the whole story is about.
And we are actually very excited that we just launched our early access program for the mainframe agents. So we are now also capable of monitoring CICS and IMS regions on the mainframe. There is also a zLinux full-stack agent upcoming, so again, Linux running on the mainframe, which is also very popular in those shops that have a mainframe, to actually have a stable environment on such a huge box.
That's pretty cool. And so, also for some of the listeners
that may not be aware of the history of Dynatrace: we did have support for the mainframe for years already with our AppMon product and the agents that came with AppMon. And now we're bringing all of this to the new Dynatrace, the new full-stack, OneAgent world.
Yeah, a perfect point. So we were the only ones, and we still are the only ones, who are capable of tracing transactions in a production environment with full coverage down to the mainframe. That was already true in the good old AppMon times. The good thing is that we of course learned a lot. These agents have matured. The technology has not really changed; it was just converted to Dynatrace, actually with some different tagging technology. But it is not easy to have such a powerful agent on a box where you are really driven to have as little overhead as possible, because that's really expensive on the mainframe.
So yeah, it was a tough job already in the past, and now we are really proud to bring this also to Dynatrace and scale it within the Dynatrace platform, because that's what we're looking forward to. We had the technology back then, but we were limited to a single server; now we are in a cluster and we can really take the full load of the biggest mainframe environments in the world.
That's pretty cool.
And because this session is obviously not only about the mainframe, but really about connecting it with the cloud and everything in the middle: I just recently did an engagement and was on site with one of our customers that actually uses the mainframe and now builds cloud-native applications on top. And the first thing they said is that the big benefit for them is actually seeing how the calls from the new cloud-native world are made into the mainframe, because they're detecting a lot of architectural issues. Like, let's say, the Node.js front-end developers are not thinking about how expensive it is to make calls to the mainframe. But now, with our PurePath technology going end to end, they can see bad performance or bad calling patterns into the mainframe, right? And therefore they're saving performance, but also resources, which means money, right? I mean, that's...
Yeah, you're totally spot on. So that's really
where we can bring the most benefit out there. Talking about the backtrace, we are seeing exactly which mainframe program was called by whom, down to the application and the user actions, actually end to end as we know it. And having this covered also with our new artificial intelligence, with the baselining for all those values: if a deployment, as you mentioned, in Node.js triggers multiple transactions on the mainframe, it will immediately be discovered by Dynatrace, which will show that, well, that change did something. You're making double the calls, whatever, to the mainframe, which might not even be recognized as a response time degradation, but you will see it a few months later on the bill which goes to IBM, because you actually doubled the power you use on the mainframe.
So now, the other thing that you mentioned at the
very beginning: some of the components in the middle, the physical appliances like DataPower and some other components. So we're now pulling in a lot of additional data from these quote-unquote black boxes, or whatever components there are. And we heard this week that the AI, our Davis artificial intelligence, has been extended. So we're no longer just looking at baseline degradations in case there is a problem, but we look at every single metric that is part of the full Smartscape dependency tree. And therefore, I assume, we also look at all of this data that comes in from these, let's say, middleware components in a classical mainframe environment, right?
Yeah, that's right.
So it's really easy to capture a ton of data. There are APIs, and we are doing nothing else within those appliances; we're just querying the API. But the question now is: what are you doing with the data? Okay, you can display it in a chart, or you can have an expert sitting in front of a screen saying, I know how to read this. But the beauty is, exactly as you said, to have real AI power on top of it, to really detect whether something is going wrong and what's going wrong, and to do that in a proactive way. And that's what we actually do.
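To make the pattern Kurt describes concrete, here is a minimal sketch: poll an appliance's management API and forward the values to Dynatrace so the AI can baseline them like any other metric. It assumes the Dynatrace Metrics API v2 ingestion endpoint and an API token with metric-ingest permission; the appliance URL, its status endpoint, and the returned field names are purely illustrative placeholders.

import requests  # third-party HTTP library (pip install requests)

# Placeholder configuration -- substitute real values for your environment.
DT_ENV = "https://<your-environment>.live.dynatrace.com"      # Dynatrace environment URL
DT_TOKEN = "<api-token-with-metrics-ingest-scope>"            # API token (assumed scope: metric ingestion)
APPLIANCE_URL = "https://datapower.example.com/mgmt/status"   # hypothetical appliance status endpoint

def fetch_appliance_stats():
    """Query the appliance's management API and map a few fields to metric keys (fields are illustrative)."""
    resp = requests.get(APPLIANCE_URL, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "custom.appliance.cpu_usage": data.get("cpuUsage", 0),
        "custom.appliance.active_connections": data.get("activeConnections", 0),
    }

def push_to_dynatrace(metrics):
    """Send the values to Dynatrace using the v2 metrics ingestion line protocol."""
    lines = [f"{key},source=datapower {value}" for key, value in metrics.items()]
    resp = requests.post(
        f"{DT_ENV}/api/v2/metrics/ingest",
        headers={"Authorization": f"Api-Token {DT_TOKEN}", "Content-Type": "text/plain"},
        data="\n".join(lines),
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Run once; in practice this would be scheduled, e.g. every minute.
    push_to_dynatrace(fetch_appliance_stats())

Once the values arrive as metrics, the baselining and anomaly detection discussed above can be applied to them like any other time series.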
Pretty cool.
So Kurt, for people that listen to this and actually happen to be here at Perform, where's the best place for them to meet you? Are you around at, I think there's an innovation center there at the towers, or where is the best place?
That would be the best place.
So we have our innovation lab directly
in the heart of the marketplace.
You can't miss it.
There's a big Dynatrace UFO on top of us.
We have a lot of demo stations there.
And whenever I'm not busy with customer engagements, I will be right there at the tower, so just come and meet me there.
That's pretty cool.
Last question that I have.
I know you said there's a lot of stuff coming for the OneAgent on the mainframe.
Looking into the future, the next three, six months, or maybe until the next perform,
what are the big highlights that customers should be looking forward to?
Yeah, as I mentioned, we are planning a zLinux full-stack agent.
That's one of the big things.
We will also work with our early access program customers, and with the early adopters after GA, to get mainframe dashboards into Dynatrace, as we have for other technologies as well. That's also something we are looking into and looking forward to. And we will also provide data for IBM. They are offering special pricing for load which is coming from mobile or from the public cloud, and we will simply allow you, through a REST API, to pull the data and to support the billing with IBM.
That's pretty cool, and that actually helps our customers save money, because they can now prove what kind of transactions come from these types of environments.
Exactly.
That's pretty cool. Alright. Hey, thank you, Kurt. I would say, you know, enjoy the rest of the
conference. Will do. Yeah. And for Pure Performance, this was Andy Grabner. Thank you.