PurePerformance - Perform 2018 Tuesday Morning
Episode Date: January 30, 2018
Live from the conference on day 1: the early announcements, reflections, excitement, caffeine, and the live stream from the conference: http://perform.dynatrace.com...
Transcript
Square meal of the day. Don't forget the PerfBytes.
F*** waffles. Microwave ready.
Blah-ba-dee-blah. Add nutritional value to your brain.
It's time for PerfBytes with your hosts Mark Tomlinson, James Pulley, and Eric Proegler.
PerfBytes!
Whatever.
Whatever.
Mark!
Whatever. Hey, it's PerfBytes time.
Yes.
You know, last night was a little crazy, as would be usual, at the Dynatrace Perform 2018 Welcome Reception.
What happens in Vegas stays in Vegas.
I agree.
Mostly, you and I are jet-lagged from the East Coast.
Sure, we'll go with that.
So you're hanging out in Vegas until even, let's say, a reasonable bedtime hour for an adult, 11 p.m.
That's 2 in the morning.
Yeah, let's go with that.
We need to sleep more.
Yeah.
So anyway, to everyone out there who's listening to us on the East Coast,
it's now the afternoon, but it's still...
And we're still pile-driving the caffeinated beverage.
I'm on my third coffee.
I've had my first Dr. Pepper,
and I'm eyeballing this display of Coca-Cola sitting in front of us.
Yes, there's some wonderful things.
We are broadcasting live from the Dynatrace Perform 2018 user conference.
And actually this morning, starting I guess at 9 a.m., for the last hour or hour and a half,
they've been on the main stage doing all sorts of announcements.
Yeah, we've been watching kind of a video, a live feed of what's going on on the main stage.
Yeah, which is actually, there's a monitor near us which actually looks kind of crazy.
I don't know what's going on with that monitor.
There's one all the way across the room, but from here it looks like it's about a 7-inch diagonal television.
Yeah, it's kind of crazy.
But for anyone who's not here, of course, last night we reminded many people that it's a radio show.
They can't actually see what's happening here.
Most people are listening to a podcast.
Yeah, like you had shaved your head and I now have pink hair.
Shaved head and pink hair, or something.
But actually, people can tune in and watch a lot of the main stage and other streaming sessions on the live stream.
You can go to, Andy said last night, perform.dynatrace.com.
And you can click on there and join the live stream.
I think it's free.
It's free to watch from home if you can't make it to the conference.
But a few of our announcements, James, do we have anything in the PerfBytes world we're announcing?
Hold your calendar, August 11th, PerfBytes annual barbecue.
And James' birthday.
Yes, yes, I'm turning 50. So if ever I have helped you in your career, with an answer or a non-answer, please join me on my birthday. A pig is gonna lose his life in support of my birthday event.
That's absolutely true. As at many perf events, pigs lose their lives and sacrifice themselves.
Yes, yes.
No, wait.
Well, let's see.
We had jalapeno poppers.
It's like a shoulder or something.
So this time we're going to do a whole pig.
From ribs to a whole pig.
That sounds good.
The only other thing in my calendar is the, well, we'll be at STPCon in April.
Yep.
I'll be at TISQA down in Chapel Hill.
I might do a little road show up there just to say hi to some people.
Yeah, this is going to be great to see the largest TISQA that's happening in Chapel Hill.
T-I-S-Q-A 2018 is going to be huge.
But let's talk a little bit about at least the things we picked up from the Twitter feed and such in the Dynatrace world.
There are three major things that got announced.
Okay.
What are they, Mark?
Three areas.
Let's go with number one.
Remember last year they did Davis?
Yes.
I like Davis.
Davis was cool, but it was kind of a teaser, right?
I mean, Davis wasn't quite ready for super prime time.
But now Davis is actually quite mature.
You can use it all the time if you've got full Dynatrace.
It doesn't really work with AppMon.
It's not for AppMon people.
You've got to go to the full Dynatrace to get Davis working.
But you have to get an Amazon Echo
and set all this up.
And remember,
Davis is just the voice interface
to Dynatrace
and the artificial intelligence engine
that's already built into Dynatrace.
Yeah, people don't even know that.
This is confusing to a lot of people, where you have this Davis personality.
Oh, so Davis is the AI?
Yeah, Davis is Jarvis. I think Davis is close to an Iron Man analogy.
I don't know if you can change Davis's voice. But you see, Jarvis was the actual AI in Iron Man, and in this case Davis is just the voice of the AI.
The voice interface to the AI.
To the AI, right. So the rules-based engine and the other parts of the AI, that's actually developed quite a lot now in the latest versions of Dynatrace.
So what they put on the stage this morning, there are three areas that came out that I picked up from the Twitter feed.
One, log analytics.
Now, if you've been an AppMon customer and you drill down into, let's say you've got excessive logging happening in your application,
you would see a certain layer in the layer breakdown, or you'd find the, you know, Java logging system.
Or just giving credit to a colleague, Chris Jeans.
He just happened to be looking at the log sizes on a whole bunch of platforms where
one tier of the architecture was slow, and amazingly enough, its logs were, say, several
orders of magnitude larger than the other logs.
Yeah, just a grep for debug.
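To make that "grep for debug" tactic concrete, here's a minimal sketch, assuming plain-text log files in a hypothetical logs/ directory; it flags any file that dwarfs its peers and counts DEBUG lines:

    import os
    import glob

    def scan_logs(log_dir="logs", size_ratio=100):
        """Flag log files far larger than their peers; count DEBUG lines."""
        paths = glob.glob(os.path.join(log_dir, "*.log"))
        if not paths:
            return
        sizes = {p: os.path.getsize(p) for p in paths}
        median = sorted(sizes.values())[len(sizes) // 2]
        for path, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
            with open(path, errors="replace") as f:
                debug_lines = sum(1 for line in f if "DEBUG" in line)
            flag = "  <-- suspicious" if median and size > size_ratio * median else ""
            print(f"{path}: {size} bytes, {debug_lines} DEBUG lines{flag}")

    scan_logs()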
There are a lot of log-based issues from a performance standpoint where most developers think logging is async.
It should never be a problem.
Oh.
Until it becomes a problem.
Until we get to like Spectre and all of these things that all of a sudden take more security overhead on every IO request, on every system call.
Or the ops department decides to put you on NFS.
And now you're paying a round-trip latency to
a filer somewhere.
Even in that case, with the latest
Spectre and stuff,
you're looking at
an additional cost for
non-intelligent network adapters to
send that stuff and receive it across
the network. So yeah, there
is a cost, even though it's async.
The CPU pool is finite.
The interrupt service pool is finite.
And, yeah, so you can definitely overlog.
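A minimal sketch of why "async" logging isn't free, using Python's standard logging queue machinery; the bounded queue is the point, and the blocking subclass is our own illustration, not anyone's production setup:

    import logging
    import logging.handlers
    import queue

    class BlockingQueueHandler(logging.handlers.QueueHandler):
        """Block instead of dropping when the buffer is full."""
        def enqueue(self, record):
            self.queue.put(record)

    # Bounded buffer: while it has room, logging really is async.
    log_queue = queue.Queue(maxsize=1000)
    listener = logging.handlers.QueueListener(
        log_queue, logging.FileHandler("app.log"))
    listener.start()

    log = logging.getLogger("app")
    log.addHandler(BlockingQueueHandler(log_queue))
    log.setLevel(logging.INFO)

    # If the consumer stalls (slow disk, NFS round trips, post-Spectre
    # syscall overhead), the queue fills and every log call blocks here:
    # the app thread now pays the I/O latency it thought it had outsourced.
    for i in range(10_000):
        log.info("order processed id=%d", i)

    listener.stop()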
Now, two years ago at the first Perform that we got invited to hang out and do podcasting,
remember Splunk is a huge partner.
Yes.
And they were there in the partner ecosystem.
You don't see Splunk here?
No, I don't. I mean, I can run out and look at the sign, but no, I don't see them here.
And that's because, my interpretation, and a little bit on the Twitterverse here, was that the log analytics is kind of bringing ELK and Splunk in as competitors. They're incomplete.
Really looking at a couple of things that Dynatrace is going to leverage.
One, the single central repository,
not six different indexes for six different logging tools,
which is, if you think about it, people already have the problem of,
I got logs everywhere.
The last thing I need is logging tools everywhere.
The single view, single repository, single index is still a goal.
Splunk and ELK and Sumo Logic, they're all designed to really aggregate all of that and provide a single comprehensive interface to look at all of your logs.
So, I mean, they're attacking it from that perspective.
And then you have Dynatrace coming from the other perspective of the inside analytics, inside the virtual machine for both .NET and Java, and being able to merge that now with the data.
You could always do that inside of Splunk or ELK.
You could go out and get it from Dynatrace, but if it's going to be a more natural integration now, that has some intriguing possibilities.
It is.
So the single repository, single index is nothing new in the world of log aggregation
and that kind of stuff.
But being able to have it in the single index repository instantly correlated in the way
that other Dynatrace contexts can be applied, right?
And people have been trying to do this inside of Splunk and AppDynamics
by getting the Dynatrace data and merging it into Splunk.
And then writing their own correlation engine without any AI,
without the expertise and the platforms and the acumen of the agent.
And if you look at UEM, is logging affecting my end users?
A lot of ops guys don't even know.
Logs are way back there.
How could that affect end users?
It's supposed to be async.
My developer said it's async.
I trust them foolishly.
But now Dynatrace gives you the insight to say,
here's the context of a logging-based problem,
and now you can correlate to it more immediately, right?
Plus you can put some base rules in the AI that say, hey, these are the top 10 common logging problems.
You've got excessive logging.
Oops, you just pushed debug logging to prod.
Yeah.
Like, even if it happens, because it's going to happen.
Or even just the scale issue.
If just one node is out of scale with the rest, even if all the developers have...
The log level is set correctly, but the developers have set the logging class incorrectly on some of their code.
That can cause the logs to explode, even with the log level set appropriately.
Which is, from a developer standpoint, if I don't have permission to change the logging level,
they'll put stuff in info mode just to do debugging
because someone told them they don't have permission to use debug mode in dev or test.
So that's why people abuse info mode, and then it becomes a debug problem.
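Here's a small sketch of that exact failure mode, assuming Python's standard logging module stands in for whatever the app actually uses: the root level looks correct, but one logger class left at DEBUG explodes the output anyway:

    import logging

    # Root level looks "correct": only warnings and above.
    logging.basicConfig(filename="app.log", level=logging.WARNING)

    # But one logger class was left at DEBUG with its own handler. The
    # root logger's level does not gate a child that sets its own level,
    # so this logger's output explodes anyway.
    chatty = logging.getLogger("com.example.payments.sql")
    chatty.setLevel(logging.DEBUG)
    chatty.addHandler(logging.FileHandler("sql.log"))

    for i in range(100_000):
        chatty.debug("bind param %d", i)  # written despite root=WARNING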
So, correlating with other DT, Dynatrace, capabilities. I love the layer
breakdown, the drill-down capabilities. I'm an old AppMon guy. But even if you think of a Davis
interaction, it'll be able to detect and correlate. Remember when Andy was talking
to Davis last year on the road with us? I do, and Davis was asking those really awkward questions of him. I know, yeah.
Well, it's probably the Austrian accent that throws it off.
But the basic idea being,
now you'll get not just,
I can drill down to a logging issue,
you can automatically detect, escalate,
and look at how a logging issue evolves
in the problem evolution.
You know, the tree diagram
looks like a vertical transaction flow,
and you can see which parts of the system turn red over time,
like problem isolation and replay.
Logging will be a major indicator.
So you have one that starts growing more than the other ones.
In an app server or a web server, there's a log anomaly,
and now you can start mapping to that as a
predictor.
So I have an open question about this, since we're going to be incorporating more on the
log side.
Right.
And this, I asked Andy this last year, what is the ability to incorporate third-party
rule sets?
If you and I have discrete knowledge on log patterns related to performance, how can we get those either integrated into the AI engine
or a private rule set that someone can say,
hey, download the cool PerfBytes rules.
Absolutely.
And we'll look at your logs.
We'll look at your Dynatrace stuff.
Identifying performance issues in the application exhaust
just becomes the Pulley rule set.
Exactly, I think you could pull that off. At the same time, the question being, how do we get that in as a partner integration or third party? Is it like Neo in the Matrix, where he's jacked in and he says, you know, "I know jujitsu"? Boom, he just jacked it right into his brain. He knows jujitsu.
So this would be, like, for Dynatrace, you would inject it, and then all of a sudden it's like, oh, I have a cache management problem. What's with JIRA? And then cache management, like they're on the roof with the helicopter, and it's like, "Tank, I need..." I need a new cache policy.
Can you help me out here?
And then you get the rules to repair.
So you could get identification correlation rule sets.
You could also get remediation rule sets.
Yeah.
That could be very, very powerful.
And then hook it up to the AI.
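There's no public Dynatrace API for third-party AI rule packs confirmed here, so purely as a hypothetical, a pluggable detection rule set, the "cool PerfBytes rules," might look something like this sketch:

    # Hypothetical only: no public Dynatrace API for third-party AI rule
    # packs is confirmed here. This just shows the shape of a pluggable
    # detection rule set.
    RULES = []

    def rule(name):
        def register(fn):
            RULES.append((name, fn))
            return fn
        return register

    @rule("debug-logging-in-prod")
    def debug_in_prod(metrics):
        if metrics["env"] == "prod" and metrics["debug_line_ratio"] > 0.10:
            return "Debug logging pushed to prod: lower the log level."

    @rule("log-volume-outlier")
    def log_outlier(metrics):
        if metrics["log_bytes"] > 100 * metrics["peer_median_bytes"]:
            return "One node logging 100x its peers: check logger config."

    def evaluate(metrics):
        """Run every registered rule; report the ones that fire."""
        return [(name, msg) for name, fn in RULES if (msg := fn(metrics))]

    print(evaluate({"env": "prod", "debug_line_ratio": 0.25,
                    "log_bytes": 5_000_000, "peer_median_bytes": 10_000}))

A remediation rule pack would be the same shape, returning an action instead of a message.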
The other thing that's really interesting about log analytics that I think is a strength in the Dynatrace world is auto-discovery of
log file locations.
So if you are doing a deterministic implementation of Elk or Splunk, you kind of have to be very
explicit in your integration of where you send the logs and for integration to the central
repository.
You have to know before you know to know where to look.
Yeah.
Well, in the world of super complex changes and, hey, we took a new version,
they changed the default log directory, and we just went with the defaults,
since Dynatrace is hooked in and it can see the entire system library calls into system log,
boom, auto-discovery of logs.
Right?
So it knows where they are.
So you don't have to go hunting for them and then you miss out.
So to me, it avoids blind spots.
Like the "something's logging somewhere but no one told me" kind of excuse.
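As a rough sketch of the hypothesized mechanism, not Dynatrace's actual implementation, pattern-based discovery could be as simple as known per-technology directories plus filename globs:

    import glob
    import itertools

    # Known per-technology log directories plus filename patterns.
    # Paths are illustrative, not Dynatrace's actual pattern list.
    KNOWN_PATTERNS = {
        "tomcat": ["/var/log/tomcat*/catalina*.log"],
        "nginx":  ["/var/log/nginx/*.log"],
        "syslog": ["/var/log/syslog", "/var/log/messages"],
    }

    def discover_logs(extra_patterns=()):
        """Expand every known pattern, plus any user-supplied ones."""
        patterns = itertools.chain(*KNOWN_PATTERNS.values(), extra_patterns)
        return sorted({path for pat in patterns for path in glob.glob(pat)})

    # "Here's another log that I want you to monitor":
    print(discover_logs(extra_patterns=["/opt/myapp/logs/*.out"]))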
So I'm going to put my curmudgeon hat on and say, I want to see this one.
I want to see this one.
Well, we didn't see it.
Brian Wilson's here.
He probably saw the auto-discovery
of logs. We've always had that, though.
I know you've had it, but that's now
integrated with the new log analytics.
The more important thing
with the new
log analytics
is
streaming out logs into
a central repository.
So let's say you have... We already talked about that.
Well, that's the bigger piece.
That's the bigger piece?
I just saw that there.
Not the auto part.
We've always had it. Auto-discovery of logs was always there.
And then if it's not, you know, and just like anything,
it's based on rules and patterns of where standard log files go.
And then if you have some kind of weird custom log,
different name, different location.
Or the location changes and you didn't know about it,
it could pick it up and still get it into the repository.
Right, but also I'm saying if you have a log that's not named
or not following a pattern that we recognize,
you can always go to the configuration and say,
hey, here's another log that I want you to monitor.
And immediately start pulling,
and it'll automatically bring it in?
Because we were just saying
you had to be more deterministic
in like the Splunk or Elk world.
You kind of have to know where that stuff is
and set it up before you deploy.
We're going to move you to another mic over there.
Brian's going to get another mic.
Aha.
There you go.
All right.
Yar.
Okay, sorry for tapping there, everybody.
So I'm not going to claim to be one of the log analytics experts,
but from what I've known about it existing already,
as I get a piece of walnut on the mic,
yeah, it's going to automatically detect all your standard type logs for every application.
You can custom tell it where if you have your own unique type of log that doesn't follow
one of the patterns we automatically match, that doesn't follow a directory we're automatically
looking for.
We can indicate that in there.
I can see
a path to auto-discovery on logs
by
the fact that you're watching
inside of the virtual machine.
You're seeing how the log is kicked off.
You're seeing its log destination
inside the code. But I don't think we're doing it that way in this case.
And pick up all that stuff.
I don't know if we're doing that in this case. I think the log analytics
piece is a lot more basic.
There are directories for
every technology where log files exist
and there are log naming patterns
that we know to look at the, you know,
and again, I don't know, I'm
hypothesizing at this point because I don't know the
details, so completely making this
all up at this point.
But since you're hooked in, you should be able to know
where system.log is logging.
Right. And our host agent, which runs on the
host, since it's not just a
JVM agent anymore, that's a library
and the thing, we have visibility to everything.
Right. So we can find
these things, we can detect them much easier.
And to me, the cooler
aspect, as I was saying before, of the
newer logging capabilities is for those temporary servers that might not be here in another five minutes.
We're going to stream your log output out to a central repository.
So you can go and grab that later and not have to worry about that.
That's very, very cool.
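A minimal sketch of that tail-and-ship idea for ephemeral hosts; the ingest URL is a placeholder, not a real Dynatrace endpoint, and it assumes the third-party requests library:

    import time
    import requests  # third-party HTTP client, assumed available

    # The ingest URL below is a placeholder, not a real Dynatrace endpoint.
    def tail_and_ship(path, endpoint="https://logs.example.com/ingest"):
        """Follow a log file and ship each new line off the host."""
        with open(path) as f:
            f.seek(0, 2)                 # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)      # wait for the app to write more
                    continue
                requests.post(endpoint, json={"source": path, "line": line})

    # tail_and_ship("/var/log/myapp/app.log")  # runs until the host dies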
And as it puts out there, I mean, again, you get all the root cause identification and correlation from the standard Dynatrace experience, right?
So the views and everything, which is better.
Now, for me and you, as old AppMon, you know, gurus, we're like, I love my layer breakdown.
And I could see, you know, where logging turns synchronous when it's under tremendous pressure.
And that backs up into the app.
So to me, that's like, okay, that's what Dynatrace helps me do for logging.
I can do it in method.
If I look in a method hotspot, look, see, oh, you've got a logging hotspot.
But that could be excessive logging where you have latency.
This is going to go one step further.
And the question we had at the beginning of the conversation, Brian,
is two years ago we saw Splunk as a partner.
They're not here, I don't think.
So is this encroaching, is this a competitive
move or a coopetition
kind of thing? Well, I think
Splunk does a lot more with
logs.
What I saw in the presentation
earlier today is one of the things they're going to introduce
is the ability to start charting and graphing based on your log messages, which does definitely encroach some more.
But like a lot of technologies, there's usually a lot of overlap, but a lot of things that they don't completely do.
Are we fully going after the likes of Splunk? I'm not quite exactly sure. Because, you know, would we currently, at least with the new stuff, be able to tell a customer,
oh, you have Splunk, you can rip them out and put us in?
Probably not at this point.
Probably not, yeah, yeah.
Because Splunk is a much more generic solution where you can ask a lot of different questions.
Right, and they've also been at it for a lot longer at this point.
You know, three or four years from now?
Who knows, right?
No idea.
But the big thing we noticed is, so the reason we have logging in our application in the first place,
and I love these kind of stories,
is that we are a monitoring tool,
and when we were developing Ruxit,
the requirement for the developers was,
you have to use this tool that you're developing
to monitor your developing pipeline.
Yeah.
Right?
Yeah.
So as they
were going through, the developers
said, you know what?
APM is great. One agent stuff
is great. Metric data is great.
We still need logs though.
Logs, in
some cases, the developer still needs to go
refer to a log.
So product said, all right, we'll build
it in. Show me the logs.
That's what we need in order to make this tool.
Therefore, it needs to be part of this tool.
Yep.
You know, and it was, you know, people like to call it, you know, eat your own dog food.
So, this is Cuba Gooding Jr.
Show me the logs.
Yeah.
Show me the logs, Jerry.
Show me the logs.
Well, I think the geniuses in Austria, instead of doing the whole eat your own dog food saying,
they're like, we prefer to drink our own champagne.
Yes, that's Andy.
Andy says it all.
They do
champagne in Austria?
They can't call it champagne, but it is.
Well, they can buy it and have it.
But it wouldn't be their own.
So that's where logging has come out of the pipeline usage of logging.
Now, is it going to mature and grow into something that's going to compete more and more with, say, what Splunk does?
Which, again, I don't know 100% of everything that Splunk does.
I work on the pre-sale side.
I know a lot of it, but, you know.
So we'll see where this matures, but the potential of having more capability come in in the Dynatrace world around logging is only good stuff.
Yeah, I mean, the capabilities there are already amazing.
You know, obviously I think there are things, you know, Splunk might be able to grab logs from places where we wouldn't be able to put an agent maybe.
But it is, Dynatrace is making a statement in this beta for analytics related to logging, which brings it into the full analytics.
No, we were watching the live stream.
Oh, so did you?
You know the live stream.
That's right, yes.
You can get it at perform.dynatrace.com.
So the one thing, though, and here's the biggest thing.
This is why I think it might be.
We haven't covered the other two yet.
This is, well, no, based on the logging still,
where it might be almost irrelevant
how much we're encroaching on Splunk.
Splunk or Elk or whatever.
Because it's a different approach
where they mentioned,
I don't know if you caught it in there,
that they're going to do AI analysis of the logs.
Right, that's the analytics part.
Which that is a different,
that's a whole different side,
completely different bit of it.
So yeah, so leveraging the existing capability,
making it a little more robust,
and then it opens it up
to the full machine learning analytics.
So that's log analytics.
That's differentiated.
I think that's differentiated from your Splunk/ELK experience, of most people that say, show me your logs, right?
Roll me an index and then make me a Splunk query master.
It addresses an issue that many Splunk and Sumo Logic and even ELK Stack customers have,
which is we want to see the deep diagnostic stuff so we can correlate it all.
Right.
Now, this is a specific use case for those logging tools, which is fairly narrow in the analytics side of the house.
For instance, all of these log analysis tools,
they really just operate against text files. So, I mean, they could be pulling files from systems
that really Dynatrace doesn't even integrate with very well.
They could be providing data files and logs from mainframe hosts,
from remote non-integrated systems for business purposes, sales reports that are generated.
So very different types of data.
But definitely that area that many customers that have both deep diagnosis tools on the APM side as well as log analysis tools,
they spend enormous amounts of labor to try and integrate those two data sets.
But here's the big difference.
And this will make it seamless.
Here's the big difference, though.
And I'll even say this of AppMon.
So many tools out there, including AppMon, which I love.
I've been an AppMon user for years and years.
Love AppMon, right?
I'm what we call internally an AppMon hugger, although I've come to see the light on Dynatrace because it's just really
getting amazing, right?
Tools I think like Splunk, like
Atman, like most other
tools that are out there now, provide
data. They do not provide
information.
Yeah, they're
data collectors. They're data collectors.
They're not meaning creators.
Right. And that I think is one of the big differences
in what we do, is that we're turning
that data into, in many,
many cases, information.
Not just the whole AI, here's the
hotspot, but correlating
all the dependencies. Let's go with a different term.
Actionable intelligence. No, it's information.
Well, actionable intelligence, but if you want to
keep it to
the definitional, it is information.
The semantic of the information age, you're absolutely right.
That is the definition.
That's the leverage definition.
Let's go to the next one.
Yes.
In the list of announcements that came out is management zones.
Okay.
So in Dynatrace, as we heard from Andy yesterday in casual conversation, everything in Dynatrace now is a tag.
Everything can be tagged.
And so there are ways to arrange tags.
Do management zones, is it a way of leveraging and tagging?
And is it just a hierarchical or an abstraction layer for managing stuff?
What did you hear about management zones?
Well, let me put it in the terms of a true microservices architecture, because that gives you the best example.
Right.
Anything, the smaller environment you get, still very important.
But to make an example of it, let's go with this big monolithic type application, right?
Right.
I mean, sorry, big microservices type application.
Microservices, sprawling swamp of microservices. You have 500 services.
You have 30 instances of each service running in 15 data centers.
You have teams in-house working on some service,
some teams out-of-house working on some services,
and you have one agent on all of it.
So when you look at that service... Topology, per se.
I should know the name of it.
Huge spaghetti, right?
And everyone's relying
on AI to analyze it, but when you're
like, well, I work on these two services
and this is what I care about. I want to look at views
specific to this. Well, Dynatrace
is all or nothing.
Ah, so it is
a way to deconstruct pieces.
Management zones now, yeah. It's simple.
You're going to chop it up.
You can have, here's everything that's infrastructure. If the infrastructure team only cares about infrastructure, they're going to have all their views that are just infrastructure.
Are you streaming?
Yes, we are.
Would you like to share a story?
Were you in the morning session?
Yes, I was over in the main session.
I'm just kind of around here.
What do you guys do?
We're talking about management zones today.
Management zones?
Yes.
Actually, there are two different live podcasts
you're on right now, Bob.
Oh.
I don't need a podcast, man.
No.
That's fine.
Okay.
All right.
So back to management zones and tagging.
Okay.
So, yeah.
So you're going to have...
So you can slice and dice vertically and horizontally.
And then you're going to be able to filter your dashboards
by these things.
If I...
Right.
Let's say we have a third-party vendor working on one of our other services.
We want them to have access to the Dynatrace data, but we don't want them to see everything else.
Well, great.
We tag their stuff in the management zone, give them access, set their access to just that zone.
Now they're walled.
But if the problem to their service is triggered by something on a host on another side,
they're going to see that it's triggered by the dependency.
They won't be able to drill into that.
Or they might be able to.
No, I don't think they will.
But they'll see that a host three services away is impacting you.
Work with them and get it figured out.
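Simplified well past the real Dynatrace model, the tag-filter idea behind management zones might be sketched like this: every entity carries tags, a zone is a tag filter, and a user scoped to a zone sees only matching entities:

    # Every entity carries tags; a zone is a tag filter; a user scoped to
    # a zone sees only matching entities. (Simplified sketch; names and
    # data shapes are made up, not the actual Dynatrace model.)
    ENTITIES = [
        {"id": "svc-cart",  "tags": {"team": "checkout", "env": "prod"}},
        {"id": "svc-auth",  "tags": {"team": "identity", "env": "prod"}},
        {"id": "host-db01", "tags": {"team": "infra",    "env": "prod"}},
    ]

    ZONES = {
        "checkout-zone": {"team": "checkout"},
        "infra-zone":    {"team": "infra"},
    }

    def visible_entities(zone_name):
        rules = ZONES[zone_name]
        return [e["id"] for e in ENTITIES
                if all(e["tags"].get(k) == v for k, v in rules.items())]

    # A third-party vendor scoped to checkout sees only their services:
    print(visible_entities("checkout-zone"))  # ['svc-cart']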
So I have a question on LDAP integration for management zones and managed objects.
Cool.
I don't know.
You don't know? Oh, darn.
I don't deal with LDAP so much.
What was the question?
That was it.
It's like, okay, for all these management
zones and managed objects
and things of that nature, permissions,
how is all this stored? Is it integrated
with LDAP?
You're going to be able to set user permissions.
Yep.
You're going to be able to set your permissions and assign people to management zones. It seems a natural way to store all of this data.
And if I have an Active Directory environment, do I use Active Directory?
If I have OpenLDAP, do I use that?
In the AppMon world, you've got all the auth module integrations that you would need.
But you also have, you can slice and dice down to agent groups and hosts and all sorts of stuff.
You can build out a very elaborate group structure, whether it's local to the Dynatrace auth module or you're using an external LDAP or Active Directory.
I think there's something interesting here because James and I were talking earlier offline about test data and infosec
and data classification.
People look at Dynatrace and they're like
do I have PII data
masked because you can see everything inside Dynatrace.
Maybe these management zones are
also good to say only certain
people can dig that deep.
Right. And other people can only see
other data, right?
But just want to clarify, Dynatrace is not going to automatically collect the PII data.
No, no, no.
Some things of it.
If you're going to try to capture someone's, let's say, social security number, that's not exposed by us by default.
Someone would actually have to go and configure and say, hey, I know the SSN is in this method argument.
Go in and I'm going to collect it. But most InfoSec people that don't understand much in the world will just say you have some hook into something.
You can see it.
You're not trusted until you prove to us that you have capabilities for access control, flow of information, storage and encryption, all that kind of stuff.
So this seems like management zones would be a good fit for regulated compliance type environments.
I'm in that in the financial sector.
Are you hoping your chief security officer doesn't get a hold of your management zone policy editor?
We'll have a lengthy conversation about aligning the management zone and accesses to the data classification.
Anyway, this is...
The one last important thing about the management zones is it's interesting because it's going into public beta, I think in March, you said?
In March, yeah.
And we do not have a date for it yet.
That's fine.
For when it's going to go because, specifically, this has been something people and customers have been desiring for quite a long time.
Right. And if we were to say it's going to
go into beta and EAP
and by
June we're going to release.
We don't want to take it that route.
We want to spend time with the users who
need this to figure out how do we
finesse this, how do we fine tune this, how do we make it.
And once we say it's good, then it's going to go.
So I was going to add the, in our
conversation with Red Hat,
but also talking to Pivotal,
in the OpenShift, in the Pivotal
world, management
zones are the alignment, right? Between
Kubernetes labels, if you're running in the public
cloud or something like that.
Management zones is also not just for
like we talked about, access control and stuff like
that. It's about cooperation
with cloud-based platforms or the labeling you would do in Kubernetes or something like that.
The namespace.
Yeah, I mean, in a lot of those things, like in Pivotal, you're probably going to be able to just inherit a lot of the tags you have in those components and bring it right in and then apply them.
Some kind of mapping.
You have in, let's say, in Pivotal, you have the, forget the regions or whatever you have, the dev region, the prod region and all that.
That you're going to be able to pull right on in.
Chris told us in their nomenclature it's projects, but it's leveraging namespace and labels within the Kubernetes world.
Which is great.
Yeah, yeah.
All those tags.
I mean, we import almost all these tags from these different things, so that's going to play right on into it.
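As a sketch of that label inheritance, with illustrative field names rather than the real import format, mapping Kubernetes metadata to zone-style tags could look like:

    # Field names are illustrative, not the real import format.
    def k8s_labels_to_tags(pod):
        """Turn pod namespace and labels into zone-friendly tags."""
        tags = {"k8s.namespace": pod["metadata"]["namespace"]}
        for key, value in pod["metadata"].get("labels", {}).items():
            tags[f"k8s.label.{key}"] = value
        return tags

    pod = {"metadata": {"namespace": "payments",
                        "labels": {"app": "cart", "tier": "frontend"}}}
    print(k8s_labels_to_tags(pod))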
So management zones is a way that the future of Dynatrace can integrate more easily on something as simple as tagging and labeling.
Yeah, it's just slicing and dicing,
giving you ways to filter and restrict,
or even if you're not restricted,
you could say, I want to see this dashboard
only in context of this.
Right.
The third thing that was under digital experience
was this key performance metrics.
We didn't really get the full scoop on what this third item was.
What was your take on this?
What is this?
Because it sounds so generic.
That's the first I've heard of it as well.
And we had a conversation with somebody yesterday.
I don't know if it was here or in one of the classrooms.
Yeah.
Where they talked about user experience index.
Do you remember that?
UEM, yeah.
No, not UEM.
The user experience index versus Apdex.
User experience index versus Apdex.
So right now, sorry for chewing some seeds and nuts on air.
You've got to have high fiber.
Keep your fiber intake happening.
So right now we have Apdex,
and then last year we introduced speed index and visually complete.
Yeah.
Yeah, was it not?
Dynatrace 7 was visually complete, right? Last year at Perform we talked all about it. And so the problem is that we said, okay, these are the metrics you're going to use. And some people say, well, for my pages I really care about time to first byte, because that's SEO.
Yeah, right. And that's kind of the example they gave.
It depends on the design of the app, but yeah.
So we have all these other metrics.
We have all these other W3C timings already.
Yeah.
Now, this feels a little bit more like a minor announcement because the data is already there.
Right.
We already have.
We're keying off of a metric.
But for the longest time, we only allowed you to really key off of Apdex, and I think maybe visually complete.
Now they're saying, pick any one of them.
Pick anyone that you want.
So you can have a rendering KPI?
Yeah.
Yeah.
Your KPI flexibility.
So hopefully people understand what rendering actually means.
Well, it has to be a W3C metric,
or one of ours.
Because if you look at the...
It has to be one of the ones that we have.
I don't know which ones exactly we have.
But if it fits into the list of what's ours,
whatever this rendering metric you speak of, Mr. Pulley,
which is this metric?
Well, rendering is something that everybody asks for
and nobody really seems to know what it is.
And they all have a different idea.
And they all have a different idea.
Ultimately, the common parlance is
what is the cost of the code running on the client?
And how much is it adding to my response time?
The technical definition is, how long does the page compositing feature take inside of
your rendering engine on your browser?
Yeah, yeah.
So there's a technical metric, which is very specific, and there's a common metric. But it seems like every single person who's just getting to understand what it means to do performance testing is being driven by a manager that says, I have to understand what the rendering cost is.
Right, and it's really too late to do that in test.
They should be asking that much earlier.
And if Dynatrace can provide that in dev environments and functional environments
and begin reporting that metric back,
the earlier we ask that performance question, the cheaper it is to fix.
Right, right.
So it takes what were once kind of boxed in.
Dynatrace is saying Apdex and these two things.
Yeah.
Versus...
It's going to open up to a lot more of them.
I'm sure we're going to find out feedback from customers.
Oh, we want to do this and that.
Because right out of the box we saw...
And I cut you off.
I'm sorry.
But I saw right out of the box we're going to have...
You can set...
Well, I believe they were indicating you could set one of them for page loads and set a separate
one for your XHR actions.
And then those are the metrics that get used in the AppDex formula.
So your Apdex aggregate and the AI?
Well, I don't know if they'd be used in the Apdex.
Was that what they were saying?
Well, Apdex is a loose algorithm, right?
It's frustrating and tolerating.
But I thought in the thing you said that's what the AI is going to use, but you made a mistake.
I don't remember what they said.
I thought you said that's what AI is going to do.
But the source metric is now flexible.
Yes.
Behind those performance games.
And I'm sure we're going to see a lot more development from it because I can imagine already someone's going to say,
for my XHR actions, they can all be this one.
But for this one particular page, they want it to be different.
So I'm sure the next iteration, they're going to have flexibility of you're going to be able to group XHR actions into some for this, some for that.
Probably, you know, that's typically
the cycle of these things that we see.
And fully supporting async and depending on what you
built into the web app.
So there you have it. Those are the three.
But the last one is the one I'm most excited about.
There's a fourth one?
Replay?
I didn't catch replay.
So a little while ago, we brought up.
So Mr. Wilson is breaking news here.
Breaking news.
This just in.
I'm Sam the Eagle.
This just in with replay.
Gary Gnu with good gnus.
Whatever.
Anyway.
So several months ago, you might have seen a press announcement that we bought this company called Qumram.
Which is a replay technology.
Right.
Basic idea is they went and woodshopped
it. They stripped it back down.
They're
feeding it to Dynatrace. So now, for all
your real users, and again,
I know as much as they've shown us.
I don't know anything else. I'm dying
to get on and find out more.
But the idea is going to be you'll go into your RUM,
a.k.a. user experience configuration, click another slider,
and along with the JavaScript that's going to get you all your UEM data,
it's also going to, in some way or fashion, not movie record,
but looks like a movie, record the entire user session
that you can watch
on your screen as they're clicking and typing and interacting, sorry, with your website.
So now, when you have an error or some things going wrong, you say, okay, I have a JavaScript
error and I don't quite know.
I can see what the error is, but I have no idea how this really impacted the page, why
this is responsible for driving down things.
You can go click the replay
button and see what the user saw
and acted. In the example they gave, there
was a spinner. Yeah, there was a nice little
spinner going. They did a country drop down and a
spinner popped up and it was a single page app. So they
clicked off of it, went back into the cart and the
spinner was just... So this is replay
really at the client layer.
Yeah. So could we
replay 10,000 clients all at the same time?
Well, you would have to have 10,000 browsers all at the same time open.
Yeah.
I don't know if you can.
Oh, I see where you're going.
Well, no, no, no.
It's because it's not clicking through.
It's not making the backend.
It's not making the backend local.
So it's more of a...
This is like...
This is on the...
Think of it as a recording.
They're Tealeaf-ish.
Yes, but Tealeaf kind of died a long time ago with that stuff.
They can't really do the single-page apps and all this.
This is...
Yeah, it's the same thing.
It's a visual recording of that data.
Right.
Meanwhile, someone over at Sauce Labs is crying
because that means that we can't really replay 10,000 browsers all at once.
Right.
That would have been, you know.
So this technology, this idea of the visual thing gets back to like in the testing tool world,
automated testing tools, functional testing tools.
The screen capture capability has been there for a very long time.
It was always at small scale in a little test lab.
Maybe a single session.
But was it single screenshots or an actual video?
It would be a video.
Okay, okay.
So some of the capabilities
have been there
on very small scale.
Right.
Never this long.
I think about Tealeaf.
There was a moment in there
where Tealeaf did this.
The HP stack had a little bit of this,
but it never scaled.
It was never large.
Well, to me,
the biggest question in my head
is how do we do this
without throwing tons of overhead?
And I'm going to ask that question because I work for this company.
Chewing up someone's battery?
Well, yeah.
I don't know if it's for mobile, but it might be.
I think this is clearly like an MPEG-4 type item.
Well, I don't even think it's MPEG-4.
We're only sending differences in the screens.
Well, no, but you're seeing the mouse.
You're seeing the type.
Well, that's differences in the screens.
True.
But you're seeing the mouse move.
You're seeing the letters come up on the typing.
Now, interestingly enough, they already had mentioned, well, that means they're going to see people entering their credit card.
And the answer is, sure enough, we are.
That's why when you set it up, you're going to say credit card fields don't capture. So when you think about this kind of technology,
there's a lot of security issues or privacy issues that come in.
Obviously, we don't want to capture people's numbers.
So this is going to be built into it.
You're going to have that restriction process.
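A tiny sketch of that restriction process, with illustrative field names and event shapes: configured sensitive fields get blanked before a replay event is ever stored:

    # Field names and event shape are illustrative only.
    MASKED_FIELDS = {"cardNumber", "cvv", "ssn"}

    def mask_event(event):
        """Blank configured sensitive fields before an event is stored."""
        if event["type"] == "input" and event["field"] in MASKED_FIELDS:
            return {**event, "value": "*" * len(event["value"])}
        return event

    recorded = [
        {"type": "click", "field": "buyButton",  "value": ""},
        {"type": "input", "field": "cardNumber", "value": "4111111111111111"},
    ]
    print([mask_event(e) for e in recorded])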
But it might be a good fit for an internal tool, internal employees.
I think it's going to be good for everybody.
Of course.
But, I mean, when it comes to being like a consumer public kind of piece, then you probably run into these privacy issues.
Yeah, but they're building that into it.
So the cool thing about Qumram is that we, I don't know if you ever met Artie.
Okay, so he was behind this project.
He went and evaluated a ton of these replay-type technologies.
Right.
And found the Qumram team was basically the ones doing it kind of the Dynatrace way, where we're like, we're
team was basically the ones doing it kind of the Dynatrace way where we're like, we're
going to go completely different than the way everyone else is doing it.
We've got a whole new way we're going to go for it.
Right, right.
So I think we're all really curious and really interested to see how it goes.
And especially myself, I work on a lot of the agentless RUM projects, something like
what used to be called Demandware, which is now Salesforce
Commerce Cloud, where
it's a SaaS application, but it's
a public SaaS application.
So we can inject
our script into the pages,
but we can't put an agent anywhere,
so we use CORS and send it back.
So that naturally restricts
the richness of the data we get, because we have no back-end
application data.
We're getting user information,
which is a ton more than anybody else can get in general.
But now if we can say to these people,
at least what I'm excited about,
say, oh, and by the way, we can also add replay to that.
Nice.
So now you do have those customers.
Your customer resolution center,
all those other kind of stuff where, you know,
Mark calls up because he was trying to buy a new microphone.
You know, it would be really nice if we could bundle replays
so we could see the individual session on the back end.
And like if we have a two-screen computer, one screen we can see the user actions,
and on the second screen we can see the actual Dynatrace back end calls
directly related to that from that individual client.
Well, it's sort of there.
I mean, you have the entire visit.
You have all the clicks and all the pages on it.
And now you're watching a visual of that.
So all you would do is like, oh, it happened there.
I can go back and click on that user action in the visit.
And now you have that whole full stack call of it.
But you wouldn't be clicking on the video.
Yeah.
You'd be just going back to your user session.
Yeah, I was going to say, while the video runs,
it would actually bring it up.
Oh, okay.
And so you'd be able to watch it.
True, true, true.
Qumram.
Yes.
That's the weirdest name.
It is.
Q-U-M-R-A-M.
Now, I forget where they're from, but the development team is in Barcelona.
Isn't that the name of a city in Marvel's...
I thought it was a spice.
Like, hey, where's the Qumram?
I need a quarter teaspoon of Qumram.
That was actually a nice...
On my turkey dressing.
You're talking about that exotic drink, which is a mix of rum and cumin.
No, I don't think so.
I think so.
And that would taste terrible.
So I'm really trying to also insert myself
into the replay as early as possible
because I love Barcelona.
Yes.
And I need to...
Go to Barcelona.
I need to be sent there for work.
Yeah, and they have that sliced beef stuff
that's really, really good.
Oh, it's like the prosciutto,
but it's like a super sliced beef prosciutto.
And they have that really fantastic cathedral there.
Yes, the Gaudi Cathedral.
It's a bunch of old churches there, too.
There's a chocolate museum.
There's a chocolate museum.
It's wonderful.
It's the only European city I've been to twice.
You're right.
There is the Getty there as well.
Sagrada Familia.
So my last visit to Barcelona was an HP show there, where I was sick as a dog the entire time. Never saw the city. And then drove overnight at super high speed just to get to Frankfurt and get out of there. I mean, it was a good...
Is that the one involving the BMW story?
Yeah, yeah. That was an awesome story.
All right, so...
One other last piece, though.
One other last piece that they kind of talked about
was the... So I was talking about agentless RUM, which was like Salesforce Commerce Cloud and all,
but there's also the concept of,
let's say, Office 365,
Salesforce proper,
internal tools that are SaaS-based.
Again, you can't inject any agents anywhere.
You can't even inject the JavaScript onto the page
because there's no way to get it in there.
I actually came across this exact problem
because our internal organization is going mostly to SaaS-based applications, Office 365 and things of that nature.
Perfect example.
And I'm interested to hear your path because I figured out a path to get a timing record out of it.
So this is, again, RUM data only.
And we have a browser plugin that will inject the JavaScript tag.
Locally.
Locally.
So you go ahead and set up the application in Dynatrace.
You grab the tag.
You download the browser plugin.
You pop in the tag.
You say this is for Office 365 URL.
And now it's going to collect all the RUM data,
send it back to your Dynatrace instance.
For these very standard URIs that are related to Office 365
or some of these SaaS... Well, you just say domain
Office 365, whatever. Exactly.
Or domain Salesforce.com.
And the nice thing is, since it's a plug-in,
if you have a policy,
you can just deploy it to all your employees.
Did you find a different... See, I would
try to do that at a proxy.
I would inject it as a pass-through
on the proxy. Yes, Squid.
Squid has a capability
of grabbing
this data as it passes through.
You can actually get the W3C
time taken.
So you can get the request
time and the response time
directly at the proxy.
So you can put that data back.
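As a sketch of that proxy-side idea: Squid's default native access log puts the elapsed request time in milliseconds in the second field, so per-domain response times for your SaaS apps fall out of a simple log parse:

    from collections import defaultdict
    from urllib.parse import urlparse

    def response_times(access_log="/var/log/squid/access.log"):
        """Average elapsed time per destination domain, from Squid's log."""
        times = defaultdict(list)
        with open(access_log) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 7:
                    continue
                elapsed_ms = int(fields[1])          # elapsed time, ms
                domain = urlparse(fields[6]).netloc or fields[6]
                times[domain].append(elapsed_ms)
        return {d: sum(v) / len(v) for d, v in times.items()}

    # Average response time per SaaS domain, as seen at the proxy:
    # print(response_times())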
But also on the Squid side of the house,
you can correct the cache management problems on your software as a service domain
that they screwed up that's impacting response time.
So you can set your Squid policy to fix the CDN issues.
Just be careful if that might invalidate your license agreement with Microsoft or Salesforce.
Well, it's not just Microsoft.
It could be any software as a service solution.
But look at what Fastly did for all these years.
I mean, that's what they were basically doing, right?
Proxy-based accelerator.
It could be hosted JIRA, Office 365, Salesforce.
But it's great for hosted folks.
CRM, yeah.
Yeah.
So you're now no longer limited with that.
But again, you're going to have to deploy to your browsers, but this is specific.
You wouldn't be doing this for, let's say, Salesforce Commerce, where you can't inject, but everyone's on the Internet because you can't get that thing out there.
Right, right, right.
This is for your internal users.
Okay, so this is internal only software-
Yeah, because you're not going to deploy a browser extension to your customers out in the real world.
You're not, right?
Well, okay, if it's a browser extension, it really doesn't care what it's monitoring.
But they'd have to get the configuration.
I mean, you can use it for anything, right?
Yeah.
But what I'm saying is nobody, for something like Salesforce Commerce, which is not Salesforce,
this is the former Demandware, right, the people who own and run Salesforce Commerce
have access to their code.
They can inject the JavaScript agent
directly into their page.
So in my particular case was,
okay, I'm a company
and I'm moving from platform as a service
to software as a service
and now goes from a gray box
or white box to a black box.
And all I have is service level agreements that I can monitor, be it uptime and response time, right? And most, if you look at the software as a service agreements, the commercial...
What kind of software is this? Give me an example of a software for this.
Let's say it's, I don't know, you know, pick a CRM that's hosted. SugarCRM.
Their service level agreement is on uptime.
It's not on response time.
And for obvious reasons, you know, when you write your own service level agreement.
They don't control the last mile.
They'll never sign up for that.
Exactly.
But if I'm an enterprise and I'm highly dependent upon this, I want to know what the response times are of those critical business transactions, which are dynamic.
Will you be able to get this?
But these are your internal applications, right?
No.
No, these are external.
These are external hosted applications.
No, but I mean they're used by your internal employees.
Yes.
That's why I'm saying the browser one makes a lot of sense here because as an organization,
you can control a device policy to push the agent to all of your machines.
Sure.
And then put the browser plug-in in there.
Yeah, with the configuration so that your employees don't even have to necessarily be aware or they don't have to care.
It's almost like pushing an agent to every laptop. Yeah, so if you don't have a proxy that you can collect this at,
the browser-based agent to send it back to Dynatrace is the solution in this case.
I like both solutions.
Yeah, I mean, there's always a grow your own. I mean, we're both attacking the SaaS problem.
I mean, even now there's open monitoring stuff, right?
So there's someone made it and designed it and maintains it and keeps it up to date for you.
That's right.
Or if you want to be adventurous and constantly be writing your own monitoring code and trying to maintain that, you can do that.
So you can go with Dynatrace and leverage all the open monitoring right into Dynatrace.
Yeah, you can put a lot of stuff in there, too.
We'll pick up on some of it. Yeah. I saw an interesting thing about when people decide to create their own instead of buy a vendor,
in a few years they realize we've got a lot of problems we have to figure out,
but we're the only ones who know this.
Yeah.
We have to solve a lot of problems, and I can't hire someone to fix them.
Yeah.
But anyhow.
Yeah.
All right.
There's a lot of people it's good for.
Let's wrap up.
Sure.
So the three, yes, a little bit of a wrap up here.
The three things we covered that we just heard from the main stage this morning.
One, log analytics, which is taking existing log capabilities.
That's coming and extending it, doing some cool things with it.
That's beta.
That's going to be coming in no specific time frame or something.
There's something coming soon.
I forget what they said.
No, some of it's now.
Some of it's in there now.
Some of it's right now it's there.
Right.
Management zones will be beta March-ish.
Yes.
But if you really need to get it now, you can go if you're here.
Yep.
Or if you're not here, you can talk to Guido and you might be able to get into EAP.
Guido.
Management zones, leveraging the tagging inside Dynatrace.
The digital experience around being able to have more flexibility around key performance
metrics under the covers.
You're not just using Visual Complete.
You're not just stuck with the same old Apdex.
So now you get some flexibility.
And this last one, Replay.
Replay, yeah.
Qumram.
Qumram, yeah.
Qumram.
Not cumin.
Not Cameron.
Qumram.
That's where Iron Fist comes from in the Marvel comics.
But now they're Dynatrace.
Now they're Dynatrace.
All right.
So this was our morning session on Tuesday here from Dynatrace Perform 2018 in lascivious Las Vegas.
That's by Andy Grabner.
By Andy Grabner.
We're going to come back in the afternoon, maybe grab some more performance stories and maybe pick some things up from folks in the session, see what they learned.
Maybe grab a couple more vendors to chat with.
A couple more vendors to chat with when it's quiet in here in the afternoon.
So tune in later on for the Tuesday afternoon session.
Then I think we have a party tonight.
We'll maybe grab some live stuff at it as well.
Oh, and everybody, don't forget... Mark, at what time are you on?
3:30 or something on the main stage.
You can go to perform.dynatrace.com.
Perform.dynatrace.com.
You can see Mark in all his glory.
Mark actually wore a clean white dress shirt.
I'm going to make sure I spill some coffee on it.
I'm sort of dressed up.
Shall we sabotage him?
I'm still wearing PerfBytes Converse All-Stars, so that's still going to be good.
Everyone else is going to look really good.
I'm just going to look like me.
You could just say I tried to match my laces to my hair color.
I tried, but it's not close enough.
It is much more close to the purple quadrant in the Dynatrace logo.
Yes.
It's not really.
Yeah, it is a quadrant.
It's not five there.
Yeah.
Okay.
All right.
Well, until next time, everyone.
We'll catch you guys in the afternoon session.
Anything else?
Caffeine is good with lunch or breakfast.
I really was lacking.
I wanted bacon, more bacon.
Yeah, I think the bacon's gone.
I think that's the only thing I would say.
We are giving away things at the booth.
If anyone is listening, you can come take some stuff.
That's by Mike Villiger.
Yeah, yeah.
So, yeah, we'll see you guys in a little while.
All right.
Bye-bye.
Bye.