PurePerformance - Implementing Performance Engineering as a Self-Service with Sonja Chevre
Episode Date: January 30, 2019
Sonja Chevre reviews her session on how Dynatrace enables Performance Engineering as a Self-Service. She chats about how you can integrate performance testing tools with Dynatrace and how to embed performance diagnostics into the development workflow for faster automated feedback.
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance!
Hello from Dynatrace Perform 2019 in Las Vegas. I'm Andy Grabner and this is Up Close and Personal with product management and pure performance.
I want to introduce my guest Sonja.
Hey, Sonja Chevre actually, sorry for that. Hey Sonja, you just presented some great stuff.
I just caught you here outside of the breakout room. First of all, well, welcome to Vegas.
Thank you Andy, happy to be here.
Yeah, and I think we have a little bit of background noise, hopefully that will be okay for the
listeners.
But so you just came out of the session, can you quickly tell us what the session was about
and what people have actually missed that couldn't attend the session?
Sure.
So the session was called Implementing Performance Engineering as a Self-Service.
It was together with Sumit from Intuit.
So Sumit is a regular speaker at Perform. We really like his stories because he has been a classic APM user with Dynatrace AppMon.
And now he has shown great progress with his company moving toward a more dynamic approach,
really relying on the powerful features of Dynatrace to help the whole organization.
That's pretty cool. So Intuit, where Sumit is actually working, that's the company behind
TurboTax, a big service out there, for folks that are maybe not living in the
US and don't know them. They have millions of users, and it's great that we saw his progression
from AppMon. I remember last year I did a session with him where he showed how he
integrated AppMon in CI/CD, and now he has moved over to Dynatrace.
Yeah, and the next step for him was, he was always saying, you know, with AppMon there
were some experts in performance and you always had to go talk to these experts.
And you know as well as I do that that doesn't scale, right?
You have teams that are growing, more challenges for the business.
You want to push releases quickly out to the public,
and you cannot just rely on five guys
that have the knowledge about performance.
That's something that needs to be built into
the organization.
And with Dynatrace, they have really managed
to develop this approach of performance as self-service,
meaning all the teams are able to understand the information
and also to react on it, thanks to our AI root cause problem detection,
that's really useful to them.
That's pretty cool.
So basically it's kind of breaking up the old performance Center of Excellence teams, right?
Because before, everybody just went to the experts of the CoE,
and now you have, as you said, self-service. You had to wait for those experts because they were busy, of course, which slowed
everything down, and now people can just go into Dynatrace themselves. They don't even
have to go into Dynatrace: they receive a proactive notification about a problem,
the developers know what's going on in production, they can already react on it
and understand the source code without having to catch an
expert to tell them what the root cause is.
That's pretty cool.
And so, did he show, what else did people miss?
I mean, did he have some best practices, things?
Did he talk about implementation?
Yes, so he was also talking about testing because, you know, people always have the
same challenge with testing.
You are testing something, but you don't have the real production data and you don't have
the real use cases. So you have to write tests
assuming what the user will do. And one great thing that they are doing: they are reusing
the data from production to generate test data, but not only data but also tests. So
they have the use cases, you know, the analysis from the websites, from all the data,
where people are clicking, how they are using the applications, and they are able to reuse those patterns to generate tests for the pre-production
environment.
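To make that test-generation idea a bit more concrete, here is a minimal, purely illustrative Python sketch (not Intuit's actual implementation): it reads a hypothetical export of production user actions, ranks them by how often real users perform them, and turns the most frequent ones into a simple test plan for pre-production. The file names and record fields are assumptions.

```python
import json
from collections import Counter

# Hypothetical export of production user actions (e.g. from real user monitoring);
# each record is assumed to look like {"action": "click on /checkout", "duration_ms": 420}.
with open("production_user_actions.json") as f:
    actions = json.load(f)

# Rank actions by how often real users actually perform them.
frequency = Counter(record["action"] for record in actions)

# Turn the top N real-world actions into a simple test plan for pre-production,
# so tests cover what users really do instead of what we assume they do.
test_plan = [
    {"test_name": f"regression::{action}", "weight": count}
    for action, count in frequency.most_common(10)
]

with open("generated_test_plan.json", "w") as f:
    json.dump(test_plan, f, indent=2)
```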
That's awesome.
And they build this on their own or how does this work?
Yes, so they built this on their own, and they also have a test framework that they have
built themselves called Karate, which is open source and it sounds really interesting.
I think people should check this out.
So Karate, for people that want to follow up on that open source testing framework from
Intuit.
That's pretty cool.
So you mentioned the recording of the data.
I think, didn't we also work with a company recently?
Maybe you want to give them a little shout out?
Yeah, that's right.
There's a company called Probit in Europe, a startup doing that kind of thing as well,
based on the Dynatrace data. So on the Dynatrace real user monitoring data,
they are able to generate test patterns based on that data,
so you really have the possibility to test in pre-production what has really
been used in production. Not maybe some feature that nobody is using but you think
is really important to test; you really get to test the things that people are
really using. And you get this data, this test information, live and automatically updated.
That's really cool.
So he told some stories obviously, and you from a product management side, I assume you
had some tips and tricks, some best practices, some features that maybe people were not aware
of that Dynatrace actually has when it comes to performance engineering.
Yes, I tried to focus on the notifications, like for example on Slack: being able to add the
Slack notification in Dynatrace, so that when a problem happens
the developers get the information directly and can pass the
information on to their teams, not having to wait for people to log into
Dynatrace to look at the data, but being proactively informed. And really not only
operations but also developers, because they are the owners of the code, so they
should know what happens in production with the code.
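Dynatrace has a built-in Slack problem-notification integration that you configure in the settings rather than in code; purely to illustrate the flow Sonja describes, here is a hedged Python sketch that polls the Dynatrace Problems API (v2) and forwards open problems to a Slack incoming webhook. The environment URL, API token, and webhook URL are placeholders.

```python
import requests  # third-party; pip install requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # placeholder Dynatrace environment URL
DT_TOKEN = "dt0c01.EXAMPLE"                      # placeholder API token with problem-read scope
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder webhook URL

# Fetch recent problems from the Dynatrace Problems API (v2).
resp = requests.get(
    f"{DT_ENV}/api/v2/problems",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Forward anything still open straight to the team's Slack channel,
# so developers see production problems without logging in to Dynatrace.
for problem in resp.json().get("problems", []):
    if problem.get("status") == "OPEN":
        requests.post(
            SLACK_WEBHOOK,
            json={"text": f"Dynatrace problem {problem.get('displayId')}: {problem.get('title')}"},
            timeout=10,
        )
```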
Another part that I focused on is the implementation of load testing with Dynatrace and how to
analyze the load tests, being able to compare them with some powerful features in Dynatrace.
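For the tagging side of that integration, the Dynatrace load testing documentation describes adding an x-dynatrace-test header to each request so Dynatrace can attribute it to a test run and test step. A minimal Python sketch of that pattern follows; the exact set of supported header keys may differ by version, and the endpoint URL is a placeholder.

```python
import uuid
import requests  # third-party; pip install requests

# One name per load test run lets you filter and compare runs in Dynatrace later.
load_test_name = f"checkout-loadtest-{uuid.uuid4().hex[:8]}"

def tagged_request(url: str, test_step: str, virtual_user: int) -> requests.Response:
    """Send a request tagged with the x-dynatrace-test header so Dynatrace
    can attribute it to this load test run, step, and virtual user."""
    header_value = ";".join([
        f"LTN={load_test_name}",   # Load Test Name
        f"TSN={test_step}",        # Test Step Name
        f"VU={virtual_user}",      # Virtual User id
        "SI=python-script",        # Source Id of the load generator
    ])
    return requests.get(url, headers={"x-dynatrace-test": header_value}, timeout=10)

# Example: one virtual user hitting a placeholder endpoint for the "add-to-cart" step.
tagged_request("https://example.com/cart/add", test_step="add-to-cart", virtual_user=1)
```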
It's pretty cool, I like that feature.
All the diagnostic screens we now have in Dynatrace,
where you can say: show me the response time hotspots compared across time frames. Exactly, what happens between those two releases, those two time frames,
you can filter on different request attributes, so you have really many
capabilities for the tests, for the analysis. And then of course you can again drill down to the data,
drill down to the code, drill down to everything
that you need to know.
And you can also set baselines for your tests in production and pre-production and be
notified automatically when some test starts to take more time than you would expect.
So that means actually while you run your tests you can use the anomaly detection, the
custom alerts in Dynatrace to get proactive notifications while the test is running,
and maybe use this information to either abort the test early if you already know something is wrong.
Exactly, because you don't want to use resources for a test that you already know is going wrong
or already has some issues.
Or you can react and analyze after that what went wrong.
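As a rough illustration of that abort-early idea (this is not the NeoLoad or Jenkins integration itself, just an assumed sketch), a load-test driver could poll the Dynatrace Problems API during the run and stop the test as soon as a problem opens. The environment URL and token are placeholders, and signaling the load generator to stop is left as a comment.

```python
import time
import requests  # third-party; pip install requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # placeholder Dynatrace environment URL
DT_TOKEN = "dt0c01.EXAMPLE"                      # placeholder API token with problem-read scope

def open_problem_count() -> int:
    """Count currently open problems reported by Dynatrace."""
    resp = requests.get(
        f"{DT_ENV}/api/v2/problems",
        headers={"Authorization": f"Api-Token {DT_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return sum(1 for p in resp.json().get("problems", []) if p.get("status") == "OPEN")

def run_load_test_with_abort(duration_s: int = 1800, poll_every_s: int = 60) -> None:
    """Poll Dynatrace while the load test runs and abort early if a problem opens,
    instead of burning test resources on a run that is already known to be bad."""
    start = time.time()
    while time.time() - start < duration_s:
        if open_problem_count() > 0:
            print("Dynatrace opened a problem, aborting the load test early.")
            return  # here you would signal your load generator to stop
        time.sleep(poll_every_s)
    print("Load test finished without Dynatrace raising a problem.")
```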
I think this is actually something we showed at the HOT day.
I did a HOT day on continuous performance with Jenkins.
And Henrik from Neotys, one of our partners,
he also joined and he actually showed that Neotys
is automatically generating custom alerts
before they start the load test, based on their thresholds,
and then automatically taking the Dynatrace results
as they come in through the test to maybe,
you know, abort the test earlier.
Yeah, this is great stuff. They have done a nice job in their integration.
Hey Sonja, just to wrap it up. So you are the technical product manager responsible
for all the performance engineering integrations. I think there have been some new integrations
described on the doc pages and also on the blogs. Is there something you can tell
people, what they should look for on the doc pages or on the blogs in case they want to learn more? Yeah,
they should look for... so we have a section in the documentation on integrating with
Dynatrace, and we have a section for load testing: general documentation on load testing,
how to tag your load tests, how to analyze them. That also puts all the pointers to all the great work you have been doing with the blogs
and the performance clinics.
So that's kind of the main page to look at to start when integrating with load testing.
And then we have specific pages for LoadRunner, for JMeter, and a link to the NeoLoad integration.
That's pretty cool.
And before we close it, I think, just as in most of the sessions here in this particular space,
like in the DevOps and NoOps space, we started to come up with a couple of metrics that people
actually collected based on the Autonomous Cloud survey.
So for folks that are listening, there's a survey on the Perform app, on the Perform
mobile app, but I believe there's also a public link where people can fill out the Autonomous Cloud survey.
And I think, just remembering what kind of metrics were in there
that Sumit also presented: something like a dev-to-ops ratio,
I believe he talked about this.
Yeah, exactly.
That was one of the first slides that we showed. We are really sure that it is really important to also
look at the teams and the releases, you know. One of the key points on the slide
was also the number of releases they were able to do per sprint, because we
want to release faster and faster. And the dev-to-ops ratio, I think it was
four to one, if I remember correctly.
Yeah, that's cool.
Alright, so just a reminder folks, if you're interested, take the survey,
because it gives you a way to evaluate yourself,
like where you are and where you might be able to go
if you make some improvements on automation using Dynatrace.
Are you giving people some medals?
Like if you reach, you know, the best in each category, or
goals, you know, per category: when you are in one category, you should think of going
to the next category.
That's a good point.
We have to think about that.
But just a reminder, you know, fill it out.
If you are at Perform, go to the Perform app, or otherwise just look for it.
I'm sure we will post the link somewhere.
Hey, thank you, Sonja.
Thank you, Andy.
Sorry for messing up the name at the beginning of the intro. Sonja Chevre from
the technical product management team in Linz. Well, this is it for Pure Performance.
I'm Andy Grabner. Thank you.