PurePerformance - Dynatrace PERFORM 2018 Wednesday Lunch
Episode Date: January 31, 2018...
Transcript
Thank you. All right.
Hello, everyone.
All right.
Hello, Mark.
Hello.
This is PerfBytes Live.
Should we stand and stretch our feet?
I think we should stand up for this one.
Don't you think?
I have a cup of coffee. James, are you going?
Do you need a break, James?
This is PerfBytes
Live from Dynatrace Perform 2018
here in
Loosey Goosey, Las Vegas.
Loosey Goosey, Las Vegas. We could do that.
We could do
Lavender, Las Vegas.
Lackluster.
Lackluster, Las Vegas. Because I did try a little,
a couple slot machines the other night.
Lazy Las Vegas
Lovely Las Vegas
Liquid Las Vegas
Which to some people
If they drink a lot
A lot of L words
Lunch Meat Las Vegas
Meatloaf
Yeah. Meatloaf, Meatloaf, Meatloaf.
And AI apps.
Because everyone's here for lunch.
If you could see the ocean of people at the conference room tables.
You notice nobody seems bored.
Well, except for that one person over there.
They're working really hard.
Everybody's in conversations.
We've had a lot of good traffic at the booth.
A lot of people stopping by.
Maybe because we tweeted out the picture of James with the Dynatrace UFO that we're giving away.
We're giving away a UFO?
We're giving away a UFO.
And this is a real UFO?
Well, yes, it's real.
Okay.
Because the one that landed in Roswell was fiction.
So they say.
I think so.
Yes.
I want to believe.
Yes.
But I'm tired, so I don't believe.
So that'll be good.
So here we are.
You just had a session.
I just had a session on continuous performance.
Let's talk about your session, Mark.
Continuous performance.
I didn't get to make your session.
Dot, dot, dot.
And self-service.
Dot, dot, dot, dot, dot.
With fully automated feedback loops.
And meatloaf.
So it was good.
It was really taking a half-day workshop and compressing it into a 45-minute session.
So what would the highlights of your message be in that?
I think point number one.
Can you take the 40 minutes and make it maybe three right now?
Well, I think there's three takeaways that you would do, like when I do a submission, the three key takeaways. One is that your existing tooling probably already supports doing continuous load testing.
Right?
Right.
But you just need to change how you think you should be doing your job as a load tester.
So it's going to take change, but one, the tooling is probably already there for you.
So let's pause there for a moment and let's clarify continuous load testing.
Do you mean I have an environment, load is always being generated on it, or is it like on demand?
On a schedule, unattended.
Well, here's the thing.
It's funny you ask that because point number two is that you will have to adapt when you do load testing to the fact that there's continuous releases, maybe with continuous integration, continuous.
The pipeline, the main pipeline is continuously working.
It's just most people right now manually decide what combination of releases they push into the load test environment. Then they run a large integrated load test, and this is what I would call facilitated, manually conducted load testing. It's kind of the old-school way we used to do it. You get a release that comes in, you go ahead, fire up your generators, hit go, and say, all right, just kicked off a test, I've got 30 minutes to go play some Halo.
Yeah.
Now, you and I in an old load testing world imagined that that is really a good way to
do it.
Right.
That's the best we had.
What we might not perceive now is that doing that sort of thing, staging up a bunch of releases that are aligned, then pushing them into the environment and testing them as a unit, or as a defined set of app releases, is actually quite costly upstream.
Because you're sort of maybe putting a pause and you're pooling up a bunch of releases, trying to manually figure out which dependencies need the thing, the thing, the whatever,
where actually the development streams now are just continually going forward.
And granted, if they do have dependency issues and they're not very smart about coding against dependencies
or using the tools to figure those dependencies out, yeah, they're going to have collisions.
But you know what your job as a tester is, not just a performance tester?
As soon as possible, I want you to see big red errors
about the stupid dependency thing that we missed. So I'm not doing my job as a tester if I sit there in my lab saying, well, I'm going to wait till you guys get that dependency thing figured out. I'm not going to show any red errors. I'm not going to escalate. I'm not going to make it visible. I'm not going to be part of the display of we're-not-getting-it-right. So that's a thinking change, where rather than load testing late in the game, where I have to wait till you figure out the releases for me to do my job as a tester, instead,
I'm just going to take whatever you got, put it in the environment and hit the run button.
Right. So if the scripts are not maintained, we're going to get lots of errors.
We might get some false results, too, but hopefully we get lots of errors.
If the app is not functionally sound, you're going to get a lot of errors.
What's a company really paying me to do as a tester?
Run stuff as soon as possible and show if something's screwed up.
So that principle from a DevOps sort of frequent iterations, frequent feedback, that principle has to come into continuous performance testing versus manually attended facilitated.
So another interesting thing, I forget who it was we were talking with, Andy and I, on PurePerformance a while back.
The idea of the build comes into your environment, right?
Yeah.
You run your test.
Maybe your tests don't cover the new functionality, or maybe some things have changed, so now
your test scripts might fail themselves.
Right.
Well, what this team is doing, they happen to be a little bit of a unicorn, I'd say,
in load testing, because they are putting in the money to do most of their load testing through browser-driven tests.
Okay.
So if they're going to have, if we talk about virtual users, if they're going to have 1,000 virtual users,
there are going to be 1,000 browsers being driven, meaning that from the development stage on,
someone has created a browser-driven test for the unit test throughout.
So as soon as it comes in their environment,
they already have the new test ready to go,
and they don't have to wait.
They don't run the risk of those scripts failing.
Again, not everybody could do that,
but I just thought that was a fascinating idea,
because they're doing it over and over again.
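For a sense of what one of those browser-driven virtual users might look like, here is a minimal sketch using Selenium in Python. The URL and element locators are hypothetical, and driving 1,000 of these concurrently means 1,000 live browser instances, typically spread across a grid.

```python
# One browser-driven "virtual user" journey, sketched with Selenium.
# The URL and locators are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

def run_user_journey(base_url="https://shop.example.internal"):
    opts = Options()
    opts.add_argument("--headless")  # headless trims per-user overhead a bit
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(base_url)
        driver.find_element(By.ID, "search-box").send_keys("widget")
        driver.find_element(By.ID, "search-button").click()
        # The whole browser stack is exercised here, which is exactly why
        # 1,000 virtual users of this style means 1,000 browser processes.
        assert "results" in driver.title.lower()
    finally:
        driver.quit()

if __name__ == "__main__":
    run_user_journey()
```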
I will say that everyone should do that.
Right, it's just a lot more money, right?
Because how many tests can you run from a single desktop operating system that are browser-driven?
Yeah.
But this is an esoteric thing about browser-driven type of load tests.
Which is easier to maintain that one, right?
Because someone else is doing that.
No one's going to necessarily write a JMeter script.
Yeah.
Maybe, maybe not.
Yeah.
Go.
In my world, I don't have any GUI-based stuff at all.
We are simply a large suite of web services.
We're an enormous suite of microservices.
But my particular little world in credit is a fairly small subset of the greater world.
So web service-based stuff, yeah, the GUI stuff,
that's the easy way to do it.
But it's deceptively easy.
Right.
Because usually the people you get to build those kinds of scripts
are equally tasked now chasing feature changes.
You ever tried to do like a Selenium script with AngularJS?
It's a nightmare.
Okay.
I mean, so the idea of browser-driven GUI, it's still
as much as we might think, oh, that's easier.
Sure enough, we figured out how
to make it super weird and complex
and all that. So you're really kind of
apples and oranges in terms of pain.
You're saying you have more services that you can
test. But I'm not
advocating or saying one way or the other. But in
continuous performance for the session,
it was mostly just describing,
can I share, say, a JMeter script,
the same JMeter script that I'm running in development,
for 10 threads?
I just take that same JMeter script,
feed it different data and different properties,
and run it for 10,000 threads.
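As a rough sketch of that same-script idea: JMeter's non-GUI mode accepts -J properties, so the plan can read its thread count via ${__P(threads,10)} and be run at either scale. The file names and property names here are made up.

```python
# Run the *same* .jmx plan at two scales by feeding it different
# properties. The plan's Thread Group would read ${__P(threads,10)}
# and ${__P(rampup,30)}; paths and property names are hypothetical.
import subprocess

def run_jmeter(jmx, threads, rampup, results):
    subprocess.run([
        "jmeter", "-n",            # non-GUI mode, for unattended runs
        "-t", jmx,                 # the shared test plan
        f"-Jthreads={threads}",    # property consumed inside the plan
        f"-Jrampup={rampup}",
        "-l", results,             # JTL results file
    ], check=True)

# Developer's on-check-in smoke load vs. the shared perf-environment run:
run_jmeter("checkout.jmx", threads=10, rampup=30, results="dev.jtl")
run_jmeter("checkout.jmx", threads=10_000, rampup=600, results="perf.jtl")
```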
And actually, upstream in development, if that JMeter script is used and run on check-in for small unit- and component-level performance tests, the developers will be the first ones to see that, oh, the script's out of date.
Yeah, of course.
And they know right in their head for them it's five seconds.
Payload change, add two fields.
There's not this lengthy overhead of, I have to communicate and document to somebody on another team in a silo somewhere else.
They're empowered to change the test script. And in my world, your test assets need to be stored in the repository with the app.
So your test assets are stored and versioned in Git or Subversion or whatever you're doing. That's a requirement.
Right, and this kind of situation is exactly where the spirit of DevOps comes in, right?
Because you should have that person whose job is to do the performance slash load testing early on, or at some point, helping guide the developer on,
you're going to write a JMeter script that I'm going to use later on.
Let me teach you and show you what a good performance test is going to look like
so that when the developer writes one, they're just not going to write a simple little dumb script
that's going to test their little function.
They're going to write it with downstream a little bit more in mind, because you're bringing the teams together to say, let's serve each other.
You teach me.
I teach you.
You're going to build something that you create here.
I'm going to use that same thing later.
Let's do it right from the beginning.
Give and ye shall receive.
Yes, exactly.
Right? receive. Yes, exactly. Right. And this is a challenge for people who often think of our discipline in a silo where other people don't understand what we do and it's all on us to do
everything that we have to do. When you do that shift from silos into pipeline. Right. Suddenly
you're helping and facilitating people learn how to do more, better, cool stuff upstream, downstream,
and you're part of that flow. And I found JMeter was a good tool to do more better cool stuff upstream, downstream. And you're part of that flow.
And I found JMeter was a good tool to do this.
We're researching Locust.
But any tool that allows you more modular capability is a good thing, right?
Right, right.
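For comparison, a Locust equivalent is just Python, which is part of its modular appeal. A minimal sketch using the current Locust API, with invented endpoints:

```python
# locustfile.py -- minimal sketch, current Locust API; endpoints invented.
# Run headless with e.g.: locust -f locustfile.py --headless -u 10 -r 2
from locust import HttpUser, task, between

class CheckoutUser(HttpUser):
    host = "https://shop.example.internal"
    wait_time = between(1, 3)  # think time between tasks

    @task(3)
    def browse(self):
        self.client.get("/api/products")

    @task(1)
    def add_to_cart(self):
        self.client.post("/api/cart", json={"sku": "widget-1", "qty": 1})
```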
We're going to let them have a conversation.
Yes.
So the first thing in continuous performance is your tools probably already work to do this. Right.
But you just need to start thinking of those tools as command-line driven, running unattended, recurring tests.
Correct.
So where in development you would check in, on check-in they would run a JMeter script.
Right. Like if you have a sonar model, you do the first wave, and then you do more testing, and progressively more testing, and feed that data back to the developer.
If you think of load testing being on the main pipeline
or what I call the promotional flow.
Okay.
If load testing is not going well,
you'll block the pipeline.
And you're right back to a phase gate or a gatekeeping role.
Which is, all load testing should be doing is helping things get promoted faster.
Right.
And so, to some extent, there will be some releases that need to go to production that really don't need load testing.
They will say, this is such a small change, let's just push it through.
All we're doing is changing the amount
of threads. And hopefully your engineers are
mature enough or smart enough or your ops people
are smart enough to say, yeah, we can make this change and not do
load testing. Now we track that we
chose not to. There's always an assumed risk.
There is a risk, but someone can approve and say,
yeah, let's just go for it.
If we're moving fast enough in a green-blue
deployment, if it turns out we should have done load testing,
we flip the feature off and we go back and do the work.
So you want to enable that flexible pipeline operation.
So what I did, and I've actually talked with several companies over the last few years about this, is what I call an off-the-flow or out-of-the-flow continuous performance pipeline,
which means, if you draw a straight line on a page,
that's your main pipeline from development to ops.
Right.
Then you draw a circle right below that,
and that is the continuously running load test 24-7
in the load testing environment.
Right.
So you can have releases that cruise along the pipeline.
Some of them are flagged.
I don't want to do load testing.
Just go right on to prod.
Some of them will say, I need to do load testing,
and it will trigger from the main pipeline,
no matter what's happening in the load test environment,
push the latest release.
And this is all automated, you're saying.
So there's going to be a flag in that release.
With Puppet or Ansible, whatever.
So when it comes in, it's just going to, the process is going to.
And your company, your application architecture should have a commitment to zero downtime and green-blue deployment.
So I should be able to push that new release with rolling restarts and load balancing or whatever.
I should actually have a valid test that says in the middle of a load test running, I can push a release and not disrupt the flow.
So it's actually a good functional test of zero downtime deployment.
Yes, perfect.
Just like product.
I'm like, great.
So I get bonus testing out of that situation, which is really, really great.
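The routing logic being described is simple enough to sketch. The flag name and the deploy helpers below are hypothetical stand-ins for whatever your pipeline and config management (Jenkins, Bamboo, Puppet, Ansible, and so on) actually provide.

```python
# Sketch of the "off-the-flow" routing: flagged releases skip the perf
# environment; everything else is pushed into the always-running load
# test with a zero-downtime (green-blue / rolling) deploy.
def audit_log(release, note):
    print(f"[audit]  {release['id']}: {note}")

def promote_to_prod(release):
    print(f"[deploy] {release['id']} -> prod")

def deploy_to_perf_env(release):
    print(f"[deploy] {release['id']} -> perf (rolling restart, test keeps running)")

def route_release(release):
    if release.get("skip_load_test"):
        # Small, low-risk change: record the assumed risk and promote.
        audit_log(release, "load test skipped by flag; risk accepted")
        promote_to_prod(release)
    else:
        deploy_to_perf_env(release)

route_release({"id": "app1-1.4.2", "skip_load_test": True})
route_release({"id": "app2-2.0.0"})
```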
I don't know if you guys had stopwatches.
Oh, there's some loud people about stopwatches.
James ran out of stopwatches.
You can take some koozies.
See, look, you can do this with it too
I have one on my arm
See I've made a Wonder Woman cuff link
Cuff
Yes exactly
Alright so
That was great
The one thing that
So in that process
Let's get to number three and then I'll ask you my other question.
So that's the second principle, right?
And you said there's a third principle or no?
The third principle is automated decision making.
Okay.
Because you're going to have to, with notifications and the results and stuff,
if you have sort of a singular pass-fail pipeline criteria, if you have.
And you're waiting for Brent to come and look at the results.
Or, yeah, you have this sort of bottleneck thing.
How do I, this is a big challenge, right?
How do I aggregate all of those results which are nuanced?
You know, it's not just green or red.
There's yellow.
Right.
High risk, low risk, medium risk.
There's all different ways we can communicate that back to the pipeline.
But usually, even in a Jenkins or Bamboo pipeline, you've got to have some predetermined thresholds. And this is why I like, in Dynatrace, the QA versions of Dynatrace, even AppMon, where you have the trending piece, the corridor.
And so we can start communicating back.
If you use Dynatrace, you can clearly either have an alert or not alert or query an alert.
Am I in threshold or out of threshold?
And that's based on, say, a 30-day look-back on how we're doing.
So, build over build over build, suddenly we've gone out of tolerance. And now the fact that we're out of tolerance will actually, if it's a high-risk component, maybe even block the pipeline. Or you want to watch the results go from green to yellow, yellow to red. You see things like this in the financial trading markets.
You can sort of say, you know, they have the look back periods
and you can see the trends.
Yes, yes, yes.
Same kind of thing for performance.
You want to look at trending information.
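One way to sketch that corridor idea: compare the current build against a rolling baseline and map the distance into green, yellow, or red. In practice the history would come from Dynatrace or your results repository; the numbers below are invented.

```python
# Trend "corridor" check: this build's p90 response time against a
# rolling 30-day baseline. History would really come from your
# monitoring tool or results store; figures below are invented.
from statistics import mean, stdev

def corridor_verdict(history, current, yellow_sigma=2, red_sigma=3):
    mu, sigma = mean(history), stdev(history)
    if current > mu + red_sigma * sigma:
        return "red"      # out of tolerance: candidate to block the pipeline
    if current > mu + yellow_sigma * sigma:
        return "yellow"   # drifting: flag it for a human
    return "green"        # inside the corridor: safe to auto-promote

last_30_days = [412, 405, 399, 420, 415] * 6  # daily p90s in ms
print(corridor_verdict(last_30_days, current=520))  # -> "red"
```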
So a release manager, if it gets flagged,
whoa, we're not going to automatically push this; somebody decides whether to push.
If it's totally green, maybe you decide as a company
in your continuous deployment, go ahead and push it right to prod.
Some companies will never do that.
They'll just push to pre-prod,
and then the actual push to production
is a manual decision,
with a push button.
The last part is alerting and notification, and sort of some parsing of the results.
Now, if I were in a LoadRunner world, I'd use a LoadRunner API.
In the Otis world, there are some APIs out of their results tool or repository.
You can pull that out, parse that up a little bit,
and get it back on the main pipeline for a given release.
Right.
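Parsing the results back onto the pipeline can be as plain as reducing the JTL to a couple of numbers and an exit code. The column names below match JMeter's default CSV output; the thresholds are only illustrative.

```python
# Reduce a JMeter JTL (CSV) to numbers a pipeline step can act on.
# "elapsed" and "success" are default JTL columns; thresholds invented.
import csv, sys

def summarize(jtl_path):
    elapsed, errors, total = [], 0, 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            elapsed.append(int(row["elapsed"]))
            if row["success"] != "true":
                errors += 1
    elapsed.sort()
    p90 = elapsed[max(0, int(0.9 * len(elapsed)) - 1)]  # rough 90th percentile
    return p90, errors / total

if __name__ == "__main__":
    p90, err_rate = summarize(sys.argv[1])
    print(f"p90={p90}ms error_rate={err_rate:.2%}")
    # A non-zero exit fails the Jenkins/Bamboo step that invoked us.
    sys.exit(1 if err_rate > 0.01 or p90 > 2000 else 0)
```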
So that was the second question I was thinking of you just answered
because as soon as you said automated decision making,
I was thinking, well, do you have yes, no,
and then maybe where human steps in,
and you kind of addressed that right there,
you're going to have conditions where, yes, we can see
this is good, we're going to push it automatically.
Yes, we can see this is obviously bad,
we're going to break the pipeline right here.
And then that, hmm.
The middle ground. Machine doesn't know what to do
with it. That's where it's going to come into
someone like you and the other people,
or whoever else is in charge, right?
Get your chat Slack notification bot put together.
Have Davis on your bot world.
That'll work nicely.
And some way to interact and say,
you know, I get it's in the orange.
I think we'll be all right.
This is actually not that high risk of an X, Y, and Z.
And the other transactions look really good.
It's just this one weird transaction that no one ever does.
So that kind of subtlety.
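Getting that nuance into chat can start as small as a color-coded webhook post. Here is a sketch against a standard Slack incoming webhook, with the URL and wording as placeholders.

```python
# Push the nuanced verdict into chat so a human can make the
# "it's orange, but I think we'll be all right" call.
import json, urllib.request

EMOJI = {"green": ":white_check_mark:", "yellow": ":warning:", "red": ":rotating_light:"}

def notify(verdict, release, detail, webhook_url):
    payload = {"text": f"{EMOJI[verdict]} load test {verdict} for {release}: {detail}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

notify("yellow", "app1-1.4.3",
       "one low-traffic transaction out of corridor; everything else green",
       "https://hooks.slack.com/services/XXX/YYY/ZZZ")  # placeholder URL
```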
Now, there are a couple other things in the continuous operations mode. Self-service was the other part of this: do I have a way to let any engineer, maybe even people who don't know JMeter, right, or don't even know the test tool, sort of hop in and run a load test?
Okay.
Because I was the only performance engineer, the only load testing guy for quite a while here.
Yes.
And so I needed to give them a way to not have to understand the crazy folder structure of the
shell command line SSH session running JMeter. So I used a tool like Rundeck.
So I gave a little demo of Rundeck, using the GUI features for a job in Rundeck, and using those to send the properties and launch JMeter, so they now have sort of a GUI way to see it.
I think OctoPerf was, you remember OctoPerf, James? OctoPerf.
He's not on. James is there.
So OctoPerf, we've done some stuff with them. They have the same thing: give us a web-based GUI in order to do JMeter.
They do a hell of a lot more for actual scripting of JMeter.
Okay.
You should be on now.
That's very strange.
Get on the mic.
It was getting on the channel.
It was blinking.
OctoPerf.
Yeah, I like OctoPerf.
He likes OctoPerf.
Yeah, a lot.
Yeah.
Regardless, here are two cool things you can do with Rundeck.
You can, on demand, fill in the blanks, fill in the properties, hit go, and run your load test.
Right.
You can also have, like any good job scheduler, cron schedule capabilities.
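A sketch of the kind of script such a Rundeck job might wrap, assuming Rundeck's convention of handing job options to scripts as RD_OPTION_* environment variables; the option and property names are made up.

```python
# Self-service wrapper a Rundeck job could call. Assumes Rundeck's
# RD_OPTION_* environment-variable convention for job options;
# script/property names are hypothetical.
import os, subprocess

script  = os.environ.get("RD_OPTION_SCRIPT", "baseline.jmx")
threads = os.environ.get("RD_OPTION_THREADS", "100")
minutes = os.environ.get("RD_OPTION_DURATION_MIN", "30")
run_id  = os.environ.get("RD_JOB_EXECID", "manual")

subprocess.run([
    "jmeter", "-n",
    "-t", script,
    f"-Jthreads={threads}",
    f"-Jduration={int(minutes) * 60}",  # plan reads ${__P(duration)} in seconds
    "-l", f"results/{run_id}.jtl",
], check=True)
```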
Now, in the manually facilitated load testing world, you create a scenario.
And maybe you have 20 different scripts come together.
You run them all at the same time.
You're doing, like, a simulation of production load. Well, that's nothing more than running lots of scripts at the same time in kind of the same scenario. The way I do it, I actually enabled and showed Rundeck being able to wrap, here's the script one baseline test, and then script two baseline test, and script three baseline test.
And all the individual teams have to agree, hey, everybody, every day at 10 a.m., let's do our baseline test.
And so they all put together their profile, self-service for their own app.
This is the kind of baseline test parameters, and they schedule it,
and everyone agrees we're going to run it at the same time.
Now, they're still checking things in in dev, and they can run small load tests in dev, right? But if you're in the shared load test environment, which is more like pre-prod, the integrated performance test is nothing more than everyone agreeing, let's all run our JMeter scripts at the same time for that scenario. Then, let's say at noon, we all run a stress test, and then at 2 p.m., we run a scale test.
And overnight, we run a soak test.
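That agreed-upon daily rhythm is really just data; in Rundeck each entry would become a cron trigger on the teams' jobs. Times and names below follow the example in the conversation.

```python
# The agreed daily rhythm as data; each entry maps to a cron trigger
# on a Rundeck job. Times/test names follow the example above.
DAILY_SCHEDULE = {
    "0 10 * * *": "baseline",  # 10:00 -- everyone's baseline, together
    "0 12 * * *": "stress",    # 12:00 -- stress test
    "0 14 * * *": "scale",     # 14:00 -- scale test
    "0 22 * * *": "soak",      # 22:00 -- overnight soak (needs ~12h of data)
}

for cron, scenario in DAILY_SCHEDULE.items():
    print(f"{cron} -> run {scenario} scenario across all teams' scripts")
```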
You're empowered as a developer, and you could do this through flags on the pipeline,
or you could do this manually and get it scheduled and running.
And then they have full control to log in and change it. Hey, guys, do you do the soak test?
Let's say I have a new team.
Like, we just got our baseline test running, and I tell them,
you need to feed more test data to do 12 hours.
So I'm like, okay, we're not quite ready.
But when they're ready, they're like, we'll take our baseline test,
we'll put our CSV files or data files or whatever you have to do.
And then they're like, ah, awesome, now we're starting to get it.
So if somebody's not getting test results,
it's no longer on the separate siloed expert load testing guy named Mark.
Right.
It's a collaborative effort that everyone agrees to kind of come together and use this sort of independent cycling 24-7 continuous work.
So that's where the self-service piece comes in from a Rundeck perspective.
There's also times when people need to stop all the scheduled stuff
and do a triage emergency deep dive.
Right, right.
So we hit a button, go through all the jobs,
and just disable the scheduling,
and then we go back into manual mode.
Now that's, for me, driven by a JIRA ticket.
So if somebody puts in a load test JIRA ticket
and I have to go schedule it, put it on the calendar, et cetera,
what I've done is automated that. The load test Jira ticket number, the URL or the name of it, is the key in all of the results.
it, is the key in all of the results.
So all the jobs, if you have a load test ticket number, it'll store all the results for your
test cycle in that folder.
And then we parse it.
And we actually, after a test is run, we parse all the results and put all the links back
in a comment on that Jira ticket.
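Posting the links back is one call against Jira's standard comment endpoint; the host, credentials, and links below are placeholders.

```python
# Post result links back onto the load-test Jira ticket.
# /rest/api/2/issue/{key}/comment is Jira's standard comment endpoint;
# host, credentials, and links are placeholders.
import base64, json, urllib.request

def comment_on_ticket(ticket, links, host="https://jira.example.com",
                      user="ci-bot", token="secret"):
    auth = base64.b64encode(f"{user}:{token}".encode()).decode()
    body = "Load test results:\n" + "\n".join(links)
    req = urllib.request.Request(
        f"{host}/rest/api/2/issue/{ticket}/comment",
        data=json.dumps({"body": body}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {auth}"},
    )
    urllib.request.urlopen(req)

comment_on_ticket("PERF-123",
                  ["https://results.example.com/PERF-123/summary.html"])
```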
So a developer can say, I have a release of X, Y, and Z.
I want special load testing.
And I haven't done it yet, but you could set,
I might set, if they put in a priority P0, P1,
I'll automatically stop the scheduled running tests.
I'll flashback and clean up the systems, notify everyone, create the Slack channel, and then
bring everyone together and say, the load test environment is ready for you to do your
deep dive emergency load testing.
And we would manually, that's when you would want the human attention to be brought together,
but you want it to be continuously running unattended
when everyone else is out doing the rest of their job.
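The "stop everything for a triage" button can be a loop over the scheduled jobs. This sketch assumes Rundeck's job-schedule API (the schedule enable/disable endpoints available in newer API versions), with host, token, and job IDs as placeholders.

```python
# Silence the 24/7 schedule for an emergency deep dive, then re-enable.
# Assumes Rundeck's POST /api/{v}/job/{id}/schedule/{enable|disable}
# endpoints; host, token, and job IDs are placeholders.
import urllib.request

def set_schedules(job_ids, enabled, host="https://rundeck.example.com",
                  token="XXXX", api="38"):
    action = "enable" if enabled else "disable"
    for job_id in job_ids:
        req = urllib.request.Request(
            f"{host}/api/{api}/job/{job_id}/schedule/{action}",
            method="POST",
            headers={"X-Rundeck-Auth-Token": token},
        )
        urllib.request.urlopen(req)

# Triggered by a P0/P1 load-test ticket:
set_schedules(["baseline-app1", "baseline-app2", "soak-all"], enabled=False)
```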
So that concept was that everything I just said in the last 15 minutes,
that is the mini version of the session.
So you have the developers.
They have all this freedom to check in code
and actually run every day at the same time, so on and so forth,
how do you avoid the challenge of a different understanding of performance
and load profiles and scheduling among all the different groups that can check in code?
So you might have one group checking in.
Different maturity levels.
No think times, no pacing.
I didn't say that I tolerated crappy work.
Well, you may not catch it right away.
Oh, I definitely would.
Here's kind of the thing that we were mentioning before,
is that if you have even just bad app code,
or even a bad script. Now, the scripting practices, my job goes from being the guy who writes the script for you to being the guy that advises you. Come and advise, I'm right there for you, man. Let me block two days, get right in the Rally sprint schedule, whatever. Somebody says, I have to make changes to my script, hey, get Mark, Mark's that guy.
But I'm not Mark-that-guy to go do work like Brent in the corner, in a silo, that no one else can do. I'm here to help them. Hey, here's a few tweaks
we can do. We can do such and such. And once it's checked back into the repo, away they go.
But the two things that I will say, because when it's running continuously in the background,
if the app is screwed up, let's say a dependency is screwed up. Like I've got my app one and I'm dependent on app two.
Well, app two pushes a change that's really crappy.
And they look fine in the load test, but for some reason, functional integration, whatever, big red errors in the load testing world.
What we had talked about before is if you did a sort of manually facilitated load test project, whatever, we'd be in the lab with that
version and I would run the test and then I would log a bug and wait for a fix. And so maybe you
see the load test failures once. In the continuous model, it will just keep showing red, red, red,
red, red on the big monitors in the dev world. And the PM will eventually get the alerts in their chat.
Why is the app one load test failing, failing, failing, failing?
And pretty soon it comes back to the load.
So it's punishment by visibility of big red errors.
So it might be the dependency issue, but it could be someone wrote bad code and broke the load test.
It could be bad code in the app.
You get two bad scripts and they break the load test.
Sure.
And to be honest with you,
there are lots of people in the world
who don't know how to write good load testing scripts.
We know that.
Yes, that's exactly why I'm asking this question.
Well, that's also part of why going back,
I think we were discussing,
it should really be more that the load team
is coaching the development team early on
how to write these tests so that chances are... load team is not going to be running the test.
Why would a developer listen to a tester?
They've graduated from that role.
Because the thing you missed before, we share the scripts.
So the same script used for unit load testing or unit testing in JMeter is the same script I run.
And if you're a silo developer not helping write tests, you're going to be out of
my company.
No, no, no.
I mean, this is where you
receive. So if a developer
says, hey, I can get small 10 thread unit
tests, early load testing, and Mark's
going to use the same script and I'll understand the same
results. And they're the first ones to see a failure.
And that way I can't complain if the results come back and I don't like them, because I have some ownership.
We're all invested in the success.
And that way I know that the tester is also testing it in ways that it's meant to be operated.
But the tester is also going to be providing feedback to say, this is how you want it tested, but this is how it could also be done in different worlds.
There's the whole collaboration.
Part of the spirit of the whole DevOps thing is testers help developers, help operations, and all these other teams help each other to understand the different roles.
Software Engineer Across the Lifecycle.
The SEAL there.
It is.
So you show up and it's like, look, if we get this...
You're the PayPal SEAL for performance.
Well, maybe.
Can you bark?
No, no, no.
No, don't.
Don't go there.
It scares me.
But to your point, this is a cultural change from the siloed special team to somebody that works across the lifecycle.
And in DevOps, it was like, you have a vested interest in starting your performance testing early.
Let me give you a JMeter script as an example to get started.
Now they're getting some value out of it.
And that helps them detect stuff before it comes down the pipeline. But it's just enough load testing that they get enough value that they're like, oh, I
made this change to this web service or made this change.
I'll update the script at the same time and check it in.
And boom.
Then I inherit it.
I inherit only when I push the app, I push the scripts into the load test environment.
I've got the latest, greatest scripts with the latest, greatest code.
And it's no longer a throw your thing over the wall and punish the load testing
scripting team.
And if you think about, so this is not, I don't think this is an original idea, but
we hear it from Bernd, our CTO all the time, right?
Yeah.
When we did our transformation to Ruxit, when we did our transformation from six-month waterfall
to two-week release cycles, first thing he said is we have developers in an up-down silo.
I'm looking at vertical, right?
Vertical silos.
All these vertical silos, developer, testers, performance people, operations, business.
Take that, turn it on its side, and now you have a parallel track.
You remember that from all the Andy talks, right?
That was last year.
To me, that's the seal.
And that's what makes this work.
And that's what makes all this work because you're sharing all this information with each other.
You're co-educating people from different teams.
And that, again, when you really go back to what the spirit of DevOps is all about, which is why people say DevOps.
No, no, no.
Freedom is dangerous.
Knowledge is dangerous.
This is like cats and dogs living together.
Okay.
All right.
Well, good luck being in business next year.
Yeah.
But I like the collaborative part.
I can't expect someone to do something if they're not going to get something out of it.
And the developers are notorious for, I need them to do extra work for me.
And then they're like, why?
It has value to them.
Now it's like, they'll get something out of it.
They'll probably do it.
And they're not being hammered over the head like you have to do this from management. They're like, no, I like seeing the telemetry in development.
Then I can see it in QA.
And they spend less time working on this stuff.
It helps me make decisions and be a smarter developer and grow.
I mean, they're getting something out of it.
So, Mark, what you're saying is it's
all about them. And as long as you can make it all
about them... You've got to keep your developers happy.
And I get less punished downstream
in the pipeline, including the
ops guys who are like, oh, I'm so glad
you... You know what I mean?
Do not punish your downstream
friends. To that point, exactly. The developer's going to
have to do a little bit of extra work to write this little JMeter
script, which is going to be pretty simple for them.
For them, yeah.
Or whatever tool.
But what they get out of it is they're not going to be working on something else
and have five things coming back to them because they're going to use that early on in their cycle.
I mean, this is what we've been talking about for years now.
They're going to be testing that the proper way early on in their cycle.
Once they pass it out, they know from everything they could have done, it's good code.
We hope. Well, I mean, there could be
load, there could be concurrency, there could be other things,
but there is enough for them to say
at dev level, this is
ready to go to the next phase. And guess what?
That's a ton of workload off of them. It could be a baby step.
But it's the next step in the flow.
Yeah, but it's not going to be,
oh, I'm going to push this out, and then while I'm working
on this next crazy thing,
those four other things that I pushed out are all coming back to me.
Less likely.
It's going to be less likely.
So they do have the time.
Now, when they say, I don't have time to do this,
they say, well, no, I do have the time because I know this is going to save me later.
So the blowback rate goes down.
Yes, exactly.
Now, you might translate that to a QA metric or something.
But they get the immediate feedback.
So the defect is addressed closer to point of origin.
Their inception.
It's in their head.
Cheaper.
Cheaper because they'll remember.
Oh, darn it, stupid me.
And we fixed the code.
And they didn't also then code five things that were dependent on that that's already out there.
For someone else.
That doesn't work.
Yeah, yeah.
Now, the one thing I wanted to address, because when you listen to this,
hey, this sounds great and groovy, man, and it's going to be super easy.
But in some cases, one of the challenges,
especially for the team maintaining this CI load or performance testing environment,
load testing environment,
is the environment's going to have to be maintained, right?
So data refreshes, maybe some data massaging.
I can imagine maybe if you're testing,
we had an issue a long time ago
where we were testing message boards on our site.
And it was going to a message, posting a reply.
And that message, by the end of, I guess it was probably about six months of load testing because this project was so long, was humongous.
And then by the end of the testing, we were like, boy, you guys have been breaking this build worse and worse and worse because things are slowing down more and more and more, especially on the message boards.
But that's because we have a thread that has like 20 million replies on it, which doesn't exist.
That's because you've not returned your data to its same initial condition.
Right. So all that has to be factored in.
This is the shopping cart with a million items in it.
But meanwhile, you also want to make sure your application is growing as it would be growing in production.
So you still have these models to consider and take into it.
Because if you just set it back to zero every time, maybe performance is going to be great every time.
So set it back to a realistic, real-world initial state, hopefully.
Exactly.
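A pre-test hook for that reset can be as blunt as restoring a realistic seeded snapshot; the PostgreSQL commands here are stand-ins for whatever your data layer actually needs.

```python
# Return test data to a *realistic* initial state (not empty) before a
# cycle. The PostgreSQL commands are placeholders for your data layer.
import subprocess

def reset_test_data(snapshot="/seeds/prod_like_snapshot.dump", db="loadtest"):
    subprocess.run(["dropdb", "--if-exists", db], check=True)
    subprocess.run(["createdb", db], check=True)
    subprocess.run(["pg_restore", "-d", db, snapshot], check=True)

reset_test_data()
```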
I think to James' point, it's still that old throw-it-over-the-wall, disconnected, siloed way of maintaining test assets.
And QA people outside of performance have these same issues, right?
One of the things I'm thinking is that it used to be punishment was the rule
and you were just elated if a developer or an upstream engineer
actually did some extra work to maintain the script.
I mean, that was like the exception.
Now I think we're flipping it on its ear. And if you have a small pool of experts in the performance team,
they have less and less time. They now just become, like you say, advisors or consultants
back to those teams saying, by the exception, I'm having to do some heavy lifting to fix an
incomplete test design. But I can, as a senior engineer in performance, I can predict, here's a new team that's never
done testing before.
I'm going to book one month of extra cadence to work with them, bring them up to speed.
But I have teams that are really capable, full stack developers, if you wanted to call
them that, mostly full stack.
Once they pick up my funky JMeter way of doing stuff,
they're like, this is great.
I know how to do this.
We can do this.
And then they're like, we still hate JMeter's GUI or JMX file,
so can you move us to Locust or move us to another tool?
Or if you're in the LoadRunner world, I'm sorry, we don't know C.
But take your JMeter.
Well, you know, Java's for failed C developers anyway.
I know.
So anyway, I think that's right, though.
But it flips it opposite.
Now the failing, screwed-up, bad test scripting becomes the exception,
whereby a smaller number of experts can be used to help teams get better at what they're doing.
At least that's what I'm experiencing anyway.
So that's continuous performance.
The other thing, the last part I didn't quite get in the session because we ran out of time because I always run out of time, was how to get started.
Of course I talk a lot.
The one thing I wanted to share is that if you draw the three circles between dev, performance, and prod, and you have a flow through there to get this continuous performance thing happening, don't worry about prod so much. Just get started between dev and test, dev and perf.
It's funny you say this, you know, because at the same time you were giving your talk, our friends Beer and Nester from Citrix, who were on PurePerformance, they were on talking about their transformation. And I told you,
I think we said the other day,
they're about halfway through.
And part of their transformation
has been working with test and dev,
getting them all united.
And there's another project going on
between test and ops.
I'd have to go back to get the particulars,
but they started in the one section,
and they're going to start slowly working on bridging those two all together now because...
They'll have three circles.
Yes, but it's about exactly as you're saying.
Start with one area.
Don't try to do the whole thing at once because as you're figuring out with the one area,
you'll know it's going to work better once you get to, let's say, the prod side,
which is a lot more sensitive to this stuff, right?
Once you're at prod, you don't want to be doing as much experimentation
in practice and technique
and let's see how we're going to do this.
You can work that out with your dev team
and the test team and get that going.
It's funny because they were on the same time you were.
So it's funny.
It's a coincidence, a coinkydink.
Coinkydink.
Of course, the question everybody wants to know in order to go through this transformation,
do I have to get tattoos?
Do I have to color or shave my head?
Piercings.
Piercings. Yes.
I'm really not into the piercing.
I'm just going to encourage the non-permanent hair coloring.
That's always a good one.
Yeah.
And, see, tattoos and piercings are usually pretty permanent.
Piercings can heal.
So the tattoos are definitely more permanent of the three.
So I would just start with hair coloring.
You could do a UV ink.
What about a henna tattoo?
What about a UV ink?
Very temporary.
A UV ink tattoo that you can only see when you go into Spencer Gifts
by the Blacklight posters.
Yeah, well, unless you own your own Blacklight.
Yes, that too.
Where you go on those certain rides.
I think that was an episode of CSI based here in Las Vegas.
There's one in CSI.
Then it was in Gone in 60 Seconds.
A tattoo that was done with ultraviolet ink.
I didn't even know that existed.
Okay, so do you notice they're packing up the swag booth?
The swag booth.
So if you wanted a T-shirt, there's a whole stack of Dynatrace UFOs over there on sale.
We're going to give one away.
I wonder if they get cheaper the closer they get to shutting down.
I have no idea.
Probably not because they have a shipping container.
Not a container, but sure.
You can try.
The thing that I want to figure out is mounting the UFO on a stand.
I've got to figure that out.
Yeah, that was interesting.
Maybe it's just resting on top there?
I think it has the stem, and it just sits upside down on the stem.
So it would hold it in there.
No, it was in there.
You could shake the thing, and it would stay on.
Well, because it's got the cable.
Right, right.
There's someone who would know.
What this means, though, is that
we're over halfway through
this last day.
We might do one
more broadcast here as we get to the
evening thing, but we might do that portably.
We've never done a
portable broadcast, have we?
No, I don't think we have. And I don't think the party's here.
It's going to be at Brooklyn Bowl.
There's going to be bowling and loud music.
So it might be challenging.
We'll have to see if we can pull it off.
We can pull that off.
Anyway, Dynatrace Perform.
Perform, no?
We'll do...
There'll be some more stories.
We might pull some more stories in.
Yeah.
Let's perform 2018.
Perform 2018.
Dynatrace.
You just said perform.
Well, it is...
What did I say?
Yes.
Perform 2018.
Yes.
Last two things I wanted to say.
No, wait.
Wait, wait, wait.
84 Lumber.
Okay, one last thing I wanted to say. First of all, you also have not told a cow joke today.
Oh, okay.
So I'll tell a cow joke today.
Day three cow joke.
Okay, day three cow joke today.
I do want to point out that this weekend is, in fact, our Super Bowl for performance.
So 84.
84, a lot of numbers.
We're going to have a whole new set of failures.
Although you can't use the word Super Bowl.
Or you'll get shut down and sued.
But anyhow.
Yes.
So the cow joke.
All right.
Day three, cow joke.
Cow walks into a bar.
A lot of cows in bars in my jokes.
Apparently.
And he goes, bartender, let me get a glass of milk.
The bartender looks at the cow.
He's like, you're a cow, aren't you, right?
The cow's like, yeah, last time I looked, buddy, I was a cow.
He's like, well, you have udders that make milk, right?
He's like, yeah.
He's like, well, why don't you just get some of your own milk?
And the cow goes, oh, jeez, I never thought of that.
Thank you. See, so I told you. I told you, no one believes me when I tell them the joke's on you with my jokes. And then every time I tell them, they look at me like, I can't believe you just said that. But I told you, it's a cow joke.
It's a good thing Andrew Dice Clay is not standing here.
Because he would steal my material?
No, he would smack you.
Yes, you would be smacked.
Why Andrew Dice Clay, of all people?
Yeah.
It's Vegas.
So let's sign off for now.
And then we'll come back for the total show wrap-up, maybe another few stories.
All right.
That sounds good.
Thank you, everyone, for listening.
I don't think I'm broadcasting again.
Stupid connections here.
I'll just download the...