PurePerformance - 032 Agile Performance Engineering with Rick Boyd
Episode Date: April 10, 2017. In the second episode with Rick Boyd (check out his GitHub repo - https://github.com/DJRickyB ) we talk about how performance engineering evolved over time – especially in an agile and DevOps setting. It’s about how to evolve your traditional performance testing towards injecting performance engineering into your organizational DNA, providing performance engineering as a service and making it easily accessible to developers whenever they need performance feedback. Rick gives us insights on how he is currently transforming performance engineering at IBM Watson. We also gave a couple of shout-outs to Mark Tomlinson and his take on performance in a DevOps world!
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hey, and we are back with Pure Performance. We are still here with Rick. Rick, you're still with me?
Sure, I am.
And Brian, you're also there, even though you just promised you'd let me do the intro. But Brian, you're still there with me?
I'm still here.
You're still here? That's awesome. And actually, I'm sorry that was a little quiet in the previous one, but I'm still struggling a little bit with the jet lag.
Give me a moment to shine, Andy.
It's great.
So in the first episode, we talked about continuous performance testing when we had Rick on the line.
And we talked about the importance of fast feedback and integrating that into the pipeline. I really loved some of the ideas, Rick, that you brought up: when you have your pull
requests and you have your engineers look at them, but then also automatically look
at all the data that comes in from your performance testing tools, from your monitoring tools,
whether it is through a bot or some other means.
I thought that was pretty cool.
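(A rough sketch of the bot idea Rick describes, not his actual implementation: a small script run from the build pulls a response-time metric from whatever monitoring API you use and posts it as a comment on the pull request. The monitoring endpoint, metric name, repository, and the 300 ms budget below are made-up placeholders; only the GitHub comment call reflects a real API.)

# Hypothetical "performance bot": fetch a metric for this build and comment on the PR.
import os
import requests

MONITORING_URL = "https://monitoring.example.com/api/metrics"  # hypothetical endpoint
GITHUB_API = "https://api.github.com"
REPO = "my-org/my-service"                                     # placeholder repository

def fetch_p95_response_time(build_id: str) -> float:
    """Ask the monitoring system for the 95th-percentile response time of this build."""
    resp = requests.get(MONITORING_URL, params={"build": build_id, "metric": "p95_ms"})
    resp.raise_for_status()
    return resp.json()["value"]

def comment_on_pr(pr_number: int, body: str) -> None:
    """Post the summary as a regular PR comment so reviewers see it next to the code."""
    url = f"{GITHUB_API}/repos/{REPO}/issues/{pr_number}/comments"
    resp = requests.post(url, json={"body": body},
                         headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"})
    resp.raise_for_status()

if __name__ == "__main__":
    p95 = fetch_p95_response_time(os.environ["BUILD_ID"])
    verdict = "within budget" if p95 <= 300 else "OVER the 300 ms budget"
    comment_on_pr(int(os.environ["PR_NUMBER"]),
                  f"Performance check: p95 response time was {p95:.0f} ms ({verdict}).")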
And Rick, what you also made very clear is that we totally forgot the fact
that you were actually one of the presenters at our Perform Conference in February.
And I feel embarrassed that we didn't mention that.
There was no need for that.
No, there was no need, no, because this is not the way we value our customers.
The only excuse that we have is that both Brian and I were busy and we didn't get to see your presentation.
But you talked about evolving performance engineering, I believe.
Was that the title?
Yes.
It had a really long subtitle, but yes, that was the point of the talk, certainly.
Perfect. So, evolving performance engineering. I assume this is also kind of what we want to talk about in this episode now. We talked about continuous performance testing earlier, but now: how has performance engineering evolved over time, from traditional performance engineering, how we used to do it before, to what we need to do now in a more agile DevOps world?
Is this correct?
Yeah. So I think there are a lot of things that I borrow from performance engineering in a DevOps world, which is often something that Mark Tomlinson talks about. But I have my own experiences and my own sort of direction and things that I see and feel about performance engineering, even in established agile cultures, where it's still not quite moved up. And I'd like to expand on that.
Yeah. And I think, too, what we're going to uncover here, which we discussed a little before the recording, is sort of the definition of performance engineering versus performance testing, where I kind of lean towards a definition in the earlier phases, and when you cross that threshold to becoming a performance engineer, when you can put that actual official hat on. So anyway, I just want to throw that in there before you go in, just to tease the idea a little bit.
Sure. Yeah. So I'd basically like to just sort of tell the story, and then that should lead us up to where we are today and where we need to go. But I guess I have to lead it off with a question, because I know, Brian, you used to work for HP?
Nope.
Is that right?
No.
Was this prior to HP?
Where did you work?
Before Dynatrace, I was at WebMD.
Oh, okay.
But you were their LoadRunner guy.
Yeah, I was a LoadRunner guy from about 2001 until 2011, when I joined Dynatrace.
Yeah, and Andy, you were at Borland, right?
Exactly, yeah.
Segue, and then Borland, yeah.
So both coming from sort of the performance testing background or load testing sort of background.
Back then, was the term performance engineer meant to be – was that equivalent to performance tester?
I don't think I ever came across that until the very end of my time as a performance – in performance.
I think so too.
I think it was a – I don't think I ran across this term either, and I believe we only talked about performance testing and load testing and performance testers that run their tests.
And they did their work, which was something at the end of the development cycle, which is what you do at the end.
This is testing, and performance testing was just something, a special type of testing.
Sure. Okay. So, I mean, I've heard it used often interchangeably for as long as I've been in the industry, which I think anybody can agree is a pretty big fragmentation of that term, given the way that I apply it, which is somebody who's not only focused on pre-production and load testing, but also analyzing, getting deep in terms of the code and the configuration and the architecture and everything, and looking at how we can make this thing run faster, how we can parallelize and get value.
And you mentioned the interchangeability of those terms since you've been in the industry, but you've been in the industry for, I think you said, about five years now, right?
Right, exactly.
So that's the interesting thing, because obviously I was going back from 2001. Andy, I'm not sure when you started and all of this, but I think somewhere between five and seven years ago, I'd say, is when this concept of a performance engineer came up, at least when I first started hearing about it. So I just find it kind of fascinating that you came in when it started being tossed around. So there has been, I guess, just a difference in perspective. Because I remember when I first started seeing that term popping up, and of course I immediately started calling myself a performance engineer until I started thinking about it more. But please continue on there. I didn't want to sidetrack too much.
No, yeah, sure.
I think there's no central definition of it. And I think people who are leveled up feel like it should be a leveled-up term. Other people think there's something in the line of a performance architect. And I would hope that anybody who is an architect is a performance architect,
and so we don't need that very specific title.
And then, yeah, people are just good engineers,
and that means that they're pretty well-rounded, or at least T-shaped.
You know what I mean?
Like I specialize in performance analysis.
Somebody else on my team specializes in testing,
but we all can do automation.
We all can do analysis.
We all can do test writing, that kind of a thing.
But anyway, traditionally, I say – go ahead.
I just want to put one thing in.
It's the definition that Mark gave at Perform of what performance engineering is.
And it was from Jim Duggan, former Gartner guy.
And he said – and I'm reading it now.
I have the slides here.
If you can impact the performance of the code before it is written, then and only then can you consider your work to be performance engineering.
Otherwise, you're just doing testing.
And I thought that was actually a pretty fantastic quote.
And I thought that hit it pretty well: if you can somehow impact the code before it gets written, meaning you actually work closely with the engineers, giving them feedback, and then impact the next iteration of their code, you know, becoming part of the story writing, becoming part of the planning.
I think that's perfect.
That's a great way. Yeah, and maybe that definition is why we have these different things
because if you think about traditional performance engineering,
first part of our story, you're in the waterfall, you're a gatekeeper.
There's no room for that definition in that organization.
You know what I mean?
You are the guy who waits for the performance test requirements,
writes the performance tests, schedules the war room call, and then actually executes it, right?
But to say that you don't have a performance engineering practice is not something that
people really want. So they have this performance center of excellence that staffs engineers.
But essentially, they end up being test writers.
They may even be good with tools like AppMon or something like that, where they can help deliver results, but you're still offloading those considerations to the people at the beginning of the pipeline, the decision makers. I think in those scenarios, if you have a good practice, then you have a template Word document that people can fill out. And you've got a process and hopefully naming conventions and things like that. But because we're in a waterfall shop, we're not using a feedback loop and we're not using cross-team collaboration, getting estimation or capability definitions from engineers. You know, you are just reporting results and leaving the decisions and the analysis up to the architects, to the PMs, to whoever might be running the project, and they get to make the decisions of go or no go, or you do, based on fixed business requirements.
You move into Agile, as so many companies have, and they take the same center of excellence,
and they might even take the same sort of artifacts and documents and the same testing things,
but their new requirements to live in these shorter
sprint cycles mean that, okay, I need to start maintaining these tests. I cannot just rewrite
them annually, you know, as I get pulled in for the new version of application A or whatever it
might be, right? And so that level of maintenance drives people to have to push their test repos into source control.
And that automatically ruled out a lot of the big legacy players in the performance
testing space because their scripts were written in XML or they were like auto-generated C
or something like that.
And then they looked gross after you recorded them or something along those lines.
So people started looking into open source tools.
Then you have some leveling up there of people in terms of technical capability.
But performance engineering as a practice is still behaving like gatekeepers
because they're still just executing at the end of the pipeline.
You know, and you might even be able to give a "this is good performance, this is bad performance, or this is a risky performance" assessment. But because we're on, let's say, the eighth day of a 10-business-day, two-week sprint, we're not going to refactor. Like, we're just going to
deliver and then we'll put an item in the backlog for that,
and then we'll address it.
And I think what people do is they automatically presume that because we are agile,
we're going to accomplish every ticket that ever gets written for this project.
We get to sprint planning.
We see the tickets that are there.
We pick up what's reasonable, hopefully
by priority, hopefully by a decent estimation and talented engineers and everything.
But what so often happens is those tests are run and they're maintained and we give
results, but nothing is done with them. We've had that problem here before. Business is always going to be asking for more features
or usability enhancements or whatever. And so performance items, especially if we say
this is working in production right now, oftentimes people are willing to just sort of live with
the pain, especially if it doesn't affect them, as is so true for many developers.
So the people who support it have to continually support it, even though maybe I've written out, not saying that this is true, and certainly not saying that this is not true, a whole three-page document with screenshots and code examples of how to remediate a specific performance issue and attached it to a ticket and just said, please fix this. Yeah, but in this world where we are good, and this is where a lot of people are now, we are good: we maintain our tests in central source control, we integrate with tools like AppMon, with automation tools like Jenkins and Gradle, and hopefully have some provisioning capability or something along those lines in order to do environmental control as well.
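(A loose sketch of the kind of gate such an integration can boil down to: a check a Jenkins job or Gradle task could run after the load test finishes, failing the build when a key metric regresses against the last known-good run. The file names, the metric key, and the 10% tolerance are illustrative assumptions, not a specific AppMon, Jenkins, or Gradle setup.)

# Minimal regression gate called by CI after a load test; exits non-zero on a regression.
import json
import sys

ALLOWED_REGRESSION = 0.10  # fail the build if p95 grows more than 10% versus the baseline

def load_p95(path: str) -> float:
    with open(path) as f:
        return json.load(f)["p95_response_time_ms"]

def main() -> int:
    baseline = load_p95("results/baseline.json")  # last known-good run, kept as a build artifact
    current = load_p95("results/current.json")    # produced by the load test that just finished
    change = (current - baseline) / baseline
    print(f"p95 baseline={baseline:.0f} ms, current={current:.0f} ms, change={change:+.1%}")
    if change > ALLOWED_REGRESSION:
        print("Performance regression above threshold - failing the build.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())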
Now, before you go on to the next section, though, I did want to interrupt and say: again, with Agile, when I left the performance world and started with Dynatrace, this was just starting to come into the picture.
And the biggest question I had in my head was, how does this all fit in, right?
Because you nailed it there.
Like, you know, let's say you have a 14-day sprint or whatever.
Maybe it's not until the 8th, 9th, or 10th day that the performance tests start getting run.
Maintaining those scripts, even if it is checked in, it's still a behemoth of a process.
And it results in exactly that same situation where you're saying, hey, we found some kind of performance issue, but no one's complaining about it in production.
Does this get put on the back burner? There's still a whole bunch of backlog, a lot of fuzziness.
And in my mind, it was always very unclear how this performance testing and load testing fit
into that agile cycle. And it seemed like it was very sloppily done. Even in this description of
it, it's still kind of sloppy and leaves a lot to be desired. So I just wanted to stop you there
because that's always been my issue. So now I'm very, very excited to hear where you're taking this.
Let me take it a step further, because I realized something as you reflected that back to me: even if we are implementing continuous performance testing and we've got meaningful test results from what's happening, I don't know about the shops that you work with, but ours here, every single work item becomes a ticket, which is beneficial. It's a very good thing. But, for instance, remediation of some performance regression, such as the ones that you guys use in the test automation slide decks or whatever, would become a ticket. And maybe there would be some detailed analysis data there, maybe we would link it to Dynatrace through a cool plugin I developed. But whatever the actual outcome is, it would be a ticket, it would be backlogable, it would be ignorable, essentially, at the top level. So even if I were giving daily results and had a regression early in the sprint, there still might not be enough time for it.
It still might get backlogged.
Okay.
And in those cases, you have to have – if that is going to be the responsibility of the application teams, then they are going to have to have something in their process that says we allocate this much time for performance.
I think – go ahead.
I have one story.
So two weeks ago, I was fortunate enough to be at WOPR, which is a workshop on performance and reliability, in New Zealand. And one of the people in the room at this conference, it was a peer conference, was Goranka. She's responsible for site reliability engineering at Facebook, responsible for hundreds of thousands of servers. And what she was saying is that for Facebook, the most critical thing is features. So you develop a feature and you get the feature out as fast as possible, and you want to get feedback. Because here's the thing: if nobody likes the feature, then they figure it out pretty fast based on real user monitoring, and then they just kill the feature, and there's no need to invest anything more in making it better. But if the feature works out well, then the teams basically get a ticket, kind of like a fix-me ticket or a fix-it-later ticket or something, whatever you want to call it. I think there's a terminology in the US when you get pulled over by the police and you have a broken light, and you get a ticket that says you don't have to pay something now, but you have to fix it, a fix-it ticket. So they get a fix-it ticket that says: you have to improve performance by that particular point in time, because we know the feature is exactly what people want, but right now it's not efficient enough. And with the scale at Facebook, you know, five or ten percent performance improvements of a feature are significant. But that's part of their process: feature first, and then, if it proved to be a good feature, it's part of the next sprint, part of the next iteration, that they need to work on performance.
I like that approach. I wish we had enough, I would say, engineering resources that the person who worked a feature, or the team that worked a feature or set of features, had enough time after GA in order to iterate based on what we find in production.
I take, I think, what you learned there, or what you talked about there. I say this all the time, actually: if you're presented with the choice of feature richness, ease of use, and performance, and I tell you to pick two, nobody's leaving performance on the table. As the end user, nobody's leaving performance on the table. And yet so often we leave those things on the table in the development room. And so I stress that a lot with our teams here and with other people in the industry I talk to about it as well. So I think one of the ways that we get around it here is I have write access to everything. So I literally am the dangerous sort of embedded splinter agent that can go into your repository, whether that's a configuration management thing in Puppet or an actual application configuration or a code-level change or something like that. I'm empowered by having AppMon here, having method-level root cause analysis for every transaction for the last two months or something like that, to be able to make an informed, data-driven decision that something should be done. And I write tickets about it.
If they sit there for a while, I make time in my day, in my week as the performance engineer, as the person who is responsible for supporting performance here to go in and attempt that change.
Now, obviously, with a pull request, with a code review, but I think people should be looking at trying to level themselves up in that case. And going back to Brian's definition of you need to be an engineer and
not just a tester. And oftentimes remediation is part of the performance strategy. And so it
misses the target on the definition that Andy brought up. But that is something that I think
people should be able to do is see their performance teams as being cross-functional.
And if not actually writing the code, then finding mentorship or peership in the engineering teams who actually can and talking with them about strategies and showing them the data, being very proactive
about those.
But I think the best thing that an organization can do is open up access to their systems
to people.
Because there are ways to lock master.
You know what I mean?
There's really no risk to letting somebody on the ops team see the application code.
There's no risk to that, and the reward can be insane,
especially if you have a 10-time culture
where people are learning about other things.
Or again, performance engineering is one where I don't want to wait
for the system to be built and then have to look at how I'm supposed to support it.
I want to help build it.
I want to influence the way that it's happening.
And so through those interactions with the code, you start to interact more with developers, with engineers, with architects, with DBAs, with operations people, building a better culture of performance by exposing them to the kind of information that you have that can help inform their decisions. And then maybe we start approaching the Jim Duggan definition of this and influencing the way the code is written. And that, I think, is ultimately the goal of the way that we're trying to attack it here, but also to remediate the possibility that I'm not actually providing
any value. I'm just generating reports and possibly generating detailed tickets but not actually doing anything for the business because nobody is doing anything about what I'm generating.
So, I listened to what you said, and it makes a lot of sense. It's perfect. What I want to say here, and I think it comes towards what we see: the industry is moving towards DevOps, the concept of having – and I think you said it earlier, maybe it was in a side discussion that we had between the podcasts – DevOps is sometimes seen as you create a separate team and they're doing all the magic, but that's not what it is.
It's really about shared responsibility.
And I think what I've seen and what also we did internally at Dynatrace
and our engineering team, we were basically creating these cross-functional teams.
We call them application feature teams,
and they're responsible end-to-end for the features that they build.
And like what Facebook is doing, right, Facebook is empowering their developers to build great
features, but then demanding from them also to make sure that their features are efficient
enough.
And in order for that to work, these feature teams have to have, or these application teams, they have to have
great developers, front-end, back-end.
They have to have testers in the team.
They have to have architects and performance engineers in the team.
And all of them together, operation teams as well, or operation folks, are responsible for the outcome, which in the end is beneficial for the company,
which is great features that actually operate well.
And I believe this is where we're all moving towards, right?
It's these whatever you want to call them, application team, feature teams.
It is basically teams that are empowered end-to-end to build the right thing, making sure it's properly tested,
making sure that it actually can scale, then operate it with the feedback from operation, make sure to make it even better, or throw it away in case nobody needs it.
I think this is another thing that people need to do.
The way we did this internally, and I'm not sure, Rick, what your thoughts are from an IBM perspective for your kind of evolution of your engineering teams.
But we, all of us within Dynatrace,
we spent a lot of effort in building a pipeline and an orchestration layer. This is actually the product that our DevOps team has built.
So the DevOps team within Dynatrace
is responsible for building that pipeline,
which is a set of tools and frameworks that enable developers to write code that is automatically tested and then automatically deployed into different environments, properly tested, including giving the developers access to log data, monitoring data, all the data they need so that they can get feedback.
But the idea really was let's empower developers or not only developers,
let's empower the application teams to build the right things in the right quality
and then give them whatever they need so that they can iteratively also make sure it's performing well
and it has the right resource consumption.
But I totally agree with you, what you said, if I understood this correctly.
We have to have these cross-functional teams where we all help together.
We all learn from each other, where the operation folks should have an interest in seeing what's coming down the pipe
before it gets just dumped in their operation, in their ops environment.
They have to have influence, and they should have influence by just collaborating with engineers and saying,
hey, I have something to say.
I've learned something over the years, and I believe we should do this or the other way.
Yeah, absolutely.
I think the other thing that I wanted to mention, and I don't know if it was more appropriate for the last episode or this episode, is just that, from an engineering perspective, there are good ways that I, as an engineer, can apply my time and the CPU cycles of my brain. And I think gathering data and trying to replicate production load
is not one of those. And so I'm not going to be doing it for our performance tests.
I think my time is better spent doing this analysis, doing any sort of pull requests
that I can to help implement or help enable other people on our performance tools. But production load modeling is expensive.
And the only thing that it could serve to tell me is information that I can already get, which is: I open up production, I look in Dynatrace AppMon at our production system profile, look at bottlenecks, look at typical capacity or whatever, and then attack
from there. So I've got two sources of information now, my tests and my production data, to write out those backlog tickets and then either attack them, or delegate attacking them, or provide enough information that attacking them is trivial for the seasoned developer or the seasoned ops person or whoever it may be. Yeah, so that's really the only other thing: just make sure that, as a performance engineer, as a performance center of excellence, you are offloading repeatable tasks to automation, and then really evaluating the value that you're providing when you're doing very data- or research-intensive tasks.
The combination of those two things and evaluation of those can make you that much more valuable to your organization.
And then you get paid, and you generate a name for yourself, and everybody's happy with everything, and you get a promotion.
Hey, I wanted to bring up a point, and Andy, maybe you have a different perspective on it too, when you talk about not modeling production. We recently had someone you know, Brian Chandler, on, right? And one of the big things, and Andy even did a webinar with Brian on this, so Andy, correct me here if I get the concept wrong, or say how maybe it either confirms or conflicts with what you're saying, Rick. One of the things that Brian had seen is
they were testing an API with maybe 20 functions and the testing team was distributing the load
amongst those 20 functions equally when they were testing it. And then they started raising an issue
saying there was a problem because it's not performing well and we can't push this to
production, right? It was something that's already in production. They were saying it couldn't, you know, the new build
couldn't go because it was performing poorly. But when they went to look at the production,
they saw that the load was not distributed evenly amongst the 20 different functions.
There were maybe two functions that were taking 80% of the load and the other ones were taking
maybe the 20% of the load. And when they took that production model and applied it back to the test environment,
they saw the performance characteristic of that API change completely,
and therefore it was passing well.
So Andy, am I interpreting this wrong?
But to me, that seems like a good case to do some production modeling. I guess that's where I'm going with this.
Yeah, yes and no. I believe what both of you are saying. I think, Rick, you were basically saying agile performance engineering means including looking at production, but looking at the live production monitoring data, and then figuring out what's wrong there and taking this information back to improve.
But on the other side, Brian, you're correct.
If you have some pre-production tests,
then you want to make sure that you model them correctly
depending on the real load characteristic.
And the example that you explained was exactly right.
So it was actually three APIs that were consuming 60% of the traffic
or they were getting hit by 60% of the traffic.
And then the other 57 that they had out of 60 total,
they were just consuming the rest of whatever else was left.
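(To make the weighting idea concrete, a minimal sketch: take per-endpoint request counts from production monitoring and turn them into the traffic mix the load tool drives, instead of spreading virtual users evenly. The endpoints and counts are invented to mirror the "a few calls carry most of the traffic" pattern, not the actual numbers from that story.)

# Derive a load-test traffic mix from observed production request counts.
# The counts are invented; in practice they come from your monitoring tool.
production_requests = {
    "/api/search":   580_000,  # a handful of endpoints carry most of the traffic
    "/api/login":    240_000,
    "/api/profile":   90_000,
    "/api/settings":  25_000,
    "/api/export":     5_000,  # ...and the long tail gets very little
}

total = sum(production_requests.values())
weights = {path: count / total for path, count in production_requests.items()}

TOTAL_VIRTUAL_USERS = 200
plan = {path: round(TOTAL_VIRTUAL_USERS * w) for path, w in weights.items()}

for path, users in sorted(plan.items(), key=lambda kv: -kv[1]):
    print(f"{path:<16} {weights[path]:6.1%} of traffic -> {users:3d} virtual users")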
And, I mean, honestly, I believe the world is moving more towards the Facebook model, where we deploy into production and then figure out how it is behaving. And if we do things like feature flagging, we can even easily revert the change in case we have disaster upon us, and then bring it back to engineering and say, let's fix it before we roll it out again. I think we're moving more into that model where we don't have to run duplicate production traffic in a pre-built environment, especially because it's actually very hard to run production-like load.
I think we're moving into that direction.
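(Feature flagging, as Andy mentions, is what makes the "ship it, watch it, pull it back" model cheap: the new code path sits behind a flag that can be switched off at runtime, without a redeploy, when monitoring shows a problem. A minimal sketch with a made-up flag store and feature name; real setups would use a config service or a flag-management tool.)

# Minimal feature-flag guard: the new code path can be disabled at runtime
# if production monitoring shows a performance problem, without redeploying.
FLAGS = {"new_recommendation_engine": True}  # flip to False to "revert" the feature

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def new_recommendation_engine(user_id: str) -> list:
    return [f"new-suggestion-for-{user_id}"]      # the feature under evaluation

def legacy_recommendations(user_id: str) -> list:
    return [f"classic-suggestion-for-{user_id}"]  # the known-good path we fall back to

def get_recommendations(user_id: str) -> list:
    if is_enabled("new_recommendation_engine"):
        return new_recommendation_engine(user_id)
    return legacy_recommendations(user_id)

print(get_recommendations("42"))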
But I think a certain amount of load testing obviously has to happen in pre-production. And then I believe it's essential that it is realistic. So I believe for that you have to look at production data and then constantly update your load tests, or your distribution of load, depending on real-life production data. Because otherwise you're just testing something that is so unrealistic, and you actually give it a go and you deploy it in production, and then everything falls over because you actually tested a totally different world. And so I'm not sure if I answered the question. I think it's a combination of both, and I expect a disagreement on this point.
The one thing that I'll say about the Raymond James case, and I like those guys, I spoke at length with Brian and Graham and Jeff at Perform a few weeks ago, but the thing that they're implicitly doing by having that story is
gatekeeping with performance testing, which is something that, as an agile company, especially if you are not very nuanced with trying to ingest those results, you're probably doing. You're saying, I'm performance testing and it's a go or no-go
based on these test results rather than what are our risks and sort of prioritizing them in a more fuzzy way and then saying we can address it, especially if you have a weekly release cycle or even a two-week release cycle of, okay, this is our risk.
If it's not completely falling over, then we've got an issue. But I would also argue that if I can stress the system and discover a bottleneck, like I'm not only good with production because it's
already in production from last week, but the day that the production load doesn't look like
the production load, I've been proactively preparing for it by eliminating the bottlenecks in my system, by throwing unusual load at the system in pre-prod.
So I think I understand the two different strategies. I just don't... I have a two-person team here and no influence on whether the application goes to prod on Friday. You know, that's really it.
I'm taking more of a deliver information to engineers,
let them make their own decisions,
let them be empowered by that,
and then also in the back end,
help them proactively attack our risks
and seeing things less in the black and white,
red and green state.
You're delivering what I would call, how do they call it, performance as a service. Yeah, which actually means, and you know, this can obviously be seen in different ways, performance as a service, or more, you know, everything is as-a-service these days. But I think, as I explained earlier, where our DevOps team is empowering our developers to deliver features through the pipeline all the way into production, it's basically a pipeline as a service that they provide. And in your case, your two-man team, you're providing performance engineering as a service, which means giving them access to performance data. But it's a service, and so in the end, you empower them to use that service and make the right decisions.
Yeah, coupled with, you know, hands-on help.
Yeah, absolutely.
Well, another big, big topic we tackled here.
Yeah, I rolled up my sleeves there at the end.
I was really...
You sure did
Well, I really want to thank you a lot for taking the time. We're just about out of time for this episode, but I definitely want to give you all a chance for any final thoughts or conclusions.
Yeah, I mean, for me, what I like a lot, I mean, I know you called the title Agile Performance Engineering and what that means.
And I believe now looking back and especially what we discussed in the very end, for me
in the end, it's all about empowering cross-functional teams to make sure that they are delivering the right thing to support, you know,
obviously the business success of the company they're working for.
And what that means is we need to empower them by providing different services, just as you said: either through giving them some automation APIs where they can kick off some tests automatically and get feedback, or through mentoring, which you also mentioned, right?
So performance engineering as a service is a combination of tools and automation, but also mentoring.
But in the end, we all, as employees of a company, together want to make sure we're doing the right thing for the benefit of the company. And in the software world, obviously, that means we need to build the right things in the right way and make sure they're performant, because we know poor performance in the end has a negative impact on end users, and then they don't like us anymore and they don't spend money with us anymore.
So I believe this kind of performance engineering as a service
is part of the whole transformation we see with DevOps,
where we're going to build pipelines, where we use automation,
and where we just empower and ask for cross-functional teams.
I think that's a big key.
And performance engineering is just part of the whole thing.
I have two takeaways.
First one, I just want to get out of the way that I just super love working with the both of you.
For the listeners out there, Brian's deep, dark secret is he's actually a super smart, insightful dude.
So thank you so much for having me on today.
Thank you for saying that.
The second thing, which was really my original thesis, and I just wanted to make sure I didn't track too far away from it: if you're out there and you're a performance engineer, a performance center of excellence, or performance as a service in an agile company, and you produce a performance risk, and the code goes into production anyway, and you take that performance risk and nobody works on it, you're not providing anything to your business, and it may not be very long before somebody notices.
So just really everybody needs to level up.
Everybody needs to evolve.
That means getting involved with automation, getting involved with analysis,
and potentially, if they're nice, getting involved with remediation.
If they're not nice, then muscle your way into the room and make them nice.
All I have to add, well, thank you for the compliment, Rick.
But all I have to add is, well, number one, you're awesome too.
And so is Andy. We're all awesome, but seriously.
But you know, it really just comes back to something that Andy and I, especially Andy, have been saying over and over and over again, which is just: level up, right? No matter where you are. And, you know, earlier at the beginning of this podcast, I was talking about a performance tester versus an engineer and all that, not to in any way degrade someone who is starting out and is in that earlier phase of writing scripts and running tests and learning. It's: make that your goal,
make that your vision, make that where you want to get to. You might not have all those skills. It might be daunting. And even when we have these talks and I hear some of this stuff,
I get... it scares me, the thought of, oh wow, how would I have fared if I had stayed on that side of the fence, if I had stayed in the performance field and hadn't become a sales engineer, right? There's always a lot of work that I have to do in leveling up, even in my own job now, but it's not the hands-on, here's a release coming up and I'm responsible for this, almost live-or-die situation for the company. If I do my job wrong, it might blow up. So it's definitely a scary thing and
there's a lot to learn, but make that your goal. Level up. And even if you do level up, never stop.
Just keep pushing and pushing and pushing because that's where it gets more and more and more and more fun.
So keep on keeping on, really.
It sort of comes down to – I guess that's the end of this episode.
I think so too.
I'm really sorry that I'm just fighting with my jet lag here.
I would have liked to have more energy.
It was really great.
Do you want to maybe re-record it tomorrow?
Well, we didn't finish this one yet.
Yeah, it's Saturday morning.
Let's do it right after I watch the Dungeons & Dragons cartoon on – I forget what that was on.
Seesaw?
What?
Seesaw?
Which one?
Dungeons and Dragons?
When I was little, there was a Dungeons and Dragons cartoon.
Oh, okay.
I thought you were talking about, what is it, Harmon?
It's not Harmontown, but it's Dan Harmon's.
He does Dungeons and Dragons and they animate over it.
Oh, that's awesome.
I haven't watched it.
It's supposed to be pretty funny.
That's pretty awesome.
I like Dan Harmon.
Anyhow.
I just think that my girlfriend, Gabi, would not be happy if, after I've been gone for four weeks, I come home and then spend my Saturday morning recording another podcast.
Yeah, what I thought was very interesting, Andy, was hearing you getting excited in your more subdued tones, because you could see, suddenly, you're quiet and you hear something and you just start trying to perk up, and you're like, oh my gosh, this is just so what I want to talk about. It was kind of like the groggy Andy. But anyway, Andy, thank you for doing this on such a tired timeframe. The real Andy, ladies and gentlemen, the real Andy will be back on the next podcast. We promise you that. If not, we will have a robot Andy, and you won't know the difference. I'll just come on and say "level up" and, you know, I don't know, I'm making fun of you.
Anyhow, thank you. We're rambling. Thank you, everyone, for listening again. You can contact us via email, if you want the old-fashioned way, at pureperformance@dynatrace.com. You can give us Twitter feedback at @pure_dt. I'm @emperorwilson, we have @grabnerandi, G-R-A-B-N-E-R-A-N-D-I, and go ahead and file complaints or issues on Rick's GitHub repository at, what is that again, DJ Smoothie Jazz, what is it? DJRickyB. There you go. All right.
Thank you, everybody.
DJ Ricky B, because we've bumped into each other several times at conferences over the last year: is anything coming up for you?
I missed the deadline to submit to the next couple of STARs.
I don't think I'm going to get to go to Velocity.
So I think I'm going to be pretty boring this year.
We'll see
I just saw the submission deadline coming through. It's an email from DevOps East and DevOps West, I think, one of the two. It could be an interesting one, maybe keep that in mind.
I'll look at that.
There was a good conference last year.
And Andy, do you have anything coming up?
I think I do.
I'm actually going to be in Switzerland in two weeks at Swiss Testing Days,
which I believe is probably after this one airs. I'm now on a DevOps panel.
All right.
And I'm doing some Jenkins work next week in Boston,
Jenkins Dojo and the Jenkins Meetup.
Is that with Thomas?
That's with Thomas, exactly. Yeah.
And then there's more coming up later this year. Everything should be on the community. We have a website, a page that is called Upcoming Events.
And also, what I would love to do, Rick, maybe, I know you had your presentation at Perform: if there's anything we could potentially repurpose, not only for the podcast here but maybe for one of my Performance Clinics, you know, we should talk. Maybe there's something where we can show how you're using AppMon to actually implement performance engineering, and more proactive performance engineering.
That would be cool.
Sure.
Excellent.
Well, I want to thank you once again, Rick, for taking the time.
I hope you enjoy whatever that was you opened earlier.
And I hope you have a wonderful weekend.
I guess your day is almost over.
It's closer than mine, at least.
And Andy, you're right on the cusp there.
So go get some sleep, Andy.
I think his day is probably over.
Yeah, get some sleep, Andy.
Recover.
And we'll talk to you all soon.
Thank you all for listening.
Thank you.
Bye.
Bye.