PurePerformance - 024 What the hell is “Continuous Acceleration of Performance”?
Episode Date: December 19, 2016

Mark Tomlinson, still a veteran and performance god, is enlightening us on his concept of Continuous Acceleration of Performance. Continuous Delivery is all about getting faster feedback from code changes as code gets deployed faster, in smaller increments, to the end user. One aspect that is often left out is feedback on performance metrics and behavior. In the "old days," performance feedback was given very late – either in the load testing phase at the end of the project lifecycle or even as late as when it hit production. That could be too late, and it makes it hard to fix the root cause. Listen to our conversation on how to accelerate performance-related feedback loops without getting overwhelmed with too much data!
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Welcome, listeners, to another episode of Pure Performance.
And this time it's actually me starting the whole thing,
and I can repeat what normally Brian is always saying.
It's going to be another awesome show, because I'm now a fortune teller.
How do you know that?
Well, because I'm a fortune teller just as you.
I kind of inherited that gift.
It kind of dawned on me.
And I'm actually very close to my mic because I want to also sound a little godly when I introduce, reintroduce one of the gods of performance.
You're over-modulating a little, though, doing that.
I'm over-modulating?
Okay, I'm lowering my excitement now.
Anyway, Mark Tomlinson, our friend, is still with us,
even though it's another episode, I know.
I'm glad you didn't die, Mark, in between.
Yeah.
This is the voice of the performance god.
Oh, my God.
So, Mark, in the last episode, we talked about DevOps.
We started out with a rant, and then we figured out that actually it's not that bad,
and a lot of people will get it, will get there.
Some people may not.
And I think one of the promises of DevOps, at least the way I see it, is that people want to go towards continuous integration, continuous delivery,
using new technologies to push out application changes faster than ever before
at lightning speed, if you so will.
But you actually proposed a little topic, which I have no clue what that is all about.
Do you want to tell us about it?
Is it a term that you just made up?
I think so.
Continuous acceleration of performance?
Yes.
Are you marketing now in sales?
All you have to do is put continuous in front of anything.
Yeah, continuous car sales.
Continuous nose picking.
Yeah.
Continuous is the hip.
That's marketing.
It used to be cloud.
It used to be anything was a cloud, right?
A cloud toaster, cloud bagels.
Yeah, and it was Uber for a while.
Uber was also not only a cool company name, but everything was Uber.
Yes.
Oh, that's right.
That's right.
You have to add digital in there somewhere.
So continuous digital acceleration of performance.
Guys, if we back it up even to, like, 1999, everything was, we still have it, like eTrust, iCloud, iRadar, with e and i as the prefix, you know what I'm saying?
E-trade is still here.
Everyone used to put an e and an i in front of things.
Now it's continuous and cloud.
And I still don't understand Uber, the word, because is it just because somebody, English speaking, invented the word and they didn't find the U on their keyboard because it should really be Über, which is a German word?
Yeah, there's two dots over it.
The two dots make it a Ü, which is a German umlaut.
Well, we're American, so we're lazy.
That's a special key combination you have to learn, and we don't have it on our keyboard, and you'd have to learn like, well, they'd have to switch it, switch the keyboard layout.
Hey, so Mark, continuous, continuous acceleration of performance.
Fill us in.
This popped up in a conversation.
So, truth be told, I did not invent this term, you know, in order to persuade people to my way of thinking in the world, because, truth be told, I'm not a god. I'm not a performance god. Check your ego at the door. But I did hear this sort of recently, and they were talking about accelerating development generally, right? So they were saying, we can get lots of feedback. Let's say you're a full Sonar shop and you get lots of feedback loops. There's another guy in our industry who's now at Amazon, Seth Elliott, who talked a lot about data-driven testing, data-driven quality, and doing, again, sort of progressive feedback loops to make sure that you're building the right product, that you're getting the right kind of feedback, what's working, what's not working, and driving the backlog instantaneously and continuously based on that. And that accelerates development.
You can actually move faster, smaller chunks, faster feedback.
And I like that idea.
And we were having the conversation about how to pull actual performance practices into continuous integration and continuous development.
Now, this has been around a while.
I mean, if you think about test tools, performance test tools, unit testing tools, getting plugged into Jenkins and Travis and TeamCity and
Bamboo and all of these things, that's pretty common. And in fact, it's very common if you're
a continuous shop. What's not so common is that you start getting performance measurements of
some kind and performance information annotated in your feedback.
And there's two ways that I see people thinking about this in terms of acceleration.
One, if you're lucky enough to have a user story where the product owner or tech product
owner or even the business people actually know that there's a performance requirement
and they're not just reacting to an outage. They actually proactively are like, we really want this to run fast. And they get explicit about it. If developers know something like that, they will write the code differently. And so, you know, integrating
a measurement of some kind in the component level in the early testing, and accelerating the
frequency with which they get that measurement. So every functional test, every unit test,
every combined unit test, where you integrate several different component calls,
obviously then in your more elaborate regression suites, full-on load tests, pre-production load tests.
And those loops, those feedback loops run faster.
And as we add more measurements on each loop, they get more context and more information about performance, and it accelerates. Because the more information they get about performance, hopefully the idea is they spend less time screwing up performance.
So we can accelerate the speed with which we put code out, and it's performant code. The thing that always lights up for me is the contrast to this.
All right. In the old days, a developer may write the code and get almost no performance feedback until it actually goes into production.
And by that time, they may have quit. They may have moved on to another team.
They get pulled into prod for triage and they're like, hey, was I drunk when I wrote this code? This is horrible.
Who wrote this?
And, of course, that latency is decelerating performance.
So that's kind of the general concept that I picked up from these guys is what, you know, accelerating the feedback and the frequency of the feedback in the normal loops of feedback.
And I love it.
I mean, that's basically what we've also been promoting for a while, getting more performance metrics out of your tests that you hopefully already execute and then use that to stop
the build earlier instead of pushing everything through to the whole pipeline where you have
a long running test at the end that
may take an hour or so.
But if you start pushing not only one build per day, but one build per hour or even one
build per commit through the pipeline and the load test at the end takes an hour, obviously
you cannot scale that.
That's why you need to fail faster.
This is also what Adam Auerbach from Capital One said, shift left performance and many
others are doing the same thing. So now, Mark, here's my question, though.
If we are asking people to measure more, how do we make sure that they're not getting overwhelmed by too much data?
How can we make sure they understand what they're looking at?
Because, you know, to be honest, I mean, not everybody is a performance expert. So just adding more measures, so that instead of green/red functional results we now give them 10 additional measures on these tests, I mean, do we not overwhelm them?
What I'm seeing, at least from what I'm observing, like I mentioned, is that a lot of times there aren't explicit objectives for performance written in the story, or even as attributes to a story.
And that's where you would see the presumptions about security and performance, the non-functional world. It's almost like a financial attribute to a story that the business isn't really talking about, because they're like,
well, I'm not going to bother you with all the calculations on the financial upside of the blah,
blah, blah. We're just going to say, build this button this way. And so there is, you're even,
you know, taking risk and abstracting business sort of, I want to call them non-functional
context about the business from a story. And that would be interesting too. But let's stick
to performance as you asked. I think if you have, if you're lucky enough to get some explicit
objectives, response time, concurrency, throughput, number of users, or even to scale the system,
if it's a performance story specifically, then you don't have to overwhelm people with all of the data in the feedback loop.
You could still gather all of that data.
I mean, you look at the infrastructure of the Dynatrace agents, and I mean, all the data is there.
But you may only choose to pull certain values back into Jenkins or back into the dashboard that the developer or the development team,
whoever, all the people involved, that they would see.
And my common suggestion to people is if you publish a number of some kind and nobody cares,
no one changes what they do based on that number, then stop publishing the number. But I think there is some already preexisting structure around sort of the simplicity of Jenkins.
Go do this task and give me a return code or return number.
It could be a threshold on response time.
It could be number of threads of requests per second.
It could be things like that that are fairly finite to begin with.
And so I think that's how you would do it. Based on the objectives in the story,
you'd pick the few performance metrics or performance measurements that would make the
biggest difference for that particular story. And I think some of the tooling and the way we
write stories or story attributes helps to minimize that number. Different than triage
and like meantime to failure and triage and stuff that you would see in prod
where it's a needle in a haystack and you're going through all this stuff.
It's a very different kind of way of getting that feedback early.
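To make that concrete, here is a minimal sketch of the kind of CI step Mark describes: compare a couple of measurements against thresholds that came from the story's objectives, and return a non-zero exit code so Jenkins (or Travis, TeamCity, Bamboo) fails the stage. The metric names, threshold values, and results file are illustrative assumptions, not taken from any particular tool.

```python
# ci_perf_gate.py - minimal sketch of a CI performance gate (hypothetical names and paths)
import json
import sys

# Thresholds taken from the explicit objectives in the story (illustrative values)
THRESHOLDS = {
    "response_time_ms": 500,      # fail if slower than this
    "requests_per_second": 100,   # fail if throughput drops below this
}

def main(results_path="perf_results.json"):
    with open(results_path) as f:
        # e.g. {"response_time_ms": 620, "requests_per_second": 140}
        results = json.load(f)

    failures = []
    if results["response_time_ms"] > THRESHOLDS["response_time_ms"]:
        failures.append(f"response_time_ms={results['response_time_ms']} exceeds {THRESHOLDS['response_time_ms']}")
    if results["requests_per_second"] < THRESHOLDS["requests_per_second"]:
        failures.append(f"requests_per_second={results['requests_per_second']} below {THRESHOLDS['requests_per_second']}")

    for failure in failures:
        print("PERF GATE FAILED:", failure)

    # A non-zero exit code marks the build step as failed in the CI server
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()
```

The point is exactly what Mark says: the gate stays small, a return code driven by the few measurements the story actually cares about, not a dump of every metric collected.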
I think what we would try to do and what we built into the product is,
as you said, we capture a lot of metrics,
but what we think is necessary
is actually to identify regression.
So if a code change actually has a negative impact, that's why we baseline these performance
metrics where it's not necessarily response time, which is one aspect of performance.
But when you talk about unit or integration tests, then it's more things like how many round trips do we make
to that new microservice? Do we call it once or 50 times? How many objects do we allocate?
And then we basically baseline these numbers. And if a developer checks in code and triggers the
build, and this number goes from 20, and it has always been 20, but now it goes up to 30 or 40,
then we raise the flag, but the developer
doesn't have to look at all the 20 metrics all the time.
The developer only looks at the regression, and we bubble that up.
So this is one aspect.
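As a rough illustration of the baselining Andy describes, the sketch below compares the current build's metric counts against a stored baseline and surfaces only the ones that regressed, so nobody has to look at all 20 metrics on every build. The metric names, numbers, and tolerance are made up for the example and are not tied to any specific product.

```python
# baseline_check.py - sketch of per-test metric baselining (illustrative only)

# Baseline captured from earlier builds of the same test
baseline = {
    "db_roundtrips": 20,
    "service_calls": 1,
    "objects_allocated": 1500,
}

# Metrics measured for the current build of the same test
current = {
    "db_roundtrips": 35,        # jumped from 20 - this is the regression
    "service_calls": 1,
    "objects_allocated": 1510,
}

TOLERANCE = 0.10  # allow 10% noise before flagging


def regressions(baseline, current, tolerance=TOLERANCE):
    """Return only the metrics that got measurably worse than the baseline."""
    flagged = {}
    for name, old in baseline.items():
        new = current.get(name, old)
        if old > 0 and (new - old) / old > tolerance:
            flagged[name] = (old, new)
    return flagged


for name, (old, new) in regressions(baseline, current).items():
    print(f"REGRESSION: {name} went from {old} to {new}")
```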
And the other aspect, which is built into Dynatrace 6.5, not only do we look at metrics,
but we actually look at patterns.
So we identified, I think, like 15 or 20 common problem patterns.
And obviously, I have to bring up the number one that I always love.
It's the N plus 1 query problem, either with the database or through microservices.
So we basically say, hey, developer, in that test, we never detected the N plus 1 query problem.
But all of a sudden, since the latest code change, we see the N plus 1 query problem to the database.
So you change something.
So there's change to behavior.
And you're now introducing an architectural pattern that we wanted to make you aware of because maybe you did it unintentionally.
And that's something we try to do, basically make it easier, because if you want to accelerate, you need to make it as easy as possible for everyone. Maybe a lot of people have never dealt with performance, but you make everybody an expert instantaneously, in order to accelerate. And that's the way we try to do it.
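For readers who have not run into it, the N+1 pattern is the same query repeated once per record instead of a single batched call. A rough sketch of how a test harness could flag it from captured SQL statements might look like the following; the normalization and the threshold are simplifying assumptions, not how any particular product implements the detection.

```python
# n_plus_one_check.py - sketch of flagging an N+1 query pattern from captured statements
import re
from collections import Counter


def normalize(sql: str) -> str:
    """Collapse literal values so 'WHERE id = 7' and 'WHERE id = 8' count as the same statement."""
    return re.sub(r"(=\s*)('[^']*'|\d+)", r"\1?", sql.strip().lower())


def n_plus_one_candidates(statements, threshold=10):
    """Return normalized statements executed suspiciously often within one test transaction."""
    counts = Counter(normalize(s) for s in statements)
    return {stmt: n for stmt, n in counts.items() if n >= threshold}


# Statements captured for a single test transaction (hypothetical data)
captured = ["SELECT name FROM users WHERE id = %d" % i for i in range(50)]
captured.insert(0, "SELECT id FROM users WHERE active = 1")

for stmt, n in n_plus_one_candidates(captured).items():
    print(f"Possible N+1: executed {n} times -> {stmt}")
```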
Yeah. And accelerate the identification and resolution or remediation of the performance
problem as soon as possible. And so it's interesting, you know, if you just, you know, hand it off to the other performance team and they're going to run their tests, and Brian and I are using LoadRunner over there and away we go,
and we say, hey, response time looks fine.
Forget the fact that it's a scaled down environment and all this kind of stuff.
For all we know, there is an N plus one problem in the background that no one knew about.
But from the old way, sort of the slow way, I'll say, you know, I didn't see a performance response time SLA.
Brian said the test passed.
So we gave it a green light, when actually, the entire time from the minute that code was written, those anomalies were in there.
And I think if I understand you right, Andy, it's a dynamic list.
And there is sort of a precedent in the scientific world, in statistics, to just say, is it statistically significant, this change, this difference?
So let's say it goes from making one database call per record to breaking out a couple more. It's not a severe N plus 1 problem, it's making two or three more database calls. You might not flag that, because it's not more than a 15% difference or something like that. But if it went, like we've seen with some of the N plus ones, from making one or two database calls to, boom, making 10,000 more requests and round trips, wow, that's way over a 15% difference in that metric. Now we're going to bubble that up, prioritize and sort that list, and say, hey, pay attention to these things. Red flags.
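The "is it significant" filter Mark is gesturing at can be reduced, for illustration, to a simple rule: only bubble a metric up when the change is both large relative to the previous value and non-trivial in absolute terms, so two or three extra calls stay quiet while an explosion of round trips gets flagged. The 15% cutoff and the absolute floor below are illustrative values, not anyone's official defaults.

```python
# change_filter.py - sketch of the "is this change worth flagging?" rule (thresholds illustrative)

def significant_change(previous: float, current: float, min_relative=0.15, min_absolute=5) -> bool:
    """True when a metric moved enough, relatively and absolutely, to be worth bubbling up."""
    delta = abs(current - previous)
    if previous == 0:
        return delta >= min_absolute
    return delta / previous > min_relative and delta >= min_absolute


# Two or three extra calls: below the absolute floor, stays quiet
print(significant_change(2, 4))          # False
# One or two calls exploding into thousands of round trips: flagged
print(significant_change(2, 10_000))     # True
```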
Yeah. And if you think about feature teams and development teams that actually talk to the ops team and then actually say, hey, we just introduced this regression, or we introduced a feature change that means we're calling the database now twice instead of once.
How often is this used out there?
Is only one person per day using it or is it 10,000 people per hour using it?
That also kind of the feedback loop from production, understanding which feature is used how often,
also allows you to prioritize.
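A tiny sketch of that prioritization idea: combine the regressions coming out of the pipeline with usage counts pulled from production monitoring and rank them, so a small regression on a hot path outranks a bigger one on a feature almost nobody uses. The endpoints and numbers are hypothetical.

```python
# prioritize.py - sketch of ranking regressions by how often the feature is hit in production

# Regressions detected in the pipeline (hypothetical)
regressions = {
    "/checkout": "db_roundtrips went from 1 to 2",
    "/nightly-report": "db_roundtrips went from 1 to 10",
}

# Usage pulled from production monitoring (hypothetical counts per hour)
usage_per_hour = {
    "/checkout": 10_000,
    "/nightly-report": 1,
}

# Most-used endpoints first: fix the hot path before the once-a-day report
ranked = sorted(regressions, key=lambda ep: usage_per_hour.get(ep, 0), reverse=True)
for endpoint in ranked:
    print(f"{endpoint} ({usage_per_hour.get(endpoint, 0)}/hour): {regressions[endpoint]}")
```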
Yeah.
Right?
And I also think, too, you'd have to take into consideration
how much extra time that adds to the transaction.
I mean, in general, if you're going from one to two,
you would expect only a small response time change,
if any, and it might get masked by improvements in other places.
But kind of going back, you know, with that idea, going back to what Mark said earlier,
as far as, you know, picking a small set of metrics and, you know, only publishing the ones
that have an impact, you know, I might debate you there a little bit, Mark, and say there's a larger set of metrics that should be collected, but they should be hopefully automatically monitored over time so that no one's looking at those metrics unless there's a change.
And then when that change occurs, it gets brought to the forefront.
So maybe you have an N plus 1 query going from one to two. But, as maybe Andy said, if that gets bubbled up, saying, hey, there was a change here, you also have the statistics that Andy was talking about: okay, but this only gets hit like maybe twice a day, so we really don't need to care about it. You know, I would almost say capture the data, but be smart about what gets bubbled up.
And that, I guess, would be the trickier part.
But you're there, obviously.
Yeah, you're absolutely right.
And forgive me, Brian.
I do make the assumption that memory is cheap and storage is cheap nowadays.
So you're going to have all the data there.
And if you don't, you've really screwed up the whole plan for gathering these things.
So I just make the assumption you've got loads of data. It's just figuring out the two different parts of the equation. One is the epistemology of a bottleneck,
meaning, is there a problem? And does the problem exist? And it's almost binary. But, you know,
you're studying how do we know that a problem is or is not there? And all those other metrics, to your point, Brian, would be getting into the heuristics to understand why it exists and what do we know about the context of its existence as you would right click and drill down into that world.
So, yeah, you're good to call me out on it just because I do make the assumption, dudes, we have all the data.
It's just trying to figure out how to simplify it and thus accelerate it. Right. Because it does get tricky. If that test does pass,
as you were talking about the flow, and I tested it and green-lighted it, and you push it out to
production, and then it blows up, right? Because no load test is ever going to be an exact replica
of production. And you always find the weird cases or might just even be unpredictable cases out
there in production.
But, you know, having us know at least what those changes are going into it gives the team the ability to look at: what's the change here, and what is the likelihood that, although it didn't flag anything in our load test, it's going to trigger something in production?
And if we take that database example again, oh, this is something that only happens once during a cron job during a day, fine.
You know, we could take that risk.
There's another shift that I may have suggested to you guys before that I've seen from our brothers and sisters in the security world is the nomenclature that they use about vulnerabilities.
And it's sort of the risk.
And the analysis or the description that they would say is, you know, we found a security vulnerability.
And it may or may not be exploited.
And it may be probability-wise, how often would this – what's the chance of this being exploited?
And if it were exploited, are there any secondary or tertiary effects? From that language around a vulnerability, we can learn a lot from the security side of, you know, static analysis and other forms of dynamic analysis.
And I think, Andy, your example of, you know, seeing what changed in these calls is a dynamic analysis part.
In the performance world, like why haven't we taken those semantics about a vulnerability and brought those forward into a development cycle as well?
I think it's just a mindset, too, right?
Looking at things as a vulnerability as opposed to a release stopping problem.
Right.
Yeah. Conceptually, if you're saying, hey, we see this and we recognize it,
but we can evaluate it and say this is a very low-risk vulnerability,
as in the security kind of a world where, okay, the chances of this happening are very, very slim,
and we can now make a cost-based analysis to decide whether or not we want to actually put in the time and hold things up to resolve this, or if it's something we come back to in the future,
or if it's something we just cross our fingers and say, this is low probability.
But it changes it from binary.
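Borrowing the security nomenclature the way Mark suggests, a performance finding could be scored by likelihood and impact instead of treated as a binary release blocker. A very rough sketch, with purely illustrative thresholds:

```python
# perf_risk.py - sketch of treating a performance finding like a security vulnerability:
# score it by likelihood and impact instead of making every finding release-blocking.

def risk_level(hits_per_day: float, extra_ms_per_hit: float) -> str:
    """Very rough likelihood x impact classification (thresholds are illustrative)."""
    daily_cost_ms = hits_per_day * extra_ms_per_hit
    if daily_cost_ms > 60_000 or extra_ms_per_hit > 1_000:
        return "high - hold the release or fix now"
    if daily_cost_ms > 5_000:
        return "medium - schedule it, track it in the backlog"
    return "low - accept the risk, keep monitoring"


# A regression hit once a day by a cron job vs. one on a busy user-facing path
print(risk_level(hits_per_day=1, extra_ms_per_hit=200))        # low
print(risk_level(hits_per_day=50_000, extra_ms_per_hit=30))    # high
```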
Right.
Yeah.
Yeah.
The assessment.
And I think, to your point, Brian, it's like if you're accelerating in the car, there's always going to be these things. We've all done load tests where there's this little spike. Should I pay attention to it? Or there's this thing that seems like some blocking, but it didn't occur again. Clearly I don't have all the information. I see the symptom of it, I see the external part of it. But am I going to go full-on, hit the brakes, go into an ABS screeching halt, because of this, what could be an outlier? It could be an anomaly, an artifact from the tool, or something else in the system under test, the environment.
And so I guess that's part of acceleration as well, is knowing whether you should tap the brakes or whether you should come to a hard stop or ignore the brakes completely.
Keep your foot on the gas and go faster and faster and faster. And I think that's part of the dynamic capability you described, Andy, of saying, hey, something
did have a statistically significant change in the number of calls, the number of GC operations,
number of objects.
And this is significantly different.
Let's make sure you know about this.
And it could actually tie back to
static analysis or even a manual code review to say, yep, we understand. Yep. Check. We know that.
That's good. We expect that. Versus being surprised in production when all of a sudden GC goes through
the roof. Yeah. And I think the reason why this is very important is if you look at developers and
you develop code, whether it's, you know, going back to our previous episode where we talked about the legacy apps,
or if you now build something into cool new apps on platforms like Cloud Foundry, on Spring Boot, whatever you use,
then you're building code, a very small piece of code actually,
but it is sitting on an ecosystem of old code,
legacy code, frameworks, a PaaS environment, and all of a sudden your 10 lines of code are actually executing hundreds and thousands of lines of code of underlying frameworks.
So you're responsible for everything that happens, but you only control seemingly a
very small piece of it.
Now, having this analysis, and if you make a code change in your little code,
and that has an impact in the underlying frameworks and the services you call,
I think that's the key thing, too, that you understand what the impact is to the overall system.
And especially with a lot of the frameworks out there,
they're all basically abstracting a lot of complexity away.
They can be configured through config files or config services.
If you make a change, you want to understand what that change really does.
And that, you know, Andy, that can be like quicksand.
You know, like there's a lot of, I can't see what's underneath the surface in this framework.
And I trust the community, or I trust the vendor, you know, just like with any ISV, we trust that somebody else tested this and what I'm doing on top of the framework or in the framework is fully compatible.
And oftentimes it's not. And, you know, when you're in that quicksand, if it's not visible, then it slows everything down. I mean, I've been in the dev sprint where you're going to push on Thursday and you're working till three in the morning on Tuesday and Wednesday because we just haven't cracked the nut. And then the frustrating thing, when it comes to this acceleration concept, to me is, I hate it, I've seen developers get let go abruptly, testers even, where you get all the way to production and you dig through the haystack and you find the needle.
And then if you describe all the metadata about that bottleneck, that problem, you're like, well, you know, we could have found this way, way, way back upstream. And I think that's the idea that this particular customer is thinking about: we want to not only accelerate how often and how fast we can get this feedback, but we want to make it more visible, and make it visible sooner, so that you don't have to be a master. You know, the old days, like me, I was like, yeah, I know everything. Remember, Andy, we did, here's the top 25 counters for .NET and you must monitor them, and the old monitoring stuff back in the tools. Now it's like, well, there are hundreds of thousands of different monitors all across all the different components. No one is a master of all of them, you know. But the tools can automate it, they can discern, hey, here's a particular counter that hasn't changed in the last six months of builds, and now it's way out of tolerance from what it was.
What's, whoa.
And we end up with systems that are no longer monolithic,
but they potentially talk with 10,000 different processes
and even more Docker containers spread across infrastructure.
So this is also what we, I think not only we,
but a lot of the tool vendors also try to do,
try to apply artificial intelligence on all of the data we have.
And instead of sitting there as a
performance engineer or a developer or an operations guy to look at 10,000 metrics,
trying to figure out the correlation and what's abnormal, we need to have a different approach
as well from the tool vendors to say, we are applying some artificial intelligence on the
metrics, but the way we can do it better is
actually by understanding how these components of your software talk with each other, how they are related to each other, and how they were talking with each other before you made the latest deployment change, before you sent out that email campaign and got 50,000 more people on the page. So this is stuff that at least we try to solve,
and I'm sure other vendors are trying to solve this as well.
Yeah, and then, of course, there's iRobot,
and then the robots become self-aware.
Exactly.
Isn't that...
It's no longer artificial intelligence.
It's just intelligence.
Or becoming self-aware.
Isn't that another...
Yeah.
Isn't there another big movie with that, with some guy from Austria?
There's the one with the...
Yeah, there's the one with the...
That metal guy.
He's metal, right?
What?
The metal guy, and he goes, I'll be back.
Oh, yeah.
Schwarzenegger from your home country.
I was just trying to be stupid about it.
Well, there's the one with Haley Joel Osment, the Spielberg movie.
Oh, AI.
AI, the movie, which had these creepy mommy personality disorders.
The little kid wanted to find it.
The robot wanted to find his mommy or something.
And that just got sick after the fourth time I heard him say it.
I'm like, I can't watch the rest of this movie.
Wasn't that the one that was a Kubrick film that Spielberg ended up doing because Kubrick died?
Yeah, it was something like that.
But it was the Blue Fairy.
I want to find the Blue Fairy.
And I'm like, oh, my God.
Shut up, kid.
The Blue Fairy.
I mean, you're a robot.
I'm not going to hurt the robot's feelings.
Does a robot have feelings?
I don't care.
Shut up about the Blue Fairy.
There's no Blue Fairy.
Asimov has a lot to say about that, doesn't he?
Hey, let me, guys, let me get you back from fairytale land to the real thing.
You brought up AI.
I just went with it.
I know.
I know.
And, you know, where I come from, it's not AI.
It's A-O-O-A.
Oh, that's true, yes.
Yeah.
So to kind of sum it up and to kind of conclude, continuous acceleration of performance means faster, tighter feedback loops because we want to make sure that developers immediately understand if they basically made a bad code change that potentially breaks things, right?
Yep.
That's great.
And I think what I like a lot is, you know, the shift left performance concept, leveraging
existing functional tests, but then also going into the other side of the pipeline, looking
at production data.
And when you deploy something, you understand what is the impact and also use that data
to prioritize what you're doing next
like the example we had earlier: does it really matter if the one feature that is executed once per day is now executing 10 more database statements? No, nobody cares about it.
Yeah, and it may not be of significant impact.
Yeah. And again, this is sort of that individual experience of being an engineer who is used to being given a whole phase, where you have veto on the release gatekeeping, classic quality gates, and all this kind of stuff from the old ITSM world and previous methodologies before. Then all of that
kind of gets set aside as we're like, look, we can know all these things about performance in
the new world, and we can gather this information without being hung up on, you didn't test
in a prod environment, you didn't use my magic tool, and you know, all those kinds of things
get set aside, and it gets built in a new way using the new tools.
And to me, that's the opportunity for any old dog to learn new tricks. And I think that's cool.
The other thing I wanted to say is, you know, only because there may be people of different experience levels listening,
just to reiterate the fact that when we're talking about performance, we're not talking about, you know,
there's the difference between load and performance, which we kind of talked about in another show with you.
But performance is not necessarily load, and oftentimes it's under no load. So obviously, any kind of load that you can shift left is always good to do.
That's usually a lot more difficult of a task, but we're not talking about trying to,
hey, I've got to go figure out a way to shift my load test all the way to the left.
I have 50 scripts that I have to run and maintain.
No, this is more about understanding the performance of your code based on key metrics and values and all that
and tracking those throughout the cycle
so that when you do get that into the load test,
you can more easily discern, okay, how does this translate into volume?
But it's not trying to push that full-blown load test
all the way to the left.
That's right.
Cool.
Cool.
All right, well.
Can we wrap it up?
Yeah, Andy has to do the closing now,
because he did this.
Actually, I want to have the last word,
because I have to say a big thank you to somebody,
but I want to give,
so I want to give you the chance first to say goodbye to the audience before I say my final thank you. Mark.
What do you want?
You say, thank you. Thank you, listeners.
Show people you have gratitude for them listening to you.
I have an incredible amount of gratitude for the fact that people are listening to this.
Awesome.
Thank you, everybody.
And I do want to... Andy, were you planning on giving a
shout out to how we're recording this or no?
No, you can
do that.
I just want to thank Zencastr.
This episode and the previous one with Mark, we are trying our first foray into Zencastr,
Z-E-N-C-A-S-T-R, which allows us to do high-quality podcast recording online,
which then sends me the audio that I can mix offline later.
So it just makes it a heck of a lot easier so far than hooking up all the Skypes and routing the audio within the computer. So we're hoping, good stuff so far. I've been enjoying this. I'll see how it goes when I go to mix it, but it's been a pretty awesome experience, I think, for all of us, right? So big shout-out to them and what they're doing at the site. Mark, there's a person behind this, right?
Yeah, so it's Josh.
Josh on the web.
And he's working with, I think he's got a handful of other contributors there,
mostly Tim.
But they're in beta, and he's just about to go big time.
So special thanks to Josh, man.
Nice work.
Yeah, if you're doing your own podcast.
Zencaster.
With one Z.
No, with one Z.
Caster. I was trying to be like, hey, raceway park.
That's for New Jersey people.
Anyhow, if you're from New Jersey, you know that.
That's right.
Sorry.
Can I say my final thank you now?
Yes.
So Andy's.
Yes.
Yes, Father Andy.
Thank you.
Thank you.
We will allow it now.
Thank you.
So I want to have a special shout-out to Wolfgang Gottesheim.
So he's my colleague, and he actually made this podcast recording possible
because he was gracious enough to give me his hotel room key,
let me in here while I'm at JavaOne,
and I will be a good guy and will not mess up his room even more.
It's a little messy already, but I guess there's actually not much damage I can do here.
But thanks, Wolfgang,
for that, and
thanks, Mark and Brian,
for a cool conversation.
And Andy, are you going to let him know that you dipped his
toothbrush in the toilet bowl before you left?
Don't say that.
Don't spoil the fun.
Thank you for
giving Andy the room.
Give us a chance to have a fun recording today. Thank you, everybody. Again, I'm @emperorwilson if you have any feedback. You can tweet hashtag PurePerformance at Dynatrace, or you can email us at pureperformance@dynatrace.com. You also have @grabnerandi. And then there is the what?
At Mark underscore.
Fill me in there again.
On the web?
No, on the.
Mark on task.
It's Mark underscore on underscore task.
There you go.
So thank you, everybody.
Right on.
Thank you.
Bye. Thank you.