PurePerformance - 031 Continuous Performance Testing with Rick Boyd

Episode Date: March 27, 2017

We got Rick Boyd ( https://www.linkedin.com/in/richardjboyd/ ) – Application Performance Engineer at IBM Watson – and elaborated on what Continuous Performance Testing is all about. We all concluded ... it's about faster feedback in the development cycle back to the developers, integrated into your delivery pipeline, as compared to delivering performance feedback only at the end of a release cycle. We discussed different approaches on how to "shift left performance" with the benefit of continuous performance feedback!

Transcript
Starting point is 00:00:00 It's time for Pure Performance. Get your stopwatches ready. It's time for Pure Performance. I believe this is officially number 31. Andy, how are you doing? I'm good, a little tired. Yeah, you just came back from a whirlwind tour of the southeast corner of the globe. That's true, yeah.
Starting point is 00:00:48 And my body is still 13 hours in the future, I believe, because it's been a 13-hour difference at the last stop that I did in Singapore. And I just got back last night. But now coffee keeps me awake, so it's all good. I believe it was in a William Gibson novel, Pattern Recognition, where the main character had to travel a lot and talked about time travel and losing, I forget exactly what it was, but the whole effect of that on the body. But also, just in more of a scientific or sci-fi sense, the impact of leaving your body in the future and all this kind of fun stuff. But glad you're back. Um, we just also, uh, although it's been about probably a month or so when this airs, we wrapped up Perform, had a lot of fun there and
Starting point is 00:01:36 you were running around in your leader hose in with your UFO. Um, as, as Mark Tomlinson and James Pulley and I were doing the live podcast. So that was very fun to see everyone in a while just swing by in some outfit. Yeah, and I'm actually sorry that I told you guys. I'm sorry that I couldn't be more active in podcasting, but it was a crazy, crazy busy week, a good week. It was, yes. But, yeah, I had to run around a lot. And if any of our listeners were there, thanks for coming.
Starting point is 00:02:03 Anyway, we have a special guest today, um, one of the only people who ever responded back in the early days of the podcast, when we were doing the, um, the No Prize, the "Know" Prize, if you recall, right? And we only had one person respond, ever, and that was our guest today. And our guest is the one and only Rick Boyd. Hi, Rick. Hi, how's it going? Good. How are you?
Starting point is 00:02:32 I am super excited to be here. You got me. As of time recording, it's the middle of my workday. So I'm just sitting in a conference room. I kick my feet up and just sort of relax a little bit. What was that? Did you just pop open a beer? TBD.
Starting point is 00:02:52 That's a good way to kill some of your Friday afternoon before heading out to the weekend. So before you tell us who you are, a little more detail, I did want to just address the no prize because I never did officially get back to you on your incorrect answer. So the question you tried to answer was what was the first computer that Brian ever used? And your guess was an Apple II Plus, which was a very good guess. That was the first computer I ever owned. I hand me down used Apple II Plus with 48K memory.
Starting point is 00:03:19 First computer I ever used, though, was and I'm not counting calculators. I remember I had one computer science class and the guy was like, well, a calculator is technically a computer. I'm not talking about calculators, but my first computer was the Timex Sinclair. Anyhow, that's that. Rick, tell us who you are. I know you from your days at Dynatrace, but for obviously people listening don't know who you are. So why don't you go ahead and introduce yourself? Yeah.
Starting point is 00:03:46 I mean, so I have been in this industry now coming up on five years, fresh out of college at the time. If your listeners are familiar with the Guardian program, which I'm sure that they are from talking to past Guardians like Eisen Gruber Eisen Gruber and, uh, Brian Chandler, you know, there's probably been some discussion about that, but, um, you know, that's, that's first and foremost, sort of what I characterize myself as, even though I've had a couple of different roles since that time. So as a, as a, a performance, uh, sort of consultant, the performance guy for whoever I'm working with. So I did that for a number of years at Dynatrace, moved on to delivery consulting, starting to travel and do less dedicated work and more short-term gigs around the US. And then this last year, left Dynatrace to go start
Starting point is 00:04:40 working for IBM in their Watson Health division to help sort of build a performance practice in some of these new key initiatives that they're creating. And I think what's interesting is that the roles of the three of us, right? So you were, well, I go in Dynatrace to prospective customers and show off all the cool features and things that we do. Andy, you do a lot more of the experimental kind of things and hardcore research projects and a lot more of the, how would you describe? You know, you got your UFO project. You do a lot of research on user experience, all these other kind of things to help just in general knowledge of performance and then share that as a thought leader within Dynatrace. Would you characterize that as – is that a good characterization? I would think so, yeah. And also, you know, from a more high-level perspective, getting people excited about general trends, whether it's DevOps, which has been a big topic of mine.
Starting point is 00:05:42 But, yeah, you're right. More on some strategic projects, some new technology, some new methodology stuff. And then feeding that back to the engineering team, yeah. Yeah, and then Rick, you were the one who actually
Starting point is 00:05:53 had to execute this stuff, right? So I would promise all these things during an engagement. Andy's showing off all these cool potential things of what you can be and what you can become. And your role was actually, you know, just for people not knowing a guardian,
Starting point is 00:06:07 your role was actually working within the clients, making these things work. Right. So so from a point of view of knowledge, you know, you had to implement all the kind of stuff we talked about and demonstrated and did. Yeah. And I think I think I've literally had customers who were handed off from you and Andy before, too. So anyway, I think you come from a great background there. And I guess today, we're going to hopefully record two shows with you. But the first topic of today's show is something we talk about often, but it is something that I think
Starting point is 00:06:47 there's always a different take on different views on. So it's definitely a world worth going into. And that's just the general idea of continuous performance testing. And so what did you want to kind of talk about on that? Well, I, you know, I, I don't know if you know this about me. Um, you know, we, we, we talk a bit, but we're probably a bit, but probably not to the degree that I talk about things with people. But I'm kind of a blasphemer in the performance space. And I really am just so thankful to have this soapbox that I can stand on to talk about my views about continuous performance testing and how they differ from other people. A lot of things that are still being fleshed out around continuous performance testing, I think that's why people talk about it so much, is that there isn't really an establishment practice.
Starting point is 00:07:36 I was talking to James, James Pulley, a known friend of the podcast about this at Perform. Are people actually doing it? And the short answer, I think, is no. I would say the short answer might be yes, but is, you know, the idea around a test may not be completely fulfilled by how people are actually implementing continuous performance testing. And so we still have to do some exploration. And before you go on, and I can tell Andy's really tired because he's barely said a word. Before you go on, when you're talking about performance testing, right, because we've had these discussions with Mark, who's James's colleague before as well, because there are
Starting point is 00:08:19 some confusing terms out there or interchange, not really interchangeable with terms that people like to interchange. The difference between a performance test and a load test. When you talk about performance testing, are you talking about what people traditionally think of as a load test? Or are you talking about any kind of test where you're gathering performance matrix metrics, whether it's a single execution of a test script or it's under load or any, anywhere in between? Yeah, the latter, I would say so so you know you've got your your functional tests and then over on the non-functional side you've got security and
Starting point is 00:08:50 then the third category which is the the whole of what i'm talking about when i say performance testing so it just when you're looking at the performance metrics while the test is any kind of test is being run right yes okay yeah do you think this is true for everyone out there? That's actually interesting because I have a different perspective on that. For me, performance testing means I'm not just taking a functional test and look at performance metrics, but for me I definitely want to bring the application at least into a condition where I have more than one user accessing it.
Starting point is 00:09:27 And then – but it obviously doesn't have to be a large-scale load test. But I want to – I think Mark Tomlinson, he also talked about like his performance unit test that he has where he runs some functional tests. For instance, also using unit testing frameworks, but let's say running it like 10 at a time, just multi-threaded, and then getting some performance characteristics out of it. Again, not a real load test or soak test or stress test, but at least something that is a little more than a functional test. Yeah, I would say that that zeroes in on it a little bit better. When Brian said pulling out performance metrics from functional stuff, I was say that that zeroes in on a little bit better. When Brian said pulling out
Starting point is 00:10:05 performance metrics from functional stuff, I was just trying to be agreeable. See, but and I know this is a topic we're not going to dive too deep back into. But you know, when when when when code gets checked in and a either of unit tests or some any kind of one off test runs on it, you know, I would kind of counter-argue that you could call that performance if you're looking at things like what we even talk about during the automation phase as far as how many database executions were there, what was the CPU? I call this something differently.
Starting point is 00:10:36 I would probably call this like an architectural validation because for me the metrics that we always talk about are countable metrics mainly like number of database queries number of round trips number of log messages created and these are for me architectural metrics that tell me how resource intensive an operation is and obviously that can be eventually be a performance or scalability issue or relate to or tell me about a performance and scalability issue or relate to or tell me about a performance and scalability issue under load okay um but you know obviously you know we're throwing around terminology and everybody has a different perspective yes but for me for me i will
Starting point is 00:11:16 probably if it's a functional test and i look at at these architectural metrics i probably would call it i'm doing architectural validation by looking at these metrics, and they can lead to conclusions that tell me more about performance and scalability when there is more load hitting the system. Very good. Very good. I like it. I don't know, Andy, if you remember talking about this at length at StarWest last year, but, you know, you and I had this discussion of, like, these should be the tests that you're already using for functional stuff. Like, we want to stress that that kind of a thing shouldn't be any additional script writing or any additional test writing. We're just adding architectural validation to assets we already have. Exactly. And I think, I mean, it's obviously something we built with the Dynatrace test automation feature
Starting point is 00:12:07 where we can capture these metrics. And then I think at StarWest, Rick, correct me if I'm wrong, but did we also talk with Adam from Capital One? He was there and I think he showed and he talked about that they reuse, just as we do internally at Dynatrace, they're reusing their Selenium scripts,
Starting point is 00:12:25 but then running them on multiple Docker containers in parallel. Yeah, I believe you mentioned that in the podcast as well that we had with them. It's an interesting approach. I would have some issues implementing that here for a number of reasons, not technological ones, but just the way that we do things strategically. I don't know if that's interesting enough to record about. Maybe if we have time. So about continuous performance testing.
Starting point is 00:12:53 So what's the challenge with it? What's your definition then? Sure. So I think traditionally, like, you know, I could say this traditionally in terms of both the modern and the classic, you know, performance test, you know, and by performance test, I guess what I mean is seem to slow down the Agile pipeline, you know. or in lockstep with my code as things are going along and validating that we haven't had a performance regression from build to build or week to week or month to month or whatever it might be. But I think what oftentimes people lose sight of when they start to implement continuous
Starting point is 00:13:57 performance testing is that every test you execute, every single test you execute should be framed in the form of a science experiment. And if it's not, then you are wasting your time and your company's money. You're trying to answer very specific questions from these performance tests. And that's why you've got, you know, classically you're doing production load modeling. And then you're using some scaler of that over some duration because it meets business requirements um but but but at the same time if i can just boil that down to okay what is what is not significant is is getting you know a certification on some fixed
Starting point is 00:14:39 scenario or what's not significant is is saying how many users I threw at it or how many hours it ran for. But really, what can I confidently say was the business concern that we addressed? We can maybe start to rethink how we do performance testing or the profiles of the performance tests we run in order to answer those questions more quickly. And again, I think duration of test is often a concern when you, when you go into an agile shop, that's, that's really implementing continuous delivery is like, how, how can I fit a performance test into my, into my life cycle
Starting point is 00:15:15 design? If it's just going to slow me down. And I think in, in the little outline of, of, you know, kind of what we're going to talk about today, you put in three points, right? And I think in the little outline of, you know, kind of what we're going to talk about today, you put in three points, right? And I think they're worth mentioning because it almost is the framework that should guide a test, right? So you had said, you know, earlier you talked about it should be a scientific approach. And then you asked, you know, what question am I trying to answer with the test? How quickly can I get my answer? And the third one, which I think is a whole nother kind of topic too, is production load modeling worth it? But those first two in particular, I think kind of drives this idea that you're talking about of new approaches
Starting point is 00:16:00 in performance testing is number one, knowing what you're trying to test. And in the past, there were ideas of like a soak test or how many concurrent users, right? But when those kind of tests are usually harder to set up, harder to maintain and take a lot longer to run. So correct me if I'm wrong, your supposition then is to say, well, let's take a different approach to testing and say, what do we need to test for? Right. Yes. And, and have that all figured out first, and then we can write, have our tests for that. And then the, how quickly can I answer, get my answer is always, I guess that's a hard, in your experience, do you find that's a hard one to deal with? Because, you know, the whole, my whole experience with, with testing systems is sometimes, okay, I run a one hour test and everything looks good. And just before I'm about to kill the test,
Starting point is 00:16:55 something blows up, right? Like five minutes after the one hour. So how, how do you approach, how do you approach that? I guess. Yeah. So I've, uh've uh you know classically i've been told that just sending an insane number of users and transactions at a system and trying to knock it over is not beneficial um i i think i think that there there may be some some some answers that we can get from a 10 minute you knowminute capacity load sort of, of, um, you know, singular resource I have to hold on to, or any deadlocking that might happen in, in, in database records or something like that, especially if my tests are written well. Um, you know, uh, the, the other things that we might, we might ask ourselves is what is the, the capacity requirement for the servers. And in those cases, you don't need an eight-hour test.
Starting point is 00:18:09 I would say you don't need like a business hours level test. You just need to know what your peak load is and to run it for long enough to validate that there's no memory leaks and that you have enough service provision. So I do recognize that I have a lot of advantages and opportunities where I work. I've been cut a performance lab, which is a complete replica of one of our production stacks. A lot of places don't get that, although they certainly should be railing on it until they do, because otherwise you're really not performing a production validating test. But yeah, the idea of the duration of time that my test is going to take is the biggest concern for people who are not performance testers in terms of how am I going to affect the pipeline. Our continuous delivery manager here is looking at that.
Starting point is 00:18:59 Our apps engineers, our build engineers here are looking at that. And if you think of a tool like Hygieia or even Jenkins Pipeline, it breaks down those pipelines by stages. And then everybody's job is to sort of reduce the time spent in the longest running stage or try to find some way to make it more efficient. And undoubtedly, performance testing is going to be one of the longer ones. And so you've got to think about more strategically, what is it I'm really trying to sort of curb in terms of a risk? And then what is a load profile I can use to reasonably say that I have tested for that? Here's a thought. I think you obviously bring up a great point. And this is also what we've discussed with Adam from Capital One when they built the Hygeia dashboard that they saw that the biggest problem
Starting point is 00:19:51 is obviously performance testing, where on average, the performance test, let's say, runs an hour. And now you have so many tests coming in. And we're basically slowing everything down because we are all relying on this test environment that we all share and i have for me continuous performance testing is actually getting quick feedback on uh on what's the changed performance characteristic of the code that i'm now going to commit and one of the things that i thought especially true for modern application in architectures where I can, let's say, deploy and change individual services independently from each other. Wouldn't it make sense and wouldn't it be cool and possible if we have a load environment like yours, which is a replica of production that is constantly under load. And then I give developers the ability to say, well, if you have a code change,
Starting point is 00:20:57 then you just deploy your new version of your service into that environment that is constantly under load anyway. And after you deploy it five minutes later, I give you the first quick feedback on how the performance characteristic has changed. And I think that's, for me, a great way for continuous performance testing because basically I'm continuously testing an environment anyway. It's my staging environment or my production replica. And my developers can use it to deploy changes into that environment. And obviously, if have this the right architecture where i can modify and deploy you know a version two or version three of a service and then the system and the orchestration layer takes care of that if i can do that then i can immediately give
Starting point is 00:21:37 very fast feedback on the change performance characteristics after i deploy a new version and this doesn't mean i need to run an hour or an eight-hour load test, but within minutes, I can see the difference. And this could then obviously be, I think, well integrated into your pipeline. And you can say, hey, I have this built. I did some unit tests. Now I'm deploying it into this strange environment. And then five minutes later, I asked the environment,
Starting point is 00:22:03 now, do you see that the world has gotten better or worse from a performance perspective and if it's worse then probably don't even deploy it into another environment where we run heavy stress tests and heavy load tests sure i i i would i would counter uh i i do like that idea i think it probably works for many people uh for us we have a mixture of environments and technologies and then also use cases for performance testing. So it's not just going to be staging. And I think that was that was part of part of the hygiene question, too, is like if if we are just talking about performance all the way on the right, you know, we have to do a lot of refactoring to get it back. So my thought is to approach what you're talking about. I think that that works well. I
Starting point is 00:22:54 think we may even implement some model of that. But if I could talk about my approach, maybe we can do some compare and contrast. And there's two major bullets left in what we had outlined for this podcast. I think both of them speak pretty well to what you're talking about. So the first one is your strategy for automation. For us, what we have now with our new projects is that when a commit is done on a branch of a piece of code, that code is built. It's deployed into a Docker container. That Docker container is then pushed into a Kubernetes namespace, which is essentially an isolated environment named for that branch. From there, automatically, we kick off functional and performance tests against that and report back through the Jenkins build status back to the commit of a pass or fail.
Starting point is 00:23:47 Now, obviously, green and red is not really fuzzy enough for performance insights, but imagine when that branches, you go for a pull request, you go to merge that back to master, and you open it up and you say, you know, Frank and Johnny and Sarah, go ahead and review this code. But in addition to those reviewers, I'm going to add, you know, perf service account or tiny Rick or whatever, whatever you want to name your service. I kind of want to name mine tiny Rick and Morty. Yeah. Rick and Morty. You ever see it? It's pretty good. You should watch it. OK.
Starting point is 00:24:27 After you're done with Star wars clone wars okay anyway so the uh the idea there would be uh back inside of that that pull request you've got a bunch of people commenting on the code and one of those comments could be from an automated system that says these are your regressions that's cool yeah specifically this branch before and then and then you have to you have to be the these the engineer that says okay that's that's acceptable we'll merge it back in and that's that's before it even goes to master once it goes to master then we can start doing doing all of the deeper level integration tests and higher scale scale load tests or anything like that but we may have already answered a lot of questions that we have. So basically what you're telling me, and I love this, this is a great idea.
Starting point is 00:25:10 So basically you say you run your tests on your branch and then a bot, a performance engineering bot is basically giving you review information as part of your merge process. Yeah, all of these pieces already exist. I'm not suggesting that anybody needs to go out there and write this bot. You have the Jenkins and Git integration. You have the Dynatrace plugin. You have the performance signature plugin. All of that stuff can be you know automated and
Starting point is 00:25:45 and functionally pushed in this way i think the the biggest challenge is process there to say that that that i have a branch of code uh and and it exists out there in an environment automatically without me having to intervene like you have to you have to make some some investments in that but trust me from from what we've experienced here it's well worth the investment no i like i like this a lot yeah i mean sorry i like i like this a lot because because the concept you're right i mean the tools are all there and then the process of merging code back you say obviously i'm looking at all the evidence and there's human interaction that looked to do code reviews but then there is some evidence-based some facts and measure-based checking as well and then the data comes in
Starting point is 00:26:34 from these automation tools from these performance automation tools that are already collecting data in your environments and that's yeah but i like the idea of i kind of envisioned it now that we're sitting all in the code reviews and then as you said andy and joe and jake and and all of them and then rick and morty come in and they say oh from our perspective the performance the performance characteristics don't look that good so don't please don't merge it in yeah that's pretty cool because i i would love to review every pr that we have in our company i have a lot to say about people's code when I'm looking at it. But if I can write a little bot, then I think that, or I can essentially chain a bunch of services together to become my bot, then I'm good with that too. I think,
Starting point is 00:27:16 you know, ops has had their time, build engineers have had their time, and release engineers have had their time. And now it's performance engineers really, really need to be on board with being automation engineers. And that's a complete, you know, sort of edict of the times that we're in now in terms of the technology that we have available to us, but also, you know, the requirements we have to deliver faster and to apply our brains to more complicated challenges as systems get more complex and all of that jargon. And just to take a step back, and Rick, I won't do the accent, but the general idea, what you're saying here is when that code gets checked in and the red and green comes back, you're also sending back some of this performance data with it for the review before, so that before it gets checked into the master branch, um,
Starting point is 00:28:11 there's, there's a set of performance data to review along with it. Is that like really, really high level kind of idea what you're saying? And then you have these tools that, that automate that process. Yeah. And, and there's, you know, there's, I, I am a big fan of, of bring the data to where the people are so i'm not asking people to open up the dynatrace client i'm just saying you are in we use bitbucket the atlassian tool right here as our get server um it's it's excellent i don't know how much it costs so i'm not i'm not going out and praising it but i think it works really well but that's where people are looking at code.
Starting point is 00:28:45 That's definitely where people are doing code reviews is in that tool. And so I'd like to bring the performance level information there. Similarly, you know, I wrote a plug in for JIRA last month to say, you know, here's just just click on this link in JIRA and it will open up the Dynatrace session related to this ticket. Like all of that, I'm very much into the idea of don't give anybody a manual process to go from point A to point B. Right. And you put that up in the community, right? It is. I have the binary on my GitHub. If you look up, it's DJ Ricky B on GitHub. You can definitely find it and download it and try it out. I am working with my employer on open sourcing development. So I think the other things that you have to do with automation, if you can, if you can, and we absolutely can here, is provision new environments for your application. You don't want anything, especially if you've got that environment and use for multiple different applications or different configurations or different builds. If I have a singular performance environment and somebody wants to test their new
Starting point is 00:29:57 version of service A and they finish what they're doing and then somebody else wants to come in and test their version of service B, but the branched development version of service a is still in the environment i have not done an isolated test to tell you what the performance impact is of your of your change specifically and so we do have uh besides the kubernetes environment which is is awesome and infinitely automatable we do have vms which are working to be able to reprovision through Catello and issue their SSL certs and everything, you know, with the click of a button. And that would essentially become one of our pipeline stages as we're doing performance testing, especially in the context of our release.
Starting point is 00:30:37 So that's obviously, I mean, I think this goes into what I said earlier, right? I mean, you have infrastructure as code. You allow everyone automated to deploy a version of a stable environment including their changes of their service on the click of a button or automated through the pipeline then then test it and then automatically get feedback and if people then want to merge changes back then they just look at all the all the evidence and then figure out if this is a good code change or not a good code change. Yeah, and take what you said there. You're asking a lot.
Starting point is 00:31:13 I mean, if I'm just the lowly performance tester who listens to excellent podcasts about application performance, I've got a lot of ideas and a lot of resources, then I may come into a shop where I have a single performance environment and I can't scale for everybody's needs. Well, then you just set up a queuing system, right? You reprovision those exact same servers every time and just say you're next, you're next, you're next. This is something else that I talked about at Perform when I did speak there. You guys didn't mention that, but I was actually a speaker there so i'm on andy's level for a day um uh you know i didn't unfortunately i didn't see your i didn't see your your talk i just heard you were running around like a madman that's that's perfectly fine i'm sure there were more interesting talks at the time anyway and we're talking about it now um you know if if you think about in a
Starting point is 00:32:05 performance testing group, they have this artifact. It's just a Word document that says, you know, what am I going to test? What are my load profiles? Who am I reporting to? How do you want the report? All that stuff. But all of that, all of those could be parameters for a Jenkins job. And then if I just say, don't do concurrent builds on that, then bam, everybody gets their job as soon as it's humanly possible. It runs all the automation for provisioning the environment, for configuring which scenarios to run and the level and the load profile and the reporting and the reporting format and all that stuff. And everybody gets what they want out of the environment without any human interaction from, from me. And that's, that's why, again, I'm trying to skew more towards being an automation engineer so I can do what I love, which is, you know, being more proactive about, about performance issues that we might have any,
Starting point is 00:32:56 any, uh, uh, compile time issues or runtime issues, uh, just related to our environment, just sort of tackling those and removing the bottlenecks. You know, if you think about Gene Kim or the lean manufacturing stuff, just searching for our bottlenecks and attacking it, I think is a strategy that's really well applied to the idea of performance. I think that's probably why you had him speak at your conference a couple of years ago. So the one point you mentioned earlier, I mean, you said not everybody has the luxury of endless resources so that you can run as many tests as you want in parallel.
Starting point is 00:33:32 So as you said, you have basically one environment and then you're making sure that people register when they want to get their test executed. But wouldn't the cloud solve that problem? So why does your employer not give you access to to the cloud and then you can spawn up as many instances as you want in parallel but also shut them down in case they're not needed wouldn't that has ever come up in a discussion that that might actually be more efficient overall and more flexible there may be some some some goodness there um we are the cloud on our
Starting point is 00:34:09 side so it's i can't really speak to most people's conversations but like we are if not the only we're one of very very few uh um cloud providers who actually ingest, uh, PHI personal health information. Uh, so we can't, we can't offload our test data or, or our application code or any, any aspect of, of our system out to, you know, uh, a cloud provider. We have, we have our own managed data center. So we're not, we're not, yeah. So we, we, we are, our processes are, are, are such that, our infrastructure engineering team, a lot of people call them our DevOps team, but as the three of us know, DevOps isn't really a team or a role or anything like that. They essentially are building us our own AWS in terms of our ability to automate and scale and everything. I think the cloud is good if you have test data that you can push out there or you're not overly concerned about data or code in transit. But if you're building something cloud native, absolutely.
Starting point is 00:35:17 Or if you're migrating to the cloud, that's probably where you should start testing because you're going to start to figure out not only what are the performance issues being there but also just what are your what are your how do you deliver there how do you actually push the code out to those to those environments and so if if that strategy is is is coming down the pipe i highly recommend maybe maybe doing performance first there and and really really understanding especially if it's going to be a staging style thing what is the automation around that look like? How can I make this the least amount of headache when we actually pull the trigger on moving to the cloud? Certainly.
Starting point is 00:35:51 But in your case, is this really now the test data is the issue with confidential information? Because I assume test data can be scrambled and you can. I think Mark Tomlinson had a great uh terminology for that he said swiss cheesing my test my test data meaning putting a lot of holes in there uh so that it's still a cheese but obviously it's it doesn't uh it doesn't contain obviously all the sensitive information um so is it really the test data which is in your case in your particular industry also the the code itself obviously that you don't want to upload anywhere publicly and potentially somebody gets their hand on it yeah potentially
Starting point is 00:36:32 because i mean we we actually have a desensitized data set but we sell it so also that leaking would not be a good thing for us either yeah yeah and you know andy gome your your question was a little bit along the lines of where i was going to go with this, because because Rick mentioned also in terms of if you're if you don't have the kind of resources for these larger scale environments that that he does and some of the more fortunate people that companies have, you end up having that one environment that you keep repurposing and you end up building a queue, which kind of goes to what, you know, Adam was talking about, um, in terms of, you know, the bottleneck of performance testing. But if you, if you automate that process, if you automate the, here's the new build, here's the new test and everyone gets in line and, you know, number one, that's going to speed up the process somewhat from the way it was. Right. But I think that can then be used as a leverage point to maybe turn around to back to the company and say, Hey, look, look what
Starting point is 00:37:31 we've done with what we have. Now imagine if you get us an internal or external cloud or some other kind of larger environment to run parallel tests, we can, we now have a measured pipeline of how long it takes all these tests to get through, which we've improved with automation. So we proved that we can automate this process. Now, if you give us the resources and the money to get a better environment or more scalable parallel environment, we can you know, you can you'll be able to prove to the company and show how much you can improve that process and speed things up. Yeah, and likewise, even without the idea of forward thinking on that, you are still just maximizing this resource. You could measure, to some degree, you could measure the success of your efforts as a performance engineer or as a center of excellence for performance
Starting point is 00:38:21 by the percent utilization of your performance lab. Right. You know, that can be one aspect of it. And I think it just kind of talks to the idea of, you know, you might not have the best of everything, but take a step back and look at what you have and try to reimagine what you can do with what you have. Right. Put on a whole different view of it and say, okay, this is just, you know, I'm talking about bare metal servers here, but you know, however you want to think about it,
Starting point is 00:38:51 this is just hardware, right? We're using it this way now, but it's hardware. We can, we can use this hardware almost any way that we can imagine these days, you know, virtualization and everything else. Take, take, take a different approach and take a step back and, and think about how you can become that performance hero and, and switch up something within a new organization. Um, definitely it's possible, I think if you're given the time to do it. So, um, I mean, just kind of, kind of wrapping, wrapping up the topic a little bit here. So continuous performance testing rick what do you think is the biggest benefit of continuous performance testing and why people should do it
Starting point is 00:39:33 i mean it's a it's it's sounds kind of sad because it's it's something that that gets harped on a lot but just the idea of shifting left, finding those performance issues early and, and refactoring in a way that's, that's not costly. Yeah. That's, that's, that's huge. But, but automation is, it is fun and everybody should do it is my second, my sort of secret answer to that question. Yeah. Yeah. Yeah. Because for Brian, what about you? Repeat the question. Come on. didn't you listen to me i kind of felt like i was what's the what's the benefit of continuous performance uh testing faster feedback for
Starting point is 00:40:16 sure you know i mean the number the number one that that's that's the whole game right getting the feedback early failing fast failing failing often, and succeeding as best you can or as efficiently as you can. Yeah, I agree with that totally. And I think what's often maybe what we didn't A, then they deploy it into an environment, into a, let's say, clean build, but their service A has the new version, then they get performance feedback on their service and not performance feedback of their service change, including all the other things that other people have changed in the same week or month or whatever, how long your build times are, your sprint times. So I think this is the critical thing about continuous performance testing, giving fast feedback really on the changes that you have made to your particular part of the software so that you can actually focus on these and not later on at the end of
Starting point is 00:41:25 the month when you do your big performance test trying to figure out wow it's really slow but who could it potentially be we have a thousand code changes that went through the that went into that release so i think continuous feedback is fast feedback but also narrowed down to individual code changes and i think that's the key here yeah i, and again, I would say that I'm a blasphemer. What I want to say is just sort of an ending note, a to be continued, is that that might still not be enough. And you should tune into the next episode to hear why I think that.
Starting point is 00:41:59 Ooh, a cliffhanger. Yeah. All right, well, with that, Andy Andy is there anything else you wanted to contribute before we wrap up okay pretty good
Starting point is 00:42:08 I think Rick did a perfect job in getting hopefully people excited to listen more yes and speaking of
Starting point is 00:42:15 people listening we've recently upped our followers on Spreaker so thank you to everybody who has been
Starting point is 00:42:22 who's following us and so for everybody who has been following us. And so for everybody who's new to the podcast, the Twitter for the podcast itself is pure underscore DT. You can follow myself at Emperor Wilson.
Starting point is 00:42:38 Andy is Grabner, G-R-A-B-N-E-R Andy. Sorry, Grabner Andy it is. So G-R-A-B-N- G R A B N E R A N D I. Uh, Rick, do you have a, I do.
Starting point is 00:42:49 Yeah. I'm, I'm at Richard J Boyd. I'm not very active on there. Uh, I, I, if I got more GitHub followers than,
Starting point is 00:42:56 than Twitter followers this week, I'd be pretty happy about that. Again, that's at DJ Ricky B. Okay. And yeah, any feedback, any comments,
Starting point is 00:43:03 please, you know, send them a tweet so you can always email us, uh, at pure performance at Dinah trace.com. And yeah, any feedback, any comments, please, you know, send them a tweet. So you can always email us at pure performance at dynatrace.com. Thank you. File an issue on my GitHub or file. Yes. We, or you just make up issues on his GitHub and give him,
Starting point is 00:43:15 give him extra work. So we'll have to do a lot on the weekend. Anyway, thanks for listening. Everybody. We'll be back with another episode with Rick Boyd. Thank you, Rick.
Starting point is 00:43:23 Thank you. Goodbye. Thank you. Goodbye. Thank you.
