PurePerformance - 100th Episode! Continuous Performance & Continuous Podcasting with Mark Tomlinson

Episode Date: November 25, 2019

Wait! What? This is our 100th Episode of PurePerformance? For this special anniversary we invited Mark Tomlinson, Performacologist & “The Performance Sherpa”, who also inspired us through his PerfBytes Podcast to run our own PurePerformance Podcast. While we start with talking about performance in podcasting, we move over to learning more about how Mark is establishing a Continuous Performance process at his current employer. We learn about new ways to do performance engineering in a continuous way, how to integrate it with your monitoring, and why it is not always important to run the big load tests but rather to focus on short feedback cycles. We want to give Mark credit for what he has done for the performance community and use this to say THANK YOU!! Hope to have you back for many more episodes to come and definitely for episode 200!
https://www.linkedin.com/in/perfsherpa/
https://www.perfbytes.com/

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready, 100th episode of Pure Performance. Pure Performance! Wait, you don't sound like Andy. Where's Andy? Andy? Andy, are you there? Did we lose Andy? I am. No, I'm still here. But the thing is, never ever has it happened, except when Mark is on the show, that the guest is speaking before the hosts are speaking. Mark is the only rude guest I know. All right, so who's this Mark fella? Who's this Mark fella? I don't
Starting point is 00:00:56 think people know who he is. Mark Tomlinson, some performance guy, right? I think he must have accidentally received the link to the recording here. Yeah, I snuck in. I'm like a homeless person that walks in off the street. Hey, buddy, can I run some load tests for you, man? Exactly. That happened to me once. So, Mark, so obviously this is, listeners will understand right away that this is going to be a very serious discussion today, a very serious episode. Mark, for those few, like two or three people in the world that don't know who you are, let them know who you are.
Starting point is 00:01:41 I am the performacologist, the performance sherpa, perf sherpa. Somebody called me- I'm the alpha and the omega. No, that's not it. Another person referred to me on Slack as the perforator. That's not, it has a whole other implication. I think, like, you could, yeah, that's like paper.
Starting point is 00:01:57 You could perforate the paper. Uh, no, I am a, I'm now a 27-year performance testing veteran, founder of the PerfBytes podcast, inspiring muse of the Pure Performance podcast that you all are celebrating. Congratulations to both of you on making 100 episodes. And I mean that from the deepest part of my soul.
Starting point is 00:02:20 The deepest, darkest, most awful part of your soul, right? Yeah. Yeah, right. That place where old VuGen versions go to die. So, we want to, I think we have to say thank you, obviously, because for people that don't know, you truly inspired us to do this. And, well, Brian, you now even also run a show on PerfBytes. Right. And I remember before Pure Performance, you guys invited me on PerfBytes. I think back in the days, we talked about SharePoint. Was that the first episode?
Starting point is 00:02:55 It was. Well, it was not the first episode we had. You went to Howard's house. Howard's house, yeah, out in Lexington. Yeah, and what did you have for breakfast? Probably pancakes. And some bacon? And some bacon.
Starting point is 00:03:08 Yeah. I'm pretty sure a lot of syrup on it. And it was breakfast, so there was probably no beer involved, at least. No, no, no. Yeah, that's... So, yeah, we had started the PerfBytes thing in about 2012. And I think it was the end of 2012. And Andy, we had you on the show kind of right off the bat. And James and I were both independent performance geeks in the world. And that was the first we thought, is anyone really doing a serious performance testing, performance engineering show? And we couldn't find a lot. There were casual things, topics here and there. A few people had tried, you know, back and forth.
Starting point is 00:03:57 So it just started for us. And then Andy, after you joined us, I think you even maybe came to a couple live shows. Brian, I think you joined us in Denver for something when we were there at an STPCon. And shortly thereafter, I think, Andy, we talked about, hey, would Dynatrace want to ever sponsor the show? And we got invited to do Perform. So we went to Perform 2014 or 2015, I think. It was still back in Orlando, right? Yeah. And did a live, James and I just went there and did a live thing. And it was awesome.
Starting point is 00:04:36 And so from that point forward, we, you know, you guys had said, well, maybe we could not spy on ice. So why don't you start another one? Because, you know, competition is good in certain ways, but it's also inspiring just to cover so many different topics because you guys could drill down the Dynatrace sort of, the digital transformation message was just coming out. Ruxit was still incubating and hadn't become Dynatrace as you know it today.
Starting point is 00:05:04 So there were really exciting things going in that direction. And having more people with more topics and more things to cover is always a good thing. So I thought that was really good. Yeah, I remember, it was at an STPCon, I went out to meet you. And we were at that, what's the name of the place? It has the bowling alley and the arcade, whatever. And you said to me, oh, you should do a podcast. And I remember my first reaction to you was, do people listen to podcasts?
Starting point is 00:05:33 No. Apparently they do. Apparently they still don't. But the funny thing was, I went to my boss then and said, hey, I think it would be a good idea, maybe we can do a, uh, Dynatrace-based podcast. And he was like, nope. Um, and then you talked to Andy, and Andy said, hey, Brian and I should do a performance-based podcast, because Andy knew I'd record. Yeah, yeah. And then they were like, yep. And basically because Andy came aboard. Right. So, Andy, I want to thank you for coming on board, because obviously I wouldn't be able to do this without you, literally. But also, you bring in all the guests and you're really the star of the show. I control the knobs and make my jokes here and there. But Andy, it's wonderful having you to do this with. Thank you, but I think it's a team effort, right?
Starting point is 00:06:27 It's always a team effort. I would say the combination of the two of us, inspired by Mark and PerfBytes. We're like Laverne and Shirley. Does that make me Fonzie, because I'm from the other show? No, you're Carmine. Carmine. Okay. Didn't they make a cameo, the crossover show that came on to Laverne?
Starting point is 00:06:45 There's Lenny and Squiggy, but I thought the Fonz made a crossover. There was some crossovers. Yeah. Yeah, exactly. But you're not the Fonzie. And then, so, but there's, there's more now, right? I know that Señor Performo, he has his own show now, right? He's doing podcasting. Yeah. Leandro Melendez, Señor Performo from Mexico this year. And things are taking off.
Starting point is 00:07:07 He just joined me at STPCon in Boston, which was great. We have some sponsorship inquiries, and his market is a little bit different. It goes beyond, to your point, Brian, not a lot of people listen to just audio
in Latin America. A lot of that is video, a lot of YouTube, and just the popularity. People spend a lot more time watching video stuff. We're thinking that Leandro, who has a face for radio with his marvelous handlebar mustache, is fantastic for doing that. But we also welcomed him not just to expand the language base to performance engineers that don't speak English as their native language, but also, technology is so much dominated by English. As you know, Andy, speaking two different languages, you also are like, hey, I can speak two other languages. Most of the tech talk is in English. So I'm
Starting point is 00:08:06 still waiting for the German version of Pure Performance to come out. Oh, wow. Yeah. All German. Yeah, that's going to be interesting. I have a problem. Actually, I have problems when I have to present my content in front of a German audience
Starting point is 00:08:22 and have to speak German, because for me, German is basically a second language; my native dialect is Upper Austrian. That's right. Yeah. So to help people understand, that would be like if you come from Texas and you're going to speak in, like, Minnesota, right? It's a totally different language, you know. Right. Or it's every word that James Pulley says. Is that English? Is he speaking? I thought it was Portuguese. The other thing is that performance is not, at least according to him, as much of a focus in Latin America. It's not as much of a talked-about practice.
Starting point is 00:09:13 It might be done, but it's not as much of a, doesn't have any spotlight or much notice. Kind of similar, it's been like that for ages and ages here until recently, right? So he's helping try to find and build that community which is really awesome except for in brazil where they speak portuguese i actually have a colleague who is in brazil was asking me about how to get set up for uh podcasting because he's like you got all brazil who doesn't really understand too much spanish you know they understand english but it'd be much better for them to do portuguese so just
Starting point is 00:09:44 i just want to throw it out there anybody in in any, you know, in any language, start up a podcast. And the great thing about Leandro, not to keep, you know, tooting his horn and all, but he didn't come in with any real audio experience. And he just picked it up like instantly. I remember when he first did his little demo one, we were like, wow, that sounds, you're ready to go he's he's a natural yeah that way yeah and also much like you brian you take kind of you kind of produce the audio that you want the sound you want with kind of little creative segues background music interesting kinds of things uh the way you've done it you know you get your own theme music which is very cool
Starting point is 00:10:22 um and yeah martha and i did the as usual, because Martha will help out with any of the PerfBytes voiceovers. She did a lot of the Spanish voiceover in Leandro's intro music. Okay. Yeah. So the sexy voice of PerfBytes, sexy Irish voice, did a sexy Spanish-Irish voice of PerfBytes. Exactly. So yeah. So yeah.
Starting point is 00:10:44 So we're expanding languages. And speaking, as you mentioned as well, there's a couple other, there's a couple web performance podcasts that are out there. I'd have to go look up the actual thing. So, I've seen that. They're not 100% performance or web perf, but they do cover kind of front-end UI, UX kind of podcasts. So, they cover some of that. And Joe Colantonio just has the test guilds now. Right.
Starting point is 00:11:09 That's the new one, right? That's a new one. Leandro was actually his first guest and I, I be on there soon. So we'll see how that works out. But I mean, again, he's,
Starting point is 00:11:18 he's got the Joe Colantonio test talks approach a lot of interview stuff and very topic driven kinds of things. So that's kind of cool um so see that also we have we have henrik from neotis and he's doing a lot of a uh obviously they just had vpac and uh in podcasting wise he doesn't have his own show from a podcast perspective but he's been obviously a guest on all of our podcasts. Yeah. Didn't he used to be the performance chef. Didn't he have the,
Starting point is 00:11:49 like in the kitchen, the kitchen, he does have the kitchen, but that's video. And so I think you should, I think you should do keep the performance chef going on. Cause that's to me, I always think of the Swedish chef, even though he's not,
Starting point is 00:12:01 he's not Swiss or is that Swedish? Those are two very different things um but yeah so i that it's the muppet show the swedish chef that you could be for performance he could pull it off just just reading reading a little bit of the chat that we have going on here on the side i think i just uh noticed that people are actually chatting in here well the three of us are chatting yeah well mark is chatting and nobody's listening to you but now i read it and you're you're right i won't read out loud what you said but besides besides besides the um you know where we came from and there's other stuff out there so people check out there's more than than just perf bites and pure performance a lot of great stuff
Starting point is 00:12:41 and if you feel encouraged by what we've been doing and you speak another language and you would like to do that in your language, feel free to reach out to us. Mark especially. Mark especially. Exactly. He's the one that has all the good advice. Mark, besides you being the inspiration for the podcast, you've also been inspiring, obviously, with the work that you've been doing i and i just told you i had two meetings today where i always used a slide that i kind of borrowed from you from one of the webinars we did years ago where you talked about continuous performance back then when you were at paypal and just to rephrase the the story as far as i remember at least that the way i tell
Starting point is 00:13:24 it hopefully it's still kind of accurate but the way I tell it. Hopefully, it's still kind of accurate. But the way I tell it is that you were kind of getting bored of people asking you to run this test, and then you ran the test, and then you are doing the analysis and giving back the results. And then you build automation where people could open up a Jira ticket, and then you build automation that picked up these Jira tickets. So basically, they formulated the request of what they wanted to get tested and what they wanted to get performance feedback on. And then your automation was taking that request, deploying the app, running the test, and then on that same ticket, pushing back results.
Starting point is 00:13:56 So making performance as a self-service almost available, right, to different people. And this story resonates extremely well with the people I talk to. I mean, I know you've been doing this a couple of years back when you were at PayPal, but I think you just said, before we hit the record button here, that you're still kind of wearing this continuous performance hat at your company, your kind of lighthouse that is kind of preaching it and implementing it. Now, can I ask you, so continuous performance, I think it's very inspiring for everyone, but people still have a problem actually getting there.
Starting point is 00:14:37 So can you tell me a little bit more on what you do right now in your new role? How do you get continuous performance into the mindsets, into the organization? Has anything changed since you left your previous gig, PayPal, in your approach to continuous performance? What is there for us to learn? I think the one biggest thing I'm seeing right now is people talking about, do I even need to run a load test?
Starting point is 00:15:02 Could I do continuous performance, just measuring each environment and not actually having build scripts, run load, you know, test data, like the essential old project that you would do to build a load test and then have that predefined load test built and running on a continuous schedule. Or on demand, like you said, Andy, like you put in a JIRA ticket if your company's run by JIRA. And sometimes the robots in the company, human beings only do things if you give them a JIRA ticket. So if you can say this, what type of test do you want to run? Just the baseline test.
Starting point is 00:15:42 And then that's a predefined baseline test that runs all the scripts we know about all the apps we know about. But I'm seeing now even people say, look, can I, we talk about early performance testing, where it's relative to the previous builds relative, what's the trend of performance in a given environment, even if that environment is really small. So you're in QAa you've got functional regression tests running and they always run maybe potentially in the same order so the profile of usage over the course of let's say takes an hour to run all of your scripts is going to look pretty similar and you could get other metrics like hardcore metrics number of database calls number of requests and couldn't i just
Starting point is 00:16:22 measure and compare the run of the functional test cases? Now, back in the old day, you have to build a load test and the transaction has to go all the way to the database and back and has to do all this. But I see people who are like, I'm not even building all the scripts. I'm just monitoring my integration environment, my UAT environment. I'm monitoring any automated, very similarly repeated regression type run in automation and just comparing the performance sort of high level across that stack. And then they get other metrics that they can trend in there, pull it into some sort of graph or set some SLAs or some alerts on the graph thresholds that would say, all right, how long does it take to run the first 10 scripts?
Starting point is 00:17:06 And those are our high priority transactions. And all right, if that takes longer, it shouldn't. But why did it take longer? And mostly I see the questions that people are asking are not that good old absolute, here are my performance requirements. You have to prove pass or fail that we've met those. And then you create the scenarios to that. It's more like, hey, keep us on the good path. And things are going to change over time and comparing them is very, the most single most disruptive thing to classic performance testing that I've ever seen. It almost pulls the performance concepts from APM, old school APM, and drags them all the way up into the life cycle. Because now it's like, then you do need synthetics at some point. You might say, I really do want to generate 10x load on a stress test because I just, I want to do a what if, and maybe I only do that two, three times a year or more. Holiday readiness is a good example. Like we don't have production
Starting point is 00:18:22 load that goes to holiday volumes, two, two X, three X. So we don't have production load that goes to holiday volumes to 2x, 3x. So we do need some synthetics that do do that. So it doesn't, I'm not saying it replaces classic load testing, but to me, for me, continuousness in performance is like the biggest thing that's changed to the point where people now are like, do I, can I start just measuring and not really do classic J meter load test, Neo load load tests, a load runner. And I'm like, yeah, of course. I mean, anyone that asked me, should I measure performance? I'm like, yes, absolutely. All the time at every point. Why wouldn't you? And, uh, yeah. So to me, that's still huge. And the things I'm chewing on now, I may be, let's say, two, three times a month, I will get a special request for a special situation load test in the payment space. There's a lot of things in real time fraud. And so like, let's say, for example, you want to do blacklisting of a set of cards, you find some operation in a hotel room that's like, hey, we were cranking out fake credit cards and blah, blah, blah, and they're going to go use them. You want to be able to immediately blacklist those cards.
Starting point is 00:19:34 Well, that's a pretty intensive thing if you're talking about a really giant payment system in a giant database. So how do I screen those blacklists when I'm running two, three, 400 transactions per second through a single engine, and I have to check every single transaction for a blacklist. So there's some, I've got a special case where my architects or the business are nervous about something. And that's what generates a special stop the presses load test. So to me, that means continuous performance is sort of your everyday, make sure everything's on an even keel and you get to give more attention to the things that actually put the business at risk. And it becomes like exception based load testing. Yeah.
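As a side note, the per-transaction blacklist check Mark describes is the kind of question a small targeted benchmark can answer before anyone builds the stop-the-presses load test. A rough sketch, where the set-based lookup and the card numbers are purely illustrative:

```python
import random
import time

# Illustrative: a million blacklisted card numbers held in a set,
# giving O(1) average lookup per transaction.
blacklist = {f"{random.randrange(10**16):016d}" for _ in range(1_000_000)}
transactions = [f"{random.randrange(10**16):016d}" for _ in range(100_000)]

start = time.perf_counter()
hits = sum(1 for card in transactions if card in blacklist)
elapsed = time.perf_counter() - start

# At 400 transactions per second a single engine has a 2.5 ms budget per
# transaction; compare that budget against the measured per-lookup cost.
per_lookup_us = elapsed / len(transactions) * 1e6
print(f"{len(transactions)} lookups, {hits} hits, {per_lookup_us:.2f} microseconds per lookup")
```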
Starting point is 00:20:20 I have a question though. Well, first of all, a comment, and then a question. First, a comment. The kind of trending and looking at certain metrics, this reminds me of the performance signature from Thomas Steinmaurer, our chief performance architect. He also has a continuous performance environment, and he's basically deploying a new build every night, and then he's always looking at, let's say, 20, 30, 50 metrics. And instead of doing it manually, he just pulls them out through an API, puts them in a database, charts it, and does basic regression analysis.
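A nightly signature check of that kind can be sketched in a few lines; the metric names, values, and the two-sigma rule below are invented for illustration, not Thomas's actual implementation:

```python
import statistics

def regressions(history, current, sigmas=2.0):
    """Flag metrics in the current build that sit more than `sigmas`
    standard deviations above their mean across previous builds
    (with zero variance, any increase gets flagged)."""
    flagged = {}
    for name, value in current.items():
        past = [build[name] for build in history if name in build]
        if len(past) < 3:
            continue  # not enough nightly builds to trend yet
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if value > mean + sigmas * stdev:
            flagged[name] = (value, mean)
    return flagged

# Illustrative nightly signatures for the last three builds, plus tonight's.
history = [
    {"response_time_p95_ms": 238.0, "db_calls_per_txn": 12},
    {"response_time_p95_ms": 241.0, "db_calls_per_txn": 12},
    {"response_time_p95_ms": 244.0, "db_calls_per_txn": 12},
]
tonight = {"response_time_p95_ms": 310.0, "db_calls_per_txn": 19}

for metric, (value, mean) in regressions(history, tonight).items():
    print(f"possible regression in {metric}: {value} vs. mean {mean:.1f}")
```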
Starting point is 00:20:56 So that's what he's been doing. And the whole concept of the performance signature is also what you are saying. It's like, yeah, we want to see where things are going and kind of keep us on the happy path and inform us in case something goes completely strange. But other than that, we're probably good. So I think he was also spot on with his approach. But now I have a question to you. When you say, well, we just monitor,
Starting point is 00:21:21 we don't care about the load tests, and you do this in a pre-prod environment, you do have to have some type of load though on it continuously which means you you would really just go with your functional test is that what you what i heard earlier i even see it happening within the dev environment during unit tests um in some cases we've added a special phase after the unit test where you run 100 transactions as fast as you can with a single thread. And you measure the elapsed time, not just the individual time, and just graph that over time, trend that number. Another one is just run some small number of
Starting point is 00:21:56 threads, equally, you know, 10 threads doing 100 transactions as fast as you can to do thread contention, concurrency type issues, but very small scale. And the database guys are like, eh, big deal. Yeah, sure. Of course the database guys, that's not real load compared to prod. But then I'm talking to the database guys and they're like, well, we already have all of our metrics with all the trending and all the graphing. We're looking at it through Dynatrace or whatever tool they're using. And so they're like, we're already in touch with that. We're just interested in special scenarios when we're going to do a you know whack an index somewhere and it's going to take forever to run
Starting point is 00:22:30 an impact performance if we're going to have to do some massive update so again their their need for load testing what if scenarios what's going to, become very exceptional and higher risk. And then they're the number one thing that pisses off any database person is like, oh, that stupid developer didn't catch an N plus one problem, another database contention problem, and they didn't know what they were doing. So usually you can catch most of those in smaller environments. So yeah, dev QA are perfectly good places to be measuring that and trending it. And you don't need super extra huge load tests the way, you know, you see like the old COE
Starting point is 00:23:11 tearing down the old COE that had, you know, here's our pipeline. You have to give me your non-functional requirements. It takes me two to three days per script. How many scripts do you want? Give me your test date. I mean, that is the, that's, that's 10, 15 years old, the COE model. And there's companies that do that today, spending lots of money. And here's the question I got recently, even at the SDPCon, which is somebody saying, if I'm doing this early performance testing, do I still need to do, like you're saying, do I still need to do load testing? And I'm like, well, try just doing this early performance sampling and measurements, maybe these small kinds of unit performance tests. And after a month or two, start asking the COE, hey, how many bugs have you guys
Starting point is 00:23:56 found in the last month or two? And they might say, well, we found two or three at the database because, you know, we can run contention on the database. But for the most part, we're finding almost no bugs. And a lot of big, heavy, late game performance testing is more of a safety net. I guess it's more of I'm going to hold your hand reassurance. It's a soft touch. Hey, let's feel strong and confident. And nothing wrong with that.
Starting point is 00:24:22 But if you really look at sort of defects found and fixed, I just don't see it happening with the COE guys anymore. Two of the people at SDPCon were like, we had a COE and we kind of tore it down and pushed everyone into the dev teams, shifting left. So I think that's really happening now. And I think it's important for the performance teams to understand that there's a lot that they can bring
Starting point is 00:24:43 and offer to those dev teams and everything, right? Because knowing what to look for, knowing how to look for it, knowing what tests make sense, even if it's just a single unit test, right? An N plus one query problem, you can see that with a single run. But knowing to look for it there and knowing where all the gates that we need to gather this information from before it gets to production so that we can find these issues. And, you know, to a large part, I agree. If you have, and a lot of it, there's a lot of ifs around this, right? But hopefully some of these ifs bring the other ones into fruition. If you have a good pipeline, if you have a good, a fast delivery system where if an issue is found,
Starting point is 00:25:27 let's say it gets, doesn't get found till production, but if an issue gets found, you can get a push a fix in pretty rapidly, right? Then you can start reducing the need to do those COE type of tests because you're going to find most of those issues. And if something does slip by, if you can get something out quick enough, then, you know, you start balancing the, you know, how many hundreds of thousands of dollars are we spending on COE
Starting point is 00:25:53 versus, okay, two days and we can keep resetting our server every six hours and get the fix in. If you're actually realizing of vulnerability, a risk in production, the speed of remediation is, can be so fast now that the duration when you're at risk is much, much shorter where it used to be maybe weeks or months, um, to fix something like that. And nowadays it can be, you know,
Starting point is 00:26:17 you can be a matter of minutes. But that's, if you have that setup. One thing I noticed when I talk a lot to a lot of our prospects is, and it's always funny because Andy does all these amazing talks. I hear you doing all these amazing talks and there's all these wonderful ideas going around and all the conferences and everything. And then you get on with the real world people at one of our, you know, prospects or a client. And it's like, yeah, no, we're still running load runner. We're looking for another tool and it takes us this minute, you know, it's, it's the whole,
Starting point is 00:26:46 you know, not, I don't want to say unicorny thing, but it's practice theory versus practice. And there's a lot of people who want to get there. So I think all the speeches and all the talks and everything are extremely, extremely important to keep giving everybody that inspiration, but it's still a very, very slow adoption.
Starting point is 00:27:03 Yeah. And I think if I were to list two barriers to the adoption of modern performance techniques, and continuousness is part of them, the one barrier is just permission. They don't perceive the role of a performance person. You'd be stranded in the infrastructure team. The budget is allocated.
Starting point is 00:27:22 The HR department says, these guys look like infrastructure guys to me. Let's put them over here. Don't they do, don't they live in, in, in production and ops and infrastructure? So one is just, I will never give permissions to those people to shift their work into earlier environments. Uh, that's why Andy, I think, uh, maybe the, the clinics you see, you'll see devs showing up to some of the performance clinics. Um, and, and that shift, that those people may have even lesser permissions to try these new techniques, Brian, to your point, right? It's just you have to jump through hoops and whatever, and we're not prepared for that. So you can change your thinking, but if you don't change their permissions and the role, then they can't do that. And the second thing is just entrenchment.
Starting point is 00:28:25 You know, the status quo is just entrenchment. You know, the status quo is the most powerful force in the universe. So someone who's addicted to late game load testing is going to keep doing that until they start seeing some new techniques happen in development. So I think it's still smart for us as practitioners and teachers or people being inspirational or evangelizing these new techniques to attach to the people that have the power to try them. And even like I'm saying, even compared to the conferences that I've been to in the last 10 years, people even in the testing space now who were always entrenched in Lode Runner and late game load testing, they're even walking in now saying, I, I never, I don't even know what load runner is. So, so I'm,
Starting point is 00:29:11 I'm picking up, you know, early game load testing. I'm doing my own build my own stuff with Python or building it in.net, whatever you're doing, it's, they're doing it themselves. And they're like, but this is, I want to learn some testing stuff. And I knew you were doing performance stuff at SDP, so I went. So yeah, same kind of thing. So to kind of sum up what I just learned
Starting point is 00:29:32 or kind of repeat on what I believe continuous performance engineering can look like in the lower level environments. If you are in a dev environment and you have your Jenkins builds, your Jenkins pipelines or bamboo, whatever it is, build it, deploy it, and run your set of unit tests, your functional tests. Some things that I sometimes advocate is use an HTTP-based testing tool, whether it's Chainmeter or something else. Test your APIs and then run the same test maybe with five concurrent threads to at least get some basic kind of load on it and then let this test let
Starting point is 00:30:06 this whole pipeline not run longer than five to ten minutes i would say right try to get as much as much in as possible and then at the end of the test then look at your monitoring data your database data and then if you do this on a continuous basis, then you can really trend it build to build and also to build to build comparison and basically see if a code change has a significant, let's say, regression on one of these performance metrics. And that's kind of what it is. And what we've been doing, Mark, I know you're aware of it, but Keptn is the open source project. And part of Keptn is the quality gate aspect. And part of Keptn is the QualityGate aspect. And the QualityGate implementation uses a library called Bitometer. And basically, that's exactly what it does.
Starting point is 00:30:51 It automates pulling out data from different sources for a particular time range and is then using that data and either comparing it with a threshold or with the previous build and just kind of shows it over time. And I think that's why I also believe that this particular component from Captain will see adoption, at least with our user base, because it basically exactly enables people to do what you've just said. At the end of your build, whether it just runs five or 10 minutes, look at key metrics, but do it in an automated
Starting point is 00:31:24 and do it in a continuous way. And Andy, let me ask a question just for anybody who's out there who might be familiar with Captain. Maybe they've heard you talk about it and see it and they're like, well, we're not really running anything in Kubernetes and all. Is the photometer part, can that be separated out and used without the rest of Captain or without being on Kubernetes? Just clarify for anyone listening in this way. So to clarify what we are, so Pitometer is a library that can be used on its own, but what we're doing right now,
Starting point is 00:31:56 and that's happening in 0.6. So right now we're working in 0.5, depending on when this airs, 0.6 may already be out. 0.6 of Captain will have the capability to just use Captain Quality Gates on its own. That means you can install Captain and then just use the Captain Quality Gates through an API. And that means you can just trigger it
Starting point is 00:32:19 from a Jenkins pipeline. That's awesome. So at the end of Jenkins, you say, Captain Quality Gate, evaluate these metrics, then Captain will store the results, give you a UI, build-to-build comparison, and all that stuff. That's why we also want to make it available, you know, kind of independent. And the second thing is the whole area around self-healing and auto-remediation. That's also something that we will focus a lot of energy on, on also making this particular piece kind of more standalone available. Awesome. And this all relates to performance, right, when you think about it.
Starting point is 00:33:00 Of course. Again, I mean, because, well, sorry to cut you off, Mark. I was just going to say, because a lot of times when I'm talking with you, Mark, I'm thinking front performance from like load testing point of view. And when I'm talking to Andy, I'm thinking broader picture performance and monitoring, but all these worlds are related, you know? And so if you're someone who's used to listening to Mark and the PerfBytes teams and all, and you're coming from a load background, you know, this is all related and same thing. If you're on the monitoring side, anything else, everything, all related. And same thing, if you're on the monitoring side,
Starting point is 00:33:25 anything else, everything, all these pieces fit together. But Mark, yeah, I cut you off as you were about to speak. Well, I was going to say, the captain stuff is great. And just as an aside, whenever I see it, I always think to myself, captain, oh, my captain, oh, captain, my captain. Remember that? I think of Captain Kangaroo.
Starting point is 00:33:43 Captain Kangaroo, yes, exactly. But the thing that's also changed over the years of all this podcasting is the prevalence of the cloud the cloud actually becoming viable as you know the early cloud stuff without kubernetes without any of the management stuff built in without any of apm in it at all, you know, it was just enough to get something running, but you still had all the overhead of doing all the work of managing and keeping things going. And nowadays, it's like we've automated amazing amounts of stuff where you see the rise of no ops, you see the rise of, hey, my old infrastructure team just got fired, and everything's running in AWS, have a nice day. Um, those strides that
Starting point is 00:34:26 Azure has made, a lot of people attacked Azure when it first came out that it was not ready for full to be a first-class cloud citizen. And it, they made some amazing progress just in the last couple of years. Um, and so I think, and the load testing space has evolved as well, as you see things becoming ephemeral within the cloud platform, within the cloud infrastructure itself, the way we operate, all those load testing tools that were built on cloud first kind of existence, they suddenly they're more viable in a continuous mode of operation. And whereas, you know, and I was one of the first guys to like, let's do load runner in the cloud on EC2. And I was one of the first guys to, like, let's do load runner in the cloud on EC2, and it was a beta. And everyone I talked to was like, I'll never use it. No, we can't use it. We'll never use it. We'll never use it.
Starting point is 00:35:15 And it was like, why not? I mean, we were way ahead of the game. That's like the single biggest thing, to your point, Brian, for me, is if you do this kind of stuff all day long, every day, you know, and you, and you're kind of in touch with everybody that's building stuff. Uh, yeah, you end up thinking of stuff, solving problems that no one's having that makes you a terrible product manager. Just like, yeah, we're going to have this problem in the future. If I could, you know, put you in a time machine and a TARDIS and take us forward, I'll show you
Starting point is 00:35:43 what we're going to run into next. But it does make it entertaining to sort of help people take a few steps towards continuous operation, early performance testing, putting some things in the cloud, ephemeral load environments, ephemeral test data. That's a huge thing with large database testing and testing at scale. Now that we're better at being able to launch environments at different scale, here's a 50% scale, 10% scale environment, and it's almost exactly perfectly that way. In the old days, it was just like, oh, that crappy old hardware almost can't be compared on any scale to new hardware. Now that we're in the cloud, it's like, yeah, this is this. And think about serverless. Think of some of the load testing vendors that are doing stuff, generating load out of serverless stuff. That's pretty exciting because no one wants to buy a load generator. So yeah. All right. So run, run a serverless
Starting point is 00:36:40 load generator and then you're not paying for it at all when it's not there. So a lot of these innovations have changed how we're able to employ some of these cool new ideas, which to me were ideas that I would hope could come to fruition over the last 10 years. Another interesting trend that I see, especially in the cloud native space and for those that are, you know, mainly I think building new apps from scratch is the ability to deploy a new build side by side with your in-production, with your, let's say, main branch or main canary. We talk about canary deployment where you open up, let's say, 10% of the traffic, the real user to the canary, or just do traffic duplication and basically send traffic from production to your, let's say, new version and see how that performs compared to the version that is currently in production. So I think that's also interesting. Another concept that I've seen, and the name slipped my mind on what the approach is,
Starting point is 00:37:51 but basically doing the same thing of traffic mirroring. So let's assume version one is in prod and now you have version two coming along. So you basically deploy version two and another copy of version one next to it and basically give a mirror traffic to to these two versions one and two now the reason why you also do one because if you start up a new app then you obviously have a different performance behavior in the beginning so in order to to really make it comparable you keep your regular production traffic on let's say 10 nodes and then you add two additional nodes
Starting point is 00:38:25 for the new version and then another two nodes for another version one and then let the traffic go against it and so it's easier to compare yeah so i thought that's also interesting yeah that's a kind of uh the garenka's talks on facebook is just using real testing with real load uh and you know there's some interesting data issues for certain environments, certain industries, you got to work around those. And sometimes, I think, 10 years ago, DBAs wouldn't even have the conversation with you. And then, because performance testing becomes an impossibility so much, eventually, those guys starve for oxygen. And they're like, okay, I really have to get some
Starting point is 00:39:07 numbers. Let's figure out the very complex sometimes data situations where I can run load against a new version without real users. Or sometimes you have opt-in stuff for customers. They opt me in, copy my profile over to the test system and I'm happy to do some stuff like that. One of the other things on the list that I put to you guys in, in a hundred episodes, do you remember a favorite episode? I know that's like asking you which one of your children is a favorite child. Of course, Andy and I don't have kids. So Brian, that's really just for you. Yeah. I mean, I think it's, it's hard to say because it's a favorite at the time. And that's the same
Starting point is 00:39:46 as with children. Like, you know, my one daughter might be my favorite this week and then she's a jerk next week. So my other daughter is my favorite, right? Or sometimes neither of them are. And I lament the fact that I ever had kids and I'm thinking, what the heck did I do? Why are we so stupid that we keep having kids? But wait, did I say that out loud? So I think it's more like a time. It was like, you know, if you asked me a year ago, what was my favorite episode would be different. Two years ago, it would have been different. I remember way back. I forget who we were talking with. It was about someone who goes in and they reevaluate everything they're doing at a company and help them get onto more of a different track and introduce performance into different areas of the thing.
Starting point is 00:40:28 But recently, so I'll stick to recently. Recently, my favorite talks have been with Adrian Hornsby from AWS, I believe, Andy, right? Oh, yeah, yeah. So he was talking about chaos engineering and resiliency testing and resiliency engineering, which to me resonated so much because, again, my background is from load testing, performance testing and all this. And I saw a lot of parallels between the two where it's, you know, you're going to test something, you have a hypothesis, which maybe you don't do in load testing anymore, as you said earlier. But you're running an experiment, right? And that's what most of load testing is,
Starting point is 00:41:08 at least in the earlier days when I was doing it, is I'm going to run an experiment, observe what happens, take back whatever I can and figure out why this thing happened and then figure out how we can get past it. Very similar is going on in chaos engineering, where this is about, you know, if you heard about the chaos monkey, you know, pull the plug on something. What happens? Does your system survive? Does it not? And that also ties into the, you know, resiliency engineering, where if you do have something go down, what happens to your application? Do you get the, the fail whale page? Or is most of your
Starting point is 00:41:41 site still working for that one place has a nice kind message saying, Hey, we're experiencing problems right now. Please try again in 30 minutes while we resolve this problem. But the rest of the site might still be available for her to engage your customer with. So that all kind of ties in. And the reason I love that so much is because, again, it's all performance-based, right? It's all kind of those related to a lot of those what-if load scenarios. What if suddenly we've got a spike in traffic?
Starting point is 00:42:05 What if, what happens if the database crashes? Do we have a caching mechanism on top? How can we make it so that we shockproof the system? And they even have, Adrian was going over with us. If you recall way back when Souders came out with his, you know, webperf checklist, they're starting to come up with a, I don't think they formalized it yet, but he was going over a lot of principles of a resilient app. You know, if you're going to be in the cloud, make sure you're, well, of course, the whole idea is being in the cloud. Make sure you're in three availability zones.
Starting point is 00:42:37 So when one goes down, you still have two up. Two go down, you still have one up. And you can, you have time, you know, so there's all these little, it's just, I don't know, I loved it. That's all I can say. I that what about you andy um well i would say similar like you the initial answer depending on kind of which time frame like more recently it would definitely be adam tornhill was amazing when he talked about your code is a crime scene and how to analyze all that yeah that was great in the very beginning i love that we had people like goranka from facebook or yeah or gene kim on the call right i mean who would have thought that we can even ever attract people of their caliber to get on this uh on the podcast but one one thing that i remember and and maybe also more vividly because I've been since working with them very closely, is Nestor and Abir from Citrix.
Starting point is 00:43:32 They were just great and talk about their traditional ops to agile transformation. I believe that's what the episode was called. And that was great. It was more like, what did they do? How did they change at Citrix to become a more agile organization? And you know what I loved about that one, Andy? They were in the middle of it.
Starting point is 00:43:53 They weren't, you know, sometimes you talk to people and they're just starting out, so they talk about their plans. A lot of times, especially in the DevOps conferences, we hear about people who came out on the other side and they're telling their stories.
Starting point is 00:44:04 They were right in the middle. So they'd gotten past a few humps and they've got a whole you know it's long road ahead uh so it was really fun because we got to hear the stuff they did and all the stuff they're kind of scared to move forward into because they know how hard of a road it was so far uh so it was yeah that was a really cool one for sure. Yeah. And then I also remember the episode that I recorded in Mark's house. That was over. That was that, that was the, some, some whiskey or something, right? I love a lot of things from Mark. Yeah.
Starting point is 00:44:37 That's one of them. No, but it's, it was, yeah, it was a really good time. And you know, a hundred episodes, who would have thought that we're still here? Yeah. So one more thing I would bring. My favorite recently, again, I'll go with Brian kind of in the recent world, was the DevOps conversation you had with Emily Friedman. I'm an Emily Friedman fan.
Starting point is 00:45:01 She does some great stuff. That was actually a really, really cool, uh, the DevOps. That was the Spartans talk. DevOps for dummies. And, and kind of the way she approaches that stuff is pretty cool. Um,
Starting point is 00:45:12 her book is out now. Yeah, totally. By the way, for anyone listening, cause I remember we were talking about it, I think on the podcast, but the book's been out for,
Starting point is 00:45:18 for several weeks, a month or two, maybe three months, probably about that. Yeah. Uh, the other most important thing that we may note on here as well is that Alois has been on five times. What are you tracking?
Starting point is 00:45:31 Yeah, of course. It's a competition. Like if I started this. We used to have that, right? That's from the early days. Well, no, he had one like within the last, I mean, within the last. He did a captain talk. He did a captain talk.
Starting point is 00:45:43 About five times, really? Is he the old time? Is he the current leader? Well, if you don't count... He had his own episode once. He did. He had his own episode once. Yeah, if you don't count the Dynatrace perform,
Starting point is 00:45:54 the event stuff... We don't count those. And those aren't counted in this hundred. Yeah, that's fine. And then there you go. Alois has been on there more than me. So basically what I hear is that your ego is basically hurt. Well, you know, I have goals, Andy.
Starting point is 00:46:13 I have aspirations. The fragile male ego. You should go on Martha's show and ask her to talk about your fragile male ego. There you go. I do have a question some people might not know. I mean, there's Peer Performance and then there's Peer Performance Cafe. They're in the same feed, but how do you guys differentiate between sort of cafe episodes and other episodes? when I was running around with my hand mic at different conferences and interviewing people. We tried to keep them short, like five to 10 minutes. Somebody at a conference,
Starting point is 00:46:50 maybe one of the product managers on the Dynatrace side. So more specific to either a conference or also Dynatrace content, whereas the Pure Performance episodes are typically longer, at half an hour to an hour. And we have a guest and we really spend we elaborate much deeper and broader in a certain topic yeah see and to me it's a lot simpler to me
Starting point is 00:47:15 cafes for talking about dynatrace pure performance is about talking about performance not specifically dynatrace i mean dynatrace comes up here and there, right? Because we are talking about performance and monitoring and stuff, but it's not. A lot of times when Andy's at conferences, a lot of them, Andy, have been our own conferences or DevOne and things like that, where it's talking about people
Starting point is 00:47:39 about things about Dynatrace. And as you know, we try not to make this podcast a marketing arm of Dynatrace. I mean, obviously, we're with Dynatrace. And as you know, we try not to make this podcast a marketing arm of Dynatrace. I mean, obviously, we're with Dynatrace and we don't hide the fact. And we hope maybe some people are like,
Starting point is 00:47:53 oh, those guys are talking about cool stuff. Who do they work for? You know, that kind of stuff. But to me, it's much more of that. And sometimes Andy will come back with stuff because I would be fine with if, you know, if Andy gave me a bunch of field recordings
Starting point is 00:48:04 from a conference and they weren't Dynatrace-centric, I mean, we could always theoretically string them together and put them on Pure Performance. But yeah, I guess there is part of that conference part. I never thought about that, Andy. Yeah, that's pretty cool. And the people might not know if you actually go to the feed, this is the 100th Pure Performance actual episode, but you guys have like 200 episodes if you count the cafe and the Pureform. You've done double that in the time that you've been doing this podcast since just the last three years. Right, and the numbered ones are the ones that we release on schedule. So we release on a two-week schedule.
Starting point is 00:48:41 Yeah, it's been, what, three years now, Andy? I think we figured it out. 2016. Yeah, 2016. Yeah, wow. Yeah, 2016. It was like in April or something, I think. week schedule um yeah it's been what three years now i think we figured out 2016 yeah 2016 yeah wow yeah it was like in april or something i think april or may was the first one so it's been pretty awesome you know i gotta say i've been fortunate for for andy and i donatrace lets us do this uh you know i have my roles you know uh andy is more of um he gets to run around and talk at conferences and do a lot of fun.
Starting point is 00:49:06 I'm like a sales engineer. I have like a nine to five where I have to engage with customers and help the sales cycle. And also the fact that they, I mean, this fits naturally into Andy's job description. This does not fit into my job description. So I've been fortunate that, you know, all of my management has been like. I want to correct you. I defined my own job description. See, there you go.
Starting point is 00:49:28 That's what it is. Nobody asked me to do it. I just did it. He's a perform evangelist. Right. But if I were to try to start being like, no, I'm going to go to these conferences, they're like, no, you're not.
Starting point is 00:49:41 That was part of my initial job description, obviously. So it's just been awesome that I can do it right of course it's awesome yeah it's great oh come on don't edit it out you have to beep it yeah I had to beep you on
Starting point is 00:50:01 an Ask Perf by his episode recently and I forget what I used. I used a fun beep. I didn't use a beep. Was it a horn? A little er-er? Yeah, something like that. Good. I have a mouth like a sailor.
Starting point is 00:50:12 So continuous performance to kind of circle back to. Wait, Andy, you're not summer rating, are you? No, no, no. But I just want to say, Mark, thanks for your work on continuous performance. I definitely, I learned a lot and it's great. I mean, it's something that you've been doing for a while. And still, every time when I bring it up, it just brings, what's it called? What's the right description it it brings sparkles in the eyes of people when i explain to them what you have already been doing for the last couple of years when i bring up the paypal example and so you know i think um let's let's continue advocating for continuous performance and and let's share with the with the community on the different approaches i really like what you said earlier. Maybe the days of the traditional load tests are at some point over.
Starting point is 00:51:08 That's also why we see also the testing tool vendors are kind of reinventing themselves and what they're doing, for instance, what Neotis is doing with getting closer to the developer. And I think that some of the vendors are seeing that trend as well. And that's great. I was teaching recently, well, just last
Starting point is 00:51:26 week in Boston. And the thing that sparked most of the people is when I refer to performance decision-making in the continuous context. And people don't realize this, like, am I making a decision in my code, in my configuration that is performance related. And what am I basing that decision on? Is it based on a knowledge base article that I looked up somewhere in the web? And maybe that fits me or not? Am I basing it on fear or previous experience of failure? Like, what are you basing your decision on? And wouldn't you rather base that decision on actual measurements from the code that's sitting the build before the build before operations the week before, you know, actual data driven
Starting point is 00:52:13 decisions, you still you still are going to figure out how to make that decision. But you're making it now with real data. And to me, that the understanding how your organization makes decisions around performance, that's what opens up the opportunity. Well, could I make that decision sooner? Yeah. If I knew X and Y, I would do it in dev. I would set that setting in dev and away we go. Or each environment would have their own configuration.
Starting point is 00:52:40 Oh, I can do that with continuous deployment tools. Easy. So the thing is, if you don't tell people that they're empowered to make choices and decisions at any time, based on real data and real measurements, if they don't feel empowered to make those decisions, then it, like you say, Brian, it just becomes a dead end street. They're like, well, I have great ideas, but I can't do them. So to me, empowerment is probably the most important thing we can do to help people get into the modern continuous performance. And education through podcasts like Pure Performance are essential in that empowerment. And I want to bring this a step further. And I'm again stealing or borrowing from a slide that i that i got from you but you
Starting point is 00:53:26 also got it from somebody else but i don't remember the name but you had you once said uh true performance engineering is if you actually influence the next line of code that the developers write in because otherwise all you do is just testing if you just uh test and then deliver feedback in the end but if you can truly impact the way developers think about performance with every line of code that they're writing and i believe if we give them continuous feedback in a fully automated way and empower them then this actually happens even faster right because if i continuously get my feedback about what am i what are my code changes actually
Starting point is 00:54:05 doing to performance to scalability and then wow this thing i changed 10 minutes ago is now now i get the feedback oh now i understand so i'm changing the way i write my code in the future this in our industry functionality is still the most talked about thing before anyone writes code what do you want me to build? Let's be really specific. Let's break it down into stories. Let's make sure it matches an architecture. We're following good practices, standards, et cetera. And only in the last 20 years have, with the web being huge, is suddenly that second class citizen now would be, all right, let's also talk about security. But I still have, for the majority, late game static analysis and security. Everyone had to read the book, How to Write Secure Code, and that got a lot of developers woken up.
Starting point is 00:54:56 Before you write code, they now have security in their brain. I think our third position, we're still third in the list in performance. And there's lots of reasons that we're not the number one thing in the list. Because we still have the rubber really does still hit the road in operations and infrastructure, which can be a very dynamic environment now compared to 20 years ago. So I still think that's a good order of functionality, security, and then performance as the things to influence somebody's thinking before they sit down and say, no, I'm going to write this thing. And by the way, that was Jim Duggan from Gartner way back in the day. And he's like, you're doing performance testing if the code's already been written. You're just validating
Starting point is 00:55:40 that it's written right. But if you can influence the code before it gets written, now you're truly engineering. You're thinking about the engineering, the mathematics, the mechanics, the physics of it even before it actually hits a build. Hey guys, you know, you're talking about inspiration and sparks and all this. And I just kind of made me want to acknowledge a few people that helped me get where I am in terms of inspiration. So obviously there's you, Mark, right? You helped me out early in my career and you've been an inspiration since. And Andy, you know, not just saying you guys, because you're on the podcast, but you know, Andy,
Starting point is 00:56:19 all your talks and all the blogs that I read really inspired me to try to do new things. But there's two people I want to mention going back. Um, when I was at WebMD before joining Dynastrace, actually the first one was, um, so I was managing the performance team there and I hired this guy, Robert Ravinsky. Um, you've, you've run into him a few times and I was, I was in a dead end in performance. I was hating it. I was like, I got to get a new career. And he started sharing these new, some articles. He started sharing things about web perf and we started getting these ideas. And he really just got me sparked in, you know, really interested in performance again. He brought me into these new topics.
Starting point is 00:56:59 I was just in a siloed world. Wasn't even aware of the performance community out there. So big kudos to him for getting my interest going on again. And also the head of operations over there, Teresa, she actually wanted to get the performance team under the operations team so that they could develop an SRE team. This is back in 2010, 11-ish. She had the foresight to see
Starting point is 00:57:24 that this is where things should be going, that we should be more baked into the pipeline, more baked into the process instead of just being under QA. At one point, the QA manager left and we ended up under product management, which is a conflict of interest. And she was trying to get us out of there. And in my talks with her about saying like, yeah, let's do this. Um, that's when I first started seeing these potentials that now are reality. And she had the foresight to do that. Of course, our CTO at the time was like, nope, that was the end of that. And that's when I finally left. But it was because though, you know, I could almost say it was almost because she tried doing that and it got said no to that. I was like, yeah, I'm out of here. It's a, it's a dead end
Starting point is 00:58:02 here. And I want to try to do something along those lines but uh so i just wanted to give a shout out to to the to the uh robert and theresa as well as you too because i think um you know if any of us look back there are certain few there are key moments in our careers and key people who have helped us move to the next phases always always i don't know if you have any i do i have uh i have a couple of folks i have robert baminger who is a very good friend of mine now he actually initially brought me back then to segway software from my employer before that and segway was the company where you know we built superformer but he actually brought me over and then at segue it i think i learned a lot from didi strasser and who was my first boss i was basically doing
Starting point is 00:58:54 testing on so performer so i tested so performer with so performer and then ernstambich he was our chief software architect so he turned he he taught me a lot about performance engineering so i think these are my my heroes and my my mentors besides mark who i also met i mean i i think every time we talk about it i i forget how long it's been but it's been a long long time 50 years or something it's like it's 16 years so it's like 2002 2003 yeah it's great yeah it's crazy yeah that was a that was a great gig uh to to work with you and that was awesome and i got to speak a little german on the rusty bucket that is my german exactly uh but uh but yeah you know that's the things that you find the reasons you're inspired to your point, Brian, is like I hit a brick wall.
Starting point is 00:59:45 I kind of I'm just getting frustrated. And I've done non-performance things in my career. And, you know, there's I'm all right. I'll go do some partner work. OK, well, I'll go over here and do like some other kind of development work or I'll do PM work. And, you know, I it's stuck on me. I can't get away from it. I mean, even when I'm doing other gigs at Microsoft and other roles, someone will call me up inevitably and say, hey, can you come over to the lab? We have, you know, just want to bounce some ideas off of you about what this customer is doing.
Starting point is 01:00:16 Four hours later and a messy whiteboard. And I'm like, I just I got to go back and do performance work. Like if it's in you and you can find that, uh, that is, I don't want to, passion's a lame word. It's an addiction. Uh, I think, uh, is this it's complexity. If you love that complexity, that specific kind of complexity, you're going to do great. You'll, you'll probably listen to every episode of every performance podcast that you can get your hands on. And I think that's what fuels the three of us. And, uh, certainly the PerfBytes folks that we talked to. Um, but yeah, I still have certain individuals that come back to me five, 10, 15 years later who are like, Hey, do you remember X, Y, and Z? I'm like, absolutely. You're like, well,
Starting point is 01:00:59 I just got back into whatever. And they're like, great, welcome back. And then a whole new crop of people who are learning for the first time in a context, totally different than where we started. So the fact that you guys covered the topics you do that are pushing the envelope, just like the industry, uh, I think is fantastic. Um, and that, that keeps new people coming, uh, a new generation, uh, younger people, it could be older people with new techniques, you know, old dog, new tricks kind of thing. But I think it's awesome because those same performance-based decisions are being made about, you know,
Starting point is 01:01:35 Kubernetes running a bunch of node apps and expanding stuff. And, you know, you want to write stuff in Python, you want to do serverless, you want to do Lambda, you want to look at code efficiency within Lambda. There you go. You're right on it. It's pretty cool and exciting to keep the flame going, Brian.
Starting point is 01:01:51 Yeah. Well, I think you almost stole Andy's Summary job there a little bit. Sort of, except for the fact that Andy has given the shout out about certain content and certain slides. And I just want to raise my hand that the three circles flow that in dynatrace became the four circles flow from andy what that i think we shared that um not at the kitchen table here andy over some breakfast yeah that's true it was good i'm not saying i own it you just you took you took the inspiration and went to a whole new level. Thank you.
Starting point is 01:02:25 I love seeing that slide because I'm like, ah, I recognize where it's over there. I think Mark just wants to negotiate a couple of points on how many times has he been on the podcast. He wants to beat Alois. That's right. And with this, he wants to bribe us or kind of blackmail us. Something like that. Yeah, that's right.
Starting point is 01:02:47 Or I just want some free swag, you know. Yeah, exactly. So Alois, this is notice to you. Mark's coming after you, so you better get some more topics. I have some ideas from the new stuff. I'm just challenging Alois. So Andy, do you have any summaries
Starting point is 01:03:03 that you would like to summarize? I think, not really, I don't want to add anything. I just want to say hopefully this is, we're still in the initial phase of our podcast and we still have a lot of topics to go and hopefully we will be back in well, many, many times before the 200th episode but definitely want to have you back at the 200th episode and see where we are then i'm sure there will be a lot of new topics coming up i'm pretty sure continuous performance even in a in a hundred topics from now will still be a hot topic and um i also look very much looking forward to seeing all of you like Mark, um, uh,
Starting point is 01:03:48 and Brian at perform and hopefully many of the listeners to in the first, uh, first week of February in Vegas. That's right. I'm going to be there and we're going to have a whole bunch like we did last year. Uh, earlier this year, we did a whole bunch of sort of episodes kind of pre interviews. So we'll have that fired up pretty soon here. Um, so did a whole bunch of sort of episodes kind of pre-interviews. So we'll have that fired up pretty soon here. So we're going to have tons of cool conversations. And maybe we take a focus that, you know, kind of where with everything that's going on, that would be kind of cool.
Starting point is 01:04:16 Yeah. And I want to, I just think hopefully we'll have a 200th episode because, you know, I imagine it'll just be some computers speaking binary to each other because they'll all be programming everything anyway. So it will be replaced. Yeah, WebSockets and AI. Yeah, we'll have the Googlebot talking to the Alexa bot, which if you never saw that,
Starting point is 01:04:35 that was the most amazing thing I've ever seen online. I thought that was going to awaken the singularity when that happened. But I also really want to give a huge shout out to all of our listeners. Obviously, we couldn't be doing this without you. If we had 10 listens an episode, they'd be like, yeah, you guys are done. So thank you to everybody who listens, who continues to listen,
Starting point is 01:04:52 and to put up with my bad jokes, I should say, Andy. I don't think you know how to make a joke. It's just my jokes. I want to add to this I mean I'm in a fortunate situation to travel a lot and actually meet a lot of people face to face and thanks for those of you that come to me and say that the podcast is something
Starting point is 01:05:18 you listen to and that you learn and that definitely encourages us so please give us keep giving us the feedback whether whether it is through Twitter, email, or also if you have a chance to meet us face-to-face. Very much appreciated. And through Twitter, you can tweet at Pure underscore DT. You can do an old-fashioned email at pureperformance at dynatrace.com. And I will mention, since you're on the phone, Mark, Alois sent me an old-fashioned email.
Starting point is 01:05:45 I sent me an email with a picture of an old-fashioned drink in the recipe. So another way he's got you beat. He's just superior to you. Oh, really? Would you like some scotch with that? Yes, I would. Okay. All right.
Starting point is 01:05:57 I can work that. All right. So thank you, everybody. Thank you, Mark. Mark, thank you so much for all of it, for getting this rolling and continuing to be a great inspiration for us. And hopefully, I think, you know, hundreds or millions of other people. Yes. Congratulations to you guys. And keep it going.
Starting point is 01:06:17 I would say bring more new people. If you can find somebody who has a gift for podcast and talking, a little bit extroverted, and they're also completely motivated and addicted to performance stuff, bring them into the podcast world. Leandra's doing great. And we had such fun. Bring a new podcaster into your game, and it'll keep you going and expand what you're doing. So best of luck for the future. And I look forward to many, many more conversations. Thank you. Bye-bye. Bye.
