PurePerformance - 025 Evolution of Load Testing: The Past, The Present, The Future with Daniel Freij

Episode Date: January 2, 2017

HAPPY NEW YEAR! Daniel Freij (@DanielFreij) – Senior Performance Engineer and Community Manager at Apica – has done hundreds of load tests in his career. 5-10 years ago, performance engineers used the "well known" load testing tools such as LoadRunner. But things have changed: we have seen both a Shift-Left and a Shift-Right of performance engineering away from the classical performance and load testing teams, and tools became easier, automatable and cloud ready. In this session we discuss these changes of recent years, what they mean for today's engineering teams, and what might happen 5-10 years from now. We also want to give a shout out to a performance clinic Daniel and Andi are doing on January 25th, 2017, where they walk you through a modern cloud-based pipeline using AWS CodePipeline, Jenkins, Apica and Dynatrace. Registration link: http://bit.ly/onlineperfclinic

Related Link: ZebraTester Community: https://community.zebratester.com/

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson. Hello, everybody, and welcome to another episode of Pure Performance. I am Brian Wilson, and with me today, as always, is my co-host Andy Grabner. Hello, Andy. Hey, Brian. Boy, it's cold here.
Starting point is 00:00:40 It's going to get cold here very... tonight we're going to be dipping below zero. Below zero what? Fahrenheit or Celsius? Fahrenheit. Fahrenheit, good. Yeah, we reached that today in Boston, and the wind chill factor is even, I think, pushing it to minus 10. Yeah, well, it's a cold day, so we'll warm up with a great topic. Before we dive into the topic, I did want to give a quick shout out though to an old former colleague of mine, Robert Rovinsky, only because he's pressured me to. So he's like, when do I get a shout out? So Robert, this is your shout out, and I hope you're happy with it. Do you get anything in return?
Starting point is 00:01:20 No. Well, you know what? I will go on record and say Robert has given me plenty in the past in terms of helpful guidance, even though he worked for me. He was a little bit more of my mentor at the time. Fun stuff. Anyway, we've got a show about load testing today, correct? Well, I would say yes and no i think i think i don't want to call it load testing necessarily anymore i think i want to call it how load testing has changed and i know we talked about this in the past especially with mark how devops especially has changed what we want from load testing now i i also know and i just want to point this out here at least i hear this and i don't want to talk bad about any tool vendors but it seems that more and more people are moving away from the traditional load testing tools like the load runners of the world because I think, representing one of these new tools, tool vendors,
Starting point is 00:02:27 even though I don't want to talk a lot about the tools today maybe, but more of what has changed in the last couple of years. Why are people moving away from the traditional load testing tools? Because what are the new requirements for load testing tools? So without further ado, I think I want to introduce Daniel. Daniel, are you with us? I'm with you. Hi, Daniel.
Starting point is 00:02:49 Hello, Daniel. Maybe you want to tell the audience a little bit about yourself, kind of a little background, why you are passionate about performance. And I know you've been doing performance for a little while, for a couple of years. And maybe also give us a little background on what you see, what has changed, and then we dig into some of the details on what we see is changing and how performance engineering is now just different and why we need different tools.
Starting point is 00:03:15 Absolutely. So, yeah, my journey into performance engineering started around 10 years ago. It was pretty much just as a hobby. Pretty much no software out there had good performance, and everything was pretty slow. So I started doing what I could with the benchmarking tools that were around. And this eventually led to me starting working at Apica Systems Sweden as a performance engineer.
Starting point is 00:03:44 So since then, I've been doing load testing for seven years for both big and small companies. I think around four or five hundred load tests in right now and just recently switched over to be the community manager for our product Zebra Tester and doing a little bit more of everything. Cool,edish-based company you must be used to the cold then huh yeah i'm currently in santa monica though and uh we've had rain we're happy with that oh wow that's santa monica that's a little warmer um so uh daniel let me ask you then i mean you you said you have several hundred load tests, and I assume the way you execute load tests has changed over the years.
Starting point is 00:04:30 Is it different now than it used to be five to ten years ago? Oh, yes. I think that not only have we come very far in how people perceive load testing as something more important rather than a checklist at the end of the project. It's matured a lot in what people ask for and what kind of tests they're looking for. It's becoming more advanced, and I get the feeling that everyone is more aligned in testing what they should be testing and testing it more carefully. Now, I mean, go ahead, Brian. No, I was just going to say, in terms of testing becoming more accepted,
Starting point is 00:05:10 I'm going back to my old days of testing, and every time there would be a news story about some public site crashing, we would all rejoice because that meant performance and load testing was getting some spotlight, and maybe some more people would start thinking of it as more of a pre-thought as opposed to an afterthought. Yeah. And it's become more evident also that it actually helps in crisis management to have done these things for government. A big example is the ash cloud that hit Northern Europe a couple of years ago. Maybe actually more than a couple of years ago.
Starting point is 00:05:47 Maybe actually more than a couple of years ago. Yeah, that kind of puts importance on that. Your sites need to be up and running in some cases. They just can't go down. And you need to prepare for these cases, even if it's the case of a volcano eruption. That's on the map now. And that's actually a factor that will drive load yeah of course because everybody's then frantically going on all these websites that trying to figure out first of all what's happening with the loved ones what's happening with my
Starting point is 00:06:16 flight i remember back then it was 2011 i believe my all my flights got canceled because i was i was then quote unquote stuck in europe because i wanted to go back to the US. But I remember that and then we were pounding all the websites trying to figure out what we do now and then everything went down. And all the newspapers of course directed all the traffic to the information sites rather than informing people there.
Starting point is 00:06:38 So it's kind of like a control DDoS attack from the newspapers against the government agencies to pound on that information. It's an interesting case in that you need to consider the volcano eruption too. You can't just go with normal traffic
Starting point is 00:06:54 and peak season. So let me ask you because I started with the statement in the beginning that I think traditional load testing tools that we used 5 to 10 years ago are no longer longer I mean they are obviously still used and they will be used for a while but I think we saw a lot of new tools coming out and I think we also see other people different types of people performing load tests in a different way and I'm just just interested. I mean, I know you're working for a tool vendor
Starting point is 00:07:25 and you might be a little bit biased, but still tell me a little bit about what has changed. And so who is the new audience for load tests and when are they testing and what are they testing and how are they testing it? Would just be very interesting because some people, some of our listeners might still be, you know, running the load test at the end of a release cycle over the weekend. Maybe they want to hear what they should be doing or what they will probably do soon. Yeah, one of the trends that I think I've been seeing over a couple of years is trying to make load testing into something easier and involving as many stakeholders as
Starting point is 00:08:03 possible. You not only need a solution that, oh, yeah, I can create a test and run that test for 10 million users. You need to do it in such a way that not only the technicians understand it, but you can also deliver information to the stakeholders or easily involve the stakeholders in the process. And that's usually a presentation, but also enabling the stakeholders to be able to create the scenarios used for testing without being a developer. So I've seen more tools move closer and closer to starting using HAR files to import sessions and then create tests from that and automating it more and more and
Starting point is 00:08:46 one big thing that I've seen is that everything is shifting kind of left but there's also a group of people that's screaming that it should shift right but I'm on the left shift side in my opinion So what would that interesting, what would shift right mean? Shift right would mean going
Starting point is 00:09:03 back to where it used to be or going into production? Going into production, usually, to actually continuously not only verify that the performance is okay before you're deploying or while you're deploying, but also almost monitor your production environment to make sure nothing is going wrong. Right. So it's not the only test in production. I remember a few years back there was the funneled slogan, production is the new QA. So you're not talking about shifting right and only executing those tests once you're in production. That is sort of a mix of do your testing on the left and also continuously monitor with load and production. Is that correct?
Starting point is 00:09:48 Yeah, absolutely. I think the shift left is all about learning the application, how it behaves, so you don't get surprised. But the right shift is more of controlling it when it's actually released. So it doesn't have to be load testing. It can actually be monitoring that's connected to the load testing activity in some way. So you have the same numbers when you're looking at both sides. Yeah. I mean, that's basically what we also, I mean, what we try to do with our clients, right?
Starting point is 00:10:14 We, in pre-prod, when we do testing, you test your different use cases, features. We call them business transactions, right? Your homepage, your login and all that stuff. And then we can monitor the same features in production with reload. And then you can actually see, hey, look at this. This is actually how often this feature is used. And we thought it's like 80%, but we only see 20% of people using it. So first conclusion that you can draw, we had a total wrong assumption about the load patterns.
Starting point is 00:10:44 Second thing is you can see, you know, what's the real performance behavior? What's the real resource consumption? What's the real behavior, especially when the whole thing runs in production with a real loaded database? You know, data-driven problems, I believe, are still out there because people still, many people have still not figured out how to correctly, in pre-prod, load test against a production-like system. And the database is one thing. So yeah, I totally agree with you. We need to, I agree with the fact that we need to shift in both directions. We definitely need to expand the horizons of people that are doing
Starting point is 00:11:22 performance engineering into production, because production is just a real life system. And we as much as we can do in pre-prod, we need to understand what's really happening out there. And then hopefully we are agile enough and DevOps enough so that we can actually then make decisions based on that data in the life system and saying, hey, I'm a performance engineer. I know what that means. So we need to make this and this change in production now to kind of prevent a certain problem. But then we also need to learn what it means for the next test cycle, for the next build that we push through. Absolutely. And are you seeing – we touched upon the idea of load and performance in production.
Starting point is 00:12:06 Are you seeing – and I'm concentrating on this shift right a little bit, just because it's a little bit fascinating, but are you seeing people actually running load in production? Or is it kind of more akin to the concepts Andy and I talk about a lot where on the left side, you're running all these tests, you're collecting all this data, but the biggest part of it is collecting very specific data and different measures and metrics and then monitoring that same data in production under the natural production load so that you can understand how that behavior between pre-production and production is along a set of standardized metrics that you're collecting exactly yeah so i also think that there's a slight disconnect uh when you go around the
Starting point is 00:12:53 conferences and you mentioned like testing production why why not load test your production environment certify it make sure it can handle it a lot of people get super scared and takes like 10 steps backwards and almost starts running uh there's A lot of people get super scared and takes like 10 steps backwards and almost starts running. There's a lot of people saying like you should never test in production. Like how do you know how your production environment actually works then?
Starting point is 00:13:15 And it also comes, like when I say that you should shift in both directions, you can also see it as translating all the numbers you know into something readable, like for not just the technicians. Like the Dynatrace does this really good by enabling a collection of specific methods, like, oh, a purchase happened for how much and what exact time? What does this mean?
Starting point is 00:13:38 How much have we earned the last five minutes? And when we have slow performance, how is that affected? That's something that I feel kind of satisfies the shift, right? A lot. Because you're translating the information in both directions, not only towards the developers or technicians, but also the people that actually are there to measure the successes. Right. Like, how do you know how much impact you had if you count the correlation between response time and revenue? Exactly.
Starting point is 00:14:11 The response time revenue, that's a great thing. One thing that I've seen recently, at least I try to promote it and maybe I'm the only one, but here's a thought. Response time is obviously the first thing we think about when we talk about performance engineering. But for me, especially when we talk about SaaS solutions where whole business models are just driven by software, if you think about Uber, if you think about Airbnb and all these companies. So I believe that from a performance engineering perspective, you also need to focus on resource consumption and efficiency. So how efficient is my code?
Starting point is 00:14:51 Not only how fast is it, but how many CPU cycles do I consume? How many logs do I write? How many database statements do I create? How many bytes do I send over the wire? Yeah, I think that is important. That allows you to attach a cost to the transaction. Exactly. And I believe, especially with these new business models
Starting point is 00:15:11 where everything eventually is going to be driven by volume, the business success of a company is going to be driven by how many people they can attract to the service. But if the service is inefficient, I mean, it scales endlessly thanks to the cloud, but if you then end up paying, I don't know, twice the amount of dollars
Starting point is 00:15:29 that you would pay for a system that is correctly architectured and is really efficient, that means it eats your margin and could potentially impact your business success. So are you, when we talk about performance testing,
Starting point is 00:15:43 coming back to testing again, are you looking at these things as well? Or are you primarily still focusing on response time and throughput? Or are you also looking at resource consumption and then how that changes from build to build or release to release? Yeah, that's something I absolutely look at. It's somewhat dependent on from a case-on-case basis who wants to do what. So when you're doing continuous integration, for example, and you run night load tests with every deploy and so on, you can absolutely extrapolate all of these numbers and attach a cost to it. We sometimes do that in scalability testing where we kind of like, okay, we have 200 servers.
Starting point is 00:16:23 They cost this much, and they can push this many transactions. So we simplify it a bit, but that's one aspect that we're looking at it, but also how much resources did this test use with 10 users, 20 users, and so on. So we do that to a certain degree, but it's not always we have the ability, unfortunately, especially around Black Friday time where everybody's just panicking. It's funny, too, because, Andy, as you've been saying, I've been bringing up that topic quite a lot as well in people that I speak with. I think we're probably going to be hearing a lot more of that in time to come. And I just say congratulations to you and your team, Daniel, for entertaining that idea and even starting that process.
Starting point is 00:17:18 Cost is going to become, I think, a big thing. Absolutely agree with that concept. Yeah, the more we push it into the cloud, the easier it's going to be to see how much a transaction costs as well. I would love to see an API that you can push all your stuff directly against and get a cost analysis back. That would be amazing. I also think microservices can enable this even more, especially when you're running micro instances in Amazon and so on. And you have a large environment of many servers, but small instances of just APIs. That will just simplify everything so much too.
Starting point is 00:17:52 Well, and especially, I think, I mean, that's a great point. Microservices is one thing, but then now Amazon and also Microsoft, I'm sure Google too, with their function as a service, you know, Amazon calls them Lambdas, Microsoft, I think, calls them function as a service functions i mean then they charge by by transaction right by how often this function is executed so if you are a developer and you use that feature and a transaction costs 0.01 cent that's great and it's nothing but if you then do something inefficiently and you need three function calls for doing one thing and then all of
Starting point is 00:18:21 a sudden your company becomes very successful and then you have millions and billions of transactions executed every day and the real unfortunate one dollar transaction that someone forgot to optimize that's a really cool way of looking at things like when you can say that this transaction costs this much per request yeah and i think it makes our life so much easier yeah and I think we need to really educate the market and we need to educate engineering teams that cost becomes just so much
Starting point is 00:18:53 more important I mean in the old days when we had our data centers and the hardware was just there I mean yes we looked at resource consumption a little bit but in in the end, nobody, I don't think nobody cared that much because in the end, nobody saw the bill of what the data center really costs.
Starting point is 00:19:12 But now with, you know, kind of trying to figure out the costs, the infrastructure costs, the cloud costs per application, per even down by feature, or even I think, and this is a big thing we need to figure out for SaaS-based models. If you think about Uber again or Airbnb, you need to figure out how much money do we make from a user that goes to our website and consumes our service through ads or through whatever they recharge them, and how much money does one average user that clicks through these five pages actually cost us. And then it becomes not only an efficiency discussion with the engineering team to optimize
Starting point is 00:19:54 their algorithms, it also becomes a discussion with the UI designers and with the people that actually design the workflow. Because if you can deliver a service in 15 clicks, but you can also deliver it in 10, it means you have to deliver less JavaScript files, less images, blah, blah, blah. And so it becomes just more efficient. And in the end, obviously, it also helps the end user because it's just easier to click to 10 clicks instead of 15. Yeah. because it's just easier to click to 10 clicks instead of 15. There's another aspect to the design process as well.
Starting point is 00:20:32 It's that it's easy to kill your environment by having the wrong workflow. Yeah. So it becomes important to keep that in check. I don't know how many cases I've seen where the streaming services or sport broadcasters they have like a gate and only open it for everyone at the same time which kills their site i'm sure this is something that you've seen before but that's just a badly designed workflow if you have just a floodgate that you open and you have no control over how many can rush in. This is why buildings have a capacity limit. And opening up the floodgates is just sucker-punching your environment.
Starting point is 00:21:12 So I think designers are starting to get the responsibility now as well because they don't usually see the impact of what the design and workflow is going to be. We can look back at the good old days, like 10 years ago, when we were doing our HTML and starting to do
Starting point is 00:21:28 PHP to call databases and this turned into something like when you coded and when you had the HTML page you didn't just do the client side and then hoped it worked you actually saw the cause and effect directly it's a bit more fussy nowadays
Starting point is 00:21:44 for designers and UI implementators to actually see what effect does my design have on the performance. Yeah. And that's also going to become a bigger problem a bit the longer we go. But I think you still have the Dynatrace client-side monitor, I think, where you can get all the JavaScript and tunnels. That's something that I like. Yeah, yeah, yeah.
Starting point is 00:22:09 No, that's true. And I think it's, I mean, it's obviously we have tools, but then there's, I think for designers, there's fortunately the browser vendors, they put in a lot of great tools where you can also simulate, you know, different bandwidth, different types of resolutions, different type of devices. And it's just a matter, I believe, a cultural change to actually force your developers, your designers to actually use these tools and then figure out what's the user experience, but not only the user experience, but what is also the cost we push down to our end users. Because if you think about mobile apps, if you think about web apps,
Starting point is 00:22:47 you should always think that, well, there's one cost associated from our side where we need to send these bytes from our data center or from our cloud out to the Internet. But the end user also needs to consume that data. So if I'm, for instance, on my mobile and I'm roaming somewhere because I'm traveling, that means if I can use a service and the service forces me to download five megabytes every time I open the app, then I wonder how long I will be using that service when I'm traveling. Yeah, you have a good point. And this is not only apps, but I think Dynapatrace wrote a blog about third-party not long ago.
Starting point is 00:23:27 Yeah. I think that's a great example, like what impact does the third-party have? That's also an aspect when it comes to the client side. You're downloading a 500-kilobyte website, and then there's four megabytes of third-party. No, I just feel it's something that people disregard so much. And not only are you wrecking in the user experience, but you're consuming someone else's bandwidth unwillingly. So earlier we were talking about the shift right.
Starting point is 00:23:55 I wanted to shift left a little bit and kind of try to understand, you know, going back to my old experience in load testing, you know, we had a massive set of try to understand, you know, going back to my old experience in, in, in load testing, um, you know, we had a massive set of scripts to maintain anytime there was a new release, and this is going back to waterfall days. Right. But I think that's kind of the point here. Um, we'd have to check our scripts, make sure they work, update ones that didn't, but we also had to wait until the code was stable enough. And then maybe our scripts would work. But then some bug fixes would come in. And by the time we got to execute our tests, the script doesn't work.
Starting point is 00:24:30 So we have to maintain it again and rerun it, which really made it a very, very heavy process. It was always at the end of the cycle that we could really run any meaningful tests, because that was when the system, you know, the code base was finally stable enough to be able to run these tests on. And with the move for Agile and DevOps and CICD and all, there is a low testing is in a much more precarious position. But a lot of what you've been talking about, or not yet, hopefully we're going to be talking about it now, is the of of shifting that left somewhat and i guess conceptually to me how does that shift left and what is it you know you you work with this newer generation of tools what is it with about the newer generation of tools that allows you to shift some of this
Starting point is 00:25:19 stuff left and make it less cumbersome to try to maintain and manage all this stuff? I think it's about making it all easier. If we look at the hardware world, most of the cool stuff that we see in enterprise implementation won't really hit the market until maybe a decade later. I think we're starting to reach a point where usability and enabling pretty much anyone to get started with these things is important. And some of these things is just making iterations simpler and easier. As an example, something that we're working on with our tool is to be able to generate a test for part of a transaction
Starting point is 00:26:01 and then just mix and match all those parts and run that as a tied together transactions that means well we change the login that doesn't mean that we've invalidated the whole scenario we only validated that step and that's easily updated so that's one of the things that we're trying to help people when they are kind of like in the middle of a development process or if you're a gaming company in a crunch, to actually be able to quickly iterate, just record your session, like correlate the information that you need to and add your test data, upload it to, say, the Apica Lotus portal and run a load test. That's something that we prove and can be done in less than 10 minutes and no coding involved.
Starting point is 00:26:46 It sometimes depends on the complexity of the applications, but I feel doing stuff there to make it easier to update your scripts and making it easier to create them and correlate them is something that's really important. Because you can't have that agility if you're spending hours and hours on screenings. Yeah, I'm not doing that. Like a classical problem that I've heard from colleagues that come from the load running world is that, well, I know C sharp. I'm a tester. I should focus on testing, not development. There's always points where you can be like, oh, I need a plugin. But you can always have
Starting point is 00:27:23 a colleague help you or contact a vendor to see if there's an existing solution for it. And I feel that we should take the development out of testing as much as possible so we can iterate quicker and actually leave developers to do real development that improves life. Well, this is an interesting point, and I want to now challenge
Starting point is 00:27:47 you on that. Because I believe that, I mean, if you look at the agile teams, the idea is that in an agile team, you have different engineers that have obviously different roles, but you have testing and engineering as part of a whole team, right? So I actually encourage developers to become part of the testing best practice or become part of testing because I believe developers should also think about testing. They need to do test-driven development.
Starting point is 00:28:22 But maybe I'm – and this is where I want to challenge you, but maybe you're talking about two different things. No, I agree with that as well. And that's something that we're calling like a performance center of excellence, where you actually have a group that focuses on performance from multiple departments, multiple stakeholders. The developers need to be involved in the process and know what's changing and why we're testing it and all of that information. But I feel that putting
Starting point is 00:28:51 valuable development time sometimes into managing load tests is sometimes the wrong approach to it. They should be involved in the process and help to be able to do this load test. We need to do something about this request and this is how you use it. I feel that I'm always going to be able to do things as a load tester 10 times quicker if I have the developer near me and I can ask him questions. But I sometimes feel that if you're putting all the responsibility to create the test
Starting point is 00:29:24 and maintain the tests to the developers, things are going to start getting missed. So here's a proposal then, or kind of like maybe a thought. We in our engineering team, and we talked about this at a webinar I did with our DevOps manager, Anita. She and her team are responsible for the pipeline. That means they see themselves not as a separate team, but they see themselves as actually, well, they see themselves as a product team. Their product is the pipeline, and the pipeline basically provides tooling and guidance for our engineering teams to push code faster through the pipeline. The pipeline itself does things like obviously compiling, running unit tests, running functional tests, running load tests and all that stuff. But what this team is also doing as part of delivering and providing the pipeline, they
Starting point is 00:30:13 also provide easy ways for the application teams, and I'm not calling them explicitly application teams and not development teams, for those teams to not only write code, but also make it very easy through frameworks to write tests that can then be executed automatically. And then pipeline basically takes care of the test execution. So the developers or application teams don't need to worry about this. The pipeline also takes care about looking at the metrics and stopping bad builds early, comparing tests between builds. So here's my thought.
Starting point is 00:30:44 When you talk about a center of excellence for performance engineering, instead of having them as a service organization that says, I'll take care of your scripting, I'll take care of test execution, wouldn't it be more DevOps-y if these types of teams say, yes, we are the experts in executing performance tests and we may manage maybe the performance test environment and the tooling and the licenses that we need for that but what we really do we enable every single team that wants to load this so that they can do it with very minimum effort and maybe coaching them on how to maintain certain aspects of the script like what you said earlier if you have a certain application team that are in charge of a certain feature then let them develop that feature but also update that little part of the test that then you can then pull back
Starting point is 00:31:39 in into your pipeline and say well as part of the pipeline we're testing their feature and we have their test but we're also running the large scale load test so we combine all of that stuff so kind of like that's absolutely a good way yeah i don't feel that either of these ways are the way to go it's uh kind of like what fits your organization as well because if you have uh if you're a like really big company i'm just gonna see if i can say something as an example without hitting a customer let's just take let's let's take the u.s government as an example you have so many like it's you have so many different line of businesses and so many different objectives
Starting point is 00:32:23 on these line of businesses that you need someone in the middle helping to organize communication and like all of the requirements. I feel that that's more kind of like when is absolutely the best way because everyone is involved. Yeah. I mean, obviously, and you're totally right. I think you said what the thing is right now. I mean, we're talking about DevOps and we're talking about all these cool things and shifting left and executing tests on a continuous basis and developers writing tests. I think this is what we hope to do as an industry.
Starting point is 00:33:17 But also as Gene Kim said, one of the godfathers of DevOps, he said only 2% of companies worldwide are currently embracing DevOps best practices, which means there's a lot of companies out there that A, will never do it because they are just either, maybe they don't have the needs
Starting point is 00:33:34 because there's no competitive pressure or no whatever other pressure that typical people have. And some of them are still years away from that and and for these i agree with you for these at the current moment in time what you're explaining makes a lot of sense and it's great i was just saying i just brought up that statistic earlier on a on a call with uh asad actually um and i think it sets up um your example sets up a perfect transition in a way. If you have that load performance center of excellence, they're the ones who can then, as a company, if they can afford to transition into these teams, you can then take and start putting those members into those application, to offer the guidance. Because, you know, earlier, Andy, you were talking,
Starting point is 00:34:26 and there was a lot of interchange between the word load and performance. And I think we have to be careful when we're talking about load and performance because although they're strongly related, they're also very different, right? A developer, I think, can run a performance test and collect metrics. But it's somebody who understands load theory and bigger pictures of, okay, we don't just run a bunch of scripts at the same time. That's not a load test. It's understanding about designing a model of load,
Starting point is 00:35:00 knowing about, you know, tongue-tied there. Even on a unit test, like say for search, when we go back to that concept of search and if we're using caching or not caching and whether or not we're using one search term, 30 search terms or an unlimited amount of search terms, that's something development can put into their unit test because they should understand, maybe with the help of the load teams or whatever, we need to make sure we exercise cache or not cache what we're using.
Starting point is 00:35:31 But those are the bigger picture concepts that those load teams should hopefully be able to offer. And I definitely agree with Daniel that the easier you can make the tool, the better. But you always want to have some extensibility that if you do need to get a little crazy, you can get in there and get crazy with it. But I don't think there's a need for a tool to necessarily be complicated or for you to have to have some sort of developer background to use it. But any tool that you're using through Lifecycle has to have enough of a maturity or enough of a back a back door to get into if you do need to make some complexity to it because i have used
Starting point is 00:36:12 or tried to use some tools that have super simple record playback but then when you want to go into say parametrizing or let's say your example before login breaks and login is part of all your scripts, you couldn't even copy and paste a fixed login into those other tools. So there's definitely a balance and requirement. Yeah, and I think that's something that I'm starting to see in our product, CibaTester. We just started with having the session recording of network traffic and quickly started looking at horror files to be able to import those to get the session. But like while we're trying to make it easier, we've always had kind of like that extensive ability. If you want to test a protocol outside of HTTP and HBS, you can write a plugin for it if you want. But we've also added something that's
Starting point is 00:37:02 called inline scripting. That's like a middle ground between not having to code and having to code something. It's just utilizing, it's used like in BASIC basically. We call it PRX BASIC. So that's a way of simplifying things. Like you can write functions and add logic to your test, but you don't need to do it in Java. You can also do it in the simplified language rather than just saying but you don't need to do it in java you can also do it in this simplified language rather than just saying if you do it do it in c sharp or if you do it you write your python script and all the logic and all the metrics that you want to extract cool hey um i know you just mentioned i mean you know obviously you work for Apica Systems, and you're on Zebra Tester.
Starting point is 00:37:46 I just actually go to your website, and one thing that just stucks out to me, and I should mention that as well, obviously you're also doing testing from the cloud using AWS. And I was just writing a blog post about AWS CodePipeline, and obviously when I wrote that blog, we too had an interaction because you guys are fully also integrated into the pipeline of AWS, but I'm sure in other pipelines as well. And just a little commercial break here. We too are also doing a so-called performance clinic
Starting point is 00:38:20 at the end of January where we then go a little into more detail of what a pipeline can look like with Abica, with Dynatrace. So just a little shout out that I want to make here. Because we're all in. And do you have a date for that yet? Do you have a date for that yet?
Starting point is 00:38:37 We have a date. If you go to bit.ly slash onlineperfclinic, you will find it. And it is on I think January 25th exactly January 25th we're doing a session on AWS CodePipeline and it's going to be at
Starting point is 00:38:55 4pm Central European Time which is 10am Eastern and 7am in the morning on the West Coast so yeah well that'll be four days before my birthday. But I also want to point out this will be the first podcast of, or this, since people will be listening to it when I say this, this is the first podcast of the new year.
Starting point is 00:39:15 So this will be airing before that. So that's why I wanted to get that information out there. That's perfect. Thank you. Sweet. So happy new year to everybody. Happy new year. Hopefully we Happy New Year to everybody. Happy New Year. Hopefully we'll have a brighter 2017.
Starting point is 00:39:31 Come on, Bowie. It started with Bowie. We all should have known then. Thank you, Mishab. Hey, I have one last. I know we want to probably get at the end of the show. But one very strange thought that I have in the end. Do we believe or I fear it could happen that load testing as we know it is totally going away?
Starting point is 00:39:57 I even fear that a lot of the pre-production is going away. Why? Because I believe that in the future, like the cloud natives, the people that just run everything in the cloud, that do continuous delivery, these application teams may just deploy everything that they code in production all the time. Code changes may not make it as an active feature through feature toggling. But in the end, I believe if you, from the beginning, have good monitoring, and as you said earlier, Daniel, I think this is the shift right, I believe what could happen is that performance engineering teams will be 100% responsible in the future for production monitoring, at least for certain aspects of it.
Starting point is 00:40:43 And when developers push their code changes no longer through a lengthy pipeline with a lot of load tests, but direct into production, then turning new features on for a certain part of users using A-B testing, using Canary releases. I think that might be actually the future for performance engineering, which means I'm just saying, you know, the death to all the load testing tools. But I think this could potentially happen. I'm not sure. Maybe I'm dreaming. No, like, I don't think it's going to be the death of it. I think it's just going to be very different. It's like how we're testing today as compared to 10 years ago. It's like how we're testing today as compared to 10 years ago, it started very different. Ten years ago, we just went in, we did benchmarks and tested worst case scenarios.
Starting point is 00:41:39 We've moved from that to being less of I work with load testing and I work with load testing only to developers being conscious about load testing and I work with load testing only to developers being conscious about load testing and actually being enabled by tools a bit more. And this is going to change more and more. So it just becomes more and more abstracted, I think. I just think it's going to be a different way of looking at it. I still think there's going to be performance steps that validate things. But how we do it as compared today might just
Starting point is 00:42:05 today's load testing might look very archaic in 10 years yeah yeah because i know that when i look back at my reports and what we were worried about for 10 years ago it was very different like everybody has talked about the three second limit and page views and not really even paying mind to conversions and things like that. We matured a lot, and we do it very differently today than we did 10 years ago. I'm sure that's going to be the future, but I don't think that any of us is going to lose our job in the next 40 years. Yeah, no, no, that's not what I want to say, but I think you're right. I think, too, though, Andy, to your point, I think there might be a movement towards attempting that
Starting point is 00:42:49 because a lot of the innovation seems to be grounded in getting out faster and faster and faster and breaking the old models, right? But I think eventually there's going to be a breaking point in that and then kind of putting on my Criswell predicting the future kind of thing. Uh, I think people might try to, to, to skip a lot of that stuff and, um, rely on processes and other components, but at some point something's going to blow up pretty severely, I think. And there's going to have to be some, you know, as Daniel was saying,
Starting point is 00:43:27 some sort of new model or something we don't recognize right now. I could be totally wrong, though. They might, you know. It can actually be just that a new framework comes out that enables people to write code without performance issues. Yeah. That might be something like, yeah, but that's something kind of like, things are changing.
Starting point is 00:43:51 And like, as a developer, I would want a simplified language that allows me to write scalable applications without any worries about it. Like, at some point, people will be enabled by something like that. That's the iterations of languages and frameworks but i've it might be that just a customer that would need or ask these questions about performance will change maybe more into a vendor rather than a customer yeah i mean currently we see what we're bound by um you know if we had if quantum computing ever becomes a reality and we have a whole new set of hardware where resources and, you know, memory and all that kind of stuff is unlimited or vastly greater than where we are now, we could be looking at something different. But there are still a lot of physical constraints, both if we're talking about threads and connections
Starting point is 00:44:50 versus CPU and power consumption and everything, where I think performance is always going to be a major concern. And I would be shocked, Andy, if there wasn't some kind of checks in place or some kind of testing in place, as opposed to just, hey, throw it out there. And if something doesn't work, we can just keep moving forward and fixing it until it does. But, you know. I'm just, you know, and I didn't want to say we don't do any testing anymore.
Starting point is 00:45:18 But Facebook has proved it with, you know, we know the story when they introduced the JET, that the JET was running for several months, unknown by anybody in the world, and they were basically just pushing the feature out, hidden, and it was actually already testing itself in production. Right. I mean, they did use a very kind of, like, known and well-used protocol, though. They're also very large, right?
Starting point is 00:45:44 I mean, we're not... Yeah. At least, I don't think... Like, they did test it for a long time, but they picked something that's... It's not something new that they should have been worried about. It was based in Jadber,
Starting point is 00:45:57 if I don't remember badly. Yeah, but... You can connect, yeah. Yeah, but again, I just want to throw it out there. I believe, in the future, I believe we will see, obviously, as we've always seen, a shift and things will change. So I just wanted to throw it out there. I know it was a lot of future thinking.
Starting point is 00:46:16 No, no. Things are always going to change. I can see it happening, but it might just be that who does it changes. It might be more of a research thing rather than a developer thing. Yeah, could be. Or a vendor thing. Yeah. Okay.
Starting point is 00:46:31 Hey, Brian. We got a long show today. I know. That's all right. I think this is a fun one too, though. But, yeah, let's wrap it up. Speaking of New Year, Andy, so we have on the 25th, we're going to have the performance clinic, right? Are you going to be traveling, doing any speaking engagements, you or Daniel, coming up in the New Year that you might want to mention?
Starting point is 00:46:54 Or is there kind of everything clear on both of your calendars? Let me see. I think, well, I will be traveling. The stuff that I know, I will be in New Zealand and in Sydney. Mid of February, we have Whopper. Hi, mate. Whopper 25 is happening in New Zealand. And actually, I mean, what we have earlier, in early February, first week of February,
Starting point is 00:47:23 we have our Dynatrace Perform Conference. Oh, yeah. there i think you will not be there i will be there i'm scheduled to be there yeah because we do a podcasting from there oh well that's not what got me there i had to do a hot day oh nice in order to get there so and uh yeah so perform is happening. Then, yeah, I'm doing an Agile meetup on January 17th in Omaha. That's going to be nice. It's something new for me. It's a new city. But, yeah, definitely check out Perform. Perform.Dynatrace.com.
Starting point is 00:47:56 That's where the Worldwide Conference for Performance Geeks is happening in Las Vegas. February 7th to 9th. I guess that means I have to come. Please do. Well, you should. Daniel, do you ever go out and do any speaking engagements besides the Perf Clinic we have coming up that you'd like to promote?
Starting point is 00:48:16 Nothing planned, unfortunately. It's the January 25th one, the Performance Clinic that I'm doing. And outside of that, I have not started planning yet. I've been looking at my vacation too closely. Okay, excellent. I would just like to wrap up by saying that, you know,
Starting point is 00:48:36 we have a new Twitter handle for the show. It's at pure underscore DT for Twitter. And also our RSS feeds in the show. We split it so that there are two separate ones. A separate show for the Pure Performance Cafe. So if you listen to the Pure Performance Cafe and you're wondering where that is and you haven't seen, that is a separate show now on Spreaker and also a separate
Starting point is 00:48:57 RSS feed. And we're also publishing these onto YouTube as well. So please feel free to give any feedback. Those are my last words. Andy, Daniel, any last words from you all? Well, as the ZebraTester community manager, please hit up our community, community.zebratester.com,
Starting point is 00:49:17 if you have any questions about our tools or solutions, or if you're just stuck and don't know how to proceed with your load test. Excellent. And Andy? Well, from my side, folks out there, get ready for change. There's always change coming and also in the performance engineering world and the way we load tests 10 years
Starting point is 00:49:36 ago is different than we do it now and it's going to be probably a little different in 10 years from now. So stay up to date what's happening. Follow us and follow others as well and level a big shout out also to pure to perfect right our
Starting point is 00:49:49 friends yes perfect calm and mark on task awesome yeah I did a podcast with them not far not long ago excellent yeah full disclosure we are actually sponsoring the
Starting point is 00:50:06 show of it. Yeah. Wonderful. All right. Well, thank you everybody. Happy New Year.
Starting point is 00:50:12 And we have some hopefully have a whole year's worth of great shows coming up for everybody. So please be
Starting point is 00:50:20 sure to stay tuned. Thanks, everybody. Goodbye. Bye. Thank you. Bye. Thank you. Bye.
