PurePerformance - Perform 2020 Updates and News with Henrik Rexed of Neotys

Episode Date: February 6, 2020

...

Transcript
Starting point is 00:00:00 Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance! Hello everyone, we are live in Las Vegas with the performance chef. Exactly. I just had lunch. A lot of things to cook. Do you have any commentary on the quality of the food here? I think there was a bit too much of auto-scaling. I had that in my mouth. It's sort of strange, it's like briny, salty. Yeah, it sounds pretty fat, and I think at the end of the day it's really expensive.
Starting point is 00:00:49 Yeah, and auto-scaling, just so everyone knows, is not gluten-free. Some of the labels don't disclose the full information. It happens to be very fat. AIOps is an interesting topic that we hear a little bit about here, and Andy's been doing stuff with the ACE team. Are you guys doing anything like the Autonomous Cloud initiative that we're hearing about this week, from a Neotys perspective? Yes, I mean, I've been involved with Andy last year.
Starting point is 00:01:14 We did a hot day on the topic, where as part of the quality gates in the ACM, NeoLoad was involved. Okay. So we were definitely part of it, and I think in that spirit, Keptn is, I think, the next generation. And I think I had the chance, thanks to Andy, to be involved in the Keptn project and be a contributor to that. And I think that's...
Starting point is 00:01:42 So Keptn comes directly with a JMeter service. Yep. And I built the NeoLoad service and said, okay, let's do something smart with the NeoLoad service. Yeah, yeah. And I think, yeah, it's there. It's available. And recently with Keptn 0.6.0, which is
Starting point is 00:02:00 the official release of this. Right. There is a quality gate, and the quality gate is pretty much inspired by the SRE specifications. Yeah, that's right. With the notion of indicators, the SLIs, and objectives, the SLOs. And now with this, I've added, in fact I published this week, just before Perform, the NeoLoad SLI provider. So now you can define some SLIs coming from NeoLoad.
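For anyone who hasn't seen a quality gate in practice, here is a minimal sketch of the idea being discussed: indicators (SLIs) measured by the load test are evaluated against objectives (SLOs) to decide whether a build gets promoted. The metric names and thresholds below are invented for illustration; in Keptn the real definitions live in the SLO configuration, and an SLI provider such as the NeoLoad one supplies the values.

```python
# Minimal sketch of an SLI/SLO quality gate. Metric names and thresholds are
# hypothetical; in Keptn they would come from the SLO configuration, and the
# values would be fetched through an SLI provider (e.g. the NeoLoad one).

# Indicators measured during the load test (SLIs)
sli_values = {
    "response_time_p95_ms": 480.0,
    "error_rate_percent": 0.4,
    "throughput_rps": 112.0,
}

# Objectives the release must meet (SLOs)
slo_objectives = {
    "response_time_p95_ms": lambda v: v <= 500.0,   # p95 under 500 ms
    "error_rate_percent":   lambda v: v <= 1.0,     # under 1% errors
    "throughput_rps":       lambda v: v >= 100.0,   # at least 100 req/s
}

def evaluate_quality_gate(slis, slos):
    """Return (passed, per-indicator results) for a simple pass/fail gate."""
    results = {name: check(slis[name]) for name, check in slos.items()}
    return all(results.values()), results

if __name__ == "__main__":
    passed, details = evaluate_quality_gate(sli_values, slo_objectives)
    print("promote release" if passed else "block release", details)
```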
Starting point is 00:02:31 And Keptn will gently ask the SLI provider, say, hey, s'il te plaît, please, NeoLoad, can you give me your indicators? Exactly. I like that. That's awesome. Now, people may not know this. I mean, you do a lot of evangelism and teaching in the NeoLoad world. You lead the Performance Advisory Council and countless partnering pieces. But you are a developer near and dear at heart. You're actually actively building and contributing to the product and things like Keptn. So I'm not contributing directly to the product. What I'm doing is building prototypes.
Starting point is 00:03:08 So usually I will try to build things that will maybe happen, be in the product in two years, or maybe not. But it's sort of the experimenting, the engineering part of experimenting: what if we try to pull these things together or do something interesting? Yeah, for example, the blockchain support. So for those who don't know blockchain, there is a white block available in the market. And we now have official support for generating blockchain load testing. So it's on the market now.
Starting point is 00:03:39 That's cool. And there's all this stuff that we did. So Keptn is one of them. The new Dynatrace integration is one of them. Yep. And also all the IT stack as well. So there's plenty of things. Yeah.
Starting point is 00:03:51 What else is new in the NeoLoad world? Because I know there's some things cooking in the chef's kitchen. Do you mean officially or unofficially? No, the unofficial part would be the things that we can't share. We'll just mute those mics. I can't believe you guys are doing that. Yeah, it's really going to be powerful. Yeah, you see. Yeah, that'll be great. But there are some things that you guys have announced recently. Yeah, I think everyone... I mean, Scott Moore has now announced it on LinkedIn, and we are starting to announce it as well, so there's a beta program on our Citrix support.
Starting point is 00:04:28 Yeah. And there's a GA coming soon, very, very soon. Very soon, yeah. And, yeah, I'm pretty happy to have the feedback of Scott. Scott has been very... And he was integral when I was in the LoadRunner world and doing ICA work back when that was first built, many, many moons ago.
Starting point is 00:04:47 But the Citrix API set, the way ICA works in the protocol, there's a lot of new things that make it easier to build that. The effort so far was pretty good. I think, yeah, the first thing we wanted to provide is support and make a smooth way of designing Citrix in our GUI. Because, I mean, I had the chance, or I don't know
Starting point is 00:05:12 if I can say a chance, but I had the pain. I had the pain to build scenarios in Citrix in the past where you build a scenario and then you know the next morning it's not going to work. One pixel changes here, one pixel changes there. It's really old school analog type record and replay.
Starting point is 00:05:29 That's true. But then getting into an object model is the new way to do it, right? So yeah, we added a few things in our recorder. So the recorder, from the moment you record, you have the options to start designing at least, sort of. So I think you're saving quite an important amount of time. And I think when we pick the
Starting point is 00:05:50 technology, say we need to be smart and select the right OCR engine. And I think, I mean, I don't know if you've been using Google to translate menus and basically with a picture. They did pretty well on that.
Starting point is 00:06:05 I'm pretty impressed. Sometimes you don't know what you just ordered. You don't find out until it's way too late. Oh, I shouldn't have eaten that. So basically we picked the Google engine. So the OCR engine is the one that works pretty well from my perspective. And at the moment, it works fine.
Starting point is 00:06:27 And so we cover, of course, both worlds, so StoreFront, the web access to the Citrix environment, and the traditional Citrix access. Yeah. Is that technology built into NeoLoad? Is there technology you can leverage in other Neotys products, as well as leveraging it for Dynatrace work, or having that new information, the metadata about that? When it comes to Citrix, I know that Dynatrace came out
Starting point is 00:06:58 with a way of measuring Citrix users. I already have a story about it, so I would like to implement it. So I'm not going to say much about it. It's like you can instrument the server side and the client side from the monitoring. There's metrics or measurements that you can grab
Starting point is 00:07:15 from a Citrix library that's running, from the XQ. And that's sort of, it's sort of like monitoring the health. That's correct. Of course, when you offer a new protocol and technology, it's important to have the monitoring layer that will bring at least the indicators that will help you make the right decision. So in the Citrix world, Citrix exposes the metrics in Perfmon. So at least we have those things from our traditional Windows monitor.
Starting point is 00:07:51 But again, we want to have something that will give more insights. Because at the end of the day, we are trying to test everything, but we have less and less time to do everything. So if you have something that helps you to test and gives you the right indicators that help you decide, that will be really, really efficient. So things like refresh latency and stuff like that. Cool, so check out the new Citrix support.
Starting point is 00:08:17 Citrix, you said it's in beta now, GA-ing in a little while. Yeah, in a few weeks you will hear about it. A few weeks, something. Cool, that sounds great. Are there any other Dynatrace-specific things that are new? You talked a little bit about the Keptn work, pulling NeoLoad into that, but you also did a hot day session here,
Starting point is 00:08:35 like three hot day sessions. Yeah, three hot day sessions. Which, you know, I did one five years ago, I think I did a hot day with load testing in AppMon, but you guys were doing load performance and the new Dynatrace. Yeah, we want to present the concept of performance as a self-service.
Starting point is 00:08:52 I think now you're going to shift left and give the power, the ability, for anyone to run a load test and get feedback that will promote an application to the next level or not. Performance as a self-service is the right thing, and so we had a class on that. The idea of the class, of course, it's a Dynatrace
Starting point is 00:09:10 event, so I'm not going to expose NeoLoad everywhere and put stickers in the room. I tried it, but... Yeah, yeah. If anyone's not familiar, I mean, a lot of the hands-on training here is not just come learn Dynatrace, it's come learn a discipline, a practice, an idea, a concept, and actually apply that concept. So I did my stuff with JMeter or other stuff. So yeah, but that was the idea from Andy, like three years ago. And my work at PayPal was the self-service. Like, I'm not a load testing expert.
Starting point is 00:09:40 I'm just a developer, but I know I need load testing here. How do I kick off that process? And the system could say, I've got a script that can run against that. I already have it, let's go ahead and make that happen for you. And start giving you that feedback as fast as possible.
Starting point is 00:09:56 So yeah, all the class was in that spirit, so we used, in that class, technology that was well known, so Jenkins, for example. Jenkins, yeah. And then we tried to explain what are the features that will help you to deliver that from a Dynatrace standpoint, and we did a lot of exercises, a lot of hands-on,
Starting point is 00:10:11 and I think when you describe performance as a self-service, Keptn is just designed for that. I love the fact, because I did a lot of demonstrations on that, I love the fact that I'm pushing a new artifact and it goes, deploys, and then a load test is involved.
Starting point is 00:10:28 You don't have, in the service that we built, you don't even have to worry about deploying your load generators and controllers. It does everything automatically. So the developer, basically, you just have to say, I want to just deploy that. It's sort of like Henrik in a box. Right?
Starting point is 00:10:46 Or an artificial Henrik. It could be a quick one. That's great. Then we have to go, what, the movie? What was that movie? There was the AI movie that was with Macaulay Culkin, the creepy Spielberg one. Ex Machina, was it? So you'll be like
Starting point is 00:11:01 Ex Henrik. Yes, exactly, for robots. How else? This is one of multiple Dynatrace Performs you've been at. How are you feeling about this one? I love it. For us, and for me as well, I love it because people here are coming and they love performance. They are educated in performance.
Starting point is 00:11:22 They know what it is. Like above-average acumen. So you have pretty good discussions. At least it's real discussions. They understand concepts and things that you try to share. So it's a really good conference for sure. Yeah, really. And Neotys is a marvelous partner in many ways.
Starting point is 00:11:41 We thank you for your sponsorship of our work in the podcast world. And, of course, I still want to come to the Performance Kitchen someday and whip something up. I now want to come to your barbecues as well. Yeah, we're going to plan one. 2020. Should be kind of good. James Pulley, our
Starting point is 00:11:59 colleague in the back there, you know, you got anything you want to add to the conversation here? You're the disembodied voice of the guy behind the camera. Yes, the disembodied voice. So Henrik, have you watched Scott Moore's video of the performance tour, his first video? Yeah, so I started with episode one. I was impressed by the professional intro. The dude's got polish.
Starting point is 00:12:31 I believe he has stereotyped you as the SRE. No, no. He's not blonde. It's not me. It's a different likeness. I don't get it. I saw some stickers on the laptop. I saw that's anyone. And then I saw the beard.
Starting point is 00:12:48 That's anyone. Yeah. That could be. That could be anyone. And then you saw the top knot. Yeah, yeah, I saw that. I saw that. Awesome.
Starting point is 00:12:56 So, yeah, any other outside of the Citrix stuff? Anything new coming for Neotis this year? Neoload? Something? For those who are not familiar with our product, I mean, we used to have this GUI traditional client similar to LoadRunner or SelfPerformer and the others. We have the NeoWeb, which is on the new,
Starting point is 00:13:16 I would say it's similar to the transition that Dynatrace made between AppMon and the new Dynatrace. We are going that way. Transforming the client experience, how you interact with your assets. Yeah. Execution and the whole, yeah. Exactly. The web is going to be the central dashboard for everything.
Starting point is 00:13:35 Cool. And we don't want to have infrastructure that you manage anymore. We think that with the modern elasticity and containerization. Automated deployments. We could basically, it's going to be a dynamic infrastructure. Okay. Where you don't, the concept of booking your infra, I think it's... Yeah.
Starting point is 00:13:53 That was true a few years ago. For many years, you had a dedicated lab. Yeah, sure. And 90% of the time, it's out there doing nothing. Yeah. Yeah, and now, just a very simple set of API calls into Azure or EC2. Spin them up, spin them down, spin them up, spin them down. Now, at the moment, we already have an official integration with OpenShift,
Starting point is 00:14:12 so we're going to add others. Yes. Which means with NeoLoad Web, from the moment it's configured, OpenShift is configured, you don't have to determine your infrastructure anymore. You say, I want to run a load test. Yeah. And then NeoLoad Web reaches out to your OpenShift clusters,
Starting point is 00:14:26 spins up the controllers required, spins up the load generators required for the test, and then tears it down. So in some cases, I see in the cloud a lot of people look at the test lab as, oh, that's a huge expense. And in hardware, absolutely. Power, cooling, the physical hardware. But really even then it was the storage. So you have a test database.
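To make the no-infrastructure-to-book idea concrete, here is a rough sketch of what ephemeral load generators on a Kubernetes or OpenShift cluster could look like. The namespace, labels, and container image are assumptions for illustration; NeoLoad Web's actual provisioning is handled for you once the OpenShift integration is configured.

```python
# Rough sketch of ephemeral load infrastructure on Kubernetes/OpenShift.
# Image name, namespace, and labels are hypothetical placeholders.
from kubernetes import client, config

NAMESPACE = "loadtest"   # hypothetical namespace reserved for test runs

def generator_pod(name):
    """Build a bare-bones pod spec for one load generator."""
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"role": "load-generator"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(name="lg", image="example/load-generator:latest")],
        ),
    )

def run_ephemeral_test(num_generators=3):
    config.load_kube_config()            # or load_incluster_config() inside the cluster
    core = client.CoreV1Api()
    names = [f"lg-{i}" for i in range(num_generators)]
    for n in names:                      # spin up generators for this test only
        core.create_namespaced_pod(namespace=NAMESPACE, body=generator_pod(n))
    try:
        pass                             # ... start the controller, run the test ...
    finally:
        for n in names:                  # tear everything down when the test ends
            core.delete_namespaced_pod(name=n, namespace=NAMESPACE)
```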
Starting point is 00:14:46 Maybe it's a scrubbed version of production or built from scratch. And even in the cloud now, if I build my pre-prod or my load test environment, that data sitting there in a quiescent state is very costly. And the app servers are really per hour if they're not being cranked. They don't cost that much. But I personally, if you've got a couple billion rows that you need to test against for load testing, that's an expense. That's an interesting pain I'm dealing with. I know some other people are dealing with, well, I want to run a load test, but I need to create 100 billion rows
Starting point is 00:15:19 to be like production. In some cases, that's the only way to do it. In other cases, people are in microservices. It's like, all right, we can do small samples, maybe not extrapolate them thoroughly, right, or overextend our extrapolation. But are you finding that as well, as people learn about cost of load testing lab target environments in the cloud?
Starting point is 00:15:43 Yeah, I think more than that is that sometimes having the situation where I'm not able to run a test because the infrastructure is limited, it's quite a pain. So I think if everyone wants to do continuous testing or performance as a self-service, the person that clicks to launch the build or deployment wants to have, basically, feedback on,
Starting point is 00:16:07 can I promote my release or not? And being blocked by just an infrastructure problem is... Or even someone else running something at the same time that skews all the results. Yeah, yeah. So I think that's very important. I think the other step would be that you don't even have to specify the number of machines you want,
Starting point is 00:16:26 saying that the solution itself says, oh, I'm going to add an extra node because it seems that you're ramping up and need more power. Elastic load generation is really cool. Yeah. That's something I would like to see. So will there be some of that potentially in the future? Some today, some in the future? That would be something.
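A toy sketch of the scaling decision being described: watch the ramp-up target and add a generator before the ones you already have saturate. The per-generator capacity and safety margin are invented numbers for illustration, not anything NeoLoad ships today.

```python
# Toy scaling rule for elastic load generation. The capacity figure and safety
# margin are invented; a real implementation would use measured limits
# (CPU, throughput) of the load generators it manages.
GENERATOR_CAPACITY_VUS = 500   # assumed virtual users one generator can drive
SAFETY_MARGIN = 0.8            # scale out before generators are saturated

def generators_needed(target_virtual_users):
    """How many generators the current ramp-up target requires."""
    usable = int(GENERATOR_CAPACITY_VUS * SAFETY_MARGIN)
    return max(1, -(-target_virtual_users // usable))   # ceiling division

def maybe_scale_out(current_generators, target_virtual_users, add_generator):
    """Call add_generator() once for each extra node the ramp-up needs."""
    missing = generators_needed(target_virtual_users) - current_generators
    for _ in range(max(0, missing)):
        add_generator()

# Example: ramping from 300 to 1,200 virtual users with one generator running
# would trigger two extra generators (1,200 / 400 usable VUs each = 3 needed).
```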
Starting point is 00:16:46 There is a lot of things we're looking around. Because I think the performance testing industry has changed in a sense. But still, we have this old-fashioned, we still need to do scripting.
Starting point is 00:16:57 We still need to do heavy correlation, because otherwise the testing is not realistic. Why don't we have a way to remove that part? That would be...
Starting point is 00:17:07 Gee, I wonder if anyone spends time thinking about that, Henrik. We need to... What ingredients would we need in the kitchen to whip up a solution to that? That's tasty and good for you. Tasty and good for you. First, we need a secret sauce. Hmm. Okay.
Starting point is 00:17:25 A bit spicy. I think you will need some really low latency time series capture of events across the entire stack. So I think there is something. Just going to throw that out there. You know me. I have habitual thoughts I can't give up. Dreams of a better world. I think streaming out the entire RUM data
Starting point is 00:17:48 and having machine learning understanding what is really happening, and figuring out that this user session is in fact the same as the other one because they go through the same areas of the app, would be great. So instead of
Starting point is 00:18:04 having, out of one million users browsing during the day, you say, hey, if you want to test the load, you need two scenarios. And here's a workload model. Or our traditional thing was educated guesswork. I have an idea, a fantasy, of what production looks like, that it should be these top ten transactions with this workload, whatever. James does a bunch of stuff in some of his workshops, actually studying the exhaust from your app, sir, and understanding where you can do tuning. But you could also harvest that information, and I think NeoLoad
Starting point is 00:18:40 actually has some of that. You can bring in and learn from a production log. Hey, what were the top ten things? Yeah, so that was one of the prototypes I made. It was a while ago, yeah. Yeah, it was a while ago. It worked, I mean, but... It's not as cool as real RUM data. It's having that, but then having a step where I want to have the script out of that.
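For anyone who hasn't tried the production-log idea, here is a minimal sketch: count the most-visited paths and turn their share of traffic into scenario weights. The common access-log format and the path grouping are assumptions; this is not the NeoLoad prototype being mentioned.

```python
# Sketch: derive a simple workload model from a web access log by counting
# the most-requested paths. Assumes the common/combined log format where the
# request line is the first quoted field; illustrative only.
from collections import Counter

def workload_model(log_lines, top_n=10):
    paths = Counter()
    for line in log_lines:
        try:
            request = line.split('"')[1]                # e.g. 'GET /cart?id=42 HTTP/1.1'
            path = request.split()[1].split("?")[0]     # keep the path, drop the query
        except IndexError:
            continue                                    # skip malformed lines
        paths[path] += 1
    total = sum(paths.values()) or 1
    # top N paths and the share of traffic each should get in the load scenario
    return [(path, count / total) for path, count in paths.most_common(top_n)]

# Usage sketch:
# with open("access.log") as f:
#     for path, weight in workload_model(f):
#         print(f"{path}: {weight:.1%} of virtual users")
```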
Starting point is 00:19:01 At the moment, there are some technical limitations, I would say. Because due to some, I don't know, privacy reasons, we don't see the payload of the request anymore. We don't see all the GET parameters anymore. So if we want to be able to build a script out of that, basically there are some things missing there.
Starting point is 00:19:19 And that makes sense from a PII perspective. It does. That you would not see that, or credit card, some of the rules and regulations there. Yeah. But being able to see the path of traversal using the referrer tags and the top-level URL, at least we know what those common paths are. Yeah, that's part of the way. I know there's also things like if you look at profiling within a runtime engine,
Starting point is 00:19:44 like the CLR in.NET or even in Java's JVM, for compliance, they'll force you to run in sort of a secured mode where if you're not allowed to hook in, like if you look at even the one agent, you can run in sort of secure mode. And a lot of financial institutions, even health care, they don't have full access to everything you could run in. Now, in pre-production environments, if you're not in what we would say like a confidential data environment like production, you would turn on the full profiling capability within the tool. But that's something that maybe people don't know because we're all talking about new stuff that needle load and dynatrace together monitor I mean that's part of what you did in the hot day session was being able to to run those things and correlate the the impact of
Starting point is 00:20:30 the load. I mean, that's been around a while. Yeah, it's been a while, NeoLoad and Dynatrace running together. Yeah, I think the two main reasons for the integration are to see the traffic, understand that this is the traffic of this particular transaction of my load test. Yep. And once you see that, then you can utilize all those awesome diagnostic tools that are available in Dynatrace. Yeah. Because the frustration that I had in the past is, say,
Starting point is 00:20:59 all right, so I have this request, but is it mine? Yeah. Maybe I have to adjust the filters. Yeah. But then having that question on and on, is it mine? Is it my traffic? Now the clock starts ticking. Why is that taking time?
Starting point is 00:21:12 Why is that? Well, I have to think, and then you have to come up with a rationale as to why that is or isn't. Yeah, that's tough stuff. So, yeah, so in the Dynatrace stuff, obviously we put the header information like all the load testing tools, so no surprise. But we create the configuration automatically, so we go to Dynatrace and say, hey, do you have those rules? No, you don't, so I'm going to create them.
Starting point is 00:21:34 And then you say, hey, Dynatrace, the traffic that is marked as NeoLoad, don't name it with the URL anymore. Name it with the transaction name: a user name, a scenario name, a transaction name, and then the URL. Yeah.
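The header-based tagging mentioned here is the usual load-testing integration pattern: each request carries metadata that request-naming rules in Dynatrace can pick up. A minimal sketch with made-up test and step names follows; NeoLoad adds an equivalent header for you, and the exact set of keys your versions support is in the Dynatrace and NeoLoad documentation.

```python
# Sketch of tagging load-test requests so Dynatrace can name and filter them.
# The x-dynatrace-test header carries key/value pairs such as TSN (test step
# name) and LTN (load test name); the values below are made up, and NeoLoad
# normally sets this header for you automatically.
import uuid
import requests

LOAD_TEST_NAME = f"checkout_peak_{uuid.uuid4().hex[:8]}"   # hypothetical run id

def tagged_get(url, step_name, script_name="CheckoutScript", virtual_user=1):
    header_value = ";".join([
        f"VU={virtual_user}",     # virtual user id
        "SI=NeoLoad",             # source / tool identifier
        f"TSN={step_name}",       # test step name, used for request naming
        f"LSN={script_name}",     # load script name
        f"LTN={LOAD_TEST_NAME}",  # load test name, lets you filter one run
    ])
    return requests.get(url, headers={"x-dynatrace-test": header_value})

# Example: every request of the "AddToCart" step gets the same step name,
# so Dynatrace can group it under that transaction instead of the raw URL.
# tagged_get("https://shop.example.com/cart", step_name="AddToCart")
```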
Starting point is 00:21:49 And that comes back into NeoLoad. Now you can correlate much better on that. Now, for the monitoring perspective, the way we do it, it's like I don't know who's familiar with it as much as with Dynatrace, but there's a concept of tagging. Yeah, yeah. And not tagging like from a front-end HTML, inside the Dynatrace itself.
Starting point is 00:22:08 Yeah. Yeah. So the way it works is like when you, the integration is based on tags. So if I test, for example, an application called customer, and I use that customer tag, then I will only see the services that are marked as customer in Dynatrace. So if I want to collect metrics from there, then I will only see the services that are marked as customer in Danitrace. So if I want to collect metrics from there,
Starting point is 00:22:27 then I will be quite frustrated, because if I do a load test and I just see the front-end server, I say, okay, what happened in the back-end? So what NeoLoad is doing is taking advantage of Smartscape. So Smartscape, for those not aware, is like the entire topology of the application. It actually has a really awesome visual, the way they render it. Yeah. So I start with the service, say, hey, this is my service, customer, that I'm testing. Okay, give me the dependencies. Oh, there's a dependency. And okay, this service, give
Starting point is 00:22:55 me the dependencies, and then we're going down, down, down, down until the bottom. Yeah. And then we say, okay, now I know the flow of services, and now I'm going to look at the processes. Yeah. And then we go to the processes, we do the same thing. And then we go to the hosts, we do the same thing. Yeah. And once we have this entire picture, we say, okay, Dynatrace, now mark that with NeoLoad tags. Yeah, tag it all.
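Spelled out, the walk just described is a breadth-first traversal: start from the tagged service, follow its dependencies down through services, processes, and hosts, and tag everything found. A sketch of that traversal with the API call stubbed out, since the exact Dynatrace Smartscape endpoints and payloads depend on your API version.

```python
# Sketch of the Smartscape walk described above: breadth-first over the
# dependencies of the service under test, collecting every entity so it can
# be tagged for the load test. get_dependencies() is a stand-in for a real
# query against the Dynatrace Smartscape/entities API.
from collections import deque

def get_dependencies(entity_id):
    """Stand-in: return the ids of entities this one calls or runs on."""
    return []   # replace with a Smartscape API query (services, processes, hosts)

def collect_stack(root_service_id):
    """Walk services -> processes -> hosts starting from the tested service."""
    seen, queue = set(), deque([root_service_id])
    while queue:
        entity = queue.popleft()
        if entity in seen:
            continue                      # visit (and later tag) each entity once
        seen.add(entity)
        queue.extend(get_dependencies(entity))
    return seen                           # everything that should get the NeoLoad tag

# Usage sketch, with a hypothetical entity id and tagging helper:
# for entity in collect_stack("SERVICE-CUSTOMER-FRONTEND"):
#     apply_neoload_tag(entity)           # apply_neoload_tag is likewise a stub
```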
Starting point is 00:23:16 Tag it all. And then what happens is that you can collect all the monitoring of all the stack. So then you have everything. Yeah. That's what you get at the end of four days with Brian Wilson. Brian Wilson of Pure Performance. Hi, I'm Brian. Nice to meet you, Brian.
Starting point is 00:23:36 Nice to meet you, Henrik. This is Henrik, the performance chef. Yes, yes. He was just talking about putting NeoLoad tags everywhere in a SmartScape specific to a load test. Did I say that right? Yeah. Yeah, we saw that in the hot day. Yeah.
Starting point is 00:23:50 Which a lot of hard work went into. No, I'm just mentioning anybody who was there, but also acknowledging all the hard work he and Rob put into putting that session together. Yeah, cool. That was funny because just one hour, two hours before our hot day, we realized that because in Yellow there's a notion of big tokens. Yeah. And instead of putting it in a server, they printed out the token on a sheet. On a card.
Starting point is 00:24:16 On a card. So I would assume that the students typing this weird token would take forever. So they found a way to get them on, the machines. All right. Except one guy, one unfortunate person, jumped ahead a little bit. Yes. And when it went to go do the password,
Starting point is 00:24:33 on the card it said password, and then had the password and then the token with a space. So there was no, like, it didn't say token colon. Yeah. So this poor person typed the entire token out, and it didn't take it. He's like, I think I did something wrong with the password. And I see like dots all across the screen. And I'm like,
Starting point is 00:24:49 did you type in this part? He's like, yeah. I'm like, I'm sorry. That was not supposed to happen. That was a literal translation of what to do. Yeah. Awesome. So since you were involved with the hot days as well, is there anything particular that sticks out in terms of NeoLoad and Dynatrace working together that you're like, that is really cool?
Starting point is 00:25:07 Yeah, absolutely. You know, when I left my real-world job, I was doing performance testing, and I was using the old tool. The old man tool. Your old tool, our good old LoadRunner. And we were pretty, you know, at that point I was done with that tool. Yeah. They hadn't innovated. They hadn't done anything at all.
Starting point is 00:25:25 And it was like, this was 2009, 2010, we were looking for a new tool. And I left, and we didn't end up getting one because I went to Dynatrace. But seeing where Neotys is now. Yeah. And NeoLoad, all the innovation that you're baking into it, first of all, reinforces this idea, like, yeah, you can actually still innovate with a tool. How long has NeoLoad been around for? Like you said, 15 years, I think?
Starting point is 00:25:51 It's going to be the 15-year anniversary this year. Right, so 15 years and still innovating, still doing really cool things, which means you can still do it. But I also look at what's going on now in the capabilities and the ways you can do this continuous load testing. And although I never want to have to go back to doing that, if I did, I think it's a really exciting time to do that.
Starting point is 00:26:14 Because with all the automation pieces you could add and with the auto analysis, with the quality gates. Right. And the integrations of taking the data from like NeoLoad, populating to Dynatrace, back and forth. Just all the things that you can make the tools do with each other is awesome and exciting. And it takes you out of the, all right, I'm going to spend, as we were even talking in the classes, I got four, we told the project management team
Starting point is 00:26:36 we need at least a month and a half to check all of our old scripts, write the new scripts, run the tests, do the analysis, and by the time it gets to you, you have a week and a half. Which means you'll have... Yeah, exactly. And that's a week's worth of scripting, which is just frustrating. Yeah, yeah. And then like two days' worth of one test run. But the analysis, the analysis is always the worst too. And also the fact that it would fail. Like, let's say you're in a crunch and you're running a test, like,
Starting point is 00:27:01 oh, I'm gonna go to bed, run the test, I'll check it in the morning, you know, and then you go get ready for bed. And if I suddenly got an alert on my phone because the problem blew up and something was wrong with the test, all right, I'll stay up another 10 minutes, see if there's something I did wrong maybe. You know, you're going to get that immediate feedback
Starting point is 00:27:15 like developers are supposed to get with pushing their code. Extending that to the testing teams is just quite amazing. And I think it's wonderful work you've done integrating in that because, yes, you can do it all with the API, but we're not in the business of building the API integrations and all that. So we tell our customers, well, yeah, you can do this, and there's some work, and it'll be a great learning experience for you. But the reality is when our tester is going to have the time,
Starting point is 00:27:42 yes, they need to learn how to automate. Yes, they know, but in reality, it's going to take a long time. If you can get something out of the box, that's going to do this. Some people are never going to invest in that. Sometimes they might not even have the time to. Or it might be down the road. And, of course, then you can take a look at, okay, what were they doing under the hood? And I'm sure they would have the ability to modify that.
Starting point is 00:28:01 And if they see things they want to do differently, they can add some other pieces. But it's a starting block. I will give you my experience just as a DIY guy, because in my heartbreak in letting LoadRunner go up to the great load testing place in the sky, I've been doing DIY stuff between JMeter or just even command line scripting stuff, curl stuff, Python stuff, some of the new Locust stuff. I mean, it's all dabbling and then piecing it together. Looking back on my last N number of years of all the cool stuff I built to integrate with Dynatrace or whatever, I'm with you. I'm not going to start a new job without a Dynatrace NeoLoad combo.
Starting point is 00:28:48 If you guys are lucky, you'd get two full-time employees. It would be a Dynatrace NeoLoad combo. Give me your ideal customers, where should I just go to work? I think the ideal customer... My employer might be watching. That's not happening anytime soon, but yeah. But it is.
Starting point is 00:29:26 so everyone started DIYing and like creating things and out of this grew Kubernetes out of this grew OpenShift, out of this grew these other things and now they're there and now it's like you don't have to although you might want to get the street credit building your own thing don't waste your time on that. Same thing then goes
Starting point is 00:29:42 in with monitoring. If you think about the open monitoring initiatives versus the vendors, yeah, you can use an open monitoring initiative to try to do all this, but there are vendors with decades worth of collective experience, maybe even centuries worth of collective experience, you know,
Starting point is 00:29:55 that have been doing this forever and ever. And now when you add load tools into the package, instead of doing the DIY, you have all the support and the expertise of the Neotys team. And instead of, yeah, you can learn how to do the API and do the automation, but they're doing it. And they're supporting it so that you don't
Starting point is 00:30:12 have, you know, you can concentrate on teaching the developers how to think about performance. Or escalating high risk. Exactly. You can focus on the things that are going to impact the business. I agree. I think what you mentioned is even more true. I think from the moment you start to have this real cloud-native architecture,
Starting point is 00:30:30 there's a real, real need for performance. And performance is not only about response times. It's going to be the cost. I mean, I'm pretty sure that I will survive with smart elasticity. Again, I'm not sure, but I can survive a bit. But at the end of the day, end of the month, I would say, you receive a nice billing and you're happy to deliver really, really bad code. Or in the opposite, you are really happy to do some promo testing. And those prices seem, for competitive reasons at first, that they were going to go down.
Starting point is 00:31:11 And if you really ask somebody that's been doing cloud deployment and operations for a while, like Mark Kaplan at BARBRI. They've been full on cloud for two years and made the full transition there with Dynatrace. That price is going to go up from here. So, yeah, being more conscientious, I agree with you. Well, it was low in the beginning because they were trying to attract everybody to the cloud. They were competing, yeah. They're still competing, but now that everybody wants to go to the cloud, they're like, we don't have to compete with you.
Starting point is 00:31:32 Let's all make some more money and bring it up and find new ways to make money. I just thought about it. Maybe someday there will be a notion of performance in the green level. Say, how much power did I consume? You can track some of that already. I think Azure and Amazon both will tell you power consumption for your account because there's a lot
Starting point is 00:31:54 of accounting. There used to be tax credits in certain countries for a green credit if you could actually prove that you had done stuff. I wish they had APIs on an instance level. You can look at those things on your account level in most cases. But, like, to be able to... Or an operation level would be nice.
Starting point is 00:32:10 Right, but if you were to say, like, hey, I'm a developer, I'm going to check in my code, and I'm going to run it, and what I need to run my code costs X amount at a granular level. Like, my code on a... This function. Whatever it's running in is going to cost this much to run on. And now if I change that to instead of costing three cents per hour to two cents per hour, multiplying that, and even for the performance team, you can turn around and say,
Starting point is 00:32:34 how much money is this saving or impacting the company? How much power does an N plus one problem actually cost you? You're like, whoops. That was a lot of electrons. And if you pull that in via the API, you can now include that in your report. Like, imagine a performance report with here's how it's going to perform and here's how much it's going to cost. Or the cost of a limited
Starting point is 00:32:51 scalable system, meaning you're only ever able to get n number of value out of the given transactions because, oh, I'm going to throw hardware at the problem. It's easy to throw hardware at a problem in the cloud, really fast. You don't need to think about it. Make it very easy.
Starting point is 00:33:08 They didn't solve the pain of power consumption or resource consumption. They solved the problem of spending money. They make it easier to spend your money faster. That's great, but let's think about the planet. The cool thing is then, if you're not doing, if you are doing something like Fargate
Starting point is 00:33:24 or functions where you don't even know about the hardware, you won't even be thinking about the power. Yeah. And in a way that that almost might be even better because who will be thinking about the power will be AWS. Yeah. And they're going to be thinking about the return on what they're selling it for. Right. So they have two options. Number one is they can either increase the price and risk maybe losing you,
Starting point is 00:33:46 or they can figure out a way to run this cleaner so that they can keep the price the same and make more money. It makes me think about the CoreOS stuff, being a new version of a really low overhead kernel. Yeah. Forget the coolness of the fully ephemeral OS part, but
Starting point is 00:34:06 just the low size of it would be very cool. Awesome. All right. Any last thoughts as we close out? This is day four of Dynatrace. Yeah, it's tomorrow. I'll be heading to Salt Lake City, enjoy the powder champagne in Utah. But yeah, just for the
Starting point is 00:34:22 closing, I would say that for those who aren't aware, there's a PAC in Santorini happening in three weeks. So we won't broadcast anything, because the last time the quality was quite poor. So we will focus more on building content after the event. So you will hear more after the session. But if you're looking for videos, the traditional Virtual PAC will still be there. The Virtual PAC is later this year.
Starting point is 00:34:49 It's going to be in September. Which is always a good time. And that one, again, we're going to do another 24 by 7 around the sun. Again. I think I definitely need to ask for sponsorship from an energy drink or Starbucks.
Starting point is 00:35:06 I don't know, but something. Or just a coffee manufacturer. Yeah, all the way. You know? Like, what's some of the... What's the one with the mule that comes in? The famous Juan something? Juan Valdez.
Starting point is 00:35:19 You got Juan Valdez coffee. Yeah, I do. That'd be good. So, yeah, Henrik, thanks very much for joining us. People can follow you obviously at NeoLoad.
Starting point is 00:35:28 What's your actual Twitter handle? It's at H Rexed. Yep. And otherwise
Starting point is 00:35:35 in LinkedIn you can find me as well as Rexed H Rexed. Yep. H Rexed.
Starting point is 00:35:39 So this is both a way to follow me or you will see a weird Lego guy so don't be scared. It's not a commercial for toys, but it's me. Yeah, that is you. And that'll be cool.
Starting point is 00:35:51 So, yeah, thanks very much for joining us, and thanks, everyone, for tuning in. Thanks. Ladies and gentlemen, Elvis has left the building. He's right over there, actually. Thank you.
