PurePerformance - Dynatrace Perform 2018 Henrik Rexed Neotys

Episode Date: January 30, 2018

Henrik discusses the new Neotys integration with Dynatrace...

Transcript
Starting point is 00:00:00 We're live. Hello, everybody. Welcome back to Perform 2018. Dynatrace Perform 2018, that is. I am Brian Wilson with, I guess maybe we say PerfBytes, Pure Performance, and James Pulley of PerfBytes. Hello, James. Hello. Good to see you again. It's been a while, huh? Yeah. Yeah, it sure has. And we've got this crazy person sitting next to us right now. Mr. Henrik Rexed. Yes, Mr. Henrik Rexed of Neotys. How are you doing? Fine, thanks. Good to see you. Yeah, me too. It's always a pleasure to be here at Perform. Yes, excellent. James is looking at me, it's awesome. You've come here from a far distance, right? Far distance, it's a couple of hours of flight, I guess. No, I mean the elevators are pretty far.
Starting point is 00:00:57 No, no, from the same place as the Codegs. From France. You've got some new news about Neotys and Dynatrace to share with us? Yeah, I think last year, the last time we saw each other, the latest version of Dynatrace came out with the AI, so I think that was an awesome feature. So seeing the AI, let's take advantage of the AI for load testing. Everyone wants to have that for sure, to save time during analysis. So this is what we did by building this fresh new integration with the new Dynatrace platform. So when you combine NeoLoad and Dynatrace, you simply get all the data in one place, which is Dynatrace.
Starting point is 00:01:42 I don't know if you've been... I remember when I was doing load testing with AppMon, which is a really awesome tool, but there were a couple of metrics missing, so I was switching tools, getting back in AppMon, switching to the other... Yeah, it's always cumbersome to switch from one tool to the other to get a comprehensive view of the performance of the
Starting point is 00:01:59 system. And correct me if I'm wrong, what you've done is you've taken all of those discrete metrics related to application performance that you're collecting in NeoLoad, and you're pushing them into Dynatrace so you get these extra data points that you can
Starting point is 00:02:15 use for concurrency. Yeah, exactly. The idea is that Dynatrace doesn't know the user load, Dynatrace doesn't know whether a transaction passed or failed, or the failure rate during the test itself. So that data, we're pushing directly with the help of the API, the API that's been built on top of Dynatrace. So all that missing data that wasn't there is now available. And then we're also putting some events on the services of the application. And I think from this moment, you get custom events,
Starting point is 00:02:46 then you can do a very easy cross-comparison between one test and another. So it helps you to track and detect if there's a regression from one test to another. You know, I'm curious to see. I know part of what we're going to be talking about might be tomorrow. I know we have upgrades to our AI engine. They're gonna be announced, the details of that, and they don't really show us that stuff beforehand.
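To make that concrete, here is a minimal sketch of what pushing load-test data into Dynatrace can look like. This is not the actual NeoLoad integration, just an illustration using the generic Dynatrace metrics-ingest and events APIs; the environment URL, token, metric keys, tag, and test name are all made-up examples.

```python
import requests

# Hypothetical environment and token -- substitute your own values.
DT_BASE = "https://abc12345.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"   # needs the metrics.ingest and events.ingest scopes
HEADERS = {"Authorization": f"Api-Token {DT_TOKEN}"}

def push_load_test_metrics(test_name, user_load, failure_rate):
    """Push the counters Dynatrace cannot see on its own: virtual users and failure rate."""
    # The metrics-ingest endpoint takes a simple line protocol: key,dimensions value
    lines = "\n".join([
        f"loadtest.user_load,test={test_name} {user_load}",
        f"loadtest.failure_rate,test={test_name} {failure_rate}",
    ])
    r = requests.post(
        f"{DT_BASE}/api/v2/metrics/ingest",
        headers={**HEADERS, "Content-Type": "text/plain; charset=utf-8"},
        data=lines,
        timeout=10,
    )
    r.raise_for_status()

def annotate_services(test_name, entity_selector):
    """Drop a custom info event on the tested services so runs can be compared later."""
    payload = {
        "eventType": "CUSTOM_INFO",
        "title": f"Load test: {test_name}",
        "entitySelector": entity_selector,   # e.g. services carrying a 'loadtest-target' tag
        "properties": {"test.name": test_name, "tool": "NeoLoad"},
    }
    r = requests.post(f"{DT_BASE}/api/v2/events/ingest",
                      headers=HEADERS, json=payload, timeout=10)
    r.raise_for_status()

if __name__ == "__main__":
    annotate_services("checkout-peak-test", "type(SERVICE),tag(loadtest-target)")
    push_load_test_metrics("checkout-peak-test", user_load=250, failure_rate=1.7)
```

With an event like that stamped on the tested services, two runs of the same test can be lined up against each other directly in Dynatrace, which is the kind of cross-comparison Henrik describes.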
Starting point is 00:03:11 But I'd be curious to see, I know part of the idea is that it's gonna allow you to feed external data into the AI, so I'd be curious to see, in round two of this maybe, if it would be possible, let's say you have a dedicated performance instance, to feed that data into the AI and have the AI help you analyze the performance testing.
Starting point is 00:03:34 Obviously, the performance data is usually analyzed a little bit differently than production data, because you're doing controlled tests and you're comparing test over test. So it'll be really interesting to see what we can do with it. See, one of the coolest side effects of this, I think, is not only the AI that comes into play, where we can analyze patterns and things of that nature, but since Dynatrace can live in the cloud, there's no reason why, as a test is ongoing, I can't have two or three different analysts around the world, all with different eyes on what's going on in the test, providing live analysis of the test, remote,
Starting point is 00:04:18 and then when the test is complete, providing complementary, in some cases overlapping views of what's going on in performance. It just makes it that much easier to approach the data from a remote perspective, do the analysis, and provide a comprehensive report back to the owners of the system. Let's face it, a lot of tools today are locked down in the sense of multi-user access to test results while the test is ongoing, and even at the end, can you have multiple people actually opening the same test results? Existing tools are cumbersome, you can't do it, but having it all in a service architecture like Dynatrace, being able to access it, that's fantastic. Another thing that you would probably like is the fact that we're
Starting point is 00:04:58 linking our data to our Neolink web dashboard. So like you said, if you want to have cross-cooperation, different teams doing cross-analysis, you can jump back to Nilo and say, what about the settings, what about the statistics that we see in Nilo, and you can just one click jump back to the dashboard. Yeah, and that's fantastic. When you were talking about that, you put me back in my flashbacks and nightmares. So years ago, I used to be a performance tester, and I used a different tool, one of the bigger tools that I hope is no longer a competition. It's probably still kicking around. We won't mention names, but Mark used to work on it.
Starting point is 00:05:44 Man, I tell you, the worst part... Oh, well, yeah. So to me, the most painful part was always after the test run, gathering up all the data into not only a presentable form, but getting it to where other people could look at it without having to publish it somewhere.
Starting point is 00:06:05 And as you were talking about this idea of just people logging into a website, you know, with a web-based tool, looking at the data. I hate to use this term, but promiscuous access to performance data. That's a good metaphor, but man, I do not miss that. I don't know. Do not miss that. So, you know, to be a performance tester today, to have integrations like this built in, to have, you know, just also the fact that, you mentioned this the other day when we were talking, the fact that people aren't being like, this is our data, it stays in our tool, we're not going to put it
Starting point is 00:06:37 in the other tools, we want to put it in our tool. It's like, let's make this easy for people to use. We have data, you have data, it's fine, a great place for everybody to look at the data and consume it. And that's fine, right? There's no longer that ownership and that jealousy of most different tools. Even if they're not competing tools, it's just this, you know, the whole DevOps community kind of goes into the whole tool piece too. Yeah, that's getting into like data points and things of that nature, publicly accessible data within the enterprise, for read access that anybody can leverage. But yeah, one other thing that I had in mind is from the moment
Starting point is 00:07:18 we've got centralized data for all the application: response times, testing assets and so on. And imagine, instead of having one Jenkins plugin for NeoLoad and one Jenkins plugin for Dynatrace, and people jumping back and forth to see, okay, how is my code behaving, how is my server behaving, and then going back to the NeoLoad plugin and saying, how was my test going on there, then you can imagine one single plugin for everything. Where you can just extract the data from Dynatrace, because we are sending our
Starting point is 00:07:51 data in Dantrix. So with Dantrix you can imagine that that plug-in will give you a 360 degrees vision of what is the level of performance. Of course the miracle would be that the AI responds back to Jenkins with a gate and says, do not deploy further. You shall go no further. Well, we got this back. That's not the only purpose of the game. It's the other purpose of the game.
Starting point is 00:08:13 It's the other purpose of the game. But the other purpose, too, is, again, I'm not saying, oh, it has to be fed to the dinosaurs. Whatever you do, not only do you have your performance results, I'll say you see some strange behaviors and strange mental readings in there. If you are a manager and your production system is also hooked up to the same tenant, you'd say, well, let me see the success of the production right there. Switch it over to the production tenant, or the production set of the data, and say, okay, this is an existing pattern, we've seen production already, we know, yeah, we don't necessarily have this, but it exists, it's pre-existing production. As a core, this is not happening in production,
Starting point is 00:08:51 or you may be fine, this isn't happening in production, but look at our traffic model, we've seen this kind of traffic model in production, so then, of course, maybe your test is wrong. But all in that same view, I mean, it's cool stuff. Yeah, I think so. And I think also one part of the announcement that has been done today is about the thing that I really like,
Starting point is 00:09:14 is the replay of the user sessions. So this is, when I saw that, I said, oh my God, I'm going to use Selenium, combine it with load testing, and then for those users, if something goes bad, they can go back to Dynatrace and see the replay. So this is just awesome. Ah, yes, so using replay to look at your browser sessions during tests.
Starting point is 00:09:36 It's very interesting. Very interesting idea. And I think another cool thing that you might be able to do, and I would like to see, because again, none of us know what to do with the other people. I would like to see if there are data points under that hood, because I guess you have kind of have it in your head, but if you're looking at modeling load, where you can actually see how users interact, not just see the actions they take and the training they take, you can actually see how they interact with it,
Starting point is 00:10:07 which might, may or may not help you model a load test, but it'd be interesting to see how it might contribute. Where it would help you model a load test is if you have loading which is progressive, below the fold, that is, I only load what is visible, and as I page down, then I'm loading additional assets just below the level of what's visible. So it's transparent to the user,
Starting point is 00:10:31 but I'm not having to load 10,000 things on a page right up front. I'm just loading 50, 50, 50, 50, small discrete amounts. So I can absolutely see where having that analysis of user behavior in the system would be completely valuable. And something else I had in mind is that when I look at the new Dynatrace platform, there's a lot of things potentially that we can do. I think we talked about it at the PAC, you mentioned an AI system that helps you to tune your servers. Yes. And I said, right, why not?
Starting point is 00:11:10 Let's try to make a script with a couple of different settings, run the tests. Mark and I have always been advocates for third-party plugins for AI systems. Why not be able to leverage what you've seen in production, what I've seen, as a set of rules that can further educate
Starting point is 00:11:30 an AI agent, this is a pattern to look for this is a symptom, this is the resolution and then have it build its own inference rule set to go through and analyze based upon the data that it has I think that would be extremely powerful we haven't really seen its own inference rule set to go through and analyze based upon the data that it has.
Starting point is 00:11:46 I think that would be extremely powerful. We haven't really seen an analytical engine which allows us to introduce user-defined rule sets in this case. I'm hoping that it gets there. I know we're opening up the AI for external inputs, so I'm not exactly sure that's going to do it, but I think we will find out a little bit more about that tomorrow, probably. That would be exciting. But the other cool thing, too, is around setting up different tests and different configurations. Just the other day, we announced auto-remediation via external tools.
Starting point is 00:12:18 So I'm brainstorming in my head, let's say you want to test three or four different configurations for your system. I wonder if you'd be able to run a test and all via automation, leverage that by saying, with auto-remediation, be like, hey, if you notice the CPU hitting 100%, maybe add it in your server or something. I wonder if you could hack it so that if the test runs, when that test completes, auto-remediate to push out a new configuration, run the test again. I don't know if it's possible, but just, again, if you have these APIs, I think any of this stuff probably is possible. Imagine that there's a fresh new architecture, and I need to queue my server.
Starting point is 00:13:01 So I run this on, say, Thursday evening at 8 p.m., I go home, and then Friday morning I get the best settings that have been determined, and the settings have been pushed through a Puppet script or, I don't know, any type of configuration script into my Git repository, and then we have all the configuration.
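Purely as a sketch of that overnight tuning loop, and not anything NeoLoad or Dynatrace ships today: try a handful of candidate configurations, run the same scripted test against each, and keep the best one. The apply_config.sh and run_test.sh scripts and the results files are hypothetical placeholders for whatever configuration management and test launcher are actually in use.

```python
import subprocess

# Hypothetical candidate configurations for the system under test.
CONFIGS = [
    {"name": "baseline",     "heap": "2g", "threads": 200},
    {"name": "big-heap",     "heap": "4g", "threads": 200},
    {"name": "more-threads", "heap": "2g", "threads": 400},
]

def apply_config(cfg):
    """Placeholder: push the candidate settings (for example via a Puppet or Ansible run)."""
    subprocess.run(["./apply_config.sh", cfg["name"]], check=True)   # hypothetical script

def run_load_test(cfg):
    """Placeholder: launch the scripted load test and return its score (lower is better)."""
    subprocess.run(["./run_test.sh", cfg["name"]], check=True)       # hypothetical script
    with open(f"results/{cfg['name']}.txt") as f:                    # e.g. p95 response time in ms
        return float(f.read().strip())

if __name__ == "__main__":
    scored = []
    for cfg in CONFIGS:
        apply_config(cfg)
        scored.append((run_load_test(cfg), cfg))
    best_score, best_cfg = min(scored, key=lambda pair: pair[0])
    print(f"Best settings: {best_cfg['name']} (p95 {best_score:.0f} ms)")
    # The winning configuration would then be committed back to Git for the next build.
```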
Starting point is 00:13:29 and as I'm running my test, the analytical engine will notice I keep requesting this font. This is a fixed asset. It should be cached at least within the context of the user session. So the resolution of that is I build a rule which changes the mind type caching for font, so it's at least as long as user session, so the resolution of that is I build a rule which changes the MIME type caching for font so it's at least as long as user session, hopefully as long as the build delivery schedule, one week, two weeks, three weeks, so I only have to download that asset once if I'm a new customer to the site.
Starting point is 00:14:00 And boom, that goes directly back into Git, that gets pushed into the next build, and the server gets provisioned, the mind type gets configured and changed. Next time you run the test, it only downloads that asset once during the entire user session. Boom, load reduces, cost reduces to the client, and improvements are resized. Yeah, I think we're just scratching the surface. We're not there yet. That's me, Mr. Finer. But it's approaching, right? There's steps that I mean, we're getting step by step closer and closer to those things.
Starting point is 00:14:36 And I think that's the exciting part. As companies like our two work on it, it seems a great way for us to figure out what we can do. More things start presenting themselves as possibilities. And then the architecture is there. It's a matter of figuring out how to put the layer of control in place. So I think it's just a matter of who knows how long. But potentially, it's there. I can't see a technical reason why it can't be done. So think about it: the time from one generational innovation to the next is decreasing as we move through the IT generations.
Starting point is 00:15:14 We're getting almost like a Moore's Law in software, in this case a doubling of capability once every 18 months. So I see no reason why, after two to three generations of AI, we're not sophisticated enough that the AI can make these decisions for us. Yeah, speed up the testing, speed up the releases, and increase the quality. And then Skynet comes to life. And we all die. Well, hopefully not.
Starting point is 00:15:43 We all have to work anyway. We'll figure out something. And we're all done. Well, hopefully not. You don't have to work anymore. We'll figure out something. Alright, anything else? You're leaving tonight, right? So how have you been here? You were here yesterday. Did you come on Sunday? I arrived Sunday night, yes. So how was your time here today?
Starting point is 00:16:04 Awesome. I mean, once you're at Perform, it's like being in a casino. You don't feel the time anymore. You lose track of the days. So, no, it's always a pleasure. And I think it's too bad that I'm not able to make it tomorrow. There's always things to do for business. But next year, for sure. And we will also be at the Perform in Barcelona.
Starting point is 00:16:29 Oh, I've got to get over there. I think it's going to be, I mean, Europe or Catalonia? I don't know. Yeah, we'll see. And it actually puts in the new replays from the opening. So I think that's why they're basing it there. I'm not going over to France. Oh, you have to. There's also a French one, you'll see.
Starting point is 00:16:54 The French California. Excellent. James, anything else? Performed in Catalonia? Mmm, Catalonia. Yeah, also maybe one small announcements for those who followed the new TSP so we prepare because you have the virtual PC coming on so we probably are you're interested to be part of the PC don't forget to send me an email we'll be very pleased pleased to get to be a part of it,
Starting point is 00:17:25 present any topic. We would love to have you on board. And that's the PAC-MEM. And this is generic masculine in this case. Does not exclude women. No, of course. Or we have to convert to PAC-HUMAN.
Starting point is 00:17:42 We'll figure it out. But what if it's an orangutan? If you can find an orangutan who has a fantastic view on application performance and can present well, I think we should open it up to different species.
Starting point is 00:17:57 Okay. Even unicorns. Even unicorns. Alright. Thank you very much. Alright, thank you. Thanks. Thank you very much.
