PurePerformance - Dynatrace PERFORM 2019 Neotys Updates with Henrik Rexed
Episode Date: January 29, 2019. We chat with Henrik about how awesome Francis is, the new integrations and capabilities for Neotys and Dynatrace, and the upcoming Neotys Performance Advisory Council in Chamonix, France...
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance!
Oh, hey everybody, we're back here at Perform 2019 with Dynatrace in Las Vegas.
Woo-hoo!
With Mark and James of PerfBytes.
And, of course, the one and only Henrik Rexed of Neotys.
Yeah!
Yay!
Hang on.
I have effects.
There's an applause and cheers.
There we go.
Yeah.
Yeah.
So, yeah.
He walks on the stage like a rock star.
Yeah.
So, good to see you guys.
It's been one year now since we saw each other.
One year since you've been here at Perform.
Almost exactly to the day.
Wow.
Exactly, which is really exciting.
A lot of things have happened since then.
Yeah.
Actually, it's really nice to see Neotys still having a flourishing partnership with Dynatrace.
So one of the things we'd like to do is get kind of an update on what's new and what you've been doing.
You also did a hot day as well.
Yeah, so Andy asked me for the opportunity to be part of the hot day, and I think thanks, Andy, for that.
That's just a super hot opportunity for me.
Yeah, cool.
I think we built pretty cool stuff.
What did you guys talk about?
Everything is about
performance.
Interestingly enough, it's about performance.
Some Jenkins integration stuff?
Yeah, everything. All the pipeline,
how to manage load and performance
testing in a really smart way
in a pipeline using Dynatrace.
I mean, Dynatrace has a lot of great features on that.
And I think a lot of load testing tools
provide a lot of great features as well.
And I would say NeoLoad
provides a lot of great features on that as well.
Of course.
So two great tastes that taste great together.
NeoLoad and Dynatrace.
Isn't that Reese's Pieces?
You got your peanut butter in my chocolate.
You got your chocolate in my peanut butter.
That's Reese's Peanut Butter.
It's like a cocktail.
At the end of the day, you need refreshments.
Well, we are in the Cosmopolitan Hotel.
Oh, you could have a Cosmopolitan drink.
Yeah, we can call it.
We can invent a cocktail called
the Dynatrace NeoLoad
refreshment drink.
Yes.
Who is this?
Is Jason?
Oh, I know what we could do.
We could try to mix Chambord with scotch and schnapps.
The three of those together, mixed together.
I'm sure it's going to taste good.
We need a French alcohol in that.
Oh, I thought Chambord was, no?
Oh, yeah, it is.
Yeah, yeah.
So Chambord.
Or cognac.
You do cognac?
Absinthe?
Armagnac?
No, that's like wormwood from Greece.
I thought Absinthe was French.
Well, I had a French friend who loved Absinthe.
Anyway.
All right.
Either way.
Anyway.
So what did you show in the hot day you guys were teaching?
And you had the NeoLoad Dynatrace integration working in the pipeline, right?
Yeah.
So Andy prepared.
Every student has this OpenShift cluster
where we automatically deployed
a Dynatrace tenant,
Jenkins pipeline with JMeter,
so people can at least see
how they can run load tests with JMeter.
And then at the end say,
okay, you've been struggling a lot now,
let's see the easy peasy stuff.
And so we did the other pipeline with NeoLoad. Look at all this stuff that's just taken care of for you. Yeah. That was
like he lobbed a ball up in the air and all you had to do was swing the bat. It's cool.
It's an upsell. Yeah. I like it. Basically, you can give that pipeline to your kids and
you can validate the performance. Hours of fun for the kids. But are there some native
things in NeoLoad you can tell us about that already work in pipelines or with technologies like OpenShift?
Sure. First of all, we
try to
migrate all our components
into containers.
So I think these days it doesn't
make sense to have one
load testing solution sitting
on a server forever when you use it
once a day or twice a day for a build.
Or for some people, once a month.
Yeah.
So I think it's smarter to spin up a container, do your stuff, and
then once you don't need it, you just destroy it.
Right.
So I think that makes sense.
And then, of course, we had the Jenkins integrations there.
That makes sense, where you can do some trending graphs as well.
Yeah.
And it does more, like with performance, it pulls back all the results and does some parsing
and puts that back in the pipeline.
And I think the thing that we've just added,
I don't know if all the community is aware of it,
we introduced another way to design scenarios in our product.
So we have this, traditionally we come from the UI level,
so you have to record, do the design from the UI,
and if I'm a developer and you give me a UI tool in my hands, I say,
it's going to take forever to build a scenario.
So that's why we came up with the
two different ways to build scenarios.
First, we have a
JUnit library, so it's
available in Maven Central, so you add
a dependency, and then from there, you can
basically build scenarios,
variables, your tests.
And the thing is, one of the cool things is once the build starts, we're not going to run the test directly.
Because obviously, when you build a solution, you need to deploy that.
And before you do any load test, there are a couple of preliminary checks to do.
So during the build of your solution, what we do, in fact, behind
the scenes, we build the project from the developer's code, and then later on there's
someone else in the organization who says, hey, I want to use that test there. And I have
no coding skills. Hey, no problem. Just double-click on the project, you get the project
in the UI, and you can do the changes.
Right.
So that's the first thing.
That's nice.
Yeah. And then the other thing we did is, I mean, if you have a development environment which is not Java-based, I'm pretty sure that
developers don't want to invest in learning Java skills. So that's why we have the other
format, which is, we call it the test as code. So it's a YAML description file. So either you
could build one big YAML file with everything, the variables, the scenarios,
populations, blah, blah, blah.
Or you can build four different
YAML files, one for
your scenarios, one for
the test that you want to run,
the load policy you want to apply, and so on.
So it brings a lot of flexibility, and I
think the great thing with the code approach is that
it's hooked to your source control,
you want to change the load,
it's basically one
update, and that's it.
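To make the test-as-code idea concrete, here is a rough sketch of what such a YAML description could look like. The key names, URL, and load values are illustrative only and are not guaranteed to match the actual NeoLoad as-code schema.

```yaml
# Hypothetical sketch of a test-as-code description file.
# Key names are illustrative, not the exact NeoLoad as-code schema.
name: checkout-load-test
user_paths:
  - name: browse_and_checkout
    actions:
      steps:
        - request:
            url: https://shop.example.com/carts   # placeholder URL
populations:
  - name: shoppers
    user_paths:
      - name: browse_and_checkout
        distribution: 100%
scenarios:
  - name: nightly-build-check
    populations:
      - name: shoppers
        constant_load:
          users: 50
          duration: 5m
```

Because a file like this sits in source control next to the application code, changing the load policy for the next run is a one-line change in a pull request.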
This is the kind of stuff Andy's always talking about, like with us,
I mean, not that this is his idea, obviously,
but we're always talking about doing everything as code,
monitoring as code, testing as code,
now you have your scenarios as code, which is just really
awesome, because now
anybody, well, not anybody, but anybody who has an idea
of what we want to test can code up a scenario,
check it in with the build.
So with Andy, we have a dream.
And one of the dreams is, why don't we put a standard?
I mean, you at Dynatrace, you have this notion
of a performance signature file.
You define the monspec, what you want to monitor,
what would be the thresholds and stuff.
And at the moment, we are doing something in parallel
called anomaly detection files.
It's a new thing that we include with the new version of the integration.
So basically, we are teaching the AI the SLIs for the tests.
So the AI basically will detect problems there.
And at the moment, it's two different files.
And now I think we want to merge everything.
So there will be one standard file.
And it will be quite cool that in that file, you've got the monspec, the SLA definitions, but also the scenario itself.
So basically one file that rules them all.
Yeah, I was just going to say that.
Of course.
It's good.
It's a Lord of the Rings reference.
It is.
That's fantastic.
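Purely as a hypothetical illustration of the "one file" idea discussed above, and not an actual Dynatrace or Neotys format, a merged specification might bundle the monspec, the SLA thresholds, and the scenario like this:

```yaml
# Hypothetical merged specification file; all keys are illustrative.
monspec:
  services:
    - name: carts-service
      metrics: [response_time, failure_rate, cpu_usage]
slas:
  - metric: response_time
    threshold_ms: 100        # fail the test if this is exceeded
  - metric: failure_rate
    threshold_percent: 1
scenario:
  name: pipeline-smoke-load
  load:
    users: 50
    duration: 5m
```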
Before we go into the next thing, there's also some, are you allowed to talk about some of the stuff?
You recently went to Linz and worked with the lab and Andy to try to start a new project.
Yep.
Are you allowed to talk about that yet or give a hint at what you're?
I can expose the main objectives of this.
So I think one of the things is making the standard
happen. The other thing is, I don't know
if people have been doing a lot of
continuous testing. You do
a build, you run a test,
you start looking at trend graphs
and then you say, oh, there's a regression,
and you look at it and you spend time analyzing and then
at the end you say, oh, it's a deployment problem.
Oh, shush. So basically
you were running a test and you just lost 20 minutes or 10 minutes.
It's a waste of time, yeah.
So at the moment, I built something called the sanity check in NeoLoad.
Yeah.
And this basically, before you do any load test after deployment, you run that sanity check.
NeoLoad scans the entire architecture with the services, counts the number of processes or containers.
Right.
And build like a picture of it.
So it could be the baseline file. Yep.
And so the next time you've built, so that file needs to be pushed in your repo. Everything should be there, yeah.
And then the next time you've built,
the sanity check runs again, and it's going to
say, okay, I'm going to take the picture,
and I'm going to compare it to the baseline. And it says, oh,
yesterday, we had three containers.
Today, we only have one container.
That's weird.
So we're going to break the pipeline.
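A minimal Python sketch of the kind of topology sanity check described here, assuming the baseline picture is kept as versioned data in the repo; this illustrates the idea rather than NeoLoad's actual implementation:

```python
import sys

# In practice this baseline would be a file pushed to the repo after a known-good build.
BASELINE = {"carts": {"containers": 3}, "orders": {"containers": 2}}

def check_topology(current, baseline):
    """Compare the freshly scanned topology against the baseline picture."""
    problems = []
    for service, expected in baseline.items():
        found = current.get(service, {}).get("containers", 0)
        if found != expected["containers"]:
            problems.append(
                f"{service}: expected {expected['containers']} containers, found {found}"
            )
    return problems

if __name__ == "__main__":
    # 'current' would come from scanning the environment (e.g. via a monitoring API).
    current = {"carts": {"containers": 1}, "orders": {"containers": 2}}
    issues = check_topology(current, BASELINE)
    if issues:
        print("Sanity check failed:")
        for issue in issues:
            print(" -", issue)
        sys.exit(1)  # break the pipeline
```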
So you could also get into the CMDB part to say what is actually deployed in my version.
It's like every time I run,
you should be going either on the new version
or the next version.
You could actually go back and say,
I should have the new version,
but somebody didn't push something
or it didn't
make it.
That's quite an excellent thing.
I have to add it.
Yeah.
But it's not just what container is there, but what version is running in that container.
And then is this worth it?
I already have a test result for that version.
Do I need to run it again?
Or what has changed, what should have changed, and did it change?
That's free.
No charge for that idea.
And the other thing we do is, imagine you have
three containers, and as expected
you have three containers, but
in the previous build, let's say
in total you were consuming 10%
of CPU, and maybe 10
megs of memory.
But the next build you run, suddenly
you're consuming 70%
of CPU after deployment.
And only one meg of memory.
And one gig of memory or one meg, whatever.
Then it's weird.
So basically we're going to fail the build as well.
Yep.
So I think it's basic stuff just to check that at least the environment seems normal.
Yeah.
It avoids running a test straight away without that check.
I think it's a way to avoid losing time for nothing.
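In the same spirit, a small sketch of the resource-consumption comparison being described, with an illustrative tolerance factor; again, this is not the product's actual logic:

```python
# Hypothetical resource-level sanity check: compare post-deployment CPU and memory
# against the previous build and fail if the change is suspiciously large.
PREVIOUS_BUILD = {"cpu_percent": 10.0, "memory_mb": 10.0}
TOLERANCE = 2.0  # illustrative: allow up to 2x the previous build's consumption

def resources_look_normal(current, previous, tolerance=TOLERANCE):
    for metric, old_value in previous.items():
        new_value = current.get(metric, 0.0)
        if old_value > 0 and new_value > old_value * tolerance:
            print(f"{metric}: {new_value} now vs {old_value} in the previous build")
            return False
    return True

current_build = {"cpu_percent": 70.0, "memory_mb": 1.0}
if not resources_look_normal(current_build, PREVIOUS_BUILD):
    raise SystemExit("Environment looks abnormal after deployment; failing the build.")
```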
As part of your sanity, are you also asking does it work for one?
Pushing one user through the system just to see if it's SLA compliant before you actually ramp up the load.
So that's a good point.
I will say the recommendation by using this is you deploy,
and sometimes some apps need a few users to wake up.
It's a little warm-up.
Yeah, like a warm-up.
So what we do is basically in the pipeline that we did yesterday in the hot day, we had that warm-up script.
So NeoLoad runs a few iterations with one user just to warm up.
And then I run this in HTTP.
Yeah, yeah.
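A tiny sketch of that warm-up idea: push a single user through a placeholder endpoint a few times and check it stays under an assumed single-user SLA before ramping up the real load.

```python
import time
import urllib.request

WARMUP_ITERATIONS = 5
SLA_SECONDS = 1.0                           # assumed single-user SLA
URL = "https://shop.example.com/carts"      # placeholder endpoint

def warm_up() -> bool:
    """Hit the endpoint with one 'user' a few times and report whether the SLA held."""
    slow = 0
    for i in range(WARMUP_ITERATIONS):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
            status = resp.status
        elapsed = time.monotonic() - start
        print(f"warm-up {i + 1}: HTTP {status} in {elapsed:.3f}s")
        if elapsed > SLA_SECONDS:
            slow += 1
    return slow == 0

if __name__ == "__main__":
    if not warm_up():
        raise SystemExit("Single-user SLA not met after warm-up; skipping the load test.")
```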
The warm-up stuff I've done, but I hadn't really done the comparative piece.
That's pretty cool.
That really is.
That's very cool.
It's, you know, when I finished load testing, that's when I came here.
In the old days.
Once upon a time, Brian was a load tester.
I used to be one of you poor souls.
I once was a performance tester.
Now I'm a performance engineer.
That's right.
Yeah, exactly.
Just because I changed my title.
Yeah, and I was a performance tester.
We didn't have this fancy performance engineer back when I was doing it.
Everything was manual back then.
Manual testing, manual everything.
And seeing how this is all evolving over
time is just really, really amazing, and I imagine it's really exciting for all of you, again, all you
poor souls over on that side, to get to take advantage of all this. Because
you burn out, right? How many times are you going to be
rewriting your tests, running these things? Now you have new things to play with.
Of course, it's going to bring the practice up higher,
but just even personally to be doing these kind of things,
it probably is just so much more exciting and fun to do again.
What else is new in the NeoLoad world?
The NeoLoad world, what we have in the integration,
one of the things that happens in Dynatrace with the previous integration,
I'm pretty sure if you're familiar with Dynatrace,
if you do load tests, you need to set request attributes.
So request attributes are a way to basically tag your traffic and be able to see it as load testing traffic
and not real user traffic.
So you had to set the rules manually.
And now what we're doing,
we're checking in Dynatrace if the rules exist.
And if not, we are pushing them.
Via the API?
Via the API.
So basically, we are pushing a standard name,
and you'll see later on what we're going to do afterward.
Yeah, cool.
We have a lot of plans about it.
So because we have those standard names,
we will always be able to
grab the URL of the service flow or the PurePath.
And then follow on with the tagging and everything else.
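A minimal sketch of the check-then-push flow described here, using the Dynatrace configuration API from Python. The endpoint path, payload fields, and header name are assumptions based on the public Dynatrace API and should be verified against the current documentation; the tenant URL, token, attribute name, and header are placeholders.

```python
import requests

DT_URL = "https://<your-environment>.live.dynatrace.com"  # placeholder tenant URL
DT_TOKEN = "dt0c01.XXXX"                                   # placeholder API token
HEADERS = {"Authorization": f"Api-Token {DT_TOKEN}"}

# Assumed endpoint of the Dynatrace configuration API for request attributes.
ENDPOINT = f"{DT_URL}/api/config/v1/service/requestAttributes"
ATTRIBUTE_NAME = "LoadTestName"  # illustrative standard name

def ensure_request_attribute_exists():
    resp = requests.get(ENDPOINT, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    existing = {item.get("name") for item in resp.json().get("values", [])}
    if ATTRIBUTE_NAME in existing:
        print(f"Request attribute '{ATTRIBUTE_NAME}' already exists.")
        return
    # Illustrative payload: capture a custom HTTP header sent by the load generator.
    payload = {
        "name": ATTRIBUTE_NAME,
        "enabled": True,
        "dataType": "STRING",
        "dataSources": [
            {
                "enabled": True,
                "source": "REQUEST_HEADER",
                "parameterName": "X-Dynatrace-Test",  # header the load tool would send
            }
        ],
    }
    requests.post(ENDPOINT, headers=HEADERS, json=payload, timeout=30).raise_for_status()
    print(f"Created request attribute '{ATTRIBUTE_NAME}'.")

if __name__ == "__main__":
    ensure_request_attribute_exists()
```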
The other cool thing we do is also,
by default,
Dynatrace groups the traffic based on the URL.
So if you hit, say, slash carts, you will see slash carts in Dynatrace.
And all the statistics will group that.
And when you do a load test, what happens is,
I'd say I'm doing a load test,
but there is a team in my same environment that does some manual testing or they're running some other scripts.
And with that grouping,
you're not sure if you have to filter
to make sure that you only analyze your stuff.
So what we're doing now,
we are setting what they call in Dynatrace
request naming rules.
Yeah.
It's like scoping.
Yeah, it's a way that now the traffic
that will be captured by NeoLoad will be named differently.
So you will see slash carts, but the slash carts made by NeoLoad will be named differently.
It will be name of the scenario, name of the transaction, and the URL.
Which means then if you go to the diagnostic tools, I want to see the top requests or I want to do...
I can basically, if I look at that naming rule, know that it's my NeoLoad traffic and it's not something else.
For load testing in production, you can separate that traffic as well.
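To make the naming convention concrete, here is a purely illustrative helper showing the kind of request name described (scenario, transaction, then URL) and how a prefix filter separates load-test traffic from everything else; it is not the integration's actual code, and the name format is an assumption.

```python
def load_test_request_name(scenario: str, transaction: str, url_path: str) -> str:
    """Build a request name that marks load-test traffic, e.g. 'NightlyLoad/AddToCart /carts'."""
    return f"{scenario}/{transaction} {url_path}"

def is_load_test_request(request_name: str, scenario: str) -> bool:
    """Filter requests in a diagnostics view down to one load-test scenario."""
    return request_name.startswith(f"{scenario}/")

requests_seen = [
    load_test_request_name("NightlyLoad", "AddToCart", "/carts"),
    "/carts",  # the same URL hit by manual testers keeps its default name
]
only_load_test = [r for r in requests_seen if is_load_test_request(r, "NightlyLoad")]
print(only_load_test)  # ['NightlyLoad/AddToCart /carts']
```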
Excellent.
That's really great.
That's very cool.
And that's another one of those creative uses of the APIs.
I forget who we were talking to.
Morgan, right?
Chris Morgan.
We were talking about OpenShift.
It reaches its maturity when people are using it in ways they hadn't planned for.
And tools have APIs.
And suddenly people are starting to take really creative advantage of them.
So it's really awesome to see you doing that stuff.
I love hearing this.
This is awesome.
Great.
Very cool.
What else you got?
Something else?
Just to point out, something that should not be overlooked,
because you can see all of your load test results inside the Dynatrace console,
this opens up the world of possibilities for remote analysis while tests are ongoing.
You don't have to wait for a test to complete.
You don't even have to have an analyst on site because that data is available on a worldwide basis.
You can take advantage of an expert in a remote location who cannot fly in for the test and maybe doesn't even know NeoLoad.
Yeah, I mean, in fact, the thing is, what is cool is if you imagine a pipeline with Jenkins with a build server, you have no UI.
So if you want to see the metrics, you have to wait until the end of the test to see what happens in the dashboard.
And now with NeoLoad Web, all the data that we are collecting, we're streaming it out to our platform, the dashboard.
So either it could be on-prem or on the SaaS.
And basically, if you connect Slack to NeoLoad Web,
then your dev team's got an alert saying,
hey, there's a load test running, by the way.
Yeah.
And the load test is finished.
So you can click on the link, go to the dashboard,
and once you're in the dashboard, you see all the history.
So you can see your live tests.
And while the live test is running, I say,
oh, I'm not going to wait until the end of the test.
I'm going to grab the metrics on build number one. And you select the history, pop, and you drag and drop it, and you do live comparison. You don't have to wait until the end of the test.
This is time-saving.
That's major time-saving.
It's a very, very powerful feature. Oft overlooked, and I think it's important to bring that up.
Yeah. Now, we have announcements today from Dynatrace around the free-for-life developer license. And are there ways for people to get together with NeoLoad or NeoLoad Web
and use the new developer license, free-for-life kind of stuff?
I mean, there's a trial for NeoLoad.
Is the NeoLoad community license aware of the developer edition?
I didn't know that.
This is the second idea that we're going to share with you.
No charge, of course.
Because we have the freemium license.
So basically, the developer has the developer license for 9.3,
and they can use the community edition for the build.
I mean, when you test the microservices, you don't need more than 50 users.
So it's pretty good.
So, yeah, it's a match made in heaven.
I haven't seen what the whole developer thing looks like, but we're just going based off the press release and what we're hearing here.
But yeah, it sounds like they'll be able to do something with these together.
And related to the developers, I think one cool thing is,
I mean, Dynatrace is super powerful with the AI once they've got the baseline.
So if you're in a production environment, Dynatrace understands well what is the normal situation,
what is not a normal situation, and opens tickets.
So, then, I mentioned before that we have a configuration file where you can set those thresholds.
Say, all right, so this is the rules I want to set in Dynatrace in the AI.
So, if the response time of that service is higher than 100 milliseconds, or if the CPU reaches that level or whatever, it's a performance issue
or it's a big error or whatever.
Yeah.
So, NeoLoad takes that file and then basically sets the rules in the AI.
Yeah.
So, while the test is running, Dynatrace opens problem tickets.
And then if you have the ServiceNow integrations or Jira integrations, you can open tickets.
It's scheduled, escalated, yeah.
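As a loose illustration of the threshold file being described, the rules could live in a JSON file next to the code and be read by the pipeline like this; the field names are invented for the example and are not the integration's real schema.

```python
import json

# Illustrative anomaly/SLA rules; in the setup described they would live in a
# JSON file versioned alongside the application code.
RULES_JSON = """
{
  "rules": [
    {"service": "carts-service", "metric": "response_time_ms", "threshold": 100,
     "severity": "PERFORMANCE"},
    {"service": "carts-service", "metric": "cpu_percent", "threshold": 80,
     "severity": "RESOURCE"}
  ]
}
"""

def summarize_rules(raw: str) -> None:
    """Parse the rules file and print what would be pushed to the monitoring AI."""
    for rule in json.loads(raw)["rules"]:
        print(f"{rule['service']}: alert ({rule['severity']}) if "
              f"{rule['metric']} > {rule['threshold']}")

if __name__ == "__main__":
    summarize_rules(RULES_JSON)
```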
Cool.
You were going to say something.
No.
Here's the next thing I have for you.
So along that same piece, you have the trajectory or the telemetry of an incoming release, so that
you have estimation and prediction of where your future thresholds will be adjusted upon the
push of that release.
And so you take the auto-baselining piece
that's in Dynatrace from an AI perspective already.
You can do more forward, say,
don't wait for the AI on this release.
We know it's going to take twice the memory.
So don't throw more...
We knew it was coming for the last two weeks
on this iteration or this sprint.
So let's actually take that and predictively or proactively bump those
thresholds and push it as part of the code.
Yeah.
I don't know if that's something.
It's there.
You could just pull it out.
The anomaly rules, it's a JSON file that you hook in your code.
So you can source control it.
You can change it.
It goes with your release.
So you're learning along the incoming
vector of the new
release coming in. Now there's some danger in that.
Of course. And then a developer may decide
well I think this should be the new normal.
Because I can't do it better.
But at least now we know who
touched it and who to vote off the island.
Yes. Right?
Exactly.
Awesome.
Speaking of islands.
Mountains, islands, you know.
Speaking of airplanes, I had to fly in an airplane to Vegas.
Now, speaking of airplanes, you will be flying back to France.
Next week is the Neotys PAC.
Yes, correct.
The Performance Advisory Council, the second meeting.
The second physical edition.
But this is the third edition because you had the 24-hour one, right?
Yeah, so we considered them like two different events.
Okay.
Because the physical one is more, we bring experts, we want to bring open discussions, we want to see how we can improve things together.
But I just always
have to bring up that
24-hour one because you were awake the whole time,
so it was amazing.
So for those who want,
the next virtual one will be in September.
In September.
So for those who want to hook with me
and stay with me 24 hours,
that will be cool.
And are you still taking papers?
There's a CFP for the virtual pack?
Yeah.
The virtual pack, we'll open the papers later on.
So at the moment, it's not open yet.
So if you're listening and you want to submit, get yourself ready.
So plenty of time to come up with an idea and present.
But the physical one is going to be, where is it?
It's in France now?
So, yeah, we thought that we did Scotland last year with an awesome location.
We said, ah, how could we do better?
And we thought that, yeah, January, it's snow, it's winter.
So why don't we go and do it in the Alps?
Let's go skiing in the Alps.
Yeah.
So it's going to be Chamonix.
In Chamonix, so for those who don't know Chamonix, if you're familiar with the Mont Blanc.
Yes.
Mont Blanc is the highest summit of the Alps.
Famous pens.
It is as well.
Yeah.
So it's going to happen there.
It's close to Chamonix.
So it's a challenge for the attendees because it's not the best location.
It's remote, yeah.
But that's sort of the lure of it.
It's like the castle.
This is unique experience.
Yeah.
And the location is awesome.
I mean, we have this very small chalet in the mountains.
So I think it's going to be really, really cool.
That sounds cool.
This sounds like something Spectre,
it would be like a meeting of Spectre from James Bond,
being this little small thing.
It's hard to get to, and then you find out the doors are closed,
and they don't open until you promise to improve your performance.
Wait, but now you're scaring people.
No, that would be fun.
Are you kidding me?
Would it be fun to be trapped?
No, it would be like a whole James Bond-y kind of thing.
I will not volunteer to go kill Mr. White.
You do your ski escape down the mountain.
Yeah, so we'll take the 12 experts to the top of the mountain.
Just throw them off.
And then we say, okay, now your mission is to go back to the Echelon.
Yeah, good luck.
Good luck.
It has nothing to do with an assembler.
I'm glad I'm not there.
I fall gracefully.
I do not ski very well.
Yeah, exactly.
So that'll be cool.
Now, there's 12 people, I think?
Yeah, 12 experts.
From all over the world.
Most of the experts, there are a lot of experts that are in the virtual PAC and the physical one.
Yeah.
And we have quite new ones.
Yeah.
Good.
And so we're quite excited
that we have new pack members.
Yeah.
We have Leandro,
our amigo.
By the way,
Leandro,
I saw your slides.
I love it.
Good call.
Yeah.
He's good.
We're glad.
We're glad to have him
as a host,
but he's excited to come
as well.
Steven Townsend, I think, is coming.
Directly from New Zealand.
Is he from Australia?
New Zealand.
Yeah, he's from New Zealand.
And Shrivali as well, from Auckland, will join us.
Yes, cool.
And then we have Alexander Podelko.
Mr. Podelko.
He's the only U.S. expert that will be at the chalet with us.
We're just lackeys.
We made it halfway around the world for the WOPR.
That's where we met Steven the first time.
But we have to make it to a PAC.
These full-time jobs, they just get in the way of everything.
Yeah, I don't know why.
I don't get it.
Why is there so much pressure for it?
Where are my priorities?
So that's really exciting.
So this is happening next week, middle of the week, 6th and 7th?
Yeah, 6th and 7th.
We'll do some live podcast wrap-up stuff like we did last time.
You're going to be there podcasting?
Well, yeah, remotely we'll pull some people in, which would be cool.
But then people can access some of the information after the fact, right?
Yeah, so first of all, you can register for the PAC, and you will receive the recorded sessions.
Okay.
Not live, but a few days after.
A few days, 10 days after.
And, of course, like previously, we'll post blogs, post the content, the slides that have been shared.
Speaking of which, I still owe you a blog series from the virtual PAC.
It's in there somewhere.
We should point out the big topics that were in
Scotland last year. I think the biggest topic
was AI. It was a big AI.
It was AI. So this year
I think there are a lot of
joint topics about,
I think it's a big trend. All the continuous
testing pipelines.
How can I
take advantage of my data coming from
APM products? So there will be a lot of
similar presentations
but I think it's not a theme
but it's good to have a similar discussion
because at least we can basically try to
figure out a best approach
and Andy is going to present something about
why don't we build up a standard
for the community
so I think that will be very helpful
We're behind that big time.
That sounds good.
And there are two talks about IoT.
Yeah.
So one of them is Bruno Odu,
he's going to share his experience on a project on a connected car.
Oh, yeah.
So I've been involved, not in the project directly,
but I've been in touch with Bruno on this.
It's a really nice project, by the way.
And then there will be another presenter that will talk about IoT as well.
So, yeah.
Cool.
I think two main topics this year are mainly automation and pipeline,
and the other one will be more IoT stuff.
That sounds great.
Now, for people that are interested, they can go and register again
at the website, and just Neotys, I think, slash
PAC, or? Slash Performance
Advisory Council. Performance Advisory Council, one word.
And then there's a hashtag. Of course, you
can follow it on Twitter.
It's hashtag NeotysPAC,
N-E-O-T-Y-S-P-A-C.
Yeah, so Neotys PAC.
And then what's your...
There's the Neotys Twitter, but you're on Twitter as well as yourself.
Yeah.
Yeah, so you can go...
No, as his Lego self.
As your Lego self.
Yeah, Lego self.
Oh, meanwhile, where is all the Lego?
Give us your Twitter handle.
It's Henrik, so it's H-Rexed.
H-Rexed.
H-Rexed.
Yes.
Now, before we came down here, there was a lot of talk about building a Lego UFO.
Yeah, I was expecting that someone would bring the blocks, and I'm so disappointed.
I was expecting you to bring the blocks.
Yesterday I had my awesome Lego t-shirt about Trump building a wall.
Don't go there.
It's not happening, I guess.
But I think we start a GitHub repo with the plans, the Lego pieces, put the whole thing together.
Let's create a Lego UFO.
That would be cool.
Because the plans are already there.
It's not Arduino.
It's the other microboard with the controller and the LED strip.
Half the parts, I was like, you know,
we're going to be doing podcasting.
I can't bring a soldering iron and all of this stuff.
But it's a cool idea.
Yeah, and I think what would be cool is that if the pipeline breaks,
then you have to break the UFO,
and the guy who broke the pipeline has to rebuild it again.
Has to rebuild the UFO.
And the thing is, you also get alternate shapes.
So you could make UFOs in different shapes
and just
lace the LEDs, the LED strip, through it. Yeah, we can sometimes build a car or change it.
Here we go, see, I can hear it happening now already.
Awesome. So that's a challenge for anybody out there, if you want to beat Henrik and Mark and James or anyone else to building this.
Build a LEGO UFO. An operative one.
So you have to get the
wires all working. Not just a LEGO UFO.
Obviously a
Dynatrace LEGO UFO.
We just have to figure out, pick the pieces.
I think there's one color
missing in the LEGO.
To fit with the Dynatrace logo.
So we have to be creative I think.
Okay.
Very cool.
Henrik, thanks for joining us.
Yeah.
Thanks for your support
of everything.
It's awesome to see you here.
Thanks for everything
you're doing with the
load testing stuff
and with the tools and all.
It's really awesome.
Rock and roll.
Thanks.
Cool.
So I think I'm going to
I just saw that there is
an arcade game from Pac-Man
so just to promote the PAC
I'm going to go and play
a bit of Pac-Man.
It is. It's right over there, isn't it?
Awesome. Okay, so we'll see you. Thank you.
Yeah, thanks.
See ya.