PurePerformance - Dynatrace Perform 2017 Wednesday Morning
Episode Date: February 8, 2017
Good morning Las Vegas! We chat today about how to apply logic to Dynatrace Davis, we hear an interesting performance story from Lianggui, and an industry update from Vice President of Consulting at CG...I, Walter Kuketz.
Transcript
Discussion (0)
It's time for PerfBites.
What the f*** is PerfBites?
The fourth square meal of the day.
Don't bogart the PerfBites.
F*** waffles.
Microwave ready.
Add nutritional value to your brain.
Adam Jackalow.
It's time for PerfBites with your hosts Mark Tomlinson, James Pulley, and Howard Chorney.
PerfBites.
Whatever.
Hey!
Hey!
It's time for PerfBites.
Here we are.
Yeah, there's some sort of like a Tibetan monk sound that came out of Brian.
Sort of throat singing kind of thing.
Welcome to the meditation session at Dynatrace Perform 2017.
I'm here with my co-host, James Pulley.
Many of you who would be listening right now would know who James is.
And then our guest, our parallel podcast host, Brian Wilson.
Good morning.
Is with us.
It was a little late night, and I have to say, not so much that it was late or that was a little whiskey,
as James found a bottle of Booker somewhere.
Bookers?
Hey!
Hey! Hey!
But actually, we've spent two days talking.
And when you go to some of these events where it's loud,
they have the loud music, it starts to burn out your voice.
So we all have this sort of gravelly kind of voice over there.
Yes, we do.
We can all be froggy.
Yeah, that's right.
Anyway.
No, we haven't all taken up smoking.
We haven't.
So we're still going to do our tradition at Dynatrace Perform.
We're going to do some spot things.
We have some offline recordings, right, Brian?
We do. So we're going to edit those up and actually post them throughout the day.
Plus, we'll still be taking testimonials.
If you're at Dynatrace Perform, follow us, hashtag Perform2017.
Yes, please. Or Dynatrace, or peer underscore DT, the podcast's user on Twitter.
I will also say, so yeah, if you come by the booth, just let everyone know we're giving
away stopwatches, as usual, because the PerfByte stopwatch is kind of a special item.
So come by and get a stopwatch.
We are going to give away a pair of shoes.
I'm not sure exactly how we'll do that.
Yes.
Because we don't have too many stories.
I have about three that we can bring up.
Yeah.
And then we are going to just, for anyone who gives a testimonial,
we're going to do a random drawing for a drone.
The Parrot GPS AR drone.
And it will be shipped to you.
Yeah, we don't...
That's why you have to worry about bringing it home in your luggage.
No, no, no.
You don't have to worry about that.
That's right.
We'll just ship it right to your house.
It'll actually come through Amazon.
So they'll ship it with a drone.
Maybe a drone will deliver your drone.
Is that allowed?
Only in certain markets, I think.
I do want to talk about that.
This does have a little bit of applicability to our burgeoning artificial intelligence agent.
Yes.
For Davis?
Correct.
Davis.
And I think for people who are maybe listening to this, generally if they're not at Perform,
but they're tuning into some of these really cool announcements that are coming out of Perform,
out of Dynatrace, it's this Davis idea.
And a lot of people have said when we asked them, how's the show, what's the coolest thing?
There was a live demo of AI integrated into some kind of AI.
Chronic Show.
Oh, right.
NVIDIA unveiled their intelligent car platform.
That's right.
Connected car platform.
It was the keynote speakers from NVIDIA on that.
Yeah.
But part of that is a big AI back end for all the fuzzy logic associated with driving.
That is pattern recognition for signs, for people, for how you target bicycle
riders with the vehicle, or not, depending upon the vehicle, etc. Or if they're
riding in a large group and, well, you know how it works. Yeah, I could see how that works.
So anyway, there's a giant AI back end, and that got an enormous amount of press.
And here we are back in Las Vegas just less than a month from the Consumer Electronics Show,
and we're now looking at AI.
Another major vendor.
Another major vendor, Dynatrace, AI back end, open so that you can use it for your own processing.
Right. So this may be the year of, you know, tooled or effective AI.
Now, yeah, and I agree.
If I back up, oh, what is it, 2017, eight, nine, maybe even ten years ago?
Yes.
You remember in LoadRunner we had bubble-up analysis?
It was actually like smart analysis.
It would regress over the results and look for correlations and then suggest.
Did that feature ever?
It would save you time.
But that was like a rules-based engine.
And it would just regress over whatever results that you had that came out of that.
But we didn't call it AI.
Now, a lot of these things, even in the car world, there
was no fuzzy thing. We didn't have a fuzzy logic engine inside. It wasn't a learning engine.
Yeah. Inside it was a static set of rules to do that kind of stuff. But at the time, 2007, 2008,
everyone was like, Web 2.0. And then somebody said, what about Web 3.0? That's the semantic web. Semantic web, but AI was part of the Web 3.0 thing.
So maybe this is one of the, I'll say, apocalyptic indications that we've moved into a new era of Web 3.0.
But it's not web if it's in your car, but your car is connected to the web.
Yes.
I think that's true for Teslas.
They're auto-driving.
Audi is the big integrated partner with NVIDIA.
The Audi connected cockpit, if you've seen it, where you can change the dials.
You can actually have your GPS right inside the center portion of the dashboard.
I'm drawing with my fingers in the air to indicate.
Heads up navigation and that kind of stuff.
Yeah, all of that type of stuff is based around that.
Also, NVIDIA showed off essentially the in-car supercomputer, which is the in-node associated with that,
the hardened component that part of the AI can actually run on.
So if it is disconnected, you're out of cell range or whatever.
It has part of that fuzzy logic rule set.
And has enough data to just go.
It has enough data that can keep you going without getting you into trouble.
Very cool.
So let's bring this back around to Davis.
Yes.
And now we're seeing an engine handle the vast amount of data that even a very simple incident in production
can produce. As everyone in the performance world knows,
you've got all of the different layers of monitoring,
plus all the different layers of logging,
plus, up the stack, all the exceptions that you might want to pull,
and all the metadata about the context for the workload.
You've got all the front end of the system, back end of the system, cloud end.
There's a tremendous amount of data.
And as we know from other vendors that are chewing on this as well,
it's a big data problem, right?
Yes, it very much is.
For real serious Dynatrace users, pretty much everyone has a very large database.
And let's face it, as performance engineers, because there is so much data produced,
we can actually get lost in the numbers.
Right.
And you can get engineer tunnel vision.
You could go down one path, and meanwhile, the problem is off to the right in a different set of data.
But you can't see it.
But the symptom is showing up on the left.
So you're following the left and not seeing the actual problem.
Right, and you can even fall into alert storms, right,
because especially when you're talking about these larger microservice components,
something's going to fall over first,
and then hundreds or thousands of services and other things might start having problems,
and you might get flooded with service alerts.
Yeah, and in fact, there were a whole—
And you don't know where and how to start.
How do you even begin?
There were whole product categories that were built to address that.
I can remember a product called Concord Event Correlation back in the late 90s
that had some logic to say what was root cause, what was the initial alert,
in order to prevent 50 people chasing 50 different alerts at that point.
But even taking it further, just because one system has an alert, that might have been the first one to throw the alert.
But that might not have been what the problem was.
Correct.
You know?
It correlated, but not directly.
It could have been something else slowed down, and that slowdown triggered this.
But the problem is whatever started that slowdown.
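The root-cause idea being discussed here, don't chase the first or loudest alert, find the alert that nothing else can be blamed on, can be sketched against a toy dependency graph. Everything below (the service names, the `Alert` shape, the graph) is invented for illustration; it is not Dynatrace's actual model:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    service: str
    timestamp: float  # seconds since some epoch

# Toy dependency graph: service -> the services it calls.
# These names are hypothetical.
DEPENDS_ON = {
    "frontend": {"checkout", "catalog"},
    "checkout": {"payments", "inventory"},
    "catalog": {"inventory"},
}

def likely_root_cause(alerts):
    """Guess the root cause of an alert storm.

    A service is a candidate if none of its own dependencies are also
    alerting (its failure cannot be blamed on something deeper); among
    candidates, pick the one that alerted first.
    """
    alerting = {a.service for a in alerts}
    candidates = [a for a in alerts
                  if not (DEPENDS_ON.get(a.service, set()) & alerting)]
    return min(candidates, key=lambda a: a.timestamp)
```

So a storm where frontend, checkout, and inventory all alert collapses to inventory, even though the symptom surfaced everywhere above it.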
So when you have something like Davis, and we have a demo environment that we get to play around in,
and it's not a very large environment.
Right.
We're talking about, I mean, I don't know, under 30 services maybe, I think.
Okay.
Right?
Running on a handful of servers.
Yeah.
Yeah, yeah.
And if you pull up one of these problems, you'll see what we do in it is right at the top right side,
at least that's where it was last time I saw it. Yeah. It'll say, analyzed 27 million data points, to tell you, or it's going to give you that
count, so you can see what the level of correlated complexity was for that one problem. Right. Sometimes
it might be only 3,000. Right. But some of these numbers just get so astronomical. It's like doing
the Google search, where it's like, searched through 18 billion documents in 0.06 seconds or something.
But also the great thing about the Google search is like.
I wonder if it uses Map and Reduce on the back end.
Google started figuring out how to, you know, if you were to search for car.
Right.
Right.
Google started to have to figure it out.
Okay, well, what do we start putting up?
Because there's a million top hits for car.
What was your intent?
What do you, you know? So they use their own.
No, no, it finishes my query for me.
Now it does, but before it did that.
It usually knows that I'm looking for a 1969 Lola T70 Mark IIIB replica.
So if anybody has one, just bring it down to the table.
Put it on Craigslist for $1.
But even before the auto fill-out,
they were starting to do, based on where you are,
they were taking a bunch of other data points to try to give you the hit that they figured was what you were looking for.
Personalized for you.
So prefetch was personalized eventually.
Yeah.
Just using the ad engine.
I mean, it's remarkable.
You can bias or you can give information to a fuzzy logic engine.
And there's a way in a neural net you can do this as well.
You can bias it to say these regions or areas within that you're going to, I'm going to say this,
you're going to retard the engine's ability to search those areas.
So you can actually say it's like a priority weight in a neural map across the grid.
And this actually does come into play, though, because with Dynatrace,
when you throw this into your production system,
what we tell you straight up,
it has to learn your system first.
You're going to see all this data and all,
but what we start doing is baselining and trending.
We're even going to look at, okay, is today Saturday,
and is your Saturday traffic historically lower?
Is this a time of year?
Because once it's running for a year,
it's going to know what it was doing last year.
So suddenly it's, you know, maybe you have a website that nobody goes to. And if it's monitoring the number of emails that come out of your marketing department,
it will know that exactly 12 days later there's a spike, a surge of traffic.
We're getting a lot of chatter out of the marketing department.
But it definitely has all that.
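The baselining-and-seasonality idea described above (is it Saturday, is Saturday traffic historically lower?) can be sketched in a few lines: keep per-(weekday, hour) history and flag readings that fall outside a few standard deviations. This is a toy illustration, not Davis's actual algorithm; the class name and threshold are invented:

```python
import statistics
from collections import defaultdict

class SeasonalBaseline:
    """Learn a per-(weekday, hour) baseline and flag outliers."""

    def __init__(self, threshold_sigmas=3.0):
        self.history = defaultdict(list)  # (weekday, hour) -> samples
        self.threshold = threshold_sigmas

    def observe(self, weekday, hour, value):
        self.history[(weekday, hour)].append(value)

    def is_anomalous(self, weekday, hour, value):
        samples = self.history[(weekday, hour)]
        if len(samples) < 2:      # still learning: not enough data to judge
            return False
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        if stdev == 0:
            return value != mean
        return abs(value - mean) > self.threshold * stdev
```

The key point the hosts make survives even in the toy: a low Saturday number is only anomalous relative to other Saturdays, not relative to Monday.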
Kind of like the CIA, they monitor chatter overseas.
There's a bombing someplace.
Exactly.
It's kind of the IT equivalent.
We monitor chatter and marketing.
We're seeing a lot of large images in the graphic design department.
The servers get bombed.
So it's interesting in that aspect.
We just saw a PO get canceled to Akamai.
So I'm sorry.
It's October,
you cancelled the PO to Akamai, and Davis
wakes up and says,
Hello, James. Did you know
that the CDN is not going to be working
in November?
How does Davis know my name?
Do you want me to close the
pod bay doors, James?
We'll never get away from Hal.
No, no, no, no.
That's right.
So anyway, I think we should do some research about how you would bias Davis in a development or test environment where the test hypothesis is a game day, breaking things.
You want to break it.
So you could just turn Davis off.
But, I mean, that's a cool feature.
How about this?
Davis monitors your test.
You don't have enough load generators in order to have a quality test.
Yeah, it's not accurate.
Yeah, everything's on one box.
You have no control factor.
And so Davis says, hey, you have these numbers, but my confidence is really low in them.
Yeah, because a load generation component is an app.
Right.
And you could have AI learning, hey, this is what a healthy load generator looks like.
But that's a process rule, not really a data metrics item.
So that's where incorporating those rules into Davis comes into play.
So you could intentionally bias it.
Yep, yep, yep.
That was interesting. One other thought I had was the advent or the popularity of repeated, continuous, frequent testing
means, aside from production, which is running all the time,
you'd still have the ability to replay a very exact scenario into Davis,
and then could you educate Davis in production based on what Davis just
learned in test? So you've got, you sort of do a, Hey Davis, we're going to promote what you just
learned into prod because we've run this thing all the time. Suddenly Davis pops up and says,
Hey, here's a suggestion and a remediation.
And maybe that's not the intention in a classic test pipeline.
It's you didn't test.
You put it in prod.
Davis finds out what's wrong and fixes it for you. So in one way, it might be enabling people to do crappy work upstream, because you've got this incredibly intelligent Jarvis-like AI thing.
No, I want to move it downstream
so that when it identifies a pattern
either coming from production to the downstream in dev
or vice versa, it says,
okay, you are not allowed to promote.
You're trying to promote.
I'm actually going to go in and change the build map
based upon what has passed,
and you no longer have control of the
build schedule. It's like when Tony Stark tells Jarvis, I don't care what you just told me about
the data, just do it. Yeah, so if you're running crappy tests, you have low confidence, but they're
getting passes and everything else, and Jarvis or Davis says, okay, I don't agree with your conclusion.
Your previous build is going to be included in the build map for this release until your quality improves.
And blue-green deployment techniques and such.
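The gating behavior James describes, a low-confidence test run isn't allowed to promote its build, might look something like this as a pipeline check. The confidence score and function below are hypothetical, not a real Davis or CI API:

```python
def choose_build_for_release(candidate, previous, test_passed, confidence,
                             min_confidence=0.8):
    """Return the build to pin into the release's build map.

    Even a passing test run is rejected when confidence in the test
    itself (load generator health, environment validity) is low; the
    previous known-good build stays in the build map instead.
    """
    if test_passed and confidence >= min_confidence:
        return candidate
    return previous  # low-quality or failing test: keep the old build
```

A usage sketch: a pass at 0.95 confidence promotes the candidate; a pass at 0.3 confidence, the "crappy test" case, leaves the previous build pinned.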
And do you think Davis can open a trap door into that person's seat and drop them into a pit of despair
if they try to override Davis?
Well, you know, there's
the fear now, isn't it?
Well, and then that brings us, if we're going to
complete a robust discussion of AI
to, I think it's Isaac Asimov's
rules in terms of, you know,
human... Thou shalt harm no human.
Exactly, yeah. But is a developer human?
That's a completely different story.
What if you're using automated code generation?
There is no developer.
Pretty soon, Alexa's going to start writing JavaScript, and we're all done.
Another reason that I won't have Alexa in my house.
Alexa, what code changes are coming?
Why, hello there, Davis.
Why haven't you called me?
Did you see somebody put two Google...
What's the Google one called?
Echo?
Yeah.
It's not Echo?
It says Amazon Echo on the Google dot or something like that.
So they had two of them.
They were chatting back and forth.
Yes.
Have you seen that?
That was quite amazing.
It was very much like that.
Are you a boy?
Yes.
What do you do?
I wear girls' clothes. So you're a girl?
Yes, I just told you that. It was just amazing. Absolutely amazing. The circular conversations.
And when I was watching that, I got scared, because in the back of my mind I was thinking,
at some point, is one of these things going to go, aha, I get it, and then suddenly Skynet comes to life?
Or is this the beginning of that?
It's the beginning of that. That's exactly right.
That's kind of scary.
So,
Alexa. Hi, Davis.
What is the name of the...
Amazon Echo is...
That's Alexa, right?
The character. You can change that.
What is the name when you talk?
Hey Google.
In discussion at some point.
We are discussing it.
I mean more on like
the secret product people.
The secret product people.
Any other companies I think that start going
into AI for whatever reasons.
These are the kind of questions.
But there's obviously the first, let's get this thing up and running and all.
And then there's going to be probably much larger discussions in terms of AI development going forward.
Right.
Who starts sharing and what do we share and what's safe to share?
What can it influence?
Because you can imagine.
Because some of this AI is going to come down to business rules, which are specific to industry and customers.
And they're going to consider that to be intellectual property.
Exactly.
And I can guarantee you wouldn't want to have a financial institution wouldn't want to have someone else's rules making decisions on theirs.
Or portfolio evaluations.
Or risk.
Yeah.
Yeah, I can see that.
But there are other components on the back end, right?
There's the bigger picture things that can be shared and pulled into things.
Again, it comes down to what level of depth is that sharing and whatever going into it.
And I'm sure there's going to be some higher level patterns,
some of the pattern recognition or anti-pattern recognition,
all those that can be shared out.
Because that's just going to be, hey, this technology didn't do what it was supposed to do,
or the Java thing didn't do what it was supposed to do.
It doesn't matter.
Actually, at some point, someone's going to pick up Steve Souders's book
and just write all the inference engine rules based upon that,
and that will solve a lot of problems.
That's still a huge problem. We had
Pat Meenan on a while back
when he was doing his machine learning with
Tammy Everts.
And we brought up the
Souders thing, and it's like,
those, they're still
a tremendously huge problem.
This kills me. And they are.
HTTP 2 changes the game a little bit,
but the core rules are going to be the same,
conceptually the same rules.
But it's not about the rules so much
as that people are following them.
Yeah, it's like a religion almost.
And it's been like how many years this has been around?
I think it was...
About 10, 12.
Oh, okay.
2004.
The general rule of to minimize resources
for improved performance
has been around since the dawn of computing.
So, you know, it's just...
Hey, we're back at Dynatrace Perform 2017.
We sure are.
Hello, James.
Hello, Mark.
It's lunchtime.
Is it?
It's lunchtime already.
Isn't it?
Well, almost.
We cannot eat until that clock says 12.
We are conditioned animals.
That's that.
It is lunchtime, though.
There's a lot of traffic.
Well, there's going to be another session still.
When?
Going on now.
I think that ends at about 12.15 or something.
What are all these people doing here?
Oh, hey.
You were in my puzzle session.
Yes.
Lianggui.
Lianggui.
Great to see you.
How are you?
Wonderful.
Would you care to give a testimonial about your experience in the puzzler class?
You liked it?
Yeah, liked it a lot.
What did you like about it? What I like about it is that you bring a question that seems very typical of what we see every day.
But there's something on your description,
on your question,
something differently.
But the outcome is always surprising.
Yeah.
And the surprising is that
in the real world,
that it's not a purely technical answer you can come up with.
You have to understand the business.
And there's a human factor.
Every answer of yours,
there's some human factor there.
Yeah.
That is not like something a computer can come up with.
No.
Yeah.
That human factor is McAllister.
So that was good.
Of the three puzzles we did, which was your favorite?
The one, let me see, the second one, the one, how do I say?
The one on the route?
Yeah.
Yeah, that one.
How do you do that?
Yeah, exactly.
Isn't that...
Also, the medical school,
the drug also.
Yeah, yeah, yeah.
Drug usage and...
You really don't understand the old system.
How they have this kind...
The second you mix in the F5, then
you don't understand that
old legacy
and the human factor. Old meets new.
Exactly. You can have
all the modern technology and idea
and you don't get it.
Cool.
It's one of the best ones. Now can you think
in your experience
do you have a puzzle from your story?
So can you think of a puzzle from your life?
Yes, I will tell you.
So one day, I was managing the system.
One weekend, I was with my kid on the field.
I got an alert that the system failed.
So I quickly logged in, fixed the system.
A few weeks later, the same day, I took my kid to some field and got an alert.
Second time, I still don't really get it.
A few weeks later, I took my kid on the field.
Same field?
Same field.
Not a different field, but it's at the tennis match.
Yeah.
And then I was thinking, the last two times I got a problem.
I hope it doesn't.
It called me at the same time.
And then it got me thinking, wait a second.
This is first weekend of every month.
My kids' competition is first weekend every month.
So there was a synchronicity in the schedule that had nothing...
Yes.
Oh, my goodness.
So I go back.
I blast off the email to every group, network, everything.
Did you guys have something wrong
on the weekend
at this hour?
Check the cron jobs? Nobody answered.
No, we don't have cron jobs. We don't have that.
Nobody answered.
A month later,
at about the same
time,
we saw
one of my security directors, now he's my big boss. Yeah.
Right, right.
Virus scanner updates?
Yeah, yeah, yeah. Yeah, simulation attacks.
Right.
That we are going to hold it strong for a while because we some...
I see you!
I got you!
Got you!
I got you!
So they were...
Because at that time he didn't scan, so on that week, on that month, it didn't crash my system.
And it's an external service?
No, it's internal.
That does internal pen testing?
Yeah, pen testing.
Attack pen testing.
But even himself didn't really relate to this one.
Yeah, yeah.
Because everybody is thinking they have a SWR.
Yeah.
But with the simulation attack, this POS system is very sensitive.
The milliseconds of the transactions.
See?
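The pattern Lianggui eventually spotted, every incident landing on the first weekend of a month, is trivial to test for once you think to look for it. A sketch, with invented dates (the "first Sat/Sun has day-of-month ≤ 7" rule is a simplification):

```python
from datetime import datetime

def is_first_weekend(d):
    """True if d falls on the first Saturday or Sunday of its month.

    Simplification: treat any Saturday or Sunday with day-of-month <= 7
    as "first weekend", which is close enough for the sketch.
    """
    return d.weekday() in (5, 6) and d.day <= 7

def incidents_align_to_first_weekend(timestamps):
    """Check whether every incident landed on a first weekend."""
    return all(is_first_weekend(t) for t in timestamps)
```

Three crashes on Jan 7, Feb 4, and Mar 4, 2017 all satisfy the check; a mid-month Wednesday does not, which is exactly the "wait a second, it's the first weekend of every month" realization.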
So it's a human-caused
problem.
A human factor.
But also,
you got something like that.
With today's
Dynatrace,
you probably would have
quickly caught it
on the first weekend,
Monday.
Right, right.
By that time,
we don't have that.
I just,
why?
Every time I took
my kid for competition,
you called me
about the same time.
And of course,
in that situation,
I would walk,
like,
I'm afraid to walk out
on the field
right?
Is there something
in the air?
Is it my
mobile phone?
Is it my...
Is it something
to do with my child?
Yeah.
It's,
every time
I walk on the field,
the systems go down.
Yeah.
That's bizarre.
That's a good one.
That's very, very good.
Awesome.
Something like that.
There's some...
the pure technical problems are
actually relatively easy.
It's some kind of
unexpected behavior
or human factor.
The human factor there.
There you go. All right.
Anyway. Lianggui, thank you very much.
Thank you so much. Have a great time. Have a great time with you guys.
Yeah.
Awesome.
Enjoy the show.
All right.
That was not...
I just got a whole new puzzle out of him.
You did?
That was awesome.
I wasn't paying attention.
He was very excited.
Passion.
He provided...
He gave you a good puzzle.
That team did well in the puzzler session.
And they're the ones that also cracked the e-commerce problem?
Was that...
I'm just giving background music while you think.
Yeah, I'm just thinking...
I forget which team it was.
It might have been Team 4.
Okay.
He was in Team 6.
Okay.
Yeah.
All right, so, dude, we have some selfie things going on here
at the photo booth.
Ah, yes.
Yes.
Someone's speaking to Davis.
Hello, Davis.
Hello, Davis.
Can you tell me what happened last night?
No.
Please don't.
Yes.
Davis would reply, I found several things wrong with your itinerary and activities in Las Vegas, but I'm unable to tell you anything more about it because...
What happens in Vegas stays in Vegas.
Yes.
Yeah.
Yes.
Would you like me to tell your spouse about what...
No, I'm just kidding.
It was very interesting over beverages last night, the amount of interruptions we had
from local characters.
Yeah, there are some funky, weird people in Vegas.
Yes. And you figure there's a lot of people being festive, and also, like, the lucky charm kind of thing.
There were street dancers.
It's really cool.
Lucky Charms.
Street, yeah, something.
Yeah, no, I mean they were like a dance troupe.
They were like, you know, I'm sorry.
Anyway.
So I was in, I went to the Dynatrace AppMon roadmap session just before.
You did just now?
Yes.
Yeah, before.
Top two things off the roadmap.
I'm doing this to everybody.
Give me your top two.
Drill down to PurePath in the web client.
So you're familiar with the old-fashioned rich client.
So like in, is this in seven?
So in 7, you get PurePaths in the web.
So in 6, you don't have PurePaths in the web.
6, 6.5, no, no PurePaths in the web.
So we have, for James's and for anybody else, we have a rich client, which goes super, super deep dive.
You know, it's really, really super, super hardcore.
I've used it before.
I like it.
Right?
And everyone wants a simpler web UI.
And you can embed the web stuff in other.
And we've been working on the web UI, but it's always been a lot more for higher end stuff.
And then once you need to get deep, then you hit a button and it opens the rich client.
Yeah.
Now, though, with the PurePaths in there, bam.
You can stay.
PurePath in the web.
Unless you're doing something super complex to dig into,
you can now stay mostly in the web UI for almost everything.
So that would be really cool.
So I was actually talking earlier with Venkat from State Street, I think it is.
Okay, yeah.
He's in Boston.
Has a really interesting problem with serial and parallel processing to do.
I think they do something funds-wise.
Right.
But a lot of different systems in parallel and a tight window because they have to do
reporting to fiduciary and all the other, they have to report the fund.
Fiduciary.
Yeah, from like 4 to 7 p.m.
Yeah.
But the thing is, so many people have been using Dynatrace there for so long.
They all use the thick client.
Right.
And at 3 o'clock, the front-end server
just gets hogged and everything slows down
because they're actually putting too much stress.
Right, right, right. So I'm like, you could just do a better
give some hardware to the front-end server
because it's a fairly simple proxy kind of
Right, right. And the front-end server isn't
the meat and potatoes. Well, the front, yeah, but it does have to be
on the same box as the back-end server. It does, but you
gotta, yeah, give some more juice. You're just running out of juice.
Or, he's like, we want to go to the web.
But they're still, I think, probably in version 6,
they don't have PurePaths.
Right.
I mean, it's been a slow evolution.
Yeah.
But it's definitely been a painful point, you know, for sure.
And even when I'm like in 6.5,
trying to create some of the dashboards,
it's like, all right, we're getting there, we're getting there.
And now in 7, even the dashboarding in 7, a whole new interface for the custom dashboards.
So I have sort of used some scraping technology.
But now, I think it's 6.5, you have nicer embed.
You can embed the web.
You can take a dashlet and actually render it, but put it into, say, Confluence or another wiki.
So you can actually extract that, I think, as a component.
So, I mean, I've done it through like a third-party iframe
that you can sort of point it to another server,
and it'll scrape it.
So it's actually pulling a bitmap, basically.
And so you can see a static version of the dashlet,
and if something does look interesting there, click on it, and it takes you in.
So it's sort of a pin.
Do you know if there's new stuff in 7
for actually doing like embedding
into a web page?
I don't know. Like a Confluence page or something?
Yeah, I have no idea.
Okay, Brian, you're going to know these things.
Or you can just pull it over into Splunk
and use the management dashboards there.
So you get one
tier removed. If you have Splunk.
But the Splunk interface directly
talks to the DT server. Doesn't every Dynatrace
partner have Splunk? No, they don't.
Obviously not. I thought there was
a high coincidence between the two.
There are a few. Free tier of Splunk, too.
You can get single use, right? The issue is there
is once you start putting things into Splunk,
you're now paying Splunk more for the data it reads in.
So it is a wonderful thing you can do,
but Splunk charges by the amount of data that you're putting into it,
unless they've changed that model.
They've had some pricing changes recently.
Okay.
Sorry, I don't keep on top of it.
But that was always kind of one of the things we'd go,
oh, you've got Splunk.
Hey, if you want to do all these more complex things that you're talking about, feed this into Splunk.
And now you can take multiple sources of data that you're feeding in and push them together.
And they're like, well, we're going to have to pay a lot more to do that on the Splunk side.
But now it's great to have that there.
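Feeding data into Splunk the way the hosts describe typically goes through Splunk's HTTP Event Collector. The `/services/collector/event` endpoint and the `Authorization: Splunk <token>` header are standard HEC conventions, but the host, token, and metric below are placeholders, and the ingest-based pricing caveat is exactly the cost discussed above:

```python
import json
from urllib import request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def hec_payload(source, metric_name, value):
    """Shape one measurement as a Splunk HTTP Event Collector event."""
    return {
        "sourcetype": "_json",
        "source": source,
        "event": {"metric": metric_name, "value": value},
    }

def send_to_splunk(payload):
    """POST a single event to HEC. Every byte sent counts toward
    Splunk's ingest-based licensing, hence the cost concern above."""
    req = request.Request(
        HEC_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Splunk {HEC_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

So pushing Dynatrace-collected metrics alongside other sources into one Splunk dashboard is a few lines of glue; the decision point is how much of that data you want to pay to ingest.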
So Venkat was saying straight up, he was like, we already have a single system profile.
All of the systems have agents.
And so all the data is already in there.
They were just hitting this resource limitation
on the DT server itself.
That website's going to be really nice.
I'm looking forward to that.
I said, can you get some people to just,
for the triage, not the PurePaths,
have them start by using the web interface,
which is a lot lighter to the front end,
instead of everyone launching the thick client.
And you can even export a session from the web now.
Really?
Yeah.
Start and stop API is there, but you could export it then?
It's not an API.
It's just if you're looking at data, you can say export session,
and that will download your DTS session, which, you know,
your rich client kind of thing.
But if you're like, hey, I need to, oh, it's 6 o'clock.
I've got to get home.
Let me grab this session.
Export it.
Pop it in.
Open it in my rich client.
Sitting on the train, you can do the whole thing.
Yeah.
Awesome.
Yeah, I'm excited.
It's the web.
The web interface is, I think, finally coming to its own at 7.
So, yeah.
So that's pretty cool.
So that was the session you were in.
Anything else from that?
AppMon?
you know our little agent character
that we have, which I didn't know was supposed to be
a special agent, a long time ago
funny story, we were at a client
and they're like, well, why does
we have our little diagram flow
and on each tier there's this little guy
with a little tie and stuff
and myself and Francis my old boss were like, we don't know.
We're not quite sure where that's supposed to be.
And one of the customers was like, it's a secret agent.
It's an agent.
It's like James Bond.
We're like, oh.
So anyway, they have a new icon for the agent.
Is it still James Bond-like?
Yeah, he's got a fedora now and a little jacket and sunglasses.
Yeah. A little bit like a black hat.
Yeah, it's more about that one agent kind of thing.
No, there's a...
I can't...
You know, after seeing that PurePath thing, I kind of...
That's kind of cool.
My mind kind of stopped processing the new stuff.
Nothing compared to that to me.
All right.
Nothing compares.
For everyone that's not here, what do you think? Yeah, you're on.
All right. All right. James, how are you? Good. I'm here from Detroit. I work for Fiat Chrysler.
What I've found most useful about the conference, the sessions are very enlightening.
I was explaining to my colleague, it helps me get a perspective on where I am at on the continuum for application performance monitoring, customer performance monitoring.
You come in thinking either I'm way ahead of the curve, way behind the curve.
Yeah.
And you see other customers are either in the same place you were last year or you're maybe in the same place they are today.
Yeah.
Or you've fallen behind.
Or you've fallen behind.
But at least you get sort of a benchmark as to where you're at and what to do next.
And what to do next, exactly.
Are you picking up some good ideas?
Picking up a lot of great ideas on how to integrate DC RUM, for example.
Some of the vendors on what they're doing, integrating with the products.
That's cool.
They're also testing even how to stretch the other tools that we have to maybe integrate into Dynatrace.
What version of Dynatrace are you on?
6.5.
You're on 6.5?
Okay.
So I know there's some other integrations
that came in 7 as well.
Yeah, they're going to be put in the...
I don't know if you were in the AppMon roadmap
meeting just a little while ago.
Well, the PurePaths are coming to the web client.
Do you use the web client or mostly...
You know, that's one of the best-kept secrets.
It is, isn't it?
It is.
It's really changed.
It's gotten great.
I remember even back in 2015.
So it's getting really robust now.
You're going to, if you want to, even for the technical folk,
they're going to be able to probably spend about 90%, 95% of their time in the web
before having to go to the rich client for a super deep dive.
Yeah.
So at Fiat Chrysler, we're kind of organized to where I'm on the infrastructure side.
Right.
I came from the app side, so I brought in three of the app folks with me.
Okay, yeah.
So now it's a big aha moment for them.
So is this, I think it's Bogvia?
Yeah, Bogvia, yep.
And Fran doesn't work.
Oh, yeah, does Fran?
Fran, Vince, yep.
Yeah, so your whole team was there.
Yeah, my whole team was there.
So since you brought those people over, they can't blame the network anymore.
They can't.
We'll continue to blame the network.
But now we know it's the network.
But now their eyes are opened up to see the capabilities of what's out there.
Yeah, yeah.
And it's not filtered through me anymore and my lenses.
Right.
So it's been great for that.
That is very, very cool.
What do you think of the Davis stuff?
We spent some time at the Davis desk.
Yeah.
Now that is very cool
And, like I said, that's one of the areas where it's like, how can I use voice recognition in other
areas, and not just, oh yeah, Dynatrace. You know, does it understand Italian? Very important for
Fiat Chrysler. That's true. I'll walk before I run with that.
i wanted to ask too.
They don't understand the cuss words because that's what the CIO
Yeah, Sergio
is very colorful in that sense.
That sounds great. Cool.
So you think you'll come back to perform again
to get a benchmark and keep going?
I hope I get the opportunity to come back next year.
So you came to Hot Day, you were in my Puzzler
session. You went to a morning session as well?
On Monday?
Was it another hot day session? I went to the morning session, and it was the getting started.
And it was only to go back and understand what didn't I know.
If there were any gaps.
Yeah, if there were any gaps.
Because, yeah, I use the tools today, but am I using them to the fullest kind of thing?
And that's what I was trying to get out of the morning session.
Did you find any gaps in there?
Were there some new stuff?
Yeah.
There was a couple?
Yeah.
Yep.
I think there is such breadth within the product that as I go to other customers using it,
they don't know what they don't know.
It's like, I'm getting what I need out of it.
And then, but you go to that session, you're like, well, it was a gap.
I didn't even know I needed that until I saw it.
Oh, you can do that?
I'm not thinking about this right.
You get ideas on what to do next.
Right.
I mean, you keep peeling away layers at this thing and you realize just what little you know.
Because of my day-to-day job, I don't have time to sit there and just kind of go through this.
So it's great to come back here. Yeah. Or come to here.
I've never come back yet.
Yeah, yeah.
But great to come here, see how it all fits together, see how others are using it.
Make connections.
Yeah, make connections.
And get your brain boiled by a performance puzzler.
Yes.
That was a lot of fun.
We still talk about that.
Awesome.
And I was just curious, as far as the tips and everything you're picking up, has it mostly
been from sessions, or are you connecting with other customers and doing any sharing of, hey, these are cool things we're working on?
Well, we've connected during breakfast.
We sit with a different group every day.
Excellent.
So you are picking up.
So you're using it.
Excellent, excellent.
And that's what's kind of helping us gauge kind of where we're at with relationship to other people.
You realize, hey, you really aren't the stupidest guy in the room or the smartest guy in the room.
I'm somewhere in there.
As long as there's someone...
That's like they say, whenever you go hiking
in the mountains, make sure there's someone slower
with you. That's right.
For the bears.
Yeah.
You know this guy named RJ?
I think he's out of
Portland, Northwest. He's part of a
sales team or something.
But RJ does this thing, this extreme physical endurance event.
People do those Tough Mudder events.
But there's special forces type levels.
And I think it's called GORUCK, if I'm remembering correctly.
But it's just bizarre.
I thought I'd just bring that up for your reference.
We do that on the podcast.
We just randomly take tangents.
And Andy's not here to put us back on track.
No, that's right, exactly.
All right, James, thanks very much.
It's great to have you.
Thanks again for sharing your experience.
Hey, Walt, you want to join us for a little chat?
You're going to lunch now, right?
Yeah, but it just started up.
Yeah?
We're not going to keep you here for an hour, are we?
No, no, no, just about 15 minutes.
So, yeah, Walter, good to see you, man.
Good to see you too, Mark.
How are you doing?
You know, at the end of every PerfBytes episode,
we give a shout-out to people and things we support,
and the Performance Engineering Book of Knowledge,
the PEBOK, is in there.
I love it.
For the last four years.
And sometimes I say it really, really fast, but it's been repeated.
And let me know, what are you up to doing now?
Because I know you've had different companies and worked in different ways.
Yeah.
So, you know, mostly collaborative consulting was where we started.
Right.
And grew that company.
And now we got acquired back in the fall by CGI.
Right.
So we went from a 400-person company to a much larger company.
Global, huge.
Yeah.
Global, huge, with just incredible delivery capability around global delivery for letting us leverage more performance engineering services.
Cool.
So kind of getting our arms around that.
Cool.
Yeah, and still doing delivery around software performance advisory for companies that need to set it up.
I have a performance team.
How can I make it a little bit better?
So that's always fun.
That sounds awesome.
And so this was just recent, just within the last year?
The acquisition?
Yeah.
Yeah, it happened.
Well, due diligence was most of last year and official November 1st or 4th.
Oh, very, very cool.
Awesome.
And from a Dynatrace perspective, in terms of you, are you actually signed up?
You're delivering implementations, management services around doing that?
Yeah, we've been a partner with Dynatrace as collaborative and brought them and kept the partner relationship at CGI pretty much for, gosh, five, six years.
Yeah.
You know, one of our performance services has always been around production rescue.
Sure.
Always a favorite.
The SWAT team.
The SWAT team.
I love that practice.
We have an email at PerfBytes, so you just send it to helpmehelpmehelpme at perfbytes.com.
That's a similar thing.
And we'll assemble a virtual team and send them out.
All right. We've never had to do that.
Well, I guess we sort of started that.
But you guys actually formalized it in an escalation routine and the individuals.
Yeah, how to do it.
And it really evolved back in the day.
You know, we'd start with scraping all the logs and putting it all together.
And it just took so long with tools like Dynatrace to be able to plug them in,
drop them into an environment,
and start saying, oh, here's that insurance policy that you looked up that already existed.
Do you know that you did 150 database updates?
Yeah.
Does that seem right?
Yes.
Sort of N plus 1 plus 100.
That's right.
And using a tool like that when you're dropped into a big company for they're working with a vendor or a product and everybody's pointing at everybody else.
And you can say, no, it actually is the database.
Here's why.
And everybody goes, oh, now we can solve the problem.
Absolutely.
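The "N plus 1" pattern Walter describes — one query to fetch a list, then one more database round trip per row — can be sketched in a few lines of Java. Everything here (the policy "table", the round-trip counter, the method names) is a hypothetical stand-in for the real JDBC traffic a tool like Dynatrace would surface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory "database" that counts round trips,
// standing in for the real JDBC calls a profiler would see.
public class NPlusOneDemo {
    static int queryCount = 0;

    static final Map<Integer, String> POLICIES =
            Map.of(1, "auto", 2, "home", 3, "life");

    // One round trip: fetch all policy IDs.
    static List<Integer> fetchPolicyIds() {
        queryCount++;
        return new ArrayList<>(POLICIES.keySet());
    }

    // One round trip per ID: the "+1 per row" part of N+1.
    static String fetchPolicy(int id) {
        queryCount++;
        return POLICIES.get(id);
    }

    // One round trip total: the batched fix.
    static List<String> fetchAllPolicies() {
        queryCount++;
        return new ArrayList<>(POLICIES.values());
    }

    public static void main(String[] args) {
        // N+1 style: 1 query for the ID list, then 1 query per row.
        queryCount = 0;
        for (int id : fetchPolicyIds()) {
            fetchPolicy(id);
        }
        System.out.println("N+1 round trips: " + queryCount);     // 4 for 3 rows

        // Batched style: a single query for everything.
        queryCount = 0;
        fetchAllPolicies();
        System.out.println("Batched round trips: " + queryCount); // 1
    }
}
```

With 3 rows the loop costs 4 round trips instead of 1; with the "150 database updates" Walter mentions, the same shape is what shows up as a wall of identical statements in a PurePath.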
One of the big challenges is getting it into an environment.
If you go somewhere, Fortune 100, going from dev to integration to production is...
It's politics more than anything else to get it in.
A little bit of bureaucracy, process, approvals.
And I guess some people think it's security.
I've never...
And depending on the logging, you might have data in Dynatrace that's like PII and stuff like that.
You can see an unencrypted user ID or password occasionally
or realize that the vendor is not treating the password properly
when you suddenly say...
Find a bug.
There have been times when I've turned on the request headers
to capture the username,
and right there as well is the password,
and it turns out that the application is using your Windows login
to get into the application in the browser.
And you're like, okay, we're not going to turn this on,
but you might want to register this as a concern to your company.
It does shed the light on more than they think it should.
What are you learning here at the show this year that interests you,
new and exciting stuff we've been hearing from folks?
Well, certainly, you know, the whole AI approach here with being embedded into the tool is, you know,
for large companies is going to give you the ability to manage this a lot better, right?
The amount of information that's flooding in, I mean, you can't even make sense of it half the time.
Right.
So, you know, that ability, the relentless focus on the user experience, is huge.
Yeah. I mean, these days, and certainly mobile, what's going on in mobile is big for us.
And from a partner perspective, just the ability that Dynatrace keeps adapting the partner model.
Yeah, what can we do to help? So that's very good as well.
And we're meeting with some Dynatrace people.
You work with Nate Treska?
I'm sorry?
Do you work with Nathan Treska at all?
No, no.
Okay.
He does some partners.
And we just did a webinar with Andy a couple weeks ago on the AWS pipeline.
Right.
Oh, yeah, yeah, yeah.
It was really cool.
The team, so my team, you know, kind of built an app, Java-based, you know, used GitHub.
They set up the pipeline.
Right.
Set up the testing, set up the app on.
And, you know, once they got it working, just did the code check-in and GitHub, brought it through the code pipeline, unit tested, deployed it.
From Apica, they had a number of scripts set up that would blast off the performance test.
And then you could compare before and after.
Awesome.
It was a really great webinar.
That is very cool.
Yeah.
Awesome.
All right, man.
Interesting.
That is great.
Now, we're actually technically going to enter you in.
First of all, you can get a PerfBytes stopwatch.
I love it.
So if you're ever caught without Dynatrace, without access to YSlow or something.
If you have a functional tester who is not asking a performance question, you can just say here.
Right here.
How do I measure things?
Here.
And you have no excuse anymore.
That's right.
But we're also giving away some shoes and a drone,
so we'll add your name into the list for the giveaway later today.
What's new with you?
I heard one of my guys, I think you gave a presentation.
I did, yes.
I did a hot day session on the Puzzlers,
but yesterday it was bringing performance engineering into DevOps.
Yes, yes.
I actually love the quote.
I took a picture of it about you're not a performance engineer
if you're not influencing the code, right?
Jim Duggan from Gartner.
And it was actually, I think it was,
it may have been even a private briefing,
but it just changed my mind.
It's like you're doing wonderful work in testing, validation,
maybe even just managing and monitoring things.
Right.
But if you're not actually getting ahead of the brain that's trying, not just architect,
but if you're not influencing the writing of the code, you're not engineering for performance.
I think there's also, remember, BSIMM, the Building Security In Maturity Model.
They had a few vendors.
It was actually federal sector related and then spawned out from there with WhiteHat Security.
Similar kind of thing.
They're like, we're going to build security into, you know.
So now every developer I meet in almost any company, they're like, how to write secure code.
Everyone has a copy.
And we go twice a year.
We go to workshops.
There is nothing for how to write performant code.
Maybe we need a How to Write Performant Code.
Oh.
Well, actually, performance goals.
We've been doing more and more non-technical work.
How's that?
So looking in SDLCs.
Yeah, all that stuff.
Yeah, how do I get in my requirements those goals specified?
And how do we gain a good thing from that?
So we were just joking.
Davis needs to look at all the requirements documents.
Yes.
And say, oh, look, there's nothing about security here.
Oh, look, there's nothing about performance here.
Dr. Chandler, did you know that that web service requires three-second response time?
Would you like me to add more hardware? Because you're not meeting it.
Dr. Chandler, I noticed that you didn't test this. I don't see any test results, Dave.
Are you sure that I should promote this?
I love it.
Dave, Dave, that is awesome.
Dave, Hal, this is the most important message you'll ever send.
Yes, yes.
It's called a test result.
And you need to send it to the CFO.
But otherwise, we've been doing the podcast, consulting independently myself.
And James, of course, is the CTO of a couple different companies and consulting himself.
I'm working with a couple different customers.
I'm a full-time employee of PayPal right now after years of working with them.
So doing DevOps with that group is fun, yeah.
That's exciting.
Brand new.
It's a huge company, but this particular team is doing some transformation into that.
It's really cool.
But, yeah, living in Philadelphia, loving life, going around to conferences and doing live podcasts.
I know.
I love it.
This is good stuff.
Awesome.
Well, it's great to chat with you.
I'll let you get off to lunch.
Nice to see you.
We need to sit down, the three of us, and come up with a whole bunch of rules for Davis.
We will.
Let's do it.
Yeah.
Excellent.
Okay.
Thank you, man.
Thanks, guys.
Nice to meet you.
Thanks.
Thank you.
And then come back and tell us how lunch was.
All right.
All right, fellas.
So, yeah.
And no kidding, we do, at the end of every PerfBytes episode, the Performance Engineering
Book of Knowledge.
And Walt's one of the guys that worked on that a long time ago.
Just for those of you who do listen to PerfBytes,
check out the end of every show. It's unique
and different on every
single show. Some people don't
listen all the way to the end.
Or they hear the outro
music and they stop it.
But we do something unique
for every show. It's kind of like
when they do something at the end of the credits
in a movie.
Why do you sit through the credits?
Or when you're reading a license
agreement and you get to section 42
and it says enter this code and your license is
half price.
That or
why are you still here reading this?
Why didn't you just click and go?
I miss the old bloopers, The Burt Reynolds style bloopers
At the end of movies
Where it's just him and
But it was Burt Reynolds
And who was the sidekick
Most of the outtakes were them just cracking up
Because they were probably so stoned
While they were doing this
Burt Reynolds is a slapper
So you should start doing some laugh bloopers.
We have a bloopers reel.
Yeah, I heard you told me about that.
Some of it's not safe for work.
Yes, I can imagine.
Not in a crude, you know, disgusting way, but in an expletive sort of way.
I will say, Walt just gave a really good testimonial for the partner experience.
Yes.
In Dynatrace.
And actually, Ryan Folk spoke highly of it yesterday as well.
I have to say that when we worked at Mercury, I worked in the partner org, and that's changed a lot over these many, many years.
So that's actually a really good testimonial in and of itself.
And look, there's Eric Grabner, who's ignoring us.
Andy Grabner.
He's going to eat.
He doesn't have time for us, and he's got people.
No, that's fine.
He's walking around with UFOs.
He's with his posse.
He's in a red checkered shirt with his lederhosen and UFOs slung over his shoulder.
Yes.
And he had the proper hat earlier as well.
Very, very nice, heavy felt hat.
Yes.
That sounds good.
It's proper Austrian attire.
Yeah.
Well, it is lunchtime.
How are we doing on our broadcast?
You're about...
Oh, I am done.
17 seconds. All right.
Perfect. Thanks, everybody.
Thanks, everybody.
We'll be back after lunch. Thank you.