PurePerformance - The 201 Milestone Episode on Automation, AI, CoPilot and more with Mark Tomlinson
Episode Date: February 12, 2024. 201 is the HTTP status code for Created, meaning a new resource was created. It is also the number of PurePerformance episodes (including this one) we have published over the past years. None better to invite than the person who initially inspired us to launch PurePerformance: Mark Tomlinson, Performacologist and Director of Observability at FreedomPay. Tune in and listen to our thoughts on the current state of automation, a recap on IFTTT, and whether we believe that AIs such as CoPilot will not only make us more efficient in creating code and scripts but also lead to new ways of automation. We also give a heads-up (or rather a recap) of what Mark will be presenting at Perform 2024. To learn more about and from Mark, follow him on the various social media channels: LinkedIn: https://www.linkedin.com/in/mtomlins/ Performacology: https://performacology.com/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson, and as always I have with me my co-host Andy Grabner.
And you know what, I just said Andy, welcome to another episode of Pure Performance.
But I really don't think we should consider this just another episode, right?
As of our last episode, we completed our 200th, so now this is the first of the next 200, maybe 3,000, who knows. Maybe we'll be doing this from our sick beds as we get old and we're dying, or whatever. But 201 episodes, and we couldn't have done it without our special guest today. But before he pipes up, which I know he's dying to because he loves talking, Andy, how are you doing today?
I'm good, but I think I'll keep myself short because he did an awesome introduction. But you were right, it's amazing to see how long we've been doing this. Six years, seven years? It was 2015, I think, when we started. Or 2016. Eight years. So, yeah.
So much in there, right?
Yeah. And it started because somebody said, we'll just do it. Why don't you just start a podcast? All the kids listen to podcasts. And I was like, what are those? 'Cause I wasn't a kid. I'm like, people listen to podcasts? What, what is this, AM radio, um, Dynamis in the morning?
He's cracking up there.
Yeah, no, it was this guy who you and I fortunately met earlier in our careers,
which, you know, I don't know what that says about his age at this point,
but I'm just trying to rile him up.
Yeah, he just turned to us and said, do it.
And we did it.
And we did it.
And with this, we lift the curtain and welcome Mark Tomlinson to the show.
Hello, Mark.
I'm the old man on the microphone.
Congratulations to both of you on 200 episodes.
You haven't killed each other yet.
That's because we're in different countries. Because we're so far apart.
Exactly.
There's ways.
I mean, we're all in technology.
We know you could digitally annihilate each other.
Right, but we only see,
interact with each other
during these podcast recordings,
so it's limited engagements.
I'm sure I would annoy the hell out of Andy
if we did more.
But it would be interesting to count how many podcast recording tools we actually went through over the years, because we obviously killed a couple of those.
Yeah, we started with Skype.
Yeah, but I can think of no more boring topic than that.
That's like an artist talking about art
instead of just shut up and paint.
So we should just shut up and podcast, right?
Is he taking over the conversation now?
I am.
Well, he is the pod master.
Oh, pods.
And look, it's pods.
It never dawned on me.
So we're deploying a pod master.
We're an individual instance of a pod.
And as elastic as you are, you're now three nodes within the pod of this podcast.
Well, actually it was, Brian, you and I, I think we were in Denver, we went and hung out and we were talking about audio engineering and doing some musical stuff, and I'm like, hey, consider launching a podcast. For no other reason than to get back into doing something with the audio side, the creative side of my brain. And I think that was good, because you guys have done tremendous work. So congratulations to both of you, and to another 200 episodes. And maybe you'll have me on at 300 or 301, so it's like every 100. I'll be another century old by then, right?
Probably.
200 years old.
That would be good.
Wasn't that a movie, The 200-Year-Old Man or something like that?
I don't know. That's me. No, they based
that on me. I don't know if you know that.
Awesome.
Mark, over the years, and coming back
to actually talking about performance,
DevOps, platform engineering,
AI, observability,
over the years, you have inspired a lot of us,
Brian, myself included, and many of the listeners
and the people that you've come across at different conferences
with stuff that you've learned over the years.
And I know Perform is coming up.
I think by the time this airs,
we should be probably right around the beginning of Perform 2024.
And over the last years,
not only at Perform, but in general in the industry,
we talked about automation, automation, automation.
Yes.
I think though, and maybe correct me
if you see this in a different way,
is that the level of automation
is still kind of not there
where we as an industry want it to be.
And I was just, first of all, confirming,
do you see this as well?
And then the second thing is,
is there anything on the horizon where you think
this will change?
Does it need more education?
Does it need better tools?
What does it need?
You guys are familiar with If This Then That, the IFTTT app, which has recipes, right? So there's all sorts of well-proven or well-trodden recipes, and some of them
are authored through open source repos. So here's the rules and the configuration of a job or a
configuration of a step of some kind. And so we probably have more of those
if you have an enterprise implementation of Rundeck
or we just, last year at Perform,
we announced the workflows
so you could build workflows within Dynatrace
and we're already exploiting that.
I think the thing you run into is: which recipe? Like, there are 400 recipes for tacos, but which one will I like the best? Which one tastes the best? Do I have the ingredients already? So there's a lot of pre-automation design to know what you need to automate. You can't just jump into the job: well, here's a Rundeck job, I'll just copy and paste, and woohoo. Then you find out it's been, you know, sending credit card numbers to Russia. Nothing wrong with that, if that's your business, that's fine, but we don't allow that at FreedomPay. Anyway, you get what I mean. So there's a lot of experience that goes into automation, which is: what do I want the outcome of this job to be? And then the minute you start asking questions during the design
of some automation, you usually run into answers or more questions where people are like, well,
hey, wait a minute. I know we usually do step one, two, three, four standard operating procedure,
and I can automate that. Earlier, when we were talking, Brian, you mentioned, I think Wilson Mar or somebody had mentioned: what is something I just keep doing every day and I'm just bored? It's the same four steps, I'm definitely going to automate that. And so maybe, Andy, we've gotten through a lot of that, but those were the least risky, the most mundane. I'm just tired of doing it, I don't want to pay a human being to do this, let's just, that's easy. It's the If This Then That: like, if it's 5 p.m., turn the lights on. There's no huge risk in our lives to that. But if I'm automating something that says, when I see this condition and this condition and that condition, then try step one, then step two, then three. So now you've got a very complex set of logic around an automation.
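To make Mark's contrast concrete, here is a minimal, purely illustrative sketch in Python: a single if-this-then-that style rule versus a multi-condition automation that tries ordered remediation steps. The condition names, thresholds, and actions are invented placeholders, not any specific product's API.

```python
from datetime import datetime

# Simple IFTTT-style recipe: one trigger, one action, negligible risk.
def lights_rule(now: datetime) -> str | None:
    if now.hour == 17:                     # "if it's 5 p.m. ..."
        return "turn_lights_on"            # "... turn the lights on"
    return None

# The "complex set of logic" case: several conditions must all hold,
# then ordered remediation steps are tried until one succeeds.
def remediate(metrics: dict) -> str:
    conditions = [
        metrics.get("error_rate", 0) > 0.05,
        metrics.get("queue_depth", 0) > 1000,
        metrics.get("cpu_util", 0) > 0.9,
    ]
    if not all(conditions):
        return "no_action"
    for step in ["restart_worker", "scale_out", "page_on_call"]:  # step one, two, three
        if try_step(step):
            return step
    return "escalate"

def try_step(step: str) -> bool:
    # Placeholder executor: a real workflow engine would run a job or runbook here.
    print(f"attempting {step}")
    return step == "scale_out"
```

Even in this toy version, the design questions Mark raises show up immediately: who decides the thresholds, what order the steps run in, and how you verify the automation is still doing the right thing.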
And that's where I think the AI stuff
does a better job of informing the automation
as it's running.
So you actually have a smart automation, if you will.
And this is some of the stuff you could do
even within like the Dynatrace workflows
that we're working on.
It's like the workflow or automation is only as good as the data
you can push into it.
But yeah,
I think that's still,
I think people are overwhelmed and they don't know where to start.
So education is probably key in taking us in the next step there.
That's my, my future. That's the great Carnac, right? With the envelope on my head. A Johnny Carson reference.
Johnny Carson.
And by the way, smart automation, or, to coin a new term, smart-imation.
Smart-imation.
I think you hit on something really important there too, Mark,
because when you talk about all these canned automations, right?
The ones that are out there and can I trust them?
And let's say even you have components out there that might help you, right?
Are you, we know that tech people are naturally curious, right?
We have Kubernetes because people are really curious.
And somebody said, I want to build this, start building this insane thing, because it'll help and I want the challenge of being able to do it.
That's what drives this industry is those challenges.
So I think one of the additional roadblocks to that, not only is there, the more you think of it, the more questions you have and the more that comes up, it's when I start looking at what's available,
trying to figure out what I might be able to use.
It's just going to be easier if I code it myself, start from scratch myself.
Then that then puts you behind the timeline of, okay, when am I going to get the time to get all that done?
Yeah.
Because I still have my regular job to do, but now I need to do this thing from scratch
because I can't trust any of that.
It's going to take me just as long to figure out if it is sending the credit cards to Russia or whatever. Right, so it does get complicated. And I think, to your point, we were talking earlier, this is where potentially, and I guess we'll dive into this part of the topic now since I'm going there. Yeah, I'm taking control of this one, just go. This is where AI could potentially help, right? How do you get started? If I need to code this stuff, is AI at the place yet where it can give you good base code to start with, so you just have to go back in and tweak and fine-tune it so that it gets you what you need?
And then Andy, you brought up the point pre-show as well, is if AI gets us to the point where it's helping us create these things, once you create, maintenance is the next big hassle.
And I don't know if we've seen AI maintenance yet.
I know that would be a really sexy thing people would love to work on.
Maintenance AI.
Woo!
Right?
But I think those are some of the challenges.
And I don't know what we're seeing there, if there's anything on the horizons for that.
What is the state of AI writing code for you?
How complicated is it to go back in and figure out what it did to see if it is doing it well,
if it's doing it safely.
I don't know what the state of that is at this point.
My point on this, because I'm not sure if you guys have played around with Copilot
and others that generate code.
I think they're doing a pretty good job already.
And if code comes out that you can run and it does what you want it to do,
then my question actually, why do we need a tool like Copilot to generate code?
Why can't I just tell a tool like Copilot or whatever the AI is called,
do X, Y, Z, and you run it?
Because why do I need the intermediary step of code that gets executed by runtime? Why can't I just say
please do this task or alert me if condition 1, 2, and 3
are abnormal, then do X, Y, Z, and then the
AI should just continuously validate if that still works. And then maybe
based on the latest learnings, based on new material that is available in Stack
Overflow or in some other
source of truth,
it just automates or it just optimizes
whatever code it does in the background.
So my question is,
will we get to a point where we just
ask the AI,
please do this if this happens?
And this can be very complex expressions
without actually generating code
because why do we need an intermediary step of code?
I don't know what you got.
My experience with Copilot is like,
it saves me time searching for somebody else
who already solved the problem.
So it's like, hey, I need to do X.
And it's like, let me shortcut.
There's 400,000 other people who already wrote that code.
Let me give you the best one, try that. And that's different. To me, that's not really artificial intelligence, that's just like a super-fast Ask Jeeves: give me some samples. But you bring up so many things there, both of you, that get after: how does AI know whether it's meeting your needs or not?
Like,
how do we,
you're telling the AI,
I want you to do this outcome and here's what the outcome looks like.
Here's whatever.
And you're kind of like a subcontractor.
Like,
I don't care how you get it done,
but I need this to happen when X,
Y,
and Z.
And also tell me every five minutes that,
that you are correctly still doing that and detect changes in the subordinate systems if something compromises the logic that you wrote and adapt automatically.
Suddenly, when you start talking about validation, when you start talking about checking the AI outcomes and then telling it to self-adapt to change, now that's way beyond pattern matching. Pattern recognition is amazing from an AI and machine learning standpoint; this is what it's built to do. But take an unsupervised model and tell it to go figure out a way to do that, and that's where we end up with, you know, computers inventing their own language and getting things done that you didn't even know about. And then, of course, I think the other point is: how much do I need to think? Because a real thing I see people selling around Copilot or AI is, it'll save you time because you don't... well, time is thinking to me. It takes time to think through something really thoroughly, if it's worth doing that thinking. Sure, AI can save you the time, but then you start saying, well,
did I tell the AI this and how to validate it? How, like you said,
how to keep it running? How, how often do I want to check it?
Like that's where I'm saying the journey of designing an auto remediation
or the journey of designing a component to be solved
will reveal more about the gaps in your thinking and the whole process. That's worth doing. If you just say, I don't care, then you're like, oh well, you know, this is a really inefficient piece of code given, like, the carbon footprint kind of mapping from sustainable engineering. Or this uses source code and sources that were from, you know, sub-Saharan Africa, people that are starving to death, and it's like, well, do I really want to have my code come in and support that way of existing on the planet? That's also not sustainable, right? So I think there's a lot to be said, a lot of other questions that have nothing to do with getting your job done. Even just, I need to auto-remediate this
with automation and a job.
Here's all this other stuff I want the AI.
There's no room in Copilot to start describing
the greater context of what it takes to stay in business.
And I think that's the other thing.
You've got engineers that are like,
I technically understand what needs to be done,
but the businesses morph and change
and what they need and how they grow are a totally different context from a customer perspective.
And you're telling AI what you know right now. And it's like, oh, by the way, when customers
start asking for this other thing, completely rewrite yourself and do this. And I think that's
still a human brain. I think we're still going to have a human brain doing that work.
Unless that's years down the road, right?
I mean, part of what you said is key, right? We know AI has to learn from somewhere.
It has to be trained.
It has to have the inputs.
And if we don't have the automations that exist already to input to it,
to show the examples of what's good, what's efficient,
how's it going to know what's good, what's efficient?
I mean, I think a lot of the points you brought up
about carbon footprint are all good and all,
but even at one point you said,
I don't care how you get it done.
But as soon as you said that, I was like,
well, yeah, that's what performance is all about,
is caring how you get it done.
Are you getting it done with terribly inefficient code
that's going to be running like a dog
and needing a lot of compute to run because it's just crap code? So how are we verifying that that code is good? So obviously, not to get into our own tool or something, but if you had AI doing that, you'd also need the feedback loop of performance metrics and all that so it could measure itself. Then you'd have to be able to feed in all this other stuff about where it's running, how it's running, how much energy it's consuming. So I think a lot of it can get there. What I see is that we're at these infant stages of capabilities of AI. Yeah, our imaginations and our creativity are seeing these places where it can go and these things that we can do, and we're like, great, we know what we want to do, let's do it. Yeah. But it's not all there yet. All the integrations aren't there. There's a lot of work to get done to get there. You brought up something that sparked
for me and don't mean to interrupt you, but I wonder if there's an advantage to, we see a lot
of big businesses, medium and large businesses that are like in the infancy of adopting things
like co-pilot or machine learning or starting to do feedback loops based on analyzing their own
data and driving that back into some logic. And it's like, we have a non-AI business adopting the first steps of AI. And then there's a totally separate segment of small, very fast-moving companies that are 100% AI built, meaning all the code they write, the business, the marketing, the copy, the website deployment, they never did it the old way. They only use AI-created products, digital products, content, code, whatever it is. It's a hundred percent AI-driven business, or some coefficient. I wonder if there'll be a difference there, where we've never done it the old way, so we don't even have an appreciation or a care, or we don't even need all those old ways of doing it. AI is just the way we do it now. I wonder if there would be a prediction of separate 100% AI-driven companies, that that's the way they roll.
Yeah, it's like the same kind of evolutionary step that happened with other businesses, right,
that have been driven out of business because they were stuck in the old legacy world. I'm thinking about our famous example, Blockbuster,
and then Netflix came along with streaming services, right?
Right.
Or rental car companies, and then suddenly you've got taxis and Ubers and all this other way of doing temporary transportation.
Yeah, yeah.
The other aspect to that, though, is, you know,
I think this is a fascinating idea.
Like you'd have these two schools.
So there's this one of the evolution,
right.
Of taking the old way and slowly evolving.
The other is the leap to the new right now.
Once you,
if you do the leap to the new,
right.
Instead of focusing on the traditional improvements and components that you
have to look at from the old point of view, you're stuck in that old model, what does that new model bring to the table in terms of analyzing efficiencies and all that kind of stuff?
Because maybe that jump would be required to find the ways to get all this stuff done.
Because if you're still stuck in the old way, you're tethered and being held down by the old things you had to care about, right? If you're just in the new, you don't have any of that burden. I'm sure we'll still see the N+1 problem, right, a lot of this is going to translate, but you're now in a completely new paradigm. Yeah, and that's going to require a new paradigm of feedback, observability, like all this other kind of stuff, which would potentially make that leapfrog into the next gen or next area. I don't know, it's interesting, it's an interesting thought.
You know, Brian, there's something else in the evolution of, like, performance testing and performance engineering that Andy was talking about. You and I both, all three of us, but you and I both met as performance practitioners, like we are hands-on doing the deed. And I think there's something to finally pulling technical
risk up and out. And really like, if you're that new company that did a hundred percent AI and
you're starting to be like, let's instrument this so we can see whether it's efficient,
see where the bottlenecks are. Does it know how to remediate itself? Oh, this is a bad query.
Let me go co-pilot myself. Like the AI co-pilots its own co-pilot. So which one is the pilot and
which one's a co-pilot to go get a better query to query the thing. And then it's like, you know
what? This whole RDBMS sucks. Let's just go to this other one and port all the code. And I just
did that. Like AI just did that automatically for me because I don't like that cloud provider and that data service.
So I'm going to use a different data service. Like that's an impossible idea for even a performance
engineer to say, I have to recommend to the business that we'd not use MySQL and I want to
move us into a NoSQL type of database. That's a huge leap.
And to imagine that AI would make that decision for risk-wise for you and me and our business
we're running, it's like, wait, whoa, whoa, whoa, whoa, whoa, wait a minute.
AI, you're taking all of our data that our entire business is built on all of our customers
data, and you're going to, like, throw it over there into a new thing.
And like, we don't take those risks as people.
I think a risk mindset is still going to undermine a full trust of AI doing that by itself in an autonomous kind of way.
I think risk is more important to testing performance quality generally.
I think risk is still huge in the mindset. Like if I don't communicate
performance problems as risk to the business or risk to the technology,
then I'm not probably not doing my job. I'm not qualifying that properly.
Well, this is where the Asimov three rules of robotics come in, right? Where you have to have,
I forget what the, you know, like it can't harm the humans. Like you'd have to have those paradigms.
Anything that
you're going to do is not going to impact
X, Y, or Z, right?
If you're not familiar, the I, Robot stuff is great, great, great short stories.
But I think the other big thing is
What were the other two, by the way?
Well, one is that it can't
harm humans.
It can't do anything to harm itself.
And those two sometimes can be a conflict to each other.
I think it can only harm itself if it's going to prevent harming humans or something like that.
I forget them all.
I mean, there's this thing called the internet people can look them up on.
Right.
And the third one is like the vinegar-based barbecue in the Carolinas is not actually barbecue.
It's not.
It's only Kansas City red sauce.
Like that's the third one.
Yeah.
I think that's the third rule.
I did want to get this other idea out of here.
Cause I think this is a really,
really important one.
Right.
And I've seen the same thing with, um, you know, gene therapy and all this. Right.
I think in order for this to work,
if we're talking about these models that are going to work,
we're going, you know, I think the human factor is going to be the biggest blocker.
And by the human factor, I mean proprietariness.
Right?
Oh, sure.
Let's take this back.
We'll take this to someplace you and I know, Mark.
Right?
Plugins for audio engineering.
There's a lot of these AI ones coming out now, right? And some of the licenses for them are saying, we want to gather the AI stuff that you're using so that we can use that to better train our model, right?
Now, if we don't allow these companies to get that, they're not going to be able to better train their model and we're going to have crappier tools.
If we go to the human genome projects, right?
Like my daughter has a rare disease and they map and understand some problems
with genes, but then as they dig further it gets more and more complicated. And people are like, well, if we can figure out what's abnormal, we can figure out what's normal, and vice versa. But the only way you could really do that would be for a supermajority of the human population to be sharing their genes and all conditions about themselves, skin things, like, right.
Right.
Which becomes this huge privacy thing that nobody wants to do.
But in that way,
you'd be able to map,
get an accurate map.
Now,
when we come to this AI stuff and what's good code,
what's good ways of doing it,
what's good results that would require again,
all the companies that are using it to feed back into the AI to let it do
that,
that component.
And that then becomes, you know, is FreedomPay, highly secure, highly delicate data information and all,
is going to be feeding this information back into those models and being assured that it's not going to be used by someone else?
If we develop this thing, well, now, you know, it's ours and we don't want someone else to get the advantage we have.
So that's where I think
a lot of this is going to require
the same spirit we have in open source.
And a lot of these open source projects
that people are doing
where they're sharing information,
this is going to require companies,
not just the workers,
to share the feedback loops
and all the information
they're learning from the AI back to the AI tool sets so that they can improve. Yeah. Aside from that, I've talked to and follow several people on the test AI side of things. Jason Arbon is one of them, who I've followed for years, and he talks
about this feature data dilemma. So in machine learning, you have feature data; it's like the source data that you talk about. So if you're going to build fictitious characters, are they based on real performances that are actually owned and copyrighted, like the SAG-AFTRA stuff? The source material, or the feature data to your machine learning, that's the feature data dilemma: is the AI truly a derivative work? And do you have permission to do that? Take this in-house to a company with compliance, where our legal contracts are set up so that, Brian, when you swipe your credit card, it's going to fly through our system. And that's marvelous. But we are trusted to own that data on your behalf in a very closed, secure way. In fact, by design, we don't want a merchant to know what your credit card number was. Like, they don't want to know. They're like, I just need to know that I made $25. Just give me $25. I don't care who you are. By design, for security. But once, that's why, there could be something around this. Again, I'm not like an AI guru guy, but I do see how we're looking to use the data that
we can capture with a tool like Dynatrace or any tool that can profile and see the trace
information, all the actual data elements that are interesting feature data, which ones can we
recognize a pattern and feed that pattern back in for the betterment of our security, the betterment of performance,
the betterment of our business or mitigating business risk where we see
anomalies happening within that data flow.
And I think again, that gets back to risk. I just,
I keep hearing risk in the back of my brain when you,
when you talk about that stuff and Kansas City barbecue sauce is still actual barbecue,
whereas that Carolina stuff is not.
If I build a robot and it moves to North Carolina
and starts eating the vinegary kind of mustardy kind of,
that's not my robot.
I'm done.
It's interesting how we easily spend 30 minutes
in just talking about automation and AI
and kind of like the whole thing with,
you know, it's the risk and the trust
that we still need to build up.
But different topic maybe quickly
before we kind of have to close at some point. Mark,
Perform is coming up.
2024 just started.
And besides automation,
what are some of the other topics
that you are
looking forward to or that are on top
of your list for
Perform for 2024
in your organization, the stuff that you see
in the communities that you're
interacting with? The most exciting
thing is that I might get another
pair of Converse All-Star shoes.
I don't know if they're doing that again, but
I still have my old ones. I don't know if I'll get new
ones, but do I have to get on main stage to
get them? I don't know. I think you do.
Well, that's too bad. So I have a couple
breakout sessions actually. So our good friend
Klaus and I were talking, coming out of last year, about business events in a very interesting way. So I'm going to be giving a talk on business analytics, or I call them risk analytics, but business analytics, leveraging business events. And it's kind of a teaching tutorial. It's not just a showcase, hey, here's how we do it. You know me, I'm going to try to impart something that somebody can take back to their own company and be like, hey, I could configure this. So how do you identify what is risk to a technical component, function, transaction, query, whatever it is? What is the risk to the business? What is the risk technically? And how do I monitor that? How do I pull those
events out? And then how do I do two things with them? One, how do I make them observable, visible
and actionable? Those are three things, but in one category. And then the other part gets into
either simplistic AI pattern matching or just some predictive capabilities. So we see this type of anomalous
traffic come through five times a day. It's always on the hour. So it's like somebody's
running a job. So you learn these ways of why is this happening kinds of things. So I'll talk a
little bit about that. I'll have some demos from my own test environment around business events.
And this is business events on Grail, so there's some cool DQL-driven visualizations for the analytics, because I think it's in the customer software side of that story. So I'm excited about that. Mostly it's been fun to work with Klaus because he's such a funny guy, but he's brilliant. And the other one: we are lucky enough as a Dynatrace customer at FreedomPay
to, we're a .NET shop. And you know, guys, .NET is still just, sometimes it's just a little redheaded stepchild out there. And Azure people still want us to just use OpenTelemetry and cross our fingers
and just trust the cloud. Just trust the cloud.
It will just, Azure will take care of everything.
They never have any outages.
And we learned that that's not true.
So getting onto Grail, getting into some of the things working with Dynatrace R&D.
We have some early adopter stuff.
I probably can't talk about it if this is going to not be announced at the time of the whatever on the podcast.
So I'm not going to say what it is, but look for another session later.
It's been great with Andy, one of the old guys you've mentioned, Christoph Neumüller.
Christoph, yeah.
Yeah, he's really, I'm so impressed.
He's such a sharp guy.
He's almost as smart as you.
But he's better looking.
He's better looking in some ways.
Yeah, that's true.
But it's been really fun to work so closely again
with Dynatrace R&D on something that we can give them feedback.
We can give them ideas.
I think we generate more RFEs.
Like in the last six months, we just keep hammering them.
Now we just go talk directly to our guys.
So that's fun.
So those two are there.
And then Henrik and I are going to be doing some more live streaming.
So the Is It Observable booth, where there would normally be a PurePerformance podcasting booth, fellas. Maybe I'll find a way to simulcast again. We could do Is It Observable and PurePerformance at the same time?
I don't know.
We'll see. Hey, you know, I'm turning 50 this month. I'll be in New York, eating.
It'll be your birthday, right? The same as every year.
Hey, I've spent a few birthdays with you all already at Perform, but this is my 50th, you know. So I'm going to spend that one with my wife in New York City and eat my face off, you know, instead of busting my ass in Vegas. Yeah. So instead of sneaking back over to, uh, what was the other place we were at? Orlando, and not Orlando, no, the other, the conference.
Well, because we're in the Aria this time, right?
Yeah, so you mean across, the Cosmopolitan. The Cosmopolitan, it had the, what, Secret Pizza, like the New York slice.
So you'll be in actual New York having an actual slice of pizza, and I'll have to still sneak over there and violate all my nutrition goals with that. But yeah, Andy, I'm excited. I'm also, I will say on behalf of my company, we've enjoyed working really closely with Dynatrace.
Um, we're a D1P customer. I have like, I think four different customer meetings of people that
I'm talking with through, uh, through the account management, um, with James and Tim. And so I'm
really excited mostly just to get back out talking to customers about what they're doing.
Um, and I'm bringing one, two, three,
five,
at least five,
six people from freedom pay are coming.
Maybe I have a couple more.
So I have a whole crew coming to the executive briefing stuff as well as just,
they're just going to eat it up.
I have depth,
my DevOps team,
I got a systems team,
I got some developers coming.
So I'm mostly,
we're just making the most of getting into Dynatrace really heavy. So it's
been good. Well, great to hear that the partnership has obviously been going that well. And the
friendship that the three of us has built over the years has also kind of moved over to the business.
I don't know that the three of us have really financially benefited. Like, we haven't figured out a way to sort of surreptitiously pass money under the table to one another. You know, like, maybe we need to. Maybe we're failing in that perspective.
I tell you what, I will write in a Copilot AI: how do I embezzle money from my own company, send it to you, launder it through Brian's, and then, I don't care how it gets done. It was just AI. I don't know. I don't remember, the AI wrote that.
Yeah. Yeah. Well, here's the way I look at it, right?
Is that we're getting paid to do this podcast.
So that in a way is embezzling, because this is so beneficial for Andy and I. We selfishly continue doing this because of what we learn. Yeah.
So it's I mean,
of course we care about our listeners, every
single one of you, and we're hoping
you're all learning from it, but like
I'm like, oh yeah, I'm recording a podcast
today at work, you know, so
and then, oh, I get to go mix it.
Hey, I'm getting paid for this, so it's great.
You are getting paid, which is rare, very rare in the podcast broadcasting world, for sure.
Not something special, but it's just part of my workday, right?
We're not getting paid anything specific for this,
but obviously it's just part of my workday.
Yeah.
So I don't have to do this at night
like when I was doing Ask Perf Bytes
at like 10 o'clock at night Eastern for you all.
Yeah, that's the one thing
that maybe AI has solved.
All the automation, like all the automation,
you think about Dynatrace, you're like,
oh, what a, like you guys said, what a lovely dashboard.
You know how much stuff is going on behind the scenes
that's been automated about how you do this work?
I rarely am awake in the middle of the night
like I used to be.
I used to be like nonstop running load tests, doing something, fixing things.
Like that was my career was like I was working like double time.
And I kind of, maybe it was a pandemic or something, but like I don't do that anymore.
So maybe that's just seniority or senioritis,
senior brain. But yeah,
so maybe AI, if there's one thing AI can do, it's I don't have to stay
up at night.
There's one thing I want to circle back to automation with one quick topic
because when you talked about business events, you mentioned one thing and you said you're looking at
kind of business events,
you are finding certain patterns,
like that every hour something is happening.
You also used the word prediction.
Yeah.
And I'm actually wondering,
because one of the things that we are,
you know, assuming what's going to happen
is that you are going from reactive automation to predictive automation,
meaning we're not triggering automation based on something is happening now,
but because the AI predicts it is going to happen based on the data,
based on the evidence in the next 10 to 15 minutes.
And therefore, we can do something now to prevent it.
And I always like to bring up the analogy of Minority Report. It's kind of like you are...
I was just thinking that.
Yeah, because for me,
it's the same thing with SLOs and error budget burndown rates.
Because if you see that the error budget is burning down too fast,
you can react now before you burn through all the error budget.
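As a rough illustration of the burn-rate idea Andy is describing, here is a minimal sketch with made-up numbers (not Dynatrace's implementation): if errors consume the budget faster than the pace that would exactly exhaust it by the end of the SLO window, you can alert and act before the budget is gone.

```python
# Hypothetical SLO numbers, for illustration only.
slo_target = 0.999               # 99.9% of requests must succeed
error_budget = 1 - slo_target    # fraction of requests allowed to fail (0.1%)

def burn_rate(observed_error_rate: float) -> float:
    """How fast we consume error budget relative to the sustainable pace.
    1.0 means the budget would last exactly the whole SLO window."""
    return observed_error_rate / error_budget

rate = burn_rate(observed_error_rate=0.005)   # 0.5% of requests failing right now
print(f"burn rate: {rate:.1f}x")              # 5.0x: a 30-day budget is gone in ~6 days

if rate > 2.0:                                # threshold is a made-up example value
    print("act now: remediate before the error budget is exhausted")
```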
But we are extending this now
with all the prediction capabilities in Dynatrace, or maybe in other AIs as well, where we say, hey,
something is about to happen. I give you the
chance to act now. Do you see this as
a valuable use case or do you think this is also too far out?
I don't know the proper mathematical terms
for predictive algorithms where I think there's
something like, within the seasonality of a given metric, you can use, I think it's Holt-Winters is one of them, based on the seasonality, you can predict its growth or predict its decline.
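For reference, Holt-Winters (triple exponential smoothing) is exactly this kind of seasonal forecaster: level, trend, and a repeating seasonal component. A minimal sketch with statsmodels on made-up hourly data might look like the following; the series and the 24-hour seasonality are assumptions for illustration, not anything from Mark's environment.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Fake hourly metric with a daily (24-hour) season: quiet nights, busy afternoons.
hours = np.arange(24 * 14)   # two weeks of history
series = 100 + 30 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 5, hours.size)

# Holt-Winters / triple exponential smoothing: level + trend + additive seasonality.
model = ExponentialSmoothing(
    series, trend="add", seasonal="add", seasonal_periods=24
).fit()

forecast = model.forecast(steps=12)   # predict the next 12 hours
print(forecast.round(1))
# If the forecast crosses a capacity or error threshold, you can act before it happens.
```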
Whereas a machine learning would be like, here are the majority of the patterns,
and here's something based on multiple, it's not a single metric, it's like multiple metrics,
a complex set of features that could say, this one is anomalous for this period of time,
because one of these eight things is anomalous. So take, for instance: I've got thousands of transactions, different types of credit cards, from different types of stores and different types of terminals and different
setups around the world, different geographies. And then they all get different response codes
going to different processors through different networks. So I've got multiple features at play,
but on the hour for one minute, every hour, there's a set of response
codes that show up. The real question for AI is what other features or what other elements or
attributes within that timeframe correlate? So when I use the word correlate, Brian breaks out
in shivers and Andy goes back to Silk Performer where we had to correlate everything manually and it was horrible.
But correlation is something that AI does incredibly well. I'll say machine learning, from a pattern recognition standpoint. So the ability to correlate: well, here are the response codes, but what other things within that feature data are also showing an increase or decrease, or an inverse correlation, in that same time frame? And we can start predicting, you know, the risk. The failed transactions or the failed response
codes are really specific to this network segment, from the client side. Maybe it's on the internet, but it's like, hey, we can see internet routes like anyone else, that route is bad. And so you can go to your customer and call them up before they even know they have a problem and be like, hey, you know those stores, can you move them off of that provider and move them to another provider, because you're having failed transactions for no reason. And so that, I think, is more like fraud detection or more
like anomaly detection, and then being able to predict it. Meaning, if you start seeing an uptick in these types, these three features correlate at
the same time, then you now have the history from that to say, when those three things happen at the
same time, that's an indication of this packet anomaly or this business data anomaly or a hacker
or fraudulent type of activity. Yeah, totally. And that's my friend Cliff.
You met Cliff last year at Perform.
Cliff is actually working on that as we speak.
So that's his, his whole world.
He's the Dr. Cliff.
Excuse me, Dr. Clifford Benedict.
I have to call him doctor.
I'm the only one who's not a doctor anymore.
You can call me doctor.
You could call me doctor. You can call me doctor.
Doctor.
Nurse.
Doctor.
Me doctor?
Classic old Monty Python
skit.
Doctor, pharmacist.
Yeah.
Cool.
Now because
this is also a topic
that we are using
internally in a different way.
We are predicting
when we are running
out of resources
and then we are scaling
our infrastructure
ahead of time instead of triggering it on a threshold.
So the example that we presented at Innovate, and I think we're going a little bit deeper
this year at Perform,
is really how we right-scale our cloud infrastructure.
And with right-scaling, it's both directions, up and down.
I think one of the use cases is how we are scaling Apache Kafka based on load predictions
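As a rough sketch of what "scale ahead of the load" can look like (purely illustrative; the forecast source, thresholds, and per-replica capacity are invented, and this is not how Dynatrace or FreedomPay actually do it), the decision boils down to: forecast the near-term ingest rate and adjust capacity before the threshold is crossed rather than after.

```python
import math

def plan_consumer_replicas(forecast_msgs_per_sec: float,
                           per_replica_capacity: float = 5_000.0,
                           headroom: float = 0.2,
                           min_replicas: int = 2,
                           max_replicas: int = 50) -> int:
    """How many Kafka consumer replicas we want *before* the predicted load arrives.

    forecast_msgs_per_sec: predicted ingest rate 10-15 minutes out (e.g. from the
                           Holt-Winters sketch above); all other numbers are made up.
    """
    needed = math.ceil(forecast_msgs_per_sec * (1 + headroom) / per_replica_capacity)
    return min(max(needed, min_replicas), max_replicas)   # clamps, so it scales down too

# Forecaster predicts 70k msg/s in 15 minutes -> scale to 17 replicas now.
print(plan_consumer_replicas(forecast_msgs_per_sec=70_000))   # -> 17
# Overnight the forecast drops to 4k msg/s -> scale back down to the minimum of 2.
print(plan_consumer_replicas(forecast_msgs_per_sec=4_000))    # -> 2
```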
Has it ever sort of dawned on anyone that Grail and scale, they rhyme? So if you're going to write a song, right there you have two lines.
Andy, that's a fascinating
idea too. What I want to warn everybody
about though, in
upscaling, right? Predictive
upscaling is, if you're upscaling,
please make sure you still
look at why you need those additional
resources and see if you can optimize your
code. Because I think, just like we've seen,
yeah, the money, just like we've seen, even
in cloud adoptions, like I'll just spin up another, spin up another, right?
And even the right sizing and all that, right?
So you still have to, or you still should not just be like, oh, we can automatically
upscale.
Yeah, you can.
Yeah.
But take the time to look in again.
It always comes back to performance, right?
Why is it doing that?
Is that expected?
Is it optimized and you just have that much traffic or can you optimize
something else and do that, you know, reduce that scaling size? Yeah, like, I could just see people getting lazy: oh, let's just let it scale, and have that N+1 query going on. Yeah.
Yeah, I think there's also an analogy, shout out to Señor Performo, Leandro, our colleague, who, you know, you can love tacos
and you can't, there is a point
where you can eat too many tacos,
but you can keep eating and keep eating tacos
and more tacos and more tacos.
You still probably have to pay for the tacos.
You might never reach the point
where you really satisfy your taco cravings,
but you're going to pay for every taco along the way.
Are you trying to get a Taco Bell sponsorship there, Mark?
No, I'm just actually kind of jonesing for tacos now. So,
huh. Maybe, you know, exactly. So we have to get together with Leandro.
But I think this little bit here, right, just talks about way back in the beginning of the
episode, we were talking about all these different inputs, right? And if you're going to have AI take care of the scaling,
you'd also want AI to identify what's causing the need for the scaling,
highlighting what in the code or whatever it might be that's consuming the
most,
which we know we can already do from that point of view and make sure we're
bringing that to the people.
So again, it comes back to all this interconnectivity, and these models, the ML part of the models, being fed by multiple sources, so it can, either through correlation or causation, thank you, take care of all this.
And, and, and we're partially part of the way there, right?
We have independent models for a lot of these things, and it's really those crossovers. And then, as you said, that trust, right, there's that trust level, you know, so we don't end up in a WarGames situation, right, where the machine is suddenly launching nuclear bombs because it couldn't get down to a half-second response time. Yeah.
And also, you've got this wonderful, totally flexible data lake model in Grail, or wherever you do it, but in Grail. On the business side, talking with Klaus, I've been learning so much about putting financial cost information, profit, revenue, whatever, putting that financial information, the cost of a transaction or the cost of a change, let's say. To me, those are inputs to exactly that model, Brian. Yeah, like, if you're not putting cost into the calculations that you're upscaling with and upon. Really successful companies, that's how they got there. You think of, like, the Cloud Foundry stuff that was like, we're going to be an abstraction
layer and then we factor in cost where we can get you compute units cheaper somewhere
else and I'll just move your workload over there.
Kubernetes is perfectly built, tailor-made to do exactly that.
I can go to the cheapest, most sustainable, most energy efficient, and most cost-effective
and cost-efficient way to do that.
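A toy sketch of the cost-aware placement idea Mark describes; the providers, prices, and carbon numbers are entirely made up, and real platforms (Kubernetes schedulers, cluster autoscalers, and the like) make this decision from live pricing and capacity data rather than a hard-coded list.

```python
# Hypothetical offers: (provider/region, $ per compute-unit-hour, gCO2e per compute-unit-hour)
offers = [
    ("provider-a/us-east", 0.048, 310),
    ("provider-b/eu-west", 0.052, 120),
    ("provider-c/us-west", 0.041, 450),
]

def place_workload(compute_units: float, max_gco2e: float = 350) -> tuple[str, float]:
    """Pick the cheapest offer that also satisfies a sustainability constraint."""
    eligible = [o for o in offers if o[2] <= max_gco2e]
    region, price, _ = min(eligible, key=lambda o: o[1])
    return region, price * compute_units

region, hourly_cost = place_workload(compute_units=200)
print(region, f"${hourly_cost:.2f}/h")   # provider-a/us-east $9.60/h
```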
And if you're still divorced, technology is divorced from cost or divorced from profits,
and you're a tech, you're a digital company, wow, you've completely missed the last 10 years
of what the whole industry has done because it's so beneficial. I no longer have to go to my CFO and say, well, I think these numbers look like this,
and we really need, you know,
one million for this and whatever
to upscale the pay for that.
Well, I guess, and then the worst one
is the going and begging for forgiveness.
We didn't know this change
was going to cause this problem.
This is your point, Brian.
Like, well, yeah, we put this out there
and suddenly, so now we've got a hundred and fifty thousand dollar bill, you know, from so-and-so, because we didn't know what we were doing, you know, we didn't think about it. And maybe the AIs didn't think about it because we didn't ask the AI, you know, do this in a cost-effective way too. Oh, cost effects? Right, I'll go back and rewrite the code. Thanks. Sorry you didn't mention cost. So yeah, still to this day, I think the last 25 years of my career are: don't get a divorce from the business. If you're an engineer, you have to stay in that relationship.
Hey, and this triggers me, as one of my closing remarks here, or before I start the closing.
Triggers you in, like, a negative, anxious kind of way? No, no, in a positive way.
Okay, good, sorry.
Because at the forum, I will be on stage
with a guy from Dell
and talking about platform engineering.
And one of the things he hopefully will also say
on main stage, because I will ask him
some final advice on what allowed them to become efficient
in what they've been doing.
And he said, don't treat observability as a cost center,
but treat it as a business opportunity
and a business differentiator.
So what you're just saying here is, observability is not something that, oh, we do it after the end, after the fact, but we're using
observability to observe
performance, user experience,
costs, efficiency,
and then if you're using this data,
it becomes actually a business differentiator.
It is potentially
there to save your business because you're making
the right decisions on what to
build, how to run it.
And yeah, I think that's...
If you have this amazing data, when I think about old big data models of, well, once a
year we get a report and then we predict what the market will do and then we make our decisions
and then I go smoke a cigar for six months and I, well, the same, not the same cigar
maybe, but you would go,
you know, just, you wouldn't get it. Now I'm getting real time data and I can feed that back
to my account teams, to my merchants. I can call up and say, Hey, did you guys know, you know,
customer X. Like, your reputation as a company to your customer, that relationship is so much more beneficial because you're just showing: I'm tuned in, you paid for our services, we're going to give you more than you expected.
I mean, that's how you grow a business, right?
Is to just keep delivering that.
And if your technology is not working for you to do that, yeah, again, you've just kind
of been tuned out.
Like, yeah, totally.
I can get those numbers and calculate that
and show you that you don't have to wait for a month to get the month-end report from the big data thing with the Power BI thing. That was, oh, and then it takes another three months because the report was wrong, and then we have to put in the fix of the thing. No, I can just tweak the DQL and give it to you right there in a notebook. Sorry, look at me, Mr. Dynatrace savvy stuff.
Actually, I'm really excited about the workflow stuff,
to be honest, that on the perform thing.
Like I know some of the things
that might be coming for workflows
because there's so much we can do with that.
And I'm totally psyched for where we're going to go
with workflows in this point.
Sorry, it's totally.
And also then with coming back to close it
with Copilot, right? Because with Davis CoPilot,
One of the things we've been
working on is so that
not everybody has to become an expert in DQL
or in creating dashboards or notebooks
or workflows.
Copilot will be able to at least
create the best effort
or a good starting
point for you.
Alois pointed that out.
He posted something,
I think a demo on LinkedIn
and I'm like,
all right, I'm done.
I'm out.
I can go have a sandwich now.
Let Copilot write your stuff.
You don't need me.
You don't need me, Brian.
Mark, can we just admit
that you're saying all this praise
for Dynatrace
because you're looking for some free DDUs?
No, I don't even want the license.
You know what makes my world go?
Golf balls for Daniel,
who's one of my lead observability engineers,
and swag.
Like you guys send me swag
and then it just goes into my company
and people are like,
can I get another,
can I get a sweatshirt?
Can I get a t-shirt?
So I'm honest to God,
send me the swag.
Can I just give it away at the company?
You're a cheap date.
Yeah.
All the way to the corner office.
My, my president, CTO, he's got, he's like, oh, you know that, that thing you, that was great. It's really comfortable. I wear it all the time. I'm like, great.
So I just want to say, Andy, you're talking about wrapping up, and we do have to wrap up, I guess. But you brought up Minority Report before.
And so I'm a big Philip K.
Dick fan,
which has an ethic,
ethical message in that movie about the abuse of those kids.
And also the core of that story is about free will.
And the thing that pissed me off about the movie was the book,
the ending of the book,
there was no free will.
Yeah.
And the ending of the movie, there was free will.
I'm like, you just changed the fun of it.
And now Philip K.
Dick was a really down guy and pessimistic and all that.
So that's why he ended it that way. And I guess Hollywood wanted to give you hope; it was Spielberg and all that.
But when they changed it, I was like, no, it's supposed to be, there is no free will
anyhow.
No, exactly.
Fortunately, we have the free will to maybe put an end to this podcast or maybe even continue doing more podcasts in the future.
And to be building smart AI and helping everybody continually.
Yes.
We're programmed to do that.
So how many more episodes do I need to get on to be back in the lead of Pure Performance guests?
I lost track and count of all that.
No,
I want you to pretty much,
you're pretty high up there.
I don't know if,
I don't know if anyone's been on.
Who else would be?
Gene Kim had a good run.
Yeah.
But,
uh,
yeah,
that'd be good.
Kroger's the guys from Kroger's were on quite a,
quite a bit,
but I,
it's like Steve Martin and, uh, Alec Baldwin hosting Saturday Night Live, yeah, who's been on, you know, the most. And you can be Steve Martin, I could be... well, they both have white hair now, so yeah. But, you know, yeah, exactly. Steve is less baggage. Yeah. But honestly, guys, congrats, seriously, 200 episodes is nothing to shake a stick at. So thank you for inviting me for two... 201.
Did I say 2000 episodes?
You said 200, I think.
200, good.
Sorry, because it will be 2000 soon enough, Brian.
That's going to take a long,
yeah, well, maybe if AI keeps us alive,
we can get to 2000.
Maybe if AI creates at some point,
some of our episodes, who knows?
Oh, that'd be fantastic.
In fact, actually, dear listeners, this is not me talking, this is the robot. I am going to let you know a secret: we have three weeks left on this planet before it's all annihilated.
Exactly. Episode 62 from PerfBytes, Bots Bots Bots, has the robots in it. It's hilarious. One of my favorite episodes.
Sounds like you're talking about a Mythbusters thing.
You remind me of Adam Savage a little bit
with your joviality and your joy and all this stuff, Mark.
I'm a jovial guy.
That's right.
Yeah, exactly.
So yeah, we should mention as well,
if this comes out, honestly, before Perform,
people can tune into the virtual live stream and all the other stuff.
If they're just podcasting
and listening, then they can also tune into
the conference. If I'm not mistaken,
this should air on the 29th,
which is the Monday where
Hot Days starts, so just two days before
the main conference. It's been Thursday. Really?
I think so, right? Because we have
one more coming up on the
15th, which is the episode we recorded just before Christmas.
Yeah.
And then this one should come out on the 29th, if I'm not mistaken.
Let's double check while we're here. But yeah, that's the day I earned my AARP membership.
There you go. So yeah, so if this is the 29th and you're in Vegas,
come over into the expo hall.
Cause Henrik and I'll be hanging in.
No,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
no,
...already know about. Well, we'll have to tell, here's the thing, we'll have to tell people, because they might have missed it the first time. Let's tell people to go, which we are doing right now because we are recording, obviously: go back, if you missed the Perform stuff, go back and watch it.
So which ones are they specifically looking for, then? They're looking for some of the main stages?
Right, the main stages for sure. The tracing stuff, that's really cool. So unified tracing on Grail, and we got .NET traces on Grail, which is really awesome. So you can fetch spans, and fetch spans with your request attributes from that trace. Request attributes still get ALR'd, but that's fine. We're going
to move them into business events. And then the other thing is, I think it's the workflow announcements,
the enhancements in the workflow model itself
that are really cool.
In fact, we are, we're a Rundeck shop
and I've got Rundeck all over the place,
but I've been a Rundeck enthusiast for a very long time.
And the PagerDuty integration with Dynatrace
has been around for a long time.
In fact, we were on that tour in Australia and
New Zealand, I think with the PagerDuty guys, right, Andy? So the workflow part of that,
we're actually plugging workflows, but everything for us has to run inside the firewall. So we want
the ability to do that execution engine inside the firewall from an active gate or from a workflow
endpoint. And so that's really exciting stuff too.
Anyway,
go back and check those out though.
That's some really cool stuff.
There's other announcements I don't know about,
but those are the ones that are near and dear to us for sure.
All right.
Well,
Mark,
thank you again for everything,
for being our early day mentor,
for pushing us towards the podcasting
for just being a performance
and observability and tech enthusiast.
You have a knack for it, you know, Brian.
For what?
For podcasting.
Oh, okay.
You're great at it.
Andy, I don't know.
You got any final?
No, I'm just looking forward to seeing Mark in real life again in Vegas. I'm sorry, Brian, that you won't be there, but we will celebrate your 50th remotely and we'll FaceTime you.
I'll be smelling the subway.
Yeah, well, thank you. And most importantly, thank you to all of our listeners. I don't know if we have anyone listening from day one still with us here, but if we do, special thanks to you. But thanks to everyone who continues to listen. Obviously, this wouldn't be possible without you, so we appreciate all of you.
Technically, that's me. Okay, like, I have listened to all of the PurePerformances, every single one. It's in my rotation. So I just, like, go through the list and it pops up.
I've listened, I think, to every single one.
Well, thank you, Mark.
Be more like Mark, everybody.
I might be the only one, but yeah.
Yeah.
Either way, thank you.
Thank you to our listeners, and thanks to everyone who allows us to do this. And, uh, thanks for technology.
It's crazy.
Anyhow.
Bye-bye, everyone.
We'll talk to you on episode 202.
202?
Oh, 201.
That's the New York area code, too.
That's what this one is.
All right.
Wow.
We're now extending this way beyond.
So, bye, everybody.
Sorry for wasting your time with this wrap-up.
Thanks.
Bye-bye.
Thanks, guys.