Screaming in the Cloud - See Why GenAI Workloads Are Breaking Observability with Wayne Segar
Episode Date: June 26, 2025

What happens when you try to monitor something fundamentally unpredictable? In this featured guest episode, Wayne Segar from Dynatrace joins Corey Quinn to tackle the messy reality of observing AI workloads in enterprise environments. They explore why traditional monitoring breaks down with non-deterministic AI systems, how AI Centers of Excellence are helping overcome compliance roadblocks, and why "human in the loop" beats full automation in most real-world scenarios. From Cursor's AI-driven customer service fail to why enterprises are consolidating from 15+ observability vendors, this conversation dives into the gap between AI hype and operational reality, and why the companies not shouting the loudest about AI might be the ones actually using it best.

Show Highlights
(00:00) - Cold Open
(00:48) - Introductions and what Dynatrace actually does
(03:28) - Who Dynatrace serves
(04:55) - Why AI isn't prominently featured on Dynatrace's homepage
(05:41) - How Dynatrace built AI into its platform 10 years ago
(07:32) - Observability for GenAI workloads and their complexity
(08:00) - Why AI workloads are "non-deterministic" and what that means for monitoring
(12:00) - When AI goes wrong
(13:35) - "Human in the loop": Why the smartest companies keep people in control
(16:00) - How AI Centers of Excellence are solving the compliance bottleneck
(18:00) - Are enterprises too paranoid about their data?
(21:00) - Why startups can innovate faster than enterprises
(26:00) - The "multi-function printer problem" plaguing observability platforms
(29:00) - Why you rarely hear customers complain about Dynatrace
(31:28) - Free trials and playground environments

About Wayne Segar
Wayne Segar is Director of Global Field CTOs at Dynatrace and part of the Global Center of Excellence, where he focuses on cutting-edge cloud technologies and enabling the adoption of Dynatrace at large enterprise customers. Prior to joining Dynatrace, Wayne was a Dynatrace customer, where he was responsible for performance and customer experience at a large financial institution.

Links
Dynatrace website: https://dynatrace.com
Dynatrace free trial: https://dynatrace.com/trial
Dynatrace AI observability: https://dynatrace.com/platform/artificial-intelligence/
Wayne Segar on LinkedIn: https://www.linkedin.com/in/wayne-segar/

Sponsor
Dynatrace: http://www.dynatrace.com
Transcript
Just as it was very imperative that you were observing the health of everything before,
It becomes very imperative that you monitor the backend of your AI componentry that you're putting in place now,
particularly because in a lot of cases, you don't necessarily even know what outcome is going to happen.
It is, to an extent, non-deterministic.
You want to understand: what's the performance of this model I'm working with now?
How much is it costing me?
That's another big thing, depending on how I'm using it.
So there's all these different things that come into it that become just as critical
because it's even more complex than even the complexity you already had that was running
your main applications.
Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week on this promoted guest episode by Wayne Segar, who is the Director of Global Field CTOs at Dynatrace. Wayne, thank you for
joining me. Thank you very much. It's a pleasure to be here. This episode is brought to us by our friends at Dynatrace.
Today's development teams do more than ever,
but challenges like fragmented tooling,
reactive debugging, and rising complexity
can break flow and stall innovation.
Dynatrace makes troubleshooting outages easy
with a unified observability platform
that delivers AI-powered analysis
and live debugging.
That means less time grappling with complexity, more time writing code, and a frictionless
developer experience.
Try it free at dynatrace.com.
One of the challenges of large companies is as they start having folks involved in various
different adjectives, there's always an expansion specifically of job titles. People start to collect adjectives like they're going out of style or
whatnot. What is Dynatrace and what do you do there? Yeah, so Dynatrace is, you know, at the very,
you know, very definition of it. We are an observability and security company, right? So
we live, we kind of live in that space. Now, what we do is certainly very broad.
I mean, really, we like to say our goal or our vision
is to make software work perfectly
or a world where software works perfectly.
Now, very aspirational,
that I think we would all like to be there,
but we also know that that's certainly challenging.
But that is what we do and what we strive to do.
And so what we predominantly focus on
is helping customers and helping businesses understand
if there are problems, really the health of their systems
and where said problems are, and ultimately how
to fix them in a timely fashion.
Or predominantly, what we'd really like to do
is automate those things so they don't even
impact people at all. So the best way I describe it to everybody, even somebody like my mom or somebody who's not in the technology space, is: ultimately, everybody takes a flight, everybody takes an Uber, everybody checks into a hotel. Most of the time you're doing that through a digital interaction.
At some point when it doesn't work,
it's a really bad experience.
We work to prevent those bad experiences.
Unfortunately, we hit a point where saying,
oh, we're an observability company
is basically a half step better than,
oh, we're an AI company.
It's, well, you have just told me
of a hemisphere that you live in, great.
Who are your customers that are your bread and butter?
Where do you folks start?
Where do you folks stop?
Yeah, so we predominantly focus, you know, where our customer base is, on the larger-scale enterprises. Now, certainly we're not excluding anybody by saying that, but in terms of where a lot of our install base is, you can start to think about, you know, a Global 15,000 or something like that in terms of rankings. It's really a lot of those types of customers. But like I said, it can also span down into smaller companies, because at the end of the day, if they've got a digital property that somebody's using, they need to ensure that it's actually
working and that experience is done well. It's strange. I tend to live on both sides of a very weird snake where I
build stuff for fun that runs all of seven cents a month. I confess I've never used Dynatrace for
any of these things. Yet when I fix AWS bills for very large companies, you folks are everywhere.
So it's always interesting to realize that, yeah, some folks think that you are the end-all, be-all of observability, and others will misspell your name because they're that unfamiliar with it.
I want to give you folks some credit as well: when I visited your website at dynatrace.com as of this recording (I'm sure some marketing person will fix this before we publish), AI is nowhere above the fold.
Yes, it's the first thing below the fold, but you have not rebadged your entire company
as the AI company, which frankly is laudable.
Yeah, I appreciate that.
And there is a little bit of a reason for that, or maybe there's a little bit of a history
to it.
Now, we can go back and talk about the history of the company, which I won't bore you too much with, but basically, since we launched the platform that we know of today, our flagship product, which was about ten years ago at this point, we actually did build AI slash machine learning at the core of it. So we did do that. So that was a while ago. So it's very much been a part of the platform
for a long time. And we were talking like back then we actually had AI in a lot of the marketing.
And at that point, people would not even believe us. They would say that that's not a thing.
That doesn't work. You were AI washing before AI washing got big.
It was exactly right.
And so we've obviously changed that a little bit.
Now we find it interesting now that every company out there,
like you just said, has AI plastered on something,
whether they're doing it or not, they're marketing towards it.
And so that's where we look at it,
is we've evolved the platform, of course, as AI has evolved, but we've actually had it as a core piece since its inception.
Right. I guess you're orthogonal to what we do, in that... we both sort of overlap, not really orthogonal. If I learned to use words correctly, that would be fantastic. But we are alike in that we both have large data sets that we have to make decisions based upon.
So like, are you using AI in this?
Well, I don't really know how to answer that.
If you're not using machine learning,
I have several questions about what you think
I would be doing to wind up getting to reasonable outcomes.
But am I just throwing this all into an LLM API?
Categorically, no, that would cause way more harm
than it would good.
So where does AI start and stop is honestly increasingly becoming a question for the
philosophers. Exactly. Yeah, exactly. Agreed. So I want to talk about something that you folks have
done that I find fascinating, because I care about it very much, but I see it from a potentially different angle. And that is observability of GenAI.
And that can mean two different things.
The idea of, I don't want to conflate it with,
oh, we're using AI to wind up telling you
what's going on in your environment.
You have large customers across the spectrum,
but biasing toward the large, who are clearly
doing a bunch of gen AI things.
How are you thinking about observability
for what is effectively the only workload people
are allowed to talk about this year?
Yeah, and so what we're definitely seeing
and where we've kind of focused is customers are specifically,
like you said, in the larger size.
What they're doing is everybody's
doing some sort of project, like you said.
People are working on it, they're talking about it.
Now is it the most wide scale?
Is it running their most critical revenue line application yet?
No, not really.
At least I'm not seeing that, but that's maybe an aspiration, of course.
What we do look at is that these AI projects and these new workloads that they're developing
are becoming a piece of their,
let's say their broader application landscape.
Just as it was very imperative that you were observing the health of everything before,
It becomes very imperative that you monitor
the backend of your AI componentry
that you're putting in place now,
particularly because in a lot of cases,
you don't necessarily even know what outcome is going to happen.
It is, to an extent, non-deterministic.
You want to understand: what's the performance of this model I'm working with now?
How much is it costing me? That's another big thing, right? Depending on how I'm using it.
So there's all these different things that kind of come into it that become just as critical because it's even more complex than
even the complexity you already had that was running, you know, your main applications.
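To make that concrete, here is a minimal sketch of what observing one of those GenAI components can look like in practice, using OpenTelemetry-style spans to capture latency, token usage, and an estimated cost per call. The `call_model` client and the per-token prices are hypothetical placeholders, not any particular vendor's API.

```python
# A minimal sketch, assuming a hypothetical `call_model` client and made-up
# per-token prices: wrap each LLM call in a span so latency, token counts,
# and estimated cost flow into whatever observability backend you use.
import time

from opentelemetry import trace

tracer = trace.get_tracer("genai.demo")

PRICE_PER_1K_TOKENS = {"input": 0.003, "output": 0.015}  # placeholder prices


def observed_completion(call_model, model_name: str, prompt: str) -> str:
    with tracer.start_as_current_span("genai.completion") as span:
        span.set_attribute("gen_ai.request.model", model_name)
        start = time.monotonic()
        response = call_model(model=model_name, prompt=prompt)  # hypothetical client
        span.set_attribute("gen_ai.usage.input_tokens", response.input_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.output_tokens)
        estimated_cost = (
            response.input_tokens / 1000 * PRICE_PER_1K_TOKENS["input"]
            + response.output_tokens / 1000 * PRICE_PER_1K_TOKENS["output"]
        )
        span.set_attribute("genai.estimated_cost_usd", round(estimated_cost, 6))
        span.set_attribute("genai.latency_ms", (time.monotonic() - start) * 1000)
        return response.text
```

Because the outputs are non-deterministic, the point is less "did we get the right answer" and more "what did this call cost, how long did it take, and which model produced it."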
Yeah, I see it from the cost side,
but it tends to take the perspective of being more at the micro level than it is
the macro. Companies generally aren't sitting there saying,
well, we're spending a few hundred million a year on AWS.
So in our next contractual commitment, we're going to
boost that by 100 million because of all the gen AI. But
they do care that, okay, we have a workload that effectively
can run indefinitely, continue to refine the outputs, have
agents discuss these things.
At what point do we hit a budget cap on that workload
and then say, okay, and now this result
is what we're going with.
You see that sometimes with model refinements as well.
Exactly, that was actually the next point I was gonna make
is that's what we're also seeing too,
is people look at it from that perspective of,
well, we'll change the model out and we'll see,
okay, could this have been more cost effective in the more macro world?
So maybe like you said, it's a rounding error in terms of our cloud bill today,
but it now gives us the opportunity to understand how we can be efficient with this
when we do scale, which inevitably will happen.
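A rough sketch of the budget-cap idea Corey describes: let an agent-style workload keep refining its output, but stop and ship whatever it has once spend crosses a threshold. `run_refinement_step` is a hypothetical stand-in for one agent iteration that returns a result plus the cost of that step.

```python
# A hedged sketch of a budget-capped refinement loop; `run_refinement_step`
# is a hypothetical function that performs one iteration and reports its cost.
def refine_with_budget(run_refinement_step, task: str, budget_usd: float = 5.00):
    total_cost = 0.0
    best_result = None
    while total_cost < budget_usd:
        result, step_cost = run_refinement_step(task, previous=best_result)
        total_cost += step_cost
        best_result = result  # assume each pass refines the previous output
    # Budget exhausted: "okay, and now this result is what we're going with."
    return best_result, total_cost
```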
I do find that when people are trying to figure out,
does this thing even work, there's
not even a question. They reach for the latest and greatest top-tier frontier model to figure out: is this thing even possible? Because once it is, okay, yes, it turns out it is. For example, take a toy application I built to generate alt text for images before I put them up on the internet. Can it do this? Terrific, great. Now, if I start using this at significant scale
and it starts costing money,
I can switch over from Claude 4 Sonnet
all the way back to, I don't know, Amazon Nova
or an earlier Claude version
or whatever the economics makes sense.
But at the moment, the cost for this stuff
on a monthly basis rounds up to a dime.
So I really don't care all that much about cost
at the current time.
And that seems to be where a lot of folks are at with their experiments.
Does this even work?
If it's expensive, so be it.
People's time and energy and the lack of focus on other things
is already more expensive than this thing is going to be by far.
At least that's how I'm seeing it.
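The alt-text example above maps to a pattern where the model is just a configuration value: prove feasibility on a frontier model, then swap in a cheaper one once cost matters. The sketch below uses the Bedrock Converse API via boto3; the model ID is illustrative rather than authoritative, and a real alt-text tool would also pass the image bytes instead of only a text prompt.

```python
# A sketch of "frontier model first, cheaper model later": the model is a single
# config value, so downgrading is a one-line change. The model ID below is
# illustrative; a real alt-text generator would also include the image content.
import boto3

MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"  # later: swap to an Amazon Nova ID

bedrock = boto3.client("bedrock-runtime")


def describe_image(prompt: str) -> str:
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```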
I think I see that as well.
I'm kind of in agreement with you there
because it's certainly not at any scale yet.
You're right.
It's very much in the stage of can we make something
that works?
And then I think people are starting to think about this,
is can we not only make it work, but does it actually
provide value back?
Which I think is the other big thing that people have struggled with.
It's like, this is cool,
but cool doesn't necessarily make or save us any money.
Right, and in many cases,
it seems that companies are taking what they've been doing
for a decade and a half and now calling it AI, which okay.
And there are a lot of bad takes on it.
People are, oh, we're gonna replace
our frontline customer service folks with a chat bot.
Cool, I've yet to find a customer that's happy with that.
To give one great example that I found,
I was poking around on Reddit last night,
looking at a few of the technical things
as I sometimes do when I'm looking for inspiration.
And someone mentioned that they canceled Cursor
because that is the first time in
20 years where they've done that, just for poor customer service. And I had the same experience: I emailed in about a billing issue and got a reply from a robot, with the fact that it was a robot in very fine gray text at the bottom, so I missed it the first time. And then, when I sent a second email a couple of days later, it basically chastised me: this will not improve response times.
It's, OK, I understand the rational business
side of why you would do that.
People, as people, don't like it.
They want to be able to reach out and talk to humans.
That's something that the big enterprise clouds
had to learn is that, OK, if you're
talking about large value transactions,
they want a human to get on the end of the phone
or take them out for dinner or whatnot. That doesn't necessarily scale from small user all the way up to giant enterprise.
Customers have different profiles and need to be handled differently.
That's exactly right. Yeah. And I find your story kind of funny in the sense that the AI, the promise of which is supposed to make things more efficient, literally just did the same thing: it was rude to you and said that it couldn't even do the job any faster. Exactly. Where I do see value for things
like that with frontline support is, okay, ticket comes in, look at it in your ticketing portal as a
human customer service person and it already picked up the tone. It pre-writes a couple of
different responses and links you to internal resources that are likely to help. The only way
I've seen customer-facing AI, things like that, make sense is when it is very clear that it's an AI thing coming back, and/or it gets human review before going out the door.
Yeah, yeah. And that's even what we're seeing, too, is what I like to call human in the loop. Yeah, that's a good expression. I like that. That is what I see, where some people are doing that today. This comes up more and more as we start talking about autonomous operations, or using more autonomous operations, where it's like, okay, great, something can tell me the problem.
Like Dynatrace, of course, we can point you to causation of a problem.
Even tell you in some cases what it is that you should resolve or do,
or at least give you suggestions of what they should be. Now, would you have some
other, you know, agent interact with that, make the change, and immediately go and
push it out and say we're done? Probably not. And that's where, again, human in the
loop is where I'm starting to see a lot of people; that's more of where people are envisioning it right now.
It is more of the strategy.
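A minimal illustration of that human-in-the-loop pattern, independent of any vendor's API: an automated analysis proposes a remediation, but nothing gets pushed out until a person approves it. `propose_fix` and `apply_fix` are hypothetical placeholders.

```python
# A minimal human-in-the-loop gate: the tooling suggests a fix, a human decides.
# `propose_fix` and `apply_fix` are hypothetical placeholders, not a real API.
def remediate_with_approval(problem: dict, propose_fix, apply_fix) -> bool:
    suggestion = propose_fix(problem)  # e.g. "roll back service X to the previous version"
    print(f"Problem: {problem['summary']}")
    print(f"Suggested remediation: {suggestion}")
    answer = input("Apply this change? [y/N] ").strip().lower()
    if answer == "y":
        apply_fix(suggestion)
        return True
    print("Suggestion logged for review; no change applied.")
    return False
```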
It's somewhat sophomoric to look at observability as "it tells me when the site is down," great, past a certain point of scale. The question has to become: how down is it? But it goes significantly further than that.
Since you have that position of seeing these entire
workflows start to finish, how do you find that companies, specifically the
enterprise space, are taking these projects from development to
small-scale production to large-scale production, while I guess being
respectful of the enterprise concerns? Obviously there's performance and
security, but compliance starts to play a large role in it as well.
What are you seeing?
Yeah, so I would say that if I went back,
and again, the space moves very quickly,
but if you only went back six or eight months ago, people were raising their hands saying that compliance was the biggest blocker of everything,
meaning that you could maybe test some things out, but in terms of what you could do, what data you could use, it became, let's say, very challenging. Now, I've seen that start to open up a bit because companies have created AI Centers of Excellence internally that are, let's say, a little bit more designed
to understand what the compliance needs should be,
and then what is acceptable, and certainly then what isn't.
And that's kind of giving people guidance
as to how to maybe fast track or what is
acceptable with their projects.
So that's one of the things I've seen be more predominant
is this actual AI center of excellence that has come up in a lot of enterprises.
That's helped a lot.
With that in mind, that's allowed customers and let's say people in the company
to basically come up with a better strategy of
what they really want to end up with at the end
of the day. So it's like starting backwards. That's the other thing I've seen people start
to do a little bit more is start to think a little bit more backwards as, yeah, this idea
sounds cool, but what would it do? How would it improve either maybe it's an internal customer
experience, which I think is where a lot of people are starting with is like, start with our
internal applications that we have that in service our internal users.
What can we do to improve their lives, make things more self-service, creating apps or
AI based apps that do that. And then that gives us a lot of learnings to ultimately transition and
move things to things that may be more external facing. So that's kind of the progression I've
started to see and it's still seeing it, obviously, as it matures.
Trusting your AI stack is non-negotiable.
That's why Dynatrace pairs perfectly with Amazon Bedrock.
Together they deliver unparalleled observability across your generative AI workflows.
Monitor everything from model performance to token anomalies, all in real time.
See how Dynatrace enhances AI with Amazon Bedrock.
Start your free trial at dynatrace.com.
Something I have found is that a lot of these enterprises
are over indexing from where I sit
on how precious their data is.
Like it is their core IP that if it got out into the world
would destroy their business.
I've always been
something of a skeptic on a lot of these claims. Even take the stuff that we at The Duckbill Group have built for internal tooling and whatnot.
If that were to suddenly leak because we're suddenly terrible at computer
security, it doesn't change our business any. It's not really a threat because the
value is the relationships we've built, how to apply the outputs of these things, the context, the understanding.
If I get access to all of AWS's code to run their hypervisors, I'm not going to be a threat
to AWS.
I'm not going to build a better cloud now.
And I think a lot of companies find themselves in that position, but they still talk about
it as if it's the end of days if a prompt leaks, for example.
I agree.
I mean, I think that there's probably some correctness
to the concern in certain industries.
Oh, I'm painting with a very broad brush.
I want to be clear here.
Yeah, yeah.
But I think that there is probably some over hesitancy.
And that is where, again, where I've
seen when people have created these more AI
centers of excellence. And they've brought on people who
aren't, let's say, your traditional, let's say,
compliance folks who look at things
in a very black and white manner,
they're usually more of a, OK, what does this really mean?
If this data gets out there, does it matter?
They look at it from that perspective versus maybe
a more traditional black and white compliance person
would say,
data getting out there, that equals bad,
that never happens, right?
You know what I mean?
That's an immediate no.
So I'm seeing some of that come around and change
and that's kind of maybe one of the driving forces
that have somewhat greased some of the skids
to allowing enterprises to adopt things
a little bit more readily.
Yeah, and I think that also people at companies
tend to get a little too insular.
I've seen it with my customers.
I've seen it my own career.
I find it incredibly relaxing to work in environments
where we're only dealing with money.
Because I had a job once where I worked in an environment where, if the data leaked,
people could die.
That is something that weighs on you very heavily.
But when all you do is work at a bank, for example,
it's easy to think that the only thing that holds the forces of darkness at bay is the ATM spitting out the right balance.
Maybe you're closer to that than I want to acknowledge,
but there is a sense of, at some point,
what are we actually doing?
What is the actual risk?
What is the truly sensitive data
versus the stuff that just makes us feel bad
or is embarrassing, or puts us in violation of some contractual obligation. It's a broad area, and there's a lot of nuance to it.
I'm not saying people who care about this stuff
are misguided, but it does lead to an observation
that a lot of the upstart AI companies
are able to innovate far faster and get further than a lot of the large enterprises, specifically because, as a natural course of growth, at some point your perspective shifts from capturing
upside to protecting against downside, to risk management.
If a small company starting out with a coding assistant winds up having it say unhinged, ridiculous things, well, that's a PR experience that could potentially end in hilarious upside.
Whereas for a giant hyperscaler with very serious customers,
that could be disastrous.
So they put significant effort into guardrails rather than innovating forward on
capabilities because you have to choose at some point.
That's right. Yeah, I agree.
And I see that as well.
I mean, that is the big thing: it depends on the company size. And, you know, ultimately, like you said, what's the risk? It's that as well that causes companies to freak out about
things that frankly they don't need to. Like, well, I was about to sign a deal for multiple
millions of dollars with this company, except that one dude on Glassdoor who had a bad time
working there for three months. I don't know. Nevermind. That does not happen among reasonable
people. But I understand the knee-jerk reflexive, oh dear God, what's happening here.
Yeah, I agree. And I'm kind of in that camp as well, which is that there's always going to be
positive and negative things out there on any company, right? But I tend to live on more of the rational side of things, which I think is what you were getting at: some bad things, or some negative Reddit posts that may happen because of a bad experience, are not, again, going to destroy a company, right?
People are going to buy because they like the people, they like the product, they like
the technology.
I agree.
That is what sensible people tend to do.
Getting back to, I guess, your place in the ecosystem on some level, one thing that becomes
a truism with basically every workload
past a certain point of scale is you
don't have an observability vendor so much as you
have an observability pipeline.
Different tools doing different things
from different points of view.
As GenAI proliferates into a variety of workloads
in a variety of different ways, why is it
that customers are going to Dynatrace for this
instead of, ah, we have 15 observability vendors
and now we're gonna add number 16
that purely does the AI piece?
Yeah, really good question.
And I think the answer to that lies where,
it's particularly again in the more enterprise space.
The shift that I've observed happening, pun observed, is that even before, I'd say, the AI boom, or people really going into AI, people were ultimately consolidating things.
They were really more on a consolidation play, which
is to say, we're trying to get down from the 15,
like you said, maybe in your example,
you're trying to get down from the 15 different vendors that
do very similar things, and maybe get down to four.
I'm just making up a number.
It's not always many to one, it rarely is,
but it can be many to few.
And there's a ton of reasons why doing that is beneficial.
There's economics.
There's efficiency gains and all that stuff.
And so I've seen that start to happen.
And so that's why, going back to your question,
why would somebody, let's say, look again at somebody like a Dynatrace when you get into the observability space,
instead of finding some, let's say, point solution
that maybe does that specific niche.
It goes back again to it's a consolidation play,
because customers just don't want
to have to manage a portfolio of 16 or 20 things
that are very similar.
Oh, I agree wholeheartedly with that.
Absolutely it does, but the reality as well is so many,
there are a bunch of terrific observability companies
that once they reach a certain growth tipping point
need to do all things.
And I think it's to their detriment,
where if you have a company that, I don't know,
emphasizes logging, and that is their bread and butter,
and that is what they grew up doing,
and now they're, okay, we need to check a box,
so we're gonna do metrics now.
A week after they launch, someone who's never heard of them before stumbles across them, implements their metrics solution, and thinks: this doesn't seem well-baked at all, I guess it's all terrible across the board.
It becomes harder and harder to distinguish
at what areas a particular vendor shines,
versus which is more or less check the box
as part of a platform play.
Ideally, in the fullness of time,
they fill in those gaps and become solid across the board,
but it still also feels like a bit
of the multifunction printer problem
where it does three things, none of them particularly well.
How do you square that circle?
Yeah, it's a very difficult one to square, like you said. Now, the way that we look at it, and this could be different from company to company, is we try not to be the, oh, well, we'll release this thing because it sounds like the market wants it, but it does maybe 5% of what people really need it to do.
And so we look at it from that perspective of, you know, what is the real pain point? I always like to try to work backwards. What's the real pain point that customers have?
Is there additional value that they gain
because of the rest of the platform?
The synergy you can get from the rest of the platform.
If the answer is no to those questions,
then we have to discuss whether that's
a real area that we want to invest in.
Because, like you said, it's like, great, we do this one little thing very well, but
even from a go-to-market standpoint, it doesn't usually take off because you're trying to
sell to an audience or provide value to an audience that you're only doing 5% of what
they really need.
Yeah.
And I think that that is the trick and the challenge
because you simultaneously have to,
you want to be able to provide all things to all people,
but you also have to be able to interoperate effectively
because every environment is its own bespoke unicorn
past a certain point of scale.
No one makes the same decision the same way.
So as you start aggregating all those decisions together,
things that make perfect sense for one customer
might be disastrous for another.
And you're always faced with the challenge
of how configurable do you wanna make the thing?
Do you want to have it be highly prescriptive
and it'll be awesome this way?
Or do you want someone to basically
have to build their own observability
from the spare parts you provide them?
It's a spectrum.
Yeah, I agree, 100%.
And like I said, kind of going back a little bit, the way that we look at it is that the power of observability now, and the power of observability going forward, is having, yes, you're collecting a whole bunch of data, and that's growing exponentially, but the data in context, having the context of it.
And that's what can provide you actual value
at the end of the day.
So that's why I said: would we go into something that doesn't kind of marry that up or doesn't make sense? Like I said, something that doesn't provide value to a lot of people. But this can.
So, again, going back to the AI observability stuff, one of the things we do that is a little bit unique to us is we have a topology model of a customer's environment, like a real-time view of which dependency belongs to what, and all these types of things.
Now that you're injecting a brand new,
let's say, piece to your topology,
that being the AI infrastructure,
if you're going to do that inside of your application space, you're going to want to have that context now of how do these maybe distributed
systems, how are they interacting with it? So there's, like I said, a lot of value in doing it in that case. And that's really what we focus on. Yeah. And I think that's a very fair
positioning to have. And I, not to name names, but I do talk to a lot of
companies about observability because like it or not the ultimate arbiter of
truth of what's really running in your environment is the AWS bill. Observability
via spend is not a complete joke. And when I hear complaints about vendors, it's always the squeaky wheel that gets the grease; I don't hear complaints about you folks very often,
though I do find you in these environments.
So your positioning and the way that you're talking
to customers and the way you're addressing their problems
is very clearly onto something.
Believe it or not, you don't just live in one of the boxes
of the Gartner Magic Quadrant.
You are out there in the real world.
Yes, yes, yes we are.
And like you said, we are predominantly deployed, like you said,
in the larger enterprise space. And you know, the way that we look at, you know, when we
deal with customer and interaction, so if I can maybe go back to your point before where
we're not coming up as the squeaky wheel, we prefer obviously to be the valuable wheel
or the valuable cog in the wheel per se se, is we work very, very diligently
to ensure that we're solving the customer's problem,
not just finding a way to get them to use something new.
We wanna focus on the fact that you have a new challenge
and how can we help address that?
Where's the value in the platform
that can help you address that?
Not just, hey, use this new thing because we have it and maybe it'll be cool.
I cannot adequately express how much of a differentiator that is between you and other
vendors that I come across fairly regularly. It's, well, we need to boost market share,
buy this thing too. But I don't want to buy this thing. Well, tough. It's now getting rolled into your next contract.
It becomes a challenge of at some point you have to start going broad
instead of going deep.
And I still think that this is an emerging enough space
where that doesn't work for everyone, nor should we try and force that square
peg into the round hole.
No. And like I said, at the end of the day, all it does, if you do that, is create frustration on two sides, right?
Because one, as you're saying, customers get frustrated
because now they are either paying for something that's
not really valuable or it's not working out.
And then from a company standpoint,
you've now just created, let's say,
a bad taste in somebody's mouth or a bad relationship.
And like I said, it doesn't really have a benefit in the long term for either side.
It doesn't work. It can't work. I like the idea of sustainable companies doing things that are not necessarily as flashy, but they get the work done. I've always had an affinity for quote-unquote boring companies.
It's a lot less exciting than living on the edge and being in the news every week,
but maybe I don't want my core vendors to constantly be jostling each other for
headlines instead of solving the actual problem that they're paid to solve.
Exactly.
I really want to thank you for taking the time to chat with me.
If people want to learn more, where should they go?
Yeah. Easiest way, of course,
is dynatrace.com, plenty of information there.
We also have, well, really two things you could do.
You can certainly take a free trial,
again, no strings attached, you don't have to put payment information in, nothing like that. You can just deploy it in your own environment and actually see it work, which works very well.
And then what you also get access to is a playground as well. So if you don't even want to try it in your own environment yet, because you're not ready or can't do it, we have an actual environment running, with data, where you can play around with the actual live product.
Which is fantastic.
I love that easy exposure of: here, play with this and see how it works. Sure, it's somewhat contrived, but not massively so,
as opposed to, oh, you wanna learn how our product works,
click here to set up a call with the sales team.
I understand that that is how enterprises buy,
but there's also small-scale experiments
where people just want to see if the thing works,
usually at weird hours, and putting that blocker in, so people can't actually kick the tires, doesn't serve anyone particularly well. I get that it doesn't work for every product, but it should for this.
Yep, I agree.
Thank you once again for your time. I really do appreciate it. Wayne Segar, Director of
Global Field CTOs at Dynatrace. I'm Cloud Economist Corey Quinn, and this is Screaming
in the Cloud. If you've
enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform
of choice, along with an angry, insulting comment that we'll have no idea when it showed up, because honestly, we don't have great observability into those things.