PurePerformance - Dynatrace PERFORM 2019 Day 1 Mainstage Review
Episode Date: January 29, 2019. We'll chat about today's announcements on the mainstage...
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's PurePerformance!
Hey! We are live in Las Vegas!
Yes, we are.
Hey!
Yes, exactly.
We've got my good friend Brian Wilson and co-host James Pulley.
And Mark Tomlinson.
And me, Mark Tomlinson. Maybe you remember me.
Good morning, everyone.
We've just been listening to the main stage conversation here at Dynatrace Perform 2019.
And commenting on a bunch of different stuff.
Brian, there are a few things we wanted to cover.
There were some announcements, right?
Yeah, I think two of the larger announcements
were the open AI
and the developer edition.
Let me get the developer edition press release, actually...
Yeah, we have actual press releases you can see.
And, of course, Brian also found a bug in one of the press releases,
which is when you publish a URL,
it's really important to make sure that that URL works.
So we'll talk about the developer edition first,
and you can find out more on the press release,
or I think it's dynatrace.com slash developers, right?
Something like that, yeah.
It's a free-for-life developer edition.
Yeah.
So everybody likes free software.
I don't know if you know that.
Yeah, well, except unless it's malware.
That's free.
Yeah, Matt.
Well, viruses are free.
There's a cost.
Like all free software is free like puppies.
Like free puppies, right?
Yeah.
Free puppy?
Free puppies. Like you have to still
buy food and... Vet bills.
Stuff like that. You also have to clean up
after them. Or not.
You know, different people have
different standards of cleanliness.
People love free software, yeah.
So yeah, now this is something that
a lot of the Dynatrace customers I've been to
that had AppMon years ago,
sometimes they'd negotiate into the deal,
you know, your
development environment. You could get
it included in your license, but it was still licensed.
Right, and at a certain point
It was a traditional non-prod license.
Yeah, yeah, you couldn't use it in production
but, you know, it was
still part of a contract. I mean, it was tied
to your main licensing
for your lower environments.
Right. And then eventually what we ended up doing is we just started giving away AppMon.
Like with the free trial instance, we had our good friend and co-host of Pure Performance,
Andy Grabner.
Did the trial.
Yeah. So if a developer downloaded the free trial and then did the Share Your PurePath
program and sent, like, here are some
problems, help me analyze them. Andy would use his genius head to analyze their problems,
and then you would get a new license for maybe three agents or something, and that would
never expire. But you kind of had to give a little to get a little,
right? I think you had to be an AppMon cust... No, it was free. You didn't even have to be a Dynatrace customer back then.
Right, right.
And then we had the free trial for Dynatrace, but, of course, that was like a 15-day free trial.
And then you have the sales reps knocking on your door saying, hey, can we introduce you?
And it's hosted in the cloud, usually.
Yeah, it's hosted in the cloud.
It's like, hey, we're going to, you know, how about we talk about some sales here and everything.
And you get, like, a couple agents or three agents.
Well, it's a good way to test drive it
before you maybe want to dive into doing a full-on POC.
But the piece missing, as we're saying here,
was the, hey, I'm a developer.
I need some kind of tool to just be able to help me out.
And, you know, that's...
We know, I think, we're not going to make money
on selling a license to a developer.
And I think we know that if we can get these developers
into the
software and find that it's
helping them, that's just going to be
not only is it going to help get better software everywhere,
right? This is almost a
repeat of Apple giving
Macintoshes to universities,
getting students used to using it.
I like it. It's very much so.
It's a hook, right? Plus, when you
escalate, not just in DevOps,
but even traditional old methodologies for operations,
like, oh, the software has gone bad in production,
you're going to end up pulling a developer in.
And you're asking profiler questions, show me the logs,
do you have a stack trace?
And they're like, well, wait a minute.
I use this cool product thing called Dynatrace on my laptop in my development environment.
Why can't you guys install that in production?
Bing, bada-boom, ba-boom.
Sounds just like that.
Yeah.
Next thing you know, you're a Dynatrace customer in production.
But beyond even, like, hooking you in, right, there's the other idea of what do we do all the time as either developers or test automators or anything else?
We need to experiment and figure out how to do things.
So as we see in part of some of the announcements, Dynatrace is so much more open now.
You can hook it into your CICD pipeline.
You can integrate it with JIRA into ServiceNow.
There's this whole robust API that you can do these things with.
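To make the "robust API" idea concrete, here's a toy sketch of what a CI/CD step or cron job pushing an external metric into a monitoring backend might look like. The endpoint path, payload shape, and all names here are illustrative assumptions based on how the custom-device ingestion API is generally described, not Dynatrace's exact contract; check the current API docs for the real format.

```python
import json
import time


def build_custom_device_payload(display_name, metric_id, value, ts_ms=None):
    """Build a JSON payload reporting one data point for a custom device.

    The field names (displayName, series, timeseriesId, dataPoints) follow
    the general shape of a custom-device ingestion call; treat them as a
    sketch, not the authoritative schema.
    """
    ts_ms = ts_ms if ts_ms is not None else int(time.time() * 1000)
    return {
        "displayName": display_name,
        "type": "load-balancer",
        "series": [
            {"timeseriesId": metric_id, "dataPoints": [[ts_ms, value]]},
        ],
    }


def ingest_url(environment_url, device_id):
    """URL a pipeline step would POST the payload to (hypothetical path)."""
    return f"{environment_url}/api/v1/entity/infrastructure/custom/{device_id}"


if __name__ == "__main__":
    payload = build_custom_device_payload(
        "netscaler-prod-1", "custom:lb.active.connections", 1234.0,
        ts_ms=1548748800000,
    )
    print(json.dumps(payload))
    print(ingest_url("https://myenv.example.com", "netscaler-prod-1"))
```

A build job would serialize this payload and POST it with an API token; the point is just that anything with a status feed can be wired in.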
So if we say to the development team, hey, we want you to, not we, but if I'm a boss and I say to my developers, we want to hook this into whatever.
Well, they're going to have to now start playing around in our production environment or whatever environment we're running in.
They're going to keep messing around, maybe getting things right, maybe getting things wrong.
Who knows what that does to our code base.
So now the developer edition is a playground that they can use to experiment with this.
From what I was reading on the press release,
there's going to be a bunch of videos and tutorials and
guidance, and also they're going to try to make it a community
of developers so you can help each other out.
Yeah, yeah. The app itself
connects into the community, which is robust.
Right, so it seems a bit beyond
just, here's your personal copy.
It's, here's your personal copy and be part
of this group. Which is similar to the original share your path ideology, right?
Yeah, but building more of a community, which is nice too.
The other thing I would say is you not only get to experiment.
Like in my situation, I would go into a new customer
or work with someone I'm mentoring,
and they don't have any Dynatrace.
They may not be a developer,
but at the same time, they're like,
I want not just to experiment with my app,
I want to experiment and learn Dynatrace.
And you can experiment with all the integrations
getting set up, and you don't have to like,
well, I have to go through the whole sales process
and then figure out the integrations
and then figure out all this other stuff.
And let's face it, developers don't like to talk to salespeople.
Oh, I don't know.
That's like an allergy.
It frustrates the salespeople, too.
Yeah, of course.
I mean, no offense to my salespeople colleagues, but does anybody like to talk to a salesperson?
Like to.
I'm saying like.
Well, okay.
It's a part of a purchase.
Does anyone like to have a social conversation with a developer?
Right.
Or does anybody like to talk to a...
Depends on the developer.
There's a whole spectrum.
And when I say spectrum, I really mean spectrum when it comes to developers.
Exactly.
High-functioning developers who often have normative communication skills.
I'll get you a shovel so you can dig yourself out of that one.
We worked at Microsoft.
We know this too well.
And honestly,
though, it is nice. Because, yeah, as you're
saying, a developer, one of the
things that would be the biggest turnoff for a developer
in trying to do one of these tools is if they're going to
get, like, every week another call
or another email.
Yeah, I think this will be great.
So the announcement came out.
You can check out the press release.
Dynatrace announces free-for-life developer program for its open software,
whatever the next word is there.
Yeah, and let's see.
If you go to dynatrace.com slash developer, without an S,
it will redirect you to dynatrace.com slash developers.
Let me just see if it's even on the main page.
Yeah, they might.
That's cool. Anyway, dynatrace.com
slash developers.
Go check that out. Go sign up.
I'm going to have to do that just to see what we have.
Yeah.
And again, you don't have to
have a title on your LinkedIn page to say
I'm a developer.
You just need to be working in the development of software, part of the software development team,
maybe on an agile team, sprinting, or whatever you're doing, right?
Yep. Perfect.
So then the other thing that we were talking about this morning was the sort of teasing out,
they didn't really call it AI 2.0, but internally that's kind of what we thought of it as,
AI 2.0, but it's really this open AI and some changes to the way we do the AI.
So I know, I think it was two years ago
when they were first talking about AI.
And we were talking about Davis, right?
Davis and the cosmopolitan.
The two of you started running with all kinds of ways
it could be used.
We could feed it this information and that information and do all these things.
So they listened to our podcast?
Yes.
And then they took it back to R&D in Linz?
Yeah, exactly.
They said, we want to make Mark and James happy.
Yes.
And that's what you have now.
That's just how it works out.
And now James and I only get moderate amounts of compensation for that.
Yeah, well, only what, $2 million apiece?
That's it?
Can we copyright it since we said it on the podcast?
Where did this number come from that you just mentioned?
You didn't hear what you're getting for this idea?
Are you?
No.
Oh, his is already in the session.
But can we copyright it?
Oh, is that why you had $4 million in your bank account?
No.
It's copyrighted under the Creative Commons license on our podcast.
So everything we talked about.
So if you took any of our ideas from the podcast, James and I could claim that that.
Yeah, I'm sure that'll hold up.
Yeah, spend the money on the lawyers and try to make that work.
Given the fact that we were here at the conference where the announcement was being made.
Anyway, so Davis two years ago was a lot of Davis, the Amazon Echo integrations with Davis.
And then now we're talking not only is it some upgrades in terms of capability, but it's also open.
The whole thing is opening architecture.
We're actually kind of rebranding the whole concept of this as Davis, I believe.
So instead of talking about AI this, AI that, we're just going to, you know, in Davis,
initially you might have heard it was kind of the voice integration.
Now we're actually giving the AI the full-on name.
This is Davis.
That's the aspect of it.
So a couple of things that happen with it, right? So you could feed data into it, and if it's part of your system,
like let's say an F5 or was it a data power system?
Of course, what comes to mind is
this is how you get Skynet.
Oh, of course, but
we're not quite there yet.
Amazon Echo doesn't have arms, though.
I've never seen... Alexa
doesn't have arms.
But the cool thing is you can feed in this data.
You have an F5 in there, right? Or you have a DataPower device in there.
A NetScaler.
Yeah, a NetScaler.
Whatever you're going to do.
Internet of Things.
Windows Box.
Well, not necessarily.
Well, Internet of Things is there.
But where I'm going with this, though, is you would, in the old days, original days of Dynatrace and probably many other tools, you have a device that you can't put an agent on, right?
Yeah, you have an API.
Yeah, you can feed data into it.
And you can look at metrics and monitors.
Traditional monitoring. But now, you can say, hey, here's a thing.
It's completely disjointed because I don't have it in my thing anymore.
With all the dependency analysis that we already have,
we're now playing that into the whole smartscape ecosystem,
the dependency analysis.
Like a Netscaler, for example, we have all the connections known.
Now we know this is being connected.
So you can kind of see the connections over the last two years.
So last year, it was kind of the log analysis, log management.
That's true.
Taking unstructured machine data and making it into structured data.
Now we're able to glue that structured data directly into the AI engine from a third-party source.
Right, right, yeah.
And so it's very interesting to watch that.
Yeah, and now we're also going to be able to say, like, hey, if it's one of these external pieces of data,
well, we know, again, let's say you have your web server behind your NetScaler,
and there's some kind of an issue.
We're not seeing the problem on the web server.
We're now going to go ahead, and not we, but Davis is now going to go ahead and look,
well, I have all this information from the NetScaler that it's attached to.
Let me go check out that.
Oh, look, we were running at... and it's not just about a static threshold violation. Davis is going to
look at what's the normal operating level of that NetScaler and not rely on the static threshold.
Say, hey, there's an anomaly. This changed. There's a big delta before the other anomalies
were seen. So now I can say the problem's on your F5. I think this is cool because now we can go out and we can get instant statuses from like an Akamai or other CDN provider.
And we can see those items that are outside of the control of our environment, but we have that structured data coming in.
So if there is something that we have a dependency to, but we have insight, we can still get kind of live analysis.
I'm curious if we can actually do that with like an external
CDN.
As long as we can
get a status feed. Yeah.
There's got to be API data feed
in there. Well, part of what's announced
today is that it's just that it
does it more automatically than having to be
pre-configured. But it also takes it into consideration.
We've had the ability to ingest data
and put it into the metrics.
You know, you could create a graph with it.
It's more now the AI engine is going to ingest that data
and take it into consideration.
And figure out where it sits in the smart space.
Well, we kind of knew that already, especially with an F5.
It was more about it would take that into consideration.
Right.
Because in the past, we weren't baselining that information.
Yeah.
And now what we're doing is,
that's the other part of the new AI,
is that we're not even relying on baselines anymore.
So we'll still rely on a baseline or a static threshold to see when a problem occurs.
Because we know the norm, something violates.
And you're out of tolerance.
Once we're out of tolerance,
we're not just going to look at,
well, what were the other baselines that were violated?
We're looking at every single metric
on the chain that got impacted and looking for a difference.
And if that difference occurred after the problem occurred, we toss that out, because that has nothing to do with the root cause.
If it occurred before, it's now considered a possible root cause and it does get analyzed.
In the past, data that we were ingesting from outside of the agents was not included in that.
So that's where this big piece of this is.
And that's big for those external dependencies for the F5, Netscaler, a CDM provider, all of those external data feeds.
Right.
So as long as we can identify that that is a dependency in that part of the chain of events, it will be considered and that will be in there.
So it's a really awesome new component to it.
And it's going to really open a lot more for a lot more reliability.
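The filtering rule just described can be illustrated with a toy sketch: a metric change only counts as a root-cause candidate if it happened before the problem was detected; changes afterwards are effects, not causes. This is purely an illustration of the rule as stated, not Dynatrace's actual algorithm, and the entity names are made up.

```python
def candidate_root_causes(problem_start, metric_changes):
    """Keep only metric changes that happened strictly before the problem.

    metric_changes: list of (entity, change_timestamp) tuples.
    Changes with a timestamp at or after problem_start are tossed out,
    because they cannot have caused the problem.
    """
    return [
        (entity, ts) for entity, ts in metric_changes
        if ts < problem_start
    ]


changes = [
    ("netscaler-throughput", 100),  # changed before the problem: keep
    ("web-server-errors", 130),     # changed after: effect, not cause
]
print(candidate_root_causes(120, changes))
# -> [('netscaler-throughput', 100)]
```

The real engine obviously weighs far more signals, but the before/after cut is the part that prunes the candidate set.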
Can you explain for folks SmartScape?
Because in the AppMon world, people might not know that.
And we want to bring them into the Dynatrace world.
SmartScape is actually kind of a way of interacting with the data in Dynatrace, right?
Right.
I mean, SmartScape is really dependency analysis, right?
I mean, if you want to really, really simplify it.
You know, if you think from AppMon, right, one of the big themes we're hearing a lot at Perform this year is a lot of the people who are successfully moving over from AppMon to Dynatrace.
And that was my class. That was, in fact, the class that I did today.
That was the class yesterday, yeah.
But, you know, if you think in the old AppMon
terms, we had the transaction flow.
Right. Right. That would show you your
processes on your hosts. And kind of the
interactive map that you could
hop across the hosts. Now, of course, in Dynatrace
we have the service flow because we're more granular.
We're not stuck at only looking at the process level. We're going to
the service level. It's not even any
of those two, but I bring those up as a
starting point of reference. What the smart
scape is doing is anytime
you have an agent anywhere,
we're automatically discovering
all the connections, all the interdependencies
between everything that's running on that
host. Because if you recall, the
agent automatically instruments
all of your instrumentable processes.
We're also aware of all of your key processes
that are running on that application.
We know the network connections, all this kind of stuff.
So the smartscape is really showing the dependency map
from a data center point of view, a host point of view,
process point of view, service point of view,
and an application point of view.
All those dimensions, all the dependencies all across
of who's talking to who, what's communicating with
each other, and that then puts this
huge bit of spaghetti together when you look at it
visually. But the
importance of that is not necessarily the
visualization of it. The visualization
of it, yeah, it looks cool for us and everyone
likes to look at it and say, ooh, wow, it looks awesome.
But the real importance of it is
for Davis to know
what's happening because if one Tomcat instance has an issue, like it sees a slowdown,
Davis now knows, okay, this Tomcat instance is connected to these three other processes or services, whatever.
I need to go check out what's happening there.
And it goes to the next one and says, okay, there's no problems on that one.
I'll go to the second one. Oh, there's a problem on the second one. So now on the second one, I need to
know what that's all in communication with. And it's kind of like a web crawler. It's following
and tracing. And for every node that it checks, it goes both horizontally across the flow,
but also vertically up and down the stack to look for any metric that's out of whack.
And again, and I really want to stress the new bit with that metric change is it's not like, oh, it's a threshold
violation or a baseline violation.
It's, you know, you might have a change
in your metric without violating your threshold.
But if that occurred
before your baseline
on that other system went, that could possibly
be a contributing factor.
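The "web crawler" traversal described above can be sketched as a breadth-first walk over the dependency graph, collecting every connected entity whose metrics look out of whack. The graph and entity names here are made up; this is a toy illustration of the traversal idea, not the Davis engine.

```python
from collections import deque


def crawl_for_anomalies(graph, anomalous, start):
    """Walk the dependency graph from `start`, like a web crawler.

    graph: entity -> list of connected entities (the SmartScape-style map).
    anomalous: set of entities whose metrics are out of whack.
    Returns the anomalous entities reachable from the starting entity,
    in the order they were visited.
    """
    seen, found = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in anomalous:
            found.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return found


graph = {
    "tomcat-1": ["service-a", "service-b", "service-c"],
    "service-b": ["database-1"],
}
print(crawl_for_anomalies(graph, {"tomcat-1", "service-b"}, "tomcat-1"))
# -> ['tomcat-1', 'service-b']
```

In the real system the "is it anomalous?" check would also look vertically up and down the stack at each node, not just at one flag per entity.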
So what's the trigger in this case?
Two standard deviations outside the norm?
Three standard deviations?
That's all configurable.
Yes.
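A minimal version of that configurable trigger: flag a metric when the latest value sits more than N standard deviations from its recent mean. The default of three sigma here is just an example knob for illustration, not Dynatrace's actual setting or algorithm.

```python
import statistics


def out_of_tolerance(history, latest, n_sigma=3.0):
    """Flag `latest` if it deviates more than n_sigma stdevs from the mean.

    history: recent values establishing the normal operating level.
    n_sigma: the configurable sensitivity knob discussed above.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Perfectly flat history: any change at all is a deviation.
        return latest != mean
    return abs(latest - mean) > n_sigma * stdev


history = [100, 102, 98, 101, 99]   # normal operating level
print(out_of_tolerance(history, 150))  # -> True (big delta)
print(out_of_tolerance(history, 101))  # -> False (within tolerance)
```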
We have Tim Western and Percy on the chat with us.
Okay.
We have some live people tuning in.
Percy's asking, how does SmartScape handle ephemeral Docker containers?
That word came up, ephemeral, earlier today.
It's good.
Yes, excellent.
So SmartScape is a view of everything, right?
So if it comes and goes. There's two layers, there's two ideas of SmartScape.
One, the visual representation, right?
So visual representation is an easy one because I can speak of it visually.
Your Docker container is going to show up there.
If it disappears, you'll see the lines connecting to become dotted lines so that you know that it's no longer connected to anything.
And you'll see it up there for 72 hours.
I think it was 72 hours.
I forget how long it is.
Maybe it's two hours.
72 or two hours.
I forget which one.
And probably configurable.
No.
The length of time you see it for on the SmartScape is not configurable.
But all that data is in there.
So we know when it's still up there, right?
Because, again,
when you install one agent on the
host that's running your Docker containers, we automatically
inject into every single container with that
configuration. So our
smartscape, our AI engine knows
that that existed at this point.
And that translates from Amazon machine
image to Docker containers.
And so we know
it's there. So if it's part of the issue, if it was part of the problem
at the time, we have all the metrics.
Yeah.
It's not like we go to the machine after the fact.
You know, the agent's streaming those metrics in.
We're getting all the information.
So we know about it.
We'll tell you it was that one, and you'll be like, well, that one doesn't exist anymore.
But it's still, hey, that was the code running on there, right?
Yeah.
So from a Dynatrace standpoint, the announcement is really about AIOps as kind of a category.
Yeah.
Which is, and they were mentioning some competitors and things on the stage just a little bit.
Right, right, right.
But it is a space that Dynatrace has been ahead for almost four years.
Yeah.
From an AIOps, I guess it's labeling that category.
We're not analysts.
But I know people think, oh, well, I work for Dynatrace.
I drink the Kool-Aid.
But in a way, absolutely.
It's good Kool-Aid, though.
It's very tasty.
We put a lot of sugar in it.
And if you think about it this way, as you said, we've got the four-year jump.
But the big thing we've done is back in 2013, 2014, when we saw this coming, we understood as a company and as technologists that there's absolutely no way we can change a product that we created in, what, 2004, 2005-ish to scale and be able to run and do a lot.
There's no way you can recode and fix and modify something that was designed for a three-tier architecture.
So we had, from the absolute ground up, brand-new architecture, brand-new code, everything, created a brand-new product.
And there's always been sort of a conditional rules engine inside parts of the Dynatrace engine.
Yeah, so you have to stand with that.
So, I mean, people say that's not AI, but, I mean, it is a rules engine of some sort.
There is some intelligence on those thresholds built into that.
It's just extending it to Davis and expanding it.
So I'm hoping next year we get to plug in to write our own rules.
I still like that.
Analysis plug-ins.
But to finish your question, the pure performance anomaly detection engine.
Well, yeah. That'll be your second paycheck.
But just to go back to, since you brought up the competition, I'm not going to name any names,
but another one of them just announced.
So one of our thorn in the side competitions whose tactic is to just try to beat you down,
they bought an AI company and are doing the whole,
well, let's duct tape it to our existing product.
And the problem there is that you have a product that was developed in 2005, 2006
that was not developed for cloud, for scale, for anything.
It's not going to work.
Now, if they were going to go ahead and re-architect,
we could talk about something serious going on.
Now, unfortunately for us, there are, unfortunately or fortunately, depending on how you look at it.
Famously or infamously.
There are some newer companies that are trying to enter the space, but they're very, very small.
And I say fortunate, too, because competition always makes us strive to be even better.
Yeah, it's very good.
But, you know, just from a sales point of view, it just always makes it a little harder.
Tim Western actually throws something.
Now, he's not here at Dynatrace where we're amidst a room full of Dynatracers.
But he mentions that he had a recruiter ping him recently where Dynatrace was a requirement they had for the position.
So if anyone thinks that Dynatrace is a niche tool, absolutely not.
Of course, in this room, nobody thinks it's a niche tool.
You should just come here, Tim, and hang out at the conference
with a room full of Dynatracers.
We'll teach you Dynatrace and then you can go get that job.
Super geeks, yeah.
But, well, good for Tim.
If you need to get a position, looking for a job,
oh, that mentions a job ad.
Oh, that mentions another thing I heard.
If you come to perform, there's also Dynatrace certification.
Like you can get a free certification.
First level, though.
But yeah, absolutely. Yeah, just to get you started.
So there's a lot of people that come and are like, I've been using Dynatrace, but I've never gotten certified.
So the doors are wide open.
I think all today and tomorrow, but they were even open yesterday,
to take the cert and get certified for free.
So all you've got to do is fly out here.
Well, that's free.
That's the cost of doing business, right?
Opportunity cost.
I quit my job to go get certified in Dynatrace for free.
No, I didn't.
I didn't, exactly.
All right, so those are our two announcements that came from the main stage.
We watched a few of the other main stage things as well.
Any reflections on what we heard this morning?
It's pretty exciting.
Also, the tone was a little bit more somber.
It was a little bit more serious.
Like Dave Anderson was talking about kind of work-life balance. Are
we really leveraging technology to improve our experience as engineers? Do you get to spend
more time with your family, or are you working 50-hour weeks? He also posted, I saw a video on LinkedIn,
to submit a story, like if you've been leveraging any technology, Dynatrace of course, but any of
the ecosystem.
Hey, I automated this,
and now I don't have to stay up until midnight every night
running load tests.
You know, that kind of thing.
So are we actually leveraging
all this great, cool innovation?
I think SAP kind of nailed
that concept of automation
is trying to take humans
as much out of the loop
as possible for configuration
and promotion of changes.
But is that improved work-life balance
if you get put
out of a job because we automated
you? Is that a fear?
I mean, I talk about this.
It doesn't put you out of a job.
Because, you know what, number one,
all that automation has to be created. All that automation
has to be maintained.
All that automation has to be improved upon.
Developed, customized, configured, managed.
Your new job is doing that stuff and or working on something else.
Right?
And we look at this in the testing field all the time.
Right?
Yeah.
Where the more things you automate, you know, the more you can do.
Yeah.
Right?
Because if I don't have to take and say, you know, let me go back to my old LoadRunner days where I would copy and paste the data out of the analysis tool into an Excel spreadsheet and do this, I can automate that.
Yeah.
I just saved a ton of time.
And I know I'm going to have to be writing more complex scripts or scenarios or research or something else.
Well, we've seen this automation debate happening from the corner office to the line worker in the automobile industry.
Yeah.
Right?
I mean, that's happened for 100 years or more.
Yeah, but I think in this case, a lot of it isn't.
I mean, maybe at some point people will be put out of jobs,
but there's so much maintenance around so much of this
and so much of this has to be created until we have code writing code.
New applications.
New ideas.
Robots writing code, that's when we have an issue.
Yeah, the boring stuff.
Like now we have newspapers that are robots writing sports stories.
I saw a statistic that in 2020 there will be four devices for every person on the planet.
Thank God for IPv6.
If we ever deploy it.
It's deployed.
It just doesn't get relied on for everything.
So let's think about that.
I mean, the only way that those things can be managed is with automation.
Right.
Right.
There's no other way.
Another question to back up a little bit.
Percy's asking, does SmartScape provide root cause analysis on existing data
from providing topology of the underlying system?
He says we can ignore this if the question's too late,
but SmartScape itself is the interactive display.
SmartScape is the dependency map, really.
SmartScape, think about SmartScape as the dependency map.
Now, I might get, I mean, I'm sure someone might tell me,
oh, it's a little more than that, Brian.
But, I mean, think about that as the core of it.
It's the dependency map of everything in there.
So SmartScape does not do the root cause analysis.
It's the causation engine that's
running inside Davis
that gives you the root cause analysis.
And if, Percy, if you're not
quite aware, if you need to see more,
dynatrace.com, free trial. You can always send these questions
to ask at perfbytes.com
and Brian will put it on an episode.
Which would be very awesome.
And don't forget, free trial. You can go on there. Plus,
now there's the developer edition. Developer edition, yes.
You just download it and start playing around.
And, again, the URL is, well, dynatrace.com slash developers.
Something like that, yeah.
Slash developers.
The other phrase I heard this morning,
I think it was Steve Tack was saying data on glass
as kind of a derogatory term for people who, like,
have a pretty-looking picture.
Like, oh yeah, that's just data on glass.
They don't have a process.
They don't have insights.
They're not converting data into knowledge, into wisdom, into action.
So I thought that was articulated really particularly well.
And that's the core practice that I think when you reach an advanced level of doing performance work,
you're relied upon not to just write scripts or run a test and push a button.
Or provide data.
Or just here's some data.
Answers or information.
What does it mean?
What should we do?
How should we change?
So it's good to see that on the main stage,
even from the higher levels of the organization,
Dynatrace is always like, look, the real work is the analysis.
The real work is the experience and applying your wisdom.
Exactly.
But I love that phrase.
Oh, yeah, that's nice.
That's a lot of data on glass.
Yeah.
But that's the problem, too.
I mean, not to knock the DIY monitoring community, you know, open tracing and all this kind of stuff.
But most, again, we have, I don't know how many,
four, five, six hundred developers.
All of the monitoring tools have been doing this for a long time.
We know the practice of it.
If you're tasking your developers with saying,
hey, add tracing to your tools and give us data,
yeah, you can do that,
but you're not building the ecosystem around it to consume that in the right way.
All you're doing is extracting a bunch of data.
Yeah.
And now you have to figure out, now how do we process this data?
Is there something else with this data?
I had this conversation.
Andy and I actually had a guy who used to be one of our competitors.
He worked for one of our competitors.
He moved on to another company.
Yeah. And we were talking about this idea of
why are you going to build what
there's an industry of experts building already?
Oh, yeah.
You know, it's like, oh, I want a car.
I'm going to build my own car from scratch
and design my own engine.
Why not?
Well, yeah, it's a cool woodshed. I could do that and all.
But it's probably going to be a really bad car.
I'm going to build an airplane in my garage.
I come from the heart of NASCAR country.
People don't build bad cars.
As a hobby, it's fine.
When you're running a business and providing public service
and doing all sorts of stuff, I mean, that's a different...
Yeah, and I get it too because we wouldn't have things like Kubernetes
or containers if people weren't experimenting and trying to build things.
But you're trying to do these things in a realm that already exists.
So anyway, it just bugs me
when that comes in.
James, any thoughts from the main stage?
Anything? No, he's good.
I'm good. It's exciting.
It's always exciting.
AI is always my excitement
point, so I'm glad
to see what's coming
and I look forward to future
enhancements on the AI as well.
I like the openness. One of the
big knocks even in the AppMon world
was building out the REST
API and now with Dynatrace it's fully
there. It's really robust. It's crazy.
I think Dynatrace has always gotten knocked for not
making it easy to integrate
and in the last couple of years it's just gone off.
We're also following open API standards and all that.
Right, right. And the last thing I've got to comment
on, just because I've got to give Dave Anderson
a little hard time, is last year
you were saying it was more somber today.
Yeah, more serious. We had Obi-Wan Kenobi
and the whole stuff.
And there were UFOs on the stage
and things like that. Were they dancing UFOs?
I don't know. So, it's funny.
I expect we're going to see something else from Dave
because this is way too uncharacteristic of Dave
to have come out.
It's been serious.
I'm like, wait, who is this person?
So I'm waiting to see.
I mean, when you're used to having your own Bill Bomber,
you get disappointed when he doesn't act like Bill Bomber.
That's right.
I remember at Vignette Software down in Austin, Texas.
I think it was Mark Hurd before he went to HP.
I think he came out as Darth Vader or not Princess Leia.
That would be weird.
It was years ago.
And that was at their company conference. That was like, yeah, really scary.
Isn't Mark Hurd at Oracle now?
Yes.
He was, yeah, we won't go on.
Apparently he was qualified to be a CEO only because he was a tennis coach somewhere.
He was a really good tennis guy or something.
I don't know. But I worked at HP when he was in the CEO seat flying around in the jet.
I'll just stop my conversations right here.
Yeah.
Yeah.
Quit while you're ahead, Mark.
Yes.
And I won't make any derogatory comments about Oleg Deripaska.
I just want to apologize, and I won't say anything more again about Russian oligarchs or anything.
Yeah, exactly.
All right, so that's a wrap from our first thing.
We have some other interviews coming today.
We're going to interview some people after they get off the stage or do their session.
And then we'll check in with some live stories from the crowd later on.
Cool?
Yep, excellent.
Sound good?
Sounds great.
Awesome.
We'll see you guys later. Bye-bye.
And thanks to Tim and Percy
for jumping on.
We'll see you guys
over the next couple days.
Thank you.
Bye.