PurePerformance - 015 Leading the APM Market from Enterprise into Cloud Native
Episode Date: October 10, 2016
We got to talk with Bernd Greifeneder, Founder and CTO of Dynatrace, who recently gave a talk on "From 0 to NoOps in 80 Days" explaining the "Digital Transformation Story of Dynatrace – the product as well as the company." The transformation started in 2012, when Dynatrace used to deploy two major releases of its Dynatrace AppMon & UEM product to the market per year. The incubation of the startup Ruxit within Dynatrace allowed engineering, marketing, and sales to come up with new ways and ideas that enable continuous innovation. In 2016 the incubated team was brought back to Dynatrace to accelerate the "Go To Market" of all the innovations. A new version of the Dynatrace SaaS and Managed offering is now released every two weeks, with 170 production updates per day. Many of these practices were also applied to the other product lines and engineering teams, which boosted the output and raised the quality of those enterprise products.
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everybody, and welcome back to another episode of Pure Performance.
We have an exciting one today. I think I always say they're always exciting.
This one is particularly exciting though, I think.
But Andy, hello Andy. Before I begin, I have to mention that yesterday I was mixing our Pat Meenan episodes.
And if you recall, that was episode 13. Actually, it's 13 and 14.
But I want to... Andy,
just say hi so I know that you made
it through the unlucky 13.
Oh, that's right, because we knew
somebody's going to die.
We don't know if Pat actually survived.
I haven't seen any news about him
expiring, though.
So I think we all made it through, fortunately.
So how have you been, Andy?
Not too bad.
I was actually, I just came back from a trip to Latin America, to Chile and Peru.
And the reason why I want to mention it, not because people think I'm just salsa dancing all the time when I travel.
I did in Chile, which was fantastic.
But what was more interesting, and what actually segues into the discussion we have today, is that I was in the lucky situation to meet a couple of companies down there and their innovation teams, mainly banks and telcos, that are trying to do exactly what we'll hear about today from Bernd, our CTO and founder of Dynatrace, because these companies have also realized that they have to change the way they develop and deploy software.
And I thought that was pretty interesting.
And so, yeah, that's what I did the last couple of weeks, or last week.
And Brian, let's go right ahead and introduce our guest today.
Is that okay?
Go ahead.
Yes, absolutely.
All right. With me in the room today in our Boston office is Bernd Greifeneder, or Bernd.
I have to use the proper German pronunciation.
Thanks, Bernd, for joining us today.
Sure. Thanks for having me here.
Yeah. And so, Bernd, we invited you to the show because it was, I think, in June or July when we were both presenting at a perform day in Europe.
And we were basically talking about digital transformation, DevOps, continuous delivery.
And you had a very interesting talk because you basically said how Dynatrace as a software company evolved over the last couple of years, the way we develop software.
And it just was very intriguing to me to then get you,
and I wanted to get you on the show
because I think you, from a C-level perspective,
you can tell the audience out there
kind of which transition we went through
in the last couple of years,
why we actually made the changes that we did,
what the changes were.
And so that's why we wanted to invite you,
because I think your session back then was called "DevOps in Action: How We Achieved Rapid Ruxit Innovation with High Quality."
And I know you also have a session at one of the DevOps events in Linz, our hometown, in a couple of weeks, on the topic "From Zero to NoOps in 80 Days."
Right.
So, Bernd, without further ado,
if you want to introduce yourself more to the audience,
maybe that's a good start, but then just, I guess, people know who you are.
Maybe, maybe not.
No.
Yeah, so basically we have always been developing software for application performance management
and sort of, as Andy just asked for the intro,
the whole story started out in 2005 when I founded Dynatrace back then.
And back then, everything was about building software that runs on-premises with normal development cycles.
So as Dynatrace grew and became more and more successful, obviously, we advanced our development processes as well.
And we even did Agile before we knew it was called Agile software development.
But then we also adopted the formal description with Scrum and these kinds of things as well.
And then we improved it, of course, further.
But then, when we grew further and became part of a bigger organization, two other products joined the family, and I was put into a situation where I actually had to bring three products together.
And I figured that integrating is mostly duct-taping; for a customer, integration provides some value, but it never gives you a slick, super seamless product. So at this point in time, this was in 2012, already seven years after founding the company, we said, okay, when you can't integrate three products in the APM domain to make it super easy, you need to do it greenfield.
But if you do it greenfield, you had better make sure it has all the new forward-looking trends already anticipated. That is actually what we started internally back then, and we also knew that if we do this greenfield, it needs to be SaaS as well as on-premises, because all the previous products were on-premises. And this was then the trigger point for us: okay, we do a new product alongside our existing products, and anticipating the future and the SaaS model, we have to figure out what the proper release cycle now is.
Yes, it's SaaS as well as on-premises, so you could argue: why change? Let's do it the same as we always did, meaning two major releases a year plus the one or other intermediate update release. That's sort of what we had, so why not continue it? And the story for us then started with the point that it always took such a long time to get customer feedback with the on-premises product.
If you released, it took three months for the first 30% of your customer base to adopt it, which was actually pretty fast in our market, but still, this was just the initial adoption.
And then it continued, the customers started using it, and you got the first deeper feedback, beyond your early access programs of course, maybe six months after you had released.
And at that time, you're already down the road of your next release.
So everything is so delayed in terms of feedback.
And since I'm personally a very strong believer that you can only build great software if you're close to your customers and understand their domain and their problems, and only then can you build the best products, the feedback factor is key to me.
And this drove us further to think about, okay, how can we shorten the release cycles with our new greenfield project. And having SaaS in mind, it provided the additional opportunity for us to get feedback not only by verbal means from customers or through the classic channels of support and whatever you have, but also through the value of SaaS, namely that you're connected with your customers' usage and you see the usage patterns.
So that's, of course, the additional benefit.
And then I went back to my team and said, okay, with a modern architecture, it needs to be web scale, it needs to scale to hundreds of thousands of hosts for monitoring purposes. We also said we want to have the feedback rapidly, at least through the SaaS offering, and this means we also need to be able to provide fixes or simply improvements, even if it's just paper cuts, super quickly.
And that coupled with another factor: up to that new greenfield project, we never had to host our product ourselves, because all of the customers hosted the APM offering themselves. So we were then forced to think about whether we should build a 24/7 NOC team ourselves.
As developers, we hated to think about that.
I mean, no one wants to stay up in shifts 24/7 and so forth.
But still, when you build a SaaS service for monitoring purposes, it has to be super reliable and highly available.
So this forced the question of how we then do the operations for this. And we had some experience with a classic NOC that uses three global locations in a follow-the-sun operations model. It consumed 16 people just to keep it operationally running. And then we also figured, okay, if we want to use this team, then we have to follow their procedures as well.
And so we did some test runs.
We used our existing product, put it into the Amazon cloud, and had it operated by that existing NOC team of 16 people.
And then we once had the situation that we found a bug in our software and wanted to patch it, and we ran through the normal procedure.
The normal procedure said, okay, every Thursday there is a NOC team change request meeting where we could submit that request, then they handled it until the next meeting, and at the next meeting you could discuss when the date of deployment of the fix would be, one or two weeks later. And so to us, having done development in an agile fashion and having already had a very strong continuous integration in place, it did not make any sense at all why it should take three weeks to deploy a patch into production, even for a SaaS deployment.
So this didn't compute for us.
And this is why we said, no, we do not want to build such kind of a NOC team for our new offerings.
It needs to be much more agile; it can't have procedures that take that long to do something good for customers. And this forced us into the thinking: okay, we need to be able, as a team, to go from code to production, including all key smoke tests and regression tests, in a time of one hour.
And this was the goal that I gave the team.
Of course, everyone was kind of waving the "you're an idiot" sign at me, or saying this is a totally out-of-whack request. But I put it as a stake in the ground because I believed in it. We need to have a goal that really drives us forward and forces us to think differently. So the team went off. I mean, we had lots of discussions: how can we ever build software that is strategic, that has major architectural changes and so forth, where we need to reflect, and then do all of this in a sprint cycle? How can you actually couple that?
We believed that this was impossible.
But I'm glad that I have a team that had no idea how we could solve this, but they still stuck with me on the goal and just did their best to work towards it. So we basically extended our continuous integration step by step towards a deployment pipeline; step by step they advanced continuous integration into continuous delivery.
But then also, as I mentioned at the beginning, feedback is always the key piece, not just from customers, but also in your deployment phases, to the engineers, on how the steps are doing.
We advanced this to continuous deployment and feedback. So at every step, the engineers immediately get feedback when something fails, whether it's the build failing, the continuous integration tests failing, or the smoke tests failing, and all of this has to be automated.
And in the initial phase of actually getting to sprint-based releases, this was a key pain point.
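A minimal sketch of the "feedback at every step" idea described above: the pipeline runs build, integration tests, and smoke tests in order, stops at the first failure, and immediately notifies the committers. The stage names, commands, and notification hook are hypothetical illustrations, not Dynatrace's actual pipeline.

```python
# Staged pipeline runner: fail fast and feed the failure straight back to engineers.
import subprocess
import sys

STAGES = [
    ("build", ["./gradlew", "assemble"]),                  # assumed build command
    ("integration-tests", ["./gradlew", "integrationTest"]),
    ("smoke-tests", ["./run_smoke_tests.sh"]),             # hypothetical script
]

def notify_committers(stage: str, log: str) -> None:
    # Placeholder: a real pipeline would ping the commit authors
    # (chat, e-mail, dashboard) with the failing stage and the log tail.
    print(f"[FEEDBACK] stage '{stage}' failed:\n{log[-2000:]}", file=sys.stderr)

def run_pipeline() -> bool:
    for stage, command in STAGES:
        result = subprocess.run(command, capture_output=True, text=True)
        if result.returncode != 0:
            notify_committers(stage, result.stdout + result.stderr)
            return False   # fail fast: later stages never run on a broken build
        print(f"[OK] {stage}")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```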
Maybe I should also mention that in this process we also set our goal: we want to have a major release with every sprint, in contrast to the two major releases a year that we were used to in the past. And the mantra we put in place to get there: we need to have, every day, a production-ready build that we can use internally.
Of course, we didn't want to release every day to customers, but the whole point is that the build quality must be as good as production-ready on a daily basis.
And this was the biggest hurdle.
So it took us at least three months where we totally struggled to get a build working within a sprint cycle at all.
Every day it failed, and every day it failed in a different area.
Sometimes it was the automation.
Sometimes it was just engineers checking in on Friday afternoon and breaking it for everyone else. Sometimes it was just a complexity problem, so we had to re-architect the way we compile builds. So many different factors, human factors, technology factors, automation factors, process factors, all played together.
After the first three months, we really had big doubts whether we could reach the goal of having a build deployed within an hour from dev to production at all.
So we were really pretty down about that, but we kept hanging on to it.
Those were also the moments where it was clear you have to have top management buy-in for that goal, because many things had to move and fall into place. There were investments that had to be made, in technology and other software tools, in changing team setups, in cross-lab development; many things had to be adjusted top-down to make it happen, to get to a daily build that is actually production-ready, and to then get to our sprint-based releases. And after this initial pain period, it actually worked out for us.
We had all the moving parts in place then, and we got to our first release that we could do within a sprint. In all the subsequent 10 to 20 sprints we of course also didn't have a successful build every day; even today, the build isn't fully successful every single day. But today we are at the stage where maybe one build fails over the course of a sprint, not one on many days. So we have come up to high quality as a team. But it really touches everyone in the engineering team, and you need to keep in mind that even at the beginning there were already a hundred engineers checking in. Today, it's over 200 checking in to that code base.
So this is not just a matter of getting one scrum team coordinated. This is a large group of folks checking in.
That was the key challenge.
So at that time, when we were then able to release sprint by sprint to production, we still weren't there in getting from code to production within one hour; that one-hour piece was still missing. So we still had to do further work in speeding up our continuous delivery pipeline. But here one also needs to recognize that the entire test automation and build automation team had already built out the continuous integration beforehand.
It's not a team that doesn't touch code. Actually, you should see them as dedicated engineering teams, dedicated scrum teams, whose purpose is to build product too.
But the product they're building is the continuous delivery product that facilitates all the automation mechanisms end to end.
And this is one of the many lessons learned: first, you need to build such a dedicated team, and you need to have all these smart engineering skills there. You can't just have test and scheduling people there who only use software; you need engineers in those teams. And the second part of the lesson learned is that those teams who build this continuous delivery pipeline, the continuous delivery framework, the continuous delivery, in fact...
The thingy.
The magic.
Yeah, yeah. No, I mean, you build this team who are the experts in the engineering of that, but also in the continuous delivery, and it is key that it's one and the same team. And this is the point I actually wanted to make: in many organizations you typically have a handover to the ops team. But for us it was intentional and also essential that there is a continuum in the technology stack from continuous integration into continuous delivery into production. If there were a test automation and deployment stack that's different in continuous integration, and then the ops team, for instance, had gone off and used Puppet or Chef or whatever in production with totally different deployment techniques, then we would have a break in the continuous deployment.
This would not be seamless.
And with such a stack break, we could never have tuned the continuous delivery cycle to the reliability we are at now, as well as the speed. It's those two factors.
Because now, in our continuous delivery pipeline,
the technology stack, also for deploying and testing,
is the same in the engineering pipeline stage,
in the first continuous integration stage,
and then also in the staging stages
where everything is tested prior to production.
And then of course in production,
we also run certain tests.
Everything is the same,
and that is a very key lesson learned.
This was also mandatory to then tune it down and actually achieve our goal, a couple of months later, of really one hour from code to deployment.
To be fair, of course, there are tests that we run for a full sprint, heavy load tests, all the time, continuously, so of course you can't squeeze those long-term tests into one hour.
But the point is, if there is a fix to be done, all the key smoke tests that are important still run through within the one hour, and this allows us to provide patches and fixes truly within that one-hour cycle. So I'm glad that we achieved that goal, and it was a true team effort to get there.
Wow.
I think Brian and I, we both let you talk because I think the story in itself is amazing, the way you actually walked us through the process.
So, Brian, did you have a question?
Because I have a couple.
Yeah, I mean, a couple of comments.
Well, I had some questions, but he kind of addressed them sort of in there.
But I wanted to just highlight them a little bit. You know, I started using Dynatrace as a customer in the 3.x days and then joined in the 4.x days, actually just after 4.0 was released. And one of the
things that really attracted me to it was all of the talk of speeding up your cycles. You know, this was back in 2010, 2011,
Agile was starting to slowly replace Waterfall, but at my organization, we were hardcore Waterfall.
So there was that big attraction, especially of what the Dynatrace people were telling us about,
hey, you can start speeding up the cycle through this. So for me, hearing this story, one of the corporate-speak phrases that I hate is, you know, "eat your own dog food"; being kind of more of a geek, I like "who watches the Watchmen" better.
But this was just a classic story of a company improving on its own process in the same way that we try to encourage other companies to go through that same transformation.
So I really just,
you know, loved hearing that story. But the one question I was going to ask was: when you have a team, and you're telling them you're going to go to a release every hour, or not a release, but the code-to-production deployment within an hour that you were talking about before, you get a lot of those sideways looks and mumblings when you leave the room.
I think one of the other challenges, and that was the question I was going to ask you,
is how do you get the team to actually adopt that and stop resisting and take it on?
And what you pointed out was very important.
This comes from the top down.
I think any developer or operational or any team member worth their salt these days wants to do these things.
Right.
And I see it all the time when we go on engagements where we're dealing with the developers or the testers: they're very excited about these new processes.
And I guess we can't call them new anymore, but the newer type of processes. But if they don't have that support from the top saying, we're going to do this, we're going to give you what you need, and we're going to make it happen, they're going to be disappointed. And the answer you kind of put out there was, yes, this was coming from you, coming down from the top. It was fully supported, and that's the key to making it a success. The one other thing I just wanted to bring up from what you said towards the end there, which I think is critical, and it's amazing how easy it is to do in the modern day and age, is that same deployment process throughout the lifecycle.
Again, from my old days, there was always the dev environment maintained by the dev team, the test environment maintained by the test team, and the operations environment maintained by the operations team. And testing and other deployments would go on in these environments, and they would pass in one environment only to go to the next and fail because of something as simple as maybe a configuration difference between the environments. So of all the processes you were talking about that are very important to making this happen,
I think from an execution standpoint, if you don't have that seamless, integrated, same deployment style throughout the lifecycle, you're bound to have a million issues.
So I'm glad that you mentioned that because I think that's very key.
You can't have situations where things are different in different environments
and no one's using the same thing.
So the unification of that release cycle is important.
Right.
And also, to add to that a little bit as part of the story: we had made some experiences with a more classic NOC team, and this was also the moment we decided, okay, we do not want to have such a NOC team. Having driven all of this from the engineering side, we said, actually, we do not want such a NOC team if it's not needed.
And since we had no experience at all, in the early phase of operating the SaaS offering we still had the NOC team involved.
We had to write runbooks for them.
And this was another piece engineers hate to create, runbook entries.
So we wrote them, and when we then looked at those, somehow they looked to us almost like just a different programming language; more or less it's programmatic code.
Like a script.
Exactly. So we went out there and said, okay, let's have some fun and, with every sprint, eliminate at least one, if not more, of those runbooks by actually automating our orchestration. Because maybe this is one thing also to add:
in order to operate and manage the production deployments of the new APM offering, we had put an orchestration in place, because we operate the clusters in the Amazon cloud, and in order to orchestrate them we built our own orchestration engine.
So the point was to automate those runbooks and then put the automation into that orchestration layer.
And interestingly, it took us just five or seven sprint cycles until we came down to one single runbook entry, and that runbook entry said: if the orchestration layer cannot heal our production system fully automatically, then call R&D.
That's it.
Basically, it meant that we had reached a point where it was obsolete to have a NOC team, and all that was left was: now let's automate the last step, automate calling R&D.
And this is actually how we arrived, triggered by the pain of slow classic NOC processes, at moving not just to DevOps, but actually almost bypassing DevOps and arriving at NoOps.
Because truly, we now operate with a team in a true NoOps fashion.
So there is no NOC at all needed for operating that.
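A hedged sketch of turning a runbook into orchestration logic along the lines described above: the loop checks cluster health, applies automated remediation, and only pages R&D when the automation cannot heal the system. All function names and the health-check logic are hypothetical placeholders, not the actual orchestration engine.

```python
# Self-healing loop: automate the runbook, escalate to R&D only as the last resort.
import time
from typing import List

def check_cluster_health() -> List[str]:
    """Return the list of unhealthy node ids (stubbed out for illustration)."""
    return []

def restart_node(node_id: str) -> bool:
    """Attempt an automated remediation; return True if the node recovered."""
    print(f"restarting {node_id}")
    return True

def page_rnd_on_call(unhealed: List[str]) -> None:
    """The one remaining 'runbook entry': escalate to the R&D on-call."""
    print(f"automation could not heal {unhealed}, paging R&D on-call")

def heal_once() -> None:
    unhealthy = check_cluster_health()
    still_broken = [node for node in unhealthy if not restart_node(node)]
    if still_broken:
        page_rnd_on_call(still_broken)   # manual intervention is the rare exception

def self_healing_loop(interval_seconds: int = 60) -> None:
    # In practice this would run as a long-lived daemon inside the orchestration layer.
    while True:
        heal_once()
        time.sleep(interval_seconds)

if __name__ == "__main__":
    heal_once()   # single pass for illustration
```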
And I'm not speaking of a small deployment. We have 400 instances in Amazon, around the globe, fully highly available, with failover across multiple zones.
And we have survived, perfectly seamless to customers, many of the issues and downtimes that Amazon had; it all works seamlessly well.
And now take that: the NoOps team is actually a team of three people. And the next piece is, if you now think, oh, this is just a team of three operations people, then you can ditch this idea or this picture in your mind.
Because you should actually think of this team, the NoOps team, as a product management team; they are product managers who continuously drive and improve the orchestration technology, the orchestration product, if you will, that operates all of that.
And that's what their job is. Yes, those three people are also hooked up to OpsGenie, which is similar to PagerDuty, you know, to be on standby, but they are barely ever triggered by our systems to do any manual intervention. And the point is, they feel the pain on their own: if the automation doesn't work properly, then they would get more calls at night. So what they do instead is make sure that the automation and the self-healing work beautifully. And I think this is the right piece there, also in the lessons learned to arrive at NoOps: put the pain of high availability and good user experience back on the engineering teams, back on the product managers, and back on the chief architects. If the right people feel the pain of a production issue, then those people will actually fix it, because they're able to fix it.
This is also another key item: the chief software architects are now responsible for production, and that's new; it's not the ops team. But it also doesn't mean that they run a 24/7 cycle. No, it just means that every day in the morning they check what the APM results are, what the log results are, what the key metrics of the system are, to be proactive and also tweak the current sprint if they have found issues that should be proactively fixed, or some things may happen reactively.
However, all the non-functional requirements for production actually reside with the chief software architects, and the functional ones, of course, with the product managers.
And this is also key about feedback: bringing it to the team and bringing it to those who can actually fix it. So I think this was also a key item: getting to NoOps is part of the speed of delivery, so we can actually focus on building software and not on operating it.
So I want to now ask, or kind of reiterate, one point which I saw a lot with the other companies that I talked with.
So I mentioned last week I was visiting several banks that are going through what they call a digital transformation.
I would assume we kind of also went through a digital transformation, because we needed to, let's say it that way, we needed to escape the innovation dilemma.
As you said, we had Dynatrace AppMon for seven years, the product we installed on-premises. We knew the market was changing. We knew we cannot just duct-tape things together to make it work for the next five, ten, fifteen years; we had to innovate and build something new. And I think one key thing you mentioned at the very beginning, one very big thing, was the feedback loop: how long it takes to deploy a feature and understand whether customers like it or not and whether we built the right thing. If that takes six months, because that's the release cycle, that's obviously not good enough. And this was a big pain, right, because basically the future of Dynatrace was at stake. I mean, I'm sure, as you say, we can still sell AppMon for years to come, but the next generation of software needs to be monitored in a different way, because it's also a different type of software, right?
Right. So there's an additional factor to that: as disruptive as the iPhone was back then, it was also the beauty and user experience that was totally new for this entire domain, that no one had done before.
Everything was about features before.
And now we said, no, the mantra is people need to just love working with it.
It needs to be easy, beautiful, fast, slick, self-explaining, and it needs to just work.
So that goal was also a culture change for us. But that goal also implies that you need feedback super quickly in order to steer the ease of use and the self-explaining part.
And this is why today we also look at usage patterns every sprint, and what we measure in production right away influences the ongoing sprint as well. So it's not just about bugs or whatever; no, it's actually about usage, about the adoption of capabilities and so forth. That's what we drive there, and we have adjusted many things.
If you just think about our free trial process,
I mean, this was tons of lessons learned.
We looked at conversion rates and at how we set up just the steps of signing up and getting the users to deploy the agent into production, and how easy that was. So we tuned a lot just by observing the users and their patterns and the conversion rate.
Looking at the right metrics.
Correct, and that's key. And this is where today I say I could not ever think about being slower anymore,
because we're relying on that feedback and on having it timely. In fact, if we could accelerate it even further, it would be good, but there is also some limitation in dragging customers along as well. Because I should also mention that, yes, it's obvious for a SaaS solution to release every sprint. And, I mean, of course, we actually do 170 deployments per day; it's just that feature-wise, for customers, they notice that it is rolled out on a sprint-by-sprint basis.
But the point I wanted to make is that, yes, for a SaaS solution,
it's more obvious to have that speed.
We all know the Facebook and Etsy stories and so forth.
But what's new is that we took the same mantra also to an on-premises solution as well.
So this is why today you can buy Dynatrace SaaS and Dynatrace Managed On-Premises.
It's actually the same APM solution; the only difference is that the Dynatrace Managed on-premises solution keeps all the data on-premises, which is super important for many financial and healthcare companies and all kinds of others who have regulatory requirements.
But with the beauty that this managed cluster is still operated like a SaaS offering.
So we have a very transparent channel to that on-premises cluster, so we can update remotely and see the health remotely, and customers can also opt in to provide us some additional feedback patterns.
And beyond that, releases go out at the same pace as for our SaaS solution, and the customers get the same immediate value also on-premises, and that's the beauty.
And this is sort of also one of the other pieces that I never wanted to lose.
I never wanted to be any slower with feedback and with providing value
and solutions to customers any more than with the SaaS offering.
So we carried it over also to the on-premises offering.
That's new to the market, but it's actually super well accepted and adopted. Originally, we had some concerns about whether customers would accept that we maintain a channel there.
But truly, over 90% of all the financial institutions immediately say, okay, yeah, we need to check this, but once we give them the transparent description, they're fine.
There's just a small portion for whom it's trickier, but for those we have solutions as well, so it's not an issue.
But that was key: taking that same speed to on-premises as well, and getting feedback.
And I think what we also benefited from is that the products we've been developing for 10 years, if you look at Dynatrace AppMon and UEM, these were the products that years ago we deployed twice a year.
And we learned a lot from what we did with the new approach and applied these rapid deployment models and the feedback loops to, let's say, the more traditional products as well.
Because we have feedback channels in AppMon and UEM now, so we can learn which features are used by users out there.
We have a much more rapid deployment model: we deploy monthly now, and we can even deploy much faster if we need to, to address some issues.
So I think it was an amazing benefit of having this new approach
and then kind of feeding back how we can also change
the development of our traditional products.
Fascinating.
That's correct.
So you can learn from it this way.
For instance, if you are building microservices or so and have a new project, the easiest, of course, is to start there with continuous delivery and super quick releases.
And when you have established a culture there and know what it means, because it means breaking the silos that you typically have in enterprise environments, then you can use this as a small learning pattern or kind of a prototype, take some lessons learned back to your more classic teams, and up-level them.
But I believe that for enterprises speeding up, the number one challenge is not technology.
The number one challenge is their work structure, the classic silos where there is test and there is dev and there is ops, and it just doesn't work this way anymore.
You need to have a totally integrated set of teams across dev, test, ops, and actually business too.
Yeah, I think we will actually have Anita on the podcast probably two weeks after this one airs, and she will give us a little more insight into the actual team structures, what a team is doing, what they're responsible for end to end. I think that's going to be very interesting. Now, Brian, I know we tried to keep this podcast shorter than the other ones because we got feedback from our listeners.
But it's just so interesting and hard to stop.
I would, however, try to come to a conclusion soon.
And I actually have one last question for Bernd, which could be kind of the conclusion from my side.
But I also want to give you the chance if you have anything else that you think the audience needs to know about.
No, I'm good.
I got my questions or statements in, and I think anything that I was going to ask Bernd, he brought up on his own.
So well done there.
So my kind of final question, and maybe as a closing statement: you as a CTO, what recommendation, what guidance can you give
other C-level people out there
that feel the pain
that their organization is not moving
fast enough but they're fearing
they're seeing resistance
and I mean like what did
you do to convince not only
yourself obviously because you were the main driver
behind this but what did you do
to convince all the C-level management
to actually push through the plan of taking aside a team of developers
and building the future of the company?
I'm sure it was not easy, but maybe some guidance, some help,
some recommendations to other people out there that need to go
and lead their company to a transformation like this.
You're creating a new product.
Greenfield is obviously a huge decision because there is lots of cost involved and it also always takes time to get there.
But I also have to say that here, in our case, I benefited from having a great CEO as well, who understood that strategy is as important as tactics,
and who also understands that it takes a while to build a new architecture of our product and a different approach; it's not there by tomorrow. That was the key ingredient: to convince them, also the CEO, of, hey, we need to start greenfield. Plus, I also did a little bit of a trick. I mean, it doesn't apply directly to our products, but I still sort of frightened the management team: think about the Kodak story there, how Kodak did. They already had 30% market share in the digital camera market and still totally messed it up.
And this anti-pattern helped me a lot.
No, no, we are not Kodak.
This is not us.
Of course, we are not Kodak per se; it's not such a drastic generation change. But still, with the new cloud and microservices and software-defined networks and all these different approaches, the change is drastic enough that you really need a new approach to APM. Plus, on top of all that, people have no time anymore these days.
They need the whole topic of APM to be automatic and intelligent, and this is sort of what drove that.
And I think also, APM, I mean, it's application performance management, but I think what it's really about is continuous innovation and continuous operation, by giving you insight into what your users are doing and how your apps behave.
It's not only about performance anymore.
It's about user behavior.
It's about how users use your applications to make you successful as a company, right? So maybe we actually need to redefine APM. I think we tried: DPM, digital performance management, because that's more what it is, right?
Exactly. This is much more about digital performance management. I mean, we see it ourselves. Yes, performance is no doubt crucial. But as I mentioned before, in the feedback cycle for every sprint,
you also need to understand how users are using your product.
So the functional use aspect and the user experience aspect
is at least as important as performance.
And this is also why we have done lots of focus to monitor that too.
And APM is an old handle.
So you're right, Andy.
Cool.
Hey, Bernd, thank you so much.
This is phenomenal.
Brian, do you know when this will air?
Not specifically, but today episode 12 aired, and in two weeks we have the combined 13 and 14. Let me just pull up my calendar here, so probably early October.
The reason why I'm asking is that Bernd and I are actually doing a webinar on this as well. If you remember, I'm not sure if you do remember, we are doing a webinar on this in early November where we walk through some of these stories again, but on a webinar basis.
So folks that are listening to this, check out our webinar calendar on the website.
In early November we are talking about the transformation story of Dynatrace, how we went from zero to DevOps, or rather NoOps, in a very short amount of time.
Yeah, this will air in early October, so people can definitely check that out.
Bernd, thank you so much for taking the time to come on our meager little podcast here.
Thanks for having me.
And how long are you in the States for?
This week.
Okay.
Well, enjoy your time here,
and hopefully see you soon.
Thank you so much.
Andy, anything else besides the webinar? What was the date of that webinar? Did you have a date?
Yeah, the webinar is, I think, November 3rd. I have Bernd online, and then a week later I have Anita online as well.
And I also encourage folks, if this airs in October, to check out my presentations at QCon, at CMG, some conferences that are coming up.
And so we'll be glad to meet you there.
And also check out our local Dynatrace user groups.
Go to meetup.com and look for Dynatrace.
I've been doing a lot of user group meetings these days.
So happy to chat with you
and get excited about how we help you innovate
through our innovation.
I think that's it.
That's good.
Yep.
And you can check out my musical performance
on November 5th in Jersey City.
Different kind of performance.
Yeah. Yes, excellent.
Thank you everyone once again.
The next podcast, we will be taking this concept a little bit further and be talking to Anita from... Andy, what's her department?
She is our NoOps lead for Dynatrace.
Right, so that'll be how this was all implemented on the ground. It should be very interesting to hear how conceptual meets reality, and I'm looking forward to that one as well. So thank you, everybody, for listening. Again, any questions or comments, you can tweet us with hashtag PurePerformance at Dynatrace, or you can send an email to pureperformance@dynatrace.com. We always love any feedback or ideas that you have. Thank you so much. Goodbye here from Denver, and you guys want to say your goodbyes?
Goodbye from Boston.
Boston, of all places. Yeah, it's pretty nice here.
So goodbye.
Bye.
Thanks.
Bye.