PurePerformance - Java Observability and Performance in Azure Spring Cloud with Asir Vedamuthu Selvasingh
Episode Date: November 8, 2021

Java developers love using Spring. But running high-performing and scaling Java apps in production takes a little bit more than just compiling your code. In this episode we have Asir Vedamuthu Selvasingh, who has been working with Java for 26 years. In the past 25 years Asir (@asirselvasingh) helped Microsoft provide services to their developer community that make it easier to deploy, run and operate Java-based applications at scale – nowadays on the Azure Spring Cloud offering. Listen in and learn more about observability when deploying apps on the Azure cloud, which performance and scaling aspects you have to consider, and get a look behind the scenes at how your packaged Java app magically becomes available across the globe.

Show Links:
Asir on LinkedIn: https://www.linkedin.com/in/asir-architect-javaonazure/
Asir on Twitter: https://twitter.com/asirselvasingh
Monitoring Spring Boot Apps with Dynatrace: https://docs.microsoft.com/en-us/azure/spring-cloud/how-to-dynatrace-one-agent-monitor
Observability on Azure Spring Cloud: https://www.dynatrace.com/news/blog/dynatrace-extends-observability-to-azure-spring-cloud/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello and welcome to another episode of Pure Performance.
My name is Brian Wilson. And as always, since this one is being recorded on October 28th, my annual guest co-host this year is Andy "Candy" Grabner.
Andy, welcome back.
It's good to have you again.
I guess this is a reference to Halloween.
Yeah, way back.
You don't remember this, do you?
I don't remember this, no.
Way back, it was like your Halloween name was Candy Grabner,
like as in candy grabber,
like the guy who walks down the street grabbing candy
from the kids who are trick-or-treating
because you're the grumpy old man.
When I grew up as a kid, at least here in Austria,
Halloween was not a thing.
Now it's getting more and more popular.
And yeah, but I think I never did.
I saw it when I lived in the States, obviously.
I was watching the kids walking by and knocking on our door.
And then I didn't know what they want from me.
And I said, go away.
It's always a good excuse to buy a bunch of bags of candy that you like in hopes that no one comes,
and then you're stuck with all the leftover candy.
But that's why it's always important to buy the candy that you like.
But anyway.
Yeah.
I think this is not an episode about candy or Halloween.
But it might be a little spooky for some because we're talking about things that happen in the cloud.
And not every one of us knows what's really happening in the cloud.
But we have somebody here as a guest that knows exactly what's happening in the cloud,
especially when we talk about Azure and when we talk about Spring
and we talk about Spring Boot applications in the Azure cloud.
And without further ado, Asir, I hope I get your name correct,
but please do me the favor, introduce yourself to our listeners.
Hi, you got my name correct. I'm Asir. Thank you for inviting me. At Microsoft,
I'm on point for everything developers and customers need to build, migrate, and scale
Java apps on Azure. I'm also a Java developer. I started in 1995 with JDK 1.0, and I've been having
an amazing time with Java ever since.
And you've stayed there for 26... is it 26, if I do the math correctly?
26 years on Java. 26 years on Java, and 17 years at Microsoft. Just focused on Java all the time.
And, you know, with Azure, it's really like a big playground
where you can do amazing experiments with Java,
amazing innovations with Java.
So that's what I really, really like about Azure.
Now, I think Brian and I,
we also have Java backgrounds,
at least when I started my career
as a performance engineer,
there was a lot of Java apps
that we tested
when I was still working
for a testing,
like a load testing company.
And then as I switched over
to Dynatrace,
it was the initial idea
and the capabilities of Dynatrace was
tracing Java
applications under load. And this was also
by the internal name. The first name
of Dynatrace was actually JLoadTrace.
So for Java load tracing.
Oh, really? I didn't know that, Andy.
See? Trivia.
I knew lowercase d
capital T.
Yeah.
If you look at the very old Jira tickets,
the project was called JLT,
Java Load Trace,
because the initial idea was,
you know, Bernd, our founder,
and some others,
we were back then at Segway,
then Borland,
which is now Micro Focus or HP.
I don't even know what they are right now,
but we did performance testing mainly against Java apps.
And so Bernd then said,
hey, why can't we do automated distributed tracing
on Java apps and the loads
to really find out where things break?
So Java has been also with me for a long, long time.
And also Microsoft.
I remember I started my career in 1998.
And it was a shop where we mainly built applications running on Windows.
So Microsoft was always big for me.
I did all of my Microsoft certifications.
I don't remember what this was back then.
Windows NT and all this stuff.
95.
95, yeah. A long time ago.
When I was at
WeightWatchers.com,
we were an early .NET shop.
So we used to get invited to do our load
testing down at the Microsoft Lab
in
South Carolina or North Carolina?
I always forget. And that's where I actually met Mark Tomlinson,
who are friends of the show now. But it's funny because you mentioned that you've been doing
Java with Microsoft for 17 years. And that's, again, the repeated theme we've seen with Azure,
right, is Azure is not the old Microsoft. And obviously, it's even crazier that you've been
doing Java at Microsoft for 17 years. So that means that was the .NET-heavy, Windows-based one.
But obviously, Java being platform independent, they're no fools.
They're embracing that.
So it just goes to show you can never doubt what Microsoft is doing
or you can never think you know what Microsoft is doing
because they're doing a million things that always surprise you every day.
It's also about the success of the Java developers, right?
Java developers have been amazingly successful.
And you can see new students, new professionals,
they're all learning Java.
So it is that success that translates into Azure
being an amazing environment.
And today, it's almost like a home.
It is the home of many, many, many enterprise
Java apps today.
Now, can you tell us a little bit, if I'm a developer, and I assume we have a
lot of developers listening in, and also performance engineers, SREs, whatever they call them, can you
explain a little bit why somebody should deploy their app, their Spring Boot app, on Azure?
What do you make, what do you offer
that makes it easier for them?
Yeah, so Spring Boot developers, right?
When they deploy,
they also need a cloud platform
where they can deploy and scale
with minimal effort but maximum
impact. So this is where when they
deploy Spring Boot apps, they need several
things around it for dynamic scaling.
Monitoring is such an
important piece.
They also want the app lifecycle so their community of developers
within their teams can work together and build.
This is where they could always start with any virtual machines
or containers.
If they do that, they have to build this dynamically scalable infrastructure
and monitoring all these pieces.
Now that will take a significant amount of time. Now many of our customers, developers, we see them
trying to build that on containers and virtual machines and they build it as they go. When they
do that, there are some challenges. The challenges like these learnings and trying and doing it
can translate to downtimes, loss of revenue,
not meeting the end-user requirements
or even the business requirements, right?
So this is where, when they deploy to Azure,
we built a new service.
We announced it like two years ago. It's now generally
available. Many customers are deploying and running them in mission-critical systems on
production. Now, when they deploy to the service called Azure Spring Cloud, all they have to
do is just deploy the jars and they're done. And then they can just scale. They can scale unlimited.
Within a matter of minutes,
they could be serving billions of requests per day
or even millions of requests per day.
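(For reference, the deploy step described here might be sketched with the Azure CLI. The extension name and flags below are from the `az spring-cloud` CLI of this era, and the resource group, service, app name, and jar path are illustrative assumptions, not from the episode.)

```shell
# Install the Azure Spring Cloud extension for the Azure CLI
# (later renamed; flag spellings may have changed since this episode)
az extension add --name spring-cloud

# Create an app inside an existing Azure Spring Cloud service instance
# (resource group, service, and app names are made up for illustration)
az spring-cloud app create \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service

# The "just deploy the jars and you're done" step
az spring-cloud app deploy \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service \
  --jar-path target/payment-service-0.0.1.jar
```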
Now, when it comes to monitoring,
it just becomes magical.
Everything is wired up.
All you have to do is activate
whatever tools you like,
including Dynatrace,
particularly Dynatrace.
And magically, they're going to
see some meaningful
insights,
actionable logs and
metrics that they can really
act on.
So this is where they can deploy to Azure
and not worry about all the running the scalable systems,
wiring them all up and building app lifecycle.
They don't have to worry.
I mean, I just mentioned three things
that are many, many, many such dimensions
that you have to think when you're in the cloud.
Now, when they deploy to Azure, just deploy your jars,
you're done, and then open up your monitoring tool, Dynatrace.
There you go.
So you don't have to, just wanted a quick question on that, Asir.
When you're uploading your jars,
you don't have to say how much memory you think you need to run this, right?
Is this all like an opinionated platform where it's going to figure this all out for you? Like you literally just say, you know, take it and run it
and that's it. And Azure takes care of all the rest, sizing and everything else.
The default is opinionated, but that's not enough for every usage, right? You can certainly start
dialing up and dialing out, right? You can say, I want four cores. I want six gigabytes of RAM.
Simply ask for how much ever you want for one instance, right?
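(A sketch of dialing up the per-instance size as described here, with four cores and six gigabytes. The `az spring-cloud app scale` flags are from the CLI extension of this era; names and values are illustrative assumptions.)

```shell
# Override the opinionated defaults for one instance:
# four cores, six gigabytes of RAM (--memory here takes gigabytes;
# newer CLI versions also accept values like 6Gi)
az spring-cloud app scale \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service \
  --cpu 4 \
  --memory 6 \
  --instance-count 1
```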
Once you ask for one instance, and then when you are serving the loads,
it's completely different.
You can declaratively state when the load reaches a certain point,
multiply by two instances, multiply by three instances,
multiply by 500 instances, right?
Yeah.
Those declarative statements,
well, you don't have to worry about what is the machine underneath it.
You don't have to worry about
what Kubernetes is running,
what API is running.
You just declare it.
And as the load changes,
you would see Azure doing the hard work
of scaling out and
scaling back in. Now, that could be
a load-based, meaning they can
watch the CPU, watch the memory, watch
the threads, watch the inbound requests,
anything they can declaratively
state it. They can
also come up with
something like scheduled rules.
If you know now the Thanksgiving
week is going to be busy,
you can predefine for that week, say, hey, I want to have a constant load,
and above that constant load, I want auto scale.
So you can have all those.
Now, here again, it's a great opinionated platform that once you declare it,
you don't have to worry. Azure will take care of
the hard work of scaling every
layer out and in
to meet what you have
declaratively asked for.
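(The declarative rules described here map onto Azure Monitor autoscale settings. A sketch with the Azure CLI; the resource ID, autoscale name, instance counts, and the metric in the condition are illustrative assumptions, and metric names differ per resource type.)

```shell
# Declare the bounds: never fewer than 2 instances, scale out as far as 500
az monitor autoscale create \
  --resource-group my-rg \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.AppPlatform/Spring/my-spring-cloud" \
  --name payment-autoscale \
  --min-count 2 --max-count 500 --count 2

# "When the load reaches a certain point, multiply the instances":
# add 2 instances when average CPU stays above 70% for 5 minutes
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name payment-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2
```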
Now, technically, what
runs underneath the hood?
Do you really run
VMs and then on every VM
you have your Java runtimes
or do you actually containerize it and then run it on Kubernetes
and take Kubernetes orchestration?
What happens behind the scenes, if you can tell us?
Azure Spring Cloud is built on Kubernetes.
So you get the power of Kubernetes, but you don't have to learn,
manage, or govern, or touch anything.
So if you look at the inside of Azure Spring Cloud, every service instance has two Kubernetes clusters.
One cluster is reserved for the runtime, where we run all the Spring Cloud runtime elements,
logs, app lifecycle, security,
all those runtime elements are running in one cluster.
And the second cluster is reserved for your applications.
So when you're deploying inside Azure Spring Cloud,
it'll take the jars, it'll take the code,
and it will convert it into OCI compliant container image
and then deploy it to the app cluster.
So that way it is built using Azure Kubernetes service,
but it's completely abstracted away from our users.
All they have to worry about is the jars or the code; upload and you're done.
And everything else is taken care of behind the scenes.
So that means if I understand this correctly, because you mentioned earlier,
by default you have some opinionated deployment, opinionated settings,
but you can override them. That basically means that the settings will then influence
the Kubernetes deployment configuration for your container, like your resource limits,
and then the JVM, obviously, that you then run in these containers.
Can you override some of the JVM parameters as well?
Is this also possible that you override the opinion,
not only on memory sizing, but also on the classical JVM parameters?
Yes. Anything that you
do today when you're running a jar, we want to maintain
for you. There are no changes. So if you're setting JVM options,
you'll be configuring many, many things in the JVM options.
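(Passing JVM options through might look like this with the CLI. The `--jvm-options` flag is from the `az spring-cloud` extension; the option values themselves are illustrative assumptions.)

```shell
# JVM options pass straight through, just as if you ran the jar yourself
az spring-cloud app update \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service \
  --jvm-options "-Xms2048m -Xmx2048m -XX:+UseG1GC"
```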
All of those are available. The other thing which is
very important for developers is
hydrating information from the environment. So there's a way to declare environment
variables, and you can hydrate them. Now, some of them are configuration; you can just hydrate
the configuration. But some are secrets, like connection parameters. So this is where you do not want to put the connection values
inside an application just willy-nilly
because that just introduces data exfiltration risk.
Anybody can steal information and steal the data after that.
So to avoid that, we have the concept of Azure Key Vault.
You can store all your secrets centrally.
It can be administered by a security team.
And at the app level,
all you have to say is,
my secrets are coming from there.
And then the app knows how to hydrate them
at startup time.
And when it hydrates,
it only hydrates in memory within the app scope.
And when the app disappears,
then the secrets also disappear, right?
And Key Vault also gives you a complete auditing.
So you also know who touched the secret.
And that way in future,
if you figure out through auditing somewhere data got lost,
you can trace back to who picked up the secret.
It was Andy.
Exactly.
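(The Key Vault flow described here, sketched with the CLI. Vault, secret, and app names are made up, and the Spring-side wiring that hydrates the secret at startup, e.g. the Azure Key Vault Spring Boot starter, is configured separately in the app.)

```shell
# Store the secret centrally, administered by the security team
az keyvault create --resource-group my-rg --name my-team-vault
az keyvault secret set --vault-name my-team-vault \
  --name db-connection-string \
  --value "jdbc:postgresql://example-host:5432/payments"

# Give the app an identity, then grant it read access to the secrets;
# the app hydrates them in memory at startup, and Key Vault audits every read
az spring-cloud app identity assign \
  --resource-group my-rg --service my-spring-cloud --name payment-service
az keyvault set-policy --name my-team-vault \
  --object-id <app-principal-id> \
  --secret-permissions get list
```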
Hey, that's fascinating.
You talked about the scaling, basically rule-based scaling,
or the declarative rules.
Are these rules based on resource consumption?
So you say if you hit a certain CPU utilization, memory utilization, or can you also scale on other,
let's say more performance relevant metrics
like response time or anything like this?
So there are many, many metrics you can use it, right?
Earlier when I called out a few of those,
but the ones that are popularly used by developers, right?
So you can scale based on CPU, how much CPU is being consumed,
how much memory is being consumed, how many threads are being spawned,
how many requests that are coming in, how much response time.
Or you can do something complex, right?
You can combine some of these and have different set of declarative rules.
I remember there's about 29 metrics that are there.
You can pick whatever you want.
These metrics are directly from the JVM itself.
That way, you can figure out,
because there's no one-size-fits-all.
Different businesses have different needs.
Now, the reason why we actually got to talk
in the first place is because, one, you work with one
of our colleagues, Sophia, and she wrote a nice blog article
actually explaining how we bring observability
into Azure, into these Spring Boot applications.
Now, from a technical perspective,
I think now, as you explained,
that underneath the hood there's Kubernetes
and you're basically on the fly
creating a container with the jar.
I saw that you provide an option
to specify environment variables.
Like in our case for Dynatrace,
you specify the API token, the endpoint.
And I assume in the integration that you build with Dynatrace,
when you package that container, you automatically then load our agent
in and parameterize it.
Is this the way it works?
Or is it more on the container level?
Or are you able to also install us on the Kubernetes cluster level?
How does that work?
Sure, yeah.
So first, I want to thank you for the great partnership,
particularly from our end developers' point of view, right?
Yeah.
Many of our developers, they are very familiar with Dynatrace.
Even beyond familiarity,
they have existing systems that are already instrumented.
And when they come to cloud,
a portion of that is coming to cloud.
So they want to watch the entire system
using Dynatrace, right?
So from this point of view,
thanks for the great partnership
because it really helps our developers.
Now for the details,
each of these container images
that we are creating behind the scenes,
the Dynatrace agent is installed at the OS level,
and it is configured in a way all we have to,
all the developers have to do, our end developers have to do
is simply provide the parameters.
There are three parameters that you have to provide for Dynatrace.
Once you supply those three parameters, either as an environment variable
for the app or through Terraform or through CLI
or through ARM templates, it gets activated.
The activation happens behind the scene.
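(Per the Microsoft how-to linked in the show notes, the three Dynatrace parameters are supplied as environment variables on the app. A sketch with the CLI; all values are placeholders, and the variable names should be checked against the current documentation.)

```shell
# Activate the baked-in OneAgent by supplying the three Dynatrace
# parameters as environment variables (names per the Microsoft how-to;
# every value below is a placeholder)
az spring-cloud app deploy \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service \
  --jar-path target/payment-service-0.0.1.jar \
  --env DT_TENANT=<your-environment-id> \
        DT_TENANTTOKEN=<your-paas-token> \
        DT_CONNECTION_POINT=<your-connection-endpoint>
```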
Now, the agent by itself is a software.
Anytime you deliver a software like that,
Dynatrace as a company will also be maintaining it
and evolving it and releasing and releasing, right?
So from this point of view, both companies,
Microsoft and Dynatrace,
will work together to make sure
the latest agent is always installed for developers, right?
That way, they don't have to worry.
All they have to do is activate, and they're done.
Activate your Dynatrace, and you're done.
You just open up your Dynatrace portal.
You're going to see your farm of apps talking to each other,
the data flow.
You can see how much memory is being used.
There's a lot of intelligence as well.
What the software was showing us was amazing,
and many of our customers are pretty excited by that.
And especially, I think you brought up an interesting point
because clearly, if you just run, let's say, an app and it's a standalone app, a standalone service, I assume you are also providing some metrics out of the box. But the real key comes in having a solution like Dynatrace, because these apps that are deployed might just be a part of a bigger end-to-end application.
And this is where end-to-end tracing from,
let's say, I don't know, your front end
that then goes into Azure,
then maybe goes back into on-premise,
into maybe the mainframe,
who knows where it goes to.
And then you get all of this end-to-end visibility.
So that's great.
It's also really great the way you built it
that on the fly, you're basically building
these container images, ensuring that the agent
and the latest version is there.
Now, do you have, have you seen any,
because we're talking with a lot of performance
experts out there, are there any additional
performance recommendations that you see
or considerations for performance engineers
when they are testing or optimizing apps
running on Azure?
Is there anything in particular
they have to watch out for?
Or do you say actually not
because you're running on the latest
and greatest hardware anyway
underneath the hood?
Or is there anything that people
need to watch out for?
One thing we have learned a lot from many of our developers,
particularly on some of these areas.
One of the things that are very key is,
they have been running it on-prem. On-prem, your resources are finite.
And you're running within that.
Now you come to the cloud,
there's no longer
finite resources. It's unlimited
resources. You can just turn it on how much
ever you want.
Some of our customers
or developers, when they
ran it on-prem,
they say, okay, we have been monitoring 300 stores.
And we are limited by what hardware,
that's all we can do.
And we come to the cloud,
now we are monitoring 3,000 stores, 3,500 stores.
It just went from 300 to 10 times that, right?
So your base just expands, right?
When that expands with scale out,
there are many, many things that happen.
One is like, how's your memory used?
Suddenly you get a lot of production scale data,
which you have never tested before.
And you see this memory consumption.
And then you see, hey, that app is running out of memory. This app, this instance is running out of memory. That instance is running out of memory, which you never saw before.
And it is touching newer
edge cases. So when you're in that
situation,
you want a tool like Dynatrace, right?
Where Dynatrace by itself will pick it up,
it'll baseline and then show you the anomalies, right?
Those anomalies can be alerts even before
out of memory happens and becomes fatal.
And then you have to look at the fatal scenario
and understand what happened.
So that's one thing we saw over and over and over again.
Now, when you go from 300 stores to 3,000 stores,
there's so much data coming in, right? And managing that farm is also a pretty, pretty, pretty big challenge, right?
And then we saw customers who were like developers who were streaming in events.
And when those events are coming in, they're keeping the CPU high, very high, right?
In the on-prem world, when they had limited finite set of machines, it basically
went into throttling. Now, when you're
in the cloud and you're scaling out, it's just unlimited resources coming
to suck in all the events. So when you're processing that scale
of events, then you want to see what is happening
inside, how you can optimize for the best result,
how many cores, how much memory. If you're doing that kind of testing, the capacity planning is very
important. So for that capacity planning, you probably want to look at tools like Dynatrace,
where you want to see how much is being consumed, so you can come up with the combination of the declarative
set of rules that helps you to go at scale in an unlimited scaling environment.
And then imagine, with those rules, it's great that you have the option of
the multiples that you can do. For instance, if my CPU is at 70% and my response time has degraded,
then I want to go up. But if my response time hasn't degraded yet, we can let it go up higher.
I imagine a lot of the testing, right? If you think about from a testing point of view,
anytime you go into cloud, cost always becomes a factor, right? And I think that's part of
performance testing these days,
or should be at least. When you're moving something from, let's say, VMs or whatever
you're running on-prem into the cloud, as you know, you said there's those opinionated defaults,
which, if they're in small chunks, might work fantastic, where the CPU-to-memory ratio is
being utilized well, and as you scale, you're using those fully, but monitoring that to
make sure that's correct. You know, what if your setup is you're only using 10% of the memory on
each of those, but you keep scaling because you need more CPU? Well, that's another performance
issue to look at so that you can override the opinionated. This way, as you do scale,
you're scaling the right size components. To me, it's always fascinating, this idea that in performance, when you go to cloud,
making sure that cost and proper scaling under real load is taken into consideration.
Something that people didn't have to do as much as on-prem, because you didn't have these options.
You had what you had on-prem, and you couldn't control it anyway.
It was like, do we have enough capacity? Yes.
Great.
We're good.
With cloud, it's a whole new world.
And there's been a lot of other podcasts and all that discuss that type, that approach to testing.
But it's very fascinating.
And I think it's also fantastic that you have all the controls in place to control that.
I think, to the point that you just mentioned, we see developers, when they go into production,
they're coming from the on-prem world,
so they're very conservatively setting up the capacity.
Most of the time when they go into the first production,
it is over-allocation of resources.
And then they figure out how it is scaling out and scaling in.
And they really see the actual consumption during different
periods of time. And then they're able to efficiently allocate them
so that they get the best utilization
and the best bang for the bucks that they are spending.
Particularly in an unlimited infinite scale environment.
And I see them doing that.
And that's where tools like Dynatrace really helps them to look into it
and understand what is actually being consumed.
And then how can they translate into real capacities? And then, since
they're able to save some money, they can have more projects running, right? And that way they can
expand their business app space.
One more question from a development perspective. Do you also provide,
because you mentioned in the beginning,
I think, that you manage the lifecycle.
What does this mean? Do you provide
separation between pre-prod and prod?
So do you allow your,
like if I'm a developer and I have a Spring Boot app,
do you allow me to,
I don't know, using Azure DevOps or I don't know, maybe it's something else that you provide
to take this, first deploy it into a pre-prod environment,
then run some tests, evaluate,
and then push it forward to production?
Do you also provide this as part of your offering?
Or is this something that you would kind of build on top of it
and then maybe using two Azure Spring Cloud
environments to separate pre-prod and prod?
So I can answer that in different layers, right?
So Azure by itself provides many different ways to organize it.
One way we see developers organize is they use different subscriptions.
There's one Azure subscription dedicated for prod.
Another Azure subscription dedicated for UAT.
Another one dedicated for dev and test.
They typically use three or four subscriptions like that.
Now, within these subscriptions, the prod can
have multiple instances of Azure Spring Cloud, particularly if you're thinking about
global. You could have two regional
installations in America, let's say South Central and East US, one
in Amsterdam, another one in Singapore, another one in Hong Kong.
You can have multiple instances of Azure Spring Cloud.
Now, when the calls are coming in, they can come in through a front door,
Azure Front Door or F5.
That can do the switching by regions.
So if I'm coming in from San Francisco, I might be Seattle or San Francisco,
I'll be directed to South Central in Texas.
If I'm coming in from London, I'll be directed to Amsterdam.
Similarly, if I'm coming in from Singapore, I'll be directed to Singapore.
So that level of switching will happen at the front,
and then they can be sent to different environments,
different regions in production itself.
Now, within that production instance, we give you, for each app, a way to stage the app and then swap the staged app to a production.
So that's staging and swapping.
Now, when you are in the deployment, if you're a dev test environment, let's say your dev test is going in Texas, you have an instance running in Texas. And you can set up the deployment in a canary style.
Every time I check in, it just gets deployed.
Every time I check in, it gets deployed.
So that's the canary style.
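(The stage-and-swap pattern described a moment ago might look like this with the CLI. The `deployment create` and `set-deployment` commands are from the `az spring-cloud` extension; deployment and app names are made up for illustration.)

```shell
# Deploy the new build to a staging deployment alongside production...
az spring-cloud app deployment create \
  --resource-group my-rg --service my-spring-cloud --app payment-service \
  --name staging \
  --jar-path target/payment-service-0.0.2.jar

# ...then swap it in as the production deployment once it checks out
az spring-cloud app set-deployment \
  --resource-group my-rg --service my-spring-cloud --name payment-service \
  --deployment staging
```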
So the key thing is Azure gives us all the building blocks for developers
so that developers can build what they need from a business point of view, from a team culture point of view,
so that they all can be successful.
Are there any, I think you mentioned earlier,
what the great thing about the offering is: I don't have to worry
about whether I have one user on my system
or a million users on the systems.
Hopefully it's a million or even more because you build something
that all of a sudden becomes very popular. But do you have best practices on how you need to
write these apps to really leverage the scalability factor that you provide? Because I'm pretty sure
I can write an app that works perfectly for one user and for two, but still, even if I put all
the hardware on it, it doesn't scale
because I don't follow certain best practices and principles.
Do you have guides for developers that are new to the platform, to that runtime, to that
framework to say, hey, and this is how you should write your app so you can truly leverage
the scalability effects of it?
To start with, you can deploy any Spring Boot app.
That's very important.
The Spring Boot app could be a monolith.
It could be a microservice or somewhere in the middle.
Everything is fine.
You'll be able to deploy and you'll be able to run at scale.
All of those things are fine.
If you'd like to, as you said,
if you'd like to use the best power that Azure and the
cloud platform and everything provides you, the 12-factor principles are great. That is all you
need: 12-factor principles. Principles like externalizing your config. Use service registration and discovery.
Do not write logs to a file system,
but push them all to the console,
standard in and standard out.
Those kind of the 12 factor principles,
right?
Don't save any business state inside your app.
Always store your business state in a database.
And if you want to accelerate that, always use a cache like Redis Cache.
So if you follow those 12-factor principles,
then you're in a safe place.
You can get the best out of the cloud. This is where when you pump all the logs
and we pick up all the metrics, it can show up in Dynatrace, it can show up
in Azure Log Analytics,
then you can really monitor the systems.
But if you write to a file, that's the old world, right? When you do it on on-premise systems,
then you are dependent on some file system.
That dependency, all of those things can be taken out.
So simply following the 12-factor principles
will get you to an amazing place in the cloud.
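(One concrete 12-factor step mentioned here, externalizing config, sketched with the CLI: config lives in the environment, not in the jar. The variable names and values are illustrative assumptions.)

```shell
# Externalize config as environment variables, hydrated at startup,
# instead of baking it into the jar (names and values are illustrative)
az spring-cloud app update \
  --resource-group my-rg \
  --service my-spring-cloud \
  --name payment-service \
  --env SPRING_PROFILES_ACTIVE=prod \
        SPRING_REDIS_HOST=my-cache.redis.cache.windows.net
```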
I think this is also the time when we should actually maybe reference back to
Brian. Remember, I think it was two years ago when we had a session with Josh Long,
I believe. He's also one of the advocates of Spring.
And I think back then we also talked about 12-factor apps and the benefits of Spring Boot.
I think it's important to state the 12-factor.
It's not just for Spring.
That's for anything, right?
I mean, at least most cloud.
Because what we see, and this is the funny thing,
is that sometimes we see people wanting to get in the cloud.
And while technically you can take and lift and shift into the cloud,
they'll be like, hey, I have my monolith application.
I'm going to lift and shift that to Kubernetes.
And it's like, yeah, you can do that.
But if this is your introduction to Kubernetes and the introduction to the cloud, it's going to be a terrible, terrible experience.
You really should take the time to refactor and move with that. So, you know, that 12-factor app approach, I think, is just so critical as you go into the cloud. And, you know, going cloud native and designing, re-architecting for
the cloud is just, it might be more upfront work, right? Maybe like, Hey, let's get on the cloud,
let's get on the cloud. But you're, I think what we've seen from plenty of people is that cloud
experience is going to be pretty horrible if you don't do it for the cloud. Yes, you can, but it's just going to be so painful.
And, you know, I don't know.
I just can't stress it enough that I encourage everybody to refactor for cloud.
But that's my opinion.
One thing we want to do to help many of these developers,
particularly if they have a monolith app and want to go to the cloud,
is to offer them recipes so they can quickly apply the 12-factor principles.
Because many of those steps are clerical.
Some of them require thinking, but most of them are clerical.
And we are also working with the Spring community
to see if we can have some machine automation that automatically runs through the clerical steps.
That way they can get the best experience on the cloud.
Cool.
Hey, well, first of all, thank you so much
for all the information that you gave us.
I was not that aware of all of the offering
and what it can truly bring to
developers. I think that was great.
Also great to know the partnership that we
have, that our two companies have,
that you thought ahead and really
made sure that external
tool vendors like Dynatrace can
get bundled automatically into
these containers that you built to
do, like in our case,
observability and monitoring.
Is there anything we missed out?
Is there anything else you wanted
to make sure people understand?
Or is there maybe a good way to start, where to look at?
I want to mention two things.
We talked a lot about Spring Boot App.
From an Azure point of view, the Java ecosystem is pretty large.
There's Java SE, Tomcat apps are there, Spring Boot apps are there,
Java EE, JBoss, WebSphere, WebLogic, WildFly.
Any app type, any app server, you can deploy them all on Azure,
across Azure.
And you can use any monitoring tools, including, importantly,
Dynatrace on any of these systems and monitor them.
You can deploy them to containers.
You can deploy them to virtual machines. You can deploy them to Azure Spring Cloud as well.
So that way, you can monitor using any of these tools.
So that's one important thing. The second one is that recently,
Dynatrace and Microsoft in collaboration
announced Dynatrace software as a service on Azure.
This is an amazing offer for many of our customers
because it offers a turnkey solution.
So they just turn it on, it's there for them, and it's operated by Dynatrace.
And the important thing they're going to get is this:
when you monitor using Dynatrace, you could be running it on your
on-premises system and monitoring things in the cloud, and there's ingress and egress cost associated with that. But when you run it in Azure,
particularly using this software as
a service, then there's
no egress cost.
That's a very good argument.
So turnkey
and no egress cost is an amazing
combination to take advantage of that.
And you're doing a pretty good job in
advocating for our partnership and for
Dynatrace.
It's a great partnership.
We love working with your teams and we love to continue working along those
lines to make sure our developers get the best.
Yeah.
Cool.
Perfect.
Hey, Brian, any final parting words from you?
Yeah, every time we talk to anybody from Microsoft,
I always have the same thought,
and I always reiterate it.
For anyone who's been listening for a long time,
you've heard this before,
but for anyone who's new,
or anybody who just hasn't thought about Microsoft,
or Azure, I should say,
too much: who was it who said,
"It's not your father's Microsoft"?
Who was that guest a while back?
It was either Brown or it was Kelsey Hightower, actually.
But I think it was Donovan Brown.
Donovan, yeah, yeah.
You know, it's the fact that we're sitting here talking about Java
and Spring Boot, you know, it just always astounds me,
especially going back to the old-school Microsoft.
But also the fact that, you know, first we had this idea of serverless functions where you just drop your function in.
And then it was, okay, you can run your Kubernetes, and now Kubernetes is offered as a service.
And now we have the Spring Boot abstraction.
And, you know, where cloud is going is just so amazing.
It's such an exciting time to be living through from a technological point of view.
So, you know, hats off again to Microsoft and Azure for everything you're doing on the cutting edge with that.
And for really just breaking the old mold.
And, you know, who would have thought we'd be here with that from way back?
So it's pretty awesome to hear all this stuff.
And thanks for being on.
Thank you. Thank you for the great conversation.
Yeah, and we want to make sure that we are putting in the links to the blogs that we have. I see you're also on Twitter, at least from the name here, so I will put that in in case people want to follow up. I'm sure they
can connect on LinkedIn or Twitter, and we'll make sure that everything is there
in the description of the podcast.
That sounds good.
Yeah.
All right.
Well, thank you for listening.
And thanks to all of our listeners
for tuning in once again.
If you have any questions or comments,
you can reach us at pure underscore DT
or send us an email at pureperformance at dynatrace.com.
Once again, we'll have all the other links
in the show notes.
So thank you, everybody, and have a wonderful day.
Bye-bye.
Happy Halloween.