PurePerformance - Busting 4 Java Tuning Myths with Stefano Doni
Episode Date: September 13, 2021
Tuning the JVM GC to reduce garbage collection time will speed up application performance. If you agree with that statement, then I encourage you to listen to this episode, where I have Stefano Doni, CTO at Akamas, walk us through 4 Java Tuning Facts & Myths. He goes into detail on why, even in 2021 with great improvements in the JVM, it is still important to optimize the JVM specific to the environment, workload, and application behavior. If you want some visuals, try to catch his presentation from this year's Performance Summit called "How AI optimization will debunk 4 long-standing Java tuning myths". To follow up with Stefano, check out resources such as the tutorials on explore.akamas.io, the blog posts and videos on their new website akamas.io, or follow them on Twitter @akamaslabs
Links from the Show:
Stefano on LinkedIn: https://www.linkedin.com/in/stefanodoni/
YouTube: How AI optimization will debunk 4 long-standing Java tuning myths: https://www.youtube.com/watch?v=9VvaxATyYsA
Akamas Tutorials: https://explore.akamas.io/
Akamas website: https://www.akamas.io/
Akamas on Twitter: https://twitter.com/AkamasLabs
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's episode of Pure Performance.
I know normally you expect the voice of Mr. Brian Wilson,
who does a much better job in doing the initial intro.
I'm pretty sure, as he is always the one who post-processes the whole thing,
he will come up with a great intro music or something to make me sound funny in the end. But I'm here today without Brian because he's part of a training, and that's
why I have the honor to interview and talk with Stefano Doni completely by myself. Stefano, I
have you to myself for however long this will take. But thanks for accepting this invitation. First,
for the people that have not heard or seen you before, who are you and what makes you
excited about performance? Hi, everybody. And thanks, Andy, for having me today. I'm
really excited to be here. Well, basically, just a quick intro about myself. I'm Stefano Doni. I'm co-founder and CTO at Akamas.
So really, my background is mainly on performance work.
So I started off doing capacity planning and optimization work about 16 years ago.
So my first exposure to performance was actually to build mathematical models
to predict capacity of enterprise systems like servers,
storage, and network, et cetera.
Then I gradually moved to a product position where I focused on trying to create new solutions
to solve hard performance engineering problems in a better way, in a sense.
And that brings me to Akamas, which is a company that I co-founded in 2019, which is about
solving performance optimization problems
in a completely new way using artificial intelligence.
That's great.
And first of all, I have to go a little back to a different topic.
It's clear for most, based on your name, that you're Italian.
And I have to congratulate you and your nation again on the great victory.
And of course it was the Austrians that let you pass to the later rounds,
because we thought, in the end,
Italy is definitely a better Euro Cup contender than we are.
That's why we are congratulating.
This is just a little side joke because Brian and I have been talking about
the Euro cup in the last couple of episodes.
But seriously, I'm really happy to have you, I think, again, on one of my channels, because we talked about Akamas in previous episodes and webinars.
I really like that you are also helping us on the open source side with Keptn to bring all of these performance optimization use cases
to the community. You just made me aware that you have explore.akamas.io, which is a
codelab environment where people can explore Akamas. We'll definitely make sure to link it.
As I'm looking at it right now, if somebody now, let's say, goes to explore.akamas.io,
what's the best kind of tutorial to get started with?
Yes, I guess the initial one is great,
which is the welcome tutorial,
which is really a two-minute video where you can get a sense of what Akamas actually does.
And we have actually tutorials to optimize Java applications.
So I guess we touch on this towards the end,
but we recently launched the free trial initiative at Akamas.
So you will actually have the possibility,
and lots of performance engineers
are actually signing up this month to try Akamas out,
to run it locally, even on your laptop, and try out the first guide
to optimize Java performance,
and actually see what we will be talking about,
these kinds of Java tuning myths,
on your own laptop.
Yeah, and that brings me now.
Let's jump into the topic because I told you
it's really great when you suggested the episode
on four Java tuning myths.
And people are always excited about things that break, or things they thought
they knew. I think myths are another big topic. So let's go kind of step by step,
because I know we have a lot of Java performance enthusiasts as listeners. So let's talk one by one.
What's the first myth?
Okay, great.
So the first myth is about whether tuning the JVM garbage collection performance
actually leads to faster applications.
That's not a myth.
I thought that's a given fact.
Yes, actually, that's an interesting piece, actually.
So this myth, it's actually, if you are into JVM tuning,
and I know lots of listeners are actually
are JVM tuners, or actually Java performance lovers,
you know that actually the main target of tuning the JVM
is actually the garbage collector.
So the garbage collection is actually the process
within the JVM that basically manages the memory.
So the main feature is to free up memory
that the Java program, the Java application actually allocates
in such a way that the application developer
actually doesn't need to care about memory management.
It's one of the biggest benefits of the JVM.
But everybody knows that the GC is also one of the biggest performance problems of the JVM itself.
So there are really countless blog posts and tuning guides
and rules of thumb about how to tune the garbage collector over the years, from the community,
even from high-profile organizations, et cetera.
And really, if you look at them, there's
really a pattern across all those guides, which
is this first golden rule, as we call it,
of garbage collection performance, which is
basically also stated by Oracle itself in its tutorials about garbage
collection tuning.
And this actually states that you really
need to lower the garbage collection time,
the garbage collection overhead, in order to make
your application run faster.
So let me briefly describe what it actually means.
So if you're not familiar with the garbage collection and how it works,
it basically has this task of freeing up the memory.
And in doing that, it basically needs to stop the application threads.
So you will have this pause time, or stop-the-world pause, or suspension time.
Actually, various tools call it different things.
Well, your application threads
are completely stopped,
waiting for the garbage collection
to do its work.
So the rule states basically
that you need to decrease
the amount of time
the garbage collection
actually stops the application.
So if you think about it,
actually, it makes total sense.
So if we decrease
the garbage collection time,
we will get higher application throughput,
higher application performance,
because actually the application threads
can run for more time without being stopped.
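As a side note for listeners who want to see these pause times on their own application: a minimal sketch using OpenJDK 11's unified GC logging could look like the following (the jar and log file names are just placeholders).

    # Log every GC event with timestamps to gc.log (JDK 9+ unified logging)
    java "-Xlog:gc*:file=gc.log:time,uptime" -jar my-app.jar
    # The "Pause Young ..." / "Pause Full ..." entries in gc.log are the
    # stop-the-world suspension times that this golden rule says to minimize.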
Yeah, makes sense for me.
Okay.
The interesting thing is that at Akamas,
tuning the JVM is actually one of the main targets,
because lots of enterprise and also cloud-native applications
today are based on using the JVM.
So we have a pretty interesting experience in using AI
to explore the effects of tuning the different JVM parameters.
So today, a JVM has more than 800 parameters that you can tune.
Actually, it's very, very complex for the human brain
to do that.
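To get a feel for how large that tuning space is, you can ask the JVM itself to list its tunables; a quick sketch (the exact count varies by JDK build and platform):

    # Print every tunable flag the JVM knows about, then count them
    java -XX:+PrintFlagsFinal -version | wc -l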
And by looking at the results that Akamas achieved
by tuning lots and lots of JVMs,
we came to realize, actually, and we were basically also
ourselves very surprised that if you tune the JVM to lower
the garbage collection time, which is basically
what everybody is doing, you may actually end up slowing down your application instead of making it run faster.
How does that work?
Yeah.
But why? Why is that?
So let me tell you the story.
So basically, we did lots of experiments in this case.
By the way, we have published the results,
the complete results in a recent blog post. We did an experiment on a Java 11 application using OpenJDK 11,
varying all those JVM parameters with Akamas, and then we executed the Java application. It was
a Spark application. And then we measured how long that Spark job actually took with the different
kinds of JVM settings.
And after this optimization study, actually, what we got was a striking result.
So we have looked at the best JVM settings that Akamas found, so the ones that made the
application run faster.
And the first interesting thing is that we noticed that the best JVM configuration was
actually able to make the application run 30% faster than the baseline.
So which is already great, meaning that you have this kind of, in a way, space of potential
exploration and potential performance improvement going past the default values that today's
JVMs, in a way, have built in.
But what is striking is actually what
happened to the garbage collection overhead
during those optimizations.
So the configuration that had the higher performance
actually also had higher garbage collection overhead
with respect to the baseline.
So these are completely, in a way, counterintuitive results,
because it's literally contrary to intuition
and the opposite of what we might have expected.
So how can it be that if we, in a way,
let the garbage collector stop our application for a greater amount of time,
the end-to-end application performance
will actually improve?
That was so counterintuitive, in a way, that we absolutely needed
to understand ourselves, being performance engineers,
what was happening under the hood.
We dug deeper into this issue, and in order to do that,
we did some deeper performance analysis at the Linux
kernel level. So what we did was to actually trace the CPU scheduler, which is basically
a key component of the Linux OS that has to decide onto which CPU to schedule the different
application threads. There are lots of open source tools to do that.
And the most common, most famous one is Linux perf,
which lets you do this.
We used a tool called Perfetto, which is made by Google
and is actually open source.
And it's interesting because it's
a tool used by Android application developers
in order to understand the behavior of the Android system.
For example, when you have performance glitches
on your mobile apps.
But it's interesting because it basically
runs on every Linux OS.
So we use that to actually understand
how the different threads within the JVM
were scheduled onto the virtual machine CPUs, basically, in order to understand how things actually
were being scheduled.
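For listeners who want to reproduce this kind of scheduling analysis, a rough sketch with the standard Linux perf tooling might look like this (Perfetto works similarly but with a graphical trace viewer; the 30-second window is arbitrary):

    # Record CPU scheduler events system-wide for 30 seconds
    perf sched record -a -- sleep 30
    # Show, per thread, when it ran, for how long, and on which CPU,
    # including the JVM's GC threads
    perf sched timehist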
And what we understood was a pretty striking result,
which is actually one of the explanations for these,
in a way, counterintuitive results
that we are talking about.
So the first thing that we noticed
is that the garbage collector
that is being used in JDK 11 is the so-called G1GC.
And G1GC has been designed in a way
to reduce the application pauses.
So it's a GC that, by default, tries
to minimize the amount of time it stops the application.
And in order to do that, actually, the design feature
of G1GC is that it has some threads that actually can clean
up memory while the application is running.
So those are called concurrent GC threads.
So the G1GC tries to free up memory and, in a way,
do some memory reclamation activities
as the application is running.
And then, of course, at some point,
it needs to stop the application.
So there are other threads for the G1GC
that are called parallel threads that actually
do the cleaning work when the application is actually
stopped.
And the key insight, in a way, that we got
by looking at how those threads were actually running with
respect to the application threads,
is that we noticed that the concurrent threads of the G1GC
were actually competing with the application threads.
So they were running concurrently.
And in a way, they were stealing CPU cycles
from the actual application threads.
So in a way, they were making the application run slower,
even in the baseline.
And why does the G1GC actually do that? Because,
by design, it wants to minimize the actual pauses.
So the JVM configuration that Akamas found
was a configuration that, when we looked at it,
actually minimized the work of those concurrent threads,
which is something that led the application
to run faster for most of the time, at the expense
of higher application pauses.
So this kind of, in a way, counterintuitive result
actually made the application, the overall end-to-end application
run faster.
And the insight here is really why the golden rule
doesn't work anymore.
Because this work of the G1GC concurrent threads
is not really counted as suspension time,
because they are really not
suspending the application in the way that we defined
before.
But nonetheless, they are still, in a way, interfering with application performance because
they are scheduled at the same time.
So even though you might see from your tools lower garbage collection suspension time,
this doesn't mean that garbage collection is not actually impacting your overall application
performance.
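To put names on the two kinds of G1 threads being discussed: the concurrent work and the stop-the-world work are controlled by separate JVM flags. A minimal sketch (the thread counts below are purely illustrative, not recommendations):

    # G1's two thread pools:
    #   ConcGCThreads     - concurrent marking threads that run alongside the
    #                       application (the CPU-stealing work described above,
    #                       not counted as suspension time)
    #   ParallelGCThreads - threads doing the cleanup during stop-the-world pauses
    java -XX:+UseG1GC -XX:ConcGCThreads=2 -XX:ParallelGCThreads=8 -jar my-app.jar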
And that also means that a good performance engineer needs to look beyond the boundaries
of your runtime, because in this case, you also need to look down to the OS level,
the kernel, what happens down there, right?
Because in the end, these threads are executed on the OS level. And then if you only focus on what you see
and what you think you know,
then you may look at the wrong data,
make wrong assumptions,
and then optimize in the wrong place.
That is really interesting.
Wow.
Cool.
So that means, on the one side,
I guess for many use cases, it still holds true that tuning the JVM garbage collector leads to performance gains.
But it's still a myth that this is true for every use case.
So that is very fascinating.
Yeah, I think so.
Actually, it's a little bit of sad news for the performance engineer, because those kinds of rules, which are really performance models, suddenly don't work anymore.
But I think, nonetheless, it is important to recognize those facts.
And it's really not a JVM fault here.
There are actually lots of other languages that have got this.
But basically, the same thing can happen at the container level with Kubernetes.
It can happen at the OS level.
So basically, the thing to realize is that our environments are actually more complex.
And the thing is, we have fewer, in a way,
insights as performance engineers to guide tuning
when these kinds of rules don't work anymore.
So we are, in a way, forced
to look more at the end-to-end application performance.
And of course, automating all these kind of tasks
is actually one of the
best ways to move forward as a performance engineer.
Awesome. So this was fact number one. What about the next one? What's the next myth or
insight?
Yeah, the next insight is actually also interesting because we call it the JVM short blanket in
a way.
So another well-known fact about the JVM performance is that there are some inherent, in a way,
design trade-offs within the JVM.
It's actually, if you think about it, it's actually even more general than the JVM.
But let's focus on the JVM.
So you have this so-called performance triangle,
meaning that you have three key performance indicators in a Java application that are
throughput, latency, and footprint. Throughput, of course, means how many, in a way, operations
can I do in a given amount of time. Latency is, of course, the amount of time it takes to do one single operation.
Footprint is basically how much resources
do I need, CPU and memory.
So that's, again, one of the rules of performance
of the JVM, which is pretty well known;
in different conferences even Oracle performance leads
have actually put it very clearly:
if you think about those three KPIs, throughput, latency, and footprint, you can really only get
two of them.
So you cannot at the same time improve throughput, decrease latency, and decrease footprint.
This would be like the dream for a performance engineer, in a way, instantly make the application
run faster,
being able to process more work and cost less.
Of course, that's the goal.
But there are actually some inner design issues,
design properties that actually make reaching this goal
very difficult without some significant development work. That's more or less the short blanket, as we call it.
So again, it completely makes sense also from a performance engineering point of view.
I guess we are pretty used to reasoning in those terms, and to the fact that you can't really
take an application and improve all the three KPIs.
And again, we have an interesting fact here, because we used Akamas.
That's a study that we did recently for a telco customer.
And we got a very interesting result recently
by optimizing their CRM system, which was again based on Java.
I guess it was OpenJDK 9.
And we used Akamas again to find a JVM configuration that actually minimized the footprint,
in this case, the memory, which was the bottleneck for them,
while still keeping the performance level
that they had before.
So again, in a way, you want to make the application cost less
because you want to reduce the infrastructure.
But of course, you don't want to sacrifice latency or throughput.
And when we analyzed the results, Akamas actually was able to achieve a pretty significant
result of reducing the heap size by 80% or so. But the configuration that was
able to reduce the heap size by 80% also managed to improve the throughput and the
latency by more or less 20% at the same time.
And is this because, I assume, less memory, less footprint also means less garbage collection in the end?
Or is that the reason?
Yeah, again, that's interesting.
So this was a customer application, so we couldn't reproduce it locally and study it more.
But actually, the tradeoff, the typical, in a way, again, rule of thumb is that, of course,
you can let the garbage collection run better by assigning more memory.
That's again the typical suggestion.
But in some cases, actually, this can have some drawbacks, because you will incur higher garbage
collection times. Again, this is a kind of experiment that you need to do because it can be different from
application to application, from workload to workload.
And again, it's interesting because typically you need to do significant development work,
as the experts say, to improve all the three KPIs. But if you are able, like we did with Akamas,
to explore this huge optimization space with 800
parameters, et cetera, you will actually
be able to find configurations, specific configurations
for your application, that perhaps,
in a way, give you a free lunch.
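As a rough sketch of what trading footprint against GC work can look like at the flag level (the sizes here are placeholders, not the values from the customer study mentioned above):

    # Pin a deliberately small, fixed heap to cut memory footprint...
    java -XX:+UseG1GC -Xms512m -Xmx512m "-Xlog:gc*:file=gc.log:time,uptime" -jar my-app.jar
    # ...then check gc.log: if GC overhead climbs too high, the footprint saving
    # is being paid for in CPU cycles and pause time, the trade-off discussed here.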
Yeah, and I think this is a very important point to
just make. Every application is different, based on its memory behavior, based on its performance
behavior. And this is why there's no rule of thumb that says these are the perfect settings for a JVM. It
always has to be evaluated, assessed, and optimized based on an application, but I guess also based on
a particular load pattern, right,
that an application has? Because an application that sits idle
and doesn't do a whole lot
behaves differently than one that is under heavy load.
And so this is why I completely agree with you.
It's so important to constantly, you know,
analyze and then optimize based on current load patterns.
The best way: continuously, fully automated.
But we'll get to more of this later on.
So this was myth number two.
That means you just told me that you can escape the golden triangle of throughput, latency, and footprint and optimize all three of them.
It's not that you can only pick two.
The next myth, and I want to also read it out loud for you this time because thankfully you gave
me all of these myths that we talk about, and this is one that I'm also kind of wondering about: why in
2021 do we still need to tune the JVM and not just let the JVM do its job, because the JVM itself should be smart enough? Yeah, yeah, that's great.
And actually, we have this myth.
In all fairness, of course, the JVM itself and the OpenJDK developers are
actually improving the JVM over time.
So there is lots of work being constantly done to improve performance, improve the
garbage collection, reduce pause times.
So I don't want to pick on the JVM developers here.
But the interesting thing is that we need to consider
the JVM performance in a broader context,
which includes, again,
your particular execution environments
and also your performance goals.
And those may not match what are the default, in a way,
JVM performance goals that comes out of the box.
So I have two actually interesting data points
here that I want to share.
So the first one is a very simple performance experiment
that everybody can basically replicate.
We used Renaissance, which is a very popular open source Java
benchmark.
It's actually a suite that comprises
more than 20 applications.
And it's being used by the Oracle
and OpenJDK developers themselves
as part of their benchmarking activities
to understand if the JVM is improving,
and on which workloads, et cetera.
So in this case, we took this benchmark suite,
and we did a very simple experiment.
So we used, again, OpenJDK 11, and we
ran a benchmark with an increasing amount of heap size.
So we started from 1 gig, in 1 gig increments, up to 5 gig.
We didn't change any parameter here
other than the maximum heap size.
So again, this means that we are again getting
the G1GC garbage collector, which is the default one.
What is interesting is that we noticed,
we recorded during this experiment,
basically the major performance KPIs of the JVM,
like the garbage collection time, the CPU utilization,
and also the amount of memory that the whole JVM process
actually used during the benchmark execution.
And when we analyzed, actually, we found something very
interesting.
So what we noticed is that in the initial configuration,
the 1 gig experiment,
the garbage collection time
was already pretty significant, so pretty high.
It was about 20%.
But when we increased the memory,
we assigned more memory in the subsequent experiments, 2 gig,
3 gig, up to 5 gig,
we noticed that this garbage collection time actually
did not decrease that much.
That was already, in a way, an initial surprising fact.
Because actually, again, when you have JVM memory stress,
the usual reaction is, let's add more memory.
The second thing that we noticed is that we also recorded the GC logs,
and we analyzed all the different actual allocation patterns
and what the actual memory utilization was during the run.
And what we discovered was that actually G1GC,
with the default settings, didn't actually use all the memory
that we assigned.
So, for example, out of 5 gig, we have the data
that we will be publishing as our next blog post also.
It was actually using a little bit more than 3 gig.
So it's very interesting to reflect a little bit
on this result, because what is actually happening
is that basically the garbage collection is a trade-off;
the JVM itself has to decide many things. In this case, the JVM actually decided
to trade off in favor of saving memory, because it didn't decide to allocate the
full 5 gig that was available, but instead to, in a way, spend more time doing garbage collection
and also spend more CPU cycles doing that.
So again, that is interesting because of the takeaway,
I guess, for the performance engineer.
And that is, of course, just about one of the tunables, which
is the garbage collector kind that you have on the JVM.
Actually, the JVM itself has to decide those kinds of things.
So your performance goals might actually be different,
because perhaps we might want to have, I don't know,
lower CPU cycles, for example.
Or we might want to fully use the memory out of the box,
and smaller pauses.
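The heap-sweep experiment described here is easy to reproduce. A minimal sketch, assuming you have downloaded the Renaissance suite (the jar and benchmark names below are placeholders; check the Renaissance documentation for the exact ones):

    # Run the same benchmark with a 1g..5g max heap, default G1 settings otherwise
    for h in 1 2 3 4 5; do
      java -Xmx${h}g "-Xlog:gc*:file=gc-${h}g.log:time,uptime" \
           -jar renaissance.jar some-benchmark
    done
    # Then compare GC time, CPU utilization, and process memory across the runs.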
Which just basically means that, as you said earlier, as we said earlier, a runtime, whether it's a JVM or any other runtime,
is basically optimized by default for a certain scenario.
And as you said, while these developers are doing a great job
to, I think, provide better runtimes out of the box for many use cases,
it's impossible for the runtime developers to provide something that always works in an optimal way for your application,
for your load behavior.
But I think this is what you just added.
What is your ultimate performance goal?
And the performance goal could be throughput, latency, CPU consumption, memory, whatever it is, right? So that means that kind of a goal-driven optimization
is something that is great,
but a JVM developer,
or there are many JVM developers out there
that focus on performance,
don't really know what is your particular goal,
and therefore they cannot just deliver
an out-of-the-box experience that works for you.
That makes a lot of sense.
Yeah, that's a great summary.
Also, you mentioned it, of course,
we don't want to pick on the JVM developers.
Actually, I think the JVM is actually the best runtime
when it comes to performance observability out there,
because you can really inspect everything that is happening
and you can get this kind of insights.
I'm sure that those kind of, in a way, aspects also exist with other runtimes.
But what is interesting, also if you think about the first myth, is that
the JVM can only think about its own internal metrics.
Not only does it not know your goal, but it actually cannot see, for example, what is happening on the broader, I don't know, container-level resources, or
on the actual application performance, which might mean, I don't know, the 99th percentile of your
payments.
Of course, there are layers in the stack.
Exactly.
They can only reason about their own internal metrics, and actually auto-adjust in some cases
with respect to their own metrics,
but they can't really understand
and can't really see what your goal is
with respect to other business-level,
application-level metrics
that are actually far higher in the stack.
Yeah.
Great.
So that makes it three myths already.
What's the last one that we want to talk about today?
Okay. The last one is actually what we call cargo-culting JVM tuning,
or actually copying and pasting JVM settings across different applications.
That is something that is funny,
but it's something that, doing lots of those kinds of exercises for our customers,
we actually see: due to these complexities of tuning the JVM,
developers tend to stick to a configuration that is apparently good for a given application.
And then, basically, it suddenly becomes their own best practice to configure the JVM for every kind of application.
So this, of course, is not really good, because as we have said before,
every application is different. By the way, you may have different JVM versions also, and we
also have different execution environments; we may have containers. We may have different goals, different workloads.
So actually, this practice really has to be avoided.
But we can understand it, because actually,
due to the complexity of the JVM,
people might be tempted to do that.
For this particular myth, in a way,
these are actually more misconceptions that we have found.
Two particular ones that I wanted to mention here are about garbage collection tuning.
Again, we're talking about the GC threads from the first myth and the fact that you may have concurrent threads and you may have parallel threads.
The good news is that all those kinds of threads
are also tunable.
And the funny thing
is that we have noticed that people have come up
with various theories about how to best set
the number of threads.
Think, for example, of container environments
where you have Kubernetes resource requests
and limits specified for your container:
how many threads do you want to allocate?
So here we see, again, different theories.
And sometimes we see people, in a way,
coming up with funny ways to specify threads.
For example, let's just put half of my container's CPUs
into the GC threads, because in this way,
I will leave more, in a way, CPU cycles to the application.
Again, in theory, it might sound like a good idea,
but in the end, the JVM is complex.
And by doing that, you might actually
limit, for example, the number of threads
also for the parallel phases, which is the pause time.
So by doing this kind of manual tuning, in a way,
you might actually end up increasing your suspension
time, which is pretty much the opposite
of your original goal.
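As a concrete illustration of the heuristic being described, and why it can backfire: suppose a container is limited to 4 CPUs and someone "reserves" half of them for the application by capping the GC threads. A hedged sketch (the numbers are invented for illustration, not a recommendation):

    # Container limited to 4 CPUs (the JVM also detects this via container awareness)
    # "Give the GC only half the CPUs" heuristic:
    java -XX:ActiveProcessorCount=4 \
         -XX:ConcGCThreads=1 \
         -XX:ParallelGCThreads=2 \
         -jar my-app.jar
    # Pitfall: ParallelGCThreads also caps the threads used during the stop-the-world
    # phases, so this can lengthen each pause and increase total suspension time.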
I mean, in the end, all these formulas,
they come down to, at least for me, something very obvious.
And the obvious thing is that you need to tune your app
on your specific runtime in your specific environment
for your specific goals and throughput.
Because that's the only way you can truly say
the application runs within the expected,
let's say, classical performance behavior,
which means throughput and response time, but also within, let's say, maybe your resource
budgets that you have, whether it's memory or CPU, but it's always very specific.
And as also applications change from build to build, from version to version, you need
to, I guess, continuously do this type of optimization.
And this is why, and now, hooray, obviously, right?
You are working for Akamas, and you don't just talk about these things
or do this manually on a day-to-day basis,
but you have built a whole product around it.
And this is just fascinating.
And the product that can be fully automated
and that is then using insights
from different observability platforms
like Dynatrace, for instance,
to really then make decisions.
So this is really, really, really phenomenal.
Yeah, thanks for that.
It was great.
Actually, we have come from this kind of background
in tuning the JVMs, in tuning...
By the way, it's not just JVM problems.
It's something that is more widespread.
Pretty much any technology has these kinds of issues.
So also other runtimes, databases, Kubernetes containers,
or the OS, the cloud itself.
So basically, this actually happened pretty much
across the board in the stack of modern applications.
So our approach is actually using
artificial intelligence in a smarter way,
also for automating the performance tuning
of all those parameters in a way that on one side,
it's automated, so you don't need to be a JVM expert,
as this may actually turn out to be counterintuitive,
whether you are just following the rules or actually trying out
some particular parameters by yourself.
So we have already included in our product,
for example, the knowledge of what
are the most important JVM parameters,
so that you don't need to know them.
And the second thing, which is very core, which is very important,
is that, of course, we use artificial intelligence
to actually understand what are the best settings
and actually explore this very vast optimization space
for you in a completely automated way.
So performance engineers are actually loving the approach.
They like the fact that it allows you to reduce the time it takes to basically do the tuning. But more
importantly, if you think about the myths, what is actually very, very true for a performance
engineer is that if you cannot rely anymore on those rules, what are you actually going to do?
Because if you just see that the garbage collection time is high, you don't know
anymore what you are really going to do without looking at the actual application performance,
the end-to-end application performance. So the thing is that Akamas is goal driven, meaning that
you can say, actually, let's optimize my goal, which might be my 99th percentile. Let's
forget about all the JVM metrics.
Of course, you can still analyze them because it's important to understand what is happening.
But the key thing is that AI is actually driven by the goal that you can specify,
which may be application metrics.
And that is one of the biggest values.
Yeah.
Hey, Stefano, thank you so much for walking us through it. I know there's
a lot of material on this topic already out there. I think you already gave some presentations
at other conferences, and by the time this show airs, I'm sure there will be even more content. People
should look at your website akamas.io, and again explore.akamas.io, great tutorials
to walk through, and videos.
And obviously,
Akamas Labs on Twitter.
Anything else
that people should be aware of?
Any other ways
to get in touch with you?
Yeah, I guess one of the major pieces of news
this year
was the actual launch
of the Akamas free trial.
We had a lot of market requests
to actually try out the product in a kind of self-service fashion. So actually we launched this new go-to-market
model at the end of May. So it's pretty new. So you now have the possibility to actually try out
what we are talking about today, because with the free trial, you'll get the possibility to actually download Akamas,
the full product, and even install it on your laptop.
And this is what we call Akamas in a Box.
We got inspired by Keptn,
the great work that you did there.
And the interesting thing is that we also included
into this box, not only the product,
but also a sample Java application.
Again, the Renaissance benchmark that I was mentioning before. So that in a couple
of steps, you can actually use Akamas
AI and optimize the JVM performance and experience this new
performance engineering way to do optimization
very quickly. So you can actually see what we
are talking about today
and analyze the JVM performance
and get a sense of how the new approach
can bring you the benefit as performance engineers.
Perfect.
So with this, again, recap,
if you're still listening and you're a performance geek,
if you want to optimize performance,
you've learned four performance myths.
First of all, I really like that not always does tuning the GC lead to faster applications.
The second one was the trade-off, the triangle,
the golden triangle between throughput, latency, and footprint.
Sometimes you don't have to only choose two.
Sometimes you can even choose three, as you have proved.
While the JVM in 2021 has been amazing, a lot of improvements went in,
still, you need to optimize for your own application.
It cannot just figure out everything itself,
especially if you have certain performance goals.
And then also, copy-pasting settings from one application to other applications: while it might be a good start, every environment is also different, as we have learned, whether you run on bare metal or in the Kubernetes world on a virtual provider
somewhere in the cloud, everything feels different.
And with Akamas, there's a great product now also available in a box for every developer
to try out themselves.
So check this out.
Stefano, again, congratulations to Italy
on winning the Euro Cup. I'm pretty sure you will do
amazingly well also next
year at the World Cup,
and I'm looking forward to
finally, after so many months,
seeing you at some point at some
event in person.
Yeah, thanks a lot, Andy, for the nice words. By the way,
it was a great match with Austria. It was very tough. I really enjoyed it. So congratulations to you also.
And as mentioned, Brian had a workshop to attend, but he will be back the next time. But still, thanks for making
this show possible, because he's doing all the
post-production even though he's not
here with us today. With that,
all the best. Bye-bye.
Ciao.