PurePerformance - 046 Java 9! A technical deep dive
Episode Date: October 9, 2017
Project Jigsaw, G1 as default garbage collector, ahead-of-time compilation, the Stack Walking API, and many more changes that you should be aware of when upgrading to Java 9. Philipp Lengauer, whom we met at devone.at, gives us all the answers and a technical deep dive into all these JVM changes. Especially for performance engineers, an episode worth listening to. If you want to learn more, check out Philipp's presentation at devone: https://youtu.be/Nsg_rhlf4_U?list=PLfi6VUNSzNYmUyeZ2BTM_WmZjgi0FOqRl
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance.
We've got a rowdy crowd today, Andy. How are you today?
I'm very good, but I'm still sad and shocked after the event from last Sunday night.
I know it's not a spoiler alert because until this airs, it's going to be later anyway.
But they have a dragon now.
Right.
And to point out there, I did want to bring up your theory because we'll either find out in the next episode, in which case, again, this is going to be airing way later.
Or it'll be maybe next season we'll find out, which in this case, it won't be a spoiler.
But since you brought it up, let's put it out there.
The theory about Cersei's pregnancy. I don't think she's pregnant. I think she's just playing this card to make sure that Jaime is really supporting her and not going with the brother, supporting him with his ideas. And I think telling him that she's pregnant again from him just ensures the allegiance. But I think she's faking it.
Yes. And our guest, who we'll introduce in a moment, who we just found out comes from your town, right? And you had some of his relatives as instructors in your school.
It was the principal of my school, yeah.
Right, but he pointed out too in this conversation that the witch's prophecy had only talked about three children, so that kind of helps solidify this idea. Anyway, who knows when we'll find out about this. This might be old news, it might not be. But anyway, Andy, who do we have on the show today?
So today we have
Philipp, who besides being a fan of Game of Thrones,
is also extremely well-versed in Java.
So Philipp is actually a colleague of ours, working out of our Linz lab.
But really, to be honest with you, I really got to know Philipp
and met him for real the first time at DevOne,
which is a conference that we hosted this year in Linz.
It was a DevOps conference from developers for developers.
And Philipp gave a talk on what's happening in the Java world, especially on Java 9.
And I thought it would be great to have him on the show because he sits at the source
when it comes to what's happening in the Java world.
And so without further ado, Philipp, welcome to the show.
Please tell the audience a little bit more about yourself, and then let's jump right into the topic of Java 9 and what people need to know about it, especially around performance and monitoring.
Hello from my side also. I recently joined the Dynatrace team, a few months ago. Before that, I've been working with the Dynatrace collaboration at the University of Linz, so I've been in touch with Dynatrace for a long time, actually, but I've just recently started working there. Before that, I completed my PhD in performance monitoring, especially in memory performance monitoring, and I've been doing a lot of teaching in the area of performance monitoring, compiler construction, and just Java programming in general.
Can you just fill us in on the collaboration between the university and Dynatrace? There's a lab at the university that is focusing on Java compilers. What's going on there?
Yes, there is a lab. It's called a Christian Doppler Laboratory, named after the famous physicist, which is partly funded by Dynatrace. And we have done a lot of GC and memory optimization and performance analysis in that lab, and general data analysis in the area of both memory and CPU times, and locking analysis.
And, well, my area of expertise was memory in that lab.
You know, I had no idea that existed.
See?
That's awesome.
Sorry for that, Philipp. I like that little side punch here on the memory. No, I think it's cool, right?
And so now, Java 9. You presented on that, and if people want to watch your presentation, they can go to devone.at. That is the conference page, and there are all the presentations listed, including, for your session, also the video. So you can check out the video of the full presentation.
But from the highlights perspective, your highlights:
when you talk with people about Java 9, what is coming,
what is finally coming? Because I remember when you did the presentation
in early June, I believe it was, not everything was really clear from the timeline of Java 9. But now we have the details.
So fill us in.
What's happening and what's important to know?
Well, whenever somebody asks me about Java 9 or somebody mentions Java 9, it's always about Jigsaw, which is the new module system in Java 9. So in Java 9, the developer can now split
or package his classes in modules.
And for every module, the developer can specify
what packages the developer wants to export.
So what packages are visible to users of that module,
what other modules are required,
even in what versions they are required,
and also all other packages within that module
that are not explicitly exported
are automatically hidden and cannot be used by other modules.
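Such a module declaration lives in a module-info.java file at the root of the module's sources. As a sketch (the module and package names here are invented for illustration, not from the episode):

```java
// module-info.java for a hypothetical module
module com.example.orders {
    // modules this module depends on
    requires java.sql;

    // only this package is visible to users of the module;
    // every other package inside the module stays hidden
    exports com.example.orders.api;
}
```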
So there's also a kind of a visibility thing going on there.
Cool. And I remember when you talked about this at DevOne, there was still a lot of limbo around Project Jigsaw, right?
That's been solved. It changed a lot in the last few weeks or months, probably because Oracle wanted to use this very aggressively.
So, for example, per default,
all the internal packages from the JDK were hidden.
Of course, the JDK has been modularized also.
So, for example, there's the java.base module
containing all the core Java classes,
like everything that is in java.lang and java.util and so on.
And there are also other modules, like java.rmi,
which then contains all the RMI stuff,
and java.sql, which contains all the SQL stuff, and so on.
And there are a lot of packages that have always been in the JDK.
And since they have been there, they have been used, but they have never really been part of the spec. And these internal packages, of course, are now internal packages in these modules and cannot be accessed anymore.
So every library or application that used those internal packages
was broken at that point,
because the classes couldn't be resolved anymore.
And they now kind of went back a little and said, well, they can be used with Java 9.
So even if you hide a package, it can still be resolved.
But you get a big fat warning.
It's even called a loud warning in the VM.
So no matter what you do, you cannot turn this off.
When you access such a class, you get the warning to standard error.
And you cannot turn this off.
And this warning is pretty aggressive.
It tells you that you should immediately contact the vendor of this certain class, because it uses an internal API and it should not.
And so they made this check a little weaker,
and the plan is that with Java 10, this check is reintroduced,
and then you cannot access it.
So people have a little more time to migrate to the module system and to get their stuff up and running again without those internal classes.
And so what does this mean from a developer perspective? I can modularize my code, I can say what I want to expose and what not to expose. Does this also mean class loading is changing, obviously, because it has to be enforced somewhere? And does this have a performance implication?
From the performance perspective, it pretty much stays the same, because class loading is not changed that much at runtime. At runtime, the JDK doesn't think of a module as a single entity. It's more like every class is still loaded by a specific class loader, but additionally, Java 9 now associates a module with each class. And when some class accesses another class, an additional module check is performed: whether they are from the same module, or whether the using module has access to the used module. So this is only a few additional checks during class resolution, which are not really that critical in terms of performance. But the class loading has changed a little at the JDK level. For example, up to Java 8, there have been three out-of-the-box class loaders, which are the bootstrap class loader, the extension class loader, and the system class loader.
And basically, the Bootstrap class loader loaded all the JDK.
Then the extension class loader did some magic nobody really understands.
And then there was the system class loader that loaded your application, or starts loading your application. And now there's this new concept of a platform class loader. This is something actually new that has changed since my talk. The platform class loader now guarantees to have access to all JDK classes. But whether they are actually loaded
by the platform class loader
or by some other class loader is not defined.
So there might still be some JDK modules
that are loaded by the bootstrap class loader.
Some might be loaded by the platform class loader
or some might be loaded by some class loader in between.
But just the platform class loader guarantees you
that you can resolve every JDK class
via this class loader.
And again, just to my question: class loaders have always been a challenge, on the one side for developers, but also for tool vendors, especially if you do bytecode instrumentation.
Having multiple class loaders was sometimes a challenge to make sure that we're instrumenting all the right things and that we actually get hooked into the class. So is there anything that has changed for tool vendors now, or should that be transparent?
Let's put it like this: if you have implemented your application according to the spec and have not done some magic in class loading, like circumventing the parent delegation or something, like some popular APIs do, then you should not have a problem.
But of course, if you rely on specific classes
to be loaded by a specific class loader, you might be running into problems.
For example, we at Dynatrace now have the problem, supporting Java 9, that some JDK modules are loaded by the bootstrap class loader and some are loaded by the platform class loader. So not everything that is loaded by the bootstrap class loader can access the entire JDK, because the rest might not be loaded by the bootstrap class loader.
Okay.
Yeah. So in the Dynatrace case, for example, we inject our agent into the bootstrap class loader, or we have until now, and from there we can't access the entire JDK anymore. For example, SQL and RMI are loaded by the platform class loader, and we would very much like to access those, of course. So we have to do some magic there to still get all the things we want.
Yeah, cool. And hopefully, as you said, you're doing some magic there or something different. But that's going to be a challenge for other tool vendors as well. Cool. All right.
So you talked about Jigsaw, I understand, and you talked about the class loading. Anything on these topics that we missed, anything else we should mention?
Well, the big things we already mentioned. On the one hand, the class loading is a little different, and on the other hand, some packages are now hidden. You can still access them with Java 9, but at the latest with Java 10, you will have significant problems whenever you use these classes, or even use libraries that use those classes, of course, because they will break.
And so that means, for people that monitor these applications and want to make sure that they are according to standard in Java 9: watch out for the error logs, because there will be these loud warnings, as you call them. That's something to watch out for.
And what you can also do is enable the restricted mode in Java 9 already. It's just a regular command-line flag where you can enforce those checks to really throw exceptions, and then you will see very fast whether something breaks.
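As a hedged sketch of what that looks like in practice (the class name here is invented; the probe reflects into a JDK-internal field, which is exactly what the module system flags): on Java 9 the default setting prints the loud warning to stderr, while the stricter command-line mode, e.g. --illegal-access=deny, turns such accesses into exceptions so you find breakages early.

```java
import java.lang.reflect.Field;
import java.lang.reflect.InaccessibleObjectException;

public class IllegalAccessProbe {
    // Tries the classic illegal reflective access: making a private
    // field of a JDK-internal class accessible.
    static String probe() {
        try {
            Class<?> unsafe = Class.forName("sun.misc.Unsafe");
            Field f = unsafe.getDeclaredField("theUnsafe");
            f.setAccessible(true); // the warning (or exception) fires here
            return "permitted";    // default behavior, plus a stderr warning
        } catch (InaccessibleObjectException e) {
            return "denied";       // behavior under the strict setting
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return "unavailable";  // JDK without this internal class
        }
    }

    public static void main(String[] args) {
        // Run with: java --illegal-access=deny IllegalAccessProbe
        // to see the strict behavior.
        System.out.println("illegal reflective access: " + probe());
    }
}
```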
Cool. And I assume this option is well documented, so people that want to check it out can look it up?
Yes, exactly, they just go to the Java 9 docs. And you can even see it by just calling -help on the command-line tool.
Yeah, cool. Other topics: I'm looking at your presentation right now, and there are several things in it that I liked a lot. I'm just picking one now, which is the change that G1 is the default garbage collector.
Yes. And memory, too, because memory is your field of expertise.
So what do we need to know there?
Well, G1 has been introduced, I think, with Java 7 update something. And until now, up to JDK 8, the parallel old GC was the default collector.
And now it switched to G1.
So you still have all garbage collectors available.
But if you don't specify one explicitly, the default one will change or is changed with Java 9.
And G1 has one big difference compared to the others.
It's partly concurrent, meaning it tries to collect while your application is still active.
Or at least it does a lot of work while the application is running. And also, it can collect very large heaps efficiently. So this is one step towards making Java more feasible for really big heaps. We are talking 100 gigabytes and above, because other garbage collectors just can't be bothered with this kind of heap; they just take forever collecting it. And the G1 can kind of guess what region contains a lot of garbage and then just collect that region. So the runtime of the garbage collector is not dependent in any way on the actual heap size,
which is the case for all other garbage collectors.
Now, the question I had with the garbage collection: I read a teeny, teeny little bit about this before we started, and someone was alluding to the idea that G1 might be more beneficial in the larger heaps.
Does that mean G1 is not,
like if you're running microservices
and you have a very small heap,
is G1 still going to give you benefit
over the old parallel?
Or is that the kind of time
when developers should say to themselves,
hey, I should switch back,
manually switch back to parallel?
Or do you think G1 is going to have performance gains for any size heap?
Well, it's difficult to predict without having the actual allocation
and memory usage profile of a specific application.
But I would guess that with very small heaps,
it would not make much difference, because the G1, really, for small heaps, is just doing some things for nothing. But it should not have any negative impact, at least. It's just like using the old collector, performance-wise.
And just let me ask you one more question.
Now, when we are, you know,
we talked a lot about memory diagnostics in the past
and we have a lot of material online
about how to do memory diagnostics in Java,
which metrics to look at.
What changes from a monitoring perspective
when we look at G1 metrics?
What are the things to watch out for
to understand
whether you actually have a memory issue or not? Are there any new metrics, any new measures,
anything else to watch for?
Well, I don't know whether it's really a new metric, but the G1 has a completely different paradigm of working and dealing with garbage. So, for example, the old garbage collector, the parallel old GC,
which is still very well supported in Java 9, if you need it,
just waits until your heap is full,
and then it tries to collect as efficient as possible.
For the G1, you can specify a pause limit,
meaning you can, for example, specify my application
shall not pause for more than one second at a time.
And of course, it's kind of a soft constraint,
so it cannot absolutely guarantee it
because Java does not run in a real-time environment.
But it is usually very good at achieving this goal.
So it's not so much a difference in what metrics to look at, but probably more in how to tune it and how to really configure it.
Because this can make a lot of difference.
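As an illustration of the tuning side (the class name is made up; the flags are the standard HotSpot ones, but treat the exact values as examples): the pause goal and the collector choice are plain command-line flags, and you can check at runtime which collector the VM actually picked.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcCheck {
    // Names of the collectors the running VM actually uses; on Java 9
    // with default flags these are the G1 collectors, e.g.
    // "G1 Young Generation" and "G1 Old Generation".
    static List<String> collectorNames() {
        List<String> names = new ArrayList<>();
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            names.add(gc.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        // The soft pause goal is a regular flag, e.g.:
        //   java -XX:MaxGCPauseMillis=200 GcCheck
        // and switching back to the old default collector is:
        //   java -XX:+UseParallelGC GcCheck
        collectorNames().forEach(System.out::println);
    }
}
```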
And there is another thing you might want to look out for, because the parallel old GC and all other GCs have two types of collections.
They have minor collections and major collections, which is basically just a bad word for partial and full GCs.
Minor collection usually means a part of the heap is collected and minor collections are usually very fast.
But they can only collect a part of the heap.
And major collection is usually very slow
and means the entire heap is collected.
And what you observe with the parallel GCs,
you have a lot of minor GCs,
and then at some point you have one major GC.
And then you have again a lot of minor GCs
until you have one major again.
The G1 usually only has minor GCs.
Which means, even if it collects the old generation, it always just collects a part of the heap, never the entire heap, which makes it very efficient for big heaps. Because when you run out of memory, it selects a heap region which it estimates contains a lot of garbage, and then it collects that region, no matter whether it's old space or new space
or survivor spaces or whatever.
And G1 also has a major collection,
but this is an absolute emergency for the G1 GC.
So it usually only needs to do major collections
when everything is running completely out of memory
or there's something really weird going on in your heap.
So this might be something to look out for
if you're using the G1
and you see major collections in your log,
then something really bad is going on.
Hey, then I have a question on this.
So if collections happen, whether minor or major, does the JVM block currently executing threads in both cases? Or is this just true for major GCs?
For the parallel old GC, both minor and major are what we call stop-the-world garbage collections, meaning all application threads are suspended, then the garbage collection is executed, and then all application threads are resumed again.
For the G1, the major is also stop-the-world, because it's really, really an emergency action. So it really needs the application to stop doing whatever it's doing while it tries to get itself together again.
For minor GCs, part of the work is done concurrently, meaning the application is not stopped.
So this is specifically the marking phase.
So the way a G1 collection usually works is: the G1 will decide to do a garbage collection, and then the VM will be halted.
So all application threads will be stopped
for a very short time
just to gather some statistics
and start the garbage collection.
And then the application threads are immediately resumed
and then objects are marked or live objects are marked
and a lot of the garbage collection work is actually done.
And then at the end, it again needs a small pause
just to put it all together and finish the garbage collection.
So instead of one big pause, you have two very small pauses.
Small ones.
And in between, the application might be a little slower
because there are garbage collections in the background
trying to do their work.
The reason why I'm asking this question is because when we do memory analytics, and especially trying to figure out the impact of garbage collections on the actual running applications and transactions that are executed, in the Dynatrace world we often ask people to go and look for the suspension time, which we report per transaction.
So how long was this particular transaction suspended because of, for instance, a GC?
And the way you tell it to me now is, we should see these numbers go down, because the stop-the-world is less common. Unless you have this, you know, end-of-the-world case. Then winter is not coming, winter is here.
Yeah, the suspension times will probably go down, but we will probably have to think about it a little differently, because we still have some impact from the garbage collector when it runs in the background, because there are threads consuming CPU time.
Right, I was going to go there.
That's going to make it a little more difficult, right?
Because at least in the old way, at least at Dynatrace,
we could see that that CPU time was actually during the suspension,
so we knew it was garbage collection,
whereas now, I guess, is it a case where we'd have to look for a suspension,
then a spike in CPU, and then another small suspension
for those begin and end, if we wanted to try to isolate
whether or not that spike in CPU is related to memory?
I don't know whether you will see spikes, because this depends on what the application is doing.
Right, right. Or a little uptick, maybe, I guess.
But the garbage collection logs will already show this very beautifully, because you'll then see that the first pause is called the initial marking and the second pause is called the final marking, and you will see those with timestamps in the log.
And we'll probably have to do some adjustments in Dynatrace to properly visualize that.
So for everybody who's finally gotten used to analyzing memory,
winter is coming.
Yeah, exactly.
And, Philipp, I assume there's already some material out there. I think you mentioned earlier, if you want to read up on G1 optimization, G1 strategies, analyzing G1 activity, there's material out there that we can point people to?
Yeah, there's a lot. You just need to Google it and you'll find a lot of web pages that show different optimization strategies and try to explain how to do it. Also very easy to find is the original paper that outlines the actual algorithm of the G1. And it's not so difficult to understand what it basically does. It's pretty easy. But I know the implementation of the G1, and that's a completely different story, because there's a lot of very weird optimization and C++ magic going on in there to really make it perform.
So thanks for that. That was a great, you know, segue, a great overview of G1.
Other parts in your presentation that you covered, I think there's two interesting parts for me that I see here.
And I want to go with the first one, which is the stack walking API.
The reason why this is kind of, you know, interesting for me is because I know we do a lot of stack analysis when we are capturing our auto sensor data.
We're trying to get information about what's currently on the stack.
So what is the stack walking API?
Is this just a new API that was introduced or it's just an extension to what we had before?
And what is this all about?
Stack walking is a difficult operation in general, in Java and in other technologies, because whenever you take a stack, it's difficult to know at what position the actual application is. And you need to resolve the method properly, you need to resolve the calling stack frames, and so on.
So it's not that easy to decode a stack trace.
And this is why the VM usually uses safe points to take stack traces.
And a safe point is basically just a location in the code,
or in the machine code, that is defined by the VM automatically.
And it just means the VM remembers and stores all the data and all the information it has
about a specific point in the machine code.
So whenever threads are at a safe point, it can properly decode a stack, because it knows
how big the stack frame is, and what values are in what registers, and so on.
And until now, taking stack traces meant forcing all threads into a safe point.
So basically, when you wanted to take stack traces of all application threads,
or even of one application thread,
it meant that the entire VM had to run into a safe point,
meaning all application threads had to reach a safe point,
and then you take the thread you want
and you walk its stack while it's suspended, basically.
And there have already been a few optimizations out there:
when you request the stack of your own thread,
meaning you request your own stack,
you don't have to be at a safe point,
because the other threads really don't matter, and the thread already knows its own state, so it can work more easily.
And this has been possible via the Java Virtual Machine Tool Interface for quite some time. On the managed level, you always had to go via exceptions. So, for example, you just create an exception object to get the stack trace out of it, if you wanted to know your own stack trace.
And now there's a very pretty API that actually lets you walk the stack.
So it's called the StackWalker.
And using delegates, you can specify how many stack frames you want,
whether you want to ignore some internal stack frames
that are generated from reflection or via some generated code, and so on.
And you can walk your own stack this way.
So when you want to look at who is calling me,
or make some security checks based on callers,
or just print for debugging where you currently are,
this new API is very efficient.
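A minimal sketch of that API (the class and method names below, other than StackWalker itself, are invented for illustration): because you consume the frames as a stream, only the frames you actually touch need to be decoded.

```java
public class StackWalkerDemo {
    // Returns the name of the method that called caller(),
    // decoding only the frames the stream actually consumes.
    static String caller() {
        return StackWalker.getInstance()
                .walk(frames -> frames
                        .skip(1) // skip the frame for caller() itself
                        .findFirst()
                        .map(StackWalker.StackFrame::getMethodName)
                        .orElse("unknown"));
    }

    public static void main(String[] args) {
        // Prints "main": the immediate caller of caller().
        System.out.println(caller());
    }
}
```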
And the second thing that is very expensive when doing stack walks
is decoding the stack. You not only need to be at the safe point,
you also need to decode everything in the stack frame,
meaning finding out what method that actually is,
finding the parent, and so on.
And using this stack walking API,
you can now specify beforehand how many frames should be decoded
and what their properties should be.
So when you just want to know the caller, you only need to decode the caller. With exceptions, when you go the way through exceptions, you always had to decode the entire stack. Which, if you have 100 stack frames on there, means you have to decode 100 stack frames
just to access the first element.
Yeah, and thanks for bringing that up. What you just reminded me of: we did a blog series on the overhead of exceptions, and we saw a lot of examples from our customers that were using logging frameworks where, either in debug log mode or even in general log modes, they were basically logging out the current exception stack, or the current stack. They were basically just doing what you said: creating an exception object. And this is not a problem if you are a developer on your local machine, but it becomes a problem if this actually ends up in a high-load environment.
And I remember I have a couple blog posts where 50% of the CPU was actually spent just creating these exception objects. And not only that, because in the end, somebody needs to garbage collect all of that information as well.
So there's a lot of overhead. But this seems like a great way to make this much more efficient.
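The pattern described here, signalling an ordinary condition by constructing exceptions, can be sketched like this (the class and method names are made up, not the actual benchmark or customer code): each throw allocates an exception plus its captured stack trace, where a plain sentinel value would allocate nothing.

```java
import java.util.NoSuchElementException;

public class EofByException {
    // Toy input source standing in for the custom stream.
    static class TokenSource {
        private final int[] data;
        private int pos = 0;
        TokenSource(int[] data) { this.data = data; }

        // Anti-pattern: throwing to signal end-of-input. Every call
        // past the end allocates an exception and its stack trace.
        int nextOrThrow() {
            if (pos >= data.length) throw new NoSuchElementException();
            return data[pos++];
        }

        // Cheaper alternative: a sentinel value (here -1) for EOF.
        int nextOrSentinel() {
            return pos >= data.length ? -1 : data[pos++];
        }
    }

    public static void main(String[] args) {
        TokenSource s = new TokenSource(new int[]{1, 2, 3});
        int sum = 0;
        for (int v = s.nextOrSentinel(); v != -1; v = s.nextOrSentinel()) {
            sum += v; // no exceptions, no stack traces allocated
        }
        System.out.println(sum); // 6
    }
}
```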
So, the Stack Walking API, which has been introduced with Java 9. It's cool.
We have even observed similar behavior in well-known Java benchmarks. We found a benchmark that actually tries to read some files, and they have implemented their own file stream or input stream API. And instead of returning a dedicated end-of-file token when the end of the file was reached, they threw an end-of-file exception, and just kind of read in an endless loop until the end-of-file exception was thrown. And this actually resulted in more than half of all the objects being allocated just being exceptions and stack trace information.
Cool. So, the Stack Walking API. Anything else to that, or did we miss something?
I think the Stack Walking API, we kind of covered that. It's not very difficult, but it's, I think, a huge improvement if you want to implement those things.
This will be a shout-out also for logging framework providers, or however else you use stack traces right now.
So look at this.
The other area that I wanted to touch upon: you talk a lot about compilers and how Java-level compilation has changed.
And I assume that also has maybe an impact on compilation times,
but also maybe performance?
Maybe there's some improvements on the code
that gets generated in terms of performance?
There are two big changes, or two things that are introduced in Java 9, that I'm not entirely sure how much they will be used, but they are definitely interesting.
The first one is the JVMCI, which stands for Java Virtual Machine Compiler Interface, meaning you can now write your own JIT compiler, not an ahead-of-time compiler but a JIT compiler, in Java, running inside your own VM, and also in turn JIT-ing itself.
So until now in HotSpot, you started out interpreting code,
and then at some point the client compiler kicked in,
generating some kind of good machine code,
and then at some point the server compiler kicked in
and generated very highly optimized code.
And now you can also have an additional tier in there,
which can just be your own compiler.
And there are compilers out there, for example Graal, which is a research project, also at the University of Linz, that is a JIT compiler completely implemented in Java, running inside the VM, and also JIT-ing itself, which is pretty interesting.
So we might run into situations where companies have their own JIT compilers for specific situations.
I don't think this feature will be used very widely. I kind of think this is more like an internal Oracle feature, because they really are tired of maintaining all this complicated C++ code in which those JITs are currently implemented. And it's very difficult to try new things, because a JIT compiler is just such a big thing. And supporting an interface where you can implement the JIT compiler in Java, and also maintain it in Java, makes it a lot easier, because Java, of course, is a lot easier to develop in, in terms of bugs and things you need to take care of, compared to C++. And so this is more, I think, a feature mainly for researchers and Oracle itself, but it's still a very interesting feature, I think.
Yeah, definitely. And you mentioned Graal, so G-R-A-A-L, one of the projects that you've been working on? Cool.
It's just a project of the University of Linz. I have not been working on that one, but it's still very interesting.
Cool. And now from there: you talked about this aspect of what has changed in the compiler world, but then there's another thing, if I look at your slides, which is the ahead-of-time, I guess.
Exactly.
So the Java VM now supports ahead-of-time compilation. There is already an ahead-of-time compiler, which is called javac, right? You have a Java source file, or maybe a Scala source file or whatever, and you generate bytecode via javac. And then at some point, the VM takes that bytecode when it needs to execute a class, and then at some point might translate that bytecode to machine code.
And now you can specify a pre-compiled native library, so that the JIT doesn't have to compile it at runtime, and the VM doesn't have to start interpreting classes and interpreting bytecode. It can immediately take the pre-compiled library.
So you can just take a class file and pre-compile it to a shared object or DLL, whatever operating system you're running on, and then provide that library to the VM. And then it will take this library as machine code and execute it as such.
There are a few restrictions.
For example, you need to already specify
all command line flags the VM will run in
because, for example, what garbage collector you use
has a significant impact on the generated machine code.
So you already have to specify what garbage collector
this machine code should be generated for.
And if the supplied library does not match,
the VM can still fall back to just interpreting the class file
and then JITting it itself.
But for some specifically small applications
that don't run very often, or not very long at least,
where code or hot code might not be JITted,
this might be an interesting way to speed up.
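The workflow described here maps to JDK 9's experimental `jaotc` tool (JEP 295, Linux x64 only at release). A minimal sketch, with a hypothetical `HelloWorld` class standing in for real application code:

```shell
# 1. Compile source to bytecode as usual:
javac HelloWorld.java

# 2. Ahead-of-time compile the class file into a shared library:
jaotc --output libHelloWorld.so HelloWorld.class

# 3. Tell the VM to use the pre-compiled library. As discussed,
#    GC and other flags must match what the library was compiled
#    for, otherwise the VM falls back to interpreting and JITting:
java -XX:AOTLibrary=./libHelloWorld.so HelloWorld
```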
I remember this from the .NET world.
I think the .NET runtime had this feature where you could actually pre-compile your .NET intermediate code into machine code and then deploy it on that machine.
And I think the main argument was definitely speeding up launch time of applications.
Yeah, yeah. But does this also mean, if you want to do monitoring of these apps, and obviously we get our monitoring agent
into the JVM, which obviously modifies bytecode, would it work? With monitoring tools that
inject agents into an application, what would happen in this case? Well, it would
definitely work. The VM would just throw away the machine code, because at some point the VM
has to detect that. Like, what monitoring tools do is they take the class file
and instrument it, right? So they change the bytecode, and at that point the
bytecode doesn't match the pre-compiled library anymore. So what the VM would do is just throw
away the library. The VM has a very graceful fallback where, when it detects for some
reason, whatever that reason may be, that the library just does not work, like it's compiled for a wrong option, or this class is slightly different or it has
changed somewhat, it just throws the library away and starts interpreting and JITting again.
So in this case, the monitoring tools would just negate the
performance enhancements that probably come from that library. What would be more
interesting is maybe even pre-compiling sensors into that code. Yeah, that's what I thought too. That would be cool.
Yeah, but I don't know whether this is possible. It would definitely be very interesting,
but it sounds like some big magic going on there.
Cool.
Hey, I know we've already kind of crossed our – we try to keep it to 30 minutes, but it's very interesting what we discussed here.
Is there anything else we should mention?
I mean, we talked about a lot of different things.
Anything else in Java 9?
Well, there are a lot of just small language things that are probably interesting for people that really program Java.
I would suggest we just put on the slides online probably with this podcast, and then people can look through a few of them.
There are quite minor things, but a few nice things. Also just keep in mind that there has not been a release
candidate when this presentation was created, and Java 9 is still not released, right? So we have
28 days to go, right? September 21st, right? Yes. So I do want to point out, September 21st is the day that The Hobbit was originally published.
And it's also, it's Bill Murray's birthday.
And it is also Independence Day for Malta, which I supposedly have ancestors from.
So it is an auspicious day for Java, but we'll see if they make the date, right?
Is it pretty set, or do you think there might be – is there a chance of delay at this point or are they –
I don't think so, because the original... it has been postponed because the community voted against it, due to the Jigsaw version they had back then.
But the second vote was unanimous.
It was a unanimous okay. So there's
no veto
from the community.
And
I'm working a lot with the release candidate
to get Dynatrace up and running for Java 9.
So
it looks stable. So
I don't see any reason why it shouldn't be released.
Everyone using Java 9 can always think of Bill Murray then and The Hobbit.
And the Independence Day at Malta.
Yes, well, there's a lot of other Independence Days.
I just singled out Malta because my grandmother is supposedly from there.
Cool. Hey, Philipp, anything else?
The smaller changes?
I mean, walking through your slide deck,
there are some, I think it's also smaller changes.
The spin weight hints, the HTTP2 client,
I guess these are what you consider smaller improvements?
Yes.
What I find particularly interesting,
there's now a tool called the JShell,
which is basically an interactive shell
where you can execute Java code.
Ah, okay. A lot of languages already have
that. I just think it's nice that now Java also has this feature. You can just type in arbitrary
Java code and let it execute. Yeah, and I assume that JShell is also, I mean, most of the
IDEs out there, whether it's Eclipse or IntelliJ, they probably have
their own little version of it. They embed it like a view, probably, I don't know. Until now, the
Java 9 support in IDEs is pretty poor, so in most IDEs you can't even install a JDK 9 without crashing the IDE.
So this is an interesting topic, how fast the IDEs will get it up and running.
Well, maybe we can encourage listeners, whether you are from an IDE vendor or you just have time to extend other IDEs, because most IDEs have extension options,
right? And you can just write your own extension and put it up on GitHub and then make others
happy because they can use these cool features in their IDEs. Awesome. I think the one supporting it the most was IntelliJ. Yeah. Works the best right now. Yeah.
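For listeners who want to try it, a minimal JShell session looks roughly like this (a sketch, assuming a JDK 9 install with `jshell` on the PATH):

```shell
# Start the interactive shell that ships with JDK 9:
jshell

# A sample session; "jshell>" is the prompt, and "x ==> 42" is
# JShell's default feedback after evaluating a snippet:
#   jshell> int x = 21 * 2
#   x ==> 42
#   jshell> System.out.println(x)
#   42
#   jshell> /exit
```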
All right. Hey,
Philipp,
did we miss anything that people need to
know about Java 9? I don't think
so. I think
we have covered at least the most important
things. Yeah.
All right.
Brian, do you want to, how do we do the
wrap-up? Well, I do want to add one more thing
that happened on September 21st, just because the title is so cool. I'm sure it was
actually horrible, because this was in 1380, so there were probably a lot of horrible
blunt-force deaths. But it is a Russian military day of honor for the victory over the Golden Horde,
which just sounds really awesome to me, but I'm sure it was really horrible.
Is this another Game of Thrones reference?
No, this was 1380, a Russian military day.
It just sounds cool, though.
Anyhow, yeah, why don't you go ahead and sum us up, Andy? We'll go on to our summary.
Yeah, maybe should I get a Summarator?
Why don't we put it to the audience?
Should we get a Summarator sound effect for you?
Or maybe, I don't know if we can use an Arnold clip.
Sure we can.
We just ask for forgiveness later.
Oh, now we can.
Why not?
Let's do it.
Either way, yes.
Andy, please go ahead and pull up your Summarator.
So basically kind of what I learned today is that Java 9 is finally almost there, September 21st,
sharing that date with many other cool things that happen in the world.
Thanks, Philipp, for giving us insight, especially what happened around G1 as the default garbage collector.
You also told us about Project Jigsaw, which in Java 9 is not hard enforced, but soft enforced, with error messages that are written out in case your application is accessing modules that are not to be accessed, because they're kind
of hidden away. So that means, lesson learned for people that are testing Java 9 applications: watch
out for the error log and see if your application currently accesses any classes in
modules that are not exposed, because in Java 10 you don't have the option anymore; your application
will just not work anymore. So we learned about that.
We also learned about the class loaders and how that can be tricky,
especially also for monitoring tools.
That includes Dynatrace,
where we're working on making sure
we can actually instrument bytecode
independent of which class loader is loading it.
And then in the end,
I think you enlightened us a lot
about what's happening on the compiler side, being able to use your own Java-based JIT compiler.
The Project Graal, one of the projects that came out of the University of Linz, is using that.
And we also learned about the ahead-of-time compilation, where you can already pre-compile Java code for the machine where you're going to run it,
which will speed up launch time of applications.
So there's a lot of cool stuff happening, and I know the Java community is a huge community,
so I'm sure they will be leveraging the new features.
It seems, though, that the IDE vendors have to get their game together and provide updates soon,
because right now, based on what Philipp told us, there's still work to be done there.
Philipp, thanks for this show.
Thanks for the insights.
I hope I didn't miss anything. Thanks for having me.
Brian, final words?
Any other references to other events that happened on September 21st or any other spoiler alerts from Game of Thrones?
That's about all I can contribute to this conversation because, you know, as I'm listening
to this, I realized, wow, there is so much about coding and developing that I just have no idea
about. You know, my background, Philipp, we've never met before, my background was
performance testing and metrics and that kind of stuff. So when we start going into this level,
it's way over my head, but it's amazing stuff. And it just really kind of
reinforces the respect I have for developers who
have to go into this kind of depth of technicality for all this stuff. And,
yeah, so hats off to all the developers who are going to especially have to deal with some
of the changes that are coming up here. It doesn't sound like it's going to be the smoothest transition,
but it's not going to be a break the world transition,
but I'm sure there'll be a lot of road bumps along the way.
And hopefully everyone can join together and share what they learn
and figure out great new ways to leverage the features in Java 9.
And Philip, great to meet you.
I didn't even know you were working for us. So that's even
awesome. And a lot I learned today. So thanks for being on the show. It was wonderful. And yeah,
that's all I've got, Andy. Philipp, any final things to the audience? Any appearances coming up?
Sorry?
Do you do any speaking engagements? Any final thoughts, anything you want to put out there?
Not really in the next few months.
I can also, I can always kind of make some advertisement.
There's a lecture I hold every semester about Java performance, monitoring, and benchmarking.
So if you are at the University of Linz, you're invited to join.
Awesome.
And Andy.
That's great.
Yeah.
I was going to say,
anything else on your side, Philip? I didn't mean to cut you off there.
No, not really.
I'm just, thank you for having me.
It was fun.
Andy, this will be
towards the end of September,
early November, I think.
Anything you would need, want to promote?
I guess you meant end of September, early October.
Oh, yeah.
Yeah, yeah, yeah.
I will be in Dublin for Quest for Quality, first week of October.
So that will be great to meet people in case you are in Dublin.
And then we have a couple of DevOps events coming up, DevOps Days in Philly,
maybe also Nashville.
And then at the end of October,
I believe, no, that's later in November,
it's AWS reInvent.
So check it out.
There is also a page on the community,
on the Dynatrace community,
where I keep all the events
or you just go to our website
and search for events.
We typically list these
things. Yeah, but Quest for Quality in Dublin is going to be something I'm looking
forward to. That's awesome. And I actually just did my first meetup last night. Yeah, that's
right, the Docker meetup in Denver, so thanks to them. This will be obviously a while back. I may be doing one with the Denver Java Users Group in early November.
And the Docker meetup team, they're, I guess, family with the Denver DevOps Group.
And I might be presenting there.
So just keep a lookout on my Twitter feed, I guess.
Yeah.
It's all local stuff, but hey, it's the first time.
It's the way you get started. Yeah, excellent. All right, well,
thanks again, Philipp. It was wonderful to have you on and wonderful to go into this. Yeah, it's
crazy stuff, but I'm sure for the people who work in this it's not that crazy. Thank you.
All right, bye.