PurePerformance - 038 101 Series: Node.js
Episode Date: June 19, 2017
If you think Node.js is just a technology used by small startups, then you better listen to this 101 episode. Daniel Khan (@dkhan) – a member of the Node.js community and working group – answers a... lot of questions on why large enterprises such as Walmart, PayPal or Intuit use Node.js to innovate. Daniel also explains the internals of Node.js, its event-driven processing model, its non-blocking asynchronous nature, and how that enables a list of interesting use cases. We also discuss how to monitor and optimize applications running on Node.js and why that might be different for a developer as compared to an Ops team that runs Node.js in combination with other enterprise software.
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance.
I'm Brian Wilson, and as always, to the left of me here is sitting in person, virtually, Andy Grabner.
Hello, Andy. Good morning.
Hey, Brian. How do you know it's the left and not the right? What's wrong? I mean, how does this work? I think it's to the right of you.
Well, it's audio only, so nobody knows, so just go with it. Andy, I'm coming from my dining room today. No feast, but I've been kicked out of my normal spot today, so we'll see what kind of distractions I might get, maybe a dog outside or something.
Okay, yeah, no worries. Well, I survived Cinco de Mayo. I think the last recording we did was just before the Cinco de Mayo party in our office. Survived that. Actually, we did salsa lessons with some of the office staff. You were teaching, I imagine?
I was teaching, yeah.
At least I tried.
And I think people enjoyed it.
At least they stayed.
And yeah, it was a fun thing.
So talking about fun things, what do we do today?
Well, we have another one of our 101 sessions today, right?
And we're going to be talking about Node, even though it's been around for a while.
I think we both agree it's a pretty important topic.
And also we're going to touch upon serverless, which I think is kind of this mind-blowing,
one of the most bizarre things I've ever heard of, but it seems to be picking up.
So we're going to talk about those.
And Andy, who's here today to talk to us about these?
Not Genghis Khan, but it's Daniel Khan.
I'm sure he has heard this joke many, many times before.
So Daniel is actually one of my colleagues.
We're both working in the innovation lab and without further ado.
Daniel, are you with us?
Are you still with us on the air?
Sure, I'm here.
Yeah, I've heard this joke before.
Now, see, that's not the first one that popped in my mind.
I was thinking, Khan!
Oh, yeah, of course.
Right, because to me, I always look at yours from the U.S. point of view of English.
I look at it as Khan, not Kan.
So to me, it's the Star Trek.
I'm sure you get that as well.
So many associations with my name.
Okay.
Do you know that one, or no?
Yeah, sure.
Excellent. So, Daniel, before we go into Node.js, maybe a couple of words about yourself, for people who don't know you.
Yeah. As you said, Andy, I'm working at the innovation lab of Dynatrace, and there I'm owning this topic around Node.js and also serverless. I've been in the industry for about 17 years now, and I've been doing Node for, I think, four or five years. I joined Dynatrace two years ago to help cover this whole topic around Node.js, and I try to do a lot with the community. I'm frequently at conferences, speaking about Node.js and performance, I'm also in some working groups around the Node project, and I try my best to help the project there as well. So that's what I do.
That's cool. So that means you actually have a stake in influencing at least the future direction of Node.js if you're part of that community.
Node.js is a very open community, I have to say, so everyone is free to join.
And, yeah, I once started working with them, and it was quite easy to get started.
So it's not a big deal.
And, again, this is a one-on-one session, so maybe to really just get it off the plate here,
what is Node in general?
Can you give us a quick intro for people that may have not heard about Node
or have heard about it but don't really know what it is?
What is Node?
Yeah, I'd like to maybe start with
how Ryan Dahl, maybe the inventor of Node,
kind of introduced it in 2009
because he said basically Node.js
is server-side JavaScript.
So it's JavaScript you can execute on the server,
and so you can create server-based JavaScript applications.
And to accomplish that, you need some runtime that runs this JavaScript,
and for that, Node uses the V8 engine, the same JavaScript engine that's used in the Google Chrome browser. And the neat thing about JavaScript, which we know from the browser, is that it's evented. This means it can react to events, and it has functions as first-class members.
And Ryan Dahl, when he created Node.js, thought, yeah, that's a good match,
so I can use JavaScript to really create more or less like evented server applications that react to incoming events and then execute functions on that.
And that's more or less the core of the whole Node.js project. Node.js is highly asynchronous, so it's really a good fit, but maybe we'll talk about it later, for everything that consumes APIs or talks with databases. And, I would say, it does not consume many resources on servers because of this evented model it uses. You can very much compare it with Nginx here, like Nginx versus Apache. That's maybe some kind of similarity in how Node works compared to maybe Java or PHP.
Well, thanks for that, that's understood. So if I write a Node program, that means I write my JavaScript code, and then basically, on the server, the only thing I do is just launch the Node process? Or does Node run within the context of a web server like Nginx? How would that work?
Now, the neat thing is that Node comes with a built-in web server. It has an HTTP module in its core, so in your Node application you basically start or initiate a listening connection on some port.
And then you start this application with node and then the file name you want to start, where the main file actually is.
And then the Node process will listen for incoming requests on this port. Usually, in your application, you also add a function that has to be executed when this incoming connection or request comes in. And as it is evented, this means this function can again trigger different functions, or do a call to a database.
And this all creates another so-called callback,
and in this callback you trigger the next function.
So more or less a node application is actually a cascade of asynchronous callbacks
that is at some point terminated by, for instance, returning some result.
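To make that concrete, here is a minimal sketch of the built-in HTTP server described above; the port and the response text are just placeholders.

```javascript
// app.js, started with: node app.js
const http = require('http');

const server = http.createServer((req, res) => {
  // This callback runs for every incoming request (the "evented" part).
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
});

// Listen on a port; 3000 is only an example.
server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```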
Wow.
And so that means, if I understood this correctly,
if I develop a function in Node and I call another function,
I typically do it by basically not calling the function directly as I would normally do in other programming languages,
but I basically make an asynchronous request.
So I say I want to invoke that function and please wake me up
or please call this callback in case you're done.
Is this correct?
Not directly.
So your userland code usually, so when you do, I don't know,
parse a string, echo something out or do something
or process some templates or calculate something,
that's usually regular synchronous code.
So within a callback function, it looks like regular imperative code.
Like it's really sequential.
The asynchronicity starts every time you want to do I/O.
This means you want to talk to maybe the file system, a database, do some request over HTTP.
In this case, you will call a function like, for instance, fs.readFile, FS for file system. Then you have the file name you want to read, and as the next parameter you add the callback,
the function that is to be called when this operation completes. So this means callbacks
or this asynchronicity always happens when you're doing IO. And this then also runs not directly in
your JavaScript userland code. There is another machinery in place that's called LibUV.
That's the event loop that really takes care of doing all those asynchronous jobs and then calling back the function when the asynchronous job is done.
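A minimal sketch of such an asynchronous file read follows; the file path is just an example.

```javascript
const fs = require('fs');

// The callback is the last parameter; it runs once the event loop finishes the I/O.
fs.readFile('/tmp/example.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('read failed:', err);
    return;
  }
  console.log('file contents:', data);
});

// This line runs immediately; the read above does not block.
console.log('read scheduled');
```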
Cool. Wow.
And in that context, how do I store the state somewhere? Or can I pass the state from one function to the next?
How is this handled normally?
State handling is indeed a little bit of a challenge. It's easy to do when you do it for session handling, so keeping user state, that is quite easy, because it's more or less handled by the framework. You use something like Express for web applications, and there you have the request object that is passed through most of your application, and you can pick up the session from there. So there are enough modules available that deal with that.
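As a minimal sketch of that framework-handled session state, here is Express with the express-session middleware; the secret and the route are placeholders.

```javascript
const express = require('express');
const session = require('express-session');

const app = express();

// The middleware attaches a session object to every incoming request.
app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false,
}));

app.get('/', (req, res) => {
  // Per-user state lives on req.session and travels with the request object.
  req.session.views = (req.session.views || 0) + 1;
  res.send(`You have visited this page ${req.session.views} times`);
});

app.listen(3000);
```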
It gets complicated, and we're really getting deep here already, when we talk about monitoring,
because there you need this transactional state and this continuation state.
And here I have to add what makes it way more complicated: Node.js runs in a single thread;
because it's evented, it can do that.
So, for instance, if you compare it maybe to a PHP application, when a request comes into a PHP application, one single thread or process will handle this one request.
And you can assume that within this request or in this process, everything that's going on at a given time, it will be all for this one request.
So it's easy to have a context or store a context global to this process.
In Node.js, every request is handled by one process, just not at the same time. And this means that it's harder to, for instance, store the state of one request in your application.
And that's a challenge when it comes to monitoring, because you really have to do a lot of work, called monkey patching, to get this transactional state through
all cascades of callbacks.
Every time when you pass into the event loop, you are prone to lose your context, actually.
And there, things get quite complicated.
We won't cover that, I guess.
But yeah, it's a challenge.
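To give a rough idea of what that monkey patching looks like, here is a toy sketch, not how any real APM agent does it, of wrapping one asynchronous API so a transaction context survives the hop through the event loop; the currentTx variable and the wrapping approach are purely illustrative.

```javascript
const fs = require('fs');

let currentTx = null; // the "transaction context" we do not want to lose

// Monkey-patch fs.readFile so the callback restores the context it was created in.
const originalReadFile = fs.readFile;
fs.readFile = function patchedReadFile(path, ...args) {
  const callback = args.pop();
  const capturedTx = currentTx; // remember the context at call time
  originalReadFile.call(fs, path, ...args, (err, data) => {
    currentTx = capturedTx;     // restore it before user code runs
    callback(err, data);
  });
};

// Usage: each "request" sets its own context before doing async work.
currentTx = 'tx-42';
fs.readFile(__filename, 'utf8', () => {
  console.log('still in transaction', currentTx); // tx-42, even after the event loop hop
});
```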
So I think at some point, we will want to cover some of the monitoring, maybe in a little bit.
But I think before we go into the monitoring, because I'm sure there's a lot of aspects of monitoring Node, it might be great to frame, like, how is this being used?
Where is it being used?
In what context is it being used, you know, is this kind of a fast, cheap and easy startup tool that once an organization gets more complicated and robust, they switch out to something else?
Or is this something that's really done in the whole big microservices type of way we're seeing everything going? What's the landscape of Node's usage, I guess, in short?
Yeah. So when I started at Dynatrace, I was also asked if Node is maybe the next Ruby on Rails, as you said, something like with Twitter, where you start with Ruby on Rails and then you shift away to maybe Scala or Java or something else. And there I already looked at the landscape of companies that are investing into Node.js and using it, and if you look at them, these are companies like, for instance, PayPal, Intuit, Autodesk, eBay, IBM, also Netflix, Uber, NASA, or, for the singles here, also Tinder. These are all Node applications.
And while, yeah, for sure, Uber and Tinder or Netflix are more or less startup-like, eBay or Autodesk or Intuit are definitely not. So I asked myself, what makes those real companies, with a lot of legacy going on, really use Node.js?
And I looked for use cases for them.
And I found three use cases that really clearly show how Node.js is used in the world at enterprises.
For instance, when we take Intuit.
Intuit, yeah, one guy, Alex Balazs of Intuit,
wanted to change something and wanted to create something new.
And he kind of created a team he called Pirate Ship.
So they worked after hours and really did things on their own
outside of this usual company development scheme.
And they decided to use Node for it.
And that was TurboTax then, what they created with Node.js, actually.
So it was a way to really quickly create something new.
That's one use case. PayPal, for instance, switched to Node.js, and PayPal is meanwhile a really large Node.js shop. They had this regular Java infrastructure, or system and applications, and this was quite monolithic, and one guy who was more front-endy, Trevor Livingston, found out that it takes them six months to change some piece of text on the landing page of PayPal.
Because, yeah, you know, you won't deploy the whole thing just for a text change, so you know.
So things were really…
The old waterfall kind of…
Yeah, it was really slowed down, and that was really a problem.
So they used Node.js as a templating platform first.
So just to show how things could look like, and then moved a little bit into the Java world by then using template compilers on Java, but they are basically
based somehow on JavaScript.
But then they really took the step and moved the whole front end part gradually to Node.js.
And now very much of PayPal is Node.js. So here we have Node.js more or less also as a migration platform for existing applications. And third, a good example is also Walmart. Walmart, typically, is not a startup, very much legacy going on there. And Eran Hammer worked as a software architect there at the time, and he used Node.js really as a gluing tier, so as a migration tier. He put Node in front of the whole enterprise stack and the whole legacy stack and used it to enable new offerings to connect to the enterprise stack through Node. And that's the third use case. So, first of all, Node is used by teams that are moving fast in companies. Node is used as a migration platform, because it's easy to move away from old things, like also mainframes, whatever you have, to modern offerings like mobile apps or single-page applications that are used to using JSON, and there you cannot initiate some, I don't know, connection to a DB2 or something.
So this is all done within Node.js, and it's then transformed and kind of published to
all those new platforms we have today. And that's basically, I would say, the reason why Node.js became so popular. That does not mean that Node does anything very new; it was just a very easy way to have this whole asynchronicity on this new platform.
There are many JavaScript developers around.
JavaScript is easy.
So it was really around ease of use and providing a simple API to actually quite complicated computational problems of computer science.
I was just going to say, based on all those companies that you were mentioning using it,
it's obviously not just, you know, it's great that it's not just a startup.
It's something you can choose to use knowing it's proven, right?
Especially on the enterprise level, if you're an enterprise customer,
the risk is always, hey, we want to use one of these hot new technologies, but are we going to be able to keep using this? It sounds like, based on all those examples you gave, it's definitely, although it's new-ish, right, 2009 when it first started, that's not so long ago, but it's got a proven track record, for sure.
Absolutely. I would maybe not use Node for everything. I just lately talked to one guy at a large German company, and they still have Java in the back end and everything. And I would also say that Node is not a platform to really replace things. No one will shove out their mainframe and switch everything to Node; that just does not happen.
It's a platform to augment existing legacy
or enterprise stacks
and make something new with them.
Yeah, I think, hey, that's a great explanation, especially the use cases.
I think they're very, very useful.
Now, from a monitoring perspective, you mentioned it runs on V8.
So I assume monitoring can be done by some tooling that comes with V8 itself.
When we talk about individual Node instances, is this a fair assumption, or is there nothing coming with the product, with V8 itself?
It does, but that's mostly debugging or tracing, and not so much production monitoring, that's provided with V8. But sure, you can, and that's actually quite awesome.
You can, for instance, create or collect heap dumps or CPU samples from a node process
and use the output of that directly and load it into your Chrome developer tools, and you will have full, like, the same functionality
as if you would profile a browser application.
Even more, there's now this new --inspect switch for Node. This will give you a URL you can put into your browser, into Chrome in this case, and then it will even open a view for you that shows the code and allows you to create breakpoints there. So in your browser you have your application, and you can do in-place editing and everything and debug your application. That's a really neat thing. But it's possible because V8 is V8, and it does not care so much about the underlying machinery. It's still JavaScript, so it works.
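For reference, a quick sketch of how that inspector is typically enabled from the command line; the file name is just an example.

```javascript
// Hypothetical app.js, started with the inspector enabled:
//
//   node --inspect app.js       // debug while the app keeps running
//   node --inspect-brk app.js   // break before the first line of user code
//
// Node then prints a debugger URL; open it (or chrome://inspect) in Chrome
// to set breakpoints, edit in place, and profile the running process.
```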
And so that means, what are the typical IDEs, though? What do people use to develop the code? Is this also the developer tools in Chrome? I don't think so, right?
No, no, no. So to develop, I use, not to mention a vendor, but the coverage in those JetBrains products is quite good. Then there is also, and it's really awesome and free, Visual Studio Code from Microsoft.
So there you also see the investment.
And Visual Studio Code is even done, as far as I know, somehow with Electron, which is basically Node.js again.
So it's a Node application in a way.
And it comes with built-in Node.js support out of the box,
so you can open any Node application,
and it will do all syntax highlighting, everything,
and also offer you debugging facilities with a simple click for any given application.
So it's really a great tool, and it's free.
The only thing I'm missing there is for really larger projects,
you might want to have something with a little bit more features.
But for a regular project, Visual Studio Code is just a great IDE that does quite everything you need.
Now, coming back to my monitoring question, obviously you can do debugging on individual nodes,
but what are the main use cases why developers or people that run node applications
go to a commercial provider or let's say an APM vendor like Dynatrace?
What are the use cases that we provide that are not handled by the debugging features that are coming with V8?
So the thing is, when we talk about monitoring, and I think that's really key, and that also took me a while to learn in the beginning, is that we rarely deal with those developers, right?
So with those people that actually wrote that code.
Application performance monitoring,
and if we talk about enterprise scenarios,
always involves some other teams like operations teams
that kind of keep the whole thing running.
And usually Node.js is just another tier
in this whole landscape they have.
But, and we see this very often, Node.js is a very important tier because it's very often kind of a proxy tier.
Or, and I forgot to cover that with my use case before, also the tier that handles all those microservices.
Node.js is great for microservices, by the way.
And then if you don't have the coverage of Node.js, you lose a lot.
Why?
Because you want to see the transactions passing through your application because that's the only way to really nail down a root cause.
So if a user clicks on something and something breaks down the stack, you want to see the whole transaction passing through all tiers, and to get that, you have to monitor Node.js as well. And that's, for me, also the most important use case. I can talk a little bit later about metrics within Node.js that are important, but this transactional tracing through Node.js is really key from my point of view. It's about having a request coming in and not losing this request ID on your way through all those callbacks, and when the request then leaves the Node tier, for instance to call some Java backend, this request ID should be on this request again, to be picked up on the Java tier, for instance. And this is really a challenge in Node.js.
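A toy sketch of that request ID idea follows; the header name and the downstream host are made up, and real tracing agents do this kind of propagation automatically.

```javascript
const http = require('http');

http.createServer((incomingReq, res) => {
  // Pick up (or invent) a request ID when the request enters the Node tier.
  const requestId = incomingReq.headers['x-request-id'] || `req-${Date.now()}`;

  // Forward the same ID to the downstream (e.g. Java) backend so the
  // transaction can be stitched together across tiers.
  const downstream = http.request(
    {
      host: 'backend.example.internal', // placeholder host
      port: 8080,
      path: '/api/orders',
      headers: { 'x-request-id': requestId },
    },
    (backendRes) => {
      res.writeHead(backendRes.statusCode, { 'x-request-id': requestId });
      backendRes.pipe(res);
    }
  );
  downstream.on('error', () => { res.writeHead(502); res.end(); });
  downstream.end();
}).listen(3000);
```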
And in those Node calls, would you say most, some, or all, or what percent of requests that hit the Node tier go on to another sub-tier from Node, like to Java or somewhere else? Or maybe does Node sometimes go directly to the database? Or are there situations where Node's handling everything and just sending it right on back?
I think my knowledge here is a little bit filtered because our customers are mostly enterprise customers.
And they use Node heavily as a gluing tier.
So as proxy and most things are kind of going through the Node tier.
For instance, Node does some authentication or something.
So most of the time when I ask the customers, does Node.js talk to services down the stack, it's always a yes. And it's not so often even the database; sometimes that happens, or it uses Redis for some kind of session management, etc. But down the stack there is, in most cases, some kind of enterprise-y thing, like an Oracle database or some data warehouse or whatever. And that even applies to those now, I would say, modern microservice platforms
because those microservices are, I would say, like a cloud around those
enterprise-y cores companies have.
So those microservices still reach down the stack and talk with some APIs or with some services that are not Node.js, often Java.
So when you're looking at the response times of your transactions running through, it's not enough to just say, you know, okay, it's a Node transaction.
We have a node problem.
It could still be anywhere in that stack because there's, you know, a lot of different places where node could be reaching out to other services or other components that it can be interacting with.
It's not a self-contained component. And Andy, I just wanted to bring that in as you were going into the monitoring, because I think it was important to establish that Node is not just standalone, but that there's that whole deeper view there.
And I have to add here that also the Node teams, they are, as I said, very often very fast-moving teams in organizations, and they really have to get into this mindset as well, to be part of kind of a bigger whole in a way. Because I often see that Node teams are doing their own little monitoring, dedicated for Node, and they think they are set with that. But when something goes wrong, they might be the first to blame, because the errors first occur on Node.js, because it's the very first tier it hits. And also when the transaction fails down the stack, the error might be an exception in Node.js, and then it will be the Node team that is to blame. So it really makes sense to have a holistic view of the whole application, to find root causes where they happen.
Yes. Hey, before I ask you a little more about monitoring and also what to look for: what are the typical platforms, then, that people run Node on?
I would assume it's heavily Linux based or do you also see other platforms?
We see really mostly Linux.
Sometimes we see Windows, but that's very rare. That also has something to do with the fact that the Windows support of Node was not so great for a while. And yeah, that's more or less historical, too. You have to consider that, while Node.js is JavaScript and V8 also runs on Windows without a problem, you see it every day on your Windows machine, in your browser, the event loop that kind of delegates all those asynchronous jobs to the operating system
actually really heavily relies on the operating system it runs on
because it really utilizes kernel functions there.
So it really makes a difference.
So the implementation is hugely different.
So if you're kind of listening for a kernel event in Linux or on Windows,
these are really different things.
And that's the reason why historically I think Linux is by far the most used platform here.
But honestly, Linux is in all those server environments most of the time.
And you mentioned that Node.js is executing on a single thread. So does this really mean that every time I launch a Node instance, that I really only get one thread?
Or do I get a thread per core, which would, I guess, also work to be more efficient? How does this work? Is it just one thread?
So there are a few things. First of all, you can start Node using the built-in core module cluster, and this will basically spawn worker processes, those child processes, and will then internally use inter-process communication to delegate load to those children. You do this also code-wise, like you would do it in other languages: you ask, if it's the master, you do something, or if it's a child, you do something else, and then Node.js will take care of the rest. So usually you then create cluster processes times the number of CPUs.
It makes sense.
So you usually have then times CPUs plus one
because that's the master process that kind of starts in the beginning.
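A minimal sketch of that cluster pattern, one worker per CPU core, with a placeholder HTTP handler:

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // The master forks one worker process per CPU core.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a new one`);
    cluster.fork();
  });
} else {
  // Each worker runs its own event loop; the cluster module distributes
  // incoming connections across the workers.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```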
But that's just one piece of the story
because when people talk about Node.js, they very often also mention the thread pool of LibUV, of the event loop. So Node also maintains a thread pool, which I think starts with four or eight threads or something like that, and people then tend to say, okay, then we are back at regular threading, right?
Because there we have the thread pool.
But you have to know that,
so LibUV sometimes delegates tasks to the thread pool,
to a thread, and the thread will do the task
and then will come back to LibUV
and LibUV will call the callback.
But this is not the most frequent case because very many
APIs, interfaces of modern operating systems, are already asynchronous, so they can be directly used by LibUV, which will call the native interface that is asynchronous already
and does not have to delegate to the thread pool.
Just in rare cases, it might happen that the thread pool is utilized to do some asynchronous tasks
if there is no better way available for that.
And then we also have a few threads.
The V8 engine also will spawn a thread for
instance, I think, for garbage collection. So there are some kinds of workers that will start with Node.js, but your JavaScript code and also the event loop will run on one single thread, and if you spawn it, on one single thread per CPU.
So that means scaling: if I have a powerful box,
then if I want to scale Node.js,
it doesn't actually make sense to spawn multiple instances
because the idea is that we are leveraging the CPU
in its best intention anyway
because we're always, thanks to the event model,
are never putting the CPU in idle,
even if we have just one node instance running
with however many worker threads.
So that means one node instance can fully utilize the CPU.
Yeah, one CPU.
So one node instance can utilize basically one CPU at a given time.
And if you have more cores, for sure, start one for every core.
But the CPU is best utilized already.
So I think that causes a lot of confusion sometimes, because people think of the event loop as some kind of totally deterministic machinery.
But there are a lot of heuristics in place here.
So it really tries to make smart guesses about how to utilize stuff.
And that's exactly what I was talking about before.
Node.js provides a simple API
to complicated processes
because if you do this on your own
in Java or in C or whatever,
you have to take care of that yourself.
Cool.
All right.
I think what I would like to conclude this topic with,
and I know we said we wanted to cover both Node.js and serverless in a kind of like a 30-minute session,
and we've already been talking 30 minutes on Node.js alone,
but I think this is just so valuable.
I would love to wrap up this Node.js specific topic by asking you now on a monitoring perspective,
what are the things that people look for besides obviously the end-to-end tracing?
What are the things people look at to optimize their code that is running?
How do they find hotspots?
What are the typical hotspots?
Is it bad coding?
Is it memory?
What are some of the things you've seen out there?
So there are a few things. First of all, Node.js is a long-running process. This means, not like in maybe PHP, where one process lives for just one request; you start this process and then it handles requests. And so you're prone to every kind of memory leak you can think of, obviously. You can store something in global scope that will kind of clutter your memory, but you can also have some garbage collection issues with Node.js. Where it's no problem in PHP, because for that one second before the process dies everything kind of builds up, and for this one second that's no problem, if in Node.js this runs for three weeks, you see how the memory gets consumed. So first of all, collecting heap dumps is one thing, or looking at memory usage, and also garbage collection runs: how long do the garbage collection runs take, do I, for instance, allocate a lot of objects here, and does this really slow down my application? Because every time the garbage collector runs, the Node process will be stopped.
So these are things you want to look at.
Then for sure, CPU metrics are interesting.
So how much CPU am I using actually,
like for every other application that's an important metric.
And this is something we are just getting ready now.
Event loop metrics are important.
This means we measure, so one event loop run is called a tick, and we now created metrics that measure the duration of one such tick and also the latency
and also the frequency of event loop runs. And this tells you a few things. It can tell you, for instance, if the event loop is blocked for some reason, like waiting for some I/O, or if it's doing a lot
of JavaScript code or userland code, so if it's heavy on the userland side. So you see all of that
in those event loop latency metrics. And this also tells you a lot about what's going on in your code. So there are two things for sure that can cause Node to slow down.
First of all, for sure, it talks to some slow process via, oh, like, I don't know,
some HTTP, some REST API that is responding slow, that will, for instance, also slow down the event loop because all those pending requests pile up in the event loop.
Or, another thing would be when you, in your userland code, and you should never do that, actually do CPU-heavy computations.
But this can already start with string manipulations or parsing large JSON objects.
This really can cause your application to slow down.
And you would see that basically through the event loop metrics
or through CPU sampling, of course, as well.
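A crude sketch of that event loop latency idea, for illustration only; a monitoring product measures this far more carefully, and the interval and threshold here are arbitrary.

```javascript
// Schedule a timer at a fixed interval and measure how late it actually fires.
// If the event loop is blocked (pending I/O piling up, heavy userland code),
// the measured lag grows well beyond the expected interval.
const INTERVAL_MS = 500; // arbitrary sampling interval
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const lag = now - last - INTERVAL_MS; // how much later than expected we ran
  if (lag > 50) { // arbitrary threshold
    console.warn(`event loop lag: ${lag} ms`);
  }
  last = now;
}, INTERVAL_MS);
```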
It's funny.
A lot of those problems I hear, although framed a little differently and with different characteristics within Node, all kind of tie right back into the same common problem patterns, you know, with the string manipulation and too many requests to another tier. It's just a different flavor of it.
Yeah, but in Node the problem is different to other kinds of platforms. If you take, I always take PHP because it's such an easy
example. If you take PHP and you do something very CPU heavy on one process, on one request,
all other requests can still be handled, right? Because it spawns a new thread and you're done.
In Node.js, if you have one single thread and you, I don't know, calculate Fibonacci somewhere in your JavaScript code, and this takes two seconds, then within these two seconds really nothing happens in your Node application. It cannot even take one single request, so you can really halt everything with such operations. So that's the critical thing.
That's the reason why you should never do CPU-heavy stuff in Node.js directly.
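A toy example of that blocking behavior; the naive Fibonacci is deliberately slow, and the route names are made up.

```javascript
const http = require('http');

// Deliberately slow, synchronous CPU work.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

http.createServer((req, res) => {
  if (req.url === '/block') {
    // While this runs, the single JavaScript thread is busy:
    // no other request, timer, or callback gets processed.
    res.end(`fib(42) = ${fib(42)}\n`);
  } else {
    res.end('fast response\n');
  }
}).listen(3000);

// Try it: request /block in one terminal, then / in another.
// The second request waits until the Fibonacci calculation finishes.
```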
Cool.
Wow.
Hey, Daniel, I know we could probably go on much further,
and I know there's a lot more material out there.
You are a regular speaker.
You're a blogger.
But I think I want to conclude that topic for this episode and then invite you back for another one on serverless. Kind of to sum up what I learned: you know, where Node.js came from, it's been around for a while. Thanks especially for the summary and overview of all the different use cases, especially in the enterprise that we are dealing with.
It seems to be often used as a gluing tier
between some of the new projects that are going on
and the enterprise stack that is there.
Also, obviously, it allows certain teams to move much faster.
That's great.
I learned a lot about architecture,
and thanks a lot for explaining
the different monitoring use cases. And it seems really what we try to solve as Dynatrace and as
other APM vendors probably in the space is making sure we can not only monitor a single node instance
because that is rarely ever the case or helpful in a large production deployment.
You really need to understand the end-to-end transaction flow when it flows through Node.js and what's happening there.
Is this kind of a good summary?
Yeah, and concluding, I have to say that we in the Node Diagnostics Working Group
are really trying to get even better transactional tracing in Node.js
because every vendor, and we have a few APM vendors in there, are kind of facing the same problems here.
And we are really working on a generic solution to make monitoring of Node.js applications even better.
Great. And I wanted to say thank you for using the word Fibonacci.
You're the first to do that. And if I could give you an award, I would.
It's a longstanding inside joke with me and my friends, but I don't know if any of them even
listened. So it gave me a great feeling though, to hear that. So thank you. And the other, the
other big thing I wanted to point out, right, you were just stressing the whole bit about the,
you know, large CPU consumption and processing and how monitoring CPU utilization in Node is very
important. And you also mentioned monitoring garbage collection is very important. And that
then brought back to mind, Andy, our conversation just about garbage collection in general, when we
were talking about memory, how one of the most important things to be able to see is the CPU utilization of garbage collection.
Because if you are not seeing that garbage collection specifically, GC is going to show up as CPU.
GC uses CPU.
So the importance of being able to see this is code execution CPU versus this is garbage collection CPU execution
is very, very important.
And yeah, I just wanted to bring that up
because if you're just looking at that CPU,
it's hard to tell,
is this something intensive in the code or is this GC?
So being able to monitor that is key, I think, as well.
Yeah, and luckily, the V8 engine, in this case for Node.js, has events you can listen to for GC runs, so it's quite simple to time them and to graph them out.
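One way to get at those GC timings, a small sketch using the perf_hooks GC entries available in newer Node versions (not necessarily the exact mechanism meant here):

```javascript
const { PerformanceObserver } = require('perf_hooks');

// Each 'gc' performance entry corresponds to one garbage collection run.
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.duration is the pause in milliseconds; graph or aggregate it.
    console.log(`GC run took ${entry.duration.toFixed(2)} ms`);
  }
});
obs.observe({ entryTypes: ['gc'] });
```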
Great, great, excellent. Cool. All right, let's wrap up this episode, Brian.
Yes.
Daniel, do you have any engagements?
This is probably going to be airing in June, I think.
Do you have any summer engagements that you'll be speaking at or attending that you would like to promote?
I cannot tell yet.
I might be at Node Summit.
Node Summit, I think it's called. No, Node Summit. One second, give me a second.
Andy, give us some music.
I might be at Node Summit in San Francisco, I cannot tell yet. I was invited to the board of those people that kind of vote on the submissions,
on the call for papers.
So I hope this also means I will be in there as well.
And next, most probably, I will be at Node Interactive in Canada,
sometime in fall.
But, yeah, I don't know yet.
Most things are really on a short call.
And we will post links to your blogs and everything up on the site for this.
And again, thank you very, very much for taking the time.
I found it very enlightening, as always.
And I want to thank our listeners for being with us all this time.
And we want to hear any questions or feedback you have.
If you have any specific questions, this is a great opportunity.
Since we have to delay the serverless talk, if you have any questions you want us to address on this serverless conversation, go ahead and either email them to pureperformance at dynatrace.com or you can tweet them to at pure underscore DT.
Or you can also reach out to Andy at Grabner Andy or me at Emperor Wilson.
Daniel, you have a, do you tweet?
Yes, you can reach me at DKhan, so D-K-H-A-N on Twitter.
Excellent.
Any last words from anybody?
No.
All good.
Those were last words.
It's a trick question.
Got you again.
All right.
Well, then, until next time, everybody, thank you.
Thank you very much.
Good.
Bye.