PurePerformance - 081 Mastering Memory Aware .NET Software Development with Konrad Kokosa
Episode Date: March 4, 2019
The .NET Runtime – whether .NET Framework or .NET Core – provides many ways to optimize memory management. But they don't come in the form of configuration switches as we know it from Java. While there are a handful of settings, the .NET Runtime favors a different approach: asking developers to write memory-aware software that follows a couple of core memory-aware principles and best practices.
In this podcast we get to talk with Konrad Kokosa (@konradkokosa) – author of Pro .NET Memory Management. In his book he gives developers and operators great tips on how to optimize your .NET applications and environments, such as #1: start with proper monitoring; #2: reduce memory allocations; #3: well, for this and more you should check out Konrad's book.
Listen in to a great discussion with somebody who has been working very closely with the .NET engineering teams over the past years and brings the internal secrets of .NET memory management to everyone out there who wants to write memory-aware .NET software!
https://prodotnetmemory.com/
https://twitter.com/konradkokosa
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always we have with me Andy Grabner, my co-host.
Andy, how are you doing today?
Hey Brian, I'm good. Just came back, well, I know at the time when this records, we just came back from the Neotys PAC in France.
Ah, yes. I was very jealous about that.
I saw you all renting some ski equipment at one point, I think.
We did in the afternoon of the last day.
So that was really great.
Yeah, well, I guess I can't complain.
I do live in Denver.
But yeah, I'd never been to the Alps.
That looked really awesome.
I'm also really excited today
because I have a real nice delivery
coming from Sweetwater, some new audio gear, so I'll get back to some recording. Gonna be able to record my drums again. And so it's
a good day. And it's a good day for other reasons, right? We have a good guest today. Andy, do you want to...
Yeah, we do, actually. And I think it's been... and I need to ask him when we met the first time in Poland, Konrad.
He is an author of a book called Pro .NET Memory Management.
And I think memory management is a big topic for most of our customers.
And therefore, I was excited that he actually reached out to us again and said, Hey, we met a couple of years ago.
Do you want to talk about this topic?
Is this interesting?
So, Konrad, maybe you want to take it away from here
and maybe introduce yourself and then we'll get started.
Sure.
Hello, everyone.
As you said, I'm Konrad.
I'm living in Poland.
As you said, also, I'm author of this book,
huge book because it's over 1,000 pages.
Wow.
I remember that we met in Warsaw. We were talking, Andreas, about web performance at
a meetup which I was organizing then. And I remember that I was
saying that I was just starting to write this book. It was, I don't
know, three years ago. So, a few months ago, I finished, so
indeed, it took me over two years to write it.
And as you said, I'm a consultant, a freelancer, .NET-related.
I'm giving trainings.
I'm doing consultancy also on the topics related with the performance and architecture.
And this is my everyday job currently.
And as I said, I finished this book a few months ago,
and now I'm trying to, let's say, gather the feedback from the market.
Well, congratulations on completing the book.
That sounds like a massive book, but I'm sure there's plenty.
It's two kilograms.
Wow. Is this something that you authors internally measure yourselves against, like how heavy is your book?
It was not, but a lot of people were just having fun with the size of
this book. When the book was published, people were posting on Twitter, for example,
tweets with a photo of this book saying,
what is it, why is it so big, and so on.
There was even one tweet when someone weighed it.
From this, I know that it is 2 kilograms.
Well, it puts you up there with Herman
Melville and Leo Tolstoy
as far as sizes go. So congratulations.
You're in good literary company.
So, Conrad, let me ask
you a question. I remember
when I was
doing more active .NET development,
everything was so easy in .NET.
At least they made it seem that way, because
there was not a whole lot of things you could actually change and configure when it came to garbage collection, heap sizes, all the stuff we knew from the Java side, where you could configure and size and tweak memory management and garbage collection.
This was all really not there.
And obviously, this was years ago. So let me ask you: why does somebody need to write a book of a thousand pages?
And why does somebody need to read a book?
What are the challenges now?
What has changed over the last couple of years, especially in modern .NET environments?
It's a very good question.
In general, this book has been written after more than 10 years of .NET's existence.
So this by itself says that it was not very needed,
because .NET was very good without such a book.
Because, as you said, .NET is very convenient to write in,
mainly because of such things as automatic memory management, so
a lot of people don't have to care about it. In most cases we can say, for example, that 80%
of development should not need to be very aware of .NET memory management. But still there are some cases, like 20% of cases, when things are not going very well
and we should start to think about it.
So in general, answering your question, .NET memory management is still very good,
and it doesn't have to be fixed in any way. And this book is not an answer to such needs.
Simply, I think that after so many years of .NET development,
we gathered some patterns, some anti-patterns,
from the experience of people related to .NET memory management.
There are also some typical problems or caveats
that people are just encountering, so it's
just a book about all of this experience of the .NET ecosystem in that
field. Yeah. So does this mean, if I understand it correctly,
that Microsoft, the .NET team, did not go down that route like Java did,
where you have a lot of different configuration
options,
so you can tweak the Java memory management based on your own needs?
But it's more the opposite, where the .NET memory management is still kind of, like, you
know, you take it, you don't need to configure a whole lot.
But over the years now, we have learned how to develop in a way
so that .NET memory management works in an optimal way.
So in a sense of instead of configuring it for your needs,
you develop based on best practices?
Yes, exactly.
And as you said, it was a very well-thought-out decision, made at the very beginning of .NET, that it
was designed so that it doesn't require so many parameters to tune.
In Java, there is a lot of flexibility: you can change the GC, you have a lot of tunings
there, a lot of parameters. But .NET decided to not take that
path, because they wanted to expose some very high-level way of configuring the GC,
just tuned for some typical scenarios, and not put all these parameters and
choices on the developer.
So yes, in general, we have in .NET
some very high-level way of configuring things.
But still, there are some patterns
that we should be knowing
if we want to write memory-aware software.
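For readers following along, the handful of high-level switches being described here look roughly like this in a classic .NET Framework App.config (the gcServer and gcConcurrent elements are the documented ones; the values are illustrative, not a recommendation):

```xml
<configuration>
  <runtime>
    <!-- One GC heap and dedicated GC thread per core, for server workloads -->
    <gcServer enabled="true"/>
    <!-- Allow background (concurrent) collections -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```

In .NET Core the same knobs appear as "System.GC.Server" and "System.GC.Concurrent" in runtimeconfig.json. Note how coarse these are compared to the dozens of flags a JVM exposes.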
Hey, Andy, this sounds a lot like
when we always used to talk about things like throwing a framework into something. You know, the classic example was always using Hibernate for JDBC connections, right? Where you're going to throw this package in, and with the pitfalls of it, you can get yourself into a lot of danger.
And Conrad, it sounds to me like that's exactly what you're saying.
.NET memory management works really good.
However, there are a lot of ways in which a developer can use it in the wrong way, which can expose maybe, let's say, flaws or just limitations in how it works, which then gets you into memory trouble.
Exactly. I'm not sure if you know the law coined by Joel Spolsky, the Law of Leaky
Abstractions: that eventually every abstraction will leak and will hurt you somehow.
And it's exactly like that in .NET memory management. In every automatic
memory management ecosystem, the memory is, in fact, a kind of abstraction of infinite memory,
because we are, for example, only allocating objects and don't care about releasing them. So it's a kind of abstraction of infinite memory. Yes. But in the
end, this abstraction will eventually leak, and, for example, all of a sudden it turns out that
there is something like a memory leak in an environment which theoretically is releasing
memory. So people are sometimes surprised
that something like that may happen.
Yeah.
And Andy and I,
we've had several podcasts in the past,
like way back,
if we go back to the beginning of the podcast,
where we talked about
some of our favorite anti-patterns,
some of them involving .NET,
some involving Java.
But what I really,
and I'm hoping Andy,
this is where you'd like to go with this as well,
is let's hear about some of these really great
anti-patterns that you're addressing in the book.
Because I think those are usually really, really
fun to hear about. Yes, I mean
when someone is asking
me or other people related
to the performance what they
should do and what are the patterns
or anti-patterns,
the first thing that we are saying is to
measure. Because without measuring, we don't even know
whether we have any problem. Without measurement, we don't understand our software. We don't
know where the problem is, and whether we should control something more so as not to have this problem.
So the very first thing, the very generic one, without even going into the details, I would put
here that we should simply measure things, to know what our systems are doing in the context of memory management, for example.
So the very first pattern is to measure, not even touching the code.
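As a first step toward that "measure before touching the code" advice, even the base class library alone gives a rough look (a minimal sketch; real monitoring would use performance counters, ETW events, or an APM tool):

```csharp
using System;

class MemorySnapshot
{
    static void Main()
    {
        // Managed heap size as the GC currently sees it
        // (false = don't force a collection before measuring).
        long heapBytes = GC.GetTotalMemory(false);
        Console.WriteLine($"Managed heap: {heapBytes / 1024} KB");

        // How many collections each generation has seen since process start.
        // A rapidly growing gen-2 count is a first hint worth investigating.
        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
            Console.WriteLine($"Gen {gen} collections: {GC.CollectionCount(gen)}");
    }
}
```

Logging these two numbers periodically already answers the first question Konrad raises: whether you have a memory problem at all.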
Yeah.
So what do you measure?
What are the key things that people need to look at?
Are these the classical performance counters we've been looking at for all the years?
Or is this anything where you say,
man, I would wish that every developer on .NET
would at least measure these metrics?
I would put here the memory usage, of course. But
also we should understand which measurement is most important. A lot of people, for example,
are measuring the total memory usage, not knowing that, for example,
they are or are not taking into consideration the fragmentation of the memory. So sometimes
I was seeing memory usage of dozens of gigabytes when most of the space was empty, for example,
and they thought that they were using a lot of memory. And the second measurement would be the overhead of the GC,
because not only the memory usage is important,
but how much the GC itself consumes the CPU, for example.
And here, obviously, there are those performance counters that you mentioned,
but the problem is that they are only supported
in the .NET Framework, the main framework used on Windows. In the case of Linux, things are getting
a little more complicated, at least currently, because they don't support
performance counters. But still, there are some ways to measure it, for example using ETW events. But still,
those are the two most important measurements: memory usage and GC overhead.
Now, the memory usage, you made a comment about fragmentation. I assume you're talking about the different memory
generations. So, if I remember back to my days as a developer, there were generations zero, one, and two, and the
large object heap. Is this still the case, or has...?
Yeah, yes, yes, it's still the same.
It doesn't change. But recently one thing changed here, because for so many years the threshold above which objects land in the large object
heap versus the small object heap was constant. It was 85,000 bytes, and it was in so many
interview questions, to ask which objects land where. Only recently this threshold
became configurable, so we are able to increase,
for example, this threshold above which objects are allocated in the large object heap. And that is the only
difference here.
So that's interesting. If you're running microservices, right, I would imagine that
was probably taken into account for that, where you might not be dealing with the traditional
large objects
like you might have been dealing with in a monolithic application.
So it sounds like that would be used.
Do you find, I guess my question there is, do you find that in more cases, people might
tweak that setting down to accommodate maybe smaller bits of information they're working
with in a microservices architecture that they may want to keep in large object or the
opposite?
Unfortunately, it can only be increased.
Oh, okay.
Due to, I don't know, some design decision,
maybe to maintain some kind of backward compatibility,
we can only increase this size.
So, for example, if we operate on a lot of big objects
and a lot of them were landing in the large object heap, we can avoid this
situation by tuning this parameter.
Okay, so I was totally wrong.
But again, without measurement, we have completely no idea whether we will benefit or not. So in such a scenario we have to measure, maybe, less important things, but we should
just know, for example, the distribution of our objects: how many of them are dying in
generation 2, or, for example, how many objects are allocated in the large object heap. This is
important because the large object heap is not compacted, for example,
so it can grow faster due to fragmentation. And also, allocations on the large object heap
are a little bit slower. So without measurement, we just won't know whether we are mainly allocating
in the small object heap or in the large object heap, or something like that.
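For reference, the knob being discussed is exposed since .NET Core 3.0 as "System.GC.LOHThreshold" in runtimeconfig.json; the 128 KB value below is purely illustrative, and per the discussion it can only be raised above the default 85,000 bytes:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.LOHThreshold": 131072
    }
  }
}
```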
And just for understanding, because you just said, let's measure how many objects die
in generation two. So these are basically objects that live long enough to be promoted into
generation two, but then eventually get garbage collected. So these are very long-
living objects?
Yes.
And I assume these are good candidates to figure out
why we keep them alive so long, what the reason is. Because I would assume you want to
actually try to minimize the lifetime of objects, correct?
Yes, absolutely.
One of the most common anti-patterns in development
is to introduce something which is called a midlife crisis.
Because the garbage collector is designed in a way that it mostly, and most efficiently, handles
objects that are either living very shortly or living very long.
The most non-optimal way to handle object lifetime is when objects live long enough
to go to generation two, but then, after a very short time, they
should be garbage collected; they are there for a very, very
short time. So it's like promoting them to generation two only to garbage collect them
there. And because garbage collection of generation two is the most expensive one, we should avoid it.
That's awesome.
I like the term midlife crisis.
I have to.
Yes, it is.
Yes.
Very, very, very funny name.
But in general, yes, it's one of the most typical anti-patterns, and there are ways to measure it. For example, the PerfView tool
has a report that shows what objects are dying in generation two. And this is a good
question: why are they dying there? Maybe we should pool them, to have a pool of long-living objects, and not recreate many objects just to die
in generation two.
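A minimal sketch of the pooling idea described here, using the BCL's ArrayPool<T>, so buffers are reused instead of being repeatedly allocated and aged into generation 2:

```csharp
using System;
using System.Buffers;

class PoolingSketch
{
    static void ProcessRequest()
    {
        // Rent instead of new byte[90000]: the shared pool hands back a reused
        // array (possibly larger than requested), so the hot path allocates
        // nothing and nothing gets promoted just to die in generation 2.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(90_000);
        try
        {
            // ... fill and use the buffer ...
        }
        finally
        {
            // Returning is what makes it a pool; forgetting this quietly
            // degrades it back into plain allocation.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```

The same shape works for any expensive long-lived object: keep a pool of them for the whole process lifetime rather than recreating them per operation.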
Wow, that's pretty cool. I had another question on measuring garbage collection. Because we
talked about this earlier. So memory usage makes sense: generations zero, one, and two,
small and large object heap. And now garbage collection. I remember there were metrics around the number of activations
and the suspension time or total time.
Do I remember this correctly?
So what types of garbage collection metrics are important to watch?
In fact, for the garbage collector itself,
two metrics are most important.
One is those pauses that you are talking about,
because there are some phases, even in concurrent GCs,
some phases when just every thread has to be stopped.
And this is obviously not very good from the perspective of performance,
because our application is being stopped,
for milliseconds, for example,
hopefully only for milliseconds.
So this impacts the throughput
and the performance of the application.
But also we can look at the CPU usage,
because there are also such anti-patterns, and I've seen
such scenarios in production where theoretically everything is fine: the application is running,
there are no memory leaks, memory usage is stable. But when we look at the CPU overhead of the GC,
we see that most of the time, the only thing happening in the application is the GC,
because the GC is so busy reclaiming memory. Because, for example, we allocate
so many objects that the GC is that busy reclaiming memory after them. So the CPU overhead of the GC is
also important.
So I would assume this is another anti-pattern, as you just
said: allocating too many very large objects that then need to be garbage collected. Is there
another pattern name, like the midlife crisis name, that you have here?
Not so fancy.
Just rather not allocate.
I mean, avoid allocations,
no matter whether it is about small objects or large objects.
In general, avoiding allocations
is one of the most important
performance tips and tricks which we can use to make our application faster, because an allocation
by itself costs something: the GC has to find space for it. Let's say that, in general.
And then, when we allocate, obviously, the GC will eventually have to reclaim
memory after this object. So not allocating is always the best choice from the
performance perspective. And in fact, I dedicated dozens of pages to this: what are the common sources of allocations, and how to avoid them. Because,
as we were saying before, it is a very convenient
abstraction that we have,
this abstraction of infinite memory: we can always allocate. So people learning C#,
for example, are only aware of new-ing up
objects. They don't even care that the memory should be reclaimed;
yes, we are not teaching people about this. And then, all of a sudden, people are discovering
that allocations are costly and we should avoid them. Which for many people is quite
surprising: that we should start to think about avoiding
allocations.
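Two of the allocation-avoidance techniques alluded to here (and revisited later as structs and Span of T), sketched minimally; the point is only that neither version touches the managed heap:

```csharp
using System;

class AllocationSketch
{
    // A small struct lives inline on the stack (or inside its container):
    // no heap allocation, nothing for the GC to track or reclaim.
    readonly struct Point
    {
        public readonly double X, Y;
        public Point(double x, double y) { X = x; Y = y; }
    }

    static int SumDigits(string text)
    {
        // A ReadOnlySpan<char> is a view over the existing string;
        // Substring-based parsing would allocate a new string per slice.
        ReadOnlySpan<char> span = text.AsSpan();
        int sum = 0;
        foreach (char c in span)
            if (char.IsDigit(c)) sum += c - '0';
        return sum;
    }
}
```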
Go on, Andy.
Go ahead, Brian.
I was just going to ask: we know there's obviously the full-fledged .NET
that runs on the traditional Windows ecosystem.
Now in the last few years, we've had.NET Core and you've already addressed a little
bit about how some of the monitoring doesn't really work on Linux because you can't
get into that view. But in terms of, you know, the thousand pages of your book and all the research
you've done in there, I mean, in terms of the memory operation in .NET, is it the same in the
full-fledged, I don't want to say full-fledged, but in the full version of .NET versus the newer .NET Core?
Are the memory management components the same?
I figured the anti-patterns are going to be the same, right?
Because anti-patterns typically become somewhat universal.
But is .NET memory management the same across the full .NET Framework and .NET Core?
Yes, in general, yes. Because when .NET Core was created, the code of the GC was just,
let's say, copy-pasted to .NET Core, about 40,000 lines of code.
So yes, it was simply the same GC.
So the whole implementation, the whole implementation details,
can be directly translated between those two .NET frameworks that we have currently.
There are some minor changes on the level of the API and the communication with the operating system, because some things were not so easy
to translate between the Windows API and the Linux API.
So there are some things done a little bit differently, but the influence of those differences
is very small, in fact.
So there are some very, very small differences in
the implementation, but the whole algorithm, the whole thing, is the same.
Okay. And the main difference,
from the perspective of, let's say, regular development, is the monitoring thing.
Because, right, during those over 10 years of .NET on Windows,
a lot of tools and good practices were created,
smaller or bigger, commercial and non-commercial tools
for supporting monitoring and diagnostics of .NET.
But on Linux, things are a little bit worse currently, unfortunately. But I believe it will be changing, because .NET Core on Linux is getting more and more popular.
So you'll probably just have to wait a little bit for those things.
So Andy, that brings to mind, we used to joke, or I used to joke at least,
especially when we talked to Donovan Brown at Microsoft, how you could, in theory,
use .NET Core for free, running it on,
you know, not paying,
you don't have to pay a license for.NET Core.
You could run it on a Linux OS,
which you don't have to pay a license for.
And then if you're running it in Azure,
the only thing you're paying for is the compute.
But in a way,
and I don't think they did this on purpose,
but obviously the one way that helps
keeping it on Windows, right,
is the fact that you lose that monitoring
or a lot of that monitoring capability,
which, you know, as we all know,
if you've spent any time in applications,
monitoring your memory is really, really important.
So I think short-term,
it helps them probably keep people on Windows for it.
Yeah, or it's obviously a great way for tool vendors.
And obviously that includes us as Dynatrace.
I'm sure there's many other tool vendors out there
to kind of fill the gap, right?
I assume so.
Yes.
So, Conrad, I assume there are ways, as you said,
to get around some of the shortcomings
either through, I think you mentioned ETW,
but I'm sure there's other ways
on how to get performance metrics
that are currently maybe not provided in the same convenient way as it used to be the case on Windows.
In general, ETW has been translated into the LTTng infrastructure, abbreviated from Linux Trace Toolkit: next generation,
or something like that.
But in general, it's an ETW-like infrastructure on Linux.
So .NET Core is using it on Linux to provide something very similar to ETW. The names of the events are the same.
So it's good.
The main, or the major, in fact, problem is, for example,
that it is very important from a diagnostic point of view,
for example, that on Windows,
ETW events can be augmented with stack traces. And it is a very powerful thing,
for example, to see where each exception was thrown. And on Linux, the LTTng infrastructure
doesn't provide information about stack traces. So this is a quite big limitation, because for now, for example, events like allocations or events like exceptions
don't provide the stack trace on Linux.
This is currently the most sad thing about diagnostics and monitoring of .NET Core on Linux.
Wouldn't you be able to do it through an instrumentation-based
approach, where you would instrument, let's say, the constructors of objects and then just, you know,
capture the current stack trace? Would that be a workaround?
Yes, yes, yes. But the
main difference between ETW and this counterpart event infrastructure on Linux
is that it is very lightweight, so it introduces almost no
overhead. When we start thinking about instrumentation, obviously,
we are getting more and more overhead, so we should balance here what we want to
monitor and at what cost.
Yeah, right. Yeah, I mean, that comes back to, you know, a topic that we have
been uh you know addressing over the last couple of years always the balance with what level of
visibility do you want to give for what particular for a particular cost and that's the
same whether it's for uh end-to-end tracing as what we've been doing for years um or for deep
dive memory instrumentation. I agree with you. Yeah.
Yes, very cool. Let me ask you: there's
a lot of change obviously going on in the .NET ecosystem, in the .NET community.
I'm sure you are very well involved and very well connected with the GC team.
Yes, the GC team of .NET was reviewing my book, so I got quite in touch with them.
That was one of the most pleasant things for me, because those things are
changing so fast that I was afraid that my book would be outdated before finishing. So getting in touch
with the team which is developing the GC, for example, in
.NET was a very, very convenient thing for me. And
thanks to that, I know a little bit about what will be changing and
what was changing, and I had the possibility to include it in the
book, for example. But still, even now, after the book's publication, it's getting a little bit outdated day by day.
Because, as you said, things in .NET are changing so fast, especially in the field of performance,
that it's really hard to catch up, especially when publishing a book.
Yeah.
By the way, I would assume your book is also available as an electronic version and not just hard copy?
Yes, yes, yes, it is.
And all these things that are... Sorry, sorry?
As I said, that gives you an easier way to do continuous delivery,
meaning you can continuously update your content and hopefully your consumers
your readers that buy it online or in an electronic version can get the latest updates from you. And I
wonder how much the electrons weigh for the ebook.
Continuous publishing, let's say. Yeah.
But you are probably aware of all those things, especially the more and
more emphasis put on struct usage, on stack allocations, on using Span of T, and
all those things that are going to happen and are happening currently in the .NET ecosystem. Those are great things, because they really open
new possibilities. And I have never
seen so big an emphasis on performance from .NET,
in fact. Obviously it was good, obviously
it was working well, it was fast enough, but what is happening
now, it's really nice to see all those changes.
Yeah.
Let me ask you a question.
One of the things I've experienced when I interface with developers,
especially.NET developers, is a partial fear of memory
and an attitude of, well, it's all handled. There's nothing I could do about it. So
I don't really think about it. But as we see, obviously, there are anti-patterns or memory
problems that come up. I think sort of an anti-pattern in approaching memory, especially
in Java, where you have a lot more control, is instead of learning about memory and using it well,
people just try to tweak the memory settings
to make it run as good as it can.
So it's fixing the way memory...
They're trying to tackle it by changing the way memory is managed,
how GC is run and everything,
instead of really getting at the core of their memory usage. As we see in .NET, you really don't have too many controls, but yet these anti-patterns
come up a lot, right? And the only way a developer can figure out how to avoid the anti-patterns is
to really start really embracing the concepts of learning how memory is used and really diving
into it more, which, as I mentioned,
when I started, this is kind of where there's a bit of fear. So obviously one of the resources
that are available will be your phenomenal book. But in terms of advice
to developers to say, stop fearing memory and embrace it: do you have any advice, like,
even just like a very getting started,
how do you ease them into
really starting to pay attention in memory
and really looking at that as a discipline?
I would return to the measurements
because what can they do
is just to measure on production, for example,
or even on the development station,
what the usage is, to just have some kind of clue
how it looks.
And then, especially,
for example, the distribution of generations:
what is allocated where, and what is dying where.
And then probably they will start,
they should think about those allocations, to start
at least to be aware of what and where they are allocating, whether they are
aware of those allocations at all, or maybe completely not, and they are just writing code without
any thinking about it. So allocations are the second one: to think where they are allocating
things. For example, whether they can use some kind of pooling to not allocate things, whether they can use struct and why structs are good also would be very nice
for them to understand.
So it would be those things, I believe,
from the code perspective, to be aware of the allocations.
So in general, it sounds like the most basic piece
is just observe and monitor your memory just so you're aware of it.
And the more you're aware of it, the more you'll probably start looking into some of these other things.
Exactly, exactly. To the specific problems, for example.
Yeah, I mean, because I can't tell you how many times I've heard .NET developers, when I've pointed out some memory issues, and they're like,
well, .NET handles the memory, there's nothing I can do.
Yes, exactly.
And exactly these were the questions about, for example,
my work on this book: why I wrote a book about something
which is automatic and should just work. Obviously,
as we have all been constantly repeating today, there are
some cases when we should simply
know more about what is going on and how things work underneath.
So yes, exactly. And by looking, by measuring, I keep saying this because it's
the best thing. Of course, we can learn a lot of rules about memory, we can learn a lot about the internals
of the GC.
But in the end, the most important thing for the particular developer will be something
which directly relates to his or her product.
So it's best to just look at your own product and see whether you have problems
there or not.
Hey, I was just browsing on your prodotnetmemory.com website where, you know, people
can buy your book, but there's also a free poster, the .NET memory management poster, which I
thought is pretty cool. For people, it's a great visualization of all the different memory spaces.
I think you do a great job here
in actually explaining,
even though it's very crowded
because there's a lot of stuff going on,
but it's a great way to get an overview
of everything that is related to memory.
So everybody that wants to,
I mean, we'll put the link probably
on the podcast page as well.
But for people that want to find it right now,
prodotnetmemory.com. Pretty cool.
It would be great.
It's designed to be printed
and hung on the wall or somewhere,
just as a reference to show
what the regions of memory in a .NET process are,
what those generations are, and the concept of roots: what the roots of the objects are,
so what is keeping objects alive. Because, for example, we didn't even touch on the
problem of memory leaks here, which also can happen in .NET.
So a lot of people are just surprised that a memory leak may happen in an environment
which should reclaim memory.
So all this is visualized somehow on this poster.
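A classic shape of the "leak in a garbage-collected runtime" mentioned here: a root (in this sketch, a static event) silently keeps otherwise-dead objects reachable:

```csharp
using System;

static class Ticker
{
    // A static event is a GC root that lives as long as the process does.
    public static event EventHandler Tick;

    public static void Raise() => Tick?.Invoke(null, EventArgs.Empty);
}

class View
{
    public View() => Ticker.Tick += OnTick;   // subscribes, nothing ever unsubscribes

    void OnTick(object sender, EventArgs e) { /* update something */ }
}

// Every View ever constructed stays reachable through Ticker.Tick's invocation
// list, so the GC can never reclaim any of them: a leak with no forgotten
// free() anywhere in sight. The fix is an explicit -= (or a weak-event pattern).
```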
Yeah, that's a great little poster.
And I think we shouldn't go into every single pattern and all the details
because we want still people to buy a book.
Yeah. Oh, yes, only a thousand
pages. I'm hoping there's a lot
of patterns in there, Andy, for days.
Yeah.
Very cool. I do have one last
question, because I'm just browsing
through the chapters of your book.
Chapter 15, Programmatical APIs.
And is there one example or one use case
where it makes sense to trigger the GC through the API?
Give me one example.
In general, the obvious rule is: do not call it, of course.
But there are very rare cases. For example, let's say
the GC has heuristics and measurements about memory usage, but sometimes, in your program,
you know better. For example, you have just allocated a lot of data to be processed,
let's say you processed something to generate a new level in a game
or something like that, and that data is not needed anymore.
So there is no sense in waiting for the GC to notice that the data is no longer needed;
we can trigger it ourselves and get rid of that unneeded memory as fast as possible.
So that would be one of the scenarios: when you explicitly know
that you have allocated a lot of data and you don't need it anymore.
That is one of the most common scenarios we can think of.
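The scenario Konrad describes, dropping a large temporary dataset and collecting eagerly instead of waiting for the GC's heuristics, might look like this sketch (the level-generation shape and names are our illustration, not code from the episode):

```csharp
using System;

class LevelGenerator
{
    // Build a small result from a large temporary dataset, drop the
    // references, then collect eagerly: the rare "you know better
    // than the GC" case.
    public static int[] GenerateLevel()
    {
        byte[][]? rawChunks = new byte[64][];
        for (int i = 0; i < rawChunks.Length; i++)
            rawChunks[i] = new byte[1024 * 1024]; // ~64 MB of temporary data

        int[] level = new int[1024];              // the small result we keep
        for (int i = 0; i < level.Length; i++)
            level[i] = rawChunks[i % 64][i] + i;  // bytes are zero-initialized

        rawChunks = null; // nothing references the temporary chunks anymore

        // Reclaim the ~64 MB immediately instead of waiting for the
        // runtime's heuristics to schedule a collection.
        GC.Collect(2, GCCollectionMode.Forced, blocking: true, compacting: true);
        return level;
    }

    static void Main() => Console.WriteLine(GenerateLevel().Length); // prints 1024
}
```

In the overwhelming majority of code the runtime's heuristics do better than a manual `GC.Collect()`; this is exactly the rare exception Konrad is careful to qualify.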
Yeah, very cool. Hey, Konrad, is there anything else to kind of
close out this episode? Is there anything else that people definitely need to know,
other than, obviously, looking at your book?
I really encourage people to learn about .NET memory. I'm not
directly referring to my book, but I have a lot of contact with .NET developers,
for example when giving trainings about architecture,
not about .NET memory, and I see how many people are surprised that it is even a thing,
that they should know about .NET memory management. And I believe they should, because
we are really not aware of the overhead, for example, that we can introduce
through the GC itself. So I really encourage people to find information about it. There are a lot of
blog posts and a lot of videos about .NET memory management and the .NET GC. So just take any of them from the internet and start reading,
not necessarily from my book, because it's a lot of
work to read it. Any blog post about it, I believe, will be
just enough to start this journey. Because obviously, in
much of our day-to-day development, maybe it's not very necessary. When
we are teaching someone to develop in C#, we just
don't want to overload them with information about such things.
But when we have been earning money from
C# development for years, I believe
it's a kind of professionalism to know such things: how things are working, how they
influence the efficiency and the cost of the software we
are writing. So I just encourage people to search for
such things. And one more thing, because a lot of people are
saying that memory is cheap.
It's a phrase I hear quite often, but from my perspective it's not very true,
because there is always someone paying for that memory. Even if you are using serverless, for example, we are paying
through metrics like gigabyte-seconds or something like that.
So the more memory we are using, the more money we are spending on
it, directly. This phrase that memory is cheap is really a big oversimplification,
I believe. It's the same with performance in general, but
memory is one of the most influential parts here. So what I am trying to say, simply, is
that I encourage all listeners to find information about it.
Yeah.
And I think one of the times you usually hear a groan during a problem analysis is when people realize there is a memory issue.
Because then it's, well, who knows enough to really dive in and understand why that's happening, right?
Especially, again, when we go back to .NET: .NET takes care of these things for you.
So we're obviously violating some sort of rule, or some sort of way that .NET is going
to handle the memory, to actually create a problem.
And finding someone who knows enough about memory utilization, how it all works, is always
the hard part.
That's what developers don't necessarily like to spend time on.
So I think it's great that you encourage people to learn about it because it'll make someone really valuable.
Yes, and there are two aspects of it. There's the aspect of diagnostics that, as you said,
developers are not necessarily very keen on, but there is also the aspect of writing memory-aware code.
And that is quite cool, because
there are many very interesting things there.
Writing memory-aware code
is simply fun for me, because
it is something that requires thought,
something you have to think about.
It's an interesting aspect,
for example, for regular .NET developers.
Take structs, for example. Why are structs used?
A lot of people don't use structs, structures,
because they don't see any advantages in them. And in fact, there are many.
So one of the things that people listening to us can start from,
where they can start, is to read about
why structs in C# can be useful.
And it will just begin a journey for them, I believe.
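One advantage worth reading up on, sketched here as our own example (it assumes .NET Core 3.0 or later for `GC.GetAllocatedBytesForCurrentThread`): an array of structs is a single contiguous allocation with no per-element object header and nothing for the GC to trace per element, while an array of equivalent class instances costs one heap object each:

```csharp
using System;

// Value type: a million of these in an array is ONE allocation,
// laid out contiguously, with no per-element GC tracking.
struct PointStruct { public double X, Y; }

// Reference type: the same data costs a million separate heap objects,
// each with an object header, each a candidate for GC marking.
class PointClass { public double X, Y; }

class Program
{
    static void Main()
    {
        const int N = 1_000_000;

        long before = GC.GetAllocatedBytesForCurrentThread();
        var structs = new PointStruct[N]; // 1 allocation (~16 MB)
        structs[0].X = 1;
        long structCost = GC.GetAllocatedBytesForCurrentThread() - before;

        before = GC.GetAllocatedBytesForCurrentThread();
        var classes = new PointClass[N];  // 1 array + N objects (~32 MB total)
        for (int i = 0; i < N; i++)
            classes[i] = new PointClass { X = i, Y = i };
        long classCost = GC.GetAllocatedBytesForCurrentThread() - before;

        Console.WriteLine(structCost < classCost); // prints True
    }
}
```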
Awesome.
So, hey, Andy,
do you think it's time to summon the Summaryator?
I think it's time.
Absolutely, yeah.
All right.
Do it now.
All right.
So, I mean, I have to say,
I think the last couple of minutes
were already a great reminder of all the stuff
Konrad told us and what we need to do.
I just want to highlight the things that I've taken away.
Rule number one: measure,
because obviously what you don't know, you cannot optimize.
Rule number two: understand and minimize your allocations,
because we can reduce the overhead
that memory, and garbage collection especially, has
by simply allocating
less. And I really like some of the patterns that you just talked about, especially the first one,
which has a very, let's say, interesting name, the midlife crisis pattern, where we want to
avoid objects that get promoted to Gen 2 for only a very short time. Basically, objects just getting promoted,
just getting old enough, to then immediately get garbage
collected. I really hope that a lot of people will start thinking about memory management in .NET
from now on and really start writing memory-aware software. I like that term a lot as well.
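To make the midlife crisis pattern concrete, here is a sketch of ours (the request-handling scenario and names are hypothetical, not from the episode): a per-request buffer that lives just long enough, for instance across an awaited I/O call, can get promoted to Gen 2 and then die there, where only a costly full collection reclaims it. Renting from a pool reuses the same buffers instead:

```csharp
using System;
using System.Buffers;

class RequestHandler
{
    // Midlife crisis: a fresh buffer per request survives a couple of
    // collections (e.g. while the request awaits I/O), gets promoted to
    // Gen 2, and dies right after, piling up work for full GCs.
    static byte[] HandleNaively() => new byte[4 * 1024];

    // Pooling avoids the promotion churn: buffers are recycled, so these
    // medium-lived allocations never accumulate in Gen 2.
    static int HandlePooled()
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4 * 1024);
        try
        {
            // ... fill and process the request using buffer ...
            return buffer.Length; // Rent may hand back a larger array
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }

    static void Main()
    {
        _ = HandleNaively();
        Console.WriteLine(HandlePooled() >= 4 * 1024); // prints True
    }
}
```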
Maybe the analogy is that the best developers are like the best, let's say, race
drivers, right? The best race driver is the one who not only knows how to drive the car, but
actually understands how his car works on that particular track, how the
engine works, when to shift gears. And I think this is the same with engineers: we need to write,
in this case, our code so that it can run optimally on the underlying
engine, which in this case is .NET, with a very predefined memory management approach.
And the more people that know about it, the more people that are aware of what impact
they can have when they write memory-aware software, the better it is. And a great start is the book from Konrad,
which you can find on prodotnetmemory.com.
Excellent, Andy.
Thank you very much.
Yeah, I got nothing else to add to that.
I think you summed it up perfectly.
And Konrad, one other thing I noticed on your book's website, prodotnetmemory.com, is that you're also available for online consulting and on-site trainings for .NET.
So if people are looking to maybe bring up some of their employees and get them to dive into this .NET stuff, the memory stuff that is, you're available for that, it looks like. Correct?
Absolutely. I'm more than interested in doing these kinds of trainings: on-site trainings, on-site consultancy, or online
consultancy. There are a lot of interesting things to learn and to teach in that field. So, absolutely.
All right. And if people want to follow you and see what's going on with you, are you on any social media they can follow?
I'm a fan of
Twitter, so there is the handle,
simply @konradkokosa.
We'll probably link
something in the blog
post. Yep, perfect.
I just wanted to thank everybody for listening today.
I think it was a great topic. I always love when we go back to these
kinds of, what I might call,
the basics, but they're always evolving and always really, really important to never forget about.
So thanks, everyone, for listening.
If you have any questions or comments, tweet us at pure underscore DT,
or you can send us an email at pureperformance at dynatrace.com.
If you have any stories or would like to be on the show,
also get in touch with us and see if we can make that work.
But for me, I'm going to say thank you and goodbye, everybody.
Goodbye.
Goodbye. Thank you.