C++ Club - 162. Modules, Xmake, CMake, constexpr, operator overloading
Episode Date: July 3, 2023
With Gianluca Delfino, Frances Buontempo, Vladimír Arnošt, Andrew Fodiman, Paul Etheridge and other colleagues.
Notes: https://cppclub.uk/meetings/2023/162/
Video: https://youtu.be/ok-g9NAyrcg...
Transcript
Welcome everyone, welcome to the club. This is meeting 162 and today is the 25th of May 2023.
First of all, let's help out Timur. He posted on Mastodon.
Hello C++ community. In preparation for my upcoming talk at C++ on Sea, on C++ and safety,
I'm doing a little experiment, a survey on the perceived impact of undefined behavior in C++.
If you code in C++, please help me out and participate here.
And there is a link to Google Docs.
It's anonymous and consists of only three questions.
Takes only a couple of minutes. Results will be revealed and discussed in June at CppOnSea.
The questions are: first one is, for your C++ codebase, how much effort and resources do you spend on mitigating undefined behavior during development, such as by using static analyzers and sanitizers, enforcing strict coding guidelines, or other measures? No effort, some effort, significant effort, a huge amount of effort. The second question is, how much negative impact do you
experience from undefined behavior that you fail to mitigate during development,
such as by crashes
in production, security vulnerabilities, etc. No negative impact, some negative impact,
significant negative impact, a huge amount of negative impact. And the third question is,
what sector of industry does your C++ code base target? This is a multiple choice question and it's optional. If you don't want to
say it, just skip it. So yeah, it'll be interesting to gauge the community's view on undefined behavior when the results are revealed at the conference.
Right.
Next, let's do a bit of a warm-up
by getting angry. Stop saying C C++! It's an article by Bryce Vandergrift.
It starts with this quote.
For as long as I can remember, I have heard people say C slash C++ when referring to a project written in C and or C++.
A lot of programming developer jobs also refer to C slash C++ when they need a programmer who knows either C or C++. To most people who have never touched C or C++,
this might not seem like a big deal. However, the problem is that when people say this term
C slash C++, they make it seem like C and C++ are similar or closely related programming
languages. That is not true. These two languages have slowly drifted apart over the years to the
point where they share less and less in common. Then he goes on to show some of the incompatibilities, illustrating that some modern C code does not compile with a C++ compiler, and goes on to say that C and C++ programmers are very different: many C programmers won't touch C++, mentioning Linus Torvalds. And he says, only if you're using C together with C++ would it be acceptable to say C slash C++. One quote from this article was,
many beginner programmers are led by the term C slash C++ to think that they're basically the same language.
In fact, there are many tutorials out there
that are advertised as C slash C++ tutorials,
continuing the confusion.
This can also scare away C beginners
by making them think that understanding the complexities of C++
are required to understand C.
Spoiler, they're not.
I read this article and I had this thought.
Hang on.
Cpp2 can be combined with C++ in the same source file.
Therefore, I propose to rename it C++ slash Cpp2.
Or even C++ slash 2.
Need to let Herb know. I think there's a real gem I have here.
IBM might object, though.
And continuing this, there was a Reddit post: a job description asking for X years of experience with C slash C++, but with a different slash.
A different slash? Interesting.
Backslash, yeah. That's probably a typo.
So the quote from this thread.
In my experience, the C slash C++ tag translates to we have a 30-year-old C codebase, which we made an intern rename all the files to CPP
and fix bugs until it compiled again.
Our codebase is still a nightmare-scape of object-oriented patterns implemented using arrays of structs and function pointer tables.
Sometimes our more knowledgeable engineers will use a std::vector.
Please never template anything.
And yet another related link is this: Orthodox C++. I think we visited this before, but basically...
I think the author is a game developer, which wouldn't surprise me, and it's basically: don't use any modern C++ features and you'll be fine for the next 30 years or so.
I think John Carmack famously
is one of those advocating for
C with classes
no references, only pointers.
Well, he's a game developer.
Yeah.
The rule holds.
Next up is a new release of XMake,
my current favorite build system.
And one of the interesting things about it is they specifically explain
how to use C++ modules with it.
So it looks like...
What compiler do they suggest, and what compiler do you use?
Yes.
They say that at the end of this section, I think
the latest Visual Studio
preview supports it.
I think non-preview is also fine.
And Clang.
The latest Clang, they say, doesn't...
Ah, right.
Quote.
It seems that the latest Clang does not yet fully support C++23 standard modules.
But XMake does support it.
And they just say GCC, it's not currently supported.
But maybe they are saying
this about the standard modules import. I've read that GCC has made some progress in module
support. So maybe if you don't use the standard module import, you'll be more or less fine.
Possibly. It wasn't long ago, I remember that Visual Studio was lagging on features and
you had to use Clang for the latest and greatest. And nowadays Visual Studio is leading the
way on things and everybody else is lagging.
Yeah, indeed. Exciting times. So XMake now supports distribution of C++ modules as packages, so that they can easily
be integrated in other projects.
And judging by the syntax of the actual build description that's required to build and use
modules, it's pretty simple as far as the actual entries in the file are concerned.
Which is not necessarily true for other build systems.
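From memory of the xmake docs, the module build description is roughly this; treat the exact file extension and the target name as assumptions, not xmake's documented syntax:

```lua
-- xmake.lua: a target using C++20 modules (rough sketch from memory of the docs)
set_languages("c++20")

target("hello")
    set_kind("binary")
    add_files("src/*.cpp")
    add_files("src/*.mpp")  -- module interface files; xmake wires up the module deps
```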
So this is an example of
a make file.
So it's like hardcore
build setup.
This is probably from one of those codebases, C slash cpp.
C slash C++, yeah.
The old school stuff.
But it's useful in the sense that that's the actual meat of the module support. So those are the commands that you need. And that's a good reference point for any other build system that is supposed to generate makefiles, like CMake, for example. And this is a repo or a gist. Oh, it's a gist. Okay. So it's just a basic main.cpp, a makefile,
and then something makefile double step. What does that mean? It's like makefile calling another
makefile maybe? And then the module definitions. They use the extension CPPM
and the module implementations.
They even provide the actual output of the make.
I thought it was very useful
as a bare-bones module example
without even using any fancy build systems.
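For reference, a bare-bones named module is just two files. A sketch of the shape (it needs a module-aware build, like the makefile commands discussed above, so this shows the layout rather than a drop-in program):

```cpp
// math.cppm -- module interface (.cppm is the extension the gist uses)
export module math;

export int add(int a, int b) { return a + b; }

// main.cpp -- the consumer
import math;

int main() { return add(2, 3) == 5 ? 0 : 1; }
```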
And next we have another post by
Teo Dutra about their... I think Anilin is their game engine, and they decided to try using modules with it.
With some help from Daniela Engert,
they've got it working.
So this...
There are some interesting tidbits,
because it's an existing codebase,
and it's being migrated to use modules. It says configure projects to support C++20 latest and so on. They explain how to write the actual
module files. There are some interesting error messages that they
mention, like internal compiler error from MSVC, which is the worst. You can't
really effectively mitigate it. You can work around it by changing stuff
randomly until it works. That's been my experience with internal
compiler errors. And another interesting tidbit from the article is that modules don't like cyclic
dependencies very much. So in my experience there are many codebases that do have those. And if you, say, are on Linux,
then you might not even notice it
because normally on linking your libraries,
you don't require everything to be linked.
So there are unresolved dependencies
until the executable loads those libraries.
With Visual Studio, you will get a link error
even for your DLLs if there are
cyclic dependencies. And it looks like
getting rid of them is a precondition for using modules,
which is good to know.
Probably a good thing to do in general as well.
That's true, yeah.
Because I think one of the symptoms
of circular dependencies
is that when you change one file
and suddenly a bunch of other files
are being rebuilt
that you didn't expect to,
you're like, hmm,
there are some hidden dependencies.
They also deal with macros.
You can sort of have macros, but in a separate header file.
So yeah, it's a really useful article.
And at the end, in the results section, they provide build timings. So it looks like modules are much faster. Well, in this case I think it's around twice as fast, maybe, or one and a half times. Link times have increased insignificantly, I'd say. And in another case, with the example app,
the build times have decreased slightly, I would say. The quote is, these results are a bit odd.
Probably more build time impact was anticipated. Maybe that's due to some compiler support being immature, or maybe it's just how it is with this particular codebase. As more and more people use modules, we'll get more stats. There are lots of references to other modules articles at the end of the article, so that's useful.
Even if it's not a mind-blowing improvement,
I think it's a welcome improvement nonetheless.
Yeah, yeah, indeed.
It's not just about build times,
but it's about code organization
and isolation of modules into their own parts of the code base.
I guess even just finding all the cyclic dependencies
that were hiding in there has got to be a
useful thing to tidy up.
So, yeah.
Right. And then
the next one is an article
on the Kitware blog.
And this is a company
that manages, maintains
CMake, as far as I know.
And they say, work is underway to implement support for C++20 modules in CMake.
It's a work in progress.
There is a quick introduction.
They say that they are reusing CMake support for Fortran modules to implement C++ module support.
This is an interesting quote.
In order for Fortran modules to work, CMake added a simple Fortran parser.
Given the complexity of C++, adding a C++ parser to CMake is not something anyone wants.
So CMake will need help from compiler vendors and come up with a standard way
for the compilers to give this information to CMake during the build.
I wonder how XMake does it.
It doesn't seem to need a C++ parser.
They say there is an SG15 request for help with C++ parsing from Kitware.
So it's a concerted effort to get CMake to support modules.
To enable CMake experimental support, there are settings like
set CMAKE_EXPERIMENTAL_CXX_MODULE_CMAKE_API, followed by a GUID, which is different for each CMake version. I mean, if you want something straightforward, that is not it.
People love to dunk on the CMake language syntax and everything and there's this
whole modern CMake thing which encourages you to
describe your build in CMake in declarative terms as opposed to like imperative commands and such.
But still, I mean, CMake has a lot of baggage and this syntax is just so arbitrary.
I prefer to generate CMake scripts by using other meta build tools like XMake.
Why not?
So does XMake generate CMake?
Or it can generate anything?
Is it always generating CMake?
It generates lots of other formats
like Visual Studio project files and solution files.
It generates CMake files and
Xcode and some other IDEs, I'm not sure. I'd be curious to see what XMake generates when you try to build with modules. Yeah, that's an interesting thing, I haven't tried it. For small toy programs XMake is what I use.
It's just 10 seconds to set up a new C++ project.
I don't fancy writing CMake scripts by hand.
I was showing this C++ project setup to someone,
and I did, after much googling, write a bare-bones CMake project by hand,
and showed them, and then said, well, never do that again. Use this instead.
Luckily, there are good templates around. Jason Turner has a good template for CMake.
Yes, but...
It's not as simple as one would hope, but it does the job. I will try XMake though, you know. You're selling it quite well, I must say.
Yeah. So as far as I know, it was started by one programmer in China, and now it has quite a few contributors. The documentation is pretty good, and the URL is xmake.io. And it's installable on Windows by using winget, you know, this new Windows package manager in Windows 11. You can install all kinds of things with it. So you go to a PowerShell console, run the command, and then you have it. It's very convenient.
How does it manage its dependencies? Does it have like find statements like CMake?
So it supports its own package repository, Xrepo,
which has a good set of most widely used packages.
And it also very easily supports both Conan and vcpkg.
So it's a one-liner to say,
I'm using vcpkg,
and then I need this package,
and it goes and fetches it.
It's very convenient.
Sounds very good.
Right.
So the next one is another modules article, by Victor Zverovich this time, of {fmt} text formatting fame.
And he describes how he tried to use modules with FMT.
He says, unfortunately, it doesn't give a measurable build speed up
compared to using the lightweight core API.
He even goes
and looks at what takes the most of the build and tries to optimize it. On the
reddit thread related to this there are some comments like this one. Modules are
really nice, like absurdly nice and improve C++ immensely. We as C++
programmers put up with a lot of annoying crap that can be traced all the way back to hash include.
And just getting rid of all that crap is just so much nicer.
Another reddit post about modules.
C++20 modules.
Best practices for abstraction and encapsulation.
They say, before modules, this was pretty straightforward.
Forward declarations, and whatever is deemed the public-facing part of a library's API, went in headers,
and implementations of those APIs, functions, constructors, etc.,
went in CPP files.
Except templated code, of course.
You can also do that if you instantiate manually.
Now with modules, it appears that the keyword for public user-facing access is export,
and anything that is not explicitly declared export cannot be accessed by the consumers
of a given module.
And they go on to outline the differences. differences and the question was how to separate apis implementate and implementations
to reduce code duplication and yeah how to organize a code base using c++ modules
and this first answer is pretty good quote depending on the complexity of what you want to do, on my repos I have both approaches.
If the module is simple enough that everything can be described in a single file, then I
have everything together.
Also note that for modules, writing code inside the class braces isn't implicitly inline like it is in header files. If, however, the module is relatively
complex, I'd rather use an interface file and then scatter the content across several partial
module implementation files. And someone replies, this is the way. Yeah, so again, as people start
using modules more and more, there'll be more guidelines and basically case studies on how best to organize code using C++ modules.
I think that's good progress.
Do you remember what happens with macros in modules?
If you define a macro in a module, it stays in the module unless you export it.
Can you export a macro?
Can that affect other modules that are imported after this one?
I think it's not exported. Like you say, if you declare it outside the export part, then nothing else sees it.
I think if you... I'm not sure.
No, I think you have to use headers, maybe.
Or maybe it's possible to declare macros before starting the module declaration. You know, like, the first line is module, semicolon, right?
But then the export declaration,
the actual module definition
doesn't have to start immediately after.
There's a sort of a gap.
I'm not sure, but I suspect
if you declare macros there,
then they might be visible to others.
But I'm not sure.
I'll try it out.
It's not like I want to write macros, but I am interested to see how, because the original purpose was to kind of prevent macros from getting violently disseminated in all your codebase whenever you include something somewhere.
But I think that was somewhat walked back.
I'll give it a go. I've never actually written a module.
I tried to use the standard ones, but it was a couple of years ago
and there was not much availability then.
Yeah, it's worth trying.
If you're converting an existing codebase, then you're likely to have macros and you
need to decide what to do with them.
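The gap being described before the module declaration is the global module fragment. A sketch of the layout, reflecting my understanding: macros defined there, or anywhere inside a named module, are not seen by importers of the module, and only header units can carry macros across an import:

```cpp
// demo.cppm
module;                 // start of the global module fragment
#include <cstdio>       // #includes and macros go here...
#define INTERNAL_FLAG 1 // ...and are not exported to importers of this module

export module demo;     // the named module proper starts here

export void hello() { std::puts("hello"); }
```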
Next up is a post on the developer community for MSVC.
It's an interesting optimizer bug that someone discovered.
This is a very small code example which has an external function declaration, extern void g, taking an int x, and then you have an int f function that takes two integers a and b. And in the body of the function, there is a call to g, and the parentheses contain a ternary expression, b ? 42 : 43. The next line in the function is return a divided by b.
And the essence of this bug is that the compiler will assume that b must be non-zero, because later there's a division. And this apparently is incorrect, because g may terminate the program. So I don't know how this assumption makes the compiler generate incorrect code, but it looks like the assumption is incorrect. I'm not sure about the effect of it, though.
So if I understand correctly, the compiler thinks it would be undefined behavior if it divided by zero, so b must not be zero, and therefore the ternary operation always goes the 42 way.
Yes, yes, you're right. That's how it is.
Yeah.
I mean, it's a bit hard to argue with that,
but it is true that technically G could terminate
and I'm not sure.
So you're basically writing a bug later on
that would lead to undefined behavior,
but you still expect your terminate
to actually happen before,
assuming that's what you wanted anyway.
I don't know if it's fair to call this a bug.
I don't know.
Well, the other compilers don't exhibit this behavior.
The reporter says other compilers are not affected by this problem.
I wonder if there's anything else that could be, instead of terminate, a valid complaint. Because terminate, okay, fine, terminate I understand. Is there something else that could have happened that would legitimately and unequivocally say that MSVC is behaving badly? I can't come up with anything else.
No, I think you're right. Terminate is the only thing that can affect this behavior, because if g doesn't terminate, then b must be non-zero to avoid UB.
If G throws an exception or something, is that a...
Well...
Yeah.
It could be.
It could be, yeah, you're right.
I think so.
Still, it feels kind of bad for MSVC, you know.
I think they'll be fine. I think they have the resources to fix this.
Right, next up is a Reddit post: what's the most hilarious use of operator overloading you've seen?
Hey guys, C# developer here. I accidentally started a nerd bender on C++ because I was pissed that I couldn't overload the function call operator in my language of choice.
And now I'm marveling at some of the wild things you guys do.
Bit shift writes to a stream? Awesome.
What's the funniest, weirdest, most clever, or just plain stupidest use of operator
overloading you've seen?
It makes me think, are we on a permanent nerd bender?
Yes.
Being C++ developers, you know?
Definitely.
That set me off thinking. I remember, ages ago, just before Phil Nash did his first version of Catch, I think Kevlin Henney had been playing with some similar ideas, and there was some perversion going on with overloading comma operators. And Phil Nash ended up going to town, giving loads and loads of talks about every single overloaded operator he could think of, and came out with some really quite obscure stuff. But some of that's made its way into the Catch unit testing framework. It was actually quite useful.
But yeah, there's some obscure historic things there that I'd have to dust off to find.
Loads of fun.
Yeah, I remember in one interview I answered about comma operator overloading to initialize an array.
And apparently that was correct at the time.
This reply: I find it very amusing that std::filesystem::path has an operator slash for concatenation.
And the same operator is overloaded for dates as well.
You can construct dates using slashes.
Someone replied, I both hate and love it.
They compare it with Python's path class,
which also supports slash for concatenation.
Does it have an operator backslash for Windows paths?
Yeah, luckily backslash is not an operator.
Double pipe to test if lines are parallel.
What?
Is that really a thing in some graphics API or something?
I thought this was a troll.
Probably.
And then someone says,
that's awful, operator pipe-pipe
has short-circuit behavior.
And someone replies,
not if you overload it.
That is true.
Yeah, which is a good point to remember
despite all this joking.
Someone should overload the bitwise OR operator or a pair of them,
which would mean an absolute value. It would be really, really messy.
Someone says, I've similarly seen pipe and caret for dot and cross products, respectively. And apparently something like that is used by Unreal's game engine.
And this is the comma thing. It's not exactly hilarious, someone says, but I've seen people overload the comma operator to append to vectors. Apparently the Eigen library does that.
And it's amazing, someone says. The feedback is
something. That's horrible.
And next... I actually love that.
We are also censoring some of the responses.
Obviously.
Boost higher-order functions library allows arbitrary named infix operators by overloading less-than and greater-than, to give this syntax: var1 <operation> var2. So it's like an arbitrary infix function. That's pretty clever, and also horrible, I think.
Imagine if someone was crazy enough to overload bit shift operators to do something completely different, such as piping data.
For instance.
For instance.
And someone says, I seem to recall a UI library that amongst its weirdness
overloaded plus for adding elements to a window.
The tutorial proudly boasted window plus button to add a button to a window.
There are lots of others, but this one caught my attention. So I've done some COM programming in my previous lives, and apparently CComPtr, a built-in MSVC type for Windows COM programming,
overloaded operator ampersand as a COM out pointer operator,
which meant that ampersand P had a side effect,
clearing the smart pointer,
as well as returning an unexpected type pointer to the internal storage
instead of the smart pointer itself.
That is horrible.
Someone
gave an example of range adapters
which is using pipes
and yeah, that's like
idiomatic C++ now,
isn't it?
Took a while to get used to that.
Yeah. Are you using ranges
in your day-to-day job?
I'm still resisting that.
I'm trying to avoid, but eventually.
Yeah, it's probably coming sooner or later.
Someone asks on Reddit,
why are template errors so horrendously verbose?
It seems that almost all template errors
are notoriously hard to read and
lengthy beyond comprehension. Why is this? And the first reply is this. As STL, which is Stephan T. Lavavej of Microsoft, would say, why are there so many gauges and lights in the cockpit? They are not hard to read, they are tedious to read.
The compiler is dumping the whole instantiation chain so you can find the information you need.
How is it supposed to know which information you need? I'd rather have too much information
than too little. Yeah, they can be very long though. It's a problem for beginners especially, because after a while you kind of learn to filter
the noise, but they don't make it easy for people to learn.
Yeah.
This is an article by Meg Parikh, Force inline in C++.
The author implemented a universal force inline macro.
They used various compiler-specific attributes.
So they support Clang, GCC, and MSVC.
I suppose that could be useful.
But it's missing a point a bit.
The Redditors have discussed this article and explained why it's a misleading thing to do.
Quote, ultimately, a sane compiler will optimize the code much better on average
than you ever will hope to achieve by hand optimization.
Inlining is not about the function call overhead.
It's about allowing more optimizations to happen at the call site.
And inline, anyway, lost its original purpose, I think. The keyword inline nowadays has the sole purpose of preventing ODR violations and doesn't necessarily result in inlining, which is not confusing at all. Well, what we actually need is also the opposite.
A standard way how to say do not inline this,
even though it's actually declared inline
in some C++ class declaration.
Because sometimes too much inlining actually kills performance.
It's too complex.
You need to benchmark your code.
That's a good point.
You're the only one who actually wants to opt out,
but it may be a good thing to ask for.
Yeah, essentially inlining is fine,
but it sometimes creates too much code,
and it's actually better sometimes to call a function multiple times
than to have it rolled out or inlined multiple times
because it can reuse some CPU level one, level zero caches and whatever
the hardware actually provides. So, yeah, you have to benchmark and see
which one, which option works best for you.
Next up is a new tool, a new for me anyway, Omnitrace.
Application profiling, tracing and analysis.
It's an open source tool by AMD.
This is its GitHub repository. It supports various metrics, including GPU data analysis, parallelism, CPU metrics. There's good documentation. I think it supports multiple platforms. Yeah, it's a really good tool.
and there are examples of visualizations
that it produces
which are really detailed
and I mean they look better than
what I had with Intel VTune, for example.
Maybe I missed something, but I didn't see anything like this there.
It's based on a system called Perfetto.
And that one is another open source thing for system profiling, app tracing, and trace analysis.
It's an open source project. I think it might be by Google.
Yeah, I think Perfetto is basically an update of what used to be part of the Chrome perf tracer. So they extracted it and enabled it for other C++ projects.
Yeah, I was using it for profiling one time, and I came across this, and that's how I kind of saw this.
It's basically an update to,
I mean, you could still use the same old one
built into Chrome,
but this is a separate project.
I think it runs in line in your browser.
Right, right.
Interesting.
So yeah, it seems like a good and useful product.
Note that Omnitrace being an AMD project only supports OpenCL and not CUDA.
Take that, NVIDIA.
Next up, I wanted to show you this Regex performance shootout.
This is going to hurt. It's going to be painful.
In a sense.
How bad is the
STD regex going to be?
Oh yeah.
That's like a benchmark
of the worst.
There are quite a few regex
engines tested in this
benchmark, including
CTRE (compile-time regular expressions), Intel Hyperscan, and various others: Perl-compatible regular expressions too, Rust regex, Boost.Regex, std::regex, and YARA. Most of them I haven't heard of. And there
are quite interesting results there. You can look at the
detailed results later, but they basically say that Boost Regex is passable, and so is
compile-time regular expressions. Although, honestly, I think I expected better from
CTRE, given it's compile-time. There's no surprise that std::regex is the slowest.
But what's new is that Intel Hyperscan
beat all of them. And I haven't heard of that library before. And apparently
it's an open source library from Intel.
They say Hyperscan is a high-performance
regular expressions matching library from Intel
that runs on x86 platforms
and offers PCRE syntax support.
And it's distributed as open source
under a BSD license,
which is permissive,
so you can use it any way you want.
We should check the code
and make sure that it doesn't disable vectorization
if it's being built on AMD.
Good point.
Good point.
Although, hang on, let's see.
Let's go back to this.
They did test on both AMD and Intel CPUs.
So maybe it's not bad on both.
Anyway, lots of useful information if you want to choose which library to use for your regexes.
Next up, Eric Niebler asked on Twitter, quote,
say I declare a constexpr object, like constexpr int i = 0.
Now, when I take the address of i,
I get back a const int pointer,
not a constexpr int pointer.
That is, constexpr isn't part of the type system.
What is it then?
Like an attribute, what mental bin do I put it in?
It's a philosophical question.
And Ville Voutilainen replies,
quote, you put it into a bin related to static, but not exactly
similar.
It's a specifier that provides additional semantics on your variable, but not its type.
That's why it's a declaration specifier and not part of a type specifier and not part
of the type system.
That's a very enlightening reply.
There is an article on constexpr on Daniel Lemire's blog. He's a computer science professor
at the data science laboratory of the University of Quebec in Montreal. And the article is C++20 consteval and constexpr functions.
He illustrates constexpr function and says that the compiler may compute the result of
that function at compile time, but it doesn't guarantee it.
So if you want it guaranteed, in C++20 there's a new keyword called consteval, which ensures
that the function is evaluated at compile time. And if the parameter of that function cannot be
determined at compile time, or I suppose anything else within the function is not constexpr compatible, there should be a compiler error.
I think there's a trick to ensure
that a particular constexpr function
is evaluated at compile time,
and that is to assign its result to a constexpr variable.
And then it will be an error
if it cannot be evaluated at compile time.
But as you see, in C++20 we have consteval. It's good. Right, I think that'll be the end of it.
And I want to leave you with this interesting Wikipedia entry.
Ostrich algorithm.
Quote,
In computer science, the ostrich algorithm is a strategy of ignoring potential problems
on the basis that they may be exceedingly rare.
It is named after the ostrich effect,
which is defined as to stick one's head
in the sand and pretend there's no problem. It is used when it is more cost-effective to allow
the problem to occur than to attempt its prevention. I'm just surprised they've invented
an official name of this. Anecdotally, in one of the projects I worked on,
there was a bunch of services with lots of memory leaks, because memory was passed around to child processes.
And if you fixed the leaks, the whole system stopped working.
It relied on parent processes being killed by the OS and thus freeing the memory.
And another case which is sadly relevant today.
Missile firmware.
I was once working with a customer who was producing onboard software for a missile.
In my analysis of the code, I pointed out that they had a number of problems with storage
leaks.
Imagine my surprise when the customer's chief software engineer said,
of course it leaks.
He went on to point out that they had calculated the amount of memory the application would
leak in the total possible flight time for the missile, and then doubled that number.
They added this much additional memory to the hardware to support the leaks.
Since the missile will explode when it hits its target, or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.
Yeah.
That's it for today.
Thank you very much for joining me.
And I'll talk to you soon.
Bye.