CppCast - Benchmarking Language Keywords
Episode Date: September 6, 2024

Benjamin Summerton joins Timur and Phil. Ben talks to us about what led him to benchmark the impact of the final and noexcept keywords, how to interpret his results, and the project that inspired him to do so in the first place.

News:
- Boost 1.86 released
- RealtimeSanitizer - new real-time safety testing tool for C and C++ projects that comes with Clang 20
- "Honey, I shrunk {fmt}: bringing binary size to 14k and ditching the C++ runtime"

Links:
- Join us for the CppCast CppCon Special
- Previous episodes covering std lib implementations: Stephan T. Lavavej (MSVC), Stephan T. Lavavej and Sy Brand (MSVC), Billy O'Neil (MSVC), Marshall Clow (libc++), Eric Fiselier (libc++)
- "noexcept affects libstdc++'s unordered_set" - Arthur O'Dwyer
- Episode with Martin Hořeňovský, discussing non-portable random distribution
- Episode with Frances Buontempo, also mentioning random numbers and the portable distribution issue
- "Free Your Functions" (video) - Klaus Iglberger (timed link to the bit that talks about performance)
- Ben's PSRayTracing repo
Transcript
Episode 389 of CppCast with guest Benjamin Summerton, recorded 4th of September 2024.
In this episode, we talk about the latest Boost release,
the new Clang real-time sanitizer,
and how to shrink the FMT library.
Then we are joined by Benjamin Summerton.
Ben talks to us about benchmarking,
the noexcept keyword, and ray tracing. Welcome to episode 389 of CppCast, the first podcast for C++ developers
by C++ developers. I'm your host, Timur Doumler, joined by my co-host, Phil Nash. Phil, how are you doing today?
I'm all right, Timur. Good to see you back. How are you doing?
Thank you, thank you. Yeah, it's good to be back. I'm doing great. I had a break of a few months, which was sorely needed. I spent some time traveling, I spent some time with my family, which was pretty great. I have pretty much not touched my laptop in two months, which is the longest break I've had since I did my really long six-month backpacking trip all the way back in 2017. Yeah, that felt really good, actually. I feel refreshed and motivated to be back here. So it's pretty great.
Well, I'm jealous.
I might be a bit rusty. I haven't done this in a few months, so forgive me if I, you know, say anything wrong or anything like that. But it's great to be back, and thank you so much, Phil, for holding the fort while I was away. I listened to the episodes you did; they were actually really, really awesome.
Well, thank you. Yeah, so you haven't done much C++ for a few months, so you're a bit rusty?
Not in that sense. No, no, no rust, just, you know, lots of rest, lots of rest.
Exactly. Yeah, no, that was good, actually. Not so much rest, but more like family time, you know.
Good to hear it.
Yeah, it was great. What have you been up to in the meantime?
Well, I had a holiday as well. So, because we were both away, that's why we had a bit of a
longer break between this and the last episode. We should be back on our two-week schedule again
now. Other than that, I've just been catching up on stuff since all my conferences this year. So
it doesn't feel like I've actually had any downtime at all, actually.
All right. Well, don't work too much. All right.
So at the top of every episode,
we'd like to read a piece of feedback.
This time we got an email from Paul.
Paul is saying,
this is a long email,
but I'm just going to read
like a part of that email.
So Paul says,
I would like to suggest a topic
for one of your future episodes,
a discussion on the two major
open source C++ standard libraries
on Linux, GNU's libstdc++ and LLVM's libc++.
It would be fascinating to hear from the maintainers of these projects directly.
Perhaps you could invite a maintainer from each project to discuss their experiences and insights on air.
And then Paul goes on with a list of questions that we could ask them that he would find interesting,
mainly revolving around how one could become a contributor
and get started as a contributor to those projects.
So thank you very much, Paul, for this email.
I think it's an interesting idea.
I'm curious why you're focusing on Linux and on these two,
because as far as I know, the Microsoft Standard Library
has been open source as well now for a few years.
Obviously, it's not directly targeting Linux,
but it is an open source standard library implementation.
Anyway, I did some digging in the archive.
There were actually at least two CppCast episodes about the Microsoft STL
back in the days when Rob and Jason were running the show.
One with Stephan T. Lavavej in 2017 and one with Billy O'Neil in 2020.
And at least two about libc++, the Clang one, as well:
there was one with Marshall Clow in 2017
and one with Eric Fiselier in 2019,
but I couldn't
so first of all
all of these were quite a few years ago
and also I couldn't find a single episode
about libstdc++
which is the GCC standard library
so maybe that's something worth doing at this point.
What do you think, Phil?
Yeah, we should definitely try to get someone on
and prioritize someone to speak about libstdc++.
I think there are maybe a few more aspects to it
than just how to become a contributor or stuff like this.
I think there's more things to talk about there.
So that episode might take on a little bit of a different
twist or not. We'll
figure it out. But let's see. Let's see
whom we can get on the show.
Yeah, absolutely.
So, yeah, thanks, Paul. And we'd like to hear
your thoughts about the show. You can always reach
out to us on X, Mastodon, or LinkedIn,
or email us at feedback at
cppcast.com.
Joining us today is Benjamin Summerton.
Benjamin is a developer based in Boston, originally from Germany.
He grew up in rural upstate New York.
He obtained his computer science degree in 2016,
interning at places such as Google.
Afterwards, he has been at smaller firms focusing on C++ and Qt.
Professionally, he has worked on special effects, medical devices,
chemical detection, lasers, and all sorts of odds and ends.
UI, UX, cross-platform development, and localization are some of his specialties.
Outside of tech, he likes to study foreign languages,
such as Japanese, to dance, and to make films and animation.
Ben, welcome to the show.
Hi, thank you very much for having me on, Timur and Phil.
Very welcome.
A long list of interests there, and I do wonder if you've tried to mix some of those, like making films about lasers in a foreign language.
No, the lasers is a bit more recent.
It's one of my more recent work experiences.
But talking about mixing film and tech, when I was in university, I actually did sometime.
No, not sometimes.
I actually did this twice.
I actually wrote my own custom rendering software for some of my film classes.
They're like these experimental film workshops.
If you go check out my blog, I have this article about this thing called random art.
It's not my original idea.
It's someone else's.
But, you know, I then implemented it once over in C Sharp.
Then, like, I had this, like, thing where I spun up these Google Cloud compute instances.
And then, like, actually did all, like, the frame rendering.
And then on my desktop machine that was running in my dorm room, then it would like grab all the frames
and then compile them into an actual like film I would show.
Nice, nice.
And that sort of fits in with the ray tracing
that I know you've done as well.
So a little bit of foreshadowing there.
We'll come back to that a bit later.
Yeah.
So we will come back to that later.
But first we'll have a couple of news articles to talk about.
So Ben, feel free to comment on any of these, okay?
Sure.
So yeah, a few things have
happened since the last episode, which now I think
is it four weeks ago? Four weeks, yeah.
Four weeks ago. So a few things have
happened since then. The first
thing is that there's a new major boost
release. Boost 1.86 has
been released. It doesn't actually have
any new libraries, but it has
major updates of several libraries
such as Graph and the UUID
library. And there's also a big new feature in the Stacktrace library. You can now get a stack
trace from arbitrary exceptions for Windows, which is pretty cool. There's also several more
libraries that are now dropping support for C++03; one of them, for example, now has C++14 as a minimum. So that seems to be an ongoing process, as we have discussed already on the show.
Yeah, every library kind of decides for itself, but more and more are kind of abandoning the older standards now. Yeah, I don't have any particular comments on this release itself, other than just to say I think it's great that we seem to have fallen into regularly announcing new Boost releases.
I think that's important because Boost is not so much in the consciousness of the community
as it used to be.
Anyone that was around in the late 90s, early 2000s, Boost was so close to being like a
second standard.
You pretty much had to have it on every project.
And certainly since C++11 and 14, that's been less the case. But it's still a great selection of libraries in there, and I think we need to keep making sure that people are aware of them. So we should keep doing that.
All right. So there was another release which I thought was very interesting. There is now a new sanitizer
that is part of the Clang/LLVM project. It's called RealtimeSanitizer.
And it now lives alongside such sanitizers as AddressSanitizer, ThreadSanitizer, and UBSan.
And it's a new real-time safety testing tool for C and C++ projects.
It comes with Clang 20. And what it basically does is it detects things that you're not supposed or allowed
to do if your code has real-time constraints or is sensitive to how long it takes or, I think,
more precisely, you can't afford to call things or do things in your code that take an unbounded
amount of time because you have some kind of deadline. So I obviously have done a lot of this kind of coding when I was doing audio, but there's
also quite a few other industries where such constraints play a role.
And this sanitizer now detects at least some instances of things that you shouldn't be
doing there because it's not compatible with kind of low latency real-time stuff
because like it might call into things that take an unbounded amount of time
or might block your thread or do other things that you're not allowed to do there
if you want to satisfy your time guarantees there.
So it actually introduces a new vendor-specific attribute,
[[clang::nonblocking]],
which you can use to annotate a function
to say that it shouldn't be blocking.
It should be real-time safe.
And then when you annotate a function with that attribute,
it detects things in that function
that are not real-time safe or blocking
and gives you an error if it finds something like this.
So this covers lots of syscalls: calls that allocate memory, for example, whether it's malloc or free, or interacting with the thread scheduler, like locking a mutex, or anything else that could have a non-deterministic execution time will raise an error.
And the great thing about this is that the runtime slowdown you get while,
you know,
running your app with this sanitizer compiled in is pretty much negligible.
So at least unless you hit one of those errors, I guess.
So yeah,
that's really useful tool for people who do like low latency and real time
programming.
I certainly know that the audio software world
has been asking for a tool like this for decades,
and I'm very, very excited that it finally exists in Clang.
Timo, when you were working in audio,
I know that's kind of a little similar to video games
where you have to, say, render something
or update the world every 60 times a second,
but I know it's different from audio.
How much time did you actually have to say
to compute a frame of audio?
Oof, I guess it depends on the,
you mean how much time you have
to compute a frame of audio?
So it depends on your user settings.
It depends on things like the sample rate
and the buffer size,
which are things you can set
in your audio settings, actually, on most platforms.
But it's somewhere typically between 1 and 10 milliseconds.
I think the default settings will land you somewhere
around 3 or 4 milliseconds.
Yeah, I think it needs to be below 5 or 6
to be not noticeable when you're playing an instrument.
So if you're like a consumer,
then you will probably be somewhere around 3 to 5 milliseconds. If you're a professional audio programmer, you will probably work with low buffer sizes, and then it's going to be as short as a millisecond.
To get that latency down, were you typically working with C++ strictly when you were in this space, or did you have to reach for things such as C or assembly for it?
when you're in this space or did you have to other reach to things such as like c or assembly for it
i personally uh i personally always worked in c++ because i wasn't really doing much of the actual
audio algorithm stuff i was kind of doing like the plumbing and middleware around it like this is kind of more my speciality
but um i've seen some uh audio like dsp algorithms written in c and in assembly
um and other languages actually but i think most of it at least from what i've seen would have been
c++ maybe not the most modern c++ but but C++. Okay. But there is
another news item. There were a few blog
posts that came out the last few weeks
that I thought were interesting.
I'm just going to talk about one of them
because we don't have infinite time.
It's a blog post by
Viktor Zverevich, who
is the author of the FMT library, the
formatting library.
And the one that inspired std::format,
and that std::format was mostly based on.
And the blog post is called Honey, I Shrunk FMT,
Bringing Binary Size to 14K and Ditching the CSS Runtime,
which I think really nicely dovetails
with the last episode, Phil, that you did before this one
about binary sizes with Sandor Dargo.
So this is kind of exactly what this is about.
Apparently, there are quite a few people who want to use FMT on embedded devices, for example,
where binary size is really important.
And it's already, considering what it does, not that huge.
Like on GCC, which is, I think, the typical compiler for embedded, like FMT is 75K if you strip the symbols out.
But the blog post describes how to bring it down to 14K, which is less than 10K overhead over an empty C main, according to the article.
And you can do that by applying a series of hacks that Victor explains how to do.
So you have to remove locale support.
You have to remove support for floating point.
You have to remove the standard library,
dependency on a standard library,
which I thought was interesting
that you can actually do that.
And there's a few other hacks.
And at the end,
he shrunk it down to a fraction of the size
and it's actually kind of reaching the size
where it's usable for small embedded chips,
which is really cool.
So you can go and, you know, format your strings very nicely on embedded platforms, which is really cool.
The one that I found actually a little fascinating
was actually replacing the use of new and delete
with malloc and free from the original C library.
I didn't really realize that new and delete
actually have that bit of a size overhead.
Yeah, I think that was so that he could actually get rid
of the C++ standard runtime overhead completely
because those were the only remaining things
that he actually needed.
The bit that surprised me more was that removing
the locale support actually had quite a small difference.
I was expecting much bigger.
And he did actually address that further in the article.
So I'm going to leave that exercise for the reader
to find out what actually came out.
All right, so this concludes our news items, except we have one more news item, which is about us. And this is something we don't normally do. We don't normally talk about what we're going to do at the next episode; it's normally just a complete surprise when it drops. But this time we have something very exciting, so we want to talk about what we're going to do next time.
So our next episode is going to be in two weeks and that's going to fall
into CppCon, which is a big conference that Phil and I are both attending.
So we figured we could do a CppCast CppCon special episode.
And our guests for this episode will be all of you.
So the idea is that instead of having a dedicated guest, we're going to talk to the audience, and the audience can ask us any questions they want, and we're going to try and answer them live on the show. So you can ask us about our opinions on C++. You can ask us about the show, like what's involved in running it, or some of the content, or whatever. You can ask us about what we're up to in our day jobs, or what our opinion is on other C++-related things. So, whatever you want. If you're physically at CppCon, we're going to record this in a room with a live audience, so you can actually show up and, you know, join the audience.
There's going to be a mic going around where you can ask us questions live.
I believe our slot is Tuesday lunchtime.
So that's Tuesday,
the 17th of September.
So go to the CppCon like program,
and it's going to show up there as a session during lunchtime.
And if you're not physically at CppCon,
we don't really have the resources to set up a full hybrid thing
where we can answer questions live from remote people.
That's technically a little bit too challenging to set up for us,
but you can send us your questions in advance
to feedback at cppcast.com via email,
and we will do our best to fit them into that episode
and answer those questions as well during that episode.
So I'm really looking forward to that.
It's going to be something different,
something special, and hopefully fun.
Yeah.
Yeah, when I was doing cpp.chat with Jon Kalb
a few years ago,
we did a couple of these live at CppCon.
They're always a lot of fun.
They're definitely worth going to.
And with everyone getting involved, it's going to be a lot of fun.
And while you're there, you might as well enjoy the rest of the conference as well,
because CppCon is pretty awesome.
Bonus.
So that's taking place in Aurora, Colorado, 15th through 20th of September.
And I believe you can still get tickets for that.
All right.
So that concludes our news section.
And now we're very excited to have Ben as our guest today.
Hello again, Ben.
Hello again.
So we became aware of you after you wrote your blog post,
the performance impact of C++'s final keyword,
which is now a few months ago.
It's received many views since then, and it attracted my attention because I was like: finally, somebody doesn't just pontificate about "use this keyword" or "don't use that keyword because it does this", but actually measures what happens. I wish more people would be doing that, and I thought that was a great approach. So I very much enjoyed
your blog post. You then had a few follow-up blog posts as well, which you're going to talk about
too, but let's start with this one. So what led you to actually writing that blog post?
Writing that blog post. So this wasn't like the first blog post I've written about this project,
you know, it was a bit more of an existing project from a couple of years prior.
And, you know, I would, you know, work on the project and drop it off for a couple of months,
you know, typical thing that a lot of developers do with, you know, their free time personal
projects. But I just remember, I think I was reading Reddit's r/cpp community, where there was a blog post linked, called something like The Performance Benefits of the Final Keyword.
I can't remember which one it specifically was. I'm like, Oh, just reading through it. Wait a minute. Huh? This keyword looks like a, just a
free optimization that I can just like sprinkle around my code, like some sort of all purpose
seasoning. Um, but just reading through it some more, I'm like, okay, they're talking about the
assembly. You know, I don't really know that much about assembly; I'm a very surface-level C++ developer. I mean, it is important to know about, and while I was working on the project, I did use the Godbolt tool to actually inspect things, to try to reduce the amount of generated assembly, maybe use more vectorized instructions. But all these blog posts talk about the mechanics of how it worked. Not a single one actually had a benchmark to actually
show that if you used final in this case, it actually would cause a performance increase.
Um, so I was like, you know what, let's just try this out. So, you know, I just pulled down the
code, which I haven't touched in like a year, year and a half since the last time I wrote an
article about the project and just, let's just put final in somewhere, some places and see what
happens, which is, you know, a very risky thing to do, but you know what, um, worst case, I'm just
wasting some electricity and, um, I run the default scene with the default settings. Um,
and for those of you who aren't aware, um, Peter Shirley, um, he wrote a series of little mini
books on ray tracing, and the default scene is the one from book two and its final scene, which kind of tests pretty much everything that you've written, except for book three. But, you know, it's a very good, intensive scene. And I noticed that, huh, it's kind of like one percent slower if I run it ten times over. Like, this is a little strange. All these articles online, and it wasn't just one article, it was multiple ones when I Googled some more, they all said my code should run faster
with final. What's going on? And then I tested some other scenes. I'm like, huh, wait a minute,
this is actually a little bit slower. So then I decided, you know what, let's actually put this to the test. Let's see all the places where I could apply the final keyword in my program, because a lot of these articles are making the claim:
use the final keyword.
It'll make your code faster.
This is why.
And let's actually measure it.
I wrote a blog post or two prior about this,
about how I made this little automated testing script,
so I could actually fuzz-test the program,
but also,
you know,
ray tracing is a very slow process where you very much care about how long something takes to run, how long it takes to render an image.
It could also double as, you know, doing performance metering.
So after this, I found, like, huh, it's not actually faster. In some cases, it's not faster across the board.
Some cases it's faster.
Some cases it's slower.
But then combining it to, you know, I want to make sure that this program could also run not just in gcc but also on clang on msvc so i'm like wait a minute
we have different compilers going on too um they definitely generate different code and turns out
that was different compilers um were you know had different run times um and top of the i'm like
wait a minute but we also have different chips like you know intel and different runtimes. And top of the top, I'm like, wait a minute, but we also have different chips,
like, you know, Intel and AMD,
but they both implement x86 processors,
but they probably implement them in a different way.
Could there be a performance difference on them?
Oh, wait, now we also have Apple
who are making, you know, ARM processors.
Could this actually run a little bit differently as well?
We have smartphones now, too, which, you know, run ARM processors as well. So, you know, I'm like, huh, to just claim that this keyword is faster but not actually measure it anywhere just felt odd to me. So I'm like, you know, let's just try it out and see what happens. And I was fairly surprised.
So we covered this blog post a few times here on the show already in the news section.
But now that we have you here, we can actually dig in a little bit more, which is pretty exciting.
Oh, boy.
So first of all, how did you measure this?
Did you just run the scene and just measured how long it takes?
Did you use a particular benchmarking framework or something?
So I think your first idea was a bit
more correct. So it wasn't just one scene. Um, the book actually has multiple scenes. I think my last
count was maybe 39 or 40 of them. So some of them like, say have like one object in them, other than
have 10 objects. Some of them then have like, say a thousand or maybe even 10,000 objects. I can't
quite remember. Um, so you get this smattering of like, you know,
these different scenes, um, many of them also virtualized objects that you have a sphere here,
you have a cylinder there, but they all have a common interface. Uh, some of them are boxes.
And then on top of the two, some of them like have different textures, like maybe a white noise,
marble texture. Other ones just have like standard color. Some are different materials.
And so that's how. And we're testing, as I say, 40 different scenes. But to make sure that we are getting a nicer sample, sometimes we render an image that's very wide, sometimes an image that's very tall. Sometimes we take, say, 10 samples per pixel; other times we take 100 samples per pixel. Sometimes we render with one core, with two cores, with four cores. So we essentially compute this permutation of all possibilities together. Let's say I gave every argument, or every parameter, about three different options; that's easily 500-plus different ways you could render the same scene out. That's a little, I would say, computationally bonkers to do. So we just take maybe, say, 30 or 50 of those potentially generated permutations, and then we run it with the keyword on, so with final on, and then I ran it with the final keyword off, and then we compared the performance difference.
So do you actually have a lot of places in this code base where final semantically does
something? Like, where you have virtual functions that could be overridden, but then the final keyword makes them not overridable? Is it actually a code base where this matters? Or maybe this is more something like, you know, if you had swapped around these two lines of code, you would have gotten an effect as well.
I mean, I would think so.
You know, it's been a while since I wrote some of the code.
I mean, you know, when I do look at the output from this,
you know, I do see cases, you know, where it was more performant.
It actually did cause a performance boost, but there were other cases where there was a significant drop.
So, I've seen some benchmarks where people change something and then they measure a performance difference of one or two percent. But what's actually happening is that, because they did some random change, and it doesn't actually matter what it does semantically, the code is just laid out slightly differently. And then, you know, maybe the way things fall on cache lines and in the instruction cache is slightly different, and then you just get a cache miss somewhere in a hot loop where you wouldn't get one otherwise. And that is the thing that you're measuring, and that's actually not related to, you know, the semantic properties of the feature that you actually enabled or disabled.
So I've seen some stuff like this.
I wonder if this could be something that's going on or if it's actually the final keyword
that causes the difference here.
There was one person.
So I remember when I tested out on Clang Intel, that there actually was a very significant
performance drop
across the board.
I can't remember his name,
but he actually did do a bit of an investigation
and found there actually was a bug in the Clang compiler
that was not, I believe, folding calls to logl,
and that's actually why it was getting a performance drop.
Okay, and how was that triggered?
By enabling or disabling final? That's interesting.
Yeah, that is something I am not privy to. I do not understand most compiler internals and how they work.
As I said, neither do I.
Yeah, very surface-level C++ developer here. But, you know, my concern was people just saying: use the keyword.
Well, yes and no, right? So what's the conclusion? Should we use final?
Personally, at first I was thinking about it, you know; there were some cases where I did see a 10% speedup. But, you know, overall it just doesn't seem consistent to me. So my final word on final is: probably don't, but you maybe should still measure it first to see if it actually benefits
you.
I mean,
some programs,
you know,
they're running on one operating system.
They're going to use one compiler,
and they know
what CPU they're going to use.
If that's what you're going to run with, that's fine.
But, you know,
I want to test this broad smattering of different operating systems, runtime environments,
compilers, and chips to see what happens across the board.
And we can see it's not necessarily consistent.
It's not just different platforms or even different compilers,
but different versions of the same compiler.
I mean, you said that you perhaps uncovered a bug
that will then get fixed,
and then maybe the performance
profile will be different, so you might need to test again. So, did you test again? Has that been fixed?
You know, I've not tested it. Well, I've not tested the final keyword specifically again yet. You know, I have offered to, if it has been fixed. But I did file a ticket on the bug tracker, after about a week of seeing that no one else did. And then someone actually pointed to a much older bug in the Clang issue tracker that
actually existed for 10 years.
So this was a little bit known, but it wasn't necessarily a high priority one.
Right.
So what was the reaction from the community to that blog post?
I remember there was quite a long Reddit thread.
Yeah. So, you know, normally when I've written posts about this project, maybe like 1,000 or 2,000 hits on my blog. I got about 20,000 for that one single page. Yeah. And then I remember, too, just dropping it on Reddit and then just seeing all the comments come in, and I'm like, boy, what did I do?
Yeah, I think one of your previous guests,
he just said, you know,
this keyword existed in the standard
for like, what, more than 10 years,
but no one had really tested it before.
Well, you know, some people
actually did comment saying like,
they actually tested their own cases years ago
and they actually did see a performance increase
with the usage of final.
You know, talking about the older host of the show, Jason Turner: I remember actually DMing him on Twitter, or X, or whatever you want to call it now, and he said when he was working on ChaiScript that the usage of final actually did give him a performance benefit.
So obviously, that post went out, there was a discussion about this, this led to us actually talking about it, and that led to me nerd-sniping you about another thing. I was like, well, actually, you know, since you seem to be really good at doing benchmarks and figuring out whether keywords have a performance impact or not, there's actually another keyword where we had, you know, very intense discussions on exactly this, even on the committee, about whether this keyword should be used, or when, and whether it has a performance impact or not. And that's the noexcept keyword.
And this is something that, I remember when I started doing C++, it was kind of already around the time when C++11 was around, especially when it was 2010-ish.
And then I remember I haven't been using it then,
but then I think in 2015 when I joined the JUCE team,
I think that's when it started being a thing where
I noticed that people just sprinkle noexcept everywhere.
They're like, okay, this function probably is not going to throw an exception.
I'm going to put noexcept on it.
And whenever I was asking why do they do that, they always said,
oh, yeah, it's going to make your code faster because you don't have to generate
the code that throws the exceptions or whatever that means.
So somehow it's going to be faster.
And so people were sprinkling it everywhere. And then we found out on the committee that noexcept actually causes a lot of problems as well. So it was kind of this trade-off thing. But then it turned out,
and I started asking people. Anyway, I actually had this in one of my later jobs as well, right? Like, we wanted to test our assertions by throwing an exception out of them, but then you couldn't do that if you have noexcept on it, because then you're just going to terminate the whole thing and kill your whole test, too.
Anyway, long story short, the guy who was writing that code, he just insisted that noexcept was making his code faster, but he didn't actually produce a benchmark. And then I started asking around, and it turned out I'm not aware of anybody ever producing a benchmark conclusively proving whether noexcept has any impact on performance at all. So I was like, hey, Ben, can you just do that? And you did. So that's amazing. So thank
you very much. Do you want to tell us about that post, that project?
Yeah, I remember when you asked me, like, oh, do you have any measurements about noexcept? And I was like, huh, wait a minute. And I remember checking the old README of the project, which I haven't touched, well, not all parts of it, in like two, three years. And I'm like, I think I actually maybe did do something about noexcept prior. And then I actually found it, just squirreled away. You know, there was a flag. So part of the project, too, is that it has a bunch of CMake configuration-time variables that you can toggle on and off to actually see and measure performance differences. There actually was one. Essentially the same thing I did with final, I did with noexcept three years prior. And I was like, yeah, I saw about a 0.05% performance increase with noexcept. But I'm like, this could potentially be fuzz. And this is before I actually had the script that could do all the benchmarking and the verification testing. So I'm like, you know what, let's redo all of that. And, you know, when I wrote the blog post about final, I built a bit of an analysis suite in Python and Jupyter notebooks. I'm like, you know what? I have this script now that can run this whole permutation of tests. I have this little
Python binder code that can then analyze and take a look at it and generate all sorts of like charts
and diagrams and pretty information. Let's run this again. And, you know, this is very much
getting to the realm of micro benchmarking, which is very, very hard.
You know, when you have a new algorithm, different data structure, it's probably much more easier to like see, you know, if you write a different algorithm data structure and you see a 20% increase in performance, I'm like, yeah, that's pretty good.
But when I feel like I'm starting to like the 1-2% range, I'm like, okay, this is starting to get, could this be fuzz?
We're not quite sure.
So instead of running the program once or twice, we probably need to run the program like, you
know, maybe 30 times, 50 times, a hundred times. So with the final keyword, I'd say I ran like 30 different test cases per scene, times eight different configurations of chip, compiler, and operating system. This was like, okay, let's add on another operating system to run on. I don't think I added any new compilers, but let's run the test like 50 times per scene instead. You know, let's go more intense. And from
that there was one case that we found that I found out where it actually was running consistently faster.
But in general, noexcept kind of didn't really do too much for performance.
But this also gets into the second part. I remember you asked me if there were any other instances or any other prior research, you know, people publishing, like, does noexcept increase performance? And it was a little hard to find, but there was, say, a lightning talk at a CppCon, I think back in 2017 or 2019, where the presenter
reported that there was, you know, one case here that was a little more performant, one case there that was a little slower. And I'm like, okay, we do have a little other prior research. I did find some
older blog posts too, I can't remember from where, I think it was also pre-pandemic times as well, that actually did a benchmark of noexcept and found it actually was a little bit more performant.
But I had this concern, like, wait a minute, these benchmarks here are just, say, searching for an integer inside of a vector or a list.
You know, that's a very tiny, very specific thing that probably runs very, very fast.
What happens when we have a bit more of a complex application, where instead of just searching through a vector, okay, now we need to recursively trace back through a function, go to a different part of the program, probably have a page fault or something happening, because now we're running in a different part of the code? Hopefully you get the idea, that in a more complex program, we're not just benchmarking these tiny little atomic things. When we get to larger programs, there's much more going on. Yeah. And this is something I did talk about in the benchmark of noexcept, is like,
okay, with the example of,
let's have a vector of integers,
you know, randomly generated in random places.
Let's just search for one random integer
could be in the vector.
It could not be in the vector.
Let's see if we can just find the index of it.
And when we have that same exact function,
once with noexcept on, and once with noexcept off, we saw that noexcept actually was consistently
more performant. You know, some cases, they'll say like, the median was 5% faster, 4% faster,
16% faster. I'm just looking at the blog post right now. There was one case where it showed it was 40% faster.
So in that very small atomic benchmark code, it showed noexcept is faster.
But when I brought it into a larger application, which was this ray tracing project, it was only consistently on AMD Ryzen 9 with, I think, Ubuntu 23.10 with GCC 13,
we actually saw a very consistent performance increase
of around like, say, 5%, 6%.
And then some other, you know, doing those same,
and these are only for the scenes from book one,
which is using a vector under the hood.
But then testing those same exact scenes,
but on different chips and compilers
show that, like, hey, we're having, say, a minus 1% performance hit, or having next to no performance hit whatsoever. But then compare, you know, this 5% performance increase on the AMD Ryzen, Linux, GCC. As I said, it's about 5-6%.
When I look at the atomic test of just doing the search for an integer in a vector or list, that was reporting about an 11% performance increase. So, you know,
in the smaller test, we're having a large performance increase, but in the larger application,
we're not having as much of a boost. Right. Interesting that you're seeing such
significant figures in those sorts of
operations. I wouldn't have expected to really see anything for that sort of code. I think usually the main thing we talk about noexcept improving the performance of is move operations in standard containers, which are generally disabled if the move doesn't have noexcept. Did you
test that at all? No, I did not test that.
That'll be interesting to see whether that actually does generate
any statistically significant figures
and whether they're different to the other figures that you found.
Yeah.
So let me clarify something actually here.
So there are different types of performance impacts
that noexcept could have.
So we do actually on the committee have now a discussion about what should be the noexcept
policy for the standard library.
And there's lots of different opinions.
Some people are saying everything should have noexcept that isn't going to throw an exception
if you call it correctly, which is, I think, what a lot of people do in practice. The other extreme is you should not put noexcept on anywhere except when you want a different
branch of the code to be taken because you're actually using the noexcept operator to branch
on whether noexcept is there.
And the typical use case for this is vector push_back, you know, this whole strong exception guarantee thing.
Like if something's not noexcept, then you can't move the elements; you have to copy them, because otherwise you can't undo it.
Right.
So you have to use a different branch of vector push_back, with a different, less efficient algorithm, if something is not noexcept.
So that's one thing.
I think that is a legit use of noexcept. I don't think anybody says we shouldn't be doing that. That is separate from noexcept just not doing anything semantically at all, but affecting codegen, right? And people say that that's the thing that they want when they put it on low-level code. And that's the thing where I claim, well, no, we don't have proof that that actually does
what you think it does.
So I think I want to just distinguish
these two different cases.
One where NoExcept actually leads to different code
being executed, which is legit.
And I think I would expect that to be measurable,
certainly, with things like vector push_back.
And the other case where it doesn't actually change
what your code is doing
it just subtly changes the assembly. And people are claiming that somehow that makes their code faster, and I don't quite buy that. Right. I mean, in general, exceptions have always been a little bit of a hot topic. Like, I remember, when I started work on this project, watching some talk,
and it was about exceptions. I think they were introducing the Boost.LEAF library. Where some people were actually, you know, compiling with -fno-exceptions, because in general they thought exceptions were slow, therefore they should be avoided. And, you know, noexcept helps you avoid writing code with exceptions. And, you know, we had, was it Boost.LEAF? That isn't the first exception alternative in the Boost libraries. Isn't there another one? Boost.Outcome. Yeah, that's it. Right.
No, it's interesting.
Like it pops up in all kinds of libraries.
Actually, I read a blog post just today
from Arthur O'Dwyer's blog.
I think it was published a couple weeks ago
where he talks about how noexcept affects the performance of unordered_set in libstdc++ quite significantly.
But that's one of those cases like vector push_back, right?
It selects a different algorithm for something depending on...
And actually, it changes the layout of the nodes
and that has a massive performance impact one way or another.
Like it makes it either worse or better depending on what you're
doing. Yeah, I know what article you're talking about. I actually amended my article and included a link to it, because he actually did talk about what was going on under the hood, which is called vector pessimization. So I'm actually glad that he wrote that article, so we could actually understand a little bit more of what's going on. Because, as I said, I'm a very surface-level C++ developer, right? So what's
your conclusion from the work that you've done? Should we use noexcept? What should be our noexcept policy for the C++ language? Should we use it only for those cases where we know it's doing something semantically, or should we sprinkle it all over our code? Oh man, this is such a hotly contended question. Bearing in mind who you're talking to, this is a very loaded question. Yeah, yeah. Okay, you don't have to answer that.
So, what I did: after seeing the results, you know, there were cases where it was consistently actually giving me good performance for my use case, you know, a recursive ray tracer.
Other cases, it actually had a little bit of a performance hit.
Very minor, not as much.
And other places where it kind of really did nothing in general.
I actually, originally, I had it turned on for the project,
but I actually decided to turn it off because I felt like, you know,
if it was going to give a performance boost,
it would give about, say, a 1% boost,
but if it was going to give a performance hit, it'd be like minus 2%. So overall, it felt safer for me to just kind of turn it off for the project. So if you see that all-capital NOEXCEPT in the project's code, it actually is being compiled out to just nothing instead. And, you know, I'm going to say, just
like last time, the best thing you can do is actually measure
to see if it's actually giving you an impact or not. It's very possible that using noexcept in some strategic locations could actually yield a performance benefit, but I don't think I was seeing it. I was more concerned about the performance hit that I could get with other configurations. Yeah, that's really interesting, because the problem with noexcept is that it messes with other things. Like, we have a problem with contracts, because we're working on this new feature for the language where you can stick a precondition on a function, and then you can attach a violation handler and throw an exception out of that. And that's really, really useful for certain cases. But obviously, if you put noexcept on there, then that just makes that impossible, which precludes important use cases. I mean, taking it outside of the
case of codegen, you know, I do like the idea of, and you touched on this when you talked about my final blog post, we like the keywords final and noexcept to tell other developers how we intend this code to work. I think we've all been burned at some point
by code that threw an exception and crashed our program, where we did not even know that calling this function in this way could cause an exception to be thrown.
And it'd be really nice to know,
by default, could this function throw an exception?
Or is it not going to throw an exception?
And that's where I do like noexcept.
Yeah.
But in terms of
performance, I think the common theme between the two posts is: if you care about performance, you should probably measure it. Measure, and keep measuring it. So that's something novel. Yeah. And if you're going to tell other people to do it, just, yeah, have those benchmarks to back it up. Yeah, it's surprising how few people actually do that.
Yeah.
Yeah.
And, you know, I do prefer, you know, larger benchmarks of more complex applications rather than just tiny little microscopic things.
And the other concerning thing, too, is that sometimes the benchmark results could change.
How do you say? You know, as compilers advance,
as chips advance,
as operating systems evolve,
you know, our runtime environments
are going to be different.
So what we potentially benchmarked,
you know, two, three, four years prior
could not actually yield
the same performance benefit later on.
This is one thing I did talk about in the noexcept blog post: I had this thing called "BVH node, more performant". And when I was developing this on an older laptop, using that class, which was API compatible with the original BVH node class, it was yielding a 5-6% performance increase. But when I actually then tested it later, wait a minute, no, it's actually now slightly slower. So then I had to turn that off, and I'm like, okay, I'm wrong now. Five years ago, on older hardware, I was correct, but now I'm wrong. Yeah, there's a few things like that
that have changed over the years. Like, you know, should you use iterators or indexes? The performance trade-offs have changed. And computed gotos, and things like that, where we need to keep retesting.
So on that subject,
are there any other areas of C++ where you think that we could do
with some good benchmarks
that we don't currently have?
Oh boy.
I think maybe a little bit of it
might be like myth testing.
I think maybe the next one
I kind of want to look at,
but I'm going to have to reach out
to a different project than mine, because this isn't really used too much in mine. It's actually our beloved ++. Oh yes, the prefix or postfix increment. I know we are C++ developers. And I remember this from university, where some people said, no, don't write i++, write ++i, it's more performant. And I'm like, oh. And then, you know, they showed me some assembly that it compiles out to. And I'm like, wait a minute, it's been quite a few years since I was in university. Have things maybe changed? Do compilers actually do the more optimal thing now? Might ++i and i++ be compiled to the same exact assembly? You know, this is in the context of, like, a for loop or a while loop, not somewhere else, because ++i and i++ will do different things semantically. Does this actually have an impact on performance or not, or could it do something we completely don't expect? Because that's something I've not seen a benchmark for, really. So I might go investigate that. Another one: I remember watching this
talk about free functions. And, you know, I do like the idea of free functions, they're a bit more ergonomic. But I remember, at the bottom of one slide, it said free functions are more performant. And I'm like, huh, where are the benchmarks? The title of the talk was "Free Your Functions". And, yeah, I'd probably have to go re-watch it three or four times just to make sure it's burned in my brain so I can properly talk about this. I believe it was, you know, take away some of your member functions and make them free functions to make them more flexible. But I was a little, huh, it just says they're more performant. I'm like, where's the benchmark for that? I'd like to see that. Because if we're having people remove member functions and turn them into free functions... Yeah, I've not seen a benchmark for free functions.
Yeah, there should be no difference
in terms of performance other than,
as we've already discussed,
if it potentially changes the layout of the generated code,
there may be an accidental effect.
Yeah.
I mean, an accidental effect is still an unfortunate effect.
I'm pretty sure there are probably some people who,
in some other applications, probably did use final or noexcept thinking it was going to give a performance benefit, and maybe actually accidentally caused a performance hit.
I do kind of wonder how do the compiler developers
actually do their own performance benchmarking.
That's something I'd actually like to see about.
That's a very good question.
So we talked a lot about your benchmarks.
I love how you actually do go ahead and measure,
and I wish more people would do that.
So thank you very much for being a shining light in that area.
Speaking of that, the code base you use for your benchmarks
is this project of yours, PSRayTracing.
That was completely unintentional.
Yeah.
That just happened.
I'm sorry.
So the code base that you use for these benchmarks is your project, PSRayTracing, which, I think in your blog, you called your "pandemic-era distraction". So that's interesting. So do you want to just briefly talk about what PSRayTracing is? You said already it's for rendering, ray tracing, but do you want to talk a little bit more about how this project came to be and what you use it for? Is it good for anything else than benchmarking? Probably it is, it has a few hundred stars on GitHub, it's pretty good. So, yeah, I know about this a little bit. Yeah, it's my most popular repository on GitHub. You know, it has maybe 260 stars; the next one is 13 stars.
So, you know, it's my little baby. As for saying it's mine, this is one thing I do need to clarify: this code actually originally came from a book. I think I alluded to this before. Peter Shirley wrote a mini book series about ray tracing. And, you know, eventually, boy, this is going even to pre-pandemic times. This is actually going to when I was fresh out of
university. You know, I had to take a little time off for my health. I really liked my computer graphics classes. And then, you know, I'm like,
man, I kind of sucked a little bit at my global illumination class,
but I like pretty pictures. So let's go back. And actually I remember just going onto the,
Amazon, you know, Kindle store, just looking for books on ray tracing. I'm like, huh, there was one that was actually just published like a week prior to me having this thought. I should check it out.
It's only like $3 a pop.
Oh, and it has like three little books.
Okay, you know, for $9, I can spend that.
This is like 2016, by the way.
So kind of like the proto thing is like, huh, okay, no one's really reviewed this book.
And like, it's all in C++.
But I'm like, you know what?
I actually want to learn a new language.
I actually did it in Nim.
Yeah, I think there's been a brief talk about that before. For those who are unfamiliar with the Nim language, it's a very Python-feeling language that actually compiles down to C and C++,
or JavaScript if you want to work in a web browser with it. It's really nice, really cute.
I feel like it's kind of a little similar in philosophy to um herb sutter's you know cpp2 syntax you know have this other syntax that compiles down to c or c++ and then
we can like shove it through a standard c c++ compiler so i actually went through the original
book series um but writing everything in nim and um you know it actually showed that nim was like
a little less performant than C and C++, but I found I was a little bit more productive and felt like I could go through it a bit faster.
So that's kind of like the proto origins of it.
So this is like 2016, but all the way through 2017, 2018, 2019, and then 2020, you know, I'd still read, you know, blogs about computer graphics, you know, people post articles.
One of my favorite is Graphics Programming Weekly that I subscribe to.
And I actually just would see images from this book start to pop up here and there.
And I'm like, huh, wow, this little mini book series is like starting to gain a lot of popularity.
I didn't think it was going to go that far, but I'm like, yeah, okay.
And I'm like, you know, it was 2022.
I started a new job at a company.
The work didn't feel, I would say, very challenging to me. So I felt like I needed something I could just really sink my teeth into, right? So, you know, we couldn't go anywhere. So I'm like, you know what, why don't I just revisit this ray tracing mini book that's
popped up everywhere, and do it actually in C++. You know, I first started writing C++ in like 2007, in my bedroom in my mom's house, because I wanted to, you know, make violent video games, because that's what everyone wanted to do at the time. And then, in university in 2014, I wanted to actually learn what they called modern C++ at the time, which is like C++11,
C++14. So I made this little animation spriting tool, and that's where I wrote more modern C++.
And, you know, throughout, you know,
that period I've written C++ here and there.
I'm like, you know, let's learn like modern,
modern C++.
Let's see what the C++17 standard has in store.
Let's see, was everyone talking about C++20?
And I remember at the time,
everyone was talking about modules,
but, you know, I think that was like still being drafted.
So I'm like, you know what, let's write a C++17 project. And as I was rereading this book,
I'm seeing how it evolved, how it grew. I was like, pretty cool. Like, for example, the old book used just raw pointers, you know, with new and delete. This newer version of the book now actually used shared pointers, which are much safer. So I'm like, okay, that's an evolution.
That also does cause a bit of a performance hit, which I think is very well talked about and documented.
But as I was reading, I'm like, wait a minute, there are other algorithms that we can do in other instances. Like, wait a minute, that's not the best way you could write this code. You know, if you write the code like this, it probably does some auto-vectorization instead. And I'm like, you know, this is kind of cool. I'm actually writing code to be a bit more performant than what's in the reference implementation from the book. But I'm like, wouldn't it also be interesting to actually toggle them on and off? So in the CMake file, you could actually toggle between the book's code and the code that I wrote. Compared to the book's original code, it was able to render images about four times faster, which I'm a little proud of.
I also had other things, such as, you know, a thread pool. And this is also done in pure vanilla standard C++.
Okay, what if we change out the standard random number generator, you know, the random library from C++, which is using a Mersenne Twister, I don't know how to pronounce that, the Mersenne Twister, with this other thing called PCG32? Wow, that random number generator actually is way more performant. In fact, there were sometimes where the Mersenne Twister was actually just, I think, returning zero or something. And you can actually see where I documented, like, okay, these pixels that should be nice and colorful, they're actually just little black squares.
funny.
And this maybe gets into another issue too, which I found out recently, which unfortunately
random in the C++ standard isn't necessarily standard.
This is kind of doing a little bit of an offshoot
from you know the project clang gcc um and msbc their uniform int distribution and uniform real
distribution actually will give you different random numbers yeah this came out a few times
on the show i think we had an episode with franc Francis Bontempo where we talked about this a lot.
Yep.
With Martin Hořeňovský.
Yeah, with Martin as well.
It did pop up a few times on the show
that the random number generators themselves
are actually standard.
Like they will generate the same sequence
of random numbers on all platforms
if your compiler is conformant,
but the distributions are not.
That's right.
Yeah, which is a little frustrating
because I would have really liked
to be able to compare chip versus chip, different compilers on the same chip, versus on different operating systems.
And while I can do that for some scenes, most scenes actually do rely on a random number generator to put objects, create textures, and I can't fairly compare compilers now because of this. So maybe the next thing I'm going to work on in the project is actually trying to solve this issue, where the standard random generator isn't really fully standard, or giving me the same numbers. And, you know, just talking about this,
too, this was a real issue actually for Disney, when they wanted to do a high-res render of Finding Nemo, I think like 10, 15 years later or more. They actually lost their random seed. And for, I think, the movement of the corals or some other plants in the movie, instead of, you know, oh, should we just do a different random seed? They actually painstakingly animated all the movements of the plants, which is just, that's really unfortunate.
And that's where it might be nice to have an actual portable random number generator in the standard, or some other easily consumable library that we could just kind of grab and not have an issue with.
So back to the PSRayTracing project, continuing on. After I had those initial first seven, eight revisions of it, I'm like, you know what? I'm done. I should go do some other things, which I did do. But then every once in a while on r/cpp, I'd read about, like, oh, do this because it increases performance. And I'm like, maybe I could just eke out something a little bit more here. Or, well, two times actually, I went out into left field. Which is, one, you know, if I ever want to make a change to this, I don't really have anything to reliably test to make sure that
my change doesn't affect something else. Because if I do something thinking it's more performant, um, yes, it could run faster.
But if I accidentally then change what the output of the image is, I'm like, okay, can you really say it's faster? Like, for example, talking about the random number generator, if it's placing objects in different parts of the scene and my rays are not hitting the objects that they would before, okay, yeah, it's running faster, but these objects are somewhere else. That isn't fair, right? Yeah. One of the other ones is, this is another complete
offshoot. Being a Qt developer, I'm like, huh, I don't think I've ever seen a Qt application that runs on Windows, Mac, Linux, iOS, and Android all together at the same time
with a unified code base. And, you know, I wrote about, okay, you know what, this was a headless
project. You know, you just run in the console and then you see a PNG spat out. Wouldn't it be
nice to just have a little GUI for fun? And, you know, that was a fun little project to work on.
And I wrote about my experiences with that. One of the time I wanted to see like, okay,
I do localization and translation. What's it like translating a project like this from nothing to speaking German and Japanese?
So, yeah.
And then, fast forward a little bit more to, let's say, 2024. You know, I just read "the performance benefits of noexcept". Someone posted a link to that on r/cpp. And then that's where this whole thing started. I'm like, okay, you know, I have this testing script that I built a couple years back. I have this framework now where I can test these features. I know how I can turn on a language feature, turn off a language feature,
let's just see what happens. And that's where it is now. I feel like I might take a little bit of
a break from this one specifically.
You know, I talked about taking a look at free functions, to see if they actually are more performant or not. Let's see what happens. And also maybe looking at the ++ operator, though I might grab a different project for that, that's not my code.
Because one thing that does concern me
is that this is a very specific application.
It's ray tracing and image generation.
There are many, many other projects and programs out there, such as, let's say for example, wind simulation or financial modeling, that probably have different constraints, different requirements, where potentially some of these keywords actually could have a benefit. But in my unfortunately hyper-specific case,
no, it did not.
So that's why I'll say you need to measure at the end
to see if it actually helps or not.
I think maybe for the ++ increment one, I'm actually looking at maybe some real-time physics simulation, because PSRayTracing is not real-time. It's very, very static.
You said a lot of things there
that I'd like to dig into more,
but I think we've already exceeded
our deterministic time box.
So we should probably wrap up there
and just ask one final question,
which is, is there anything else
in the world of C++
that you find particularly interesting
or exciting?
Oh boy.
Or should we not open that can of worms at this point?
I don't know. As I say, I'm very much a surface-level developer. You know, I'm not reading the standards papers so much; I'm just reading what other people post online, on Reddit, on Hacker News.
Just as it should be. Yep, just as it should be. Let other people decide my opinions.
Well, I mean, maybe talking a bit about some other things: I feel like C++ definitely has a bit of an ergonomics issue. It's much easier to just kind of, you know, do things in other languages. I know Rust has had increasing popularity. I do not know anything about Rust, so I do not want to talk about the ergonomics. But a lot of people have said, okay, aside from the ergonomics, there's safety. I know that's been a very big thing: getting rid of undefined behavior, making a safer language. That's something I've constantly heard about for the past five, eight years. When I went to university, some people did C++, but not many people actually did. They were running off to languages such as Python and Java.
You know, when I was leaving, I saw more people doing JavaScript. But, talking about jobs right now, I still see a swath of C++ jobs available, but not that many people know the language as much as they used to.
And I remember this one from my off time. You know, in my free time I like to dance, so I go to the dance studio, and I tend to meet some people who are more college-aged than me.
I remember talking to this one girl who was here for an internship.
She just completed her first year at university, and then she's going to her second year.
I'm like, oh, I told her I'm a C++ developer.
She's like, oh, sorry, I'm not old enough for that.
I'm only a second year.
And I'm like, wow.
And, you know, maybe getting into this, I feel like there's been a lot of people saying,
yeah, C++ is unsafe. And I'm like, well, it depends on what you're doing in it. We have
made much more safer practices talking about, you know, the usage of raw pointers and new and
delete. Yes, that is more unsafe, but you know, now we have shared pointers and smart pointers
and, you know, memory safety is important.
Undefined behavior is important.
But in my limited professional experience,
I've seen more people shoot themselves in the foot
over the ecosystems that they build
rather than language features or standard library issues.
Yep, absolutely. And there's a lot we could say about all of those points, unfortunately at another time. They are perennial subjects, so they will come up again. So, anything else you want to let us know before we let you go, like where people can reach you to find out more?
Yeah, so my personal website is 16bpp.net, as in sixteen bits per pixel. You know, I originally got this when I wanted to write some video game tutorials back in university, but no. But, you know, I like having a very short domain.
My GitHub username is define-private-public. Yeah, one time in a project I did #define private public, and I was really surprised it worked. I'm like, you know what, that's my GitHub username from now on.
That's very nice. And that says a lot about you.
Yeah. Kids, if you're listening: don't ever do that in production, unless you're in absolute panic mode. And even then... Yeah. So.
Yeah. All right. Yeah, thank you very much for having me on. I really appreciated this.
Thank you very much for being our guest today, Ben.
Thanks, Ben.
Yeah.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in.
Or if you have a suggestion for a guest or topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate it if you can follow CppCast on Twitter or Mastodon.
You can also follow me and Phil individually on Twitter or Mastodon.
All those links, as well as the show notes, can be found on the podcast website at cppcast.com.
The theme music for this episode was provided by podcastthemes.com.