CppCast - ISO Papers and Merged Modules
Episode Date: January 3, 2019

Rob and Jason are joined by Isabella Muerte to discuss her experience presenting multiple papers at her first ISO meeting in San Diego and her thoughts on Merged Modules.

Isabella Muerte is a C++ Bruja, Build System Titan, and an open source advocate. She cares deeply about improving the workflow and debugging experience the C++ community currently has and is designing and implementing an experimental next-generation build system called Coven based on ideas mentioned in her CppCon 2017 talk "There Will Be Build Systems", while also simultaneously ripping CMake apart and putting it back together again with a library titled IXM. She recently launched aliasa.io, a small URL routing service intended for the CMake FetchContent module. She enjoys playing Destiny 2, acquiring tattoos, and is currently trying to master the five elements of earth, wind, water, fire, and gun (but she makes no promises). She bows to no entity but the terrifying Eldritch Daystar we call the "sun", and hopes to one day own two german shepherds named Rip and Tear.

News: Modern C++ Lamentations; C++ at the end of 2018; Getting you there - your C++ standardization efforts in 2019; Visual Studio IntelliCode

Isabella Muerte: @slurpsmadrips; Isabella's Twitch; Isabella's GitHub; Isabella's Blog

Links: aliasa.io; P0468R1 An Intrusive Smart Pointer; P1272R0 Byteswapping for fun&&nuf; P1275R0 Desert Sessions: Improving hostile environment interactions; P1276R0 Void Main; P1279R0 std::breakpoint; P1280R0 Integer Width Literals; CppCon 2017: Isabella Muerte "There Will Be Build Systems: I Configure Your Milkshake"

Sponsors: Backtrace

Hosts: @robwirving, @lefticus
Transcript
Episode 181 of CppCast with guest Isabella Muerte, recorded January 3rd, 2019. This episode is sponsored by Backtrace: backtrace.io/cppcast. In this episode, we talk about the progress C++ made in 2018,
and we talk to Isabella Muerte.
Izzy talks to us about the multiple papers she presented at the last ISO meeting and
her thoughts on merged modules. Welcome to episode 181 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
Hi, Rob. How are you doing?
Doing pretty good. Happy 2019.
Yes, 2019. And I still don't have my flying car.
No, I don't think we're going to get those anytime soon.
Well, Elon Musk wants to make underground cars instead of flying cars.
That's apparently the opposite of what I'm looking for.
I can see how it might make more sense, though.
I mean, car accidents happening the way they are now,
just imagine a car accident where it's flying 20 stories above your head.
That'd be a lot of chaos.
It'll be fine.
Yeah.
Okay, well, at the top of every episode we like to read a piece of feedback.
We got this email, which is very topical from
Michael and he
references this
article and a bunch of Twitter discussion
that's been going on lately saying
the viewpoints expressed in Aras's
article and by the angry
game dev mob on Twitter are not some kind of fringe
minority in games. It would probably be healthy
for the community to have a discussion about it in a friendly
setting like CppCast and maybe you can convince him to attend some standards meetings
And he put in contact info for the author of this article, as well as a couple other game developers, I guess, who have been vocal in this online discussion. And I think I'm gonna introduce our guest for the week first before we start talking more about this, because I think she will have some opinions on this as well.
Okie doke.
Okay, so joining us today is Isabella Muerte. Isabella is a C++ Bruja, Build System Titan, and an open source advocate. She cares deeply about improving the workflow and debugging experience the C++ community currently has, and is designing and implementing an experimental next-generation build system called Coven, based on ideas mentioned in her CppCon 2017 talk "There Will Be Build Systems", while also simultaneously ripping CMake apart and putting it back together again with a library titled IXM. She recently launched aliasa.io, a small URL routing service intended for the CMake FetchContent module. She enjoys playing Destiny 2, acquiring tattoos, and is currently trying to master the five elements of earth, wind, water, fire, and gun, but she makes no promises. She bows to no entity but the terrifying Eldritch Daystar we call the sun, and hopes to one day own two German Shepherds named Rip and Tear.
Izzy, welcome to the show.
I'm happy to be back. Thanks for having me.
It's quite a bio.
That is quite a bio. I was bored at five in the morning, so I figured I'd write it up, spin it up some more.
Do you currently have any dogs?
I don't. I live in an apartment that doesn't permit any pets whatsoever.
So not even a turtle?
No, not even a turtle. We had rats in the attic recently, though.
Oh, that's kind of like having a pet.
Like having a pet, except that it pees everywhere and it smells terrible, and then they die in the attic, and then, yeah, it's real cool.
Yeah, that sounds fun.
Yeah, anyways.
Okay, well, we got a couple news
articles to discuss, so let's start off with this one. It's titled Modern C++ Lamentations, and I guess it started off as a response to this developer looking into the ranges blog post by Eric Niebler.
And he kind of picked apart Eric Niebler's example with Pythagorean triples.
And I want to see what everyone else's take on this.
Maybe it's just not the best example of using ranges.
I don't know.
It really is not, in my opinion.
Just from a performance perspective,
ranges are basically LINQ for C++,
but they have the potential to be a lot faster.
And to be like, look what I can do with this math thing,
it doesn't really matter much, in my opinion,
because a game developer is not going to do that
where they need to have, in some cases,
sub-16 millisecond execution times.
And it's like a neat example,
but at the same time, it's an example
and not indicative, I think, of real-world code.
It's kind of like the original calendar example
that Eric showed off a couple years back.
Really cool to show off,
but we're also not getting zip in ranges for C++20,
so he can't really show that example off right now.
But I think that a better thing to have done
would have been like, okay, we're going to use Boost ASIO
and get a bunch of information from GitHub,
and then we'll just munch that through a massive range pipeline.
I think that would have been a much better example, in my opinion.
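For readers who haven't seen the posts, here is a minimal sketch of the pipeline style under discussion; it is deliberately not Eric's Pythagorean triples code, which relies on range-v3 facilities (such as views::for_each) that did not make it into C++20, and only uses what shipped in the C++20 <ranges> header.

```cpp
// Minimal flavor of a C++20 ranges pipeline; not the triples example itself.
#include <iostream>
#include <ranges>

int main() {
    namespace views = std::views;
    auto pipeline = views::iota(1, 20)                               // 1..19
                  | views::filter([](int i) { return i % 2 == 1; })  // odd numbers
                  | views::transform([](int i) { return i * i; })    // their squares
                  | views::take(5);                                  // first five
    for (int v : pipeline)
        std::cout << v << ' ';   // prints: 1 9 25 49 81
    std::cout << '\n';
}
```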
Well, even disregarding the performance complaints,
like the amount of boilerplate that this example required for his maybe view interface.
And I'm reading this and I'm like, is this C++? Because it's using C++20's templated lambda syntax, which I'm not used to reading yet.
Yeah, yeah.
And part of it I'm like, I don't even understand why these opening brackets are here.
Oh, okay. Yeah, yeah.
One of the things I thought was interesting in the article
is he points out some alternative ways
to make this code even better,
including one that's just kind of like a straight C way.
But he also points out that, you know,
coroutines being used in other languages
is a great way to do it.
And maybe he wasn't aware
that we are hopefully getting coroutines with C++20,
but that's definitely another much more straightforward example
to do this type of code.
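As a rough sketch of the coroutine shape being referred to: C++20 delivers the coroutine language machinery but no standard generator type, so the snippet below assumes one is available (std::generator only arrived later, in C++23; before that a library type such as cppcoro::generator stands in).

```cpp
// Sketch only: assumes a generator type exists. <generator> is C++23; with
// C++20 you would substitute a third-party generator instead.
#include <generator>
#include <tuple>

std::generator<std::tuple<int, int, int>> pythagorean_triples() {
    for (int z = 1; ; ++z)
        for (int x = 1; x <= z; ++x)
            for (int y = x; y <= z; ++y)
                if (x * x + y * y == z * z)
                    co_yield std::tuple{x, y, z};  // suspend, hand one triple out
}

// usage: for (auto [x, y, z] : pythagorean_triples() | std::views::take(10)) ...
```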
Based off of San Diego and soon to be Kona,
we'll see what happens.
I know there's been lots of back and forth on coroutines.
Is the version that we may get an allocating or non-allocating
or whatever that argument was?
I can't even tell anymore.
Apparently there might be even a third implementation at Kona based off of a brief conversation
at the closing plenary. I really wish I
had the brain matter to have been paying attention
at that point, considering it was my first meeting attending.
But briefly to just go back on the
ranges thing too, a lot of people were complaining
about the compile time stuff and uh one thing a lot of people don't know is that you can just
pass like dash f compile time report to you know gcc and clang and get that information out and i
ran into this a couple years back when i was trying to uh um you know took compile times of a
boost variant drop-in that i had implemented when i was at
apple and uh i found that like oh most of the time that is spent with boost is just like parsing
headers uh in a 2.75 you know compile time 2.65 of that is spent just you know reading those headers
in uh including you know boost variants like 376 headers every time, even on Clang.
Whereas
one 2,000 line file is going to
do you in under
.08 seconds or whatever.
That's averaged
over a million compiles
over the course of several days.
But yeah,
the compile time performance
I think is a problem with the current ranges implementation.
But yeah, anyways.
Yeah, there's a lot of complex machinery in it.
A lot of it's also split up into,
here's a single file for this one function.
And I think that that's problematic.
So it's a very boost way to approach things.
It's great for modularity
because you can then just delete that stuff later
or just not include it, but that's not what typically ends up happening.
So I'm curious if in your experimentation,
if you tried to see if PCH did or did not help in that kind of situation.
It really doesn't matter if we want PCH to work or not
because Cotire barely works with CMake.
CMake doesn't support PCH still.
The only build system out there that really does is
Meson, and it only supports
one single PCH for your entire
project, as opposed to on
a per-module basis, which
GCC and Clang support.
So the
pre-compiled performance of it doesn't
really matter, in my opinion.
The Clang module map stuff lets you automatically get that, it's like a free operation, but it's just not really well documented. You have to look at the source code for Clang, and that's several hours of your day lost just so you could see, is this faster than just not caring?
Right. Yeah.
Well, before we move on,
is there anything else worth mentioning with this game dev Twitter rant
that's been going on?
I haven't been following it much.
I just read this article.
So having worked in the games industry for a brief period there,
I ended up burning out cause I was working a hundred hours a week on a
regular basis,
which,
you know,
I recommend people not do,
you know,
stand up,
stand up for yourself,
maybe join a union.
Who knows?
The thing that I noticed is that there are a lot of workflows
that people have gotten used to,
and they don't want to spend time looking outside of their workflow
to see what is available.
I will admit that debugging on a release build is
not great, but
the JavaScript people have solved that with source maps.
We don't have that with our debuggers at the moment.
And I think
right now our only saving grace is the fact
that debugging C++ is actually a much nicer
experience than it is debugging, say,
Rust. But that is
a
gap that is closing every day.
That's just because the tool is available for Rust?
It's more just that GDB and LLDB are built for C++ and C, right? And so they have to do a whole bunch of stuff to work around it. They have to do these hacks where they're embedding Python script names that are located inside the cargo directory, so that it will run those Python scripts when it loads up the executable. It's just a whole mess.
Um, and I feel for the people that have to work on that, cause, uh, I've, I've tried to do stuff like that for early Coven implementations.
But we could be doing a lot better, to say the least.
Okay.
Well, the next article I had was from Bartek's blog,
and this is C++ at the end of 2018.
And it's really just a list of all the updates
and the state of the C++ ecosystem from the end of the year.
And it's looking pretty good.
C++ 20 is looking pretty strong from my point of view, I'd say.
And 2017 adoption has been going pretty well.
Yeah, it's been interesting having the chance to finally go to a standards meeting, um, and getting to see just like the massive amount of work that goes into it.
Granted, I didn't really help matters much by submitting 6% of the papers on the last mailing, not the post-mailing, but the pre-mailing. So that got some eyebrows raised, also on Twitter.
How many papers was that to make it 6%?
So it was 274 papers total, and I wrote 17.
It would have been 18, but the PhD was kind enough to pick up the slack for the [[nodiscard]] with a reason paper.
And then my name was attached to three others as well, just because I'd been involved with
them for a while, and they were also relevant to stuff that I'd been working on.
This was also me catching up for the last four years of not really being able to participate
and mental health stuff and other stuff like that.
That's a lot of papers.
A lot of papers, yeah.
If you look at the mailing, you'll just scroll down
and at some point it's just my name 12 times in a row.
Small break, five more papers for me. So I've got more papers to submit for Kona too. You have to contact Hal Finkel to get paper numbers, and I'm sure he's going to raise his eyebrows if he hears this, and when he sees my email requesting the number of papers that I'm going to be submitting.
So, yeah. And I wrote most of those papers also in
a weekend, which shocked a lot of people, but one weekend, not one weekend per paper,
not one weekend per paper; in a 48 hour period, I wrote all those papers, minus P0468, which is the retain pointer. That was instead a conversion from reStructuredText to Bikeshed, so that took a bit longer. And I still had, like, you know, eight hours of sleep every night. So it wasn't like I was just working non-stop the whole time; I was taking breaks and whatnot. I just had a free weekend to burn with nothing to do, so I was like, I'll just write some papers and see what happens. And then I just kept finishing papers, and I was like, oh.
So did you end up with a template or something to help you get started on the next paper, so that you just filled in the bits, or what?
Yeah, I would just copy-paste the preamble of the previous header or the previous Bikeshed file to the next one, but that was about it. I think I spent like an hour writing a small little script in PowerShell to just dump them off of the Bikeshed online website, so I don't have to have some special weird thing that runs curl and all this other stuff. So it ended up being pretty useful. And people can get that off of my GitHub, so you can see all the progress
I'm making and papers that I'm working on and whatnot. Um, as well as all the terrible, uh,
pun titles that I'm writing.
So, all right. Yeah, was there anything else you guys wanted to point out from this article before we move on? Because we'll definitely ask more about the papers you submitted at San Diego.
Well, I wanted to mention just the number of conferences
mentioned on here in his timeline: ACCU, C++Now, CppCon, code::dive, and that's not all of them, and it's growing each year. Looking forward to what we've got coming up, we've got two new conferences at least in 2019.
Yeah, we've got
C++ on Sea and then Core C++.
Yeah, which I am officially going to both of now.
Yes. Congratulations.
Wow.
Yeah. If you include the C++ committee meetings, which are on this list,
there's something going on every single month.
Almost.
It looks like there's gaps in the schedule in August and October,
but,
uh,
as a lot of conferences.
Yeah.
Well,
in this year,
let's see,
when is Kona?
Kona is in February.
Oh,
Kona is in February, right.
So Kona and C++ on Sea kind of overlap,
and then C++Now and Core C++ kind of overlap this year.
Yeah, there's going to be multiple things per month on the calendar for 2019.
Fun stuff.
2019.
Yeah, for 19.
Right, right.
Yeah.
Okay, well, since you just name
dropped the PhD a moment ago, Izzy, this article I have is Getting you there - your C++ standardization efforts in 2019. And this is a good post. It's kind of targeting people who are interested in
writing and submitting their own papers, which probably would have been a helpful read for you maybe a month or two ago, right?
So we were actually communicating over Discord. Actually, while we started the podcast, he
just pinged me on Discord. So we were actually communicating back and forth, both through DMs
as well as in the established #include <C++> channel, cpp-next,
where we discussed future stuff for the standard, and then we would paste our
early drafts and people would point out like um you know typos or something like that and we would
fix them up so it was it was a lot of like collaboration and kind of a very like back and
forth effort between the two of us. I think he wrote the second most papers for San Diego and then was also burned out at the end of it, and he was like, I don't know how you did that, and I was like, I have ADD meds, you know.
So, yeah, it's a good post, especially because if you're worried about travel, it goes into how you can actually get sponsorship for that.
I was at the time working for Target for the San Diego meeting.
Unfortunately I was terminated four days after the meeting ended.
Um, that's a whole
other story.
But yeah, so
I was able to actually go
and represent someone
and their interests, which was
interesting as well.
It was, yeah,
being able to have someone
pay for me
helped a lot.
But for Kona, I'm going to be paying my own way in this case.
Right.
And if you're not going with a company, this article is highlighting that you can apply to get assistance in order to cover travel costs and everything, right?
Yes.
However, my birthday is actually that Friday for Kona.
So I'm going to be turning 30 the day before the plenary vote.
So I'm going to be staying an extra week in Hawaii
for a vacation
and birthday present for myself
because it's Hawaii.
Like, come on.
I'm going to be in a room without windows
for seven days.
Like, let's look at the sun a little bit
or something, you know?
Well, at the same time of
year, approximately, I'll be in sunny, warm southern England in February.
Yeah, that doesn't add up in the same way, does it?
No, it doesn't. San Diego was very sunny, and it made me realize how much I missed Southern California's weather compared to the Bay Area. It gets way too cloudy up here, and it really does affect my mood and whatnot.
I like,
I was down there,
I was in my element.
It was like 90 degrees in the middle of November.
Everyone else is just like,
this is not right.
And I was just like,
no,
this is perfectly natural.
I'm fine with this.
So you've mentioned the include C++ discord a couple of times.
And I think it's been a while since we've talked about it on the show.
So if you want to give it a quick plug, by all means.
Yeah, so I'm actually a moderator for the Include CPP Discord.
We're basically just a group of people who wanted to make a more inclusive community for C++.
I wasn't actually planning to do a spiel for them.
So the best way to find out more about them is to go to includecpp.org or maybe include-cpp.org.
I actually need to check.
I'm sure you can put it in the show notes, though.
But myself, various other people in the C++ community, Simon Brand, Nicole Mazzucca, Yubi-San, the PhD, Matt Godbolt, various other people.
We all just kind of chill in there, help out new people.
It's a very warm, friendly, inviting group.
And as long as you follow our code of conduct,
everything's chill.
And I think we've had every single person
you just mentioned as a guest on the show.
Yes, that was on purpose.
Okay, and last article I have to go over real briefly
before we start talking more about San Diego and everything.
The Visual Studio IntelliCode is adding support for C++.
Yeah.
I don't think we've talked about this IntelliCode feature, but it basically is machine learning applied to GitHub repos and everything to help you type out code.
So when you're typing something,
it'll come up with suggestions based on,
you know,
other code people have written.
And it's also mentioning here,
I think just for C sharp so far that you can actually have it learn your own
code base.
So if you have a whole bunch of,
you know,
custom classes and methods that you want it to be knowledgeable about,
it can do that as well.
It's pretty cool.
I find that terrifying, because all it's going to take is for someone to game that system so that when you go
to type the word color,
it's going to like,
if you're in the UK,
it's going to use the American spelling.
And if you're in the U S it's going to use,
you know,
the non-American spelling and it's just going to be,
it's going to be real great.
Like I'm sure someone out there is going to like,
try to try to like write a,
uh,
you know,
SI unit tool and it's going to just suddenly start dumping in feet and
miles and Fahrenheit and just let,
let's do it.
Like,
I can't wait.
Well,
I feel like this takes like,
uh,
programming from stack overflow searches to the next level,
basically.
I mean,
isn't that a,
isn't that a bunch of tools
people have written where it's like,
you can write a function,
and if it can't find it,
it tries to find it through stack overflow,
and then it just goes through every single answer
until it compiles, and then just downloads that
and dumps that into your code.
Yeah, I think I saw something about that.
Yeah, I'm pretty sure that's a thing.
It's like a Python import hook.
It's great. Yeah, I'm pretty sure it's a thing. It's like a Python import hook. It's great.
Yeah, fun stuff.
Yeah.
Okay, well, as we mentioned,
so we've done two recent episodes
covering the ISO San Diego meeting.
Yes.
But it'd be interesting to get your perspective
since you were there as a new member.
And as you mentioned,
you wrote over a dozen proposals. So what was your take on San Diego? How did it go?
It was exhausting.
One thing I noticed that people have not mentioned at all was the hotel we stayed in. It felt kind of run down, and also, this isn't me dumping on the hotel, but I got whiplash when I walked in, because on a wall you just had Ron Burgundy's face with the text,
Stay Classy, San Diego, and I just had whiplash.
Is this real? Am I having a stroke?
No one's talked about this at all, and I'm just concerned that I imagined the whole thing, but no, it's a real thing.
I feel like that set the mood for me for the week. I was like, well, this is going to be just a trip. So it was really
great. I was all over the place because I had, you know, 17 papers that had to go from SG1 to SG16.
You know, a little bit of SG14 stuff was mentioned, but I didn't even get to go to those sessions. I was in LWG, or not LWG, LEWG and EWG mostly, outside of the incubators.
I was in the incubators a lot because I wrote so many papers and they were just like,
no, you need to get past us for this stuff.
So yeah, I was all over the place because I wrote so many. So I now have a very good handle on the various groups; each working group has a different focus, I've noticed. EWG is very much, this should be a language feature, but if it's controversial, people are going to try to push it off to the library. So they keep that open as an option.
LEWG, they have a very specific approach: the chair is not very fond of multiple namespaces. I disagree with that on a great many levels. But people in EWG are in favor of inline namespaces, people in LEWG are not. So there's some clashes there, and having to come to that compromise is interesting.
And I also got to sit in Core for a little bit, and that was actually quite interesting. It was a nice, for me at least, chill session. Noticed a few typos in the
merged modules paper, which will be presented at Kona as well. And I was actually so busy that at one point a paper that I had written was reviewed by LEWG while I was in another room discussing another paper that I wrote, and that paper got forwarded, without any arguments, to LWG for Kona. I don't know if that's a first, to have an R0 get sent to LWG when the author's not in the room but is present at the meeting. That was for the integer width literals, and that's so that we can write, you know, 1_i64, 1_u8, that kind of a thing.
And does that include size_t also?
So size_t and ptrdiff_t are actually language level, and that was written by the PhD. So that's his paper. I was like, okay, so <cstdint> is part of the standard; this should be a library level thing. I actually would like it to be. I think that there's no reason why we can't have the uint_least types. I made a mistake in the paper and was just like, oh. So let me back up.
The integer width literals paper is to replace the UINT64_C macros that have been in <cstdint> for years. And those return the uint_least... or, I'm sorry, uint_least64_t, or whatever the N is.
So that's the smallest possible integer that can hold that value?
Yes.
All right.
Yes.
Which, if you're on a platform that provides uint64_t, it's the same type typically, right?
I think that's actually required, that if they are provided they have to be exactly the same, but I don't believe... I need to go back and check. But regardless, that's what those macros return. My paper does not specify that; it instead specifies these just aren't available if you don't have uint8_t and friends. So I briefly threw a thing out on Twitter a couple days ago,
and a bunch of people came back and said,
yeah, you should just change it.
And I was like, okay.
So that's going to be in the R1 for Kona.
Personally, I would love to have those actually be built into the language
instead of having them be, you know,
just like something they have to include.
And so like Rust, they have I64,
I8, U8, all these things, those are guaranteed to be those widths. But then Rust also doesn't
run on z/OS, or platforms where, you know, char is 32 bits. So, you know, that's kind of why we
have to have those and that's fine. But I would love to have those actually as types built into
the language.
And from what I've seen,
even in the game dev world,
people will typically not actually use U64 as the lowercase.
They'll actually use it as the, the uppercase form.
So it'll be like capital U64 instead of a lowercase U64.
So I would want to do a search like through GitHub,
but it's a little hard to do that at the moment.
There's no like massive search engine available.
Maybe Bing will be available one day.
I don't know.
Um, given the Microsoft purchase, but, um, yeah.
Uh, but yes, that's, that's the, the, that's what that paper is.
And, um, uh, I think there was like a few people that were like, ah, this like kind
of interferes with something with Visual Studio, but like, we're not worried about it.
Like it, it's an implementation defined thing.
No one uses it.
We're, we're good.
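To make the goal of the integer width literals concrete, here is a hand-rolled sketch of the kind of literal being proposed; the suffix spellings and namespace below are illustrative, not necessarily P1280's exact wording.

```cpp
// Hand-rolled illustration of width literals; names are illustrative only.
#include <cstdint>

namespace int_literals {
    constexpr std::uint8_t operator""_u8(unsigned long long v) {
        return static_cast<std::uint8_t>(v);
    }
    constexpr std::int64_t operator""_i64(unsigned long long v) {
        return static_cast<std::int64_t>(v);
    }
}

using namespace int_literals;
constexpr auto flags = 0b1010_u8;    // a std::uint8_t, not an int
constexpr auto big   = 1_i64 << 40;  // a std::int64_t from the very start
```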
And then the other paper that got forwarded to LWG as well was byteswap. So, if all goes well, we're finally going to get a constexpr byteswap function in C++20, after who knows how many years of not having a byteswap function. I don't know why it was so hard for us to get this in.
I know that some people were trying to focus on it for networking.
I chose the approach of, well, what if I'm just trying to read bytes
that were written by a big endian platform to a file?
I don't care about networking in that case.
So it just works on bytes.
Sorry, it just works on integers that are available
and built into the language, and you just swap bytes, and that's it.
And it all does.
So there's some people that said,
like, oh, you need to do it the Rob Pike way
where you're working with std byte arrays,
but those are not constexpr,
and you can't create an integer out of them,
at least easily.
So I chose just the simplest route,
and let's just standardize byte swap.
Even Arduinos have had it as a built-in instruction intrinsic since 2008.
So are they like big-endian to little-endian, or is it just byteswap and you don't care? It's just going to be the opposite order?
Byteswap, you don't care, and we're also getting std::endian in C++20. So if you want to implement hton and ntoh, you can do that yourself. That is not my problem. Some people might want that to be out of the paper; they can write their own proposal.
Right, yeah. I'm just thinking about an emulator that I worked on recently where I just need it always to be in little endian when I'm talking to the emulator, or whatever.
Yeah, yep, pretty much. That's what you can do with this, and it's constexpr, too. So, you know, constexpr, all those things.
Right.
Yeah.
As one does.
Yeah, yeah.
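As a sketch of what a constexpr byte swap paired with std::endian buys you (a std::byteswap along these lines did eventually ship, in C++23), something like this works on plain integers with no networking in sight:

```cpp
// Hand-rolled 32-bit byte swap plus an endian check; std::endian is C++20 <bit>.
#include <bit>
#include <cstdint>

constexpr std::uint32_t bswap32(std::uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Reading a value that a big-endian producer wrote to a file, no sockets involved:
constexpr std::uint32_t from_big_endian(std::uint32_t raw) {
    return std::endian::native == std::endian::big ? raw : bswap32(raw);
}

static_assert(bswap32(0x11223344u) == 0x44332211u);
```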
I actually, here, let me just open my browser real quick here.
I'm trying to get to my paper list.
Well, by the time you wrote 20 or whatever,
it'd be hard to keep track of them all.
Right, right, which is why, so on my GitHub,
which I believe I sent you a link to, Rob,
so you can put that in the show notes,
I have a massive table of just like where they currently are,
the current papers and whatnot, for C++20.
So, yeah, it's actually one moment.
I need to move this out of the way so I can still see well on the screen. So, yeah, do you just want me to go through all of them real quick, rapid fire, to discuss which ones moved forward and where we're at with some of them, or do you want to hone in on any specific ones?
So I guess we just discussed the two that you said did move forward, right?
So they moved forward to LWG; those are the ones that have moved out of the incubators. They weren't even sent to the incubators; they were actually sent to LEWG first. The ones that moved out of EWGI and LEWGI, let's see here. So intrusive smart pointer finally moved; it was sent to LEWG at one point, kicked back down to LEWGI, moved back out to LEWG.
It has to undergo some API discussion stuff, unfortunately,
and that is just for bike-shedding some API stuff.
There's going to be a big argument over
whether we should adopt by default or retain by default.
This is the intrusive smart pointer, retain versus, you know, release, or not release, sorry, retain versus adopt on construction. I'm of the opinion that, having worked with 30 C APIs and the factories that they involve, you want to adopt when you're constructing it.
And if not, then you're just going to be,
you know, incrementing.
So I really, I don't really understand what the use case is
or know what the use case is for an intrusive smart pointer.
Right, so there's actually quite a few.
So in the standards library, we currently have,
besides shared pointers control block,
we actually have like two for sure intrusive smart pointer count types.
The first is exception pointer,
which cannot be implemented with shared pointer.
It has to be implemented as a intrusive smart pointer count.
It's also opaque.
That could be implemented with retain pointer.
And then there's also std future and std promise.
They have a shared block as well,
and that can also be implemented in that sense.
So the intrusive part is the control block, basically, the count.
Technically, yeah.
So an intrusive smart pointer count is just,
it's like shared pointer,
except that the object itself has control over its own lifetime.
So if you've ever worked with DirectX, Objective-C, Python, OpenCL,
some parts of Mono's C runtime, like MRuby.
There's a whole bunch of stuff that if it's got a ref and deref or increment and decrement,
it's an intrusive smart pointer count capable.
So retain_ptr came about because, in 2016 at CppCon, I was at Gor's really fantastic coroutines talk, where he showed, like, oh, here it is running in LLVM, and also I'm on Linux, I'm using Visual Studio Code, and check out all this crazy stuff I'm doing right here, this just got inlined. And then he was showing off some data types, and I was like, wait, intrusive pointer? I've been working on this thing for like three months now on the side, because I was trying to see, could I unify running Python and Objective-C together without having to write Objective-C or Objective-C++? The answer, by the way, is yes. That's a long standing library that I've been working on for a long time. I'm basically waiting for
reflection at this point.
But, um,
and I just, like, was like,
you know, SG-14's tomorrow. What if I just write this as, like, a small paper?
So I wrote it overnight,
went into SG-14 the next
morning on that Wednesday, and was like,
hey, I have this, like, really important paper.
I wrote it last night. Like, this is gonna take,
like, 15 minutes of your time.
I ended up going a little over that, which ended up causing the person that was supposed to go first to lose
out on time, which I feel bad about, but I wasn't really like, you know, mentally present with all
of that. I was just like, look at this paper. It's really important. Um, and, uh, most hands
in the room went up. I think one person's hand did not, uh, but no one was against it moving forward.
Um, I presented it again with some, uh, changes at SG14 in CppCon this past year in 2018,
and Arvid Gerstmann came up to me and was like,
hey, I've implemented this six times in my source code,
and I can replace it actually, in every case, with this implementation that you have.
So I don't know if he's actually using mine, but it has a use.
And if Arvid Gerstmann is like, hey, this thing that you wrote is useful, it's probably useful.
So there's just some arguments over whether it needs to not increment the reference count when you pass a pointer to it, or automatically do that.
Currently, I provide a traits type as a customization point, and my argument is if you don't override that, it adopts by default, and you have to retain
on copy, or explicitly retain. And in practice, I have not actually had to do retain by default.
Everything is always created with a rough count of one when it's allocated. And while some people can argue that, but what if I'm creating this object
with a rough count of zero?
It's not really, you don't want to do that
and then place it on the stack
and then copy it to something else.
There's a few edge cases,
but those are like, you can just specialize for those.
So what may end up happening though, and this is like a very like low, low chance
of it happening, but it could happen where we end up with both a retain pointer and an
adopt pointer.
Um, and that is something that, that I'm going to as, uh, ask as a straw poll for in Kona.
Um, but we'll see what happens.
So.
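To make the adopt-versus-retain question concrete, here is a toy sketch of an intrusively counted object and a pointer that adopts by default. The names are invented for illustration and are not P0468's API; the paper uses a traits customization point rather than a hard-coded tag, and omits none of the assignment and move machinery elided here for brevity.

```cpp
// Toy illustration of adopt-by-default vs. explicit retain; not P0468's API.
#include <atomic>
#include <cassert>

struct widget {                          // an intrusively counted object
    std::atomic<int> refs{1};            // C-style factories hand it out at count 1
    void retain()  { refs.fetch_add(1); }
    void release() { if (refs.fetch_sub(1) == 1) delete this; }
};

struct retain_t {};
inline constexpr retain_t retain{};      // tag: bump the count on construction

class widget_ptr {                       // minimal intrusive smart pointer sketch
    widget* p_ = nullptr;
public:
    explicit widget_ptr(widget* p) : p_(p) {}             // adopts by default
    widget_ptr(widget* p, retain_t) : p_(p) { if (p_) p_->retain(); }
    widget_ptr(const widget_ptr& o) : p_(o.p_) { if (p_) p_->retain(); }
    ~widget_ptr() { if (p_) p_->release(); }
    widget* get() const { return p_; }
    // assignment and move omitted for brevity
};

widget* make_widget() { return new widget; }  // factory: ref count already 1

int main() {
    widget_ptr owner{make_widget()};           // adopt the factory's reference
    widget_ptr borrower{owner.get(), retain};  // explicitly retain a second one
    assert(owner.get()->refs == 2);
}
```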
Okay.
Yeah.
Okay.
And then, let's see what else. A big thing that happened with SG16 was the Desert Sessions paper, which is to add a std::environment and std::arguments type to the standard. So, in every language except C and C++, you can iterate over the environment variables that are in your program's running session, hence the name session.
And you can also get the arguments
from anywhere in your program, like forever.
And those arguments don't change.
Technically they can, but also...
Yeah, technically they can.
Technically they can,
but in every other language they're considered immutable.
So there's no reason why we can't also say,
you know, copy these over or make a view of them
or something similar.
This is kind of an evolution of adding a new signature for main, which was to add an initializer_list of string or string_view, or something similar like that.
That just came about because
I had made a tweet and someone submitted that as a
paper. I was like, oh gosh, I hope I didn't influence that because that paper got eviscerated.
It turns out that they had not seen my tweet.
They had come up with it separately.
But it's good to know that there are other people that are going down that path.
But I want the sessions API to be UTF-8 safe.
And I want it to be UTF-8 by default. And we don't have those available right now in the standard. And so that ended up basically waiting on SG16.
However, someone named Pedro Rodriguez on GitHub submitted a pull request about a month and a half ago. I haven't
responded to the guy because I'm just in awe
because he keeps submitting changes and whatnot
to his
pull request, but he implemented the entire
paper and it looks like he's only
been programming for two years. It's
incredible that he was able to implement it
and also to implement it in a fairly
Unicode-safe way. It doesn't work for, like, z/OS and stuff like that, so we are going to require some transcoding and stuff like that. But as an initial, hey, this can be implemented, it's really impressive, especially that someone who has not been in the C++ muck for as long as, you know, the three of us have, did that so quickly. It's really, really fantastic.
I think,
I think that is proof that it is a good idea at the very least.
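A hand-rolled facsimile of the idea is easy to sketch; the names below are invented for illustration, not P1275's spelling, and the real proposal additionally wants the strings to be UTF-8 by default, which this does nothing about.

```cpp
// Facsimile of a "sessions" style API: stash the arguments once, then any part
// of the program can iterate them. Names here are illustrative, not the paper's.
#include <cstdlib>
#include <iostream>
#include <span>
#include <string>
#include <string_view>
#include <vector>

namespace session {
    inline std::vector<std::string_view> args;   // filled exactly once, in main

    std::span<const std::string_view> arguments() { return args; }

    std::string_view environment(std::string_view key, std::string_view fallback = {}) {
        const char* v = std::getenv(std::string(key).c_str());
        return v ? std::string_view{v} : fallback;
    }
}

int main(int argc, char** argv) {
    session::args.assign(argv, argv + argc);     // the only place main is involved
    std::cout << "HOME=" << session::environment("HOME", "<unset>") << '\n';
    for (std::string_view a : session::arguments())
        std::cout << a << '\n';
}
```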
So yeah. And then let's see off the top of my head here.
We also had an offsetof for the modern era that didn't get presented yet. Some people say that might step on the reflection API stuff. I disagree with that, because the whole point is that you pass a pointer to member to the function, so if you stored it as a non-constexpr variable, you can't reflect on that. So there's really no point in making it, like, in the realm of the constexpr reflection API.
Although I want it to be constexpr.
And then a bunch of people said,
you need to ask David from EDG if it's possible.
He came back and said, yes, it is.
So that may be in 20, maybe in 23,
really doesn't, really don't know.
It hasn't been presented yet though,
but a lot of people seem positive about it.
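The interface shape being described looks roughly like the sketch below; the name and signature are mine, not the paper's, and a genuinely constexpr version needs compiler support (which is what the question to EDG was about), so this illustration computes the offset at run time.

```cpp
// Illustrative offset_of taking a pointer-to-member instead of the offsetof
// macro. Runtime-only sketch; requires T to be default-constructible.
#include <cstddef>
#include <cstdint>

template <class T, class Member>
std::size_t offset_of(Member T::* member) {
    static const T probe{};
    return static_cast<std::size_t>(
        reinterpret_cast<const unsigned char*>(&(probe.*member)) -
        reinterpret_cast<const unsigned char*>(&probe));
}

struct packet { std::uint16_t id; std::uint32_t length; };
// usage: offset_of(&packet::length) rather than offsetof(packet, length)
```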
Let's see here. Oh yeah, let's talk about the one paper that got absolutely, 100%, rejected. Nobody voted in favor of it.
so p1305 um which had a typo in the title which is very unfortunate it was a deprecate the address
of operator is the title it was supposed to unfortunate. It was Deprecate the Address of Operator is the title.
It was supposed to be Deprecate Overloading the Address of Operator,
but that's what happens when you write 17 papers in 48 hours.
You get tired and you forget to fix the title when you send the email in to Hal.
So Attila from, I want to say Hungary.
If not, then I apologize, Attila, for messing up your home country.
But he had pointed out, you know, this could just be done with a concept.
And I was like, oh, that's a pretty good point.
Could be done with a what, I'm sorry?
A concept.
Yeah, so you can have a concept that says, like, yeah,
you can't overload the address of operator in these APIs.
That solves the same problem.
And I was like, oh, yeah.
So when the hands went up, I voted neutral on my own paper.
And everyone was against it, strongly against it.
So it's been put to bed now.
EWG is not going to get rid of the address of operator overloading anytime soon.
And we'll just have to get used to it.
That's one of those things that's basically impossible to get right, right?
Not really.
I mean, like,
it depends on what you're using it for.
And, like, it had
a use case. It's not used
that often.
This is the unary ampersand, technically, right?
Correct, correct, yeah.
There's only two really well-known libraries
out there that use it that are
available to the public. The first is
Microsoft's COM pointer,
which I should note, retain pointer,
in conjunction with out pointer from the PhD,
supersedes, as well as Boost Spirit 3.
Of course Spirit does.
If it can be overloaded, Boost Spirit has done it.
Yes, it has.
So it was unfortunate, but that's where we ended up.
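Attila's point can be sketched as a constraint that an API opts into; the concept below is illustrative, not from any paper, and it checks the observable behavior of unary & rather than whether an overload exists.

```cpp
// Illustrative concept: only accept types whose unary & yields a plain T*.
#include <concepts>
#include <memory>

template <class T>
concept plainly_addressable = requires(T& t) {
    { &t } -> std::same_as<T*>;   // a proxy-returning operator& fails this
};

template <plainly_addressable T>
T* address_of(T& t) { return &t; }   // safe to use & directly here
// For everything else, std::addressof still does the job.
```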
Let's see here.
Another paper that got rejected was inline module partitions,
but that's going to be refocused to combine it with the feature presentations paper.
And if you don't know what those are,
it just means it's going to be used for conditional imports and,
well, not conditional imports, for conditional scopes of stuff.
So it's going to be a global static if, basically,
is what I'm going to treat it as going forward.
I didn't get to present the workflow operators,
which would solve some precedence stuff for ranges, coroutines, executors. It basically becomes like an infix operator without it actually being an infix operator. So that didn't get presented, unfortunately, so
we may have to wait until 23, which I was hoping to avoid,
but we'll see what happens. Again,
you know, everything's kind of malleable.
Because I got the paper in at San
Diego, technically, it can still
be presented and possibly make it in 20.
I don't know, though. Someone will probably
say that I'm wrong. And that's
okay.
Let's see here.
Void main actually got some...
So P1276, void main, making it so that main can have a void return type,
actually got some decent feedback from EWGI.
The only reason it didn't move forward is because of modular main,
and that is from Richard Smith.
So currently we treat main as a special little boy, where it's like, okay, it can return int, but then you don't actually have to put a return there, and there's all this special wording that just says, if it falls off the end, it'll just act like you returned zero, it's fine, don't worry about it, and then you just call std::exit(0) at the end of it. It's like, well, then why don't we just do that with a void main? Every other language in the world has void as the return type. Main doesn't actually return anything; exit does. When you return that value from main, it's actually going to be setting that value to some variable, and then it calls std::exit at the very, very end of that. And if you call std::exit before that, then that's when all that other special logic comes into play. But, you know, there's no real point to it, if that makes sense. So it may get combined with modular main, and then we can backport it to what's known now as either legacy or classic C++.
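The special-casing being described is easy to see in miniature; the snippet below shows today's rule, with the paper's suggestion noted in a comment.

```cpp
// main is the only function where flowing off the end is defined behavior:
// it acts as if "return 0;" were there, and that value feeds the same
// machinery std::exit uses. P1276 essentially asks: since the value is
// synthesized anyway, why not also allow "void main() {}"?
#include <cstdlib>

int main() {
    // no return statement needed here; equivalent to: return 0;
}
```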
What does modular main accomplish?
Modular main basically says that you can have
multiple main entry points for a module,
if I'm remembering correctly.
This is not in merged modules,
which we said that we were going to discuss.
Yeah.
So it's
also being rewritten for Kona,
so I would wait for the pre-mailing and then read
that paper. Okay.
Because I believe Richard is going to try to
integrate some of Void Main's
wording into it.
But that's
neither here nor there, in my opinion.
I wanted to interrupt this
discussion for just a moment to bring you a word from our sponsors.
Backtrace is a debugging platform that improves software quality, reliability, and support
by bringing deep introspection and automation throughout the software error lifecycle.
Spend less time debugging and reduce your mean time to resolution
by using the first and only platform to combine symbolic debugging, error aggregation, and state analysis.
At the time of error, Backtrace jumps into action,
capturing detailed dumps of application and environmental state.
Backtrace then performs automated analysis on process memory and executable code
to classify errors and highlight important signals,
such as heap corruption, malware, and much more.
This data is aggregated and archived in a centralized object store,
providing your team a single system to investigate errors across your environments.
Join industry leaders like Fastly, Message Systems, and AppNexus
that use Backtrace to modernize their debugging infrastructure.
It's free to try, minutes to set up, fully featured with no commitment necessary.
Check them out at backtrace.io/cppcast.
I was going to say, looking at the time where we are in the interview here,
it might make sense to just take this as a segue to talk about modules.
Yes.
So let's talk about modules.
So a lot has changed since last year when I wrote
my Millennials are Killing the Modules TS blog post,
where I was very angry.
Last time we had you on the show,
it was mostly to let you voice your complaints about modules,
right?
What's my complaints.
And it turned into an hour long,
uh,
uh,
stream of consciousness with no editing whatsoever.
Um,
I'm glad that this time I didn't,
uh,
decided to drink a diet Coke before,
uh,
the,
the podcast.
So I wasn't,
I'm not going to Rick and Morty this up.
If I burping everywhere.
Um,
so,
um,
uh, yeah, so a lot's changed.
The Atom proposal from Richard Smith, combined with a bunch of other feedback,
resulted in us getting the Bellevue compromise, which led to merged modules.
We've added a few more things since then.
Specifically, module is going to be a context-sensitive keyword.
Hallelujah.
I'm so excited.
Like Nathan Sidwell made a really good argument,
which was, if we're going to make the argument that we can't use await as a keyword and we have to make it co_await,
then why are we going to make module be a required keyword when it's used like
three to four times more on GitHub as a variable name?
It doesn't make sense.
So he had some rules.
He figured out how to make it context sensitive. That also ties into the new module preamble.
So we still have a preamble.
The preamble can end early if you are depending on,
like say preprocessor definitions that are from something in the,
what's now called the global module fragment.
So we have a few.
Yeah, there's a lot to it.
Merged modules is very large.
But it's better, in my opinion,
because we now have this thing called a module fragment.
So a module fragment is just a subset of a module.
It doesn't mean it is a submodule.
It is just a part of the purview of a module.
Partial classes, kind of?
Sort of.
It's like a partial module,
except that it has to be included in that group
when you go to create that final exported module.
So modules' scopes have now gone from,
here is a module of both an interface and implementation,
and they can be, like, N files. It is now: here are your N files for that module fragment, or not module fragment, for that module.
And if you don't have all of them, then it's going to not work correctly.
So a bunch of fragments make a whole.
Correct, yes.
And then we also have something called the global module fragment,
and that's basically the legacy preamble.
That's where you can put all your includes and all that stuff like that. So the layout of a file now goes module, I believe,
semicolon, and that's it. And then you put your includes, defines, whatever you need there.
And then you put export module, your module name, and then you're good to go. You can also skip all
of that and just put export module, your module name.
So that global module fragment becomes optional.
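In file form, the layout just described looks like this; the module and import names are illustrative.

```cpp
// A module interface unit under the merged-modules layout (names illustrative).
module;                        // begins the optional global module fragment

#include <cstdint>             // legacy headers and macros live up here
#define WIDGET_VERSION 3

export module widget;          // ends the preamble; the module purview begins

import other.dependency;       // imports come right after the module declaration

export std::uint32_t widget_version() { return WIDGET_VERSION; }
```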
And this is a good compromise.
I'm not like 100% happy about this, obviously, because it's not perfect.
But nothing's going to be with this because there's so many different opinions on this stuff.
And also, this is just from a cursory glance at merge modules. It's like 120 pages. Um, and I, you know, can't keep it all in
my head. I'm not on the core working group, so I can't like, you know, just focus on that entirely.
But from a cursory glance and having been at, at San Diego, um, it's, it's shaping up to be
really great. Um, it would be even better if we could add a paper that
I wrote called Implicit Module Partition Lookup, which Richard Smith and Hubert Tong gave me some feedback on at San Diego. So I'm going to have to change some of the wording for it, and we may be able to actually get it in for 20, even though it was technically put off of the schedule for San
Diego. But it's basically P1302 is the number. But it's basically a way to say, here's a directory
or some module container, and you just throw that at your compiler, and you're done. And that will
look for a specific entry point that you've specified, and that will be your module. So that's everything in there, and it will only pull in the fragments that you've imported in there. But it basically says, you know, the compiler does not become a build system, but it does become a lookup system for that specific module. The build system is still responsible for actually compiling the object files,
but the compiler will still be able to actually generate
the
final exported
module interface that everything would use.
The build system is still
responsible for finding the actual imports that are not
implicit module partitions.
There's
a lot to it, and you
have to know where merged modules are right now and where they're going to be to fully understand the full scope of it. Because modules are still extremely in flux; there's no way around that, simply because we haven't taken any official straw polls.
A lot of people have different opinions on things.
Nathan said, well, why don't we just make GCC open a Unix socket to your build system, and your build system will be a Unix socket server. And that sounds terrifying to me, personally, because
I don't want to have to worry about
CVEs coming in
for my build system.
I don't want to have to worry about, like, oh, someone
accidentally bound
this TCP socket
for Windows
7 because it had to work on Windows 7
or something like that. Someone ported
this to Windows XP, which doesn't have Unix sockets.
And oops, now all this person's, you know, this machine was compromised.
Now a bunch of people's personal information was out there because CMake wasn't expecting it to be bound to the Internet or something like that.
You know, it's not, that doesn't sound like a good idea to me.
Having directories though,
every OS has a directory.
Even DOS has directories.
And P1302 has two approaches.
I'm going to be dropping the hierarchy one as the primary example
instead of going with the splayed one,
which is you just have a directory
and you just have to have one hierarchy deep
and every OS in pretty much our history has a one level hierarchy at the very least.
Even ZOS supports this to some degree.
So if ZOS can do it, then like, you know, anything, anything is possible, right?
So that's the path I would like to take it in. Uh, this would also mean
that, like, Boris doesn't have to rewrite all of build2, Meson doesn't have to do a whole bunch of work to get it working, CMake doesn't have to do a whole bunch of work. Your makefiles will still technically work, because make will read the timestamps from directories just fine. Ninja will work just fine as well. FASTBuild will still work. You can still tunnel these things across the network through Icecream or distcc or whatever you want to use, sccache, ccache, whatever. I spent a lot of time thinking this stuff through from
you know, February of last year till September when I presented the idea to Richard, and he was like,
this sounds interesting. So yeah, it's a very simple lookup system without it actually having
any real huge performance hits, right? Because does it make sense to have to have the compiler
open a socket and send strings across the socket instead of
just saying hey this file that you parsed out just keep reading more files like it just makes sense
to me like it's already launched it's already reading stuff in like have it just keep reading
that stuff it just seems like it would be more performant especially on systems where
launching a process is a very expensive operation, like Windows.
So, yeah.
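A rough sketch of the shape P1302 argues for, with the directory layout shown in comments; the file and module names are illustrative, and the partition syntax itself is just the ordinary merged-modules syntax.

```cpp
// Hypothetical "module container" directory handed to the compiler, which then
// finds the partitions itself; the build system only schedules object files.
//
//   widget/                -- the module container
//     widget.cpp           -- primary interface:  export module widget;
//     parser.cpp           -- partition:          export module widget:parser;
//     render.cpp           -- partition:          export module widget:render;
//
// Inside the primary interface, the partitions are stitched together with the
// ordinary partition syntax:
export module widget;
export import :parser;
export import :render;
```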
So that's the gist of it for modules, I guess.
There's a lot more involved,
and I'm sure some people like Matthias Stern are just shaking their heads like,
no, we just need pre-compiled headers.
But I think that if I...
I just recently redesigned my website. So I now
finally have the time to like sit down and write this stuff out. So I'm going to be writing a very,
very long post on what modules are, what you can do with them, and why we actually do need P1302.
Because otherwise we're going to end up with like vendor lock-in where it's like, okay,
well, our module system only works with GCC. This module system only works with Clang. This one only works with Bazel.
This one only works with Blaze.
This one only works with CMake.
This one only works with Build 2.
We don't want vendor lock-in, right?
Like you want to be able to move
between these build systems freely,
and this will help with tooling,
especially if game developers care about this stuff,
which they should.
So that's just the meat of the matter, if you will.
Okay.
Yeah.
And then there was like one other thing.
Oh, yeah.
So future stuff that I'm tackling also,
and I will hopefully have one or two papers available for Kona,
so in addition to the inline module partitions being used for basically a global static if token soup,
discussed, you know, what's the word?
Conditional imports with Adam from MongoDB.
And we came up with a brief syntax.
We're going to try some tri-boolean logic with it
to replicate the preprocessor
while also keeping it simple for build systems to be able to process that stuff
and do preprocessing if they need to.
And there's that.
There's also automatic inclusion of everything into a namespace,
as well as automatic usings on import.
This is all like, I'm just throwing words out here,
but this is stuff that people will hopefully be able to look forward to. And I will hopefully go over in a post in the future.
And I think that's about it. The biggest hurdle I think that we're going to have going forward is
going to be the std embed issue with dependencies. It's currently one of the largest mailing list or SG15 mailing list
threads that we've had running
which that list is usually
kind of dead and it's been
pretty lively lately thanks to that
discussion.
So I'm not going to
make a statement either way. I did
help the PhD with some of the syntax so that we wouldn't step on each other's toes,
as well as
like kind of keeping it, what's the word, like uniform with the approach that I'm going to be
taking. That may end up getting it voted down as a result, which I feel kind of bad for if that's
the case. But the dependency issue is a big one and will matter for um you know these uh the these upcoming uh meetings so
okay yeah i'm glad to know that uh you have a better opinion about modules and you think it
will be making it into uh c++ 20 i don't know because there are a lot of people that have negative opinions on it. However, those
opinions are based off of where
the modules TS was,
as well as the Atom proposal being separate
and just
a lot of people not being in the room when
other things were discussed. So
when we were talking about the global module fragment,
a lot of things were brought up and I was like,
nah, I'm super against this. And then someone
else explained it to me in a different way and I was like, oh, why didn't they just say that instead?
It would have made more sense, and I would have been in favor of it, and I wouldn't have raised my hand as SA strongly against in that moment.
And then been just angry the whole time until someone explained it to me.
And I think some of it's just like, it's a very large paper merged
modules.
It's larger than Atom, and it's larger than the modules TS. And I'm glad that some stuff got kicked out of the modules TS, honestly. Like, extern module was not a good idea. It would have allowed recursively dependent modules, and recursive dependencies are not a good idea in build systems. You don't want that, and that would have permitted it.
And I'm glad that that removed it.
Did I want everything that was in the Atom proposal?
Eh, not really, no.
But I like where this compromise is.
It's a very precarious compromise.
But if merge modules make it in at Kona,
then the final vote will be in Cologne.
Our last chance to get them in for the C++20 standard will be in Cologne.
When is that?
That is immediately after Kona.
Well, not immediately after Kona, but it's the next meeting after Kona.
I believe it's June-ish.
Yeah, okay. I don't know.
I'm hoping to go to that, but if it interferes with
the Bay Area Pride
celebrations that weekend, I'm not going to go.
So, we'll see what happens.
Okay.
Well, it's been great talking to you again, Izzy.
Yeah, thanks for having me on.
It's been a treat.
Well, and since you said you're no longer with your last job,
are you currently looking for work?
Any plug that you want to get in real quick?
Yeah, I mean, so I am currently contracting.
That may turn into something full-time,
but, you know, if something better comes along,
anything remote,
hit me up on Twitter.
My DMs are not open. Just tweet at me, though,
and I will open a DM
for you.
So if anyone is looking to hire you, send you
a message. Send me a message. Hit me up in some
way. We'll talk.
Okay.
Thanks, Izzy. Thanks for coming on.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in,
or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate if you can like CppCast on Facebook and follow CppCast on Twitter.
You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter.
We'd also like to thank all our patrons who help support the show through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode was provided by podcastthemes.com.