CppCast - Kona Trip Report
Episode Date: February 28, 2019

Rob and Jason are joined by Peter Bindels to talk about features approved at the ISO C++ Kona meeting for C++20, including Modules, Coroutines, and much more.

Peter Bindels is a C++ software engineer who prides himself on writing code that is easy to use, easy to work with, and well-readable to anybody familiar with the language. Since the last time he's been on CppCast he presented at multiple conferences about build tooling and simple code. In combining both, he created the build tool Evoke from cpp-dependencies and other smaller projects, leading to a simple-to-use build system presented at CppCon 2018. Earlier this year he presented its companion 2D graphics library for absolute beginners, called Pixel, at C++ on Sea. He's active in both standards development as well as helping out with various things at conferences.

News
2019-02 Kona ISO C++ Committee Trip Report
All Meeting C++ 2018 talks on YouTube
Core C++ Speaker List

Peter Bindels
@dascandy42
Peter Bindels' GitHub

Links
CppCon 2018: Peter Bindels "Build Systems: a Simple Solution to a Complicated Problem"
C++Now 2018: Peter Bindels "A View to a View"
Concerns about module toolability

Sponsors
Download PVS-Studio
Technologies used in the PVS-Studio code analyzer for finding bugs and potential vulnerabilities

Hosts
@robwirving
@lefticus
Transcript
Episode 188 of CppCast with guest Peter Bindels, recorded February 27th, 2019.
In this episode, we talk about new features voted into C++20 at Kona.
Peter Bindels joins us after attending the Kona meeting.
With Peter, we talk about modules,
coroutines, and much more. Welcome to episode 188 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
I'm doing all right, Rob. How are you doing?
I'm doing fine. Getting over a little cold, but I'm okay.
Getting over a cold. I've been fortunate. I don't think I've really had one this year.
No?
No, but I mean, I got really sick after all of my traveling last year,
so maybe that just like did me good for like eight months or something.
Well, here in the Raleigh area, we had like a week and a half of rain, so that probably contributed to it. And everyone who's listening in England right now is saying boo-hoo, and everyone listening in Seattle too. But it was abnormal for us here.
Yeah. Right.
Okay, at the top of every episode I like to read a piece of feedback. This week we got a tweet from Attila saying: CppCast, you mentioned C++ contracts could interact badly with constexpr. Why is that? D has contracts and all D functions are constexpr by default, yet this code fails to compile, as it should.
I think this is something we talked about last week, right, Jason?
Yeah, it's something that someone told me at a conference, and, you know, I was at CppCon, and I haven't gone back and done any research on it yet. But contracts will not be evaluated in a constexpr context, I believe, was the statement
that someone told me. And I don't know if there's still a question about that. Personally, I would
say that contracts should be evaluated in a constexpr context, and it should fail to compile.
It should be an ill-formed program, just like any kind of undefined behavior or something would be at compile time.
But I don't know where that status last left off, and that's specifically what I was alluding to.
I see our guest is nodding his head in agreement with you.
Well, then we'll have to talk about that.
Yeah.
Well, we'd love to hear your thoughts about the show as well.
You can always reach out to us on Facebook, Twitter, or email us at feedback@cppcast.com.
And don't forget to leave us a review on iTunes.
Joining us today is Peter Bindels.
Peter is a C++ software engineer who prides himself on writing code that is easy to use, easy to work with, and well-readable to anybody familiar with the language.
Since last time he's been on CppCast, he presented at multiple conferences about build tooling and simple code.
In combining both, he created the build tool Evoke from CppDependencies and other smaller projects, leading to a simple-to-use build system presented at CppCon 2018.
Earlier this year, he presented its companion 2D graphics library "for absolute," called Pixel, at C++ on Sea. He's active in both standards development as well as helping out with various things at conferences. Peter, welcome back to the show.
Thank you. Okay, I made a typo; that was meant to be "absolute beginners."
Oh, okay, that makes more sense.
Oh yeah, it does. Yeah, I should send these things out for review before we go live.
I should read them more before we go live.
So, that's okay. So the 2D graphics library is for absolute beginners, called Pixel.
Yes, so I've also been talking
With the study group SG20
Which is about education
Which basically focuses on
Given that we have a language called C++
Which is, according to many, horrendously complicated and not suitable for beginners,
how do we teach it to people who are familiar with a programming language
and then carry on from that point?
And I've basically given them a challenge, which is,
I think we can also teach C++ to absolute beginners,
both from the theoretical point of view, as in I don't see any reason why we shouldn't,
and from the practical point of view,
if we want to do this for absolute beginners, what do we actually need?
So I've basically boiled that down into three subjects that we need to fix,
and most of those are usable for developers and experienced people as well.
So to start with, I need to be able to build my software,
and I need to do that without hugely complicated build scripting.
So let's see how
simple we can get things. I talked to you last time about cpp-dependencies, which looks at your code and just knows how to build it. And cpp-dependencies allows you to export that to CMake.
And then I got to thinking, what happens if I just use this for all my CMake files? As in,
I don't write a single one by hand, I just generate everything. Does that actually work?
So I tried it on a few simple projects, and it actually works.
The only thing I'm doing by hand is renaming files.
So I said, well, let's try this on a more complicated project.
And as far as I can tell so far, that still works.
So I figured, why don't I just include that into cpp-dependencies?
So instead of having it as a build and code introspection tool,
use it as a build tool.
So of course, that's a giant paradigm shift.
So let's take the entire code base,
move it over there,
optimize it for that use case,
and make it do the entire build background as well.
And so far, that's worked.
And that's ended me up in a talk at CppCon.
Okay, sounds good.
Then you get into the question,
given that I can now build code,
and as an absolute beginner,
I can put Hello World in a folder and just
type evoke and everything comes out, including
runnable executables, what's next?
And then I tried
to put the question on Twitter, which is
what do we need to change about C++ to
make it suitable for absolute beginners?
And a number of people, at least
five or so, replied that we need
something that's better than
console output. So
we need, say, 2D graphics or something that allows you to get quick feedback, visual feedback, i.e.
we need some simple graphics that allows a beginner to start out and understand what he's doing.
And that's basically an idea. I developed it together with J.S. Budweik, I think I'm pronouncing his name right, and that's
become Pixel. So again, a talk at
C++ on Sea, and that
one is targeted at the absolute, absolute
beginner, which means that the first example
that you get starts out with people who don't
understand a for loop, a while loop,
or an if statement.
And you can still do something useful with it
that's easy and quick graphical feedback.
We can go over this more later,
but I'm kind of curious,
how does Pixel compare to some of the other
well-known C++ graphics libraries like SFML?
That's a really good question,
and one that I got during the talk as well.
So the big difference is,
if you look at many libraries,
there's SDL, GLFW,
they basically provide you the middle layer abstraction that allows
you to use OpenGL and other frameworks
but they don't apply the low level
they don't give you the simple interface
to use it. SFML, the one that you mentioned,
does do that, but it mostly
hides the 2D interface
behind its own interface,
which means that if you would like to progress from that
point on to say OpenGL or 3D graphics,
you are essentially starting from zero again
using SFML as a window library.
And Pixel is set up based on SDL,
as in I'm not trying to do the entire thing
from scratch again,
but it starts from what it can do
and then gives you 2D graphics built on it
in such a way that you can easily extend it
to 3D graphics and so on
and keep using parts of it that you already are familiar with.
Okay. Well, let's go to the news real quick, talk about a couple of these articles,
and then we'll start talking more about your experience at Kona
and also definitely talk more about Evoke and Pixel, okay?
Yeah, sure.
Although you say real quick, Rob, but getting some of the things in here,
this might
be a very long news segment.
Yeah, so I guess the first thing I put in here was the Reddit trip report put together by Bryce Lelbach. I'm not sure if we necessarily need to go through everything here now, but maybe keep this in mind as we, you know, start talking to Peter about his experience at Kona.
Is there anything we should call out first? We have to call out
modules and we have to call out coroutines.
Both of those are a very special thing
that happened at Kona, which is mostly the
reason I wanted to be there.
If you've been following the standardization,
you know that modules have been somewhat
contended over the past few meetings,
the C++ committee meetings,
and there has been a lot of feedback
on whether or not it's actually usable,
whether it's buildable,
and what the interface for using modules
from a compiler and build system should even look like.
And as far as we know,
and we had a lot of discussions at Kona
about how to actually use it,
the design as specified in the TS is fine,
but the compiler interfaces may need more work beyond this stage.
Okay.
There's a GCC-style interface, there's a Clang-style interface, which is not quite the same, and there's a Visual Studio interface that is again very different. The combination of all three shows that every bit of functionality that we need to implement it well in the build system exists and is allowed within the TS, but they all show that there are different ways of doing things, and all of them have different advantages and disadvantages.
For us, the main discussion was, given what we can see that the TS allows and requires,
is it possible within the TS to make any kind of improvement that we must require before it can be
shipped? And that was essentially the point behind the paper that I co-wrote, which is P1427. And the idea of that is, if there is anything that requires to be fixed before we
can ship modules and have them usable, then we have to do it now, because otherwise we'd be too
late. And after a lot of discussion over many days, we came to the conclusion that everything
currently in the modules TS is as it should be, and there's nothing in the tooling environment that we cannot do,
but there may be some guidelines that we need to set out to tell people
this is how you should do it because otherwise you're giving us a really hard time.
But we also got to the discussion that actually that's not something
we could put into the international standard because it says it's about
C++, the programming language, and not how does your compiler look.
So from the tooling point of view,
we need to have something that we can basically output
that says this is how you should be implementing this,
and if you do this, then it's much easier.
Basically some guidance advice.
You could try to put it in non-normative notes
in the international standard,
but it's basically putting things in the wrong place.
And for that, we decided that we should start working
on a TR, a technical report, about the C++ ecosystem.
So that's including build tools, that's including code checkers,
and to make sure that those things can actually read and use modules
and then interact with compilers, how they interact with each other, the works.
Okay.
So we took the set of papers that we wanted to target at the IS,
and we basically are retargeting them towards the TR so that we can make a TR that represents this is how you use modules, this
is how you should set up your code so that the tools can use them.
I don't think we've ever really discussed the TRs here, have we, Rob?
I don't think we have, no.
No, can you give us like, I mean, now, okay, let's just go ahead and admit that the news and interview portions of today's episode will be intertwined.
Yes. So can you tell us what exactly a TR is?
In this case, it's no surprise that you haven't seen it before, because the only time we had a TR before, it was a TR of a different kind than this one.
And that was the standard library updates, right?
Yes.
Okay.
So this was way back in 2005, I believe.
We had a TR called TR1, for lack of a better name, which was the stuff that is beyond C++03
put into a report of some kind.
And as I understand it, that is TR type 1, as it was called in 2005.
Okay.
There was, after that point, the TR type 2, or TR number 2 of type 1, which was the second
version of the library additions. And at that point, they got into C++11. At that point,
they also changed the naming of things. So the TR type 1 is now called a TS, a technical
specification.
Ah, okay.
And we're moving in the direction that TSs are actually subject-focused, and should be subject-focused, instead of completely broad.
library fundamentals, TS1, TS2, and TS3, which is sort of broad for what it's actually doing.
And we're seeing that the more focused ones like coroutines, modules, they are working a lot better. So most of the things that
we're trying to do in the standard are going to be done in TSs. Right. Okay. And that's technical
specification, right? Yes. Okay. So this is what we've had in the past and what we are having now.
But as I said, there is TR type 1 in the past, and there's two more TR types. And one of those is basically an advice-giving specification that is non-normative,
as in you're not breaking any standard by not implementing it, but it is a strong recommendation.
The people wag their fingers at you if you don't do this.
Exactly.
We'll tut-tut at you.
Very, very frowny if you don't do this.
Okay.
But there's no official power behind us saying
you are not implementing the standard as we specified
it. And that is the kind of technical report
that we're making in this case.
What is the progress
on this technical report
as of the Kona meeting?
Is it already being written?
The progress as of the Kona meeting
is that we told the Evolution Working Group
that we are planning to make one,
and that they said, yes, this sounds like a good idea.
And that is about as far as we got.
We realized that we had a few papers that should now be targeted towards the TR, and they are not yet.
So those will be in the Kona post-meeting mailing with the updates to be targeting TR.
And beyond that, I don't think we got much further than that.
So to sum up, if I may, the modules TS was accepted.
You all said, yes, this is technically correct for what we can put in the standard,
but people really need to have some guidelines.
The modules TS as is, is correct as far as everything that needs to go in the standard.
Okay.
Which means that there's no reason for us to hold it back because there's nothing we are going to be changing.
How long did it take to actually discuss that?
I think we started on Tuesday, no, Monday evening, and we concluded this on Thursday
evening.
And you didn't go to bed, probably.
We went to bed at some point, but we had evening discussions lasting until midnight most of
the days.
So that was your primary focus while you were at the meeting?
That was my primary focus to be there. The secondary focus was to listen in on what happens
to coroutines, in part because of what happened in the past meetings, and in part because I really
would like to use them. So I sort of have a stake at being able to use them in the next standard.
So maybe we should talk about coroutines for a bit. Coroutines are the second major feature that got accepted out of Kona.
And I think we've talked a little bit about how there was Gore's original proposal,
and then there were a couple other proposals about changes people would like to see with coroutines.
What's the final product of coroutines look like?
Well, I'd like to first emphasize a little bit on something that I've heard a lot of people say,
which is that they are very surprised that coroutines went in.
Okay.
Okay.
Because the discussion in the past was that at Rapperswill, which was in June last year,
we had a coroutines proposal.
It was sent for a vote in plenary, which is basically Evolution says this is good, and then at plenary it was voted down.
It went to San Diego, it was accepted in Evolution, went to plenary again, was voted down again, and then we get to Kona. And we get basically
the same proposal going to evolution.
Evolution says this is fine, it goes to
plenary. And at this point there are two
things that can happen, and both of them sound like
somebody's going to be disappointed, which is either
you accept it now, which means that you could have accepted it like eight months ago, and people are going to be like, why are you slowing down the standard, you should have accepted it back then.
Right.
Or alternatively, you decide that we're voting no again, in which case everybody says you're holding up the standard: it's going to take another three years before this gets into the standard. It's going to slow down everything.
So there's basically no way to win.
But from a practical point of view, I understand the way the vote went.
Because the last time at Rapperswil, it was a vote that says,
if we're not doing it now, there's no competing proposals,
there's discussions between people,
and the best thing that can happen is that in four months we'll have more information,
and we can then vote on including it or not.
San Diego, essentially the same thing, except there was a third proposal being added by the Bulgarian national body. And this time at Kona, it's basically we have the three proposals next
to each other. We see what they mean. We know what they do. Everybody's talked to each other.
All the information is on the table. And if we decide to vote no, it's going to be no for C++20
entirely. Okay. Okay. So in this case, there's actually an impetus to get it into the standard
if we trust that this is a good proposal.
And theoretically, everyone says we are fully informed now
because we have three proposals and we can weigh them equally against each other.
Yes, and to put it slightly in the words of Herb Sutter,
if you were not informed at this point,
then you should have read the proposals as they were in the mailing.
Right, right.
I.e., it's your own responsibility to keep up with it.
And the conclusion was that we have coroutines TS.
It has a few tweaks that came in from core coroutines, which is good.
We had a big discussion that took a full morning about symmetric coroutines,
which is a Bulgarian national body proposal.
And there were many concerns from different sides, from compiler implementers, from front-end implementers,
about the implementability of it. And in the end, it was decided that the coroutines TS
was going to go to a plenary vote, essentially unmodified. And in this case, it got accepted
because otherwise it would have been three more years, and we don't see a major benefit to doing
so. So you said that there were a couple changes made based on the core coroutines proposal, though?
Yes, but I'm going to have to let you find those yourself because I've got to look up the exact details.
There are very tiny changes as far as I can tell, as in you can still take a look at the entire coroutines proposal, and it is mostly correct.
Okay.
Go ahead.
Sorry, go ahead, Rob.
I was just going to say, now that they have voted it in, but C++20
isn't completely final yet, are we
expecting that they might still make additional
tweaks to coroutines based on some of these
other proposals? Given that it's not
officially an international standard, everything
is still up for tweaks.
The guarantee that you get is
that we are now beyond Kona,
which means the last meeting for getting big features in was Kona,
and anything new will go into 23.
There is no option anymore for putting in any new features.
If there's anything wrong with the current features,
or if we find big problems, then we might tweak them.
We'll probably tweak them.
And if something really terrible comes out now,
like this is actually unimplementable in some set of compilers, then it might still get taken out of the working draft, but that's very unlikely.
But to be fair...
Exactly one time.
It already has test implementations in both Clang and Visual Studio, right?
That's not even entirely accurate. It has implementations in four different compilers.
Well, okay. So at least those two, anyhow.
It has had an implementation in Visual Studio for the past five years, in Clang for four years, Edison Design Group has a front end that also works with it, and GCC is implementing it.
So it probably will not come out with anything major
saying this is fundamentally flawed.
Yes.
I mean, coroutines are like an ancient concept
as far as computer science goes.
Like the earliest editions of The Art of Computer Programming have descriptions of how coroutines work.
So it seems like something that should be doable.
But I keep seeing all these conversations about stackless or heapless, or like how the state is managed, that kind of thing.
Do you know what we actually ended up with?
We ended up with stackless coroutines,
and they are usually heap allocating.
Okay, usually heap allocating.
Yes, I'll get to that in a second.
So the idea behind coroutines is that
instead of having a function that executes,
terminates, and returns a single value,
you can have a function that executes
and sort of keeps living.
So the next time you invoke it,
it will continue from that point on
and can give you a second and a third and a fourth value.
And it can also be called a second time from a different location,
which means that the calling and returning sequence
that you expect in C++ to be very simple
is now a lot more complicated in the presence of coroutines.
So there's two models that are basically the idea behind coroutines.
You have the stackful ones, which are: I call a function, it returns, yet stays alive, and its
location is on my own stack. Okay. Which means that if you want to call anything else, you'll
have to keep in mind that there's a bit of coroutine on your stack, and you cannot return
all the way and have the coroutine still exist, because something else will override it. Okay.
So that's one method of doing coroutines.
The alternative is that instead of putting it on your stack, you give it some bit of
memory that is then its stack.
Okay.
And given that you tell the compiler just magically invent a stack for it and go figure
out how big it needs to be, it can make it fairly efficiently.
That's one of the big discussions that we still had at Kona.
And it will allocate a piece of memory for it and put its stack over there.
Then if you have a second coroutine, it will have a different stack altogether.
So those are stackless because they're not on your stack.
They're not on your stack, but the memory still has to come from somewhere.
Exactly, which is why they are mostly allocating.
Okay.
In the case that your compiler can, for example, prove that it's not outliving your function,
then it can just take whatever space it needs, allocate your stack anyway, and use it there.
It sounds hypothetically like Clang's heap elision rules, which I hear people say, like, well, yeah, but do those actually come up in real code? Maybe they're more likely to come up in real code.
Yes, and that's pretty much exactly the same thing you get here. Which is one of the points of discussion still, is that if you have something that cannot allocate,
i.e. embedded targets,
then this might not be good enough
that it's usually non-allocating.
Because if you tune your optimizer
just a little bit differently,
maybe it now doesn't know how to do that
and your code fails to compile.
Or maybe upgrade to a newer version of the compiler
and it has different optimizer settings
so the stuff that used to compile
now just doesn't compile.
Like, if it happens to generate a call to new, or if it happens to be able to see through that and not, then whether or not your standard library can link is basically what you're getting down to.
Yes. So that might be slightly problematic, and that's still one of the points of discussion, how we can make this slightly better so that it's always non-allocating. That was one example that was called out by Timur Doumler, who was working on std::audio,
which was discussed but not voted in. It's going to be a 23-something. And the idea there is that
you have a real-time application that can allocate memory until it enters the real-time part,
which is, for example, if you have a digital audio workstation used by DJs, I'm fine with having a whole bunch of allocations and dynamic behavior
as soon as the thing is starting up and loading my songs.
But the moment I'm actually running a show,
I do not want any dynamic allocations,
and I really don't want the thing to crash.
And for that one, he demoed that if you allocate all the coroutines up front,
it cannot do any runtime allocations while you're just using your coroutines.
So while they are still allocating, you do get the benefits of having no allocations
at runtime, so they are usable in audio code.
Right.
As an example.
But the compiler can't necessarily prove every single possible allocation that's necessary
for the execution of the program.
It cannot generically prove that you will never be allocating in your program.
Right. But your linker can, by just not having an operator new.
Right.
Right.
So that if you do at any point actually potentially allocate, it will just not link.
Right.
Okay.
So all of this about allocation and coroutines raises one obvious question from my perspective,
is what's the constexpr coroutine story?
That is a really good question
and I have no answers whatsoever.
Ah, constexpr.
You might have to make a complete talk about making
coroutines constexpr.
You know, constexpr all the things, part two.
Yeah, well, yeah.
We'll have to think about that.
That's going to be a big discussion.
Sounds like a good talk idea.
Constexpr even more of the rest of the things.
Yeah.
Yes.
I wanted to interrupt the discussion for just a moment to bring you a word from our sponsors.
PVS Studio Analyzer detects a wide range of bugs.
This is possible thanks to the combination of various techniques, such as data flow analysis, symbolic execution, method annotations, and pattern-based matching analysis.
PVS Studio team invites listeners to get acquainted with the article,
Technologies Used in the PVS Studio Code Analyzer for Finding Bugs and Potential Vulnerabilities,
a link to which will be given in the podcast description.
The article describes the analyzer's internal design principles and reveals the magic that allows detecting some types of bugs.
Well, should we go over some of the other features that were voted into C++20 then?
Yeah, let's just keep going. Like, the next one is slightly disappointing to me.
So this is static, thread_local, and lambda capture for structured bindings?
Yeah, that's cool, but it still doesn't allow constexpr structured bindings use.
You cannot do
structured bindings in constexpr?
No, you can't. It's the reference nature of how they have to be implemented that causes problems.
Oh, right.
Yes, oh right, exactly.
Anyone who's familiar with it goes,
oh, yeah.
There's been a couple of functions I've had to rewrite
where I was using structured binding,
but nope, not at compile time.
Yep, we did get a lot more constexpr this time.
Yes.
So we have constexpr allocation in some scenarios.
We have constexpr vector.
We are potentially getting,
I thought that I saw it on the list of things to be discussed,
but I don't see it on the list of things that were voted in,
constexpr other allocators or other containers.
So those are in the pipeline, but they're apparently not in yet.
It's constexpr vector, yeah.
Sorry, go ahead, Rob.
I was just going to say there's this second list
of features that have been approved for C++20
at this meeting or prior, but have not yet been added to C++20 because they're still completing the specification.
So is the expectation that everything on the second list will make it in?
It says hopefully it will be added.
Yes.
So in this case, the exact details on how the standard works is that we try to get things through evolution and library evolution,
which means that the design is essentially finished and agreed upon.
And beyond that point, somebody needs to actually write the words that go into the standard library or standard specification,
which happens in the library and core working groups.
The first two have had their deadline basically a hard exit at the end of Kona,
and there were many things that were voted in for which there's no words yet.
Then the next hard deadline is
for Library and Core Working
Group to finish their part of the specification,
which is checking all words, making sure everything
has the exact right meaning, making sure
that no comments have been accidentally italicized
that shouldn't have been. And they have a hard
deadline at the end of Cologne, which is
in July. So basically,
this means that all of these
things were voted in by Evolution and by Library Evolution, but they are being processed by Core and Library as far as they can manage, and anything that is at the bottom of the list that is not finished by July will just not make it.
Okay.
So there is a priority in that, and that is a priority you can influence if you are in one of the proposals.
But there is basically just a finite amount of time for them to do things. So there will be some things that are essentially okayed, but not yet in the specification. So they will not be in 20
technically. So are you aware, like, if someone just wanted to get involved in the standard, is there room for an extra set of eyeballs that can help review some of these papers and try to help get C++20 finalized?
I know the exciting thing to do is to write new papers, but there's clearly a lot of work here, right?
Yes.
So there's a lot of work in core and library, and most of it happens to end up basically at this point in the standardization cycle.
Everybody's finished writing their proposals for 20 because there's a hard deadline.
And at this point, they've basically got a whole bulk of work pending on just them.
And just after Cologne, they'll have a basic lull where nothing can happen
because you are not adding anything to the current standard.
And beyond that point, nobody's pushing yet because we have three years.
Right.
You can definitely help them out. They are usually open for additional help, but keep in mind that the level at which they are discussing things is very high. I've been in the Core Working Group for about two hours at some point in Rapperswil, and it took me a lot of time to even follow what they were talking about.
so if one of these proposals looks particularly interesting to you, or not proposals,
but things that have been approved, but not yet merged into the standard, you might click on it,
look for the author that's currently working on it, set of authors most likely,
message them and say, I'm willing to help if you need some help, but don't be surprised if it's
over your head because it's just that high up.
Yeah. So the thing you can always do is offer them help with doing the wording. That is something
that most library authors don't have a lot of experience with, and it needs to be essentially
technically correct. It needs to be exact in what you're trying to say. So if you have any
experience in reading standardese, then this might be something you can help them out with a lot.
And if you are or have been in a Working Group and know what kind of things they typically do,
please do help them out because that's the stuff that takes a lot of time from Core,
basically telling you this is not how we do it because we did it differently in all of these
locations. Right. So yes, you can definitely help out the authors there and you are helping out Core
Working Group and Library Working Group by doing that.
Okay. Well, maybe we should go back to the list of things that have been voted into the draft. std::polymorphic_allocator. I thought polymorphic allocator was already a thing.
That one's... I don't know what that's about. Wasn't that PMR?
Okay. As far as I know, it's basically PMR as a vocabulary type. So this is taking the PMR memory resource and wrapping it in an allocator
so you can use it in any allocator type location.
Okay.
That should make them easier to use then, right?
Yeah.
Of course, there's std::lerp, which everyone obviously would know what that does.
Of course. You haven't lerped anything?
Yes, I actually have no idea what the genesis for the name lerp is.
I can look at this and see what I know what it is.
It helps you find linear interpolation between values and midpoint ranges and stuff.
And okay, fine.
What the heck is a lerp, though?
Why lerp?
It's lerp because that's the short name for linear interpolation.
And also in mathematics, in especially 3D graphics,
there are a few related terms like slerp, which is spherical linear interpolation.
Okay.
Basically taking, in this case, it's two vectors, and you linearly interpolate between them.
And in the case of a spherical one,
that's like taking two points on a sphere and then
doing a linear interpolation along the surface of the sphere instead of through the sphere.
This sounds like something you have personally had to do when you're experienced with routing
and GPS software and such.
This is not something that we have in that context, but in context of graphics, this
is what you do when you're basically moving something from one point to
another. Okay. So in graphics, in animation, this is a very, very common function to use,
and it's had a whole lot of precedence in using GLSL, HLSL, and so on. And all of those languages
call it LERP. So if you look at the discussion notes on what happened to the proposal, there was
a suggestion to change the name to something like linear interpolate, which was voted down because
of a strong history of use of the actual term lerp to do this. Wow. Okay. So they've gone with
what everyone has agreed is the common name for this thing. Yep. And most of the people that are
trying to use this are trying to do the thing that lerp does on my GPU, which now is std::lerp.
But it will be confusing to some people who are not familiar with what it does.
And also, I think it's worth pointing
out that this is not just for numbers,
it's for pointers, which
I think then is handy for
the kind of thing like implementing
Quicksort, basically.
Binary search algorithms.
I did not yet notice this was usable for that.
That's a new thing for me.
For binary searching
anyhow, I guess would be... Oh yeah, it says
Java's binary search implementation uses
this. Okay, yes. So that's the idea.
It's to help eliminate a category
of bugs from that all binary
searches are
broken article from forever ago.
From 2006, yes.
Well, that makes a lot of sense, but then again, if you're trying to do binary search, there is a function called std::binary_search, which might just be exactly what you need.
Well, yes, that's true, but perhaps, I mean, you know, for some other use case or something.
Yeah, definitely.
Oh, actually, if you look at the paper, there is a point called naming.
That's not talking about Lerp, actually.
It's talking about midpoint.
Okay. Yes.
So there's one other thing in the currently added features, which is std::ssize, which is a thing that's been the subject of a lot of discussion. I'm not sure if all the listeners have been following along, but there's a big discussion, like east const versus west const.
Well, yes, of course.
Where many people have wristbands that say east const or west const, and some have both. This one is about signed versus unsigned.
Right. And so far, that discussion has gone in the direction of: we've been using size_t, which is unsigned, as a size.
So we should be using that for size in all the other containers as well, including the ones that we're adding.
Which went fine until somebody managed to sneak in a signed size on span.
Yes.
Which then got into a huge discussion saying, well, we have a signed size now.
Maybe we should change everything else to match, maybe. And after
a whole lot of discussion, they basically got to the conclusion that they are changing it to an unsigned size, and everything gets an ssize function, which is a signed size.
So we're adding an ssize function to every container?
As far as I've understood, it's basically being added to everything.
Okay, now I want, all right, I want someone to convince me that I'm wrong. So I will give my
spiel for just a moment here. But I think this is an absolutely terrible idea. And it is because
today, with let's say, standard deck, which the way it does allocations of blocks can easily have
many billions of items in it on a modern computer. If it has
three billion items in it, because we have designed it such that that is allowed,
and we now call this ssize function, what do we get? And for our listeners who aren't following along for whatever reason, three billion is greater than two to the thirty-first; it's greater than the max signed 32-bit integer, yes. So it's on 32-bit platforms that this would be a problem.
So the theoretical objection is that if we ever exceed the allowable amount for a signed size,
that it would be undefined behavior. But in a practical note, it's pretty much impossible on
any platform to do that.
Well, okay. So I guess you're saying on a 32-bit platform with 32-bit sizes,
it's highly unlikely that you would ever exceed 2 billion items in a container. And on a 64-bit
platform with 64-bit sizes, it's unlikely you would ever exceed whatever that giant number is.
Yes. So if you're on a 32-bit platform
for ease of numbers, because they're much smaller, you would need to have something that is bigger
than two gigabytes, when the total addressable space by both you and the kernel together for
anything at all is four gigabytes. And you need to have more than two gigabytes of contiguous memory.
Well, it doesn't have to be contiguous with something like deck or list.
Yes, but if you're doing that,
then you have even more part of that
as overhead in making it non-contiguous.
Okay.
I may accept your answer
that this is not terrible.
This is not a thing
that you're practically going to be able to do.
Right.
If you're wondering about the size of that
in 64-bit,
I think the number would be 9 exabytes.
Right. Which is, to put it
into terms that are slightly more understandable,
9 million terabytes.
As in, take the biggest commercial hard disk
you can find right now and have like a million
of them, and have
that as your contents of your storage.
And then exceed it.
Because you need to exceed it before you get to the point where
this breaks. And that compares to the
amount of advantage that you get
when you subtract something from the size
and you just see if it's negative, then it must have been empty.
I can see a strong point for std::ssize, for a signed size.
Right. Yeah, okay.
I will concede that it's not as terrible of an idea as I thought it was.
That said, I am on the side that an unsigned size
is the only thing that makes sense.
But I can see arguments for both sides.
How did it get into span in the first place? Was it just a mistake? Was it just overlooked?
It was not a mistake. No, I don't know the exact details. Jason, do you know?
Okay. I don't know the exact details, but I am almost 100% certain that it was intentional, because it was from someone who is in the camp of these things should be signed.
There are a lot of discussions
about these kinds of details, and
in some cases they actually have a bit
of merit.
I conceded my point. Fine.
Is there anything else we want to go
over that we haven't touched on already?
These are basically all simple, minor things.
Yeah.
I like the flat map and flat set that have been approved,
but still need wording.
If you're looking at that list, then definitely those are nice.
But I'm more looking forward to context per vector
and context per string than flat map.
Yes, but I've written flat map
a couple of times, because
for very tiny maps that you need to create
quickly, and you
don't care about lookup time as much,
or if lookup time
is very small because linear search
through three elements is faster than a binary
search, then yeah,
I'm cool with flat map.
I've also at some point looked at a benchmark looking at flat map compared to a standard
map and an unordered map, which basically shows that up to about 100 elements using
a flat map is faster.
I easily believe that, yeah, on current hardware.
From that point on, unordered map is faster, and at no point is a standard map actually
the fastest.
Right.
That was, to me, the biggest surprise,
as in, up to now we used
map, and then we get unordered map, and
now we get flat map, which basically means
that for the purpose of lookup, map
stopped having a function.
Yeah. Because at no point is it the best option.
That's interesting.
And the rationale behind flat map being faster
is because it's very much more
compact, it's co-located, so any lookups are just running through your cache.
So if you have small objects, there's like 16 of them per cache line.
So searching for something is really quick.
Yeah, and that was my use case, is I needed maps that were literally like five elements at most.
Yep. So even if you have slightly bigger objects, if you just put three to five things in there,
that's still going to be very fast compared to having a regular map,
which starts to build a red-black tree up to three or four levels deep,
which means you get four pointer chases and cache invalidations and so on,
which makes it a lot slower.
Right.
So just using an unordered map would have been a ready improvement there.
But an unordered map specifies, as far as I remember, that all the elements have to be allocated outside of the actual map. So you have the map as a hash index, and then there are chains beneath that of the actual elements.
Yeah, all of the containers,
well, now that they have the node handle member functions,
what is it, extract and merge, something like that,
where you can literally steal elements out of a map
and put it into another one.
So they have to be heap allocated separately outside of it.
I think as far as I know that they did that already.
Yeah, yes.
If only because trying not to do that is very complicated and very likely to lead to tiny bugs
that are going to crash applications in corner cases.
Right.
So, Peter, how many times have you managed to make it to a standards meeting?
Was this one of your first ones
This is not the first one. I've been to Rapperswil as well, and I wanted to go to San Diego, but it's a faraway place for me, so I wasn't able to go there.
Wait a minute, is Kona actually closer than San Diego?
Kona is surprisingly far away from everybody.
Yeah, I'm not surprised at all.
Kona was, I think, a 26-hour flight in total, including changeover at Seattle.
Okay.
And it's an 11-hour time difference, so it's perfectly opposite of at home.
But it's the one meeting where we get to have the final discussion about contracts,
concepts, ranges, modules, coroutines.
It is the most important one in these three years.
Okay.
So as much as I'm going to try to help out everybody who's going to Cologne because there's really important work to be done for wording,
I sadly will not be able to go because it conflicts with something else I have to be at.
Yeah, and Cologne is practically down the street from you.
Well, Cologne is an hour's drive.
I could just go there every day.
Right.
I don't recommend doing that at a standards meeting
because you are basically doing very, very complicated C++ for 16 hours a day.
And given that you're doing very complicated stuff for 16 hours a day
and need like eight hours of sleep a day,
you are busy 24-7,
so you don't have time to drive an hour back and an hour forth.
Plus, you'll fall asleep while driving.
Yes, I don't recommend that.
Yeah, that's a bad one. It sounds exaggerated, but the time I was at
Rapperswil, I was there for five days, tried to leave on Friday evening to drive home,
and had to pull over on a rest stop in Germany somewhere because I was just
almost falling asleep while driving. That is a considerably further drive than Cologne for you.
That is true. That was about nine hours.
Yeah.
And going to Kona, I was able to sleep on the airplane,
which I never am able to.
And this time I was able to do two flights in a row of sleeping.
Wow. I need to learn whatever trick you used.
No, that's a really easy trick.
You go to a standards meeting, you pay a lot of attention,
you go to all the discussions with everybody,
and you'll just fall asleep automatically.
I can really recommend it.
So in other words, you will not be making it to Cologne, right?
I will not be at Cologne, but I'm trying to
be at the next four, five, or six after
that point.
I've also contacted my own national body
to try to join them, and I'm
currently working on getting that arranged.
So just as a side note to anybody who is not a member of the national body and is interested,
you can go and you can join even if you're not an official member.
Okay.
Unlike the rest of the ISO meetings, the C++ meetings are intended to be attended by everybody
who's interested in the language, so they are not putting in a hard barrier that says
you may not join unless.
You just have to let them know before you go.
But the votes are based on national standards bodies or something, right? So if you just show up and you're not a member, technically, then your vote is less relevant or something?
That's partially true.
Okay.
If you're a member of a national body, or your company is a member of a national body in
case of the US, then at the final votes on plenary on Saturday, you only get to vote if you are the
representative of your company or if you are a member of the national body. But for all the
other days, which is Monday through Friday, any person in the room gets one vote. Which means
that you can actively participate, you can help out, and you are expected to have also read all the proposals that you're voting on.
Well, that would make sense.
Yes, but you'd be surprised how many people are okay with voting even though they haven't read the proposal.
No, I guess I wouldn't be surprised, unfortunately.
Just thinking about international politics in general.
Oh, yes.
So there's one thing you can always do, which is to abstain.
There's usually a five-way poll, and the sixth vote that you can do is just to abstain, not
vote at all, which is what you do if you haven't been paying enough attention during discussion,
which sometimes happens for actual reasons, and if you haven't read up on a proposal before.
So do go, because there's a whole lot of stuff to do.
So where do you find yourself spending most of your time at the Kona meeting?
What groups did you go to?
I was planning to go to at least the Evolution Working Group for Modules and Coroutines,
because that's why I'm there, which is Tuesday and all of Wednesday.
On Monday, I was trying to attend the new two groups.
There's SG17 and SG18, which are the Evolution Incubator and the Library Evolution Incubator.
Those are basically new proposals that haven't yet gotten to the stage where they are ripe for Evolution, or smaller papers that just need a little bit of tweaking before they get there, so they can essentially knock it out of the park.
So that was an interesting thing to be at.
That's looking at the new proposals, and because they're in the incubator now, they will not make C++20.
So there's a lot of space in actually
helping them out with improving designs.
On Thursday, I had a cheat day,
and I rented a car to drive around the island
because you're only
on Hawaii once every so often, and
it's so far away.
So I figured I should do that at some point.
In case you're looking at the webcam, I am
slightly sunburnt.
That's not just lighting.
I did assume it was lighting, actually.
Yeah.
And on Friday, I went to the study groups that I'm participating in,
which is SG13 for audio, SG15 for tooling, and SG16 for Unicode.
So they did not meet concurrently, then, I take it?
They meet concurrently with everything else that's happening. So I wasn not meet concurrently, then, I take it? They meet concurrently with
everything else that's happening. So I wasn't at
Evolution at that time, wasn't at Library Evolution.
You have to make some choices where you want to be.
And you are going to miss some things
that you really want to attend. For example, Wednesday
afternoon, I was busy in the coroutines room
looking at the National Body presentation by
Bulgaria, and at the same time
somebody was in, I forget
where exactly it was, but they were
voting on executors, which is something needed for a networking TS related to coroutines. So I would
love to be there as well. But you can't be in two rooms at the same time. It doesn't work.
Right.
So you find other people who are like-minded there. And I found Chris DiBella, who was
like-minded and would like to be in coroutines, but had to be at executors. And we just shared
our thoughts and ideas about that.
So you could try to be in as many rooms as you can be.
Since you brought it up, Executors did not get voted into C++20.
Is that right?
That is correct.
Executors is still too young of a proposal to go in.
And I think it is going for a TS, but the TS did not come up for a vote yet.
So that is one of the big things that's still going to happen.
And just looking ahead at C++23 for a second,
there will be executors TS, there is reflection TS,
which did get a vote.
There is networking TS, which is also still not in.
Right.
And there are a few new proposals that are also going into the C++23 backlog,
which was discussed in Evolution on Saturday afternoon.
So just in case you're thinking it's been Saturday and we got through plenary, we're done.
That's not how it works.
We just keep going.
And there was a big proposal on pattern matching.
Yeah, I see that added to the list here for C++ 23 or 26.
Is that also going to be a new TS?
I think it's going to have to be a new TS.
At some point during the discussion of it,
I was asking David Sankel, who was presenting,
whether this was all in one single paper
because there were so many new things and new ideas being explained
that I was having a hard time keeping up with it all.
But looking at the things that it allows you to do,
it is basically allowing you to make a new statement called inspect,
which does all the things that people want a switch to do
when they're new to the language.
So the time that you found out that switch
couldn't actually figure out which class something is,
or the time that it couldn't switch over a part of an enum
or part of a member that you have in a range,
or that you can't have complicated switch statements in there,
like if it's between 1 and 25,
or if it's an even number,
that kind of stuff is possible with a very well-defined good-to-read syntax, surprisingly.
And there, is any part of this, I'm looking at these guards in here, I'm trying to think.
Is it? Yes, okay.
So you can actually do, it can be a runtime check of some sort, say,
do this statement if this thing is
true. Yes. So as far as I understood, you can basically also do a string switch now by saying,
if at runtime the string is equal to one of these, then do this. So it's all the stuff that you would
want to be possible with a switch, but that isn't possible because switches just don't do that.
Right. And now you can do it, assuming that this is actually going to happen.
I did a technique similar to this in ChaiScript
that I've ended up slowly removing
because I realized no one was using it
and it made the language too complex.
The guard, specifically.
I'll be curious to see how this comes out
with this implementation.
Yeah, the language syntax reminds me a bit of Prolog and Haskell.
Right.
Which means that if your programming style
or your goal for doing programming
is not something that fits well in a functional context,
then you're probably not going to be using it.
Well, and to be clear, I'm not talking about the inspect itself.
I'm talking about the if condition portion of the inspect,
the pattern guard section 5.4.
I was about to ask, what page are you on?
Yeah.
So for our listeners, you can have a pattern and then an if statement inside the pattern
that says, you know, whatever, if it matches this pattern and this condition is true,
then execute this branch of the inspect.
That's the part that I'm personally curious about.
I do expect this to be used in some cases, but not in most.
Right.
In many cases, you're trying to match either a decomposition or a specific literal of some sort that you want to be equal to whatever you're inspecting.
Right.
So in most cases, I would expect this to not be there, but I can already see the first use case that this is going to be used at, which is FizzBuzz.
Yes.
And actually, ChaiScript has a very, very clean and succinct FizzBuzz implementation that I wrote.
It's the only use case that I've seen of that.
So this might not be useful enough.
But again, this is a paper that was first presented in Evolution on Saturday afternoon. So it hasn't had much feedback, many experiences in implementing it, or many experiences in using it. I'll be very interested
to see where this goes, because the general concept of the
inspect I'm totally on board with. I'll be curious
about the rest of it as well.
And it looks like they already are planning
for the future with an inspect constexpr,
so it can be a compile
time choice. Just the same syntax as if constexpr.
Awesome. Totally on board with that.
Yep, I'm not surprised that you're on board with more constexpr.
Hey, why do something at runtime that can be done at compile time?
I mostly agree with that. Mostly.
I'm looking more forward to the other papers that are about trying to remove use of the preprocessor.
Right.
I remember a quote from Bjarne Stroustrup, which is basically that the C preprocessor should not need to be used anymore.
And I'm phrasing it nicely in this case.
As far as I can tell, we still need it because there are many things that you cannot do without it.
And one of those that is currently on the list of being voted in is std::source_location, which is pretty much, you know, we have __func__, __FILE__, and __LINE__ as preprocessor constructs.
How about we make this a language construct
so we don't need a preprocessor macro?
Yeah.
So that removes one of the big uses for a preprocessor,
which is in logging.
Where did this log statement come from?
Well, we can do that without macros.
And then we have includes, which is modules.
Now we don't need includes anymore.
We have include guards and pragma once.
Oh, we have modules.
We don't need that anymore.
So we have ifdef for platform support.
And then we have a few corner cases
where the preprocessor is sort of useful,
but not necessary.
But beyond that, it shouldn't be needed.
Right, yeah.
And the ifdef for platforms,
I believe Izzy Muerte has a paper
about how to replace that, so that
instead of basically doing an if constexpr at global scope, which is not possible because if is a runtime statement and it doesn't work like that, an if constexpr-like construct at global scope would need the second part to parse and at least get to the stage where you have an AST, so you can then reject half of it. And her paper basically says, I have some statement that checks one or
the other, and both of those halves need to tokenize, but not beyond. Which means that if
you're doing if it's Windows, then you can call Windows functions. And if it's not Windows,
then don't call Windows functions. So the change, as long as it's parsable, I guess, is, yeah.
As long as it's lexable, not even parsable.
Lexable, okay.
Well, since we spent all our time talking about all the great news coming out of Kona,
maybe we should finish up by just quickly mentioning these other articles I put in the show notes,
and then we can let you plug anything you have coming up, Peter.
So all the Meeting C++ 2018 talks are now on YouTube.
Those are done, yeah.
So you can definitely go check those out.
I think it was like 47 talks are available on YouTube now,
so that's great.
And then the Core C++ speaker list is out, right, Jason?
Yes.
So Core C++ has announced, I think, all the talks and who's speaking.
I don't think the full schedule is quite online yet.
Some of them are people that we know,
and some of them are people that we've never had on the show.
So it would be interesting and exciting to meet new people
and go to that one for sure.
So definitely, if you're in Israel or anywhere nearby,
check out Core C++.
One thing to note about the Core C++
is that some of the talks will be in Hebrew,
which means that it may be very interesting,
but in some cases it's going to be hard to follow
as an international attendee.
Right.
Not trying to keep you from going
because it's a really good place to be at
and there's going to be so many interesting talks.
Yeah, at the moment, since the schedule hasn't been fully released yet, I don't know. I only see one or two things that are listed as being in Hebrew at the moment.
That's a good point. I'm not exactly sure. The initial idea was to have one Hebrew track and one Hebrew-and-English track, but I've already spotted nine out of 19 speakers as being, as far as I know, not able to speak Hebrew. So they may flip that upside down.
So when I added this link to the show notes the other day, I clicked on a couple of the speakers and one of them, I remember being listed as Hebrew earlier and is now listed as English. So
I'm not quite sure what to make of that. Yeah. At the moment, the only thing that I see that
says that it's Hebrew on the schedule is one of the training workshop days before the conference. Okay. I would definitely love to attend because I see a talk
about coroutines there about the actual use of it. And that seems to be a very, very on topic thing.
Indeed. And I know as perhaps everyone could see from our conversation, very little about
coroutines and that talk, yes, is going to be in English. So I can go to that one.
Well, Peter, since we're talking about conference news,
do you have anything coming up?
Any conference talks?
I have a conference talk coming up at ACCU,
which is going to be about demystifying your compiler,
where if you're an absolute beginner,
you compile your first Hello World,
magic happens and a binary appears that does something.
How did magic happen and what exactly makes it tick?
So that's going to be a talk I'm doing with Simon Brand at ACCU.
And I have proposals coming up together with Chris DiBella about how the entire standardization
proposal and mechanism works.
So all the stuff about evolution and core and library incubators and study groups.
We'll have a talk about all of that that we probably will be submitting to CppCon.
And I will also be submitting a talk there
together with Arvid about how a linker works.
Oh, wow.
Okay.
But of course, those are still too far out
to actually know whether we're going to be voted in
because we haven't even been able to submit the proposals yet.
Yeah, the call for submissions for CppCon is not out yet.
Yeah.
So that will come later.
Okay. Well, it's been great having you on the show again today, Peter.
Yeah, thanks for having me.
Thanks for coming.
Yeah, talk to you later.
Thanks so much for listening in as we chat about C++. We'd love to hear what you think of the podcast. Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic; we'd love to hear about that too. You can email all your thoughts to feedback@cppcast.com. We'd also appreciate it if you can like CppCast on Facebook and follow
CppCast on Twitter. You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter.
We'd also like to thank all our patrons who help support the show through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast and of course you can find all that info and the show notes
on the podcast website at
cppcast.com
theme music for this episode