CppCast - C++23 ISO Progress
Episode Date: October 28, 2021

Rob and Jason are joined by Bryce Adelstein Lelbach. They first talk about SonarLint analysis, searching algorithm performance, and an observation on compiler diversity. Then they talk to Bryce about the proposals that are heading for C++23, including major changes to the executor and networking proposals.

News
- Supercharge your C++ analysis with SonarLint for CLion
- Efficiently searching an array with GCC, Clang and ICC
- C++ Committee polling results for asynchronous programming

Links
- P0443R13 - A Unified Executors Proposal for C++
- P2300R0 - std::execution

Sponsors
- PVS-Studio Learns What strlen is All About
- PVS-Studio podcast transcripts
Transcript
Discussion (0)
Episode 323 of CppCast with guest Bryce Adelstein-Lelbach
recorded October 22, 2021.
Sponsor of this episode of CppCast is the PVS-Studio team.
The team promotes regular usage of static code analysis and the PVS-Studio
static analysis tool. In this episode we discuss searching algorithms and compiler diversity,
and we talk to Bryce Adelstein Lelbach. Bryce talks to us about the state of the committee proposals heading for C++23 and beyond.

Welcome to episode 323 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
All right. How are you doing?
Doing okay. So we're recording this one right before CppCon starts, but it should air on the last day of the conference, I guess.
Yes.
Looking forward to it?
Yeah, tomorrow morning I give my class,
and as I've already said many times,
I am the only live on-site class at CppCon this year,
and I'm pretty excited about it.
I think it's probably the best class I've put together so far.
Yeah, hopefully it'll go well. Obviously, we can't tell listeners to go sign up for it at this point because you're listening a little too late.
You can. I mean, there's plenty of flights to Denver from all over
the United States and Canada coming in
regularly. The airport's almost back to normal
capacity, I think.
And I'm sure the Grizzly...
The Gaylord still has
rooms available.
And if it doesn't, there's a bunch of satellite
hotels. And you can go
and sign up for my class right now.
Although it would be too late. You're right.
It would be a week too late.
I got all that wrong.
You should edit that out, although you won't.
So it'll be fine.
We'll just forget about the little time travel for a moment.
It's fine.
But I am looking forward to this class, and I do think it'll be probably one of the best ones that I've done so far.
So if you're listening to this in the future, because you can't listen to it now, then, uh, and you like
the sound of this, then start talking to your company about having me out in 2022 to give
training. There you go. Okay. At the top of every episode, I'd like to read a piece of feedback.
We got this tweet from Jacob saying, listening to CppCast and the talk about lifetime
extension. This is always fun to read. And he posted
this blog post about chained
functions breaking reference
lifetime extension.
How did we talk about that with Hal again?
When did that come up? Do you recall?
No. No? Okay.
Well, we'll put this post in the
show notes so that other listeners
can read it if they're interested. In case you
missed it, I've been preparing my class. I don't remember what happened a day ago.
That's fair. That's fair. Well, we'd love to hear your thoughts about the show. You can always reach
out to us on Facebook, Twitter, or email us at feedback at cppcast.com. And don't forget to
leave us a review on iTunes or subscribe on YouTube. Joining us today is Bryce Adelstein Lelbach.
Bryce has spent over a decade
developing programming languages and software libraries.
He is the HPC Programming Models Architect at NVIDIA,
where he leads programming language standardization efforts
and drives a technical roadmap
for NVIDIA's HPC compilers and libraries.
Bryce is passionate about C++
and is one of the leaders of the C++ community.
He is the chair of PL22,
the U.S. standards committee for programming
languages, and of the Standard C++ Library Evolution group. He also serves as editor for the INCITS
Inclusive Technology Terminology Guidelines. Bryce is the program chair for the C++Now and
CppCon conferences. On the C++ Committee, he has personally worked on concurrency primitives,
parallel algorithms, executors, and multidimensional arrays. He's one of the
initial developers of the
HPX parallel runtime system. Bryce, welcome back to the show.

Hello. It's good to see you guys again, although I'll be very excited when we can all see each other in person.

Yeah, definitely. Is it executors or executors?

It actually might be neither, because the current proposal, the current direction, does not actually have a thing in it called executor anymore.
Oh, okay.
Well, we're going to have to talk more about that, I think, right?
Yeah.
Yeah, for sure.
It's a very C++ thing.
But I always prefer to call it executors, not executor, because there's multiple of them.
It's like, you know, you don't call it range.
You call it ranges.
Sure.
That makes sense.
Okay, Bryce.
Well, we've got a couple news articles to discuss.
Then we'll start talking more about kind of the latest news going on with the ISO and everything you're working on.
Okay?
All right.
First one, we have a post on the SonarSource blog: Supercharge your C++ analysis with SonarLint for CLion. And this post is from Phil Nash.

Phil Nash's first official act of the year was to write this article.

Yeah. And this is all about... I guess it's only been a little while that SonarLint has been available in CLion, and they're just going over some of the rules and inspections that come with that tool that will show you what's going on with your code.
One that stands out to me as fascinating is it being able to recognize if you're using a possibly insecure hash or encryption thing and saying,
hey, wait a minute, you might want to reconsider your hashing algorithm here or whatever.
Yeah, that was pretty awesome.
Yeah, I was pretty impressed by that.
But it's not always going to be able to tell whether it's in a context where it matters or not.
Right.
But like,
still,
it's pretty cool.
Yeah.
Yeah.
Yeah.
I think that just the UI of their interface, I find pretty attractive. And I'm not usually a huge fan of IDEs. I just use vi. But I looked at that and I was like, yeah, you know, I would use something like that.
And I think that, yeah, when we had the SonarLint people on here, they talked about how there is CLion integration, although it seems to be getting better.
But another note on here is, for any of these issues that SonarLint brings up, you can click on it and get a full explanation as to why that check is there.

Yeah. I found that really useful for some of them. They use an example of one of the transparent comparators. I don't expect most C++ programmers to grok that just from a one-line thing. So having a link to a full writeup of it is really, really helpful.

Yeah. And just so listeners know, on top of this CLion plugin that we're talking about right now, they do a Visual Studio, VS Code, and Eclipse plugin. And I think these are all free with SonarLint.

I think that's right. Yeah. It's all their server-based stuff. So... I think, no, maybe not on this one. Sorry. But yeah, I think
those are all free.

Okay, next one, we have this blog post, Efficiently searching an array with GCC, Clang and ICC. And this is going into some of the benefits of using std algorithms over writing your own, but it also has a comment towards the bottom just about how everything's moving to Clang and how maybe that's not a great idea.
I thought this was a good post.
Yeah, we often bring up how more diversity
in the ecosystem is nice.
Yeah, I do think that it's a little concerning that so many compilers are moving towards Clang as a front end. Clang's great, but everybody's good, happy memories about Clang are from 10 years ago, when Clang was the new kid on the block, and GCC was the large production code base with a lot of legacy, in use everywhere, that moved a bit slower.
Now, 10 years later, Clang is sort of in the place in its lifetime where GCC was 10 years ago.
It's not quite as agile as it once was.
And in fact, it is lagging behind
in implementation of some of the latest
standard C++ features.
And the code base has gotten to be very, very large
and complex because it has so many different use cases that it has to serve.
And like, you know, I used to be able to navigate through the Clang code base, you know, eight,
seven years ago. And like, I've looked at it recently. And it's, you know, it's a lot,
there's a lot more there than there was.

Yeah. And I feel like even five years ago, if Clang and GCC disagreed on something, I just assumed Clang is right, GCC is wrong. And it was only two days ago that, it took a little bit of effort, but I just had to prove to a friend, no, Clang is definitely the one wrong here. GCC is the one right. That is an outstanding bug in Clang, and it's one that I first noticed like six months ago.

So maybe I just run into more arcane compiler bugs than you, but I certainly run into my share of them that have been Clang ones. But this is actually why I sort of came to this realization that maybe Clang was no longer the new cool kid on the block, when I was talking with Sean Baxter about Circle. Because my first reaction when talking to Sean about Circle was like, hey, why did you go and write your own C++ front end? Nobody does that, right?

Yeah.

And he was like, well, I looked into it, and it's a really big code base, and I just thought that I could write something that was cleaner myself, sort of from the ground up, that I'd understand better. And after talking with him a bit, I'm like, you know, he's kind of right. He kind of made the correct decision. Although in 10 years, Circle will be the slow-moving kid on the block, and someone else will pop up. Then we'll have the Square compiler or something.

Yeah, I don't know. I feel like something changed in GCC. I don't know what changed, but it does seem that, like you pointed out, they're rolling out features faster than Clang is. Did they do something to make their code base more extensible, maintainable in some way, or is it just they're more determined right now?
They did.
They started switching to using C++ internally
some period of time ago.
And I believe as part of that,
they revamped a bunch of their internal data structures.
I think there's also just slightly more folks working on GCC
now than there may have been
a few years ago.
Right. That seems believable, yeah.
I don't know if we're going to get another compiler anytime soon that'll maybe take the place,
but hopefully this trend doesn't go too much further.
LLVM's backend is just so incredibly helpful.
Yeah.
Because even Sean is taking advantage of LLVM's backend, right?
Yeah, yeah, yeah.
What, Circle?
Yeah, Circle's just a frontend, and it uses LLVM on the backend, yeah.
Okay, and then the last thing we have is a Reddit post on RCPP,
and this is C++ Committee Polling Results for Asynchronous Programming.
And it's great that we have you here, Bryce, to talk about this.
We were talking about executors a few minutes ago.
This is more about the networking TS. And it looks like as of now it's somewhat stalled. Is that correct?

It's not dead. It's not dead. I will say this: we definitely would like networking in the standard library. I think that still remains true. But we are not convinced that the networking TS is the right solution. So the networking TS has got to be one of our longest-lived TSs now.

Okay, I see.

And prior to the networking TS, the committee spent some amount of time dabbling with just inventing a networking interface out of thin air. You know, just inventing a new one, basically saying, hey, let's standardize networking. Let's just go and create something of our own design. And then at some point somebody was like, yeah, that's maybe not such a good idea. Let's just standardize Boost.Asio. And that's sort of how the networking TS came about.
But I think there's one frequent confusion here,
which is that the networking TS
is not really solely just a networking layer.
It also contains the ASIO asynchrony model.
And that's always been linked to the actual networking interfaces.
And to some degree, any networking API is going to inherently have some model for asynchrony these days.
But because of that, because the networking TS had an asynchrony model, it was inevitable that
it was going to come into conflict with other attempts to standardize an asynchrony model
on the committee. Especially given that the committee tends to like consistency
and unified models. And in fact, one of the polls that was not taken in that electronic polling
period that was listed on Reddit, but my vice chair, Ben, included that result, was a question
that I have been repeatedly asking Library Evolution over the past six to eight months, which is, are we really confident that we must have a grand unified model for asynchrony in the standard library?
Like a grand unified model that covers both structured concurrency and like compute asynchrony and like file and IO asynchrony and also networking
asynchrony. Because like, if we can agree to not do that, but like, it's okay to have multiple
models for different, you know, areas, then it would make life a lot easier because we would no longer need to sort of pick a winner.
But the committee did not bite on that.
We did not have a clear consensus that we wanted to have a unified model.
We definitely didn't have a clear consensus that we didn't want to have a unified model.
And so that meant that inevitably we were going to have to pick a model.
And so now that I've given a little bit of the history of the networking TS, let's go to the executors. Executors was a years-long project to standardize some framework for specifying, for parameterizing, execution.
In the same way that we have allocators to parameterize allocation, we need some way to
be able to say like, hey, I want to write some generic thing that will like create work somewhere.
And then I want to be able to plug in different things, different types of executors that'll run the work in different places and in different ways. So that was the basic idea. And back when
it got started in like the 2016 to 2018 period, there were originally two or three proposals
including a proposal from Chris Kohlhoff, the fellow behind Boost.Asio.
And at one of the Kona committee meetings, the committee said,
y'all get together and come up with a joint proposal.
And that was P0443. And I'm not entirely sure the exact moment that it went wrong. It wasn't like there was one instant where it went badly. It started off as a "just throw a bunch of authors together and make them compromise" paper, which sometimes works, sometimes doesn't. In this case, I don't think it was as successful as we would have liked. And I remember even from the beginning, it was a little unpleasant, because it didn't really compromise between the models. It just took the union of all of them. And it grew and grew over
time. And it eventually ended up with this thing called properties, which was essentially a
solution to not everyone being able to agree upon what was fundamental aspects of an executor and
what were fundamental aspects of the interface.
And at one point there was this whole interface-adapting thing, where essentially different people wanted to be able to write executors in different ways and then have them be adaptable to each other. Now, I will give you the inside perspective on how P0443 died.
So last year, around this time, we had just finished a review of P0443 in Library Evolution.
And we got some feedback on the paper, you know, how to like evolve it for
like the next steps. And around that time, the main author of the proposal, Jared, who was at
NVIDIA, he was, he'd been working on this for a number of years and he had sort of reached a point
where he was no longer interested in working on this. And so we needed a new person to come in and take it over.
And I had, for the most part,
sort of been pretty hands-off about it
because I was just like, you know, like, Jared's on it.
I trust Jared.
Like, it'll be fine.
And I don't think that anybody other than Jared
had realized how bad the situation had gotten with the proposal.
And then I started looking under the hood, and I was like, oh, this is not going to work at all. In particular for NVIDIA's use cases at the time, the proposal that we were writing and authoring simply did not work. And the only way that it would work would be if we did things that were not specified in the proposal: semantics that, if we implemented them that way, would be very surprising to people when they moved over to other platforms.
And I had never particularly
liked this approach. And so, a few years earlier, there'd been this senders and receivers proposal from Eric Niebler and Kirk Shoop and Lewis Baker and Lee Howes from Facebook. And as soon as I'd seen that, I had a gut feeling that that was the direction we were going to end up going, because it was less of a train wreck than P0443. It was actually quite nice.

Sender-receiver models come up in all kinds of places, right? I mean, that's a fairly normal idea.

Yeah. And it was elegant. It had clean fundamental concepts. It was not birthed out of a compromise where nobody actually compromised. It had a clean underlying model, and that was the key selling point. And so when I first saw that, which was two or three months before I came to join NVIDIA, my first reaction was: at some point in my career, I'm probably going to need to sell NVIDIA on this being the model that we need to adopt. And so, yeah. So then last year,
P0443 kind of fell apart. The main author left. And then we, NVIDIA, the folks that employed the main author, sort of realized, oh, this is not what we want to be doing.
So I asked one of the people on my team, Michał. I told him, drop all of the things that you're currently doing.
Your full-time job is now this. And we started a once-a-week telecon with the senders-receivers folks and some of the NVIDIA executors folks.
And the first telecon, we were very far apart. And my approach was essentially just I'm going to lock all of you in a room once a week until there's a proposal that comes out of this that we all agree on.
And six months later, that proposal popped out.
And that was P2300.
And it was actually quite good.
And it's based solely on senders and receivers.
I see.
And it sort of represents a paradigm shift from NVIDIA's perspective.
But I think it's a good paradigm shift because I think it's a better model for the standard library.
But there was still the question of what do we do with the networking TS?
Right. P2300 essentially ejected the parts of the compromise that were needed by the networking
TS. Because again, we wanted a very minimal core proposal, and we did not believe that those things
were required. And so I had hoped that the committee might choose to see these things as being distinct and that we could
have the networking TS have its model and senders and receivers have its model.
But it became clear that we were going to have to pick between the two. That was the only way
forward. We could not even review the proposals, either of the proposals, because the entire time we would spend discussing
the proposals would be these existential debates about the underlying model. And we needed to get
to a point where we could actually look at the details and do a detailed design review. And that
meant that we had to pick one of the models and move forward with that model. And that is what we have done. And actually just this week, we had the first Library Evolution super telecon, which was Monday and Tuesday, seven hours over the course of two days. And at the end of that seven-hour telecon, we took a poll, which was essentially: do you all want to sign up for another one of these super telecons and more meetings in, like, January, to continue pursuing the goal of shipping senders and receivers in C++23? And we did have weak consensus that we want to do that. So the odds are not in our favor. We have essentially three months,
and it is not at all clear
that we will reach consensus
on sending senders and receivers forward
for C++23 by the design cutoff,
which is February.
But we have decided we would like to try.
And so we will be doing that.
Okay, so I guess I should just say we're going to hope that we do get this by February. But going back to networking,
I guess the future plan of networking
would be either to make a new proposal
or to modify the proposal to work with this asynchronous model
or are we not there yet?
Yes, I think that one of those two things would have to happen.
And there's the separate question of security,
which remains unaddressed and which has loomed over the networking TS for a while.
We do not have consensus that we're okay shipping networking without a secure layer.
As in SSL, effectively.
Yeah.
There's additionally a contingent on the committee that essentially only wants to ship secure sockets period because of concerns about needing to roll out security updates and maybe having ABI breaks and whatnot.
Um, or just not being able to keep up with changes in security.
Yeah.
Right.
But the previous answer to this had essentially been, well, we'll ship security later; we won't deal with it in the first release. The first release will just have unsecure sockets. It's fairly clear that that's not going to fly. The vote was very split on the security question, so networking is going to be a tough one. And then we would, yeah, I think we would need to see
a probably substantially
modified networking TS.
It is possible
that could happen in time
for C++26.
I think if we're talking a new proposal, like a completely new proposal, I doubt that's a possibility.
If we're talking about a retrofit of the networking TS, that's probably the best route forward.
But that would require a group of people to collaborate.
And the last time that collaboration was attempted,
it led to P0443, which I do not think was a successful outcome.
So, I mean, we'll keep trying.
We're going to keep trying.
I want to read a comment here on the Reddit discussion.
And it goes back to something that Bryce had just alluded to.
If we agree to ship networking with a security
protocol, is it planned to fix security issues if they imply an ABI break? Or is it planned to
explicitly never fix security issues if they imply an ABI break? Forget the security issues
comment on here. When I read this comment and just thought about the last year of conversations that we've all been having about ABI,
I personally went from like,
oh yeah, I want to see networking in the standard
to nevermind, don't put networking in the standard
because it will never be allowed to change.
And-
Well, I mean, we could also try this very bold thing of designing the facility in such a way that it's ABI resilient. We're not going to be able to design it in a way that entirely guarantees we'll be able to change it in the future. But we're talking about an entirely new component. We should not take it as a given that it's going to be impossible to build it in a way that's going to be ABI resilient.

Would there be a cost to that?

Sure. But I think that the cost is an acceptable cost.

Everything's a template?

Yeah. Well, actually, it's more likely to be the other way around: you'd essentially need to ensure that there's a layer of indirection. You know, take the classic example of Microsoft's std::mutex implementation, which is like 10 times the size of their std::shared_mutex implementation. If, instead of putting the mutex bits in the class body, they had said,
hey, we'll always do an allocation
and we'll put those bits in this pointer somewhere,
then they would have been able to make the...
They'd be able to use the newer synchronization primitives on the newer platforms.
They'd have more options available to them.
But, you know, they don't because they did not want to put
that extra layer of indirection in.
And it's something that probably didn't even occur to folks
at the time.
Right.
Sponsor of this episode of CppCast is the PVS-Studio team.
They developed the PVS-Studio static code analyzer.
The tool helps find typos, mistakes, and potential vulnerabilities in code.
The earlier you find an error, the cheaper it is to fix,
and the more reliable your product releases are.
PVS-Studio is always evolving in two directions.
The first is new diagnostics.
The second is basic mechanics, for example,
data flow analysis. The PVS-Studio Learns What strlen is All About article describes one of these enhancements. You can find the link in the podcast description. Such articles allow you to
take a peek into the world of static code analysis.

So, just for a moment: you know, you were just giving us this great inside-baseball look at what happened with executors. Since Jason just brought up ABI, can you tell us any bit about how the committee is looking at ABI these days? Because obviously there's the big paper from a year and a half ago from Titus, and I think the vote was kind of a, you know, we're not going to change anything, like we're not going to make decisions on ABI. Is that look changing at all?

I do not think much has changed.
I think that, at the very least, I've been trying to turn the conversation more to how we can design for ABI resiliency, because that seems more productive. I do not think this question is going to be solved solely on policy. I think it's pretty clear that we're deadlocked, at least partially deadlocked, on policy. I think maybe if the conversation had been framed differently, we would have been less deadlocked.
And so maybe we need to have that conversation again.
It's a hard conversation to have without a face-to-face meeting.
Sure.
But the best way to dig ourselves out of this hole is to explore technical solutions.
And this networking TS security problem is a perfect example. It's a new library component. It's something that we're almost certain that we'll need to be able to
evolve in the future. And so we should be exploring how we could build this thing in a way that is
ABI resilient. And if the answer is that we don't think we can, then yeah, we shouldn't standardize
things like that, unless we do make the change of policy.

It does very much sound like, between the asynchronous model needing to completely change and the possibility of designing for ABI resilience, we're really no longer talking about Boost.Asio being merged into the standard. It's something different, or something with a very different API.

Yes, I think that that's correct.

Okay.

And I have to be honest, I think the Library Evolution regulars have known for a while that Asio wasn't going in as-is. My predecessor Titus, one of the last things he said to me, in the context of him being Library Evolution chair, was something along the lines of, yeah, make sure you take a real careful look at the networking TS. I think specifically he was around the issue of object lifetimes. It's been pretty clear for a while
that since the networking TS was shipped,
best practices for some aspects
of standard library design have changed.
And so we knew that we would need to do
a very comprehensive review of the networking TS
because it was put out,
it was like designed and put out in like,
you know, 2016, 17, 18. And since then, the way that we design standard library interfaces has
changed. But it's a really big proposal, and we've completely lost institutional knowledge of it. By which I mean the Library Evolution regulars, the people that I as chair rely upon to review the papers, do not have the knowledge of this 400-page thing that they'd need to be able to have a useful review of it. So that means
that the starting point of the review has to be reteaching everybody what's in here.
And then we have to do the actual review only after we've gotten to a point where we have enough knowledge to be able to meaningfully make decisions.
Disgusting.
Yeah.
Yeah.
And that's just a massive, massive commitment of time. And in hindsight, you know, we had put the networking TS off to the study group. The problem with putting it into a study group is that the study group audience is going to be much more self-selecting. In particular, it's not going to necessarily reflect the position of Library Evolution, because the Library Evolution folks aren't going to be there. But the benefit of it, and the reason I'm glad that we did it, is we would be in a much worse
shape now if Library Evolution had spent half of our time over the past two years
reviewing the networking TS only to discover at the end that we didn't really have an appetite for it.
And, I mean, it ended up becoming pretty clear that we didn't have an appetite for it just directionally,
just the fundamental model, not even having dug into the details. Okay. So, you know, we talked a lot about networking
and std execution,
but you said C++23, the feature freeze,
is early next year.
What else do we think might make it
as far as big features go?
How about...
What did make it?
Give us some good news.
Or what made it already, yeah.
It's time for some good news.
What did make it?
A bunch of ranges work has made it in.
Ranges has probably been the major success story of Library Evolution over the course of my tenure thus far.
We have a lot of very active authors.
We had a plan.
I don't remember the paper number, but we can put it in the show notes later.
Sure.
And we had a plan for,
for what we wanted to do and there were priority levels of the plan. And it looks like we're going
to do pretty much all of the things that are in the top tier of the plan. Some of the stuff that's
in the lower tiers too. So there's a substantial number of ranges fixes in C++23. Ditto for formatting.
There's some other big library evolution features that are happening.
Probably one of the more notable ones is that we will have a standard library module.
It will be called std. We have determined that just having one big module
is actually pretty fast, because of the way modules work. Because it's not textual, you really just pay for what you use. It's like an index, and it just looks up in the index: hey, you needed std::vector, I'll go grab std::vector now.
And across a number of implementations, we've seen that having import std just as a catch-all is faster than including only the headers that you need.
Now, to be clear, if I do import std, it's not the same as using namespace. I still have to do
std colon colon to do things, right? I just want to make sure that's clear for everyone here.
Okay.
But yeah, so there'll be import std.
It'll be really fast.
It's probably going to be faster. Like, even if you have very fine-grained header includes, it's probably going to be faster for you to just use import std. And we've deferred the question of having finer-grained modules until later. I don't actually think we need finer-grained modules, because why would we if we have one big std module?
But then you add 400 pages of networking to it on the other hand.
Yeah, that's a fair point.
But the costs shouldn't scale up that much for the size of the library.
So then there's also a, I think we're calling it std.compat.
And std.compat contains all of the namespace std names,
but it also contains some of the things that sometimes get leaked into the global namespace,
like int64_t.
We made a decision that we wanted to fix the historical mistakes of the past and not pollute the global namespace in import std.
So if you do import std, you're not going to have int64_t. You'll have std::int64_t, but the only global names that you'll have in import std are operator new and operator delete.
Well, that's actually really interesting because, I mean, I just want to
clarify this for the sake of our listeners who don't know this. Almost everything from the C
standard library is intentionally or unintentionally imported into the global name space.
So you can do #include <cstdio> and then just call puts or whatever. You don't have to write std:: even though technically you'd need the namespace.
Interesting. Okay.
Yeah.
Perhaps amusingly,
there is a feature
test macro for import std.
But if you import std, you will not have the feature test macro because modules do not import macros.
Wait, so you would have to first include part of the standard library to be able to do the feature test macro and know if you can import it.
You'd have to include <version> and then you can check in there for the feature test macro.
Okay.
That makes sense.
Because we had to add a macro
even for the feature that is like the anti-macro feature.
Wait a minute.
So if you just do the import,
you don't have any of the feature test macros or do you?
You have none of the feature test macros.
You have no macros.
So if you rely on that,
you will have to still include version.
Yes.
So there's that, which is going to be a pretty big thing.
I am fairly hopeful that we will have std::generator. And I hope that the two authors who need to produce a revision of std::generator are listening to this podcast and have heard me announce to the world that I am fairly hopeful that it will make it into C++23.
I'm sorry, for coroutines?
Yeah, yeah, for coroutines, yes. So part of the coroutines library support, the coroutines library that we need, we don't really have yet.
Yeah. Another big feature that's fairly likely to make it in is std::mdspan, which is the multidimensional array abstraction.
std::expected is also in.
It's in? Yeah, it's in. I missed that entirely.
Yeah. There's probably a couple others that I'm missing. There was a bunch of stuff that we kind of shipped to LWG in like the 2018-2019 period that were big things that have only now just been processed. I think std::expected was the biggest of them.
Okay.
And yeah, oh, and then in terms of language features, the one thing that I know about is deducing this has made it in, which is fairly cool. And there's a bunch of constexpr-ification of a variety of different things. And there's also this cool thing that I believe is going to C++23, which is the ability to declare an entire class constexpr.
I missed that. Yeah, so that's just some syntax sugar.
Right. We think we covered that paper months
ago, Rob, when we were doing one of our paper roundups.
Yeah. And yeah, so that's sort of what C++23
is looking like.
It's, I think, going to be at least a little bit larger than C++14 at this point.
I also suspect that we will land a bunch more things that are already in flight between now and February.
I don't know what's in the language evolution queue.
But there is some possibility that we'd get senders and receivers in,
which I think would make it probably a release more
on the scale of C++17.
Yeah.
You know, I'm personally cool with a C++14-like release every now and then, because that was a great bug fix cleanup kind of release to the language.
Yeah, I would be okay with it if we did not have so much work to do.
The pandemic slowed us down more than I would have liked.
If not for the pandemic, it would not have been a bug fix release.
It would have been a much more substantial release.
And I think that is, I mean, I'm very proud of all that we were able to accomplish during the pandemic.
But I think the fact that the pandemic did end up substantially changing the amount of work that we could do
does not bode well for the future of our process.
Like, I'm very happy with how far we've come,
how quickly we were able to adapt,
but I think we needed to adapt even more.
Well, on that note,
are there plans yet for in-person meetings early next year,
or is it still unknown?
We were supposed to have a meeting in Portland,
which got canceled in February.
And we're supposed to have a meeting in New York next July, and we'll make a decision on that around March or so.
The one in February already had to be canceled? It's four months out. Wow. Yeah. Okay.
Well, we had a deadline by which, if we didn't cancel, there would be a lot of fees.
And Herb asked – we did a poll of the committee chairs and key staff to see who would be able to show up.
And it became pretty clear that people were not going to be able to attend.
Okay.
Yeah.
I think given the current direction of things, maybe it would have actually been fine.
But at that time, it was far too risky.
We'll have to see.
Yeah, Delta was very scary.
At the beginning of November, we'll actually start to see more borders open and that kind of thing. We'll have to see what actually happens.
I feel like I have to agree.
I mean, yeah, you couldn't have risked losing the money for February.
That seems totally reasonable.
Okay. Well, I know we had a couple other features we maybe wanted to get a status update on. Has any work been done on reflection? Does that have a chance of getting into 23?
No. And we never expected it to get into 23. Zero chance of it getting into 23, at least as far as I know; you'd have to ask JF to be sure. But yeah, it was never even something that we were targeting for 23. Ditto for pattern matching. I think both of those are possibilities for 26, but not for 23. Let me say this delicately.
There is some desire to put contracts into C++23.
I think it is a wee bit late for that desire to be manifesting.
So I think it is highly unlikely that we could see contracts in C++23,
but it is a discussion that's being had.
I think there would have been a lot of motivation to fix up contracts after it was dropped so last minute from C++20.
There was, and there's been a lot of work on it,
but I still think it's a little late.
You said that you think that the committee maybe could have adapted a little bit more.
And I'm kind of curious because, if I understand your situation correctly, you both work a regular full-time job remotely and are on the committee remotely.
And I've heard lots of discussion about how the committee was not able to get as much work done in a remote situation as they would have liked. How does that, oops, sorry, I hit the mic. How does that differ or compare with
a regular job? Like, what do you think the committee could have done differently, perhaps, or not?
Or do you think it's just too much of a big room environment?
I mean, honestly, running Library Evolution is easily 50% of my job, but it's probably more than that. It's consumed a very large chunk of my day job, because it's had some unique challenges during the pandemic. So essentially it's just like a full-time... We're actually meeting for fewer hours than we regularly would, but there's a lot more administrative work that has to be done.
Yeah, I mean, I think the challenge we've had is it's just hard to get the time for the synchronous
meetings. But the real problem is that we need to be able to review papers without having a telecon to look at them. And we've started doing some of that. At least there is a route by which a paper can come in to Library Evolution, be seen solely in a mailing list review, an email review, and then go straight to electronic polling without ever consuming any telecon time. That's something that we only really started at the start of the summer.
But the email list is not really the right tool for this, and it's hard to get the level of engagement that we need. So I mean, I think we would probably need something like a Discourse or some sort of forum mechanism, something where it's a little bit easier to organize things. But it's really more of a cultural shift than anything else.
For us to really be effective, I probably need more.
It's not so much that I need people to show up to telecons.
It's probably more that I need people to go and review all of the, you know, the less controversial papers that come in.
And send out an email review with, you know, their thoughts, like plus one, minus one on this. And, you know, it's just going to take time. It's just culture change; it takes time. I think when JF and I started doing the telecons at the start of the pandemic, we made it very explicit in the paper that we did not ever call it a temporary measure, because we never imagined the remote operations being a temporary thing.
This was a modernization that needed to happen anyways.
I've heard some banter on Twitter, and I almost wonder how I'm going to regret bringing this up because lots of things happen on Twitter, that part of the problem with your current scenario
is when you're meeting in person, everyone who's interested in discussing the paper is there and
able to vote. But with the remote telecons, I get the impression from Twitter that sometimes
a vote goes one way or the other just because of who managed to show up to the remote telecon.
No, that's totally untrue. We have the same problem at the face-to-face meetings because, like, one, not everybody is there.
And two, even if everybody is there, people aren't in the right room, you know.
Okay.
It used to be a regular occurrence that we would have some vote or somebody's paper would get presented with them not there.
Or we would want to review somebody's paper and the author wasn't there.
And I mean, we do a lot of our polling electronically now, and that gives everybody
a chance to participate. Although there is a fine balance between, like, it is sort of important
that people be present for the discussions beforehand, so that they can have an informed opinion when they vote. But yeah, the electronic polling mechanism I've been very happy with, because one, it lets me figure out who voted how, and two, we ask people to provide comments, so I can figure out why they actually voted. Because people are shy on a telecon or in a meeting room.
You'll have a bunch of people who will have opinions,
but they won't express them until they vote.
And if the only way that they've expressed their opinion is that they've voted
and that you don't really know in an actual physical vote who voted how,
that is a lot of information lost.
So it's a huge difference in knowing, oh, Bob voted weakly against this for this reason that never came up in the discussion, versus, oh, why did seven people vote weakly against this proposal even though nobody brought up issues with it when we were discussing it? Yeah. But
the problem of people not being present for all the discussions that they want to be is just a life problem. It's not a particular failure of the committee or anything. It's like, you know, sometimes a mistake is made, and somebody forgets to tell people something, or to make sure that somebody's invited there. And sometimes the person, you know, is not paying as close attention to their emails or their calendar invites as perhaps they should be. So it's a little bit on both sides.
Well, I think we might be running out of time.
Bryce, is there anything else that you want to let us know about before we let you go today?
Well, nothing.
We didn't mention that you have your own podcast now, Bryce.
Oh, yeah, yeah, yeah.
Oh, my gosh.
Connor's not going to be filming.
Yes, Connor and I have a podcast called the ADSP podcast: Algorithms + Data Structures = Programs. We talk about a variety of topics, from the furniture in my apartment, to APL, the programming language, to parallel algorithms, to how to get through school, whether you should drop out of college, things like that. We just sort of talk about whatever. It's really just an excuse for kind of a night of chit chat once a week or so.
And we're coming up on our one year anniversary and we have big plans for the one year show.
Big plans.
Um, yeah.
We did plug one of your episodes not too long ago when you had Sean Parent on and had him doing some storytelling.
And I have a lot of respect for the way you did the episode because it appears you had one convo with Sean and made like three episodes out of it.
Is that right?
That's pretty much what we always do.
We record for like two hours and then Connor turns it into some number of episodes.
Because we don't go in with a plan.
Maybe sometimes there's a topic, but we just go in and it's just like
it's supposed to be natural, just two guys recording.
But yeah, the Sean Parent episode. We do sometimes have folks from tech, usually people who've been in the industry for a while, come in and tell their stories. Sean Parent is our most frequent guest.
And yeah, those are my favorites.
Okay.
Well, it's great having you on again today, Bryce.
Yep.
Great to be here.
Thanks for coming on.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in.
Or if you have a suggestion
for a topic,
we'd love to hear about that too.
You can email all your thoughts
to feedback at cppcast.com.
We'd also appreciate
if you can like CppCast on Facebook
and follow CppCast on Twitter.
You can also follow me
at Rob W. Irving
and Jason at Lefticus on Twitter.
We'd also like to thank
all our patrons
who help support the show
through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
And of course, you can find all that info and the show notes on the podcast website
at cppcast.com.
Theme music for this episode was provided by podcastthemes.com.