CppCast - Cologne Trip Report
Episode Date: July 25, 2019

Rob and Jason are joined by Botond Ballo and Tom Honermann to discuss what features were added and removed from the C++20 draft paper at the ISO meeting in Cologne.

Botond Ballo is a software engineer at Mozilla, where he has been working on the Firefox web browser's rendering engine for 6 years. He's been attending C++ standards meetings for about the same time, and blogging about them to keep the C++ user community informed about standardization progress. In the committee, his interests include general language evolution, reflection, and tooling. Botond likes to hack on IDEs and other developer tools in his spare time. Offline, you might spot him climbing rocks or reading fantasy novels.

Tom Honermann is a software engineer at Synopsys, where he has been working on the Coverity static analyzer for the past 8 years. His first C++ standards committee meeting was Lenexa in 2015. He currently chairs the SG16 text and Unicode study group and participates in the SG2 modules, SG13 HMI/IO, and SG15 tooling study groups. His contributions to C++20 include the new char8_t builtin type. A C++ minion with 20 years of professional experience, husband and father of two awesome boys.

Botond Ballo: @BotondBallo, Botond Ballo's Blog
Tom Honermann: @tahonermann, Tom Honermann's Blog

Links: 2019-07 Cologne ISO C++ Committee Trip Report; P1607 - Minimizing Contracts

Sponsors: Backtrace - Announcing Visual Studio Extension - Integrated Crash Reporting in 5 Minutes

Hosts: @robwirving, @lefticus
Transcript
Episode 207 of CppCast with guests Botond Ballo and Tom Honermann, recorded July 25th, 2019.
This episode of CppCast is sponsored by Backtrace, the only cross-platform crash reporting solution that automates the manual effort out of debugging.
Get the context you need to resolve crashes in one interface for Linux, Windows, mobile, and gaming platforms.
Check out their new Visual Studio extension for C++ and claim a free trial at backtrace.io
slash cppcast.
CppCast is also
sponsored by CppCon, the
annual week-long face-to-face gathering
for the entire C++ community.
Come join us in Aurora, Colorado
September 15th to 20th. In this episode, we discuss news from the Cologne ISO meeting.
Joined by committee members Botond Ballo and Tom Honermann.
Botond and Tom tell us about what got added and removed. Welcome to episode 207 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how's it going today?
I'm doing all right, Rob. How are you doing?
I'm doing just fine. I'm excited to dig into this interview.
Yeah, lots to talk about today.
Lots to talk about, yeah.
Although I guess before we go on to the interview, we should mention, I mean, you know, we are in the heyday of C++ conferences right now.
Yes, we are.
And the flagship conference, if you will, CppCon, is coming up here in September.
And we should mention quickly, the schedule for that
is now available. So I
just got the email today. I think that I can schedule
all my classes or all my
conference talks. Yeah. Yes.
And I just saw as well, I think
I'm evenly spread out throughout the week.
I'm like Monday, Tuesday, and Friday or Monday, Wednesday
and Friday, something like that. Nice.
So if you haven't signed up for CppCon
yet and you're planning to, go do it, obviously, yeah, to our listeners. Yeah, book your hotel room,
make sure that you're ready to go. There are still plenty of openings in the classes and a
huge selection of classes. I know I've already done the shameless self promotion of my classes.
Nothing wrong with that. The list of instructors on here is like
who's been on CppCast,
really. We've got Arthur O'Dwyer
and Charley Bay and
Klaus Iglberger and
Patrice Roy and
Diego's teaching a class
and Phil Nash.
So yeah, quite the collection
here. Matthew Butler, we've had him on,
right? Yeah yeah lots of classes
to choose from and they're both pre and post conference classes so definitely take a look
at those if you are signing up to go to CppCon. Yeah, there's a lot going on here. Yeah. Okay, so at
the top of the episode I'd like to read a piece of feedback. This week we got a tweet from Tyler Young
saying there should be enough material here to
keep CppCast and CppChat busy for what, another year? And he's linking to the Reddit C++ Cologne
trip report, which I think we're going to be talking a lot about this episode.
Indeed, the agenda for today, right? Although it won't keep us busy for another year because
we're going to have to cover this, and then in the next
standards meeting, we're going to be talking about C++
23, basically.
I was looking at the schedule, and they don't
go straight into 23.
I think it's going to be this time
next year when the first official 23
meetings start, because they still have all the
ballots to respond to and whatnot, right?
But theoretically, we're not talking about new
features after this.
But we'll talk to our guests about that.
Go ahead.
Yes, the evolution
will be working
on C++23 features,
but we won't actually be pulling them into the paper
for a few meetings.
Yeah, there's kind of
a pipeline of different subgroups that
look at standards proposals.
And so
groups that are earlier in the pipeline
are ahead of the curve. They're moving on
to newer standards,
to material for new standards,
a meeting or two in advance of groups
later in the pipeline. So that
we're always busy all the time
with new material.
In fact, even in Cologne, there were some
things that were forwarded out of the evolution
groups targeting C++23.
Makes sense.
It might be time to introduce our guests.
Yeah, let's go ahead and introduce our guests.
So first we have
Botond Ballo, who is a software engineer
at Mozilla, where he has been working
on the Firefox web browser's rendering engine
for six years. He's been attending
C++ standards meetings for about the same time,
and blogging about them to keep the C++ user community
informed about the standardization progress.
In the committee, his interests include
general language evolution, reflection, and tooling.
Botond likes to hack on IDEs and other developer tools
in his spare time.
Offline, you might spot him climbing rocks
or reading fantasy novels.
Botond, welcome to the show.
Hello, thank you.
And Tom Honermann is a software engineer at Synopsys,
where he's been working on the Coverity static analyzer for the past eight years.
His first C++ standard committee meeting was Lenexa in 2015.
He currently chairs the SG16 text and Unicode study group
and participates in the SG2 modules, SG13 HMI-IO and SG15 tooling study groups.
His contributions to C++20 include the new char8_t built-in type. A C++ minion with 20
years of professional experience, husband and father of two awesome boys. Tom, welcome to the show.
Thank you very much. Very happy to be here.
You know, so we've already hinted that the agenda for today is to discuss the last standards
meetings, but just in your two bios, I'm pretty sure we could have complete separate interviews, full length.
If you don't mind, I want to ask Botond, you said you work on the Mozilla rendering engine.
And at the moment, pieces of that are moving to Rust land, right?
That's right. Yeah, that's a process that started about three or so years ago.
Although the roots of writing parts of a rendering engine in Rust go back further than that,
there's been a research division at Mozilla that actually has a complete functional rendering
engine called Servo written in Rust. Now it is a research engine. It's functional, but not really production use ready.
But that's been an effort ongoing for a while.
And in the past three or so years, we've been taking bits and pieces of it
and lifting it into our production rendering engine.
And so, again, there are just a few components at the moment.
An entire replacement or rewrite of our production rendering engine in
Rust is probably going to take something in the timescale of a decade or two. So it will very much
continue to be a lot of C++ code. But yes, Rust is the long-term future as we see it.
And what does that interoperability story look like right now?
And we don't have a ton of time to really dig into it, but just kind of at a high level,
parts of it on Rust, parts of it in C++, they talk well together?
Yeah, fairly well.
As in the case of most pairs of languages, the common denominator is a C FFI interop layer. But we've been making some pretty interesting advancements
in areas like cross-language link time optimization.
That's actually a project that just landed in recent months.
And it means that we can be more liberal
about where we cross the language boundary
without introducing performance bottlenecks.
It sounds like that assumes you're also compiling your C++ with LLVM, basically with Clang.
Yes, yes. So this is for the case where we compile our C++ code with LLVM,
which we're starting to do increasingly, even now on Windows.
We compile with Clang and LLVM,
thanks to the work that they've been doing to support the Microsoft ABI.
That's now possible.
Very cool.
And Tom, if you're working on Coverity,
you must have a unique perspective on the language, I think.
Perhaps.
I work on the front-end side of it, focused on our translation from the abstract syntax trees, AST,
from either EDG or Clang into our own internal representation.
And then we have other people that work on the analysis side.
So I don't know a whole lot about what happens on the analysis side.
If you ever wanted to explore that, I would recommend my colleague,
Charles Henry Gross, would be a very interesting candidate to have on here to talk about that.
But for me, we need to really understand the language so that we can represent it properly within our database
so that the analysis can run on it and find all the bugs that you don't want to have to find yourself.
So you use the Clang frontend to build you an AST, then do some manipulation on that and pass it along?
Is that what you said?
Yeah, basically. We have our own internal AST representation. We use Clang and we use EDG,
the two different plugins, depending on which compilers we're trying to work with.
And so we take what they give us and we turn it into what our common needs are for our analysis.
That sounds like it makes a lot more sense than trying to write your own parser.
Yes, trying to write your own parser for C++ these days.
Yeah, that's a large effort.
Yeah, I can attest to that being a very uphill piece of effort
because I've actually worked on an IDE, Eclipse,
which to this day, in fact, uses a homegrown C++ parser. And it is quite, quite
buggy and of lower quality than Clang-based tools as a result, because parsing C++ is a big job,
and especially keeping up with new language features and so on and doing all of it accurately.
I think for newer tools and newer tooling projects,
there's an understanding that you really do need to be based
on an actual compiler front-end.
Yeah, and it is really fun seeing the various bug reports
that we get and the various edge cases that we hit
when we're looking at the front-ends.
As much resources go into EDG and Clang,
there's still bugs there.
And even understanding what's happening in a lot of those cases
is really hard for the experts.
Great way to learn the language.
Okay, so I think we're going to skip the normal news section that we would do
with most episodes because we're going to be focusing on all the news from Cologne
where you were both attending. But before we get into that,
maybe you could tell us a little bit about how you got involved with the ISO committee.
Maybe Botond, start with you. Sure, yeah. So I got involved just by, I mean, I've been programming
C++ since the C++ 98 days. And at the time, the language was in a period of kind of
stagnation.
But there was this talk of all this cool new
stuff happening in a new revision
of the language, which at the time was called C++
0x, because it
was projected to be published
early in the first
decade of this millennium.
And as we know, that
schedule ended up slipping a bit and ended up
becoming C++11. But it was an exciting thing to follow and keep an eye on. And then once C++11
was out, and we got all this new goodness, that wasn't the end of the road, there was even more
stuff. And particularly at the time, there was a major feature called concepts,
which people had been hoping would make C++11, but it didn't. And so I was particularly excited
about that. And so I was looking at, oh, if it didn't make C++11, what will happen to it? Maybe
they'll make the next revision. And so I was just following the various online communities about what happens there.
And that's where I learned about the standards meetings and the fact that they're open to the public.
And then I thought to myself,
hey, if they're open to the public,
wouldn't it be awesome to check one out sometime?
And so I started looking at whereabouts they are.
And as it happened in, I believe, September of 2013, there was one in Chicago, which is just a few hours away from Toronto, where I'm based.
And I figured, hey, why not check it out?
And so I did.
And it was a really exciting experience, especially because you get to meet a lot of people who previously to that
you had only known through books.
You meet people there who have written books about C++ and who teach courses and all that
stuff.
And so really awesome to meet these people in person.
So I was hooked from the first meeting.
And fortunately, I joined Mozilla around the same time, and I was able to get their generous support
to continue attending the meetings,
which I've been doing.
Very cool.
How about you, Tom?
Kind of similar hook with the concepts stuff.
Really?
Yeah, what had happened with me
was I was working on a project
that included Java and C++
and doing some JNI work there
and trying to rework how this code worked because it needed some help
and discovered that character encodings, right,
Java uses modified UTF-8 in its interaction with C++ APIs.
And I was impressed by how challenging it was working with encodings
and crossing these borders.
And I started getting interested in what would be a better API for dealing with encodings and crossing these borders. And I started getting interested in what would be a better API
for dealing with encodings in general.
And I came up with this idea of a text view,
which is iterators for doing decoding and encoding.
And I played around with it for a while,
and I remember struggling and struggling with it,
and eventually decided either I'm going to stop doing this
or I'm going to try using concepts to help me out.
And so I tried applying concepts to the design that I was working on and it, boom, it just
unlocked my brain and I was able to make good progress on it. And eventually decided, well,
you know, maybe this is something worth bringing to the standard committee. I started doing some
research and, like Botond said, discovered anyone could kind of show up
at these meetings.
In 2015, I discovered
Synopsys had a professional development program
where we could spend a week
at a conference and such. And I thought,
you know what, I'm just going to go crash Lenexa
and see what happens.
So I showed up there and
I got to meet lots of people
and like he said, I was hooked as well.
So within Synopsys we're trying to keep going and get more of us going to these meetings
because it's important for what we do, right?
Following the language and particular static analysis, undefined behavior,
is very much in the realm of things that we want to be able to analyze.
Now, I'm curious, since you've now both mentioned concepts as your hooks,
but I feel like we're talking about two different concepts,
because, Tom, when were you first introduced to concepts?
I actually was a little bit...
I started playing with C++ 0x concepts.
Oh, okay.
Around the C++11 time frame, but that
was before the thing I was just talking about, so I've been following it through then. Now I have
to ask the most loaded question I've perhaps ever asked on CppCast. Since you both were hooked into
the standard because you thought concepts looked awesome, and now the concepts that we're getting
in C++20 are effectively completely different from the things that hooked you. How do you feel
about that? You're right, that is a bit of a loaded question. The design of concepts has evolved a bit since the C++0x days. And I guess, I mean, it evolved in various ways.
But I guess for me, the most important difference is that we're,
and again, I apologize if this description sounds a little bit biased.
But I think it's legitimate for explanatory purposes.
It feels to me that the concepts we got are sort of half the feature that we were going to get in
0x. Well, they are called concepts lite, so. Well, yes, yes. And that's why. Because what
we were going to get in C++0x involved checking call sites as well as checking definitions, right? So a concept
is basically like an agreement between the users of a function and its definition. The users say,
I'm only going to give you things that meet the requirements, the declared requirements.
And the definition side of the agreement is, I'm only going to use the things that I have required. And both of these
can, in theory, be checked independently. So the use sites can be checked that they are, in fact,
passing in things that meet the requirements. And the definition can also be checked in isolation,
that it only uses things the requirements provide for. And in 0x, both of these sides of checking were going
to be done by the compiler. Whereas in the feature we have today, it's only the use side checking and
not the definition checking that is done. And so, I mean, I understand why it sort of had to be that
way. The definition checking side was a very hard problem. It had performance problems.
It had soundness problems.
But I do, in a high-level sense, miss that ultimately.
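To make the use-site checking concrete, here's a minimal sketch with C++20 concepts; the names are invented for the example:

```cpp
#include <concepts>

// A simple concept: T must support operator< yielding something bool-like.
template <typename T>
concept LessThanComparable = requires(const T& a, const T& b) {
    { a < b } -> std::convertible_to<bool>;
};

// The constraint is checked at each call site ("use-site checking").
template <LessThanComparable T>
T min_of(const T& a, const T& b) {
    return (b < a) ? b : a;
}

struct NoLess {};

int main() {
    min_of(1, 2);                    // OK: int satisfies LessThanComparable
    // min_of(NoLess{}, NoLess{});   // error: constraint not satisfied at the call site
    // Note: the body of min_of is *not* checked against the concept --
    // that is the definition checking C++20 concepts do not do.
}
```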
I tend to agree with the characterization
that we got about half of what was in C++ 0x
in terms of the feature set.
But I think in terms of value, I think we got about 90%.
Oh.
Because the user checking side is really where the most,
is the most impactful part.
People have been writing templates for quite some time.
And they've figured out tricks and such,
and good ways to test their own templates
to make sure that they are what they intend to be.
So that part is not as
concerning to me. The authors of the
current concepts have been
pretty bullish about this, that they
don't miss the definition checking,
which I think is good.
I also think the definition checking
that was present in C++ 0x,
well, it's
often stated that it was complete, right?
That if it definition checked, you were good to go.
But that wasn't completely true.
There were edge cases where you could still have a definition
that would not instantiate correctly at compile time.
And using it was challenging
because it became really restrictive
as to what you could actually put in your template.
If you wanted to do any kind of logging or anything like that,
you couldn't because if that wasn't part of the interface,
then it couldn't be part of the implementation.
So what we have now really gives us
plenty of flexibility for library authors
while getting us the usage checking.
That was really the most important part of it.
All right, good.
Yeah, I think that's a fair point.
I think what we ended up with is ultimately a pretty solid language feature,
and it has a lot of utility for C++ users.
Well, I'm glad you both feel optimistic about it.
I didn't know where that was going to go. Okay, so I want to start digging into some of the work that went on at the Cologne meeting, where you both were.
And I think it's worth starting with a feature that was actually taken out of C++20, which is contracts.
It was first voted in, I think, a year ago at Rapperswil, but it was taken out this meeting.
So can we start with just kind of what happened,
what was wrong with contracts that it was taken out?
Oh, man, how much time do you guys have?
I would say that it had some controversial aspects for a while,
and bringing those to light has been challenging.
One of the conversations I had during the week
was with people about axioms and assumptions.
We want the ability to use a contract
to feed into the optimizer
and let it optimize the code better.
But that has some interesting implications to it.
And in the case of axioms, But that has some interesting implications to it.
And in the case of axioms, many people have been striving to understand what's different between just assuming a contract and having this idea of an axiom.
And so trying to get people to see it the same way is, I think, ongoing work
that the study group that's been formed now may have to continue exploring. We could dive into that more, but it's a big conversation,
a big discussion all by itself. I think one of the things that makes contracts a challenging
feature to standardize is that even though they're, you know, one thing in the code,
there is a very wide variety of potential use cases for them,
and sort of ways you can apply them to various purposes ranging from
optimization, to static analysis, to runtime checking. And obviously, the committee is a very diverse group of people from different
domains and different areas, different styles of programming and
different models for software life cycles and all that. And so it's really hard for
people to agree on a single set of rules that can apply to all of these various use cases.
And so I think that's what a lot of the wrangling is about. And I feel like the feature has been
under debate for years. And it often engenders mailing list threads with hundreds of posts on them.
That can be challenging to follow at times.
And sometimes you feel like you get an agreement on something, and a little bit down the line, that agreement disappears.
And that's what I think happened.
We thought we agreed on it a year ago, and people weren't so sure now.
And there just wasn't enough time to figure it out.
And the train, the schedule is pretty firm.
The train moves on.
And the C++20 train, it was time for it to go.
And a feature that was in this current state of uncertainty was not ready to be on board.
So we will revisit it, and I'm hopeful that we will get a better fleshed out feature in
the C++23 timeframe.
Ultimately, I think the biggest questions during the week were the idea of build modes
and whether they belong in the standard, whether they should be prescribed, how much implementation freedom there should be
for using them. The adoption of P1607 on Monday was somewhat described as a very radical
design change from what we had previously been working on. Some would say it's more of a tweak,
which is why it got discussed and talked about.
But ultimately, its goal was to improve the flexibility in using the contracts
in order to better facilitate composition between different components of a large-scale product.
So what we had in the working paper going in really didn't have much of a way for saying,
for this particular module or component, we want to build a certain way,
and over here we want to build another way and mix and match.
And that was definitely a challenge.
So we had contracts going into Cologne,
and then you're saying on Monday a big set of changes for contracts were voted in and
then by the time you got to Friday, all of contracts were removed? Yeah, so basically
the reason we were even considering changes on Monday is because over the past
few months, during mailing list discussions, it became apparent that people were
not happy with what we voted in a year ago. So I think what was clear going into the meeting is
that something had to change. And so different people had different proposals for what directions
we could change in. And something had consensus on Monday. But it turned out to be too big of a change for us to be comfortable making it at this late stage.
And that's why the ultimate decision ended up being to pull it.
So we weren't happy with what was already there.
It was too late to make a change to something else, so that left not having it at all. Okay.
Yeah.
It was interesting, though, because the vote on 1607 was basically 28 in favor and 9 against,
which is pretty good consensus.
But then the vote to remove contracts on Wednesday was 49 for and 11 against, which is definitely
strong consensus.
Wow.
So it was interesting watching how things changed.
And, you know, I mean, people
also,
as new information comes in,
so I will confess, I was on the
in-favor side both of those times.
So on Monday, I thought this change is
great. And by the time
we had that second vote about removing it, I still
thought the change was
a good direction,
but I agreed with other people's arguments that it's a late change to be making in this
version of the standard.
Right.
Right.
And so, Tom, you mentioned that there's a new study group devoted to contracts, and
hopefully we'll figure it out and fix the feature in time for 23.
Is that the plan?
That's the plan.
Yeah.
John Spicer is chairing that group.
There's a new mailing list that has been created and I don't think there's
a schedule yet for any telecons and meetings,
but that'll be coming.
And so we'll get some more activity going on and more people involved.
And yeah,
I have no doubt that that will get something better.
Just like we did iterating on concepts,
we'll iterate on contracts
and we'll get something that will be better in the end.
I think aside from constexpr,
contracts are the thing that I was most looking forward to
because as a teacher,
particularly anything that I can give my students
that say this is a tool
that helps you write more correct code
and APIs that are harder to use wrong, like, you know, that sounds
great. So hopefully, yeah, I'm looking forward to seeing what comes out of it.
Yeah, especially for working for Coverity, contracts are
definitely relevant to what we do. Oh, right. Being able to take advantage of those
and find more defects. Right. See where you're violating contracts
and yeah. Yeah.
Interesting.
I want to interrupt the discussion for just a moment
to bring you a word from our sponsors.
Backtrace is the only cross-platform crash and exception reporting solution
that automates all the manual work needed to capture,
symbolicate, dedupe, classify, prioritize,
and investigate crashes in one interface.
Backtrace customers reduce engineering team time
spent on figuring
out what crashed, why, and whether it even matters by half or more. At the time of error,
Backtrace jumps into action, capturing detailed dumps of app environmental state. It then analyzes
process memory and executable code to classify errors and highlight important signals such as
heap corruption, malware, and much more. Whether you work on Linux, Windows, mobile, or gaming
platforms, Backtrace can take the pain out of crash handling. Check out their new Visual Studio extension for
C++ developers. Companies like Fastly, Amazon, and Comcast use Backtrace to improve software
stability. It's free to try, minutes to set up, with no commitment necessary. Check them out at
backtrace.io/cppcast.
Okay, going on to some of the features that were added to C++20.
We're moving pretty slowly for the record here.
There's a C++20 synchronization library. Do you want to tell us a little bit about that?
You skipped std::format. I'm sorry. I'm sorry. Go ahead.
I just want to say std::format was voted in. Maybe we don't have to dig into it, but I like that.
Yes, it was, and I think it's overdue
to have a formatting library that's better than C-Style Printf
or standard Iostreams.
And this is the one based on FMT, right?
Yes.
Yeah, it doesn't have its own printing utility but it has its own
formatting utility, and I think it's, yes, it's long overdue. Yeah, it's a great piece of
work, and the author has already, for C++23, been working on the scanning side of it.
You know, I asked him about the scanning side of it a couple months ago on Twitter. And at the time, he was, eh, I don't know.
Would there actually be use for this?
And now all of a sudden, it's becoming quite serious.
Yeah, we reviewed what he's working on in SG16 this past week.
A type-safe scanning library, I think, would be awesome.
Really, I really do.
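As a quick illustration of the library, assuming a standard library that ships C++20 <format>:

```cpp
#include <format>
#include <string>

int main() {
    // std::format is a type-safe, extensible alternative to printf and iostreams,
    // based on the fmt library.
    std::string s = std::format("{} of {} tests passed ({:.1f}%)", 7, 8, 87.5);
    // s == "7 of 8 tests passed (87.5%)"
    return s.empty() ? 1 : 0;
}
```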
All right, now we can move on.
I didn't mean to
skip it. My eye just glanced
over for some reason. So yeah,
C++20 Synchronization Library. Can either of you
talk a little bit about that?
So I...
One of the things about the committee is that
there are a lot of subgroups,
a lot of things going on in parallel. In fact,
if I'm not mistaken,
at this meeting for the first time,
there were up to nine parallel tracks.
So there were days or parts of days where nine groups were meeting in parallel.
And I believe our previous record was six or seven?
Yeah, something like that.
And Kona was six or seven?
Something like that.
Wow.
And so obviously, one of the implications of this is that you can't follow in detail every subgroup's work, even though it might be super interesting.
And so, yes, I have heard about the C++20 synchronization library, talked about in plenary sessions and informal conversations and so on. I have not actually been in the group that was discussing the details of it
because it was just one of the nine that I did not get to pop into, unfortunately.
And I'm in the same boat.
I've never attended an SG1 meeting, so I don't know the details there either.
And with regard to Botond saying that
with all the ways that we're spread out in these different subgroups,
you can't be everywhere at once, that's certainly true, but somehow or another
Botond does happen to make it look like he was in every one of those rooms when he
made his trip report.
To be fair, my trip report does have a focus on the evolution group, which is where I sit for most of the time.
But yes, I try to gather bits and pieces of information about what else goes on and try to mention it.
How about constexpr allocation?
I think that one I know a little bit more about. It's a very exciting proposal
because it greatly expands the set of sorts of things
we can do in constexpr functions.
Namely, we can now use things like vector and string
that perform dynamic allocations.
And so I think this is a really important step towards the committee's
goal of making compile-time programming be more like regular programming. So the more, I mean,
certainly vector and string are very regular programming types of things. They're sort of
everyday things that you want to be able to use to express basic things.
And so being able to use them, I think, is an important milestone there.
It was a challenge, as I understand.
So I'm not a compiler implementer.
But from the conversations I've had with compiler implementers and from what I've heard them say in session, it definitely was a challenge to implement and to even come to an understanding
that it's implementable. So the early stages of the discussion were, oh, we'd really like to have
this. Can we though? And basically some implementers ended up going off and doing some work, some research work in their implementations to try to prototype and just answer the question, is this implementable?
And I believe the EDG folks were one of the pioneers in this area.
We just discussed that with David, yeah, a few weeks ago. Yeah, David Vandevoorde, yeah, he does a lot of experimentation and sort of validation of new proposed features in C++.
In fact, we have a little bit of a running joke where if the chair of the evolution group, which considers new language features, is uncertain about whether a particular feature, even just a detail of a feature, is implementable.
He'll just ask David,
and David will come up with an answer on the spot for most things.
Which might look like, no, of course it's not implementable.
Well, actually, maybe.
Sure.
But this particular feature, constexpr allocations,
was not an answer on the spot thing.
It was a go off to the side and do some research.
But the results of that research were promising.
And so we ended up proceeding with it.
And I believe there were a few road bumps here and there.
In particular, we had to pull part of the feature where you could allow a dynamic allocation to survive to runtime.
Right.
And have it become just like a global, have the result of a constexpr computation that
involves dynamic allocation become just a global that survives to runtime.
That's something we wanted to do, and we could not figure out how to specify it properly
and in a way that all the
implementers are happy with it and confident they can implement it. So that part was pulled.
So you just get to use dynamic allocation during the compile time computation itself and then
throw it away. Then you have to shove it into a standard array or something like that at the end
of the computation, I would guess, if you wanted that data to
survive. That's right, yeah. I'm okay with that, honestly. I've spent a lot of time
in constexpr. I feel like that's probably the better idea. But part of
reading this paper, like I was just reading the stuff this morning, and my brain went... because
right now a literal type must be trivially destructible. With this paper, a literal
type can have a virtual destructor. That is a complete departure from what literal types mean
in my head. So it's going to take some rethinking on some things. Yeah, I don't know
that a virtual destructor poses more difficulty than virtual functions in general.
I think there was the tradition that...
Well, yeah, but the fact that you couldn't have a destructor at all before,
and now you can, oh, sure, it's a destructor.
It could be virtual, whatever, inheritance, it's fine.
It can all happen in compile time now.
Like, I'm going to have to go modify some of my training material, basically.
Yeah, I think that was a conservative approach taken to avoid controversy
and ensure implementability to begin with,
knowing that we could and probably would relax it eventually.
Right, yeah, as constexpr has been every single release,
more and more things relaxed, yeah.
Yeah, to Botond's point about the implementability concerns,
remember that was discussed at Plenary
when we were voting the feature in,
and there are still some concerns out there.
So hopefully it won't encounter any more.
Oh, yeah, good point.
Good point.
You're right.
There was, I believe, an implementer.
I don't know if I'm supposed to say which one.
No, we should.
An implementer expressed some concerns still
about the implementability.
I saw an implementer also tweeting after the plenary some concerns that they don't believe it will be implementable or are not sure if it will be implementable.
Right.
So, I mean, this is one of those things where we're at the stage in the process where if a fatal problem with a feature becomes apparent, there is still time to pull it out.
Now, that said, I'm really hoping that this particular feature does not get pulled.
And I'm optimistic that if it's implementable in two or three implementations,
it can be implementable in a fourth one with sufficient effort.
But I guess that remains to be seen.
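A minimal sketch of the transient-allocation model being described, assuming a compiler and standard library with full C++20 constexpr container support:

```cpp
#include <array>
#include <vector>

// Dynamic allocation is allowed during constant evaluation as long as the
// memory is freed before the evaluation finishes ("transient" allocation).
constexpr int sum_of_squares(int n) {
    std::vector<int> v;                // allocates at compile time
    for (int i = 1; i <= n; ++i)
        v.push_back(i * i);
    int total = 0;
    for (int x : v)
        total += x;
    return total;                      // the vector is destroyed before runtime
}

static_assert(sum_of_squares(3) == 14);

// To keep compile-time-computed data around for runtime, copy it into a
// non-allocating type such as std::array.
constexpr std::array<int, 3> squares{1, 4, 9};
```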
And this is why we have the feature deadlines
for phases of the standard.
There was another proposal
that made it through EWG and through Core this week,
and that was to allow floating point types
as non-type template parameters.
And we actually put that up on the straw poll
for plenary on Saturday.
But then during the week,
somebody objected that,
you know what, this is a feature.
This should have come in in Kona at the latest.
And so we ended up taking a procedural poll to say,
do we as a committee want to allow this late feature into this release?
And that poll ended up failing,
so that we did not poll actually adopting this new feature at all.
Wow.
So there were lots of discussions about
implementability and making sure that things have been implemented before. So if you want to
bring a proposal into the C++ standard, implementing it ahead of time is a great way to help get people
behind you and supporting your proposal. Okay, and I think this was already mentioned, but since
we're getting constexpr allocation,
we're also getting std::vector
and std::string will become constexpr,
right? That's right, yeah.
And so there was a little bit of library
work involved in
formulating those classes, right? So we have to
expand the language rules to allow more things
on the language side, but also
tweak the implementations of those classes
a bit to fall into, to conform to what is now possible. And there are actually some language features
to help with that. So we have a language feature called std::is_constant_evaluated,
that allows you to provide a different implementation for compile time versus
runtime for the same function. And while I haven't followed this in detail,
I think string may have had to make use of that
such that it doesn't do some of the fancier things it does
like the small string optimization at compile time
because that involves, I don't know, reinterpret casts and so on
that you still can't do at compile time.
Oh, yeah, okay, that makes sense.
But you want to still be able to have
the small string optimization at runtime,
so now you can have both.
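A sketch of that pattern using std::is_constant_evaluated; the __builtin_popcount call in the runtime branch is a GCC/Clang-specific assumption:

```cpp
#include <type_traits>

constexpr unsigned popcount_portable(unsigned x) {
    if (std::is_constant_evaluated()) {
        // Compile-time path: plain loop, no intrinsics or casts needed.
        unsigned count = 0;
        for (; x != 0; x >>= 1)
            count += x & 1u;
        return count;
    } else {
        // Runtime path: free to use whatever the platform offers.
        return static_cast<unsigned>(__builtin_popcount(x));  // GCC/Clang builtin
    }
}

static_assert(popcount_portable(0b1011) == 3);
```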
To go with this constant evaluation,
we also made it through
to allow inline assembly
in constexpr functions.
Unevaluated inline assembly.
Yes, which is a very important point.
We're not expecting the compiler to actually evaluate the assembly.
That would be interesting.
But that allows intrinsics, essentially,
to be emulated at compile time or executed at runtime,
as you would like for constexpr.
So a very useful feature.
Yeah.
Okay.
So I know we kind of skipped over the C++20 synchronization library.
Is there much to be said about stop token and joining thread?
I think that also came out of SG1, right?
Yeah.
So I don't know much about stop token,
but I've heard some of the discussions about joining thread.
And so basically the idea with joining thread
is that if you spin up a standard thread,
a std thread,
and you...
Tom, maybe you can help me out here.
If the main program exits
and your thread is still alive,
is that when there's
debate about what to do there?
Yeah, you basically end up in undefined
behavior if you don't
manually join your thread properly.
Yeah, you basically, on all implementations, get a
crash with, uh, you didn't
join this thread. Yeah.
Right. And so there was
a desire for
either the standard thread's behavior changing or having a new facility which automatically joins in the destructor. And, well, people had arguments for not changing the behavior of std::thread itself. So now we have a new facility. jthread, I think, is the name, or is it joining_thread now?
It's jthread.
I think the recommendation is now that we have this,
don't use std::thread.
Use std::jthread.
That's the difficulty.
Let's see.
Yeah, I mean, any time I talk about threading with students,
they're like, well, why doesn't it just join when it goes out of scope?
And I'm like, I don't know.
Like every other thread implementation.
I wrote my own RAII threads as pthread wrappers way back in the day before we had standard thread.
And mine automatically joined when it went out of scope because that's what destructors are for, right?
That's right.
So, yeah, I think it's a welcome change, personally.
I do, too.
I just wish we could have fixed the original one
rather than creating another one.
Well, we'll deprecate it at the same time we deprecate std::bind.
Okay.
Since I'm on the standards committee, right?
I can just decree these things.
Oh yeah, we love deprecating things.
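A small sketch of the difference, assuming C++20's <thread> additions:

```cpp
#include <chrono>
#include <stop_token>
#include <thread>

int main() {
    using namespace std::chrono_literals;

    std::jthread worker([](std::stop_token st) {
        // Cooperative cancellation: poll the stop token instead of a hand-rolled flag.
        while (!st.stop_requested())
            std::this_thread::sleep_for(10ms);
    });

    std::this_thread::sleep_for(50ms);
    // No explicit join needed: ~jthread() requests stop and then joins,
    // whereas a still-joinable std::thread would call std::terminate() here.
}
```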
Okay, I think we're mostly getting into smaller features now,
but source location
Yeah, that's one that I've been looking forward to, because nobody
likes macros, right? We want to exterminate macros and use modern C++
alternatives where possible. And I think we're at the stage where
for the vast majority of macros that is possible, right? So there's no more need to have
pound-defined constants or little function-like macros. There's really no need to have those
in C++. But there were these pesky __FILE__
and __LINE__
and __FUNCTION__ macros
that just did not have a replacement.
If you wanted information,
if you wanted to compile it
to automatically put in information
about your line number
and the file name and all that,
that was the only way you could do it.
And now there is a non-macro way to do it with source location.
So I think that's a very welcome feature.
Yeah, and I looked at this as a nice rescue.
The source location had been languishing in the library fundamentals TS for a while.
And it was Corentin Jabot, and I forget who the other author was, that went and decided to push it and get it out of the TS and into the working paper.
So kudos to them for taking on that effort and getting it through.
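A minimal sketch of replacing those macros with std::source_location, assuming C++20 <source_location> support:

```cpp
#include <iostream>
#include <source_location>
#include <string_view>

// The default argument is evaluated at the *call site*, so it captures the
// caller's file, line, and enclosing function name without any macros.
void log(std::string_view message,
         std::source_location loc = std::source_location::current()) {
    std::cout << loc.file_name() << ':' << loc.line()
              << " [" << loc.function_name() << "] " << message << '\n';
}

int main() {
    log("starting up");  // prints this file, this line, and main's name
}
```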
How about using enum?
So that's kind of a minor thing,
but it sort of fixes a paper cut, right?
So in C++11, we have scoped enums.
And that's good for sort of name hygiene,
for not mixing,
not just having your enumerators pollute
whatever enclosing namespace
that you define your enumeration in.
You put them in this nice scope.
And so that's great and all,
but sometimes you're writing a localized piece of code where
you have to just repeat those enumerations over and over again, and having to prefix them with the
enumeration name just gets verbose. And so using enum allows you to introduce the enumerator names
into some local scope of your choice, and henceforth
refer to them without the
enumeration's name as
prefix. One of those nice things
that you probably don't even think about
once you have the feature. It's like, of course you can't do that.
But you notice it as a
small paper cut when you don't.
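A short sketch of the C++20 using enum feature:

```cpp
#include <string_view>

enum class Color { Red, Green, Blue };

constexpr std::string_view to_string(Color c) {
    switch (c) {
        using enum Color;   // bring the enumerators into this scope
        case Red:   return "red";
        case Green: return "green";
        case Blue:  return "blue";
    }
    return "unknown";
}

static_assert(to_string(Color::Green) == "green");
```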
And thank you for having the summary for that one.
As we noted before, having like nine breakout
sessions,
you can't follow everything.
Honestly, I don't think I was even aware of this one at all.
Well, so I'm not sure if we should go over every single feature, but are there any ones you guys want to highlight that you were involved in?
Well, so I think we can't have a discussion about C++20
without featuring its flagship, what I view as its flagship feature, even though it was voted in at a previous meeting and not at this one.
So I'm thinking of modules.
Modules was voted in at the previous meeting, but I think that just not having it pulled out at this meeting is an accomplishment that I'm proud of.
Because there was a non-trivial risk of that, right?
At the last meeting, the state modules we're in was that the implementers had come to a consensus,
but there were other segments of the community, and Tom can perhaps tell us more about this,
in the tooling community in particular, that had concerns about modules adoption.
And I think up until this meeting, there was a risk that modules might get
pulled over some of those concerns.
And so I think that that not happening is an accomplishment in and of itself.
Yeah, I suspect if we had more time to work the deployment problem, there would have been more of a chance. It's a pretty serious concern. Based on the implementation direction
the implementers
are taking on it, the problems
that it imposes on traditional build
systems are effectively equivalent
to generated headers. So for those
of you who have dealt with build systems
and generated headers and
how difficult those can be,
those problems are now going to arise
for adoption modules.
So we'll have to see.
There are a lot of people experimenting with different things.
People from CMake have been attending the meetings
and making sure that CMake will be able to handle modules well,
likewise for some of the other more modern build systems.
For those of us that are still stuck on old make-based build systems,
we're going to have to do some work
to be able to take advantage of this down the road.
But SG-15 is continuing to explore our options there.
We may be able to introduce an implicit module use workflow
through the compilers,
which is what Clang modules have traditionally done.
So Clang modules were very easily adopted
because build systems didn't need to be modified to work with them.
However, that brought along problems of modules being rebuilt multiple times
and some ODR issues and such.
So implicit modules are not a perfect solution either.
But we'll have to see how things shake out here.
Okay.
Were there any changes made to any of the main features for C++20?
Modules, coroutines, concepts, ranges?
There was a change made for modules in that previously,
the working paper specified
that some subset of headers
are importable
as importable
header units.
And the mechanism by which this is specified
is up to the implementation.
So like on Clang, it could be module maps
and other implementations. It could be command line
options or some other way of marking
a header as being importable.
And if you had a pound include directive for an importable header,
it was treated as an import declaration,
as if you'd done an import instead of a pound include.
That proved problematic for Microsoft's implementation,
and so we discussed that in this meeting
and decided to make that whether or not that #include translation happens
is now implementation defined.
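A sketch of the header-unit import being discussed; this is C++20 syntax, though toolchain support and the build flags needed to make it work still vary by implementation:

```cpp
// Header unit import (C++20): the standard header is compiled once as a unit
// and imported, with its macros not leaking into the importer.
import <vector>;

// Whether an ordinary #include of an "importable header" gets rewritten into
// an import like the one above -- the translation discussed here -- is now
// implementation-defined rather than required.
// #include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    return static_cast<int>(v.size()) == 3 ? 0 : 1;
}
```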
I wanted to comment on
integrating the spaceship operator
into the standard library. I'm impressed
that that got through to
modify the standard library
for a
timely feature right there.
Yep.
I think that's
important to have
that library support
with us. And I think it's
the result of
the committee taking care
to identify features
that have library impact
and sort of front load them.
So we have a schedule and
we've been trying to,
I think in part because
of the user community's positive reception
of the regularly scheduled release
of new features every three years,
we've been trying to be more disciplined
about sticking to it,
but also ironing out kinks in the process.
And one of those kinks
is making sure
that when a language feature has library impact,
it is refined early enough
that there's time for the library groups
to go through the library
and propagate the consequences there.
I think there are some cases
like class template argument deduction of C++17,
where we kind of ran out of time to do that really properly.
I think we did a better job now in 20 with Spaceship.
So I think that's a success story for being disciplined a bit about the schedule and stuff.
Yeah, I agree.
I think Barry Revzin should get a good amount of credit for having done all that work and pushed that through.
There was a number of changes that were made to the core language for the spaceship operator
based on the work that he did integrating it into the standard library.
So there were some good fixes that came there.
Yeah, absolutely.
As is often the case, there are issues we discover at a late stage.
And yeah, big shout out to Barry and also David Stone,
who spent a lot of
time writing high-quality
papers to fix spaceship
defects.
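A minimal sketch of the core-language side of the spaceship operator in C++20:

```cpp
#include <compare>

struct Version {
    int major;
    int minor;
    int patch;

    // Defaulting <=> gives memberwise lexicographic comparison, and the
    // relational and equality operators become available via rewritten expressions.
    auto operator<=>(const Version&) const = default;
};

static_assert(Version{1, 2, 0} < Version{1, 10, 0});
static_assert(Version{2, 0, 0} == Version{2, 0, 0});
```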
Yeah, and one place where
I'm a little bit concerned is on
coroutines, where we now have the language
facility, but we don't
have the library side support
for coroutines.
So we'll have to see how this plays out going into C++23.
Fortunately, Gor Nishanov and folks, they've done a lot of work on this already.
It's been implemented and deployed.
I think Microsoft is using coroutines pretty widely internally at this point.
So there's good implementation experience that we can rely on,
but we still don't have those library parts in the standard library. It would be interesting to see what happens when they land. That's an interesting point. I had seen some
complaints a few months ago about how
ranges that we were getting wouldn't have all of the utilities for
ranges that we want, but it looks like some of those ranges' utilities did
in fact just get voted in, if I
got that right. Yeah, in particular
the popular ones, the range
views, which
are super useful for
allowing you to program in a bit
more of a functional style with ranges
that did make it in.
I think a lot of people are pretty happy
about that. And that's another great example
of how ranges influence the
final evolution of concepts
and how it appears in C++
20.
Getting that library experience is
really, really valuable.
Concepts and ranges are a nice pairing of a
language feature and a library feature that are landing
together and
ready for users to use from the get-go.
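A short sketch of the range views style being described, assuming C++20 <ranges>:

```cpp
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> nums{1, 2, 3, 4, 5, 6, 7, 8};

    // Lazily filter and transform without building intermediate containers.
    auto even_squares = nums
        | std::views::filter([](int n) { return n % 2 == 0; })
        | std::views::transform([](int n) { return n * n; });

    for (int n : even_squares)
        std::cout << n << ' ';   // prints: 4 16 36 64
    std::cout << '\n';
}
```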
So I guess final word then, deprecate volatile?
Yes, good idea.
I don't know what to say.
I think it's probably a positive.
Most of the cases that are deprecated are things people shouldn't have been doing
anyway, or used for purposes
other than what
it may have been intended for
yeah I feel like the deprecation
hits a lot of corner cases that
probably weren't super interesting to
begin with
but yeah volatile has definitely been
misused for
synchronization related purposes
And I think,
to be clear, we did not deprecate all of it. We deprecated
uses of it in certain contexts.
So I think
reducing the
surface area there
for people to misuse it is
a positive.
I've never seen it.
Volatile is essential for its correct uses.
There's a workaround for it.
Right.
We need it, but we need it where we need it,
not where we don't. And it looks like they'll be
moving in a direction of volatile load and
volatile store to make it very explicit
that that's what you're doing.
Something like that. I'm guessing
those, that's a standard
library
atomics-type proposal, or...
Yeah, I'm not too familiar with the specifics of it, to be honest. Something like that.
But yes, I can see utilities like that flowing out of the concurrency study group.
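Roughly the kind of usage affected by the C++20 volatile deprecations, as a sketch:

```cpp
// Compound assignment and increment/decrement on volatile operands are
// deprecated in C++20, while plain volatile loads and stores remain fine.
volatile int status_register = 0;

void poll_device() {
    int v = status_register;      // OK: a volatile load
    status_register = v | 0x1;    // OK: a volatile store
    // status_register |= 0x1;    // deprecated: the read-modify-write is not
    //                            // a single access, despite how it reads
    // ++status_register;         // likewise deprecated
}
```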
Okay, so now that we are past Cologne and C++20 feature freeze, essentially,
what are you two looking forward to with C++23?
So for me, I think one of the biggest items,
and to be clear, we've got a lot of big items in 20,
so I got to cross a lot of things off my list.
But one of the big remaining things I'm looking forward to is reflection.
Reflection is something that's been in the works for a long time.
There is a TS, a technical specification, out for it, but it is A, to my knowledge,
not fully implemented anywhere, and B, it's not the final form in which we expect to get
reflection in the language. So it's very much a work in progress, but there's been a lot of
bright minds at it, a lot of implementers
experimenting with it, and
I'm optimistic that we
will get at least one batch of reflection
features in 23,
with possibly further
more next-generation stuff like metaclasses
later on. How about you, Tom?
For me, as chair of SG16,
the text and Unicode study group,
I'm very focused and interested in making some improvements there.
In C++20, we put in some foundational work.
So for one thing,
the C++ standard no longer refers to a version of the Unicode standard
that is older than some committee members.
It's now a floating sort of reference to it. That's an improvement.
We
made a change that doesn't affect
any implementations, but
as previously specified,
the char16_t and char32_t
literals were,
the encoding of them was implementation defined.
But on every implementation,
it was UTF-16 and UTF-32
respectively. So we just decided to make that. Yes, we can count on that. It will always be UTF-16 and UTF-32 respectively. So we just decided to make that.
Yes, we can count on that.
It will always be UTF-16 and 32.
Wow.
Okay.
And then we got the char8_t type in,
so that we can now have strong types for UTF data.
And that was motivated for three different reasons.
One, in that the char-based encoding is
implementation-defined and locale-sensitive.
So it's hard to know
what data you have in any char-based
storage. So
char8_t now gives us a way to clearly
separate UTF-8 data from
other stuff and
put them on the guardrails that
people need there to avoid mixing
them together and introducing mojibake into their application.
And then along with that, we get an unsigned type for managing UTF-8 data.
Since char is often signed, if you try to use it to inspect a trailing code unit,
it doesn't always do what you want, not portably anyway.
And finally, char8_t does not alias like char
does, so there's some performance improvements
that can come from using it
and working specifically with UTF-8 data.
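A small sketch of what char8_t changes in practice in C++20:

```cpp
#include <string>

int main() {
    // In C++17, u8"..." literals had type const char[]; in C++20 they are
    // const char8_t[], a distinct unsigned type that does not alias like char.
    const char8_t* greeting = u8"grüß";   // clearly UTF-8 encoded data
    std::u8string owned = u8"grüß";       // std::basic_string<char8_t>

    // const char* p = u8"grüß";          // ill-formed in C++20: types differ

    return (greeting && !owned.empty()) ? 0 : 1;
}
```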
That's interesting.
So for C++23, what we're looking
forward to is building on top of that
and providing encoding
aware text and text view
containers and views that
support decoding
when you need to work at a code point
level with text and
probably some grapheme cluster
iterators so that we can look at
what a user perceives as a character
as opposed to the individual
components. Transcoding
support. Right now I
like to say that with char8_t,
writing Hello World is an expert
only activity.
Because getting the data,
the UTF data, into a stream
in a way that isn't going to
produce the wrong result portably
is really hard. We just don't have the
facilities in the standard to provide
that transcoding support. So that's
something that we're working on. JeanHeyd
has a proposal for that
and is making good progress on that.
And then finally, support for Unicode algorithms in general
and hopefully a good regular expression library
from Hana Dusíková.
She's been doing some great work,
which everyone's familiar with,
and adding Unicode support to that
with some help
from Corentin Jabot.
Looking forward, hopefully that can
land in the 23 timeframe as well.
That makes me think of the codecvt
utilities that were
added in 11 and then, I don't know,
deprecated in 14 or 17.
There's a couple of us that actually need
something there, so hopefully
we'll get something real in 23.
That's the idea.
The entirety of codecvt has not been deprecated.
What was deprecated was just the ones that converted
between UTF-8 and 16 and UCS2 and such.
Okay.
We eventually, I think...
Which is what we need when we're working on Windows.
Yes, and I think eventually we need to just deprecate
all of the codecvt facets
and come up with something new.
I'm not sure that the codecvt facets
were ever a good idea to begin with.
What I think makes more sense is layering
the encoding awareness on top of the strings
rather than having them being embedded into it.
Right.
So hopefully we'll, I think we'll probably pursue that direction.
Very cool.
Yeah.
Thank you.
Okay.
Well, it was great having you both on the show today.
Thank you for having us.
It was great to be here.
Thank you, guys.
Much appreciated.
Thanks.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in,
or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate if you can like CppCast on Facebook and follow CppCast on Twitter.
You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter. Thank you.