CppCast - Reflection for C++26
Episode Date: January 26, 2024
Daveed Vandevoorde joins Phil and Timur. Daveed talks a bit about his work at EDG, but mostly his efforts to get Reflection into C++26, along with his co-authors, and how that fits into the big picture.
News: Meeting C++ 2023 videos (including all keynotes); "A 2024 Discussion Whether To Convert The Linux Kernel From C To Modern C++"; How do you correctly implement std::clamp? (blog post and Reddit discussion); C++ Online Workshops
Links: P2996R1 - "Reflection for C++26"; P1240R2 - "Scalable Reflection in C++"; "C++ Templates - The Complete Guide" - book (Vandevoorde, Josuttis)
Transcript
Episode 375 of CppCast with guest Daveed Vandevoorde, recorded 22nd of January 2024.
This episode is sponsored by Native Instruments, innovative hardware and software that inspires
musicians, creators, and DJs.
And Sonar, the home of clean code. In this episode, we talk about the Linux kernel
and about the implementation of std clamp.
Then we are joined by Daveed Vandevoorde.
Daveed talks to us about reflection for C++26.
Welcome to episode 375 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Timur Doumler, joined by my co-host, Phil Nash. Phil, how are you doing today?
I'm alright, Timur. How are you doing?
I'm not too bad. I'm staying inside. The weather is pretty bad outside. We have about a meter of snow, it's very cold, and there's some kind of snowstorm outside, so I'm
just not going to go anywhere for now. But as long as I'm inside, yeah, I'm good. How are you doing, Phil? What's the weather where you are?
We have a lot of weather here as well. We had a big storm pass
through last night, maybe headed your way, even quite high winds for the UK at least. I think up
to 99 miles an hour at times, so a lot of trees down in the area.
We couldn't get our car out of our driveway this morning.
So yeah, we're just dealing with all that at the moment.
But no snow this week.
We had some last week.
That's fun.
Yeah, I also couldn't get our car out of the driveway two days ago
because it got stuck in the snow.
I had to spend about an hour with the shovel to dig it out of there
and to be able to go anywhere.
So that was fun.
Yes, no joke.
All right.
So at the top of every episode, I'd like to read a piece of feedback.
And this time we got a comment from Hamlam who wrote the following on Reddit.
I love this podcast series.
So referring to the last episode, which I recorded with Guy and Wieler as the guest.
So Hamlam writes,
I love this podcast series, but this time I was particularly annoyed by the audio quality.
The guest host audio was almost inaudible to me while listening to it in my car, commuting to and
from work. My suggestion would be to make sure that participants have a reasonable microphone
and also record their audio locally in high quality, then use that audio for mixing the
final audio for the podcast.
Well, thank you very much for the feedback.
I do have to say, yes, that's mostly on me.
So what happened is that when we sat down to record,
I noticed that Guy's audio was quite poor.
And I had trouble understanding him.
So I said, okay, can we figure this out?
So it was this awkward few minutes where we kind of watched him
fiddling with his audio settings and stuff,
but it just wouldn't get better.
And what I should have done is to say,
okay, we're going to actually figure this out
before recording.
And what I did instead is,
oh, Phil was going to fix it in post.
It's fine.
And we recorded,
we went ahead and recorded anyway.
It turned out that the source material
actually wasn't that good.
I don't know what the problem on Guy's side was,
but we should have sat down and fixed it before recording. So that
one is on me. I apologize. I'm going to make sure it's not going to happen again, hopefully.
Yeah, yeah, I will take a bit of the blame as well, because I'm usually the one that says, it's okay,
we can fix it in post, because I've got these really great plugins which, funnily enough, are
actually provided by the people sponsoring us today.
And they usually will sort out just about any problem that we have.
So I'm usually fine to say, we'll fix it in post.
This time, no matter what I did, we couldn't seem to recover it.
So sorry about that.
We will endeavor to do better.
All right.
So we'd like to hear your thoughts about the show.
You can always reach out to us on X, Mastodon, or LinkedIn,
or email us at feedback@cppcast.com.
Joining us today is Daveed Vandevoorde.
Daveed is a Belgian computer scientist who lives in Tampa, Florida, USA.
He's the vice president of engineering at the Edison Design Group, where he contributes
primarily to the implementation of their C++ compiler front end.
He's an active member of the C++ Standardization Committee.
His recent work in that context has primarily been about developing a compile-time reflection mechanism.
David is also one of the five members of the committee's direction group.
He's the primary author of the well-regarded book, C++ Templates: The Complete Guide,
now available in its second edition.
Daveed, welcome to the show.
Hi, Timur. Thank you. Hi, Phil.
So it's been like four or five years, I think, since you've been to the show. So it's a great
honor to have you back. Thank you very much for joining us.
Oh, it's my pleasure.
Now, we're going to talk about reflection in the main part of this episode. I just want to pick up on
the EDG part, because I know you've been there for a long time now. Yeah. And I've heard about EDG a lot over the years,
but mostly as the front end for Visual Studio's IntelliSense.
And back in the day, the Comeau C++ compiler,
I think, used it as the front end as well.
If I remember rightly,
that was the reason it was the only compiler
that ever actually got full conformance with C++98.
But I don't think it's around anymore.
But what else is it used for these days?
So we have a fairly broad set of customers.
So some of our customers are people that sell chips
and want a C++ compiler with their particular development kit.
And so they're the traditional compiler vendor.
So we have a few of those.
You mentioned Visual Studio uses our product in IntelliSense.
A lot of our customers these days do source analysis. So they take our product to parse C++.
And one of the reasons why it's compelling to go with us
rather than some other solution is that we cover
all three of the main implementations.
So you can tell our product,
please parse C++ as if you were GCC 7.3.1.
I'm just making something up.
Or with the same executable, you can say,
oh no, please parse it as if you were Microsoft Visual C++ version 19.23.
And sometimes we even go down to
the build number. So we implement
all these
extensions, their bugs,
things like that. So a lot
of our customers will do source analysis based on
that. Some will do source-to-source transformation.
So they'll take your source code,
then insert in
our representation of it some stuff,
and then we regenerate C++ from that.
So yeah, a lot of applications in that domain.
Some will just give you stats about your code.
A lot of it these days is security stuff,
where some application will look at your code and say,
here and there and there you have vulnerabilities.
Right.
Yeah.
It's pretty broad, but still a fair number of sort of plain, traditional C++ compilers.
Okay.
So you have to match the bugs of the other compilers.
Yeah, yes.
Does that mean you have a load of test cases that check that you've got the right bugs?
Right. So you'd be surprised at the fraction of my work,
which consists in trying to figure out
what the boundary of the bug is, right?
I'm not allowed to look at the source code
of any of the compilers.
In some cases, it's impossible.
I don't have access to Microsoft's source code anyway,
but I'm not allowed to look at GCC source code
or Clang source code.
I think maybe technically I could for Clang, but I don't.
I just send a whole bunch of...
So I get the test case that says,
hey, you're doing something different.
And then I try to make a whole bunch of adjacent test cases
trying to figure out, oh, that's what they do.
And sometimes, a lot of times at the end,
I'm like, okay, I know what they're doing,
and I emulate that.
Sometimes it's a little different where I'm like, okay, I know what they're doing,
but I really don't want to do that in my compiler.
Can I do something that's close enough?
And then we'll do something like that.
It's on a case-by-case basis.
But a lot of the problem reports we get to our product are of that kind,
where you're like, okay, well, we know this is not
standard
C++, but we would like you to parse it anyway
or we'd like you to behave this way or that way
anyway.
Sounds fun. Yeah, that sounds fun indeed.
It is actually, it is fun.
Sometimes it's frustrating, sometimes
after you have
done some really gnarly
emulation, you're like, yes!
Yeah, it's an interesting job.
All right, so Daveed, we'll get more into your work
in just a few minutes.
But first, we have a couple of news articles to talk about.
So feel free to comment on any of these, okay?
The first news item is that the keynotes
from Meeting C++ 2023,
which is a big conference in Berlin that
happened two months ago. Those keynotes have been released on YouTube, and all three of them
are pretty great and worth watching. So we've got "Six Impossible Things" by Kevlin Henney,
"Helping Open Communities Thrive" by Lydia Pintscher, and Proxy Plus Plus by Ivan Čukić. So you can go
ahead and watch those, and they're all pretty awesome.
Yeah, I think a lot of the other videos from Meeting C++ are available now as well. I know the one for my talk is. I don't know who else.
Yeah, so this one is from like a couple of weeks ago, so they might have released more.
Yeah. Right, so the next piece of news is a post on
Phoronix.com, which is a website that I have to admit I haven't come across before,
but apparently it is a leading website
or even the leading website for Linux news.
Yes.
And there is a post there
which I found really interesting.
Actually, thank you to Nis Minard
who suggested that we include this as a news item.
And that blog post says,
a 2024 discussion whether to convert the Linux kernel from C
to modern C++.
So it's kind of a summary of an
old discussion, but one that has been
recently reignited.
Whether the Linux kernel should actually be
converted from C to C++.
And so
the argument is that C++ actually has grown
up to be a better C
for the kind of embedded programming
that you're doing when you're developing an OS kernel. And that in particular, C++ 20 is a total
game changer when it comes to meta programming. It adds concepts, it adds a bunch of other stuff
that makes kind of scalable programming, you know, much easier and better. And also, many things
for which they currently require GCC-specific
extensions in C, and other very,
very elaborate stuff, in modern C++,
like C++20 and later,
you can just write really easily.
So that's interesting.
Of course,
the question is if you want to rewrite the kernel or parts of the kernel in a
different language,
why not just use Rust?
They're actually doing that already.
I think parts of the kernel have been rewritten in Rust,
or at least they add Rust components to it or something.
So the argument is that rewriting large parts of the kernel in Rust
isn't a good idea because it's too different,
whereas conversion from C to C++ can actually happen gradually.
You can already almost compile the kernel,
just switch the compiler from C to C++
and it will almost already compile.
And then you can just gradually add
new features
at whichever rate you want. And obviously
the idea is not that you
add every possible C++
feature under the sun, but just have
a defined subset of C++, which is
kernel C++, which is
the thing you're going to be using,
much like they already do it for C.
And yeah, the argument is maybe that's what the Linux people
should be doing.
And yeah, I don't know.
I think people have probably very different opinion on this.
I'm personally not a Linux guy very much,
so I don't really have an opinion on this,
but I'm curious what the other people think about this.
So it's an old question.
Back in the 90s, it came up,
and Linus Torvalds had some choice words about C++ at the time.
Oh, yeah, I remember.
He was quite strongly, he had a very strongly worded,
a few very strongly worded statements
about why he thinks C++ is rubbish, from what I remember.
Right.
Yes, and I think he had some more in the 21st century as well. Now, the person who brought
this up, if I remember correctly, is Peter Anvin, who is a respected kernel contributor.
And he pointed out first, when we say that the latest kernel is written in C, it really isn't.
It's written in the GCC dialect of C.
And actually about 20 years ago, my company
was asked to do quite a bit of work to be
able to compile the
Linux kernel.
It's not trivial.
Meaning we already had a C
frontend with
a bunch of dialects, but we had to add
a lot of stuff to be able to get
to the point where we could compile the Linux kernel.
And then I think a few years ago,
they finally switched from C89 plus GCC extensions to C11.
But I think Peter Anvin was saying there are really a lot of the special
extensions that they use that could be instead more simply done using C++ constructs.
And also they could use data structure, like templates are really amazing, right?
Once you have templates, you can replicate data structures much more cleanly throughout
your code base without a lot of macro hackery.
And so I think he was pointing out that that'd also be a really nice thing to be able to do in the Linux kernel,
because kernel has a fair number of important data structures
that have to be implemented efficiently.
Yeah, I think for our audience,
most people won't take a lot of convincing
to suggest that C++ will bring a lot to the table,
but I think the Linux development community
has a very different approach to this.
So it's very interesting to see how this plays out.
But I like the way that this sort of has come full circle.
So today we're talking about what about successor languages
to C++, like Carbon and some of the others.
And here we're still talking about C++
as a successor language to C,
and exactly the same principles again.
So, yeah, interesting to see how this plays out.
I'm not holding my breath at this point.
And that idea of gradually moving, it's really important.
The EDG front end until, I don't remember exactly,
but maybe five or six years ago, it was still C.
And so we very gradually moved to
C++. And if you just were to jump anywhere in the code base, it would feel very much like C.
A lot of our other benefits is just being able, same as with the kernel, to be able to drop in
some template instances here and there, instead of having to reimplement a linked list for the 120th time.
Been there, done that.
Right.
Speaking of implementing stuff for the 100th time,
the last news item I want to mention today
is a blog post about std::clamp,
about which there's also a very interesting Reddit discussion.
And so the observation was that std::clamp actually generates less efficient assembly
than if you just write std::min(max, std::max(min, value)), the kind of naive one-liner
implementation using std::min and std::max, on GCC and Clang, which seems surprising.
But if you read the blog post, the reason is very subtle.
One reason is that the naive implementation
with std::min of std::max is actually incorrect,
because the standard version specifies the behavior
for negative zero and specifies the behavior for NaNs,
which the min/max one-liner doesn't give you.
So if you like shuffle it around in a way that it behaves the same, then you actually do
get the same assembly.
And another reason is that apparently the order in which the arguments are passed to
clamp also affects the generated assembly, at least in x86, because of the limitations
of the instruction set and the requirements imposed by the calling convention and stuff
like that.
So the blog post goes into a lot of depth on this stuff.
It's very, very interesting.
So yeah, that caught my attention this week.
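For reference, the comparison being discussed is roughly the following. This is just a sketch; the actual codegen difference depends on the compiler, target, and flags.

```cpp
#include <algorithm>

// Naive one-liner: behaves differently from std::clamp for NaN and for
// -0.0 vs +0.0, which is part of why the generated code can differ.
double clamp_naive(double value, double lo, double hi) {
    return std::min(hi, std::max(lo, value));
}

// The standard version with its fully specified behavior.
double clamp_std(double value, double lo, double hi) {
    return std::clamp(value, lo, hi);
}
```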
I saw the post come by on Reddit, but I didn't click through, so I haven't read that.
Yeah, I only skimmed the article, so I didn't get to see if it examines questions like:
is the standard library definition holding us back? Is this another case where the standard library
definition is holding us back? Could we actually have an easier, more performant implementation
of std::clamp if we relaxed some of those details, so we could use the naive implementation?
Yeah, or whether there are some other side effects to that.
So that's funny.
So if you don't care about what happens when it's NaN
or when it's negative zero,
or you don't care whether negative zero
is smaller than positive zero
or whether they're equal or whatever,
then yeah, you get to a more efficient implementation.
And funny enough, I had that as an example
in my paper for Assume,
which we now have in C++23,
where you can take the clamp implementation
and just before you call clamp,
you add square bracket, square bracket,
assume it's not NaN and not negative zero or whatever,
and you actually get better assembly code this way,
even if you use std::clamp.
Of course, then if you do pass in NaN,
then everything crashes horribly, so don't do that. But this was actually one of the kind of use cases that I had for assumptions, that,
you know, sometimes you don't want to implement it yourself. You just want to use the standard
library thing, but you don't care about all the edge cases. And it's similar for stuff like std::midpoint
or std::lerp or whatever, lots of these numeric things: you just don't care about these edge cases, but they slow down your code. So you can just kind
of add assumptions to throw away those extra instructions, if you have a clever compiler.
I mean, you can also sometimes get the opposite. Sometimes
if you add an assumption, it triggers weird stuff. I've seen this on GCC, where previously it
would auto-vectorize your code, and then if you add assumptions, suddenly it doesn't, and it gets
slower instead of faster. So you see stuff like this as well, so you've got to be careful. But
yeah, I have encountered this topic before. It's a fascinating topic.
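A minimal sketch of the trick Timur describes, assuming a compiler that understands the C++23 [[assume]] attribute; whether it actually improves the generated code varies by compiler and target.

```cpp
#include <algorithm>
#include <cmath>

double clamp_fast(double value, double lo, double hi) {
    // Promise the compiler the awkward cases can't happen.
    // If the promise is broken at runtime, the behavior is undefined.
    [[assume(!std::isnan(value) && value != 0.0)]];
    return std::clamp(value, lo, hi);
}
```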
Yeah, so it sounds like one of those problems that generalize: you
really want a pair of functions,
one which has a narrow contract with assumptions,
and another which has a wider contract and a little bit of a performance hit, perhaps.
Oh, right. So now we're getting into contracts.
So I don't want to keep talking about this because then we can do a whole episode about that.
That's obviously one of my favorite topics.
So let's park that here. I believe you have one more news item for us today.
Yes. So the last time I was on, which was not last episode but the
one before, I mentioned that C++ Online tickets were available. And at the time that I
said it, as we recorded, they weren't, but I thought it was going to be coming out in about 10 days,
they'll be online by then. And I realized the night before, as I was editing, they still weren't online, so I had to quickly put them online. It did like a soft
launch, so the page was available, you could only get there from the CppCast link, but they are now
fully available. So I wanted to reiterate that, and we'll put another link, a proper link, in the show
notes. But also to point out that if you buy a workshop, there are some great workshops on
offer and very reasonably priced, you actually get the whole conference thrown in for free.
So it's a great way to get a workshop and a conference for very good value. So that's
what I had to say on that.
All right, that sounds great. Can you remind us when that event takes place?
Yes. Because we also changed the dates, which I think I did mention last time as well. It's going to be the end of February; let me double-check the exact dates: the 29th of February to the 2nd of March.
Right, so a bit more than a month from now.
Yes, yes.
All right. Okay, so that wraps up the news items
for today, so we can move on to our main topic for today. And this is really, I think, quite special,
because over the last year or so,
since Phil and I have taken over the show, you know, we've had
episodes on many, many different topics, but on almost every episode we just kept coming back to
the two hottest topics in C++ right now, or what seem to be the hottest topics. One of
them was, you know, safety, and the other one is reflection.
And I think almost every episode we have either a news item about reflection
or somebody mentioned something that relates to reflection
and how it would be awesome to have it in C++.
So it's mentioned, I think, in almost every episode.
And now, recently, this paper, P2996, came out,
which has the title "Reflection for C++26".
And it looks like we might actually get reflection now
in the next upcoming C++ standard.
So a lot of people are very excited about this.
So we thought we have to have an episode about this.
This is such an interesting topic.
And so we invited Daveed,
who is the expert on the topic
and one of the authors of that paper, to talk to us about it.
So Daveed, thank you again very much for joining us today.
Not at all. My pleasure. I'm glad to be here.
So this paper, P2996, let's talk a little bit about the history of that.
So how did this paper come about?
So I've been following it very loosely.
I haven't actually participated in any of the standardization work on Reflection,
but I'm doing other stuff on the committee,
but you kind of hear like what's going on
a little bit here and there.
There's like a paper here that people discuss
or like something there that just was released.
So I was kind of loosely following it.
And from what I remember,
like we had the Reflection TS.
That was a while ago, right?
That was stuff by like David Sankel and a few other people.
Axel Naumann and Matúš Chochlík.
Yeah, exactly.
So everything was kind of template-based.
You have to write colon, colon, type everywhere,
and it was kind of a bit difficult to use.
And I remember we had some papers saying,
no, we shouldn't be doing that.
We should have a value-based reflection
where you reflect on a property
and you get an object back, right?
And you can write more kind of procedural code,
even though it's obviously compile time,
but more procedural code
kind of to work with that much more easily.
Then I remember it was maybe somewhere
around the Prague meeting.
It was the last meeting before the lockdown
or maybe roughly around that time.
I remember there was a paper by Andrew Sutton,
which had like a hundred pages
where there was a lot of new syntax.
I'm like, oh, let's do this.
Let's do this.
Let's do this.
And I didn't really understand that paper.
There was like a lot going on there.
But then there was like,
I think a few years where just nothing happened
or probably a lot of stuff happened in the background,
but nothing visible to me. It was like the reflection study group wasn't sitting, there
were no new papers coming out, and I was thinking, okay, that's very sad. You know, everybody says we
need reflection, but apparently somebody just stopped the funding for this and now nobody's
working on it anymore, or something, I don't know. So there were several years of silence, and all of a
sudden now there's this revival, where there's this amazing new paper that came out,
and people are talking about how we might already get it in C++26. Anyway, can you fill in the blanks?
What actually happened there, and who's involved in this effort, and where does it
come from?
Okay, can I go even back a little bit further?
Absolutely. We have time.
Okay. I could go all the way back, because my interest in this comes from when I first touched C++ in the late 80s and early 1990s.
But I won't go back there.
But like you mentioned, the TS came out of the formation of SG7.
And I don't know why, but I completely missed that.
So when Chandler Carruth and a couple of other people
started up SG7, and then they started doing this work,
I attended just about every meeting of the committee,
and I did not see what was going on there.
At some point, I did see what would later become the TS,
and I thought, you know, I don't think this is a good way of doing things,
which is to use template metaprogramming
as the metaprogramming model going forward.
Because it's hard to debug.
It doesn't look like the rest of C++.
So it's a different way of programming.
So it's been nice that we've been able
to extract that power out of templates,
but wouldn't it be nicer if you could actually program using for loops
and the standard library and things like that?
So I wrote an initial paper for SG7 about that,
and Andrew Sutton had been working with Herb on another project,
which was metaclasses.
You've probably seen
Herb's talks about that.
Yes, there's a paper that also generated quite a lot
of attention. Everybody was talking about it and then
just stopped at some point.
So around the same
time that I submitted this paper saying, you know,
we should go with value-based programming,
Andrew came with
his own paper,
which was somewhere in between
the template metaprogramming and value-based programming.
It's a bit like, if you know it,
Louis Dionne's Hana library,
which is what it is under the covers.
It's a lot of template metaprogramming,
but it presents itself more like
a value-based kind of interface.
So we came to the SG7 meeting and presented our points, and SG7 agreed that there was a lot
of promise there. And Andrew and I started chatting and agreed to work together to work out what it would actually take
to have value-based reflection.
So out of that discussion came P1240.
But P1240 was a fairly large paper,
fairly ambitious.
And it showed how we thought we could get to
useful value-based reflection and metaprogramming.
And so I started implementing that in the EDG front end,
but I did it on my own time,
and my own time was very limited.
At the time, I still had young kids.
And so I'd do that while I'd drive them to their class and while they were in class,
I'd try to add one parse rule or something.
Like it progressed very slowly.
Andrew and Wyatt Childers, who at the time was working for Andrew and Andrew's new company, Lock3, made the same effort in Clang.
And they got a lot farther.
So the implementation was fairly complete.
And so he showed, you know, he showed how it could be done.
But we also started talking about,
well, what comes after P1240?
So P1240 is mostly asking questions
about the program and getting back values.
And there were a few things you could do afterwards
to affect the program,
but it's mostly pure reflection,
meaning examining what your program consists of, right? So you could ask what the names of your enumerations are. That's usually
the number one application. We want our enum names to be reflected as strings. And it solved
that problem and some others. But we were thinking like, how can we scale this up and what would be
the future for that? And so then we came up with this idea of injection.
I say came up with, there was a presentation I made
to the committee in Oxford in 2003, I believe,
that actually showed injection and the precursor
to consteval functions and things like that.
And had most of those
features there, but we brought them into
a modern language at that
point.
Oh, excellent.
The slides for it are available on
wg21.link,
but I don't remember the number.
It's like 1700 or so.
In any case,
what Andrew then did was take all these ideas and bring it together in a vision paper.
So that huge paper you're talking about is not necessarily a proposal.
It's more like, here's what we could do if we were willing to push this through.
So here's a possible future and a future that we think would be very desirable
in the long term. And so in addition to being able to do reflection, so asking questions,
it has a mechanism to inject bits of code here and there in your program. So the function is
no longer limited to returning your value. It can also, as a side effect, generate more C++ code.
And that's a very, very powerful idea.
And Lock3 implemented some of that.
There's different ways you could think of injecting.
You could inject strings, for example.
You could say, well, I can build a string,
and at any time I can say,
compiler, please take my string and compile that.
That's actually super powerful.
It's a little harder to debug.
Now you need tools,
because you no longer see your source code.
It's somewhere inside the compiler.
What Lock3 did was it also developed a notion of fragments,
where you actually make little patterns
that you can then inject in the code.
It's a little easier to debug.
It also has the advantage that you can parse it up front.
On the flip side, it's a little harder to use.
In any case, so that was the situation going on there.
In Prague, we made some good progress,
and the P1240 paper was essentially accepted as the way to go forward.
And then the pandemic hit.
So during the pandemic, SG7 didn't meet too often,
but we met a few times.
And then someone suggested that maybe we revise the decision
to go with P1240.
Couldn't we still do something in between the TS and P1240,
so constexpr-based reflection?
And there were a couple of SG7 meetings around that.
And there was a meeting that was scheduled
where I had prepared for that meeting.
I actually had a presentation that would show
that all the things that the other parties suggested
were hard to do
in our model could actually be done without too much effort. But that meeting for some,
there was a scheduling problem and it was scheduled, but then it didn't happen. It got
canceled last minute. And so we never followed up. Like you say, for over a year, probably,
there were no more meetings of SG7. I'm not entirely sure why.
I think we all kind of got distracted. I certainly had some personal issues in my family life.
And then, I don't know, maybe six months ago, maybe a little more, I started talking with
Barry Revzin. And if you know Barry, he's responsible for about half of what
C++23 has to offer.
He's a very
productive guy, very smart,
and if you can get him
on your team,
things move along.
We talked about reflection because he's also
interested in that, and I asked him
if he would be willing to
help with coming up with something
and try to aim it at C++26. And that's where P2996 came from. So we said, okay, what's sort
of the minimum viable product we could come up with and bring into C++26? And so P2996R0, the paper that was presented in Kona, was written really quickly.
It's a paper that has a hard time standing on its own, in my opinion, because it relies on a lot of the earlier material. We simplified things, and actually added a few things to compensate
for that oversimplification,
and then said, okay, we can get this done in C++26.
I also got my company to let me work on this
a little bit more on company time.
And so soon after Kona,
I implemented a fair amount of P2996
to the point where most of the
examples that are in that paper are now
supported in our frontend.
We made that available
at the end of December.
So if you go on Godbolt right now,
you can click the
EDG Experimental Reflection Compiler
and then you can try
the examples for P2996
in adapted form.
And so at the same time that we released the EDG frontend
with that experimental support,
we made an update to P2996, which is R1,
mostly thanks to Barry.
And that has examples with links to Godbolt
that show, oh, look, it's working.
And also it has more examples than the R0 version of the paper.
So that's where we are for sure now.
Behind the scenes, we've been working hard at getting ready for Tokyo,
where we hope to have a stronger paper to discuss in EWG and LEWG.
When I was reading the paper,
I was clicking through the Godbolt links
to see most of the examples you got there.
As you say, some are in slightly modified form.
So that was really useful just to actually see it working,
even though it's not standardized yet.
I mean, it sounds like a really complex thing to implement, just in not quite your
spare time, you still got some company time on it. But, I mean, was that a difficult process to get that in?
So, as features go, you know, I often think about the cost of implementation
versus the benefits you get, right? And here it's not bad at all.
So a lot of the reflection stuff,
everything that's metafunctions is actually typically simple
in the sense that the compiler
already has all that information.
It's just a matter of carrying it
into the consteval world,
like creating vectors of reflections.
And internally, a reflection
is kind of like a tagged pointer.
It says, oh, this is a reflection of a constant value
or a type or a variable, right?
And then a pointer to what the compiler thinks
that it has as a representation of that.
So that's really not hard to do.
For us, by far the hardest is a splicer.
So that is the opposite of a reflection.
So once you have done some computation
with the representation of the program,
you want to bring it back into the program.
And so we have a notation to turn a reflection value
back into a program construct.
That's a little trickier.
Now, it's not super complicated.
It's a lot.
Think of variadic templates, for example.
For us, at least, variadic templates
are a very hard feature
to implement.
And I've talked to some
of the Clang engineers
and they told me
that it's pretty,
it's wide-ranging.
Whereas this tends
to not be wide-ranging.
So individually,
a splicer can be
a little tricky,
but it's not something
that reaches everywhere
into the compiler.
Once you've got it,
it works.
So I'd say it's actually probably a little simpler than Lambdas,
which is a medium-sized feature.
That's interesting.
I would not have expected this to be easier to implement than Lambdas.
That's really quite interesting.
It's hard to compare, right?
Also, it was 15 years ago that we implemented Lambdas.
But I feel it's on that order of magnitude.
It's a medium-sized feature.
That's really interesting.
So you're going to get into more details about splices
and all of that in the second half of the episode.
But before we do that, I'm just curious,
what's the current state of the paper?
Has it been, is it in the evolution study group now?
Like, where is it in the standardization process?
Like, is it heading for C++26, or where are we now?
So at the Kona meeting in November of 2023, it was voted out of SG7. I think of the SGs as sort of the research arm of the committee,
right? It's like the subgroups; it's like, okay, let's look at the big questions
and which way they should be answered.
And we got out of there.
And so that was based on the P2996R0 paper.
And they said, yes, let's go with this direction.
Now give it to EWG and LEWG,
because unfortunately this is a feature that requires cooperation from both of those design groups.
That makes it a little more complicated.
It's going to require some
synchronization between those
two groups. So that's where it is now, but it
hasn't actually been discussed in those forums
yet. That's going to start in Tokyo.
We've been working
internally
on coming up with
wording, so
proposed wording. Here's what we think should
be the changes in the standard
to implement this.
But also we need to flesh
out the discussion, the design discussion.
Why do we do this instead of that?
Or even, here are two
options. What do you think we should do?
I think that's what we're going to discuss in
Evolution Working Group and Library Evolution Working Group.
It's, you know, is this the right design for the general vision that SG7 has?
And once we're done with that, we're going to have to go to the core working group and library working group to look at that wording.
There is, I believe, a slight
difference between the way the library
arm of the committee works and the
core language arm
of the committee in that
evolution, so the
core language evolution group
will not forward something to core without
specific wording for everything.
So core can then modify that, but it's a requirement, I believe. That's not the case in library.
Oh, it is now. It is now. When I was there in, what was it, in Varna, I think, with the inplace_vector paper,
they spent like an hour going over the wording in library evolution, and they said, oh
no, you need to fix this and this and this.
Okay.
So, but I feel pretty good about it because we have,
we have a lot of good people on the team.
So I mentioned Barry already.
We've got Peter Dimov, who's, you know,
a longstanding expert on library issues.
We've got Andrew Sutton.
He's, he's been in a different world lately, but he's still an expert on reflection.
So we have his input there.
We've got Wyatt Childers,
who was one of the main implementers
of the Lock3 implementation.
And he's now a colleague.
He works for EDG now.
And we have several other people
who are helping us flesh out the details of this.
So I think we should have a pretty good wording story relatively soon.
Yeah, I mean, it's kind of interesting.
It sounds like you're in a very similar boat,
like we are in SG21 with contracts.
It's also a kind of not small language feature,
which has a library API attached to it as well.
It's in a study group.
Now it needs to go to all of these other groups.
And it needs wording and all of that.
I think you're a little bit ahead of us.
You already voted it out of SG7.
We haven't yet voted our proposal out of SG21.
But hopefully we're going to do that by Tokyo, or even before Tokyo on a telecon.
So we're not that far behind.
But like, let's see.
It's kind of interesting.
It's kind of the same process.
It looks like these are the two big language features
heading for C++26: reflection and contracts.
So I'm curious who's going to get in the working draft first.
It's a race.
And since the core working group and the library working group
are often the bottlenecks for getting through, we're trying to elbow you guys out of the way so we can get
there first and get their attention, and then you can wait until we're done.
So we just increased our telecon frequency from every two weeks to weekly, to prevent you from doing that.
All right, I'm going to have to talk to the other guys. All right.
There's an arms race here.
So we're going to dive deeper.
Nice bit of competition.
Yeah, so we're going to dive a little bit deeper into how the proposal for reflection works.
But before we do that,
I would like to read a message from our sponsor for this episode.
So this episode is supported by Native Instruments,
guided by their mission to make music creation more inclusive and accessible.
They create innovative hardware and software that inspire and empower musicians,
engineers, and DJs of all genres and levels of experience to express themselves.
Want to work on world-class tools for music and audio?
Check out their Career Center at www.native-instruments.com slash careers.
And with that, we're back here with Daveed.
Can I make a little note about this announcement you just made?
Yes.
So your listeners don't see this, but we are actually on a video three-way call.
And you're talking about instruments.
I noticed in the background of each of our video feeds
are music instruments of all three of us.
I don't know if that means anything,
but I guess programmers like to make music.
It's, I think, a particularly fun industry
to use C++ in.
Like a lot of audio software is written in C++.
And it's a very fun industry to work in.
I've spent a decade of my career in that industry.
I've now lately kind of moved away from that.
But yeah, it's a good, it's an interesting field to work in.
There's a lot of C++, there's a lot of music.
It's pretty cool.
We need to do an audio developers episode.
Oh, maybe you should.
Maybe you should.
If you know Dave Abrahams, I believe he started off, or at least had part of his career, in the audio world as well.
I'm not an audio programmer.
I just play a bit of music, but I thought that's interesting.
Yeah, that is interesting.
Well, so let's talk a little bit more about reflection.
I want to, in the remaining time, dive a little bit deeper into how your proposal actually works. So you say it's a reduced initial set of features that's kind of a subset
of the bigger picture. So what features are actually in there, and which are not, and why?
What's in that paper, very roughly? What can you do with it? Can you do conversion
between string and enum? Can you do serialization, deserialization?
All of these typical use cases
that people use horrible macro hacks for today.
Guilty as charged.
Exactly.
So those were sort of the minimum needed feature set, right?
When SG7 got formed,
I think it was Jeff Schneider and some other people who wrote a paper of,
here's what we need.
And I think there were about seven items.
The enum to string was one of them.
Serialization, or the ability to look at a struct
and ask the struct: what are you composed of,
and can I access your values?
So we support that.
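As a rough illustration of the enum-to-string item, the P2996 paper sketches something along these lines. The metafunction names (std::meta::enumerators_of, std::meta::name_of) are the ones I recall from P2996R1 and have been shifting between revisions, and the "template for" expansion statement is a separate proposal, so treat this purely as a sketch, not as something any shipping compiler accepts today.

```cpp
#include <string_view>
#include <type_traits>
// plus the reflection header, e.g. <experimental/meta> on the EDG demo (an assumption)

template <typename E>
  requires std::is_enum_v<E>
constexpr std::string_view enum_to_string(E value) {
  // Iterate over the reflections of E's enumerators at compile time.
  template for (constexpr auto e : std::meta::enumerators_of(^E)) {
    if (value == [:e:])              // splice the enumerator back into code
      return std::meta::name_of(e);  // and return its name
  }
  return "<unknown>";
}
```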
You can think
of it as three parts.
One is get from a language,
from a source construct,
to a value that represents that construct.
And for that there is the caret operator,
prefix caret,
which allows you to say caret
int, and now you have a
value that represents the type int. But you could
also do caret 42, and now you have a representation
of the value 42, or
caret some template name, like std::vector,
or std::vector<int>.
So that's step one. You bring
things into a domain that you can
do computation on.
Then, in anticipation of this,
we had developed in C++20
a whole bunch of extensions to constexpr, including the
consteval functions, which are functions
that have to happen at compile time.
So with the consteval
functions, you can do all kinds of
manipulations of this representation.
And part of the proposal
is there's a couple of magic,
more than a couple, a series of magic
functions. We call them the
meta functions or intrinsic meta functions,
which allow you to, given one of these values,
ask questions about it or even make transformations on it.
So you can ask, are you a type?
Are you an instance of this template?
Things like that.
And then the third part is once you're done with all your computation,
you have to bring it back into your source code.
Like maybe you have a reflection for an enumerator,
and you want to turn it into a string.
Well, that's not too difficult. We already have strings.
But sometimes you actually want to turn it into a type.
You want to express a type, not as a name,
but as a computation, as an expression.
And so for that, we needed a new notation.
And for various reasons,
it is useful for that notation
to be delimited, meaning it has to be like
parentheses or brackets or braces.
Unfortunately, those three are already all
taken. So we ended up with
a new pair of tokens,
which is square bracket, colon to open,
and colon, square bracket to close.
So that's what we call splicers.
But once you have computed
a reflection value of interest to you,
you can use it
as if it were an actual language construct
by surrounding it with those tokens.
And so with those three things,
so the caret operator bringing in the reflections,
the splicer delimiters,
and then the magical meta functions,
you can do everything.
And you actually have a framework
in which we can do much more in the future.
So P2996 is not the last word in terms of reflection.
Like Andrew's paper that sort of had this vision going forward,
we were pretty convinced that this is a great platform
to extend from.
Right.
And we have internally implementations of other features
that are not proposed in the paper,
but that we have already done
and show that we will be able
to do very powerful things in the future.
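To make those three parts concrete, here is a tiny sketch in the proposed syntax. The header name and std::meta::is_type are assumptions based on P2996R1 and the experimental EDG demo, and may differ in later revisions.

```cpp
#include <vector>
#include <experimental/meta>   // header used by the experimental EDG demo (an assumption)

// Part 1: lift constructs into values with the prefix caret.
constexpr std::meta::info t  = ^int;           // a reflection of the type int
constexpr std::meta::info v  = ^42;            // a reflection of the value 42
constexpr std::meta::info tp = ^std::vector;   // a reflection of a template

// Part 2: compute on them with consteval metafunctions.
static_assert(std::meta::is_type(t));          // "are you a type?"

// Part 3: splice the result back into the program.
[:t:] x = [:v:];                               // same as writing: int x = 42;
```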
Nice.
So addressing really valuable use cases
that everyone has
and getting rid of lots of tedious boilerplate,
that's one thing.
But as C++ developers,
what we really want to know
is what color to paint the bike shed.
So just to focus on the syntax a little bit at the moment,
because that's one of the most visible things
when you first look at the paper.
The caret operator, that really stuck out to me
after using C++/CLI years ago.
That reminds me a bit of that.
But why did you go with the caret operator?
What were the other things you considered?
I think reflexpr is what was used in the TS.
Yes, so the TS used reflexpr as a new token.
And you can think of it as meaning one of two things.
Either reflect expression or reflection expression.
The reflect expression would mean that what it does
is it takes an expression and reflects it.
However, it turns out that of all the things we
can reflect, expressions are the hardest, and the TS didn't do much of it, right? So it's probably not
a good token for that. Now, the other one is interesting: it's a reflection expression,
which is true. It's actually even more true in our model, where it is an expression. In the TS it was
not an expression, it created a type,
right? So it was a bit of a misnomer, but for us it's a perfectly fine token. And I believe,
yeah, in the very first version of P1240 we used that token, and even after that we used
reflexpr. It works great. It's a little heavy, because it turns out that very often
you want to pass things by reflection.
Like you want to pass an int to a function as a parameter.
And saying reflexpr, open parenthesis, int, close parenthesis,
the int is almost lost compared to the length of the reflexpr
and the parentheses.
And so having the caret, which
the reason why we chose that one is it has the idea of lift,
you know, go up in representation.
Interesting. That kind of makes sense. I like that.
In some languages, it's called the lift operator.
So that's why we chose that. It does not conflict with the C++/CLI
notation, because
in C++/CLI, the
caret is used for, I think
it's called a handle, right?
Or, yeah, I think it's handle.
The percent is
the reference and the
caret is the handle, but it's only
in the declarative context
where when you declare these things,
when you try to indirect those things,
you still use the asterisk.
And for us, the lift is an operator,
which is only in the, it's never declarative.
It's only in expressions.
So there's no conflict there.
I think there is a conflict with Objective-C lambdas,
which uses caret.
Yeah, with the blocks, which are a C feature.
And there is potentially a conflict there.
That's correct.
All right.
So that's something we're going to have to look at probably some more.
Just get those people into Swift.
But yeah, so that's where
that came from. But, you know,
we could certainly do it with reflexpr, it's just
not as light a notation.
More typing work, and I don't mind typing,
but when you look at it in,
you know, when you apply
meta functions on it, like, very often
you'll do, like, you know, substitute
this template with an int
and being able to say
just caret int
as opposed to
reflexpr int,
it makes the notation more,
it lets you focus more
on what's important.
That's definitely a lot cleaner.
Yeah.
Which I never thought I'd say
about the caret operator,
but there it is.
But you said that
expressions are not something
you reflect on very much.
So what do you typically reflect on?
Is it just types or other things?
Functions.
Functions, class types are probably the number one.
I'd say a lot of problems come down to a form of serialization.
Or at least looking at the structure of a type.
So I think that that's going to be the feature
that gives the most boilerplate savings
in the first few years of reflection hit in C++
when people will no longer have to have a separate Python script
to generate your structures from a pseudo-structure definition.
I think a lot of people are
excited about those applications.
But you can reflect namespaces.
You could have
code that looks in one namespace in one
configuration, another namespace in another configuration,
for example, by having two different
namespaces and selecting
with a question mark operator which one you want.
Ha! That's so
powerful.
That's pretty cool.
Or same with types, right?
You put a bunch of types in a vector,
you sort them according to certain criteria,
and you use whichever
is applicable.
It's a lot of...
What I'm most excited about
is to see what other people will come up with
I had one person
recently send a really nice email
and they sent an implementation of
tuple_cat.
I started my
days in the committee in the standard library
but I was only about one year's worth
and then I moved on to the core language
and I'm no longer an expert on the standard library.
But I was made to understand that tuple_cat
is hard to implement in C++ today.
Tuple is just in general one of the most painful parts
to implement in the standard library.
Just doing anything with them is just such a pain.
And it shouldn't be, right?
Because in every other reasonable programming language,
it's just a built-in thing, right?
Yeah.
Well, anyway, I won't comment on that.
I know what tuple_cat does,
which is just take two tuples
and make a new tuple with the concatenated values.
And apparently, with the demo version of the compiler,
he was able to implement it in a much
shorter way.
That's very exciting when something that we
did not anticipate
gets done using the same
tools.
That shows us that we're probably on a good track.
Yeah, that's really cool.
You apply this caret operator to
a type or a namespace or a function
and you get an object back that you can use to do something.
So one thing that I found surprising in the paper
is I would have expected that for a type,
you get a meta type object back.
And for a variable, you get a meta variable back.
And then the meta type has members saying,
these are the members of the type,
and this is whether the type is const or whatever.
And so, but that's not what happens.
You get this std::meta::info object back.
Like whatever you reflect on, it's just like one single opaque type that you use to query everything.
So I was like, okay, this is not an obvious design choice.
Like what's, I remember there was some debate about this, which I haven't followed,
but there must be a reason, but I don't really see it.
Yes.
So there's a couple of reasons.
And you're right, this was perhaps the most debated part in SG7.
But from my point of view, the number one reason for it
is that we do not want another ABI. And what I mean
by that is today, there's a lot of decisions we've made in the language that we cannot work back
because we want ABI stability. Now, reflection, what it does is encodes in values the language,
the language specification itself. And to give you an example,
that language specification right now is able to move definitions around fairly freely.
In C++03, the term variable in the standard meant something a little different from what it meant
in C++11, in the sense that references were not variables in C++03.
They were their own thing.
So you had variables, and then you had references.
They were two different things.
And in C++11, they became one thing.
Now, if we're going to encode stuff like that into the library,
by saying, well, you have a variable info and you have a
reference info, and suddenly you say, well,
our vocabulary to
describe this thing has changed.
So now we have to change the library to match
it. You're breaking all kinds of programs.
You see that
happening? Another example is, for example,
structured bindings.
Today's structured bindings are sometimes variables
and sometimes not.
But maybe in the future they will always be variables
or maybe not.
So do you want to encode that in your type system
and then find yourself cornered
because you can no longer evolve it
because plenty of programs have come to rely on those strict rules.
Now, of course, there's still some of that dependency
in the value representation.
But the value, because it's a value, you can dynamically depend on it.
I mean, dynamically select on it, right?
So if you write your programs correctly,
even if the language changes,
and say, once you reflect on
structured bindings, for example,
and you ask, is this a variable?
Maybe a particular structured binding
today is a variable,
but in the future it's not.
Your logic could be written in such a way
that you would be insensitive to that.
You would do the correct thing in any way.
If you know, Titus Winters used to be
the chair of library evolution,
and he made an astute observation.
He said,
the higher you go up in abstraction,
the harder it is to evolve.
So changing the values returned by a function
from one standard to the next,
it can be a little painful, but it's not horrible.
Changing a type, it's much, much harder.
And that's why we still have this std vector bool
as a specialization.
We cannot change its meaning because it's at the type level.
And Titus pointed out
that we now have a higher level of abstraction,
which is a concept.
And he predicted that
we really have to get these right
because concepts are so high level
that if you change what it means in the future,
there's not a single program that's going to survive.
Yeah, I've actually heard developers who say still they are not going anywhere near
concepts in their own libraries because once you have them, you can never change them.
So they just don't use them, which I found strange, but also kind of plausible at the
same time.
Yeah, we want to understand how these things work. In any case, that is my primary reason
for not wanting to encode reflections
as a hierarchy of types or as a collection of types
because I do not believe we will get it correct forever.
And we need that ability to be able to change over time,
evolve things over time.
Now, another possible answer is that every compiler
has different ways of organizing its internals.
And so having that type be a particular set of types
that maybe map well in one compiler but not another,
that might be a great strategy to encourage
having a varied ecosystem of implementations.
And then another one is that by having this opaque type,
it's already ready to encode things that don't exist yet.
For example, when Lock3 added fragments,
it turns out fragments,
the value of a fragment can just be a reflection itself.
And it works really well in the whole ecosystem
with the metafunctions and so on.
Nice.
So how do you interact with these info objects?
So you pass them through your meta functions.
You can ask them questions.
Are you a type?
You can even issue errors if they're not the right thing.
Now, although it is a dynamic thing, it's still compile time.
So you're not going to generate bad code unknowingly.
If you try to, for example,
if you have, say, a reflection for a variable,
and then you use a splicer
to make it an actual variable in your program.
But instead of using it where a variable is expected,
you use it where a type is expected,
you're going to get a nice error and say,
hey,
I expected a type here, but you gave me a reflection for variable
x.
Wait, so is that like the library
API that you mentioned earlier, that you have something
like std::meta::is_type,
and then you can ask it if it's a type,
stuff like that?
Yeah, you can do that, but the splicer
itself, so the compiler knows
when you use it, right? So say you have that variable and you try to say using x equal
and you splice a type there, right?
Because it has to be a type.
And you use that variable instead.
It's going to say, hey, this is not a type.
This is a reflection for a variable.
Wait, so just to recap.
So splicer, that's like the square bracket, colon something,
colon square bracket.
Yes.
So is that like the opposite of the caret operator?
Like it turns it back into like a type or whatever it is.
Exactly.
Yes.
So basically it's like, if you do a caret of something, and then
a splice of that caret, you get exactly the same thing back, it's like a round trip.
So you could say square bracket, colon, caret,
int, and then square bracket, colon,
sorry, colon, square bracket.
And that's as if you had just said int.
So you can use that anywhere in your code as a type.
You can say square bracket colon something, blah, blah, blah,
and then to declare a variable or whatever.
Yes, exactly.
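In other words, something like this (again a sketch against the P2996R1-era demo syntax, not a guaranteed final spelling):

```cpp
#include <experimental/meta>  // header used by the demo implementations

[: ^int :] n = 7;              // the splice of ^int is just int, so this is: int n = 7;

constexpr std::meta::info t = ^double;
using D = [: t :];             // D is an alias for double
D half = 0.5;
```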
So it's basically like code injection, no?
It's a limited form of code injection.
You can only inject the kinds of entities
that you can reflect on.
Yeah, but it's kind of like code injection.
Yeah, and the things you can inject are:
wherever you expect a type name, you can put a type,
you can put a constant value,
you can refer to a variable,
a function,
a namespace,
a data member.
Those are the ones I can think of.
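A rough sketch of what that could look like, with made-up names and the P2996R1-era syntax (the namespace-splice spelling in particular is as proposed at the time and may have changed since):

```cpp
#include <experimental/meta>  // header used by the demo implementations

namespace lib {
    inline constexpr int answer = 42;
    int twice(int v) { return 2 * v; }
}

constexpr auto var_refl = ^lib::answer;  // reflects a variable
constexpr auto fn_refl  = ^lib::twice;   // reflects a function
constexpr auto ns_refl  = ^lib;          // reflects a namespace

int use() {
    int a = [:var_refl:];        // splice the variable: refers to lib::answer
    int b = [:fn_refl:](a);      // splice the function and call it
    int c = [:ns_refl:]::answer; // splice the namespace inside a qualified name
    return a + b + c;
}
```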
So I don't think the reflection TS had that.
I think that's new. That's cool.
It had
some versions of it. But for example,
because it had to do everything with...
It didn't have a new kind of type of
something.
In the TS, if you wanted to access the members of a struct,
you had to do it as a pointer-to-member.
So it would provide magical ties
that would give you back pointers-to-members.
That's great until you have a bit field in your class
or a reference,
because you can't have pointers-to-members
that point to bit fields or references.
Whereas with this proposal,
there is no problem.
An info object can represent a bit field,
it can represent a reference,
and when you use the splicer syntax on it,
that's what you get.
So you can say, you know,
variable, dot, and then the splicer syntax
with the info for a bit field,
and you now have access to the bit field.
It just works.
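For example, a sketch of the bit-field case along the lines of what's described here (this is my own sketch, not the example from the paper; the nonstatic_data_members_of metafunction name follows P2996, and its exact signature has varied between revisions):

```cpp
#include <experimental/meta>  // header used by the demo implementations

struct Packet {
    unsigned version : 4;   // a bit-field: no pointer-to-member can name this
    unsigned payload : 12;
};

// A reflection of the first non-static data member (the version bit-field).
constexpr auto first = std::meta::nonstatic_data_members_of(^Packet)[0];

int demo() {
    Packet p{.version = 2, .payload = 100};
    p.[:first:] = 3;      // member-access splice: writes the bit-field p.version
    return p.[:first:];   // reads it back: 3
}
```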
Okay.
Yeah, I have to like play around with that.
It takes a little bit to wrap my head around that,
but it sounds like it's very elegant, actually.
I think it works really well.
I think there's some examples of the,
I believe I put the bit field example
in the P2996 paper.
But yeah, and now that we have an online demo version,
which admittedly is fragile,
it's definitely not fully baked,
but you can do a lot of stuff already with it
and try it out and experiment with it
to get a feel for how the code goes.
So we'll put a link to the Godbolt thing into the show notes so people can play around with it.
That'd be cool. So since we're
talking about splices, just to bring things full circle again, let's come back to the syntax.
So Timur already said it's this sort of open square bracket, colon, then the thing you're splicing,
and then you end it with colon, close square bracket. So it's a very odd syntax.
So how did you come up with that one?
You know, I was trying to look for
new delimiters.
Yeah.
And, you know, all the good ones are taken, right?
Like the square brackets are taken,
the parentheses are taken.
So we could do something like,
at some point we were talking about unreflexpr.
Yes.
And we can certainly do that,
but again, it's kind of heavy.
And one of the things that we would need
is we sometimes need to disambiguate things.
So there's some context where we don't really know
whether you mean a type or a template
or something else, right?
And so sometimes you have to say typename
and the splice. Sometimes you have to say
template and the splice.
So that we know whether,
if the next token is a less-than,
it's an angle bracket or an actual less-than.
So in
that context, having
unreflexpr or some other token
also makes it not so nice,
because now you'd have to say template unreflexpr something-or-other.
So we just look for, you know,
which token can we use and maybe slightly modify it
so it still looks like it's a bracketed thing,
because what's inside is an expression,
and it's potentially a complicated expression.
So we really wanted delimiters
instead of, like, a prefix thing.
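For instance, in a dependent context the splice needs the same kind of hint that dependent names need today; a sketch in the P2996R1-era syntax:

```cpp
#include <experimental/meta>  // header used by the demo implementations

template <std::meta::info R>
auto make_default() {
    // R is dependent, so the parser can't know what the splice names.
    // The typename prefix says "this splice is a type":
    typename [:R:] value{};
    return value;
    // Similarly, a template prefix (template [:R:]<...>) tells the parser
    // that a following '<' opens a template argument list rather than
    // being a less-than operator.
}

int demo() {
    return make_default<^int>();  // value-initializes an int, returns 0
}
```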
Right.
I mean, we have two,
three new characters in C++ 26, right?
We have dollar, at, and grave,
like the accent thing.
So you could have used any of those, but.
We could have.
Now, remember this was done
before these source characters were
made available. Right, right, right. So I could see, for example, how the backtick, maybe, like,
a backtick on each side, that's kind of a delimiter, but it looks like a quote. Yeah. Yeah, um,
we may not want to use that because if we go with injection
in the future
one of the things you need in injection
is you make a fragment
of new code that you want to inject
but somewhere inside of there
you may have to escape
from that pattern and say
oh now this next expression is actually
in the context surrounding
the fragment that's going to
be injected. And the backticks could be
a really nice
syntax for that. Maybe.
Or maybe we grab it for
this.
Again, we
don't have... I think that's the kind
of bikeshedding that we will discuss
in the evolution working group.
So in Tokyo, we're going to have a day
on reflection and evolution.
That sounds interesting.
Certainly, there's going to be a
time slot for it. I don't know how
much time we will be given,
but I expect that
it's going to be discussed at least in evolution
working group, maybe also in library evolution
since you also need to look at that side
of the problem.
Interesting times.
But yeah, interestingly, syntax in a way is the easiest, but it's also the hardest to
get agreement on.
And you know that from...
Oh, yes.
We had very painful discussions about syntax in Kona.
From an implementation perspective,
this is not the hard part, right?
Like, you know, we can make it work.
I like the square brackets with a little something.
Like, I considered just a period.
So a square bracket period, or a square bracket slash.
Now, the thing is, we're also thinking of other kinds of splices.
Like, one idea is to be able to splice
a whole declaration in,
in one shot.
And so possibly this would be square brackets with a different thing,
to say, you know,
oh, here it's not just a splice of a small language element,
it's a splice of the whole declaration.
Yeah.
I think in contracts,
somebody at some point suggested square bracket curly brace.
Okay.
Instead of double square bracket, which we ended up not using because it looks weird.
But yeah, there's infinite possibilities.
So those are also still available.
Maybe there's another race there between contracts and reflection,
where we have to make sure you guys don't grab our various syntax opportunities.
Right. Right.
Okay.
So, no, I think we're done with syntax.
So you're good.
I very much hope you're not going to reopen the syntax debate there.
You can have all the syntax.
Okay.
Okay.
So, yeah, we've been talking about this for over an hour now.
I think we're kind of running out of time.
All right.
Yeah, which was kind of expected because it's such an exciting topic.
So I guess a traditional last question.
Is there anything else apart from reflection going on in the world of C++
that you find particularly interesting or exciting right now?
Can I go a little bit outside of C++?
Yeah, sure.
I keep an eye on,
like I like Rust as a language, right?
I think a lot of us do.
It's an interesting programming language
and it took an original approach to safety.
And that was like 15 years ago.
But now there's a couple of other languages,
and there's one in particular.
It's called Hylo.
Yes.
Dave Abrahams came to C++.
He presented Val.
Yeah.
The Val language.
And Hylo is an evolution of that.
Was it called Hilo?
H-I-L-O?
Yeah, H-Y-L-O.
I've never heard about that.
Okay, well.
It's just Val renamed.
Oh, it's Val renamed.
Okay, yeah, we had an episode on Val.
We had like Dimi on the show,
who's one of the people working on this.
Okay, yeah.
So that's, and it's evolved.
So Hylo has developed some ideas a little bit further.
I think that's super interesting.
So yeah, I love looking at new ideas in that space.
Rust is fantastic, but it's not easy.
Like it's hard for Rust to be your first programming language.
Because the barrier to entry is a little bit high.
And so I'm interested in people who are looking at,
well, is there a way we can tackle that problem
a little bit like Rust,
but also trying to simplify it,
which is not easy.
So I'm not sure if Hylo is the answer,
but I did find it very interesting to read what they're doing. Yeah, it's kind of somewhere in between,
right? You can either say there are just no pointers or references to anything and then you're done, or
you can do what Rust does. I think Val, or now Hylo, is kind of in between. So maybe that's kind
of the compromise
that you need to actually have a programming language
that's easy to work with
and still gives you all these guarantees.
But it's still expressive enough, right?
Yeah, that's always tricky.
And yeah, ultimately, I always come back to C++,
so it's not as safe,
but I can write what I'm trying to express a little more directly, and I'm enjoying it.
Yeah. Okay, well, I think we do need to wrap this up. So, anything else you want to tell
us before we let you go, such as where people can reach you if they want to find out more?
No, I'm good. I'm sure you can find my
email in the papers.
I do not answer emails very
well. I'm sorry, I get too much of it.
But
I still gladly receive them
and I try to read them,
but I cannot keep up.
Okay, well, thanks very much for
coming on the show, talking to us all about
reflection for C++26.
We're going to hold you to that.
So thank you.
Thank you for your time.
Thank you, guys.
Thank you very much.
That was great.
Thank you so much.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in.
Or if you have a suggestion for a guest or topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate it if you can follow CppCast on Twitter or Mastodon.
You can also follow me and Phil individually on Twitter or Mastodon.
All those links, as well as the show notes, can be found on the podcast website at cppcast.com.
The theme music for this episode was provided by podcastthemes.com.