CppCast - Circle

Episode Date: January 23, 2020

Rob and Jason are joined by Sean Baxter. They first talk about a blog post and some papers headed for the upcoming ISO meeting in Prague. Then they discuss Circle, the compiler and language extension for C++17.

News
- The Hunt for the Fastest Zero
- 2D Graphics: A Brief Review
- C++ Standards Committee Papers pre-Prague mailing

Links
- Circle
- Circle on GitHub
- P2062: The Circle Meta-model

Sponsors
- Write the hashtag #cppcast when requesting the license here
- One Day from PVS-Studio User Support

Transcript
Starting point is 00:00:00 Episode 231 of CppCast with guest Sean Baxter, recorded January 22nd, 2020. Sponsor of this episode of CppCast is the PVS-Studio team. The team promotes regular usage of static code analysis and the PVS-Studio static analysis tool. In this episode, we talk about ISO papers headed for Prague. Then we talk to Sean Baxter. Sean talks to us about the Circle C++ compiler and language extension. Welcome to episode 231 of CppCast, the first podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how's it going today? I'm all right, Rob. How are you doing?
Starting point is 00:01:18 I'm doing fine. Any big updates from you? Nope, just working along and figuring out what this year is going to be like still. Yeah, still early. Still a lot of planning to do, I'm sure. Okay. Well, we can jump right into the feedback then. We got this tweet from Ran Rajiv, who I believe we both met at CppCon last year. And he writes,
Starting point is 00:01:42 Thank you, Rob and Jason, Phil, Fredred, and Oddy, for making me laugh out loud in public in some parts of the show. Really good way to start the weekend. Thanks again. So yeah, I definitely enjoyed this episode last week, so I hope other listeners did as well. And yeah, glad we were able to make someone laugh. Yeah. Well, we'd love to hear your thoughts about the show. You can always reach out to us on Facebook, Twitter, or email us at feedback@cppcast.com, and don't forget to leave us a review on iTunes or subscribe on YouTube. Joining us today is Sean Baxter. Sean is an independent programmer and the author of Circle, the next-gen C++ compiler. He formerly worked at D. E. Shaw Research,
Starting point is 00:02:22 NVIDIA, and JPL. Sean, welcome to the show. Thanks for having me. I'm kind of curious, when you worked at JPL, were you working on any Mars rovers or anything like that? No, I was debugging Fortran code for atmospheric radiative transfer or whatever they have programmers do. If you know that code, they'll put you on stuff they've been maintaining since the 80s or longer, and there you are. And so people move through that, you know, you go in, you're ready to do science, and you might have some exposure to that. But ultimately, you move through as a programmer. It's just not a place that
Starting point is 00:02:57 retains, I don't think, programming talent. So that's unfortunate. I mean, there's historically been lots of interesting things that have happened there. Yes, agreed. Do newer projects, are they also coded in Fortran, as far as you know? Or is it just maintaining the legacy stuff that's in Fortran? So, I mean, most of the code is written by scientists who work in the language of their advisors. So there's people my age who are Fortran 90 programmers because their advisor was some, you know, renowned atmospheric chemist, and they inherit, you know, this person's
Starting point is 00:03:31 research, which is really like a Fortran package. And so they keep running with that. And, you know, it's been like, I guess, 10 years since I left there. But yeah, there are people in their 30s who were, you know, dealing with Fortran. So I'm not saying it's bad on its own because, I mean, Fortran is honestly fine if you're just, you know, doing data manipulation. But it's hard to find, you know, ordinary programmers who aren't domain specialists to come in and work with you and accelerate it. So I spent a lot of time trying to, you know, port some stuff into C, you know, get it to run quickly. It's easy to make, you know, a 1,000 or 10,000 times performance improvement over someone's Fortran code. Not because Fortran is slow,
Starting point is 00:04:10 just because when you're a real programmer, you understand things like memoization. If I already know what the answer is, I can cache it and store it and retrieve it later, and there's lots of things you can do that don't occur to people. So I think it is good to get their projects in newer languages, but it's not unusual for domain specialists to be using antiquated tools.
Starting point is 00:04:31 It falls in line with what I've seen with some of my training lately, is organizations moving from Fortran to C++ simply because it's hard to find good Fortran programmers today. Yeah, I mean, that's a good reason to move. Okay, well, Sean, we've got a couple news articles to discuss. Feel free to comment on any of these, and we'll start talking more about Circle, okay? Okay, so this first one we have is a blog post,
Starting point is 00:04:57 The Hunt for the Fastest Zero, and this is on the Performance Matters blog, which I think we had another blog post from them somewhat recently. I think you're right. Yeah. And yeah, this is interesting. Someone wrote a method just to fill a char array with zeros
Starting point is 00:05:14 and you can get a 30x improvement by using the char literal '\0' instead of just the integer zero. And they dug into it, and it's because of a, you know, template specialization. Template specialization, specifically with libstdc++, GCC's standard library implementation. Right. Yeah, this test is all with GCC. Yeah, apparently libc++ from LLVM doesn't suffer the same drawback. Okay. Did they go into why?
Starting point is 00:05:49 Or what did they do differently to avoid this? Honestly, I don't recall. I read the article a few days ago. Well, anything interesting you got out of this one, Sean or Jason? Use memset. Just use memset. No, use your optimizer. They were using the optimizer on GCC.
Starting point is 00:06:05 Yeah, but he qualifies it. It's after... It's O... Okay, so at O3, both versions go to memset. But at O1 and O2, it doesn't. Right. I knew it was at some point that it did. It did, ultimately.
Starting point is 00:06:23 Okay. Next one is we have this paper submitted to the ISO committee, and this is 2D Graphics: A Brief Review. And I'm not familiar with this author, James Barrow, but he went over the entire 2D graphics proposal and made a number of conclusions and recommendations about things that the author, you know, suggests should change. And it seems like very constructive feedback, is how I looked at it. I didn't read the entire paper because it was quite long. I was just going to say, I take issue with "a brief review." But, well, if you just skip ahead
Starting point is 00:07:00 to the conclusions and recommendations, he does itemize all of his main takeaways and recommendations for the paper. Well, the original paper is like 300 pages. Yeah. Which is why I don't think you should try to refute it point by point. It's a weird thing to put into the C++ standard. I mean, there's nothing in the past 40 years of software that's evolved faster than graphics. And, like, what are you dealing with now? Ray tracing hardware, like, head-mounted displays, augmented reality. There's so much technology. And, like, the graphics TS is basically like GDI. I mean, it's like 1990s ideas of paths and pen widths and brushes and things.
Starting point is 00:07:41 Like, I mean, that was useful back then because there was no RAM and there was no bandwidth to do full color graphics. And now, even if you're doing, you know, a 2D game, you can provision 50 auxiliary buffers that run on the GPU, and you can have hundreds of layers of post-processing, and all this capability is available. And I don't know why they'd want to take a graphics library, which should be fast-moving, and then slow it to a crawl by forcing it to go through ISO standardization every three years to get new updates. It's like the weirdest thing to hamstring like that. Yeah, I can't disagree with that take. I mean, I do think, and I think one of the recommendations in here
Starting point is 00:08:16 was to kind of break it out into smaller proposals, like linear algebra, which I believe is being done. And those things, you know, could be standardized and don't need to be as fast-moving as the entire, you know, graphics library. So I personally found the dissection of the RGBA color type interesting, as, like, pointing out how it doesn't do the operations that you would expect it to do. And behavior is different whether you multiply by an integer or a floating point type. And then this little note here that says, and oh, by the way, all software libraries do color wrong on purpose.
Starting point is 00:08:57 Did you read that? No, I think I missed that part. That could become like an all-day-long rabbit hole. Let's see. It's talking about how CSS does it intentionally wrong. Yeah, CSS deliberately handles linear color incorrectly, because they do what we all expect it to do, not the actual, like, physics of color, basically. And apparently it's like a thing that people talk about, like which color space you want to interpolate in, like red,
Starting point is 00:09:25 green, blue channels versus hue, saturation, brightness, that kind of thing. I believe if you, I mean, there are different ways to average colors, but no,
Starting point is 00:09:32 no, no. Yeah. It's not just different ways. It's that it's deliberately wrong. Because,
Starting point is 00:09:38 if you increase the values linearly in the RGB space, it doesn't, um, it doesn't actually increase the perceived brightness to the human eye, and that there should be a curve with gamma correction and stuff like that. I got lost. But there's a linked article from the paper that talks about all of this, which then links
Starting point is 00:10:03 to another article which talks about more detail of the things, and I decided I was spending too much time going down that rabbit hole. Okay. And then the last thing I have to mention, and I definitely did not go over all of these, but the pre-Prague ISO mailing list is out, so you can take a look at all the papers that are going to be presented at the Prague meeting, which I think is next month, or is it the end of this month? It's in February. It's in February, yeah. So yeah, if you want to take a look at all the papers that are going to be gone over at the next meeting,
Starting point is 00:10:38 they are all here. And I think there's still going to be some bug fixing for C++ 20 going on at this meeting, or are we past that? You're giving me a funny look, Jason. Oh, no, I just noticed a paper in the mailing that I had not noticed before. So that funny look was not directed at you. I believe this is the last opportunity for bug fixing. I believe so.
Starting point is 00:11:01 But they'll probably also start to actually look at some of these papers for, uh, 23. Yeah. I'd have to look at the timeline, sorry. So, Sean, you are, just for the record, directly mentioned once and indirectly mentioned at least once in this mailing. Well, the "Don't constexpr All the Things" is a Circle paper. Yes.
Starting point is 00:11:35 Oh, okay. So maybe we'll talk more about that once we actually get into the interview portion. Sure. Is there anything you wanted to highlight before we get to that, though? I saw that Titus updated his ABI paper. I wasn't really sure what changed in it though. I didn't read that. I was focused on the constexpr papers that I had not yet seen. Anything notable there that you wanted to mention? Some C header library constexpr proposals. David Stone's second revision of his paper on fixing the inconsistencies between constexpr and consteval,
Starting point is 00:12:12 which I did not appreciate the first time I read it, what the problem was. The second time reading it, I do. So there's that. Basically, a consteval function must be evaluated, even in an unevaluated context, which then breaks some things. So he wants to just basically cross out a single line, which is probably a talk all by itself, because it's been an ongoing argument, if you follow these things on Twitter, between, like, Andrei Alexandrescu and the rest of the C++ world. And there's a paper here about reducing the scope of constexpr if, but I am just so opposed to anything that actually has an if block where the scope is outside of the if block. Like, that just does not work for my brain. But I don't know if that's worth going into today. How about you, Sean? Anything you want to talk about with these?
Starting point is 00:13:20 No, not really. Okay. Okay. Well, as we kind of mentioned, we have talked about Circle a little bit over the past few weeks. We first brought it up when going over a blog post from JeanHeyd Meneide regarding his std::embed proposal. Yeah. And we briefly mentioned Circle in the context of that, because you went ahead and just added an embed-like feature to Circle. Yeah. And afterwards, someone suggested we just have you on to talk more about Circle. So do you want to just start off by telling us about what the Circle language actually is, what the compiler is? Sure. So I had this idea four years ago when I started, which is to take C++ as is and then rotate it from the runtime onto the compile-time axis.
Starting point is 00:14:15 And so you could have essentially a runtime version of C++ programs and compile time versions that are interleaved, and you use a single token, which is the meta keyword, to distinguish if you want something to be compile time versus runtime, which is the default. So instead of trying to add more core language mechanism and change semantics, my belief was that the semantics of C++
Starting point is 00:14:38 should be completely viable at compile time. So if you want to use a std::vector at compile time, you should just be able to #include <vector> and use std::vector. So you do, like, meta std::vector<int> foo, and then you can manipulate it at compile time. So if you have a meta keyword in front of a declaration, that object that's declared
Starting point is 00:14:58 has a compile time storage duration. If you have meta in front of a for statement, that becomes a compile time loop, which is really an unrolled loop. And then the loop index is not constant, but it is compile time. So that loop index can be used as a case statement argument, or it can be used as a template argument, or it can be used as a bit field specifier. What I realized was that constexpr conflates two different things, which is constancy and compile-timeness. And if you were to keep those things separate, and instead of trying to mark functions that you want to be called at compile time at the function,
Starting point is 00:15:45 but rather mark at the point of the call, then you can suddenly import 40 years of C++ and C libraries and have everything available at compile time. So, yeah, I just put meta in front of some expression and it runs at a compile time. And this changes semantics in, like, a really nice way because in ordinary C++ code, you can only put expression statements in block scopes, right, in function. So you can't put an expression statement in a namespace because that namespace is really reserved for, like, static initializers or dynamic initializers. You can't put it inside of a class scope because class definitions don't have anything that can be executed, right?
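To make the rotation concrete, here is a sketch of the idiom being described, written in Circle's dialect (Circle actually spells the keyword @meta; this is illustrative, reconstructed from the description above, and is not standard C++):

```cpp
#include <vector>
#include <cstdio>

// A compile-time object: it lives in the integrated interpreter during
// translation, with ordinary std::vector semantics.
@meta std::vector<int> values { 3, 1, 4 };

void emit() {
  // A compile-time (unrolled) loop: each iteration stamps out one runtime
  // printf, with values[i] folded in as a constant. The index i is not
  // constexpr, but it is compile-time, so it could equally be used as a
  // template argument or a case label.
  @meta for(size_t i = 0; i < values.size(); ++i)
    printf("%d\n", values[i]);
}
```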
Starting point is 00:16:17 Same with enums. You can't put a printf inside an enum. But you can put a compile-time version of these statements in any of those block scopes, because suddenly a printf in global namespace makes perfect sense. As you're translating the source file and you hit a meta printf, just execute that printf, and you can get a hello from the translation unit at compile time in your terminal as the compiler is running. Or you can do it inside of a class definition. So it really makes template metaprogramming very easy, because now I can use ordinary algorithms. I can use std::sort and std::unique to create a unique list of types. And I don't have to create anything new. I can just
Starting point is 00:16:55 use the existing infrastructure, and I don't have to think about creating a set of partial template specializations that will, through some kind of deduction magic, get me the solution I want. You can program with the existing idiom. And what's cool is that once you have compile time control flow, once you have data-driven compile time if and compile time for, you can do useful things with information the compiler has always maintained but never exposed to you. So you can actually do useful things with introspection. So all compilers need to know what the names of your data members are and what their types are. But C++ doesn't provide you with that information
Starting point is 00:17:31 because it doesn't give you enough compile-time flexibility to do anything useful with it. But now, since we've kind of rotated C++ from the runtime onto the compile-time axis, I start exposing tons of compiler state that is available now through additional keyword extensions. So if you want to get the name of a non-static data member, you just use member_name, member underscore name. And it's a keyword and it's an expression. And you pass it the type you want and the index of that data member. Or if you want all of the member names,
Starting point is 00:18:06 you can use member_names, plural, and that returns a parameter pack. And so during substitution, that parameter pack is substituted, and you return for each element one of the member names. And so this makes it, you know, really easy to use compile-time information as data. And since we already have a full programming language, meaning the C++ programming language, we can do useful transformations on that data. So then it became really fertile ground for new language development
Starting point is 00:18:33 because suddenly I had a new set of data, which was compiler-provided data about my translation unit. And I had a whole new set of tooling, which was C++, which is old, but it's new in this context. And so then you start thinking, well, what else can I do? And you add things like mtype, which is a pointer-sized opaque data type that packs a type. So you can consider boxing any type into this mtype object, pass it around, you can put it into a std::vector, you can sort it, you can put it into a map, and then you can unpack it. And all this works at compile time in this integrated interpreter.
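The boxed-type workflow might look roughly like this in Circle's dialect (keyword spellings such as @mtype, @dynamic_type, and @pack_type follow Circle's documentation as best I recall; treat this as an unverified sketch, not standard C++):

```cpp
#include <vector>
#include <algorithm>
#include <tuple>

template<typename... Ts>
struct unique_tuple {
  // Box each template parameter type into an mtype value.
  @meta std::vector<@mtype> types { @dynamic_type(Ts)... };

  // Deduplicate with the ordinary STL, running at compile time
  // in the integrated interpreter.
  @meta std::sort(types.begin(), types.end());
  @meta types.erase(std::unique(types.begin(), types.end()), types.end());

  // Unbox the vector back into a template-argument pack.
  typedef std::tuple<@pack_type(types)...> type;
};
```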
Starting point is 00:19:04 So if I want to, you know, like I said before, if you want to create a class template that creates a unique set of data members, like a unique tuple, I can just create a std::vector of mtype, I can box all of the template parameter
Starting point is 00:19:20 types into this vector, and then I can std::sort them, std::unique them, and then I can unpack them back out with a keyword, and now I have a unique set, and I haven't done any template metaprogramming at all. I've just used std::vector and the STL. Just basic usage of the language to solve pretty simple tasks prompts you for new inventions. I have dozens of new language features
Starting point is 00:19:47 and all these sort of presented themselves to me without much effort, just because I was working on such fertile ground. And once I've accepted this idea of separating constancy from compile-timeness, and I realized I could use all of C++ and even system tools. So I can make foreign function calls
Starting point is 00:20:02 and access command line tools and whatever else I want to do to help kind of create my translation unit. So since you, well, the thing that you ended with is saying that you can call system calls and foreign functions and whatever, one of the complaints about that in the C++ world would be that you would end up
Starting point is 00:20:24 with two different translation units having two different values in them because two different calls to that system function return two different things, whatever. Like you opened a file, you get one value. Then first time, the next time you compile that header or whatever, you get a different value. How does that work in your world?
Starting point is 00:20:40 Yeah, I would hope that would be the case because if you change the contents of a file, I mean, you should get a different value as you read it out again. So if you have data-driven translation, if you want to put configuration information in a JSON file and use that JSON file as an asset to drive translation, you're going to get obviously different executables every time you build or every time the underlying file changes. But you could say the same thing about headers. I mean, if you change a pound include, if you choose a header file, change a header and then recompile,
Starting point is 00:21:09 obviously you're going to get different executables. This just broadens the scope of what potentially is considered source, any sort of asset. I guess what I'm saying, though: if that header file changes during compilation between two different translation units, then you have undefined behavior. You've probably broken the one-definition rule, something like that. I'm just saying, if
Starting point is 00:21:30 your underlying is whatever other system call that you're making, I'm compiling two different files at once, or two different files in my build system. The first file includes some header file that does some work and it gets some result. The second file includes the same header file and gets a different result, because there was a system call made inside that header file. Why would the... I don't know why. Why would this asset be changing while you're building? I'm saying the possibility exists. The possibility exists of changing any source file while you're compiling. And if you have a header file included by multiple translation units, that could be changed during the build as well.
Starting point is 00:22:08 I don't know. I think these are... I'm just asking, so would that be basically undefined behavior if that happened in your world, same as if a header file changed, possibly? No, I don't know why it's undefined. Everything's deterministic there. If the contents of the file changes
Starting point is 00:22:24 between building TU A and TU B, it's still defined. It's just that TU B is going to have contents from a different... And there's possible reasons to do that. I could definitely see a build tool wanting to treat some translation differently than others. People have hacked up all sorts of things.
Starting point is 00:22:40 I'm not saying do that, but I'm not sure what the nature of this... I mean, this is like a very widely do that, but I'm not sure what the nature of this... I mean, this is like a very widely spread sentiment, but I'm not sure what the fear is, really. Because as programmers, I think we're all pretty comfortable opening files at runtime, especially. And if we have
Starting point is 00:22:56 Python tooling or whatever else, we do it during interpretation, which is quasi-compilation time. But the idea of opening a file at compile time scares people, and I don't understand why, because it's really no different than opening it at runtime. It's the same system. It's the same API.
Starting point is 00:23:13 You can use FILE* f = fopen and read the data out, or you can use json.hpp or whatever you want, and just use it responsibly. Don't try to build a trap and then step in it, I guess. Okay. Okay. No. I think the...
Starting point is 00:23:27 You could say, well, there's a possibility of undefined behavior just because C++ is full of things that... little edge cases that are possibly undefined. But just the possibility of undefined behavior doesn't mean that you've contaminated your source, right? It's like, if you make wise choices, then you're not really exposed to this. If you use, I mean, how do we know
Starting point is 00:23:50 we're not going to have memory leaks in our runtime? It's because we use, like, std::unique_ptr or we use std::vector, right? And therefore we protect ourselves from these dangers that are inherent in the language. And I would say use that same safe programming practice at compile time. Use a std::vector.
Starting point is 00:24:06 Use a std::unique_ptr. And your resources will be managed appropriately that way at compile time. Okay. What compiler are you building this on top of? Did you start off with Clang? No. What are you working with?
Starting point is 00:24:19 Did you start something on your own? A blank file. Oh, wow. Yeah. So I felt... I mean, when I started, I looked at Clang, really looked at Clang. And it's just so hard to modify. I mean, it's like well over a million lines. I don't know how big it is now.
Starting point is 00:24:33 And, you know, everyone on Twitter who tries it, they end up writing some, like, lengthy blog about it. Like, you know, Corentin did it, like, last month, I guess, or a couple weeks ago, about putting in a small modification. And then you run up against some Sema braced-initializer routine that's like 30,000 lines. And you're like, well, I don't know if I want to make huge modifications. And I've made dozens of really deep modifications or additions to C++. And the idea of trying to do that to Clang was not appealing to me. So I wrote a new compiler from scratch. I don't think compiler front-end work is very difficult. I think writing a C++ compiler from scratch is very difficult
Starting point is 00:25:14 just because there's so much of it. But now that it's working, putting in new features is really easy. And a lot of times it's just, I have something I want to achieve. Let's introduce a new AST type for it. It's only going to be a couple lines. Let's introduce some modest grammar, and then you put a little grammar rule in there and it parses that out for you.
Starting point is 00:25:32 No problem. A lot of stuff can go in an hour or two. Especially once you keep doing it over and over and over again. A lot of C++ is the same formula, which is: if there are any dependent arguments, then you're going to have to create, like, a special AST type, which is, you know, type-dependent. And then during substitution, that'll, you know, re-inject back into the expression builder. And you replay this
Starting point is 00:25:53 pattern so many times, it becomes rote, and you refactor your code. And then it becomes, you know, really powerful, where, I mean, you mentioned the embed issue. Like, I put an embed keyword in; that was maybe an hour and a half of work. I mean, why should it be any more than that? You're just taking a path and you're loading a file and you're exposing it as kind of an array lvalue. And then you've got the LLVM backend that just emits, like, a raw data field there. So I don't think I would have had this flexibility if I were to stick with Clang, just because Clang is so big. So you wrote your own frontend, but to be clear from what you just said, you are using LLVM for your backend. Yeah, for code generation. Yeah, sure. I mean, I would say technically I have two
Starting point is 00:26:32 backends. And that's also why development was so good, which is the interpreter. So for the longest time, I would just be able to run any code in the interpreter. So I mean, that makes it really easy because emitting LLVM code is quite difficult for C++ because of destructors and exceptions. So you always have like the exceptional path and then the kind of nominal path because of, you know, all the special stuff that the compiler has to emit
Starting point is 00:26:56 that the programmer doesn't see. But when you write an interpreter in C++ for C++, it's like super trivial because exception handling and destructors are done for you. So if all the objects with non-trivial destructors have a destructor in them that will like, you know, step through and clear out all the data members recursively, suddenly, I don't have to do anything about exception handling in my interpreter. And it just becomes
Starting point is 00:27:18 like an awesome platform for prototyping features. So I'm just guessing here, are you, like, directly executing the AST in your interpreter? Yeah, sure. Okay. That's, yeah. You just walk the AST. It's exactly like an LLVM code generator. You walk the AST and you either execute it or you emit basic blocks and things.
Starting point is 00:27:38 Right. That is the exact, well, not exact, but the same approach that I took with a scripting engine I've been working on for a while, that it directly executes the AST and then just lets the C++ runtime take care of object lifetime and exceptions and everything, like you just said, so I didn't have to worry about those details. That's a huge win that C++ has, that kind of
Starting point is 00:28:04 deterministic destruction. Right. I wanted to interrupt the discussion for just a moment to talk about the sponsor of this episode of CppCast, the PVS-Studio team. The team promotes the practice of writing high-quality code, as well as the methodology of static code analysis. In their blog, you'll find many articles on programming, code security, checks of open source projects, and much more. For example, they've recently posted an article which demonstrates, not in theory but in practice, that many pull requests on GitHub related to bug fixing could have been avoided if code authors regularly used static code analysis. Speaking of which, another recent
Starting point is 00:28:39 article shows how to set up regular runs of the PVS-Studio static code analyzer on Travis CI. Links to these publications are in the show notes for this episode. Try PVS-Studio. The tool will help you find bugs and potential vulnerabilities in the code of programs written in C, C++, C#, and Java. You mentioned, you know, after you first introduced this meta keyword, rotating the axis, that you kind of had this fertile ground to introduce new features. What are some of the main features that Circle has that are not available in
Starting point is 00:29:11 C++? So a couple of them I took from people's proposals. So if there's a proposal, if I want to talk to someone and I get their attention and they have a proposal, that's great, because then I'll, you know, implement the proposal and then I can talk to them. And so I did the pattern matching proposal.
Starting point is 00:29:29 That's Michael Park and David Sankel. And that's why I met David Sankel. So I implemented that. And then, if you're not aware what that is, for everyone listening, it's like a switch statement, except you can have a structured binding in the case. And so you can match out components of tuples or components of classes, and you can have filters, and you can have a pretty expressive kind of domain-specific language for doing tests. And then the first case that matches is the one that gets returned. And so I put that in, and then that prompted me to do things like designated binding. So instead of a structured binding that just matches by index, you can match .x or .y or .z,
Starting point is 00:30:05 right? Wildcards, you know, dereferencing tools, and these kind of dovetail with pattern matching. I put in, from a compiler, I have injection from strings, which is, you know, a no-brainer if you have a kind of compiler like this. So I have an @expression keyword, which takes a string known at compile time. It doesn't have to be a string literal. It can be a std::string that you glue together using whatever magic you want. And you can inject the expression by parsing this input string. And so you can certainly put in logic in a file that you load in at compile time and emit code. But really, it's good for injecting logic that's embedded in strings.
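A rough sketch of the string-injection idea (the @expression spelling follows the description above; the example itself is an illustrative, unverified Circle-dialect sketch, not standard C++):

```cpp
#include <string>
#include <cstdio>

int x = 5;

void demo() {
  // Build the text of an expression at compile time with ordinary
  // std::string operations in the interpreter...
  @meta std::string expr = std::string("x") + " * 2";

  // ...then parse it and inject it into the translation unit. The line
  // below compiles as if "x * 2" had been written here, so it is
  // evaluated (and type-checked) in this scope.
  int y = @expression(expr);
  printf("%d\n", y);
}
```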
Starting point is 00:30:47 So I have a pretty long tutorial on my website, or kind of an example, on how to build something like libformat or a Python f-string. Everyone likes printf, I like printf, because it's concise, in that it gives you the width and the precision modifiers, and all that's in a really nice little language, but it's not type safe. And we have the usual litany of complaints. You know, iostreams is really slow and it's not very
Starting point is 00:31:17 concise. So, like, what's the happy medium? And for me, it's being able to provide a single format specifier that has fields for precision and width, and, you know, if you want a capital or lowercase e, or you want g or f formatting or whatever, but then also to put the expression you want to print right there in the format specifier. If you want to print a seven-column-wide float, you could do like 7.f, then a colon, and then you can put the expression you want right there inside the format specifier. So how does that work? Well, my compiler
Starting point is 00:31:56 you can write a Circle macro, which is a special kind of function, a special kind of injection tool. That macro gets this string as a compile-time argument. And then it can use ordinary parsing techniques to parse out the attributes from the expressions. And then it can inject that expression from a string into the compiler.
Starting point is 00:32:16 And that will evaluate the expression at the point of the macro expansion. So essentially, just like Python f-strings, you can kind of create your own domain-specific language, a formatting language, that allows you to embed the expressions right there in the specifier, and then parse those out, do some operations on the data, and then evaluate the embedded expressions. And you can do that through injection. It just works really nicely. We can now have kind of functions
Starting point is 00:32:48 that take in arguments in non-traditional ways. The problem, one difficulty with libformat, for instance, is that it has format specifiers, but there's no way to check that the format specifier and the provided variadic arguments are compatible until runtime, because you can't actually parse the format specifier until runtime. So it ends up packing...
Starting point is 00:33:07 Just for the record, there is compile-time checking as well for the format. Well, it still packs all of the arguments into a variant, right? And then checks those against the... I'm not entirely sure how it's actually implemented, but you can get a compile-time error if you specify a format that doesn't match the types passed in.
Starting point is 00:33:29 It just makes it really easy this way now because, again, we have reflection and variadic template subscripts, and we don't even need the arguments. We just have this format specifier, and it's easy to introspect at compile time. There's really no question of how you would do it, is what I'm trying to say.
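As a rough runtime analogue of what the width and precision fields in such a specifier do, here is a standard-C++ sketch. The helper name is invented; in the f-string scheme described here, the fields and the embedded expression would be parsed out of the specifier at compile time rather than passed as arguments.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical helper: apply a width and a precision to a double the way
// a printf "%*.*f" conversion does, returning the formatted text.
std::string format_float(double value, int width, int precision) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%*.*f", width, precision, value);
    return std::string(buf);
}
```

A spec like `7.2f` would map to `width = 7, precision = 2`, padding the number out to seven columns.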
Starting point is 00:33:45 Other things I have are object and data member pack declarations. it's easy to kind of introspect to compile time. Right. So there's really no question of how you would do it, I'm trying to say. Right. Other things I have are, like, object and data member pack declarations. So I have, like, a one-line tuple class template, which is just, it's a pack expansion, but instead of expanding, like, into an initialized list, you can expand into a class definition. It's just, like, there's a lot's a lot of really concise ways to do things
Starting point is 00:34:08 that don't involve any loops. People nowadays don't really like loops as much, for good reason, and they want to use this functional or declarative approach. And I'm trying to do that with class definitions as well. If you don't mind, I want to go back to something earlier I forgot to ask about. You were talking about implementing a C++ parser and compiler.
Starting point is 00:34:31 Just out of curiosity, how much of C++ do you actually support right now? I don't think I support all of 17. I don't yet support deduction guides on std::initializer_list, which is pretty obscure. I support deduction guides, but only on constructors, and not on the braced-initializer-list version of the constructors. I haven't gotten around to that.
Starting point is 00:34:53 Is that it? I'm not sure what else I'm missing from C++17, if anything. As far as 20, I have concepts, which are pretty easy actually to implement, and then I have spaceship. There's no test for spaceship yet. I'd like to do coroutines, but I'm looking for more guidance on kind of ABI concerns
Starting point is 00:35:10 and what it actually means. I guess GCC 10 just came out with coroutines merged now. Oh, did it? I thought it was just, like, this week or last week. It hasn't been released yet, but I didn't know it had been merged. Yeah, no, I didn't even know it had been merged. So when there's guidance on how to do coroutines,
Starting point is 00:35:30 because that really ripples down to the back end in a big way. Then I'll add that. I haven't made a pass through the C++20 draft yet, but most of the stuff in the C++20 draft is pretty modest, except for modules and coroutines, which I find are the hard ones. In a hypothetical, if I had a C++14 project right now, should I be able to compile it with your compiler? Yeah.
Starting point is 00:35:50 There's a few things I don't support that are non-standard. I parse the AVX intrinsics, but I don't emit backend code for them, things like that. That's not really part of standard C++. Right. That's just busy work, really, to find all those LLVM intrinsics, thousands of them. Yeah, so my test case right now is...
Starting point is 00:36:10 I mean, the biggest one that I added in, I guess, November is Range-v3. There's like 212 tests in Range-v3, and I compile those with C++20 concepts. So those all compile and build. And that was definitely the biggest torture test I faced, just because I had to do special logging of all the concepts it evaluates. There's so much concept use, like, is this type semi-regular or whatever, and I'd get these huge dumps and figure out which concept evaluated true or false when it should have done the other. And so that exposed a couple of bugs I had, especially in the type traits. It was like, is X trivially constructible from Y?
Starting point is 00:36:47 That kind of stuff. But yeah, so I try to use a lot of really forward-looking things in my regression tests. I have Hana's compile-time regular expression code. That compiles fine, although that code's not actually crazy. And then I compile some of my own translation units. I don't have a bootstrapping compiler.
Starting point is 00:37:06 I haven't contaminated the Circle source code with Circle, because I think that's insane. And I understand people from other languages do it, but it doesn't make sense for a C++ compiler to try to bootstrap itself anymore. Yeah, so I just have normal C++
Starting point is 00:37:21 14 or 17 source code for the Circle source. And also, Circle goes through such feature churn. I'm constantly removing and adding features, so it wouldn't make sense to do that. That just leads me to three other questions. So my understanding, and I could be completely wrong on this, is that at least some of the modern C++ compilers
Starting point is 00:37:44 do some transformations on the AST for optimizations before they pass it off to, like, the LLVM backend. So just out of curiosity, have you compared, does the Circle compiler generate approximately the same optimized code that Clang does for the same input? Clang uses its own proprietary blend of flags and spices, I guess. I use the default O1, O2, or O3
Starting point is 00:38:13 settings. I don't actually believe Clang does real source transformations. I thought they did something, but I could be completely off-base. I think everyone does some constant folding, and certainly if you have a constexpr function call in an expression, it'll want to fold that, and I fold that. But I'm not sure what other operations it would do.
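The constant folding being described can be pictured with a toy AST. This is a generic illustration of the front-end pass, not Circle's or Clang's actual representation:

```cpp
#include <cassert>

// Toy AST node: either an integer literal or an addition of two subtrees.
struct Expr {
    enum Kind { Literal, Add } kind;
    int value = 0;            // meaningful when kind == Literal
    const Expr* lhs = nullptr;
    const Expr* rhs = nullptr;
};

// Front-end constant folding: if both operands fold down to literals,
// replace the Add node with a single literal; otherwise leave the tree
// alone and let the backend's SSA passes handle anything fancier.
Expr fold(const Expr& e) {
    if (e.kind == Expr::Literal) return e;
    Expr l = fold(*e.lhs);
    Expr r = fold(*e.rhs);
    if (l.kind == Expr::Literal && r.kind == Expr::Literal)
        return Expr{Expr::Literal, l.value + r.value};
    return e;
}
```

A constexpr call in an expression gets the same treatment: once its operands are known values, the node collapses to a literal.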
Starting point is 00:38:33 Okay. I just don't know what... It's a lot easier for the back-end to optimize than it is for the front-end. It's like once you get into that three-address code, that static single assignment code, it just becomes a graph problem. And then you can say, well, these two edges are going from the same source to the same destination, and collapse them and merge.
Starting point is 00:38:51 It becomes a really easy problem. And that's why LLVM has been pretty successful, because they reduced it to a three-address code that's really easy to process. And I wouldn't really want to do very much in the AST. That's fair. You mentioned that you're constantly adding and removing features as you're playing with things. Would there be a way for me, as a user of Circle, to say, well, I am relying on features X and Y,
Starting point is 00:39:16 make sure that they are enabled or disabled, these features that for whatever reason have gotten in my way or something like that? Like, do you have flags for any of these things? No, but I don't have users who talk to me either. So, I mean, I'm the only user. I wish it were not the case. I wish there was, like, interest in doing something with this. But right now, it's me. Some people, you know, I mean, there's been some interest
Starting point is 00:39:40 and people looked into it for writing some papers or doing talks or whatever. But, you know, the core language just seems pretty stable. The things I've really been refactoring are macros, some of the injection stuff. But yeah, I don't have a problem adding new features. Someone just asked me about embed, and I said, yeah, I'll put in embed.
Starting point is 00:39:59 It's no big deal. I don't know if I would want to remove any features yet. Okay. Also, just this last week, I finally dropped the list comprehension feature, which is, I think, the biggest transformation to the core C++
Starting point is 00:40:16 language in ages. It's like I'm really pretty amped about that. Can we talk about that right now? Yeah, go for it. Go ahead. One of the things that Python people have been, you know, boasting about since the beginning was that Python has list comprehension. So you just have like a square bracket, and then you can put a for loop inside that with a filter,
Starting point is 00:40:35 or you can put some slices in there, and it will, you know, expand these guys out and create a list for you. And the Python list is like a standard C++ vector, basically. And yeah, it's really nice and it's really expressive. And they have a point. And the C++ answer was to do ranges. And ranges doesn't have the concision, and it's not a language built-in. And so it takes a long time to build
Starting point is 00:41:01 and then the compiler errors are really scary. And I think it provides kind of a logic that's difficult to get your head around. And certainly no replacement for list comprehension. So last month, this is only like six weeks ago now, I said, what I'm going to do is use the existing infrastructure laid out by C++11 parameter packs. So if you have a parameter pack, there's an implicit bit held by the compiler, which says, this is a parameter pack. And it's only going to materialize a real value
Starting point is 00:41:33 or yield a real value during template substitution. But other than that, it's just like an expression. So if you have a non-type parameter pack of ints, so int dot dot dot my_ints, that you feed to a function or feed to a class, and you reference those things, that returns a prvalue int, right? It's a real integer. It's not some wrapper. It's not like a range<T> around int. It's an int. And so it can do overload resolution, and you can add these things together. You can do whatever you want to do on expressions
Starting point is 00:42:00 of that type and of that result object. So I said, okay, what if I were to kind of run with that pack bit, add a second pack bit, which is the dynamic pack property, and then allow you to turn any container into a parameter pack, which is dynamic, though. So what I did is I introduced a slice syntax, which is the square bracket with begin, colon, end, colon, step, which is the exact same syntax you get in Python. I think Fortran has it.
Starting point is 00:42:30 I think so, maybe. I think MATLAB has it. I mean, they're kind of different as far as the ordering. But the idea here is that you have a begin index, an end index, and a step. And the step can be positive, or it can be negative to go in right-to-left order. So now what happens is you use a container and you use this pack.
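The begin:end:step arithmetic can be sketched in ordinary C++. This helper just enumerates the indices a slice would visit, under the simplifying assumption that begin and end have already been resolved to concrete positions; it mirrors the Python-style semantics described here, not Circle's internals.

```cpp
#include <cassert>
#include <vector>

// Enumerate the indices a begin:end:step slice visits. A positive step
// walks left to right; a negative step walks right to left.
std::vector<int> slice_indices(int begin, int end, int step) {
    std::vector<int> out;
    if (step > 0)
        for (int i = begin; i < end; i += step) out.push_back(i);
    else
        for (int i = begin; i > end; i += step) out.push_back(i);
    return out;
}
```

On a five-element container, 0:5:1 visits every position, 4:-1:-1 visits them in reverse, and 1:5:2 visits the odd positions.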
Starting point is 00:42:58 So the simplest slice notation is just square bracket colon. It's just one colon, and then the begin index is assumed to be zero and the end index is assumed to be negative one, which is one past the last element. Now that transforms this std::vector from type std::vector to whatever the inner type is, right? And how do you arrive at that inner type? Well, you call the begin member function that returns an iterator, and then you dereference it with star, right? So that's the same thing that range-for does. So by using a slice, I've transformed any STL container,
Starting point is 00:43:28 anything that has begin and end iterators, into a parameter pack. Now the type has changed to whatever the dereferenced iterator type is. So that'll be an lvalue int if you have a std::vector of int. And I haven't done a range wrapper around that inner type. The inner type is available by itself, even though it's implicitly a pack. I can add them together.
Starting point is 00:43:50 I can pass them to functions. So if I pass this slice operator to printf, right? Well, printf has like an integer return object, result object. So now that printf call is itself a pack. And I can add it to other things. So you can create these big expressions, have a whole bunch of slices in them, and you can expand them out.
Starting point is 00:44:10 When you expand, the compiler will generate a for loop. So it says, well, I've got one dynamic pack in this expression. I'm going to create a new little stack frame. I'm going to call its .begin. I'm going to call its .end. I'm going to advance the begin
Starting point is 00:44:26 iterator forward, or advance the end iterator backwards, depending on if the step is positive or negative. And then I'm going to generate a loop that will visit each element in this pack. It could step three elements between, or whatever the step size is, and then it executes that expression.
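Concretely, that generated loop looks something like this ordinary C++ (building a string instead of printing, so the result is easy to check; the function name is invented, and the sketch assumes step size 1, left to right):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Roughly what expanding an expression over a dynamic pack generates:
// begin and end are hoisted into a hidden frame, then a loop evaluates
// the expression once per visited element.
std::string print_all(const std::vector<int>& v) {
    std::string out;
    auto it = v.begin();      // compiler-generated begin
    auto last = v.end();      // compiler-generated end
    for (; it != last; ++it)  // the generated loop
        out += std::to_string(*it) + " ";  // the expanded expression
    return out;
}
```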
Starting point is 00:44:42 You can think of the slice operator as a dynamic pack generator, something that will generate a loop. And then the consumer is the kind of embodiment of the loop. So the consumer in this case can be a regular expansion expression. So let's say I want to printf all of the values of a vector to the terminal at once, right? So you see printf, quote, percent d, end quote, comma, v, square bracket, colon, end parenthesis, dot dot dot, semicolon. And so the dot dot dot at the end is a pack expansion into an expression. So what it does is it converts the result object. The result object of printf is int.
Starting point is 00:45:20 It converts that to void. It's a discard, right? Like any expression statement is. And then it generates a for loop. And so now, what if I want to print the contents of a vector in reverse order, right? It's the same thing, but now the slice is v, square bracket, colon, colon, minus one, which is right-to-left traversal. It's reverse-order traversal. What if I want to do only the odd elements? It's one, colon, colon, two. So start at offset 1, don't have an explicit end, and then 2 means skip every other element. And so now what you've done is you've ordered up.
Starting point is 00:45:51 You have this amazing, really expressive syntax for saying I want to visit a container in some order. And the container can be anything. It can be a std::map. It can be any kind of type that has, not even random access, but any forward or reverse iterator, because it uses std::advance internally if it has to go more than one
Starting point is 00:46:10 step, or plus-plus or minus-minus, right? And then if you want to do list comprehension, this is like the best part of it: you can expand these things into an expression, but you can also expand them into a list. So just use square brackets. People think square brackets have
Starting point is 00:46:25 to be lambdas, but if it's not followed by, like, parentheses or arrow or whatever the grammar is for lambdas, I interpret that as a list comprehension. So now put a complicated expression involving one or more containers into the square brackets, and then put dot dot dot inside the square brackets, right? And so now that's a list comprehension. The result object of that list comprehension
Starting point is 00:46:41 is a std::vector. It's specialized over whatever type is inferred from the contents, right? And then it gets returned. So let's say I have a std::vector of ints, and I say square bracket, v, square bracket colon, times 2, dot dot dot, end square bracket, right? So what I've just done is created a slice that will loop over every element of v, multiply it by two, and then I expand that into a new std::vector.
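A hedged sketch of what a comprehension like `[v[:] * 2 ...]` plausibly lowers to in standard C++ (the helper name is invented, and this is a guess at the lowering, not Circle's actual codegen):

```cpp
#include <cassert>
#include <vector>

// What `[v[:] * 2 ...]` might desugar to: size the result up front,
// then run the generated loop evaluating the element expression.
std::vector<int> doubled(const std::vector<int>& v) {
    std::vector<int> out;
    out.reserve(v.size());    // the compiler counts the expected elements
    for (int x : v)           // the generated loop over the slice
        out.push_back(x * 2); // the element expression
    return out;
}
```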
Starting point is 00:47:09 And so when the compiler generates code for this, it will create a new std::vector. It will count how many elements it expects to see. It'll reserve that much memory. It'll step through, and then it'll populate the std::vector and return it. And what's even better is, now the std::vector is a regular prvalue std::vector, but it's
Starting point is 00:47:25 also a list comprehension AST node, which means it's implicitly convertible to an initializer list. So say I want to initialize a std::string. Std::string has an initializer list constructor. It does not have a std::vector constructor. Std::map has an initializer list constructor.
Starting point is 00:47:42 Pretty much any STL container other than array has an initializer list constructor. The list comprehension provides the backing store, which is the dynamic memory required to hold the contents of an initializer list of some dynamic length. We don't know what the length is at compile time, so
Starting point is 00:47:57 heap allocation is used. It creates that std::vector. It does a materialization on it to turn it into an expiring xvalue. And then it returns, or yields, an initializer list with a pointer into that data. And so now we can have kind of universal list comprehension initializers for any STL types.
Starting point is 00:48:18 And I have this very long document on GitHub, with all the other Circle stuff, that has hundreds of examples on how to do text manipulation. So how do I take one string and, say, alternate capitalization, or double characters, or whatever, things that are shown as STL ranges or C++ ranges examples, but now entirely in this new list comprehension syntax, without any additional function calls. The point is that there's no function calls required.
Starting point is 00:48:48 Everything's accelerated by the compiler itself, and so there's no Sturm und Drang over the design of the interface. A lot of the problems people have with std::transform and std::for_each, whatever else, is that there's a specific interface, and you have to adhere to that interface. Transform will visit every element between the begin and end pair,
Starting point is 00:49:03 but what if you want to visit two arrays simultaneously? You have to create a zip iterator. What if you want to step every other element? You have to create a step iterator. Now, what if we want to do both? What if we want to step and zip? We have to make sure we have the step and the zip iterator composable. Do we use pipes to compose them?
Starting point is 00:49:19 Do we use composition? And it becomes like a huge question because what we wanted to do is express an algorithmic desire through a function, but that function has to have a specific interface. By doing everything in the language with list comprehension and these dynamic packs, I avoid all that. I never have to create a lambda for this stuff,
Starting point is 00:49:36 right? Because I'm not trying to ship my special functionality, capturing scope through this closure, and then pass it to another function where it'll be invoked again. If I want to do a reduction, you know, C++17 has fold expressions, which is like dot dot dot, plus, some pack. So I've expanded that to work with dynamic packs. So you can do dot dot dot, plus, v square bracket colon, which is the slice of v, and that'll add up all the elements of the container at runtime, right? And I can put any
Starting point is 00:50:06 expression in there. It's not just the slice of v. I could say, find me the max of the difference of two elements, right? So I have vectors a and b, and I want to find the max difference of two elements. So there it's just dot dot dot, std::max, parentheses, a-slice minus b-slice. Now, that'll find me the maximum of the element-wise differences. But if I want to do that with the STL, I'd have to create a lambda function that encloses that behavior of a minus b. And then I'd have to figure out a way to get the current iterators from a and b to that lambda. And that's an interface problem, and that's why we have concepts.
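For comparison, the standard-C++17 spelling of that max-of-differences fold, with exactly the lambda and interface plumbing being described (the function name is invented, and it assumes both vectors have the same length):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <limits>
#include <numeric>
#include <vector>

// std::transform_reduce zips a and b, applies the element-wise
// difference, and folds the results with max -- the library counterpart
// of the fold `... std::max(a[:] - b[:])`.
int max_diff(const std::vector<int>& a, const std::vector<int>& b) {
    return std::transform_reduce(
        a.begin(), a.end(), b.begin(),
        std::numeric_limits<int>::min(),              // identity for max
        [](int x, int y) { return std::max(x, y); },  // reduce step
        std::minus<>{});                              // transform: a[i] - b[i]
}
```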
Starting point is 00:50:46 We just have to make sure that these different requirements are composable. But when you build this functionality right into the language, that all washes away. I don't worry about interface anymore, and I don't have to worry about closures, because there's nothing to close. This is all written inline.
Starting point is 00:50:59 And so I think, from, like, an immediacy argument, it's much better to have a rich core C++ language, because you don't worry about interface. And that's what's been killing C++. When you look at STL code, there's a zillion constructors in every type
Starting point is 00:51:16 now. Every type has an explicit and a non-explicit one. There's so many enable_if statements. There's just so much stuff now, because people worry about composability. But composability is not a problem when you're just dealing with individual expressions. So my push for this is to make the language as expressive as possible, so you're not worried about calling libraries and how to use libraries. So if I could make a completely, let's see, ignorant, I have no idea because I've never actually tried this, feature request for your list
Starting point is 00:51:48 comprehension, it would be, if I understood you correctly, this is what I would like to see: if the size of the list being built is known at compile time, like it's coming from a std::array, then the backing store itself would be a compile-time-sized thing, like a std::array. So then, for systems that can't do dynamic memory for whatever reason, it could still work with your list comprehensions. Yeah, I thought about doing that. Right now the slice operator always returns a dynamic pack; if it were to return a static pack, it would work, because a list comprehension just
Starting point is 00:52:27 includes a regular initializer list in it. So if you have a non-type template parameter pack, you can expand a non-type pack in that, right? Because you can compose it of multiple things. It can be a slice expression, and it'll go through the whole slice expression, comma, some scalars, comma, a static pack
Starting point is 00:52:43 expansion. You're talking about a static pack expansion. It would make sense, I agree, to put in a slice operator that allows you to... Oh, wait a minute, I have that. Shoot. I'm not sure if it works for that or not. I do have a slice operator on template packs. I forgot about that. Dot, dot, dot,
Starting point is 00:53:00 slice. And so if I'm given a non-type parameter pack and I want to only get the even elements, I could do dot dot dot, bracket, colon, colon, two, which skips every other one. And that transforms the original pack into another pack. But now this pack essentially only presents half the elements to me.
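The index arithmetic of that static `...[::2]` slice can be reproduced by hand in standard C++ over a tuple: take half as many indices and double each index request. The helper names here are invented for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>
#include <utility>

// Select the even-position elements of a tuple: generate indices
// 0 .. N/2 - 1 and double each index request, the same mapping a
// static [::2] pack slice performs.
template<typename Tuple, std::size_t... Is>
auto take_even_impl(const Tuple& t, std::index_sequence<Is...>) {
    return std::make_tuple(std::get<2 * Is>(t)...);
}

template<typename... Ts>
auto take_even(const std::tuple<Ts...>& t) {
    return take_even_impl(
        t, std::make_index_sequence<(sizeof...(Ts) + 1) / 2>{});
}
```

A five-element tuple yields three indices (0, 1, 2), which become element requests 0, 2, and 4.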
Starting point is 00:53:20 So then if the original pack had size 10, the new pack has size 5. And so when the new pack is expanded, it essentially multiplies each index request by 2. So, yeah, obviously I think it would make sense to extend the dot dot dot pack syntax
Starting point is 00:53:36 to things like tuples or anything tuple-like. Because there's that thing where, for structured binding, if you have a class type or a std::array or a std::tuple or a... is that it? A pair? A pair, tuple, array, yeah. And those are all implemented with standard library hooks to make it possible, yeah.
Starting point is 00:53:57 Right, so I think that's a good feature request. I think it would make sense for me to check, if the incoming expression is not a parameter pack and uses that static slice syntax, to try to treat it as if it were a tuple or a tuple-like structure. Right. Because that would work with std::array, right? Yeah. Yes, it should. I believe so. I mean, if I understood everything that you said about how it's implemented, then I'm pretty sure it sounds like it would work.
Starting point is 00:54:24 Yeah, I think so. Yeah, all things are possible, right? It just requires someone to say, let's do it, as opposed to sniping at other people's proposals. I don't know. I don't really write proposals. I just write the compiler. And that's why I've got so much done. I mean, I'm the one who decides if it goes in or not. So yeah, I'll probably put it in. Well, then I would like to make two other feature requests while I'm at it. The first would be a pull request to Compiler Explorer, so that it's possible to play with your compiler on godbolt.org, which should not be difficult to do. Maybe. Okay. It's just hard for me to, like, support. I don't really have any support team. I mean, it's like one person, so I'm not sure if I want to put it out in multiple formats like that. I don't know what it actually
Starting point is 00:55:09 takes to integrate it. Oh yeah, it's actually really simple to integrate a different compiler. If you look right now, godbolt.org supports, like, 15 different programming languages. Giving it another front end, assuming building it is no more difficult than building Clang, then it should be pretty straightforward, a single pull request, and it would get built nightly automatically. Okay. And the other, well, okay, then you're definitely going to disagree with my next feature request, I'm pretty sure, which I don't blame you for.
Starting point is 00:55:38 But it would be cool to see the chart of features that you do support from C++ on the cppreference compiler compatibility chart. How is that compatibility... That's not an automatic thing? It's not an automatic thing. People just go in and update the table on the wiki, basically.
Starting point is 00:55:57 Oh, right. So that just gives you the proposal number for each of the features. Yeah, and the whole thing's already built out right now. It would just be checkboxes for your compiler. Yeah, that would be good. I'd probably find some things I forgot about. And right now my test case is, like, all of the STL, which is libstdc++ 9. That's, I don't know, 100 files in there; it's like 220,000 lines. And there's Range-v3 and some other ones. But I'm sure there's some constructs that aren't used by any of those that are still in the language. I guess I should... Oh, and I might be able to save you some effort with Compiler Explorer
Starting point is 00:56:27 if we ask really nicely, because when Matt listens to this episode... hey Matt, it would be awesome to see Circle on Compiler Explorer. He might do it if it's not that difficult, and you might not have to do anything. Okay.
Starting point is 00:56:43 So one more question I have before we let you go is, you know, you just said a moment ago how you don't write, you know, ISO papers, you're writing a compiler. But there are a whole bunch of ISO papers that seem to be, you know, taking a look at Circle. Are you hoping that Circle does inspire changes in the C++ language? Like, what is your goal with Circle?
Starting point is 00:57:06 My goal is to get some material support and be able to put this out in some form. Now, if that means it's a C++ compiler that sort of pushes ahead of the standard, that would be... My goal is to get it used and to get some other support on it because I'm super exhausted
Starting point is 00:57:24 and I need some other people to make it worthwhile. But it's a great piece of software. It's about twice as fast as Clang. I put another benchmark up last night. It shows 2.3 times faster than Clang, 15 times faster than MSVC in the large array initializer XSD benchmark that Karntan was working on before. It's a fast
Starting point is 00:57:46 compiler, and it's easy to use, and it's great for prototyping. I kind of wish that the C++ community would engage with me and say, this would be a great platform for prototyping all of our proposals, because arguing over PDF files
Starting point is 00:58:02 is not satisfying to me, and I don't think it leads to good results. If people had a path for getting stuff implemented in a real compiler in the course of weeks as opposed to years, that would give the iteration and speed and proactiveness that C++ needs to stay competitive and to become a better language.
Starting point is 00:58:23 I want the code to be used. I forget what the original question was, though. You know, just what are your goals with it? Are you hoping that C++ becomes more like Circle in the future, or are you seeing it as its own separate thing that you hope people use instead of C++? I guess my hope is that C++ would become like Circle, but I don't have confidence in that.
Starting point is 00:58:47 I think there's a lot of personalities in the committee, and there's just so many people. And for a lot of people, the fight is the attraction. The idea of going to quarterly meetings and then duking it out is a career. And I mean, they just like that level of... they like the process. And I don't think that process would get through the really extensive challenges that this compiler is posing.
Starting point is 00:59:16 I mean, even the question is, what is a compiler? And Circle is reframing that, because it's simultaneously a compiler and an interpreter, right? It's its own scripting language. And that's something that the committee is just not really ready to deal with, especially when it's looking at proposals that have, you know, very small changes about constexpr if scope or whatever, right? I mean, this is navel-gazing time, and I don't think the committee, with hundreds and hundreds of members, is ready to go through all of this. I could be wrong.
Starting point is 00:59:45 I hope I'm wrong. But right now, my hope is that I can get funding and Microsoft or Google will say, maybe we should like talk to this guy. I can't even get talks at companies. I always try and I haven't actually given a talk at a company, a real talk since like July. I've just kind of given up. I mean, I kept trying Microsoft and they said, you know, no, the employees wouldn't even stand up for me. So I don't know. I've tried Google a bunch of times and then, you know, get dead end. So, um, yeah, I, I mean, I think this is great stuff. Nobody's trying to recruit me. I gave,
Starting point is 01:00:13 I did give a talk to, um, Herb and Bjarne and, uh, Chandler in November, which was good. Pretty much everyone on SG7. It was like, maybe like seven people, eight people. But, you know, so I have like two hours. I went through all the features and it was great stuff. But, you know, where do they go from there? It's not clear. Right. You know, it's like I gave SG7 all this content. And there's really no urgency to implement it.
Starting point is 01:00:39 Or even to talk to me about getting access to the technology. So I don't know. I don't really have a plan going forward. My dream when I started was that if I built a really amazing tool, good things would happen. I built the tool, and I'm still waiting for the good things to happen.
Starting point is 01:00:55 It's been great having you on the show today. It definitely seems like a powerful compiler and I hope some good things do happen. Maybe some people who haven't heard about it before will hear about it. Doing things like this is so essential for me. I'm really thankful, because it's really critical for me to get the word out.
Starting point is 01:01:10 Yeah, that's where I think having it on the compatibility chart on cppreference would be big. I haven't even considered that. I didn't know that was a thing even. Yeah, because people looking at that chart will be like, hey, wait a minute, what's this other compiler that supports all of these features too? Because right now there are really only three options for compilers that support C++17.
Starting point is 01:01:31 Okay, thanks so much, Sean. Thanks, guys. Yeah, thanks for coming on. Thanks so much for listening in as we chat about C++. We'd love to hear what you think of the podcast. Please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com. We'd also appreciate if you can like CppCast on Facebook and follow CppCast on Twitter.
Starting point is 01:01:56 You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter. We'd also like to thank all our patrons who help support the show through Patreon. If you'd like to support us on Patreon, you can do so at patreon.com cppcast and of course you can find all that info and the show notes on the podcast website at cppcast.com
