CppCast - CMake Server

Episode Date: August 10, 2016

Rob and Jason are joined by Stephen Kelly to discuss his work on the CMake Server project, which will enable advanced tooling for CMake.

Stephen Kelly first encountered CMake through working on KDE and, like many C++ developers, did his best to ignore the build system completely. That worked well for four years until 2011, when the modularization of KDE libraries led to a desire to simplify and upstream as much as possible to Qt and CMake. Since then, Stephen has been responsible for many core features and designs of 'Modern CMake' and now tries to lead designs for its future.

News
- Conan virtual environments: Manage your C and C++ tools
- Macromancy
- Opt-in header only libraries
- Opt-in header-only libraries with CMake

Stephen Kelly
- @steveire
- Steveire's Blog
- Stephen Kelly on GitHub

Links
- CMake Daemon for user tools
- CMake

Sponsor
- Incredibuild

Transcript
Starting point is 00:00:00 This episode of CppCast is sponsored by Incredibuild. You won't believe how fast your build can run until you download your free developer version at incredibuild.com slash cppoffer or just click the link on our website. CppCast is also sponsored by CppCon, the annual week-long face-to-face gathering for the entire C++ community. Episode 67 of CppCast with guest Stephen Kelly recorded August 10th, 2016. In this episode, we discuss opt-in header-only libraries.
Starting point is 00:00:48 Then we talk to Stephen Kelly, one of the lead maintainers of CMake. Stephen talks to us about the CMake server project for user tools. Welcome to episode 67 of CppCast, the only podcast for C++ developers by C++ developers. I'm your host, Rob Irving, joined by my co-host, Jason Turner. Jason, how are you doing today? Hi, Rob. How about you? I'm doing pretty good. Any new announcements from CppCon? I don't think I checked yet today.
Starting point is 00:01:35 Not that I know of. Yeah, still the same post as last week, that Bjarne is going to be the main keynote. Okay. There should be more announcements soon, I would think, because there's still a couple of those plenary sessions they haven't told us about yet. Yeah, I think that's the only thing
Starting point is 00:01:50 left on the schedule that hasn't been announced yet. Okay. Well, at the top of every episode, I'd like to read a piece of feedback. This week, I got an email from Eric, and he wrote, I'm recently returning to C++ having last used it in the 90s. The podcast has been a great way to immerse myself in the language.
Starting point is 00:02:07 I was talking to a coworker today about the episode on modules with Gabriel Dos Reis. He mentioned that he wishes it would have been longer and gone into more depth. That reminded me that I've been meaning to send you guys some feedback. I agree with my coworker. I would love to hear more information about modules, and I wish many episodes were longer. Sometimes things are just getting rolling when it's time to wrap up. He also gave us some suggestions on libraries that he'd like to hear about in
Starting point is 00:02:29 future episodes. CAF and Speedlog. And he says, keep up the great work. Thanks from Eric. So I know we definitely hear feedback a lot that they like it when we go pretty in-depth. I'm not sure about longer episodes, though.
Starting point is 00:02:45 I looked at the time, and the modules episode was about 53 minutes. It was one of our longest, I believe. Yeah. I don't think we've ever gone over an hour, so it was definitely one of our longer episodes. And it would probably be hard to keep guests for much longer than that. And I also think there's a limit to how much information you can realistically get out of the audio format.
Starting point is 00:03:09 Hopefully you get a good taste of what modules are and learn a couple new things about them from an episode like that. But from that point on, if you want to know more, I think you probably have to go and watch some of his talks that he gave at CppCon or read some of the papers he's written. That's kind of, in general general for any episode we make.
Starting point is 00:03:27 If you want to learn more, you probably need to go to other sources because there's only so much we can do. Yeah. I would say it's plausible at some point in the indeterminate future that we might have a two parter episode. Sure. Sure.
Starting point is 00:03:41 Things really got rolling. Yeah. But I think we went pretty in-depth, and I wasn't really sure if we had any other questions in our minds that we chose not to ask. Right. Yeah. Okay, well, we'd love to hear your thoughts about the show. You can always reach out to us on Facebook, Twitter,
Starting point is 00:04:01 or email us at feedback at cpcast.com, and don't forget to leave us reviews on iTunes as well. Joining us today is Stephen Kelly. Stephen first encountered CMake through working on KDE and like many C++ developers, did his best to ignore the build system completely. That worked well
Starting point is 00:04:17 for four years until 2011 when the modularization of KDE libraries led to a desire to simplify and upstream as much as possible to Qt and CMake. Since then, Stephen has been responsible for many core features and designs of modern CMake and now tries to lead designs for its future. Stephen, welcome to the show. Thank you.
Starting point is 00:04:36 Thanks for joining us. I'm kind of curious because Qt has its own build system also, right? So how does this interaction between CMake and Qt and QMake and all that come together for you? That's right. Yeah, Qt uses QMake to build Qt itself, and many users of Qt also use QMake. But one of the things that I did several years ago,
Starting point is 00:05:02 I think all the way back in 2011 or 2012, was add CMake files to Qt itself, so that once you have a build of Qt, you can write your own project using CMake and use Qt as a dependency and have that work out very easily. So Qt has this, excuse me, CMake has this concept of package configuration files, which works similar to package config, the Unix tool for, you know, finding out what flags you need to use to compile and to link. But the CMake package config files are a bit more CMakespecific and suited to consumption by CMake. CMake can generate those files if you're creating a library which itself is built with CMake, but as Qt uses QMake, I use QMake to generate CMake files within the Qt build system. Oh, okay.
Starting point is 00:06:01 Yeah, some people don't realize that that's possible. They think that if a library upstream doesn't use CMake, then it can't ship a config file, but that's not the case at all. That's what Qt does, and it used to be what LLVM and Clang do as well, back when they used to use a Makefile-based build system. So the Makefile-based build system would generate CMake config files too. Okay. Okay. Well, we have a couple articles we're going to talk about.
Starting point is 00:06:31 Feel free to chime in on any of these, and then we'll start talking to you more about CMake and the latest project you're working on for CMake. Sure. So this first article is from the Conan blog. We talked to them a couple episodes ago, and they are now creating conan virtual environments uh to manage your c and c plus plus tools what were your thoughts
Starting point is 00:06:53 about this one jason uh kind of reminded me of what rb rb env i believe it is for when you want to switch between different ruby versions do. But I don't know. I got a little lost on where I would personally use this. I think Python has something similar as well, also called VirtualInv. Okay. I think he mentioned in the article that it might be useful for setting up continuous integration machines. I could definitely see that use case. Well, that's something I do a lot of work with,
Starting point is 00:07:24 so I should probably spend more time reading this article and trying to understand his goal, maybe even reach out to them and get more info if I need it. Yeah. I tried it out yesterday just to sort of see what it's like. It turns out that I'm already doing something similar in an ad hoc way. So what the Conan virtual env does is it sets various things in your path environment variable and sets your PS1 prompt in the terminal. But I already have my own scripts that do that.
Starting point is 00:07:59 So they conflicted a little bit. So you use it for switching switching what uh c++ compiler using by default for instance yeah yeah okay i've just always gone into i use cmake for all my projects so i've always just gone into the you know c++ configuration there by hand if i want to or pass cxx on the command line i never thought about the desire to automate it that makes me a bad programmer i guess i think it's also a system to download dependencies yes dependent libraries and things like that and in that sense i think it's a cool idea so i'd like to see how far it goes okay the uh next article we have here is titled macromancy um and i always like reading articles where they start off by telling you please never ever do this in actual code you always know it's
Starting point is 00:08:53 going to be a fun article when they start off like that even better before saying not to do it he starts by quoting the standard saying i promise this is allowed. So yeah, basically he challenged himself to see if he could include header files in a more dynamic way. And he came up with some interesting results. It's going to be a two-part post. So the next one, I'm not sure if it's out yet. Have you seen it yet? I have not sure if it's out yet. Have you seen it yet? I have not noticed.
Starting point is 00:09:26 Yeah, but basically does some crazy things with macros to expand and use include headers. There's some interesting things you could learn from macros, I'm sure. You guys want to add anything else to this one? Maybe just a comment that some of these tricks for telling the compiler which header file to include next with a macro is some of the things that Boost's preprocessor library has to do for recursively including header files.
Starting point is 00:09:54 So it's ridiculous, but it does actually exist in the wild also. Right. Yeah, people use Boost preprocessor. I haven't used it so much myself, but some of the things in the blog, they look sort of familiar after reading through a little bit of the Boost preprocessor code. I got rid of all Boost preprocessor usage
Starting point is 00:10:16 when I moved to C++11. I was able to replace it all with variadic templates. Great. Okay, and this last article is from Vittorio Romeo, who's another developer we had on one of my earliest episodes. And he is talking about opt-in header-only libraries. And basically the idea that some developers prefer header-only libraries because they're really easy to set up, but some think it's kind of a crutch.
Starting point is 00:10:43 And you can design your library in such a way to enable either developer to use it however they want, either in a header-only mode by having the header actually include the CPP files or in a static linking or dynamic linking mode. I thought that was a pretty neat idea. And Stephen, you said you actually went ahead with implementing some of this in CMake? Actually, what I did was I made it easy to do what he's describing with CMake. So I actually started by forking his code. So he put a link up to his GitHub repo, which is actually a repo for his blog, I think, which contains the example code. And in there, he had a shell file, which was doing the compilations of the files of static
Starting point is 00:11:30 library and dynamic library and header only. And so I just wrote some CMake files to show how you could do that with CMake without, you know, manually invoking the compiler as he was doing in the script. I feel like if you're a library programmer, you should read this article and read your follow-up article, Steve, and I think it's worth something that you should spend some time thinking about. Yeah. I mean, there's a lot of trade-offs. If you want to make a library which is both usable as header only and as a static or dynamic library for example you know you can't have static functions
Starting point is 00:12:12 of the same name in different cpp files because they're all going to be matched together or functions within an anonymous namespace or similar hash defines in well identical hash defines in the cpp files if you mash those together you get the uh you start to get conflicts just like with a unity build so i think it can work for certain libraries but there's definitely trade-offs and constraints that get introduced there that's an interesting point that it would kind of have overlap with Unity builds. I hadn't thought about that. Well, we'll put the link to your blog post in the show notes as well, Steve. Okay, cool.
Starting point is 00:12:54 Okay, so tell us a little bit about the CMake Daemon project that you started. Yeah, so living here in Berlin, I have a lot of contact with other people who work on KDE and on Qt. So people who work on KDevelop and people who work on Qt Creator, both offline and online. And they both want to get great integration of the IDE with CMake build systems. And so one of the KDevelop developers, maintainers, started a thread on the CMake mailing list about how we could improve that kind of integration. And we kind of expanded it to include the Qt creator developers, such as Tobias Hunger, who you've had on your show late last year, and also other developers like C-Line developers
Starting point is 00:13:54 and Visual Studio developers, just to see what are the actual needs that IDEs have for CMake integration. And there's different things. IDEs need to know the structure of your build system, like where are certain targets, what files belong to which
Starting point is 00:14:16 target, how do I compile them, what flags and include directories and everything do I need to pass. But then beyond that, if you already had that and that was easy, you want good code completion and good semantic highlighting and debugging facilities. And so over several design discussions on the mailing list,
Starting point is 00:14:39 we kind of came to the point that it was clear we shouldn't just generate more files from CMake in order to give that kind of information to IDEs, but rather we should define some API in a way which is long-running, such that IDEs would be able to get all the information they need. So initially the idea was we would just generate extra JSON files along with the Make files or the Ninja files or the Visual Studio or Xcode files. But because those are static,
Starting point is 00:15:19 that design wouldn't be able to be extended to give you great code completion or debugging facilities. So that's how we ended up at a choice between defining some kind of plugin API, like with some C or C++ API, or writing an inter-process communication API. So from then, I did a bunch of refactoring within CMake to make it possible to do that, and then posted my proof of concept in my video last January of what can be achieved with this kind of design and architecture. So it is the IPC option that you ended up choosing ultimately then?
Starting point is 00:16:08 It is the IPC option that we ended up choosing, yeah. With the C or C++ API plus linking to some library, we decided that wasn't viable because CMake actually already has a plugin API. It's way out of date. that wasn't viable because CMake actually already has a plugin API. It's way out of date. It was introduced like over 10 years ago and it was obsolete quite quickly. I don't think it was ever widely used.
Starting point is 00:16:37 I don't recall ever seeing a reference to it. I've been using CMake for a very long time. Yeah, I think there's a cmplugin.h or so file that ships with CMake for a very long time. Yeah. Well, I think there's a cmplugin.h or so file that ships with CMake, and there's a special CMake command called load command, maybe, or something like that. It's still there, and I think it still works as well as it ever did, but it's very restricted, and it's not really considered a success. And, you know, there are other issues that come up once you start to make some kind of binary interface like that,
Starting point is 00:17:12 such as 32 and 64-bit and different architectures and ABI stability. And, you know, if you want to have a stable API, that means you can't add new virtual methods and you can't add new members and everything like that. You kind of start to hit all of the issues that Qt hits when creating shared libraries. And from experience, I know that there's a lot.
Starting point is 00:17:41 So the IPC idea, you know, it's not uniquely mine or anything. It's something that I had encountered one or two years previously when looking at Clang. So the Google developers, I think, designed an IPC mechanism for Clang such that IDs would be able to say, please, Clang, give me a code completion in this file on that line on that column. And there's an old design document that you can still find from Chandler Carruth about that,
Starting point is 00:18:17 but it became the YouCompleteMe system, the YCMD, which is used for code completion in Vim, and there's plugins for Sublime and everything else as well. I'm not sure exactly what kind of protocol it uses to do that communication, but we chose to use JSON messages, such that a client would put together some JSON packet to request code completion information or state information and get a JSON response. So do you do that over a locally opened TCP socket, or how do you actually do that? In theory, we would use something like a local socket.
Starting point is 00:19:03 I guess that's a Unix socket on Linux and Mac, and it would be a named pipe on Windows. That's in theory. Currently right now what we do is we communicate on stdin and stdout. So we start the process, and then we can just paste things into the terminal. And that's good for the moment because we're trying to define what the protocol is
Starting point is 00:19:28 and what the schema of the messages is and how to make everything work. But the system that we're using for that IPC communication is libuv, which is also used by node.js. And it has APIs for local socket communication and TCP communication. And, of course, stood in, stood out. So in the production quality
Starting point is 00:19:59 version of this architecture, we'll be using local sockets. And maybe we'll keep the development version around writing on studite. Yeah, I could see some interesting use cases for abusing that for piping commands to CMake and getting the results back for command line queries. Yeah, yeah. One of the early things that I wrote was a simple Python client and Python client library, which can communicate with the server like that. It puts together the JSON and sends it to the other process.
Starting point is 00:20:35 I can think of lots of ways to abuse a system like this or to use it in ways that I haven't thought of yet. You've mentioned working with the Qt communities and a couple other IDEs. Are any of them actually using this right now? Or is it not considered production ready yet? It's not considered production ready yet, no. So I made a plugin for the KDE Kate editor,
Starting point is 00:21:00 which I put online on GitHub. And I think that's about it. I think that's the only one that I've published. But the work is ongoing in the master branch of CMake. So I just put the daemon in my own clone on GitHub. But the server, sorry, the CMake server, I keep on forgetting that we decided that it's, we're calling it the CMake server now,
Starting point is 00:21:26 not the CMake daemon, because it's not really a daemon. Okay. In how we're going forward, we're calling it a server. But I'm not doing the major development work on it anymore, instead Tobias Hunger is doing it. Oh, okay.
Starting point is 00:21:42 So he took a subset of the IPC commands that I implemented and has started trying to integrate those into CMake in a production quality way. So I took several liberties, you might say, to make it possible to demo what I demoed. I didn't write unit tests for anything and everything like that, but I did write a unit testing system, which uses the Python library that I wrote,
Starting point is 00:22:13 but I didn't try to write production quality unit tests in that. But Tobias has picked up the flag and is trying to get commands into CMake Master which give the basic information that IDEs need which is just where are the targets what files belong to them how do I compile them and how do I link them together
Starting point is 00:22:39 the intermediate aim isn't to get the code completion and debugging facilities into CMake master. That'll come later. But I think we need to reach a point where the server is actually integrated in the master branch and is used and useful. Tobias has, I think, started trying to see how he can consume it from Qt Creator. And yeah, that would be a success, I think, at that point. Yeah, if it was being used from Qt Creator, I would definitely consider that a success.
Starting point is 00:23:19 You mentioned earlier that you had some interaction with people from CLion and the other IDEs, and I believe CLion just announced that they now have some support for actually like code completion and stuff for CMake. Yeah, I read the... Okay. Go ahead. They're not using your stuff at this point.
Starting point is 00:23:37 Is that correct? That's correct, right? That's correct, yeah. Okay. Well, I certainly assume so. I don't know the details of how this stuff works, but I did read the blog about the smart CMake support, and, yeah, it looks certainly interesting.
Starting point is 00:23:55 A lot of IDEs can already do code completion for CMake, but the problem is as soon as there's a new CMake release, they get out of date because they hard code lists of commands, you know. And so if there's a new command or if there's a new parameter or keyword to a command, then that's initially not highlighted correctly or code completed correctly with, you know, the old version of an IDE like Cougar Creator or CLion. Maybe they're parsing the help output. I mean, there's other ways that you could generate that list of commands. And that would be a smart thing to do, I think.
Starting point is 00:24:35 But, you know, that will only get you the list of commands. It won't get you the list of keywords and where in the command they are supposed to be used. For example, you know, the shared or static keyword when you call add library. There's no easy way to parse that out of what CMake gives you currently. Okay.
Starting point is 00:24:55 I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors. IncrediBuild dramatically reduces compilation and development times for both small and big companies like EA, Microsoft Game Studios, and NVIDIA. IncrediBuild's unique process virtualization technology will transform your computer network into a virtual supercomputer and let each workstation use hundreds of idle cores across the network. Use IncrediBuild to accelerate more than just C++ compilations, speed up your unit tests, run more development cycles, and scale your development to the cloud to unleash unreal speeds. Join more than 100,000 users to save hundreds of monthly developer hours
Starting point is 00:25:34 using existing hardware. Download your free developer version at IncrediBuild.com slash CPP offer, or just click on the link in our link section. So you said Tobias Hunger has been kind of taking the lead on the project now. Do you have any idea when this would become more production ready and mainstream? It's really hard to tell because there's other work which is related, which is being done first. So before actually trying to integrate
Starting point is 00:26:07 the server, we changed track a little bit to make it possible for IDEs to get information from CMake, like which generators do you support, do you support server mode at all? And things like that. So currently, if you run CMake minus minus help, at the end of it, there's a list of generators and what IDs are doing are parsing that output and then giving the user an option which generator do you want to use. We want it to be easier than parsing output like that, which is unstructured. So we're adding a new command to CMake which outputs that stuff as JSON so that IDEs will be able to invoke that, get a list of generators, know immediately whether server mode works at all,
Starting point is 00:27:00 because obviously there will be a time when old versions of CMaker in the wild and new versions and IDEs will need to know whether they can expect the server mode to work. So that branch is progressing. It's almost in master, I think. I think that will be in master next week and then after that we return to the
Starting point is 00:27:28 getting the CMake server stuff integrated that will take I think a few more weeks I would guess maybe 5 or 6 weeks maybe even less one of the complications is the dependency on libuv actually because
Starting point is 00:27:48 dependencies tend to be checked into CMake in the repo, in the source repo, because CMake doesn't want to have any external dependencies itself and there's just a process to getting a dependency like that into the source repo you know, a special branch and a script for updating it and everything like that. So once that's done, I think that's all of the non-direct work finished and the patch can just be reviewed and integrated.
Starting point is 00:28:20 You touched on something I've been curious about, because I know CMake is very portable and runs on lots of different platforms. So I imagine any dependency or anything like this you choose has to be also equally portable. But I also imagine it probably limits you on what C++ features you can use. What's that process like? Well, certainly we're limited in what C++ features we can use. It was, I think, last year or two years ago maybe, I removed the ability to build CMake with Borland and MSVC 6 and some other ancient Unix things.
Starting point is 00:29:00 And before I did that, we weren't able to use std string append, the function. It wasn't there in some of those toolchains. And vector at, std vector at. Wow. And some other things like this. There were workarounds for various problems that would be hit with those toolchains. But it's more modern now. The baseline is MSVC 7.1, I think,
Starting point is 00:29:29 for building CMake and some ancient GCC, like maybe 3.3 is still tested or something like that. So yeah, it's quite limited in what C++ features we can use. However, with the server mode, we might be able to use more advanced features, especially if a dependency needs it, because we can conditionally compile that into the CMake binary. So if you're using a platform such as some old HPUX or something,
Starting point is 00:30:07 you probably just want CMake to work and you're not really going to be worried about CMake server and IDE integration. Whereas if you're using a modern desktop, you're going to build CMake with a modern compiler anyway. Right. And so we might conditionally compile the server in that case. I don't know. That hasn't really been decided yet. I wrote all the prototype code with C++11 and lambdas and auto and everything like that.
Starting point is 00:30:35 So we'll have to just see if that passes review. That makes a lot of sense. I mean, the only time I personally have to deal with running CMake on an older system is like Red Hat Enterprise Linux old versions tends to come up in cluster environments. Yeah. And then we have to use CMake on the old system to get a build of our project that can run on the old system. Yeah.
Starting point is 00:31:00 And we probably wouldn't need the server in that case. Probably not. It's not going to be the first problem you hit, I would imagine. Although it could be. You might hit some problem in the build on that platform, which you don't hit on your desktop. Right. You need to rebuild that somehow.
Starting point is 00:31:22 Right. Good point. As someone who's been so involved in the CMake development for several years, do you have any best practices or recommendations you'd like to share for CMake in general? Well, I guess the primary one would be create CMake config files. CMake has a lot of features for exporting build targets to make it really easy to create such config files and to use them. And try to catch up on the documentation that ships with CMake, such as the CMake build system manual and the CMake packages manual.
Starting point is 00:32:08 Apart from that, there's some scant documentation on my blog and various other places that I've written about how to write modern CMake. And in general, writing modern CMake means using the target include directories command instead of the include directories command, which is the one still a lot of people use. The difference is that the include directories command affects all targets in a directory and also in following directories, whereas the target include directories command is specific to one target. And I think that results in more maintainable code.
Starting point is 00:32:47 So if you use the target include directories command and you set compiler flags or whatever in one of those includes, it does not affect upstream things? Only whichever direction that might... That's actually something you can control. So again, the include directories command doesn't have as many features, and it affects all targets that it can find are in scope, whereas the target include directories command has extra options that you can pass to it.
Starting point is 00:33:19 So you can say, say if you write a library and you use Qt only in the C++ files, but you don't inherit from any Qt classes, or you generally don't include Qt headers within your own headers, that means that Qt has a private dependency in that case. You're completely hiding it. Your consumers don't need to pass Qt include directories on their command line. Whereas if you do inherit from Qt classes and hash include Qt header files within your own header files, that means that your consumers also need to pass the Qt include directories on their command line. And it sounds
Starting point is 00:34:03 very complicated, but it comes down to just a single keyword within those commands where you'd say something is private or something is public or something is interface only, which means consumers need it, but I don't. Interesting. Okay. And so there's target commands like target include directories, target compile definitions, target compile options, and target compile features. So I tried to make it possible to write a build system without using the old commands at all.
Starting point is 00:34:35 I will personally have to look into that. Right. It's, yeah, maybe the best. I think if you Google for modern CMake, you find a blog post that I wrote at my previous employer. And that contains a lot of information about that stuff. Okay. Are there any other kind of in the work features on CMake you could tell us a little bit about? Not really. Actually, I stopped doing CMake feature development about two years ago. And then
Starting point is 00:35:10 following that, I did a lot of work on refactoring the internals of CMake to improve the code base generally, and to make it possible to implement the CMake server. So I spent about a year doing that where I didn't write any new features, I just did refactoring. And my aim was to make it possible to do lots of things, not just the Cmake server. I also wanted to make it possible to have multiple compilers in use at the same time. So a topic that comes up is that if you're doing cross-compiling, sometimes you need to compile some binary and then use that to generate new source code, which you then cross-compile. You would have to do that if you would build Qt with CMake, for example.
Starting point is 00:35:58 That would be one of the difficulties or the roadblocks in Qt using CMake. So, yeah, that's still a feature that I want to implement someday. and roadblocks in Qt and CMake. So, yeah, that's still a feature that I want to implement someday. All that refactoring would have to be completed in order to make that possible. But I think it would be a nice feature, but maybe not something that everybody would use um beyond that i think there was two other features that that makes possible uh but i don't remember them right now but uh yeah so the the thing is i'm not doing so much feature development in cmake myself anymore
Starting point is 00:36:42 but just trying to give guidance and feedback where I can to other people that come along. So there's a very long tale of CMake contributors. There's very few which do a lot of commits and a lot of
Starting point is 00:37:00 features, and there's a huge amount of people who do just one or two patches just as a flyby because they want to improve some find module like how they find zlib or libpng or something that's built into cmate um and there's also lots of people in the middle. And so, you know, nowadays I just try to provide guidance wherever I can on some kind of direction if some refactoring has to be done. I give my knowledge on that. I might put you on the spot here real quick and ask about a personal issue that I've been
Starting point is 00:37:43 having with CMake recently um the package maker support for mac os still exists but package maker has been deprecated by apple and there's what the new package build command i believe that there's been some talk on the cmake forums about supporting but do you know if that work is progressing? I haven't seen anything about it in a while, but I do remember a thread about it on the mailing list. Okay.
Starting point is 00:38:13 I don't do a lot of development on Mac or have the need for those Mac package maker things. So I didn't follow it so closely. Okay. Well, then maybe back to topic
Starting point is 00:38:30 from what we were interviewing you for. In your video from January, your CMake browser tool looked actually like something that I could potentially use today to help debug CMake issues that I have. Do you have plans to keep maintaining that tool? My plan with that tool is to
Starting point is 00:38:51 just demonstrate what's possible with the server and use it to verify that the server works. Beyond that, my aim is to get those features into tools that you're already using. So I don't know what tools you're using, but nowadays people have many, many tools to choose from. Like CLion, Qt Creator, Visual Studio, Sublime, Xcode. want to make people leave those environments just to do some CMake coding or maintenance, or even reading
Starting point is 00:39:29 of CMake code. However, maybe at some point it will become something useful that can be maintained long-term. But I don't have plans so long-term. The only goals I have in the foreseeable future, at least,
Starting point is 00:39:48 is just to get the CMake server integrated at all and obviously used by IDE tools. After that, what I would be interested in as the next priority would be what else can we do with this. So is there profiling tools that we can make to make our builds faster or even just to analyze where time is being spent when cmake is being run there are very large build systems out there which you know take three or six minutes for the cmake run alone And there may be reasons for that which don't need to exist.
Starting point is 00:40:27 And so it would be nice to have some tooling for that. In the interim, until other IDE developers have been able to integrate the CMake server, the project that you have worked on, is that available on GitHub if someone wants to check it out and build it themselves to try out? No, I haven't made it available yet. Mostly just because I wanted to keep the focus on the server implementation itself. Okay. And keep the focus on getting the features into existing user tools.
Starting point is 00:40:59 Yeah. but maybe at some point if we reach the point that the server is already existing and used by other tools then I can just make that available I guess another question I have then is you said Tobias is currently taking control of the project and currently working on the Qt Creator integration are you looking for other developers to kind of step up and work on other like integrating it into Sublime Text, for example? Absolutely.
Starting point is 00:41:31 And I'm looking for other people to step up and help us upstream and CMake. Tell us, first of all, what you need from such a system if you're an IDE developer or help us actually do it. There's still lots of things that need to be done within the CMake code base to make those advanced features work. For example, the code completion. Currently, CMake has a parser for its own language, but it's not suited to generating JSON messages on a network um so we would have to refactor that part of it and that's that's you know a self-contained packaged um task which can be
Starting point is 00:42:13 which i can describe and which i can help somebody through um and it's you know something that anybody can help with if they have the time and interest and existing C++ knowledge. So it's, you know, I think most of the tasks are not things for somebody who's a C++ beginner. But I think anybody with some C++ experience would be able to do these tasks. Okay. And I've remembered the other thing that all that refactoring that I mentioned before could make possible, and that's making it possible to replace the CMake language someday.
Starting point is 00:42:55 The refactoring that I was doing was just trying to disentangle a lot of things. I don't know if you're aware, but something that appears in the CMake GUI interface is you have a configure button and a generate button. And so those are two separate steps that CMake has when you run it. And in the code, all of the classes and processes for the generate step and the
Starting point is 00:43:26 configure step were all tangled between each other so all of the objects for doing the generate step were created before anything that does the generate step sorry does the configure step and then everything
Starting point is 00:43:42 only happens later so there was a lot of stuff to clean up there. But once you have a proper separation, that's what makes it possible to use multiple cross compilers and a host compiler. But it also allows you to isolate the parts of the CMake codebase which care about the language. And if you have those isolated well enough, then you can replace it with some other module, which uses a different language.
Starting point is 00:44:08 That could be very interesting. Yeah, that's a very much more long-term thinking. Sure. And, you know, would need some dedicated effort. But that's, again, the kind of thing that I would like to attract people to the Cmake mailing list to do to help out with okay Jason do you have any other questions I believe that was all I had okay well it's been great having you on the show today Stephen uh we'll put links to uh to your
Starting point is 00:44:39 blog post with the video which everyone really should watch to see uh what you know could potentially be possible with various IDE integrations in the future. I recommend watching it at double speed. And where can people find you online? I'm on Twitter, Steve I-R-E or Steve Error
Starting point is 00:44:59 as I pronounce it. And my blog and anywhere else that you find Steve Aira that's probably me on Reddit for example and elsewhere okay well it's been great having you on the show today thanks for having me
Starting point is 00:45:15 thanks for joining us quick programming note the show is going to be taking the next two weeks off for vacation but we will be back in early September thanks so much for listening as we chat about C++. I'd love to hear what you think of the podcast. Please let me know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, I'd love to
Starting point is 00:45:35 hear that also. You can email all your thoughts to feedback at cppcast.com. I'd also appreciate if you can follow CppCast on Twitter and like CppCast on Facebook. And of course, you can find all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is provided by podcastthemes.com.
