CppCast - Vcpkg
Episode Date: June 7, 2018
Rob and Jason are joined by Robert Schumacher from Microsoft to discuss the vcpkg package manager and more.
Robert Schumacher is a developer on the Microsoft Visual C++ Libraries team and the lead developer for vcpkg. He has previously worked on the MSVC implementation of the Modules TS and is the current maintainer of cpprestsdk. Besides work, he occasionally indulges in functional programming and arguments about whether inheritance is fundamentally flawed.
News: Teach Yourself C++: Where to Start; Macro Expansions in Quick Info Tooltips; Call for a more diverse program at Meeting C++ 2018; Conan 1.4 released
Links: Robert Schumacher's GitHub; vcpkg; vcpkg Docs
Sponsors: Backtrace; Patreon: CppCast Patreon
Hosts: @robwirving @lefticus
Transcript
Episode 153 of CppCast with guest Robert Schumacher, recorded June 6, 2018.
In this episode, we discuss resources for learning C++ and macro expansions in Visual Studio.
Then we talk to Robert Schumacher from Microsoft.
Robert talks to us about vcpkg and the Rapperswil ISO C++ meeting.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
I'm all right, Rob. How are you doing?
I'm doing okay. Don't really have too much news on my end. How about you?
Well, I've got another round of traveling coming up, but this should not affect our podcast, right?
As far as we know currently, yeah. We've got a couple interviews planned over the next few weeks, so we should be good.
Okay. Yeah. So I guess that's not really any news for our listeners.
Okay, well, at the top of our episode, I'd like to read a piece of feedback. This week I got a tweet from Tony Lewis, and he was actually writing to cpp.chat, saying that he's really enjoying the addition of cpp.chat to his podcast feed.
And it really does provide something quite different to the equally great CppCast.
And yeah, I completely agree.
It's great to have another podcast out there.
I think it is, you know, different and unique compared to what we do.
So it's, you know, more the merrier.
We're glad that Jon and Phil are doing that.
So are they managing to maintain a regular release schedule?
I think so.
I think this was like episode 30 that he was replying to,
and I think the two of us, was that like episode 24 or 26?
Oh, well, then that would pretty much imply they are, yeah.
Yeah, good stuff.
Well, we'd love to hear your thoughts about the show as well.
You can always reach out to us on Facebook, Twitter,
or email us at feedback at cppcast.com.
And don't forget to leave us a review on iTunes.
Joining us today is Robert Schumacher.
Robert is a developer on the Microsoft Visual C++ Libraries team
and the lead developer for vcpkg.
He has previously worked on the MSVC implementation of the modules TS and is the current maintainer of CPP REST SDK. Besides work, he occasionally
indulges in functional programming and arguments about whether inheritance is fundamentally flawed.
Robert, welcome to the show. Hi, guys. It's great to be here.
Which side of the argument on whether or not inheritance is fundamentally flawed do you come
down on? Generally, I'm arguing with my office mate who is from a Java background,
so I tend to be on the side that inheritance is a blight upon the world, if you will.
Wouldn't we all be better if we just composed our structs in the true C way?
Well, yeah, I mean, I guess we could make arguments about Java's version of inheritance being fundamentally flawed.
Yeah, but it's great fun. It's great fun. We do a back and forth, and it's pretty good.
Yeah, that sounds like good office banter, just to give each other a hard time.
Yeah, definitely. Well, Robert, we have a couple news articles to discuss. Feel free to comment on any of these, and then we'll start talking more about vcpkg and some other work you do, okay?
Awesome. Let's go into it.
Okay.
So this first one is a blog post on Medium,
and this is actually titled
Teach Yourself C++, Where to Start.
And this is from a programmer
who is kind of new to programming himself, and just decided to teach himself C++.
And he kind of gave an overview of why he decided he wanted to learn C++ and some of the resources he used, including YouTube talks of CS courses and recorded conference talks and some of the books he read.
And I thought it was a pretty good overview, and could be useful to someone who wants to learn C++.
Yeah, I agree. Although, you know, thinking about this article, I honestly cannot remember how I learned C++ initially, because I know I didn't own any books.
You didn't?
And I learned it before college. Yeah, well, when I first started playing with it, I mean, it was very rudimentary,
just object-oriented programming, basically,
to set it apart from C.
But I didn't have any books,
and I don't remember there being online resources available.
I have absolutely no idea how I was aware
of what the syntax was supposed to be.
No memory of that.
It's kind of funny.
Yeah.
How about you, Robert?
Yeah, I've had a similar experience. I can't remember how I learned C++ initially. It's been so long. But when I was looking through the article, I noticed that a book that I would recommend was left off, which is A Tour of C++. And I understand that that is often targeted at people coming from other languages, or people who are looking to update on C++, but it's really comprehensive. It's really concise, I would say. I don't want to say dense, but it's concise. It tells you what you need to know without necessarily too much fluff around it.
Yeah, I also definitely agree, and I believe there's a new version of A Tour of C++ that's supposed to be coming out soon.
Yeah. I'm just going to mention he did recommend one of Bjarne's other books, The C++ Programming Language, but A Tour of C++ is definitely a good one, especially for beginners.
Yeah. Yeah.
Okay, next one. This one comes from the Visual C++ blog: Macro Expansions in Quick Info Tooltips.
And I thought this looked pretty handy.
Basically, if you hover over a macro being used in the code,
it'll show you what the macro definition is, kind of in line,
but also show you, based on whatever variables you're putting into the macro, what the actual calculation will be. Like, in this case, it's the area of a cylinder, and it actually shows you the numbers you're inputting and what the area is calculating out to.
Yeah. You know one thing that's better than having macro expansion with your mouse-over?
What would that be?
Not using macros.
Well, there was the other feature, I think you're probably going to mention it,
that we talked about a few weeks ago, which is the replace macro with constexpr, right?
Mm-hmm.
Yeah.
Oh, yeah, that's right.
That's supposed to be in the latest version of Visual Studio, right?
Mm-hmm.
We've been working pretty hard on that.
I believe Cody Miller is our developer who's been working on that.
He's on the front-end team.
And he's a smart guy, and he's been doing some really cool stuff with that.
And there's a lot of really interesting edge cases that you get into when you start picking that apart about things like,
well, what if you have an #ifdef around the definition? So then do you just replace those with constexprs, or do you try to lift it out, or is there some other way to handle that? It's interesting.
So that feature works not just with regular old #defines. I mean, it works with function-style macros also, is that correct?
I don't know what eventually got shipped in the product.
I know that certainly we discussed the full gamut of all of the
possibilities and I'm sure that they shipped the part of it that they
were confident they could do without kind of compromising your code in a
certain sense.
Right, right. Because in Visual Studio, we tend to be conservative on these. So you'll know if you've ever used the rename, right? If we have any doubts at all about what should or shouldn't be renamed, we ask you. We don't just aggressively go ahead and do it. And so I would assume that we did something similar.
Yeah, I could imagine there could be some really tricky cases where, say, you're intentionally duplicating the code that was part of the macro expansion. I mean, this is what makes things like MIN and MAX macros so notoriously difficult to actually get 100% correct, right? All of the things that you have to do to guard your user from themselves.
Yeah. Like, do you want that ending up in your constexpr function? Probably not.
Okay, and then the next one we have is a call for a more diverse program at Meeting C++ 2018. Jens Weller obviously is hosting Meeting C++ again, and he decided to delay the call for speakers deadline another week, so that if you're currently at Rapperswil, which, Robert, I believe you're there right now, and you want to go to Meeting C++ this year, you've got an extra week. So you can, you know, finish out the ISO C++ meeting, and then you'll have a week before the speaker deadline.
That's great.
There's an interesting detail in here that was somehow completely lost on me last year. Jens says Meeting C++ will dedicate a track to... oh, sorry, I'm starting backward: last year's track for new speakers was a great success. I did not realize that one of the tracks was dedicated to first-time speakers.
Yeah.
I thought that was a really good idea too.
And I'm glad he's going to continue with it again this year.
Yeah. It might be difficult for some of the really small conferences to do that kind of thing. I mean, we've talked about Pacific++; they're a relatively small conference. And C++ on Sea, Phil's conference, they have their call for papers out right now. But yeah, certainly I feel like any of the bigger ones could totally do something, at least dedicate a portion of the conference to new speakers.
Yeah, definitely.
It seems like it would make sense for Meeting C++, and maybe CppCon could do something like that.
Yeah, I'd love to see that at CppCon. That'd be really interesting. I mean, CppCon's huge. And I guess if Jon is listening, he can take that as a vote: three of us think that would be interesting. Of course, they already have had their call for submissions, and they're reviewing all of those submissions now, but I'm sure they get enough first-time speaker submissions to warrant maybe kind of highlighting that content in some way.
Yeah, I know one of my previous students has submitted something, and it'll be his first submission to a conference.
Yeah, I think he ended up calling for more reviewers, actually, because of the number of submissions he got, so there's probably enough in there.
Yeah.
Okay, and then the last thing we wanted to talk about was the Conan 1.4 update. And it looks like the big announcement here is SCM integration, new CMake generators, and also better Visual Studio management.
Oh, I missed the better Visual Studio... oh, look, it's right there. That's good. But yeah, it looks like it's going to definitely clean up their integration with CMake also.
Yeah. And, you know, we just keep talking about package managers, so I thought we may as well cover the announcement of a new release of Conan.
Yeah. And it looks... which... go ahead.
Oh, I was going to say this release came out
like two weeks after I recorded my episode about Conan,
and it might have affected my experience
with how the CMake integration went.
Oh, okay.
So you think you may have had a better experience
with this new feature?
Well, the CMake part of it was pretty straightforward.
It's just like three lines or something.
But it might have been, you know, it would have at least changed my sample code.
Gotcha, okay.
Okay, well, since we're talking about package managers anyway,
Robert, do you want to start off by giving us an overview of vcpkg?
Right. So vcpkg is a library dependency manager. So what that means is that we're primarily focused on dealing with kind of the other people's code that goes into your code. So we're not a build system. We don't want to replace your build system. Build systems are thoroughly interesting, but, you know, everyone currently is undecided. I would say there's not a general consensus about what actually the best one is, and I think there's some really interesting developments happening in that area. So, to cap that off: we're not that. We deal with bringing other people's code into your project.
We primarily build from source. And then if you want to do binary caching, then we integrate with another system to do that binary caching.
So you can use any binary caching mechanism that you want. You can use NuGet. You can use zip files. You can just throw the binaries out on a network share, whatever you want to do with that. We have specific integration mechanisms for MSBuild, which is Visual Studio's build system,
and CMake.
But we explicitly lay out the files in such a simple way that for any other build system
that wants to use them, it is trivial.
It's just a very small set of command line flags and you're ready to go.
So yeah, I think that's pretty much the short summary. And yeah, Jason, you took a look at it as well, I understand.
Yeah, I just played with vcpkg a little bit in one of my most recent episodes of C++ Weekly, but I kind of wanted to unpack what you were talking about with binary caching. So when you go to install a package with vcpkg, it's going to build it from source. What happens to the built thing?
Right. So it gets built on your machine with your tools. So your particular tool chain, and then those binaries are made available for
your build system. And this can come in any of the ways that I described before. And let's see,
if you are using MSBuild or CMake, you can, or any other build system, if you add the right flags,
you can pull binaries directly from our system. However, there are a lot of cases where you don't necessarily want to build these binaries on every machine.
So, you know, if you're at a company, you very often will have a small set of people dedicated to dealing with the outside world who need to bring in the tested, understood, particular versions of things that you want to use that are built with the tools that you want them to be built against and in the ways you want them to be built. And then you want to package
them up for consumption by everyone else in the company in kind of like a shrink-wrapped form.
And that is how vcpkg approaches the problem: a particular vcpkg instance can be used to build binaries for the machine. And if you're just an enthusiast at home, this is what you do on your home machine: you would build those libraries using your tools. But in a larger organization, you would build the binaries once, and then you would package them up in something like NuGet or something like a zip. And we do this automatically for you. All you have to do is say, hey, I want a NuGet, and we'll give you a NuGet. But then you can consume that one on your CI machines, or you can deploy it to all of your developers, or you could upload that zip of all of your dependency binaries to GitHub. And then anyone online who wants to replicate exactly how you build your thing, those are the exact binaries that you used to build it.
So that's how we layer the problem so that, you know, as part of any good engineering strategy,
right, you want to try to break the problem into as small of pieces as possible so that you can do a really good job
at solving that piece.
And so that's how we layer the problem.
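The build-once-then-shrink-wrap workflow described above maps onto vcpkg's `export` command. A rough sketch (the library names are just examples, and this assumes a bootstrapped vcpkg clone):

```sh
# Build the dependencies once, from source, with your toolchain.
vcpkg install zlib sdl2

# Shrink-wrap the built binaries for CI machines or other developers.
# --zip and --nuget produce the formats mentioned above; --raw gives a plain folder.
vcpkg export zlib sdl2 --zip
vcpkg export zlib sdl2 --nuget
```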
So if I install the package,
it's available to all of the build systems
and tools and whatever
that I'm building on my system.
It's just always going to be available there.
It depends on a particular strategy. So the way that you probably used it in your episode, and I believe I recall that you did so, is we do have a user-wide integration mode. And this user-wide integration mode is the way that Visual Studio will work by default. So if you use our user-wide integration, then you just write, you know, vcpkg install, say, SDL2. And then you'll go to Visual Studio, you'll do File, New Project, you'll grab some sample code online, you'll paste it into Visual Studio, and it'll build and it'll run.
Okay.
That's it.
Just automatically knows that there's this cache of things that have been done.
Right, right.
And that linkage happened because of that user-wide integration command.
However, that's not the only way to use it,
because that by itself isn't sufficient,
especially when you start getting into more complex scenarios.
You know, I've got multiple projects.
They have different requirements
on exactly what binaries they want to use.
Maybe I have one set of dependencies
that I want to use for my company projects,
one set of dependencies that I want to use
for my private projects.
And in this way, you can link
the individual vcpkg instances.
So another way that I would say it is that a particular vcpkg instance, so that's a git clone with all the binaries inside of it, looks like a system-wide manager to your project, but it can act on any scope that you need. So it can be on a per-project basis if you want.
You can just make a new one for every project
or it can be on a user-wide basis
or it could even be on a system-wide basis if you want.
Okay.
And that choice is made by how you particularly integrate.
So in CMake, we ask you to supply a toolchain file from us.
So we provide a toolchain file that you specify when you do your CMake configure.
And that toolchain file lives inside the instance.
So it's very clear when you use a toolchain file, that's the instance that you're getting binaries from.
Okay.
And so the way you can use multiple instances is trivial.
You just use the toolchain file from the instance you want to get the binaries from, if that makes some amount of conceptual sense.
All right. Interesting.
I think it does, yeah.
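As a concrete sketch of that per-instance selection (the clone paths here are hypothetical), choosing an instance is just choosing whose toolchain file you pass at configure time:

```sh
# Two independent vcpkg instances (git clones), each with its own binaries.
# Point CMake at whichever instance this project should draw from.
cmake -B build -S . \
  -DCMAKE_TOOLCHAIN_FILE=$HOME/work-vcpkg/scripts/buildsystems/vcpkg.cmake

# ...or, for a personal project, the other clone:
cmake -B build -S . \
  -DCMAKE_TOOLCHAIN_FILE=$HOME/personal-vcpkg/scripts/buildsystems/vcpkg.cmake
```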
To back up a little bit,
Jason has gone through this process for his
C++ Weekly video.
Just for the record, I did not realize we were
having you on when I recorded all those
episodes. No, no, you did
a great job.
If you're just getting started, though, how easy is it to get started with using vcpkg as a package manager?
So on GitHub, we've got quick start instructions. It's all on one page; you don't even have to click any links. You just git clone, run a batch file, run one command to do the user-wide integration, because that's the easiest way to do things, and then install whatever libraries you want. And you're done. That's it. You can now, if you're on Windows, you can open up Visual Studio and you can just immediately start pound-including and using the code. And it just works out of the box. You don't need to worry about the link line. We deal with that. So it is pretty magic. I'm always pleased to introduce vcpkg to users who haven't used it before, because they go through this experience and they're just like, oh my God, this can't be C++. This is way too easy. And it's a wonderful thing.
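The quick start described above boils down to a few commands. This is a sketch of the Windows flow, and SDL2 is just an example library:

```sh
git clone https://github.com/Microsoft/vcpkg
cd vcpkg
.\bootstrap-vcpkg.bat        # the batch file mentioned above (Windows)
.\vcpkg integrate install    # one-time user-wide integration for Visual Studio
.\vcpkg install sdl2         # build the library you want from source
```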
Now, if you're using CMake, we took a slightly different approach to CMake because CMake has a well-defined mechanism that you should use
to find your dependencies. And that's the find package mechanism. And that's an official CMake
thing. Like this is the blessed way of dealing with packages. You should not be using any package
manager-specific thing. So we integrate directly into that, and that's via the toolchain file. So the toolchain file makes sure that the binaries from vcpkg can be found by the find_package commands that you execute in your CMake. And this means that your CMake file is 100% vcpkg agnostic. The same build system that you will use to build against apt, the same build system you'd use against Homebrew, the same build system you would use against any other system, that's the one that you use for vcpkg. You don't have to have, like, if-this-then-that. And we think that this is really important in a world where there are multiple package managers, because that's not going to go away anytime soon. If you want to be in Homebrew, if you want to be in apt, if you want to write software for these wonderful ecosystems, you have to be able to accept dependencies from them. This is like a core requirement of the Debian packaging guidelines. You cannot vendor your own dependencies. You need to use ours, for security, for stability, for maintainability. So you must use a system that will be agnostic. And that's why we've designed our integration in this way: it is agnostic with the way you write your CMake file. So... go ahead.
No, no, that's all I had.
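In other words, the CMakeLists.txt itself never mentions vcpkg. A minimal sketch (the project, target, and library names are just examples):

```cmake
cmake_minimum_required(VERSION 3.10)
project(demo CXX)

# Standard CMake: this same find_package works whether the dependency
# comes from vcpkg, apt, Homebrew, or anywhere else.
find_package(ZLIB REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE ZLIB::ZLIB)
```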
You said, you know, that you vcpkg install whatever package name, and then it's basically magically there for you. But is it the release build, debug build, 32-bit, 64-bit?
And on Windows, if I want a Clang build, a CL build, and a MinGW build,
what was just built for me, and what happens in Visual Studio
if I go and change my target to debug or to 32-bit or whatever?
Right.
So we currently always build release and debug because we don't view debuggability as optional.
Okay.
So we always build release and debug, and those are chosen for you based on your release and debug setting.
Now, for architectures and for CRT settings,
things are slightly more interesting.
In a particular vcpkg instance, the binary graph is split for every target. And we call targets "triplets," after the well-known concept, if you're familiar with it, the concept of a target triplet.
So when you install a library with vcpkg and you write vcpkg install zlib, for example, you'll notice that it immediately comes up and says zlib:x86-windows. So x86-windows is the target triplet. So that's the graph, the universe that it's installing into. And it's building this package in a way that will be compatible with that universe, which in the case of x86-windows is MSVC v140 or v141, with a dynamically linked CRT, where most of your dependencies are expected to be DLLs.
And all of that information is specified underneath the name x86-windows.
And specifically, it is encoded in a file called x86-windows.cmake in the triplets folder. And you
can create new ones of those that have whatever settings you want. You can change any of these
settings. You can say, I want to use a static CRT. I want to static link the libraries. I want to use
a different version of the compiler. I want to use a completely different compiler,
all of those things can be specified inside that triplet file. So the triplet file is meant to
denote kind of a completely independent graph of packages. Because if you write a piece of software, you know, you're going to target maybe Linux and x86 Windows and x64 Windows and some other system. And each one of these universes, each one of these kind of imaginary future processes, right, because eventually you're going to run this program and it's going to live in a process somewhere. The idea is that each one of these processes is kind of its own universe. And so you're targeting a particular universe with a given triplet. And so that's the way we conceptualize it.
So to answer your concrete questions: when you wrote vcpkg install zlib, you got the MSVC, dynamic CRT, x86 build. But if you wanted a different one, you just write colon and the other triplet that you want. So x64-windows, or x86-windows-static, or maybe arm-uwp, or whatever it is that you want to build.
So I, you know, hypothetically just built and installed the default thing, and then I go into my project and I choose, assuming this is even still an option, it's been a while since I've even looked, I choose that I don't want the Unicode CRT, I want the ANSI CRT or whatever, which is going to change which CRT I link to. Is that still an option?
That is not...
I do not believe we have a non-Unicode CRT, but I could be mistaken.
I know it used to be a thing.
Yes, it used to be a thing, but I don't think that's a thing.
So in my hypothetical universe, I've done this, and then I go to build my project, I'll
just get a package not found kind of error, or is that what would happen?
In an ideal world, that is what would happen.
Now, in the particular case of Visual Studio, in the particular case of MSBuild, in the
particular case of the CRT,
we unfortunately found that we couldn't automatically switch based on the CRT, which is an enormous shame.
So there are still multiple CRTs available then, right?
Yes. Static and dynamic and release and debug.
Okay.
So we deal with release/debug, but we don't deal with static/dynamic. And that does matter. So if you link against the static CRT, /MT, or the dynamic CRT, /MD,
those two are link incompatible. Your entire process really needs to be using the same
one. And if you're using the static CRT, you better be static linking the world because
you're going to have problems otherwise. Right. So we unfortunately were unable to automatically switch based on that setting.
And in that setting, you'll get a linker error saying,
hey, you're trying to link a library that uses the dynamic CRT
and you're using the static CRT and those aren't okay.
So you will get an error in that case for the linker.
And the way that you resolve that is you just have to manually override which vcpkg triplet you're using. Now, in the case of targeting x86 or x64, though, we will
automatically choose the right triplet. So when you do the platform dropdown and you say x64,
we will automatically switch over and start using x64-windows to get packages from,
in which case you get a package not found error like you expected.
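Putting the triplet mechanics together (a sketch; zlib is just an example library):

```sh
# Default triplet on Windows (x86, dynamic CRT, dynamic libraries):
vcpkg install zlib              # same as vcpkg install zlib:x86-windows

# Ask for a different universe by naming its triplet explicitly:
vcpkg install zlib:x64-windows-static
```

A triplet file itself is just a small CMake script. One like x64-windows-static.cmake in the triplets folder contains settings along these lines:

```cmake
set(VCPKG_TARGET_ARCHITECTURE x64)
set(VCPKG_CRT_LINKAGE static)      # /MT instead of /MD
set(VCPKG_LIBRARY_LINKAGE static)  # static .lib files instead of DLLs
```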
Okay. Now, I guess this might be a good moment to, for our listeners who don't know,
what is the CRT?
Right. The CRT is... I loved a quote, based off of the movie, I believe, The Usual Suspects, which is: the greatest trick that C ever pulled was convincing the world it didn't have a runtime.
Okay. So the CRT is the
C runtime, the library that implements all of the C functions that you're familiar with, you know,
malloc and memset, memcpy, and all of these wonderful functions. And so it becomes very important that you and any libraries that you use agree on what malloc means. Because if you don't agree, and they malloc something, and you try to free it, well, their malloc-allocated things are over here and your free expects them to be over here. Bad things happen. Very bad things happen very quickly. So, fortunately, and I can't claim credit for this, but maybe I'll claim credit on behalf of my team, we do have some mechanisms in place to detect this at link time, so you don't have to wait until runtime to figure out that your program has a problem. It's called #pragma detect_mismatch, so that's named off of the mechanism you use to inject one of these detection things. But basically, the point is that when the linker tries to slam some objects
from over here into your objects from over here, it'll say, ah, these have a little tag in them
that says they were using the debug CRT, or the dynamic CRT.
And these say that they're using the static CRT.
And those are not okay to put together in the same program.
Oh, that's cool.
Yeah, yeah.
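The mechanism looks roughly like this. It is an MSVC-specific pragma, and the key/value strings below are illustrative; the real CRT tags are emitted by the compiler automatically:

```cpp
// Each translation unit records a (key, value) pair in its object file.
// At link time, MSVC's linker errors out if two objects carry the same
// key with different values, e.g. a static-CRT object mixed with a
// dynamic-CRT object.
#pragma detect_mismatch("my_crt_flavor", "static-debug")
```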
Yeah, I've had some direct experience.
This used to be the case, I don't know if it still is,
that libruby, the official binary for Windows,
exports its own versions of things like free and malloc and printf,
and it can really, really screw with the runtime behavior of your program.
It is literally the worst thing I've ever had to deal with as a programmer.
But anyhow.
Better stick with C++, I guess.
I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors.
Backtrace is a debugging platform that improves software quality, reliability, and support
by bringing deep introspection and automation throughout the software error lifecycle.
Spend less time debugging and reduce your mean time to resolution by using the first and
only platform to combine symbolic debugging, error aggregation, and state analysis. At the time of
error, Backtrace jumps into action, capturing detailed dumps of application and environmental
state. Backtrace then performs automated analysis on process memory and executable code to classify
errors and highlight
important signals such as heap corruption, malware, and much more. This data is aggregated and
archived in a centralized object store, providing your team a single system to investigate errors
across your environments. Join industry leaders like Fastly, Message Systems, and AppNexus that
use Backtrace to modernize their debugging infrastructure. It's free to try, minutes to set up,
fully featured with no commitment necessary.
Check them out at backtrace.io/cppcast.
So you mentioned Debian
and a couple of other Linux platforms a moment ago.
When vcpkg first started, it was Windows only,
but you did recently expand.
What are the total platforms you support now?
So we build the entire package graph daily
for Windows and Ubuntu and OS X.
However, we know that people have been successful
internally and externally using us for Arch Linux,
for Debian, for FreeBSD, and also for cross-targeting for things like Android or iOS
and Emscripten. So all of these things are possible. We just don't currently build them
on a regular basis to the point where we would want to say, yeah, these are, you know, officially
supported. That's not to say that if you file a bug we won't fix it. We would love to hear about any issues; it's just, those aren't in our CI system yet. Someday, maybe, and I would love to get there, but they aren't in there yet. So that's why they're not on the official list.
And when I played with it, I was using a derivative of Arch and didn't notice any distribution-specific issues, nothing that I saw.
Just out of curiosity, when you first started working on vcpkg, did you plan that you would eventually support Linux and OS X, or did you think it would just remain Windows only?
I would say that as a C++ developer, not necessarily as a Microsoft employee, but as a C++ developer,
one of the great things about C++ is that it is cross-platform. You can write code and it can compile natively
to dozens, if not hundreds of different platforms just out of the box. And so any sort of package
manager for C++ really does need to keep that in mind. It really does need to be able to,
at least in principle, handle that.
So certainly from day one,
we already kind of had the ability
to have cross-targeting.
So not necessarily being hosted
on a different platform,
though in principle there's no real difference.
But certainly cross-targeting was in there
from day one because of UWP.
So for those who are unaware, UWP, this is the Universal Windows Platform. It's basically the Windows Store model, the next generation of Windows APIs, compared to, say, Win32. So these are the new APIs, and it's a new protocol that lets you do a bunch of fun things with language interop. And if you want to have live tiles or interact with the notification system, these APIs are being added in UWP and WinRT, which is the next generation of the operating system APIs.
So we, from day one, though, a lot of programs can be compiled either for UWP or for Win32, the legacy desktop system.
So we, from day one, had that cross-compilation idea baked in.
And so from there, it's a natural extension that, well, of course we can target Linux.
Of course we can target Emscripten.
Of course we can target phones.
I mean, they're just different.
You just have to be able to change the switches out from underneath
and then change the compiler out from underneath.
But that's totally a normal thing to do.
So I would say that from the beginning
the architecture was there
even if the code wasn't.
But we're engineers, we can fix code.
So it's not a problem.
So would you say
that the support for Windows
and Linux and Mac os is fully stable
mature at this point there are some well i don't want to say that we don't want to improve it right
So we definitely want to continue improving it. So a notable, I don't want to say gap, but a notable
thing that we've chosen to do for now is that we are focusing on static linking only on Mac and
Linux. And this is due to some of the troubles about deploying dynamic libraries. So if you're
consuming from a system package manager, then the story about dynamic libraries and shared objects
is pretty simple. They're going to be in user lib, and that is where you get them. And if you want to
move your application to another system, it either needs to be exactly the same
or you need to rebuild it for that new system.
And that's pretty much the story.
However, for something that's more project local,
something that intends to more support
kind of a cross-platform compilation framework,
it's not clear what we would do if we built shared objects for
you. We would have to say, well, we built these shared objects for you, but now you need to either
embed our path, in which case moving it around becomes difficult, or you have to use the
LD_LIBRARY_PATH environment variable with a shell script that launches the actual program.
So these were some challenges that we don't know how to solve yet.
And if anyone in the audience has some ideas
and would love to chat with us about that,
I would be more than happy to chat with you about it.
But it's a problem that we didn't feel we could solve really well,
and so we haven't focused on shared objects yet.
And so we're pretty much focused on static libraries.
Well, then on the Windows, how do you solve that problem?
Do you provide
packaging support
in CMake or something so it knows where to get
the DLLs to shove them into the
installer or something like that?
Something like that, yeah.
So we add a post-build step which
does analysis of your binary
and figures out the transitive
closure of all your dependencies,
and then what we call "app-locals" all of the DLLs.
So app-local is a term for putting a dependency DLL in the same folder as the executable,
which causes the loader to be able to find it.
And so you take all the dependencies and you put them in the folder,
and now they're all available, and you can just zip up the whole thing
and put it on any machine anywhere, and it works.
That's a pretty straightforward story.
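The "transitive closure" step Robert describes can be sketched in a few lines. This is purely illustrative: the real vcpkg post-build step reads the import tables of the actual built binaries, whereas here the dependency graph, the function name `app_local_closure`, and the DLL names are all invented for the example.

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Illustrative sketch: a worklist walk that computes the transitive
// closure of a DLL dependency graph, i.e. the full set of DLLs that must
// be copied next to the executable for "app-local" deployment. Here the
// graph is hand-written data rather than parsed from real binaries.
std::set<std::string> app_local_closure(
    const std::map<std::string, std::vector<std::string>>& deps,
    const std::string& exe) {
    std::set<std::string> needed;
    std::vector<std::string> work{exe};
    while (!work.empty()) {
        std::string cur = work.back();
        work.pop_back();
        auto it = deps.find(cur);
        if (it == deps.end()) continue;  // leaf: no further dependencies
        for (const auto& dll : it->second)
            if (needed.insert(dll).second)  // first time we have seen it
                work.push_back(dll);
    }
    return needed;
}
```

Given a hypothetical graph where app.exe needs libpng16.dll and zlib1.dll, and libpng16.dll itself needs zlib1.dll, the closure is just those two DLLs, each listed once.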
Another question is the GitHub page for VC package still says that the tools and ecosystem are currently in a preview state,
and obviously you just mentioned you're soliciting open source contributions.
How long do you think it's going to remain
a preview state tool?
So, obviously,
I can't guarantee anything
because things change, right?
I'd like to
look at moving
out of preview nearer
rather than farther.
One of the big things we want to do as part of moving out of preview, though, is drastically improving our enterprise support.
So we know a lot of businesses are successful in using VC package already.
But there are some things that we really want to improve upon before we say, you know, we're out of preview.
Some of these things are we want to improve the ability to shrink wrap
your dependencies ahead of time. And I don't mean the binaries. I mean the sources. So being able to,
say, redirect all of the source fetches to a private server that you control so you have
complete guarantees that a GitHub repo isn't going to just disappear out from under you tomorrow, or the public version of something is not going to
disappear out from underneath you tomorrow. Now, because VC package is based on Git, and you just
do a Git clone, we, in that repo, deliver the entire history of all of the recipes and the
entire tool source, because the batch file that you run actually builds the tool on your machine. So you have everything
from us that you need. Even if our GitHub repo disappeared, everything would continue to work.
But the problem is that for the library that you're building itself, we currently fetch the sources
from their upstream. So we fetch Boost sources from the Boost.org repositories on
GitHub. So they have sub-repos for all of the individual Boost libraries. We fetch those repos.
And that means that if those repos were deleted, then that could cause problems. And so one of the
things that we want to address for companies is the ability to say, look, these are the recipes
that I want to use. Can you pre-download everything? Excuse me. Can you pre-download everything, put them all on a private HTTP server, for example,
and then redirect all future requests to that server. And if you ask for anything that isn't
on that server, fail so that we know that we are missing a dependency, something like this.
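The fail-closed policy described here, where every fetch is redirected and anything unregistered is an error rather than a silent fallthrough to the public internet, can be sketched roughly like this. `resolve_source`, its map-based mirror registry, and the URLs are all invented for illustration; this is not a real vcpkg feature.

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch of a fail-closed source mirror: every upstream URL
// must be pre-registered with a mirror location. An unregistered URL
// yields nullopt, surfacing the missing dependency instead of quietly
// fetching from the public internet.
std::optional<std::string> resolve_source(
    const std::map<std::string, std::string>& mirror,
    const std::string& upstream_url) {
    auto it = mirror.find(upstream_url);
    if (it == mirror.end())
        return std::nullopt;  // fail closed: report, don't fall through
    return it->second;
}
```

The design choice is that absence is an error: a build that asks for an unmirrored source fails loudly, which is exactly the signal an enterprise wants when auditing its dependency set.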
So that's a place where we really want to improve our support. And we also think that enterprises
will probably want to go further than we currently have with the idea of binary caching.
So our current mechanisms are that we can dump the binaries into a raw folder, or into a zip file or a 7-Zip file or a NuGet package. There's also a QtIFW-based installer, which was an export mechanism that was contributed by another user.
I'm not fully versed in how it works,
but it appears to be an SDK-style installer.
If you remember downloading an MSI
for some pre-compiled library
and installing it on your machine
and having it available,
it looks like it's a mechanism
for doing something like that.
But we know that enterprises
are going to have very particular concerns
about how, well, we don't really want it all to go into one NuGet package.
We want it to be split up across different NuGet packages in this particular way, and we want to do more in that direction.
So these are the two areas that we would like to improve enterprise support before calling it stable version one.
But I don't think that will change our approach to development.
We believe that package management is, it's an ongoing process.
It's not something like a box that you just ship and it's done.
It's an ongoing, continual, incremental improvement process
that really stops being useful if it stops changing.
Okay.
Okay.
I was wondering if we could ask you any questions about how the Rapperswil committee meeting is going since you're over there right now.
Yeah.
So this is my first committee meeting, so it's an interesting experience.
Well, first committee meeting and second time out of North America, so doubly interesting experience.
I love the trains. They're fantastic.
I highly recommend
riding on them if you
ever get a chance. But the committee
meeting is really interesting.
I have generally been sitting in EWG,
not LEWG, as you might
imagine being a package manager
author, but
because I've been working with modules and
coroutines. And so those two were two of the big areas that were on the docket for EWG. And sorry, I
should step back a second. The EWG is the Evolution Working Group, and so this is the group that
discusses and tries to hammer out the directions for the
future of the core language. So that's not the standard library. These are things like lambdas
or auto or type aliases, so template type aliases. These are all things that come from EWG.
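As a rough illustration of the kinds of core-language features that came through EWG, here is a tiny sketch combining an alias template, auto, and a lambda; the names `Vec` and `sum_with_lambda` are just made up for the example.

```cpp
#include <vector>

// Alias template: a name for a family of types. Alias templates, lambdas,
// and auto all entered the language through the evolution process.
template <typename T>
using Vec = std::vector<T>;

// Sum the elements using a lambda and auto type deduction.
int sum_with_lambda(const Vec<int>& v) {
    auto total = 0;                               // auto deduces int
    auto add = [&total](int x) { total += x; };   // lambda capturing by reference
    for (int x : v) add(x);
    return total;
}
```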
Then there's CWG, which is the Core Working Group, and they're the ones who make sure that EWG didn't make too many mistakes.
So they hammer out all of the little details in the wording
of exactly how this should be specified,
because obviously C++ is backed by a standard,
and so what that standard says is really important
because the compiler writers are expected to implement that,
not what they just felt like implementing.
So I've been attending EWG,
and some of the big things that have been discussed
were modules and coroutines was today.
And then on Monday, they discussed modifying the constructor
for StringView, which was interesting.
In what way?
So StringView, as it's currently specced out,
will take a pointer and a size, or maybe two pointers, I forget.
But the idea is that string view, in its current specification, is intended to model a string. And not just a string; it's intended to kind of model const char* from C, except it's length-counted.
But this raises the question: const char* can be set to null pointer. Is null pointer a string,
or is it not a string? Is it an empty string? If you dereference it, you don't get a null byte,
which is really kind of what we expect from an empty string.
So what is it?
And the discussion about string view primarily revolved around what is string view really intended to model?
And is this or is this not a string?
And how should we treat it? The proposal was to make it so that string view would accept null pointer as a
constructor
argument, and it would
initialize the string view
like it was an empty string.
That was the proposal. The status quo
was that it would be UB, because you gave
us not a string, and string view
views strings. You gave us not a string,
so you gave us garbage, so UB,
the standard's answer to these things.
And in the end, there was not consensus to modify the constructor. So as is, we'll stay with
the current behavior, which is UB. I mean, obviously, things can change.
And I wouldn't be surprised if it comes up again.
And I believe the next meeting will be San Diego.
But on Monday, there was not consensus.
That's pretty interesting.
Yeah, it's an interesting thing.
Now, the funny part is that you can still get a null-pointer, zero-length string view
by default constructing it.
Yeah, default construction.
I was just playing with that yesterday myself, and I was somewhat surprised,
but at the same time glad that it existed because I kind of needed it.
Yep.
Well, and that was an argument brought up from one side of the table.
So it's, yeah, it's an interesting thing.
But, yeah, modules and coroutines, pretty much all-day affairs, a big slog.
But progress was made on both counts.
I think some interesting points were raised,
and we dug through a bunch of proposals.
But no specific news, of course, yeah.
Yeah, I don't want to affect the proceedings
that still remain to occur.
So there's a section of the meeting called plenary. People break
up into these individual sub-working groups, but then everyone comes back together for plenary,
where everyone gets to decide that, yes,
this is truly the great future that C++ will be, or, no, you guys really didn't think about this enough
and you need to go back and keep hammering on it.
And so plenary is still to come for a lot of these things
and so I don't want to say one way or another
about how that will go.
Okay.
Well, some exciting news to talk about
in a week or so after everything's done.
Yeah, definitely look forward to the trip reports.
Yeah.
Sorry, go ahead, Jason.
No, that's all right.
There's also supposed to be like tools working group or whatever kind of discussions, right?
Have you been involved in that at all this week?
That's right.
That's primarily why I'm here: the tooling working group, SG15, which will be meeting on Friday evening.
On Friday evening? That's like the final hour, right?
There's a Saturday.
There's some stuff on Saturday.
It's a long
haul.
But no, I'm really looking forward to that.
That's headed by Titus
Winters of Google fame.
So it's going to be good.
It's going to be good.
Awesome. I'm glad you'll be able to go there
with your background with VC Package and everything.
I'm sure you should have some interesting contributions for it.
I hope to.
Is there anything else you want to talk about today
before we let you go?
Let's see.
No, I think that's awesome.
I think we dug through VC Package a bit.
And I'd like to let the audience know that,
yeah, we're an open source project.
We're on GitHub, github.com/Microsoft/vcpkg.
We completely 100% accept contributions.
Well, sorry, I don't mean to say we accept 100% of contributions.
But we are
very much open to contributions. Of the 700-plus libraries that we have, all but, you know, 30 of them
were externally contributed. So yeah, we accept contributions quite a bit. It's really
about a community of maintainers more than it is about just a single tool. It's really about the ecosystem. That's
what it's about. So please don't hesitate to come by and let us know.
All right. Okay. Thanks so much for your time today, Robert.
Yeah, it's great being here.
Thanks for joining us.
Thanks so much for listening in as we chat about C++. I'd love to hear what you think of the
podcast. Please let me know if we're discussing the stuff you're interested in, or if you have a suggestion
for a topic, I'd love to hear about that too. You can email all your thoughts to feedback at
cppcast.com. I'd also appreciate if you like CppCast on Facebook and follow CppCast on Twitter.
You can also follow me at Rob W. Irving and Jason at Lefticus on Twitter. And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode is provided by podcastthemes.com.