CppCast - C++ Build Insights
Episode Date: March 26, 2020

Rob and Jason are joined by Kevin Cadieux and Sy Brand. They first discuss a blog post on memory access patterns and the Clang 10 release. Then they talk about C++ Build Insights: Kevin tells us how vcperf can be used to find places where build performance can be improved in your code. Sy then goes over some of the other recent updates to Visual Studio and Visual Studio Code.

News
- Vector of Objects vs Vector of Pointers and Memory Access Patterns
- Post-Prague Mailing
- Clang 10.0.0 Released

Links
- Download Visual Studio
- Introducing C++ Build Insights
- Analyze your builds programmatically with the C++ Build Insights SDK
- Have full integration of Build Insights within Visual Studio

Sponsors
- PVS-Studio. Write #cppcast in the message field on the download page and get one month license
- Read the article "Zero, one, two, Freddy's coming for you" about a typical pattern of typos related to the usage of numbers 0, 1, 2
Transcript
Episode 240 of CppCast with guests Kevin Cadieux and Sy Brand, recorded March 26, 2020.
Sponsor of this episode of CppCast is the PVS Studio team.
The team promotes regular usage of static code analysis and the PVS Studio static analysis tool. In this episode, we discuss a new Clang release.
Then we talk to Kevin and Sy from the Visual C++ team.
They tell us about C++ Build Insights and other updates in Visual Studio. Welcome to episode 240 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
I'm doing all right, Rob. How are you doing?
Doing okay. Not too much news to discuss myself. We're going to agree not to talk about the thing already going on.
Well, I didn't say it. I didn't say the word. So we're just going to skip over that, because I'm sure people don't want to hear us talking about it every week.
I do have news that is kind of unofficially official, because it is listed on the conference website now.
Okay.
I will also be doing a two-day class at NDC TechTown, which is at the end of August, beginning of September, back in Norway.
Awesome.
It was a few weeks ago that I'm like, no, I don't have any news yet.
And then I went to their website and saw that they had posted it.
That's great.
Okay, well, at the top of the episode, I'd like to read a piece of feedback.
This week we got a tweet from Peter Van Sent saying, I'd like to recommend Jon Turner for a CppCast episode. If you're wondering, I've been doing reruns. Did you reach out to your cousin about this, Jason?
Well, I referenced him in my response to that tweet, and he didn't respond to me. I'm assuming he's referring to my cousin, and referring to, what, the third episode, when I was on, and our second episode, I guess. Because he worked on ChaiScript with you.
Yeah. Yeah. And I mean, he's been involved
in many programming languages at this point, TypeScript and
Rust and yeah, we should definitely have him on.
Yeah, sure. We should, uh, you know, reach out
to him if he doesn't respond to that tweet.
Okay. Well, we'd love to hear your thoughts about the show. You can always reach out to us on Facebook or Twitter, or email us at feedback@cppcast.com, and don't forget to leave us a review on iTunes or subscribe on YouTube.
Joining us today, we have Kevin Cadieux and Sy Brand. Kevin is an engineer on the MSVC back-end team, working on code generation, optimization, and build throughput. He is the designer and main implementer of C++ Build Insights, the new build analysis platform for the MSVC toolchain. In his spare time, he, wait, what spare time? Anyway, he likes to play video games and enjoys creative endeavors such as graphic design and writing.
And we also have Sy, who is a Microsoft C++ developer advocate. Their background is in compilers and debuggers for embedded accelerators, but they're also interested in generic library design, metaprogramming, functional-style C++, undefined behavior, and making our communities more welcoming and inclusive.
Welcome to the show, Kevin, and welcome back, Sy.
Hey, thanks very much for having me again.
Yeah, thanks for having me.
So do you like creative writing, short stories kind of thing, Kevin?
No, just I guess technical writing.
Oh.
I don't think technical writing is a hobby.
No, but it can get creative.
Oh, okay.
That's interesting.
Okay.
Are you playing any good video games then?
I play Overwatch at the moment, typically.
I am not.
I've never played that.
Have you, Rob?
I have not played it, but my son actually just recently got it for his Nintendo Switch.
I should try it out with him.
Yeah, I just got a Switch and Animal Crossing, which is an absolute joy in these times.
I've seen lots of people talking about Animal Crossing online.
I have not looked into it too much.
Wasn't that like a Facebook game or something?
It was a Wii game.
Was it a Wii game?
Okay, it was on GameCube originally.
Oh, GameCube. Oh, it was my neighbor playing it on his Wii, which probably was just the GameCube version. That's what I'm thinking of. So is this like a straight port of the old GameCube one, or is it updated?
Yeah, this is brand new.
Okay.
It's really good. You should get it.
Okay. I mean, what is the gameplay like?
Oh, it's just cutesy. You're building up an island and making friends. It's just very pure, very sweet.
Okay. Well, Kevin and Sy, we've got some news articles to discuss, then we'll start talking more about C++ Build Insights and other news from the Visual Studio team, okay?
Yeah, sounds good.
Okay.
So this first article we have is Vector of Objects
versus Vector of Pointers and Memory Access Patterns.
This is from Bartek's blog.
And yeah, he did some benchmarking here,
talking about different situations where the two patterns could be better.
So doing a vector of objects can be better
for kind of like a random access situation.
But if you're sorting,
the vector of pointers is actually much faster.
There's one little note here.
Copying pointers is faster than copying a large object.
Yeah, I like this article.
I think this is a repost of an old one he did
from like six years ago or something.
Like if you look at the comments,
they're kind of old,
which maybe explains like the,
he says that he got a 2.5x speedup on the vector access.
And that seems a little bit low to me for some things.
But then I guess like micro benchmarking stuff
when cache access patterns are involved
is really difficult to get solid numbers.
But still, I like the way they explained it all.
Yeah. I think you wanted to add to this one, Kevin?
Yeah, I think the results are interesting.
I mean, it's expected because the pointers are smaller in memory,
so sorting them is typically faster.
That's what I would have expected.
I feel like I'm actually slightly surprised that sorting the shared pointers is
faster than sorting the values
because copy and assignment
on shared pointer is very
much not cheap.
By the time you have to do a
reference increment, reference decrement
and then check to see if the result's zero and whatever
and it's a bunch of...
It's just moving them around, though.
I'm curious what your recommendation would be with this. Would you go vector of objects or vector of pointers, or does it depend on the situation?
Are you asking me?
Yeah.
I don't get asked questions.
I would always prefer the value type, the vector of objects, until I had a reason to move to vector of pointers. Although I was just thinking about the size comment, and that's correct. I guess in the sort case there is no assignment, it's all swaps, so maybe they can avoid the expensive reference increment and decrement.
Yeah, otherwise it would just completely murder the performance, I would think. It would have to, right?
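For reference, here's a minimal sketch of the two layouts being compared; the type and field names are made up for illustration and this is not the benchmark from Bartek's post.

    #include <algorithm>
    #include <memory>
    #include <vector>

    struct Particle {          // a deliberately "fat" value type
        float payload[16]{};
        float key{};
    };

    // Vector of objects: contiguous storage, great for linear traversal,
    // but std::sort has to move whole Particle objects around.
    void sort_values(std::vector<Particle>& v) {
        std::sort(v.begin(), v.end(),
                  [](const Particle& a, const Particle& b) { return a.key < b.key; });
    }

    // Vector of (shared) pointers: sorting only shuffles pointer-sized handles,
    // but every traversal chases pointers across the heap, which is where the
    // cache-miss cost discussed above comes from.
    void sort_pointers(std::vector<std::shared_ptr<Particle>>& v) {
        std::sort(v.begin(), v.end(),
                  [](const auto& a, const auto& b) { return a->key < b->key; });
    }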
Okay. Well, this next article we have is the post-Prague mailing from the ISO group.
And I'm not sure if there's actually anything new here or if it's kind of just an updated list of what went to Prague.
And you can see what was actually adopted and voted in at the Prague meeting.
Is that right?
Or did anyone notice that there were some new papers in here?
Well, I have to say, for my part, I did not memorize the list of all the papers that were in the Prague meeting.
No, no, I didn't either. That's fair.
But some of these things I definitely did not recognize, but I don't know how new they were either.
Although if you look at the dates on them, it looks like most of the dates are before the Prague meeting.
Right.
I don't know. This is all out here, though, if anyone wants to go back to it. Did you read through any of these, Sy or Kevin?
I personally did not, to be honest.
That's okay.
Oh, an interesting paper here that's called
"Throws nothing should be noexcept," and it's a discussion of whether or not standard library functions that are currently specified as "Throws: nothing" should in fact become specifically noexcept in the standard. I don't know whether or not this was accepted, but I do find it an interesting discussion at the very least, because I've often wondered about that when I'm reading through the standard. You know, if you see noexcept on the function signature, you know it's not going to throw an exception. Otherwise, you have to read all the stuff until you get to the throws clause. Oh, throws nothing. Oh, okay.
It doesn't look like this was accepted yet, at least not at Prague.
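As a small illustration of the distinction the paper draws, here is a hypothetical example; the Widget type and function names are invented, not taken from the paper.

    #include <utility>

    struct Widget { int value = 0; };

    // Today, many standard library functions only carry the prose guarantee
    // "Throws: nothing"; nothing in the declaration itself says so.
    void reset_documented(Widget& w) { w.value = 0; }

    // What "throws nothing should be noexcept" would mean: the guarantee becomes
    // part of the signature, visible to readers and to the noexcept operator.
    void reset_marked(Widget& w) noexcept { w.value = 0; }

    static_assert(!noexcept(reset_documented(std::declval<Widget&>())));
    static_assert(noexcept(reset_marked(std::declval<Widget&>())));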
Okay.
Okay.
And then the last article we have is that LLVM Clang version 10.0 was released.
And some of the highlights were that they managed to get C++ concept support in already.
That's amazing.
Yeah.
Yeah, this is really great.
Actually, one of... Oh, sorry, please go ahead, Sy.
Yeah, I was just saying that having another concept implementation is really great.
We've been shipping Clang and LLVM in VS since, I think, 16.1, with CMake support and then MSBuild support, and then we've been
having Clang Tidy support
in there as well. So we're hoping to
update to the most recent version as
soon as we can. Awesome.
I like
to tell students about Control Flow Guard.
A lot of people on Windows
doing Windows development aren't aware of it.
And I find it
interesting. This is one of the highlights here for Clang 10, that it supports Control Flow Guard checks as well. I don't know, do either of you, since you're involved in Windows development, have anything to add to that?
I do not. Sy or Kevin?
No, I don't really know anything about that.
Okay. My understanding is that it's a runtime guarantee that if someone tries to circumvent your code so that it tries to execute code that should not be executable, then it's going to trap and crash the program, so the operating system will catch it. It's just one extra little check. It's supposed to be very low overhead,
so Microsoft recommends any applications
that you ship to users to have control flow guard enabled.
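As a rough sketch of what that means in practice, here is my own illustration, not something from the show: Control Flow Guard instruments indirect calls, so a program built with MSVC's /guard:cf switch gets a runtime check on calls like the one below.

    #include <cstdio>

    using Handler = void (*)();

    void on_event() { std::puts("event handled"); }

    void dispatch(Handler h) {
        // With /guard:cf, the compiler emits a check before this indirect call
        // that verifies 'h' points at a known, valid call target. If the pointer
        // has been overwritten with an arbitrary address, the process is
        // terminated instead of jumping there.
        h();
    }

    int main() {
        dispatch(&on_event);
    }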
Nice.
So it's interesting,
they have this new optimization
for if you do pointer arithmetic,
which moves the pointer outside of an array,
that's undefined behavior,
and so it's now doing optimizations based on that.
I'm hoping this isn't going to be one of those things where it breaks a ton of code and people get angry.
But they did add UB sanitizer support to it as well to catch those cases.
So we'll see how that goes.
I hope it breaks a lot of code and people get angry.
And then they start using their sanitizers more, and they understand why they were doing bad things.
Yeah, that is one approach to the problem.
Well, I mean, I don't think we've ever discussed this with you on the air, Sy, and definitely not Kevin, but it's come up a couple of times that when GCC implemented, or made it default, to optimize around checks for 'this' being null, they got just blasted for it. But it has literally been
undefined behavior since day one of C++, but people relied on broken compilers.
Yeah, well, I mean, we've had somewhat similar issues because we've been working so hard on conformance the last few years.
And of course, there's inconsistencies in the old behavior and the new behavior.
So everything's been behind a switch.
But yeah, getting people to move to the standard behavior, not rely on what implementations used to do is just a hard problem.
Yeah.
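To make that earlier point about Clang 10 concrete, here is a minimal illustration of the kind of pointer arithmetic involved; this is my own example, not taken from the release notes.

    void pointer_arithmetic_example() {
        int a[4] = {0, 1, 2, 3};

        int* one_past_end = a + 4;   // fine: pointing one past the end is allowed
        int* outside      = a + 5;   // undefined behavior: the pointer has left the
                                     // array, even if it is never dereferenced, so
                                     // an optimizer may assume this never happens
        (void)one_past_end;
        (void)outside;
    }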
Okay, so Kevin, we briefly spoke about some of the blog posts that you did last year,
I think around November, about C++ Build Insights,
but we definitely did not dig into it.
So could you maybe give us an overview of the tool to start off?
Yeah, of course.
So C++ Build Insights is like a build analysis solution that we developed for MSVC.
Its purpose is to help our users understand their build times, basically.
And it comes with a tool called vcperf.
And the tool is used to collect the trace of your build,
which you can then open in WPA, which is the Windows Performance Analyzer.
It's like a trace viewer that's available on Windows to view ETW traces.
And when you do that, when you open your trace, you get this nice graphical overview of your
build and you can immediately spot bottlenecks and things of that nature.
It also gives you time information in aggregated form.
So what this gives you is like, oh, in your entire build, you parse this header like 3,000
times for a total of 9,000 seconds.
And so these are very useful because, for example, in the header case, you can use that
to optimize your PCH, for example.
And so the events that are supported, we have events for all compiler and linker invocations.
We have header parsing, function code generation,
and even template instantiations,
which is the newest event that we added.
The system is scalable.
You can use it on builds that run for many hours.
And because it's based on ETW, it's very easy to use,
because the collection is managed by the OS itself.
So for example, you can just start a trace,
and all the events from MSVC running on your system
are just collected automatically.
You don't have to know,
like you don't have to use a command line switch,
for example, on each of your invocations.
It's just done automatically.
So that makes it easier
because you don't have to understand
even your build system on how to add switches and stuff.
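For listeners who want to try it, the workflow Kevin describes boils down to roughly two commands from a developer command prompt: vcperf /start SomeSessionName before kicking off the build, and vcperf /stop SomeSessionName someTrace.etl afterwards to produce the trace that WPA can open. The session and file names here are just placeholders; check the C++ Build Insights documentation for the exact options.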
And oh, I also heard, like in your episode in November,
someone was asking if VC Perf was backward compatible,
basically, like if they could use it on older versions.
Do you remember?
Yeah.
Yeah, and the answer is yes, actually.
For example, if you download VS 2019 just to get VC Perf,
you can actually open a command prompt,
like a VS 2019 command prompt, run vcperf, and build your project using VS 2017, for example, and it would still work. But it's only backward compatible down to version 15.7, I think, which is somewhere in VS 2017.
Okay. And now, I guess before we get too much further, is this available to all users of Visual Studio,
or do you need Enterprise or anything like that?
It's all users, as far as I know.
Okay.
Because, yeah, vcperf is even, as of now, available on GitHub.
Oh, okay.
So you can clone it, build it, and just run it as a standalone program,
and whichever VS you're using, it's just going to work down to 2017, as I said.
I guess it really would not make sense to make it an enterprise-only feature
if it were open source.
Okay, so even if you're just using the command line Visual Studio build tools,
you can install vcperf and run it that way.
Yeah.
Okay, awesome.
And so that's what you first
announced in November, but then there was an update to this tool just a few weeks ago, right?
Yeah, so just a few weeks ago I wrote another blog post, which basically announced the release of the C++ Build Insights SDK. The C++ Build Insights SDK basically takes the C++ Build Insights technology and makes it available to everyone so that they can write their own tools with it.
Like vcperf itself is built using the SDK.
Okay.
And so that's the reason we made it open source. Well, one of the reasons everyone listening should clone and build vcperf, seriously, is that the new events, like template instantiations, for now are only available in the open source version.
Oh, okay. Yeah. So the open
source has more than what comes packaged in Visual
Studio. I mean, the events for
the template instantiations are in there in Visual Studio, but the vcperf that understands them and creates the trace that actually shows them is only available in the open source version for now.
Okay.
Yeah, these things will make it into releases, but, you know, between releases and things like that, any updates you'll have to get on GitHub.
Oh, and for the SDK, we also have a samples repository on GitHub as well, which we'll populate over time with more examples of what you can do with the SDK.
So then, who is the
intended target audience
for this SDK?
One of the reasons we did the SDK is
because initially we were talking to
our Xbox partners,
and they were telling us that they needed a way to programmatically analyze their build times during their CIs.
So that was one of the reasons we did it.
For example, you could automatically detect bottlenecks in your builds by consuming the timing information using the SDK.
And there's a bunch of other use cases.
For example, if you don't like WPA,
if you don't like viewing your traces in WPA,
you can basically use the SDK
and create your own visualization experience
that fits with your own tools.
The target audience is mostly build engineers,
these people in the companies that manage and monitor their builds.
And if they want to write custom tools that fit their scenarios, it's typically those
people who would use the SDK.
So do you know if anyone is, I'm assuming you might not be able to tell us details,
but if anyone is already using this for their build dashboards to show where they have messed
things up in the last check-in or
something? I actually don't know that yet. It's too, I think it's too early to tell. Maybe someone's
going to come back to us at some point. That would be, I mean, a really cool idea. Like, you know,
if you could have a build fail after a check-in because suddenly parsing all the headers took,
you know, 10% longer. That would be something
you want to go and address right away.
And particularly your comment of
PCH earlier, I find things like
PCH can be a little fragile.
I can easily see that someone could mess something
up and accidentally add 20% to the build time
on a large project.
Interesting.
If anyone listening would like to try it out for analyzing their builds, please feel free to get in touch with us.
Yeah, and there's like a full documentation available online like on
docs.microsoft.com, like all the functions, all the APIs are documented
so it should be simple enough, I think, I hope.
I want to interrupt the discussion for just a moment to bring you
a word from our sponsor.
This episode of CppCast is sponsored by the PVS Studio Company.
The company produces the PVS-Studio static code analyzer of the same name, which has proved to be great at finding errors, and especially typos. The tool supports the analysis of C, C++, C#, and Java code.
The article "Zero, one, two, Freddy's coming for you", which has recently been posted, clearly demonstrates the analyzer's outstanding ability to find typos. The article shows how easy it is to make a mistake, even in simple code, and that no one is immune from making one. You can find a link to the article in the podcast description. There's also a link to the PVS-Studio download page. When requesting a license, write the hashtag #cppcast, and you'll receive a
trial license for a full month instead of one week. So I'm curious about what are some of the other
situations that vcperf and the insights SDK might help C++ engineers solve? Like you mentioned,
you know, cleaning up your PCH, optimizing it if certain headers are taking a long time to parse, being parsed lots of times.
What are some of the other actionable things you might get out of this tool?
So the top three things that we see, the first one is the headers, honestly.
I'm going to repeat it because it's just so common that we see people finding out, oh, wow, this header is taking a whole bunch of time. And that's because, before we had Build Insights, people just didn't have the information that they needed. The aggregated statistics on each header, they just didn't have
that. So I think this is the reason why we're seeing a lot of people figure out that their
headers are just wrong. So that's the first one that we see the most often. And the second one is just like bottlenecks.
Like the first thing that you see when you open WPA is this graphical view.
Like you see like one timeline per thread, basically.
So you can easily spot like, oh, let's say you have like one area of your build,
which is not parallelized.
You're going to see it as just like one line there.
So this way you can easily see, oh, there's something going on there, what is it? And then you can just drill down and discover, for example, oh, it's because I didn't throw the /MP switch on this cl invocation or something.
Okay, yeah, go ahead. I'll explain why I'm laughing in a minute. Go ahead and finish yours.
I think I know why, but anyway. And the third one that
we often see is functions. Sometimes functions take a long time to optimize in the optimizer.
This can be because either because they're too large or because it's like sometimes the compiler
generates like large dynamic initializers that just take a long time to generate. Like if you have large static, large arrays,
large global arrays with tons of entries,
sometimes the optimizer has trouble with that.
And so you can spot those in the functions view of WPA.
We see this quite, like it's not uncommon, actually.
We often see this.
And another way that you get huge functions
is if you use the __forceinline keyword too much, and so everything gets inlined into like a huge function, and then the optimizer starts to break down.
Yeah.
I would say those are the three most common things.
Okay. A large project that I have been working on, off and on, for the last 10 years,
it's been a multi-year process for them
to split the project into two pieces.
And it turns out I was building one of the pieces
just two days ago,
and it was taking forever to compile
on my recently built 12-core machine.
I mean, a 12-thread machine, a six-core,
and I'm like, what in the world is going on?
And finally realized that, yeah, during the split process,
the slash MP flag had not been propagated
to the other portion of the library.
Oh.
Yeah.
That's not what I imagined.
I thought you were going to say something else.
Oh, well, what did you think I was going to say?
Never mind.
It's better.
Never mind.
I would love to see, since you're talking about actionable items, it sounds like with these common use cases that it would be possible for someone to write a tool that says, well, we just analyzed your build, and hey, dummy, you forgot to put /MP, and oh, by the way, you're including the same header file 30,000 times unnecessarily. Is that possible?
Yeah, I mean, that's one of the reasons the SDK was created in the first place, to write these kinds of tools.
Like clang-tidy for my build system.
Oh, yeah. Sorry. In fact, like because I'm going to write more articles in the future.
And like every one of them, most of them is going to have like a VC Perf use case
accompanied by the appropriate SDK sample that does the same thing.
So like the first blog post that I'm going to post soon
is going to be like an example on how to find a bottleneck in a build using VC Perf
and the build explorer view, like the graphical timelines view.
And the sample that comes for that is actually the example that I mentioned. It
automatically finds cases where you don't throw /MP
and when it's a bottleneck
and if it finds it, it's going to
emit a warning. Very good.
Very cool. So
this is pretty easy to
use. You just open up your
Visual Studio command line, run vcperf
before and after the build.
But are there any plans to
bring it fully into Visual Studio, so you could just hit, you know, build with vcperf or something like that, and get an output of what was taking all the time in your build?
So, do you want to...?
Yeah, sure. This is something that we definitely see as a possibility going forward. So we're kind of seeing what the adoption
is on this tool and
how much people would
like a feature like that so
yeah if this is something
which would be very useful for you
please get in touch or create like a
ticket on our
bug tracker on developer community
and it'll just help us to prioritize
how much work we should be putting
into this kind of support.
Hold on a second.
I just need to go to the VC website real quick now.
Well, we actually have one suggestion ticket
for this specifically
that someone already posted.
It has, I think, 20 upvotes.
So everyone listening,
just go there and just upvote.
Oh, maybe I'll find a link and put it in the show notes.
Absolutely, put a link to that in the show notes, because more accessibility
to more tools is always better.
Although, forgive me
if I miss this, but is there any downside
to running this? Is there any overhead
that is noticeable or anything?
The overhead is not really large,
to be honest. I've never seen anything
drastic, even when we have the most
verbose events, like template instantiations, the overhead is within noise.
Okay, very good. I found it. I'll put this in the show notes.
Okay. Well, Sy, speaking of Visual Studio proper, are there any recent announcements for Visual Studio 2019 or VS Code that you wanted to share?
Yeah, sure. We've been doing a lot of work.
In fact, 16.5 just got released last week.
And that comes with a few cool features like team training for IntelliCode.
IntelliCode is our machine-learning-driven tuning mechanism for IntelliSense. So it's like, you know, if you're doing a bunch of stuff with iterators, then if you type .begin on something, then probably you're going to be typing .end on it later on or something
like that. So
we've trained this model on
a large set of open source code bases to get it to
understand, you know, what kind of coding patterns are there in C++? What kind of member functions
are you going to be using shortly after each other? So that when you have IntelliSense, and you get
that list of member functions, the ones you're going to be most likely to use will be right at
the top. And so what 16.5 comes with is the ability
to train your own models on your own code bases, so that, you know, if your team has a set of idioms they use a lot, or certain programming styles, then you can kind of capture that in your model, and it will tune IntelliSense for you, which is really cool.
It also comes with, if you use
IntelliSense with the standard library, sometimes
you get massive type names.
So we've improved those a bit
so you should be able to read those
a bit better now.
Another good one is file copies
for CMake.
If you're doing remote
programming, then
we've optimized those copies so that it's not doing any unnecessary copying of files.
Today, which will be, I guess, is this episode going out tomorrow?
Yeah.
We've released 16.6 Preview 2, which comes with a few cool things like
better Ninja support for CMake on Linux.
So Ninja's now the default underlying generator
when you're building on WSL or remote Linux
instead of Makefiles.
We've also added some simplified templates
for debugging CMake projects
on Linux, because you previously had to do a bunch of
your own work to get that
all set up, so we've made that easier.
We also added
better Doxygen comment
generation in Visual Studio,
and
even an IntelliSense
code linter, which
you can try out. In terms of the language, the conformant preprocessor is now complete.
What?
Yeah.
Hasn't that been an open feature request for 25 years or something?
That has been, yes.
This has been a blocker on a lot of things.
A lot of people have been asking for it. So the two main features are C++20's __VA_OPT__ thing, which is for handling variadic macros where some things are optional, and the other one is _Pragma. We've had a feature which looks kind of like it for a long time, but it's not been the same thing. So now _Pragma is supported.
So this is no longer an experimental feature.
You can go ahead and try it out.
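For reference, a quick sketch of the two preprocessor features being described; these are my own examples, not ones from the show.

    #include <cstdio>

    // __VA_OPT__ (C++20): the comma only appears when variadic arguments are
    // present, so LOG("hi\n") expands cleanly without a trailing comma.
    #define LOG(fmt, ...) std::printf(fmt __VA_OPT__(,) __VA_ARGS__)

    // _Pragma: a pragma that can be produced from inside a macro, unlike #pragma.
    #define SUPPRESS_DEPRECATION _Pragma("warning(disable : 4996)")

    int main() {
        LOG("no extra arguments\n");
        LOG("value = %d\n", 42);
    }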
Yes, I'm really pleased about that.
Sorry, go ahead, Jason.
No, it's all right.
I was just going to say,
I'm guessing the new preprocessor is still optional,
or is it going to become the default?
It will become the default.
I think it's still under a switch.
Okay.
There's going to be,
by the time this episode comes out,
there will be a blog post
which explains everything.
I've got a question about this IntelliCode thing, which I've had enabled, although I haven't really seen the difference on my code bases, to be honest. I don't spend a ton of time in the Visual Studio IDE either. But forgive my ignorance with machine learning: my understanding is that machine learning can pick up bad habits also. So is it possible to accidentally train your IntelliCode to propagate bad practices?
I mean, if you're training your own models, then yes. If you have bad practices already, then that's what machine learning is going to pick up,
you know, the whole garbage in, garbage out.
So yeah, be selective about what you're training this stuff on.
If you want it to pick up the best practices,
then train it on code, which has your best practices.
Is it possible to go in and tweak it?
And if it's recommending something, say,
no, never recommend this ever again, please?
I'm not sure about that off the top of my head.
Okay. That would be, yeah, kind of like on Twitter, you could say, I don't like this recommendation.
Yes, maybe we could have something like this.
That's a good question, though. How granular is it when you tell it to train on some of your code?
Like, are you telling it to train on all the code in a Visual Studio solution or in a specific project?
Can you tell it, you know, just ignore all the code in this folder? I know it's bad.
I think for that one, I'll have to defer to documentation off the top of my head.
I want one that says, if git commit user equals Bob,
ignore.
That'd be bad.
And on the
VS Code side,
since last time we spoke, which I guess
was CBPCon,
we've had
find all references support
in VS Code for C++,
which is one of those features which people were constantly asking for
and is really great to have in there.
Symbol-based renaming, so it's not just based on text.
We're actually using an understanding of the symbols in your code base
in order to do this.
It's like a semantic renaming rather than just token-based.
Localization support for the actual IDE tooling.
So if you want your interface to not be in English,
then that's supported now.
A big one was we took on maintenance
of the CMake tools extension.
So that was previously written and maintained by vector-of-bool, who did a fantastic job on it, but wanted to hand over control to someone else.
So we've taken the maintenance of that on.
And we're still, you know, taking...
It's still very much a community project.
We're not, like, taking it and not letting anyone touch it.
We're still taking PRs and issues
and trying to run it the same way as it always was,
but make sure it's the highest quality it can possibly be.
So the things which have been added since then
have been support for multi-root workspaces.
So if you have multiple root CMake list files
in your repository, then that's now supported.
And the other one is the file-based API for CMake.
So if tools need to extract information
about the build from CMake,
they used to use the CMake server, but it was really slow.
So they added a new interface,
and now VS Code uses that interface.
So if you're loading up a CMake project
and populating all of the information,
you should notice that that's a lot faster now.
That's interesting, because I believe the most recent release notes for CLion
made the same change with their CMake handling as well.
Yeah, it's definitely a noticeable improvement.
So I'm curious, with all these features that you're adding to VS Code,
where's its place in the Visual Studio ecosystem?
It seems like it's almost becoming as
featureful as Visual Studio.
We still want to maintain
that feeling that VS Code is a little
bit more lightweight
for people who prefer that
kind of environment.
When we're making recommendations to people, we still say, if you're on Windows, Visual Studio should be your go-to tool, and then Visual Studio Code is what we recommend for Linux and Mac. You know, as you say, there are a lot of new features being added to VS Code, but we're trying not to change it in such a way that it's unrecognizable, you know what I mean? It should still be that same kind of style of code editor,
but just have all of these extra niceties,
like the semantic renaming, reference finding, things like that.
I have to give it a try again. I haven't in a while.
Yeah, please do. Let me know how it goes.
All right.
Do you have any timeline on when MSVC might be C++20 feature complete?
I know the ink's not fully dry yet.
Yeah, I mean, we can't make any promises at this time,
but we're working really hard on it.
We've got the concepts and Spaceship Operator
now feature complete in the compiler.
Both concepts and Spaceship feature complete?
Is that what you just said?
Yeah, the Spaceship feature, the library stuff is still
in progress, but
for the compiler, yes.
And then we're working hard on
coroutine support and
modules.
The standard library, ever since Visual Studio 2019 came out, we've had more and more features every release for C++20. We're doing more constexpr algorithms, and span, for the most recent release, is constant-evaluated, std::erase_if, things like that.
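For reference, two tiny examples of the kinds of C++20 library additions mentioned here; these are my own illustrations.

    #include <algorithm>
    #include <array>
    #include <vector>

    // C++20 makes many algorithms constexpr, so they can run at compile time.
    constexpr bool contains_three() {
        std::array a{1, 2, 3, 4};
        return std::find(a.begin(), a.end(), 3) != a.end();
    }
    static_assert(contains_three());

    // C++20 uniform container erasure: std::erase_if on a vector.
    void drop_negatives(std::vector<int>& v) {
        std::erase_if(v, [](int x) { return x < 0; });
    }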
complete by the the end of the year but um there are no promises yet we have to see how how things
go all right i'm curious about your updates to uh coroutines because it seemed like i don't know
again i've not really been following coroutines very closely, but like a year ago, all the compilers like, oh yeah, we've got coroutines. And now if I look at the standards
conformance page on compiler explore, uh, explore, excuse me, compiler conformance page on CPP
reference. So I was trying to say, um, everyone's like, no, we don't have full support for
coroutines. I don't know what you're talking about. And like, did I, did I miss something
or is my memory like tainted? Yeah. I haven following um core routines as as much uh we have been um getting
that we we're working towards uh a complete implementation as as soon as we can uh it's not
quite there yet okay another thing i need to spend time with with I know you mentioned a couple things about the
VS 2019 preview build
Is there anything else
upcoming that you can tell us about
we should be looking forward to?
I think I mentioned
the main ones
which are in
16.5 and 16.6
preview 2
You can go ahead and look at the release notes for 16.6 Preview 2, which should be out by the time this episode is, so you can go ahead and read more details there.
The 16.5 release notes are already out,
so they have more details about everything which came in that release.
As far as things which are coming up,
we're always working on conformance.
We're working more on our CMake support
for both Visual Studio and Visual Studio Code.
So yeah, just keep your eyes open for our blog posts
and release notes, and they'll keep you up to date.
Great.
So one other thing I will mention is
we've been working really hard on, you know, build speed improvements have been really high on our priorities.
You know, we've got our build insights tool, but also just in terms of the tool chain in general.
So we've been working really hard on improving the linker throughput and all the way end-to-end from
the front end of the compiler right to
the linker.
We're seeing build improvements anywhere from
15 to
45% on
large industrial code bases
like AAA video games,
C++ WinRT,
all these kinds of things.
If you're still on
an older version of the tool set
then I would highly recommend
upgrading if you can
we're still ABI compatible
back to 2015
so if you're using the 2015 tools
then we will still link against
any of your dependencies and you should
hopefully see massive build improvements. So, yeah, please let us know if you have any success stories, or any problems as well, because if we've introduced any regressions or anything, we'd love to hear about that. But yeah, successes are good too.
I mean, that's great, because so often you hear people say,
oh, well, we just don't have a compelling business reason
to upgrade our compiler.
And I try to point to a better optimizer, better warnings, but I like the idea of being able to say, maybe upwards of a 50% build improvement or something. Just that would be helpful.
Yeah, so people can even just give it a shot, time their build, run Build Insights on it, and they should see those numbers. Even in terms of runtime as well, we've been working hard on
the auto vectorizer,
AVX2 support,
AVX512 support,
just getting more and more vectorization opportunities,
better optimizations in a number of areas.
So there will be blog posts coming
about backend improvements as well in the coming weeks.
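As a simple illustration of the kind of loop an auto-vectorizer targets, here is my own example; /O2, /arch:AVX2, and /arch:AVX512 are standard MSVC options, not something specific to this release.

    // A straightforward loop over contiguous data is a typical candidate for
    // auto-vectorization. Built with /O2, MSVC can vectorize it; adding
    // /arch:AVX2 or /arch:AVX512 lets the vectorizer use wider SIMD registers.
    void scale(float* dst, const float* src, int n, float factor) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i] * factor;
    }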
Awesome. Very cool.
Okay. Jason, you have anything else you wanted to go over?
I think that covers it for me.
Okay. Sy or Kevin, anything else you wanted to share
before we let you go?
I think that covered everything that I wanted to talk about.
Yeah, nothing for me, really.
Just go try Build Insights.
All right.
I'm definitely going to check it out.
Build Insights.
Try the most recent updates
try the preview
let us know if you have any feedback
awesome
thanks so much for coming on the show again
thanks so much
thanks for having us
thanks so much for listening in as we chat about C++
we'd love to hear what you think of the podcast
please let us know if we're discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about
that too. You can email all your thoughts to feedback at cppcast.com. We'd also appreciate
if you can like CppCast on Facebook and follow CppCast on Twitter. You can also follow me at
Rob W. Irving and Jason at Lefticus on Twitter. We'd also like to thank all our patrons who helped support the show through
Patreon.
If you'd like to support us on Patreon,
you can do so at patreon.com/cppcast.
And of course you can find all that info and the show notes on the podcast
website at cppcast.com.
Theme music for this episode is provided by podcastthemes.com.