CppCast - Visual C++ Updates
Episode Date: March 21, 2019
Rob talks to several members of the Visual C++ team about Visual Studio Code, the upcoming Visual Studio 2019 release, and more.
Marian Luparu is the Lead Program Manager of the C++ team responsible for the C++ experience in Visual Studio, VS Code, as well as Vcpkg. Sy Brand is Microsoft's C++ Developer Advocate. Their background is in compilers and debuggers for embedded accelerators, but they're also interested in generic library design, metaprogramming, functional-style C++, undefined behaviour, and making our communities more welcoming and inclusive. Tara Raj is the Program Manager for the C++ experience in Visual Studio Code and Vcpkg. She is interested in developer tools and Linux. Bob Brown is the engineering manager for C++ experiences in Visual Studio and Visual Studio Code.
Guests: Marian Luparu (@mluparu), Sy Brand (@TartanLlama), Tara Raj (@tara_msft), and Bob Brown
Links: Visual Studio 2019 Launch Event; Visual Studio 2019 Preview 2 Blog Rollup; Visual Studio Code C/C++ extension: January 2019 Update
Sponsors: Wanna Play a Detective? Find the Bug in a Function from Midnight Commander; False Positives in PVS-Studio: How Deep the Rabbit Hole Goes
Hosts: @robwirving @lefticus
Transcript
Episode 191 of CppCast, recorded March 20th, 2019.
Sponsor of this episode of CppCast is PVS Studio.
PVS Studio is a static application security testing tool for detecting errors and potential vulnerabilities in the source code of programs.
In this episode, we talk with members of the Visual C++ team about recent improvements in VS Code,
and all about the upcoming Visual Studio 2019 release. Welcome to episode 191 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host Rob Irving, and for this episode I was in Redmond, Washington for the Microsoft MVP Summit.
So I had a chance to sit down with several members of the Visual C++ team.
We talked about Visual Studio Code,
Visual Studio 2019, and the state of Visual C++ conformance with the standard. Here is the interview. Why don't we go around the room and introduce ourselves. Simon.
Hi, my name is Simon. I've been, I guess this is my third time on the show. So I'm now working
on the C++ team at Microsoft. I'm their developer advocate,
which means I'm in charge of community stuff and our blogs,
making sure that we're helping people,
that people know what to expect from our tools and all that stuff.
Okay.
And Simon, so like you said, we had you on two times before.
I think we had you on just a month or two before CppCon.
Is that right?
Yeah, I guess so.
And you joined Microsoft right after we had you on.
How's it been going so far?
Is the implant installed?
Yeah, it's been really great.
The implant is definitely working.
Yeah, it's been very, very different from what I was doing before.
Because previously I was just like, you know, this is the compiler backend and you're working on this
and this is the only thing you're doing.
Whereas now it's like lots of different bits and pieces
and trying to kind of form my own idea of what I should be doing.
And I get to spend a lot of time helping people
and doing stuff which I was doing in my spare time already.
So yeah, it's been really great.
Okay.
Hello, I am Marian Luparu.
I am the program manager lead for the C++ team.
I'm working on general Visual Studio support for C++,
helping with Visual Studio Code and VC package in general.
And I'm very thrilled to have Simon part of the team.
I also have Tara joining us in the team as well.
So do you want to introduce yourself, Tara?
Sure. Thanks, Marian.
I'm Tara Raj.
I'm a program manager on the C++ team,
currently focusing on Visual Studio Code and VC package.
Previous to this, I was at Microsoft
working on the Windows subsystem for Linux
and SQL Server on Linux. So as you could
imagine, working on a lot of Linux-y things. Okay, we have talked about the Windows Subsystem
for Linux a couple times on the show. Anything interesting going on there between like what you
were working on and Visual Studio Code? Yeah, there's quite a bit of integrations that are happening between
WSL and Visual Studio Code, especially if you're on Windows and you want to target a Linux
environment. Okay, awesome. Hi, I'm Bob Brown. I'm the engineering manager for the C++ experiences
in Visual Studio Code and also in Visual Studio. Okay, so I guess, Bob and Tara, we can start off by talking to you
about some of the changes coming to Visual Studio Code.
So in just two weeks,
I think we're getting the big Visual Studio 2019 update,
but Visual Studio Code has kind of been
getting more continuous updates over the past few months.
Is that right?
That's right.
We try to release the extension every month or two or so.
We put out an update with new features.
In recent months, we put out document comment support.
We're now working on IntelliSense caching or AutoPCH,
as Visual Studio users might be familiar with,
to improve the performance and the speed of IntelliSense features,
such as autocomplete, browsing, you know, quick info tooltips, and features like that.
Okay, can we talk about some of the big features that are just Visual Studio Code in general? Like, I know Live Share is a pretty big thing that's coming to Visual Studio and is also going to be in Visual Studio Code. Is that right?
That's right. Is there anything specific you'd like to know about Live Share?
I mean, can you just describe the experience?
Because we may have talked about it a little bit on the show, but not in depth.
Sure. LiveShare gives you a collaborative session.
You can basically share the...
So a developer who is working on a code project,
if they want to be able to share their project with another developer
who may not be in the same room as them,
they could be in the same room or they could be on the other side of the country or across the world.
They can set up a collaborative session which allows both developers to see the same code,
get the same IntelliSense experience, and debug.
They get to do basically everything you can do in VS Code or Visual Studio across this collaborative session.
Okay, and this can be VS Code to VS Code or VS Code to Visual Studio?
That's correct.
In any direction?
Absolutely.
Awesome.
And if I have a code base that I'm working on, does the other developer have to also
have that code base checked out?
How does it work exactly?
No, that's the great thing about LiveShare is basically the IntelliSense features are
all powered by the host.
So whoever is creating the shared session is the one who hosts basically everything.
And when the collaborator who is connected to the session wants to see anything in that workspace or in that project,
those requests basically get sent to the host and the relevant information gets sent back to the collaborator.
So they don't have to clone the entire repository.
They don't have to have the same project.
They don't have to have the code files on their physical machine.
All of that stuff is served through the live share session.
So if I don't have the code on my machine
and I want to start kind of poking around in it,
does it just kind of serve me those files on the fly?
Exactly. That's exactly how it works.
So the message is, like, you can scroll through the file.
You don't have to be looking at the exact same section of code that the host is looking at.
You can look anywhere in the project that you'd like to.
I guess, do we want to talk a little bit more about some of the C++ extension features, productivity enhancements?
Yeah, absolutely.
So Bob mentioned a couple of the big features
that we had come in,
one of which was documentation comments
and auto PCH.
So to go into a little bit more detail on auto PCH,
which stands for auto pre-compiled headers,
what we're doing there is
it's essentially IntelliSense caching. And what that allows for is
if you have a large file with many headers, by pre-compiling them, we can actually populate
our IntelliSense recommendations much quicker. And that definitely falls into our productivity
category where you have your IntelliSense populated much quicker than we previously showed them to you.
And in addition to that, another experience that we're working on is the configuration experience.
We understand that to get started with C++, it takes a lot.
You need to have a compiler, a build system, and things like that. You need to kind of configure all those things by hand, like the path to your compiler, what
you want to call to invoke the compiler, things like that.
Exactly, yeah.
And if you're a first-time user or a student, that makes your life that much more difficult
to get started.
And especially if you're using Visual Studio Code and you just want to get started quickly,
that's a barrier for you to get started with C++. So something that we just introduced in our March update is to build and
debug an active file. So if you have a file open, all you need to do is hit F5 and we generate
your tasks and your launch file. We help configure that for you and you just get
building and debugging
right away. That sounds awesome. What else is new with Visual Studio Code? I saw something
about the Insiders program. Yeah, the Insiders program is something that most extensions as well
as Visual Studio Code offer. So the Insiders program for the C++ extension is so that you can get some of our quick updates as soon as we check
them in. So we have anywhere from two to three Insiders builds before we have a release. So as
Bob mentioned, we release monthly or bi-monthly, somewhere in that six-week range. And so
as Insiders, you can get updates every few weeks. And this is really helpful for us because as we're developing features, we want to know if there are certain cases that maybe we hadn't considered or if there are regressions, painful bugs for you or just experiences that you really enjoy.
And we want to see how much you enjoy them. And if you're using them, it's great for us to get that early feedback
just to make the extension as awesome as possible. Nice. And we've definitely talked about this on
the show before, but Visual Studio Code is available on which platforms? It's available on Windows,
Mac OS, as well as most Linux distros. Okay, that's great. Anything else we haven't talked
about yet that you want
to bring up? Yeah, so one of the things we've been looking at with our survey feedback is that
many of our users are looking to integrate their build systems and other things. Configuration is
something that comes up repeatedly in our surveys. They want to be able to get coding faster, so
that's where this build and run active file feature came from.
Another thing that's challenging for a lot of people is configuration. I want to include a header file and I want it to just work. I don't want to have to go through all of the pain of
searching for the repository that owns that header file or that library and download all the pieces
myself. I've used it before. I just want to simply, you know,
tell the extension to just go find it for me and bring it down. So one of our areas of investment
is going to be integrating more closely with VC package. Other feedback that we've seen is that
people want better integration with build systems like Make and CMake. So those are other, you know,
other experiences that we are investigating implementing into our extension, just to do the configuration for IntelliSense. They've already done it before; they've already configured their Make and their CMake files. You know, they don't want to have to go through and configure yet another extension again when they've already done it, and basically the experience they expect is already coded up in their build system,
so integrating more tightly with these build systems allows them to have a cleaner experience.
And this will all be in the Visual Studio Code C++ extension?
Or are there like another extension for CMake?
There already is an extension for CMake.
And we do integrate with that particular extension.
So we have an API that allows us to send them a message and say,
hey, what's the configuration for this particular source file? And they'll send that information
back to us. So we do have integration with a CMake tools extension. It's fairly popular.
Anything else we haven't talked about with VS Code yet?
Yeah. So something that we also have planned that Bob had mentioned is VC package integration. And
VC package is a C++ package manager, and it's cross-platform as well.
So the plan here is to get that integrated into our VS Code extension.
And an experience that this might look like is, let's say you have a header file, but you don't actually have that package installed.
This would allow you to simply use vcpackage in the back end to install that
library for you. So just by seeing the header file, vcpackage is able to try to go and find
out what the missing package is and grab it for you? Yeah, it can find the missing package for
you and we can grab that. And there would be a little bit of an installation or configuration
experience in the back end. And we're still
working through the details of what this might look like. But the fact that we could integrate
VC package into the extension, we see a lot of potential there to make your package management
experience just a lot more seamless. Right. And VC package also, I think,
started off on Windows, but also now has full Linux and Mac support. Is that right?
Yep, it's cross platform. So it works on Windows, Mac, and Linux and across all of those platforms as well.
Okay, sounds great.
I wanted to interrupt the discussion for just a moment to bring your word from our sponsors.
PVS Studio performs static code analysis and issues warnings for fragments of code
that are likely to contain errors
and potential vulnerabilities.
The tool supports the C, C++, C Sharp,
and Java programming languages.
At the moment, PVS Studio has 422 diagnostics
for C and C++,
which enable you to detect dereferencing of null pointers,
array bound violations, typos, dead code,
resource leaks,
and other kinds of errors.
PVS Studio supports working with Visual C++, GCC, and Clang compilers, as well as a number
of compilers for embedded systems.
The analyzer works in the Windows, Linux, and macOS environments.
Follow the links in the description to two new posts from the PVS Studio team.
The first one suggests checking your skills to find errors.
For the second one, you can read about a non-obvious case of an analyzer false positive
Okay, so let's move on to Visual Studio, which has a big release coming out. When is the big release?
April 2nd.
Okay, so there's going to be a live event that people can tune in to and hear everything about Visual Studio 2019.
It was only in September last year at CppCon that we were still talking about 2017 Update 9 and everything that comes with it, like Just My Code debugging and step-back and other cool features. But in the meantime, the team was already busy working on Visual Studio 2019.
So 2019 is really full of new features, improvements in compiler,
improvements in the libraries, obviously, IDE productivity.
So we're really excited to head over to April 2nd and talk with everyone about it.
You know, I was looking back at past episodes, and it was almost exactly 100 episodes ago
that we had Daniel Moth on to talk about the Visual Studio 2017 release. And I think I had
you and Kenny on a couple weeks after that. So what would you say are the big Visual Studio 2019 features
that listeners may not have heard about yet?
They may not have heard about.
Well, if you follow our blog,
I think some of them may not be surprises to you,
but the big improvements in 2019 would be Live Share
is something that came up with the Visual Studio Code conversation we were just having.
This is a major boost in team productivity that we think is going to move the needle.
And it brings teams together: whether they're located in the same office space or geographically in different places, this kind of reduces the barrier to collaboration.
We always make improvements in build throughput. So for example, we just announced linker improvements that make the linker faster by a factor of 2.4x compared with 2017, and in 2017 we had also announced around a 2x improvement compared with the previous release. So we keep making big improvements like that. It's always worth upgrading.
There are always more things, and we can go into more detail about the productivity features, but something that I want to bring up is an important point: every time we ship a new release, people may perceive that as a barrier to adopting and moving between releases.
So we actually take that very, very seriously.
And we put a lot of effort in making sure that 2019 release is a friction-free upgrade for people coming from 2017. So from basic things like you can install 2019 side by side
to more advanced scenarios like using the old tool set
that shipped in 2017 that you may still depend on
inside the 2019 IDE,
so you can take advantage of all the IDE features
while still shipping your product
on the version that you locked in.
Yeah, so I actually did that myself with Visual Studio 2017.
I'm still using the 2015 C++ compiler on some projects.
So you're saying with 2019, I can choose to use the 2017 compiler
or maybe even the 2015 compiler?
You can even choose the 2015 compiler.
We want to make it as easy as possible of a decision for you to move to 2019
and take advantage of all the IDE features.
And not only that,
but if the reason why you're staying behind with 2015 is not because of your product or the way you ship, but let's say because you have a third-party dependency that's locked on a specific binary tied to the VC redist from that version, you can actually compile all your code with the 2019 toolset.
You could consider moving if that's your choice
because we do have binary compatibility
between all these three versions of the toolset.
So 2015, 2017, and 2019, you can really mix binaries,
build them with different toolsets.
As long as you ship the latest VC redist that comes with 2019,
mix and matching
should work just fine. So really,
the high-level point of this conversation
about upgrade is, if you have
any second thoughts
or you're nervous about upgrading, you shouldn't
be. There's a lot of dials
in 2019 to make it as easy for you
to move to the latest version
and take advantage of all the improvements.
Okay. sounds great.
Do you want to dig more into some of the C++-specific
productivity improvements?
Absolutely.
So I was mentioning the build throughput improvements,
so 2.4x linker throughput improvements.
The tool set brings runtime performance improvements as well.
We recently did some work in using the SSA optimizer to also optimize the vector intrinsics.
So that brings around a 2.8% improvement in general code bases.
So by just upgrading and recompiling your code,
you can take those benefits.
If your code is amenable to auto-vectorization
or your source code actually uses the vector intrinsics itself.
And speaking of vectorization, well, SIMD in general, we added support for OpenMP. So it's not full support for OpenMP 4.5; we only implemented the SIMD part of it, but that is specifically geared to ML libraries that are making heavy use of that. So by just recompiling and enabling those OpenMP SIMD extensions in our compiler, you can get the benefit of a performance boost at runtime.
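To make that concrete, here is a minimal sketch of the kind of loop the OpenMP SIMD directive applies to; the enabling switch is our assumption (we believe it is /openmp:experimental in the 2019 toolset), so treat the flag and the code as illustrative rather than the team's own example.

```cpp
// Illustrative sketch: an OpenMP SIMD loop of the kind this support targets.
// Assumption: compiled with MSVC's OpenMP SIMD support enabled
// (we believe the switch is /openmp:experimental in the 2019 toolset).
void scale(float* out, const float* in, float factor, int n) {
    #pragma omp simd
    for (int i = 0; i < n; ++i) {
        out[i] = in[i] * factor;   // the directive asks the compiler to vectorize this loop
    }
}
```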
There's a lot more improvements in
the compiler backend, so
just recompile and get more stuff for free.
That's always great.
And then, as you move into the IDE,
there's a lot of
support, for example,
for code analysis.
We added a lot
more rules for our core checkers.
So, for example, concurrency checks, and coroutines now have rules inside the static analysis to kind of catch those kinds of easy mistakes that you can make with co_await, where you capture by reference and then you co_await and your reference may be dangling at that point,
because, you know, coroutines can move around,
so you don't want to rely on that.
Those kind of checks would be available, and, yeah.
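To make the co_await mistake concrete, here is a hedged sketch of the pattern such a check would flag; it uses the standard C++20 <coroutine> header and a made-up minimal task type, not the checker's own example (MSVC at the time shipped this under an experimental header and flag).

```cpp
// Hedged sketch of the dangling-reference-across-co_await pattern described above.
// The task/awaiter types are minimal stand-ins just so the example is self-contained.
#include <coroutine>
#include <iostream>
#include <string>

struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

struct resume_later {
    bool await_ready() const { return false; }
    void await_suspend(std::coroutine_handle<>) {} // real code would schedule the handle; elided here
    void await_resume() const {}
};

// Risky: `name` is a reference into the caller's frame. If the coroutine is resumed
// after the caller's temporary has been destroyed, `name` dangles. Taking the
// parameter by value keeps a copy alive in the coroutine frame instead.
task greet(const std::string& name) {
    co_await resume_later{};
    std::cout << "hello " << name << '\n';
}

int main() {
    greet(std::string("world")); // the temporary argument dies at the end of this statement
}
```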
Not to cut you off, but coroutines are still, like,
in an experimental stage, right?
Correct. They're still in the TS, but since they got approved for C++20, we're very excited about that. So coming in an update, maybe, we will probably take them out from behind the experimental flag,
and they will be fully supported as a C++20 feature.
I believe the implementation we have in the compiler is pretty close to what's in the C++ standard.
But the thing that I'm most excited about when it comes to code analysis is the fact that all of those
rules that we invest in and we're trying
to make your code, analyzing
your code and raising those
issues that might be
hiding in your code, we bring
them in the editor now.
So
we have on by default the ability
to run code analysis in the background.
And as you code, you'll get squiggles in the editor telling you about this error.
So in the past, with previous releases,
you would have had to build with slash analyze on the command line.
That is not the case anymore.
It comes for free.
We're bringing that information to you,
and only for the files that you're actively editing.
It's not like an overwhelming list of issues
that you would get from your 2 million lines of code
if you throw slash analyze at build time.
This is geared towards the changes
you're actively making in the editor.
That sounds helpful.
Yeah.
Let's see.
What else do we have?
I'm sure I'm going to forget a few things,
but I'm not going to forget any of them
at the launch event, the second of April.
So I'd like to invite everyone to tune in to that. We're going to have
some cool demos there. Let's see, another one is
the out-of-proc debuggers. So one of the challenges we had
with symbols in debugging is that by bringing them all into memory, you'd run out of memory.
So for really, really large projects,
this is not something that happens with regular-sized projects,
but this was a problem for large codebases.
So now, by default, all of those symbols go into an out-of-proc process.
It's another process that loads those symbols
and brings that information inside VS
and basically keeps the memory usage a lot lower.
Other than that, there's obviously productivity features
that help you write faster code.
So things that you would normally do yourself, but it's sort of cumbersome to get through the process of writing. So for example, you start using a symbol and you have to go to the top and do a pound include of that header.
Now Visual Studio kind of knows, or learns,
where that type is defined and suggests
which header to add automatically
like an auto-fix it. You just do control
space and that fix
is going to get in there. Okay, so this
isn't just, you know, you type
out vector int
and it suggests you go and include
the vector header.
It does that too. It will do that, but it will also
go and find
the correct header for some
type that's within your code base?
Correct, yeah. So you bring in a new type, it will suggest the header where it's defined, and then it will
use some heuristics to kind of pick
the public header
from the include graph
that we bring in. So even
for third-party libraries,
it will probably work
and we're trying to do our best
to not bring in internal headers from
that library as suggestions for pound include.
And we would
love people to try it and give us the feedback.
What if it's a type that might be defined in multiple headers?
When that happens, we would probably list all of those options.
And then you get to pick your own decision.
So like include std string or include boost string,
something like that, the header for either of those. Yeah, for example,
if you don't qualify the namespace, then it would
be ambiguous, so we'll probably suggest both.
Okay. And we would also, like
you reminded me that there's
also using namespace qualifications,
so if you just name a type
without a namespace, we would suggest
to either using namespace as well, not
only the pound include for
that type. Or
if you want to use fully qualified names,
we just replace that
type name with a fully qualified
name with a full namespace rather than using
depending on your preference.
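As a hedged illustration of those quick fixes (our own sketch, not the team's demo), here is the before-and-after shape of the add-missing-include suggestion:

```cpp
// Before: the type is used without its header or namespace qualification.
//
//     vector<int> values;        // squiggle: 'vector' is not defined
//
// Accepting the Ctrl+Space suggestions might produce either of these shapes:

#include <vector>                 // suggested missing #include

std::vector<int> values;          // fully qualified name...
// using namespace std;           // ...or a suggested using-directive instead
// vector<int> more_values;
```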
So yeah, those are some of the things
in 2019 release.
Other
than that, we're doing
work on cross-platform development. That's something that we talked about in the past. One of the important scenarios we want to make sure of is that Visual Studio is good for you to develop with, not only for targeting Windows, but also for targeting other platforms: Linux, iOS, Mac, and then Android.
So in 2019, we're evolving that work that we started in 2017
with the CMake integration.
You don't have to generate solutions and projects in VS.
You could just point it to the CMakeLists.txt, and Visual Studio will do all the work behind the scenes to kind of collect the information needed for the IDE:
the IntelliSense information,
how to build the CMake project,
how to debug the CMake targets.
Since you brought it up, I'm not a CMake expert,
but the CMake integration is getting better.
Is that right?
Correct.
So it was available in 2017, and the thing that we're actively working on right now is to bring a lot more helpers for you to write CMake projects inside the IDE. So for example, there's a targets view that lists all the targets. You don't get only, like, the disk view with all the CMakeLists.txt; you can navigate the CMake targets that you have defined
with the explicit files that are coming in as part of that target
for the particular configurations you target.
So for example, in a target, you could have conditionals of, say,
this file only comes when I target Windows,
this other file comes when I target Linux, and so on.
There's a view inside Solution Explorer that,
depending on the configurations you're actively targeting right now,
would filter that.
So you get to see what's going on at build
or what files are accessible or not.
On the other hand, we're also working on integrating
a lot of functionality in VS that is available today
for MSBuild projects to also make it available for CMake.
So not everything lights up today, full disclosure, but
we're making great progress. For example, Just My Code that I was mentioning at the
beginning that we added in 15.9, 2017, now in
2019 is also available for CMake projects.
Can we go into what exactly Just My Code is?
Thank you. So this is the ability to, when you step through code,
even when you're stepping into, to step over the code that is not yours.
So you're stepping over...
You don't want to step through the standard library.
Exactly.
That's, I think, the primary scenario that comes up.
You don't want to go into the guts of the algorithm implementation,
but if you have a lambda that you call often for the sorting algorithm,
you want to step into that and something goes wrong with the sorting.
And you can always opt out of that.
If you really, really want to find Stefan's bugs,
you would go in the library.
It's easy to do that.
Stefan doesn't write bugs, does he?
I don't think so.
I don't think so, but...
We always live and learn.
Yeah.
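Here is a small sketch (our own, assuming an ordinary debug build) of the scenario they describe: with Just My Code on, stepping into the std::sort call skips the library internals but still stops in your comparator lambda.

```cpp
// Sketch of the Just My Code scenario described above: "step into" on the
// std::sort call skips the library's internal frames but still stops inside
// your own comparator lambda.
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v{3, 1, 2};
    std::sort(v.begin(), v.end(),
              [](int a, int b) {
                  return a < b;   // the debugger steps into this lambda, not the sort internals
              });
}
```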
So the CMake support is improving,
and one of the things that we work very hard on
is responding to customer feedback.
So we have lots of customers coming in,
trying the CMake experience, giving us the feedback.
We're working very hard to resolve all the developer community issues
that get reported.
And I feel that the experience is vastly improving,
and I'm very excited to get more feedback from the people.
So please keep the feedback coming.
Okay.
I think one thing we talked about with one of the 2017 updates was template IntelliSense.
But that's also getting some improvements in 2019?
Absolutely.
That's something that we're going to continue working on it.
And it's part of our efforts to help C++ developers write as modern C++ code as possible.
Templates are not something new, but this was a sort of limitation of every IDE out there: inside a template,
when you're writing the template body,
it's very hard to kind of understand the type information
because it's very, like, it's just a T
and you know nothing about it.
T could be anything.
Yeah, exactly.
And most of the times people do know
kind of what some Ts could be.
And even if you instantiate a template,
you do not get the instantiation place.
You don't get the help to write the template body.
So that's basically what template intellisense is.
It allows you to go inside a template definition
and say T could be int or it could be string
or it could be some other type that's your user type.
And you can try to see how the template behaves
if the template gets instantiated with that specific type.
So then when you're in the template body, you could do like bar dot
and if bar is of type T, you'd get the string members
if that's what you try to instantiate it with.
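A hedged sketch of that workflow, with made-up names: while editing the template body you tell Template IntelliSense to treat T as std::string, and member completion behaves accordingly.

```cpp
// Sketch of the scenario above: editing a template body with Template IntelliSense
// told to pretend T is std::string (the function and names are made up).
#include <cstddef>
#include <string>

template <typename T>
std::size_t measure(const T& bar) {
    // With T "instantiated" as std::string in the IDE, typing `bar.` here
    // completes size(), substr(), and the other std::string members.
    return bar.size();
}

// A real instantiation elsewhere in the code base, which the IDE can also
// use to infer candidate types for T:
std::size_t demo() { return measure(std::string("hello")); }
```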
And that was available in 2017 as well.
With 2019, what we're working on,
we just added support for nested templates
so you can nest as deep as you want to go
and then specify the multiple levels of template arguments
that you get from the code,
basically how it's defined.
We also honor default arguments,
which is something that we didn't do in 17, unfortunately.
But you don't have to specify your argument again.
We're just going to default to that.
And we also remember your choices. So if you tried your template with multiple sets of arguments,
then you will all see them in the list,
and you can often switch between them.
You don't have to retype them.
So we're working on the usability of the feature.
There's more things coming in this space
that will go beyond what we have in the RTW release that we're going to release on April 2nd. One of the most requested features is the ability to kind of scout through the code and find all the instantiations for the given template that you're working on, try to autocomplete it for you, and kind of list those for you to pick rather than you having to define them yourself.
Okay.
What else can we talk about
with regards to productivity improvements?
Step backwards is new, right?
That was actually in 2017 as well.
It is an enterprise-only feature.
But yeah, it basically allows you to
step back, really.
While debugging,
we take process snapshots
and it's not a sort of a
gimmickry of moving the execution
point in the debugger.
It actually gives you
the full context of all the local variables
of the whole heap
space that you had at the time where
the process snapshot was taken. So you can go back between breakpoints. You have multiple breakpoints, and if you remember, oh, let me check what the variable was that I forgot to check when I hit the first breakpoint, that's very easy to navigate back and forth.
Is there any limit to, like, how far backwards you can go?
There's a limit of memory, so we will take snapshots
and if
the memory conditions are not
satisfactory, we will probably drop
some of the snapshots that are oldest.
So
I think there might be an upper
limit of 100 snapshots as well.
That's a lot.
It should be enough to get started,
but this is the kind
of feature that we're able to tweak if needed. So we're waiting for customer feedback to hear more about how people are making use of this, how successful they are, if something doesn't work.
Okay.
Yeah, something else that's available in 2019 is IntelliCode. So this is the AI-based IntelliSense support.
And that's something that you need to get
as an extension to 2019.
One of the reasons, tactical reasons,
why we do that is because we work so fast
at iterating over the model that we give users
such that the AI suggestions are relevant to them.
We often add more libraries that we train on.
So we're kind of shipping out of band and kind of bringing updates outside the cycle of Visual Studio. And we're going to keep doing that until we're very happy with the level of coverage that IntelliCode can provide.
So what exactly does IntelliCode do for you?
So, one example would be, if you're in a for loop and you're using a vector: if you're at the starting point, you're probably going to get suggested the begin method of the vector, and if you're in the condition phase, you're probably going to get suggested the end method of the vector.
Because of the way we run the training,
we see that that's a common pattern
that people would use that code
and kind of the sequence of calls
that they would make for that type.
This is the kind of suggestion that you usually get.
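As a hedged sketch of that pattern (our own illustration, not IntelliCode's training data), the ranked suggestions would show up at these points in an ordinary vector loop:

```cpp
// Sketch of the for-loop pattern described above, annotated with where
// IntelliCode's starred suggestions would typically appear.
#include <vector>

int sum(const std::vector<int>& values) {
    int total = 0;
    for (auto it = values.begin();   // at the init, begin() is suggested near the top
         it != values.end();         // in the condition, end() is suggested near the top
         ++it) {
        total += *it;
    }
    return total;
}
```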
So it's not limited to just vector,
but basically for all the code bases that we train on,
we may suggest a common pattern.
So let's say in an if you would check the size,
and then inside the if you would probably access a member.
So you'd maybe get the suggestion for .at to be at the top rather than all the other thousand methods that an STL type would have, and things like that.
Okay. Can you tell us how many or what kinds of open source libraries you're training this on? I mean, STL usage, Boost, what else are you training on?
So we have, I think, about a hundred code bases that we train on. The interesting bit with that is that when we train on those code bases, it's not necessarily
that we're going to get information about how those codebases are being used.
So the best information we can give you is about STL usage, because for all the libraries
that we train on, those libraries use STL. So then this is how the AI parsing basically handles detecting patterns of library usage. So if we add other libraries for training, for example, that use Boost, then we're going to have even better suggestions for Boost. And that's really what we're working on: finding those code bases that consume the library that we want to give better suggestions for.
Is there ever going to be a way to train it on my code?
So if I have a whole bunch of types and I want it to auto-predict how I use those types, is that going to be doable?
Yeah, and that's something that we're working on right now, basically the training code, working on packaging it up and making it
available as part of the VS IDE
and as part of
the DevOps
experience as well, to say, I want to train
on my code base and then I want to download
the model to kind of augment the model that
Microsoft ships with.
So then, yeah, basically you'd get suggestions
for, you know, a two million lines of
code base,
what are the patterns that are coming out of that.
Okay. Any concerns you're going to program us all out of our jobs?
I don't think so.
But what we want is to make you as productive as possible
and write as correct code as possible.
Okay.
But I wouldn't be concerned about that at all.
Okay. Anything else we haven't talked about yet with VS 2019?
I'm a big fan of the back end stuff that we've been doing that we haven't mentioned. There's a new, more aggressive inlining switch. So obviously inlining is super important for C++ because it's one of the main ways in which we remove
the cost of abstractions from your code.
And what we don't want people doing is just writing force inline everywhere
because the compiler's heuristics didn't quite get it right.
Because then we're making users make decisions
which maybe they made the wrong decision.
Compiler might know what it's doing.
Yeah.
So what we're getting now is we have this new /Ob3 switch
which tells the inliner to be more aggressive.
Inline more so the users don't have to just annotate everything
with force inline.
They can just say, okay, well, we'll try out this new switch.
We'll see if it buys us some performance.
We can turn it on, we can turn it off and play around.
So it gives people a bit more control over the inliner behavior
without having to do a bunch of extra work.
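As a small sketch of that trade-off (the /Ob3 spelling of the switch is our reading of the discussion, so double-check it against the release notes):

```cpp
// Sketch of the trade-off described above. Rather than forcing the decision
// per-function with __forceinline because the default heuristics were too shy...
__forceinline int squared_forced(int x) { return x * x; }

// ...you can leave the function unannotated and build with the more aggressive
// inlining level (believed to be the /Ob3 switch in the 2019 toolset), letting
// the compiler decide while keeping an easy way to turn it back off.
int squared(int x) { return x * x; }
```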
Another really great thing is
a new exception handling
metadata scheme.
This is the
information
which is encoded in the binary
for
unwinding the stack and
doing all that nonsense.
We tried to optimize how all of this is stored
because having a bunch of information in your binary
can bloat it up and cause all these problems
that people love to complain about.
So we tried to optimize the size of this data as much as possible.
So we brought it down 60%.
It made it 60% smaller.
Okay.
So if your program's using a lot of exception handling,
then you can see some really...
Significant reduces of size.
So we're seeing, like,
if you have heavy exception handling usage,
you could see, like, even a 20% binary size reduction,
which I'm really, really excited about.
So that's under a compiler switch.
So you can try out your programs
and see if you get improvements and let us know.
One thing I just realized we haven't talked about yet
is conformance.
So I know over the Visual Studio 2017 release cycle,
you guys kept getting more and more conformant.
What's going to be the state of 2019
with regards to C++ 11, 14, 17 conformance?
In terms of C++17 conformance, we've been done since Visual Studio 2017.
For C++ 20 conformance,
we're going to be shipping partial support for the spaceship operator.
So this is the three-way comparison operators,
the official terminology, I guess, but I like spaceships.
So there's still changes which are being made
to the behavior of three-way comparisons,
and there's still changes being made in standards meetings.
So we're not fully up to date with all of the most recent modifications to the papers,
but we have the kind of initial support which was proposed and is going to be in C++20.
If you want to use that,
that's behind a certain flag, right?
Like /std:c++latest or something like that?
Yeah, so it's under /std:c++latest.
Yeah, you'll get that.
The
three-way comparison operator, for people who don't know,
is like a...
I think it's most useful in giving you a way to automatically generate
your own comparison operators for your types
without you having to write everything.
So you can just write spaceship operator equals default,
and it will generate lexicographic comparison operators for your types,
which I think people miss that because they see,
oh, it's like a new operator and you can compare things.
But no, this is a really great feature.
It'll help people not have to write so much code,
so much boilerplate.
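A minimal sketch of the defaulted spaceship operator Sy describes, assuming a toolset and flag (such as /std:c++latest) where the feature is enabled:

```cpp
// Minimal sketch of a defaulted three-way comparison.
#include <compare>

struct Version {
    int major_part;
    int minor_part;
    int patch;

    // One defaulted operator gives lexicographic, member-by-member comparisons;
    // <, <=, >, >=, == and != all work without hand-written boilerplate.
    auto operator<=>(const Version&) const = default;
};

static_assert(Version{1, 2, 0} < Version{1, 10, 0});
```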
Some other things we've been working on is feature test macros.
This is a way to query what
features are available
in your implementation so that you can
then
make programmatic decisions about
what features you're using and things like that
and configure
build properly.
That's fully supported
in
2019.
Okay.
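A hedged sketch of how a feature-test macro gets consumed; __cpp_lib_three_way_comparison is one of the standard macro names, and <version> is the C++20 header that collects the library macros:

```cpp
// Sketch of querying a feature-test macro before using a C++20 facility.
#if __has_include(<version>)
#  include <version>
#endif

#if defined(__cpp_lib_three_way_comparison)
#  include <compare>
    // Safe to rely on std::strong_ordering and friends here.
#else
    // Fall back to hand-written comparison operators on older toolsets.
#endif
```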
A small, simple one is remove_cvref. So this is a type trait; previously, people have been using decay all the time to remove references and CV qualifiers from types.
But the problem with that is decay does more work than you need to do.
So it affects compile times, and we don't really want that.
So remove_cvref was added, and we now have support for this as well.
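A small sketch of the difference, assuming a toolset where std::remove_cvref_t is available:

```cpp
// Sketch of the decay vs. remove_cvref difference described above.
#include <type_traits>

// std::decay_t strips cv/ref but also performs array-to-pointer and
// function-to-pointer conversions, which is more work than callers often want.
static_assert(std::is_same_v<std::decay_t<const int&>, int>);
static_assert(std::is_same_v<std::decay_t<int[3]>, int*>);          // extra conversion

// C++20's std::remove_cvref_t only removes references and cv-qualifiers.
static_assert(std::is_same_v<std::remove_cvref_t<const int&>, int>);
static_assert(std::is_same_v<std::remove_cvref_t<int[3]>, int[3]>); // array type preserved
```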
So there was, I don't know if you saw, a change about aggregate initialization, where if you delete constructors, for example, you can kind of sneak underneath them by using aggregate initialization. It's like, I say, well, I don't want anyone to be able to construct this type in this way.
And you say, oh, well, I'll just use braces.
And if your type qualifies as an aggregate, then hey, it just works, which is kind of weird.
So we fixed that in the standard, and now we support that in the compiler as well.
So you can stop people constructing your types in ways which you don't want them to,
uh,
which is nice.
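A hedged sketch of the loophole and the C++20 fix being described:

```cpp
// Sketch of the aggregate-initialization loophole described above.
struct Token {
    Token() = delete;   // author's intent: no default construction
    int id;
};

// Pre-C++20: Token still counted as an aggregate, so this compiled anyway,
// sneaking underneath the deleted constructor:
//     Token t{};       // aggregate initialization, id value-initialized
//
// With the C++20 change (supported by the 2019 compiler), any user-declared
// constructor disqualifies the type from being an aggregate, so `Token t{};`
// now correctly fails to compile.
```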
Okay.
Um, also big ones: we're working on modules and coroutines. The support for that has been in there for a while because it was being worked on from, you know, people at Microsoft.
Right, on your team.
Yep.
Yeah. So we've had all of those under
experimental compiler
flags for a while,
but we've been doing a lot of work to make
sure that we're, because there's been
a lot of changes to modules, there's been changes
to coroutines, all of these
big features. We're trying to
work towards getting parity with what
the standard says
rather than just what our old implementation was. So there's been a lot of work in that area as well.
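A hedged sketch of what the experimental modules support looks like in source; the /experimental:module opt-in switch and the exact syntax were still in flux at the time, so treat this as illustrative:

```cpp
// math.ixx -- a module interface unit, sketching the experimental support
// discussed above (built with the opt-in switch, believed to be /experimental:module).
export module math;

export int add(int a, int b) {
    return a + b;
}

// main.cpp -- a consumer translation unit, shown here as a comment since it
// would live in a separate file:
//
//     import math;
//     int main() { return add(2, 2) == 4 ? 0 : 1; }
```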
Do Gaby and Gor get to go on a big vacation now or something?
Features are voted in?
I hope so.
That would be nice.
Is there anything else we haven't
gone over yet? I think we
covered pretty much all that's coming.
I'm sure I forgot something, but like I said, I'm not going to forget it for the April 2nd release.
I'd say that we're working on a lot of exciting stuff beyond the April 2 release as well. For 2017, we had nine updates. For 2019, we haven't announced how many we're going to have, but we are working on Update 1 as we speak right now, and more updates are going to come. And we're going to have more exciting announcements at the Build conference later this year around our cross-platform development.
And is Build before or after the April 2 release?
It's after. So April is the RTW release, and then by Build, I think we may have an Update 1 available for people to try out with even more exciting features. And we're going to continue working on C++20 features; to start with, concepts are being worked on right now.
Nothing to announce yet,
but we're going to keep people posted throughout the year
as we make progress on C++20.
Okay.
Since we're talking about C++20 at the end,
maybe we can finish off by just you telling me
what some of your favorite features are with C++20.
Favorite features?
Anything you're really looking forward to?
I look forward to seeing modules integrated in all the C++ build systems.
Okay.
That's going to be some work, but it's interesting work.
And once we get there, I think it's going to make modules a lot easier to use.
Concepts excite me a lot, because once they're in underneath ranges, ranges will be a much easier to use library.
So hopefully more and more people are going to use ranges.
And I'm going to pass the baton to Simon,
which I'm sure he's excited about a lot.
Yeah, of course I'm excited about my own paper,
which is hopefully going to get in: extensions to std::optional
for composing functions which will operate on the value inside the optional,
or you can compose functions which will return optionals.
It stops you from having that problem where if you have a bunch of functions
which are returning optionals
then you have to call a function check if it returned null opt call another function check
if it returned null opt so you end up just having all of this boilerplate code to check the current
state of the optional and that has to be mixed in with all of your your actual logic and so what
this interface allows you to do is just say,
okay, I want to do all of these operations, and
if any of these returns an
empty optional, then the rest are just going to be
no ops. So that's
made it through to
the library working group
as of last meeting.
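To ground that, here is a hedged sketch contrasting today's nullopt-checking boilerplate with the chained style the proposal enables; and_then and map are the names from the paper as we understand it, and they were not part of std::optional as shipped at the time, so the chained form is shown as a comment.

```cpp
// The boilerplate being described: every step checks for nullopt by hand.
// parse_int and halve are made-up example helpers.
#include <optional>
#include <string>

std::optional<int> parse_int(const std::string& s) {
    try { return std::stoi(s); } catch (...) { return std::nullopt; }
}

std::optional<double> halve(int x) {
    if (x % 2 != 0) return std::nullopt;
    return x / 2.0;
}

std::optional<double> today(const std::string& s) {
    auto n = parse_int(s);
    if (!n) return std::nullopt;          // check...
    auto h = halve(*n);
    if (!h) return std::nullopt;          // ...and check again, mixed into the logic
    return *h * 10.0;
}

// With the proposed interface the same logic chains, and an empty optional at
// any step makes the remaining steps no-ops:
//
//     return parse_int(s).and_then(halve).map([](double d) { return d * 10.0; });
```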
So could still be voted
into C++20 because it made it through the working group?
Yeah, so it's... If it's through the working group? Yes.
If it's in LWG now,
then it's going to be voted for approval for C++20. It just needs to get the wording all approved
so it can get merged in.
So hopefully next meeting, if all goes well,
that can get in.
Yes, I'm excited about that.
I'm excited about all of the big features, especially ranges.
I'm totally blanking on a lot of the smaller changes.
There's so many things going in C++20 now.
Yeah, it's looking like it's going to be a very large release.
Yeah, I think definitely even larger than C++11.
So this is going to be
a massive change in
how we program C++.
Okay, well it's been great
having you both on the show. Thanks for
telling me all about Visual Studio 2019.
And the date to remember is...
April 2nd.
Okay. Thank you for having us on your show.
Yeah, thanks guys. Yeah, thanks. Thanks so much for listening in as we chat about C++
we'd love to hear what you think of the podcast
please let us know if we're discussing the stuff
you're interested in or if you have a suggestion
for a topic we'd love to hear about that too
you can email all your thoughts
to feedback at cppcast.com
we'd also appreciate if you can like
CppCast on Facebook and follow
CppCast on Twitter.
You can also follow me @robwirving and Jason @lefticus on Twitter.
We'd also like to thank all our patrons who help support the show through Patreon.
If you'd like to support us on Patreon, you can do so at patreon.com slash cppcast.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode is provided by podcastthemes.com.