CppCast - HPC and more
Episode Date: February 9, 2016

Rob and Jason are joined by Bryce Lelbach to discuss High Performance Computing and other C++ topics.

Bryce Adelstein Lelbach is a researcher at Lawrence Berkeley National Laboratory (LBNL), a US Department of Energy research facility. Working alongside a team of mathematicians and physicists, he develops and analyzes new parallel programming models for exascale and post-Moore architectures. Bryce is one of the developers of the HPX C++ runtime system; he spent five years working on HPX while he was at Louisiana State University's Center for Computation and Technology. He also helped start the LLVMLinux initiative, and has occasionally contributed to the Boost C++ libraries. Bryce is an organizer for the C++Now and CppCon conferences and is passionate about C++ community development. He serves as LBNL's representative to the C++ standards committee.

News:
- Can I always depend on return value optimization
- Compilers and error messages
- Results of the 2015 Underhanded C Contest

Links:
- Lawrence Berkeley National Lab
- HPX on GitHub
- Benchmarking C++ Code @ CppCon 2015
- Practical Functional Programming in C++ @ CppCon 2014
Transcript
This episode of CppCast is sponsored by Undo Software.
Debugging C++ is hard, which is why Undo Software's technology
has proven to reduce debugging time by up to two-thirds.
Memory corruptions, resource leaks, race conditions, and logic errors
can now be fixed quickly and easily.
So visit undo-software.com to find out how its next-generation
debugging technology can help you find and fix your bugs
in minutes, not weeks.
Episode 44 of CppCast with guest Bryce Lelbach recorded February 8th, 2016. In this episode, we talk about relying on return value optimization.
And we talk to Bryce Lelbach from the Lawrence Berkeley National Lab.
Bryce tells us about his work at LBNL and his contributions to the HPX library. Welcome to episode 44 of CppCast, the only podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
Doing good, Rob. How about you?
Doing pretty good. I did want to apologize to listeners for missing last week.
I was on a business trip and I was planning to record
when we got back
and it just didn't work out.
I was completely exhausted
at the end of that trip.
It happens.
Yeah.
I did want to talk
just for a minute.
Yesterday was the Super Bowl.
Not really a big sports guy myself,
but I kind of realized
watching it, Jason,
that it was my home state team
against your home state team.
You know, I didn't even think about that either. But we totally beat you.
Yes. Okay. Yeah, you beat us pretty handily.
Yes, we kicked your butt. Anyway, at the top of every episode, I like to read a piece of feedback. This week we got a tweet from, oh man, I can't read this name, Jakub Zakruski.
And he wrote that he was listening to CppCast while writing some Java.
And he just has to comfort himself somehow.
So listening to a podcast about C++ is definitely a great way to improve your Java writing experience.
I think that's a great idea.
Definitely.
Yeah.
So we'd love to hear your thoughts about the show as well.
You can always reach out to us on Facebook, Twitter,
or you can email us at feedback at cppcast.com.
And don't forget to leave us reviews on iTunes.
So joining us today is Bryce Lelbach.
Bryce, did I pronounce that right?
Yeah, you got that right.
Okay.
Bryce is a researcher at Lawrence Berkeley National Lab, a U.S. Department of Energy research facility.
Working alongside a team of mathematicians and physicists, he develops and analyzes new parallel programming models for exascale and post-Moore architectures.
Bryce is one of the developers of the HPX C++ runtime system.
He spent five years working on HPX while he was at Louisiana State University's
Center for Computation and Technology. He also helped start the LLVM Linux initiative and has
occasionally contributed to the Boost C++ libraries. Bryce is an organizer for C++Now and CppCon
conferences, and he is passionate about C++ community development. He serves as LBNL's
representative to the C++ Standards Committee. Bryce, welcome to the show.
Hey, guys. It's great to be here.
That's a great bio.
I'm trying to think of one thing to pick out.
What is the LLVM Linux initiative?
So back when I was 18, I thought it would be a cool idea to try to get the Linux kernel to compile with Clang.
And so this was back when Clang was a much younger compiler.
And so it was pretty close.
It was just like a bunch of people had done some work
to get Clang close to being able to compile the kernel.
And I just sort of took the last step of taking everybody's work,
putting it together, writing the last few patches, figuring
out the last few details, and I booted
the first Linux kernel
compiled with Clang.
There's been a lot of interest in it
since then. A lot of
companies are interested in being able to ship
a Linux toolchain that is
completely under a BSD
style license. That's very
attractive to some companies.
So there's been a lot of interest,
and so now it's an initiative of the Linux Foundation,
and they have a few guys that work on it more full-time.
And there's only a few patches now
that need to be still pushed upstream to the Linux kernel
for it to be able to work just out of the box.
All right, I am going to take a big risk here
about revealing how old or young you are, but how many years
ago are we talking about?
That was, let's see.
So it would have been six years ago.
That's, I think, six years ago.
Yeah.
Okay.
All right.
Yes.
Very interesting.
Well, we have a couple news items to talk to,
and then we're going to dig into some more of all the projects you're involved with, Bryce.
So this first one, Jason, I'm going to let you introduce.
It's a discussion on Reddit about return value optimization.
Yeah, someone asked on Reddit whether or not you can always depend on return value optimization existing.
And basically the answer is no, but you should,
you know, pretend like it does. That's pretty much the conclusion. But I think it's a great
discussion that our listeners would enjoy about how return value optimization works and in what
cases it can and cannot be applied. Okay. Bryce, is there anything you wanted to add to that? Yeah, this is a fairly standard question that I get asked around the office.
And the reality is that most compilers out there are pretty good at return value optimization,
and many of them have been doing it for many years,
because it's really a very essential optimization to making C++ code fast. One of the leading reasons for slow C++ code
is lots of excessive copies.
So I rely on it frequently,
but as I always tell people, trust but verify.
So you rely on it, but check that the code
is what you expect it to be once it's compiled.
That's always good advice.
So this next one is about compilers and error messages
and pointing out how the Microsoft Visual Studio compiler
is still not giving the best error messages.
And it looks like they're reusing some of the compiler explorer that we talked about last episode.
I wonder if they possibly learned about it from CppCast, Jason.
Why don't we just assume that they did?
No, I'm just kidding.
I like to harp on the idea that you need to be using every compiler you possibly can.
It helps make your code better.
Yeah.
So I love this article, even though it's short.
It is very short.
I thought it was pretty interesting that it looks like GCC, Clang, and ICC have identical error messages with this templated code.
Very close.
Yeah.
Oh, yeah, it is slightly different, but almost identical.
Almost identical.
And the Visual Studio compiler message is really unhelpful.
This is completely off-topic, but I always feel vindicated by compiler messages putting const before the type, like I like to do.
That's just wrong, I'm sorry.
It's true that every compiler does do that in their warning messages, though.
Error messages.
I'll give you that.
It drives me crazy.
I put it after the type.
No, I'm like, clearly, I am correct.
So all the compiler writers agree with me.
I'm with you, Jason.
I guess I'm outnumbered here.
Okay. And this last one is actually a contest called the Underhanded C Contest, which just announced its winner for this year.
Or I guess this is from 2015; they announced their 2015 winner. I was not aware of this contest before, but apparently the idea is you are asked to write some C code
which looks fairly normal,
and if you were to have a visual inspection of the code,
another programmer would think there's nothing wrong with it,
but when you execute the code, it does some shady things
that will not be immediately apparent to someone just looking at the code.
Yeah. The winner is, I mean, I haven't spent enough time reading it to even really understand what the winner was doing totally.
It basically has to do with overloading floats and doubles, so that when you pass a double, it gets read as a float.
It gets read as two floats, I believe.
Two floats, yeah.
It's interesting.
Bryce, did you have anything you wanted to say about this?
I've seen this sort of issue before in numerical codes.
These are the worst sorts of bugs.
So none of the people you work with were doing this intentionally, I assume?
I'm sure that somewhere out there somebody's done something like this with good intentions, but no, not maliciously.
Right. Very cool. Well, Bryce, can you tell us a little bit more about what you do at the Department of Energy
at Lawrence Berkeley National Laboratory?
I work in a division called the Computational Research Division
and I'm in a group called the Computer Architecture Group.
My research group is about eight guys:
about five of them are hardware people,
and then there's a team of three of us, three software guys.
And our group looks at what hardware features
or what future hardware architectures would be really useful to the scientific computing community.
So we do a lot of simulations of future designs and look at how they would perform with our codes.
And we also spend a lot of time looking at how do we make current codes faster on existing hardware
or on hardware that's just about to come out.
Like right now, we're preparing for
the Intel Knights Landing processor, which we're about to have a large Knights Landing machine
delivered here at the lab. And it's a x86 architecture, but it's radically different
from traditional x86 server architectures. It's got very wide vector units and a large collection
of low powered cores. And so one of the questions we have to answer is, you know, how do we utilize this new architecture that's a little bit exotic?
Just correct me or explain better, but it kind of sounds like that CPU architecture is leaning more towards a GPU kind of architecture.
Yeah, that's actually where the KNL came from originally. It's part of a chip
line called the Xeon Phi.
And all of the Xeon Phi chips are descendants of the Larrabee project at Intel, which was
an attempt to create a graphics processor using x86 technology.
So it is very similar to a GPU architecture. It's kind of, to some degree, it's somewhere
in between because you get actual x86 cores. So they're a little bit better for general purpose
processing. They don't have some of the restrictions that you'd have in a GPU architecture,
they can execute branchier code.
So this is an architecture intended for servers specifically? This isn't something we're going to see on the desktop, as far as you know?
No. Well, you might have. The original generation of this chip came out as a coprocessor, so if you were doing some compute-heavy task,
you might have one of the coprocessors in your desktop workstation.
But mostly this is a server architecture,
and really it's an architecture intended for supercomputers.
This is not the same as the 386 coprocessor I bought once.
No, it is not.
That was a long time ago.
Yeah, that was before my time.
And post-Moore architectures.
So that's basically the idea that we can no longer rely on Moore's law and increasing computer speed.
So we're having to move more towards parallelization to get increased speeds, right?
Yeah, so not just parallelization, but we're going to have to start looking at new hardware technologies,
maybe new materials, new process innovations.
For 35 years, we've had this sort of Moore's Law era
where we've just sort of scaled up silicon.
We've had, you know, these exponential
increases in compute power and in transistor counts, and that's going to come to an end.
And we're going to need new innovation in hardware technologies to be able to continue this trend of
rising computational power in a shrinking die size.
Can you tell us a little bit about HPX and what parts of the project you worked on? We
had Dr. Hartmut Kaiser on a while ago who gave us an overview, but maybe you could talk a little
bit about your area of expertise there. Yeah, so I started on HPX about five years ago. I was
sort of the first dedicated person that Hartmut brought on to work on the project. It was a very small team at
the time, and Hartmut had written most of the code. And I came in and I initially worked on
the addressing layer, which is called AGAS. So I rewrote that to be hosted on top of the rest of the HPX infrastructure. It used to be an out-of-line
subsystem so that it sort of had its own communication infrastructure, and we rebuilt
it to run on top of HPX's communication facility. And so I did a rewrite of that, and that architecture is mostly still
there today. And I also wrote good chunks of the threading subsystem that's still around today.
So I did a lot of the optimization of the thread scheduler and a lot of research into
sort of how to make it as good as we possibly could. And I did a bunch of other things in the project.
I was the release manager for a while.
I got us up into a release cycle,
and Hartmut and I worked together to put together the unit test suite
and just really sort of getting HPX from being a research project
to being the production project that it is today.
So you mentioned their unit test suite.
HPX can run both locally or distributed, right?
Yeah.
So what was it like building a test suite for a project like that,
that you need to test the distributed nature of it also?
Well, so it's pretty difficult.
We actually have a cluster at LSU called Hermione.
If you let interns name your cluster, that's what happens.
And so it's about a 50-node cluster, and it is dedicated to unit testing HPX.
Oh, goodness.
So we run a pretty intensive test suite. We have to run parallel tests. We have to run distributed tests.
And there's a lot of tests that you'll run that are attempting to identify race conditions.
And those are not going to always fire.
You just have to sort of make a best effort to try to make sure that, hey, this regression test is going to actually reproduce this race condition if it ever shows up again.
But sometimes you don't have any
guarantee that you're going to be able to write a unit test that can catch the bug.
Right. So do you have you, well, I know you're not on the team anymore, but just out of curiosity,
if you know if they're trying the thread sanitizers or any of the newer features in GCC
and Clang to try to catch that kind of thing? Right. So I still do some work on HPX.
I'm not with the HPX team in LSU,
but some of my research here at the lab is still HPX-oriented.
So Thread Sanitizer is not something we've really looked at.
We've used Valgrind,
but a lot of these existing debugging tools are not always well
suited to debugging a runtime system like HPX. For example, Valgrind for a long time would freak
out on our context switching routine, which was a handwritten assembly. And it would just segfault
on that routine. And we just, we couldn't use it because it was not capable of handling
a very core algorithm in our library.
And so a lot of the existing tools are tricky to use
and so we rely a lot on HPX's built-in debugging and analysis facilities.
So HPX has a performance counter framework, which lets us gather all sorts of data about various events that are happening within
the runtime. And we also have a pretty robust logging system that's been very useful.
Our exceptions are very special. So when an exception gets thrown in HPX, we capture a stack trace.
We capture all the environment variables.
We capture a bunch of different information, and we pack it up into the exception.
And then the exception will get propagated from wherever it's thrown to the head node,
where it then gets output to the user.
So that's a very useful facility.
And in fact, it's something that I often wish I had
when I'm debugging non-HPX code.
Like today I had a problem that only showed up
when I ran it on 4,000 processors,
and one of them would segfault.
And the GNU debugger just doesn't work that well
on 4,000 processors.
But being able to just get a stack trace from something like HPX is really useful.
So once you scale up, it can be very difficult to debug programs.
It's one of the big challenges with distributed computing.
So did you find the bug then?
I did not find the bug, but we've determined that it's memory corruption, and also that the bug goes away if I use tcmalloc instead of the built-in glibc malloc.
Oh, that's great.
So right now the patch is to not use the C library malloc.
And I know what type of bug this is, and I'm pretty sure that it's going to show up again.
So we're going to just keep running, but now with tcmalloc.
And eventually it will rear its head in a fashion that makes it easier for me to catch it.
Could you look into bringing in HPX into the work you're doing?
Yeah, so this is actually an application that uses HPX,
but the bug only showed up in the MPI variant of the code.
So HPX was not something I was able to use here.
And not my favorite thing, having to debug MPI code.
Well, at least you narrowed the problem down. Greater than 3,999 processors
with MPI
and glibc malloc.
Yeah.
And it's the sort of bug where...
The reason that I know what type of bug it is
is because I've seen that sort of issue before.
It's pretty clear that it's memory corruption, that somebody's corrupting
some memory that the glibc allocator is using. And it's just one of those things where,
when you're debugging problems at this scale, you sort of have to rely on intuition as
much as your tools. Now, I am curious, if you built with address sanitizer, if you might catch it.
Yeah, that's something that would probably be worth trying, but I'm not sure whether my compiler on that platform supports it, because it is the Cray Intel compiler.
So I'd have to look into that. That's a good idea, though.
So going back to the work you're doing with the Department of Energy, are you working with
the scientists that are writing code to test against
these servers? And are they writing in C++? Are they writing in something like Python?
So I work with a lot of scientists. And for the sort of role that I play at the lab,
which is I play a role of a computer scientist, I probably interface more directly with application people
than other CS people do. So I really enjoy working with the scientists directly. And I have a math
background, so I'm pretty good at being able to understand what their codes are doing. So as for what we're running,
I'm sorry, can you repeat the second part?
I'm just wondering what the scientists you're working with
are actually writing in,
because I know languages like Python
seem to be more common with the scientific community
that may not have a CS degree
and may not know how to write C++.
Yeah, Python is not a technology that really scales up to petascale and exascale. So there
are a lot of people who use Python, but they mostly will use it for smaller scale runs. And
there are a lot of people who use Python in parallel or who use Python for post-processing.
But most of the mission-critical applications,
which are normally the ones that need to run at very large scale,
are written in either C++ or Fortran.
The reason to use Fortran is because it's very hard to shoot yourself in the foot performance-wise,
and you tend to get very good performance,
and Fortran code is very easy for the compiler
to auto-vectorize.
C++ is more of a recent innovation,
and it's used because C++ gives you access
to all of these powerful parallel programming frameworks
like Intel TBB or HPX or Charm++,
and almost all those frameworks are written in C++ instead of C
because of C++ facilities like resource acquisition is initialization,
which is a common programming paradigm that you see
when obtaining locks or obtaining shared resources.
So C++ has sort of become the language of choice for newer codes.
It's also sort of easier to maintain than Fortran codes for large projects.
So are you deprecating, moving away from the Fortran code base towards C++,
or are you still developing new Fortran code?
New Fortran code is being developed,
and it really depends on which lab you're at. Some labs like Sandia National Laboratory are very heavily C++ shops,
so I think the number is that something like 70% of their code bases are C++ now.
At a lab like mine, we are a basic sciences lab. There are some other labs
that have a much more targeted set of applications and a much more focused set of goals,
whereas we do basic sciences. So we support a wide and diverse range of users,
and there are Fortran codes still being developed here.
And not all of our code bases are as modern
as they could be. And to some degree, that's because we have a lot of smaller code bases that
maybe don't have as much support, whereas other labs may have a smaller number
of larger code bases, which can have large software engineering teams.
Okay. Well, while we're talking about your role there at the National Lab,
your bio said that you're LBNL's representative to the C++ Standards Committee.
What does that involve?
So that's pretty recent.
I convinced people here at the lab that it was important that we have representation on the committee.
Most of the other major national labs have a representative.
So my role as the committee representative is basically to, one, to sort of defend the
interest of the scientific computing community on the committee, to make sure that our voice
is heard, to make sure that issues that we care about are advanced.
And also, it's for me to be able to communicate to people at the lab what's going on in the
C++ standard.
For example, we develop at the lab here a language called UPC++, Unified Parallel C++.
It's an extension of the UPC language for C++.
So that's a language that they're trying to get standardized. And so
they want to know what's going on in the C++ ISO standard so that they can know how that might
affect the standard that we're putting out here at the lab. And I'm also just sort of the resident
C++ expert that people will come up to and ask all sorts of questions about the language.
So do you go to the standards meetings too then?
Yeah. So I started going with Kona and I'll be at Jacksonville and I'll be a regular attendee
from here on out.
Started at Hawaii, then.
Yeah, I figured that was a good time to start. I had just started with the lab around then.
I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors.
You have an extensive test suite, right? You're using TDD, continuous integration, and other best practices.
But do all your tests pass all the time? Getting to the bottom of intermittent and obscure test failures is crucial if you want to get the full value from these practices.
And Undo Software's live recorder technology allows you to easily fix the bugs that don't otherwise get fixed.
Capture a recording of failing tests in your test suites and debug them offline so that you can collaborate with your development teams and customers.
Get your software out of development and into production much more quickly, and be confident that it is of higher quality.
Visit undo-software.com to see how they can help you find out exactly what your software really did,
as opposed to what you expected it to do, and fix your bugs in minutes, not weeks.
I want to talk a little bit about some of your talks. You've given several talks over the past few years at CppCon and C++Now. Your most recent one was about benchmarking.
Can you tell us a little bit about the history with that talk?
Yeah.
So the story I told at the beginning of that talk was that, in like 2011, me and this
graduate student, Patricia, from New Mexico State University, got put on this project together
to do a performance analysis of HPX's threading subsystem. And we started working on it,
and we very quickly realized that we didn't really have any idea of how to do that. So
how to go about collecting performance data, how to then analyze that data and get meaningful conclusions out of it.
And actually what we learned was that it's really easy to collect data, but it's not always easy to collect meaningful data.
So we were fortunate in that Patricia's advisor is somebody who's very knowledgeable in this area. And between her and my advisor,
Hartmut, we sort of developed a methodology, which I now apply in my everyday work.
And so that talk was just sort of me sharing my experience and my techniques for performance
analysis. Because I really think that when you're trying to analyze what's going on in a parallel or
distributed program, you really need to have a lot of arcane knowledge.
And it's not always clear how you learn those sorts of tricks unless you just have gone
through the painful experience of trial and error.
Okay.
Obviously, people should just go and watch the whole talk.
But do you have any tips you would give someone? I mean, there's just a lot more to it than running
your code and seeing if it looks faster, I guess, right? Yeah. So I think the most important thing
to take out of that talk is that you absolutely must take a scientific approach to any performance analysis. And you should not make any performance
decisions whatsoever without hard data and without hard data that you're confident in.
The reality is a lot of us assume all sorts of things about how code is going to perform every
day. And while we are, you know, right, some amount of the time, we can be wrong. And
there's a tendency that when you're wrong about a performance thing, you tend to end up being
really, really wrong. Like, you assume that a trend is going to be, you know,
one way. And it's not just that the trend is, you know, 4x when you assumed it was going to be 5x. It's that it's a
completely different trend. So that just the basic assumptions that you've made are just wrong.
So I guess my mantra from that talk is really just you have to test, you have to have hard
data before you make any performance decisions. And you can't just have hard data, you have to
have hard data that you have statistical confidence in.
Okay.
So you've given a lot of talks, like Rob said.
I mean, I guess it's like six or so that you have online right now that we saw.
Do you have any favorites?
That's a hard question.
I liked the benchmarking talk.
It was slightly different from
some of the other talks you'd see
at CppCon. It wasn't a very code
heavy talk.
I think my Boost.Serialization and
Boost.Asio talks are probably
my favorite talks to give.
That's a very
code-oriented
talk. Basically, it's a
two-part talk where I show people how to build
a mini-HPX runtime from the ground up.
And I think for the people who can really appreciate that,
it can be a really valuable experience.
You sort of learn a lot about how you build a parallel runtime.
Interesting.
Jason, do you want to ask this question
about functional programming?
Yeah, so one of your talks is on functional programming.
Is that right?
Yeah.
You know, I've ended up in conversations with other C++ developers.
They're like, well, if you're not using OOP, then what's the point?
But I'm guessing you might have a slightly different take on that.
Yeah, so if you just want to do object-oriented programming, there's plenty
of languages out there that support that. And C++ is one of them. But the key thing about C++ is
that it's not an object-oriented language. C++ is a multi-paradigm language. C++ supports a variety
of different styles of programming. And that's what makes it so powerful. And that's why it's been such a resilient force in the programming community. So the appeal to me of functional
programming is that a purely functional language is much easier. It's much easier for a compiler
to infer information from a purely functional language. There's a couple of basic
facts that you can take for granted in a pure functional language. For example, you always know
what all the inputs and outputs are of any function because they're well-defined. The arguments that
are passed into the function are the inputs, and the return value is the output. The function will
have no other side effects, which means that you have a perfect view of all the dependencies of every function in the program. That is something
that, to me, as a parallel programmer, is very attractive because a lot of what I do is about
trying to figure out what are the dependencies between these two different parts of code so that
I can run them in parallel. In a functional language, you have that information
just off the bat. There's other attractive properties too. You can do a lot of powerful
stuff when you're dealing with functions as first-class objects. You can write a lot of
powerful generic code dealing with higher order functions, dealing with lambdas.
And I think the other thing, and you don't really get this in C++,
but in a pure functional language, you have the guarantee of immutability of data,
which is, again, a very beneficial feature for parallelism because if you know that every variable that you write to is just a write once variable
and that if anybody wants to
modify it, they'll take a copy and modify it. Then you don't have to worry about synchronizing access
because nobody's going to have any concurrent writes to the same variable at the same time.
So for parallel programming, functional programming is very attractive, and a lot of the ideas behind an asynchronous programming model like the one that HPX uses come directly from the functional world.
So if you take a pure functional approach in your C++ code, even though C++ is not a pure functional language, is this something that the compiler can detect,
take advantage of, optimize for in any way?
It's funny.
I was having a conversation with one of my colleagues
right before this interview
where we were basically just arguing that point.
And I think the conclusion of our argument was sometimes.
So he had an example dealing with taking a vector of ints
and then having a mapping function
that transforms it into a vector of doubles
and then transforms it into a vector of floats
where in that case in a C++ code
the compiler would probably not be able
to optimize away those allocations
if you're just passing the vectors
through the mapping functions by value or even by reference; you'd have to create multiple vector objects. In a pure functional language, the compiler could fuse those maps and eliminate the intermediate allocations. Whereas in a language like C++, the compiler is never going to see two mallocs and
be able to go and combine them into one call together. It just can't make that inference
about a library call like that. So yeah, I think sometimes you can't get all those optimizations,
but in other cases, you might be able to.
So, for example, value semantics are sort of a way of writing a function that's almost pure functional.
So if you write a function that only takes its arguments by value and returns something
by value, well, that's sort of a pure function.
And that's something the compiler might be able to optimize.
Yes, the compiler should be able to do a better job of that.
Sean Parent has a really good talk about the power of value semantics
out there, I think, from C++ Now.
Okay.
I'll have to look for that talk.
Yeah, me too.
Speaking of C++ Now,
I know you're one of the organizers of that conference and for CppCon.
Do you have any talks planned for this year?
I know submissions just closed.
Yeah, my colleague Carter Edwards from Sandia National Laboratories and I have submitted a talk to C++Now about ArrayRef,
which is a multidimensional array reference type that we're proposing for inclusion into the
C++ standard. The talk will be an overview of what ArrayRef is; we'll talk
about how you can use it, and we'll show people the reference implementation that we have.
So yeah, I'm hoping that'll be accepted. And I think we'll have a really good program this year.
We had 60 submissions.
We'll have 45 slots.
We have most of the reviews in right now.
Jon Kalb and I just had a talk this morning about it.
And so, yeah, we should have a program.
I shouldn't give a date for that, but we'll have a program soon.
Okay. And I should plug
the student volunteer program will be accepting applications very soon.
I'll probably put out the call for student volunteers a little bit later this week.
Oh, very cool. We'll post that link once we see it.
I'd greatly appreciate it. I did notice the student
volunteers around last year, and they're
very helpful, and I can imagine how beneficial it is to them to be able to attend
the conference. Yeah, I first attended BoostCon 2011 when I was a student, and it
was a really life-changing experience. And that's why I'm so dedicated to running the program.
And I think almost all the students who have gone through it have felt the same way,
and it's really been a transformative experience for all of us.
It's really a great program.
Just out of curiosity, how much different is it organizing a smaller conference like C++ Now compared to CppCon?
There's a lot of stuff that is different. I'd say C++Now is logistically easier. That does not
necessarily mean that it's less effort, but what I mean is that when we have to deal with
creating the schedule for C++Now, we have two tracks versus six.
Two is a lot less.
C++Now is a smaller conference, and we've been running it for more years, so we're more familiar with it, whereas CppCon is still fairly new.
But CppCon, we also have a lot more people involved, which is nice.
There's very distinct differences between the two conferences, and they're kind of hard
to verbalize.
It's a you-know-it-when-you-see-it sort of thing. Jon and I, and
everybody who's involved in organizing, know that CppCon is for the entire community,
while C++Now is really this gathering of gurus.
A given talk might be the sort of thing that would be really good for CppCon,
but not good at all as a talk at C++Now.
I think that's the biggest challenge: making sure that when you're running two conferences,
you're not trying
to run the same conference twice. Okay. Jason, do you have any other questions? I don't think so,
Rob. Okay. I do have one more plug. Sure. Oh, sure. CppCon. And Jon Kalb made it very clear
to me that I had to say this: CppCon registration should be open very soon.
And so everybody should register.
We'll have a great program this year, I'm sure.
It's going to be in Seattle again.
I don't have the dates on hand, but it'll be in September, same time of year.
Awesome.
Well, where can people find you online?
Do you have a blog or Twitter or anything like that?
I have neither.
I have email, which is the best way to reach me.
I'm on Reddit sometimes, so you can sometimes find me on Reddit's C++ subreddit.
Okay.
Well, thank you so much for your time today, Bryce.
Thanks, you guys, for having me.
Thanks.
Thanks so much for listening as we chat about C++.
I'd love to hear what you think of the podcast.
Please let me know if we're discussing the stuff you're interested in
or if you have a suggestion for a topic.
I'd love to hear that also.
You can email all your thoughts to feedback
at cppcast.com.
I'd also appreciate it if you can follow CppCast on Twitter and like CppCast on Facebook.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode is provided by podcastthemes.com.