CppCast - Val and Mutable Value Semantics
Episode Date: January 20, 2023
Dimi Racordon joins Timur and Phil. They first have a couple more conference speaker calls, a new C++20 test framework, and some updates about safety in C++ and the C++ successor languages announced last year. Then they talk to Dimi Racordon about the new language, Val, how it relates to C++, and why mutable value semantics are so powerful.
News
C++Now Call for Speakers
CppNorth Call for Speakers
Snitch v1.0 - a "Lightweight C++20 testing framework"
"The Year of C++ Successor Languages" - Lucian Radu Teodorescu
"Supporting the Use of Rust in the Chromium Project" - Google Security Blog
Links
Dimi's CppCon 2022 talk on Val and C++ interop
"I'll Build Myself" - Phil's song about building C++
P2739R0 - "A call to action: Think seriously about 'safety'; then do something sensible about it"
P2759R0 - "DG Opinion on Safety for ISO C++"
"The Rule of Two"
val-lang.dev - the official website for Val
Transcript
Episode 352 of CppCast with guest Dimi Racordon,
recorded 17th of January 2023.
This episode is sponsored by JetBrains and Sonar.
JetBrains has a range of C++ IDEs to help you avoid the typical pitfalls and headaches
that are often associated with coding in C++.
SonarLint in your IDE helps you find and fix bugs and security issues from the moment you
start writing code. In this episode, we talk about the latest conference news, a new C++ unit testing framework,
and some updates about safety in C++ and the C++ successor languages announced last year.
Then, we talk to Dimi Racordon.
Dimi talks to us about her work on Val, one of the successor languages.
Welcome to episode 352 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Timur Doumler, joined by my co-host, Phil Nash. Phil, how are you doing today?
I'm all right, Timur. I'm struggling to get over a cold that I've had since we last recorded,
but other than that, I'm okay. I'm also traveling as well, which doesn't help. How about yourself?
I hope you get well soon. Well, I'm doing all right. I just moved to Finland, so there's all of that whole find-a-new-place-to-live, get-a-social-security-number, open-a-bank-account thing. It's all not very straightforward, so I'm kind of a bit busy with sorting all of that out. But yeah, it's going well, so I'm doing well as well.
Sounds like a stressful start to the year.
Well, yeah, I've done this before, so I think I'll be fine. But yes, it is a bit stressful.
Not your first rodeo.
No.
All right.
So at the top of every episode, I like to read a piece of feedback.
So the feedback this week is actually that we got mentioned by ADSP, the podcast.
That's the podcast by Conor Hoekstra and Bryce Adelstein Lelbach.
They're glad that we're back.
Well, they're glad, but Conor was a little bit disappointed because he was hoping they were going to catch up with the episode numbers because they're about halfway behind us.
We're on episode 352 now.
And he was estimating how long it would take,
and now he says it's going to take twice as long.
So I thought maybe we could gradually start to reduce the interval
between episodes so that they will sort of asymptotically get longer
and longer apart
just to control them, really. Yeah, yeah, that would be fun. They're also wondering if you're going to change things. So let me actually read the piece of feedback now. Conor said, I wouldn't be surprised if there is a new jingle. Because, you probably didn't see this, but C++ on Sea's YouTube channel just released a video of Phil Nash singing a C++ song called I'll Build Myself. So yeah, that was actually hilarious,
Phil. Well, Conor and Bryce, thank you so much for mentioning us. We like your podcast too. And we'd like to let you know that we do not have any plans to change the CppCast jingle to I'll Build Myself.
At least not just yet. If you haven't heard the song, by the way, I'll put a link in the show notes. It's great, you should check it out.
Yeah. Now, we're going to keep the original theme song. We actually had a few people ask that as well on Twitter and Mastodon, whether we're going to be making changes like that. Funnily enough, when we took over the show, I actually went to the link that's always read out at the end, podcastthemes.com, and if you go through to the free themes page, the number two entry there is this theme.
So it was actually quite easy to find in the end.
So I imagine there's quite a few other podcasts probably that are using the same jingle.
Yep.
If anybody has heard another podcast with the same theme, please do let us know because
I want to know what they are.
Yeah, that could be fun.
Yep.
All right.
So we'd like to hear your thoughts about the show. You can always reach out to us on Twitter or Mastodon
or email us at feedback at cppcast.com. And don't forget to leave us a review on iTunes.
Joining us today is Dimi Racordon. Dimi is a researcher at Northeastern University in the US.
She has a PhD in computer science, but only because she bribed
her jury members with Swiss chocolate from her hometown. She worked on model checking and
developed efficient data structures to generate and explore large state spaces. She then studied
logics and type systems while trying to find new ways to teach computer science. She has a passion
for language design with a particular focus on type-based approaches to memory safety and program optimizations. Now, Dimi works on answering research questions related to these
topics and writes formal proofs for a living. Dimi, welcome to the show. Thank you for having me.
About the Swiss chocolate, I'm guessing that you didn't actually get your PhD
because of that, but it did stand out to me
because I am in Geneva as we speak. I'm in a hotel room here. The listeners can't see it,
but the curtains behind me give it away. So yeah, I was wondering which Swiss chocolate it was.
Yeah, I think if I remember correctly, it's Läderach or something like that.
I'll look out for that when I'm here.
Oh, Läderach is good. Yeah, I remember when I was living in Munich,
there was a Läderach shop there. It's like
really good chocolate. Strong
recommendation.
Alright, so we'll get more into Dimi
and her work on Val in a few minutes,
but we have a couple of news articles to talk about.
So, Dimi, feel free to comment on any of
those, okay? Sure thing.
So, first we have some conference
news. So, the
call for papers for C++ Now is now open.
C++ Now is a conference taking place in Aspen, Colorado.
And this year it's going to be May 7th until the 12th.
And the Call for Papers deadline is on the 30th of January.
So you still have about just under two weeks as of today.
But when this episode comes out on Friday,
it's going to be more like
10 days or something
until the deadline.
So you can submit your talk
on cppnow.org.
And also the CPP North
call for papers is now open.
Yeah, we'll put the link
for that one in the show notes.
It just opened last night.
So quite breaking news there.
So Phil, do you know
what the deadline is there?
I didn't actually look up that one.
I know it's sometime in February,
but I can't remember the date off the top of my head.
But you've got a few weeks at least.
All right.
Cool.
So then the next news item is that there's actually
a new lightweight C++20 testing framework called Snitch,
and version 1.0 of snitch was actually released recently. It's on github.com slash cschreib slash snitch. So, Phil, you have some experience with C++ testing frameworks, do you not? What's your opinion on this?
Yeah, so I saw this. I haven't actually downloaded it and tried it out myself yet, but just from looking at the GitHub repo,
it looks quite promising.
If you've seen doctest,
which is sort of inspired by Catch,
but a bit more cut down and optimized
for build times and run times,
it's sort of taking that and running with it even more,
leaning right into C++20 to get there.
And I think there's some comparisons
of compile time and runtime performance figures
and compares very favorably against doctest.
So even faster than doctest.
But again, feature parity wise,
I couldn't tell you,
but like doctest,
it's not quite as fully featured as Catch,
but definitely really welcome
to see yet another modern C++ testing framework enter the fray, because Catch first came out over 13 years ago now. So it's starting to get to be one of the older ones of the pack.
Yeah, that's really exciting.
So I definitely need to check this out.
So Phil, please don't take this personally.
I know you're the author of Catch
and Catch is an awesome framework,
but I actually did recently
start using doctest more and more, at least for all my kind of small toy projects, because it's very lightweight and kind of fast, and it's header-only, which Catch, I think, isn't anymore these days. So I'm actually curious about this new one. I'm definitely going to check this out. I think it's definitely worth a look.
All right, so we
have one more news item
about C++ and safety,
which kind of has been a pretty hot topic lately.
And it's from the Google Security blog.
And I'm just going to read this out.
So the Google Security blog writes,
we are pleased to announce that moving forward,
the Chromium project is going to support the use of third-party Rust libraries
from C++ in Chromium. To do so, we are now actively pursuing adding a production Rust toolchain
to our build system.
This will enable us to include Rust code in the Chrome binary
within the next year.
So I think that's really exciting and interesting.
It's quite a long blog post where they kind of talk about why and how
and what the motivation is.
From what I understand, there is this concept, which I actually didn't know about, which I learned from this blog post. It's called the Rule of Two. So you have like three things: one, some untrustworthy input, which comes maybe from the user or somewhere else, from an untrusted source; two, you have an unsafe language, such as C and C++. And then three, your app is not sandboxed.
And basically the rule of two says,
never do all of those three at once,
because then very bad things are going to happen.
And basically this announcement is about
kind of allowing Chromium
to kind of better satisfy the rule of two
and kind of get to a place where
they can do more kind of safer development.
Yeah, we're obviously going to get to a potentially safer language in a moment, but sticking with C++,
there's also a couple of proposals, actually, or a couple of papers, at least, from Bjarne Stroustrup about safety in C++.
And one of those is lamenting that people often
say, you know, C and C++, as if they're the same thing, are inherently unsafe. And while there's some truth in that, it sort of ignores like 30 years of progress on making the language safer. And if you use modern language features, you're actually going to be almost on parity with some of the others that are often mentioned as being safer alternatives. And this post, it's a really interesting post, but it does almost seem to be making that same mistake
where it says, you know, unsafe languages you shouldn't use, along with untrustworthy input or
no sandbox. Maybe a safe subset of C++ might actually satisfy the rule of two.
Do you have any thoughts on that? Yeah, that's very interesting.
I think my understanding of this whole safety thing
is that at least some of those people that say,
you know, C++ is not a safe language,
they're talking about kind of provable safety
or like some kind of absolute safety guarantee,
which C++ kind of doesn't really give you, right?
So there is a safe subset,
but it's kind of difficult to kind of stick to that.
Like even, for example, if you use unique_ptr. So unique_ptr is obviously a lot safer than raw pointers, but then you still have unique_ptr's get(), and then you get a raw pointer back that you can store somewhere and access later. Or you can initialize a unique_ptr from make_unique, which is safe, but you can also initialize it from a raw pointer that comes from somewhere else, which another thread or another part of the program kind of holds onto. And so that's not safe.
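To make that point concrete, here is a minimal C++ sketch, added for illustration rather than taken from the episode, of how unique_ptr's escape hatches step outside the safe subset:

```cpp
#include <memory>

// unique_ptr is far safer than a raw owning pointer, but it does not seal
// off unsafe code: the raw pointer can still escape.
int* leaked = nullptr;

void observe(const std::unique_ptr<int>& p) {
    leaked = p.get();                         // raw pointer escapes the smart pointer
}

int main() {
    {
        auto p = std::make_unique<int>(42);   // safe construction
        observe(p);
    }                                         // *p is destroyed here
    // *leaked = 7;                           // would be a use-after-free

    int* raw = new int(1);
    std::unique_ptr<int> q(raw);              // fine on its own...
    // std::unique_ptr<int> r(raw);           // ...but a second owner like this
                                              // would double-delete at scope exit
}
```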
So it's somehow, it seems to be very kind of difficult
to kind of exactly figure out
like what the safe subset of C++ actually is
or have any guarantees, whether it's through the compiler
or tooling or static analysis, to be restricted to that.
I mean, I guess for many projects, you don't have to,
but for some, I guess that's the motivation.
So yeah, I don't know.
I haven't actually read Bjarne's paper.
I only saw it.
I haven't read the whole thing, so maybe I shouldn't say too much about that. I should go off and read it, and then, yeah, we'll put that in the show notes as well, and then comment more on that. But I do agree with him that it seems that C++ has more of those kind of safe APIs or safe features than C does.
Yeah. So actually, the last news item for today
is another blog post by Lucian Radu Teodorescu
called The Year of C++ Successor Languages,
which appeared on the ACCU website.
And it kind of talks about the same thing,
that C++ is kind of inherently unsafe,
whatever that means.
But then also it talks about three successor languages
that have been announced last year.
So Val, Carbon, and CPP2.
And he had actually quite a long blog post
where he kind of lists the pros and cons of all three languages
as far as he sees.
So he says that Val is really awesome because it has these
mutable value semantics
so it kind of follows this quite scientific
approach about
what that is and how to make a safe
language. He notes
that there is no public plan for interoperability with C++, which I think we're going to dig into later with Dimi.
he says about Carbon, Carbon is great, has better defaults.
Interoperability with C++ is part of the design,
but also actually not straightforward
because Carbon has no function overloading, no exception handling.
And he says CPP2 is, you know, the interoperability with C++ is great
because it's literally kind of on top of that.
But then it doesn't address safety completely.
It fixes some safety issues, but not others.
It doesn't fix object access safety or thread safety.
And so it's a very long blog post.
And at the end, he says that kind of his favorite successor language out of the three is kind
of Val because of this very rigorous scientific approach to safety and the kind of mutable value semantics.
So he likes that a lot.
So I thought it was like really, really interesting blog post to read.
And yeah, it's actually kind of cool that he kind of says Val is his favorite
because this whole thing is such a hot topic
that we actually decided to do a series of CppCast episodes
about those different successor languages.
And we are starting today with Val.
And we do have one of the developers of Val,
Dimi Racordon,
today here with us on the show.
So I'm very, very excited about that.
So I hope you can dig into all of those things
and learn more about Val and what it's all
about and how it works and why it's awesome.
Great. So actually, Dimi,
you said there was actually a paper
by Lucian, right?
Not just a blog post.
Yeah, I think he wrote a paper for ACCU.
And I think the paper is more about Val
and Val's take on this thing, the mutable value semantics,
that I think we will get into in this episode.
All right, cool.
So, Dimi, thank you so much for joining us. I guess the first question is, can you give us a short summary of what Val is about and how you got involved in Val? Kind of just give us a bit of an overview before we dig in a little bit deeper.
Yeah, sure. So the not-so-short story is, I was working on Swift, which is a programming language. I guess
you've heard the name, it's a thing by
Apple, and I tried
to formalize the language,
and this got me close to some
of the developers of Swift,
notably Dave Abrahams.
And Dave was really
into this thing called mutable value semantics,
which is kind of
a fancy way to say
we want to merge functional programming
and preserve the ability to do in-place mutation
because we want performance,
and this is difficult to get in pure functional languages
like your Haskell or whatever.
And so we worked on this mutable value semantics
in the context of the Swift language,
and this got me the idea of trying to push things further because Swift has support for
mutable value semantics, but not a great support for MVS. Some of the stuff is still based on
reference semantics. And then I started to kind of build a language.
Funny story, I think it started two years ago,
and I thought it would be a very short,
like two weeks project that I would show to my colleagues
to say, hey, we can do this thing.
And well, now it's a two years long project.
We're still not finished.
But that was the basic idea.
You mentioned Dave Abrahams there, and it was from Dave that I first heard about Val.
I think it was his C++ Now talk earlier last year, and then I saw your talk at CppCon.
So who's actually behind Val?
Who started it?
Was it originally your project or Dave's or somebody else's?
So it was originally my project, but Dave joined very quickly, I think,
because he was seduced by the idea of building a language
that is entirely built on mutable value semantics,
whereas Swift was kind of value semantics
plus reference semantics.
And so now I think we are both the owners of the project
in a way.
Right. Is there anyone else involved as well, though, at this point?
Yeah, so some people at Adobe also contributed to the language,
notably Sean Parent, who's also very interested in value semantics, obviously,
and so he's also behind the project.
And Lucian, who wrote the blog post
that you mentioned earlier,
really was seduced by the idea
of using value semantics for concurrency.
And so he's been a big contributor too.
So yeah, that's actually really interesting.
So we talked briefly about the mutable value semantics.
So maybe we can dig into this a little bit deeper
I'm kind of trying to understand how this actually works and what makes Val special. So you said it's kind of an approach that makes it safe and also kind of potentially can solve problems with thread safety. So, is it like you can only pass things by value, but then you can mutate those values, unlike functional programming? Can you kind of summarize the concept? I'm just trying to really understand what's going on there, because I don't think it's a paradigm that I've actually seen before in any programming language that I'm familiar with. Like, what can you do and what can you not do?
Yeah, sure. So the short answer is, mutable value semantics is about banning the sharing of mutable state. So the one thing you can't do is create first-class references. You take your average language, you can take C++, and you ban references and pointers and everything that gives you indirect access to some storage.
And you ban this from the language and you get mutable value semantics.
This sounds very drastic, but it's actually kind of the way people code in C++.
If you've seen my talk at CppCon, it's saying that.
It's that in C++, we actually write code with mutable value semantics, because when you pass a mutable reference to some function, it just assumes that it's unique. So you assume that there are no other outstanding references on the storage that you get.
So this is the basic idea behind mutable value semantics.
And from there,
you can derive a lot of properties,
which really looks like
functional programming.
That's why I said it's kind of
in the realm of
pure functional programming,
because now you uphold
the concept of value
and all these kind of nice properties
you get from pure functional languages,
like referential transparency,
you can get with
mutable value semantics.
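As a rough illustration of what that style looks like in today's C++ (this is plain C++ rather than Val syntax, and it is an example added here, not code from the conversation):

```cpp
#include <vector>

// Whole-value style: the function owns its parameter, mutates it in place,
// and hands the value back. Nothing else can alias `v` while it runs.
std::vector<int> doubled(std::vector<int> v) {
    for (int& x : v) x *= 2;
    return v;                       // moved out, not copied
}

// The by-reference version is only correct if the caller guarantees that
// `v` isn't aliased by anything else the function touches -- the implicit
// "it's unique" assumption described above.
void double_in_place(std::vector<int>& v) {
    for (int& x : v) x *= 2;
}
```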
So yeah, that's actually
really interesting.
But I guess part of the functional programming
is that you cannot mutate stuff, right?
So then if you cannot mutate stuff,
you can't have multiple threads.
Like you can't have a race condition, right?
Because you can't have multiple threads
trying to mutate the same thing
and run into a race condition.
But if you have mutable values,
how does that work?
Can you still not have...
So I guess there can't be another thread
that has a reference to the same thing.
Do you guarantee that somehow?
Or am I kind of on the wrong path?
Yes, exactly, precisely.
So as I said, you ban references from the language.
So you just can't have a data race
because a data race would require a reference
on something mutable,
and you just ban the references altogether
from your language.
So by design, you prevent all kind of unsafety
due to what I call unintended mutations,
like mutation through a reference.
This is the sharing of mutable state.
One of the ways that functional languages work around this is with persistent data types. like mutation through a reference. This is the sharing of mutable state.
One of the ways that functional languages work around this is with persistent data types.
I'm wondering if there's something similar going on here.
We have some sort of structural sharing,
so different instances of a value may actually have some sharing
under the hood behind the scenes.
Is that there?
Yeah, so that's basically the basic idea: you don't have references. But then, if you want to get into how this actually works in a language and how you can do things efficiently, then you can think, well, if I don't have mutation, then sharing is not really an issue, right? Because if I don't mutate anything, then you can have as many pointers as you want.
You will not have a data race because a data race requires mutation.
So first you can share things under the hood
because, well, if you don't see mutation,
it's okay to share pointers.
You can have multiple readers, not multiple writers.
And there are other things you can do.
For instance, if you want to mutate a thing
and you don't want to copy
everything, like in C++, pass-by-value is kind of an expensive operation because of the copies.
Well, you can say, if I can guarantee uniqueness, it's okay to use a reference under the hood
because I know no one else can access this reference. So your compiler has a lot of leeway
to do whatever it thinks is best for performance,
where in the user model, you really don't see the references.
You only see values.
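One classic way to spell the "many readers, or one writer who mutates in place" idea in C++ is copy-on-write. The sketch below is only an analogy for the reader (and the use_count check is not a thread-safe uniqueness test); it is not how the Val compiler actually implements this:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Copy-on-write sketch: readers share one buffer; a writer copies first
// unless it is the sole owner, in which case it mutates in place.
class IntArray {
    std::shared_ptr<std::vector<int>> data_ =
        std::make_shared<std::vector<int>>();

public:
    int operator[](std::size_t i) const { return (*data_)[i]; }
    std::size_t size() const { return data_->size(); }

    void push_back(int x) {
        if (data_.use_count() > 1)                             // shared: copy
            data_ = std::make_shared<std::vector<int>>(*data_);
        data_->push_back(x);                                   // unique: mutate in place
    }
};
```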
So can I think of this as a very sophisticated form of copy elision, essentially?
Yeah, kind of, yeah.
That's really, really interesting.
So how is this different from the Rust borrow checker, for example, where you also have this idea that you can't have multiple references to the same thing of which one is mutating, right? So you can either have multiple non-mutating ones or one mutating one. I think that's kind of how Rust works.
Yeah, exactly. So it's really, really close to how Rust works, actually. It's kind of the same thing under the hood, if we think about the execution model that the compiler will actually use for the code generation.
In the user model, though, it's a bit different because Rust is really about creating a type system
that mediates access to references to guarantee that you cannot use references unsafely.
Whereas Val and mutable value semantics is about proposing a user model that
doesn't need references in the first place.
So you just don't get to work with the references.
You don't need this feature to write your programs.
All right.
Hold that thought for a second.
We have a message from one of our sponsors.
This episode is sponsored by Sonar, the home of clean code.
SonarLint in your IDE is always free and helps you find and fix bugs and security issues from the moment you start writing code.
Add SonarQube or SonarCloud to enable your whole team to deliver clean code consistently and efficiently with a tool that easily integrates into cloud DevOps platforms and extends your CI/CD workflow.
Before we get into more questions, I wanted to make one point about your talk at CppCon,
because I saw the talk and I've also watched it a couple of times since. I'm still not quite sure I fully understand all of it yet,
so I might even be watching it again.
But I didn't find it on YouTube,
so I don't think it's been released on YouTube just yet.
But if you were an attendee of the conference
and you have access to the portal of on-demand talks,
you can still view it there.
So if anyone's trying to find it, bear that in mind.
So I won't be able to put it in the show notes just yet
but I will try and remember as soon as it becomes
available I should go back and put it in the show notes
retrospectively. So if you listen to
this in the future it may be there by now
and if not let me know. Anyway
sorry I'll let you get back to the discussion.
Yeah I think they're releasing the videos kind of very
gradually.
Yeah I also I wanted to, I caught like a bit of your talk at CppCon when I was there last year,
but then I wanted again to like look it up and watch the whole thing again before this episode,
and then I couldn't find it.
So I didn't think of the attendee area thing.
So that's a good one.
Thanks, Phil, for that comment.
So I think I want to kind of dig a little bit into this whole safety thing.
So Lucian's blog post mentions actually seven types of safety. There's type safety, bounds safety, lifetime safety, initialization safety, object access safety, thread safety, and arithmetic safety, like no integer overflow and stuff like that. So does Val, let's say, solve all of them? Is it like a completely safe language by construction? Or is this really more about the whole memory safety, object lifetime safety aspect?
Yeah, so Val solves all of them. Now, the Val object model is really about lifetime safety and maybe initialization safety, I'm looking at the list. So, to just set things straight, we use in the Val team a kind of definition of safety: we say it's the absence of undefined behavior. So an operation is safe if it cannot cause undefined behavior.
And then all of these things that you mentioned kind of fall into the realm of undefined behavior, right? Like bounds safety, lifetime safety, it's all in the realm of undefined behavior.
And what's interesting is, for instance, bounds safety,
there is a really easy way to check at runtime
that you don't go out of bounds.
It's not so hard to do array bound checking.
We know how to do it relatively efficiently,
and even optimizers can remove the runtime checks, right?
So it's not really so hard, and actually it's the same for many of these items.
What's really, really difficult is lifetime safety, and lifetime safety requires the help from your compiler or your type system.
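For reference, the runtime bounds check described here is tiny when written out by hand in C++ (a generic illustration, not code from the episode):

```cpp
#include <stdexcept>
#include <vector>

// One branch per access; optimizers can often hoist or eliminate it.
int checked_get(const std::vector<int>& v, std::size_t i) {
    if (i >= v.size())
        throw std::out_of_range("index out of range");
    return v[i];
}
// std::vector already offers this as v.at(i); plain v[i] skips the check.
```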
That's pretty cool that it covers all those different forms of safety. I hadn't appreciated
that before. But I just want to rewind a little bit because you mentioned that it was
at least partially inspired by Swift. And I know it shares a lot of the same syntax and some
features. In many ways, it does feel a lot closer to Swift than C++. But Val is often positioned as
being one of the successor languages where interoperability with C++
is an important component.
So how does Val fit into that?
Is that story there yet?
So, yeah, we're very close to Swift
because we actually started from Swift, right?
We wanted to have the Swift we wanted.
And so we removed all the things we didn't like from Swift and saw what it could become if it had only mutable value semantics.
The C++ interop came a little after.
And actually, I think it's kind of an advantage
because compared to these other projects
like Carbon or Cpp2,
we don't...
Well, I personally don't feel so attached
in just preserving the way of coding in C++.
Some of the patterns are really antagonistic to safety.
For instance, the iterator model is a bit antagonistic to safety.
And so by going from, starting from a different language like Swift, we kind of liberated ourselves from the burden of having to keep those features and those idioms at all costs.
So the story for C++ came a bit after, I would say.
Right. Okay. Yeah.
So slightly different positioning compared to the other successor languages.
But I think you really,
I mean, there is a trade-off there.
You've got to lean one way or the other
and Val is leaning more in the direction of a cleaner language rather than the interop being the primary thing.
So I think that's really worthwhile.
Yeah, the interop with C++ actually came a little later during the development, because at first it wasn't
a goal at all. But as we
developed the core calculus, what
Lucien mentions here, the scientific
approach, we discovered that there is
a kind of this minimal
language inside, right? Every language,
every programming language has a minimal core
language, kind of formal language inside.
And this core
language is sufficient to describe
most of the crazy things one can do with C++,
like all the bit pushing,
the crazy bit pushing we can do in C++.
There is a clean way to express that
in Val's formal minimal calculus.
And so we thought, well, actually, it makes sense.
Maybe we can tie these things together
and that can lead us to a greater community
rather than just doing this scientific thing on its own
because the scientific community and the real world
sometimes have trouble getting together.
So would you call that core language asm.cpp, maybe?
Yeah, that's cool. I'm also wondering about what the kind of portability story is more widely. Have you thought about what platforms Val should be available on? Is it like Windows, Mac, Linux? Is it mobile? What about embedded? Do you envision that as something
where people would be able to use Val?
Would it have its own compiler?
Does it compile down to LLVM?
So can you plug it into that?
Or is it a completely separate tool chain?
What platforms and tools
will I be able to use with Val?
Well, if you look at our website,
there is a roadmap, and at the end
of the roadmap, there is plans
for total world domination. So
I would say all platforms. That's always good.
But
realistically, for now, we are
targeting the main
OSes, like your
Microsoft, Mac OS,
Linux. We
plan on using LLVM to generate machine code.
So, well, if you can run LLVM on your machine,
then probably you will be able to run Val.
We also want to have a transpilation story.
So we are building a transpiler from Val to C++.
So that's interesting.
That's what Herb Sutter is doing with CPP2.
So that's very interesting.
So you have that.
Yeah, we thought it would be a kind of quick way
to get Val going, because building the whole code generation with LLVM is kind of a lot of work.
And with C++, we already have a lot of support, right?
All these IDEs and all the support we get from C++.
And we also thought it would be a nice way
to explain how the Val model
works semantically to
people, because people understand
C++, at least some people.
And so if there is a way to explain
what Val does in terms of
C++, then it's a way to make
this whole mutable value semantics be better understood.
Oh, hang on. So you will be able to take Val and transpile that into C++
that's actually readable?
You would be able to look at that?
Yes.
And the human could understand what's going on there?
That is so cool.
That is really cool.
That is very similar to what Herb's trying to do with cppfront in CPP2 as well.
Yeah, we should at least decopy this idea.
Yeah, all right. okay, fair enough. But it would be interesting to see it start to become
available for more like safety-critical systems because that safety is an obvious benefit there.
Of course, so toward embedded systems, this is something we have in mind. We don't have hard plans yet.
Also, ABI stability, all of these things, we discussed them.
We don't have hard plans.
The most interesting thing we want to do right now is,
rather than going the swift road and having this gigantic standard library,
we want to be able to really separate the core language
from the standard library so you can compile your code
without the big library with it.
That would be more friendly toward embedded systems.
But this is really planned for the far future.
These things do take time, and you can't rush safety.
But what is the roadmap in terms of time then?
Because I presume this is not really quite production ready just yet,
but when should we expect to start writing real applications
and systems with Val?
Yeah, we wanted to have a first usable version of Val,
not really for production,
but at least you can really play around
by, I think, the first trimester of this year.
I think we're a little late on schedule,
but yeah, this is still in the roadmap.
I'm looking forward to that.
So one question, I kind of touched upon this earlier
by saying that these kind of value semantics can be,
like the compiler can kind of optimize
this. So
do you have any idea like what the performance
is kind of like?
Will it be comparable to C++? Like would
you be able to use Val for
something which is like very
performance sensitive, like I don't know, high
frequency trading or like, I don't know,
real-time processing or video games
or things like that? Or is it kind of just not what it's designed for?
Like, where you need absolute performance?
Is that going to be kind of competitive with C++?
Because I think Rust is at least marketing itself
to like be competitive.
So I wonder if like Val is doing that too.
Yeah, sure.
This is definitely in our big goals.
Performance is one of our goals. We have some preliminary experiments from a project that predates Val, actually; it was based on what we studied for Swift. And definitely, we can be competitive with C++. The thing with mutable value semantics is that it really unleashes the compiler optimizations that you can apply without going through all the heroics that are usually necessary for pure functional languages.
The thing is with
functional languages is that usually you have
what we call functional
updates. So you just reassign
a variable to a brand new value, right?
And then the compiler has to guess
that you actually wanted to do an in-place mutation.
Well, with mutable value semantics, you can do the in-place mutation in the first place. And so you don't need to go through all these heroics, and you get the performance that you would get in C++, for instance, where you can actually do in-place mutation.
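A small C++ sketch of the contrast being drawn here (added for illustration; the "functional update" below is just the shape of code a pure functional language would force, expressed in C++):

```cpp
#include <vector>

// Functional update: build a whole new value and return it. A compiler has
// to prove the old value was unique before it can turn this into an
// in-place write.
std::vector<int> incremented(std::vector<int> v, std::size_t i) {
    v[i] += 1;      // conceptually "a new vector that differs at index i"
    return v;
}

// In-place mutation: what mutable value semantics lets you state directly,
// so no such analysis is needed to get the cheap form.
void increment(std::vector<int>& v, std::size_t i) {
    v[i] += 1;
}
```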
Is there any overhead in certain
cases that you're aware
of already?
No, we really want to build
a language
with no overhead.
So it's one of these zero-cost
abstraction language, the same story
as C++ has and Rust has.
I think we even have an even more transparent story than C++
because Val doesn't have any implicit copies.
So the cost of copying is completely explicit in the language,
and you can avoid it in most cases.
Even if copies are explicit,
in fact, you almost never explicitly copy anything in Val.
Right.
Yeah, one thing I remember from your talk
was that all moves are destructive,
which has always been a bit controversial in C++,
whether we'll ever get a destructive move,
and it keeps being proposed.
So what's the situation there?
What's the rationale for doing it,
and were there any problems that you have to overcome?
So I have to admit, I'm not a C++ expert by any measure.
So I cannot really comment on the whole debate
for a destructive move in C++.
In Val, what we realized is that usually we want to use a move
for cases where you just want to send the value somewhere, right?
You want the value to escape from a function scope or you want to be done with the value.
You just want to give it to another function and then you don't want to take care of it anymore.
Whereas the non-destructive move is kind of this hybrid thing where you still have a shell of something that you need to destroy afterwards.
It's a bit strange. So we, again, using a different language to start with,
we didn't have to bind ourselves with this C++ view
of what a move is supposed to be.
And so what it allows us is to better control the lifetime of the objects
because we can really think of the values themselves
as being moved around
and we don't need to keep these shells
just for the destructor of the local variables to run.
If you move it, then it's gone
and you don't need to take care of it anymore.
So I think it actually makes the model simpler
than C++, but I'm biased
because, again, I'm not a C++ expert.
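For readers less familiar with the C++ side of this, here is the "shell" in question (a small illustration, not code from the episode):

```cpp
#include <string>
#include <utility>
#include <vector>

void sink(std::vector<std::string> v) { /* takes ownership of the value */ }

void demo() {
    std::vector<std::string> names{"a", "b", "c"};
    sink(std::move(names));
    // `names` still exists here in a valid-but-unspecified state: the
    // non-destructive move leaves a shell behind...
}   // ...and that shell's destructor still runs at the end of the scope.
    // With a destructive move, the object would simply be gone after the move.
```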
Talking about destruction of things, I think you also mentioned that destructors were
non-deterministic and that the order that they run in is not predefined and that that gives you
some scope for optimization, but I wasn't quite clear why that would be an optimization opportunity.
Yeah, so the thing is,
because we don't need to run the destructors of objects in reverse order,
like you have this very strong guarantee in C++,
the language is actually free to destruct things
when it thinks it's appropriate,
so when the lifetime ends. And I don't think it necessarily leads to better performance,
or maybe it can be in some marginal cases,
but it frees you from having to keep these shells around.
And that, yeah, really, I think, just makes the model simpler.
The thing with the non-deterministic
destruction, it might be
scary because you think, well, what happens
to RAII, right? You want to use RAII to control the resources. But
if you want to be
sure you execute some code
at the exit of a scope,
then you can just have
a feature for that.
And we have defer statements in Val.
We copied that from Swift.
This is just a block of statements
that is guaranteed to run
at the exit of a scope.
And so you really
retrieve the ability to
do that for exceptions and whatnot.
So you don't have deterministic destruction,
but you still can do RAII-style cleanup if you want to.
So it's not like garbage collection or something like that,
where you just have no idea how long things are being kept around.
Yeah, exactly.
So you have the guarantee that your objects will be destroyed
as soon as they are no longer needed by the program
because we run some kind of a pass on your code,
which is called last use analysis.
So after the last use of your variable, it will be destroyed.
There's a guarantee that it will be destroyed.
You don't need, there is a static guarantee.
You don't need a garbage collector to do that.
There is not an exact guarantee
about what line of code this means,
but there is a guarantee that it's after the last use,
which is sufficient to avoid the garbage collector.
And for the RAII use cases, we have this defer statement.
So the one thing that did concern me about that is
I think the defer statement is great
and you can sort of approximate that in C++
with a scope guard-like object.
But very often, we actually use RAII
within an object that is managing a resource internally
and you want to completely encapsulate that usage.
And when you have multiple of these objects running concurrently and you can't reason
about when they're going to be cleaning themselves up, sometimes that is something you need to
reason about.
So does that sort of push us back to a world where we have to explicitly, say, call close
or something from a deferred block, or is there a way to orchestrate that?
Trying to think about a specific use case, I guess.
Yes, if you want to close.
So managing resources you can do with destructors.
Val has destructors, and these destructors will run when the object goes out of use,
not out of scope, right?
So when it's no longer used, the destructor will run and you can free memory or close the file descriptor or whatever.
Now, the situation you described where you also have a relationship
with other objects is a bit more complex.
I think you get away by not having references.
And so I don't know exactly what kind of scenario would involve a problem
because of these multiple objects,
but I'm sure we can find one if we think really hard.
In that case, I guess, yes,
you would have to use a deferred statement
and be explicit about closing things in the order you want.
Okay, yeah, I'll have to try and come up with a good example
and see how that works for you.
Yeah, thanks for clearing that up.
Yeah, thanks.
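For context, the "scope guard-like object" Phil mentions can be sketched in a few lines of C++ (a generic illustration, not code from the episode), and it approximates what a defer block gives you:

```cpp
#include <cstdio>
#include <utility>

// Run a callable when the enclosing scope exits, on every exit path.
template <typename F>
class ScopeGuard {
    F f_;
public:
    explicit ScopeGuard(F f) : f_(std::move(f)) {}
    ~ScopeGuard() { f_(); }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

void read_header(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    ScopeGuard close_file([f] { std::fclose(f); });   // roughly: defer fclose(f)

    char buf[64];
    std::fread(buf, 1, sizeof buf, f);
    // ... use buf ...
}   // close_file's destructor closes the file here, even on early returns
```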
So I wonder also, like,
what the future of Val looks like and where can people kind of go and learn about what's going on with Val?
And is there anything where, you know,
people can go in and use something?
Is there anything available?
And what's next?
Yeah, so we are actually working
on building the compiler.
So it's not ready yet.
I think you can play around a little with the type checker,
but it's really very experimental.
So don't expect too much of the current implementation.
We're still working on it.
The best way to get information is our website,
val-lang.dev.
I hope you guys can put a link somewhere.
And we have the, yeah, everything else is open source on GitHub.
So come, ask questions in the issues, or we also have a website with discussions.
I did actually have one more question for you.
A colleague of mine asked if I can ask you this
because I work for a static analysis tools company.
But with Val being all about safety,
all those different types of safety,
is there actually still a need for static analysis
with a language like Val?
Or do you think the compiler really covers it?
Yeah, I think there is still need for static analysis
for things that go beyond
this very simplistic definition of safety.
Safety is the absence of undefined behavior.
This can be guaranteed by the compiler
in a Val-like language.
It can also be guaranteed by Rust, for instance.
For everything else,
maybe your definition of safety
would involve some other properties.
Then maybe you would need a static analyzer.
Also important to mention is in a language like Val or like Rust,
some things cannot be done efficiently
without using an escape hatch from the safe type system.
So in Val, for instance, you can use unsafe constructs.
You just need to mark them as unsafe,
and then you are free to do whatever you want,
like write pure C code if you want.
But this obviously cannot be guaranteed safe by the compiler.
So in these instances, it might be useful to use a static analyzer
or the kind of reasoning to make sure that you are writing correct programs.
All right. Well, I have to get started on that analyzer then.
So I just had a look at the website you mentioned.
That was val-lang.dev.
And yeah, that's really cool.
So you have this language tour there, which kind of explains
all the kind of features and properties you've talked about. There is a roadmap, which I think answers the question of what's next for Val. And there's also a link to the GitHub kind of discussion page. So yeah, really good website, I recommend anybody interested checks it out. So, is there anything else you want to tell us before we let you go?
Yeah, sure.
So check out Val, obviously, and also get into this mutable value semantics thing, because you can do value semantics, mutable value semantics, in pretty much every language,
and especially C++.
C++ has some love for mutable value semantics.
We just forgot it because we want to have performance
and we use references all over the place.
So I truly believe mutable value semantics is a bright future for programming
and you can adopt this programming paradigm in C++.
So check this out and see what MVS can do for you.
Well, thanks very much for coming on today
to tell us all about Val
and Mutable Value Semantics.
I'm definitely going to be giving another look
and maybe we'll start writing an analyzer for it.
So are you going to let us know where we can reach you
if people want to find out more.
Yeah, I don't have much of an online presence. I'm not a big fan of social media. I have a Twitter account, but I don't publish anything these days. So, yeah, I guess the best way to reach me is just to maybe follow me on GitHub and send me a message there.
Great, we'll include a link to that in the show notes as well. Thanks again.
Yeah. Thank you
so much, Dimi, for being on the show. I think that was super fascinating. So definitely learned a lot
today. Thank you so much. Thank you very much for having me. Thanks so much for listening in as we
chat about C++. We'd love to hear what you think of the podcast. Please let us know if we're
discussing the stuff you're interested in, or if you have a suggestion for a topic, we'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com. We'd also appreciate
it if you can follow CppCast on Twitter or Mastodon and leave us a review on iTunes.
You can also follow me at timur underscore audio on Twitter and at timur underscore audio at hachyderm.io on Mastodon, and Phil at phil underscore nash on Twitter or at mastodon at phil-nash.me on Mastodon. And of course, you can find all that info and the show
notes on the podcast website at cppcast.com. The theme music for this episode was provided
by podcastthemes.com. Second theme from the top.