Algorithms + Data Structures = Programs - Episode 138: 🇬🇧 Sean Parent on Val! (Part 2)
Episode Date: July 14, 2023

In this episode, Conor and Bryce continue their interview with Sean Parent live from C++ On Sea 2023 about the Val programming language!

Link to Episode 138 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Twitter
ADSP: The Podcast
Conor Hoekstra
Bryce Adelstein Lelbach

About the Guest:
Sean Parent is a senior principal scientist and software architect managing Adobe’s Software Technology Lab. Sean first joined Adobe in 1993 working on Photoshop and is one of the creators of Photoshop Mobile, Lightroom Mobile, and Lightroom Web. In 2009 Sean spent a year at Google working on Chrome OS before returning to Adobe. From 1988 through 1993 Sean worked at Apple, where he was part of the system software team that developed the technologies allowing Apple’s successful transition to PowerPC.

How To Get Involved With Val
DM Sean on Twitter
Val Lang on GitHub
Val Teams Meeting
Click here to join the meeting
Meeting ID: 298 158 296 273
Passcode: D2beKF
When: Tues/Thurs 12:30-1:00 PST
Val Slack

Show Notes
Date Recorded: 2023-06-29
Date Released: 2023-07-14
ADSP Episode 137: Sean Parent on Val (vs Rust)!
C++ On Sea Conference
All Sean Parent ADSP Episodes
Adobe Software Technology Lab
Conor Hoekstra - Concepts vs Typeclasses vs Traits vs Protocols - Meeting C++ 2020
Programming Languages Virtual Meetup
The Val Programming Language
The Rust Programming Language
The Swift Programming Language
Halide Language
ADSP Dave Abrahams Episodes
Circle Compiler
Jakt Programming Language
CppCast Episode 355 - Carbon, with Richard Smith
C++ on Sea 2023: Keynote: All the Safeties - Sean Parent
Rust iterx library
The Carbon Programming Language

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
One of my complaints about C++ for a long time has been that although it gets used in the GPU space widely, it hasn't embraced the GPU space.
I have criticisms of other languages. My only criticism of Val is that it's not mature yet.
In many ways, Val is my dream language. And I say that from somebody who's largely outside, right?
It's not the language that I created.
It's the language that Dimi and Dave Abrahams created.
If we could combine the composability and elegance and conciseness of an array programming language
with the familiarity of a more general programming language, something
that would be easy and natural for your average system programmer today to pick up.
That is the name of your podcast, right?
Yes, yes.
Welcome to ADSP: The Podcast, episode 138, recorded on June 29th, 2023.
My name is Conor, and today with my co-host, Bryce, we continue our conversation with Sean Parent about the Val programming language, which took place live at C++ On Sea 2023.
If you haven't caught the first part of this
conversation, be sure to check out ADSP episode 137. So what I'll say is that I've been listening
to this. I mean, Sean and I have chatted a couple times, because it's Thursday now and we've both been in Folkestone since Tuesday.
And I gave a talk...
I can't remember.
It must have been back in 2019 or 2020,
which was entitled
Haskell Type Classes
versus Rust Traits
versus Swift Protocols
versus D Type Constraints
versus C++ Concepts.
The details of the talk don't matter.
But at the end of the talk, I compared these five languages. And much to my surprise,
I gave... there was one metric that was, like, developer happiness. And I knew a lot about Haskell, a lot about C++, and not too much about D and Swift and Rust at the time.
I know a little bit more about Rust now.
But of all the languages, Swift had the largest happy face on that developer happiness metric. And that was because, just as a C++ developer, before I hit Stack Overflow, I would just try to program what I thought was sensible in the Swift playground that they had, kind of a Godbolt-like thing, which would suggest you stuff.
And like 90% of the time, I guessed correctly and it just worked.
Whereas Rust, I'd hit a bunch of errors.
Haskell, I hit a bunch of errors.
D, I hit a bunch of errors.
So what I'm hearing here is that Val sounds like the best parts of Swift.
And the major problem with Swift,
like I had a programming languages virtual meetup at the time
and I was broadcasting to the 50 members
that were attending online at the time
that were saying,
we're going to wrap up.
We're wrapping up
because we got the buildings,
the room closed down.
Now the building's closing down.
But I was pitching to the people
in my group that Swift
was super amazing.
The problem was that
it wasn't cross-platform.
And I was on Linux at the time.
And when I tried to develop on Linux,
I hit a wall.
But it sounds like Val
is taking the best parts of Swift.
I don't know, like, what their story is for Linux and stuff.
But based on Eric's comments and Graydon's comments
and the fact that we've got to leave,
we're going to hand the mic to Bryce,
who's going to put it in front of Sean.
I'm very excited for the future of Val, is my point.
I was going to say, one of the things that seems very exciting to me about Val
is that the value semantic model
seems like it makes Val a great language for concurrency and parallelism.
I don't know if that's something that you designed with,
with it in mind,
but it seems like it makes it,
it will make it very easy to write parallel and concurrent code.
Well,
like surprisingly,
that was our,
our initial reason for going this direction.
Right.
And how much NVIDIA funding do you need?
So, you know, what started my team was: Adobe was having trouble as we added more concurrency to our projects, and as we moved our projects to a wider diversity of devices.
We should walk and talk.
We should walk and talk.
We've done it before.
We've done it before.
We actually walked and ran.
We walked and ran through Venice.
No, we talked and ran.
Hang on.
You give me the one.
The building's closing down.
What are you doing?
Okay.
I can hold the mic.
Okay.
So the stats were showing a noticeable uptick in problems in concurrent code versus problems in straight line code.
And Adobe cares deeply about the quality of our products.
And so the question became, how do we address this? And if you look at concurrency models, in C++, they're horribly broken.
In other languages, they're not great.
The recent stuff that's going on in Swift is passable.
Everybody talks about worry-free concurrency in Rust, but there's no
standard library with Rust that interfaces with the system standard library.
So from a product standpoint, there's not
enough there yet.
Plus all the podcast people are following us.
Yes. You're now on the podcast.
The moment of fame.
Yes.
So our original interest in Val was around how do we build a better language for concurrency.
Right?
Wait, do you want to be on the pod?
Yeah, we've got the, are you building security or just building management?
A bit of a manager.
Hello, everyone.
Hello.
What would you like to say about the venue here?
Should people come to C++ On Sea in 2024?
You're going to be in for a treat.
We'll look after you well.
We'll have a great time.
Well fed.
Drinks everywhere.
It's going to be great.
Great fun for everyone.
Come on down.
Will you be here next year?
What's your name?
My name's Hugh.
I'm the events manager, actually.
So yeah, I'll probably be here next year.
Yeah, it's a pleasure. Absolutely fantastic. Thanks for being on the pod.
Absolute pleasure. Have a good evening, guys.
Thank you. All right, back to you, Sean.
So our focus was on concurrency. And if you look, the problems with concurrency come with shared mutability. If you have shared mutability, you require synchronization. Synchronization imposes a penalty.
If you can get rid of shared mutability but keep mutability, there's huge performance benefits.
You get parallelism without the cost of synchronization.
Okay?
So that was our initial interest in Val.
At least my initial interest in Val.
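A minimal C++ sketch of the distinction Sean is drawing here (this is ordinary standard C++, C++17 parallel algorithms, not Val): every element below is an independent value, so mutating them in parallel needs no locks and no atomics; the moment two tasks share one mutable object, synchronization comes back.

```cpp
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v(1'000'000);
    std::iota(v.begin(), v.end(), 0);

    // Mutability without shared mutability: each element is an independent
    // value, so this parallel mutation needs no mutex and no atomics.
    std::transform(std::execution::par_unseq, v.begin(), v.end(), v.begin(),
                   [](int x) { return x * x; });
}
```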
How come you didn't mention that in the keynote?
Literally Bryce would have jumped up and been calling the CEO being like...
Because I don't think you even need to mention it because I think it's a very evident...
It's not evident from Rust, though.
And Rust has got a lot of similar interests.
Yeah.
So that's where we were going.
And what happened recently was an executive walked in my office and said,
so the White House issued this report on safety, memory safety.
So what do we do about that? Well, it turns out there's a significant overlap between memory safety and the non-interference property and what's required for concurrency.
So, you know, just a bit by luck, it's like, well, we have the answer.
Right. We happen to be working on something. Is this what you want?
And, you know, that at least got us funding through November to continue our work on Val, and hopefully past that. So, you know, I will take it. I still want to focus on concurrency, which kind of lands back in you two's court. I know we were talking. I'd like to get folks at NVIDIA excited.
Are we going back to this place or are we going
out somewhere? Well, I mean, I do
need to go to my hotel and I do need to
finish slides
for my keynote tomorrow. But, you know,
I can't miss a discussion with Sean
Parent. So maybe we should go in and wrap it up.
That is the name of your podcast, right?
Yes, yes.
ADSP is officially a discussion with Sean Parent.
I think we have to go into this store at this hour.
Okay.
I don't actually know that that's correct.
That's what happened when I was here late the other night.
I love that we have taken this podcast from road trip to interviewing people while walking.
I mean, this podcast has just flown off the handle, you know?
We had structure, nominally, and now there's none.
Now it's just chaos.
We're going to go over here.
Oh, look, we've got folks.
We've got folks here.
People are still up.
Yep.
So you're hoping to get folks at NVIDIA interested in Val?
Yeah, because I think the basic model lends itself to concurrency,
and I would like to make sure that we have a strong...
What?
We're not sitting here?
I would like to make sure we have a strong model for GPGPU computation.
Although we're not allowed to say GPGPU anymore.
Jensen banned it.
They're just GPUs.
They're just GPUs.
And GPU doesn't stand for anything.
It's just...
Sure.
Ask Wikipedia.
See what they say.
Yeah.
It's General Purpose Unit.
Whatever it is.
There we go.
So, you know, I was talking with Conor earlier.
I'm actually interested in how we take some of his interest from array languages,
which have huge benefits under SIMD and GPU processing and image processing, frankly.
And I think that very tight syntax... we have an opportunity to do kind of a general-purpose programming language that incorporates...
Bryce is excited.
He stole the mic from Sean, folks.
If you're wondering why Sean is no longer talking, it's because Bryce literally grabbed the mic out of Sean's hands.
I mean, if we could combine the composability and elegance and conciseness of an array programming language with the familiarity of a more general programming language, something that would be easy and natural for your average system programmer today to pick up, but also have all the power of an array programming language? Oh, that would be game-changing.
You have to understand that Bryce is staring deep into my eyes when he says "the familiarity."
So I think there's huge value there. And one of my complaints about C++ for a long time has been that, although it gets used in the GPU space widely, it hasn't embraced the GPU space.
And that's a quote. Yeah.
And even when you look at the work that's been done on vectorizing compilers, it's been what I refer to as performance through optimization and not performance by definition. I mean, yeah.
It's funny that C++ and C have become the de facto language for writing performance-sensitive code,
while at the same time it's one of the worst languages
for optimizing loops, for compiler optimization of loops.
Because you have problems like aliasing.
You know, you have, with the compilation model,
you could have functions where you can't see the definition of them,
and so you don't know what they could potentially do.
And because it's C++, they could do anything.
They could write around on memory.
It is so hard to get a compiler to do a good job of optimizing and vectorizing your loops in C++.
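A small, hypothetical example of the aliasing problem Bryce is describing (illustrative code, not from any real codebase): in the first function the compiler has to assume dst and src might overlap, which complicates or blocks vectorization; the second uses the non-standard but widely supported __restrict extension to promise they don't.

```cpp
// The compiler must assume dst and src may alias, so each store to dst[i]
// could change a later src value; vectorizing needs a runtime overlap check
// or gets skipped entirely.
void scale_add(float* dst, const float* src, float a, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] += a * src[i];
}

// With the (non-standard, but GCC/Clang/MSVC-supported) __restrict promise
// that the arrays don't overlap, the loop vectorizes straightforwardly.
void scale_add_restrict(float* __restrict dst, const float* __restrict src,
                        float a, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] += a * src[i];
}
```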
Yeah. And I was talking with Bryce last night at dinner and saying, you know, when I look at the SIMD work going on in C++, what I want is something closer to Halide, which is: let me express the math of the operation that I want, independent of the data type.
Because a lot of times when I'm doing an operation for graphics,
it's like I need to be able to write the operation,
and that needs to work for an 8-bit normalized type,
which is not an 8-bit float.
It's 8 bits that represent a value from 0 to 1, which means when you multiply two of those, it becomes a times b, plus 127 (which is roughly your half), divided by 255. And you can't do a division by 256, which would just be a shift, because unfortunately that gives you the wrong answer. So you always have this divide by 255, which you can do; there are very clever techniques to do a strength reduction on that division and make it very efficient. But really what I want to just say is: I've got this type, which is a normalized 8-bit type, and I want to say A times B and get the correct result.
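As a concrete sketch of the arithmetic Sean is describing, here is the normalized 8-bit multiply in plain C++ (illustrative code only, not Val and not anything from Adobe): 0..255 encodes 0.0..1.0, so a correct multiply divides the product by 255 with rounding, and a classic shift-and-add strength reduction avoids the actual division.

```cpp
#include <cassert>
#include <cstdint>

// Normalized 8-bit multiply: round(a*b / 255), written as (a*b + 127) / 255.
std::uint8_t mul_n8(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>((a * b + 127) / 255);
}

// The classic strength reduction of the divide-by-255: two shifts, two adds.
std::uint8_t mul_n8_fast(std::uint8_t a, std::uint8_t b) {
    unsigned t = a * b + 128;
    return static_cast<std::uint8_t>((t + (t >> 8)) >> 8);
}

int main() {
    assert(mul_n8(255, 255) == 255);   // 1.0 * 1.0 == 1.0
    assert(mul_n8(255, 128) == 128);   // 1.0 * x  == x
    assert(mul_n8(128, 128) == 64);    // ~0.5 * ~0.5 == ~0.25
    // The shifted form matches the rounded division for every 8-bit pair.
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            assert(mul_n8_fast(a, b) == (a * b + 127) / 255);
}
```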
And so I want to write an algorithm once, and then I want to be able to say: stamp out code for that, for SIMD, that executes as 8-bit normalized, 16-bit fixed point, 16-bit half float, 32-bit float. Okay? All of those SIMD-optimized, and then do the same thing GPU-optimized, for all of those kernels, for all of those algorithms.
And there's nothing in the SIMD library that lets me get even close to being able to represent something in a generic form in that fashion, and provide my types and stamp out that data. The closest thing right now is Halide.
And, you know, I think there's an opportunity here for us to say: look, when we say generic code, what we want to be able to say is generic code on a GPU, on a vectorization unit, where we say our value is represented this way, and we can stamp out efficient code for SIMD, for GPU, regardless of how we're representing our value.
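Here is a rough sketch of the first half of what Sean is asking for, in ordinary C++ templates (nothing here is Val, Halide, or the standard SIMD library; norm8 and modulate are made-up names): the value type carries its own arithmetic rules, and one generic kernel is stamped out per representation.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct norm8 {                                    // 0..255 encodes 0.0..1.0
    std::uint8_t v;
    friend norm8 operator*(norm8 a, norm8 b) {
        return { static_cast<std::uint8_t>((a.v * b.v + 127) / 255) };
    }
};

// Written once, purely in terms of the value type's own arithmetic.
template <class T>
void modulate(std::vector<T>& dst, const std::vector<T>& src) {
    for (std::size_t i = 0; i < dst.size(); ++i)
        dst[i] = dst[i] * src[i];
}

// "Stamp out" the kernel for two representations; half-float, fixed-point,
// etc. would be further instantiations.
template void modulate<norm8>(std::vector<norm8>&, const std::vector<norm8>&);
template void modulate<float>(std::vector<float>&, const std::vector<float>&);
```

What this cannot express, and what Sean is asking for, is the other half of Halide's split: choosing the SIMD or GPU schedule, tiling, and data layout per target, separately from the algorithm.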
And this is, I think, increasingly important today as compared to five or ten years ago because we have seen an explosion in just a number of types alone that are used.
We've seen increased use of things like fixed point, which this guy has worked on a bit (pointing at Conor here).
But also just in the floating point space, I mean, there's been this explosion of FP16 and FP8 floating point formats.
Yeah, and if you're guessing, who cares about the precision?
Am I right, folks?
I mean, there's something like four or five different just floating point formats for FP8.
I know David Olson listens to this podcast, and he will be, I'm sure, happy to be acknowledged, because he's done a substantial amount of work on, you know, those sorts of non-traditional floating point types that are important for machine learning, where, like, we're just guessing, so who cares, you know, we don't need that much precision.
Yeah. And, I mean, I think I also have an interest in being able to write that sort of code in a generic way where I can
easily change
the data layout because maybe
on one platform
I want the
data to be laid out in array of struct
but in another platform I want it to be laid
out in struct of array and maybe I want
different padding or different spacing
or different tiling on different platforms
and being able to write the code generically in one way and then be able to make those decisions
later, I think is increasingly important in the world that we live in where we're seeing
an explosion of heterogeneous hardware.
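A minimal sketch of the layout point Bryce is making, in plain C++ with made-up types (PointAoS, PointsSoA): the kernel is written against "give me field i" accessors, so switching between array-of-structs and struct-of-arrays, or changing padding and tiling, doesn't touch the algorithm.

```cpp
#include <cstddef>
#include <vector>

struct PointAoS { float x, y, z; };        // array-of-structs element

struct PointsSoA {                         // struct-of-arrays layout
    std::vector<float> x, y, z;
};

// Kernel written once against accessors; the layout decision is made later.
template <class GetX, class GetY>
float dot_xy(std::size_t n, GetX x, GetY y) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += x(i) * y(i);
    return s;
}

float dot_xy_aos(const std::vector<PointAoS>& p) {
    return dot_xy(p.size(),
                  [&](std::size_t i) { return p[i].x; },
                  [&](std::size_t i) { return p[i].y; });
}

float dot_xy_soa(const PointsSoA& p) {
    return dot_xy(p.x.size(),
                  [&](std::size_t i) { return p.x[i]; },
                  [&](std::size_t i) { return p.y[i]; });
}
```

In practice the accessor layer would also encode padding, tiling, and vector width per platform; this only shows the seam where that late decision could live.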
Yeah. And, you know, the closest thing we have right now is Halide, and hence Adobe's investment in Halide.
And Halide is great, but it's also always a bridge to another system.
And so it's like, okay, well, within C++, we have to do it one way,
and then within Halide, we get to do it in a more natural way. And it's just painful. And so I would like to be able to smooth that out and say: let's have one programming language that can do this efficiently, that can let me write, you know, my kernel operations and make the decisions afterwards about scheduling and tiling and layout of data.
Based on my target, largely those decisions will get made automatically for me, outside of my system, with all of that scheduling, and I get very efficient code.
And, you know, so personally,
it's like I think there's a gap in the industry.
I think there's an opportunity here.
I think the timing's right.
I happen to think Val is arguably the best solution.
And, like, frankly, I have criticisms of other languages.
My only criticism of Val is that it's not mature yet.
Right. In many ways, Val is my dream language. And I say that from somebody who's largely outside, right? It's not the language that I created. It's the language that Dimi and Dave Abrahams created, but I see the value in what they've created here. Yeah. So, you know, I think there's
an opportunity there. I mean, as a programming language enthusiast, I mean, I've been in seventh
heaven for the last two years. I mean, ever since Carbon got announced and CPP2 got announced. And
even before that, I think Jakt was a thing. We actually meant to have Richard Smith on the
podcast, but then CPPCast had him on first.
And so we thought CppCast just interviewed him.
We were trying to beat any podcast to interviewing all the successor languages.
And we did have Dave Abrahams on at one point.
He did talk a little bit about Val.
But I mean, it's been a year since CppNorth.
And I have to say, there has been not a ton of movement, publicly at least, on Carbon.
There might have been a lot of internal developments.
But we will link.
It won't be out probably by the time this podcast gets out.
It might be.
But the keynote that Sean Parent gave was fantastic.
The best part, in my opinion, was the Q&A at the end.
A lot of hot takes on other successor languages.
We won't repeat them here, but go watch the talk on C++ on C's YouTube channel.
The point is we've got Val, we've got Carbon, we've got CPP2, we've got other languages.
And I have to say, and we've got Circle.
So Circle, in like the pure C++ space, is what has me most excited.
But sort of in the adjacent "C++, what's the next thing" space. The fact, like, hearing you tonight say, not just, you know, Graydon's take and Eric's take, but that one of the get-go reasons was: we don't have a language that prioritizes heterogeneous
compute, and how do we target different types of acceleration. And that Val's biggest weakness
is that it just doesn't have a collection of people that are building up the community.
So my point here that I want to ask is if there are people listening to this podcast right now that want to get involved, is Dave the person to reach out to?
Like maybe there are Rust folks that are interested in working on a different language.
You heard that right.
Connor is saying that Rust people should leave Rust.
I didn't say that.
I think Rust is an interesting language as well.
But also, you know, I didn't speak up when I was hearing Sean talk about it.
But the lifetime issue is like, as a consumer of Rust libraries, Rust is amazing.
As soon as you become a library developer and you have to deal with those lifetime issues
and you have to spell the types of these complicated things that you're returning from objects of the iterator model,
I've done that.
I have my own little iterx library.
And I tried to compose two simple functions, zip and map.
And I ended up learning about Box and dyn and going to Stack Overflow. And I was like, wow, I really, truly appreciate the value of deduced function return types in C++ now.
Because that was painful.
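For the C++ side of the comparison Conor is drawing, here is a small sketch (C++23, using std::views::zip and std::views::transform; nothing here is from his iterx library): the composed view's type is long and implementation-defined, and a deduced auto return type means you never have to spell it, which is the step that, in his telling, pushed him toward Box and dyn on the Rust side.

```cpp
#include <cstdio>
#include <ranges>
#include <tuple>
#include <vector>

// The type of this composed view is long and implementation-defined;
// `auto` means we never have to write it out.
auto zip_map(const std::vector<int>& a, const std::vector<int>& b) {
    return std::views::zip(a, b)
         | std::views::transform([](auto p) {
               return std::get<0>(p) + std::get<1>(p);
           });
}

int main() {
    std::vector<int> a{1, 2, 3}, b{10, 20, 30};
    for (int s : zip_map(a, b))
        std::printf("%d ", s);   // prints: 11 22 33
}
```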
And it sounds like Val does not suffer from that at all.
In fact, it dodges that whole issue of lifetime.
So, you know, I actually haven't taken a deep dive into Val, but my point is, is that I am
personally excited about it. And I feel like people listening to this conversation are probably as
well. So the question is for those folks that maybe want to get involved, is there a place they
can go or a person they can contact? Not that, like you said, you're not in the community development
initiative, but it's, I feel like people are going to listen to this conversation and get excited the same way that
I am listening to this. So feel free to just point at Dave's Twitter handle if that, if that's what
it is, but what, what should people do if they're excited and want to get involved?
Yeah. I would say: Dave is not on Twitter anymore for, you know, political reasons around Twitter. Reach out to myself on Twitter
and I can hook you up. Go to the Val Lang GitHub repo, which is pretty easy to Google and find.
Yeah. Yeah. We'll link to it in the show notes. So that'll be good. And, you know, at this point,
it's pretty easy. We have like a twice a week meeting over Teams where we meet.
We have, it's not just Demi and Dave, we have a couple of other external developers who are actively contributing.
One of them is working on the concurrency model and improving that.
We have an active Slack channel for Val. And so wait, are you telling me
you have a Teams meeting
that is open to people
that want to get involved
that any external person can join?
Yeah, absolutely.
I can guarantee you.
I mean, actually, I can't
because I mean,
I don't know how interested
our listeners are.
But the fact that there will be a link
in the show notes, folks,
of this conversation,
where even if you don't want to start doing work,
but if you just want to listen
to the conversation that is happening,
if you are an external person outside of Adobe,
check the show notes.
It'll be at the top.
It'll be the number one link.
I guarantee you there's going to be a few extra people
that are just sitting in on the conversation
wanting to listen,
and a few of those people are going to want to get involved.
And that's great.
I mean, the only way that I see this happening is as a grassroots effort.
It's going to happen because people believe in the model and people buy into it.
And individuals are willing to contribute and build it out.
Right.
There is not, like I said, Adobe's not in the tooling business.
We're not going to move into the tooling business.
So Adobe will put some emphasis behind it,
but we can't match what Google is going to put into Carbon
or what Microsoft is going to put into their efforts
or what Apple is going to put into their efforts.
But I think that that's actually a good thing,
right?
If you look,
you know,
C++ didn't happen... only marginally happened because Bjarne was being backed by AT&T,
but it wasn't the AT&T name behind it that made it happen.
AT&T at least kept Bjarne in the game
and kept it funded enough
that a community built around it
and made C++ happen.
And I compare
that to kind of Java, which I think
was, you know, heavily
funded.
And pushed down.
Thank you, Sun.
Yeah, thank you, Sun.
You know, and Oracle to follow.
And it's, I don't want that.
Right?
Yeah.
If it needs funding from a corporation to succeed, probably it shouldn't succeed.
I'm not saying that about Java.
I'm just saying that in general.
Yeah.
So, and I agree with that. It's like, if this is going to happen, it's going to happen because it's the right answer at the right time.
And I happen to believe it is.
And, you know, Dave Abrahams has been kind of dragging me along, right?
Part of this has been like, he came in, to my team, he had a foot in Val. And part of our agreement was, you know, come on,
work on the team, and you can continue working on Val. And slowly that's become like,
maybe more of the team should be involved in Val. And maybe Adobe should put up some funding
to help with Val. And that's where we are right now. And it's,
you know, it's because I think it's the right answer at the right time. I just do. And compared
to all the other languages, I think we have the best answer. So, so.
You know, I think in a way, some programming languages that start out being driven by one
company and have a lot of resources behind them struggle to move past that legacy.
One of the biggest challenges that the Carbon team thinks that they'll face is making it
not just be a Google thing, that they want to build a community around it.
They don't want its identity to be that it's a Google thing.
Rust also had this in the earlier days where it was a Mozilla thing.
Swift has this problem too.
Some of the more successful languages didn't start out with a huge effort behind them.
And having multiple companies and multiple organizations and multiple people from all over the world, from all different backgrounds, involved in the project, instead of a single big sponsor, that, I think, makes a project more resilient.
I definitely know that the reputation of Swift is that it is like a Mac-only thing, because I've personally tried to do Swift on Linux, and they have a terrible story for it. And there are people within the Swift, you know, ecosystem at Apple that are working on that, but it's just not a priority for them. They're focused on SwiftUI, they're focused on, you know, iOS stuff, and that makes sense for them. It's still a top-20 language. I think it's ranked like 13th or 14th in the, you know, programming language rankings, depending on the site that you look at. But there's definitely something to be said about languages like C, C++.
PHP is still a top 10 language.
What company is that endorsed by?
No company.
And in spite of people hating the language, it's still a top 10 language.
And you can argue that that's because of Facebook.
Okay, Bryce has something to say.
Well, I was going to say, earlier when we were talking about what was the size of the team that you would imagine for Val.
Sean, you said something like three to five core people.
And as somebody who has spent many years of my life involved in the C++ committee, which is a room of 300 different authors all with their own goals
and objectives and perspectives, you know, I think the best thing for C++ was if we just picked eight... it could be eight people at random.
Picked eight people at random, but Barry's one of those eight people?
Yeah, but... seven. Seven people at random from the committee, and just say, like, you're the people now.
Sometimes a bigger team is not a better team.
Sometimes having a smaller, more focused team is the right approach.
Be sure to check the show notes either in your podcast app or at ADSPthepodcast.com
for links to everything we mentioned in today's episode, as well as a link to the GitHub discussion where you can leave questions, comments or thoughts.
Thanks for listening. We hope you enjoyed and have a great day.
Low quality, high quantity. That is the tagline of our podcast.
It's not the tagline. Our tagline is chaos with sprinkles of information.