Algorithms + Data Structures = Programs - Episode 158: The Roc Programming Language with Richard Feldman (Part 2)
Episode Date: December 1, 2023
In this episode, Conor and Bryce continue their interview with Richard Feldman about the Roc programming language!
Link to Episode 158 on Website
Discuss this episode, leave a comment, or ask a question... (on GitHub)
Twitter
ADSP: The Podcast
Conor Hoekstra
Bryce Adelstein Lelbach
About the Guest: Richard Feldman is the creator of the Roc programming language, the host of the Software Unscripted podcast, and the author of Elm in Action from Manning Publications. He teaches online courses on Frontend Masters: Introduction to Rust, Introduction to Elm, and Advanced Elm. Outside of programming, he's a fan of strategy games, heavy metal, powerlifting, and puns!
Show Notes
Date Recorded: 2023-11-13
Date Released: 2023-12-01
Software Unscripted Podcast
The Roc Language
Hylo Programming Language
Carbon Programming Language
Elm Programming Language
BQN Programming Language
Continuation Monad
Continuation Passing Style (CPS)
C++ Senders and Receivers
Intro Song Info: Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
I did set out to like, you know, this has always been a serious language.
The intention was always for it to be used in industry.
And even if it's only been a little bit so far, it's really encouraging to see it like
cross that milestone for real and be like, yeah, we did actually check that box.
And knowing like how rare it is that a language that sets out to do that even actually makes
it to that point. Welcome to ADSP: The Podcast, episode 158, recorded on November 13th,
2023. My name is Conor. Today with my co-host Bryce, we continue part three of our four-part
conversation with Richard Feldman, the creator of the Roc programming language, and we chat about what the status of the Roc programming language is, where it's being
used in production, and more. Speaking of which, what is the state of the art of Roc programming
right now? I know that from listening to your podcast that there are folks out there using it
for personal projects and stuff. Because other languages, like Carbon, I don't
think have a compiler; it has an interpreter. Hylo, I think, just released
their compiler on godbolt.org. And what's the other one that's being worked on? Oh yeah, CPP2.
I think that has some amount of a transpiler. I don't know if they're calling it a compiler,
but the other languages... I'm not sure if Roc fits into that space, but the languages that we talk about on this podcast from time to
time, you know, don't have... no one's going and building, like, you know, applications in them
right now. So what is Roc? I think Roc is a bit further ahead, but you can tell us.
So I would say that, I mean, Roc is at the point where people could be building applications with
it. But I haven't seen anyone really set out and say, hey, I'm going to do my project in Roc. I think part of the reason for that is I've been pretty vocal about
discouraging people from doing that, just because I'm very aware of, like, you know, how
many bugs are in the compiler and, like, you know, how much work we have to do before it's sort of
like a stable experience. And I would hate to have someone start on a project before the language is
ready, and they just get part of the way through and they just get blocked, and it's like, I'm sorry, you know, you've
got to wait until we finish this part of the project. Having said that, that is
changing. I think at this point it's a reasonable choice to build applications in Roc, and we're
about to update the website to reflect this. So depending on when this episode goes out, that
might be in the past, but here in real time, the plan is to ship that next week. I don't know how
long until this episode will be coming out. It's not coming out this Friday, because the first episode was Strange Loop, and then our hard left tangent into
Hawaii land. So that'll be the 17th, and then this one will either be in part one or part two, so it'll
be either November 24th or December 1st. We'll wait. Yeah, the new website. Okay, so, past tense.
All right, so as you all know, the new website looks much more like a normal programming
language website, whereas the previous website, which I guess you can go to, like, archive.org to
see what it looked like, was very, like, bare bones, no styling. It was like, hey, this is a placeholder until we have a real
website, because we want the state of the website to reflect the state of the project. Um, but yeah,
but that's, like I said, that's changing. Um, so yeah, you mentioned this a little bit earlier, but
the company I work for now, Vendr, we've been working on introducing Roc to the back end, and
that's been sort of my major project. Um, actually, so we now have it where, uh, this is in production, so we know it works. Like,
it's actually getting user traffic and whatnot. Uh, and Vendr's a billion-dollar company, or at least
that was their last valuation, um, their last, like, fundraising round. Um, but I don't know
what the actual traffic numbers are offhand, like, you know, in terms of number of users.
The previous company I worked for, NoRedInk, had a lot of users, but not that high of a valuation.
So, like, millions of users.
But anyway, regardless, it's getting real user traffic.
But it is very small.
Like, what we've done so far is a little bit more than a proof of concept.
It's, like, one step past that.
So we haven't really, like, shipped a big feature in Roc yet, for example. So one of the things that I did
was based on my experiences introducing new programming languages, which granted is pretty
limited, but I have done it at a couple of different companies. I have seen it go a lot
better if you can have a really small level of granularity of introducing the new language, ideally at the function level. For example, not at the service level, not saying, like, okay, we
have to spin up a new service in this new language. It's way better if you can just, as we now have the
ability to do at Vendr, take this Node.js TypeScript back end and just import a Roc module and just
call a function from it, and it just works. Which it does now.
And we also open-sourced the way that we're doing that.
It's roc-esbuild, if you're curious about that.
So you can use that in your project too if you're using esbuild,
or even if you're not, like the bones are there
to be able to use it in other systems as well.
And also I had to do a separate thing for Jest,
and there's an example in the repo of how to do that for the Jest test runner,
if that's the test runner you're using.
But basically, that is, as far as I know, the only example of Roc being used in production at a company.
If somebody else is using Roc in production, let me know.
I'd be very curious to hear about it.
But just thinking back, like, I was kind of around in the early days of Elm, like 2014,
like, you know, almost 10 years ago now. Um, and I remember back when, kind of similarly, there was one company using Elm for one thing in production, uh, which was CircuitHub, and they
were using it for some little, like, widgets they had as part of their UI. And then, you know,
another company started using it, and people started doing side projects and this and that,
and it kind of grew from there.
So I kind of think of Roc as being at a pretty similar stage, where Vendr is definitely the early adopter of, you know, using Roc in production.
If that sounds interesting, by the way, we're hiring.
But basically, there aren't any other companies that I know of that have actually started, you know,
building something substantial on it. And all the applications that I know of that people have done as side projects have all just been kind of, like, pretty small and, like, for fun. I would say the
coolest one that I know of that somebody actually, like, you know, shipped: somebody built a
mechanical clock that used servos to change sort of, like, little fake LCDs. Like, you know,
LCD clocks, where you have, like, the horizontal and vertical bars, usually, like, green
or amber or something. Um, so he made one of those, but the bars are actually, like, made out of wood
or something, and there's little servos that rotate them to change, like, what time it is.
And he programmed the servos in Roc, um, on, like, a Raspberry Pi or
something. Um, so that's cool, but that's obviously not the same thing as, like, it being a battle-tested
language, which it very much is not yet. Still, I mean, it sounds like you're way, way
further ahead than, um, some of the other... what do we call them? Successor... I guess Roc is not... it's
not aimed at being a C++ successor, but...
Oh, definitely not, no. Of the languages that we chat about, they typically all are, you know, C++
successor initiative projects. But, like I mentioned before, a lot of them
are still very early days. Like, even Carbon, I think they have explicitly said their
roadmap is, like, half a decade or something.
They're taking it very slow and experimenting
and making sure that they're mapping things out in a certain way
so that they'll have really good backwards compatibility with C++,
which is fair enough.
You can attack designing a language at whichever speed
or design plan you want.
But it sounds like Roc is way further ahead, because your company's actually...
you know, you're transitioning the back end,
it sounds like, to be entirely in Roc, or at least to a certain extent in Roc.
And that's, like,
you have to be at at least a pretty good stage in order to be able to
successfully do that, right?
Yeah. And I mean,
we're expecting there's going to be bumps in the road and stuff like that.
It's not like we're going to be like, all right,
everybody drop what you're doing and rewrite in Roc. It's more
like, you know, the next milestone in my mind is: let's ship a new substantial feature in Roc.
Because so far, all we've done is just kind of been like, okay, let's try rewriting this
little tiny chunk of logic that's, you know, somewhere in production into Roc, and see if
it holds up under traffic and, like, you know, doesn't do anything weird or surprising. Which it hasn't. So, great, you know, milestone passed. Another milestone was people other than
me writing Roc code at the company, because of course, at first, it was kind of all me just,
you know, setting things up and whatnot. Also past that milestone: there have been two other
people who have worked on the production code base, writing Roc. But basically, like, you know,
shipping a substantial feature is
obviously more work
and also a greater level of risk
of running into some showstopper
that was unanticipated.
A funny example was, I was talking to
one person who was working on the Roc stuff at work,
and he was like, oh, what's the story around
parsing ISO 8601 date-time strings? I was like, doesn't exist yet.
But actually, I wrote the library for that in Elm,
so I know how to do that.
So let me go, like, you know, throw one together.
It's not a particularly complicated format to parse.
But at the end of the day,
there's going to be stuff like that
where we just, you know,
sit down to try to solve a problem in Roc
and it's like, oh, where's this off the shelf thing?
It's like, well, the shelf is very small right now. So, you know, plus on top of that, there's potentially, like, you know, compiler
bugs and things like that to contend with, um, which, you know, is just going to be part of the
experience of using a bleeding-edge language. You said one of your next
milestones is going to be shipping some new feature. Do you have a list of features that you are going to be,
you know, trying to choose from, or is that down the road?
We have a list of contenders.
Um, and they basically have to do with,
um, like, whether or not they seem suitable.
A lot of it has to do with sort of trade-offs around how much is it going to
hurt us that we don't have any kind of developed third-party ecosystem yet.
Like, in other words, how much of this is just going to be, like, handwritten from scratch regardless?
Um, and also, other considerations are, like, how big of a project is this? Because again, like, you know,
the bigger the project, the greater the downside potential if you do get blocked for whatever
reason partway through it and then be like, okay, well, we need to ship this. So, you know, how long
is it going to take to remove the blocker? If it's fast, or there's a workaround, that's fine.
If not, maybe we have to go back and rewrite.
And then the more you have to go back and rewrite, the bigger the deal that is.
So we have a list of contenders, but we haven't picked one yet.
Those discussions are sort of like in progress right now.
But it is definitely very exciting to me that, like...
I mean, I was thinking about this at some point.
I was like, what percentage of programming languages can ever say somebody other than the person who
created it got paid to use it at work? Like, multiple people, um, who they never knew before
they took that job. Like, I think it's a very, very small single-digit percentage. Might be less than
one percent. Oh, I thought you were talking
about a total number, and I was like, oh, it's got to be more than double digits. But percentage, percentage.
Yeah, yeah, yeah. One percent? Oh, certainly. Yeah, there's, like, lots of languages that have done it,
but way, way, way more languages never make it to that point. Um, yeah, especially if you're
counting, like, all the, you know, I-just-made-a-little-language-for-fun-in-a-weekend
type of projects. I discovered two languages called Seriously and Actually.
I was like, what?
Yeah.
Yeah.
I mean, there's, like... and, like, there's absolutely nothing wrong with that.
Like, it's cool to make languages just for fun.
But, you know, I did set out to, like, you know... this has always been a serious
language.
The intention was always for it to be used in industry.
And even if it's only been a little bit so far,
it's really encouraging to see it, like, cross that milestone for real
and be like, yeah, we did actually check that box.
And knowing, like, how rare it is
that a language that sets out to do that
even actually makes it to that point.
Do you... I discovered that
because one of my favorite languages is a language called BQN,
which is, like, a next-gen APL, and it's not currently recognized on GitHub as a language.
But, like, they have some criteria that some third-party site does, and there you have to cross
some threshold, like 200 repos. They're very close to that. Have you passed that threshold, or are you getting there as well?
Yeah, we asked the same question.
So in order to get .roc files
to have syntax highlighting on GitHub,
we have to have more than 200 repositories
that are using the language,
which we do not yet.
I don't actually have any idea what the count is,
but we kind of, like, did a search at some point,
and it was like,
okay, we're definitely not, like,
right on the cusp or anything either. Um, honestly, like, most of the repos right now are just, like,
people doing Advent of Code and stuff. Uh, and there will be more and more library stuff. Like, we now
have a working but not completely fully featured, um, like, Postgres adapter. That... shout-out
to August, he wrote that, uh, in, like, pure Roc. So he did,
like, the binary decoding of, like, Postgres's format in Roc, which is really awesome. Um,
and he also did a GraphQL one. Um, Luke Boswell has done, like, JSON, and, uh, has started working
on the, um, Unicode stuff. Uh, so there's a lot of, like, opportunities for people.
Oh, um, Archer started on a regex. Um, so there's a lot of, like,
you know, package stuff that can come up. Um, and then also, just, you know, if people want to open
source their, uh, you know, language projects, uh, like their applications, then, you know, that's cool
too. But yeah, I'm not really, uh, in a big rush to, like, you know, get to 200 repos so we can get
the syntax highlighting. It's more important that people are enjoying the language and, like, finding it useful. Yeah, I think that'll
just happen with time. I was more curious, uh, just because, I mean, it is nice when you're on
GitHub. And I think it's also kind of, like, a stamp of approval, talking about, you know, the, you know,
success, or, uh, how many people get paid, et cetera. Um, I mean, it's definitely also a milestone,
but it's not one that I'd want to sort of game.
You know, I wouldn't want to be like,
oh, let's everybody go make five repos,
you know, that use, like, Roc,
so we could get there as soon as possible.
I'm just like, yeah, you know, it'll happen when it happens.
Yeah.
I know that we're getting close to time
and I'm not sure if you have a hard stop,
but maybe in the last few minutes here... we've talked kind of a little bit about the language features.
I mean, we've mentioned a bunch of things, which I think implicitly, like, ecosystem stuff. You know, coming from C++, we don't really have a
standard package manager, and I think that's a huge shame. But, like, also, like, you know,
to what extent does Roc support metaprogramming? Because I think that, like, across
functional languages, uh, there are very different stories for, you know, compile-time stuff: whether, you know, they don't support it at all, or they support it through, you know, Lisp-style macros, et cetera, et cetera.
So, yeah, like, of the stuff that we haven't really mentioned that might pique our listeners' curiosity, feel free to, you know, rattle through things, language or library or ecosystem, that apply to Roc.
Yeah, so, I mean, a couple of things. Um, so, uh, Roc is a pure functional language. So that
means there is no mutation; there are no side effects. Uh, I guess since this is
an audience that's familiar with C++, I can kind of talk about what the representation is. Or I
should say, what the representation is going to be, because we have not converted over to this yet,
but this is what it's going to be, and what it's going to be indefinitely, so I'll just talk
about it in those terms. Basically, the way that IO works is essentially that... let's say I want to
read something from a file, and then I'm going to do some stuff with it, and I want to
write something back to another file. So the way that that works in Roc is, what you have is... we call it a task, and this is basically
going to be a struct that has some sort of... actually, I guess, well, it's really going to be a
tag union, so let's say it's a union, um, in C terms. And you have a discriminant that says, like, here's
the IO operation that I want to do. So it could be a file read or a file write. Then it's also got
inside of it all the
information necessary to be passed to that syscall, or whatever the IO operation is. So in the case
of read, that would be maybe... this would be... let's assume that we're not doing, like, open and read.
Let's assume that we're just, like, giving the file path. But of course you could have a separate,
you know, open-and-get-the-file-descriptor if you wanted to. That, well, that gets into
platforms and applications, which I guess I'll have to talk about next. But so you have the
operation that you want, and you have all the information necessary to perform that operation.
And then the last thing that you have is essentially a callback that says, like, once the
operation is done, I want you to call this function. And that function is going to run and give you
back another one of these union things that says, like, here's the next IO thing that I want to do. And so, essentially, now we have syntax sugar
that makes this look a lot more imperative, I suppose, in the sense of, like,
you know, okay, first do this file read, I get this thing back, now that's in scope, I
can do whatever I want with that. Kind of like how Haskell's do notation looks, it looks similar to
that. And then once I've done the stuff that I wanted to do, then I'm going to call this file write.
And then, you know, maybe I do some more stuff after that.
But essentially what's happening is that that data structure is, you know, is what allows us to be purely functional.
Like anytime you're doing IO, you're not actually running the IO right away.
Rather, you're building up this sort of series of,
here's the IO operation that I want, and then here is a callback function that is also pure
that's going to return another one of these things. And then once you've built up, your
program sort of describes a big chain of these things. At the end of the day, there's essentially
a runtime interpreter that goes through and steps through all these things. Now, what's cool about
that is that, A, you get all the nice
properties of pure functions. So if you call these functions that return tasks a bajillion times,
they still just return the same thing. They don't run the I.O. operation a bajillion times,
which is really nice for caching and replayability and all these various things.
But also it means that if I want to do asynchronous, for example, you're perfectly set up for that
because you just call the thing, you run the IO operation, and then you don't call the
callback until whatever asynchronous thing you were doing is done.
And so you can interleave these however you want.
The structure is capable of describing like, I want to run this one and then that one,
or I want to run these two
in parallel. I say that in the present tense as if we've implemented that part already, which we haven't, but
it is capable of describing, like, I want you to run this thing and then also this other thing
concurrently, and then I only want you to run, you know, both of their callbacks after they both
finished in whatever order they happen to finish in. So there's a lot of really nice properties to doing that,
even if you're not trying to be a pure functional programming language.
Another advantage to this is, so in a lot of languages where you have an async capability like that,
you have this split ecosystem where you have the synchronous version of the IO operation,
then you also have the async version.
And if you mix and match those, you tend to have a bad time. In Roc,
I mean, one of the sort of weird benefits of having the constraint of being a pure functional
language is that you cannot possibly have the synchronous version. It's not even describable
in the language. So everything is async, which means you don't have the split. Everybody's
always using async everything all the time. And that's the only way to do IO in the language.
I guess I should pause there before talking about platforms and applications. But
I guess that gives you a sense of, like, what the language is sort of about. So we do have, like,
tail recursion elimination, and also modulo cons, for those who know what that means. And basically,
that means that if you want the compiled output
to be essentially a loop, you just write a tail-recursive function, and it will compile to a loop.
Which means you can actually have an infinite loop in Roc if you write a recursive function.
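What "compiles to a loop" means can be shown with a quick sketch. Roc guarantees this transformation for tail calls; Rust does not, so the Rust pair below (with made-up function names) is only an illustration of the equivalence a guaranteed-TCO compiler exploits.

```rust
// The recursive call is the LAST thing the function does (tail position),
// so a compiler that guarantees tail-call elimination can reuse the stack
// frame instead of growing the stack.
fn sum_rec(n: u64, acc: u64) -> u64 {
    if n == 0 { acc } else { sum_rec(n - 1, acc + n) } // tail call
}

// The loop a TCO compiler would effectively produce from sum_rec.
fn sum_loop(mut n: u64, mut acc: u64) -> u64 {
    while n != 0 {
        acc += n;
        n -= 1;
    }
    acc
}

fn main() {
    assert_eq!(sum_rec(10, 0), sum_loop(10, 0)); // both 55
    println!("{}", sum_rec(10, 0));
}
```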
And we actually do, like Rust, detect a couple of basic cases, like if every call
results in recursion. I think... actually, I forget if we finished that; that
might not actually be implemented yet. I know Rust has this, and we either do already or plan to.
There are some basic things you can detect, where it's like, you know, we can't solve the halting
problem, but you can at least detect some obvious cases, like, hey, there's no base case here, so,
like, it's clearly never going to terminate. But at any rate, um, so using the language feels very expression-oriented, because that's kind of all you can do. Um, we do
actually have a couple of statements. Like, uh, dbg is, right now, just, like, write
something out to, like, the standard out, just for debugging purposes. That's a statement. And then
the other one we have is expect, which is, uh, kind of like an assertion. It's like debug_assert
in Rust, where basically it's, like, an assertion, except that it doesn't halt the program. It's just, like, if
it fails, it'll tell you about the failure, just for your own, like, informational purposes. And then
in an optimized build, it just gets thrown away. So this is kind of a way to help yourself out with
testing. Oh yeah, and also, if an expect fails during a test, then the test fails. So this is a way to sort
of sprinkle throughout your program little things that you expect to be true, like, sort of encoding
assumptions in your code. Even if you're like... but not so much that I actually want, you know,
a runtime crash if we get here. I just want to know about it, but I don't want it to, like, stop
the program. So, yeah, let me stop there before I get into platforms and applications,
which is sort of, like, the big idea in Roc, I guess,
and the thing that's most unique about
its language feature set. And, I don't know,
let's see what you think, or if you have any questions.
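As an aside for readers coming from Rust, the `debug_assert` comparison above can be made concrete. One difference worth noting: a failing `debug_assert!` halts a debug build, whereas Roc's `expect` only reports the failure and continues. The function below is a made-up example of the same "encode an assumption cheaply" pattern.

```rust
// debug_assert! is checked in debug builds and compiled away entirely in
// --release builds, so encoding the assumption costs nothing in production.
fn average(xs: &[f64]) -> f64 {
    debug_assert!(!xs.is_empty(), "average called on an empty slice");
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn main() {
    println!("{}", average(&[1.0, 2.0, 3.0])); // prints 2
}
```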
I mean, my one question,
and we'll see if Bryce has any:
is there a name for that, like, task-chaining IO?
Like, is that, like, a novel thing that the Roc people and you came up with?
Or was that, like, an idea borrowed from an academic paper that has some esoteric name?
Or maybe it's a popular thing and you were just describing it, that I haven't heard of.
The academic esoteric name is the continuation monad.
I did not come up with it.
William Brandon, shout out to William.
He told us about this.
I didn't actually realize that that was something you could do in a language that has the type system that Roc has.
But it turns out you can, and it has a bunch of really nice properties.
We're only kind of scratching the surface of what we want to do with it.
There's a whole bunch of stuff planned that I would be very excited if, in the next year, we got those things implemented. Um, but I don't know if we will, because it's a substantial list of, like, really cool things,
um, that we want to do with this. But, uh, I mean, the upshot of it that I'm
most excited about is that, from a user perspective, it feels really simple. It feels
like using, like, promises or, like, futures or something like that, or, like, async/await. You're just like, okay, I have this thing, I want to
await it, um, and then, all right, then I keep going with my life. Uh, there's no split between, like, oh,
this thing is async and this other thing is synchronous. It's just like, if you're doing IO,
it's always asynchronous. Um, and then all of the nice properties that come with it are really mostly either behind
the scenes, or they enable a feature where it's like, oh, cool, I have this new thing that I can
choose to use for testing, or this, or that, um, that I don't have access to in other languages.
So is this the same thing as CPS, continuation-passing style? It's a form of it.
I mean, so the continuation passing part would be inside that data structure.
I mean, there is a continuation, right?
I use the term callback, but that's basically the idea.
It's like, you know, you run this IO operation.
Then afterwards, it's like, here's the thing that I want to do next.
That's the continuation, right?
It's like, that's what we're going to do after we've done, you know,
whatever the IO operation is. So you can think of that as, like, well, we've suspended
execution. Or you can think of it as, like, well, really what we have is a continuation
that's like, here's where the rest of the program comes from. And we're not going to call
that continuation until after we've finished the IO operation in question.
I mean, this is interesting because I've never implemented a continuation monad.
But, you know, I've read stuff about it.
And, I mean, Bryce can correct me if I'm wrong, but isn't senders and receivers just the continuation monad?
Yeah.
So that's, like, a new asynchronous... I don't know, what do you call it, a framework,
Bryce? That has been proposed. Sure. Has it been accepted for C++26, or not yet?
It's been design approved. I don't know what that is in committee
speak, but, uh, let's not worry about... I don't think we need to... Not yes; it's on track. But all right,
anyways, about the specifics: potentially coming to a newer C++ near you in the future,
they've also implemented, or approved, a language feature slash framework, which is essentially...
I don't think it's actually anywhere
in the paper that proposed it, but... Hang on, but I think that C++ coroutines are
much more similar to the async in Roc than... uh, I mean, there is some degree of
overlap, in that what we call an awaitable
in a coroutine...
an awaitable, essentially, can be a
sender
too. But I think that this is much
more similar to, like, the classic
async/await pattern,
which we also have in
C++.
I think what it most closely resembles
is sort of
the popular async/await way that languages do that today.
Right.
Which, to my understanding, in C++ we would be able to re-implement coroutines on top of senders and receivers, aka the continuation monad, once we have it.
Re-implement coroutines on top? Like, I thought
coroutines are basically just, like, a specialization of, or, like, an implementation of, the continuation
monad, uh, to do your async/await stuff. Like, the continuation monad is, like, the most general
pattern, but, like, you see it everywhere, right? Like, there's a bunch of different... I don't know
if you call them specializations or applications. It is definitely... So, um, senders allows you to
express asynchrony that is linearly chained and not strictly continuation-passing style,
and it also allows you to express continuation-passing-style asynchrony.
So do you understand the distinction between those two there, Conor?
I think so.
But I feel like this is kind of like, now I've waded into academic waters that, you know...
No, it's actually fairly important and relevant to our platform.
And it's a question of when are you going to know
what the task is that you're going to enqueue.
In a C++ coroutines world,
where you've got a function that co_awaits...
so I've got three statements,
each one co_awaits on a separate call to something.
You know, after that first co_await, the coroutine yields back.
And then you've got this function that you can call to resume the coroutine.
And you have no way to look ahead and to know that, oh, you know, the next co_await is going to await on this expression,
and then the next co_await is going to await on this other thing. Whereas in the
senders framework, you would express that as... you build a pipe, where you'd say, like, then
first thing I'm going to call, then second thing I'm going to call, then third thing I'm going to
call. And then, you know, at the time of that task-graph creation, you know
and can see all three of those calls. And so you know exactly what's going to happen. Now, in the
C++ world, the compiler under the hood, it obviously has to know these things.
But the interface that a C++ coroutine exposes to you,
the user, and to you, the implementer of a scheduler
or an execution context...
all you get from a coroutine is just,
hey, here's a thing that you call
that will resume the thing.
You don't get any other information about the task graph,
which is very annoying for some execution contexts.
Yeah, if you're trying to schedule stuff efficiently.
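The distinction Bryce draws, an opaque "resume" handle versus a task graph the scheduler can inspect before running anything, can be sketched with a toy pipeline type. This is not the actual senders/receivers API; all names here are invented for illustration.

```rust
// A pipeline built up-front as data: a scheduler can count, reorder, or
// fuse the steps BEFORE executing anything, unlike a coroutine handle
// that only exposes "call me to resume".
struct Pipeline {
    steps: Vec<Box<dyn Fn(i32) -> i32>>,
}

impl Pipeline {
    fn new() -> Self {
        Pipeline { steps: Vec::new() }
    }
    // Analogous to chaining "then first thing, then second thing, ...".
    fn then(mut self, f: impl Fn(i32) -> i32 + 'static) -> Self {
        self.steps.push(Box::new(f));
        self
    }
    fn run(&self, start: i32) -> i32 {
        self.steps.iter().fold(start, |acc, f| f(acc))
    }
}

fn main() {
    let p = Pipeline::new().then(|x| x + 1).then(|x| x * 10);
    // The whole graph is visible before execution:
    println!("{} steps known up front", p.steps.len()); // 2 steps
    println!("{}", p.run(1)); // prints 20
}
```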
All right, how does that map back?
Did you track all of that, Richard?
Was that all...
Well, having personally used zero of those C++
features, I didn't track it great. I mean, in all fairness, one of them doesn't exist yet, and
the other one has zero library support for it, so... There's only a handful... All right, well, I don't feel
too bad. Well, so, I guess another way to think about it is, um,
in the traditional continuation-passing style, like, if I want to chain, like, three pieces
of work: piece of work A is going to run to completion, and then it's going to
launch piece of work B, that'll run to completion, and that's going to launch piece of work C. Yeah, totally.
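That A-then-B-then-C chaining is classic continuation-passing style, which a few lines of Rust can illustrate (function names and values are made up for the example):

```rust
// Each piece of work runs to completion, then itself launches the next
// piece by invoking the continuation `k` it was handed.
fn work_a(k: impl FnOnce(i32)) { k(1) }             // A completes, launches B
fn work_b(x: i32, k: impl FnOnce(i32)) { k(x + 1) } // B completes, launches C
fn work_c(x: i32, k: impl FnOnce(i32)) { k(x * 10) }

fn main() {
    // The control flow lives entirely inside the nested continuations.
    work_a(|a| work_b(a, |b| work_c(b, |c| println!("{c}")))); // prints 20
}
```

Note that only `work_a` appears at the top level; B and C are enqueued from inside the running work, which is exactly the property that makes this style awkward on accelerators, as discussed next.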
So the challenge with that continuation-passing-style approach, especially in this day and age,
is that you might be running on an execution environment
where you either cannot, or it would be slow to, enqueue the next piece of work.
So, like, for example, a GPU.
On NVIDIA GPUs, we can do what's called nested parallelism, or dynamic parallelism,
where you can have one kernel launch another kernel, and launch another kernel,
which has been supported on NVIDIA GPUs, but has been slow, for a long time.
We've recently released a new version of it, which is a little bit faster.
But there are also some accelerators where... to the place that can launch the work...
especially if the continuation is wrapping a bunch of asynchronous tasks, where we don't have to rely upon continuation-passing style, where the control flow, or the... what's the word I'm looking for...
like, execution control does not have to be done in place. Execution control can be done a priori,
or can be prescribed, or can be, like, left up entirely to whatever backend you choose to target.
And this problem, I think, is to me the one major downside of async-await
in almost all the languages that I've seen it in,
where it relies almost entirely on the continuation-passing-style model,
which makes it a bad fit for execution environments that
do not support or do not efficiently support running the next piece of work directly from
that execution environment. Be sure to check the show notes, either in your podcast app or
at adspthepodcast.com, for links to anything we mentioned in today's episode, as well as a link to
a GitHub discussion where you can leave comments, thoughts and questions.
Thanks for listening. We hope you enjoy it and have a great day.
Low quality, high quantity. That is the tagline of our podcast.
It's not the tagline. Our tagline is chaos with sprinkles of information.