Algorithms + Data Structures = Programs - Episode 231: C++26 Senders and Receivers (+ flat_map)
Episode Date: April 25, 2025
In this episode, Conor and Ben chat about C++26 Senders and Receivers, flat_map and more.
Link to Episode 231 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: Twitter | BlueSky | Mastodon
Ben Deane: Twitter | BlueSky
Show Notes
Date Generated: 2025-04-09
Date Released: 2025-04-25
ArrayCast Episode 103: Julia - an Array Language
P2300R10 - std::execution
C++26 Senders and Receivers
C++ std::optional::and_then
Haskell join
The Mother of all Monads
Chains: Exploration of an alternative to Sender/Receiver | Sean Parent | NYC++
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
So the important parts, the three important things about senders and receivers. One, independent of
mechanism of concurrency. It runs with thread pools, it runs with coroutines, it runs with interrupts,
right? So that's one of the things that makes it very amenable to application either in the big
data space or in the tiny embedded space with interrupts. Very important. Second important
thing, it is asynchronous function composition, right? If you think about it like that, that's another very easy way, a good way in to think about it. Nothing happens when you declare senders. You can declare these things all day, you can compose them all day. No work happens at that point. It only happens when you say, kick off the work. And the third
important thing is what's called structured concurrency, which is to do with lifetimes.
Welcome to ADSP the podcast episode 231 recorded on April 9th, 2025.
My name is Conor, and today with my co-host Ben, we chat about senders and receivers, flat_map
and more.
My other topic was talking about senders and receivers, because you and Bryce often talk about parallelism and concurrency, and so I thought it might be interesting to talk about senders, which are, you know, in C++26. And I wanted to... Bryce made a claim that I want to rebut, or finesse maybe. So Bryce made a claim in a previous podcast that I have a quote
here. Today we live in a parallel world. Almost every hardware platform is parallel from the
smallest embedded devices to the largest supercomputers. I would finesse that by saying that embedded
devices have concurrency but not much parallelism, and therefore that almost all hardware platforms in the world today have no parallelism, because almost all hardware platforms are embedded devices. But it is certainly true to say they have concurrency, mostly in the form of interrupts. And so, you know, I thought maybe senders would be an interesting thing to talk
about, you know, in particular to talk about the two worlds that it addresses, you know,
You and Bryce, you work for NVIDIA, you live in the world of huge hardware, right?
Massive computation.
I live in a completely different world, which is tiny hardware, no computation,
but still concurrency. Still the problem of concurrency in the form of interrupts. And
senders, I think, addresses both worlds. For a long time, it was developed by folks firmly
with their feet in the big hardware world. But, you know, the work I'm doing with my team with Michael and with Luke, we are applying
it to the very, very small, to the embedded, to the world of interrupts, not just to the
world of huge, possibly heterogeneous hardware, large computation. So it's really interesting
for me to sort of contrast those two ideas and yet recognize
that I think senders has a good answer to both.
I don't know, what do you think?
Well, so I think we should start off this topic.
It's a huge topic.
by having you give an overview of what senders and receivers are. We've talked about it before,
and admittedly, I said the other day when I was recording an ArrayCast podcast, we were talking about Julia and the topic of multiple dispatch came up, and I said to the guest, you know, multiple dispatch is one of these things which I have a surface-level understanding of, but I've never really... like, the Julia folks talk about how it's the killer feature of the language, and I'm like, I clearly don't understand it comprehensively enough to understand what the big deal is. And then even after he explained it, I think my understanding improved, but it still wasn't like, ah, a hundred percent, could go
give a lightning talk on this. And I feel the same way about senders and receivers.
My background on it: I've perused, I can't remember which version of, the P2300 paper. I believe that was the last number.
I think we're up to at least R10.
Okay. At one point I read most of that paper, minus the code snippets. And the most helpful thing that I had was, because Eric Niebler works at NVIDIA...
Oh, sure.
At one point I grabbed an hour of his time and he actually walked me through, he was driving, I wasn't doing any typing, a bare-bones implementation.
And the thing that I've heard, and think I understand, is, you know, in Eric's words,
what iterators are for serial algorithms,
like in, you know, C++11, is what senders and receivers are for asynchronous algorithms. And I understood, you know, Eric's little presentation, and I understand it at a high level. But I'm definitely sitting in the same position as with, you know, Julia's multiple dispatch.
I don't feel like I could go give a five-minute lightning talk and be like, you know, here's the... what is it? ELI5, explain like I'm five.
I think I can probably give a good account of that in five or so minutes.
I feel like I'm up to that.
And we should mention, I think it's been mentioned on the podcast before, but you yourself are in a position to do that because you have implemented, basically, a version of it yourself.
That's right.
Which is the number one best way,
in my opinion to understand something comprehensively
is just to implement your own version.
I have. It's not a completely standard version, but I have certainly implemented senders and receivers in a code base which will be in production, and certainly to the same platonic ideals as exist in the standard, and to mostly the same interfaces.
Some of the details differ.
Awesome.
So over to you; help me and our listeners, because I guarantee you that more than 50%, maybe more than 75 or 90 percent, are in the same boat as me.
Okay, so senders and receivers.
Firstly, it's a framework for expressing asynchronous computation.
And the name is a bit of a misnomer.
It doesn't really give you the idea of what this is, let's say.
Really senders, forget about receivers for a minute.
Senders are asynchronous functions, right?
So when you call a regular function,
you give it arguments, you get a return value.
When you call a sender, you give it arguments,
you get a return value,
but you don't get that return value
through a return statement.
Instead, that return value, we might say,
goes onto the next sender as a continuation, right?
So because these are asynchronous functions, we're not exactly executing them, although there are some similarities. We're not really executing them like a regular function call stack
and returning values.
Instead, we're passing the values,
what would be the return values, onto the next continuation,
which is the next sender in the chain.
So at a very high level, that's what it is. So some things we can note: senders and receivers
is asynchronous computation embodied and it does not therefore tie itself to any particular
method of concurrency, right? So when you're writing an asynchronous system,
there are three things which you want to separate. One is the expression of the functionality,
the business logic if you like. That's what senders is concerned with. But the other two things are
firstly, the mechanism of concurrency. You can use thread pools or interrupts or, you know, a deferred queue or whatever. There are many mechanisms we can use to achieve concurrency, or coroutines, right? Senders doesn't care. You can use any of them. And the other thing is, what's the strategy of parallelism, right?
Which things do we actually want to occur
at the same time, probably parallel? And those three things, the business logic,
the mechanism of concurrency, and the strategy of parallelism, we can separate out. And senders
is concerned with expressing the asynchronous computation in a way that does not concern itself with
the mechanism of concurrency. A sender is an asynchronous function, right? When you make a sender, no work happens, right? You declare a sender and you give it, you know, something to do; like, you might give it a lambda. The simplest sender is called just, and you give it a value, and when you run that sender it will produce that value. In other words, it will send that value downstream to the next continuation, the next sender, rather than returning it like a function would.
So when you compose these senders, you get basically a representation of a computation, but nothing happens.
And then the strategy for parallelism and-
The strategy for parallelism comes in
in the ways that you compose senders together,
or sometimes in the algorithms that you use
to compose senders together.
But the mechanism of concurrency,
whether you use thread pools or coroutines or whatever,
that's completely independent.
So having declared our just sender, let's say we say, you know, just 42.
So when we run it, it's going to just send 42.
We can run that sender on... the senders and receivers framework abstracts this with the idea of a scheduler.
So a scheduler embodies the idea of where I'm going to run this.
You could have a scheduler that's tied to a thread pool, you could have a
scheduler that's tied to an interrupt, an ISR, you could have a scheduler that's
tied to coroutines, or whatever. A scheduler, you can ask it for a sender,
and the sender that corresponds to a scheduler represents the entry point
into the computation. It does nothing, but it just represents starting the computation on whatever
computation resource that scheduler represents. So the very simplest one actually is just
the scheduler that does a regular function call, right? That's a perfectly valid
scheduler and it's called, I think, the inline scheduler,
not to be confused with inlining functions,
but executes the computation inline in the current thread,
just as if you had called a regular function, right?
That's a perfectly valid scheduler.
And when you ask it for a sender,
it will give you back a sort of regular function.
But then you can compose more senders with that.
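To make the scheduler idea concrete, here is a minimal sketch in the style of P2300's std::execution; exact namespaces, headers, and adaptor names may differ between revisions and implementations, and sched stands in for whatever scheduler your context provides (thread pool, ISR-backed, or an inline scheduler). The then adaptor it uses is the one discussed next.

```cpp
// Sketch only: P2300-style std::execution. `sched` is a hypothetical scheduler
// supplied by whatever execution context you have (thread pool, ISR, inline, ...).
#include <execution>            // std::execution per P2300; header may vary today
namespace ex = std::execution;

auto make_work(ex::scheduler auto sched) {
    // schedule(sched) is the entry-point sender: it completes on sched's
    // execution context and sends no values; then() attaches a continuation.
    return ex::schedule(sched)
         | ex::then([] { return 42; });
    // Returning this sender runs nothing: it is only a description of work.
}
```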
So you can say then. Then is a sender algorithm that is fmap.
Right.
Similar to the and then on the optional API.
Yeah, exactly.
That Sy Brand and friends got into the standard.
Yeah.
So it receives what the sender before it sends, it transforms it, and it sends on
the result of the transform. It's like transform on vector, right? It's like and_then on optional,
it's working on the value, in this case, inside the sender chain. So this is senders implement the continuation monad just like vector implements the list
monad.
Yeah, the list monad and optional implements the maybe monad, right?
Right.
In the same way senders implement the continuation monad.
So you have the same kind of monadic ideas. They are not in C++ expressed in functional style,
but they are given names that are
cromulent with C++, right?
Right.
So we have just is like lifting a value into the monad, right?
It's what Haskell calls pure or return.
And when you say just 42, then you get 42
in your continuation, right? Being sent downstream. Then is like transform. We said let_value is a third algorithm that is like then, except it doesn't transform the value and return a new value;
it returns a new sender, right?
So it's like you can make a choice of which sender
to run next based on a runtime value
that came to you down the continuation.
It's monadic bind, if you're familiar with that.
And the purpose of monadic bind is to make
that runtime choice when you have that data dependency, when what you run next depends on what was sent to you.
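As a rough sketch of how just, then, and let_value compose, again using P2300-style std::execution names (which may be spelled slightly differently in a given implementation), and with the even/odd branching invented purely for illustration:

```cpp
// Sketch: composing senders. Nothing executes here; the returned sender is only
// a description of work until something like sync_wait consumes it later.
#include <execution>            // std::execution per P2300; header may vary today
namespace ex = std::execution;

auto pipeline() {
    return ex::just(42)                                   // lift a value ("pure"/"return")
         | ex::then([](int x) { return x + 1; })          // transform/fmap: 42 -> 43
         | ex::let_value([](int x) {                      // monadic bind: choose the next
               return x % 2 == 0 ? ex::just(x / 2)        // sender based on the runtime
                                 : ex::just(3 * x + 1);   // value that was sent downstream
           });
}
```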
What is the monadic bind in the like RxJS reactive world?
You mentioned concat earlier, it's map concat. It's transform reduce.
Map concat.
So imagine your transform produces a vector from each
element of your vector and then you concatenate them all together in your
reduction. Some languages call it flatMap; some call it mapcat. Okay, I mean,
I definitely am familiar with flat map but in my mind that's like you're taking
a nested structure, and then... this is when you need to Google Translate.
Let's look at the optional version. We mentioned and_then... okay, so it's and_then on optional, actually, because then is called transform on optional, and monadic bind, or let_value, is called and_then on optional.
Oh no.
So then is transform, and let_value is and_then.
Because the function you pass to and then
itself returns an optional.
That's the key.
So like you have a pipeline with optional
where each step of the pipeline returns an optional result.
Right.
Like you go fetch some HTML content.
Right.
You make a connection, the connection might fail.
You fetch the URL, that might fail.
Each step is basically a function from a sort of concrete thing,
the thing that's not optional to an optional thing, right?
And the way you chain them together is with and then.
Right, so I mean, in a very simple case of this optional example, you know, transform.
Takes a unary operation.
So say we've got an optional of an int.
It's got some value and we want to plus one.
You just... the lambda that you pass to transform.
Its signature is int to int,
because it's gonna do the unboxing of the optional
if it's there, and then calculate the value
and put it back, whereas, and then,
its signature is int to optional?
The function you passed, yeah.
Imagine if you wanted to do...
Or the function that you pass to and_then, yes, yes, sorry.
Imagine if you wanted to transform with taking a square root, right?
What are you going to do if there's a negative number inside your optional?
Yeah, yeah, yeah.
Your square root, not the standard square root, but you might choose to make square
root return an optional, right?
Because it can fail.
Right.
It might be undefined.
And so if you transform with that function,
the result you get would be an optional, optional int.
Right?
Because you've taken the int inside the optional
and you've transformed it.
It's something that now produces an optional int.
So you've got a doubly wrapped int in the optional.
Right.
Right, right, right.
So what you do instead of transform is and_then, right? Which effectively collapses down those two optionals so that you stay with just a singly wrapped optional.
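To pin the optional example down, here is a small sketch using C++23's std::optional monadic operations; safe_sqrt is a made-up helper for the square-root case discussed above, not something from the episode:

```cpp
// Sketch: transform vs. and_then on std::optional (C++23 monadic operations).
#include <cmath>
#include <optional>

// A square root that can fail: negative input yields an empty optional.
std::optional<double> safe_sqrt(double x) {
    if (x < 0.0) return std::nullopt;
    return std::sqrt(x);
}

void demo() {
    std::optional<double> value = 2.0;

    // transform re-wraps whatever the callable returns, so a fallible function
    // leaves us doubly wrapped: optional<optional<double>>.
    std::optional<std::optional<double>> nested = value.transform(safe_sqrt);

    // and_then expects the callable itself to return an optional and collapses
    // the two layers, so we stay singly wrapped.
    std::optional<double> flat = value.and_then(safe_sqrt);

    // A small pipeline: a fallible step chained with a plain value-to-value step.
    auto result = value.and_then(safe_sqrt)
                       .transform([](double r) { return r + 1.0; });
    (void)nested; (void)flat; (void)result;
}
```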
Interesting. This is, I don't know...
This is what monadic bind is.
Yeah. I've always been a little bit confused by flat map, but
I'm not sure if this is a eureka moment, but definitely I think my understanding of something
is clarifying here because there is a function or this flat map and what I am about to say,
could be wrong, and Ben will correct me if so, hopefully. Okay, like, you can think of... let's go to array land, you know, where you've got a matrix representable as a list of lists in a language like Python, or a vector of vectors in C++. And sometimes you want to turn that into a flat list and then maybe map, once again, plus one on that matrix. So you want to go from matrix to vector, a rank-one array,
and then do an operation.
And definitely in certain APIs there,
you can do this with something like a flat map
where, similar to the way that you can do
a transform reduce in one call in C++,
you could also do that with a transform
followed by another call to reduction.
And I know like in Rust and other APIs, they have the map, they have the fold, but they
also have algorithms called foldMap or mapFold or whatever.
And so is this the case where... so my guess actually is that flatMap is not going from, like, 2D to 1D. It's: you're in the list monad, a.k.a. a rank-one vector, a list, and you're doing some operation that could explode it into a list of lists, and flat map, right, keeps it as... like, sure, you go to a matrix, but instead of ending up with a list of lists, you're going to end up still with just a list, because everything has been flattened. Which, the eureka moment that I was talking about, is that I was conceptualizing going from, like, 2D down to 1D, but really it's 1D, and within your operation you could go up, which is similar to the optional of an optional if you have some failure.
Yeah, so you need to make sure that you only got one level of nesting, basically,
whether that's the list monad or the maybe monad.
And so I think that there's a bit of conflating.
I'm sure there is an operation similar to fold map that
is the flat map that actually goes from 2D to 1D.
But in this case, the flat map that we're
talking about, the monadic bind we're talking about,
you're at one level of nesting, and you want to avoid, in certain cases, going up to two, whether that's a list of lists or optional of optionals. Does that sound about right? And also, maybe flatMap in no language actually does this 2D-to-1D thing; maybe I've always just misunderstood flatMap and it never does that.
Well, so there are a couple of ways to get there. You can either... I mean, you can maybe fuse elements of it in the array languages.
And also you can generate the list of lists and then concatenate them.
Or you can sort of go the other side.
I think I'm thinking of it as like the other side of the square where you, I guess in the case of bind, it's effectively the same thing.
But it's map followed by join, if you like.
Join is the other monadic term that gets used here.
In Haskell, there's a function join, and map followed by join is bind.
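As a concrete, hand-rolled (non-standard) illustration of map-followed-by-join on vectors, here is a tiny flat_map sketch that keeps the result at one level of nesting; the word-to-letters usage is just a hypothetical example:

```cpp
// Sketch: flat_map on std::vector = map (each element -> a container of results)
// followed by join (concatenation), so we end with a flat vector, not a vector
// of vectors.
#include <string>
#include <vector>

template <class T, class F>
auto flat_map(std::vector<T> const& xs, F f) {
    using Out = decltype(f(xs.front()));                    // e.g. std::vector<char>
    Out result{};
    for (auto const& x : xs) {
        auto ys = f(x);                                     // map: one element -> many
        result.insert(result.end(), ys.begin(), ys.end());  // join: flatten as we go
    }
    return result;
}

// Hypothetical usage: each word expands to its letters, and the result is one
// flat vector {'a','b','c','d'} rather than a vector of vectors.
// auto letters = flat_map(std::vector<std::string>{"ab", "cd"},
//     [](std::string const& s) { return std::vector<char>(s.begin(), s.end()); });
```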
Okay, well so let's have a little tangent fun. It's a...
Yeah, we've come off of senders and receivers and now we're talking about... So I've got another example,
another maybe good example which might help. But before we do that though, let's have this
tangent fun, which is something that I've never really taken time to think about, and I apologize to the listener for being about to lose you. But this is the kind of stuff that I know Ben spends time thinking about, and I should have. So, join in Haskell corresponds to the W combinator, which... it sounds highfalutin, what I just said, but it's actually quite simple. It takes a binary... in the space of functions, which I actually don't know... what is it? Is there a function monad? Is that the function monad, or function composition?
That's a type of monad, yeah.
And so in array languages it's super, super simple: it takes a binary operation and it turns that into a unary operation by duplicating the argument.
And that's actually, in most array languages,
it is called duplicate, even though it
has a glyph or a symbol.
Sometimes it's called self.
And I think there's one other name for it.
But the idea being that if you want to square something, that is just the W combinator applied
to multiplication.
Yeah.
So now think of your binary operation as a unary operation nested inside another unary operation, because in lambda calculus that's how it might be expressed, right? You would bind one argument, getting back a partially applied function, and then you would bind the second argument, and that is a binary operation.
Here I am thinking this is going to be like a seven-minute tangent, and then basically it took you 10 seconds, and like three seconds into your explanation I was like, oh wait, that's just... but that's a weird... I mean, so it's a function inside a function, the same way as an optional inside an optional or a vector inside a vector of vectors. That is amazing. I mean, I got three seconds into your explanation and it clicked.
And I've never understood how, I guess because I've never taken time to think about it because
it's actually pretty straightforward, that the equivalent of copying your argument to
a binary function is just really nesting, well actually no, now that I say that.
I mean, it's not invoking the binary function. It's just, like, what would you call it, like a partial application.
Yeah, yeah. And I guess that all works because everything is technically, like, a unary function.
Right. I mean, partial application is built in. Could you build... I mean, I guess we have this on the optional interface, but, like, how would you do this for functions? Would you, in C++, do this with, like, bind_front, just call bind_front twice?
Yeah, I guess so.
In C++, we don't have very good tools for function composition, but you could build
similar things.
Bind_front might be part of it.
For historical and maybe for efficiency reasons, we don't tend to express partial application very much. It's a fairly niche thing, but it works out mathematically to the same thing as n-ary functions.
Yeah, I wonder what the disconnect is in trying to spell this. Because, like... does this rely, in a functional language like Haskell, does it rely on currying? Like, how are you able to apply this, like, nest yourself within yourself, and then you end up with a unary function? Like, there must be a way to do this in C++ with a function. Like, if you take std::plus{}, the function object... I mean, you definitely can write a lambda called W that does the copying, but is there...?
Yeah, yeah. And you can write a curry function, a higher-order function, in C++. A while ago this was a topic of discussion at my meetup, where we sort of worked it out, and I think Jason put it in one of his talks.
Yes, I recall seeing this a while back. Anyways, all right, I'll think about the C++ side of this, but definitely that makes 98% sense, minus the C++ piece.
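On the "lambda called W" point, here is one way such a sketch could look in C++: a W combinator (duplicate/self) that turns a binary callable into a unary one by passing the same argument twice, so that square is just W applied to multiplication:

```cpp
// Sketch: the W combinator ("duplicate"/"self") as a C++ lambda.
#include <functional>

inline constexpr auto W = [](auto f) {
    return [f](auto x) { return f(x, x); };   // feed the single argument in twice
};

// Square is W applied to multiplication, as in the array-language example.
inline constexpr auto square = W(std::multiplies<>{});

static_assert(square(7) == 49);
```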
So we strayed a little bit from senders.
Actually, the other example, the other intuition I was going to give for join, or mapcat, was:
Imagine writing a chess program, right?
So imagine a very simple kind of,
I mean chess programs these days
are much more involved obviously.
But imagine you're in 1985 writing a naive chess program.
You're fundamentally what you're going to do is
look at the position,
generate all the moves that could be made,
right? From those moves, from those new board positions, again, generate more moves. You
know, you're generating a tree of board positions based on moves and then you're scoring them,
obviously. But just considering the generation, you're doing each, you know, you start with your
one board position, the current one, and you produce N new board positions based on the moves that can be made.
Each one of those N goes on to produce M more, right, based on moves that can be made from
that position.
You want to collapse all these down. You know, in the process of scoring them, you're also selecting the new best board position, right? And so it's that kind of operation, where you are generating... each board position can generate N. So that's your function from a value to, like, a list of values, right? You're in the list monad here.
Yes, that makes complete sense. And to clarify, from my understanding, you're
reducing this list; the flattening happens at every step. You're not doing, like, the...
Yeah, yeah. What did AlphaGo do? It was MCTS, Monte Carlo tree search. That one goes out N steps.
And then after a certain number of steps, it chooses the best path and then repeats.
Whereas this one, your tree search is-
Well, you want to flatten it down.
I think one of Bartosz's talks that he gave at C++Now, maybe 10 years ago, had this.
He talked about the n queens problem, which is very similar.
Right, right.
So you're trying to, you put a queen down
and then that constrains the space.
There are only so many spaces
you can put another queen down.
But there is more than one place, right?
And so you are searching this tree of boards,
of valid positions,
and he uses a monadic approach to do that. Interesting. Well, I'll see if I can find that talk.
I definitely know I haven't seen all of Bartosz's C++Now talks.
I've seen a couple, but I don't recall seeing that one.
So, I mean, to bring it back to senders and receivers, how does this
we now go to the continuation...?
I feel now I have not given the five-minute...
I was going to say, you mentioned the continuation monad after the first five minutes.
So I think probably the first five minutes you were good, then you said that and then
I was like, ah.
So the important parts, the three important things about senders and receivers.
One, independent of mechanism of concurrency.
It runs with thread pools, it runs with coroutines, it runs with interrupts, right?
So that's one of the things that makes it very amenable to application, either in the
big data space or in the tiny embedded space with interrupts.
Very important.
Second important thing, it is asynchronous function composition, right?
If you think about it like that, that's another very easy way, a good way in to think
about it. Nothing happens when you declare senders. You can declare these things all day.
You can compose them all day. No work happens at that point. It only happens when you say,
kick off the work, wait for the calculation to finish. Right?
Right.
So senders are descriptions of work. They aren't kicking off the work. The work kicks off when you say something like sync_wait, which is an algorithm that consumes a sender, or, remembering that a sender is potentially a composition of senders, right? These things are infinitely composable. So when we say a sender, we might mean a single just, or we might mean a whole chain of: just produce this, then do this, then something else, right? Senders are composable.
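Here is a complete little program along those lines; it is a sketch against NVIDIA's stdexec reference implementation of P2300 (in C++26 the names are intended to live in std::execution and may differ slightly), with its static_thread_pool as the concrete scheduler:

```cpp
// Sketch: senders are descriptions of work; sync_wait is what kicks it off.
// Assumes the stdexec reference implementation of P2300.
#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cstdio>
#include <utility>

int main() {
    exec::static_thread_pool pool{2};            // the mechanism of concurrency
    auto sched = pool.get_scheduler();

    // Only a description so far: start on the pool, produce 42, then add one.
    auto work = stdexec::schedule(sched)
              | stdexec::then([] { return 42; })
              | stdexec::then([](int x) { return x + 1; });

    // sync_wait connects, starts, and blocks for the value channel's result.
    auto [answer] = stdexec::sync_wait(std::move(work)).value();
    std::printf("%d\n", answer);                 // prints 43
}
```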
And the third important thing is what's called structured concurrency, which is to do with lifetimes. Right. So when you're in regular function call land, synchronous function call land, you have a
stack and the lifetimes of your variables are implicit because of that function call
stack.
Right.
So when you call a function, everything... the parameters you pass in, like references to variables on the stack, those variables stay alive during that function call, because it's only when the function returns that the current stack frame is popped. The stack ensures that the lifetimes of variables and arguments cover the lifetimes of the functions you call them with. Right? And there's a similar
thing happening with senders, except that it's not a stack. These things are called
operation states. So when you connect a sender to a receiver, the resulting thing is the operation
state. I'll talk about receivers in a sec. But the operation state is that lifetime object.
That's where the state lives for the computation.
That's the object whose lifetime covers the computation.
And because senders are composed together
in a very similar way, operation states get composed together
and they form this kind of what's been described
as an onion of states, where the outer states wrap the inner states, right? And so the lifetimes
of the variables persist and it's, you know, the lifetime of the variables in the outer
skin and then in the skin inside that and the skin inside that all the way down to the
center of the onion. Similar to a function call stack, where you can think about variables living at levels of the stack, here you can think about variables living on layers of the onion.
Okay, I think that makes sense.
Yeah. And the first sender that you run is the innermost part of the onion, right?
When that completes, the next sender in the chain is the next level of onion skin, and
the computation proceeds all the way out to the outer layer.
When the final sender completes, the lifetime of the whole object is over.
So that's how, when people talk about structured concurrency, that's what they mean.
It's... this framework takes care of lifetimes in a way that can be reasoned about, similar to the way we reason about lifetimes on the stack when we call functions.
Right. It is interesting, you know, in this conversation, you know, for better or for
worse, unfortunately, I've probably steered this in a less digestible direction, because I got excited about understanding join. But there is this... like, I don't know if isomorphism is the correct word, but when you think about, you know, the list monad versus the, we'll call it function monad, even though that might not be what it's called,
and then the continuation monad, which we're talking about.
And they call each of these operations different things.
There really is like a one-to-one equivalence almost.
You can think of senders and receivers as giving you
the ability to represent an asynchronous graph
of computation.
And you can think of it in a way,
if you take a bunch of functions that take parentheses and multiple arguments
and you nest them, like Russian dolls, calling these in a different order,
that's a way of representing a graph of computation, right?
Like you have to work your way down
to the most inner functions, evaluate those,
pass it back up.
The same way with combinatorial logic and combinators,
you can chain operations together,
which is also gonna represent like this.
And my guess is that with this senders and receivers, because it's the continuation monad, you could actually use this to build versions of all these different... like you said, because it's separated in the-
Yes, you can.
Not that that's the way to teach this or explain this,
which is why I said for better or for worse. But we can revisit this topic. And it's true, the continuation monad is very powerful and can express other monads. In fact, there's a blog post out there where someone calls it, I think, the mother of all monads.
Yeah, I'll find and link that blog.
We can also observe of course that
Asynchronous computation is a superset of synchronous computation.
In other words, we can express synchronous computation in an asynchronous way.
We can't do it the other way around, but you can always collapse an asynchronous computation into a synchronous one by running it on the aforementioned inline scheduler that just does function calls.
And so there's a lot of machinery in senders, but if you actually want to run it synchronously, it all boils down to nothing less efficient than function calls.
Right.
And like everything in C++, if you run it asynchronously, the idea is that it runs exactly as efficiently as if you
had hand-coded everything.
Yeah.
At least as efficiently.
It makes me think of some of the things
that I look at for work: JAX, which is a library out of Google, and CuPy, which is a GPU-accelerated version of NumPy.
And both of these have decorators in Python
that you can use to basically JIT compile certain functions.
And internally, they will once again
create a graph of computation based on the function calls,
and then they see what they can optimize.
And so you're building a graph, but in the simplest case,
you can just have a function that linearly calls one thing and then another thing and then another thing.
And then your graph is just a linked list, right? Which is just serial computation. I
mean, technically a linked list is a graph. It's just a very simple one, right?
Yeah. Yeah. And if you run it on an inline scheduler, it more or less literally becomes
a regular function call stack.
Yeah.
So the other thing we can... so, I haven't mentioned receivers yet. Receivers: the proposal is often called senders and receivers.
The important thing really is the senders.
They're what express the computation.
The receivers mostly don't occur in user code.
They're mostly used for the glue between the senders that algorithms use to plug these things together. One of the important things that Senders gives us
is not just the happy path called the value channel, right?
If everything goes right, you're on the happy path.
You're in the value channel, everything's succeeding.
But also, there are two other channels,
the error channel, right?
When something goes wrong,
you want to be able to handle an error. You want to be able to potentially short-circuit the computation, error out, and handle it a
different way. And the cancellation channel or the stopped channel, right? You want to
be able to tell from the outside this thing to stop because something else changed in
the world and you're no longer interested in waiting for the result, right? Events have
overcome it. And these are really important. And these are also built into the
sender's framework. So every sender can complete in those three different ways with a value,
with an error, or by having stopped. And every receiver handles those three channels. And so
this is built into the framework of asynchrony with senders:
the capability of canceling the work or handling an error
as well as just staying on the happy path.
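A rough sketch of the three completion channels, again using P2300-style adaptor names (upon_error and upon_stopped, as I understand them from the proposal; a given implementation may spell things differently), and assuming the incoming sender sends a single int and reports errors as std::exception_ptr:

```cpp
// Sketch: handling the value, error, and stopped channels of a sender chain.
#include <exception>
#include <execution>            // std::execution per P2300; header may vary today
#include <utility>
namespace ex = std::execution;

// Assumes `work` sends a single int and uses std::exception_ptr for errors.
auto guarded(ex::sender auto work) {
    return std::move(work)
         // value channel: the happy path
         | ex::then([](int x) { return x * 2; })
         // error channel: translate an error back into a value (or rethrow)
         | ex::upon_error([](std::exception_ptr) { return -1; })
         // stopped channel: someone upstream requested cancellation
         | ex::upon_stopped([] { return 0; });
}
```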
Now, because senders are completely agnostic
about the mechanism of concurrency,
the cancellation has to be cooperative.
There is no stop the thread or anything like that because senders don't know anything about
threads or coroutines or whatever else you're using for your mechanism of concurrency.
So you don't get to preempt them or anything like that.
What you get to do is request that they stop, right?
And they have to take heed of that request to be good citizens, which, you know, they
will do because at the end of the day, you will be writing the business logic.
Right.
Is this one of, I mean, my memory is failing me now.
I remember when Bryce and I were talking to Sean Parent. He gave a talk at, I want to say, yeah, it was an Adobe office in New York, for a meetup, and he presented... I think since then he's probably presented it at a conference. And he was pointing out a potential flaw, quote-unquote flaw, you know, a trade-off in the design of senders and receivers, and that was what Chains was addressing.
Was this it or was that something else?
Chains... I did see Sean's talk at C++Now, but I don't remember all the details. But yeah, I think Chains is basically an alternative way of expressing
asynchronous function computation.
It's not so very different from senders and receivers, but it's a little bit different in the details.
And yeah, it's a trade off, I think. It's a slightly different design.
I think in particular, one of the things perhaps that, and I'm sort of talking just based on remembering now,
there's a design choice to be made about external control of
lifetime. Like if I delete the object, am I allowed to delete the object? Should that result in a
request for it to stop? Should that result in unwinding the chain of computation? There are
different choices that can be made there. And I think chains makes a different choice than
senders in that regard.
Yeah, I'll have to go back and rewatch that.
Or I mean, when we bring Sean back on,
because we already mentioned him two episodes ago,
we can just ask him directly.
So yeah, I mean, I definitely, I mean, I still feel,
I feel like I have a better understanding, but, like, when you start talking about the onion analogy of, you know, wrapping the state, the operation state, you know, my head immediately goes to, like,
how would I implement this?
And I feel like I would still,
it would take me a while to go and implement.
I don't know, every once in a while,
it's like I have this better surface-level understanding, but still, could I go and implement this thing, like, right now, snap of the fingers? And I probably could, but I would struggle, whereas, you know, something
like a combinator model, I can go implement that with my eyes closed because I...
Sure, sure.
And I wonder why that is though, like what is...
Eric can sit down and implement a simple version of senders and receivers.
Exactly, with his eyes closed.
A few hundred lines of code. Yeah. I have to say, in my implementation, I found, you know, there are some C++-isms that
you have to kind of dance around a little bit, but the design of it is really very simple
under the hood.
And once you grok the design and once you just sort of get the hang of a few different
implementation patterns, the implementation is really nice.
It just sort of falls out very nicely.
It's not difficult at all in general to add new algorithms,
to add new ways of combining senders, that kind of thing.
It all fits together very nicely in my experience.
Yeah, I know. That's one of the things Eric has said when I've chatted with him: he shows this kind of base set of algorithms and senders. But similar to how Stepanov was always saying, you know, this is just the initial set, there should be multiples more, there should always be the counted versions, the _n versions, and there should be tons more, and don't just stop at what the standard library gives you. Eric has said the same thing in designing these and proposing them: this is just the core set, but it should give you the ability to write a world of asynchronous
algorithms for all your different types of computation that you want to do.
Yeah, yeah.
Which is exciting that we're getting it in the standard because then, hypothetically
speaking, down the road in 10 years there should be a bunch of asynchronous algorithms that you'll
have access to readily available in your standard library near you. Or 15 years, however long it
takes... it can take time. And so, have you... or, I'm not sure if we want to kick this, because we've already gone past time a while. Maybe we should do senders and receivers part two,
and you can talk about some of the algorithms
that you've ended up designing.
Sure.
Because that's something that I have spent
very little time working with.
Yeah, yeah.
We can put that on ice for today
and talk about it in the future.
It would be great if we could talk to Eric as well.
Yeah, I'll reach out.
I know he's a busy guy.
He's juggling several things right now.
Committee work, the internal implementation
of std::execution, and now there's a whole other C++-equivalent-of-Python initiative that he's spearheading as well.
So he's a very busy guy.
But I'm sure if I mention that we want to chat about senders and receivers... I know he's given talks at CppCon, and it's been one of his life missions to make this a more understandable topic, because it does for some reason seem to be a thing that takes a couple of times, including for me, for people to hear it. And I know that there's more and more folks; you are one of them. Is it Dietmar?
Dietmar.
And there's a few other folks. At first it was just Eric, and now I know that there's a handful.
Yeah, I think Rob Leahy has been doing some stuff as well.
Yeah, past guests on the podcast.
So it is becoming a more, you know, well-understood topic.
So, part one in the books; part two, all right, maybe with Eric, TBD. Yeah. All right, well, I mean, I know we had this scheduled for an hour, 30 minutes, and we're a bit over time. Yeah, I should just start booking these for two hours, because I don't think we've had a single conversation where we... And you say, in general, I book off more time, like, you know, in your head at least.
Well, we end up recording what, a month's worth of podcasts whenever we get together,
something like that.
I think this one will probably be three, even though we have enough recorded time, but I feel like splitting up the S&R one into two, you know, that'd be doing our listeners a disservice, because
I try my best because I get enough complaints about the slicing halfway through a topic
and then they're like, why do you do that? And a lot of folks I've realized they actually
wait for like all four parts and then they'll listen like a month later all at once.
And then they'll listen together?
Which isn't a bad strategy. I prefer, you know, as soon as things are out
to listen to them, but yes,
if you just finished listening to this,
I took the time to edit the extra 20 minutes.
But yeah, we'll do this again.
And yeah, I'll reach out to Eric
and see if we can get him on.
And to the listener, till next time.
Be sure to check these show notes
either in your podcast app or at adspthepodcast.com for links to anything we mentioned in today's episode.