Algorithms + Data Structures = Programs - Episode 54: std::partition in BQN
Episode Date: December 3, 2021
In this episode, Bryce and Conor live code a BQN solution to the partition algorithm!
Date Recorded: 2021-11-23
Date Released: 2021-12-03

Links:
- LeetCode Problem
- BQN Programming Language
- C++ std::partition
- BQN Partition YouTube Explanation Video
- C++ std::copy_if
- C++ thrust::copy_if (stencil overload)
- C++ thrust::identity
- Haskell flip aka C combinator
- APL / (compress)
- APL ⍨ (commute) aka C combinator
- APL ⍥ (over) aka Psi combinator
- APL fork aka S' combinator
- C++ thrust::partition
- C++ thrust::count_if
- C++ thrust::sort
- C++ thrust::stable_sort
- ADSP Episode 51: Efficiency vs Speed
- C++ Seasoning by Sean Parent
- C++ thrust::make_transform_iterator
- C++ thrust::reduce

Intro Song Info:
- Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
- Creative Commons — Attribution 3.0 Unported — CC BY 3.0
- Free Download / Stream: http://bit.ly/l-miss-you
- Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
I put it to Bryce and I said, "Bryce, do you know why it's so special?"
I think I got it pretty quickly, actually.
Oh man, so now we're going to do the thing where he said, "I'm pretty sure I got it pretty quickly," and then I'm going to go back to what it actually was, because I'm pretty sure it took a little bit of hand-holding.
Welcome to ADSP: The Podcast, episode 54, recorded on November 23rd, 2021. My name is Conor, and today, with my co-host Bryce, we program partition live in BQN and then talk about why it's so awesome.
All righty. So now, we might have missed it.
The good thing, Conor, though, is that you've chosen to solve this problem in... this is BQN?
Yeah, BQN.
In BQN, which means that, yes, last time we lost this great, exciting discovery of me sort of figuring it out as you were showing me how the language worked and how to solve this problem. But it's so esoteric that you're going to get to rediscover that all over again.
We'll see. I'll actually be interested to compare how quickly you pick up what you learned and potentially completely forgot last time.
So what we're doing for the next hopefully less than 30 minutes is we're taking a look at this partition problem that we explained in last episode.
And we are going to be moving, I believe it's the odd numbers to the front and the even numbers to the back, which Bryce astutely pointed out is just a single algorithm in C++,
which is std partition.
And we're going to solve that in an array language.
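For reference, this is roughly what the single-algorithm C++ solution looks like (a minimal sketch; the input values are just an example):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4};
    // Moves elements satisfying the predicate (the odds) to the front,
    // followed by those that don't (the evens); order within each group
    // is unspecified (std::stable_partition preserves it).
    std::partition(v.begin(), v.end(), [](int x) { return x % 2 != 0; });
    for (int x : v) std::cout << x << ' ';
}
```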
So the first thing that we need to do, and as we said, Bryce and I have done this before, but we're still going to try to go at the same speed as we did last time, is create two different masks, a.k.a. Boolean masks. A Boolean mask in an array language is a sequence of ones and zeros that correspond to, in our case, the even elements and the odd elements. So if you do modulus two, and this pipe is the modulus operator... all right, and in BQN... so I guess I should explain: APL is the OG array language, and BQN is the newest array language on the block.
If you take the A and add one, you get B.
If you take the P and add one, you get Q.
And if you take the L and add two, you get N.
And that's because Marshall messed up when he was rotating the letters by one.
Wait, no, is that really the story?
Yeah.
You've got to be kidding. The name of the language is an off-by-one error?
Yeah, yeah. And there was a backronym that I think is something like Big Questions Notation. You know, APL started as a notation for mathematics, and BQN is the next evolution of it that is used for answering big questions. And once the backronym had been set, it was like, oh, it's too late to go and fix it. And BQN does sound a little bit better than BQM, so he just kept it like that.
Yes. I'm not going to say what BQM sounds like.
BQM? I don't know. Sounds like something dirty. BQN, though, I feel like that's...
Get your head out of the gutter, man. It also can be pronounced... so APL sort of has an alternate pronunciation as "apple", and BQN has an alternate pronunciation of "bacon".
Okay, anyways. I'm screen sharing with Bryce right now, so we'll talk you through this. The first thing that I did was to type the pipe, the modulus operator, and then one, two, three, four. Is there a way to record... ah, fine, forget it. All right, all right, all right. I will release a YouTube video this Friday, or maybe next Friday, which is when this is coming out.
You said that after the last time we recorded this episode. But then we lost all the stuff. We lost all the footage.
So it's supposed to go out the day that this also goes out.
But in APL, in order to create an array, you just put space-separated numbers. In BQN, you don't have that; you have to put this little sort of smiley-face underbar to create an array. So if you do two modulus one, two, three, four, you get one, zero, one, zero, where the ones correspond to the odd numbers. Now, in order to basically filter these out and just copy the odd elements, you want to use a fork.
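In C++ terms, building that Boolean mask explicitly might look something like this sketch (in the BQN session it's just the one expression, two modulus the array):

```cpp
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> values{1, 2, 3, 4};
    std::vector<int> mask(values.size());
    // Elementwise two-modulus produces 1 0 1 0: a Boolean mask
    // marking the odd elements.
    std::transform(values.begin(), values.end(), mask.begin(),
                   [](int x) { return x % 2; });
    // mask == {1, 0, 1, 0}
}
```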
So let's explain this. This is an S' combinator; we explained this last time. You have three functions in parentheses: A, B, and C. A and C are unary functions, and B is a binary function, where the evaluation, with the argument x, is B of (A of x) comma (C of x). So basically you pass your argument x to A and C, evaluate those, and then pass the results of those to your binary function B.
So our first unary function here is going to be the composition, or the binding, of two to the modulus operator. Modulus is a binary function, but if we compose this... I think that's the right one... we are going to get our one, zero, and I need to comment out these little explanation things here. And then B and C. For C, we want the identity function. And then in between the identity function and our partially applied two-modulus, we want to put, basically, filter. And if we get this... this does not work, because, I think... so, so the idea is that filter is a function that takes two sequences: the first argument is this Boolean mask, and the second sequence is the elements. And then for each element in the Boolean mask sequence that's true, it keeps the corresponding element from the second sequence.
Correct. Exactly, yeah.
And so initially, my partially applied two-modulus wasn't working because I was using a composition operator called atop, and I needed to use one called before to partially apply. But that's a detail. And so, thinking about this in C++ terms, I think the equivalent would be something like a... I was going to say a copy_if, but I think that's actually not the best way to mentally model this. And I say that because copy_if is already a composition of two different things, of copying and filtering. You could do this with the copy_if, but I think it's probably something like a copy with some range piped in. I don't know, what do you think?
I think copy_if is exactly what this is.
Yes.
Technically it's all bundled into one algorithm, but what we're doing here, the creation of a Boolean mask that is then used to copy elements, is not actually possible with the C++ standard algorithms.
No, no, with like a trans-... that's what I'm thinking. You know how all of the range algorithms, they have that...
Projection?
Yeah, the projection.
Yeah. So the thing is, what the projection will do is modify your sequence before you copy it, which is not exactly what we want.
However, there is an algorithm that exists where you can provide it... I won't give you the name of it, because it's a dead giveaway if I do, but you can provide it with an alternate sequence that represents the Booleans, which is exactly what we're doing here. So you basically have a copy_if algorithm where you provide a range that you're going to be conditionally copying elements from. But instead of using that sequence with a predicate to determine "do I want to copy these," you provide it a secondary sequence and a predicate, and based on the evaluation of that unary predicate on the alternate sequence, you copy from the original range. And that alternate sequence has a name, and that'll be the hint that I give you. If I tell you that, you'll probably instantly know the algorithm. Although... the algorithm is copy_if. There's another huge hint. It's just not the C++ standard copy_if.
Oh wait, this isn't a C++ standard algorithm?
It is a C++ algorithm, just not a standard algorithm. It's not in the algorithm header.
Oh, it's a numeric algorithm?
It's not.
That would also be a standard algorithm.
Okay.
It's not in the standard library.
It's not provided by the standard library.
Is it provided by Thrust?
It is provided by Thrust.
This podcast is just constantly Conor teaching me stuff about the library that I'm supposed to be in charge of. Although that's not actually true these days; there's another person on my team, Allison, who's taken over maintaining Thrust, so I'm no longer the glorious leader of Thrust, which is great. You know, you should always just delegate your job off to other people. But that does not mean that I should not know these things.
Yeah. So when I discovered the stencil overload of copy_if... it is such an extremely useful algorithm, because there are so many situations where exactly what that overload provides is what you want to do: you want to conditionally copy something based on some other sequence. And that alternate sequence is called the stencil.
Yeah. Which is actually not the best name, because it's sort of close to "tile," and that's something completely different.
Yeah. The stencil overloads exist for a ton of algorithms in Thrust, basically. In the C++ standard algorithms, you usually only end up with, we'll call them, sets of one or two iterators: one that provides you the input range and potentially one that provides you the output range. But in Thrust, you can end up with three sets. And I say "sets" because in a lot of these algorithms, as soon as you specify two iterators, begin and end, for one range, you only need to specify the first one for the other two ranges.
Right. Because it's going to assume that the size of all of the other ranges is the same as the size of the first, or something like that, or there are other assumptions about the size. For something like a copy_if, you don't know how many times you're going to write to the output, but you know that it's going to be at most N, for example.
So this algorithm is perfect.
So you're going to end up with a copy_if where your original sequence is the one, two, three, four. Your secondary sequence... I would do it with a thrust::make_transform_iterator that's basically doing what we're doing here, the two-modulus, which is then going to, sort of on demand, generate your ones and zeros, your trues and falses. So the question is: what's the unary predicate that you're going to pass as the final argument, that'll be applied to your...
Wait, wait, okay. So you have the copy_if with the stencil, and one of them is this transform iterator sequence. That makes sense. And then you're asking... what are you asking?
So there's a final unary predicate that is applied to your stencil.
Why? Well, so... as I'm asking this, I realize the question doesn't make sense. It would just be the identity.
Exactly. So thrust::identity is what you would use. So that's the thing: trying to model what we're doing here in BQN, that copy_if, you pass the sequence one, two, three, four, pass the transform iterator that does the two-modulus, and thrust::identity. That's what maps to this. Because the transform iterator that does the two-modulus is the two composed with pipe, the filter is our copy_if, and identity is thrust::identity. However, that being said, in C++ you wouldn't really do that. You don't actually need the stencil overload; you would just use the overload where the unary predicate is doing the two-modulus. I was just talking about what is the truest way to model what we're doing here in C++.
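Here's a sketch of that mapping using Thrust's stencil overload of copy_if (the stencil overload, thrust::identity, and thrust::make_transform_iterator all exist in Thrust; the values are just the episode's running example):

```cpp
#include <thrust/copy.h>
#include <thrust/functional.h>
#include <thrust/host_vector.h>
#include <thrust/iterator/transform_iterator.h>

struct two_modulus {
    __host__ __device__ int operator()(int x) const { return x % 2; }
};

int main() {
    thrust::host_vector<int> values(4);
    values[0] = 1; values[1] = 2; values[2] = 3; values[3] = 4;
    thrust::host_vector<int> odds(2);

    // The stencil lazily generates the 1 0 1 0 mask on demand.
    auto stencil = thrust::make_transform_iterator(values.begin(), two_modulus{});

    // Stencil overload: values[i] is copied when pred(stencil[i]) is
    // true; here the predicate is just identity.
    thrust::copy_if(values.begin(), values.end(), stencil,
                    odds.begin(), thrust::identity<int>());
    // odds == {1, 3}
}
```

The plain overload Conor mentions would simply pass two_modulus{} as the predicate and skip the transform iterator entirely.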
But the identity function here is different than the identity function in the copy_if, because in the Thrust stenciled copy_if, it applies this function to the stencil sequence. In some ways it sort of reminds me of the by-key algorithms. Actually, that seems a little silly to me. It seems like we should have a stenciled copy_if overload that does not take a predicate function, where it just assumes that the mask sequence is convertible to bool.
But that's not a super common use case, so, like, I get it.
I mean, it comes up often. Probably half the times I've used the copy_if stencil overload, it's been with thrust::identity as the unary predicate.
My next thought was, well, just make it the default. But then I remembered, I don't really like default arguments. So, yeah, I would probably prefer just the overload.
Yeah.
All right. We should move on. We've got 20 minutes left here.
So we've got our ones and zeros, and now we've used it with an S' combinator where the binary operation is the filter, or compress. We're passing the mask and identity, which gives us basically our one, zero, one, zero, with the binary operation filter in between, and then one, two, three, four, and it leaves you with one and three. So in order to get our odd numbers at the front and our even numbers at the back, we basically just want to copy this little fork here and put them next to each other. And so I think I asked you last time: what do you think this evaluates to? And it's actually not a comma; it is this little sideways, backwards-S-looking thing. This is catenate; it joins two lists together. So what do you think both of these evaluate to?
This will just give you 1, 3, 1, 3, right?
Correct.
Yeah.
So if you copy this and catenate the result of each of them, you just get 1, 3, 1, 3. So what we want to do, basically, is in this second fork, which is currently equal to our first fork on the left, we want to change our ones to zeros and zeros to ones. And there's a function in BQN and in APL called not. And I think if we just put this in parentheses and evaluate it, we should get that. So now we have odd numbers at the front, even numbers at the back. I can't even remember what the problem originally stated; I think it might've been the other way around, but it doesn't really matter. It's simple enough to change it there. Well, I was pretty happy with this the first time around. I seem to recall that you were...
Yeah, you've got to put the pipe around...
Come on, Conor.
Even I could see that was going to be a problem.
So, yeah, you were happy at this point. And I think you had even remarked... maybe we'll go find previous Bryce, because we do have his audio, and he says: this actually, it looks noisy, but it actually does make sense.
Yeah. Insert Bryce here.
You were like, too many parentheses.
Correct, yeah. When I look at this solution, there are one, two, three, four... times two is eight different parentheses. And more importantly, this many parentheses tells me that this is not the ideal APL or BQN solution. But on top of that, we have so much duplication here. We basically have the two composed with modulus, slash, identity; we have that twice. And the only thing that's different between the two is that we're negating the two composed with modulus. So what can we do to make this better?
So let us see if we can sort of extract out the identity and the replicate, or the compress, and the compose, and go from there. I can't actually remember how I did this in the previous video. I had prepared last time to make sure I wasn't meandering through this, but now we're going to meander through this. So let's see... we need to put this in parentheses. So we've sort of factored out the two-compose to the left and then the identity to the right, and for our binary function in between these two things, we're just catenating for the moment.
So what we have here is one, zero, one, zero, and then one, two, three, four, all in the same array.
And so now we can turn this inner fork, which is currently not a fork... if we add an A and a C here, what we want to do is basically turn each of these, I believe, into forks. So let's just try and do that simply, and we'll use identity. And actually, this is where the explanation of a binary fork comes in. Previously I explained that when you have a fork, it's two unary functions with a binary function sandwiched in between, which is what's being described on the screen here. However, you can also have a binary version of this. We'll call the functions X, Y, and Z, and we'll call the arguments A and B. Binary functions are infix, and unary functions are prefix. A and B are arguments; X and Z are our binary functions that are going to be applied first, and Y is our binary function that's going to be applied last. This evaluates to the following: we have Y of (X of A comma B) and (Z of A comma B). So we're evaluating X and Z, passing A and B as arguments to both of those, and then the results of those are being passed to our binary function Y.
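In the same hypothetical C++ style as the unary fork sketch earlier, a binary fork looks like this:

```cpp
#include <iostream>

// Binary fork (X Y Z): evaluates Y(X(a, b), Z(a, b)).
auto fork2 = [](auto x, auto y, auto z) {
    return [=](auto a, auto b) { return y(x(a, b), z(a, b)); };
};

int main() {
    auto plus  = [](int a, int b) { return a + b; };
    auto minus = [](int a, int b) { return a - b; };
    auto times = [](int a, int b) { return a * b; };
    auto f = fork2(plus, times, minus);  // (a+b) * (a-b)
    std::cout << f(5, 3) << '\n';        // 8 * 2 = 16
}
```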
And so what we've done here, for the moment, is we've used what was identity before, but in a binary context: the left and right tacks, which basically look like sideways T's, one pointing to the right and one pointing to the left. These are binary functions where one just takes the left argument and one just takes the right argument. So if we do this, and I have the right number...
You have a straight C at the end there.
Oh, right, yeah. And... I think we don't need those parentheses, and we need to comment out our explanation. So at the moment here, we don't have not being used anywhere. So basically the inner fork is just replicate, with left and right on either side of it as the two outer binary functions, and then catenate in the middle. But if we compose the not function with this identity, and it's going to be one of these, basically that not is going to change the mask from one, zero, one, zero to zero, one, zero, one, and we're good to go.
However, this looks pretty messy. We still have, I believe, an equal number of parentheses.
Yeah, still an equal number of parentheses. One fewer, maybe? No, I think... one, two, three, four, five, six, seven, eight. However, we can clean this up. Whenever you have, basically, a binary fork where the left binary function is left and the right binary function is right, this is the exact same thing as just the middle binary function. Because that's what a binary function does: it takes two arguments, uses the left one on the left and the right one on the right, and you're good to go. So this will be the same thing. And once you've gotten rid of the fork and you just have a binary function, well, we don't need these parentheses. So now we're down to... we reduced, what was that, four characters? All of it was just noise. Absolutely beautiful.
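In terms of the hypothetical fork2 helper above, the simplification being used is that a binary fork whose outer functions are the left and right tacks collapses to its middle function:

```cpp
#include <cassert>

int main() {
    auto fork2 = [](auto x, auto y, auto z) {
        return [=](auto a, auto b) { return y(x(a, b), z(a, b)); };
    };
    auto left  = [](int a, int)   { return a; };           // left tack
    auto right = [](int, int b)   { return b; };           // right tack
    auto cat   = [](int a, int b) { return a * 10 + b; };  // stand-in for catenate

    // y(left(a,b), right(a,b)) == y(a, b) for all a, b, so the fork
    // is just its middle function and the parentheses disappear.
    assert(fork2(left, cat, right)(1, 2) == cat(1, 2));
}
```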
Well, and I'd also argue that, in practice, probably the BQN compiler optimizes this under the hood, but it looks more efficient, because you're only building that mask once. Previously you had two places where you built the same mask, and here you've lifted that out.
Thing to note, though: BQN is actually not a compiled language. Most array languages are interpreted.
Whatever. Compiler, interpreter, the thing will do that optimization, I'll assume.
And we're pretty close to being done with this solution, but there was one last step that I showed last time. Right now we've got our two-composed-with-modulus as our left outer fork function, and then identity, and that's what we started with. And on the inside we've got basically a fork where, on the right, it's just replicate or compress, and the binary function in between that one and the other one is catenate. So you're going to end up with one, three as the result of everything, catenated with, at the beginning, the result of basically the notted, or negated, mask: the complement of one, zero, one, zero, which is going to be zero, one, zero, one. Like I said, this is all probably lost on the listeners, but there will be a YouTube video for those that are intrigued enough by this discussion to go and watch it, and it'll probably make more sense with visual cues.
But we can switch the left and the right binary functions in this inner fork so that it looks like this. And because the way that APL evaluates is right to left, we can now get rid of the two parentheses that are containing this.
Now, I'm going to argue, as I argued last time, that the parentheses actually added some clarity.
To the uninitiated.
Yes, but now you're relying on people knowing how precedence works in this language. Now there's just a soup of symbols in the middle of this, and there's no hierarchy or structure.
I think this is true. I mean, I prefer this, because I have dived into the pool of knowledge that is the array world. And also, we haven't even fully finished: we've swapped the order, but we want the odd numbers at the front. And this is where I think I got all excited last time. This is where the C combinator comes in, a.k.a. flip from Haskell. And that's this little squiggly superscript symbol here, which basically reverses the order of the arguments passed to your binary function, here being catenate, so we get back this.
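flip, the C combinator, is easy to sketch in the same hypothetical C++ style as the fork helpers above:

```cpp
#include <iostream>
#include <string>

// The C combinator / Haskell's flip: reverses the order of the
// arguments passed to a binary function.
auto flip = [](auto f) {
    return [=](auto a, auto b) { return f(b, a); };
};

int main() {
    auto cat = [](std::string a, std::string b) { return a + b; };
    std::cout << cat("odds", "evens") << '\n';        // oddsevens
    std::cout << flip(cat)("odds", "evens") << '\n';  // evensodds
}
```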
And even at that point, I think Bryce was like, okay, no, no, no, let's rewind here. You took out two parentheses...
Yeah, and added a C combinator, and I do not think that increased clarity.
We can agree to disagree, although many people would just agree with you. And then we can name this, and once we name it, we can also get rid of a couple extra parentheses.
But you could have done that with the previous solution as well.
We don't need the arrow here. And then, boom, that's what's up.
So pretty freaking awesome. And I just love... what was it? Just this past Sunday, the LeetCode contest happened, and I was solving the solutions in APL and BQN, and it's just marvelous. I ended up using a Psi combinator inside the S' combinator, a.k.a. a fork, in order to apply the same unary function to each of two arguments before a binary operation. That didn't make any sense, but it's just the power of APL and BQN and the operators that you get with them. So we're doing pretty good. We've got 10 minutes left before we have to drop off. And at this point...
Yeah, yeah, yeah.
I was about to say at this point you got very excited.
And, yes, at this point I got very excited.
Well, this is where... I'm going to ask this question, and you can answer it now, but I'll probably go and find your initial sort of meandering, because you know the answer now. At this point in the previous recording, all of whose audio got lost, I said: we just did this super fun walkthrough of an APL slash BQN solution to this LeetCode problem, which was super fun, but it was not the point at all. The walkthrough was just extra icing on the cake. The real question, and the whole buildup to get to this point, is: what is special about this solution? Because a lot of people, their initial remark is going to be, well, you're creating a bunch of temporary arrays in the form of masks, and you're doing all this extra work. Whereas if you compare it to the implementation of, like, the two-pointer, two-finger partition solution, where you work from the front of the array and the back of the array, you find two elements that are in the wrong partitions, and then you swap them... from a serial or consecutive point of view, that is much more efficient than all this temporary array creation and copying. And so I understand that point of view, but I think this solution is super, super awesome even in light of that, and people shouldn't be discounting this way of thinking in array languages. And at that point, I put it to Bryce and I said: Bryce, do you know why it's so special?
I think I got it pretty quickly, actually. The answer is that this can be parallelized.
Oh man, so now we're going to do the thing where he said, "I'm pretty sure I got it pretty quickly," and then I'm going to go back to what it actually was, because I'm pretty sure it took a little bit of hand-holding. Because I can't remember what you said. You said a couple of things, and I was like, "That's a good point. That's a good point. I didn't think about that. Not what I'm looking for."
Oh, yeah, yeah, yeah. "Potentially inefficient, because you might be generating those Boolean masks twice."
Yeah.
"It doesn't have to be inefficient, because you can do the fusion here. If you have this point-free expression, you can go and optimize it so that you only do a single pass. Now, this can be parallelized, can't it?" I think I got there pretty quickly, and I think my other points were pretty reasonable points.
But then you pointed out... and that's the important part... I think I said, yes, it can be parallelized.
But even more exciting than it just being parallelizable... take it a step further. What am I actually really excited about?
This is the current algorithm that Thrust uses to implement its parallel partition.
Correct. And after having done this the first time, a couple of months ago, when I was just messing around, I thought, I wonder how thrust::partition actually does this. And when I went to the Thrust partition docs, very quickly you could see it's not identical to what's happening here, but you're basically creating some temporary storage and using it to... I mean, we will follow up with this in a follow-up episode, which is what Bryce was referring to earlier. There's a count_if that happens, a thrust::count_if, to figure out how many elements are going to be in the first partition, so you know where to start the second partition.
What did you want to say?
Well, I was going to say, perhaps more specifically, that the way Thrust implements this today is with three passes, I believe. There's a count_if to figure out the size of each of the partitions, and then there are two copy_ifs: one for the true cases and one for the false cases.
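A rough sketch of that three-pass shape (a simplification of what's described here, not Thrust's actual source; it assumes O(n) temporary storage and uses an illustrative odd/even predicate):

```cpp
#include <thrust/copy.h>
#include <thrust/count.h>
#include <thrust/device_vector.h>

struct is_odd  { __host__ __device__ bool operator()(int x) const { return x % 2 != 0; } };
struct is_even { __host__ __device__ bool operator()(int x) const { return x % 2 == 0; } };

// Simplified three-pass partition: count_if plus two copy_ifs into
// O(n) temporary storage, which also makes the result stable.
void three_pass_partition(thrust::device_vector<int>& v) {
    thrust::device_vector<int> tmp(v.size());

    // Pass 1: figure out where the second partition starts.
    auto n_true = thrust::count_if(v.begin(), v.end(), is_odd{});

    // Pass 2: copy the "true" elements to the front.
    thrust::copy_if(v.begin(), v.end(), tmp.begin(), is_odd{});

    // Pass 3: copy the "false" elements after them.
    thrust::copy_if(v.begin(), v.end(), tmp.begin() + n_true, is_even{});

    v.swap(tmp);
}
```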
And during our discussions last time, I think I came to a conclusion... and I should add a caveat here. The caveat is: for basically all of the stable algorithms in Thrust, for stable_sort and stable_partition, the same algorithm is used for the stable version and the unstable version.
Right.
Just because there has not been a benefit in doing something different. And I think in the case of sort, there's not really a parallel sort option that would be unstable. Now, interestingly, once we started talking about this for partition, we realized that perhaps we could do an unstable partition faster. And then I think we came to the conclusion that we might even be able to do the stable partition faster. But basically, we quickly concluded that we probably don't need to do that count_if, so we can get rid of that pass.
And then we just have those two copy_ifs. The way you could avoid doing that count_if is: instead of both copy_ifs starting at the front of the sequence, if you have a type of iterator where you can get a reverse iterator, then one copy_if starts at the front and one starts at the back, and you don't need to figure out where the second copy_if needs to begin. You just have potential stability problems.
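A sketch of that hypothetical two-pass variant (again not what Thrust ships): writing the "false" elements through a reverse iterator removes the need for the count, at the cost of reversing the order of the second group.

```cpp
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <thrust/iterator/reverse_iterator.h>

struct is_odd  { __host__ __device__ bool operator()(int x) const { return x % 2 != 0; } };
struct is_even { __host__ __device__ bool operator()(int x) const { return x % 2 == 0; } };

// Hypothetical two-pass partition: no count_if. The "true" elements
// fill the front of tmp; the "false" elements fill from the back via
// a reverse iterator. The two output ranges cannot overlap, because
// together they write exactly v.size() elements.
void two_pass_partition(thrust::device_vector<int>& v) {
    thrust::device_vector<int> tmp(v.size());
    thrust::copy_if(v.begin(), v.end(), tmp.begin(), is_odd{});
    thrust::copy_if(v.begin(), v.end(),
                    thrust::make_reverse_iterator(tmp.end()), is_even{});
    v.swap(tmp);  // note: the "false" group ends up in reversed order
}
```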
But then the other thing that occurred to me is, well, if that's the case, couldn't we just do it in one pass? And I think that might be possible. And I think that's what we're going to explore in some future episode.
Yeah. I definitely think the two-pass one should be more than possible. The one-pass one, I'll have to see what you're thinking. Keep in mind, here we have an assumption of O(n) temporary storage, so we don't have to do it all in place. The algorithm today is three passes and already uses O(n) temporary storage. Apparently not a lot of people are doing parallel partitions, because the current algorithm is slow, so it was clearly never a big priority for us to optimize it. But if you are doing a parallel partition, we're going to make it faster. Although, you know, slow on a GPU can sometimes still be a lot faster than whatever you're doing.
There's one reason that the one-pass version I have in mind might be slower than the two-pass version, and it sort of gets into our efficiency versus speed episode from a while back: that one-pass algorithm might be branchy. And if you have to do a single pass on a GPU where that single pass has an if statement, sometimes that's going to be slower than doing two passes where there's no if statement in either of the passes.
But we'll see.
We're going to explore all that.
It's going to be awesome.
Yeah.
Live coding.
We're going to start a whole series on refactoring and increasing the perf of Thrust algorithms.
And a great note to finish on, which I just realized as Bryce was explaining how the Thrust partition algorithm works and how it could be enhanced or improved: the vocabulary that we're using here. This is why you should learn your algorithms, going back to the whole C++ Seasoning thing, Sean Parent's "know your algorithms," and a bunch of people popularizing that sentiment. The vocabulary that we're using to talk about doing this is algorithms. We're not talking about for loops or kernels or all the lower-level pieces that we could build this with. We're talking about a count_if, a couple of copy_ifs, a transform iterator, which is just sort of the equivalent of a transform algorithm, except that it does it on demand, or lazily. And in my opinion, we are communicating at a higher level than is possible if we don't have the names of these operations. There are a lot of folks who, when you give a talk on algorithms, will come up afterwards and say, oh, is it really worth it? It's a bunch of effort to go and learn all the different names of these algorithms in every different library. And that's the thing: sure, these are in Thrust, but they're all equivalents of what exists in the standard algorithms. It's not something you have to go learn N different times. You learn it once, and it's now enhancing the vocabulary with which you can communicate with your coworkers.
It's all about composition, too, right? The algorithms on their own aren't particularly useful. The interesting thing is the ways in which you can compose them together and combine them, the way in which you can write your own algorithms just by plugging the right operators into the existing algorithms. Like, if reduce only added up integers, that's not a very exciting algorithm. But think about all of the things we've done on this podcast with the notion of a reduction. Understanding that pattern, that algorithmic structure, and understanding how you can use it and combine it with other algorithmic structures, is really powerful.
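As one small example of that kind of composition (a sketch): counting the odd elements by plugging a transform iterator into thrust::reduce, rather than writing a loop or a kernel.

```cpp
#include <iostream>
#include <thrust/device_vector.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/reduce.h>

struct is_odd { __host__ __device__ int operator()(int x) const { return x % 2; } };

int main() {
    thrust::device_vector<int> v(4);
    v[0] = 1; v[1] = 2; v[2] = 3; v[3] = 4;

    // Reduce over a lazily computed 1 0 1 0 mask: a count_if built
    // by composition instead of a bespoke loop.
    int odds = thrust::reduce(
        thrust::make_transform_iterator(v.begin(), is_odd{}),
        thrust::make_transform_iterator(v.end(), is_odd{}));
    std::cout << odds << '\n';  // 2
}
```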
Yeah, I completely agree.
And on that note, we'll wrap it there, because that's such a good note to end on. And we have to go in four minutes.
So thanks for listening.
We hope you enjoyed and have a great day.