Algorithms + Data Structures = Programs - Episode 131: One Algorithm To Rule Them All!
Episode Date: May 26, 2023

In this episode, Conor and Bryce chat with Ben Deane and Tristan Brindle about which algorithm is the most fundamental of all algorithms!

Link to Episode 131 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Twitter: ADSP: The Podcast, Conor Hoekstra, Bryce Adelstein Lelbach

About the Guests

Ben Deane has been programming in C++ for this whole millennium. He spent just over 20 years in the games industry working for companies like EA and Blizzard; many of the games he worked on used to be fondly remembered but now he's accepted that they are probably mostly forgotten. After getting more interested in modern C++, in the teens he started giving internal company talks and then talks at various conferences, spreading ideas about types, algorithms and declarative and functional techniques.

Tristan Brindle is a freelance programmer and trainer based in London, mostly focussing on C++. He is a member of the UK national body (BSI) and ISO WG21. Occasionally he can be found at C++ conferences. He is also a director of C++ London Uni, a not-for-profit organisation offering free beginner programming classes in London and online. He has a few fun projects on GitHub that you can find out about here.

Show Notes

Date Recorded: 2023-05-16
Date Released: 2023-05-26

ADSP Episode 130: C++Now 2023 with Ben Deane & Tristan Brindle!
C++17 std::reduce
zip_fold_while (No Link)
scan_while
What Do You Mean by "Cache Friendly"? - Björn Fahller - C++ on Sea 2022
b-tree
Valgrind
C++Now 2023: Composition on Tiny Embedded Systems - Luke Valenty
CppCon 2016: Jason Turner "Rich Code for Tiny Computers: A Simple Commodore 64 Game in C++17"
Thrust Docs
unfold (A tutorial on the universality and expressiveness of fold)
C++20 std::views::iota
Haskell iterate
Catamorphisms
Anamorphisms
J cut
Recursion Schemes
Hylomorphism
C++ thrust::counting_iterator
C++ thrust::transform_iterator
C++Now
CppNorth
Transcript
You're ruining my clickbait, Ben.
So the question is, what do we call this algorithm?
What do we call this one algorithm?
Yeah, what is this?
Is there an umbrella term for scans and folds?
Scaffold.
Oh man, now we've got Scanduction and we've got Scaffold.
Welcome to ADSP: The Podcast, episode 131, recorded on May 16th, 2023. My name is Conor.
Today with my co-host, Bryce, we continue the conversation with Ben Deane and Tristan Brindle and discuss what is the one algorithm to rule them all.
Anyways, all right, wrapping up now, finally, part one... which is, the real reason we've collected these algorithmic minds is to talk about the title of this episode, which will be One Algorithm to Rule Them All. And this idea came out of the interview that we had with Tristan, where we were having a discussion of what is the, quote-unquote, root algorithm to rule all algorithms, or what is the most fundamental algorithm that all other algorithms can be implemented in terms of.
Or does one even exist?
We're going to talk about that
for the remainder of this recording.
So I think maybe what we should all do first,
without any explanation,
one by one we'll go around
and we just get to say the name of the one algorithm
and we'll go from there.
Okay, cool.
I'll go first.
Reduce.
All right.
Bryce says reduce.
We'll go to Tristan next.
Zip, fold, while.
Zip, fold, while.
We'll go to Ben next.
I'll take the side that I reached out
to you about. Scan while.
Alright,
so we've got reduce, we've got zip fold
while, we've got scan while.
Now that I'm actually coming around myself...
Reaction from Tristan.
I think...
I think I'm...
I think I'll also say reduce,
although my mind can be changed.
Good man.
But I think reduce is where it's at.
All right.
Who wants to respond first to...
I feel like Tristan has thoughts on what Ben said,
and Ben has thoughts on what Tristan said.
Well, zip fold while seems like a bit of a cop-out. It seems like you're just jamming together some...
Sorry, yes, yes. No, you're absolutely right, because that's precisely what it is. But also, your answer is kind of the same as mine. It's real similar.
Yeah, I don't know what either of those two things are. Somebody explain.
Can I infer that zip fold is transform reduce?
Or could it be alternatively?
With multiple.
Okay, but transform reduce is just reduce.
Okay.
So hang on a second.
We know what a fold is, right?
We have some accumulator,
and as you iterate over the sequence
you update this accumulator and you have some function that returns this new value,
fold or reduce or standard accumulate. And then we can have a short-circuiting version of this that is fold while. Right? And in that case,
your,
your function returns two pieces of information.
It returns your updated accumulator and it returns like whether to
continue or not.
So that might be a pair or you can return an optional,
however you do it,
or you can use an out parameter,
but it has to return these two pieces of information.
Right.
Yeah.
So that's like a fold while. It's a short-circuiting...
or a version of fold that can short-circuit. It doesn't have to.
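A minimal sketch of that shape, just to make the description concrete (the name fold_while and its exact signature are illustrative, not a standard or proposed API): the step function returns the updated accumulator plus a flag saying whether to keep going.

```cpp
#include <utility>

// Hypothetical fold_while: like a fold/reduce, but the step function returns
// both the new accumulator and whether to continue, so it can short-circuit.
template <typename It, typename Acc, typename F>
Acc fold_while(It first, It last, Acc acc, F step) {
    for (; first != last; ++first) {
        auto [next, keep_going] = step(std::move(acc), *first);
        acc = std::move(next);
        if (!keep_going) break;
    }
    return acc;
}

// Example: sum elements, but stop as soon as the running total exceeds 100.
// auto total = fold_while(v.begin(), v.end(), 0,
//     [](int acc, int x) { return std::pair{acc + x, acc + x <= 100}; });
```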
And then you can do, yes, a sort of cop-out version that can take multiple sequences at one time and have an n-ary predicate. And then you can say, you can argue that that is more general
because you can have it, you know,
specialize it for the unary case as well.
Right.
So that is my, the root of all algorithms
because the zip version you can use to implement things
like mismatch and equal and adjacent find
and probably others, but those are the ones that come to mind.
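A hedged sketch of that zip formulation (zip_fold_while and my_mismatch are names invented here, following Tristan's description of a step function that returns the accumulator plus a continue flag); mismatch falls out by making the accumulator the current pair of positions.

```cpp
#include <utility>

// Hypothetical binary zip_fold_while: walk two ranges in lockstep, updating an
// accumulator; the step function returns the new accumulator plus a flag saying
// whether to keep going, so it can short-circuit.
template <typename It1, typename It2, typename Acc, typename F>
Acc zip_fold_while(It1 f1, It1 l1, It2 f2, It2 l2, Acc acc, F step) {
    for (; f1 != l1 && f2 != l2; ++f1, ++f2) {
        auto [next, keep_going] = step(std::move(acc), *f1, *f2);
        acc = std::move(next);
        if (!keep_going) break;
    }
    return acc;
}

// mismatch expressed on top of it: the accumulator is the current pair of
// positions, advanced while elements match, and we stop at the first difference.
template <typename It1, typename It2>
std::pair<It1, It2> my_mismatch(It1 f1, It1 l1, It2 f2, It2 l2) {
    return zip_fold_while(f1, l1, f2, l2, std::pair{f1, f2},
        [](std::pair<It1, It2> pos, const auto& a, const auto& b) {
            if (a != b) return std::pair{pos, false};  // stop: mismatch is at pos
            ++pos.first;
            ++pos.second;
            return std::pair{pos, true};               // keep scanning
        });
}
```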
See, I think we all just answered the same way.
Yeah, I was going to say reduce,
but like some of you just said a fancier reduce than.
Yeah.
You asked what was the most basic,
the version of it from which everything else can be.
Right, but we didn't ask for the most generic. We asked for the most basic, and the answer is reduce.
No, I mean, I think there are some subtle... well, Ben, you're going to say something now.
I was going to say, I think ultimately this question is sort of ill-formed. It's interesting to think about, but it's not useful every day. And I think one of the points of algorithms is to be multitudinous and have lots of different formulations. So trying to implement all the algorithms in terms of one is interesting as a pedagogical exercise, but not much more. And I do think we've said basically all the same algorithm. In my case, I said scan while.
I implemented scan while once as a one-function interface to a map.
As a one... sorry, say that again?
As the one function, pretty much the one-function interface to a map.
A map being like a C++ std::map?
Yeah, something with that interface. So something you can iterate. Actually, think of an ordered map, right? So something you can iterate, and you can take as long as you like. And it's a scan while, so... interestingly, Tristan, I don't know how you formulated it, but in my case I formulated it with a separate predicate. You seemed to suggest that the one function returned both the folded value and whether or not to continue.
That was my sort of, yeah, my way of doing it would be,
so you pass it one function that returns you the two pieces
of information.
Various ways you could return that information,
but I guess you could do it with having two separate,
perhaps it's more general to have two separate functions,
one of which does the accumulation and one of which decides whether to continue or not.
Well, yeah, I mean, I think they're equivalent. It's just a case of... in my case it seemed obvious to put the predicate separate in the API, so I did. But yes, it goes down the sequence. It does a scan.
It stops when you want it to stop. And that could be that you can have a predicate that works on the accumulated value,
or you can have a predicate that works on the sort of depth index you've scanned to.
Right.
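A rough sketch of the separate-predicate formulation Ben describes (all names invented here, not a real library API): the combining function updates the accumulator, each intermediate value is emitted, and a separate predicate decides when to stop.

```cpp
#include <vector>

// Hypothetical scan_while: emits each intermediate accumulator value and stops
// as soon as the predicate rejects the latest one.
template <typename It, typename Acc, typename F, typename Pred>
std::vector<Acc> scan_while(It first, It last, Acc acc, F combine, Pred keep_going) {
    std::vector<Acc> out;
    for (; first != last; ++first) {
        acc = combine(acc, *first);
        if (!keep_going(acc)) break;   // the predicate could equally look at out.size(), i.e. the depth
        out.push_back(acc);
    }
    return out;
}
```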
Okay. And what's interesting, if we ignore the zip part of the zip fold while and we just compare fold while versus scan while, is kind of what you're talking about, Ben: it's not really useful to think about implementing one in terms of the other. Like, you can implement every algorithm in terms of those two, but there are trade-offs for each one, right? With the scan while, you're always going to be building up a linear amount of space, depending on what you're doing. And then if you're trying to implement a reduction in terms of that scan, that's kind of wasteful, because you don't need to materialize all of the first zero to N minus one values. You really only want the last one. Whereas with a fold, you can implement folds directly, which avoids the problem of implementing a fold in terms of a scan while. But then if you want to implement a scan in terms of a fold, it depends on the copying semantics of your accumulator from iteration to iteration. Like, I remember in one of your talks, Ben,
you actually talked about how the, you know,
they changed some requirements in C++ 17 or 20
which made it possible to avoid those copies.
So if you were building up like a string
by doing concatenation,
you could actually do it non-suboptimally.
So I think you could do it optimally in certain cases.
In C++20, accumulate was changed to move the accumulated value
through the computation.
Right.
And that's probably what you think of.
So then you avoid those copies.
I mean, accumulating strings still incurs temporaries
and allocation and that sort of thing.
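For reference, a minimal sketch of the string-concatenation case being discussed (a toy, not a recommendation): since C++20, std::accumulate moves the accumulator into each call of the operation, so the lambda can append into the moved-in buffer instead of copying the whole partial result every step.

```cpp
#include <numeric>
#include <string>
#include <vector>

// Concatenating strings with std::accumulate. In C++20 the accumulator is
// moved through the computation, so each step appends into the same buffer.
std::string join(const std::vector<std::string>& words) {
    return std::accumulate(words.begin(), words.end(), std::string{},
        [](std::string acc, const std::string& w) {
            acc += w;        // append into the moved-in accumulator
            return acc;      // handed back (and move-assigned) to the algorithm
        });
}
```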
But it's just that now they get moved instead of copied, so it's slightly better. I still wouldn't recommend that use case.
Yeah. So, you know, you can implement a scan in terms of a fold and a fold in terms of a scan, but there are trade-offs in each one, which is why you would probably argue you don't want to do those the same way. Like, definitely in Thrust, we do not implement our reduce or our scan in terms of one or the other.
They are two fundamental algorithms.
Correct, Bryce?
Yeah.
I mean, I think that the fundamental difference is the shape of the output
right
and so there will always be an
inefficiency in doing one in terms
of the other
yeah
that reduce gives you back a single
thing you know
a single final value, whereas a scan
gives you back n things
so if you implement the reduction in terms
of the scan, then you will have computed or stored or kept around more information than you needed.
You only needed the last value of that scanned array, but you had to build up and keep that
whole thing around. So it
may be storage inefficient, it may be compute inefficient. On the other hand, if you implement
the scan in terms of the reduction, then you have to build up your output array sort of on the fly,
and that's going to potentially be inefficient. You can't just allocate it all a priori.
So the big question is: the two NVIDIA employees' answers don't have a suffix underscore while. So, Bryce, I'll ask you... you know, the critic would say you can't implement short-circuiting algorithms. So how is reduce possibly...?
Because I think...
Yeah, but see, I'm thinking about it
in terms of parallel primitives.
And in parallel,
if I'm doing something like a find if,
it's just more efficient.
Or it's less efficient, but faster.
Something we've talked about before
to simply not short circuit.
Yeah, that's the same thing I would say. But if David Olsen was here... because I've said this before in a meeting. David is a fellow NVIDIAn who works on nvc++.
Yeah, and he works on compiler stuff as well. He heard me say that and was like, well, that's an oversimplification.
He says, it depends.
And actually, didn't we get a request to talk about... I can't remember the name of the algorithm implementation where you do it successively. So, like, if you're trying to do something like a find if or a short-circuiting algorithm, the way to do it, an implementation on GPU or something in parallel, is to just choose different values of N. So you do like a full-blown reduction, but only on the first 10, and then if you don't end up short-circuiting in that first 10, then you try the next 20 or something. I can't remember if you start from the 10% mark to the 20% mark.
For a sufficiently large dataset, it may be correct that short-circuiting is going to be faster on modern parallel hardware. You know, if I'm searching in some text and it's, you know, gigabytes upon gigabytes of data, and I'm going to be loading that onto and off of the GPU or whatever my parallel processor is, then yeah, short circuiting might matter if I'm searching for something in like a hundred or 200 gigabyte input. I just don't think that's the common case. And it definitely slows down
the common case having to do short circuiting in a parallel computation, because that means
you're doing some sort of, either some sort of branching or some sort of work dispatch logic where you're not actually
queuing up all the work at once because you're going to launch some of it and then wait.
And like, that's one of the problems with what you just described is like, if you do,
oh, I'm going to do some of this computation and then see if I got an answer and then launch more.
Then if you're on a system where your work and queue times have a long latency, which tends to be true about GPUs, GPUs are bandwidth optimized, not latency optimized, you pay a penalty for not enqueuing your work as early as possible. And it's not just the penalty of how long it takes to send the work to the GPU. That's
actually come down a lot in recent years. It's just the fact that that first chunk of work may
happen so quickly. And then whatever logic you're doing on your CPU, you know, those are cycles
where you're not using your compute-heavy resource. Like, there's not really much point in having, you know,
the world's most powerful processor in your system
if you're not utilizing it all the time.
And so I do think that sort of speculatively
computing is the way to go.
You know, maybe the best way to do it would not be to launch a piece and then wait to
see whether you get a response, but instead just put the work up into multiple chunks
and enqueue them all at once and then to cancel later if you find the result.
And that may be a better approach because then you don't have to necessarily have the work
like check for completion or anything. It's like you just have this cancellation case. But I don't even know how one would structure that on our platform, you know, whether there's a graceful way to cancel some outstanding submitted work that does not require the work to have built-in support for cancellation.
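As a very rough sequential sketch of the chunking idea described earlier (the function name, starting chunk size, and doubling policy are all made up, and a real GPU version would enqueue the chunks speculatively rather than searching them one after another):

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>

// Hypothetical chunked find_if: do a full, non-short-circuiting search over
// successively larger chunks and only check for a hit between chunks, instead
// of branching on every element.
template <typename It, typename Pred>
It chunked_find_if(It first, It last, Pred pred) {
    std::size_t chunk = 1024;                                   // arbitrary starting size
    while (first != last) {
        auto remaining = static_cast<std::size_t>(std::distance(first, last));
        It chunk_end = std::next(first,
                                 static_cast<std::ptrdiff_t>(std::min(chunk, remaining)));
        // In a real parallel version this chunk would be searched with a parallel
        // reduction, or enqueued speculatively; here it is just a serial search.
        It hit = std::find_if(first, chunk_end, pred);
        if (hit != chunk_end) return hit;                       // found within this chunk
        first = chunk_end;
        chunk *= 2;                                             // grow the next chunk
    }
    return last;
}
```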
I think it's important to say also that,
so yes, so all of that kind of strategy of parallelism and whether or not to actually short circuit,
I think is important.
But in some sense, it's an engineering choice.
And the formulation of the function, scan while or fold while, that's an ergonomic choice, right? The user wants to use this function because that's the way they think about it, and that's the eventual result they want to get out, right? And if they can do it in one function... otherwise it would be, you know, having to deal with a scan and then, I don't know, a find, and maybe something else. So hopefully we can have our cake and eat it, and have the ergonomic function. But yes, obviously, it is frequently faster to do the work rather than, as Bryce just said, stop, branch, re-enqueue, whatever, in the parallel context, in the parallel case.
Yeah, if you're sequential, it's always... well, is it always? I think it's always faster to just stop, right? If you're going left to right or right to left.
Well, it depends.
A branch every loop iteration might not be faster.
It depends on what you mean by serial.
If you mean single-threaded, no, it may not be faster because a vectorized version of the algorithm may be faster.
Is vectorized fall under the category of serial, though?
But that's why I said single threaded you may have a single
thread like that's what I think a lot of people would
think of as serial
and by that I mean
sequential, a better term then.
Does sequential definitely rule out vectorization?
Yeah, but nobody actually writes code like that.
I mean, even if you write code like that today, every... I was going to say, like, people write code like that,
but that's just what your compiler...
Every modern major processor,
you're not getting performance out of it
if you're not vectorizing your code.
And so you're leaving a huge chunk of performance on the table
if your code isn't being vectorized.
And so if you're calling something like a reduction or a find if on any modern processor,
and by modern I mean like last 20 years, not really even modern anymore.
Well, any desktop or server processor.
I mean, the target, my target doesn't do vectorization like that at all.
Yeah, sure, but the majority of us are programming on a desktop, server, or mobile processor, all of which are fundamentally parallel computers. I mean, obviously, above a certain problem size, yeah, short-circuiting is going to be faster. But in the common-case problem size, it may not be. This is like the whole: should you use a std::map, like a red-black-tree type data structure, or do you just want a flat map that's basically a fancy vector? In a lot of cases you just want the flat map. You want the... machine sympathy is the term that Matt uses.
Yeah, there's a great talk by Björn Fahller. I'll find it; I can't remember the title of it. I think it's got cache friendliness or something in the title. He's given it a couple of times.
Where for a moment I thought you meant cash, like money, and I was like, what?
C-A-C-H-E. And I think it starts off with him finding some performance problem, and then he ends up, I wouldn't necessarily call it yak shaving, but he ends up going down a rabbit hole where he ends up ultimately... well, I kind of ruined the talk, but he ends up, you know, re-implementing a bunch of stuff and then discovering that what he did was something called a B-tree, which was basically a modified type of tree that had way better cache locality.
And so it ended up being way more performant than like this theoretical different type of data structure that they were using.
And I found that super interesting.
And he did it all with, I can't remember if it was Valgrind or Grind or however you pronounce that.
But he showed all the cache misses and whatnot, and that, sure, they teach you this theoretical big-O-notation performance stuff, but at the end of the day you're actually running stuff on hardware, where it's a much different story.
That's interesting, because if I'm not mistaken, B-trees were invented for storage, right? They're optimized for where there's a big latency difference between accessing successive layers of the data structure.
I guess so. In particular, you know, that last layer could be on a spinning disk.
Yeah. So it's interesting, you know,
the multiple-layered cache and memory
that we have in modern architectures
makes B-trees...
The spread between...
Yeah, the spread.
The spread between mean memory latency
and cache latency
has increased in recent years.
And also, as processors have gotten faster, you've become a lot more susceptible to noticing that spread in latency. Basically, compute has become so cheap in recent years that it's essentially free. You know, when every processor... maybe for you, Bryce... when every CPU or GPU, to be able to fully utilize things, you need to be able to feed it enough data, and there's now this very narrow window for how much data you need to move to actually be able to fully utilize your central processor.
I'll say central processor, then.
So on the other end of the spectrum,
I would like to add an addendum to the talks at C++ Now.
My friend and colleague Luke Valenti gave a talk
about composition on tiny embedded systems.
And just to give you an idea of what was meant by Tiny,
he was live coding and indeed building on a breadboard.
He was driving an LED with a Tiny microprocessor
with 1K of program memory and 64 bytes of RAM, in C++20. And that talk was awesome for a couple of reasons. One, just to see that you can use C++20 on that kind of system. Two, the audience were on the edge of their seats, because it was very sensitive to Luke actually getting the wiring correct as well.
Lots of things that could go wrong, but it all came off and it was brilliant.
I remember Jason Turner's talk, CppCon one with the Commodore 64.
But yeah, he's gone and done it with a thousand times less RAM.
That he's still using C++20 is amazing.
I do think that the trends that we see in the high end of processors have a long-tail effect on all processors in general. Like, you may be more compute-constrained on a small DSP or a small embedded processor, but you're less compute-constrained than you used to
be. And what's really changed has been the ratio of compute to memory latency and throughput.
Like compute latency and throughput, compute latency has gone down quicker than memory latency and compute throughput has risen quicker than memory throughput.
And that has narrowed the window of how many compute operations you need to do per byte of memory that you move to fully utilize things. And so that, you know, maybe, maybe at earlier points in our
history, we were spending more time waiting for compute. But these days, most processors are
waiting for memory to be moved somewhere.
So the question is, what do we call this algorithm?
What do we call this one algorithm?
Yeah, what is this? Is there an umbrella term for scans and folds?
Scaffold.
Oh man, now we've got scanduction and we've got scaffold.
A scaffold while. I like that.
I mean, I sort of put them in different categories anyway, because a scan gives you back... it's a sort of lazy, like a range adapter, right? As you ask for each element of the adapted range, it goes and fetches the previous element, and so it's almost a separate category to me anyway.
These range adapters are a different category
from the eager algorithms like Fold
or the hundred or so other ones
that will iterate over the whole range
and give you back some result.
They're sort of-
I think, I disagree.
I think that you're describing this
from the perspective of one particular
implementation strategy or technique. And I think that you could implement a fold lazily just like
that. And the lazy way that you're describing implementing a scan would be horribly inefficient
on certain platforms and in certain architectures. In particular, if you're doing the scan in
parallel, you can't do that lazy thing.
So I think that both of them can be expressed
either lazily or eagerly.
And I do think that they're the same family of things.
I haven't ever tried to do a parallel scan,
so I don't know.
Oh, well, we have a whole episode already.
Yeah, I remember listening to it.
But if you think about in your head
the compute pattern for a scan and a reduction.
Can I think about it not in my head?
This is a podcast, and your head is going to have to be where this goes.
But if you think about it, it is the same shape. It is fundamentally the same shape. It's just a question of what results you're getting out of it.
I don't know. I agree. So I was trying to search, because I thought at one point on the Thrust documentation, they bucket the algorithms in the Thrust header.
What documentation?
Yeah, very funny.
Thrust.github.io slash doc and link in the description.
And I thought at one point it had the scans under the – so like the – under algorithms, there's searching, copying, reductions, reordering, prefix sums, and transformation.
So I'm not sure if prefix sums is new, but I thought at one point I saw the scans under
the reductions header, which always bothered me.
And I have this like one table of fundamental categories and there's maps, there's folds
or reduces.
And then at one point I kind of put scans... but scans are in between maps and reductions. They're like maps in that they give you n values back, or, you know, plus or minus one, and they're like reductions in that they carry state, whereas maps don't. So they're like a map that carries state. But I don't know, I've never made my mind up. Like, I do feel that they're different, though.
But also, kind of, now that we have this fold while versus scan while,
it'd be nice if, like, I don't want to call it a stateful algorithm underscore while.
I was just going to say you could use scan to implement a map.
You could look at a map as a special case of a scan.
Anyway, gosh.
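Just to make that aside concrete (a toy, not how you'd actually write a transform): if the combining operation ignores the carried value, std::inclusive_scan degenerates into a map.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> in{1, 2, 3, 4};
    std::vector<int> out(in.size());

    // The "combining" operation ignores the carried value and just returns
    // f(x) = x * x, so the scan emits the mapped elements one by one.
    std::inclusive_scan(in.begin(), in.end(), out.begin(),
                        [](int /*carried*/, int x) { return x * x; },
                        0 /* init; never actually observed */);

    for (int v : out) std::cout << v << ' ';   // prints: 1 4 9 16
}
```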
Yes, definitely.
I was going to say, I think we're overly aggrandizing this with the one algorithm to rule them all, because it's not even one algorithm to rule most.
You're ruining my clickbait, Ben.
we're very much thinking in the model of like, okay, these are algorithms that work on a single sequence, a linear sequence, a one-dimensional sequence.
And we're talking about reducing or, you know, all the things that encompasses within that frame.
But, you know, we're not talking about unfolds.
Folds and unfolds.
Folds and unfolds can be viewed as different sides of the same coin and implemented in terms of each other.
And we're not talking about graph algorithms, which is a whole, you know, we're talking about one-dimensional algorithms in a sense.
Or possibly fixed-dimensional algorithms.
We're not talking about...
But you can build a lot of the other stuff on top of reductions. What is an unfold?
An unfold is... the simplest example of an unfold that we have in C++ is probably iota. So rather than going from a sequence and producing a value, you're going the other way around: you're going from a seed value and you're unfolding it into a sequence.
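For a concrete picture, here's a minimal generic unfold sketch (the function and the iota example are made up for illustration; this is not a standard API): the step function returns nullopt to stop, or a (value, next seed) pair to continue. Iota falls out as the case where the step is increment.

```cpp
#include <optional>
#include <type_traits>
#include <utility>
#include <vector>

// Hypothetical unfold: grow a sequence from a seed.
template <typename Seed, typename F>
auto unfold(Seed seed, F step) {
    using Pair = std::decay_t<decltype(*step(seed))>;       // pair<Value, Seed>
    std::vector<typename Pair::first_type> out;
    while (auto next = step(seed)) {
        out.push_back(std::move(next->first));
        seed = std::move(next->second);
    }
    return out;
}

// iota(5, 10) as an unfold: emit the seed and increment it until we reach 10.
inline auto first_five = unfold(5, [](int n) -> std::optional<std::pair<int, int>> {
    if (n >= 10) return std::nullopt;
    return std::pair{n, n + 1};                              // (value, next seed)
});
// first_five == {5, 6, 7, 8, 9}
```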
The unfold in Haskell is called iterate, right?
Iterate is one of the, yeah, a general form of unfold.
So iterate takes a seed value and a function and it gives you back the sequence of X
and then f of x and f of f of x, etc.
Yeah, that makes sense. And iota is that same shape, just where the function is the increment function.
I was just thinking about how you would express scans and reductions mathematically. And I think mathematically you would define both in terms of
summation would be the underlying operation. There's a reason it's called prefix sum.
So I do think that they're both of the family of summation algorithms.
Sorry, I was just going to say, does that mean that something like a split or a group by or something like that,
where you go from a sequence of T to a sequence of sequences of T,
is that also a kind of unfold operation?
Or is that a different category?
I mean, in a sense, it could be, I guess.
I would say yes.
I wasn't familiar with the term unfold, so I'm just trying to get it straight in my head.
Yeah, it is used. It's more normally used, talking about the one-dimensional case, to go from a seed to a sequence. But yes, I mean,
so reduce is called reduce because it
reduces the number of dimensions by one, right, in the data.
And if we think about the reverse of that as an unfold, then if you're increasing the number of dimensions by one,
then as well as going from zero dimensions to one dimension, you could also go from one dimension to two dimensions, I suppose.
Okay.
But it's super useful for things like, oh, I saw a talk at LambdaConf back in the day about tournament design.
You're designing a tournament for some game or sport, and you can express that algorithmically with a sort of unfold.
Don't ask me the details because I've forgotten it,
but go find the talk.
Link in the description, yeah.
I remember at the time.
We'll find that talk.
Understanding it, but, you know,
lack of use has meant that it's gone out of my brain.
The thing I was going to say about unfolds
is that for a long time in my talks, I used to mention that reductions are known as catamorphisms in category theory and that unfolds... or, folds are known as catamorphisms and unfolds are known as anamorphisms. And for a while I used to refer to all of these splitting algorithms, like chunk, chunk by, adjacent, slide, et cetera, the range adapters that we got in C++23, and these all already exist in languages like Haskell, et cetera.
I used to refer to those as anamorphisms, which technically they are, but anamorphisms, a.k.a. unfolds, are a larger umbrella term that I did not mean to use.
I meant purely the ones that are going from a sequence of values to like a sequence of
sequences.
I did not mean like iota and iterate.
And I don't even really think about two-dimensional ones to like three-dimensional ones.
I was purely talking about, you know, you have a sequence of values and you want to
chunk adjacent equal ones.
And so I've started referring to those as just like splits in the J language.
That category of algorithms is known as cut, which maybe is growing on me.
I think a useful search term also is recursion schemes.
I've seen that used to refer to catamorphism, anamorphism.
And the combo of
catamorphism and anamorphism,
I'm not sure
which way around. Either way around, perhaps.
It's called a hylomorphism, I believe.
A hylomorphism.
Just a generalized data structure transformation.
Yeah, I tried to understand it.
And I think I did briefly for like a day.
And then you gave me some example that helped, but it's never stuck.
What that would have been... yeah, I only remember it because I think of it as going high and then low again, expanding and then reducing.
But yeah, unfolds are very... I can't remember what... we went back and forth via email.
And then it was about iterate.
And then I was saying one of the most common patterns
is actually like a transform after an iota,
like in both array languages
and even in the last team I work on, Rapids.
We use Thrust heavily, and they've got fancy iterators. They've got an iota iterator that's called counting_iterator, and then they've got a transform_iterator.
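For what it's worth, the "transform after iota" pattern spelled with Thrust's fancy iterators (a small sketch, assuming CUDA/nvcc compilation; the sum-of-squares example itself is made up):

```cpp
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/reduce.h>

// counting_iterator plays the role of iota and transform_iterator applies the
// function lazily, so the squared sequence is never materialized.
struct square {
    __host__ __device__ int operator()(int x) const { return x * x; }
};

int sum_of_squares(int n) {
    auto first = thrust::make_transform_iterator(thrust::make_counting_iterator(0), square{});
    auto last  = thrust::make_transform_iterator(thrust::make_counting_iterator(n), square{});
    return thrust::reduce(first, last, 0);   // 0^2 + 1^2 + ... + (n-1)^2
}
```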
And it really should be the other way around. Iota's a bad name.
I disagree. I love iota.
And Ben has a guinea pig named Iota.
Sadly, no more.
Oh, no.
I'm sorry to hear that.
Rest in peace, Iota.
Guinea pigs are great, but they only last... Somebody clip Conor saying,
Rest in peace, Iota, and put it on the internet
and use it against him.
No, no.
I still have Min Element.
Okay, is she... he... getting up in age?
She's... they're all shes, because you can't mix sexes with guinea pigs. It's bad times.
Can you fix a guinea pig? I don't think you can fix a guinea pig.
Oh, yeah, you can. Although... and it happens.
Usually it's best to leave them alone if you can,
but you can fix them,
and especially if they become prone to cancer later in life,
which guinea pigs don't nearly so much as something like rats.
Yeah.
But yeah.
But anyway.
I'm learning about category theory.
I'm learning about rodents.
This is awesome.
Oh, well, the cool fact about guinea pigs is that, like humans,
they can't make their own vitamin C, you know,
but unlike almost every other mammal you could name.
And so when scientists were trying to figure out what caused scurvy,
they – now, I don't know whether they noticed that guinea pigs also get scurvy or whether they just lucked out.
But anyway, they chose guinea pigs as their lab animals.
And that's the reason we call just generic lab animals guinea pigs now.
Interesting.
Wow.
Look at that.
Which guinea are guinea pigs named after?
Someone told me, I think it was Papua New Guinea.
I'm not sure.
All right.
Well, before we wrap things up.
See, okay.
So, finally, I feel like I shouldn't have said reduce earlier, because you all said reduce.
I should have said for each.
I should have said for each or transform, which is actually the most fundamental thing.
Like 99, or let's say 95%, of code out there is for_eaches.
Yeah, because they don't know the algorithms.
Maybe.
And again, only on linear sequences, right?
Only on something you can define a linear traversal over.
Well, let's just... sure, but just assume that...
No, because, like, a for_each, you can feed it a sufficiently fancy iterator that can do a non-linear traversal. Like, you combine it with, you know, some fancy iterator or range.
I think we're using different terms, different definitions of linear.
But I mean, what do you mean by linear
here? You just give it, you know... I mean, you can have a two-dimensional data structure, like a map, yeah, like a red-black tree, the implementation, but you can still define a linear iteration. I mean, you know, you can...
A flattened iteration.
I mean, something will start at the beginning,
and when it comes to the end, it will stop.
Okay, sure.
Something that will...
Without a nested hierarchy.
It can traverse, you know,
like you can have a pre-order, post-order, in-order traversal of a tree, right?
That's a linear traversal in the sense that it visits every element.
Yeah.
Yeah, I'm not saying like a for_each that necessarily visits it...
Well, it depends on what you define as the element, right? Instead of feeding it a sequence of every element in the data structure, you can feed it a sequence of just the elements that you selected, or things like that.
Sure.
I mean, yeah, the algorithm itself is a linear algorithm.
It happens to be working on an iterator that encapsulates the non-linearity.
Okay.
That's the, I acknowledge your point.
I acknowledge your point.
Well, no, wait, I was going to say, before we wrap things up, and then you interjected, Bryce. I said the last thing we're going to say is: is there anything, Tristan, Ben, you want to randomly mention, plug, or just say to the listeners?
Yeah, attend your local meetup. If you don't have a local meetup and for whatever reason you can't start one or find one, consider the Denver meetup, which is a hybrid meetup. And now, having said that, our very next one will not be hybrid, as it happens.
I was going to say, one of the best meetups out there, because... arguably the best, in the sense of... well, in whatever sense, in North America.
Yeah.
Many, many well-known speakers and good talks that I've seen in the past, including Ben, of course.
Tristan, you've got the local London meetup as well.
We have.
It's run by Phil Nash, which is excellent.
And anybody who's in London, it's not a hybrid,
but the talks are recorded and Phil puts them up on the YouTube channel afterwards.
Someone got in there before you, Bryce.
Sorry, there's just this other podcast host that I like better.
But Conor and I have this exciting, you know,
this exciting little bit of travel that we're going to do,
which I think will be a pretty unique experience.
Your road trip.
The pod on the road.
Yeah.
Will we crash?
It should probably be a lot more planned than it is currently planned.
No, it's all good.
We got the flights.
That's all that matters.
Yeah.
And you got the car rental.
Yeah.
Conor, you're going to be driving.
Is that right?
I'm going to be driving.
He's going to have the mic, but you're going to be driving and podcasting?
Oh, absolutely. We're going to at least record one or two podcasts while driving. We're definitely going to get pulled over by the Slovenian... no, no, that's not illegal. That can't be illegal. That's not why we're going to get pulled over. We're going to get pulled over for driving erratically.
Your tombstone is going to say: killed the environment.
So, yeah, you're going to fly back across the Atlantic for two days and then fly back to Europe?
Yeah, I want to be home to see my family, even if it's just for a couple of days.
Do you offset your carbon footprint somehow, Bryce?
Nope.
Question, which conferences are coming up for each of us? In particular
is there a chance Tristan
that you and I will be attending the same conference
this year at all?
On my radar I've
got C++ on Sea
and CppNorth, where
you will be delivering a keynote.
Thank you so much to Tristan and Ben for coming on the podcast to talk everything about algorithms.
We look forward to seeing them at a future conference.
Be sure to check the show notes in your podcast app or at ADSPthepodcast.com for links to anything that we mentioned in today's episode,
as well as a link to the GitHub discussion for any comments, thoughts, or questions that you
might have. Thanks for listening. We hope you enjoyed and have a great day.