Algorithms + Data Structures = Programs - Episode 229: multi_transform? for_each_but_last?
Episode Date: April 11, 2025
In this episode, Conor and Ben chat about a yet to be named algorithm, potentially multi_transform or for_each_but_last.
Link to Episode 229 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: Twitter | BlueSky | Mastodon
Ben Deane: Twitter | BlueSky
Show Notes
Date Generated: 2025-04-09
Date Released: 2025-04-11
Haskell init
Common Lisp butlast
ADSP Episode 36: std::transform vs std::for_each
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
If we only stick to random access, then we are leaving some algorithmic insight on the table
because we're not genericizing the algorithms beyond what's in front of us.
Welcome to ADSP the Podcast, episode 229, recorded on April 9th, 2025.
My name is Conor and today with my co-host Ben, we chat about an unnamed algorithm, potentially
multi-transform or for each but last and more.
How have you been?
Yeah, are we hopping into things, and then the topics will come up as they
come up?
Well, I have some questions.
I have some potential topics. You know, I made a short list of questions for
you a while ago, or topics that we could talk about, more so than questions. I think the questions have been answered
from our previous talks, but did you have something?
Not really anything lined up.
When you said you'll bring some short topics,
I figured we would just pick from those.
I mean, if we run out of stuff to talk about,
I do have some stuff we can talk about.
I mean, but I doubt that we will run out of stuff.
But anyways, yes, we can start with: how have you been?
I've been good. How about you?
Good. Yeah, pretty good. We had a fake spring here in Denver and then it turned cold
again. So I turned my sprinklers on and then I had to turn them off again last week. But they're
back on now. It's going to be warm again. Spring weather this week.
Wow. How long did the, um, the winter rematerialize for?
Well, it didn't get super cold. Um, it was about a week of like cloudy days and freezing nights. I mean, it got down to minus five, minus six Celsius. Wow. Which is not real cold,
but it's cold enough. It's cold enough, yeah.
You're not wearing shorts outside. There was one night.
I mean, it's below zero Celsius right now in Toronto,
but it was really nice for like a week or two. So it's kind of similar.
For one night, overnight, I guess it was blizzarding. It started at like 2 p.m.
There was no snow on the ground.
A couple days before it had been above zero Celsius.
And then started blizzarding, dropped to minus five,
but then popped up above freezing for the evening.
So then it just turned into this slush,
freezing cold nightmare.
And then the next day, by 3 or 4 p.m. in the afternoon,
it was 18 degrees Celsius, which, I don't know what that is in Fahrenheit, but it's like shorts
weather and t-shirt weather.
That's the 60s.
Yeah.
And all the snow had disappeared, and there were a bunch of parks that had just, like, flooded,
because, you know, you got all this precipitation. And then, anyways, it's just like one second
it's nice, the next second it's blizzarding, the next second it's freezing rain. Anyways, enough about weather.
Yeah, it's nice here in Denver today. It's like
sunny and 22 degrees or something, and that's Celsius. Yeah. Yeah, that is back to spring,
whether it will be another fake spring, who can tell.
Yeah, I mean fingers crossed.
We're recording this April 9th and this will be coming out over the rest of April probably.
Okay.
Hopefully this is going to be springtime, no more snow, but you know, time will tell.
Anyways. So I have been thinking recently about an algorithm that I actually asked you and Tristan
about a while back.
Yes.
Which is, well, there are a couple of algorithms, actually. I'm surprised we don't already have
them in the standard. And I've looked: they're not in Boost algorithms. They're not
in ASL, the Adobe Source Library.
Adobe's, yeah.
Yeah.
But one of the patterns I had noticed, because I work in embedded now, one of the patterns
I'd noticed in interfacing with hardware is that sometimes you have a range, you have
an array, let's say, of integers or integral types.
The first one of those you write to one register, then the body of them counts as data that
you write to a different register, and then the last one you do something else again.
Maybe writing back to the first register, maybe writing to a different register, but
there's this idea of like you're trisecting the data, you're doing an initial thing and
a final thing and in between you're doing a medial thing for all of the elements.
And so this would be nice to have an algorithm to do that and that was what I originally
asked you and Tristan about.
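As a rough illustration of the pattern being described here, the following is a minimal sketch of the kind of raw loop it replaces. It assumes a random access range, and the register-write functions are hypothetical stand-ins, not anything from Ben's actual codebase.

```cpp
#include <cstdint>
#include <cstdio>
#include <span>
#include <vector>

// Hypothetical stand-ins for hardware register writes.
void write_first_reg(std::uint32_t v) { std::printf("first reg <- %u\n", static_cast<unsigned>(v)); }
void write_data_reg (std::uint32_t v) { std::printf("data reg  <- %u\n", static_cast<unsigned>(v)); }
void write_last_reg (std::uint32_t v) { std::printf("last reg  <- %u\n", static_cast<unsigned>(v)); }

// The trisected pattern: an initial thing for the first element, a medial
// thing for the body, and a final thing for the last element.
void send(std::span<const std::uint32_t> words) {
    if (words.empty()) return;
    write_first_reg(words.front());               // initial
    for (std::size_t i = 1; i + 1 < words.size(); ++i)
        write_data_reg(words[i]);                 // medial
    if (words.size() > 1)
        write_last_reg(words.back());             // final
}

int main() {
    std::vector<std::uint32_t> payload{1, 2, 3, 4, 5};
    send(payload);
}
```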
It struck me that a building block algorithm for that, because it's quite easy, it's trivial
to take off the head of a sequence, right? But it's less trivial to do all but one or all
but n of the sequence, right? So you do something. So for everything in the sequence except the
last one, right? So you stop at the last one. And that's kind of the building block algorithm, an algorithm we might call for each but last or for each
but last n perhaps. And what brought this algorithm to
mind for me was that this is an algorithm in Common Lisp and it's called
butlast. But it's not an algorithm in C++ that I've ever seen anyone use.
When you say but last,
that's minus the transformation, right?
That's just the destructuring of.
Yeah, Common Lisp calls it butlast.
You can think of it as like for each but last.
Right.
Actually, actually in Common Lisp,
butlast just returns a list
minus the last element.
Right, right.
I mean, Haskell does have this as well.
Okay. There's four of them, right? There's head, tail, init, and last, I think. And so I think
init is the one that is the n minus one first elements. Okay, okay. But yeah, now you mention
it, that does ring a bell. There is, I'm sure, a way to do this in Haskell.
So the question in C++ is how we would do this. Would we do it?
It seems natural to have for each but last,
for each but last n.
These seem like natural interfaces, at least,
that we can think about.
Would it be appropriate to do this?
I mean, we could do this by like a reverse, a take, a reverse,
a drop and a reverse or something, you know, in ranges. Or we could just do the old style
to iterator algorithm, right? And it's sort of trivial for random access iterators.
It is less trivial for bidirectional iterators,
and it's less trivial yet for forward iterators.
Right.
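For the random access case, a minimal sketch of the views spelling alluded to here, using only standard C++20 views on a non-empty vector (this is not code from the episode):

```cpp
#include <iostream>
#include <iterator>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};   // assumed non-empty

    // Sized / random access: just take size() - 1 elements.
    for (int x : v | std::views::take(std::ssize(v) - 1))
        std::cout << x << ' ';
    std::cout << '\n';

    // The reverse / drop / reverse spelling also works, but it needs a
    // bidirectional range and reads somewhat backwards.
    for (int x : v | std::views::reverse
                   | std::views::drop(1)
                   | std::views::reverse)
        std::cout << x << ' ';
    std::cout << '\n';
}
```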
I mean, and so we're just talking
about the simplified version.
We're not talking about first, middle, and last.
We're just talking about.
No, I'm just talking about, for the moment, the but last algorithm.
Yeah, I mean, my first thought, which is odd because I didn't have this thought when you asked Tristan and me via email,
was: you asked us, like, what would you call this kind of algorithm, and I think, if my memory serves correctly, I said something like multi_transform.
Yeah that was for the.
The high level algorithm I think I was asking about, yeah, the three-pronged one.
Yeah. And I don't think that's a great name, but it does capture the essence of like, if you want to talk about the three partitioned one,
it is kind of a transform that's bundled with three different unary operations that's being applied to three different segments. And my thought that I had just now is,
is it useful to kind of genericize,
or like, I don't know, abstract it a level higher
and not necessarily have the first and the last,
but just like three different segments?
So it's like similar to the rotate API
that has three iterators.
You could give this one four iterators.
That is the begin, start of medial, start of final and end.
And that would suit your need.
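To make the four-iterator idea concrete, here is one hypothetical shape it could take, analogous to std::rotate's (first, middle, last) but with one more boundary. The name and the in-place transform are assumptions for illustration only, not an agreed design.

```cpp
#include <iostream>
#include <vector>

// Hypothetical sketch: apply op1 to [first, medial), op2 to [medial, final_),
// op3 to [final_, last). Transforms in place for brevity; a real design might
// write through an output iterator instead.
template <class It, class F1, class F2, class F3>
It multi_transform(It first, It medial, It final_, It last,
                   F1 op1, F2 op2, F3 op3) {
    for (; first != medial; ++first)   *first  = op1(*first);
    for (; medial != final_; ++medial) *medial = op2(*medial);
    for (; final_ != last; ++final_)   *final_ = op3(*final_);
    return last;
}

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};
    multi_transform(v.begin(), v.begin() + 1, v.end() - 1, v.end(),
                    [](int x) { return x + 100; },   // initial
                    [](int x) { return x * 2; },     // medial
                    [](int x) { return -x; });       // final
    for (int x : v) std::cout << x << ' ';           // prints: 101 4 6 8 -5
    std::cout << '\n';
}
```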
And then also it would, the reason that I had this thought now, and I'm not sure why
I had it earlier, maybe it was the way the problem was phrased, I don't know, sometimes your
brain has different thoughts, is that multi-transform then becomes a better name,
because it's not specifically this pattern where
it's like the first, last, and then the middle chunk.
There's no real, like what are you
going to call that, tail, middle, head transform?
Like that's a terrible name, right?
But that closely matches the semantics
of what you're transforming.
Whereas multi-transform, if you have like variable length segments, it then matches
that.
But then my question to you is, because you're the one that actually has the use case, is
that, does that make the usefulness of it just like more confusing?
Like, really, all you want is that first, last, and middle one, and you'd never have
a use case for the variable length segments ones.
Well, there are a couple of things to say.
I think yes, the use case I have is the...
I've been calling them initial, medial, and final to try to find a better name because
first and last tend to be used by iterators and begin and end also mean other things.
So at the moment, I've settled on initial, medial, final as the designators.
But yes, the use case I have is first one, last one and medial ones.
And the use case I have right now involves only arrays, so random access.
So it's fairly simple. If it becomes multi_transform, the question sort of becomes,
how do you compute those inner iterators?
Because ideally, you want to be able to express,
there is definitely an expression there of,
I want to do something for everything
but the last, or this idea of initial, medial and final, right?
Without knowing how long the range is, without caring whether it's a
bidirectional range or a random access range or even a forward range.
The caller wants to specify;
the caller doesn't want to compute the intermediate iterators, is what I'm saying.
Because if you can do that, then you can just trivially split it into three ranges and call transform three
times. Right? The idea of this algorithm is that you can say, I want to do
the first n this way, the last n another way, and the intermediate ones a third way.
But like, I don't want to have to compute,
the computation of that iterator,
which is the start of the last n,
and n being one in my case, right?
I don't want to have to do that outside the algorithm.
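A hypothetical call site for what is being asked for here might look like the following. The name initial_medial_final and its count parameters are invented purely to illustrate the caller specifying counts rather than computing iterators, and this sketch only handles the easy random access case.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical interface: the caller says "first n_init elements one way, last
// n_fin another way, everything in between a third way"; the algorithm computes
// the boundaries. Precondition: n_init + n_fin <= r.size(). Random access only;
// the interesting design question in the episode is the forward-range case.
template <class R, class F1, class F2, class F3>
void initial_medial_final(R& r, std::size_t n_init, std::size_t n_fin,
                          F1 init_op, F2 medial_op, F3 final_op) {
    auto first  = r.begin();
    auto medial = first + static_cast<std::ptrdiff_t>(n_init);
    auto final_ = r.end() - static_cast<std::ptrdiff_t>(n_fin);
    for (; first != medial; ++first)    init_op(*first);
    for (; medial != final_; ++medial)  medial_op(*medial);
    for (; final_ != r.end(); ++final_) final_op(*final_);
}

int main() {
    std::vector<int> frame{0xA0, 1, 2, 3, 0x0F};
    initial_medial_final(frame, 1, 1,
        [](int x) { std::cout << "header " << x << '\n'; },
        [](int x) { std::cout << "data   " << x << '\n'; },
        [](int x) { std::cout << "footer " << x << '\n'; });
}
```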
Wait, so I got a little bit confused there
because when you said,
if you are able to compute the iterators,
you can just call three separate transforms.
I thought this was effectively an algorithm that you can just call once instead of doing that. So
there's some difference here. And then you said you don't know the end, so
like, you want to be able to possibly use this when you don't know
the length of your array or something like that? Well, yeah, I mean, these are
situations we can think about, like if you had an iterator and a sentinel and it
was only a forward range, right? Right.
These are potential use cases in the general case.
And then in that situation, my multi-transform with variable length segments isn't possible
because you can't like know elements from the end.
It's only in the case where you're at the sentinel now so you now know.
Interesting.
OK, so that's slightly different than what I was thinking about.
Yeah. Multi-transform. If you can easily compute the start and end iterators for
each transform, then, effectively, you just call transform three times.
But this is for the case where those are not easily computable. But you know
how many you want to do at the beginning and how many you want to do at the end.
Wait, so can you do it when you know how many you want to do at the end where n is greater
than one? Is that possible?
It's possible. At the moment, I'm still exploring this kind of design space and thinking about
how we might do it, right?
You can go backwards or you can see you know, how many steps it would take you to reach the end, you know, right?
Right. I'm thinking of just like single pass.
You apply your first op, then your second op, then your third op.
Which now doesn't work in this, like, you'd have to determine where the end is, and then, yeah,
I'm potentially like do the start,
do it reverse from the back,
and then go do the middle section last.
You don't need to do it that way,
but that could be a potential.
Potentially.
In my use case, there is an ordering that the,
yeah, you have to do initial, then medial, then final.
But so it's not three independent transforms.
I see, yeah. But it's something like find_last. You know, we have find_last now in C++, and I believe it was Zach who worked a bunch on that.
And it basically involves iterating to the end, because only by iterating to the end can you tell that you've found the last thing, right?
And there's something similar going on here in this but last algorithm, or the initial, medial,
final, you know, higher level version, which is: even in a forward range, we
need to iterate to the end in order to know how far away from the end we are
for that last set.
Yeah, this is interesting.
There is more to it than meets the eye at first glance.
For sure.
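One way to see the "you have to walk to the end to find the end" point is a for_each_but_last that assumes only forward iteration and uses a one-element lookahead. This is just a sketch, not Ben's implementation.

```cpp
#include <forward_list>
#include <iostream>
#include <iterator>

// Call f on every element except the last, requiring only forward iteration.
// We can't subtract from the end, so we look one element ahead; we only know
// *first is the last element once the lookahead reaches last. The iterator to
// the last element is returned so the caller can do the final thing with it.
template <class It, class F>
It for_each_but_last(It first, It last, F f) {
    if (first == last) return first;
    auto next = std::next(first);
    for (; next != last; ++first, ++next)
        f(*first);
    return first;
}

int main() {
    std::forward_list<int> xs{1, 2, 3, 4};   // forward iteration only
    auto last_elem = for_each_but_last(xs.begin(), xs.end(),
                                       [](int x) { std::cout << x << ' '; });
    std::cout << "| last: " << *last_elem << '\n';   // prints: 1 2 3 | last: 4
}
```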
Yeah, I totally misunderstood the subtlety of what you were asking for in the initial
email.
And I think I simplified the problem into: I just want a nicer way to spell three transforms,
which is not what you're looking for. Right. If you can just spell those, spell those, is what you're saying. Yeah.
And in my current use case, I can just spell them, because I'm only using random access. Exactly.
And it's fine. And this algorithm is, you know, even then I would like to have the abstraction
power of an algorithm, right? So I might like to say do initial, medial, final, but it's not terribly difficult to
write that algorithm with random access iterators, right?
At the moment, the code is full of raw loops which implement exactly that algorithm, so
it's just a case of abstracting that a little bit.
It's interesting too because I've been spending a lot more time thinking about iterators, because I finished,
or we finished I should say,
because we were doing it as the virtual meetup,
the From Mathematics to Generic Programming book.
And I think it's chapter 10,
called Fundamental Programming Concepts,
where he talks about iterator category tag dispatch.
And I had never, you know, I mean, you know that that's a thing
if you program, well, maybe, maybe not everybody knows that's a thing, but most folks that
have spent any time studying C++ know that that's a thing, but it wasn't really until
our conversation with Sean Parent, and I should also say, for our new ADSP listener,
Zach is a previous guest, Zach Laine, who's a member of the ISO committee and has worked
on algorithms and, does he work on, I think he works on Unicode.
Boost.Text. I think he recently worked a bunch on Unicode with Boost.Text.
Yeah. And yes, we had our couple of podcast recordings with Sean back in October, where we were
talking about the different rotate algorithms, the three different rotate algorithms, link
in the description. We're not really having that conversation.
But I read somewhere that Stepanov,
I think it might've been in this article
that he had with some online website
after from mathematics to generic programming,
or maybe it was EOP.
And he said that he had over-emphasized the importance
of like the bidirectional and forward iterators that typically are mapped to the linked list
and doubly linked lists and things like that.
And I've been kind of thinking about,
how does this fit into an array language?
Because really, I think that array languages,
for the most part, we just assume random access.
It's in the name, I suppose.
Yeah.
Yeah, yeah.
And that makes me wonder, what is the importance of this
kind of iterator abstraction?
And there's also some really interesting comments.
I'm not sure if you picked up on this or if you recall from having
gone through all the lectures
back in the day.
But he has, like, beef, basically, with Andrei Alexandrescu,
because I think he gave a talk back in 2009 called
Iterators Must Go.
And then he said that he completely disagreed with that.
It's this idea of right now you have an algorithm that
kind of works for random access.
And it's like, how important is it that for C++, I guess,
that we have this ability to abstract over different
iterator types?
And what do you lose in an array language where you just
assume random access for everything, basically? Well, I think it's really important. It's
not necessarily important, you know, ultimately from an efficiency point of
view, because most of the time we do want to use array, we do want to use vector, we
want to, at least on desktop, we want to use the cache efficiently, right?
Now, most of the processors in the world don't have caches,
but most of the ones we talk about at conferences do.
But it's important to go through this exercise beyond random access
iterators, I think, to gain insight into what
the algorithm's true nature is.
You know, and Stepanov's original idea, I think, was this forward, bidirectional, and random,
right?
And then he realized that input iterators exist, right?
So in other words, there's a class of forward iterators where you can only traverse the
sequence once.
And so we have basically the three categories
plus the input iterators, which are a bit strange in some ways,
but it's important to genericize the algorithms, right?
If we only stick to random access,
then we are leaving some algorithmic insight on the table
because we're not genericizing the algorithms
beyond what's in front of us.
There's also the fact of course that we can learn from not just C++ or array languages,
but you know the original Lisp, right? Lisp has lists and they are essentially forward iteration. Yeah, that makes sense
It leaves some algorithmic insight on the table
Yeah, it's a shame that Stepanov is kind of retired, and he retired a long time ago. But I would love to get his thoughts on, like, the state of the world today,
including, like, Tristan's work, right? Like, Tristan's given a number of talks, yeah, Iteration Revisited, and still all of the APIs we have today,
you know, because they live in C++, provide APIs that, you know, give you begin and end, and I think there's some story to
that.
Eric Niebler, you know, a former guest as well on the podcast,
behind range-v3 and what's in the standard, I think at one point was kind of moving away from iterators,
but then he talked to Sean and Sean basically, like,
convinced him that, like, you still need this.
I thought you were gonna say like Sean hit him
upside the head and made him realize the error of his ways.
Well, I don't know, maybe that is the story.
We'll have to get Sean back on
and maybe we'll get Eric on at the same time.
And we can, also, maybe this story is apocryphal.
I don't know who I, I feel like I heard it from Sean.
But anyways, it's just because I went through FM2GP,
and now have been thinking about this a lot more.
And also, how does this apply to languages like Rust?
Rust has traits, which is superficially different
in many ways, but at some level
the same thing as concepts now.
And I assume, actually, I haven't looked
at the standard implementations of C++20 algorithms and on,
but I assume that those don't use iterator category
tag dispatch anymore.
They now use, that's the whole pitch
that Stepanov is making is that, you know,
we don't have a way to do this kind of dispatch,
but like luckily I can just create
like a version of these functions
where the last parameter is a tag
and I can basically dispatch on that.
And like behind the scenes, that's what's happening.
But really I would like a language feature
that would give me the ability to do that
and concepts.
Oh, so you mean, yeah.
So tag dispatch, in this view, was a hack to achieve what concepts can achieve better.
Exactly.
And he said as much and then went on to say that like, it's a good hack because at least
the language lets me do it.
There are many languages where I have to do this at runtime.
Whereas at least in C++, this all melts away. Right. Sure, it maybe adds a bit of compile time, but at least
from a runtime efficiency point of view, I'm not losing anything here. But in certain
languages, not only do I not have a language facility, but I don't have anything that gives
me the ability to do compile time dispatch based on the, quote-unquote, right iterator type.
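For anyone who hasn't seen the two techniques side by side, here is a rough sketch using a distance-style function as a stand-in. It is only an illustration of the idea, not how any particular standard library actually implements anything.

```cpp
#include <iterator>
#include <list>
#include <vector>

// The classic hack: dispatch on iterator_category via an extra tag parameter.
template <class It>
auto my_distance_impl(It first, It last, std::random_access_iterator_tag) {
    return last - first;                        // O(1)
}
template <class It>
auto my_distance_impl(It first, It last, std::input_iterator_tag) {
    typename std::iterator_traits<It>::difference_type n = 0;
    for (; first != last; ++first) ++n;         // O(n)
    return n;
}
template <class It>
auto my_distance_tagged(It first, It last) {
    return my_distance_impl(first, last,
        typename std::iterator_traits<It>::iterator_category{});
}

// The C++20 way: constrain the template and branch on concepts instead of tags.
template <std::forward_iterator It>
auto my_distance(It first, It last) {
    if constexpr (std::random_access_iterator<It>) {
        return last - first;                    // O(1)
    } else {
        std::iter_difference_t<It> n = 0;
        for (; first != last; ++first) ++n;     // O(n)
        return n;
    }
}

int main() {
    std::vector<int> v(5);
    std::list<int>   l(5);
    // 5 via the O(1) path, 5 via the O(n) path.
    return (my_distance_tagged(v.begin(), v.end()) == 5 &&
            my_distance(l.begin(), l.end()) == 5) ? 0 : 1;
}
```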
Well, yes. First of all, I agree, but I wouldn't be, I haven't looked at the
implementations in the MS STL or any of the other standard libraries, but I wouldn't
be too quick to assume that they're written in terms of concepts. I would
imagine that they, you know, are still using
a tag dispatch here and there maybe. Things did change a bit, I think, from 17 to 20 in
terms of the way iterators are specified, certainly. But at the end of the day, if you
have one compile time way to choose versus another,
then I don't know that you'd go rewriting everything
in terms of the quote unquote better way
if the codegen was identical.
You know, it's sort of,
there's enough work to do besides that.
Well, I mean,
presumably they're writing that stuff from scratch, right?
Everything in the std::ranges namespace.
Oh, in std::ranges? Yeah, well, maybe.
Well, when I say, I should clarify, when I say ranges, I mean the
C++20 range versions of, yeah, the so-called constrained algorithms. Yes, yes.
Okay, so I would expect, I would expect that those do use the concepts, yeah.
But I don't know how much synergy there is between those algorithms and the existing iterator algorithms.
There might be some underlying common things that both use.
Yeah, I'll have to go look. Stay tuned, listener.
If I remember to, which I probably
will when I'm editing this, I will go and take
a look at the C++.
Or you know what?
I'm sure there's at least one library implementer,
either working on GCC, MSVC, or I mean,
there's also folks at NVIDIA.
And I know David, who works on the NVC++ compiler.
I know he listens, or at least at one point
was listening to this podcast.
He probably has the answer as well.
So feel free to on social media
or privately DM me with the answer.
And if not, if I don't get a DM
or a public tweet or post, I will go look myself.
Anyways, we should get back to the multi-transform.
So what is, is your current solution that you just have random access and you're working
with arrays, so you have a solution for your specific, kind of, narrow case?
Yeah, I coded something up.
Well, I've been thinking about this a while and it's not been a high priority in my day
job so I haven't really done much about it, but I coded something up last night.
And so the interesting, another interesting
thing that's in the mix here is that I already have a variadic version of for each and for each n,
right? So what that means is that basically it's like zipping together multiple ranges with a
for each. And so for for each, the signature looks like taking an iterator pair and then an n-ary function and then the remaining begin iterators.
Really? I was going to assume that.
So for each in the standard takes iterator pair and function, right? Unary function.
The interesting part is what it returns.
It returns the function that you give it or the function object, right?
Because you could pass it a mutable lambda, or something similar, that has state.
And so it returns the lambda you give it or the function object that you've given it.
Okay, actually, I've got a really stupid question that I actually, it's okay that I have forgotten
how for each works because I hate for each. And I mean, I think Bryce and I had that whole
argument of transform versus for each and the naming at one point where for 30 or 40 minutes,
we just refused to agree on the naming of something.
Just because you said that your for each implementation takes an n-ary object.
Does for each, like the one in the standard, take a unary function?
Yeah, yeah.
Does it? So what's the difference with for each, then?
I thought, in my head, when you were saying that, it just takes the iterators and then you apply some operation in the body.
Oh, no, it has to, because it's an algorithm. Yeah, the idea of for each is that you're just doing something for side effects.
Right. So you're doing IO or something. That's the kind of primary use case. You're not really transforming.
I mean, you could view it as a transform, except that the output iterator would be, right, something that does IO that you write to, right?
It doesn't write back to the iterator.
Right. So there's no output iterator.
Okay. Okay. We're back on the same page.
And so you're saying that your API is that you put the begin and end pair of your first range,
then the n-ary object, then the...
Then the n minus 1 begin iterators in a pack.
And for the underscore n version, where does the integer go?
The size goes in place of the last iterator.
Right. The end iterator.
Right. Right.
And this is an unfortunate API, in my opinion, unfortunate because with variadics you have to always put the variadic stuff at the end.
So a lot of the times...
Yeah, it's kind of constrained by what I can do. Yeah, okay, so that's the API, but nevertheless it's useful.
It's also unfortunate because we don't get to provide the
end iterators for every range.
So it's assumed that all the ranges at least cover that first range, or are at least as long as that.
Because if they're not, you're in undefined behavior territory. You're running off the end of a range. Yeah. Yeah, that makes sense.
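As a rough illustration of the call shape described here, an iterator pair, an n-ary function, then the remaining begin iterators, which are assumed to cover the first range, something like the following. The name and all the details are assumptions for illustration, not Ben's actual code.

```cpp
#include <iostream>
#include <vector>

// Sketch of a variadic for_each: iterator pair for the first range, an n-ary
// function, then the remaining begin iterators. The trailing ranges must be at
// least as long as [first, last); there is no place for their end iterators.
template <class It, class F, class... Its>
F for_each_multi(It first, It last, F f, Its... firsts) {
    for (; first != last; ++first, ((void)++firsts, ...))
        f(*first, *firsts...);
    return f;   // like std::for_each, hand back the (possibly stateful) callable
}

int main() {
    std::vector<int>    a{1, 2, 3};
    std::vector<double> b{0.5, 1.5, 2.5, 99.0};   // longer is fine; 99.0 unused
    for_each_multi(a.begin(), a.end(),
                   [](int x, double y) { std::cout << x * y << ' '; },
                   b.begin());
    std::cout << '\n';   // prints: 0.5 3 7.5
}
```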
Okay, so I've totally lost track. When you started talking about this, and then I got confused by for each,
were you saying that you use one of these two?
Okay, so I already have
variadic for each and variadic for each n. Okay, now the interesting part about those is what they return.
For each in the standard, like I said, returns the function object you give it because you
might give it something stateful which evolves over the course of the for each.
And so it returns that.
It does not, however, and the standard for each you just give it the
iterator pair right, but whenever you're talking about the n versions like for each n or transform
n or whatever it might be where they take iterator and size then you're entering the realm of well if
it's an forward iterator or even an input iterator, then you want to return or even perhaps a
bidirectional iterator, right? If it's not a random x iterator, you probably want to
return the iterator as it is after you've stepped it by n, right? Because that would
take work for the caller to recover. And in some cases would be impossible for the recall to recover in the input iterate case. So you enter the world of returning a tuple of things.
You're returning multiple things.
You're returning, for the variadic case,
you're returning n input iterators and the function
object, which itself has potentially evolved.
OK, that makes sense.
Right?
The law of useful return sort of dictates this.
These things cannot be recovered by the caller
unless you return them.
Right.
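This is essentially what the C++20 constrained algorithms already do for the single-range case: std::ranges::for_each and std::ranges::for_each_n hand back both the advanced iterator and the function object. A minimal sketch of that shape, with an invented name:

```cpp
#include <iostream>
#include <iterator>
#include <sstream>

// A for_each_n that obeys the law of useful return: with a non-random-access
// (here, input) iterator the caller can't cheaply recover where we stopped,
// so return both the advanced iterator and the (possibly stateful) callable.
template <class It, class F>
struct for_each_n_result { It in; F fun; };

template <class It, class F>
for_each_n_result<It, F> my_for_each_n(It first, std::iter_difference_t<It> n, F f) {
    for (; n > 0; --n, ++first)
        f(*first);
    return {std::move(first), std::move(f)};
}

int main() {
    std::istringstream stream("10 20 30 40 50");
    std::istream_iterator<int> it(stream);      // single-pass input iterator
    auto [pos, fn] = my_for_each_n(it, 3, [sum = 0](int x) mutable {
        sum += x;
        std::cout << "running sum: " << sum << '\n';
    });
    // Because both were returned, we can pick up exactly where we left off.
    my_for_each_n(pos, 2, std::move(fn));       // continues: 100, then 150
}
```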
So, you know, carrying that forward,
I'm thinking about the for each but last algorithm, right,
as the building block algorithm
for the initial, medial, final. And I'm thinking about it in terms of variadic, right, because we can do for
each but last over n ranges with an n-ary function object. And we would stop
at the last, well, we'd stop at the last element of the first range and the
range, the variadic ranges would be assumed to at least go up to that point.
Right. So you technically wouldn't, you don't need to have all your ranges be equal lengths,
technically. They just need to be, at minimum...
Yeah, they need to cover the space.
Right.
Right. And then the return values have the same kind of constraints. If we're
looking at this as an algorithm that can work on forward iterators or even input iterators,
you need to return the iterators in the state they've evolved to. Right. In a tuple once
again. Yeah. And likewise the function object, maybe.
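Putting those pieces together, one possible shape for the variadic for_each_but_last, purely a sketch and not Ben's actual design:

```cpp
#include <forward_list>
#include <iostream>
#include <iterator>
#include <tuple>
#include <vector>

// Apply the n-ary f at every position except the last of the first range; the
// trailing ranges are assumed to cover it. Only forward iteration is required,
// and everything the caller couldn't otherwise recover is returned: the
// advanced iterators and the callable, bundled in a tuple.
template <class It, class F, class... Its>
auto for_each_but_last(It first, It last, F f, Its... firsts) {
    if (first != last) {
        for (auto next = std::next(first); next != last;
             ++next, ++first, ((void)++firsts, ...))
            f(*first, *firsts...);
    }
    return std::tuple{std::move(first), std::move(firsts)..., std::move(f)};
}

int main() {
    std::forward_list<int> a{1, 2, 3, 4};
    std::vector<char>      b{'w', 'x', 'y', 'z'};
    auto result = for_each_but_last(a.begin(), a.end(),
        [](int x, char c) { std::cout << c << x << ' '; },   // w1 x2 y3
        b.begin());
    std::cout << "| last of a: " << *std::get<0>(result) << '\n';   // 4
}
```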
And all of this is not really considering, so unlike your normal conversation with Bryce,
we are not considering any parallelism here.
We're thinking about this as an algorithm which has to proceed linearly, right?
Which indeed for each in the standard is. It has a parallel version, but the parallel version, I think, returns void, right? There's a difference there.
Yeah, I honestly had no idea that for each returned anything, because, like I said,
I cannot remember the last time, if ever. I mean, I don't check C++ code into
production, but back when I did, I don't know if I ever used a for each, because I was, like, philosophically opposed to it.
And that's a fine argument, right?
Again, for each is like the lowest common denominator.
It's kind of like, you can do everything with for each.
Just like you can do everything with the raw loop.
Right.
But, and most of the time,
a lot of the time if you're transforming data, of course, you want to
use transform or accumulate or reduce or whatever. There's usually some better way of expressing what
you want to do. But sometimes you do really want to do for each and it is the best way to express
that you just want to print all these things out or
whatever it might be.
Typically some kind of IO, and in my world of embedded, 90% of everything is IO.
So yeah, I guess back when I was checking C++ into production, I was working on like
the core calculation actuarial stuff.
And the UI all the time was basically
a spreadsheet that the systems engineers,
they were doing all the scaffolding there.
So really, I had to put stuff in some multi-dimensional array
of results.
And then it showed up on the screen.
I didn't need to worry about that.
So I didn't really have like a use case
where I needed to do some side effectful thing. And if I was actually doing that in a calculation,
odds are that I'm not thinking about it hard enough, right? Because one of my favorite
quotes is, like, imperative shell, functional core. And I was always working in the functional
core part. So odds are, you know, I shouldn't need to reach for for each in those situations. But in your case, where, you know, you are the person doing that system
scaffolding to write to somewhere, you know, that is the imperative shell
part, where you're going to be reaching for that all the time. Right. Interesting. Well, I
mean, you've now got a captive audience. So, I mean, it used to just be Tristan Brindle, it used to be Tristan and I,
but if folks out on the interwebs have thoughts and comments,
feel free to reach out to Ben
and maybe we'll come up with a better name
for each but last.
Comment on the show notes on GitHub.
Yeah, what is a butlast?
So there's init, there's butlast.
What does init, what's the mnemonic behind init?
So N-I-T, or K-N-I-T?
I-N-I-T. Oh, init, sorry, yeah, init. So I think it's short for, like, initial.
But that's just, right. Yes. Yeah. Yeah, yeah, that makes sense. I wonder, I know that, um,
Pharaoh, although I probably,
I'm not sure also, cause you don't spend much time on the socials anymore.
Or maybe this can be um,
part two now. We're now on episode
230. Be sure to check these
show notes either in your podcast app or at
ADSPthePodcast.com for links to anything
we mentioned in today's episode as well as
a link to a GitHub discussion where you can leave thoughts, comments, and questions.
Thanks for listening, we hope you enjoyed and have a great day. I am the anti-Bryce.