Algorithms + Data Structures = Programs - Episode 72: C++ Algorithm Family Feud!
Episode Date: April 8, 2022
In this episode, Bryce and Conor play C++ Algorithm Family Feud!
Twitter
ADSP: The Podcast
Conor Hoekstra
Bryce Adelstein Lelbach
Show Notes
Date Recorded: 2022-03-27
Date Released: 2022-04-08
C++ std::sort
C++ std::nth_element
C++ std::reduce
C++ std::numeric_limits
ADSP Episode 25: The Lost Reduction
C++ std::partition
C++ std::minmax_element
C++ std::transform_reduce
C++20 std::views::transform
C++ thrust::transform_iterator
C++ std::partial_sort
C++ std::accumulate
C++ std::atomic
Eric Niebler’s Tweet
TLBH.IT Podcast
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
Okay, you can't call them a rival podcast because they put out like two episodes.
It's juicy content here.
Bryce, is there anything else you'd like to say about JF while we're on the topic?
I'm sure TLBH.IT is very good.
They have produced four episodes.
It's not nice to dunk this hard.
Welcome to ADSP: The Podcast, episode 72, recorded on March 27, 2022. My name is Conor, and today, with my co-host Bryce, we conduct a live C++ interview, Family Feud style. Are you ready?
No, I am not ready.
All right. Episode 72 has begun.
And on today's episode of ADSP: The Podcast, Bryce fails Conor's interview question. Bryce is going to answer a C++ interview question
that I legitimately, we could call this the NVIDIA,
the unofficial NVIDIA C++ interview question
because it's only ever been asked once by me
and it might have been a bad question,
but hey, clickbait, here we go.
We should clarify, because I just sent an email about this last week asking the teams that I work with, you know, asking them to not do whiteboard questions anymore. We will record a separate episode to follow this one up, which is a conversation about the fact that we probably shouldn't do whiteboard questions anymore. So that's if you're thinking, oh, man.
But in this particular case, when you use this interview question,
it was for somebody that was a new college grad or intern,
and you sort of gave them the option of do you want to just talk about your resume
or do you want to do an interview question?
Well, no, false.
We talked about just C++ stuff,
and this candidate was extremely enthusiastic
and passionate. So we just talked about C++ stuff, you know, favorite algorithms,
projects they'd work on, resume stuff for the first 20, 30 minutes of the interview.
And then I switched to a technical question and then gave them the option to either just talk
about it. They had a whiteboard behind them. And I said, or you can whiteboard it if you want,
or we can open up a Godbolt
if you would prefer to have a compiler.
So I have zero preference.
I mean, I know I would want Godbolt
because I like having a compiler
telling me what I'm doing wrong.
But like it was up to the candidate what they wanted.
They chose for screen share and Godbolt.
But anyways, so just if you're thinking
these questions suck, we shouldn't ask them anymore.
We're going to talk about that in a future episode.
And my team doesn't.
I mean, if your team doesn't.
Conor's on a different team, does a different thing.
I mean, admittedly, this is my first co-op interview or intern interview.
And this is why I wasn't even sure if this was a good question anyways.
So we're going to do this family feud style.
Okay.
I'm going to ask the question and we're going to put the top five answers on the board.
And the five answers are five different algorithms that you can solve this problem with.
And we're going to put what I think is the best at the bottom, because, if you're going the Family Feud style, it's probably the one the fewest people are going to think of.
Come at me if you think that the order of these is wrong, but I'm going to consider like sort of the number one answer on the board is the most naive sort of easiest way to do it
algorithm. And then as you go down to number five, it'll be sort of the best way in my opinion. So
try and guess number one first and then go from there to the bottom, but we'll see what happens.
So here is the question. Given, and there's no corner cases, don't think the list is empty, blah, blah, blah. We're just going for the algorithms here. And then we'll talk about the
fifth one if we get to it, which we will, because if Bryce doesn't get it, I'll just tell you.
The question is, given a list of unique numbers, or it actually doesn't even need to be unique,
there can be duplicates, return the top two in whatever form you want, whatever: a pair, an array of length two, a vector of length two. And don't worry about corner cases and stuff like that, like I said. So there's five, there's five different algorithms that, at least off the top of my head, I could think of.
Okay, well, let's go with sort.
Ding, ding, ding, ding, ding! That's the top answer on the board. I think I'm speaking too loud, because I'm getting a little bit clippy on my end. The number one answer is sort, we've got zero strikes, and, uh, that's your one algorithm down, four to go. Let's see. Um, and, like, we could talk about, you know, what to do with each of the algorithms, but I'm going to assume that the top three, or maybe
even four, you can figure out
the rest, because I really want to talk about the fifth one.
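For reference, a minimal sketch of that number-one answer — assuming the input is a std::vector<int> with at least two elements, per the no-corner-cases ground rule; the function name is just illustrative:

    #include <algorithm>
    #include <functional>
    #include <utility>
    #include <vector>

    // Answer #1: sort the whole thing descending and read off the first two.
    std::pair<int, int> top_two_sort(std::vector<int> v) {
        std::sort(v.begin(), v.end(), std::greater<>{});
        return {v[0], v[1]};
    }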
Bryce guessed
nth element.
Ding, ding, ding, ding, ding!
That is the third answer on the
board.
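A minimal sketch of that answer under the same assumptions — nth_element only has to get the first two positions right, not order everything after them:

    #include <algorithm>
    #include <functional>
    #include <utility>
    #include <vector>

    // Answer #3: partition so the two largest land in the first two slots,
    // without fully sorting the rest of the range.
    std::pair<int, int> top_two_nth_element(std::vector<int> v) {
        std::nth_element(v.begin(), v.begin() + 1, v.end(), std::greater<>{});
        return {v[0], v[1]};
    }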
I mean, obviously there's a
reduction that you can do with this.
Is that an algorithm?
Yeah.
There's an algorithm called std reduce.
Ding, ding, ding, ding, ding.
I mean, technically—
So, okay, let me explain why I go there in my mind. Because a parallel, you know, max element or min element is implemented through reduction.
So me being someone who thinks in parallel by default, of course, my mind jumps first to that.
And, you know, it's pretty easy to write a, well, okay, it's pretty straightforward to write
a reduction operation that will give you, you know, the maximum in the input sequence.
But it's also like, it's a pretty simple straight extension of that to just make it give you the two maximum, the two largest.
In parallel?
Yeah.
Well, okay.
Well, I mean, technically std reduce is not on the board.
Well, you're, you're, I'm sorry, but you're wrong about that.
But let's, let's assume there is a parallel algorithm on the board, but it's not std reduce.
But you're telling me you can do it with std reduce and with like a parallel X.
Why do you think that this can't be done with std reduce?
Let's come back to it.
Can you implement X element with std reduce?
Yes.
Then why could you not implement this with std reduce?
What is the return type?
What is the return type of your, that you're... A tuple of whatever your ints.
This is integers, right?
Perfect.
Perfect.
So like, how does that work?
What is the initial argument to the accumulator that you're passing into std reduce if you're returning a tuple?
A two-element tuple. Um, the initial... it's going to be, like, you know, the minimum value, so, right, like numeric_limits<whatever your type is>::min().
But so you're initializing both of the two elements in your tuple to be the minimum. And then how do you implement your lambda so that it works? Because the type of your list is just an integer, and you now need to be able to go from an integer to a tuple.
Um, yeah, so for each integer, you're going to, um, you know, do the max of that and the first element of the tuple, and you store that to the first element of the tuple.
What's the signature of the lambda?
The lambda is going to take, you know, tuple of int, int.
And so what's the problem with that now?
Technically, it also needs to take an int and a tuple of int, int,
and also a tuple of int, int and a tuple of int, int.
Okay, okay.
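Filling in the signatures being discussed, here is a rough sketch of that four-overload function object and how it might be handed to std::reduce. It's an illustration only: the names, the largest-first pair convention, and the use of numeric_limits<int>::min() as padding are assumptions, and corner cases are ignored as Conor stipulated.

    #include <limits>
    #include <numeric>
    #include <utility>
    #include <vector>

    using Top2 = std::pair<int, int>;  // invariant: first >= second
    constexpr int kLowest = std::numeric_limits<int>::min();

    // Fold one value into a running "top two" pair.
    inline Top2 push(Top2 acc, int x) {
        if (x > acc.first)  return {x, acc.first};   // new largest: shift down
        if (x > acc.second) return {acc.first, x};   // new runner-up
        return acc;
    }

    // The function object with the four overloads std::reduce needs,
    // because it may combine elements and partial results in any order.
    struct TopTwoOp {
        Top2 operator()(Top2 a, Top2 b) const { return push(push(a, b.first), b.second); }
        Top2 operator()(Top2 a, int  b) const { return push(a, b); }
        Top2 operator()(int  a, Top2 b) const { return push(b, a); }
        Top2 operator()(int  a, int  b) const { return push(Top2{a, kLowest}, b); }
    };

    std::pair<int, int> top_two_reduce(const std::vector<int>& v) {
        return std::reduce(v.begin(), v.end(), Top2{kLowest, kLowest}, TopTwoOp{});
    }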
So what Bryce has just...
We've done like three episodes on this. I don't feel that I need to...
So we will now put a sixth answer on the board, and std reduce then is... well, actually, we'll have a discussion, but I'll put std reduce at number five on the board.
I'm certain that it's going to be the fastest way to solve this.
We'll see, actually.
I'm curious because you'll probably have a better idea than I will.
But so the key here is that what Bryce has just explained is that it's not going to be a lambda.
It's going to be a function object with overloads for a tuple
of two ints and an int, two tuples, vice versa. But I think we've discussed in the past that
there's a missing reduction that we should add and that if we had that, we would not need to
have all of this combination of overloads. But you said std reduce though. So if you said
associative reduce, then that would have been okay.
Fine, the associative reduce. Fine.
All right, so that is definitely... but then associative reduce is definitely not faster than the number six answer on the board. So, to recap, number one is sort, number three... uh, no, actually, yeah, you might be right. You might be right. But I know that the sixth algorithm doesn't need to be associative, so it is, like, the purest of the parallel algorithms. But to recap: number one, sort; number three, nth element; number five, std reduce, or associative reduce. Can you guess number two and four before we go to six? And number two and four, to give you a hint, are not parallel. I mean, I guess
you could have parallel versions,
but they're in just the regular C++11 algorithm header.
Can you think of the one that sits between sort and nth element?
The one that sits between?
Well, I mean, partition?
Is partition on there?
Partition won't work, because partition takes a unary predicate, so you can't really construct a unary predicate that'll return you the top two, unless you go ahead and do a reduction in the first place, but then that sort of defeats the purpose. It's a type of sort.
Uh, well, there's partial sort, right?
Ding, ding, ding, ding, ding! So, like he said, sorting... and I didn't realize that you were, like, asking for all the specific algorithm calls. I assumed that when I said sort, I had that covered. Whatever.
All right, partial sort, number two on the board. And I'll give you number four, because you basically already said it: it's std::accumulate, which is just the sequential, serial version.
Yeah.
All right, so can you think of...
But there's, um, there's, like, other algorithms here that you could do, um... is there?
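A minimal sketch of answer number two, again assuming a std::vector<int> with at least two elements and an illustrative function name — partial_sort only sorts the first two positions into place:

    #include <algorithm>
    #include <functional>
    #include <utility>
    #include <vector>

    // Answer #2: sort just the first two positions, leave the rest unordered.
    std::pair<int, int> top_two_partial_sort(std::vector<int> v) {
        std::partial_sort(v.begin(), v.begin() + 2, v.end(), std::greater<>{});
        return {v[0], v[1]};
    }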
Well, so, you know, it's interesting that you said like, okay, so we already have an
algorithm that's very similar to this.
I know.
Which is the max element.
Well, so actually, I skipped this, but that was the first question that I asked in this
interview.
My first question, and actually, so this question is the... So is it a list of, like, is it positive integers?
It can be any type of integers.
If it's whatever the integer is, unsigned, signed, it works well.
But this is very similar, like, the algorithm that you're asking for is very similar to min-max element.
And that min-max element is a single pass version that tracks two different extremas.
That is true.
But the implementation of min-max element, like the details in there are non-trivial.
So like bending a reduction to work like that, it's not like a trivial thing.
What do you mean they're non-trivial?
And wait, so, like, let's just pause, though, because our
listeners, our listener could be a bit confused because I said that it's actually the first
question I asked. So we did the 20 minutes of talking about, you know, resume, random stuff,
passion for C++. Then my actual question was return from a list of numbers, the smallest and largest. And in my head, I had a
backup question because if the candidate said min max element, like that's a hundred percent,
we don't even need to... like, if you know the algorithm, you're done the question. Which is, to answer some people who asked about this question on LeetCode or Reddit, you know, do interviewers care about knowing the algorithms, or are they going to ask you to implement them? It's no, like,
I'm not going to ask you to implement an algorithm that already exists. Because if you're on the day
job, like you're not going to have to program min max element, like, it's a good exercise to
know how to do it. But I would rather have you implement an algorithm that you might actually
end up having to write on the day job, not one that you know, you won't have to write.
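That warm-up question really is answered by naming the algorithm — a minimal sketch, assuming a non-empty std::vector<int> and an illustrative function name:

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Smallest and largest in a single pass over the data.
    std::pair<int, int> smallest_and_largest(const std::vector<int>& v) {
        auto [lo, hi] = std::minmax_element(v.begin(), v.end());
        return {*lo, *hi};
    }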
So that being said, the second question I asked him is the one that we are talking about
right now.
And so, yeah, like going from std reduce to min max is non-trivial in my opinion.
And we've talked about the top five.
And so for a std reduce, in order to do it in parallel, you have to define a function
object with a bunch of overloads, which I think is like that's non-trivial.
And you'd have to do the same thing for min-max element.
But there is an algorithm that you can,
a parallel algorithm that you can use where
I actually don't know if it would be more efficient
than the std reduce with the overloads.
My guess is that it would be a tiny bit slower,
but I don't know.
I'd have to profile it.
I actually have no idea.
And we're going to talk about that.
I'm desperately Googling because I do not buy your claims about the complexity of implementing MinMax element.
It's pretty straightforward.
I mean, in Thrust, we just do it with tuples.
It's pretty straightforward, I gotta say.
I mean, but that's the thing is min-max element
is a lot simpler than top two.
Like when you do min-max,
you're literally just doing two reductions at the same time
that are orthogonal from each other.
Whereas top two, they're not orthogonal.
You have, like, six different cases that you need to check: is the current element greater than your biggest element? If yes, then, like, shift your first one down to the second one and replace the first one. If it's not, then you've got to check: is it greater than the second one? If so, replace it. And when you're doing it in parallel, it's a lot more complicated.
Um, I don't think it's a lot more complicated. I think that one can write this pretty neatly.
What's the neat way? Is this with reduce?
I'm not going to try to do it live now, but after we're done here, maybe I will. I agree that it's not, like, you know, one or two lines of code, but I still think it's pretty clean.
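In serial, that case analysis is essentially the body of the std::accumulate answer (number four) — a sketch, with the largest-first pair convention, the min() starting value, and the function name being illustrative:

    #include <limits>
    #include <numeric>
    #include <utility>
    #include <vector>

    // Answer #4: a strictly left-to-right fold, so the accumulator can stay a
    // pair and each new element a plain int -- no extra overloads required.
    std::pair<int, int> top_two_accumulate(const std::vector<int>& v) {
        using Top2 = std::pair<int, int>;
        const int lowest = std::numeric_limits<int>::min();
        return std::accumulate(v.begin(), v.end(), Top2{lowest, lowest},
            [](Top2 acc, int x) {
                if (x > acc.first)  return Top2{x, acc.first};  // new largest: shift down
                if (x > acc.second) return Top2{acc.first, x};  // new runner-up
                return acc;
            });
    }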
I don't think this is tricky.
Is it with a std reduce, though?
Yeah, with a std reduce.
Okay, with a std associative reduce.
So I agree.
Associative reduce, it's easy because you don't need to worry about commutativity,
and you can just have the tuple.
You can have your accumulator as a pair or a two-tuple and your second element as an integer. But with std reduce, it's like... with an existing... I mean, there is another algorithm that you can do it with, without creating a function object with a bunch of overloads. But with std reduce, I don't think it's trivial. And the algorithm that I think makes it a lot easier, but I'm not sure if it's faster than the std reduce with the function object with overloads, is transform reduce, where you transform each of your integers to a two-tuple or a pair where the first element...
This is nonsense. This is nonsense. What you're just describing is the way that you would implement this with reduce. I said I want to do this with reduce, and you said you want to do this with transform reduce. Well, me doing it with reduce, like, those are two equivalent things. Like, I'll just do it with reduce and just use, like, a...
Like, mine doesn't... mine does not require defining a function object with, like, the function call operator overloaded four different times.
Neither does mine.
Whatever your solution is, it boils down to some form of my solution where I just use views transform.
Like transform reduce is to some degree now almost redundant in light of the existence of ranges.
I mean, we still probably need it as a customization point.
Using std reduce with a views transform is the same as transform reduce.
Right, right.
I think we're in agreement.
You didn't say that, though.
You said that you needed operator overloading, which is non-trivial.
Yeah, I think if I was writing it down, I would have come to the conclusion that the easiest way to do that would have been with the reduction... don't give me a hard time here... well, with the transform. But, like, to me, writing that operator, having it like that, is as straightforward to me as having a transform operator in there.
Really? I'll admit that your way probably is, you know, probably fewer lines of code, but I'm not totally convinced by that.
I mean, I think the difference between a function object with four overloads and, uh, two lambdas in a transform reduce, or a views transform with a std::reduce... I think the latter is way more ergonomic to write.
I think you and I were saying the same thing. Yeah, like, the way that Thrust implements it is with a call to reduce with a special function object that operates on, you know, tuples, and with a transform iterator
that converts the input sequence into these, you know, tuples
so that the reduction operator doesn't have to handle all these cases,
which I think is exactly what you just said.
Well, I mean, that's a third different way of doing it,
which is the same idea.
They will lead to the same generated code.
I'm not a compiler, so I don't even know if that's true.
I am a compiler.
I will tell you that these lead to the same code.
One's a transform reduce.
One's a reduce with a transform, a view transform.
And the other one's a std reduce with a transform iterator.
I am uninterested in, like, the specific details of how you write the reduction. Um, there's... you know, all forms of writing the reduction to solve this problem are going to be more or less equivalent, and, importantly, have more or less the same performance profile, both in serial and in parallel, especially in comparison to all the other solutions that we just discussed.
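For what it's worth, a rough sketch of the transform_reduce spelling of that single-pass reduction — the names, the padding value, and the largest-first pair convention are illustrative choices, not from the episode:

    #include <limits>
    #include <numeric>
    #include <utility>
    #include <vector>

    using Top2 = std::pair<int, int>;  // invariant: first >= second

    std::pair<int, int> top_two_transform_reduce(const std::vector<int>& v) {
        const int lowest = std::numeric_limits<int>::min();
        return std::transform_reduce(
            v.begin(), v.end(),
            Top2{lowest, lowest},
            // reduce: merge two top-two pairs (associative and commutative)
            [](Top2 a, Top2 b) {
                if (b.first > a.first)       a = Top2{b.first, a.first};
                else if (b.first > a.second) a = Top2{a.first, b.first};
                if (b.second > a.second)     a = Top2{a.first, b.second};
                return a;
            },
            // transform: lift each int into a pair, padded with "minus infinity"
            [lowest](int x) { return Top2{x, lowest}; });
    }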
I mean, I'm not an experienced enough GPU programmer to know that that's true.
There are the sorting-based approaches that we talked about, which are obviously, you know, going to be not great performance-wise for solving this in the common case. Um, I'm sure there are going to be some cases, like maybe small input sizes, where that is just the fastest thing to do or something. But let's just assume that all of the sorting-based things, like the nth_element, the partial sort, and the sort, all of those are going to be slow. And then there is the approach that is to do two reductions, to, like, sort of do this in two passes. There are answers of that category, which are probably a little bit cruder, but work.
And then there's all of the like,
all of the answers that do this in a single reduction pass.
And like, that's how I break down
like the answers that you can give to this question
and also like the quality of said answers.
Because, like, the answers that will give you the best performance will be the answers that give you the single-pass reduction.
Yeah, I guess, from, like, my inexperienced point of view, I don't know that lifting your elements into a tuple, in a transform iterator pass, or in the transform reduce version, or in the views transform version plus a reduce... I don't know... I think that there's a possibility that's slower than the first version that you said, the std::reduce with a function object with four overloads.
I don't... I'm very confident that... um, because you have to understand, like, how the transform view, like,
works under the hood,
where it's quite similar to, in Thrust, like, a transform iterator.
And so because of that, under the hood,
it's the moral equivalent as if you had a really fancy reduction operator that had all these overloads. Because you're not, keep in mind, in a transform reduce, you're not actually doing a
transform pass beforehand. You're doing it on the fly. Like every time you evaluate the reduction
operation, that's when you're doing the transform. It's only a pseudo sequence that gets produced.
You're not actually producing it and
storing it somewhere. And so because of that, you know, it's, you know, you can think of that
transform as being done as part of that reduction operation. I guess that's the thing is I've never
done any real profiling of algorithms. But, like... well, see if my thoughts on this are consistent with yours, though. I agree that, like, the three different spellings of the transform reduce one, where it boils down to some kind of iterator thing, are all similar. But with the one with the function object with the overloads, my thought is, like, you're going to have 80% less construction... uh, or not 80% less, you will have 80% the amount, so, like, 20% less, of the construction of two-tuples or pairs. Because of the fact that some of the times, when you're merging, you know, the two top things together, it's going to be coming from an int that hasn't been transformed into a two-tuple and just goes directly into the two-tuple that it's being merged with. Does that not affect perf?
The compiler is going to optimize all that away.
Really?
Yes.
Really?
Yes. Like, if you construct a tuple of, like, two
ints from one source int and then you only ever really use one of those two elements of the tuple somewhere.
The tuple does not exist by the time it gets to the compiler middle end or back end. The
compiler is just going to optimize all this away.
Really?
Yeah. Another data point for you that will be useful.
How do you think thrust transform reduce is implemented?
We just already said that.
You told me that earlier on the show.
Transform iterator and a reduce.
Right.
So, like, we asked for this customization point of transform reduce,
but we only really needed it
because we didn't yet have views transform.
We don't do anything special for that case.
We literally just use a transform iterator
because that's an abstraction
that will just be completely optimized away
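Roughly what that Thrust-style spelling might look like — a sketch only, not the actual Thrust source, assuming thrust::make_transform_iterator and thrust::reduce, with hypothetical functor names and device details kept to a minimum:

    #include <climits>
    #include <thrust/device_vector.h>
    #include <thrust/iterator/transform_iterator.h>
    #include <thrust/pair.h>
    #include <thrust/reduce.h>

    using Top2 = thrust::pair<int, int>;  // invariant: first >= second

    struct Lift {  // int -> {x, "minus infinity"}
        __host__ __device__ Top2 operator()(int x) const { return Top2(x, INT_MIN); }
    };

    struct MergeTop2 {  // combine two top-two pairs
        __host__ __device__ Top2 operator()(Top2 a, Top2 b) const {
            if (b.first > a.first)       a = Top2(b.first, a.first);
            else if (b.first > a.second) a = Top2(a.first, b.first);
            if (b.second > a.second)     a = Top2(a.first, b.second);
            return a;
        }
    };

    Top2 top_two_thrust(const thrust::device_vector<int>& v) {
        auto first = thrust::make_transform_iterator(v.begin(), Lift{});
        auto last  = thrust::make_transform_iterator(v.end(),   Lift{});
        return thrust::reduce(first, last, Top2(INT_MIN, INT_MIN), MergeTop2{});
    }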
So I definitely understand that part. The part that I just... I believe you, because you definitely know more about this, but it is just surprising to me that, like, in the version that boils down to a transform iterator plus a std::reduce, you are always calling a binary operation that takes two pairs, two two-tuples.
Whereas in the version with the function object overloads, you are not always dealing with that. You've got four different cases.
The majority of the time, and definitely basically once every element's been hit once,
you're always going to be going to the two-tuple version.
Actually, that might not even be...
What I'm getting at is that I'm fairly confident that with most optimizing compilers, with optimizations turned on, writing those two forms of the code will give you equivalent code gen.
Really?
Yeah.
Now I'm super curious.
Compilers are pretty amazing.
I agree, but like.
Let me put it to you this way. In those two different ways of writing the code, you are doing the same thing
and you're taking the same path to get to the result. And I do not believe that there is any
observable difference between those two ways of writing the code. If we were talking about
non-trivial types, like a type that had, like, you know, std::cout in its constructor, sure, these two ways of writing the code would be meaningfully different. But in this case, we're talking about, you know, reductions on ints and tuples of ints. And so, like, these two different ways of writing the code will not be meaningfully different and, like, will be optimized to the same output code.
Now I want to talk about one other way of solving this problem that I think might be some people's first intuition.
And that way is to do something like max element,
then take it out, and then max element again.
Now, this is problematic in a few ways.
Because, one, you could have had multiple... what do you do if you had two values, um, that had the same maximum value? Like, what if the highest value in the list was five, um, but there were two fives? Well, then it's not sufficient to just, like, remove at the iterator returned by max element.
Um, wait, does that not still work?
No, because then, when you run max element the second time, you're going to find the other five, and you wanted to... you wanted to get rid of... the maximum, it was... no, it was... um, you see, well, that's why it was interesting that you started off by saying unique numbers, and then you said it doesn't matter whether they're unique or not. I actually posit that it does. Because if they are unique numbers, then max element, remove whatever that iterator was, max element again, is a valid solution.
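A sketch of that two-pass approach, with an illustrative function name — valid as stated for unique values; with duplicates it can return the same value twice, which, as comes up in a moment, Conor considers acceptable:

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Two passes of max_element: find the largest, take it out, find the next.
    std::pair<int, int> top_two_two_passes(std::vector<int> v) {
        auto it = std::max_element(v.begin(), v.end());
        const int largest = *it;
        v.erase(it);  // remove that one occurrence
        const int second = *std::max_element(v.begin(), v.end());
        return {largest, second};
    }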
Not the fastest one, but one of the reasons why I think it's one that people will jump to is that it's a solution that does not involve...
The first thing that'll pop into people's minds is sort, which is obviously going to be quite slow.
And then they're like, well, maybe there's a quicker way and then they'll be happy to find
something that's not sort levels of complexity. So, like, oh yeah, we can just do two max element passes. I think it's, like, a step further to recognize that you can solve this in a single pass.
Well, so, uh, the reason I said duplicates don't matter is because, in my head, returning the same value would have been fine. If there's duplicates, you can return five, five.
Oh, that's interesting. Um, that was a different problem. I did not read that as being acceptable, because you said... uh, you see, you said, given a list of numbers, return the top two, and I think that's an ill-formed question.
What, like, the top two?
What, the top two? Like, the... okay, it's not specific enough. The top two values?
Yeah, yeah, it definitely doesn't exclude the possibility of what I meant it to be the case, um, but it's not specific enough. Yes, I agree.
And I think that there's probably another class of solution here, um, where you do something clever with min max element. Um, maybe it's a two-pass one too, but this popped into my mind as well.
Min max element doesn't take an overload, though.
So, yeah, but you do... so you do something with transform, uh, uh, you know, with transform... views, views transform.
You think?
Let me put it to you this way.
Okay, this is obviously not practical, but this is just the thing that jumped to my mind.
What jumped to my mind is like, well, MinMax...
So we've gone from practical solutions to perverse solutions now.
How are you bending min-max?
What we're looking for here is a reduction that finds two different things based upon their value relative to the other elements.
And we already have one of those: min-max element. And so if there was just some way of making the... like, that's why I asked if all the numbers were positive. If all the numbers were positive, and there was just some way of inverting all of the top two values to be negative, then you could just do it with min-max.
Of course, but that would require you to know what the... uh, what the second of the top two is, and once you've already known that, then you might as well just do another max element. Does that make sense?
Yeah, which is why min max element does not... there's no useful way to use it here.
Yeah, I see.
Yeah, I see what you're thinking.
Yeah, but that would require like an extra pass up front, which you just said doesn't make sense at that point.
I wonder how many people actually are brave enough to say sort. I said it first because I don't have to worry about you, um, uh, about you judging me for saying the one that's going to be quite slow. Um, but, like, if in an interview, I bet somebody's not going to say sort, because, like, that is the naive solution, right? Because it's not particularly efficient.
When you ask this in an interview,
do you do the family feud thing? I assume not. No, no, I did that for entertainment purposes,
although I thought it would have been a curious way to go about it. If I was in an interview,
I probably would have answered it with sort, so that then I could get the interviewer to ask me, well, could you do better than that?
Yeah. And, uh, well, so here's the real question, is, uh, you know, uh, now that we've been... or I've been educated. Because I think that's the thing, is I went in thinking I have a transform reduce solution that I coded, because I actually started with std::reduce, wrote it up, and then was like, oh, no, wait, like, I don't have a commutative lambda.
And so I was like, oh, God, does that mean I need to go do the function object thing?
And so then I was like, oh, if I lifted each of the elements into a pair, that'll work.
But then that was my thought is like, I actually don't know if that is less efficient than
the function object. So our resident parallel algorithm expert says it
basically boils down to the same thing. So now that we have our six answers on the board,
plus also the two max elements as well, the question is, is that a good or bad question for an interview?
Well, I'm going to posit that all of the interview questions
that your team or my team have asked,
or maybe not all, I'll say like 90% of the interview questions
that have been asked by your team or my team,
not by my team anymore, because we're not doing those anymore, but 90% of those questions have been questions where the answer has either been std::reduce or std::atomic. Um, like, when I interviewed at NVIDIA, I had eight interview rounds, this is back in 2017, and, um, all of the interview questions were answered with std::atomic. And it was only after the first four rounds of interviews that I realized that it was all driver engineers who only understood C, and who expected me to answer with, like, volatile loads and stores, and had no idea what I was talking about, but were like, he seems to know what he's talking about. Um, so, like, in the sense that it falls into the regime of a question that is answered with std::reduce, um, sure, it's a good question. I'll tell you why the questions that are answered with std::reduce are good questions: because you can usually tell, by how people construct the answer to the question, whether they are thinking in parallel by default or not.
Oh, yeah.
But that's the thing is like in my head, if I had gotten this question while I was interviewing, which was a couple years ago now, there's no way I would have answered the std reduce or std transform reduce.
I never would have gotten to that point because at that point I did not understand the requirements on std::reduce.
Would you have answered with accumulate?
Yeah, I would have done it with a std accumulate.
And then, like, so basically the associative reduce version.
If somebody answers this question with accumulate, then you ask, okay, well, how do you parallelize it?
Well, so, and this is where, like, I'm curious, is what level would you expect, um, an intern, uh, to be able to get up to? Because, like, I couldn't have gotten to... and I don't think I was expecting... I think I would have been blown away if they had gotten to std::reduce.
I don't expect interns, or anybody who's interviewing, to be able to... like, I do not think that the answer to this question is going to, like, convince me that they shouldn't work at the company. It might...
So the answer is, they should, but there's no level... uh, you have no expectation, basically.
Um, I just don't think this is a good way to evaluate people.
I think it's a good way to, like, I think if you have somebody that's an intern or a new college graduate where they have no, there might be an effective way to determine that they're good.
But if somebody flubs this question,
or if somebody flubs any technical interview question,
I do not think I would hold it against them.
So it's, yeah, it's basically what I said. It's like a non-answer answer.
A, B, C, or D.
Well, I don't think this question can really tell us anything.
It might tell us something, but it won't tell us the absence of something.
It's like, okay.
I mean, I think it's a good, you know... like, it's good in that it lets you ask many follow-up questions. And, like, sure, if you're going to ask a technical question, I think it's good. Um, but we just shouldn't ask those.
Yeah, we should focus on... well, we'll save that for a follow-up episode. Because, yeah, Eric Niebler had a tweet that kind of went... I actually don't know what level of virality is normal or not normal
for him, but when I saw it, I think it
had a triple or quadruple
a triple or quadruple
digit number of
likes and a triple digit number of
retweets.
For my level,
where I'm at on Twitter, I was like, whoa,
that popped off.
But maybe that's just a day in the...
I've gained like 600 Twitter followers in the past week or so.
Yeah, I have not.
Yeah, I have a good number more than Eric.
I have, you know, my Twitter followers appreciate the APL content.
So, you know, that's what's important.
I have 11.6K.
I'm comfortably ahead of JF now.
Like there's no world in which JF is going to catch up.
He's got 10.2K.
That's a dangerous thing to say.
I am confident.
I am confident.
I mean, you have a lot of decades ahead of you. You don't think there's any scenario that plays out where he, you know... no... he ends up ahead?
JF has kids. JF has two children. He does not have as much free time to...
Are you not gonna have kids at some point?
Like, I'm so confident. Like, I'm not even worried about JF, like, listening to this podcast, because I know he doesn't have time in his life to be listening to this podcast. JF, buddy, if you are listening, you should text me and tell me that I'm wrong.
I feel like someone's just gonna DM him now and say, hey, you should listen to this episode.
But that's assuming that he's going to get... that he's going to have time to see the DM, and then, like, he's going to follow through on listening to the episode. That's not going to happen.
For those that haven't been listening since episode zero, just so everyone knows, Bryce is very good friends with JF, and he is a host of a rival podcast.
Okay, you can't call them a rival podcast, because they put out like two episodes.
It's juicy content here. Bryce, is there anything else you'd like to say about JF while we're on the topic?
I'm sure TLBH.IT is very good. Um, they have produced four episodes.
How many have we produced?
It's not nice to dunk this hard.
This is episode 72.
But the two hosts of TLBH.IT have children.
God, what are we
going to do when we have children?
I don't know. I don't know.
Yeah, you know, I don't know.
I'm getting old, man.
Yeah, we are getting old.
We are getting old.
So the probability that that's going to end up happening decreases as every day goes by.
All right, on that note.
You're like 30.
You're like 30.
You're still there.
I'm like, I hope that we have some.
If you've made it this far in the podcast,
we're just really going to open up right now.
Thanks for listening.
We hope you enjoyed and have a great day.