Coding Blocks - Recursion
Episode Date: March 15, 2021
We dig into recursion and learn that Michael is the weirdo, Joe gives a subtle jab, and Allen doesn't play well with others while we dig into recursion....
Transcript
You're listening to Coding Blocks, episode 154.
Subscribe to us, or maybe first you could subscribe to us if you haven't already.
You can hit us up on iTunes, Spotify, Stitcher, wherever you like to find your podcast apps.
We're there.
If you find a podcast destination and we aren't there, let us know.
We'll figure that out.
All right.
I got my coffee.
We're good.
I'm drinking it now. And you can go to the website, codingblocks.net, and find our show notes, examples, discussion, and more. Send your feedback, questions, and rants to comments@codingblocks.net. This is going to be a fun episode. You can follow us on Twitter at @CodingBlocks, or head to www.codingblocks.net and find all our social links there at the top of the page. And with that...
I am also the caffeinated Allen Underwood.
I'm great.
And yeah,
I'm doing pretty good too.
Thanks for asking.
And I'm Michael Outlaw.
All right.
We don't know who this other guy is.
We'll get to that.
This episode is sponsored by Datadog, the cloud-scale monitoring and analytics platform for end-to-end visibility into modern applications.
All right.
And today we are talking recursion, stack overflows, unbounded data, tail call optimization, and all sorts of fun stuff.
So let's get on with some news.
Yeah, and this episode we're going to be talking about recursion and stack overflows and unbounded data and all sorts of other fun stuff.
I'm not biting. I'm not doing it.
Come on. Come on, Alan.
All day.
All right, well, as we always like to do, we like to take a moment to say thank you to those that took time out of their busy lives to leave us a review. So from iTunes we have Ripco 55 and JLo 115. All right, and on Audible, which you can leave reviews on, we got _1T, Marnu, and Ian.
So thank you very much.
That's awesome.
Yep, and 1T actually left a review over there
because Outlaw said the name right on a previous episode.
All right.
So we got a double.
That's awesome.
So thank you.
It's because I left out the underscore.
Yes, exactly.
Now we're not going to get a third one, because Joe had to go and mess it up by saying the underscore.
That's right. I got a minor malfunction today. It's fine.
All right. So we're talking about recursion, which is a fun and interesting topic, I think. Maybe it reminds me of a scene. We'll make a decision here at the end of the episode.
First, I guess we've got to talk about what recursion is. Just high level, it's a method of solving a problem by breaking the problem down into smaller instances of the same problem, which is an awkward definition, because nine times out of 10, maybe nine times out of nine, when a programmer talks about recursive solutions, they're talking about methods that call themselves. That's kind of the telltale symptom that you can see here. And it's not a great definition, because sometimes you can have indirect recursion, where functions call other functions that end up calling the original function again, or whatever. So that's why the definition gets a little wonky. But for the most part, you can just think of this as shorthand for functions that call themselves.
And the most famous algorithm in all the land, if you go for recursive functions, is going to be the Fibonacci sequence.
Is it not factorial?
Factorial is a pretty popular one, too.
Really?
Yeah, Fibonacci also takes it to the next level,
right? It's a little bit more complex. Yeah, and there's a reason that I like to talk about Fibonacci first, and we'll get to that, but there are some problems with Fibonacci that can be solved. So it's a nice way to kind of walk you through the whole world of recursion with just one function. If it weren't for recursion, and maybe story pointing, I don't know if anyone would even remember the Fibonacci sequence. I don't know what it's good for. Let me know.
That's near and dear to our hearts right now too, right? The story pointing. And if you don't know what we're referring to, congratulations, you win. If you do, then you are deep into the world of Agile Scrum planning.
Yeah, and I'm just kidding, of course. There's other useful stuff. You're familiar with the golden ratio stuff, or if you listen to Tool, then you're familiar with that kind of spiraling effect that you can get from these numbers. Because the deal is, you add the previous number to the next number.
And then you carry that forward.
So it's a sequence.
So it basically starts with 0 and 1.
The next number is going to be 0 plus 1.
Which is 1 again.
Next number is 1 plus 1.
So you get 2.
And 1 plus 2.
3.
Then it starts getting bigger from there. So it's got kind of a slow ramp up, so you can imagine it's got this big nice curve that will get drawn. Two plus three is five, three plus five is eight.
Technically, though, it has to start from zero and one, not just from zero, right?
Yeah, yeah, absolutely. Otherwise it would just always be zero. I mean, I thought you have to seed it with two numbers, right?
That's always, yeah. In addition. So it literally starts zero, one, one, two, three, five, eight. And the numbers just keep getting bigger from there, because it's five plus eight is 13, and so on. And so, you know, it doesn't really sound that bad, right?
If I say, like, hey, what's the hundredth one? We could do that on paper in less than a minute, you know. It wouldn't be that bad to do.
So what's nice about this solution is, if you look at the code for it, it's really nice. And I know this is going to be rough, and I tried really hard with the notes to not have these really tough, hard-to-understand algorithms. But the gist is: if n is less than one, return n. Otherwise, return the function of n minus one plus the function of n minus two.
So really, when we said if n is less than one, basically if n is zero, return it.
Oh, this should be less than or equal to one. Sorry.
Okay.
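For reference, here is a minimal sketch in Python of the naive recursive version being described, using the less-than-or-equal-to-one base case from the correction (the names are just illustrative, not the exact code from the show notes):

    def fib(n):
        # Base cases: the 0th and 1st Fibonacci numbers are just 0 and 1.
        if n <= 1:
            return n
        # Recursive case: n here is the position in the sequence, not the value,
        # so we ask for the two positions before it and add the results.
        return fib(n - 1) + fib(n - 2)

    print(fib(5))  # 5, since the sequence runs 0, 1, 1, 2, 3, 5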
It's also not like n minus one, right? It's like, it's the position in the set. So if this was like array syntax, then it would be like... no?
No, he's correct.
Yeah, because if you want the fifth number.
If you want a big number, it's not going to be 13 minus 1 or 13 minus 2.
No, no, so it's the function.
It's the function.
Yeah.
So here's the thing.
I'll move this off.
Let's say, what's the third Fibonacci number?
Then what we do is we say, three less than or equal to one.
No.
So it's going to be the Fibonacci of,
so I guess you could say like of index two plus the Fibonacci of one.
Okay.
Yeah. Like if we were to go back to an array index, those minus ones, it's not minus one from the number. It's its position in the... the index in the array.
Yeah. Well, there is no array.
Well, yeah, I know, but I'm thinking like if you had an array of Fibonacci numbers, right? Again, this is why I went to a bigger number, like 13, not 3, because you can't get to the previous two numbers in the sequence by going 13 minus 1 and 13 minus 2, right? Yeah. You know, because the previous two numbers in the sequence are eight and 13.
No.
So, I mean, I think said simpler: if you want the fifth element in the Fibonacci sequence, then you have to add the fourth element plus the third element. So it's the element position minus one plus the element position minus two. Element positions, right? You're dealing with element positions, not the actual values.
Yeah.
So sorry, even this, the simplest algorithm you can find in recursion, is kind of hard to talk about.
So that's unfortunate.
So don't worry, we're not going to do too many more of those.
We're not going to try to explain like merge sort or whatever.
Should we point out that the third element wasn't three, though?
The third element is not three, but man, we're going to get so hung up on the numbers if we try to do this. I think everyone's going to be screaming in their cars, if they still have cars. I don't know, right? Who knows. It's been a while since I've been outside, y'all.
All right. So what's nice about this is it's an elegant solution. When you see it, it's essentially a one-liner. Especially if you're working with trees or graphs and you need to do, like, a depth-first traversal of a tree, which basically means going down to the bottom of the tree and either adding those numbers up or filtering or doing whatever you need to do, then it's like a two-liner. If you do that with a while loop, you're going to need another data structure, like a stack or a queue, in order to keep track of your place in line. And so it just gets a little messy. So suddenly, instead of having two lines, you've got 11, and it just goes on from there. And those algorithms for graphs and for trees and linked lists and anything else that has, like, an unknown amount of data when you start get really nasty if you try to do them without recursion. So that's kind of the benefit: certain types of problems just fit really well. I forgot to get a list of them, but pretty much any of the fancy sorting algorithms, quicksort, merge sort, anything you've ever seen with a tree. So, like, balancing AVL trees, search trees, tries, which I mispronounced, but any of that stuff, those are all going to be tree algorithms. Anything with a linked list can be written as a recursive function. And you know what the opposite of that looks like: you basically do a while loop with a counter and kind of set it to next, next, next, so you don't have to do it recursively. In fact, I'm not aware of any problems that you have to do recursively. I think there's always an iterative solution. It's just going to be nasty. If you go Google merge sort iteratively, you'll see what I mean by nasty.
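To make that concrete, here is a rough Python sketch of the same depth-first idea over a simple binary tree, once recursively and once iteratively with an explicit stack (the Node class and names are made up for illustration):

    class Node:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def sum_tree_recursive(node):
        # Base case: an empty subtree contributes nothing.
        if node is None:
            return 0
        # Recurse down both sides and add the results on the way back up.
        return node.value + sum_tree_recursive(node.left) + sum_tree_recursive(node.right)

    def sum_tree_iterative(root):
        # Without recursion, we manage our own stack of nodes still to visit.
        total = 0
        stack = [root]
        while stack:
            node = stack.pop()
            if node is None:
                continue
            total += node.value
            stack.append(node.left)
            stack.append(node.right)
        return total

    root = Node(1, Node(2), Node(3))
    print(sum_tree_recursive(root), sum_tree_iterative(root))  # 6 6

Same answer either way; the iterative version just makes the bookkeeping the call stack would have done for you explicit.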
Now, the downside is they're tricky to write. It's one of those things where you have to be really careful with the details. So maybe your number of lines isn't that big, but it's really easy to mess up and end up with essentially an infinite loop where you just kind of pop your stack.
Hold up real quick, just to ask: when you said they're tricky to write, you're talking about the recursive functions, right? Because you had mentioned iterative approaches.
You're saying that the recursive methods are the tricky bits to write.
Yes.
Yeah.
So it can almost look like magic when you see it.
If I remember, back in the day in computer science class, when we took our tests on paper, they would give you like a recursive function. It'd be like five lines, and they'd be like, what does this do? And I'd be like, oh crap. Okay, all right. And, you know, I'd use up half the paper and end up kind of indenting every time I went down, so like pencil marks, zeros and ones. And it'd be like, oh, it multiplied it by negative one or something, it reverses a list or something, something kind of dumb and trivial like that.
But trying to get that from the code is impossible.
And that's kind of how we read code, right? We tend to read the lines and then figure out what it means, even more so sometimes than the function names, because function names can be kind of bad. So recursion is kind of the opposite, where it's like, I can't tell what the heck you're doing from the code. Just give me a good name here, because seeing this thing calling itself and mildly tweaking the parameters is just tough to derive the meaning from. It gets to be Inception-like. So I think if someone showed me, say, a quicksort algorithm and asked me what it did, it'd be like, okay, let me bust out that pen and paper again.
It gets to be Inception-like.
Yeah, totally. When you're looking at it.
And it's hard to derive the meaning just from seeing function calls calling themselves and looking at the base case.
But we'll get to that, too.
So I wanted to mention that there are a couple of languages that don't really have traditional loops, like for loops or while loops. They only have recursion. And I thought that was pretty interesting, so I took a look at what that looks like. Basically, they'll take some sort of function that applies the visitor pattern, so you kind of apply that function for each piece of the data. I just thought it was kind of neat. And if you imagine a world without for loops, and what that would look like, I imagine you'd probably get pretty good with recursion pretty quickly.
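As a rough illustration of that loop-free style, here is what processing a list with recursion alone could look like in Python (purely a sketch; languages that actually work this way have their own idioms):

    def sum_list(numbers):
        # No for or while loop: handle the empty case, then recurse on the rest.
        if not numbers:
            return 0
        return numbers[0] + sum_list(numbers[1:])

    def for_each(items, visit):
        # Visitor style: apply a function to each item, again by recursion.
        if not items:
            return
        visit(items[0])
        for_each(items[1:], visit)

    for_each([1, 2, 3], print)   # prints 1, 2, 3
    print(sum_list([1, 2, 3]))   # 6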
So, just a couple of other high-level bullet points to hit, and then we'll get into the weeds. Common algorithms and data structures: we mentioned unbounded data, like trees and stuff. When I say unbounded, what I mean is, if I give you a root node and say, how big is this tree? You don't really know. You have to kind of iterate through it, or else you have to kind of cheat and add some stuff to the algorithm. Same thing with a lot of graphs, depending on how you implement them. And if you've got an algorithm or a data structure where some other worker is adding things to that tree while you're operating on it, then it's certainly the definition of a data structure that's kind of changing out from under you, as opposed to an array, where you can say, okay, it's only ever going to have a total of 100.
There we go. So these are dynamic data structures.
Yeah, and you're basically saying the input could be any size, you just don't know, as opposed to an array, where it's typically a fixed length in pretty much any language.
Yep.
So one thing I've been kind of looking at is sorting algorithms, because they kind of break this pattern, because a lot of times with sorting algorithms you're literally looking at sorting an array. But recursion is very popular, like overwhelmingly so, in the good algorithms for sorting. And the reason it's popular for that, even though it's about a data set, is because sorting algorithms tend to have this breaking-down effect, where at the heart of any sort of sorting algorithm, all you're really doing is comparing two numbers and saying which one goes first. And then you kind of up-level and compare, you know, two sets of numbers and go through each one. But ultimately, your basic unit of work is: is this one bigger than that one? So these are examples where the problem of getting these numbers in order boils down to a bunch of really small "is this one bigger than that one"s.
Yeah. If you recall, we've actually had a few episodes where recursion has come up in the past, but it was kind of, I don't know, tangential to whatever the topic was. So, for example, in episode 89 we talked about forms of recursion that I'll save for later in the notes. Episode 89 was in regards to space and time complexity and the impacts that can be had there, as well as what you're talking about now with search and different data structures. We talked about it as it relates to trees, and how some of the tree concepts were difficult because recursion was often, you know, a pattern or an algorithm used to even work with the tree. And specifically, search was one of the examples that we gave. But you could also think about, what was it, the B-tree, which was constantly resorting itself, you know, which was always doing comparisons to be like, hey, which side of the branch should you fall on? Are you greater than or less than?
Yeah.
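That greater-than-or-less-than decision reads very naturally as recursion. A hypothetical sketch of searching a plain binary search tree in Python, reusing the little Node class from the earlier sketch (this is the simple unbalanced case, not the self-balancing B-tree or red-black variants mentioned here):

    def contains(node, target):
        # Fell off the bottom of the tree: the value isn't here.
        if node is None:
            return False
        if target == node.value:
            return True
        # Pick a side of the branch based on the comparison and recurse down it.
        if target < node.value:
            return contains(node.left, target)
        return contains(node.right, target)

    bst = Node(8, Node(3, Node(1), Node(6)), Node(13))
    print(contains(bst, 6), contains(bst, 7))  # True False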
I recently tried to program a red black tree and,
uh,
man,
it's kind of hard,
but,
uh, man it's kind of hard but uh I'm gonna
get the algorithm from
poor Jizzy
I know I'm so
sorry guys
I'm so sorry
terrible episode
um
the coffee's kicking
in though it's gonna
be good
uh yeah
but uh yeah it was
difficult so um it
was difficult because I
was trying to be lazy
and not read about it
so I was like oh
just show me the
the algorithm.
I missed part.
I ended up down a rabbit hole for lack of reading comprehension.
That's the deal with sorting algorithms.
I wanted to mention that because dynamic programming,
which is like when people talk about those horrible whiteboarding FAANG interviews,
Facebook, Apple, Amazon, Netflix, Google,
it's usually because they're talking about dynamic programming problems,
which are essentially, I think pretty much everyone agrees
that they're the hardest kinds of programming problems that you can find.
And they end up having these kind of difficult problems
that you can solve in a decent and efficient way
by breaking the problem down into smaller pieces and avoiding duplicate calculations.
So remember when we talked about Fibonacci, we said the Fibonacci number for the fifth
Fibonacci number is the fourth plus the third.
And then when you go look at the fourth, it's the third plus the second.
You can see right there, you're going to be calculating the value for three twice.
And calculating the value for three is going to calculate the number for two, which is also happening when you do four.
So it's doing redundant work all over the place. And dynamic programming, which we should probably have an episode on one day, is a good solution that looks at reducing those duplicate calculations by pulling a couple of tricks that can be really hard to see at first. Because if you're thinking about interviewing for FAANG, and we've got a list of FAANG companies that are hiring,
then get to practicing.
And also a list of them that weren't that we talked about last episode.
It was only like two.
It was like Apple and Google and everyone else was like.
Netflix was like the worst.
Netflix was bad too.
Yeah, they were like the worst one about not wanting remote work, if I remember correctly.
By the way, I don't know that I'd ever heard the term dynamic programming, because I'm not out doing LeetCode challenges and all that kind of stuff like Jay-Z does in his early hours of the day. But I looked it up, because I was like, dynamic programming, it sounds like something you do where you just, you know, go figure things out and you're not writing, you know, empirical code for everything. But it's not that. It's actually tied deeply to recursion. A simple definition of it is: dynamic programming is mainly an optimization over plain recursion.
So I think what you just said, Joe, and where this might make sense for people, is with those problems that you're doing on LeetCode, there's sort of an obvious path to doing recursion, right? Like Fibonacci. Okay, well, I know that I need to pass in this element position minus one and this element position minus two, and just call the same function over and over, right? But when you do that, that's why I say it's horribly inefficient, because you're building up a call stack and all this kind of stuff. And dynamic programming is, you're trying to find alternate ways to get the same result without using so much processing and space, right? Like, that's really what it boils down to.
Yeah, and there's several different techniques to do it.
And there's a couple of technical qualifications for dynamic programming.
Yeah, it sounds really general.
You could say dynamic programming.
Yeah, I do that all the time.
Right, yeah.
That's what I do.
Yeah, it's a terrible, terrible name. But I think it's probably older than a lot of other things that we associate now with dynamic.
But one technique for getting rid of those duplicate values is just to cache them.
That's called memoization, for example.
So you say, well, hey, I already did this calculation, so let me just store it in a hash table.
And anyone else who comes along and needs to do this calculation, they can just look up my value, because I already ran it.
But it goes on from there. That's like the most basic technique for dynamic programming, and then it just gets weirder from there.
And it's always easy for people to be like, well, what if you've got a billion numbers? Yeah, right, that's the first thing that somebody's going to say to you. Like, if you come up with a solution for doing that caching, well, now you've exceeded the cache. Now what?
Yeah, totally.
Totally. Yeah, so if you look at a lot of prime number generators, this is a common technique for them.
Well, I mean, it absolutely runs into that, and cache was like one of the two hardest things in computer science, right? I mean, you know, naming things, cache invalidation, and off-by-one errors.
Yep. But in this case, cache is kind of a bad word for it, so I probably shouldn't use it, because memoization is a technical term, because in this case you never have to invalidate. So the technique is similar to caching, but there's no invalidation. It's always the same. The numbers are the same, or the answer is always the same. Given two inputs, or however many inputs, you always get the same output.
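A rough sketch of what that memoization looks like for Fibonacci in Python (the dictionary and the names here are just illustrative):

    def fib_memo(n, memo=None):
        # memo maps a position to the value we've already computed for it.
        if memo is None:
            memo = {}
        if n <= 1:
            return n
        if n not in memo:
            # Each position gets computed once; later calls just look it up.
            memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
        return memo[n]

    print(fib_memo(100))  # returns immediately instead of recomputing subtrees

Same inputs always give the same output, so there is never anything to invalidate, which is why memoization fits here where a general-purpose cache would not.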
So my joke doesn't exactly work, but I tried to make it fit, okay?
I'm sorry, man.
I'm in a relaxed mood.
And I was trying to work within that space, and I did the best I could, man.
I'm sorry.
Well, actually, I'm the one who said cache incorrectly.
I'm going to double "well, actually" and swing it back,
because that's where I'm at.
I need more video game time, y'all.
What you need to do is to take a trip.
That's what you need.
Let's take this trip on the road.
Coding Blocks is going to go to Hawaii
and we're going to
record from there.
We need to record from there.
Did you know like they don't allow like loud laughing in Hawaii?
Oh, awesome.
No.
What's the?
It's just Aloha.
Oh my gosh.
I thought you were for real.
I was like, that sounds good right now.
Oh, I do.
I could see it.
It was all over.
Oh man. Am I that see- right now. Oh, I do. I could see it. It was all over. Oh, man.
Am I that C3 now?
That's from Lars. Thank you, Lars.
That was awesome.
But yeah, talk to the accountant
and maybe we can swing something there.
Right? We need to go.
All right.
So to take it down
a level and get deeper
here,
I want to talk a little bit about how functions work.
And this is going to touch on and overlap on many things we've talked about,
particularly in the first years of the podcast.
It'll be fun to revisit some things.
Because you can't talk about recursion without talking about the call stack.
Finally, we get to it.
Yes.
We did an episode on, was it value types versus reference types, or was it the heap versus the stack?
Oh yeah, that was like in the first five episodes, somewhere in there.
Yep. Yeah, well, the boxing and unboxing episode, I believe it was. We talked about heap versus stack. That might have been like episode one.
Nope, I is for Interface.
Oh, that's right. Then maybe it was episode two. Yeah, it was up there.
Yeah, but yeah, we've definitely talked about it. So the gist here is that
programming languages generally have the notion of a call stack. And I think it's like, do all programming languages have a call stack?
And the answer is no.
There's a couple that kind of don't that do things in a little bit different ways.
But even those, if you look at them, it's kind of the same thing.
I just don't call it the same.
Yeah.
And we'll see.
I got this in the notes somewhere.
Let me find it.
Okay.
Yeah.
There's actually an implementation of Python called Stackless Python.
And it's basically like Python with a whole lot of limitations on what you can do and what you can't do.
But it runs really well in some environments.
Haskell, Haskell, however you say it.
Haskell?
Yeah, Haskell.
I'll probably say it. Haskell? Yeah, Haskell. I'll probably say it.
It uses a technique called graph reduction where it basically ties into the kind of lazy evaluation mechanism they have where things aren't evaluated until they need to be.
But it's not too far off conceptually.
Assembly, if you think about like if you're doing stuff with assembly, like there's literally no call stack, you know, you're just kind of responsible for doing that type of stuff.
Now you end up kind of re-implementing the stack because you'll do things like, Hey,
um, jump to this address.
And when you're done, jump back to here and, or you'll save kind of the, the, the address
for where to come back to, to a register or something.
And so that kind of acts a little bit like what we're talking about here. So, enough...
Oh, go ahead. I was just going to circle back on that previous comment, because it was killing me. I had to go back and find it. It was episode two that we talked about boxing and unboxing. And we talked about it specifically in the way .NET deals with it, but boxing and unboxing is just a general concept regardless of language.
That's good to know.
Yeah, totally.
There's some things that just pretty much all languages have in common, and that's one of them.
And Stackless C was the last one I wanted to mention because it was really cool how it worked.
It basically just goes through and inlines your functions.
So if you have this function that calls that function,
it's going to go get that function
and basically slam the code into wherever you called it
as if it was all just typed in line,
which is interesting but does not work with recursion at all.
What did you call that one?
Stackless?
Stackless C.
Stackless C, okay.
Yep.
You can literally imagine the compiler goes through
and it's like, oh, here's the function call.
Let me go grab that code and copy-paste it into here.
So you could see that it would also make for a very large assembly by the time you're done.
Because that same function call has been repeated in a number of times in that final product.
Yep.
That's pretty gnarly.
So basically, unless you're working with one of those few things I mentioned or maybe a couple of them I missed, your language has a call stack.
So JavaScript, Python, PHP, C++, C Sharp,
they've all got the notion of a call stack.
And if you remember a stack, it's a data structure that's optimized.
Well, I mean, it's built all around last in, first out.
So what's a good example, like in real life, of last in, first out?
A stack of plates. A stack of pancakes.
Okay, let's do pancakes, and you eat the top pancake first.
Yeah, it's the last one you put on the plate. Yeah, it's the last one added to the stack, but it was the first one you pulled off to eat.
Okay. Yeah, I like it.
Or like a glass of water.
If you sip from the top,
the water at the bottom of that cup has been there the longest.
Yeah.
It depends on how voracious you are with it.
Swinging that cup around, I don't know.
You don't stir your water?
What kind of madman are you?
That sounds gross. To even shake it... shaken, not stirred.
So anyway, that's literally a stack, and that's what it means here, too.
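If it helps to see it, here is that last-in, first-out behavior using a plain Python list as the stack (just an illustration):

    pancakes = []                      # our stack
    pancakes.append("first cooked")    # push
    pancakes.append("second cooked")   # push
    pancakes.append("third cooked")    # push

    print(pancakes.pop())  # "third cooked", the last one added comes off first
    print(pancakes.pop())  # "second cooked" is next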
The deal with the call stack specifically is, every time you call a function, a frame is added to the stack. And frame is a special term here. What a frame means is essentially a bucket of memory that's got a few items in it in common across all these languages, and sometimes some metadata with it that we're going to largely ignore. But the gist of it is, it's going to have
space allocated for all the arguments coming into your function. So if you've got value types like integers,
it's going to be 32 or 64 bits allocated for each one of those.
So if you take eight arguments, you're going to have 32 or 64 times eight.
Now, if you have reference types, like, depending on your language,
let's call it heaps or nodes or objects or whatever,
then those are just going to be pointers.
So it's still going to be roughly the same size of 32 or 64, you know, each.
Local variables.
Any function, any variables that you are going to be calling in there
need to be allocated in that call stack.
And then finally, you're going to need a return address
of where to hop back to when that function's done.
So imagine we've got Fibonacci. It's got one argument, that index spot. And so every time we call Fibonacci, it adds a frame to the call stack, including 32 bits for an integer, plus a return address, which let's just say is 32 bits to keep it simple. Local variables, we don't have any. So that's it, 64 bits for every frame. So if we call Fibonacci with, let's say, the number three, then it's going to be, roughly, because of the deal where every Fibonacci call results in two more calls, we can call it three to the second, which is nine.
So there's going to be nine frames added to your call stack, each with 64 bits.
Wait, hold up.
No, it would still, you would just call Fibonacci.
If you passed it three, it would call itself two more times to get all the way down, or maybe three more times.
So it shouldn't be three
squared, right? So that's the run time
of it. There might be some kind of
funkiness near the edges there, but three is going to
put one on for two and
for one. So now we're up
to three in the call stack. And then two
is going to do for one and zero,
which are still going to get called, and they're going to exit
immediately. They don't add any more. So now we're up to five.
Oh, if you're calling it twice... I'm sorry, I got you. Okay, got it. I think I made a mistake, but anyway.
Yeah, so anyway, it being around nine, that's just roughly the runtime of it. So, you know, it may be off by one or two because of whatever, but that's generally the shape of the algorithm there.
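For what it's worth, you can count that work directly instead of estimating it. Here is a hypothetical Python sketch that instruments the naive version; the total number of calls grows roughly exponentially with n, while the stack itself only ever gets about n frames deep at any one moment:

    def fib_counted(n, stats):
        # stats tracks total calls and the deepest the call stack gets.
        stats["calls"] += 1
        stats["depth"] += 1
        stats["max_depth"] = max(stats["max_depth"], stats["depth"])
        result = n if n <= 1 else fib_counted(n - 1, stats) + fib_counted(n - 2, stats)
        stats["depth"] -= 1
        return result

    stats = {"calls": 0, "depth": 0, "max_depth": 0}
    fib_counted(20, stats)
    print(stats["calls"], stats["max_depth"])  # 21891 calls, but only 20 frames deep at once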
So to go back to what you just said, the reason it makes sense.
So I wasn't thinking about it like this.
I think I had pre-optimized it in my head.
Basically, what Joe's saying is the reason why there's going to be more frames there is because if you say you're trying to get the Fibonacci for the fifth element, right? The way that you do that is you would call the Fibonacci
for the fourth element plus the Fibonacci for the third element. That would give you the answer to
the fifth. And so by calling those two functions separately, that means you're now doing Fibonacci of four, and then it's going to automatically go do the third, second, first. And then if you say plus the Fibonacci of the third, then it's automatically going to do three, two, one as well. And that's why he's getting to that number of frames that's stored.
So yeah, and just redundancies. When I do four, it's going to do three and two. When I do three, it's going to do two and one. So you see, right there I did two twice, right? And so, roughly 25 operations, 25 items on the call stack, for Fibonacci of five.
And that's if you're taking the pure definition of Fibonacci, which is the function of that position minus one plus the function of that position minus two, right? So that's taking that raw equation.
Yeah. And even if I call, hey, Fibonacci of zero, it's still going to result in a frame going on the stack. It's going to pass in the number zero, it's going to do the operation, and it's going to return. So it's going to be done there, but it's still a frame on the stack.
And just so you know why I was confused when you said nine or whatever initially: the pre-optimization in my head was, well, you'll just do the one Fibonacci and go all the way down from the number that you called it from and keep track of those numbers. But that's not the true definition, or the mathematical equation, for how Fibonacci works. So if you disregard how you could store those numbers to use later, and you say, hey, what's the true nature of this, then that's how
you get more frames on the stack.
Yep. And I think I've got a good picture here.
I swear that we've talked about this too, and I'm trying to remember. I'm over here wracking my brain trying to remember what the episode was where we talked about the amount of space that was used for that frame. I could have sworn that we had talked about the different types of memory, and that was the part where that came up. I think it was garbage collection. I believe it was one of the first few episodes as well. It was back in that same time period. We talked about boxing, we also talked about the heaps and the stacks and all that stuff, and garbage collection and all that. So yeah, at any rate.
And so when the function returns,
basically we peel the item off of the stack and we pass its value back into the function that called it
and basically with the address location
of where to pick back up at.
And what's really nice about that
is the reason that all these languages use it
is it's really quick deallocation.
It's not like the heap where you've got a garbage collection and, you know, all this stuff and things are dynamically sized.
Like we know with every frame call exactly how much space there's going to be.
And when we pop that thing off the stack, we set the pointer down.
We don't go delete the memory.
We just literally set one number down.
So it's constant time, and it lets us quickly reclaim memory that we used, immediately, as soon as it's not used anymore. So, hyper efficient, which is a smart way of doing things.
So, have you ever seen a call stack?
You mean have we seen the call stack, or the error?
Yeah, exactly. That's immediately where my mind goes. I'm used to seeing call stacks and thinking about the call stack whenever there's an error, and you'll see like, this error happened on line whatever, which was called from line whatever, which was called from blah, blah, blah. And if you're in Java, it's going to be like a thousand of those all the way down. If you're in a sane language, then it's going to be much fewer.
And you know what? Based off what you're saying here, if you ever hear the term stack trace when you're debugging or something like that, it's truly just stepping through what you have in the frames of a stack and showing you what happened in each one of those function calls, right?
Yep.
And, you know, you can actually, every language I've ever worked with,
you can actually just access this call stack.
So if you ever want to, you know, you've got a bored rainy Sunday or something, you could write an application that prints out the call stack at every step of the way. You can actually see exactly how that works, exactly what's been allocated, exactly what the values are at every step. And that's another important thing, too, for why debuggers work the way they work: the values in the frame, once you kind of up-level and go to the next one, they're set. They're all value types or pointers. So you can go back in time and inspect the values that were passed into your function, because they're basically just locked there on the frame. Now, you know, those reference types get a little weirder, can be hard to do, but that's kind of a big part of the debugging workflow when you're working with a debugger. That's why you can kind of hop back and forth and say, hey, wait, how did this get passed in here? You know, you can kind of iterate and jump back and forth. You can even sometimes drag that pointer back in time and say, we'll process starting from here, because of the stack.
And when the last frame is completed on your call stack, your program's done.
So that's literally how you know when a program has completed.
There's nothing left to do.
So that's pretty cool.
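If you do want to try that rainy-Sunday experiment, here is one hedged way to peek at the call stack from inside a recursive function in Python, using the standard traceback module (just a sketch):

    import traceback

    def fib(n):
        # Show the chain of frames that got us here: fib called from fib called from...
        print(f"fib({n}): {len(traceback.extract_stack())} frames on the stack")
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    fib(4)  # watch the frame count rise as calls nest, then fall as they return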
So the stack size is really limited, and I looked it up. C#, it's going to default to one megabyte for 32-bit and four megabytes for 64-bit. And you can change those values, kind of a tweak in your system, but everyone on the internet is going to try and talk you out of that, probably for good reason. I looked at a bunch of other languages, too. One megabyte pretty much across the board. It's like a standard size. I don't know why, but it's interesting. Now, generally you don't need that much space on your stack, because, you know, like we mentioned, 64 bits for Fibonacci. You know, even if you've got eight arguments, it's eight times 32, plus one, whatever. So it's not exactly big numbers. We're generally talking about bits. You don't want to have functions that take in a kilobyte's worth of pointers.
Right. And to what you said earlier, that stack
frame size is just based off the local variables within the method and pointers to big data
outside that method. So in theory, I mean, even if you have a horribly inefficient set of methods,
it's still not going to be that big, right?
Even if you have hundreds of them, it's insignificant, right?
You're not going to even come close to one megabyte of memory space.
Okay.
It was killing me.
So I had to find it.
It was episode 95 that we had talked about the different types of memory. And specifically at the time, we were only focusing on three main segments of it: the heap space, then the static and global space, and then the stack space. Technically there's a fourth one, which is the text space, where the actual code itself sits, that contains all the instructions. And in fact, there's a question on Quora about the difference between the stack versus the heap space,
you know, the stack being like where all the local memory is. So like if you,
if you go into that function,
let's say this is recursively calling some Fibonacci function, for example,
with that memory that you were just talking about,
and you had a frame added each time,
then each time that frame would contain space
for any local variables used by that method.
So if that method had a local variable called,
you know, I don't know, I, for example,
just an integer, you know,
it doesn't take up a lot of space necessarily,
depending on, you know, what size architecture you're using.
But, you know, that space would be reserved
over and over and over versus the heap
that you were just talking about.
Like, if you did get more advanced and you had some kind of memoization thing that you were using that was outside of that, it was on the heap, then you might be pointing to it. And that would hopefully live on the heap so that it would be addressable, accessible from, you know, other places. And then there was the static and global space. The heap could grow, but statics and globals wouldn't. So that'd be the main difference there between those.
Yeah.
Episode 95. What was the title of that one? Data Structures, Arrays and Array-ish. All right, I remember that one.
Yeah, so, I mean, if the stack is so big, one megabyte, and the arguments are so small and the return address is so small, then you'd expect to never have to exceed the boundaries of that one megabyte, and yet it happens all the time.
And when that happens,
you get a stack overflow exception,
which literally means that you ran out of room on your stack.
It exceeded the boundary.
It wasn't able to put the next frame on.
Actually,
that's incorrect. A stack overflow means that that's where you go to find
the answers to why you're getting the error that you just got.
It's this,
it's literally a website. And then you should not copy the answer to why you're getting the error that you just got. It's literally a website.
what is it?
The licensing for Stack Overflow is not exactly,
we talked about this in the past.
It's not exactly wonderful.
You have to give attribution, I believe, is the catch to it.
Yeah.
I mean, it's kind of funky.
Yeah, it is.
It's weird.
But at any rate, all right, moving on. What do we got?
So yeah, so now you know why stack overflows happen, so you never have to Google for that ever again. If you ever see a stack overflow exception, it means that you exceeded the boundaries of your stack. So don't even bother. You need to look at what your code is doing, because it's essentially infinitely looping by calling functions, so it's never going to finish. And that generally only happens in these kinds of recursive situations, because you'd have to really have a lot of nested functions in order to hit a megabyte with these small, small bits.
Yeah.
And to sort of tag onto what you just said there, to hit the stack overflow thing, that means you have to keep calling functions that aren't returning yet, right? Like, they're calling another function, and that adds to the stack, and then it calls another function that adds to the stack. And that's why you typically only see it in recursion. Because of the whole infinite thing, like what you said a second ago: typically, even if you had an infinite loop, you're not going to get a stack overflow error, because you're not infinitely adding to the stack. Even if you call 20 functions in a row inside your infinite loop, it's going to execute that function, which means it's going to add it to the stack, it's going to get its result, pop it off the stack, and then do the next one the same way. So it's kind of hard without doing recursion to blow up your stack.
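To see that in a safe, contrived way, here is a sketch of a recursive function with a broken base case in Python. Python caps the recursion depth and raises RecursionError rather than letting the process actually overflow the native stack, but the idea is the same:

    def count_down(n):
        # Bug: the base case never matches for odd n, so the calls never stop.
        if n == 0:
            return
        count_down(n - 2)

    try:
        count_down(7)  # 7, 5, 3, 1, -1, ... frames pile up until the limit
    except RecursionError as e:
        print("blew the (Python) stack:", e)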
Yep. That doesn't mean that you might have a library underneath that ends up calling it,
but every time you see stack overflow,
it's because functions kept calling and not completing.
And no matter how hard you try,
not even Java,
I cannot imagine exceeding one megabyte just with like functions,
calling functions,
calling functions.
And so that's one of the reasons people say,
don't try to optimize your code by getting rid of functions.
So a lot of times when people first hear like, Oh, every time there's a function call, it allocates memory.
Well, crap, let's get rid of all the functions and just have one big function with a bunch of if statements.
That is such a micro-optimization.
You're doing so little work there compared to the readability that you're messing up.
And I bring to you Stackless C.
Yeah, there you go.
Good luck.
That's essentially what you're doing.
You are doing the compiler's good work,
and the compiler is much better at it.
And compilers, too, are really smart.
They do things like this.
If they see that there's a bunch of little functions,
maybe they will go in line,
or maybe they'll do some fancy tricks there
in order to kind of keep that stuff resident. And you are mucking with that. You're making it harder for it to do its job by trying to outsmart it. So, as always, we say use a profiler. Basically, if you're trying to improve the speed of some code and you're getting rid of function calls to do that, oh boy, you better have done everything else in the world first, because you're not going to affect the runtime of that realistically.
This episode is sponsored by Datadog, a software-as-a-service-based monitoring platform that detects anomalies across your entire stack, which reduces the time it takes to detect and address outages and helps promote collaboration between development, engineering, operations, and the rest of the company.
And did you know, it's amazing just how widespread DataDog really is.
They are truly a thought leader in this space, constantly sharing their wisdom with the world. They have a great blog where, you know, it's like, hey, here's how you can monitor Kubernetes or Docker or Apache or Postgres or insert technology here. Because, oh, by the way, they have over 400 built-in integrations to go along with it, making it super simple to use them. And if you don't believe why this matters, go back and listen. We had a whole series of episodes where we talked about the DevOps Handbook, and one of the key themes there was: you have to have metrics, you have to have monitoring, and you have to be able to react to those situations. Datadog is your easy button there. They've already figured out what to monitor and what metrics matter, right, for whatever your given technology is. And I double dog dare you, oh, that's a good pun, I double dog dare you to find a relevant technology in today's space that Datadog doesn't have a way to monitor.
Well, I'm glad you mentioned that. So we've typically talked about Datadog in terms of kind of performance observability, and like, things are running or not, which is super
important. We haven't really talked about
the stuff they do in the security space. They have over
400 turnkey security integrations,
things like Node.js or
Docker or Kubernetes.
It's actually huge.
I'm not going to list all 400,
but the things they do on the dashboard
they've set up for various
authentication events, threat detection in real time.
I mean, this is the kind of stuff that we work with, right?
So, like, me seeing the pages and how they kind of solve some of these problems at kind of an infrastructure level is really cool.
And these visualizations are just amazing.
So, if you're needing observability for security as well, and you do, then you owe it to yourself to check out
what Datadog's doing here.
It's amazing.
It never ceases to amaze me every time I go poking around
and see what's new with Datadog
and all the new things that they're sharing.
Did you know this?
That they have geomaps to visualize your app data by location.
So if you want to do RUM analytics, real user monitoring, you could literally look at the world and create little heat maps for where your users are interacting with your application around the world. It's just so many amazing things. It's basically like the opposite of death by a thousand cuts, because it's basically a million little things that add up to something just so incredible.
And that's Datadog.
So go to DatadogHQ.com slash coding blocks today and start your free 14 day trial.
And if you start that trial and you install Datadog's agent, they will send you a super awesome T-shirt.
Yeah. And again, you start your free Datadog trial to start your monitoring in real time, and as a listener of the podcast, you'll receive a free shirt.
And listen to the podcast, we'll see your free shirt.
And just like Al said, you just got to create one dashboard.
Yep.
So the way you get there is go to datadoghq.com slash coding blocks.
Again, that's datadoghq.com slash coding blocks to get started.
All right. So it's that time of the episode where we ask you if you have the time and if you'd like
to do something nice, you know, why don't you go leave us a review? If you want, we've got a
helpful link at codingblocks.net slash review to make it easy. You can either pop over to Audible or iTunes,
and you know, again, it's just kind of a way for you guys to give back without actually having to
do a whole ton. So if you do take the time, we appreciate it. And as always, we'll, we'll give
you a shout out on the show. So thank you much. All right. And with that, it's time for my favorite portion of the show.
Survey says.
All right.
Okay.
So we asked a hundred respondents: do you know how much the average hipster weighs?
And the top choices were...
Do you know how much the average hipster weighs? Either way, less than us.
Yeah, I don't know about that one. An Instagram.
Oh, gosh. Okay. Thank you, Mike RG.
All right. So previously we asked, which hand do you type the six key with?
And your choices were the left because, duh, it's closer.
Or the right, probably because of your ergo keyboard.
Or I also thought about this one, too, after that.
And in fact, I think we had some comments like this too, like the right hand, because of the numpad, should have probably been part of that. But yeah, so this is the Mathemachicken's favorite kind of survey right here. And you are up first, sir, as to, you know, which one do you think it is, left or right?
And by percentage?
Well, I think, so just based on my own activity, I've been watching this for a while now, I'm going to go with left 40% and right 40%. The other 20% is nose.
Nose?
Yep. Like you just bang your head down on the keyboard and...
Oh, yeah, your nose.
Just the nose.
Like Pinocchio would have a really good...
Yeah, he'd be really good at hitting the six key is what you're saying.
Yeah.
That's right.
Okay.
I'm still not sure I understand what your single answer is there, though.
Like which one you thought won.
I don't think it matters.
I think you need to adjust the question, actually.
So left 51%.
Left 51%.
Okay.
Oh, he actually went with a winning percentage.
I'm so disappointed.
Yeah, I am kind of surprised with that, too.
Yep.
All right.
So I'm going to Price Is Right the Mathemachicken, and I'm going to go left with 52 percent.
Oh, that's dirty. Dirty, dirty.
That's the real answer. Okay.
Whoa, buddy, I don't want to be around you two. There's going to be a fight.
All right, well, the answer is: the left, because duh, it's closer, at 65 percent.
Yeah, on most keyboards.
65? I'm surprised it's not like 90.
Hey, you're on the Sculpt Ergo, right, Joe?
Yep.
Yours is on the left-hand side, right?
Yep.
Yeah, man, it's only these crazy ergonomic keyboards where they move the six over to the right. It drives me crazy. I still haven't gotten used to it.
You've got to use your nose. Click, click, looking like a chicken.
You know, I keep hearing, you know, little complaints like this from Alan about all these ergonomic keyboards.
Obviously, Alan's still doing this.
The most inclusive ergonomic keyboard review ever done and the most exhaustive one ever done, too.
I don't know that anybody has ever gone to the level of detail that Alan is going to, and with as many candidate keyboards as Alan is going through.
But then when you hear him like make little comments like that,
you know, and I'm thinking, oh gosh, he still has my Moonlander. Is that a jab at my Moonlander? But I don't want to know yet, because I want to wait for the review when it officially comes out. But I'm like, oh, am I going to hate this thing when I finally do get it?
I can't tell you, but you can reprogram the Moonlander, so I could move that six key wherever I wanted. I can make every key a six key.
You totally could.
As a matter of fact, depending on what button you hit on that keyboard, that might be all you can do.
Anyway, but in fairness, he is not wrong.
But in fairness, he is not wrong.
I have definitely done a couple of reviews already,
and they are greater than 30 minutes long.
Now, the honest keyboard review, that hasn't gone out yet, right?
No, I've got to go back to it.
Actually, nothing's happened. I mean, Joe Zack and I can attest to the fact that I don't think we've had more than one breathing hour a day free for the past two months.
So, yeah, soon.
Soon.
It will land soon.
Yeah.
Yeah, no worries.
It's been rough.
The moon lander will land soon.
Yes, on YouTube.
I see what you mean there.
Yes, it was good, right?
I didn't realize I did it until I did it.
All right.
You know, a friend of mine, he just moved.
And so, you know, you get like housewarming gifts and whatnot.
And so I got him an elephant for his room.
And he was like, oh, thank you.
And I said, don't mention it.
I'll get it for his it. I'll get it.
You'll get it?
How do you not get it?
The elf in the room.
Oh, geez.
Oh, boy.
Finally, it took a minute.
It was like too early.
Like, maybe you need more caffeine or something?
Maybe.
That was from Lars.
Thank you, Lars.
That's a good one.
Hey, and by the way, we apologize for no survey this episode.
Lack of sleep, lack of desire to do anything in this world of late has hurt us.
So, yeah, we'll make it up next time.
Yeah, maybe we'll have two or three.
We'll make the surveys be like Alan's tip of the week, where we just have, like, you know, however many surveys we decide to have that episode. How's that?
Sounds good. I like it.
All right, well, let's get back into how recursive functions work then.
Yeah. And so we talked about a second ago how the call stack works, how every time you call a function, it pops a... I shouldn't say pops,
it puts a frame on the stack with space allocated for all the input arguments,
local variables, and a return address.
And sometimes some metadata, depending on your language.
Recursive functions are exactly the same.
The stack doesn't care what it returns to.
It doesn't care what the name of the function is. It just cares about the same. The stack doesn't care what it returns to. It doesn't care what the name of the function is.
It just cares about the number.
So there's no difference in recursive functions.
So generally, if your language supports functions,
it supports recursion.
And there's nothing fancy about it.
Most languages use this kind of stack mechanism,
so there's nothing really magical about recursive functions.
So real quick,
I want to talk about that Fibonacci sequence again.
So I'm going to fix up my math here on air, live, because I carried the one wrong, divided by pi.
And remember, when he's going to talk about these numbers, it's the raw mathematical formula, which is: the Fibonacci of the sequence at position n is the Fibonacci of position n minus 1 plus the Fibonacci of position n minus 2, right?
So it's two calls.
Yep. And so we're roughly going to approximate every call to the Fibonacci function as 64 bits. So it's going to be 32 for the argument of your index and 32 for the return address to get you back to where you were to continue processing. So every time Fibonacci is called, it's going to be 64 bits on the stack.
In this theoretical function that we're creating?
Yeah. I mean, yeah. It's theoretical in that we don't have it in front of us right now,
but it exists. It can exist. Oh, by the way, I realized there's a bug in the implementation
I typed and mentioned earlier.
So, bragging points if you figure out what the bug is.
Are you talking about the B?
Because I replaced it in the thing.
No, another bug.
Okay.
All right.
Good.
All right.
So every Fibonacci call, aside from the, crap, sorry, aside from the ones for zero and one, which return immediately, adds two frames to the stack. So if you say, I want to know what the hundredth Fibonacci number is, we're going to do a hundred squared frames on the stack, times 64 bits. And so we're looking at, oh crap, I've got to do the math here. So what's 100 squared? If I carry the one, 10,000, right?
Yep, 10,000.
So times 64 for the number of bits, so that's a whole lot of bits right there. I'm going to shrink it down to bytes, and I'm going to shrink it down to kilobytes, because that's what we're talking about. So this is 0.6 kilobytes.
So I'm adjusting this here because...
All right.
So for getting the 100th position of a Fibonacci sequence, we're at 0.6 kilobytes or six.
Yeah. Yep. And so half a floppy disk. Right. of a Fibonacci sequence, we're at 0.6 kilobytes or six.
Yeah.
Yep.
And so half a floppy disk, right?
And the big age, age of things, just full of these big, dumb, redundant numbers.
And remember, we only have one megabyte total.
So, I mean, that's a hundred.
A hundred is nothing in computing terms.
So you can imagine if you said, what's the millionth Fibonacci number?
You're going to blow your stack.
You're going to get a stack overflow trying to get to that number, which is crazy because like a million is not a lot.
But a million squared times 64, that's a lot.
That's exponential.
Well, I mean, even just doubling the number to 200 is going to grow that number significantly, right?
Like this isn't linear. It's not like every additional hundred just adds the same fixed amount.
No, when you go to 200, you're talking about 200 squared now,
which is a much bigger number than 10,000.
Yeah, it's literally exponential.
It gets worse and worse and worse and worse.
So yeah, I don't care about how good your computer is.
You're going to have some problems getting to the millionth Fibonacci number with this method. But like you kind of said, we could do this on paper, right? If I were doing this on paper, I wouldn't do it the way we just described.
What I would do is I would start by writing down zero, then one, and then I would do the next number, which is zero plus one is one, then one plus one is two, one plus two is three, and so on.
So you're going to do it on paper? On paper, really? Like, you're going to...
You're going to cache... you're building the array from the front. Instead of saying, hey, I want this number, I want the hundredth number, you're going to go, on paper, okay, let me start calculating from the beginning until I get to that number.
Well, you know what, I don't even need paper, though, because I don't ever have to go back in time. I can just sit here and say, okay, well, three plus five is eight, and then the only numbers I need to remember are the last two. Five and eight is 13.
Well, that becomes your cache.
Yeah, I don't love that term here, because I'm not invalidating anything. But, you know, 8 and 13 is 21. So, 13 and 21: I only ever have to remember two numbers.
So I don't need to have this gigantic stack. Like, I can literally deal with this with two variables, you know, maybe a temp or maybe a counter to keep track of where I am. So I can have a memory footprint that's essentially constant.
So rather than having kilobytes of data to get to the hundredth number,
I can have 64 bits and that's it.
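As a rough sketch of that "only remember the last two numbers" idea, here is what the iterative version could look like in Python (again, our own illustration, not the exact code from the show notes):

```python
def fib_iterative(n):
    # Only ever keep the last two numbers around, plus the loop counter.
    prev, curr = 0, 1
    for _ in range(n):
        # Slide the window forward: the new value is the sum of the last two.
        prev, curr = curr, prev + curr
    return prev

print(fib_iterative(10))   # 55
print(fib_iterative(100))  # 354224848179261915075
```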
So Fibonacci is a terrible algorithm to do recursion on because it's just not necessary.
But it is a good example.
And so that's kind of the technique, or the way things work with a recursive function, where we basically have to go all the way to the end before we can start adding our numbers back up: putting a frame on the stack, frame on the stack, frame on the stack... oh, finally we got to zero, let's go back and start unwinding this thing. That's called backtracking. And it means we're doing the work for the function to accumulate the results at the end of the data. So we go out, we search out to the end of our unbounded, dynamic data structure, and once we get to the end, we unwind it back and actually do the work we care about. So that's a bad idea with Fibonacci.
Because I just said, well, if we just build this thing up from the beginning, we don't even need to keep track of that.
I don't have to keep anything around other than the last two numbers.
And so I can just swap those things around.
So we can do that with recursion. If we rewrite our algorithm, and I'm not going to try to read it because it's not even that hard, but just doing code on a podcast is tough, but we've got a link to it.
But if we can rewrite this function so it does its work up front and it passes the current number, the current sum to its next call.
So we do the accumulation up front.
And so basically the effect is we say, here's the number I just computed and the number I computed before that, pass that to the next function. We can have that function build up recursively. Now, in a language that doesn't support tail call optimization,
there's going to be no difference because we're still going to add a frame to the stack for every
execution. So we're passing more arguments now. Remember, we're passing the value that we
calculated for n minus two, and we're passing the value we calculated for n minus 1 onto the next call.
It's still going to allocate those three variables and the return address, and so nothing's saved.
Now, if your language supports tail call optimization, it can say, hey, you know what?
I see that there's no more operations that need to be called after this recursive call.
It literally just returns its results to the last function on the stack.
So why don't I just collapse that and take this item off the stack, add the new one, and just change this return address. So now instead of saying return to address B,
just go back to A.
We don't even need to keep this middle frame around anymore.
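Here is one way that accumulator-passing rewrite could look, sketched in Python just to show the shape. Note this is our own illustration, and Python itself will not actually perform the tail call optimization, as the conversation gets to in a minute:

```python
def fib_tail(n, prev=0, curr=1):
    # prev and curr carry the two values we have already computed,
    # so the work happens on the way down instead of while unwinding.
    if n == 0:
        return prev
    # The recursive call is the very last thing this function does; nothing is
    # added to its result afterwards, which is what makes it a tail call.
    return fib_tail(n - 1, curr, prev + curr)

print(fib_tail(10))  # 55
```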
Okay, so I don't think I'd seen this defined that way.
So what you're saying is the difference,
so to be upfront here,
tail call recursion or optimization,
isn't that what it's called?
Tail call optimization has to be built
into whatever the programming language is that you're using
or into the compiler, I'd imagine, right?
One of the two.
And by doing so, you're basically doing your recursion
in a particular way that it knows
how to take the results from the previous call,
pop it off the stack,
and then just add the new one onto the stack
is what you're saying.
Yep.
So I hadn't heard it referred to that way,
which is really good
because the other alternative is
you can just write your method in a way
to where you're doing that yourself, right?
So if the language doesn't support this tail call optimization,
you can still achieve the same effect,
but that's, I believe, where you get back into what you said earlier,
which is dynamic programming.
Oh, well, that or doing the problem iteratively.
So the approach we talked about with Fibonacci,
we were basically having a while loop and say,
while number is less than 100, add up the last number plus the last last number
and carry those forward and just repeat until we get to a hundred.
So it was specifically tail recursion, when I mentioned episodes 89 and 97 before; we had specifically discussed tail recursion during those conversations related to, you know, big O and why it matters, and the tree conversation related to data structures.
Yeah. So it's really interesting. So the big catch is that your recursive function call has to be the last instruction, which means I can't even take the result of the function and add one, right? That's not allowed. It literally has to be just alone, on its own.
So, like we talked about memoization: storing a cache off in a reference and just passing a pointer is a great way to do this. So if you can use memoization, either, you know, stick that in a global or pass the pointer to that object on the heap, and then not do anything else with your result after you just kind of pass it back, then that is a prime candidate for tail call optimization, if your language supports it. And most languages do.
So what you're saying is, in the typical case, nothing else is allowed when you're doing tail call recursion.
Instead, your return statement has to be return Fibonacci of something.
That's it.
You cannot do anything else to it except call that function.
Yeah, I've got a link to that solution and you'll see why I'm hesitant to try and describe that.
So let's see here. If you click that link,
it's the one that's
geeksforgeeks.org
slash tail recursion Fibonacci.
These links will be in the show notes too.
Yeah, we definitely don't want to try
and describe this, but
the gist of it is...
Picture a triangle.
Yeah.
Actually, you know, it's funny you say that, because I've been holding off on saying anything. But in recursion, there is a way to visualize what's happening on the call stack.
So just for kicks, I want to say this so that if you've never thought about it this way, maybe it'll help.
If you call something like factorial, we'll take factorial because it's easy, right?
You've got factorial four is four times three times two times one. If you created a function that was recursive for factorial, what happens is you put in four, it adds each method call to the stack like we
talked about before. So picture a pyramid or a triangle growing every time it has to recurse. You pass in four; that gets added to the stack but doesn't get executed, because it says, well, I need the factorial of three first.
So it passes another factorial of three to the stack. So you're growing how tall that pyramid is.
It doesn't do any of those calculations until it gets to the tip of that pyramid. And then it starts doing, okay, I got all the way down to factorial one.
When it's at the tip of that pyramid, it's going to return the factorial of one,
pops that off the stack.
So now the pyramid starts shrinking, right?
You went up to the top of it.
Now it's coming down the other side.
Now it's going to have to calculate factorial of two, return that, then three, then four.
So it does everything sort of reverse order.
So you grow the stack and then it shrinks
as it's returning those values.
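For the factorial walk-through above, a minimal recursive sketch in Python might look like this (our own illustration, not code from the show):

```python
def factorial(n):
    # Base case: the tip of the pyramid, nothing left to wait on.
    if n <= 1:
        return 1
    # Each call has to wait for the smaller factorial before it can multiply,
    # so frames pile up on the stack until we reach the base case.
    return n * factorial(n - 1)

print(factorial(4))  # 24
```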
I mean, I can see it as a line,
but I guess I'm losing that.
It could be a line, but basically what I'm saying is it goes up to a peak.
Why did the bottom of it get wider? Because you were adding more to it?
That's where, I don't know, I was joking when I said the triangle. I don't know that it works that well.
Well, the stack's growing over time, so every time you add a method to it, it goes up to a point, right? So, yeah.
But that bottom frame didn't change in size, man.
I think, yeah, there are examples where things can get bigger and smaller and kind of go back in and out. I think it's going to get real messy if we try to stretch that analogy. Or it's a graph, and it points from A to B.
We're going to talk about graphs now, right?
Just like I mentioned
doing this stuff on paper back in computer
science school and whenever
the 70s or whenever I was in there.
It wasn't that long ago.
A little bit more. It was closer to the 70s than it was to now, geez. And it went radio silent; everybody was like, god, he's been around a while, he really knows his stuff.
I was born old, too. So, born old. Anyway, yeah, the deal is,
one of the easiest things you can do with this is uh to have pencil and or pen and paper or, you know, like a notepad open.
And basically every time you call a new function, write the inputs to the algorithm at that level of indentation.
And whenever you go back, you know, carry your result back there and you'll end up drawing almost like a Christmas tree on the side of your pages.
It kind of goes in and out depending on the algorithm you're working on.
And when you get to the very bottom, when you, like, hit that position zero, you're done.
Your stack has been completed.
But like Alan said, you're dealing with a stack.
It's only one dimension.
And so that's where, like, the line thing comes in.
So I'm doing the thing I just said, like, we shouldn't do.
We're like, yeah.
So it goes out to a peak and comes back in.
See?
Ah, geez.
Told you.
Yeah.
All right.
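If you want to see that Christmas-tree trace on the screen instead of on paper, one hypothetical way to do it in Python is to print the inputs at an indentation matching the current depth. The depth parameter and the printing here are purely our own illustration:

```python
def fib_traced(n, depth=0):
    # Indent by the current recursion depth so the output drifts right
    # as the stack grows and drifts back left as calls return.
    print("  " * depth + f"fib({n})")
    if n < 2:
        return n
    result = fib_traced(n - 1, depth + 1) + fib_traced(n - 2, depth + 1)
    print("  " * depth + f"fib({n}) -> {result}")
    return result

fib_traced(4)
```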
So what's pretty interesting about this is doing Fibonacci in this way,
such that the last method, the last thing that happens in our method,
is the call to the Fibonacci function, and we pass in the values we computed.
It doesn't change the big O at all.
We're still doing, um, that's not true. Hey, let me say this differently.
Depending on how you write your function with tail call optimization or not,
it doesn't affect the big O time. You know, if you change your algorithm, like we did,
you know, we did change a little bit. We're not doing redundant values anymore. So that's why I goofed there.
But the gist is that tail call optimization does not affect the big O runtime of your application, of your program.
It affects your call stack and the memory allocation and how many calculations you can do before your program explodes.
But strictly speaking, if you kind of had a compiler flag for tail optimization on or off,
it doesn't affect the big O runtime.
But in practice, it very much affects whether or not your program can complete at all and how long it takes to do, which is interesting, I think.
Okay, because basically what you're saying is mathematically nothing changed.
It's just how you can deal with it, what will work or not.
In this case, because the space is so limited and memory allocation is expensive, it's one of those things where the actual, real-world, technical implementation details of your algorithm can dramatically affect whether or not your program can complete in a reasonable amount of time. We've got some good links here, too.
If I recall, though, because you said
that it was only changing the return for that last call, right? So you still had to go through... that's why it's not changing the big O, because you still had to go through, let's say it was 100 calls, you still had to go through all 100 of those. It's just that instead of the 100th call returning back to the 99th, and the 99th going back to the 98th, and the 98th, etc., etc., the 100th could just skip back to the first.
Yeah, in terms of the frames that were on the stack, yeah, totally, absolutely. You're not changing the amount of work that needs to be done.
Now, I did say that Fibonacci was a bad example there because we did change the algorithm in addition to, you know, what I said about accumulating those values.
It actually did change the amount of work that we needed to do.
So in this specific example, because I did a poor example, poor job of choosing an example, we did actually change the algorithm in order to make it support that. If you've ever seen a recursive example that
has a function named go, where you've got almost like a wrapper function, and then an internal
function called go, that's a really common function name for this technique where you'll
see that somebody has kind of rejiggered things such that the recursive call is at the end.
And so they'll kind of do some things to set up that accumulator value or accumulator values in
the case of Fibonacci and pass that into this internal function that's actually going to do the recursion.
So it's actually not that bad in most cases to convert things to be able to be tail call optimized.
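A rough sketch of that wrapper-plus-go pattern, again in Python just to show the shape ("go" is the conventional helper name mentioned here; everything else is our own illustration):

```python
def fibonacci(n):
    # The outer wrapper keeps the public signature simple and just sets up
    # the accumulator values for the real recursive worker.
    def go(i, prev, curr):
        if i == 0:
            return prev
        # Tail position: the recursive call is the last thing go does.
        return go(i - 1, curr, prev + curr)

    return go(n, 0, 1)

print(fibonacci(10))  # 55
```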
But, you know, it does kind of make things just a little bit more awkward to read. But the benefit is you don't have to worry about stack overflow exceptions, which is the big downside. But like I said, not all programming languages support tail call recursion, and sometimes they'll have a flag that either turns it on or off, or it's something kind of special.
So I guess, if you have a flag, what's the point?
A compiler flag.
Yeah, I don't know if there's a downside to tail call.
Yeah, so I know some C compilers,
modern ones not so much,
but back when tail call optimization
was still kind of a newer novel thought,
there would be flags we could say like,
you know, turn it on or off.
I think, I want to say Java had that, but I don't remember. Let me double check on that.
That's interesting.
Yeah, I don't know why you would want to have it off.
Maybe the compiler sometimes makes mistakes because it's reliant on the compiler seeing, oh, I don't have any more work to do after this.
Right, what's the pattern?
It has to look ahead a bit.
Right.
But there is one very famous.
I guess where it would bite you, though, is from a debug perspective, right? Because it would mess with the stack in that regard; you wouldn't have the stack to look at.
Yeah, I googled, you know, looking for, like, hey, what's the disadvantage to it? And there's one saying, you know, it would have no stack, so...
That makes sense. Yeah, because if it breaks somewhere in the middle,
all you have is that middle frame that it added there.
So you have no history of how it got there.
Yeah, so I guess that answers the question about why you'd want to use the flag.
But I was about to ask,
oh, should we look for languages that support tail recursion?
And, you know, like as a, you know, an industry,
should we try to focus on those, you know,
instead of, you know, the ones that don't support it?
But it sounds like, you know, there's pros and cons to each.
So all good things in moderation.
Some security methods that basically look for code injection
or security vulnerabilities rely on counting stack frames
to see if anything's been inserted maliciously.
So if you start deleting these middle frames,
they can't do that anymore.
And so it's hard to detect when things are bad.
And like you mentioned, the debugging experience gets weird.
So in C Sharp, for example, you can have a publish,
and Node has the same thing. A lot of languages have the same thing where you can run it
in debug mode or you can run it in release mode, which gets rid of
a bunch of stuff that normally runs and does some optimizations.
Like this would be an example of something where you wouldn't necessarily want it to do that collapsing in
debug mode, but when you release it, yeah, turn that baby on, right?
That makes sense.
So, Python
does not support tail call optimization.
That's it. It's dead to me.
Done.
Java 8,
all the way up to 8,
did not support tail call.
You know what? I'm sorry. Java 11 is when it started. That's when tail call optimization came in.
And that's why we only just recently started liking Java.
That's right.
Hey, everything still requires Java 8 for some reason.
Who cares that it's on version 95?
That's true. This is why we can't have nice things, Alan.
That's right.
Scala did some stuff, so it's kind of got compile-time support.
C Sharp does not innately
support tail call recursion.
It's dead to me also.
Actually, I think I said pretty much
all modern languages support tail call
optimization.
I was just wrong. They just don't.
Yeah, what you meant is JavaScript
supports tail call recursion.
That's why I do everything in JavaScript.
Actually, JavaScript does. Because you sent me that video earlier of the ladies singing to the Lion King theme, and that's my tip of the week, man.
Oh, dude, I'm sorry. I take it back.
That's cool. The best talk I've ever seen in my life on YouTube was "Tail Call Optimization: The Musical," which was Disney show tunes all about tail call optimization.
excellent you know what I'm going
to one up that and I'm going to go find
the video of the
the lady that did one of the
talks over at NDC London
where she was doing music through code.
I've never in my entire life, man.
Like, it was amazing.
Yeah, that sounds good.
Anyways, go ahead.
And I know you like music.
You like code.
You've actually done some YouTube videos on making music with code.
So you will very much enjoy this one.
Yeah, yeah.
I'll give that a watch.
That sounds good.
I am updating the notes here.
Yeah, so
what have we talked about? What have we not?
So yeah, if you can rewrite your program... version, and then...
Welcome to Coding Blocks, episode 154.
Yep.
So
yeah, I didn't realize that so many languages didn't support it
because I ran out of time putting this together.
So let's see.
Yeah, we said JavaScript does.
I don't know.
Do you have any other languages you want to check?
C++? Supported.
Kotlin?
That's another one of those things. There's different compilers for these.
Kotlin?
There's Java...
Supported.
Well, let's see. Well, I guess it's its own language just compiling to Java bytecode, so that's different. I mean, yeah, I would assume the Kotlin compiler converts tail-call-optimizable function calls into iterative code, so it's the compiler doing it when it's compiled. And, you know, if you're ever looking at, like, JIT code or whatever, this would be a part where that's not happening.
Yeah, but that's fine, right? Like I said earlier, it doesn't necessarily have to be the language; it just has to get turned into the bytecode that will do it efficiently for you, right? That's all that matters.
What about Rust?
Rust has situations
where the tail call recursion is guaranteed to be optimized.
It looks like there's a couple.
I'm sorry, it's never guaranteed.
But the optimizer may end up doing it depending on some documented, very finicky definitions of when it can be detected.
But basically, Rust has a big emphasis on memory, on memory safety.
And so it needs to be very careful that basically the variables are destroyed appropriately or whatever.
But this is over my head.
I have one more request, only to see if it can redeem itself and maybe win us a listener to PHP.
All right, let's see.
You would just set up PHP like that. That was wrong of you, man.
Dear listeners that are PHP programmers, I apologize for Alan's insensitivity.
Yes.
We already lost somebody whose first episode was that,
but that's fine.
I, you know, I do. While Joe's looking that up, though, you know, for those on the Slack: if you've listened to this show for any length of time, you've heard references to Joe, Recursion Joe, and you can meet him on the Slack community if you haven't already.
I don't know if he knew that we happened to be doing this, or maybe if it was because of the Python conversations we've recently had, but he sent me his favorite Python script. It is recursive.
And he's like, he passes in, like, you know, he'll call, hey, who's awesome? And he'll call this function called get name, and he'll pass it in Joe. And it'll be like, hey, if the argument is Joe, then it's Joe; otherwise, you know, return recursion. And it'll just keep calling get name with one of those two values. So it's always Joe, recursion, Joe, recursion, Joe, recursion, Joe.
It's awesome.
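From that description, the script is something along these lines. This is purely our paraphrase of what was described on air, not the actual file Joe sent:

```python
def get_name(name):
    # Whichever value you pass in, it just keeps bouncing between the two,
    # printing "Joe", "Recursion", "Joe", "Recursion"... forever.
    print(name)
    if name == "Joe":
        return get_name("Recursion")
    return get_name("Joe")

try:
    get_name("Joe")
except RecursionError:
    # Python eventually bails out once the call stack gets too deep.
    print("You got this!")
```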
He's a really sharp dude over there, too. Highly recommend going and interacting with him on Slack, or even in the comments on the episodes. He's definitely given some good stuff up on there.
Yeah, when he does hit, like, a recursion limit, you know, because when he gets that exception, then it's like, you got this.
I'll share the script that he gave, because it is funny. All right, Joe, what did you find?
I found a list on Wikipedia.
So Haskell, yes.
Erlang, yes.
Lisp, sometimes.
JavaScript, sometimes.
Depends on different engines and stuff.
Lua, yes.
OCaml, yes.
Python, no.
Rust, kind of, sometimes. Most of the time.
Is it really written like that on the Wikipedia page? I want to see this Wikipedia page.
"It may be done in limited circumstances, but it's not guaranteed." Just trust them.
So, literally, "maybe" is the answer. It may be done in limited circumstances, but maybe not.
Where's... where's PHP?
Sounds like if it says limited circumstances, then I would interpret that as most of the time it's not.
Yeah, right. Yeah. Kotlin, yes. Perl, if you explicitly set a flag, or with a very explicit variant of goto that takes the function name. Scala, yeah. So basically the functional languages: F#, Clojure. Well, Clojure if you use a special function.
So whatever language you're working in, probably not actually.
Probably not.
So I'm glad we discussed this today.
So useless information.
Hey, JavaScript is on there, the one I know and love. Kotlin, you know, hey. And Lua, which I know and love.
We should just title this episode: Episode 154, Useless Information.
That's right.
Almost useful information.
Oh, almost useful.
Yeah.
But, you know, I think it's neat.
And you can imagine it doesn't have to just be recursion that this tail call optimization works for.
Anytime it sees a function where all
it needs to do is just return it and say, hey, you know,
slip that out. It's still doing work though
so I would argue that the
while loop is still better because
you can't just reuse a frame
on the stack. You always have to take
the last one off, put a new one on. So you're doing those
allocations still. So you're taking a tiny
hit, but like you said, they're super fast,
right? Those call stacks are hyper fast because it's already allocated RAM for that.
Yep.
And yeah, so that's tail call recursion. And I wanted to talk about tail call recursion just because it's one
of those things it's like you see come up in interviews or something or someone will talk
about fibonacci and pretty much any time you talk about recursion uh they're going to show you fibonacci and they're going to talk about why
fibonacci is bad and they're going to say but but now you know if you're working in like f sharp then
you can uh do this technique and make things better so this is like one of those fizz buzz
things it's it's just sort of necessary for for interview type things. Yeah.
We have a list of companies you want to interview at.
So I have a nice
article here for
tips on how to write recursive functions.
I initially thought I'd be able to find a bulleted list
that's like, hey, here's five tips.
Here's where we go. Because I kind of have my
own little list when I'm working with
LeetCode-type problems. So I figured I'd share my list for how I think about recursive algorithms. And this is nothing official; this is just, like, how I think about it.
This is Joe's official list.
Yep. Joe's official recursion list... the official unofficial list.
And I think it's awesome that he really is so tired that he's not catching any of this stuff.
No, I mean, I hear it, but just like the humor part of my brain has been bypassed.
It's like an emergency mode.
So I recognize that it's funny.
I just can't laugh.
Joe, that's what you need to do.
Go ride your bike. Get out of the house and just go for a bike ride.
At least you can go ride yours.
I've been working on mine.
So mine's in a thousand parts right now.
What?
Yeah, I'm replacing the drive,
the whole drive train on it and everything,
but I'm like waiting on parts to come in.
And so your road bike,
uh,
no,
the mountain bike.
Yeah.
We'll get on the road bike,
man.
Well,
yeah.
I mean,
technically I guess I could do that,
but,
uh,
side note,
outlaw is possibly the fastest bike rider.
I went riding with this dude, and I thought, you know, I go riding pretty often, whatever, 20 miles, no big deal, this is the kind of stuff I enjoy doing on the weekend. He was like, yeah, I used to do some riding, I went ahead and got a new bike, let's go, let's go ride the Silver Comet Trail. And Outlaw gets on this bicycle and disappears into the sunset immediately, and me and our buddy John, who was on episode 100, we're like, what the heck, man? And I don't think we had road bikes at the time, but still, Outlaw just took off, and he looked so casual. He's, like, talking to us, and we can't even hear him anymore because he's so far ahead of us. It's ridiculous.
I do remember that particular day because I thought we were all going.
I looked back and I'm like, wait, where did those guys go?
Yeah.
Like, what the heck, man? They have a flat already?
Yeah.
Yeah.
It's just crazy.
I literally couldn't see.
I had to turn around and go back.
Yeah.
Incredibly talented bicyclist.
Did you know, though that? I mean,
here's the downside.
Uh,
did you know that bicycles and motorcycles,
they can't hold themselves up?
Um,
I mean,
kinda,
I mean,
no,
they can't,
they honestly can't.
They're like you,
they're too tired.
Oh,
geez.
The other one from Mike RG.
Thank you.
Yeah.
It fit right in.
I'm like, well, you can get a kickstand.
Well, it's all about the setup, Alan.
Hold on now.
It fit right in.
The joke, telling a good joke, the art of telling a good joke,
it's all about the setup.
My whole conversation about getting him to go ride his bike was the setup
for that joke.
It was well done, sir.
I saw what was happening at the end, and I saw Joe like, no, I totally can't.
I was like, oh, he did.
He's got leaning on something.
He bit.
Yeah.
Why don't fancy bikes have kickstands?
Fancy bikes.
Fancy bikes.
If I'm paying $1,000 for a bike, you better have a kickstand, you know?
Yeah, it totally should. I agree. But I do not... I do not agree with this assessment. Gotta save them 100 grams.
All right. So, Joe's tips on how to do recursion. Where are we going?
Start with a bike ride.
No, so start with the end.
Start with the stop condition.
So didn't really get into
kind of the overall structure
of a recursive function,
but the deal is generally the same
for all recursive functions.
You're going to have
some sort of condition
in the beginning that says
if something, then we return.
Otherwise, we're going to do some work that ends up calling that function again. So, like, in the case of Fibonacci, we said if it's zero or if it's one, return; otherwise, you know, do these recursive calls. And so I like to always think about my stop condition first, because that just makes sense to me, and that's something I saw on a couple of different lists.
Now, number two. This is the most important one, and I'm going to rephrase it here.
Whoa. Oh my gosh, that stop condition would be hard to write first if you're thinking of optimizing for tail recursion.
Yeah, the compiler... yeah, that's tricky. So I definitely wouldn't... like, when I've kind of dabbled with messing around with tail call recursion, I definitely did it kind of the wrong way first, or the easiest way for me to understand and make sure it works, and then tried to do tail call.
And then to make sure that it's actually doing tail call is basically print
out the stack or see if it finishes and you know that something's better.
Now the number,
so number two tip,
most important,
do it by hand before you start coding.
If you cannot manually work through a simple example of the problem you're trying to solve recursively, like in Notepad or with a piece of paper, for a simple input, then you're not ready to start coding it. I still fall into that trap even though I know this.
Like, I'll have an idea for an algorithm and I think I know it: okay, I'm going to loop through the tree here, get to the bottom, do the thing, divide by two, and then we'll go left or right. And then I'll start coding it, and as I start testing, I realize I missed some big piece. And so it's really important that you understand very deeply exactly what you want to do. And for the most part, if you're doing these kind of LeetCode-type things, your program is going to be like 10 lines of code.
It's not going to be that complicated
because the algorithm is going to be very tricky but also very small.
So I definitely highly recommend making sure you really get the right answers
and you understand exactly what needs to happen in those instructions.
When you say... okay, sorry to interrupt.
No, I'm curious here. Just a show-of-hands kind of thing, because this is somewhat related, but kind of not, but kind of it is. And it totally is, and that's the recursion part. So,
you say do it by hand, though. So do you often keep, like, a piece of paper and a pencil beside you, and you're like, okay, let me write this out, and this is how I do that, blah blah blah? So when you say do it by hand, are you literally on pen and paper, or pencil and paper? Because, you know, I mean, you're not a monster.
Yeah, actually. So if you're doing, like, serious interview prep, one of the best things
you can do is take your hands off the keyboard. Oh, you know, this is like whiteboards. This is maybe old advice, you know, pre-COVID advice. Take your hands off the keyboard,
go get a notebook and a pen and do that problem, pen and paper. And you don't necessarily have to,
you know, pseudocode, but make sure you can work through that problem by hand without the compiler,
without the functions. Just make sure that the pseudocode algorithm works before you start typing because man when you start typing that's when things that's when you end up down
the rabbit hole and in whiteboard interview you're generally just doing that uh that whiteboarding
writing the algorithm and stuff. So if your solution for prepping for an interview is to actually solve problems on LeetCode, you're going to spend probably three-fourths of your time debugging off-by-one errors and, you know, why is there a comma... oh, the function didn't do what I expected.
And none of that stuff is applicable for the interview.
So you're wasting three-fourths of your time for a whiteboard interview
by practicing on something like LeetCode.
So it's really important to think that, like, that pen and paper,
the stuff you do there to make sure that your algorithm is good
before you go and type it, is so much more important than the actual typing and getting the right answer and passing all the test cases.
I mean, but that's in the case of like interviews and whatnot.
I'm talking about like even if it was just for work, though, you know.
Well, the same applies, though.
I mean, in all honesty, what he said is make sure you've locked down your algorithm, then implement it,
right? So if you fully understand exactly what you've got to do, I think the same applies,
because he was saying you waste three quarters of your time on leak code, debugging your off by one
errors and all that kind of stuff. Whereas if you go in, not having to try and solve the algorithm
while you're also trying to write the code for it, then you're focusing on just writing the proper code and not trying to
implement the algorithm at the same time.
And you know,
mentally.
Okay.
Let's go back to my question then,
Alan,
do you try to solve the problem you're working on with pen and paper, like, physical pen and paper?
Um,
no,
not pen and paper,
but I will draw out something.
I guess the answer for me is better stated in, I will actually go outside of coding to try and write out the steps or draw out the steps in another application.
So maybe not even pseudocode, maybe drawing boxes and saying,
Hey, I want this thing to happen here and this to happen here. Right. So, so not necessarily
pen and paper, but, but not writing code. So I guess, I guess I'm the weirdo because,
uh, and, and this is probably like, I guess, bad habit on my part. Cause I don't like, I,
I would rarely go to pen and paper. I typically would like
write out in comments like I'm still within my ID or whatever. And I write it out in comments like,
OK, I think I wanted to do this. Then I want to do that. And I want to do that. And then I want
to do this. And this is what I want to return. And then I start trying to fill in the blanks
in between. Like I'll have like different comment lines, you know, for what I think the different
steps are. And then in between those comments, I will start to like write in, okay, I think this
is what I'm trying to do, but I don't, I, you know.
Oh no.
So I don't do pen and paper for like line of business type stuff.
Like, Oh, get the record from the database, uh, times by two, put it back.
No, no way.
I'm specifically talking about, like, LeetCode, those kinds of mathy challenges where,
like, I need to see the numbers changing for me.
Harder problems.
Which never happens at work.
Yeah.
Never.
I'm talking about the same kind of thing too. I would like just write it out in comments as comments.
Yeah. And to me, that same thing is pen and paper, right? Like, you're writing out algorithms.
I was specifically trying to call out the difference there. I disagree with you, Alan. There is totally a difference between the mindset and the thought process
that you might use if you're writing it by hand on paper versus what you're doing on
the computer.
Because, because like your example for a moment, like of like drawing squares and everything,
like you give me a pen and paper, it's super easy to like draw arrows, draw lines or whatever, you know, I can be way more free formed. Whereas if I'm in an
IDE and I'm just like writing comments, then I'm limited to characters, printable characters,
you know? Yeah. Yeah. Like I would probably open up something like a PowerPoint if I,
if I were going to do it on the computer. Right. So yeah, it's definitely, I will step outside of code for
a complicated problem to try and hammer out the algorithm before I try and implement it for sure.
Let's say if you're doing like a red, black tree or you say just some arbitrary algorithm, like,
hey, what we want to do here is add up all the right-hand values from a tree. So if it's ever
a right-hand child, then add it up and return that value. Then that's somewhere where absolutely the pen and
paper comes out. And so I start kind of like verifying the numbers, like then my instructions
are, you know, kind of correct. And that's because I'm doing the pen and paper not to pseudocode,
but to check that my pseudocode is correct. So before I want to get into writing the loops,
and I'll do that all the time on LeetCode.
I'll do it on, you know, like basically in the editor or whatever,
where I'll say like, okay, if I pass in a 10,
it's going to, because the number's even,
I'm going to reduce it by two and, you know, whatever.
So I'll kind of like make sure that my plan, the pseudo code,
so I write those comments just like you said,
but when I'm doing like a LeetCode or something,
before I try to start typing that, converting to code i'll verify my solution because a lot of times i'll
get kind of far into something and realize oh crap i didn't handle this case that's really
important. It's so easy to misunderstand the problem.
Yeah, I mean, I guess that's where I'm, you know, trying, I guess, to learn from my own... for my own selfish reasons, you know, to learn the process, right. Because, like, in the past, when I have broken out pen and paper, it's because I was debugging a problem. Not because I had started with pen and paper, like, oh, hey, how could I solve this LeetCode problem? It was like, why isn't this working? And then, okay, fine, let me draw it, let me write it down and see if I think what I'm doing would work, you know, the way I'm doing it. But yeah.
or something i can imagine trying to do merch short without putting a couple values up there
and saying like okay so now if i do this this happens and kind of drawing out a little tree
or whatever right finding what your outputs are at each step so that you know how to validate them
once you implement. Yeah, that's kind of where I'm going to.
So I think where we're at then is that as part of like Joe's official and unofficial instructions
for this, he's definitely more, you know, managerial and plan-ahead, and I'm more, like, Leeroy Jenkins and get into the editor.
So I would say... you're not really into the LeetCode thing, right? If you start doing more LeetCode, you're going to end up doing this, because there's nothing worse than spending an hour fixing your commas, converting all the greater-thans to greater-than-or-equal-tos, and finding all those stupid off-by-one errors,
only to realize you solved the wrong problem.
Or that you missed part of the instructions or something like that.
I mean, it's so easy to spend your entire time solving the wrong problem.
And that happens in interviews too.
So I highly recommend if you're in an interview situation, kind of walking through an example.
Say like, okay, we're sorting these numbers.
So I'm going to draw these numbers.
Here's the algorithm I'm planning on.
Is this the result that you expected?
And if they don't say yes, then thank goodness you didn't start typing.
I like Joe's subtle jab.
Like, you know, if you're not writing it out on pen and paper, you must not be doing
LeetCodes then.
No, no, not a jab at all.
Not a jab at all.
I like seriously.
Totally a jab and I'm out of here.
Forget it.
We're done.
We lasted 154
episodes, and this is as far as we got.
Yeah, so LeetCode problems are a whole other kind of programming. It bears very little resemblance to day-to-day work. You know, I'm sure there are exceptions, but I would argue, like, finding the longest palindromic substring is not something that you typically do.
What about the longest or sorry, trapping rainwater problem?
I'm just looking at problems here on LeetCode for dynamic programming.
Finding the maximal rectangle.
That's not stuff that's anything like what I do.
My day job, man.
That's just a Tuesday.
Yeah.
That's wrong, chief.
Finding the best time to buy and sell stock
So maybe sometimes. You know, well, I guess all of this is a great segue into your third tip here, in Joe's official unofficial list of recursion tips that are unofficially official.
Yeah. If you want to get good at recursion, then you're going to need to do a lot of problems.
That's good because there's
a lot of ways to find problems. I just went and looked
and LeetCode has 1,700
dynamic programming problems.
If you want to learn about dynamic programming,
then here you go. There's even
a couple that are marked easy, which I doubt
because LeetCode's
easy are like my mediums.
I think Leetcode's hard but yeah and that's
seriously no dig at all I think you can live
your entire life without doing a single problem
like this and be an amazing coder
and solve amazing and hard
problems just
totally different kind of thing I was only
teasing you
I'll tell you, that part of my brain is dead. It died.
You might need to go see a doctor, man.
I mean, I'd love to, although right now I personally wouldn't want to go see a doctor unless I really had to, just because,
you know, with the pandemic and everything, that's the last place I want to be unless I absolutely have to be there, you know. Although I did go to the doctor recently, because I thought I was having some hearing problems, and he said I was going deaf. And I gotta tell you, that news was hard to hear.
Oh my gosh. Oh my gosh. Another one from Mike RG. Thank you. All right.
Well, I need a refill, so let's wrap this thing up.
Let's do it.
So just to kind of top things off, so recursion is a powerful tool in programming, and it roughly boils down to a function that calls itself.
It's really great for dynamic and unbounded data structures, things like graphs, trees, linked lists.
Those data structures are very tightly associated
with recursive solutions.
The downside is recursive solutions can be memory intensive,
specifically with the call stack, which is very limited,
and so it's easy to overflow if you are doing things inefficiently
or have a large data size.
Tail call optimization is a technique that the compiler can use assuming that the compiler
supports it and can help mitigate that somewhat.
And if you're doing like a FAANG interview or one of these companies that think that
they're FAANG even though they're not, then you're very likely to hit a recursive problem
so it's worth kind of practicing those if that's what you're going for that's one of your goals
and um knowing that tail call optimization exists and the downside of recursion is really important
because they're probably going to hit on that in the interview and we're going to have links here
to uh all the example code that we talked about and also thousands of problems that you can go solve in your browser right now.
No IDE required.
All right.
So, yeah, we'll have a lot of links in the resources we like section related to this episode.
And with that, we head into Alan's favorite portion of the show.
It's the tip of the week.
It was my favorite portion of the show until today, when you were like, hey, we need a tip.
And I was like, oh, oh.
I don't have it.
I don't have 13 tips.
Right, yeah.
I only have like eight.
Oh, gosh, what do I do?
Yeah, rough day.
All right, well, it looks like I'm first.
So my tip is to take care of your feet.
Whoa.
If you don't take care of feet, if you don't put your feet first, then you're doing yourself a disservice here.
So I found a great article here with very excellent tips for keeping up your foot hygiene, because nobody wants to smell your stinky feet.
You know, maybe it's cute when you're a kid. It's not so cute when you're old. So, you know, don't wear high heels every day. Use the soap in between the toes. Pumice stones. Just take care of stuff. We've got a list of common problems here. So there's a great article here on how to take care of your feet, because your feet take care of you.
I thought it was going to be the musical thing, but okay. Well, this got weird.
Yeah, so take care of your feet.
Was that it? Was that literally the Joe tip of the week?
Yeah. Yeah. I got this great moisturizer. I'll put a link. I think that... I credit you.
I just put that.
You slop that baby on a couple times a day, and I mean, your feet, it's just like baby feet.
Just squishy, you know, like baby feet.
The humor part of his brain is kicking in now, I believe.
I think so.
I think so.
I think we had enough jokes from Lars and Mike RG that it's finally triggered the Jay-Z that we all know and love.
Self-care is important, y'all.
It is.
It is.
So is lunchtime, which is coming up on, you know, lunch.
So we'll have to eat pretty soon.
But, yeah, you know, maybe it's not lunchtime in your time when you're listening to this.
But have you ever, like... side topic: have you ever, when you eat something hot, accidentally, you know, rubbed your eye or something? Like, oh god, hot sauce.
Oh yeah, that's happened. Oh yeah, it's like the worst, right?
Yeah. But so the other day I was eating a sandwich, and I got ketchup in my eye instead. Now I have Heinz-sight.
Oh gosh.
Oh my gosh.
Okay.
Thank you, Mike RG.
I love that.
I got the giggle out of Alan.
All right.
That was really good.
So, okay.
So I got two tips for you this episode. The first one is just: if you aren't already using labels on your Kubernetes objects, you know, be it pods or deployments or whatever, you should be, because what that allows you to do is use those labels as part of
your selectors, which is, uh, you know, really helpful because then like, let's say, for example,
you're in a situation where maybe you have, like, a bunch of, say, all your nginx front ends, right? And you might have a label like app=website or app=web-server, for example, right? If you have that label there, then you could do a command like kubectl delete pods -l app=web-server, so that you could let Kubernetes delete all of
those instances of whatever that selector is, for example, which,
you know, sounds like really bad. And, but, but the way Kubernetes would work is that that delete
would cause it to respawn whatever those things are, which is really cool because then if like,
let's say you have the scale of your web servers set to like, you know, you want a hundred of those,
those things, you don't have to know how many there are. You're just saying like delete the whole thing.
And it's going to, based on whatever your pod disruption budget is, bring down some certain
number of them at a time and then respawn them. Cause that's what the delete is doing rather than
doing the opposite, where you might say, hey, let me manually scale that deployment or that replica set down to zero so that all of them die, and then I'll scale it back up to 100, which would then give you a period of outage time. But, I'm getting lost on the delete command,
but the point is the fact that you were able to use the selector, not the command but the parameter. You didn't have to care about anything, you know, what the number was or whatever; you're just saying, apply this to all of these things. Which is helpful even for, like, if you're just wanting to get the pods, for example, to just look at them.
Right, otherwise you have to know the names of the pods. So this label allows you to get lots of things all grouped together by whatever you
choose to make it. And that's actually a great call out to that because when you do create those
pods, like Kubernetes will add on like a random, you know, string identifier to make that pod name
unique. So like if you had web server, then it might be like web server dash some random string for the first one, web server dash blah, blah, blah for the up to the hundredth one so that they would all be unique.
But when you get the list of pods, you'd be able to get an idea as to what they are based on the pod name.
But so that was the first one.
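For what it's worth, the same label-selector idea is available outside of kubectl too. Here is a rough sketch using the official Kubernetes Python client; the app=web-server label and the default namespace are just example values, and this is our own illustration rather than anything from the show notes:

```python
from kubernetes import client, config

# Load credentials the same way kubectl does (from ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# List every pod carrying the label, without knowing any pod names.
pods = v1.list_namespaced_pod(namespace="default", label_selector="app=web-server")
for pod in pods.items:
    print(pod.metadata.name)

# Deleting by the same selection lets the Deployment/ReplicaSet respawn them.
for pod in pods.items:
    v1.delete_namespaced_pod(name=pod.metadata.name, namespace="default")
```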
And then the second tip that I had for you. I have an error in my show notes here, but... I've mentioned my, what would you call it, like, a man crush on Security Now. Was that right? No. What's the term for it?
Pod crush.
That's right, whatever. But yeah, we've referenced the Security Now podcast in the past. I've been a long-time listener of it since its inception, and, you know, it's an acquired taste, like any podcast, like listening to this voice that you're hearing right now blasting through your eardrums. But episode... well, actually, technically it's not the latest episode,
but one of the previous episodes, 808, called "CNAME Collusion." If you haven't listened to it, then I would suggest listening to it, especially when you get to the main meat of the episode, where they talk about the CNAME collusion hack that's going on. It will make you mad. It's basically like, oh, that was clever, and yes, the system allows for it, but I'm mad that you did it and you exploited it for something that we didn't intend for it to do.
So,
did you listen to that episode?
Either of you?
I did.
I did.
And that leads...
Are you done with this one?
Well, I was curious.
Yeah.
I mean, I wasn't sure like how much detail I should go into with that episode.
But yeah, otherwise I can be done.
So, yes, I listened to the episode, which is also where my tip of the week comes in.
If, if it's weird, you need to give a little bit more detail.
Okay.
So basically, in a nutshell, what that episode was talking about was: the three of us, we have all worked on creating big e-commerce websites.
And one of the things that they do is this thing called retargeting.
And if you don't know what retargeting is,
I guarantee you that you have been a, not a victim of it, but, uh,
recipient.
Yeah.
Recipient of it is a much better word.
So basically what retargeting is,
is, let's say you go to some random website, or you Google, and you're like, oh, hey, I'm looking for a new seat, a new saddle, for my bike.
A kickstand for your bike.
That's what you're looking for.
You're looking for a kickstand.
We need to have some words.
So you're like, hey, here's a new saddle. And you look at a particular one. You're like, that's the one, I really like it. But, you know, maybe you buy it, maybe you don't. But then every Google search from then on, and every Facebook post you look
at from then on, or like any other random site that you go on that has advertising on
the page, you'll see like,
you know, 18 other, you know, advertisements for that exact seat that you were looking at, right?
That is, that is what retargeting is. And there are, you know, a couple dozen companies out there
that have different offerings for this. And the three of us, we specifically worked with Criteo; that was one of them. And they were the ones specifically called out because they specifically sent their customers an email saying, hey, we need you to implement this thing. And basically, what they wanted done was... you know, the world, in the days of the
post Snowden era, right?
We as an industry have gotten much better and much more diligent about security and
things like, you know, HTTPS over HTTP, for example, and getting
much better about like, you know, recognizing where, what data can be used to track you
and getting better about like privacy and everything.
And, you know, some companies like Apple, take Apple, for example, they like, you know,
really try to market on that.
Like, hey, we're privacy focused first,
blah, blah, blah. But basically like, you know, even in your browsers, like everything has gone into this like whole, you know,
a posture of privacy and,
and protecting your privacy and things like that. Right.
And so what Criteo and others did was they discovered that they could kind of abuse the DNS system in such a way that they could get around whatever privacy guards and tracking guards that you have in place.
And so the way it works is, they're using a CNAME record for that. And again, Criteo was specifically called out because they sent the email out informing other companies that they needed to do this. But basically, the way it works is, let's say that you're Facebook and you get this email from Criteo. And what Criteo says is, hey, create a CNAME record so that, let's say, ads.facebook.com will resolve to one of our websites, right? But then all your
calls, when you want to call our library, you're going to go to ads.facebook.com, right? And that way, from the user's perspective,
they're only visiting facebook.com.
They don't even realize that they're going to some other random,
like a Criteo website, for example.
And all of their tracking ignores that too.
But the downside and where the security implications come into play
and where your concern should be from a security perspective is that it means that now the way first party versus third party cookies work is that ads.facebook.com is a first party cookie or a first party, you know, domain. So all of your Facebook cookies are going to go to ads.facebook.com,
which is really in that example, being translated back to Criteo, which means that would include
authentication cookies, right? So I'm calling out facebook.com, but really, specifically for facebook.com, you would hope, like, oh god, hopefully they would never do something like that, right?
I don't know specifically which ones were, but I was just using them as an example, so don't get caught up on that.
But at any rate, the point is like episode 808, he goes into more detail and explains it way better than I did because there might even be some details that I messed up in that explanation.
But that's the main big-picture gist of it, you know, the 50,000-foot view of that episode.
Yeah. So just to kind of tie a bow on what he said
there, the key is the cookies, because cookies on the top-level domain are trusted by any subdomain, like ads.something.com. So, like, let's say CodingBlocks, right? If we were to go do this and implement this in our DNS records, and we set up an ads.codingblocks.com, or .net, which is what we are, then if it was Criteo that wanted that done, any cookies from our site could get sent over to the ads company. And what Outlaw was saying is bad about
that is the reason why you can log into a website in the first place and stay logged in, like if
you're on Amazon or if you're anywhere,
is because there's a cookie behind the scenes that typically stores an authorization token
that says, hey, yeah, you're logged in. So every time you go to a page on that site,
it's going to get that cookie, it's going to find that token, and then it's going to try and match
it to the web service token and say, okay, here's the variables for this user. Well, they actually tested and sure enough,
they were able to go get cookies,
get those tokens out of those cookies,
put it on their own computer
and impersonate somebody else's login, right?
So basically what this means is
not only your login tokens for going to a site,
but also people have a tendency or developers
or companies have a tendency to just cram whatever they can in those cookies as a convenience.
So you've also got personal information in there, right?
First name, last name, maybe address, maybe email address, maybe phone number, whatever, right?
There's typically a lot of garbage just crammed into a cookie.
So at any rate, it leaks that information.
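As a quick illustration of the cookie side of this, here's why a cookie set for the parent domain also gets sent to the cloaked ads subdomain. The hostnames are hypothetical, and this is a simplified version of the browser's domain-matching rule, not the full RFC 6265 logic:

```python
# A cookie set with Domain=.example.com is sent to any subdomain of
# example.com, including one that secretly CNAMEs off to a tracker.

def cookie_sent_to(request_host: str, cookie_domain: str) -> bool:
    """True if a cookie scoped to cookie_domain would be sent to request_host."""
    request_host = request_host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

auth_cookie_domain = ".example.com"  # e.g. a session/auth cookie set by the main site

print(cookie_sent_to("www.example.com", auth_cookie_domain))      # True, as expected
print(cookie_sent_to("ads.example.com", auth_cookie_domain))      # True -- even though DNS
                                                                  # may route this to a tracker
print(cookie_sent_to("tracker.example.net", auth_cookie_domain))  # False, classic third party
```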
Now, so that leads into my tip of the week. So this is sort of, uh, this is tied to the NAS that I built. And I mean, I've just
grown to love this thing, but I was looking for a way to basically see what was going on on the
network, right? And you guys have heard of Pi-hole. A lot of people love Pi-hole because you go buy a Raspberry Pi for 35, 40 bucks and put Pi-hole on there. You turn it into your DNS server in your house. And then that way,
every bit of traffic that comes through
goes through that thing and it will try and block bad domains, right? They've sort of got a whitelist and a blacklist of things that they track, and they will block them. So we've talked about uBlock Origin on the show before, and other ad-blocking software that you can get on your computer. The thing that stinks about that is it's only for your computer, right? If you're on your phone in your
house, if you're on another computer or another device, right? Like some other device in the house
can be going and making requests and they're just open, right? So you've only protected the one
device. There is some software called AdGuard, and it's actually AdGuard Home specifically that I'm looking at and have actually installed, and it's really good.
So it's similar to PiHole and what's really cool about it is this can be your DNS resolution
for your entire network.
So essentially, if you were to go into your router and say, hey, my DNS server is no longer 1.1.1.1, which is Cloudflare, or Google's 8.8.8.8, or in the case of mine, mine was automatically picked up by my ISP, which is Comcast. Theirs was 75.75.75.75. Instead of those, you point it at your AdGuard install.
So mine's on my NAS, right? So I have the IP and then the port for that thing.
And then that way, every single request coming out of your network to go across the internet
gets picked up by AdGuard and it will block stuff for you. So before it even gets to your computers,
right? Your computer requested nba.com, which goes out and calls all
kinds of garbage. That thing will actually block malicious things on there. Some retargeting and
all that. Well, I specifically went and looked to see if it did the CNAME thing, and it actually does mitigate it. So I'll have a link to both AdGuard Home and this other thing that Outlaw was talking about in episode 808 of Security Now,
and it does the full CNAME resolution. So what this means is, if, say, criteo.com was the domain you wanted to block because you know they're doing dirty things with these records, it will fully resolve the CNAME from the top-level domain. So if I have ads.codingblocks.net, AdGuard is going to fully resolve what the real domain is that that thing goes to. And if it's on the block list, it'll block it.
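Here's a rough sketch of that "resolve the whole CNAME chain, then check the blocklist" idea. It assumes the third-party dnspython package (pip install dnspython); the blocklist entries and domains are made up for illustration, and this is not how AdGuard Home is actually implemented internally:

```python
# Sketch: chase the CNAME chain to the canonical name, then check the
# final destination against a blocklist -- the DNS-level mitigation idea.
import dns.resolver

BLOCKLIST = {"criteo.example", "tracker.example"}  # hypothetical blocklist

def should_block(hostname: str) -> bool:
    """Resolve hostname, follow any CNAMEs, and block if the final name is listed."""
    try:
        answer = dns.resolver.resolve(hostname, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    # canonical_name is the final name after following any CNAME records,
    # e.g. ads.codingblocks.net -> something.criteo.example
    final_name = str(answer.canonical_name).rstrip(".").lower()
    return any(final_name == d or final_name.endswith("." + d) for d in BLOCKLIST)

if __name__ == "__main__":
    # True only if the name ultimately CNAMEs into a blocklisted domain.
    print(should_block("ads.codingblocks.net"))
```

The design point is that the decision is made on where the name actually resolves to, not on the friendly-looking subdomain the page requested.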
So that's actually even better than what Steve Gibson was talking about in Security Now in that episode, episode 808. Because he said that the uBlock Origin extension for Firefox works because it has an API it can call out to, to resolve the DNS, and it'll block it. But that's in Firefox. Chrome doesn't have that feature. So if you were to go to a website in Chrome, even with uBlock Origin, it won't block it.
It's more complicated than that, actually. In the show notes for the episode, he has a table, and this was from the researchers that did all this work, that discovered this. The paper got released early because they had planned to publicly discuss it in July, right? But they provided a table of, here's all the different companies that do this tracking, this retargeting type of thing, and then here's the mitigations. And they included Firefox, uBlock Origin on Firefox, uBlock Origin on Chrome, and then they also talked about using a DNS service like NextDNS. Right, right. And in the table they basically show that, depending on which of the tracking companies you look at, some of them were blocked by uBlock or NextDNS, but there was no single solution that blocked all of the tracking companies.
Yeah. And that's where it's interesting, right? The reason why this is my tip of the week is I like having more control over what I can and can't see. So if you were to use a DNS service like NextDNS, they're going to block stuff, and you're just not going to get it, and you're not going to have any control over it, right? Like if you go to a website and the website crashes because you blocked some things on it, you have no recourse. You can't do anything. But if you have the service on your network, like this AdGuard Home, then you can say, okay, I want to go to nba.com, I'm willing to let this thing through to make the page function.
So it's up to you, but they do actually have protections in here where it'll fully resolve down to the actual end location of where that CNAME's going. So if you want to block it, you can.
So I'm super, super happy with it.
And you can run it in a Docker image, which is awesome.
I mean, the real thing about any approach to security is it's kind of like getting dressed for winter, right? You just wear layers, and security is about that too. And so even this table that was in Steve Gibson's show notes, it was about layers of defense. uBlock Origin on a particular browser might be a part of that, a service like NextDNS or AdGuard might be another part of that defense, and any one of those by itself might not be able to solve the problem, but combined, you know, you can stay warm. And you want to know
what's really cool about that, by the way? Because that's a really good analogy for it. That's one thing that I really liked about AdGuard: you point your router to the DNS, right, to your AdGuard instance. You then tell AdGuard which DNS servers you want it to use behind the scenes.
Right. So you can tell it, I want to use NextDNS. You could tell it, I want to use the cloud.
Is it Cloudflare? Yeah, Cloudflare. I want to use Google's DNS, whatever. So you could actually set your AdGuard instance up to use whatever DNS services you want,
but you still have that additional control over what you want to block on your network
specifically, as opposed to what, you know, these external DNS providers will do.
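A toy sketch of that arrangement, showing just the lookup logic rather than a real DNS server: your local filter applies its own blocklist first and forwards everything else to whichever upstream resolvers you picked. Again this assumes dnspython, and the blocklist and upstream choices are only examples you would swap for your own:

```python
# Toy version of the "filter in front, upstreams of your choosing behind"
# setup being described. Not a real DNS server -- just the decision logic.
import dns.resolver

UPSTREAMS = ["1.1.1.1", "8.8.8.8"]     # Cloudflare and Google, for example
LOCAL_BLOCKLIST = {"tracker.example"}  # your own network-wide block rules

forwarder = dns.resolver.Resolver(configure=False)
forwarder.nameservers = UPSTREAMS      # the "upstream DNS servers" setting

def lookup(hostname: str):
    """Apply the local blocklist first, then forward to the chosen upstreams."""
    if hostname.rstrip(".").lower() in LOCAL_BLOCKLIST:
        return []                      # blocked on your network, your call
    answer = forwarder.resolve(hostname, "A")
    return [rr.address for rr in answer]

print(lookup("codingblocks.net"))
```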
So, um, yeah, unfortunately this is the world we live in right now, right? Marketers are going to try and find ways to track you everywhere, because that's how they make their money. So at any rate, that was it. All right. Well,
we hope you've enjoyed this recursion episode about recursion. And, uh, you know, if you haven't already, maybe a friend gave you a link or something, or you're listening on their device.
You can find us on wherever you like to find your podcast.
iTunes, Spotify and Stitcher are probably the big three.
But, you know, if you have a favorite podcast destination and we're not there, let us know.
We will rectify that issue.
And, you know, if you haven't already, as Alan mentioned before, we greatly appreciate those reviews. So you can find some helpful links, it's now plural, at www.codingblocks.net/review.
Yep.
And while you're up there, make sure you do check out the show notes. Joe Zack did a really good job of putting together the show notes for this particular episode. And, I mean, we have all the links, the discussions, just to be able to get your mind refreshed on what we talked about, and be able to go Google and find some more information, right? So definitely check out the show notes. And if you have any questions... Joe Zack is back. Yeah. Now, if you have any
questions, rants, comments, whatever, definitely go check out our Slack channel. I think, did we say
our Slack link is broken
on our site? Yeah, I should fix that.
Oh, yeah.
The plugin we use
has not been updated.
Yeah, it's not been updated. So we're
going to figure out something there, but if
not, just go to, you know,
drop us an email at comments
at CodingBlocks and we'll get you added.
We don't like that approach.
But in the meantime, we don't dislike it.
We don't dislike it.
I mean, we'll see it eventually.
Yeah, exactly.
We're not fast at emails.
And now it's your turn, Joe.
Or not.
I mean, really.
I think he gave.
He's done.
Yeah, I think Joe.
We broke Joe.
Joe is crashing.
I think he had a stack overflow.
And we need to reboot Joe.
If you didn't know, Joe's a robot.
I don't know if that made sense.
Maybe you didn't know that.
All right.
So visit us at codingblocks.net.
You can find show notes, examples, discussion.
Joe's, we have some packet interruption there.
Yeah.
Okay.
I'm going to go play Valheim.
Oh, that looks cool.
I saw that on Steam the other day.
Is it really good?
Yeah, you got to play with people, though.
Okay.
I don't know if I can do that. I don't play with other people.
I have a persistent server. I'll send you the info