Coding Blocks - What is Algorithmic Complexity?
Episode Date: August 27, 2018
We continue our dive into Rob Conery's The Imposter's Handbook as Allen is Allen, Joe is Michael, Michael is Joe....
Transcript
You're listening to Coding Blocks, episode 88.
Subscribe to us and leave us a review on iTunes, Stitcher, and more using your favorite podcast app.
And check us out at codingblocks.net where you can find shows, examples, discussion, and a whole lot more.
No, it's check us out.
Check us out.
Send your feedback, questions, and rants to comments at codingblocks.net.
Follow us on Twitter at Coding Blocks or head to www.codingblocks.net and find
all our social links there at the top of the page.
With that, I'm Allen Underwood.
I'm Michael Outlaw.
And I'm Joe Zack.
Excellent.
This episode is sponsored by TechMeme.com.
Check out the TechMeme
Ride Home podcast.
They have weekday releases with just
quick 20 minute episodes and they talk about all the hot news in tech, like literally the
trending news of the day. And it's kind of done like an NPR sort of news show. So it's really
easy to listen to. And it's a nice way to get filled in on all the things that you didn't have
time to check out during the day on your Google feed and everything else.
So if you would, go check out our show notes at codingblocks.net slash episode 88.
And you can see the links there and they'll take you over to iTunes or Google Play Music and search for tech meme.
That's T.E.C.H. M.E.M.E. and give it a listen.
Again, we'll have the links in the show notes or you can search on iTunes or Google Play Music.
All right.
And we always like to start off with a big thank you
to people that have taken the time
to go leave us a review up on the various places
where it's done.
So Mike, do you want to kick us off?
Yep.
From iTunes, we have Lego Parsons,
Dennis Adolfi,
Jerbaka,
PS2178
and Tech Systems
All right. From Stitcher, big thanks to Chaka IT and Timmy Nicepin.
That's an excellent name.
And I meant to give a quick plug here. Last episode I totally forgot. So Mike Robbins, who does just crazy amounts of PowerShell stuff and writes a blog.
We've talked about him before.
If you haven't, go check him out.
But what's really cool is he and a couple other guys have teamed up
and created a PowerShell conference book where they've got a ton of tips and tricks
and things you can do.
And we'll have a link on the show notes page here to where you can actually go buy that book
and all the proceeds go towards an on-ramp scholarship program. The authors get nothing.
So, you know, if you want to get something that's really useful for you in your day-to-day
and you do some things with PowerShell, and if you haven't done PowerShell, it's really cool.
Go check this out, man. It's a good way to give back to a good cause and at the same time get something useful.
And finally, a big personal thanks to Jack and Critner for rescuing me from my family vacation.
So I got to do lunch over there in Baltimore, and I'm still dreaming about that Maryland crab cake.
So thank you very much.
Very nice.
All right.
Well, with that, let's get into the
topic of our show tonight,
algorithmic complexity.
And algorithmic complexity,
we've referenced it a few times, especially recently
talking about the imposter's handbook.
And we're still kind of using that for some
guideposts here. But algorithmic
complexity is the study of
how well algorithms scale
in terms of time and memory, or space.
Right, space.
Well, that'd be space.
Space, yeah. Memory, space, that's what I mean.
Got it.
Yep. And it can be used to compare different algorithms. So if you've got two different algorithms, you want to know, like, which one's faster, which one takes up more space, and that's a way to do it. It also helps you make predictions. So, like, if you know how something's performing now and you think about doubling your input size, or doubling your number of users or orders in a day, then you could do a little bit of napkin math and figure out what that's going to perform like and see if you're going to be able to handle it.
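For example, with purely hypothetical numbers: if a nightly report is O of N squared and 1,000 orders take about 4 seconds today, doubling to 2,000 orders predicts (2,000 / 1,000) squared = 4 times the work, so roughly 16 seconds.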
I guess you were getting particular about the memory because if it was being stored on disk or a flash or whatever.
Right.
So that's why you wanted to call it space rather than memory.
Right.
Space could be any kind of medium.
So, anyways.
Yeah, and
so when we usually talk about algorithmic
complexity, we end up talking a lot about
Big O, which is the mathematical notation
that we use to formally describe
the limiting behavior of a function.
But it's not just algorithmic
complexity that Big O can be used for.
It can be used for any sort of limiting
behavior that you're looking at in terms of math.
So terms get a little funky.
So sometimes if you Google Big O, you'll see them talking about functions and sometimes algorithms.
But for our purposes tonight, we're basically going to be talking about algorithms,
but we may say the word function.
I just want to make it clear that if we do end up saying function,
it doesn't necessarily mean one function, one function only.
Sometimes your functions can call other functions, and that can still be included as part of the same algorithm.
There's nothing saying that algorithm can only be one function.
Mathematically, they're still talking about that high-level kind of input and output from whatever calls the rest.
Yeah, and we talked about this in the past, like several, several episodes back.
The definition of an algorithm really boils down to a recipe for what you're trying to do, right? So if it's 10 steps, a hundred steps, whatever, what we're talking about is the big O notation, or the complexity, for that entire set of steps that have to be accomplished.
Yep. And when I say limiting behavior, it sounds like we're talking about, like, actual calculus limits and growth rates of curves and stuff. And so we're going to do a little bit of hand waving and not get too much into depth on that.
But for our purposes as programmers, it basically means boiling down our algorithmic analysis to the most significant bits.
So sometimes we throw pieces of our code kind of away because they don't really matter too much as the input gets larger and
larger and larger. So like some of the things that they talk about with that, right? Like if you have
a few steps where you assign a variable or whatever, that stuff doesn't matter really.
And so they throw it away is basically what you're getting at, right?
Yeah. Which is kind of hard intuitively, at least it was for me, to grasp. Because, like, I'll see a loop that does like 10 things inside of it, and to me, you know, intuitively that feels like it's doing more work than something that just has one line in the loop. But that's not really true. And especially if you think about, you know, your functions or whatever doing stuff underneath, and they're getting translated to, you know, assembly or whatever, then it can be kind of a hard pill to swallow that, you know, we're only really looking at that loop. And we're kind of not even caring about what goes on inside there, even though we know that a lot of the stuff inside can really have a big impact on performance.
And that's why measuring is so important.
So, you know, I mentioned dropping those constants.
So like the example there I gave where you've got a loop with 10 items in it. We don't care about the 10. We basically throw it away. We just treat it like
it's constant. We say, if you've got a size 100 array or size 1000 array or size 1 billion array,
eventually over time, a 10 doesn't matter because everything is scaling based on the input.
And so if we chart the runtimes over time, then we should see it
roughly conform to this curve. And that's what we mean when we say that we're looking at the,
what was the phrase, the limiting behavior. Or sometimes you'll see the asymptotic analysis,
but it basically means the same thing in math when you've got that sort of curve.
We're saying, you know what, it doesn't quite hit this perfectly,
but it's close enough for our purposes,
and we can kind of generalize and look at these things.
And so coming up in this episode,
we're going to look at a lot of the common types of curves
and the common types of algorithms that fit those curves.
And just know that, you know, things aren't perfect.
You know, we're dealing kind of abstractly,
and there's a little bit of wiggle room.
But ultimately, as the input size increases, that wiggle room doesn't really matter that much.
All right. So I think that pretty much covers the stuff in here. I did want to mention, you know, I did say that measuring thing, but what I mean by that is, like, you know, especially if you're dealing with kind of real-world business programming, sometimes there's stuff in those loops that can go out to a database or make a network call or something. And so that's where algorithmic complexity really kind of falls apart.
That stuff, in a way, it's not really part of your algorithm, right?
We're not including those sort of times.
There's no sort of allowance in Big O for that.
It's just mathematical notation for what kind of stuff you're doing.
And so if you do have an algorithm that fetches data as part of that algorithm, then that
stuff is not going to count. It's going to get thrown away. And so what you want to do is measure
ultimately.
Well, it's a measurement of the operations, not necessarily of the time. So time in regards to Big O is about how much time it takes if you assume that, like, each operation takes, you know, the same amount of time, right? You're not counting latency for network connections and network overhead, or, you know, latency in regards to, like, even opening up a connection, things like that. That's not the type of thing it counts. It's just, here's the operation. And then, you know, if you assume a constant amount of time for each of those operations, like, how bad is it, right? Like, in one line of code I can print to screen, or in one line of code I could write a core dump to disk, right? Which is going to be, you know, potentially gigs. Both of those have a lot of operations going on behind the scenes that I can't see, but when I'm doing algorithmic analysis, I don't want to count that stuff. I treat it as constant, because I assume that, as part of your algorithm, that needs to happen. And so if there's a faster way to get the information you need, you know, faster than a core dump, then that's something that kind of happens outside the scope of you analyzing this one particular function.
And so, to kind of put this in short terms: you use big O notation so that you can talk about something in a relatively easy fashion, right? That's really what it boils down to. You don't want to get too far into the weeds to where you can't talk about it. But once you know some of these pieces of terminology, then if you say, you know, big O of N squared, or, not big O, if you just say O N squared, then people have a general idea of what's going on there, right?
Yep. So it's flawed, but in practice it turns out pretty good
and lets us do things that we wouldn't be able to do otherwise
and we certainly can't do perfectly.
Right.
Yeah.
So as Joe said,
it's a way to describe the complexity of the algorithm
in terms of time and or space, right?
We can measure how well our code scales
in terms of big O, right? We know, and we'll get into some of these, but if it's an O of one, we know it's going to be awesome. It's going to scale awesome, right? If it's O of N, it's going to be linear, right? So we can, you know, refer to it kind of like that. We can think about, like, how well is this piece of code going to scale? Someone had a note here about it being an overloaded term. Now, I think that's Joe. What you got? Did you have an opinion about scaling, though?
Yeah. If you get asked about scaling in an interview, they're probably not asking about Big O. They might be asking about, like, stateless web servers, or, um, you know, NoSQL versus SQL, or some of these other terms that kind of make more sense, or are used more often, in, like, a cloudy or kind of architectural sense. So I just want to kind of mention that tonight we're specifically talking about time and space. So, you know, almost exclusively, like, single-processor time and something like primary memory.
So when you're talking about scaling though, you're saying, hey, if I have one value as an
input and it takes one second and I have 10,000 values and it takes 10,000 seconds, then you know
that it scales linearly, right? That's what you're talking about with scaling.
Scaling in this regard would be the input.
Scaling based off the size of the
inputs that you're working with. And that's if you're talking about time, right? Now, if you're
talking about space, it might be that, you know, if you have one, then it takes up one byte. If you
have 10,000, it could take up, you know, 10 times that, right? It all depends. And so that's what
we're talking about scaling here.
So, you know, speaking of 10 times, I feel like when people talk about big O and algorithmic complexity, like, 10 to 1 they're talking about the time.
Yeah.
Rarely about space.
Space is kind of like, oh, yeah, you can do this too kind of thing.
Yeah, it's usually, but it's funny.
I think when we dig into the weeds a little bit, space is probably as important, if not more in some cases.
Right.
And one thing to point out, the whole reason this stuff even matters, because we haven't really talked about it yet, and we're going to get into the weeds a little bit here in just a second, is when somebody says, hey, how well does that perform?
Usually like, well, it's pretty good.
Right.
But that's not an answer.
That doesn't measure anything.
It doesn't quantify anything.
And so the whole reason we're even talking about this is because it allows you to give a, while not perfect, more easily understood answer that is a little bit more accurate, right? That is quantified.
Yeah. So big O in this regard is all about
comparing algorithms, trying to predict their performance. So we won't know exactly what the time is, but like you're saying, we can give an estimation of, like, I think this is going to perform well because it's
O of N, or I think it's going to perform bad and here's why, uh, you know, and we have these,
these equations we'll call them that we can refer to. Right. But, you know, I think, um,
Joe mentioned in the beginning though, that there were more uses than just in regards to computers or functions, I think is how you referred to it, Joe.
This was originally created way back. So according to, I think, Wikipedia, which had the creators, the authors of it, it was created by Paul Bachmann, Edmund Landau, and others.
And I thought, oh, that's funny.
They're like, what?
The others don't get any credit, right?
And Paul Bachmann is no relation to Erlich Bachman, in case you're wondering.
Why don't they just call Paul up and ask him who else worked on it?
Yeah, right?
But the thing is, he laid that groundwork back in 1894.
So well before anything to do with computers, right?
So this type of conversation about Big O doesn't necessarily have to be constrained to just software, because software was just added onto it later. And over the years, between 1894, 1909, and today, there have still been additional mathematicians adding on to this notation, right? So it was just kind of humorous, though, that the first two got the name credit and no one else did. But this is also referred to as, you know, Joe already mentioned the asymptotic notation. No, you mentioned asymptotic analysis. But big O, you know, this is also sometimes referred to as the Bachmann-Landau notation, or asymptotic notation, which I'm having a difficult time saying fast.
And, you know, the O today, we would say,
stands for order of the function.
But originally when Bachmann wrote it,
it was just order of, right?
So, yeah.
I actually thought for the longest time that it was omega, which, I don't know why. I mean, that's not the symbol for omega that you usually see. So it's kind of funny that I thought that.
Because it was all mathy?
Yeah, pretty much.
Yeah, I don't know if I ever put any thought to it at all. I don't think I did.
Yeah, usually we just talk about big O, and we talk about, like, the upper
bound or worst case scenarios but I did think it was worth mentioning
that some of those Greek letters do make an
appearance. So there's like theta that you'll
see sometimes or even little o or omega
and a couple other symbols that they throw in to
mean various different things.
But I think for tonight we're basically just going to focus on
the worst case scenario. So if we say that
something is O of N,
we don't mean that it looks at every single
input every single time. It may
look at everything but the last one. It may stop on the first. We're just saying in the worst case
scenario, it's going to look at all of the inputs. But still an approximation, as you said earlier,
because it tosses out a lot of key little pieces that happen here and there. Yeah, I mean, it could
look at all of the inputs five times. So five times the number of inputs, but that's going to boil down to big O of N. We're just
going to say, ah, screw the five because as things get on, it's not going to matter.
Right. So you know how, you know how great I am with, um, numbers and examples here.
So I thought, well, how can I try to make this as easy as possible? Right. So
I put together some examples, but I was thinking in my head, I was like, well, okay,
what if we just assume a simple, small array of five integers to try to like rationalize some of
this? And then, you know, you can expand on it from there and on your own, but to just try to
get an understanding of what some of this is, right? So we're going to go through some of these orders, and we'll start with O of one first, right? So O of one, this is constant. The operations aren't affected by the input size. It doesn't matter, you know, that array of five integers, it could be 5 million, it could be 500, doesn't matter. The O of one operation is still going to take the same amount of time regardless. Uh, with that five integer array, you know, accessing an element in that array would be an O of one operation, because it knows just where to go. It doesn't have to do any kind of scan or anything like that.
Yeah. So you remember back, um, was it, it wasn't last episode,
I guess it was maybe two episodes ago now, where I brought up, if you recall, the conversation that we had about, like, why do arrays start at zero for a lot of languages, right? And, uh, you know, hint, hint: it was because it was a memory offset, right? And so that's how the array operation is able to be an O of one operation. Now, again, arrays, not lists. We're not talking about any kind of linked lists or anything like that, or, you know, specific list implementations. Like, you know, let's just keep it simple and talk about arrays, right?
Yeah. And I'm just wanting to give an example
of the kind of size here, or the kind of time performance we're looking at. So we take that,
that array of five integers, we pass it to a function. Let's just say that function just returns false all the time, no matter what. So, you know,
it's going to take one second. So it doesn't matter if that array is five or if that array
is 50,000, no matter what, it's always returning the same thing. So it should always take about
one second. Yeah. So this, again, this type of operation is considered constant time because like Joe said, it doesn't matter what the size of that array is.
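As a minimal sketch of that constant-time array access (hypothetical C#, not code from the episode; the method name is made up for illustration):

```csharp
// O(1): indexing an array is a base-address-plus-offset calculation,
// so the cost is the same whether the array holds 5 items or 5 million.
static int GetElementAt(int[] numbers, int index)
{
    return numbers[index]; // one operation, regardless of numbers.Length
}
```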
So one of our, I would have to say collectively our favorite resources, we've definitely talked about it a lot, definitely one of my favorite resources, is bigocheatsheet.com. So if we were to look at there, they have all of them graphed out, right? O of one operations are very green. These are very quick, fast operations, right? So we like these. You don't really get any faster.
Well, do you?
I actually looked up to see if there was O of zero, and, uh, there's quite a bit of debate, like, on Stack Overflow. I don't know what the mathematicians would say, but, um, it seems like the Stack Overflow arguments all lead to a, yeah, you can have it if it just doesn't ever do anything.
Yeah, I was gonna say, what's the use of it? It sounds like, yeah, you could totally do it, but it'd be completely useless. So, yeah, I don't understand what it means to have O of zero. It's like, are you saying that it never returns anything because it's like a syntax error? It doesn't make sense to me. So, um, if you can explain that, I would love to see it in the comments.
All right. Yeah, I don't get that. I was trying to think of something clever to say, like, uh, it's the else statement that never gets executed. Or, you know, it's like if true equals false, and then you have a block of code, and that's O of zero because it's never going to happen. But it still counts as one operation, because you're doing the if, if nothing else.
Well, I was thinking about maybe, like, a lock, though. But you gotta remember, you toss that out in Big O anyways. That if wouldn't matter, typically. So, yeah, it doesn't make any sense, really.
Oh, yeah, you're right. The if wouldn't count.
Yeah, it wouldn't count. They drop that off.
Yeah.
But when they say drop it, it doesn't necessarily mean, like, turn it to zero.
Like, you know, otherwise you'd be multiplying zero by whatever.
So it's like turning it to a one.
No, but, yeah, but no, Allen's right, though, because, like, you know, any kind of if-equals-false type of operation like that wouldn't count in terms of calculating big O.
Because the input size doesn't matter.
Yeah.
We sort of glossed over it earlier, like when you mentioned constants and stuff. Like, typically, if in your function you have, if this equals one, then do this, or if this equals two, they basically say those operations don't count. They're insignificant for the grander scale of what N number of inputs are. So all those if statements and all those operations get thrown out the window for big O notation. So, yeah, there again, going back to it, O of zero means nothing as far as I can tell.
Well, yeah. I mean, we have it summarized here in a different way, because using my joking example of if true equals false, that would be considered a constant. So it wouldn't matter in terms of our growth curve.
Right.
So that's why we wouldn't count it. So I think going back to what we were originally saying, I think O of 1, the constant, is the absolute fastest that you can aim for.
Until stack overflow.
Yeah.
In a useful operation, I guess.
All right.
Yeah.
So the next one.
We were just kind of a little unconfident with that. It was like, oh, we're going to get some backlash.
Yeah, I'm going to get spanked. Here comes the zero crowd.
Joe will be there, or, uh, Zach will be there to field any kind of questions in the episode discussion.
And so, when I talk about math, I end up saying something wrong.
Yeah, I think we did on last episode, which was called out to us. So, um, yeah, check it out.
Yeah, check it out.
Check it out.
All right.
So the next one we have up is O of N, also called order of N. Which, I think, uh, O of one is also called order of one.
So I've never heard that, though. Never, ever. I've always just heard O of one or O of N.
Yeah. You know, you'll probably hear, at least in our realm, O of N.
So the order N operations take more time to run for larger inputs, but the growth rate is what they call linear.
As the number of inputs go up, so does your time.
So it's basically the example I gave earlier.
If you have one input, it's one second.
10,000 inputs is 10,000 seconds, right?
So linear is, as it goes up, you have a nice graph where the line is just going up at a good 45 degrees, right?
Assuming that the scales are all right.
Yeah, I did a little bit of math for each of the sections that we're going to talk about
each of the orders we're going to talk about today.
So just to kind of keep consistent, like for the constant time for input of one input of
five in our array, it's going to be one second.
5,000, it's also going to be one second.
For order n, now this is really easy.
It's for input size of 5, our array, it's going to take one second.
And then for input size of 5,000, it's going to take 1,000 seconds.
It gets a little weird because of the 5.
Sorry about that.
If you had stuck with one second per input, it would have been a whole lot easier, man.
I'm just saying.
I did.
I didn't realize that there was an example above that specified five.
Oops.
Yeah.
All right.
So that's going to stink.
I'll figure something out for that for the next section.
All right.
But, yes, so that's – I mean, it's pretty easy in that particular sense. Yeah, so this is an operation where you need access to each element of this.
So let's go back to the array example.
Now, keep in mind, I'm just using an array as a way to easily discuss these from an example point of view.
But these O of 1, O of N operations, like, it doesn't have to require, you know, an array. It's just easy to reason about it if we use that as an example. So going back to the array of five integers example that Joe doesn't like, that threw him off. What we're saying here with O of N is, like, you have to access each one of those.
So if you wanted to sum up the array, then you need to access each element of the array in order to add it to your running total, for example.
Right. Or, you know, let's assume that the array was full of random numbers that aren't sorted.
Right. And you just wanted to find the smallest one.
Well, in order to find the smallest number in that array, you'd have to visit each number to see,
is it smaller than the one you're currently claiming to be the smallest, right?
Yep. So what he's basically saying here in a nutshell is if you're looping over a collection
and you're hitting every item in the collection, you're automatically at O of N.
Best case scenario, you're
at O of N. That's one way to put it, yeah.
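A minimal sketch of that find-the-smallest example (hypothetical C#, just to illustrate the O of N shape being described):

```csharp
// O(n): to find the minimum of an unsorted array, every element
// has to be visited once -- work grows linearly with input size.
// Assumes a non-empty array.
static int FindMin(int[] numbers)
{
    int smallest = numbers[0];
    foreach (int n in numbers)
    {
        if (n < smallest)
        {
            smallest = n; // found a new candidate for the minimum
        }
    }
    return smallest;
}
```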
You know, I think it's really interesting here. It's actually a really good way to put it, and I was only half joking.
Because we drop those constants, if
you take your input size and say half it
or divide it by three, which is going to
drastically reduce the amount
of time it actually takes to run,
it still doesn't count.
It's still considered O of N.
So if you say, let's say you create a function called max number good enough,
that only looks at the first half of the array.
It takes the biggest number it finds there and says, you know what?
This is probably fine.
It returns it.
That's still on order of N, even though it only ever looks at half the input.
That's because you're dividing by a constant number.
Is that true, though?
Totally.
I don't think I can argue this one, but I missed it because I was reading something else. He was basically saying, if you were to say that I'm only going to go through the first half of this particular collection, then it doesn't matter, because you're getting rid of the constant. You're basically throwing it out. You're still saying it's O of N.
But the only reason I'm struggling with that is we're going to get into the O of log N here in a second, which is the whole divide and conquer thing, which is what it sounds like you're saying, though, right?
So the difference there is that divide and conquer means you divide, and you divide, and you divide, and you keep dividing until you get to your result.
So we're only doing one.
And what I'm saying is you divide by once, a constant.
It's not a factor of my input.
Okay, good point.
I'm not saying divide by the logarithm of input.
What's a real use case of this?
There's not.
He was just making that up.
No, I mean,
I think the example I said
where like you only look at half of an input.
I mean, that's totally contrived,
but you can think of like
some sort of heuristic solution
to an algorithm
or to a problem that's similar.
We say like,
you know, I give up after so long
because this is going to be good enough
because, you know,
say you're, like, in a real-time context or something where you've got to make a decision in one second, like you're an AI in a game or something.
And you say, you know what?
Here's your time bound.
And so if you don't get through the whole input, just give up.
Who cares?
I could see where you do something like that with photos when you got pixels and color ranges.
Like, you know, you might skip every second pixel because things that are that close to each other might not matter that much.
I could see where you do something like that.
But okay, so I get what you're saying because you're not constantly subdividing and you're only doing it the one time.
You're saying, hey, divide this by two and then go.
Then that constant gets thrown out and so you're still at O of N.
Yep.
All right.
That's because when you chart it on the graph of time there,
it's still going to form a straight line. So it's still going to be linear. It's just going to be
half of what a normal slope of one and one would look like. And that makes perfect sense because
really what you'd be seeing is an equation that said O of N divided by two. That divided by two
gets tossed out the window. So you're totally right. So that's O of N.
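A sketch of that contrived "max number good enough" idea (hypothetical C#; the function is deliberately a heuristic, not a real API):

```csharp
// Still O(n): scanning only the first half divides the work by a
// constant (2), and constants get dropped -- n/2 is still linear.
static int MaxNumberGoodEnough(int[] numbers)
{
    int max = int.MinValue;
    for (int i = 0; i < numbers.Length / 2; i++) // n/2 iterations
    {
        if (numbers[i] > max)
        {
            max = numbers[i];
        }
    }
    return max; // "probably fine" -- it never saw the second half
}
```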
So I think what we've already said then is that this is a linear operation.
And that, you know, because as the size of N grows, so will the number of operations.
Now, I'm going to reference the Big O cheat sheet multiple times in here because I love it.
And this is in our yellow range.
So does this mean you should not use it?
It's just, it's fair.
You can use it, but you should definitely be aware, right? If you have to go over everything in the array, or your collection, as you so eloquently said, then, you know, you just need to be aware of what your expected sizes of that collection might be, right? So that you can be aware of it.
Yep.
All right.
So let's get into a fun one then. So O of one and O of N, those were kind of easy, right? From here on, they're going to get weird and fun, right? But in a good way, that's the kind of stuff that we like. So O of N squared, right? Um, wait, who put this?
Oh, that was me. I augmented the notes a little bit. Sorry.
Oh, okay. Yeah, so, yeah, I'm totally thrown off. All right, Joe, you take this one.
Yeah, and then he'll get back on track in a moment. Sorry. So for constant time, order of one, then you don't have to look at all of your input.
That's the way I kind of think of it.
Like, you know, the input size doesn't matter because you're not really looking at your input.
You're either returning the first element of your array or you're returning a hard-coded value or you're doing something else entirely.
And in order N, we said you look at all of your inputs once, at most.
So N squared is the worst case, you're looking at
each input for each input at worst. Right, right. So okay, I see where you're going with that now.
So let's go back to my five element array example, right? And suppose that you wanted to return back
a new collection that paired up every number with every other number in
the array, including itself, right? So, you know, your numbers in the array might just be 1, 2, 3,
4, 5, but you want to return back pairs like (1, 1), (1, 2), (1, 3), etc., up to, like, (5, 1), (5, 2), (5, 3), et cetera, right? That's what Joe's
describing is that you have to have a loop that goes, an outer loop that goes over your array.
And then an inner loop that goes over it a second time in order to pair those up. Right. And that,
that, because you're doing that second loop, you're going over every element again. That's where you get into n squared.
So that five element array, you're going to go over it 25 times to produce that pair.
Yep.
So two things.
One, did anybody else here, one, two, three, four, five, think that's a combination of my luggage?
So that's one. But if you've ever seen the code where you'll see something like for I equal zero, I less than array.length, and then I++.
And then inside that, you'll have for J equal zero, J less than array.length, J++.
That's exactly what he's talking about.
You've got two nested loops, or you've got a nested loop that are both going over the same array thing the same
number of times, and then that's how you get into this n squared problem.
Another way to say that is the outer loop is doing it
n times, and the inner loop is doing it n times.
Therefore, it's n times n, which is n squared.
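As a rough sketch of that shape in C# (hypothetical code, mirroring the pairs example from the discussion):

```csharp
using System.Collections.Generic;

// O(n^2): two nested loops over the same array produce n * n pairs --
// for the 5-element example, that's 25 iterations.
static List<(int, int)> AllPairs(int[] numbers)
{
    var pairs = new List<(int, int)>();
    for (int i = 0; i < numbers.Length; i++)
    {
        for (int j = 0; j < numbers.Length; j++)
        {
            pairs.Add((numbers[i], numbers[j])); // runs n * n times
        }
    }
    return pairs;
}
```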
If you remember when we talked about bubble sort, we said that it was a worst case scenario of n squared because every number looks at every other number.
But what's kind of funny about that is when you say that, you kind of gloss over the fact that you never compare a number to itself.
There's nothing to swap.
There's nothing you do.
So in an actual bubble sort example, if you actually console log every operation, you're actually not going to see 25. I think you'd probably see 24. I could be a little wrong on that, but it'd be less.
The deal is you don't compare the first number
with the first number, so your outer loop starts at 0
and your inner loop starts at 1 because there's no point
comparing. It's just
one of those funny cases where the actual
quote-unquote actual
big O would be n times (n minus 1).
But because you drop those constants again, you just end up with N minus, or sorry, N multiplied by N.
N squared.
Yep.
So, and this operation is highly efficient, yes?
No.
No.
These are not considered efficient. In fact, almost every time I've had an interview whiteboard problem or some sort of interview tech problem, they give you a problem that kind of has an obvious solution where you're like, oh, I could do it this way, and it's N squared.
And pretty much any time in an interview you see an N squared problem, you can get that thing down to N log N, if not lower.
Yeah, and that's the key, is understanding that you're looking at nested loops and that, oh, this is going to be n squared. Just like what you said, that's the first thing the interviewer is going to say: hey, you know, what's the complexity of this? And if you can't look at it and say, oh, that's O of n squared, then that's going to be, like, rule number one. They're going to be like, oh, man, this guy doesn't even really know what he's looking at.
And just knowing the notation, knowing the naming for this is going to be a big one.
And then you look at it and you say, oh, that's O of N squared.
And then they're going to say, well, can you make it better?
And then you're going to have to go, oh, yeah, I think I can.
But I do want to, like, make one clarification there, though. It's nested loops over the same collection.
Yeah, right.
Just, if you see a nested loop, that's not necessarily it. It could be two different collections, right?
Yeah, that doesn't necessarily, in and of itself, equal O of n squared. It could still be very bad, yes. But, you know, it's when we have the same list that we're iterating over, that's when it's guaranteed, you know, you're looking at O of N squared.
Definitely.
Yeah, and actually, we keep talking about N as if, like, that's always the kind of input to a function. But if you remember the graph algorithms, some of the complexity there in time would be, like, the number of nodes times the number of edges, or something like that. And so, the inputs, you can slice things up differently if you're a math whiz, and I am not.
But if you had, say, an example of a function that took two arrays and multiplied the two by each other, then your algorithm complexity might be, like, well, it's the size of array one multiplied by the size of array two. And sometimes in some algorithms, you'll actually see them specify it that way. So they'll say, like, big O of N times M is a common one; they'll use the letter M. And sometimes you'll see them say, like, yeah, they're both inputs, screw it, let's just call it n squared, right? Uh, because math is ambiguous. It can't be. There's probably some solid rules on when they differentiate like that.
So where does this thing land on the Big O cheat sheet?
Now we're getting into the reds. This is, let me say, of your bad scenarios, this is the best of the bads. How's that? Does that make sense?
Yep, that does. Like, if you had to go down a terrible path, this would be your best of the worst choices that you had.
Yeah, and I kind of put it in perspective. If you remember, I'm just going to simplify. So for constant time, input size of one was one second, and 1,000 was one second. For linear, our input size of 1,000 was 1,000 seconds. Now, our order n squared algorithms, it's going to be 1,000 times 1,000, so 1 million seconds.
That's a lot of seconds.
Yeah, it really is, for 1,000 inputs, right? So you now went from 1,000 seconds to a million seconds.
Yep.
I'm multiplying the time there.
Yeah, that's a pretty big deal.
I totally forgot, we're talking about these one seconds and the input size, but we actually covered this last episode.
Yeah, I think we did, yeah. With the...
It wasn't last episode, but it was a couple episodes ago. Yeah, no, it was Jeff Atwood's Everything is Fast for Small N.
You and I discussed this.
Did we?
And we were saying that, like, for O of, if N was 1, then it was less than, for N, where were we on?
N squared?
Yeah, we're on N squared. So if N is one, it's less than one second. But, uh, you know, for N of, like, 16 million, it was 8,923 years.
Right. It's ridiculous.
So these N squareds get big, right. And like Mike said, this is not even the worst. This is just where you're dipping your toe into the red.
It goes really bad after this.
Well, it's more than dipping your toe. It's very much in the red. But of your other alternatives, this is like the best.
This is where I will stop writing. I won't go worse than this in code I'm doing. So I'll absolutely write n-squared code sometimes because it's faster for me to write and easier to debug.
And I'm sure there's better algorithms out there,
but if I'm dealing with say a max of 50 items in my array,
and I know that I'm never going to go more than whatever,
then I don't feel too bad doing nested for loops or something.
I know I could sort it ahead of time and get that to N log N, like with merge sort, and then loop through it again. So it's n log n plus n. We drop that second n because that's no worse than saying two n log n, so just drop the constant.
Sorry for that divergence.
But yeah, so it's n log n.
So it's absolutely faster.
But for input size of 50,
you're not going to see that benefit
and the code is going to be miserable.
And that's true.
But I mean, to your point, though, we've all done – this is why I was saying, like, you know, of the worst-case scenarios, you know, O of N squared, it's bad.
But if you had to go down a bad path, this is your best choice. We've all written code where we've gone over that collection multiple times because as we're in the process of developing it, you're in the creative kind of mindset.
You're just like, let me get the thought out as to how this might work.
Then you can come back and start refactoring it and tweaking it as you go along.
But your first iteration, yeah, maybe it's not ideal. Your initial iteration, you write it O of N squared. But again, like Joe's saying, you're probably not working with large
inputs in the beginning, as you're developing it, right? And this is where knowing the use case
matters a lot, right? What you just said about writing code that's going to be nasty or gnarly to look at, right?
Like there's always a tradeoff, right?
There's the readability of the code and then there's the performance of the code.
If you know that thing's never going to grow past 50 and in real world scenarios, these 50 iterations twice isn't at one second per, right?
Like we're saying one second just so that it's easy to do the numbers.
But, you know, when you're talking milliseconds, okay, it's probably not going to kill you if you
just have an array or a collection of 50. But that's when you have to know like, oh, well,
wait a second, this thing's going to start growing because we're going to get, you know,
now we're dealing with customer orders and we plan to one day have a thousand, you know,
per hour or something.
Who knows?
So you just kind of have to know about these things and plan for them as best as you can.
Again, going back to Jeff Atwood's graph here or table here, if we assumed that the input instead of 50 was 256, then N squared is 65 seconds.
Assuming each operation is a consistent unit of time equal to one millisecond.
Yeah, so I mean, to each their own, but just be aware of your use case.
And I like what we got here.
What about O of N to the third?
O of N cubed?
Is that a thing?
Yep.
Yeah, absolutely.
So when we're talking about O of n squared, we don't necessarily mean
just a nested for-loop.
It could also be n to the fourth or n to the
fifth or anything in polynomial time. So I just
wanted to kind of call that out. Most
of the time, though, most things that we look at,
it's just going to be n squared.
So that's kind of the most common thing.
And so all the stuff we're talking about today,
all the examples of the curves,
there's all sorts of stuff in between that we're
skipping over just to kind of keep things, you know, reasonable to talk about.
But these are generally the rough classifications of algorithms.
So there's like big groups and like, so things like hash table lookups and array lookups, like those tend to be constant.
Sorting n log n is kind of like the de facto kind of standard there. n squared is kind of getting into the stuff where it's like,
there's almost, at least in every interview I've ever been to, there's almost always a way to get that thing way faster.
So like n squared is kind of like the, there's a better way to do it zone.
You know what's funny about this?
You know where I see this stuff happen a lot,
that's typically just accidental or lack of knowledge
or just not knowing what people are doing is when
you have something like an array or some sort of underlying storage mechanism and somebody does a
count on it or something to where there's not some sort of indexer that gives you the thing.
I don't think Count is one, but if you have an IEnumerable, you know, in C#, a lot of times it'll tell you,
Hey, be careful because you're probably going to have to enumerate this entire thing to get the
count. And you'll see people do it two or three times in a method. And you're like, Oh my God,
like, dude, get the count of that thing, store it in a variable and then use that later. Right?
So you kind of have to be aware of the constructs of the data structures you're even using because
you might be doing this and not even realize it, right?
Like you might even not have, like you said, nested loops.
You might just be calling the same thing over and over, not knowing what the underlying operation is going to be.
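A sketch of that repeated-count pitfall in C# (hypothetical code; LINQ's Count() on a plain IEnumerable&lt;T&gt; may enumerate the whole sequence each time it's called):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static void ProcessOrders(IEnumerable<int> orders)
{
    // Risky: each Count() call can walk the entire sequence again.
    // if (orders.Count() > 0) Console.WriteLine(orders.Count());

    // Better: enumerate once, cache the result, reuse the variable.
    int count = orders.Count();
    if (count > 0)
    {
        Console.WriteLine($"Processing {count} orders");
    }
}
```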
And actually, one of the most common things that I do correct when I see them is if I see things that are using arrays and they keep looking up something by a value in the array.
And so they'll iterate through the array to look for an object
whose key equals 1, 2, 3.
And so you'll see a bunch of lookups that happen in a loop
and they won't cache it.
And so it'll be like 100 times and it's looking this up.
And as you know, that's a linear time.
So as that input size grows, then you're doing a lot of extra work.
But if you convert that thing to a hash table, which has constant lookup, then that's a huge savings in time,
and it's often something that's really easy to do. You just
kind of pay the hit to enumerate from that
array to a hash table once and then
lookups are cheap ever after.
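A minimal sketch of that array-scan-to-dictionary swap (hypothetical C#; the Customer type and key field are made up for illustration):

```csharp
using System.Collections.Generic;

class Customer
{
    public int Key;
    public string Name;
}

// O(n) per lookup: scans the array until the key matches.
static Customer FindByKeyLinear(Customer[] customers, int key)
{
    foreach (var c in customers)
    {
        if (c.Key == key) return c;
    }
    return null;
}

// Pay the O(n) enumeration cost once to build the index...
static Dictionary<int, Customer> BuildIndex(Customer[] customers)
{
    var index = new Dictionary<int, Customer>();
    foreach (var c in customers)
    {
        index[c.Key] = c; // assumes keys are unique
    }
    return index;
}
// ...then index[key] is an O(1) hash lookup ever after, and both the
// array and the dictionary point at the same Customer objects on the heap.
```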
Wait a minute, though. I missed something there
with your example. Because an array
lookup would be constant.
By value.
By value. I'm sorry, that's the part I missed.
Yeah. So the interesting thing, though, is, and that's where you start getting into the other side of the equation, which is, you were initially talking about time. Now you've doubled your space, right? Typically, we'll say if everything had a unique value, then you've just
doubled your space because you got the same amount of data in your array and now you have the same amount of data in a hash table.
But you've reduced your time.
Assuming they're all unique.
Well, if you're dealing with reference types, so like objects on a heap, you don't duplicate the whole object.
You have a separate pointer to the object on the heap, but you're not, you know, making a duplicate copy of every customer record.
Oh, you're talking about just pointing back to the array.
Okay.
Well, I mean, like, not necessarily. In the array, like, the array is going to be an array of, if it's a complex object, like in C#, it's going to be an array of pointers to the
heap.
Right.
For those objects.
Fair enough.
If you do the hash table, it's going to be a pointer.
So if you modify the object at, you know, array index five and change the customer's
name, it's also going to affect that hash table because they're both pointing to the same object.
Okay, so that's where, God, it's been a long time since we talked about the stack versus heap.
But this is where you kind of have to know your constructs of your language too, right?
So if you had complex objects like that, it would be pointing to the heap.
So anything referencing that would all be pointing to the same objects, which is what he's saying.
So you don't double your space.
However, if you had an array of primitives like 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, whatever, and you put that into a hash, then you are doubling your space because that's all going to be on the stack, which is, you know, operating at the current function.
So, yeah, I mean, a lot of this stuff goes really deep, but it's, you know, again, there's a trade-off, right? Do you keep scanning the index of the arrays to find the stuff or do
you just put it into separate storage and then know that you can get to those values immediately
on constant lookup time, right? So. Yeah. So that's kind of like a neat little shortcut.
I know anytime I see, like, some sort of a linked list or array lookup by value, where it looks at each one, I know that's an easy hash table replacement.
So if that's something that's going slow, like in the UI, it's kind of hanging the thread, then that's something I'm going to look for.
It's just an easy win.
Yeah, and when you say that, this is all very language dependent, right?
Yeah, and hash tables are a whole other thing. We're not going to get into that because a lot of it depends on how efficient the hashing
algorithm is and collisions and yada yada.
Yeah, but then you've also got dictionaries and map objects.
Data structures would be a good conversation to have.
It would.
I mean, it actually takes this to the next level to where you can probably do more with
what we're talking about right now if you actually understand the underlying data structures.
And, you know, I was looking, you mentioned about heap versus stack.
Did we ever do, I don't think we ever did like a topic specifically on it.
I think it was just like a byproduct of like a boxing and unboxing conversation, right?
So the boxing and unboxing conversation was episode two.
God, that's been a few.
It's been a few days ago.
It's been a couple episodes since we've talked about it.
I don't know.
Maybe we do something like that in the future.
I mean, it's definitely something that if you're not aware of can really bite you over time.
So anyways.
All right.
So, hey, I'm going to take a quick break and ask you to leave us a review if you haven't already because we really appreciate it.
It helps us grow the show, helps us find new people, helps us keep doing what we're doing.
So, if you like what you hear, then drop a five-star review in there and that would help us out.
And if you don't like it, drop a five-star review and send us an email and we can talk about it.
All right.
So now we head into my favorite portion of the show, Survey Says.
All right.
So last episode, I asked, does having multiple monitors improve your productivity?
And your choices were: absolutely, and on the bright side, pun intended, I get to work on my tan from all the extra UV light. Or, no, but it can be a nice luxury. And lastly, it depends on the code base; the smellier the code, the more monitors I need. That and air fresheners.
Alright, so Joe, you weren't here last week, or last episode, so I will let you go first.
Okay. Which one do you think was most popular?
Absolutely, it does improve your productivity, with 43%.
43%, absolutely. Dang, son. Even though I use one.
All right. I'm going to go with absolutely, and I'll go 44.
44. Wow. He's going to oversell it by one.
Price is Right, baby. Price is Right rules.
That's right.
All right. Well, I'm sorry, Joe. Allen wins.
Yeah. All right. How far was it? Is it 60s? 65?
83% of the vote.
Whoa. 83%?
Was absolutely.
Wow.
Did anything get zero?
I mean, surprisingly, the it depends was like really close.
It wasn't very much.
So, I mean, it wasn't zero percent, but it definitely didn't have a lot of love, which I was surprised about. Like, which one did you guys think would, like, how do you honestly feel? Which one, what's your answer?
Absolutely.
Absolutely? You absolutely think it makes you more productive?
Yeah, man, definitely.
But Joe, you just said you only use one monitor, so why?
Space. That space is also important to me.
So I'm taking a productivity hit for a little bit of sanity.
Gotcha.
I have an option for you.
Just buy like a 50-inch 4K.
Then you're good.
Yeah, I don't know about a 50-inch. I am looking for a new monitor, and I was thinking about getting, um, like a 40-something 4K monitor, but I really kind of want a curve. Because, so, I had to get rid of the LG, sadly, and so I'm looking for something to replace it, and that's why I'm down to the one monitor. But I'm kind of like, you know, if you're working on a laptop, you only have, if you were using just the laptop, right? If you're out and about, you only have the one monitor. So are you saying that you can't be productive on just your laptop?
I mean, you know what? You're not as productive. You alt-tab a ton when you're on a laptop, right?
I will say though, and I mean, Macs have had this forever with the virtual desktops.
Those help a lot.
And on Windows too, Windows has it too.
Yeah. Yeah. And so Windows introduced it, I don't remember, maybe last year, I can't remember, but using virtual desktops really is nice. And on Windows, you know, like on Mac, I just would four-finger swipe to the various different screens. On Windows, you can Windows-Tab to get to them, or Windows-Arrow.
And just like on Mac, you can control it.
Windows-Arrow doesn't do that. Windows-Arrow moves my windows around on the screen. You can, I want to say it's like Control-Windows-Arrow. That's what I do on Windows. And you can change between desktops, or you can Windows-Tab and then it'll bring
up all your desktops.
But, um, and depending on what, what drivers you have, like if you're on a laptop, for
example, or the touchpad, the drivers there might also support like a three or four finger
swipe to go from one desktop to the other.
But also on the Mac, you can just press control and the arrow and you can flip between desktops.
Hey, so here's a little tip for you.
God, this is the wrong time in the show. But on the Windows-Tab thing, one of the reasons why I didn't adopt the
virtual desktop on Windows sooner was because my Outlook would be on one
screen and my Skype would be on one screen. And so somebody would instant message me and I wouldn't
get it, right? And they're like, man, does this dude ever work? I'd come back over my screen like,
oh man, he wrote me two hours ago. You can, if you Windows tab to where it shows you a list of
all your Windows at the top, you can right click on an application and say, show on all screens. And then that way your Skype, your Outlook, and all those
things will be available on every screen you go to so that if you do get a notification, you'll
know about it.
So I actually find that Skype and Outlook aren't so bad, though, in that particular scenario you're describing, because they're pretty good about, at least without having to do that show-on-all-desktops thing you're describing, um, they wouldn't show up on my other ones. But I actually kind of like it. Like, when I use the multiple desktops within Windows, it's because I want, like, here's my quote-unquote communication desktop, right? Skype, you know. Here's my coding desktop.
Just to keep the worlds of sanity,
keep those things separate so I can stay sane.
Because otherwise, it can really be frustrating
if you're trying to stay in some mindset
while you're working through a problem
and then you see a, you know, Slack or Skype or email come in, and all you gotta do is see, like, just the first, you know, line of it. And you're like, oh, I don't know what that is. I want to read the rest of it, or I want to respond to it. And you're like, oh man, now I'm out of it.
I might've been late for my scrum today, cause I was kind of, I was just sitting there working, and I get pinged, hey, you. I was like, oh, oh yeah, my bad, I guess. But, you know, no judging.
Yeah, no judging.
All right. All right. So let's do today's survey: how's your math? So your choices are: there's an app for that; or, was great in primary school.
Now, who calls it primary school?
Yeah, man.
Really?
We're international friendly here.
Oh, is that what it is?
I'm sorry.
I'm sorry.
Good call.
Way to be PC, man.
Man, I'm going to hear about that.
Dang.
Your third choice, I can tip 20% comfortably.
Or I can still find the area under a curve.
And lastly, I can still log of E with the best of them.
That's LN of E. That's a natural log.
Oh, sorry. I misread that. So see, I already failed. I can still natural log of E with the best of them. I am curious what the results of this one are going
to be. And I'm going to guess right now that neither me nor Joe will get this answer right.
Yeah, I should have said I can tip 20% uncomfortably. I actually added a digit somehow
the other day and I was like, oh, wait, that's like a $1,000 tip.
What's going on here?
That's because you're a baller.
Oh, no, that was a mistake.
This episode is sponsored by Datadog.
Datadog is a software-as-a-service monitoring platform that provides developer and operations teams with a unified view of their infrastructure apps and logs. Thousands of
organizations rely on Datadog to collect, visualize, and alert on out-of-the-box and
custom metrics to gain full-stack observability with a unified view of all their infrastructure, apps, and logs at cloud scale. Yeah, they've got 200 or more turnkey integrations, including AWS, PostgreSQL,
Kubernetes, Slack, and Java. Check out the full list of their integrations over at
datadog.com slash product slash integrations. So key features include real-time visibility
from built-in and customizable dashboards, algorithmic alerts like anomaly detection,
outlier detection, forecasting alerts, end-to-end request tracing to visualize app performance, and real-time collaboration.
So check it out.
Datadog is offering listeners a free 14-day trial, no credit card required.
And as an added bonus for signing up and creating a dashboard,
they'll even send you a Datadog t-shirt.
We've got some.
They look really nice.
Head to www.datadog.com slash coding blocks and sign up today.
All right.
So let's go ahead and jump back in here.
And the very next one is where things start getting a little bit more interesting, like Mike said.
And we have O log of N.
And I think we mutilated this a little bit before,
although I think we got to the right answer.
So when we're talking about big O notation with log of N,
we're talking base two, at least when we're talking about this in software terms, right?
Yeah, and what we mean when we say log N,
we're kind of skipping around a little bit in terms of growth here.
But what we mean is that we're just talking about log of N,
logarithm of N.
So we're only looking at a fraction of our input. And this is the one where it's very different
from looking at just half your input, because what we need to do is kind of keep dividing.
So an example here I like is looking up something in a dictionary, where you kind of flop that big
book open, if you remember where those are. And like, you know, your word's in the S's, and you see you're in the P's, and you need to guess again and kind of flop a bunch of pages to the right.
And ultimately, you only end up doing, say, eight page flops
to look up something in a 1,000-page book,
which is really efficient and really nice.
And that's an example of a log N.
And it's different from dividing just once, because if we only cut that 1,000 in half one time, you would still look at 500 pages looking for your item.
So this is like an example of like a binary search, right?
It's kind of what you're talking about, like this divide and conquer approach to things, which we talked about, I think, three episodes ago.
Yeah, we talked about that one recently.
Another real-world example that we talked about at the same time was git bisect.
Right. So again, like Joe said, it's just splitting it up and making the thing smaller until you get to the answer you need, without having to go through every single item in your collection. In fact, if I remember correctly, Joe, when we did discuss this binary search type of divide-and-conquer operation, we used the phone book analogy, right? Where it was like, you're going to skip to what you think is going to be about the right spot, and then you'll divide that, and divide that, and keep dividing until you get to about where you need to be.
Yep.
Yep.
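To make that dictionary and phone book idea concrete, here's a minimal binary search sketch in Python for the show notes (our illustration, not code from the episode; the names are made up):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it's absent.

    Each comparison throws away half of what's left, so a 1,000-entry
    list takes at most about 10 comparisons: that's O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # flop the book open in the middle
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1                  # our word is further right
        else:
            hi = mid - 1                  # our word is further left
    return -1

print(binary_search(list(range(1000)), 873))   # 873, found in ~10 "page flops"
```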
And so to kind of put this in perspective, this ends up being slower than constant time, but less than linear. So it's sublinear. It ends up being between O of 1 and O of n.
And kind of an example here to kind of put that in terms of seconds
is our old one input, one second example.
For a 1,000 size input, it's going to take 10 seconds.
So compared to linear, which for 1000 took 1000
seconds, that's a huge difference. So log n isn't just a little bit better than linear.
It's way better. It's much closer to the constant line than it is to... oh, gosh, I don't know if that's true. It feels like it's much closer to the constant line.
Well, it is. Okay, so let me go back to the Jeff Atwood example then, where each unit of time was one millisecond of CPU time. Then for every size of n that he has listed there, from one to 16.7 million, O of log n is less than one second for all of those.
So, yeah, it is.
What you said is actually very true.
It is very close to constant time, even as numbers get outrageous.
And I think, Mike, don't you have the big old cheat sheet thing on this one as well?
Oh, yeah.
This is green.
This is very green.
This is where you want to be.
And it's literally like, so going back to what you just said, Joe, like if you look at the big old cheat sheet, like the lines are so close to each other, they basically just put a comma between the O log of N and O of 1.
Yeah.
I just looked it up, and now I feel much better: it is indeed much, much, much closer to constant than it is to just N.
Yeah. And, you know, since we're talking about logarithms here, I wanted to at least try to remember what this actually is, right?
So if we were to say that we had X to the third power equals 27, and we wanted to solve for X, then we could say, okay, fine, it's X being cubed that equals 27, so we could take the cube root of 27 to solve for X, right? But if we flip those around and instead we had an equation like 2 to the X equals 16, and now we've got to solve for X, well, that's the type of problem that logarithms are solving for us. That's what this is doing for us. So when we say 2 to the X equals 16, we're asking: how many times do we have to multiply two to get to 16? Right. Is that making sense? Is that helping, you think?
I mean, yeah, your logarithm there would be log of 16, base 2, right?
Yeah, base 2. Yep.
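As a quick sanity check on that definition, a hypothetical snippet (ours) that just does the arithmetic:

```python
import math

# "How many times do we have to multiply two to get to 16?"
# 2**4 == 16, so log base 2 of 16 is 4.
print(math.log2(16))          # 4.0
print(math.log(16, 2))        # 4.0, same thing

# And the flipped-around case: solving x**3 == 27 just takes a cube root.
print(round(27 ** (1 / 3)))   # 3
```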
That scales super well. I mean, it's almost flat as things kind of stretch on. So yeah, logarithms are really great, and you'll see a lot of really great solutions to algorithms where you start out with the brute-force solution, which would be like n squared, polynomial. And someone will figure out, hey, we can convert this to a log n problem if we also do this. And so they'll loop over the input, or they'll sort it first, and do something that lets them sneak this log in there somewhere, preferably inside an inner loop. And that's kind of the secret. So if you're used to dealing with these algorithmic complexities and looking at Big O, then whenever you see an O of n squared on something like an exam or an interview, be looking for a way to reduce that inner loop to a log n operation: either by sorting, or doing something like a binary search, or doing something to reduce the inputs that you look at and reduce duplicate work. So yeah, it's really important to look for those log n's, because they scale so well it makes things so much better.
So when we used that
array of five integers as just kind of a baseline to discuss some of this: with that array size of five, if we did an operation that was O of log n, we're only going to have three operations on that, right? So you can see how the O of n would be five operations, and now O of log n is three. So we've already cut it down. And this is where the scaling comes in, where Joe was saying how much better it gets: because if n was instead 100, then we'd only have seven operations.
Yeah, only two more steps for 95 more, or 20 times, the input size. Two more steps. That's amazing. Or, four more steps.
Oh, yeah. Four more steps.
See, you know, math is hard.
That's why there's an app for that.
Well, it's easy enough to visualize, right? Because let's just put out numbers a little bit real quick.
So two to the fifth is 32, right? Two to the eighth is 256.
So adding three more, you got the 256.
Now let's just do two to the 12th here just for the heck of it.
And it gets big really fast.
So we're at 4,096.
Two to the 12th?
Yeah. So 4,096.
So 12 operations would get you 4096 inputs, right? And that's kind of what we're saying is even as the number of N grows, like 4096 N, that's only 12 operations there. If you did two to the 20th, right, 20 operations would get you in the ballpark of 1,048,576 inputs. So 20 operations. So that thing, you know,
20 operations for over a million inputs isn't much worse than when we were down at 12 operations for
what was that number? Oh, two to the 12th, that was 4,096. So yeah, man, you had eight more operations and you were able to handle, you know, 256 times the inputs. It's kind of crazy. So, yeah.
Yeah. You're like, hey, here's a book with 4,000 names that I need you to find me something in.
And I'm able to say, yeah, I only need to look at 12 of them.
It's going to take me 12 seconds, right?
Yeah, I'm done.
At worst.
At worst.
At worst.
Right.
Right.
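And to see just how slowly that operation count grows, here's a tiny sketch (ours) printing the worst-case counts for the sizes mentioned above:

```python
import math

# Worst-case number of halvings needed to pin down one item out of n
for n in [5, 100, 1_000, 4_096, 1_048_576]:
    print(f"n = {n:>9,} -> about {math.ceil(math.log2(n)):>2} operations")
```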
You know, when you put it like that, Joe, it kind of reminds you of those types of questions where it's like... I'm trying to think of a specific example, but, oh gosh, not the state fairs, but the carnival-type questions, where it's like, hey, I can answer something about you in X number of questions, right? You know, where they all tell you slightly tricky things.
i always think of name that tune i can name that tune in three notes. And when they do it in one note, you're like, what? Nobody can do that. How's that possible?
And everyone listening is like, what's the name of that tune?
All right. So I think...
You're not going to touch that.
No, no, we're not. So for the next part, I think we need to skip this piece right here, because it leads right into the next one, right? It kind of takes over it. So who wants to pick up the next Big O notation thing?
I can do that.
So we kind of talked a little bit about n squared and how a lot of times you can reduce those problems. You take one of those loops that's looping over your full input, and you figure out how to halve it, and halve it, and halve it, or take some sort of subdivision of it, and convert that inner loop into a log n operation. And then, if you're still keeping the outer loop, you've got n log n. And this comes up a lot. All the sorting algorithms, a lot of famous algorithms, are basically riffs on that notion: they take some sort of brute-force solution which runs in polynomial time, figure out how to reduce part of it, shift things around, and you end up with the n log n operation. And what that means, as a kind of crude example, is that you take an array of five numbers and you loop through each of those five numbers, where in the n squared algorithm we'd have to loop through those five numbers again, for a total of 25 operations. Now we're saying we only need to look at half, and a half, and a half. And so what is that, like one or two operations?
On the inner loop, right? So you're still going through
the outer loop of your five elements, but then your inner loop, you're going to use something
like some sort of search, right? Something that goes in. And so now you're splitting that thing. So your outer loop is still O of N, but your inner loop
is now O log N, right? And that you can't throw away. When you've got two of these things,
now you have to multiply them together. And so your O of N times your O log of N is now O N log N, which I think is what me and Michael were talking about last time where I was like, wait, I think that'd be slower.
And then I got called out many, many times saying, no, it totally can't be.
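Here's a hedged sketch of that exact shape: an O(n) outer loop with an O(log n) binary search inside, giving O(n log n) overall. The pair-counting problem is our invented example, not one from the episode, and it assumes distinct values to keep the sketch small:

```python
from bisect import bisect_left

def count_pairs_summing_to(nums, target):
    """Count pairs of distinct values in nums that sum to target.

    sorted() is O(n log n); then for each element (the O(n) outer loop)
    we binary search for its complement (the O(log n) inner step), so
    the whole thing is O(n log n) versus the brute-force O(n^2) double loop.
    Assumes the values are distinct, for simplicity.
    """
    nums = sorted(nums)
    count = 0
    for i, x in enumerate(nums):
        j = bisect_left(nums, target - x, i + 1)  # search only to the right
        if j < len(nums) and nums[j] == target - x:
            count += 1
    return count

print(count_pairs_summing_to([1, 4, 2, 3, 5], 6))  # (1,5) and (2,4) -> 2
```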
So just, you know, that's where, again, what you were saying earlier, Joe, about things falling in between, right? This one falls in between O of n and O of n squared, polynomial.
Yeah. And unfortunately, it's much closer to the n squared in this case. So it's not a great curve. It's not nearly as close to linear as the other ones, but it can have a really big impact. For example, for that 1,000 input size, the one-second thing, the n squared was 1 million seconds. This is 10,000 seconds.
That's big.
Yeah, so it's big. 10,000 seconds is a long time. It's much worse than our 1,000 seconds, 10 times worse than the linear case, but it's still nowhere near a million. So it's much better than n squared, but, you know, it's not great.
Yeah. I mean, going back to our big O cheat sheet, this is in the orange category. And in fact, the way they have it drawn there, it's closer (it's not exact, but it's closer) to the 45-degree angle I think I heard someone mention earlier.
Yeah, I said that, yeah.
It's not a 45, and it's actually a curve, not a straight line, but it's closer to it,
right?
But it's definitely in that orange area of: this is probably bad, you should be concerned, unless you absolutely need to do this. And if you're in an interview and they give you a problem to solve, you figure out how to do it in n squared, and they say, can you do it better? You say, yes, of course. Here's how I get to n log n.
They might ask you, can you get it better again?
Now, this is the point where if you look at the problem,
you might be able to figure out some sort of really clever solution and say,
yeah, you can get it down to n time, but they usually won't have you code it. So if you're interviewing, it seems to always, or often, follow this kind of pattern: they give you the obvious brute force, you get it down
to n log n, which is much better. And sometimes there's a clever solution that you might be able
to identify and relate that is going to go faster than that. But usually that's going to be really
tricky and really hard. But it does seem like a lot of the N squared problems
that you run into either out in the real world
or kind of in interviews or like code wars,
a lot of them can be reduced to n log n.
That's a definite strong pattern there.
And just as a heads up, on what you said about creative solutions and how they get tricky: a lot of times it's just knowing math, right? And it's kind of unfortunate, but that's really what it boils down to. Like, if somebody gives you: you have an array, and all the numbers are sequential, and one of them's duplicated; how do you find it? Well, as a programmer, you're probably like,
well, I'll loop through it and find the one that shows up twice, right?
Well, now you're in O-N time.
But if you were to just sum the entire, well, I mean, I guess you're still in O-N time.
But there's a way to where you don't have to scan it twice and have extra storage.
And there's basically an equation for it. And I've got a link that actually came out of, oh my God, what's the name of this? The Impostor's Handbook. It's a YouTube link to a channel called Numberphile. But there are equations where, if you know the beginning and end of a sequence, you plug them in and get the sum of the entire thing. And all you have to do is take the sum you actually got minus the sum you expected, and the difference is your number.
Right.
And so that's very fast compared to what you'd probably do in your programmer mindset: well, I'll just loop through it and do this thing.
Right.
Yeah.
I was actually given one of these as a question one time, and it totally threw me off, because I didn't recognize it as something with a constant-time formula. Because, like, why would I, right? But the way the question was worded was: you have an array of unknown size of integers that are not sorted, so they're not in order. But they are all unique integers, and they all start at... I want to say, if I remember, I'm pretty sure it started at one, right? The array started at one, and, let me think how to say this: if you were to sort them, they would all be contiguous, except there's one missing. And you've got to find the missing integer out of that list.
And I didn't recognize that, oh yeah, there's a math formula for that, because you kind of already have an idea as to what the max integer is based off of the size of the array.
Right.
Right?
So, yeah, there was a formula for that.
And I'm like, oh, man.
Yeah.
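A minimal sketch of that interview trick, assuming the array holds the integers 1 through n+1 with exactly one value missing; the constant-time piece is the Gauss sum formula m(m+1)/2, and find_missing is our illustrative name:

```python
def find_missing(nums):
    """nums holds the integers 1..m in some order, with exactly one missing.

    The expected sum of 1..m is m * (m + 1) // 2, a constant-time formula,
    so one O(n) pass with no sorting and no extra storage recovers the
    missing value.
    """
    m = len(nums) + 1                  # the max value, inferred from the size
    expected = m * (m + 1) // 2
    return expected - sum(nums)

print(find_missing([3, 1, 5, 2]))      # 4
```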
I mean, going back to this, what Joe said, this is so important. Like if you're in an interview with one of the big tech companies out there,
these are the types of questions that they ask because they want to see how
you think through things.
Right.
And if there's a mathematical approach, it's probably worth asking the
interviewer, hey, is there some sort of formula I can use, right? Is there a mathematical equation I'm just not
aware of to do this? And a lot of times, though, you know, if they see that you thought in that
realm, they'll help you out and be like, okay, well, just assume you can call this function
called, you know, get value, right? And they might go with that, because like Joe said, a lot of times these things are way more complex, and it's not like you're just going to be able to pop this formula off the top of your head.
Yeah. One example I really like is trying to find prime numbers.
Like, if I ask you, hey, find me the two millionth prime number, then a kind of naive approach is to say, well, I'm going to loop upward, and I'm going to check if each number is prime, which means checking if it can be divided by any number less than it. So you can think of that as an n squared algorithm. In this case it's not an input size, we're actually specifying a limit, but it operates the same way: go up to your limit, and check each number against everything less than it. That's an n squared algorithm. It performs horribly, and you're
not going to make it to that 2 millionth number in a reasonable amount of time.
But if you start thinking, well, you know what?
If I want to know about the number five, or, we'll say ten, to make it interesting: if the number ten is prime,
I don't need to check if it's divisible by nine,
because that's going to be covered by a smaller number.
I only need to check numbers that are half of it or smaller.
But then you're going to do that. But what we kind of talked about tonight is like dividing
in half doesn't change your runtime. So we're going to be multiplying n for the outer loop
times n divided by two. That's still an O n squared algorithm. So even though you're halving
the operations you're going to do, you're still not going to find that 2 millionth prime in hours. You're going to have to run that
thing all night. So the solution here, I don't want to spoil it, because it gets a lot more
complicated if you try to write your own prime number generator. But you only have to check divisors up to the square root. And you can even do that by keeping track of all the primes as you go along, which in this case would be up to 2 million of them. So your space complexity is not going to be so great, but you can basically test against just the primes up to the square root and reduce it that way.
And so the net result is that you're going to end up running in,
and I believe it's n log n time for a good prime number generator.
There's different techniques and different ways to get that even better and whatever.
But it's just kind of a cool way of seeing this kind of in practice where the obvious solution is terrible.
And as you whittle it down, even things that seem like it'd be really good, like dividing in half,
like you would think would make a huge impact, still going to run all night.
It's not until you really like pare this down into a small, small logarithmic or
even like a square root type solution or set of data to look at that you're going to see some
real benefits. And so what you're saying though, is you're trading off the time complexity for the
space complexity in this case. Yeah. In that case where you're memoizing, that's actually a really common technique in some Code Wars problems and things like Project Euler, where you absolutely do make the tradeoff and say, you know what?
I can store a million numbers in RAM, no problem, but I can't recompute a million prime numbers from scratch every time.
Right.
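A rough sketch of that "remember the primes, only divide up to the square root" idea; this is our code under those assumptions (real generators use sieves and get fancier), with nth_prime as a made-up name:

```python
def nth_prime(n):
    """Return the n-th prime via trial division against stored primes.

    The space-for-time trade: by remembering every prime found so far,
    each candidate only gets tested against primes up to its square
    root, instead of against every number smaller than it.
    """
    primes = []
    candidate = 2
    while len(primes) < n:
        is_prime = True
        for p in primes:
            if p * p > candidate:     # past sqrt(candidate): no divisor left
                break
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
        candidate += 1
    return primes[-1]

print(nth_prime(10))   # 29
```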
So just to wrap up here on n log n: I don't know if this was said, but I wanted to make it clear that when we say n log n, that's n times the log of n.
Yeah, and that's what I kind of screwed up on last time, when we were like, well, could that get less than n?
No. For the most part, n log n is going to be greater than n and less than n squared.
All right, who's got next?
Let's get into the really bad ones now. This is where you have a very bad hockey stick on the graph: O of 2 to the n. So we're flipping the equation.
Yep. And just to give you some perspective here, for the example of 1,000 size, where the constant time was one second and linear was 1,000 seconds, in this case an input of 1,000 is... I can't even tell you how much it is. It's billions of years. It's much longer than the age of the universe. Like, there aren't even names for how big this number is. You've got three times ten to the 284th. Scientific notation: basically a three with the decimal point moved 284 places to the right, billion years.
So it's more than three googols.
Oh, way more than googols.
Yeah.
That's ridiculous. Or 300 googols.
Does that make it an Amazon or an Apple?
Or is it an Amazon plus an Apple?
300 googol billion years. That's about what it sounds like.
I think I might have messed up a little bit. Actually, no, the billion does belong in there: three with 284 zeros, billion years. Either way, call it a three with 293 zeros in years.
Oh, okay.
Oh, yeah, yeah.
Okay. So that's 300 googol years, give or take.
Yeah, but the billion is kind of insignificant at that point, you know? A billion is what, nine more zeros? Like, who cares? It's 284 zeros. Ugh.
So, you know, I was trying to find examples of where we might be tempted to run into an O of 2
to the n type of problem, and I was really having some trouble. But I did find this one answer from Stack Overflow, where the author of the answer suggested that algorithms running in O of 2 to the n
are often recursive algorithms that solve a problem of size n by recursively solving
two smaller problems of size n minus 1.
So things that keep going down and calling themselves or calling things that keep splitting
it up.
Yeah.
So then that led me into thinking, well, hmm, where would we be talking about this? Because we mentioned The Imposter's Handbook, right? He mentions, in another chapter after the Big O one, hey, here's a bad way to solve the Fibonacci sequence, right? And it's the same type of thing as, hey, Joseph, solve for the 2 millionth prime number. Well, what if you were given a question like, hey, solve for the 100th Fibonacci number, right? And the first iteration he shows is like, well, you take that 100 in and you recursively go back to 100 minus 1 and 100 minus 2, and then each one of those would recursively go back to 99 minus 1 and 99 minus 2, and then 98 minus 1 and 98 minus 2. So he drew out this graph of it, right? And he worked with much smaller numbers
than 100; I probably should have picked something smaller too. But the point that he was showing in the diagram was: look at how many times the same function with the same argument gets called. Right? And he didn't call it out specifically as O of 2 to the n. So that's why I was getting kind of thrown in my own head, like, I think this is O of 2 to the n. I think.
It sounds, based on the description, like you're doing something twice: running two smaller ones that each go do the same thing again. Yeah, I don't know if that's true or not, but it sounds right.
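Here's the classic naive Fibonacci shape being described, where each call spawns two more so the call tree roughly doubles per level, next to the usual memoized fix. This is our sketch, not the book's exact code:

```python
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: each call spawns calls for n-1 and n-2, so the
    call count grows roughly like 2**n, and the same arguments get
    recomputed over and over. fib_slow(40) already hurts."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Same recursion, but each distinct argument is computed once: O(n)."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(100))   # 354224848179261915075, instantly
```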
The thing is, you know when you see this kind of stuff
is when people try and break up problems in a super modular type solution.
Oh, yeah.
You see this kind of stuff with SQL queries and that kind of thing.
You try and make SQL modular, and you start running into this kind of garbage where you keep calling the same methods over and over to get the same results back X number of times. And where you typically see it is when people try and get clever with using the least amount of code to yield the most results.
So you're saying more code is the answer.
Sometimes.
And I came up with a little contrived example here, where I literally tried to do something that happens in two to the n operations.
So I came up with a little function
where you pass like an exponent
and it will give you a binary buffer back
with two to the exponent number of zeros.
So if you give me a five to my function,
I'll give you a binary array of zeros of size 32.
And so it's really contrived, but it was just kind of funny.
I would print it out in Chrome; I just did it in JavaScript in the console.
It was just funny to kind of put 100 in there and see it start spinning for days.
Awesome.
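The episode's version was a few lines of JavaScript in the Chrome console; here's a Python take on the same contrived idea, just to pin it down:

```python
def zeros_buffer(exponent):
    """Deliberately O(2**n): allocate a buffer of 2**exponent zero bytes."""
    return bytearray(2 ** exponent)

print(len(zeros_buffer(5)))   # 32, i.e. 2**5
# zeros_buffer(100) would need 2**100 bytes -- don't wait up for it.
```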
All right, so I think we said this, but just to be clear, right?
This is in the yellow.
No, this is in the bright red.
Yes, this is very red on the Big O cheat sheet. So maybe here's the tip, then: if you find yourself doing recursion, number one, that should already be a red flag in terms of space, right?
Of stack space.
You need to be aware of what size inputs you're working in.
But if you are doing anything with recursion where you're just iteratively
going back over the same collection,
maybe that's the sign that you might be in this territory.
Yeah.
Maybe. I, yeah, I don't know. that you might be in this territory? Yeah. Maybe?
Yeah, I don't know.
I don't know that I've ever really seen one of these guys out in the wild.
I think you'd have to try to mess this up so hard.
Or if you sat there at your computer getting old
while this thing was still trying to run,
that might be the other one.
I mean, we did say 300 googol years. So, you know, if after 300 googol years you're still sitting there waiting on it, you've probably done something wrong.
And you said Amazon years and Apple years, so you've got time.
Yes, you've got that. Well, there is one worse thing, and this is the worst one that we typically talk about, which is the n factorial. And unlike
the exponent, this one is actually something that does come up pretty often. In fact, every time we
try to get permutations of an array, so our five-element array example there, it means we want to see 1, 2, 3, 4, 5, then 1, 3, 2, 4, 5, you know, every ordering that's possible with those inputs. That's an n factorial operation.
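A quick sketch of the permutation blow-up (our example; itertools does the enumerating, and the count is exactly n!):

```python
from itertools import permutations
from math import factorial

items = [1, 2, 3, 4, 5]
perms = list(permutations(items))
print(len(perms), factorial(len(items)))   # 120 120 -- every ordering

# The growth is the problem: 10 items is already 3,628,800 orderings.
print(f"{factorial(10):,}")
```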
And so this is the one where I figured this out for our size of 1,000, where 1 took 1
second, 1,000 takes, I'm not even going to try, I didn't even write down the first number
because it matters so little.
It's over 2,500 digits.
2,500 digits of seconds.
That's a lot.
Yeah, I took it into Visual Studio Code just to kind of – so I could even count like the number of zeros.
I had to do a little funky math to even figure it out.
And it was like crashing code on me even to like get all that stuff on one line so I could get the length of it.
So I want you to picture in your head, like when we go back to the big O cheat sheet for this one,
this one is so bright red, it's on fire.
Okay?
And when we talk about a hockey stick, what I want you to imagine is you draw your normal XY graph, right?
Picture that in your head. Now go from zero, and maybe think about drawing a curve, but instead change your mind and just go straight-line up. That's pretty much what O of n factorial does.
Yeah, there's no good there at all.
Yeah. Actually, have you guys ever been
factorial what this does yeah there's no good at all yeah i actually have you guys ever been
in a situation where you've had O of N factorial problems?
I don't think so.
I don't think so.
Oh, I've done a little bit of stuff like toy problems and something like code wars or whatever with factorials.
No, I mean, like, where you've been given one in the real world.
Oh, no.
No.
See, this is where... you remember how Rob Conery, the author of The Impostor's Handbook, told this story about how he actually lost a job one time because he didn't recognize the problem and couldn't explain it to someone else? I might've mentioned this before in a past episode, but it definitely,
like, while you guys were talking, made me think about this where I was like, I actually had this
come up one time, where what the customer was asking of me was an O of n factorial problem. But at the time, I didn't know how to communicate that to them.
But all I could say is like, look, what you're asking for is not possible in my lifetime.
And, and the best I could do would be like, look, I'm going to go ahead and write some code to do
it. And then we'll take some time samples, and I'll show you how this is never going to finish in a reasonable amount of time. And that was as good as I could do.
And so that's why it's important to try to learn these things and understand these things. Cause
you know, when we talk about these, like you were mentioning the 2,500-plus digits, right? If we take the Jeff Atwood example here, where each unit of time was one millisecond, then for n factorial where n is 16, it's 663 years.
Using milliseconds.
Using milliseconds. It would take 663 years. So that's the trouble that
young me, I want to go back and kick young me in the butt because it's like, I didn't know of a
good way to communicate that to the customer. Like, hey, this is not possible, right? This
isn't going to finish. This can't be done in my lifetime.
So, you know... and of course, someone else is like, no, I can absolutely do that. Of course. Because no developer has ever, ever tried to step on somebody else by saying, yeah, I can do that.
Oh yeah.
Well, to your point, not to pick,
but it was a project manager.
It was like, no, that can absolutely be done.
I'm positive that can be done.
So here's the thing, right?
If you start going around talking like, well, that's O of N,
then you're probably going to get looked at with stinky looks all over the place, right?
And Rob even says in his book
that the problem is it makes a lot of people very uncomfortable.
If you don't have that computer science background and you haven't gone
through all this kind of stuff,
a lot of times it does make you uncomfortable because you'll,
you'll want to step out of it.
Like I didn't actually go to school for CS specifically.
I went for CIS because I'd already been programming for a long time.
And I was like,
well,
I want to get the business side of things.
And so, like, diff EQ and that kind of stuff that Joe probably struggled through, and I think you probably did too: it's one of those things where you kind of miss some of these. And so if you don't have that heavy background, it can be intimidating, right? But hopefully listening to what we're talking about here helps, because we're trying to put it
in plain language, right? Like, O of n is something you can look at and see, right? O of n squared: think of just the loop constructs and all that stuff. You should be able to look at it and say, hey, I think this is probably O of n, or this is O of n squared, or whatever. It should be a language that you're able to communicate in. So in that situation you're talking about, you could have looked at the guy and been like, that's O of n factorial.
Right.
Can't do it.
I mean, look at the way we are armed with information now, right?
Yeah.
The big old cheat sheet is an invaluable resource, right? Because you can just visually show
somebody like, hey, here it is. This article from Jeff Atwood, right, where he has the table,
invaluable, because you can immediately just put it in front of somebody's face
and be like, look, this is why what you're asking can't be done.
Right.
I'm not speculating.
I'm giving you evidence.
Yes, exactly.
Right.
Yeah, and I don't think we ever actually said what a factorial is, did we?
I don't think we did.
But isn't that the thing that everybody remembers from primary school?
I always get it mixed up with summation, but it's the product of the integer and all the positive integers below it.
So, for example, input of 5, it's 5 times 4 times 3 times 2 times 1,
which obviously is going to be much bigger than just 5 times 5, which is n squared.
Yeah, it gets huge fast.
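For scale, a tiny sketch (our numbers) comparing the two:

```python
from math import factorial

for n in [5, 10, 16, 20]:
    # n! versus the already-bad n**2
    print(f"{n:>2}: n^2 = {n**2:>4}, n! = {factorial(n):,}")
```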
This is the way I want my pay scale to work.
What do I got to do to make that happen?
I would like a 6 factorial raise, please.
Yes.
That sounds amazing.
I found a calculator online that will list permutations.
I was like, is it really that many?
It doesn't seem like that many.
And so, you know, you kind of tell it how many items you want, and it'll generate all the outputs for you. So the first one I did was, like, five, I think.
I was like, okay, here's a ton.
Like scrolling, scrolling.
I'm like, all right, well, I don't trust this.
It can't be that bad.
Let me try 10.
The website's just like, no, it's too big.
Yeah, I can't do that. It's like, I can't permute 10 items in an array; that's going to be over 3 million. It's going to crash your browser, bro.
Wow, that's crazy.
So yes, as you said, that one's very red. That's on-fire red. Torch type of red. Yeah, your computer just melted and you're out of a job.
Very nice.
You're not going to make it to Patch Tuesday before that thing quits.
Patch Tuesday.
For Windows Server 2 billion.
You're right.
Yeah.
Oh, man.
All right.
So we will have a lot of links in the resources we like section for this episode.
So if you're listening along, your podcast player can usually list out the show notes pretty well.
Yeah, most of them do.
The Google podcast player doesn't.
I mean, it's very minimal. Maybe it's a different feed that it's coming from, but we need to take a look at that. But for most of them, you'll be able to click on the links and go look at these things.
Yeah. And we've been slacking on the videos, by the way. We've gotten a lot of feedback about it, and those are coming back. I've just been slacking. If you see this video right now, you can see that my office is a mess. I just moved, and I'm still picking up the pieces of my shattered life. Yeah, I
need to get the other one back up.
But yeah, we're getting them back up.
We got... He sounds so sad.
Picking up the pieces of his shattered life.
Come on, Joe. That's a song, isn't it?
Come on. It's not that bad. That's a song.
Cut my life in two pieces.
This is my last resort.
I'm so sick of moving.
Just one more time back to Atlanta and then you're done.
Hey, by the way, we keep forgetting we also have a codingblocks.net slash resources page to where if you go up there and check it out, we have like our favorite of our favorite resources up there.
So every show notes page will have the resources we talked about on that page, but our very favorites, the best of the best, are on that resources page.
Including a referral link to Pluralsight, which has amazing videos on algorithmics.
That's what they call it.
You know what else? Hey, by the way, here lately, I've been watching a lot of LinkedIn learning
stuff, which used to be lynda.com, right? They bought it, man. They've got some good stuff up there too.
So I highly recommend that.
It probably should get a link up there too,
because there's some really good courses up there.
Yeah.
So, as one famous Coding Blocks host would say,
check it out,
check it out.
All right.
I think that's going to become the catchphrase for a while now.
Check it out.
It reminds me.
All right.
So let's
get into Alan's
favorite portion of the show.
It's the tip of the week. Yeah, baby.
All right. So,
this is...
This was emailed to
us by Joe
Readley. I hope
I'm pronouncing that correctly.
If not, we've said it wrong so many times.
Oh, have we mentioned him before?
Are you kidding me? Joe Recursion Joe?
Joe Recursion Joe.
Oh, I didn't recognize that.
We can usually say that properly.
Oh, I didn't recognize that that was him when I saw it. Dang. I'm sorry, Joe. I didn't
even put two and two together. Math is hard. So he sent us, because I've had this affair with Python going on for a while now.
So you have a full Python IDE for iOS with an app called Pythonista.
So I have links to it.
I haven't had a chance to try it yet.
It's high on my list, though.
I want to try it out.
It looks so cool.
But I think part of what I'm struggling with though too,
is that like,
what's my use case for writing code on my phone or on my iPad?
Other than like I'm on a train and you know,
I'm bored.
Right.
Yeah.
Cause at first I was like, oh yeah, that'd be cool. Code Wars on the go.
But then I was like, well, I guess I could just go to codewars.com then.
Yeah, but typing on a phone just stinks.
Well, it's got type ahead, though.
How many times that bit you?
Never?
I mean, yeah, well, I'm not speaking from experience, so I can't say.
But, like, looking at the pictures that they show you, it's pretty smart about what it's...
I'm trying to type this while I talk, and that's hard.
So you're messing with it right now?
No, like I said, that's the one thing.
The app is $10, and that's another part of the reason why I was like, man, I'm too cheap.
Because I won't spend $10 on this app until I can think of a good use for it. And I'm like, oh, I just got to go ahead and get it.
So I'm going to go ahead and get it.
Yeah, you should buy it.
That's the thing, man.
Really, iOS needs a try.
Yeah, here's what I'm talking about.
Code completion.
A try before you buy.
Can you see that?
Yeah, sort of-ish.
How about, can I get it closer?
It's kind of fuzzy.
No.
No. Not that great. None of that worked.
No. Dang, I failed. All right, so coding on iOS; that's his tip. Mine: I got lazy this time. Or, you know, I guess I got lazy, or I just wanted to give some people some credit in the Slack group. So, Super Good Dave, who's been on several of our community talks, he actually came back with...
I don't remember what episode it was, but we were talking about the faking framework that was created to generate fake people. Oh, was it last episode? I can't remember, man; my brain's going. So he mentioned that there's tons of other ones out there already as well. So for Ruby gems, there's a faker that's called exactly that, Faker,
that can go create a ton of stuff.
There's another one.
That's also a Ruby one called factory girl,
and it will make a bunch of fake data.
There's another one that allows you to specify the data you want to download
as a CSV or JSON or whatever.
And that's called mockaroo.com.
And then there is a faker in JS that Vlados gave us, which is on NPM and it's called faker, and it will also generate fake data. So there's stuff on
all kinds of platforms, JavaScript, Ruby, et cetera, an API one. So go check those out.
Links will be in the show notes. Thank you for the tip, Dave.
Yeah, some people
are so creative with the names. Vlados
Joe Recursion Joe
Vlados
You guys play Portal 2 or Portal 1?
Probably own it.
It's been a while. Probably don't play it.
I've played it but it's
been a long time. Vlados
God of War man. God of War on the PS4
that's what you gotta play.
That was the voice, though, in the Portal games, right?
The computer.
You guys are embarrassing me.
You need to go play Portal 1 and 2 right now.
They hold up very well.
Oh, come on, man.
There's your tip.
I got a whole closet full of games I don't play.
Portal is fun.
But something about the motion eventually gets to me.
Really? Emotion, or motion?
Motion. Okay, like, some of those maps will just get me sick after a while.
You've just got to fortify and push through, apparently.
All right, drink more. I'll play, I'll play. I guess. One day.
All right. Well, hey, you don't have your tips in here. Are we skipping Joe?
No, I just came up with it now. I'm entitled.
He's being secretive.
That's right.
So, I don't think that I've mentioned, or I don't think that I've suggested, that you check out dev.to. It recently went open source. It's kind of like a blog aggregation tool, except the content is actually there. It's not like a Reddit or a Hacker News where you actually
click off and go.
It's got a really nice community,
which I think is really important. It's really active.
The site being open source is really cool; you can see what's coming. They've got a great roadmap.
They're doing a lot of work.
Super Good Dave, who you just mentioned, has actually
been blogging there, writing a lot about Vue
recently and doing a JavaScript 30 type thing.
Dance to Die, Goprogman,
like a lot of the people that we talk about
and really like to follow are all there.
And so if you hit me up on dev.to or anywhere,
I can give you a list of really awesome people
to check out there.
And if you're a blogger or thinking about blogging,
one thing that's really important to note
is that you can blog on your own blog, say like jozak.com. I can blog there and then sync the content to
dev.to. And when I do that, in order to not lose my Google juice, you can set the canonical
reference. In fact, it's already kind of preset, if you do the sync, to point back to your original blog.
So long tail, five years from now,
when people search the thing you wrote about,
they're still linking to your blog,
but short term,
you're finding tons of readers that you would never have found otherwise who
are just browsing,
hanging out on dev.to because it's an awesome site,
man.
Look at you throwing out marketing terms on our coding podcast.
Long tail.
That's right.
People are like, what?
I've been doing some reading.
What is he talking about?
So you got to check out the site.
It's dev.to, dev.to.
And actually, you should follow them on Twitter and follow everything they're doing.
Just because they're doing really cool stuff in the programming community and making life better for developers and content producers, which is fantastic.
And it's really nice and positive,
not to compare it to other communities
that aren't so much of those things sometimes,
but it's really great.
You should check it out, dev.to.
Check it out.
Check it out.
All right.
So with that, we ask if you're listening to this,
maybe a friend has shown you the way, and now you're hooked.
So go on to iTunes, Stitcher, or more using your favorite podcast app and subscribe.
If you haven't already left us a review, we would greatly appreciate it.
I can't – I'm not lying when I say it puts a smile on our face.
So you can find some helpful links by heading to www.codingblocks.net slash review.
And while you're up there, you can check out all the show notes, examples, discussions, and more.
And send your feedback, questions, and rants to the Slack channel, codingblocks.slack.com. And make sure to follow us on Twitter at CodingBlocks, and head over to codingblocks.net.
You can find all our social links at the top of the page.
Also, I just want to stick this in there at the end
For the Horde!
I feel really awkward.
I don't know what that was.
Mission accomplished.
Showing his secret language.