Coding Blocks - Does Big O Matter?
Episode Date: September 10, 2018
We wrap up our conversation on complexity and play some more over/under as Allen thinks learning is backwards, Michael doesn't write clean code, and Joe brings his dog to the discussion.
Transcript
Discussion (0)
You're listening to Coding Blocks, episode 89.
Subscribe to us and leave us a review on iTunes to learn more using your favorite podcast app.
Check us out at codingblocks.net where you can find show notes, examples, discussion, and a whole lot more.
Send your feedback, questions, and rants to comments at codingblocks.net.
Follow us on Twitter at Coding Blocks or head to www.codingblocks.net,
where you'll find all our social links at the top of the page.
With that, I'm Allen Underwood. I'm Joe Zack. And I'm Michael Outlaw.
Datadog is a software as a service monitoring
platform that provides developer and operation teams with a unified view of their infrastructure
apps and logs. Thousands of organizations rely on Datadog to collect, visualize, and alert
out-of-the-box and custom metrics to gain full-stack observability with a unified view of all their infrastructure, apps, and logs at cloud scale.
They've got 200 plus turnkey integrations, including AWS, PostgreSQL, Kubernetes, Slack, and Java.
Check out the full list of integrations at datadog.com slash product slash integrations.
Yeah, and key features include real-time visibility from built-in customizable dashboards,
algorithmic alerts.
We've been talking a little bit about our algorithms lately, like anomaly detection,
outlier detection, forecasting alerts, end-to-end request tracing to visualize app performance, and real-time collaboration.
And check it out.
Datadog is offering listeners a free 14-day trial with no credit card required.
And as an added bonus for signing up and creating a dashboard,
they will send you an awesome Datadog t-shirt.
So head to datadog.com slash codingblocks to sign up today.
So as always, we like to start off by saying thank you to everyone who left us a review.
So on iTunes, we have char broiled string cheese.
Yum.
Or is it?
Or would you say car broiled string cheese?
No.
Char.
Nope.
Char.
All right.
And Narnor.
I practiced this one before.
Zhao Ying, 189, and Boot Manager.
You think I got those right? I hope I got it right.
Pretty good, man.
The working out is paying off.
The working out.
You noticed. Thank you.
I'm going to be
making a triumphant return to Atlanta. I haven't been back since I moved,
so this is going to be the first time I'm in Atlanta in a couple of years now,
for Code Camp 2018. I'm doing a session there about search engines.
So if you are in the Georgia area, you should come check it out.
And I think all of us are going to be there, right?
Yeah, man.
Yeah, we will all be there.
Joe is the only one you're allowed to kick in the shins.
That's right.
That is correct.
I don't take kindly to that.
Yeah.
If you can catch me. I'm quicker than I look.
Awesome.
All right.
And then also, if you happen to be down in Orlando towards the end of September for Microsoft Ignite, I'll be there doing a session on Azure Functions and Cosmos DB and data pipelines and all that kind of stuff. So I don't know if that's closed or if there's going to be people in it,
but come find me, say hi, and that's it.
So we wanted to, on last episode,
we talked quite a bit about various different complexities with Big O
and that kind of stuff, and we went deep on it,
but we wanted to circle back and just
do like a, almost like a Cliff Notes version of this thing, because it's something that a lot of
people get intimidated by. And I think just having some of these quick and easy things that might
help stick in your head, you know, will take you a little bit further. So we wanted to circle back to that. And so I guess let's start off, right? Whenever
you have random access to an item that's O of one, that's basically when we were talking about
like a hash table lookup or a key lookup or something to where it can quickly get to that
item. Like an array access.
Right.
An index or an array, right?
Go straight to item number five or something like that.
So that's always going to be O sub one or O of one.
Sorry.
Good Lord.
It always bothered me that hash tables were O one. And I read it, I memorized it,
but it just seemed weird to me.
You have this hash table with a hashing algorithm,
which gets you close to array lookup speeds, but it's not.
There's a lot of stuff kind of going on underneath the covers, but they say it basically washes out to O of 1.
And I've seen in practice, like, swapping stuff out for hash tables has been really fantastic, but it just hurts my soul to say that's O of 1.
Yeah, because we know that there's other operations going on,
but they say they're insignificant in the grander scheme of things, right?
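The "washes out to O of 1" point can be sketched with Python's built-in dict, which is a hash table; the list scan is the O of N alternative for the same question (the names here are just illustrative):

```python
# Membership/position lookup two ways: a dict hashes the key and jumps
# straight to a bucket (roughly O(1)), while a list scan walks every
# element (O(n)).
def build_index(items):
    """Map each value to its position, trading O(n) memory for O(1) lookups."""
    return {value: position for position, value in enumerate(items)}

items = list(range(100_000))
index = build_index(items)

# Same question, two complexities:
# O(n): items.index(...) scans the whole list to find the value.
# O(1): index[...] hashes the key and reads one slot.
assert items.index(99_999) == index[99_999] == 99_999
```

The hashing, collision handling, and occasional resizing are the "stuff under the covers" being described; they're real work, but they don't grow with N, which is why the notation drops them.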
Yeah, and if you'd love to hear an episode on data structures,
you should listen to it because there's a lot of ones I've kind of forgotten about,
like heaps in particular.
You know, you've been talking about graphs recently and how to implement them and stuff.
Definitely some stuff I could use a refresher on. So if you want to hear that, let us know.
We're thinking about it.
Yep.
All right, who wants to take the next one?
List iterations are always O of N.
You know, even for an array or a linked list,
it doesn't really matter in implementation.
But generally, if you're kind of looping through all of your inputs, it's kind of logical you can think of that as being an order
N operation.
Mike? Yeah, so summarizing from what we said last time
too, if you see the same collection being iterated
over in a nested loop, then you're
in an O of N squared situation.
Yep.
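As a rough sketch of those two rules, here is what counting the basic operations looks like for a single pass versus the same collection nested inside itself:

```python
# One loop over the input does n units of work; nesting the SAME
# collection inside itself does n * n.
def count_linear(items):
    ops = 0
    for _ in items:        # O(n): touch each item once
        ops += 1
    return ops

def count_quadratic(items):
    ops = 0
    for _ in items:        # outer loop over the collection...
        for _ in items:    # ...iterating the same collection again
            ops += 1
    return ops             # O(n^2)

assert count_linear(range(50)) == 50
assert count_quadratic(range(50)) == 2500
```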
And the O log N, that was like the divide and conquer type algorithms, which is another
one that I've always found really unintuitive.
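The classic O of log N divide and conquer case is binary search: each comparison throws away half of what's left, so a million sorted items take about 20 steps. A minimal sketch:

```python
def binary_search(sorted_items, target):
    """Return (index, steps), halving the search window each step."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            lo = mid + 1       # discard the lower half
        else:
            hi = mid - 1       # discard the upper half
    return -1, steps

items = list(range(1_000_000))
index, steps = binary_search(items, 765_432)
assert index == 765_432
assert steps <= 20   # log2 of 1,000,000 is about 20
```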
It's like the Monty Hall problem.
Remember that one?
Oh, is that the one where it's like you get three doors?
Yeah.
One has a car behind it.
Yeah.
And you pick one door and the host says, would you like to pick another door?
And you should always say yes because math.
Right.
Right.
Yeah.
Yeah.
I mean, I've written a little program.
I'm like, there's no way this is right.
Right.
It was something like it works out in the beginning where your odds of picking the correct
door in the first are like 33% per door.
And so you think that when he asks you, do you want to pick another door? You still think, well,
it's still those two remaining doors will still be 33% each, right? But it's actually like 50%
or something like that. Is that right? Yeah. I just never understood it. And I
left out the crucial part, where, like, you pick a door and he says,
you know what?
This other door that you didn't pick is a goat.
Do you want to change your answer?
And you should always change your answer.
And that's because you went from 33 and now you're at 50.
But it never made sense to me.
It's like, wait, why is changing your answer?
Like, okay, if it's 50% now, why is it 50% to stay the same?
I don't get it.
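For the record, the math works out to 1/3 if you stay and 2/3 if you switch, not 50/50: your first pick locks in a 1/3 chance, and the host's reveal pushes the remaining 2/3 onto the other closed door. A little program like the one mentioned, sketched here in Python, bears that out:

```python
import random

def monty_hall(trials, switch):
    """Simulate the game: 3 doors, one car; the host opens a goat door
    you didn't pick; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the car nor your pick.
        opened = next(d for d in range(3) if d != car and d != pick)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

random.seed(42)
stay = monty_hall(100_000, switch=False)    # ~1/3
change = monty_hall(100_000, switch=True)   # ~2/3
assert 0.30 < stay < 0.37 and 0.63 < change < 0.70
```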
Yeah. So going back real quick on this divide and conquer, right? That goes back to the episode
where we talked about algorithms, like the binary search and that kind of stuff, right? Like, it's taking a well-known
algorithm and being able to divvy up whatever the work is. So then going into the next one, the
O of N log N, which we've mentioned many times, and Joe even
said on the last episode, like, one of the tricks in an interview is you'll get something that is
an obvious N squared. And then you typically want to try and bring it down to this O of N log N.
And the way that's typically done is you have a list iteration or some sort of enumerable iteration that's going to be O of N.
And then if you have an inner thing using some sort of divide and conquer on that, that will get you to O of N log N.
Yep.
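That N squared to N log N move can be sketched with a hypothetical duplicate check: the nested-loop version compares every pair, while sorting first (O of N log N) makes duplicates land next to each other, so one O of N pass finishes the job:

```python
def has_duplicate_naive(items):
    # O(n^2): compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_sorted(items):
    # O(n log n): sort first, then one O(n) pass --
    # any duplicates must now be adjacent.
    ordered = sorted(items)
    return any(a == b for a, b in zip(ordered, ordered[1:]))

data = [9, 4, 7, 1, 4, 8]
assert has_duplicate_naive(data) == has_duplicate_sorted(data) == True
assert has_duplicate_sorted([3, 1, 2]) is False
```

The sort dominates, so the whole thing is O of N log N instead of N squared.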
And why do you drive on a parkway and park on a driveway?
I mean, come on.
Sorry for the weird tangents.
I'm sick.
I'm sick, guys.
Leave me alone.
The worst one is O of N factorial. If you keep adding loops every time an item is added, which we kind of talked about with the permutations example there,
then you're in O of N factorial territory, and good luck with that one.
Yeah, I mean, your computer is going to melt.
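To put numbers on the melting computer: the count of orderings of N distinct items is N factorial, which outruns everything else discussed here almost immediately. A quick check with Python's itertools:

```python
from itertools import permutations
from math import factorial

# The number of orderings of n distinct items is n! -- it explodes fast.
for n in range(1, 9):
    count = sum(1 for _ in permutations(range(n)))
    assert count == factorial(n)

# 10 items is already 3.6 million orderings; 20 items is ~2.4 * 10^18.
assert factorial(10) == 3_628_800
assert factorial(20) == 2_432_902_008_176_640_000
```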
And you want to think that O of N factorial is just theoretical, but it's not. If you interview at a Microsoft, a Google, an Amazon, you know,
Oracle, any of these big companies, you will be asked, what is the time complexity on this? What
is the space complexity on this? Can you improve the time complexity? Can you improve the space
complexity? Why or why not? So being able to speak in this terminology will matter.
If you're trying to prep for one of those big places, one good thing you could do is probably spend an hour
at Starbucks trying to think of what you don't want them to ask you
and then practice for that. That's amazing.
And Cracking the Coding Interview, right? Pick that book up. Just do it.
Would you say that in regards
to interview type questions, that if you're given anything that's permutation based or combination based, that that's where it's a trick that you got to watch out for the N factorial problems?
Does that sound like a fair statement?
I think it's more, I think it more plays into what Joe likes.
I think anytime that you see something like that, you need to step back and say, is there a mathematical approach to this?
I think that's the first trick.
And then the second one is just what you said, right?
Like trying to get out of the crazy complexity.
Because that's something that I've seen a lot.
Well, I guess, okay, let me rephrase it a different way.
Let me rephrase my question in a different way. What I mean to say is if they give you,
if you see a question that's given to you,
that's permutation or combination based,
then the quote,
like first,
you know,
your easy way out,
your first gut reaction might be to go down an N factorial path by mistake.
And they're trying to like set the trap.
Right.
And to see if you catch it.
Does that sound like a fair statement?
I feel like if they're already talking about factorials in an interview, it's just like, oh, I'm already exhausted. They're not going to tell you.
Yeah, no. And so I will say, from my experience, both sitting on these and being a part
of these interviews, is they'll set it up and see how quickly you walk into it. But then they'll also back up typically
and be like, could you do this better? And so they won't stomp on you the first time for doing
it that way because it's the obvious way out. But then they'll see if you can think your way
out of it, you know? Yeah. And yeah,
I don't think I've ever seen permutations come up in an interview,
but I've heard,
you know,
with some of the Google sample problems,
like you've got 20 million boxes and 30 billion red balls and how many,
uh,
you know,
can I get some frigging Fibonacci or something next?
I think I've read,
they don't do that anymore though.
They don't do those bizarre questions. like how many golf balls would it take
to fill up a school bus?
That's good. I don't know how you should interview, but
I don't think it should be crazy problems that you're never going to encounter in the workplace.
Man, this is completely tangential,
but I've always had the same thought. Like,
I would almost rather get somebody in that seems to do decent in an interview, let them work for
30 days or something like that and find out like, what kind of chops does this person have?
I've always felt like the interview is, it never hits it. You know, you'll get some diamonds in
the rough that you didn't expect.
You'll get some that felt like geniuses that you also are like, wow,
this dude just really doesn't cut it.
Like, or gal, you know, not, I mean, it's just, I don't know, man.
I feel like interviewing software developers is a flawed process and I really
don't know a good answer to this, to the problem.
Well, that's been our long running joke or my long running joke about like,
you know, you, you go through the process of the interview and with all the, you know,
interview type questions and then, you know, your first ticket is like, okay, we need this,
uh, this div moved three pixels to the left. All right, wait, what?
We didn't cover anything like this in the interview.
Is that N log N?
What is that?
Depends on how you do it.
If you do it right, you can do it in N factorial.
It's annoying.
There you go.
Annoying.
Awesome.
Oh, annoying.
Who's picking up the space versus time complexity here?
I can do that.
What am I doing here?
We talked a lot about time
and I think that's definitely
the most common thing
when people talk about big O.
They're almost always
talking about time.
But space does come up
and I was definitely
caught flat-footed
in an interview once
where I saw the problem.
I was in the middle.
I pat myself on the back
and they're like,
tell us about the space complexity.
I'm like,
hurt.
I hit the brakes
and I was just caught flat-footed.
You know, I didn't know how to answer the question.
I didn't have the language to really say it.
Like, I could say, like, you know, I'm creating a variable every time.
You know, I knew about the stack and the heap, but I didn't really know how to express what was going on easily in an interview.
I'm sure I just sounded like an idiot.
So, that was kind of a lesson for me.
And I still don't know that I could really give a concise answer. I don't know how to really
say like, oh, the memory usage is O N squared. But I guess it sounds like it kind of boils down
to the same type of thing. I still don't know how to do it. I definitely wouldn't feel comfortable in an interview.
Well, you know, we have here in the notes that, you know,
as regards to like space complexity being about like where the data is going to be stored for this
and that, you know, specifically there's two ways, the heap or the stack.
But there's also the disk that we didn't write down here. Because if you think
back to some of our conversations around, like, remember the SQL series that we did
a while back, and we were talking about, like, path enumeration versus nested set models versus
closure tables, things like that, right?
It doesn't have to be in memory, you know, heap or stack, for you to have to worry about the space
complexity of the output of your algorithm, right? So wherever you're writing that thing to
for long-term storage can matter too.
And last episode, we actually referred to an example of flash memory
because maybe if you're working on a Raspberry Pi kind of project
or some kind of small hardware device and you just have a little bit of flash memory
and you don't want to wear level it too bad,
then you might have to worry about how much you're writing to the disk, how often you're
writing to that piece of flash.
It's funny you mentioned that too.
I know Aztec
has mentioned a few times
he kind of re-implemented some of the
sorting items we were talking about
in C or C++,
and he was kind of talking about how he was making some small optimizations.
And I think he was kind of like using registers judiciously and whatnot.
And I thought it was kind of interesting because it's like that's never going to show up in any sort of Big O for either the space complexity or time complexity.
But it's something that makes a big impact on the final results when you test it.
That is very true. Just because you're talking in big O notation doesn't mean there's not some
real, either nasty stuff that could still be there, or really good stuff that just isn't accounted for, right?
These are approximations.
It doesn't tell you the actual end performance.
Yeah.
And I thought it was really interesting that they left out the disk, because, like,
that's so much of what I do on a daily basis.
I throw something either on disk in some way to a database or some other sort of service or something that kind of caches something for me or stores it for me, and then I get it out later.
And that is so much of the performance burden, like that network traffic or whatever, that that is usually so much more of the performance problems that I
see on a day-to-day basis. And it's just funny to kind of see that excluded here. And I don't
know if that's an artifact of like this kind of being thousands of years old or whatever,
or, uh, you know, if, if just computer science is just not really equipped to deal with
that kind of, um, external reliance.
So one of the things that we have here too is the problem with using scoped variables is those are stored on the stack, and you have a very finite amount of space in the stack,
right?
Which we put here, but that doesn't mean that the heap is infinite either, right?
Like you still have, you're still tied to a certain number of resources in your heap as well.
But it's just that it's typically
orders of magnitude larger
than what your stack is.
Yeah, and I looked it up for C Sharp.
One megabyte for the stack
and 1.5 gigabytes for the 32-bit heap,
and up to 128 for 64-bit.
And some of that's configurable.
It doesn't like just allocate that out of the gate.
But one megabyte is absolutely fixed in C sharp.
That's interesting.
That's not a ton.
I mean,
never really thought about it,
but yeah.
Yeah.
Here's the thing.
Like we're talking about managed languages right now,
like C sharp,
Java,
that kind of thing.
Uh, it's been so long since I've done any C++. Do you deal with the heap and the stack
in C++, or is it all stuff that you manage on your own?
Well, no, you still have to deal with it.
I mean, if you create an object, then that's on the heap.
Okay. Okay. It's just that you have to manage it
yourself, instead of a framework managing your heap and your stack for you.
Well, I haven't done anything like the latest C sharp... or, sorry, not C sharp, C++. Like, where are they on,
like, version 11 or something like that? Like, I think it's come a long way in regards to
garbage collection and memory management,
if that's what you're referring to.
Yeah, that's what I was talking about.
Okay, that's interesting.
I mean, it's been a long time since I've done any C++ work.
So if I malloc something, is that intrinsically the...
That's a C operation, not a C++.
But yeah, you'd be getting memory from the heap.
Okay.
And my pointer is in the stack because it's going to live wherever the scope that I declared it.
Yes, but it's pointing to the heap.
Okay.
Yeah.
And yeah, my understanding of the kind of the stack, I mean, it's definitely rough.
But basically, like whenever I call into a new function, like I've got any sort of pointers over to the heap or I've got some value types that are kind of, you know, they've got their memory allocated there and that kind of gets added on a frame to the stack. And so like I can go through and my stack trace or I can kind of inspect that and see like all the memory that's been allocated for that particular function call.
And if we go into, like, say, an inner loop or something, then that's going to get its own stack entry, or frame, added to it.
And then when that loop is over,
it's going to clean up whatever it's not needing for.
It's going to pop that stuff off the stack.
And when that function is finished,
it gets popped off the stack until the program is done.
When it gets back to that main method there
and it returns out of there and then we're done.
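That push-and-pop behavior is observable directly; this sketch uses Python's inspect module to count live frames as a call nests and then returns (the function names are made up for illustration):

```python
import inspect

def frame_depth():
    """Number of frames currently on the call stack."""
    return len(inspect.stack())

def inner():
    return frame_depth()

def outer():
    here = frame_depth()
    deeper = inner()          # calling inner() pushes a new frame
    # After inner() returns, its frame has been popped again.
    assert frame_depth() == here
    return deeper - here

# Each nested call adds exactly one frame; returning pops it right off.
assert outer() == 1
```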
And then your heap in a managed language gets cleaned up over time.
So, whereas your
stack,
like you said, as you get out of the function,
gets cleared off immediately,
right?
Yeah.
It's really efficient in some ways because once your function is done,
any value types or
any pointers that aren't...
I mean, the pointers are going to definitely be cleaned up.
Everything that's in that stack frame is boom, gone.
Yep.
So one of the things that we were mentioning about with the stack here, though, is you can run out...
Or even with the heap, right?
You could actually run out of space before you run out of time, right?
Like when we started talking about these N squared and N factorial type things,
like,
you know,
you might have all the time in the world,
your computer can sit there and turn away,
but you're going to run out of space,
try to allocate that stuff.
Ain't nobody's got time for that.
We're going to run out of time first.
You know,
I went back and looked though,
just we referenced,
or I referenced a moment ago,
the closure table conversation and nested set models, stuff like that.
That was back in episodes 28 and 29 when we covered hierarchical data.
And when we were talking about closure tables, we said that the worst case for closure table in terms of storage would be O of N squared.
That's a lot of storage.
Yeah.
But super fast, right?
That was the thing about it is it was incredibly fast.
Yeah.
And like Al said, you know, we usually talk about the stack because the heap is generally really big and the heap is much, much bigger.
And yeah, we mentioned, you know, the stack trace is basically that or the call stack is the list of everything that's on your stack, but then there's the infamous stack overflow, which is when you
blow that stack when you exceed the boundaries. That's if you have an infinite
loop or something or some sort of recursion that doesn't finish before it exceeds
that C-sharp one megabyte boundary.
And that's the name of the website that we're all familiar with.
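In a managed runtime, the blown stack usually surfaces as an exception rather than a crash. Python, for example, trips a recursion guard well before the OS stack limit, which you can see with a tiny sketch:

```python
import sys

def recurse_forever(depth=0):
    """Recursion with no base case: every call adds a stack frame until
    the interpreter's recursion guard trips."""
    return recurse_forever(depth + 1)

overflowed = False
try:
    recurse_forever()
except RecursionError:
    overflowed = True

# Python caps the frame count (commonly 1000) rather than letting the
# process actually blow its stack.
assert overflowed
assert sys.getrecursionlimit() >= 100
```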
Yep.
And, uh, what was it... our buddy John was saying that stack overflow is like the number one cause of
vulnerabilities
still in software today,
even though most languages,
most modern languages don't even...
Buffer overflow.
Buffer overflow.
Yeah.
That's usually
what they try to exploit, but it's
a similar type thing. Sorry about
the confusion. Stack
overflow, buffer overflow,
buffer overrun. Yeah.
So, yeah, close enough.
and I did
want to mention tail recursion too
which is kind of like something you hear a lot about
in kind of functional languages.
And I was familiar with it because, you know, the Purple Wizard book, Structure and Interpretation of Computer Programs.
It was like how they used to teach computer science at MIT since the 60s.
It's a book about Lisp.
Lisp is horrible.
Like, if you pretend to like Lisp, then I'm sorry to burst your bubble.
Oh my gosh, we will get one person. This would be like, they were dogging on Lisp!
You know what's funny? If you were to Google tail recursion, like,
the first Stack Overflow thing that comes up as a question is, like,
what is tail recursion?
And they say,
while starting to learn lisp,
yeah,
blah,
blah,
blah,
blah.
Like the first thing is the book.
Um,
it's a,
it's interesting book.
It hits on recursion in chapter one, and it just gets, like, harder and, like,
crazier from there.
Uh,
I need to get back to that book,
but I want to mention something to you.
All right, go ahead.
Well, one thing that's kind of interesting about it
is like theoretically,
if you look at your stack, right?
And you add a frame to the stack
and you can see all your pointers,
you can see your value types on top of it.
Well, what if you could tell that
the current entry on the stack...
See, I don't know if
I can explain this very well, but it may be
worth trying. Essentially, if I can see that nothing that I'm adding to
the next stack frame is used by my current function, so I can see that
the previous top entry on the stack has no bearing on the output,
and all I care about is the new values in the function,
so I'm not using anything in that old function,
then I don't need to put something new on the stack.
I can replace that top entry.
So an example would be
if you're in a language that doesn't support tail recursion,
you're doing something like the Fibonacci sequence,
then you would do another function call.
You're doing it recursively the terrible way.
You do another function call.
You slop those variables up there on the stack,
and every time you iterate, you're adding a new scope,
a new frame to that stack over and over and over again.
So if you go into the 100th number, then you've got 100 frames added to that stack.
But if you've got something with tail recursion
and you write your function in such a way that it doesn't do anything after it returns, which is where I'm butchering it.
But the idea is that the previous call doesn't rely on anything from your current call.
So it doesn't have to keep adding to the stack.
Yeah, it can just replace.
So instead of adding 100 frames to the stack, it just says, oh, I don't care about the current one.
Let me just blast it off, either pop it off or just replace the values in it.
And so it keeps that call stack at essentially one, which is amazing compared to N.
So in memory complexity, it's like, it's amazing.
It's, you know, order constant time with tail recursion for certain algorithms.
And that's something that either your language supports or your language doesn't.
And so it's kind of funny to think that your space complexity for an algorithm, like, might
change based on the language you're implementing it in. But that's the case.
Yeah, it makes sense though. The Stack Overflow answer that I was referring to says that the original answer was
written in Python, but they later updated the examples to JavaScript because
modern JavaScript interpreters
support tail call optimization
and Python interpreters don't.
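Python, fittingly, is a language where you end up rewriting the recursion as a loop yourself, since the interpreter won't reuse the frame for you. The loop below is effectively what tail-call optimization does behind the scenes, while the naive recursive version is the frame-stacking "terrible way":

```python
def fib_recursive(n):
    # Naive recursion: each call still has work to do after its
    # recursive calls return, so every call keeps its frame alive.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # What a tail-call-optimized language can do under the hood:
    # reuse one "frame" (here, two variables) instead of stacking calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(10) == fib_iterative(10) == 55
# O(1) frames means large n is fine for the iterative version, whereas
# the naive recursion gets impractically deep and slow long before 500.
assert fib_iterative(500) % 10 == 5
```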
I didn't know that about JavaScript.
But yeah,
I had this funny thought, though,
when you were talking about the list book,
I was like, well, you know, it would be super
awesome if they had made this Easter egg
where, like, see if you even caught onto it, where you read the first chapter.
And then you start the second chapter, but the second chapter requires that you reread the first chapter.
Then you get to the third chapter, and it requires that you reread the first two chapters, et cetera, et cetera.
Yeah.
I thought that's where you were going to go with it for a minute.
Well, nobody makes it to the third chapter of a tech book.
What?
No one?
I'm kidding.
No, I have read a good portion of my tech books.
I've probably only read, say, one third of them, but my favorite books, like Clean Code and Clean Architecture
and stuff like that,
those definitely are cover-to-cover, guys.
But even design patterns, it's challenging to make it to the end, I'll say.
Yeah.
I'm reading one of those challenging books right now.
Which one?
I'm afraid I hate to throw it under the bus since I just said that.
No, please.
I mean, we're all friends here.
It's a deep learning book.
And, uh, I mean, I like the topic, but from reviews
that I've read from other people... I thought maybe I was the anomaly,
like, you know, with the things I didn't like about it.
And then I started reading some of the reviews.
I'm like, oh, everybody's having the same problems that I'm having with it. But there were several that were like, hey,
once you get past the first few chapters, like, it does get better. So I'm like, okay, I'm just going
to power through and keep on.
I think this is so common. With the Gang of Four, that's why
you see so many factories and not so many flyweights. Sleep comes in a later chapter.
Right.
It's probably true. As sad as it is,
I guarantee you it's true.
Yeah.
So how does Big O work for
space complexity? It's basically the same.
Big O doesn't care if you add three variables to the stack
for every input because that's a constant,
so you can drop it. It's much more concerned with the fact
that you're adding things to the stack
for every iteration, for example.
So that would be, you know, if you do it for every input,
that's going to be an order N operation.
Yep.
So to clarify, though, it does, like when you said,
if you're adding three variables to the stack for every input,
that it would care about.
Well, that'd just be an O of N operation, right?
No, no, no.
If you pass in an array of 100 items, and for every one of those 100 items, you add
three variables to the stack for every one of those inputs, then you do care about those
three variables in that case.
It's the case where you don't care where it becomes O of one is
if you add, you take in some array period, you add three variables, and then it doesn't matter
what the size of that array is, those same three variables get added, period. Those are the
variables you don't care about.
Yeah, we're kind of saying the same thing. What I mean is, like, that
example where you allocate three variables per iteration, per
item in that list. In that case, you're doing three allocations times N, so three N, but we don't
care that it's three, so it's just order N.
Right, it's not order one.
Yeah, it's weird.
But when you go back to the big O stuff, basically, there's the constants, right?
Like that one, it's a factor of three on every N,
but you throw that away.
They don't care about it at all.
So unless you were adding N number of items
for every iteration of N, which would be N squared,
then you just throw away that other factor.
It does not come into play.
And I know that's crazy,
but that's what we were talking about earlier too, right?
Like, it doesn't matter that you do
the same 20 operations on
every iteration. It doesn't care. You
throw them away.
Yep.
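A sketch of that constant-dropping for space: three variables total is O of 1 extra space no matter the input, while three fields per input item is 3N allocations, which big O just calls O of N:

```python
def summarize_o1_space(items):
    # Three variables total, regardless of input size: O(1) extra space.
    total, count, largest = 0, 0, float("-inf")
    for x in items:
        total += x
        count += 1
        largest = max(largest, x)
    return total

def expand_on_space(items):
    # One new row per input item, three fields each: that's 3n
    # allocations, but the constant 3 is dropped, so it's O(n) space.
    return [(x, x * 2, x * 3) for x in items]

assert summarize_o1_space([1, 2, 3]) == 6
assert len(expand_on_space(range(100))) == 100
```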
I'm definitely not an expert at it, so
I would love to hear your
ideas or comments
down in the comments section for the show notes.
This will be at codingblocks.net slash episode 89.
Yes.
Yep.
So 89,
we're getting close to a hundred.
Do we do anything special on a hundred?
Oh,
we'd love to hear your ideas for what to do special for episode 100.
That could be scary.
We do have a whiskey channel over in Slack.
That's right. It'll be scary.
It'll be 2019 before we get there.
It will be 2019, but it's not too early to plan.
Yep.
You gotta do a drinking game.
Sorry.
What is that?
I let the dog in.
I didn't close the dogs out of the room tonight.
I'm sorry.
Who let the dogs in?
Oh.
Oh, my gosh.
All right.
Well, I was going to go on, but now I've got skooking ducks.
So maybe someone else should.
All right.
Well, I'll do it this time then.
If you haven't already, first off, the reviews for this particular episode were just amazing.
Thank you for writing them.
If you haven't had a chance yet and you'd like to give back to us, you know, we mentioned it.
We super appreciate it.
We read them, put a smile on our face.
Go up to codingblocks.net slash review, and there's links there to take you to the iTunes, if that's your thing, or Stitcher.
And if there's another place that you'd rather do it, please do.
And also, as Mike said in the past as well, do share with a friend.
If there's somebody that you want to improve their coding, then share it.
And share it with other people who are interested and passionate about improving their skill set.
So thank you.
All right.
So it's my favorite portion of the show.
Survey says.
All right.
So last episode we asked,
Hey,
how's your math?
And your choices are,
there's an app for that. Or was great in primary school.
Or I can tip 20% comfortably.
Or I can still find the area under a curve.
Or I can still take the natural log of E
with the best of them.
Alright, let's see.
Alan, you go first.
Man, you know what's funny about this one is I want to say that I can tip 20% comfortably,
but that is strictly a US type thing.
There are places in the world where they just don't tip.
So I'm going to toss that one out and I'm going to say there's an app for that, and I'll go with 37%.
All right.
All right.
That's pretty good.
It's at 37.
You're going 38.
Oh, he's trying to block out his dog.
Yeah, it's a royal rumble behind me right now.
Just let it roll, man.
All right.
I'm going to go with 30%, 30% for whatever Alan said.
Really?
You don't even know what I said.
You can't do that.
There's an app for that.
All right.
So you're both going with there's an app for that.
Alan at 37%, I think I heard. Joe at 38%.
Okay.
The winner is
neither of you.
Really?
Really, guys? You don't think our audience
can tip?
That's true.
Wow, guys.
Wow.
That's all I got for you guys is wow.
How high was it?
It was like 55% of the vote.
Dang, son.
That's awesome.
Which one?
I can tip 20%. Yeah.
All right.
20%.
Okay.
All right.
How many people could find the area under a curve?
Oh, nobody can do that.
nobody
No, I'm kidding. It was like 15%.
I think they lied, but that's insane.
What about the... come on, who takes the natural log of anything without thinking about it for a minute?
I don't even remember how to do it. I'd have to go look it back up again.
Natural log? Wasn't that like a fraction?
Oh, so now you're gonna...
You can't answer the question with a question.
That's not how that works.
Oh, I mean...
Yeah, I don't know. I mean, I got a cold. I'm sick.
And I can't even tie my shoes
right now, so...
Wait, wait, wait. All right.
Natural log of E, wasn't that an exponent to one over the exponent?
See, that's the thing.
This is where it drives me crazy.
It's been so long.
Oh, man.
You guys want to know the answer?
It's a fraction.
It's one.
What?
No.
Instead of like a base 2 or base 10 or whatever, the base is like 2.718281828459.
Had that off the top of your head, did you?
I did.
See, I knew it so well.
No, of course I had to look that up.
That's what I'm saying.
Nobody can take the natural log or something without a calculator.
Come on.
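For anyone who wants to double-check the answer without digging out a textbook, JavaScript's `Math.log` is the natural logarithm:

```javascript
// The natural log uses base e (~2.718281828459045), so ln(e) = 1,
// the same way log base 10 of 10 is 1.
const lnOfE = Math.log(Math.E);   // Math.log is the natural log in JS
const log10Of10 = Math.log10(10);
console.log(lnOfE, log10Of10);    // both 1 (up to floating point)
```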
Dude, you want to know what's awesome here is there's a webpage that showed up when I searched for it.
Demystifying the natural logarithm.
And I swear to you, it's like a novel.
Like, is that really demystifying something?
If you've got to read 20 pages... Like, oh yeah, I need it in like 20 tweets, guys.
Yeah, exactly. 20 tweets is still too many. That's 19 too many.
You know, I've thought about trying to buy a college textbook and trying to relearn my maths, but ultimately I feel like I'm not going to use it, so it's just going to be pointless. But I would love to relearn some of that stuff. I felt like I was pretty good with it for a while there, like I was the calculus kid in college.
Yeah, same here.
Yeah, I had a conversation with a friend of ours recently where it's like, yeah, you know, I used to think that I liked math. And then I started reading some of these equations, like getting into Jacobian matrices and partial derivatives. And I'm like, wait, what? I don't remember. What is this again?
I don't know. You know what, though? Don't you somewhat feel like the order in which you get your education is almost backwards? Because after being a software programmer, you care a lot more about the things that you've already forgotten from school, right?
But if you had been programming and you were in school, you'd be like, oh, that's amazing.
I understand that.
I know exactly. I know
exactly how I could put that to use or whatever. And it feels like it's almost backwards, right?
There should be more of a, I don't know, like an apprenticeship type thing, right? Like where
you work doing something and you're learning about that stuff as you go. I feel like it would
be a much better way to do things.
Well, I've had a similar thought, where I've kind of thought it should be more the norm to go back to school as an adult. Because the further away you've been from school, even if you, like us, are still trying to study topics and whatnot, it's still easy to lose sight of some of these things that are, quote, fundamentals, depending on what you're doing, right? And it might not be... you know, if all you're doing is moving pixels around, moving the div three pixels to the left.
Yeah, I'll be like the only 40-year-old in stats class. Oh man, that would be nice. I would love to be better with that.
Yeah.
All right.
Well, this survey, we ask, what's your preferred password manager?
And what's your password?
No.
One, two.
It's the same combination as my luggage.
All right.
So your choices are 1Password, LastPass, KeePass plus Dropbox, RoboForm or Keychain, or I just use the same three passwords everywhere, or whatever is built into the browser.
And we will not be tracking
logins for this.
So if you are using the same
three passwords, we won't know.
Yeah, then you just type away.
Don't worry about it.
Oh, man.
See, if I had to guess for you, if I had to answer for you guys, Joe's probably the same three everywhere.
No, just kidding.
That would mean typing it every time.
I have no online footprint, guys.
I don't use password managers.
There you go.
Facebook's a poo-poo.
All right.
Well, let's get back into Big O and talk about why it matters.
All right. So, you know, one thing that I've kind of mentioned several times is that this isn't great for when we're dealing with databases or distributed systems. So, like, when you have multiple processors and stuff, all those things suddenly change it. Maybe you're willing to suffer some extra load if you can split that up in, you know, a MapReduce type way, or something that can parallelize. And so I was just wondering if we had a newer, more modern system for Big O. I don't know if you guys ran into anything that's attempted to bridge that gap in computer science.
You mean as a way to measure, you know, whether or not it's like O of n versus O of n squared, something like that?
I mean, like a whole other system. Like, hey, Big O was invented... what, do you remember what year it was? It's like 1889 or something like that.
Yeah. So, like, you would think by now, with all these PhD students who have to turn out papers on some topic every year, right?
And everyone's always trying to figure out what the topic and specialization would be.
You think someone by now might say, you know what?
Maybe we could come up with a better way to compare algorithms than Big O.
That's interesting.
Big Theta.
Oh.
Yeah.
Let's see.
Better system than Big O...
See, I think this is why, though. I mean, Big O, yes, the groundwork started back in the late 19th century. But this is why I kind of jokingly brought it up last episode, where the credit went to, you know, "and others," right? Because it's been an evolving concept that mathematicians still add to. It's not like it's been static for the last 150 years.
I found a tilde notation.
Yeah, I was just looking at that.
I've never heard of that before,
but it does look like it attempts to do some of the same things,
but it seems like that's kind of more of a replacement
for the actual notation, kind of like replacing the big O
and not so much the algorithmic analysis kind of side of things.
But you know what?
Going back to what your whole thing was,
like dealing with distributed systems or parallel processing. If you think about it in terms of just what big O was even meant to
do, it's not like it changed because of the speed of computing. Like what you're suggesting almost
is like when you throw in parallel processing and distributed systems so that you have multiple
systems working on it,
you haven't changed the time complexity of it. It's just the speed at which all the stuff gets brought back together or whatever.
So,
I mean,
I don't know.
It seems like it's held up pretty well,
even thinking about that,
right?
Like you're still going to have a number of operations that happen on all your
nodes and then they're going to come back together.
And then your constant time would be putting those things back together and
that's going to get thrown away anyway so it seems like it still works you know
It's kind of funny to think, like, well, this operation takes far fewer operations and so it's better, but this other one that we're getting away from runs in 10% of the time. I mean, it uses like 10 of our CPUs to do it, and it practically kills a computer.
But sometimes, you know, that's OK.
Well, I mean, now we got things like quantum computing that are coming up, right?
Like with the, what is it, the Q language or whatever for .NET... Q#. That's what it was.
Not R. Yeah. R is the statistical one.
But yeah.
So, I mean, I wonder if that's going to change things. Like, at that point, the quantum computing is just way faster, but it still takes a number of operations to do it, right? So I don't know. If this thing's been around for over 100 years, it's still just talking about, you know, what are the orders of magnitude of processes that this has to do to finish its task?
Well, also keep in mind, too, that when this – I went back and found it.
The date was 1894.
So this predates anything about computers.
So its origin has nothing to do with computers.
We apply it to that field, but it's also used in other fields, too.
That's a good point.
Yeah.
Yep.
Yeah. So I don't know what to do, but I wish I could invent a school of academia, or some sort of field of study, where people are out there doing studies all the time about, like, how effective are unit tests, really? And how do you compare algorithms in a cloudy, distributed world? It would be nice to be able to go to the boss and say, hey, unit tests, in these seven different trials, case studies involving companies just like ours, showed a 17% improvement in time to fix bugs.
You had a good thought that was similar to that
last week, maybe, related to picking frameworks.
Remember that? Oh, yeah.
Someone in the academia world did research on how beneficial it is to switch from Vue to React, or React to Angular, or from vanilla JS to Angular.
Or does it make the project actually better?
Can you quantify that?
Yeah, how would you?
How would you quantify that?
I want big O for that.
What's the Big O of React compared to Angular?
But how would you do it, though, seriously? Like, you take the three of us, and React might click for me, Angular might click for Outlaw, and Vue might click for Joe, and it's just because of how we think, right?
Right. So, like, going between the three different frameworks, it's probably not even a fair assessment, right? Like, how would you say that this one's better than the other one? Because you could probably make the argument that, hey, for this guy, it just made more sense to him because of whatever his previous experience was. So I don't even know how you quantify stuff like that.
Well, maybe you take, in Big O notation, how many steps it takes to create something basic. And then that would be a signal as to how likely you are to create a bug. Like, the more code required, the more opportunity to make a mistake, right? So maybe that could be one input. And then for common additions or modifications that you might want to make, like adding a feature or modifying a feature, things like that, how many operations does it take to make either of those things happen? Again, because the more steps you take, the more likely you'd introduce a problem.
Maybe.
That'd be funny.
Yeah, like number of lines written and number of files touched.
Yeah.
This has a file factor of four.
And then, like, good practices, too.
Like, Angular is, you know, out of the box.
You get TypeScript support with the latest Angular.
So that's going to kind of put your guardrails on,
if I could borrow a term from episode one,
related to making sure that the types are what you think they are in a JavaScript world.
I don't know. I'm just thinking out loud.
Yeah, for sure. It'd be cool to see something.
I'd love to see a white paper or something,
if it exists where somebody does some sort of analysis like that and comes up with some numbers.
And even if they say, hey, this is not a great way to make a decision.
But here's one way to perform a mathematical comparison between two different frameworks.
Yeah.
I like it.
I mean, it's all about quantifying it.
Yeah.
So we need to set up a Patreon here.
And if you guys will donate, we will try to find some PhD students willing to do the kind
of studies that we want to see.
We'll like buy their lunch.
Oh, man.
So, we did ask the question though, like, why does it matter, right?
Why does Big O matter?
And somebody had something about cryptocurrency here.
Yeah.
I think we've seen that it matters a lot, especially as Bitcoin got bigger and bigger. They ended up having to fork and make some fundamental changes to their algorithms because it just didn't scale very well. And even the kinds of problems that they're solving for, like proof of work in some cryptocurrencies and whatnot, they know roughly how long it's going to take for computers to solve the problem, because they know how much work needs to be done in terms of time and space. And that's how they use it to gate certain operations, which, I think, is interesting.
So if you're working in cryptocurrency, then you're probably very aware of different performance aspects. And I think probably cryptography, too. If you're looking at either writing a new cryptographic algorithm or trying to crack something, then you're probably very aware of the efficiency of how you're doing that, because you're dealing with astronomically large data sets.
Although maybe you want it to be slower, or take more time, in that realm.
Yeah.
It's kind of funny.
I guess that's the deal with,
you know,
like kind of the old,
like,
I don't want to say old school, like normal, like factorial or factoring based cryptography where you basically say, oh, doing this operation to check if your private key, you know, matches what I expect or whatever to see if this is the private key for my, or whatever,
if the keys match in asymmetric,
then that's going to be fast to check if it's correct,
but it's going to be brutally slow to figure it out if you don't know it already.
Right.
And they need to prove that.
Like,
you can't just say like,
well,
I tried it three times and it worked out pretty well.
Right.
So that's what we call good enough security.
Say again?
That's what we call good enough security.
I tried it three times.
It's good enough.
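A toy sketch of that asymmetry; this is nothing like real cryptography, just the shape of the argument, with small made-up primes: multiplying two primes together is one operation, while recovering them from the product by trial division takes on the order of the square root of the product.

```javascript
// Toy illustration (NOT real crypto): verify-fast, crack-slow.
function multiply(p, q) {
  return p * q; // one operation: effectively O(1)
}

// Brute-force factoring: ~sqrt(n) steps in the worst case.
function factorByTrialDivision(n) {
  let steps = 0;
  for (let d = 2; d * d <= n; d++) {
    steps++;
    if (n % d === 0) return { factor: d, steps };
  }
  return { factor: n, steps }; // n itself is prime
}

const n = multiply(10007, 10009);        // the "public" value, instant to compute
const cracked = factorByTrialDivision(n); // recovering 10007 takes ~10,000 loop steps
console.log(cracked.factor, cracked.steps);
```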
So, what are the other reasons you might need it?
We've talked about interviewing.
I mean, it's probably going to come up.
That's way up there on the list.
Yeah.
I mean, in all honesty, the academic side of this is going to matter for interviewing with big companies.
And I don't know that we have it here.
I mean, I guess it kind of fits in one of the other bullet points. So it's just
knowing internally, like looking at it and understanding what you're looking at in terms of the orders of magnitude, right? Like, oh man, I'm in an O of n squared type thing, right?
This is really nasty.
So it's just being able to kind of look at the pattern and understand that,
oh, yeah, I mean, mathematically,
that means this thing is going to go off the rails at some point.
So I think it's worth knowing just also for your day-to-day stuff as well.
I think it's pretty cool to think, too, that if you know a little bit about Big O, then you know what it means to mathematically compare two things and come up with an objective basis for a comparison. So, like I was saying with the two frameworks, it may not work out that great in practice, but it's one way to look at a problem and say, okay, let's see if we can subdivide it, drop the things that don't matter so much as we scale, and compare the actual growth curve of a process to see which one's better, kind of objectively. So it's kind of a cool way to think about problems. All right, who's got the next one?
I think that's probably you, Joe.
All right, yeah, a little bit of game programming.
I know A-Star and whatnot comes up a lot,
and so game logic trees or decision trees,
stuff like that, a lot of times you have to take shortcuts.
You just can't go down every path.
You have to prune your trees in order to limit the number of decisions that you're looking at. And so you've got, a lot of times, way more input than you could ever use, and you need to be aware of the kinds of problems that you're trying to solve and the amount of data you've got coming in, so that you know when you need to take those shortcuts.
Profiling is going to help, but it's one of those things where you could probably save a lot of time up front if you know, or recognize, ahead of time that something you're doing is, you know, polynomial time. And you just know that as the inputs, the pixel and polygon counts, increase, this isn't going to fly.
And so you don't have to go back and fix that later or wait for the profiler.
You can address it up front because you know that it's not going to happen.
Now, someone put in here machine learning and AI, but I'm curious as to why.
Yeah, so for me it was decision trees.
Okay.
Yeah.
So I know that sometimes our buddy John talks about using breadth-first search over depth-first search
because a lot of times
the answer that you're looking for is more likely to be found kind of closer to your,
the point that you're starting at. So if you've got like an infinitely long tree or a tree so
long or so large that it might as well be infinite, then you may want to cut that off.
Like, add a heuristic: say, give it a thousand runs and then give me your best answer. And so rather than going down one potential tangent that whole time, you want to stick closer to the root of the tree.
And so, you know, the idea there is just kind of that I think that you have to make those kind of decisions
more often probably in machine learning and AI because a lot of times you probably have too much data.
So you have to look at things like either heuristics
or smarter algorithms in order to kind of cut down
on the processing time or resources.
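A rough sketch of that breadth-first-with-a-budget idea (the node shape and the `expandLimit` cap are made up for illustration, not from any particular library):

```javascript
// Breadth-first search visits nodes closest to the root first, so if answers
// tend to be shallow, BFS finds them before a depth-first walk would.
// `expandLimit` caps total work, like "give it a thousand runs, then answer."
function bfs(root, isGoal, expandLimit = 1000) {
  const queue = [root];
  let expanded = 0;
  while (queue.length > 0 && expanded < expandLimit) {
    const node = queue.shift(); // FIFO: shallowest node comes out first
    expanded++;
    if (isGoal(node)) return node;
    for (const child of node.children || []) queue.push(child);
  }
  return null; // budget exhausted or no goal in the tree
}

// Tiny example tree: the goal sits at depth 1, so BFS finds it in 2 expansions.
const tree = {
  value: 0,
  children: [
    { value: 1, children: [{ value: 3 }, { value: 4 }] },
    { value: 2, children: [] },
  ],
};
console.log(bfs(tree, node => node.value === 1).value); // 1
```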
Brings us to business programming.
I think that Big O probably doesn't matter too much
in business programming.
Yeah, I don't know about that one.
Yeah.
I mean, it looks like the notes that we have here is like usually your bottleneck is in the data tier or the UI.
So as long as you're using good data structures on, you know, whatever your middle tier is, you should be fine.
But I've definitely seen things. I mean, heck, I worked on an e-commerce platform one time where they were doing n squared or n cubed type operations. I mean, yeah, you can fix it with some data structures, but basically they had implemented patterns that looked like they were going to be a modular, easy way to do things, right?
Like a consistent, easy to follow type thing.
And it was fine when there were 100 products in the store, or 1,000, but as soon as you got to 10 or 20 or 30,000, then all of a sudden you started really seeing problems, right? So I think, at least from my perspective on this one, typically you don't notice it until you start growing. And then once you hit it, there's a certain line that you go over, and all of a sudden you're going to spend a lot of time and effort trying to fix these Big O type problems that you run into.
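A small sketch of the kind of fix being described, with hypothetical product data: the nested-loop version does n-squared comparisons over the same collection, while a Set lookup gets it done in one pass.

```javascript
// O(n^2): compare every product against every other product.
function findDuplicateSkusSlow(products) {
  const dupes = [];
  for (let i = 0; i < products.length; i++) {
    for (let j = i + 1; j < products.length; j++) { // n*(n-1)/2 comparisons
      if (products[i].sku === products[j].sku) dupes.push(products[i].sku);
    }
  }
  return dupes;
}

// O(n): a Set lookup is effectively constant time per element.
function findDuplicateSkusFast(products) {
  const seen = new Set();
  const dupes = [];
  for (const p of products) {       // single pass over the collection
    if (seen.has(p.sku)) dupes.push(p.sku);
    else seen.add(p.sku);
  }
  return dupes;
}

const products = [{ sku: "A" }, { sku: "B" }, { sku: "A" }];
console.log(findDuplicateSkusFast(products)); // [ 'A' ]
```

Fine at 100 products; the difference only shows up at the 30,000-product scale mentioned above.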
Yeah, that's a really good point, especially with those more steep curves.
Is that like, you know, zero customers is fine.
100 customers is fine.
1,000 customers is okay.
1,200 customers is terrible, right?
Right.
And so it could just kind of go off a cliff really quickly.
And it's literally not going to scale if you've got something that's got a sharper growth curve.
All right.
What were you saying, Mike?
At the start of that, well, basically, you were describing the scale, which is what Joe was saying.
But at the start of that description, though, I can't remember exactly what the comment was that you made.
But it made me think of something that we talked about before where when you had to look at optimizations.
Because I was kind of thinking, like, well, okay, your business tier, your business logic, that's where it's more likely that you might write a nested loop going over the same collection, right? That's where it's easy to get into an O of n squared kind of operation, excluding things like cross joins in your data tier. So you might have that problem in there, and depending on the scale, you might not ever notice it. You know, again, going back to "everything is fast for small n," you might never notice it until the scale gets to where you were describing.
But I'm pretty sure we talked about in the past, though, and maybe it was in the How
to Be a Programmer series or something like that, where it was talking about when it comes
to debugging your application or increasing the performance of your application, oftentimes
anywhere the latency is at, that's where your problems are going to be. So reaching out across a network, or reaching out to disk, those types of operations. I'm pretty sure Joe had pulled up a list of, like, here's how many seconds each takes.
Does that sound familiar to you guys?
You remember that conversation?
We were talking about the hardware.
Yeah.
Yeah.
Caches and how fast those were, first-level cache, second-level cache.
Yeah.
Yeah.
So it's easy, in that business tier, to get into bad situations and maybe not even recognize it. And when you do go to do your performance optimizations, though, you might technically have an O of n squared in there. But because it's only ever working on a small n, it's not necessarily the thing that you would go after first in order to solve your performance problems.
Right.
Right.
You never want to micro optimize.
We've talked about this.
I don't remember if we talked about it in the 12-factor app or in clean architecture. It was one of the two. But
it was something about there needs to be at least two X returns on performance if you're
going to go after something, right? And here's the thing. Again, it's not necessarily that it's bad,
but I guess going back to the whole question of do you have to consider this in business level programming?
And I think yes, because you'll revisit it at some point, right?
Like we've made the comment before that it's not like Twitter or LinkedIn were built the way that they exist today, right?
They were built.
They hit some level of scale, and then they said, oh crap,
we need to go back and we need to figure out how to make this faster because we didn't expect there
to be a hundred million messages per minute, right? So I guess just the short answer to it is
yes, it matters for business programming. It's just at what point should you be spending time and resources on it?
I went back and found it.
It was episode 45 where we were talking about it,
and it was related to the caching conversations.
And there was a GitHub link that we shared that was "Latency Numbers Every Programmer Should Know," right? And we covered, like, the latency to L1 cache being half a nanosecond, versus a round trip within the same data center being 500,000 nanoseconds, right? That's crazy. You know, to send a packet across the internet from California to the Netherlands and back to California was going to be 150 million nanoseconds, right?
Things like that. So that's where I was going with that: when you talk about performance and Big O, maybe it's not necessarily what you have to go after to get the biggest bang for the buck when trying to make something more performant.
Yeah.
I think the standard advice is basically use a profiler and try to figure out
where your,
your biggest bottlenecks are and your most common processes.
I always hear that phrase, but it's not like you can go to the profiler store and get something. I know a combination of tools. Like, Visual Studio has got something, and I've used dotPeek a little bit to look at memory and stuff, but it's not like a super easy process where you just hit a button and get the magical answers to all your problems, right?
So when should you look into your algorithms?
Yeah, I kind of just spitballed this. I thought, like, when you know things are running slow. But if we're talking about bad algorithmic growth curves, then if things are running slow, it's kind of already too late, or it's potentially about to drop off a cliff really, really quickly. But, I mean...
I'm not going to look at it too early, though. I'm not going to spend a bunch of time, when things are running great, looking for things that might be bad when I get a billion users. And typically I'm more worried about other kinds of scale. So I think if you've been doing this kind of thing for a little while, you've been programming for a while, you get a bit of an intuitive sense when you're doing stuff, to think, okay, this is a problem area to watch out for. And so that's good. You step into a new job, though, and you're probably going to have problems. Maybe there's someone else you can lean on there that's going to be more familiar with where those hot spots are.
I also looked into, you know, hardware provisioning, or predicting costs with, like, the cloud calculators and whatnot. They'll have some kind of tooling around that, but that's more at the cloud hosting environment level, not so much at your code.
Well, unless your code was just awful and required like a massive CPU on a cloud environment, then you might care.
Well, I wonder how many programmers know like, oh, my app
has an average CPU load of 2% and an average memory footprint
of X. I wonder how often, especially web developers, how often
do you look at that stuff? I would venture to say not enough.
And I would also venture to say that when things do get moved to the cloud
and all of a sudden costs skyrocket, people do start looking at it,
right? Because it's like, wait a second, why?
You did mention the profiling. There's also load testing.
Yeah, load testing, right?
That's a good way of kind of flushing out some problems because a lot of times load testing will test some of your most common processes
like checking out or signing up a registration form or something like that.
And so those will kind of like highlight the places to start looking.
And what do you do about it?
Safe refactoring. We should do an episode on that one day. It's basically just refactoring, but safe refactoring is the act of refactoring in such a way that it doesn't actually change any behavior. So that's actually not a good answer to this. You should refactor in such a way that you can slice out those algorithms, kind of isolate them, and then work on them in isolation with nice, pretty tests.
I do like the next bullet that we have here, which is: examine your data structures. There's probably not much that'll give you more bang for your buck than just understanding the workings of various different data structures.
Yeah, like, I should be using a tree here instead of a bunch of different lists, or I should be using a hash table, or something like that. That's the most bang for the buck I've seen.
Any more thoughts?
Yeah.
What was that?
What were your thoughts on it, Mike?
No, I was going to go on. We already mentioned profiling the app, but someone has here, like, looking for duplicated work.
Yeah, that's something I've seen when messing around with profilers a little. Sometimes you'll see the count of times that certain functions are called. And so you'll see a function like getCustomers is called 82,000 times, and you'll kind of drill in a little bit, like, why is this one called so much more? And you'll go see, is it in a loop or something that doesn't need to be? Sometimes you'll see redundant work, where something could be pulled out to outside of a loop rather than left in it. That's not directly related to Big O, because that's one of those things that gets treated as a constant, but it could really save you a lot of work. And so that's one of the first things I look at when profiling: let me see the most common calls, or the most CPU-intensive calls.
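Here's a sketch of that pattern; `getCustomers` and `orders` are hypothetical names for illustration. Hoisting the call out of the loop doesn't change the Big O, but it cuts the constant work dramatically.

```javascript
// Count how often the "expensive" call actually runs.
let calls = 0;
function getCustomers() {
  calls++; // stand-in for an expensive fetch or lookup-table build
  return new Map([["c1", { name: "Ada" }]]);
}

const orders = [{ customerId: "c1" }, { customerId: "c1" }, { customerId: "c1" }];

// Before: getCustomers() doesn't depend on the loop variable,
// yet it runs once per order.
calls = 0;
for (const order of orders) {
  const customers = getCustomers(); // called 3 times
  customers.get(order.customerId);
}
const callsBefore = calls;

// After: hoisted outside the loop, it runs once.
calls = 0;
const customers = getCustomers(); // called 1 time
for (const order of orders) {
  customers.get(order.customerId);
}
const callsAfter = calls;
console.log(callsBefore, callsAfter); // 3 1
```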
And that's funny what you just said about that.
It's one of those things that gets thrown out in big O.
It could be a super expensive call that just happens to be in a loop,
but it's not taken into account for when you're talking about big O notation.
So if you're drawing it on a whiteboard, you'd never care about it.
But when you look at the actual implementation of it,
it can matter a lot.
Yep.
You know, one thing I saw a lot of times, like, back in the jQuery days, it would be common to loop over something, and you would see the dollar sign, you know, pound, get-the-ID, or something like append HTML. It always bothered me to see that dollar sign, basically getting that element from the DOM in that loop over and over again. It's just easier to do it that way, but it always bothered me, because it's an expensive lookup if it's not being cached by the library underneath the covers, right? So, yeah. But now that I know a little bit more about development, what really should have horrified me is just adding to the DOM over and over again, right?
It's like the lookup sucks, sure,
but adding to the DOM, having it repaint
between every iteration in that loop,
that sucks even more.
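To show that without a browser, here's a sketch with a fake DOM object standing in for jQuery's `append`; the names are made up, and in a real page each append can trigger a repaint.

```javascript
// Fake "DOM" that counts appends, standing in for $("#list").append(...).
let appends = 0;
const fakeDom = {
  html: "",
  append(s) { appends++; this.html += s; }, // each call = one repaint in a browser
};

const items = ["a", "b", "c"];

// Naive: one append (and one repaint) per item.
appends = 0;
for (const item of items) fakeDom.append(`<li>${item}</li>`);
const appendsNaive = appends; // 3

// Batched: build the markup in memory, touch the "DOM" once.
fakeDom.html = "";
appends = 0;
fakeDom.append(items.map(item => `<li>${item}</li>`).join(""));
const appendsBatched = appends; // 1
console.log(appendsNaive, appendsBatched); // 3 1
```

In a real browser you'd get the same effect by building a string or a DocumentFragment and appending once.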
Yeah, when you understand the painting of the DOM
and the event loop, it's eye-opening.
And that's one of the tools in Chrome, too. This should probably have been a tip, but if you ever open up your profiler in Chrome or Firefox or any of those things and look at what happens when you do something on a page, sometimes you'll see this, it'll look like a waterfall, right?
And,
and you'll see all the things that are running when it's painting one thing on
the screen or something. And you're like, how did this ever even work?
Right.
Yeah.
So anyways.
All right.
So we got the fun stuff here.
Yeah.
I thought it'd be fun to play a little game of, uh, over/under again. So, um, let's get at it. Alan and Outlaw have not seen the topics that I'm going to spring upon them right now.
Or, or have we?
Yeah. I mean, if you went to the second sheet in this spreadsheet, then you'll have seen my secrets.
Oops. Oh, man, are they right here? Don't, don't click it.
I'm not going to. I'm that guy that doesn't like to know what gifts I have. Like, I want to open it and be surprised.
All right. Well, yeah, let's get to it.
So first up, computer science.
Underrated, overrated, or rated?
Or fine, you mean?
It was over, under, fine, right?
Under, over, fine.
I think it's fine.
Fine.
Okay.
I'm saying fine.
I'm going to say fine, too, just because a lot of people hate on it nowadays.
I think it matters. It matters.
It definitely has its application.
I mean, you know, that's where the more complicated stuff is. I wouldn't consider moving a div three pixels to the left as computer science. But concentrated areas of study like machine learning or AI, those kinds of things, that's important work that needs to happen.
If we want self-driving cars to actually work, right?
So I think it's fine.
That's a really good point.
Yeah, really good point.
And yeah, I think in, I don't know, the early 2000s, Java ruled the roost, and computer science ruled the roost. And then Ruby came out, and they were like, no, screw that HTML. And so things kind of took a turn, and then JavaScript is now the king. And, yeah, so computer science is in the backseat right now.
But I still think it's really important for certain applications.
Yep.
Design patterns.
Under.
Okay.
Hmm.
Outlaw, what do you think?
I'm going to say, I'm going to go with Alan on underrated because I don't think it gets talked about enough early on in your education and career.
Oh, yeah.
That's good crap.
You just changed my answer maybe.
So what you're saying, I was going to say good because I do think sometimes it can be overdone, but also I think that there's lots of times that I've looked at the system
and been like, man, why didn't we
just use a common pattern that we
have an ubiquitous term for?
You reinvented the observer or whatever.
Right.
I'm switching to under.
Underrated. I think that people should learn
it sooner.
Data structures.
You go this time, Mike.
I'm going to say underrated.
Same.
Yeah.
So far, we're the same on everything.
Man, this sucks.
You got to throw in something else.
How do you build on Pikachu?
Controversial.
Go ahead.
Node.
Over.
Man. Oh, boy. Here we go. It's not over. I'm going to say it's fine.
Fine. Uh, I'm going to say, I'm going to say, shoot. So I'm going to say fine. Fine, fine. It's amazingly impressive what you can do with some Node.js.
That's it? Like, what? All it is, it's just a JavaScript interpreter that's not within a browser, though, right? Like, the only reason why we talk about it is just because there's the one thing. I don't know.
It's not.
I don't.
It's a web server.
Dude, go look up an NPM package for anything that you want to do,
and it probably exists no matter how weird.
It itself is not a web server.
It's just that code was written to use it as the interpreter to make a web server.
That's what I'm saying.
It itself is just an interpreter, though.
It's just a JavaScript interpreter.
Express.js was written on Node.js.
I'm just saying, the fact that you could basically,
with, man, what's...
You could write a web server in any other language, though,
but yet you're not as excited about that
as you are about writing it with Node.
This is why I'm saying it's overrated.
The ecosystem around
Node has made Node
impressive.
Okay, so were you this excited about Perl
back in like the 90s because of
CPAN? You're like, oh man, the ecosystem
around Perl is amazing.
Man, I mean...
Well, they weren't using it for the Perl. That's for dang sure.
I mean...
Just say it,
man. What else
have you seen spring up
and so much come out of so
fast? Java.
As Node.js? Nah, nah. Not even Java.
What? Dude, Node.js.
Java was king for a while.
Yeah. Java was king, but
Node.js blew up
when it came onto the scene. Like, just straight up
blew up. And it hasn't gone
anywhere in like a decade
or however long it's been around now.
Has it even been that long? I mean, Java's
been a thing since like the mid-90s.
Yeah, but when's the last cool idea you heard come out of Java?
A lot of stuff did come out of Java, even fluent syntax.
All sorts of cool stuff and annotations.
So much innovation was there in the early 2000s.
But then all the innovators left and went to Ruby.
And then six months later, they went to Node.
Yeah, I don't know, man.
I think it's just fine for that reason.
Like, it's an impressive ecosystem built around it.
I'm not saying that – don't take my saying it's overrated as saying that Node itself
isn't a good thing.
Right, right.
Or that it's –
You just think it's overhyped at this point?
Yes, it's overhyped.
Just like I used to think that Java was overhyped too. I mean, even though I referenced Java as
bringing up so much, because I remember this was the one that
bugged me so much was back in the 90s. I don't know if you guys remember
this, but IBM used to run a commercial
where, I forget, but at one point
during the commercial, you would hear this guy say Java.
Well,
you got to have Java.
And I'm like,
really?
Why?
Why are you talking about any programming language on a commercial?
Like it matters.
Like who cares?
Like that,
that's,
it became a marketing thing.
And that's where I was like,
okay,
it was overhyped.
Right.
And no, it hasn't quite gotten there maybe.
I don't know.
Yeah.
What do we got next?
JavaScript.
It's fine.
Yeah.
Outlaw, you're struggling.
Yeah.
Can I do my Picard?
Like, oh, God.
Yeah.
I guess I'm going to say fine.
So Node, overrated, but JavaScript, fine. I mean, it's so necessary, though, to do any kind of decent web application these days.
What about you, Joe?
Where did you land on this?
I feel like, yeah.
Yeah, I'm going to go with fine because of ES6.
If it wasn't for ES6 and some of the nicer stuff that they've added,
then I would definitely say over.
So consider yourself warned, JavaScript.
You're on the edge.
Well, now they're iterating on it fast, though.
Yeah, ES2017 is already a thing.
Yeah, so I like your explanation.
I'll buy that, yeah.
Yeah.
All right, so we're all fine with that.
ORMs.
Ooh.
Outlaw.
Actually, Joe, you go first. I'm going to say nothing,
I guess. I don't know.
I guess
I'm going to just have to say fine because I don't know.
I'm going to say
underrated.
All right.
Oh, that's interesting.
I think I'll go fine, although I'd probably lean towards underrated on it as well. My reason for the underrated, though, is because it's so easy to just write your query,
pass it off to some kind of database system
that can execute it,
like whatever library you're using,
without having a true ORM there
and just getting back a data table.
So you end up with all these one-offs,
and your code is just very intimately aware
of what's coming back from that exact query,
and you're too coupled to those queries. So that's why I was going with that. Yeah, I agree.
I like the explanation. I'll stay with fine.
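The coupling point in favor of ORMs can be sketched like this. The row shape and column names are made up; the idea is just that even a thin mapping layer keeps callers from depending on one query's exact columns:

```javascript
// Hypothetical raw row, shaped exactly like one query's result set.
const row = { cust_nm: 'Ada', cust_actv_flg: 'Y' };

// A one-line mapper gives callers a stable shape. If the query or the
// column names change later, only this function has to change -- callers
// never see 'cust_nm' or 'cust_actv_flg'.
function toCustomer(r) {
  return { name: r.cust_nm, active: r.cust_actv_flg === 'Y' };
}

const customer = toCustomer(row);
console.log(customer.name, customer.active);
```

Without that layer, every caller is "intimately aware of what's coming back from that exact query," which is the one-off coupling being described.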
All right. What about code comments?
Overrated. Yes, overrated.
I think they're fine. I do some nasty stuff, and I like to excuse myself. I like to get that moral license. Like, I did some nasty stuff, but I commented about it, so you can't hate on me.
Here's my, here's my reason for the overrated: it's because it's too easy to rely on, hey, I'll just write bad code, and then I'll explain it with a comment, and I've covered my butt, right?
Yeah, that's what I like.
Whereas if you were able to take the time to make the code more expressive to where you didn't need it, right, then that would be a much better approach.
Now, there are situations where a comment might be necessary. And I'm not
saying that there aren't times. I'm not saying that there's never a time to comment. I'm just
saying that it's too easy to fall into the bad habit of relying on them.
I have a different reason for it, but it's along the same lines. Mine is,
people don't ever change the comments. So if you
ever do refactor the code, the comment stays
there and then people are misled.
And that's why I think they're overrated.
I'd rather the code be more
expressive. There was something similar that
came up in the
Slack channel.
I want to say that Mike brought it up, where we were talking about commented code being in your code base.
Right.
And he'd shared a regex to find five lines, you know, some X number of lines of commented-out code, that might be code that you want to delete.
And you know,
some of the conversation was about like same as with the comments though,
is that, Hey, you refactor something,
you change something
and now there's this commented out code
and you're not sure like,
huh, what is that about?
Do I just delete that?
Why is that left there?
Should I delete it?
Or was it left there for a reason?
Like maybe they didn't finish it
and so I should also make my change,
like maybe I changed the name of a variable or whatever, you know, because if you were using a
tool to do the refactoring, like to rename a variable, for example, or a class, you know,
it might not catch the comment, you know, because it's a string, right? So it could be left alone. And then when somebody goes to uncomment that code, you're like, okay, I'm ready to start working on this thing again, and now it doesn't work.
That's a good point.
Yeah, I don't know. I guess I just use it to make up for my bad code at the time. But there's a lot of times, especially in the UI stuff, where I have to do something weird because something weird is happening, and I can't afford the time to refactor the world. So I'll kind of put a comment there that says, like, hey, you see the text defined here, but it gets changed dynamically in certain cases; go check out, you know, whatever this file, because this value that looks like a plain simple thing is not what it appears. And I do the same thing in SQL sometimes, where I'll kind of put like a "not a typo" comment if there's something that
just looks wrong
or if two things are joined where it doesn't
look like they should be or I'm just doing something
weird for some weird reason
and I can't afford to
or it doesn't make sense to change the
schema, I'll put a comment in that says
yeah, I know what this looks like
but it's actually good because
see this ticket.
Yeah, I mean, that makes sense.
I'm not going to hate on it, but.
Yeah, you're right.
I mean, everything still stands.
Yeah.
I got two more.
Okay.
Linux servers.
Mike.
Huh. That's a toughie.
Because I want to say fine,
but Windows has been getting so much better now.
Now that you could run Linux on top of Windows,
I almost want to say like,
oh, then you get the best of both worlds.
Yeah.
I'll say fine.
I think I'm going to say it with fine.
Yeah, I'm going to go with fine too, but I feel the same
way. I felt like Windows is catching up and that's kind of
why I wanted to call it out.
That used to be a foregone conclusion
and be like, yeah, if you can deploy to Linux,
yeah, of course. But now
it's kind of like, well, yeah, I mean,
it's cool. It's kind of funny. Now Microsoft has embraced it, and now you can actually deploy to Linux. And it's like, oh, actually, I'm okay with Windows now.
Right.
All these years later, I don't care anymore.
Yeah.
It's basically what it is.
Well, it used to be a thing too that like everybody would say like,
why would you even want like a UI on your server?
Like that was a common argument back in the day.
But in modern versions of Windows Server, you know, you get asked, that's one of the questions, like, hey, do you need the UI?
Yeah, you want to run headless. Do you really want to do that?
Yeah. What's the new, uh, Pico or whatever? I think it's Pico. They changed it. Now it's just Windows Server Core, right, I think, is what they call it or something.
Oh, is it? Oh, yeah, that sounds pretty cool.
Yeah, I think with Docker and stuff
like that, I kind of don't really care about the underlying
OS anymore, especially with
cloud and everything. I just don't really think too much
about whether I'm running on Red Hat or CentOS
or whatever.
It's kind of commoditized, the operating system.
Yep.
So what have we got last here?
Last one. Yeah, I wanted to say
for programmers, so from a programming perspective, Windows on the desktop.
Wait, like using the Windows operating system?
What do you mean?
There's people that prefer it.
There's people that definitely don't prefer it.
Oh, there's people.
It's either Mac or, yeah, I mean, there's people.
This is a flame war.
Um, I think it's fine. I like it. I mean, it depends, I guess. It really depends on what you're coding in, right? But to Mike's point a minute ago, especially with the Windows Subsystem for Linux now, or the, yeah, Linux subsystem for Windows, I don't remember what it's called, but that kind of is a game changer. So I'd almost lean towards underrated to a certain degree.
Yeah.
So the question is,
do I think that using windows as the operating system for my computer,
is it overrated, underrated or fine?
Yep.
Uh, I had to stick with fine. Yep.
Yeah.
I think a couple years ago I would have said it's overrated because,
you know,
on the Mac I've got the terminal,
I've got the bash,
I've got brew,
like I've got all these really nice tools.
So I've got Docker.
But now I feel like Docker is kind of even that score for me too.
And so windows is kind of like,
it's,
it's convenient,
but there's a lot of stuff I don't care about.
And so it's like Cortana,
and there's just a lot of stuff I don't really care about.
And so in a way, this has kind of become commoditized to me too,
but I still don't want to be working.
I don't want to be living on a Linux box.
I think it's cool.
It's neat.
But I don't want to have to be vimming into things all the time
to change this or that.
I like the Windows.
So I'm going to go with fine too.
Yeah.
I mean, my reason for saying that it was fine was just because the variety tools.
I mean, you mentioned the Windows subsystem for Linux.
But not only that, though, like Git Bash, for example, you know, if you ever use that. I mean, I'm actually trying to break the habit of using bash on Windows, because in one form or another, like back in the Cygwin days from long past, right, I've had this habit of using bash on Windows. And I'm actually trying to get myself out of that world, right, because I kind of feel like it's a crutch, almost, to keep using it, as weird as it sounds to say. So I feel like you can kind of have your cake and eat it too in the Windows world, because there are so many, you know, just a plethora of applications out there and software that you could use on it, right? Where it's not like, um,
you know,
definitely in like a Mac and Linux world,
you might,
you might find yourself a little bit more constrained in,
especially like maybe not in when you stay mainstream kind of software,
you know,
vertical,
but when you start to get out into like the fringe edges of software, then you might find yourself
more limited.
I don't know. I'm going to say fine.
I totaled up. Today, I was actually the most
optimistic with two unders and the rest fine. I didn't say anything
was overrated.
Nice. You are sick.
You are sick. Yeah, it's
the cold medicine.
Whatever.
It's NyQuil.
So, Outlaw, you had two overrateds.
Alan, you had one
and everything else was pretty close to the same.
Outlaw, you did have more
underrateds, so you have more issue with how people think about things, these topics.
And Alan, you and I are both pretty close on thinking that they're pretty close to rated as they should be.
Middle of the road.
Wait, but you had two underrateds, right?
And you said that meant that you were the most optimistic.
But if I have four underrateds,
doesn't that make me even more optimistic?
Oh yeah.
I guess depending on you had,
uh,
sorry,
three under is,
um,
yeah.
So I guess you could argue it that way.
Um,
but you did have two overrated.
So I guess it depends on how you add it up.
I don't know how to compare these things mathematically.
Right.
He's the curmudgeonly happy guy.
That's a,
however that works.
Get off my lawn, but look at how
green it is. Right. Come back here and
swim in the pool, but don't touch my lawn. Appreciate it.
Yep.
Alright. Well, hey, I
had a thought that I wanted to share with you guys, though.
We talk about...
We've talked a lot about
the Clean Code series, right?
We've talked a lot about some of the practices that Uncle Bob has put out there.
And I kind of had this thought where it's like sometimes – do you ever find yourself struggling with the concepts of clean code in that you'll want to – you'll want your function, for example.
Let's just say you're focusing on a single function.
You want that thing to be small, right? But ultimately, there's things inside of that that
you might need to do to make it complete. But if you were to break that out into a separate function, then some other developer might come behind you and be tempted
to use it, and it's not something that you want them
to use. It's something that you think, this is already
probably a bad thing.
Does that make any sense? Totally.
So I was curious to see if you guys found yourselves at times struggling, and "struggling" is going to sound worse than it probably is, but you struggle with that kind of thing.
I do all the time.
It definitely is a constant struggle, especially with those long methods.
Like, you know, in a perfect world,
like we talked about the five, the rule of five,
where it's like every function is small,
every file is small, every namespace is small.
But in your real world code,
like sometimes you're in really big methods
and you don't even want to break necessarily your method apart
because it's going to get lost in this, like, sea of trash, right? So, yeah, I do like, um, C# now supports anonymous methods inside the actual function, and I think that's one way of kind of doing what you're saying. Because I have the same struggles a lot of times. Like, I will do stuff that I hate doing, like modifying arguments, or modifying the internal state of an array or a list or something that I pass into a function.
And I'm doing that because I want the convenience of not having to duplicate code a bunch of times.
I want to be able to reuse this common logic.
But I'm absolutely modifying the argument, which I always think of as being a big no-no.
And making it a private function isn't enough because sometimes I've got some big class that I'm dealing with and i can't afford the time to rip it all apart so i like the idea of doing little small anonymous functions
because it's kind of like a way of saying like hey this is mine in my function leave me alone
Yeah, I mean, I'm sort of in the same boat, but I don't think I view anonymous functions that way. I don't typically do it that way. It's kind of an interesting take on it, but I feel like it's still all kind of embedded in your other function. So you haven't broken it up
that much, but at least it does keep it scoped to where people can't mess with it. But I do,
I have the same problems, right? Like I'll look at something and be like, oh man,
like I think the one thing from the clean code series that stuck with me the absolute most is I want people to be able to read what's happening in my method like it's a story.
Right?
Like, keep things at the same abstraction level?
Not as much that, even though it kind of tends to just happen that way.
But if you can read it like a story.
You know, like, okay, go get customer.
All right? Update customer. go get customer. All right.
Update customer.
Now get customer orders.
Now do this with customer orders, right?
Like if I can make it to where I name my methods and I put those methods inside the other method
in a way that reads like a story, that's what I find myself trying to do the most.
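The "reads like a story" idea could be sketched like this. The helpers are hypothetical stand-ins for real data access; the point is that the top-level method is just the sequence of named steps:

```javascript
// Hypothetical helpers so the sketch runs; real code would hit a data store.
function getCustomer(id) {
  return { id, name: 'Ada', active: false };
}
function updateCustomer(customer) {
  customer.active = true;
}
function getCustomerOrders(customer) {
  return [{ total: 10 }, { total: 32 }];
}

// The method itself is the story: get customer, update customer,
// get customer orders, do something with customer orders.
function processCustomer(id) {
  const customer = getCustomer(id);
  updateCustomer(customer);
  const orders = getCustomerOrders(customer);
  return orders.reduce((sum, order) => sum + order.total, 0);
}

console.log(processCustomer(7));
```

A reader can follow the top-level flow without opening any of the helpers, which is the whole point being made.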
Where I typically end up struggling though with that is not, I don't typically think
about, Hey, what, what's another person going to come do after this? Because I can't control them,
right? Like some people are just going to screw up code no matter what, no matter what you do.
Um, but the thing that I struggle with that always frustrates me, it goes to like the CQRS
type things, the command query separation. It always drives me crazy that if I'm going to do something that mutates the state of
it, then I got to go get the state of it right afterwards.
Right.
And that always drives me kind of crazy.
So I'll sort of struggle with things like that.
Like, man, do I really want to split this into two or three calls, even though I know
that it's all going to be done right there?
Yeah. Yeah. Especially in JavaScript, a lot of times
I'll get some JSON from the server
and I want to augment that JSON. I want to
format it. I want to format the dates.
I want this to be capitalized. I want to do
some basic transformation on that stuff.
With JSON, it's so easy to just
take that stuff and augment
it right there. I know what I should
do is basically clone the object first
and then make my modifications there and return that as a new object
so I'm not mutating it.
But, like, come on.
Right.
Nobody had time for that.
Yeah.
I mean –
It's, like, decent.
It's already designed for that sort of thing.
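The clone-then-augment approach Joe says he knows he *should* be doing might look like this. The field names are made up for the sketch:

```javascript
// Format server JSON without mutating it: return a new object instead of
// augmenting the one that came back.
function formatOrder(order) {
  // Shallow clone via spread; structuredClone would handle nested data.
  return {
    ...order,
    name: order.name.toUpperCase(),
    placedAt: order.placedAt.slice(0, 10), // keep just the date part
  };
}

const raw = { name: 'widget', placedAt: '2018-09-10T12:00:00Z' };
const formatted = formatOrder(raw);

console.log(formatted.name); // formatted copy
console.log(raw.name);       // original is untouched
```

The original response stays pristine for anyone else holding a reference to it, at the cost of the one extra object he's joking nobody has time for.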
You mentioned wanting your code to read like a story,
but it made me think, like, well, what happens if the story is the site of the crash, and it's just, you're like, well, technically it's a story, but it's not going to be a story you enjoy. It could be like a Stephen King story.
And, you know, you just turned to the wrong chapter. That's on you.
That's awesome. But I do feel a little bit better, though, because with some of the comments that you made, Joe, I was kind of in the same thinking, where I've done this.
Although I wasn't necessarily thinking of anonymous functions, I don't know if I would call it a habit, but I have definitely found myself, in the recent year or two, using functions where I'll create one as a variable within a method.
Right.
And then that way I can put an expressive name on whatever the logic is that I want to use.
But like, as Alan pointed out, the scope of it is limited to my function.
So no other developer can use that.
And then when I find that there is a need to move that thing up to like, um, you know,
to make it more exposed, then I can, which is kind of like where I think we had talked
about it.
Maybe it was with the clean code series.
Probably, like, you know, defining your methods as private initially, and then progressively making them more exposed as you find the need. Right. And I find this is like a more granular version of that, right, where now the scope is just within the method, and it's basically a closure is what you're doing. And so every time I've done it, though, I kind of catch myself, and I'm like, oh, I mean, on the one hand,
I love it because no one else can use this thing and I'm not necessarily
introducing something bad to the rest of this class, for example.
But on the other hand, I'm like, well, how bad of a pattern is this?
Like, you know?
Yeah.
I mean, I don't have any like crazy passionate opinion about it.
I mean, I get why you do it, right?
And it's like you said, you lock it down.
Nobody else can touch it.
But then you are junking up that particular method, right?
That you're in because you're still going to have to organize it in a way to where it
reads pretty and all that kind of stuff.
Because, I mean, in the best-case scenario, you're going to have a bunch of anonymous methods, or even variable-assigned methods, up at the top of the method that you're running.
And then it's going to be down at the bottom where you're calling all that stuff.
So it's still not pretty.
Well, that kind of begs the question, too, though, because, you know, a common rule of thumb is to define your methods as close to where they're being used, right?
It doesn't get much closer, does it?
To define your variable as close to where it's being used. So, you know,
you could make the case that actually your function is going to be defined
right before you call it for the first time.
But that'll suck too, because then it doesn't read like a story.
Now you've got a bunch of broken up code and blocks that you,
that's going to be hard to follow.
And this is why I bring it up. Right. Yeah, exactly.
The struggle is real.
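The technique being debated, a helper defined as a variable inside the method so nobody outside can depend on it, could be sketched like this (names are hypothetical):

```javascript
// A helper scoped to a single function: no other code in the module or
// class can take a dependency on it. Promote it to a wider scope later
// only if a real second caller shows up.
function formatReport(rows) {
  // Expressively named, but invisible outside formatReport.
  const toLine = (row) => `${row.name}: ${row.count}`;
  return rows.map(toLine).join('\n');
}

const report = formatReport([
  { name: 'errors', count: 2 },
  { name: 'warnings', count: 5 },
]);
console.log(report);
```

This is the JavaScript shape of the same idea as C# local/anonymous methods: the closure gets a descriptive name without polluting the enclosing class or module.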
It is.
So, yeah. I mean, I would be ashamed to admit how much time I spend on trying to name things at times.
Yeah, that's good. I hate how many times I go into my code and wish I'd named something better, because I'm like, damn, this is item, and this is item, and this is p. And I don't do single letters, but I tend to name the variable for something like, you know, I don't really care too much, because I only use it for three lines. Then I get confused. I'm like, wait, which one was this again?
Hey, as long as you don't start your variable name with I underscore
or S underscore, I'll be fine with it.
Just don't do that to me.
It actually makes bile rise in me.
Oh, Hungarian notation?
You're not a fan?
I'm not a fan.
I'm an anti-fan of Hungarian notation.
I did like one thing you said a lot, too, about naming kind of like internal variables in your function after what it is. So rather than doing an operation on .Count, you maybe set a variable equal to, you know, that .Count, or .Where().Count() or whatever, if you're doing something LINQ. It kind of gives it a descriptive name, so it might be like badCustomerCount or something like that, because that's easier to read than seeing some big long LINQ expression that ends with a Count or a Length and doing some sort of math on it or something. So it is nice to kind of see that intent there with a variable.
Yeah. Like, a prime example, actually, it came up, I don't remember if it was today or yesterday, but, you know, you might find yourself in a situation
where you write an if statement and you're like, if this thing is true and that thing
is true, if both of these conditions, right?
And so you're also like, you know, to make this more clear, I'm just going to create
a variable that holds a function that states exactly why I need that condition to be
true.
So it'd be like, you know, and then that way, when you read the if statement, it read better. But I was like, well, I don't need this anywhere else. There's no reason to pollute the rest of the class space for it. And if I did, like, why do I need anyone else using it? I don't feel like anyone else wants to use it or should be using it. Maybe they should. I don't know.
I mean, Uncle Bob would probably tell me that I have the class doing too much, and that's why it's already a problem.
And I wouldn't be able to argue with that.
Far.
Did you just yak on the mic?
I'm yakking myself for talking over you.
Sorry about that.
But I get so excited thinking about the code. I was thinking
about a big if statement. I'll do something like
if initialized, or if object equals null and needs initialization, or something like that. And I'll put a little comment above it, like, if we don't have the object, we need it. And what I should be doing there instead is using a variable.
And sometimes I catch myself doing that and do the right thing and be like, oh, let me just create a variable that expresses my intent here. And then, if it's super simple and everyone knows what I'm doing, I don't need a comment. But I still find places where I just kind of instinctively do those comments.
Yeah.
Yeah.
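The pattern both of them are circling, a named condition instead of a comment above the if, might look like this. The user shape and the rule itself are hypothetical:

```javascript
// Name the compound condition so the if statement reads without a comment.
function settingsButtonState(user) {
  // The variable name carries the "why"; no comment needed above the if.
  const canEditSettings = user !== null && user.roles.includes('admin');
  if (canEditSettings) {
    return 'show';
  }
  return 'hide';
}

console.log(settingsButtonState({ roles: ['admin'] }));
console.log(settingsButtonState(null));
```

Unlike a comment, the name can't silently drift out of date when the condition changes, which was Alan's complaint about comments earlier.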
All right.
So how about this one for you?
This is going to be C#-specific, maybe, unless you can figure out a way to bring it into another language. But, uh, how do you feel about the use of the dynamic keyword?
Uh, it, I mean, it kind of defeats a lot of the compiling purpose, so it kind of bothers me a little bit. But at the same time, there's a lot of stuff I do that's like, I kind of define my data output in, like, say, a SQL query, and my C# is just pass-through. It doesn't make any decisions. It doesn't care. And so it's just a burden to keep that thing up to date. And so I like dynamic for those cases, even though I know Uncle Bob does not.
Yeah.
I mean,
I have neither love nor hate for it.
I mean, it's a tool, right?
It's a tool.
You're saying it like it's a derogatory term? Like, oh, you're such a tool.
No, no.
Like, it has its uses.
Like, if you're taking in inputs from systems, like, I mean,
if you look at log formats for, I mean, I work with Splunk on occasion and stuff like that, and it takes in unstructured log files.
And if you're taking in something like that, the dynamic is a perfect example of where using something like that works out.
You don't know the properties and things that are going to be on it.
That makes sense, right? And even like what Joe said, you know, you have something that's going to be shaping your data and you don't want to have to have a one-off DTO for every single different shape of that data that you're going to have for your possible results.
I get it.
So, yeah, it's a tool like anything else, right?
Just like reflection.
You'll have people that hate it.
There's a reason for it. You know, there are situations where it makes
sense. And yeah.
Right. So, okay. So one of the complaints that I heard Joe say was that you're moving the problem: instead of it being a compile-time check, it moves it to being a runtime check.
Wait, is it even a check?
Well, what I mean is like it could throw an error, right?
So instead of finding your errors at compile time,
that check happens at runtime and it could error then, right?
I have very bad dogs tonight, sorry.
That's awesome. I think it's hilarious.
But then the example that Joe gave, though, right, where your middle, middle, ah, easy for me to say, your middle tier was just simply passing the request through and then the response back through it, right? And that's why you didn't
want to have to care about it. But now, couldn't you argue that you're more tightly coupling
the UI tier to that data tier? Because now, in order for the UI tier to know
what data came back, it knows exactly what
the data tier sent, and in order for
the data tier to know how to execute
what
you're ultimately trying to query,
it knows exactly what the
UI layer sent. There's no
layer of abstraction between them.
It's true.
Yeah, I mean, that's okay.
Sometimes. I mean, it's funny. Where I see that it could be extremely useful is in situations where you're doing, like, custom aggregations on data and stuff, and there's no standard way of saying, hey, count of this field or something like that, right?
I could totally see where, like, having a dynamic variable there makes a lot of sense.
Where I kind of don't like it though is there's no contract, right?
Like if you have a request to an endpoint to an API or something, there's nothing there
that locks you down to, hey, what's the contract that you're looking for?
There's no way for you as a developer to go find it.
You just have to know what it is. And, and that's the, I think that's probably the one thing that does bug me is
if it's something that's a pass-through, fine, I get it. If it's something that you've actually
got to use, like you have your UI making a request to something, what, what am I supposed
to pass to you? How am I supposed to figure that out? You know what I mean?
Yeah, sometimes you don't care.
Like I remember I wrote a thing for like sending some like syslog events,
whatever, and it would take in like a query and it would serialize it to syslog format and spit it out.
And like whatever columns you had in that query
is what I was going to send to syslog.
And so it was up to whoever was writing that query
or whoever was like generating
that query to generate what it had. And that made sense, because my code's job was just a simple transport. It's like, I don't really care what's in here. I don't, you know, I don't have any rules or anything, per se. My only job is to get this from point A to point B. And so I think that was a good use of dynamic there, because it had no sort of ownership. There's nothing to buffer.
Like the UI, like the two points both already had that contract.
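A minimal sketch of that kind of ownership-free transport (the class, method, and field names here are assumptions, not Joe's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;

public static class SyslogTransport
{
    // The transport's only job is getting data from point A to point B:
    // whatever columns the query produced become "key=value" pairs.
    // It owns no schema, which is why dynamic (an ExpandoObject row) fits.
    public static string ToLine(dynamic row)
    {
        var fields = (IDictionary<string, object>)row;
        return string.Join(" ", fields.Select(kv => $"{kv.Key}={kv.Value}"));
    }

    public static void Main()
    {
        dynamic row = new ExpandoObject();
        row.host = "web01";   // the query author decides these columns,
        row.status = 500;     // not the transport
        Console.WriteLine(ToLine(row));
    }
}
```

The query author controls the columns; the transport just forwards whatever shows up, which is exactly the pass-through case being described.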
Yeah.
So I guess the best way to say it is, it would definitely allow us to be lazy.
Definitely.
And it would definitely break everything that we learned from, like, a clean architecture and, uh, onion architecture kind of approach, right?
Yeah, I would agree, depending on the usage of it, for sure.
Yeah, I like it. Use it sparingly. So maybe if I went somewhere and, like, saw a whole web service tier that's all built around dynamic, I'm like, oh man, right? But what if, what if maybe it's okay if the method that's using the dynamic only has, like, you know, two lines in it, right? Then that's where, to Alan's point about the DTO, it's like, well, why are you going to spin up a DTO just for this? Like,
you know, there's two lines of code and only one of them is using this thing that's
passed in.
Right.
Maybe.
I hate creating new classes.
So I'm with you there.
But now that we've got named tuples in C#, uh, I'm more prone to it.
Yeah.
Yeah.
I hate creating a new class. It's like, well, we've got to find some name to name these three properties, like, as opposed to this, as the six it is, right? The naming is going to take, like, all day long. It's like, the name of this class is Whatever Damn Data I Need For This Method.
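The named-tuples point can be sketched like this (method and member names are invented for illustration): instead of naming a one-off DTO class, the shape lives right in the signature.

```csharp
using System;

public static class Reports
{
    // No one-off DTO and no class to name: a C# 7 named tuple
    // spells out the return shape in the signature itself.
    public static (int Count, decimal Total, decimal Average) Summarize(decimal[] amounts)
    {
        decimal total = 0;
        foreach (var a in amounts) total += a;
        int count = amounts.Length;
        return (count, total, count == 0 ? 0 : total / count);
    }

    public static void Main()
    {
        var summary = Reports.Summarize(new[] { 10m, 20m, 30m });
        Console.WriteLine($"{summary.Count} rows, total {summary.Total}, avg {summary.Average}");
    }
}
```

Callers still get readable member names (`summary.Count`, not `Item1`) without anyone spending three hours naming a class.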
Should we create tickets in our ticketing system for naming things?
Spend three hours naming this variable. No, I'm just kidding.
That's right. The ticket itself was only four hours, but three hours of that was naming it.
So, hey, every pull request, you need to kick back with at least three name-change recommendations.
Oh, man. Oh, that would actually be an awesome thing for a review of a pull request. It's like, hey, you've always got to make some kind of recommendation, X number of recommendations, or else you didn't really review it.
Yeah, that's awesome.
Yeah, I'm not doing any more pull requests. Thanks, guys.
Yeah, merge to master. That's right.
Done.
Cool.
All right.
So,
so we've,
we've kind of gone back over stuff,
went on a bunch of tangents and, you know,
hopefully you guys got something fun out of that.
The resource we like is The Imposter's Handbook, which we can't recommend enough. It's awesome.
And for a list of other resources, we're trying to compile a good list of things that we highly recommend on codingblocks.net slash resources.
You can check that out.
We've got a link to this book up there and like little brief summaries of all that stuff.
So, you know, check it out.
We highly recommend you use the term.
Check it out.
Check it out.
All right.
So with that,
let's get into Alan's favorite portion of the show.
It's the tip of the week.
Yeah,
baby.
All right.
So I couldn't remember if we've already talked about this,
but Angry Zoot, uh, Jessica in our Slack channel, she mentioned this tip where, if you copy something into your clipboard, and let's say it's XML or JSON, then Visual Studio will give you some really cool options if you go to the Edit, Paste Special menu. So you could paste that JSON as a class or classes
or the XML as classes.
So pretty cool tip.
Thought we'd share it.
I couldn't remember if we already did.
So if we did, then I apologize
that you're getting a repeat tip.
I did a long time ago and I don't remember. This has been
a long time ago. I thought you did
and I went back looking for it and I was
like, I can't find it, so I guess it never
happened.
Yeah, I don't remember where it
was. So you don't get to take credit for it.
It was in episode
31. No, no. Angry Zoot got it. It was episode 89. But Angry Zoot, you could have all the thunder there. That's fine.
Alright, so mine
I'm actually taking
from somebody as well. So DanceToDie
over in Slack, he shared that
one of our
previous Slackers
Slack? Yeah, it is Slack.
Swix, he actually
has a TypeScript for React cheat sheet, which is really cool.
So we'll have a link to that there because if you try and get started on TypeScript with
React, there are some challenges there.
And he's got some nice stuff in his GitHub page.
So I checked that out.
And then the other thing that I wanted to bring up because it's new, I think as of SQL Server 2012, maybe there is a format function in SQL Server.
A lot of times you'll see people still doing like old school, like say that one of the things that I used to see is if somebody needed a number that was left padded with zeros,
then you would like, let's say that you need a six digit number, then you would basically do
a left string of six zeros and then concatenate the value that you wanted in it and then do
a right six on it, right? You can do that now without jumping through hoops. The format function in SQL Server
is essentially using the .NET format function under the covers. So just about anything that you can do with a format call, like a ToString using one of the formatters there, you can do the same type of thing with SQL Server using the same data types. So I have a link that will take you to the main page
talking about the formatting types in .NET.
And then on the left-hand side in the menu there,
it'll show you like custom formatting for numbers, for strings,
for date times, for that kind of stuff.
So extremely useful and really easy to do stuff
that used to really kind of stink in SQL Server.
Yeah, for sure. And, you know, DanceToDie's name in Slack today is Willie, the Baddest Dog of All, and I don't know if you've seen the pic today, but it's my dog Willie looking like the smug little jerk he's being tonight.
Oh, that's awesome.
Yeah, it's awesome. So that's very apt.
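For reference, here's the left-pad trick from that SQL tip side by side with the format-string version. Since SQL Server's FORMAT reuses .NET format strings, the same pattern applies in both; the T-SQL equivalents in the comments are a sketch, so double-check them against your server version.

```csharp
using System;

public static class Padding
{
    // Old school: glue six zeros on the left, then take the right six chars.
    // T-SQL equivalent (sketch): RIGHT('000000' + CAST(@n AS varchar(10)), 6)
    public static string OldSchool(int n)
    {
        var padded = "000000" + n;
        return padded.Substring(padded.Length - 6);
    }

    // Format-string way; SQL Server 2012+ accepts the same format string:
    // FORMAT(@n, '000000')
    public static string WithFormat(int n) => n.ToString("000000");

    public static void Main()
    {
        Console.WriteLine(Padding.OldSchool(42));   // 000042
        Console.WriteLine(Padding.WithFormat(42));  // 000042
    }
}
```

Same result, but the format-string version says what it means instead of jumping through string-slicing hoops.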
And my recommendation
today is for a brand new podcast. We were just talking about .NET Core a couple weeks ago. I was talking with Bozo, if you're out there, Richel Bozo, about .NET Core resources, and I said, hey, you've got to follow this guy, GaProgMan. He does a lot of writing about .NET Core, and maybe he's got something cooking you might be interested in.
And the first episode has been released. So Jamie, GaProgMan, has released The .NET Core Podcast.
He's starting that up, and he's a really prolific writer.
He's got tons of material.
He's been writing about .NET Core, and he's been kind of obsessed with it for a long time now.
So if you are interested in .NET Core,
then you have to go check it out. We'll have a link in the show notes there, and make sure you subscribe.
We've mentioned him a bunch of times too. He does some other shows, Waffling Tailors and whatnot, Devotaku, that we've mentioned.
So you know him. You love him. GaProgMan.
Check it out.
Yeah, dude's awesome.
Please do.
I have no doubt this is going to be super high quality.
I need to give it a listen.
Yeah, it's really good.
First episode is on the history of .NET Core, and he does a really good job of going over kind of how it came to be, differences between ASP.NET and .NET Core and, like, different versions, and talks about the csproj and the JSON.
I mean,
everything just,
just really good.
Very nice.
So that's about it for tonight.
Now, we talked about Big O; we kind of wrapped up some stuff talking about, you know, like, time and space complexity and how much it really matters to developers.
We gave some opinions and over/under, and Outlaw had a great section here on
kind of good and bad things that we struggle with.
Yep. So with that, subscribe to us on iTunes,
Stitcher, and more using your favorite podcast app, in case somebody happened to point you in the direction of our show. And
if you haven't already,
please head to www.codingblocks.net
slash review
where you can find some helpful links there
to leave us a review.
We can't express how much we
appreciate
reading those reviews.
Definitely.
And while you're up there,
make sure you do check out our show notes.
And if you're on your phone or anywhere, they usually make it over there as well: our examples, our discussions, and more.
And send your feedback questions and rants to the Slack channel, which is codingblocks.slack.com.
You can actually sign up for it at codingblocks.net slash Slack.
So make sure to follow us on Twitter too.
You can ask us any questions or whatever.
And head over to codingblocks.net
where you can find our social links at the
top of the page.