Coding Blocks - Understanding Complexity Theory
Episode Date: May 14, 2018
This episode we talk complexity theory while digging into Rob Conery's The Imposter's Handbook as Allen channels his inner Austin Powers, Michael finds linearly too complex to pronounce, and Joe ruins Batman for the rest of us.
Transcript
You're listening to Coding Blocks, episode 81.
Subscribe to us and leave us a review on iTunes, Stitcher, and more using your favorite podcast
app.
And go to www.codingblocks.net where you can find show notes, examples, discussion, and
a lot of other stuff.
Send your feedback, questions, and rants to comments at codingblocks.net, follow us on
Twitter at Coding Blocks, or head up to the site and find all our social links there at
the top of the page.
With that, I'm Alan Underwood.
I'm Joe Zack. And I'm Michael Outlaw. This episode is sponsored by Datadog.
Datadog is a software as a service monitoring platform that provides developer and operation teams with a unified view of their infrastructure, apps, and logs. Thousands of organizations rely
on Datadog to collect, visualize, and alert
on out-of-the-box and custom metrics to gain full-stack observability with a unified view
of all their infrastructure, apps, and logs at cloud scale. That's right. With 200 plus
turnkey integrations, including AWS, PostgreSQL, Kubernetes, Slack, and Java. It's amazing.
Check out the full list of integrations at datadoghq.com slash product slash integrations. Yeah, and some other key features include, but are not limited to, real-time visibility from
built-in customizable dashboards, algorithmic alerts like anomaly detection, outlier detection, forecasting alerts,
end-to-end request tracing to visualize app performance, and real-time collaboration.
Yep, Datadog is offering our listeners a free 14-day trial, no credit card required,
and as an added bonus for signing up and creating a dashboard, they will send you a Datadog t-shirt. Head to
www.datadoghq.com slash coding blocks to sign up today. All right, in this episode, we're going
to be tackling imposter syndrome by taking a hard look at your skill set and how to fill in those
gaps using the imposter's handbook. Specifically, we're going to be talking about computational complexity in this episode.
And if you would like to get a copy of the Impostors Handbook,
you can go to bigmachine.io and use discount code HAPPYIMPOSTERS,
no spaces, all lowercase, for a 15% off coupon.
And that's Impostors with an E at the end.
I'm really bad about spelling with double O's, which is not the case.
But if you forget that code, you can always tweet us or something,
and we'll hook you up.
It's 15% off, and they actually have print books and digital now,
which is really nice.
So moving on to a little bit of news.
First off, big thanks for all the reviews.
We really appreciate it. On iTunes, we've got NARO321, Mike, the FQ Bearded Wizard, and...
Hey, by the way, you read that exactly the way I did. It's not Bearded Wizard. It's Beard Wizard. Or Bared.
It was going to be Bared. Bared Wizard, yes. Excellent. All right.
And I've got Stitcher. So here we've got Coding Berserker, The Other Other Michael...
That reminds me of Austin Powers, the other other white meat. Sorry.
Gabe Hodges, Stunned by Soup (an amazing name), IsMonkey, Yuval Raz, Sov9, and Mohamed Omran.
So huge thank you to all you guys.
And hey,
we released a couple of companion videos to episode 80.
That was the one about Docker for developers.
And so if you head over to YouTube or the show notes or something, you can find links to those.
The first one was Alan developing an ASP.NET Blazor app in a Docker container.
So you get to see the process there.
And I made a little video talking about how you can develop .NET Core with Visual Studio.
And particularly, I wanted to focus there on what Visual Studio does to actually let you build in the IDE,
but have it actually rebuild in the container.
So if you're interested in how that works, then check out the video. It's like three minutes long or something.
Awesome. And one last bit of news here. We got a comment on episode 74 that I loved.
So we all spent a bit of time saying how we love the idea of postponing or delaying
your decisions, right? Like the infrastructure things that you put in place. And, and Christian wrote us and said that, you know, that's all fine and good, right?
But what about the situation where you need to version your data, right? You've got,
you know, somebody comes in and updates a record and you need to keep a track of that. You need
to keep an audit of that kind of stuff. If you only did this in your code
layer, then you're probably going to write a bunch of code to version data, right? Like to figure out
how it's going to store that stuff and all that. And he's like, but then you're missing out. If
you're treating your database as truly just a storage mechanism, then you might be missing out
on some features like temporal tables in SQL Server that will automatically keep history of that stuff
for you, right? Like it'll create a history table and keep version audit information on it. So
I'm curious what you guys think about this response. By the way, I love the response.
So basically another way to say this was, by delaying that decision and boiling down
the database to just a storage mechanism, you're treating it as the lowest common denominator. And you're not necessarily taking advantage of the features
that that particular storage mechanism actually provides. And instead, you might actually be
reinventing the wheel in some cases, like your temporal table example, right?
Exactly.
I mean, there's a lot of truth in what he says.
There is.
Yeah, I can't argue with it. I mean, that's spot on. I definitely think that's a downside of doing things the way we were talking about.
And so I think you've got to kind of take everything that we talked about,
everything you do in your day-to-day job with kind of a grain of salt
and try to make practical decisions that make sense for you.
So I think the real thing here, though, is there's a happy medium, right?
There's the extreme case, Uncle Bob's scenario, where he never used the database, right?
He found a way to keep coding around it.
And I'd say he's on one extreme because, like I just said, maybe there were some features that he could have taken advantage of that he didn't.
And maybe he ended up redoing some stuff that he didn't need to do or whatever. And then there's the other
extreme though, where you never even consider that you're not going to use the database. And so you
end up baking that dependency all throughout your code and making it really difficult to test.
There's no abstractions there or whatever, right? So that's on the opposite end of the spectrum,
right? And so the happy medium is trying to like get somewhere in between there where,
okay, let's take advantage of an Oracle or a MySQL or a SQL server or whatever.
But let's keep things in a way that we can, you know, isolate that, that layer
so that the dependency doesn't leak through.
Yeah, this one was interesting. I loved this comment because it was an argument
against what we had been saying from Clean Architecture, right? And I always
like it when somebody puts a counterpoint to it. Here's what I will say though, is if you abstract it, you don't necessarily have to implement everything in code,
right? Like, so for instance, in my mind, at least where I went with this was, okay,
so you save something, let's call it a record. Doesn't matter what it is. Let's call it a person
record, right? You save that. You could still have an abstraction that says, give me version one of this record,
or give me version two, or give me version three. Your code still doesn't know how it's implemented,
right? Whatever that last layer that is talking to the database or talking to the document DB
or talking to the file system, that's where it's actually implemented, right? So I don't think
that splitting that thing out into
the proper interfaces to where you're just saying hey i'm gonna have a thing that says you can go
get a historical record of this right like go give me version five of this record that doesn't
necessarily mean that you just said that you're not going to use the database to do that versioning
for you all it said is you don't care about the implementation of it.
You know,
you know that you're going to need to get historical records,
but that could have been done in the database.
It could have been done in a document DB could have been done on the file
system.
And the implementation is that last leg of it.
Right.
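Here's a rough sketch, in Python, of the kind of abstraction Alan is describing. Everything in it is hypothetical and just for illustration (the names RecordStore, get_version, and SqlServerRecordStore aren't from the book or any real library): the calling code only knows it can ask for a version of a record, while the storage-specific details, whether that's a temporal table, a document DB, or the file system, live behind the interface.

    from abc import ABC, abstractmethod

    class RecordStore(ABC):
        """Hypothetical abstraction: callers ask for versions, never for SQL."""

        @abstractmethod
        def get_version(self, record_id: str, version: int) -> dict:
            ...

    class SqlServerRecordStore(RecordStore):
        """One possible implementation, e.g. backed by a temporal/history table."""

        def get_version(self, record_id: str, version: int) -> dict:
            # The SQL-specific query would live here and nowhere else, so
            # swapping in a document DB or file-based store later just means
            # writing another RecordStore implementation.
            raise NotImplementedError("database access is out of scope for this sketch")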
Because what you don't want is
the specific implementation of how SQL Server might do that, right, or how you might get
that version from SQL Server, you don't want that to leak through to the rest of your application
because then it hinders your ability to decide, oh, I no longer want to be on SQL Server.
I want to move off to some DocumentDB database because it'll be maybe more cloud-friendly
or maybe it's easier to scale horizontally,
whatever your reasons might be, right? Like you don't want that dependency, or that that
implementation to leak through because then it creates a dependency. And that that is, I think,
the essence of the clean architecture, what you just said, is it doesn't mean you can't use it,
but you don't want its mechanism to make its way
into your code base, right? Past whatever you're hooking it up with. Right. And
I think that's important. And one other thing, and I think we've talked about this in the past,
and if we haven't, we could probably talk for hours on it, but there's the thing that I used to argue all the time with people and I feel is still
legit today.
Like, huh?
Tabs or spaces?
That's one, too.
But when somebody is like, well, what if you want to change from SQL Server to Oracle?
I'll stand behind this.
I don't think that's really a thing that really happens that much.
Right.
But I do think what happens is you find a gap in what you're using, and instead of full-on replacing it, right? Like you're not going to just
say, hey, I'm going to jump from this really expensive relational database system to this
other super expensive one. What you end up doing is you augmented it, right? You might have a search
need that you implement something like Elastic or Azure search or AWS cloud search or
something like that. Or you have this whole thing of, oh man, it'd be really nice if we had a graph
database that could do some of the, you know, complex lookups for us in a better way. So I
think that's the more realistic approach where you start thinking about, well, what are the other
ways that we can, that we can store some of this information so that it fits a use case. Right. Um, but yeah,
I love the comment. I absolutely love the comment. I appreciate you leaving it because
the counter argument was excellent, but I think it still does fit the same
abstracted with the layers implementation could be whatever you want it to be.
I still think it's a valid point though. I mean, by kind of programming to like a bland interface,
like you are kind of dropping some of those details and you're dumbing things down to that
lowest common denominator. So, you know, I think it's a good point and something to keep in mind
that like you don't have to, you know, pretend to be completely unknowledgeable about stuff.
Like you should use the right tool for the job. So you just got to make pragmatic decisions based
on your use cases. Well, I mean, the one thing I'd say is if you know,
you are using SQL server, then you can have knowledge about how it does it. And then maybe
that will affect how you write your interfaces so that you know that, Hey, this is a standard
thing that they do. And if I, and if I implement it in a way to where, you know, I know how to access a version or whatever, then that's something that would probably fit with anything else, right?
So it can guide you into your application thing, but you still don't want to leak that implementation through, I think.
Anyways.
Okay, so with that,
hey, feel free to comment on this episode,
codingblocks.net
slash episode 81.
And with that, let's dig into complexity.
And
specifically, complexity theory,
also known as
computational complexity theory,
focuses on classifying problems by their difficulty.
So, like, if something's, like, real nasty,
has no unit tests, poor abstraction,
that's something that would be rated really high?
No, I don't think that's what we're talking about here.
Not at all.
It could be.
We can't rule that out
because we haven't seen the problem that that code solves.
That's right.
Yeah, things get really rough.
And I always have a hard time reconciling this kind of stuff with the actual day-to-day kind of nine-to-five work that I do.
Just because it's not real easy a lot of times to see what algorithms are really underlying.
And so many of the quote-unquote algorithms involve manual users kind of pushing buttons and making decisions. And so they're not real clear cut kind of complexity enabled questions. So I've always kind of
struggled with this, but it's been really interesting to read this chapter and think
about it again. Yep. So, I mean, what are we talking about? Like when we say there are like
different types of difficulties, like there are multiple classifications, right?
And we're going to go through those guys.
And the first one is kind of the most common one that you're going to hear about all the time.
That's P problems, which stands for polynomial.
Which, I mean, is there like a non-mathematical way to describe this?
No.
I mean, really?
Well, isn't that the thing, though? This whole complexity
thing is trying to quantify
the complexity instead of, oh, that's really hard.
It's break it down. Well, I think there is, though.
But I have this note. This is in here about the
polynomial, just because when you think of polynomials, you think about equations. But here we mean problems that scale linearly as the input scales.
We're not necessarily talking about a specific math problem. If you have a method that takes in an array and, no matter how many items are in that array,
it's going to loop over every element in the array, right?
Then that thing is going to scale linearly.
That's such a weird word for me to say.
I know I'm not saying it correctly,
so bear with me.
Or I could mess that up and say, bare with me.
But, you know, if you send in an array with a hundred items in it, right, and then you send in an array with a thousand items in it,
you can see how that thing's going to scale as the input increases.
And that's what we mean here when we say polynomial. And just to be clear, when we say
scale, we're talking about the time it takes to complete it, right?
So if it takes one second for one item, then 1,000 items is 1,000 seconds, right?
So that's when we're talking about scaling.
As it goes up, the complexity stays the same, and the time is constant, right? Every time that you add another one, you have a constant additional amount of time.
Not completely because you can multiply
and all that kind of stuff. And it can be less, right? You can have things that scale sublinearly
and that's also a P problem. So these are just some number, whether it's one half or one times
the actual number of steps. And so an example I like here is we say, what's the biggest number
in an unsorted array? There's no way around going
through each index and saying, are you the biggest? Are you the biggest? Are you the biggest? Are you
the biggest? So if you've got an array of 1,000 and it runs in 1,000 seconds, then you can feel
pretty comfortable saying, you know what? I know this looks at each item and the actual steps for
figuring out whether this is bigger or smaller, those steps are basically
negligible. So I'm just going to look at this as a function of my input size and say, this is going
to run in seconds. So if you give me, you know, 100,000 size array, like it's going to run in
about 100,000 more or less. You know, we know there's some details here to work on. It's never going to
come out pretty perfectly, but it comes out pretty dang well in practice from what I've seen.
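To make that scaling concrete, here's a minimal Python sketch of the unsorted-array maximum Joe is describing. Nothing here is from the book; it's just the straightforward linear scan, and the function name is only for illustration: the work grows in direct proportion to the number of items, which is what puts it in P.

    def find_max(items):
        """Linear scan: touches each element once, so roughly n steps for n items (assumes a non-empty list)."""
        biggest = items[0]
        for value in items[1:]:
            if value > biggest:
                biggest = value
        return biggest

    # Ten times the input means roughly ten times the work:
    # find_max on 1,000 items does about 1,000 comparisons,
    # find_max on 10,000 items does about 10,000.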
Yeah. So the P problems, the polynomial problems, these are
going to be the ones that are typically the simpler problems that we might be asked to work
with. If we're asked to work with them, probably we're not.
Yeah, because I mean, most of these are your simple loops and things that you know have
to iterate.
And a lot of times that's not what you're working in most of the time.
Unless you are a judge presiding over a court case, you know, Oracle versus Google,
and you're trying to write your own loop,
you're probably not going to worry about problems in P right.
In P really?
See,
I would say that I worry about I N I N I N and then space,
not the,
not the next,
not the next.
No.
Yeah.
Yeah.
I was going to say like,
are you kidding me?
I spend all day with P problems.
Like I'm constantly finding the maximum.
I'm constantly sorting things or looking things up and, you know, hashes.
Or, like, most of the code I do, I would say, is very much in…
No, no, no, no, no, no, no, no.
We utilize that.
Yeah.
Yeah, yeah.
We'll use the tools and the framework, and we will write code that uses P all day long.
Wow, that sounds like a weird statement. So, but our boss isn't going to come and ask us, hey, I need, you know, we need to deliver the next version of our amazing website or our amazing game.
And, you know, the gamers, they really want you to write this for loop, right?
Like that's not going to be the request.
Yes.
Right?
So you're not writing your own sorting algorithms every day is what we're getting at, right?
No.
But I do solve things that involve writing loops all day.
Yes.
Yes.
Yes.
Loops, using loops are different, but.
That's not the problem we're solving.
It's just a thing, a construct that we're using to get to the problem that we're ultimately trying to solve.
We write a lot of code using p things,
but we're not solving p problems, I guess, is the thing, right?
I guess it depends on how you scope your problem.
I think the problem is I keep breaking it down into smaller chunks
until it is some sort of mathematically talkable problem.
A lot of times I write adapter codes.
I'll get a hash and it needs
to be formatted like such and such. So I'll kind of
loop through the elements and kind of
format them like I need to. And I know that's
that operation is going to happen
in numbers of times because I have to
go through each element doing some sort of transform.
So to me like that's one individual
sub problem of a bigger problem.
Yeah, but that's not the problem that you
were asked to solve, and that's the difference.
Right.
So you're saying the problems that we're asked to solve on a daily basis are traditionally
not polynomial problems.
Correct.
They are more complex than that.
The polynomial problems are the simpler problems that we're just expected to be able to know
how to do.
We're not asked to do those, And we might do them many times a day
regularly as part of solving a bigger problem. But that's not what we're going to be asked to solve.
Maybe you guys are doing something different than me, but I feel like I write polynomial
code all the time. And I rarely have to think about the non-polynomial problems. That's the
thing. Maybe the ticket is non-polynomial, but I've figured out how to solve the ticket within
the first hour of looking at it.
And the rest is just like kind of organizing those bits and solving those polynomial problems.
Yeah.
Yeah.
But the bigger problem, the real problem that you were asked to do included all those.
Right.
And it was not – it itself was not polynomial.
So you're kind of skipping ahead.
Yeah, exactly. Because one key point here about these complexity problems is that the computational complexity theorists or whatever you want to call them have figured out various ways to where you can transform one problem into another type of problem, right? So the problem itself in one form might be impossible to solve,
but if you break it apart in these other ways,
then you can break it into smaller units that are possible to solve.
So let's go on.
We need to lay down some language here,
get some vocabulary under our belt so that we can continue this on.
Yep.
So we talked about P, which is polynomial.
The next one up is NP,
and that's non-deterministic polynomial. And this one's interesting because what it is,
these are problems that are solvable in P time, given a non-deterministic algorithm.
And this is probably a lot of wordy words for people who haven't visited this in a long time.
But in the example that they had in the book that I thought was really interesting was,
and we've all done this, right?
Like you get a car full of people and you're like, hey, where do you want to eat?
It just turns into this thing where it's like 10 minutes later, like, no, really, dude,
I'm just stopping.
10 minutes.
Right.
Like an hour.
So here was the interesting thing is if you started making things complex and you said, hey, let's pull everybody and then everybody has to agree on something.
You start making things more complex.
Well, a deterministic route is what we typically do in programming to where you get some sort of input.
It comes in and you tell it, OK, run that method over there.
OK.
And then when you're done with that method, run this method and you're going to get outputs for your given inputs. A non-deterministic
approach is almost like a guess. It's almost like a random shot at something. So you get an input
and then it says, well, I'm going to choose one of these methods over here. Like I'm not really
going to pick one that you told me to pick. I'm just going to pick one of these. And then,
and then it might grab the result of that and then go grab some other random method
and feed the stuff through that. So that's non-deterministic. That's not a path
that you've given that has inputs and outputs. So going back to this whole thing,
problems that are solvable in p-time given a non-deterministic algorithm.
So basically, you are able to solve things quicker than what you should be able to by almost doing random guesses, right?
Yeah, but you still need to be able to verify that problem, which is its own problem.
Because if it's faster to verify the solution than it is to actually calculate it, you could just guess, right?
Right. That's a key part of what makes a problem able to be classified as non-deterministic polynomial.
Go ahead.
Well, yeah, so just to expand on what Alan was describing, like kind of the way he
described it in the book was like if you had let's say six people and you asked, asked each one
of them to go in order and say like, Hey, pick five, name five places you're willing to go to.
And then the second person would have to not only name five places that they want,
they might want to go to, but also taking the input of the first person and be like,
well, do I also, am I interested in any of those places?
And then that's going to keep going down the line.
So everybody's taking in additional input as you go.
Right.
Well, I think it was like actually six people, six questions.
So that way it came out to be 36 possible choices.
But it basically was like it was going to get exponential as you added in the seventh person.
Right.
And so if you ask the question like that, where you say, hey, everybody name six places that you're interested in going, right, that, you know, might be deterministic, but it doesn't scale well.
It's exponential at that point because they actually had a picture,
right?
With the six people,
imagine six circles and there were lines drawn from every circle to every other circle, right? It was like a graph at that point.
That's because every person has to talk to every other person. It doesn't have anything to do with the number of questions. Like, the number of questions is actually a red herring there. It doesn't matter whether it's six or one or twenty. What matters is that every time you add one new person, each person has to talk to another person, right?
Whereas the lucky guess way of doing that is, instead of saying, hey, everybody
list six places that you're interested in going
to, instead you just say, hey,
let's go to tchotchkes.
Yeah. Would everybody be happy going to tchotchkes?
Yes or no. And then if
everybody says yes, for example, well,
that's a really quick answer, right?
You can get that.
So that's where it becomes verifiable in polynomial time. But this is non-deterministic because that was, it was changing the algorithm a little
bit, right?
It was instead of, hey, let me put all these possible iterations together.
It was, let me rephrase how we try and get this answer out,
the yes or no. Will everybody be happy going to this place? Yes or no? It short circuits that,
right? And that's why you can get it in polynomial time. But that was non-deterministic.
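A quick way to see that verify-versus-solve asymmetry is with a toy subset-sum style problem. This isn't from the book; it's just an illustrative Python sketch with made-up function names. Checking a proposed answer is a single pass, while finding an answer with no better idea than brute force means trying every combination, which blows up exponentially.

    from itertools import combinations

    def verify(candidate, target):
        """Verifying a proposed solution is cheap: one pass to add it up."""
        return sum(candidate) == target

    def solve_by_brute_force(numbers, target):
        """Finding a solution the naive way means trying all 2^n subsets."""
        for size in range(len(numbers) + 1):
            for combo in combinations(numbers, size):
                if verify(combo, target):
                    return combo
        return None

    # verify((3, 9, 8), 20) answers instantly, even for big inputs;
    # solve_by_brute_force roughly doubles its work every time you add one more number.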
So check this out. One example I can think of that's even kind of funny here is we could say
that getting a mortgage, it's a polynomial
problem because there's, you know, 300 pages to sign. You got to get an inspection. You got to go
to the bank. You know, there's a finite number of steps here that you have to do for, for each house
you buy. So even though those steps might take, you know, 30 days to actually work through,
it's still a linear process. But figuring out how to go to lunch, right, where each person has to
talk to each other person on a team of five people, that's going to result in 20 conversations
potentially, right? If we're talking about each person talks to each other person to decide.
The first problem I described, which took 30 days, it's linear, right? So it's actually a P
problem. But the second problem, even though it's completed in, say, five minutes with five people, it's actually an NP problem.
So just because a polynomial problem is a polynomial problem doesn't mean it necessarily completes any faster.
It's just a matter of how well these things scale.
Correct.
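Just to put rough numbers on that scaling point, here's a tiny Python sketch, not from the book, comparing the two shapes of growth: the step-by-step mortgage-style process grows linearly with the number of items, while the everyone-talks-to-everyone lunch problem grows with the number of pairs. The function names and the five-steps-per-item figure are just assumptions for the example.

    def linear_steps(n_items, steps_per_item=5):
        """Mortgage-style process: a fixed checklist per item, so cost grows in a straight line."""
        return n_items * steps_per_item

    def pairwise_conversations(n_people):
        """Everyone talks to everyone else: n * (n - 1) conversations, which grows much faster."""
        return n_people * (n_people - 1)

    for n in (5, 10, 50):
        print(n, linear_steps(n), pairwise_conversations(n))
    # 5 people: 25 steps vs 20 conversations; 50 people: 250 steps vs 2,450 conversations.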
All right.
So just to wrap up this on the non-deterministic polynomial, I think, did we say that these are solvable in P-time by using the non-deterministic methods, which is the lucky guess?
The lucky guess.
Version of that.
And it's kind of like saying like, hey, you know what?
I'm tired of talking with four other people every time we want to go to lunch.
It's like everyone just put their favorite restaurant in a hat and we're
going to draw one. That's it. Yeah. And that lucky guess is kind of like, um, don't necessarily
consider it. Basically we're just talking about like you're changing, like as Alan said, you're,
you're changing the algorithm around, right? Like to where, you know, in, in the example that he
gave in the book, he's rephrasing the question, which is the algorithm in this scenario.
And it's cheating.
Yeah.
The interesting thing here too,
is these are the ones that we deal with the most on a daily basis.
Right.
And,
and I thought this was interesting too,
because I think this is also,
this is true.
These are the ones that we enjoy because it's a challenge,
right?
It's not just a mind numbingly.
Here's a for loop,
right?
This is,
oh, well, I have this interesting problem. How do I solve it? And there was one other part that I
thought was really cool because I never really thought about this. I've never worked in Prolog
personally. But if you're working in Java or C Sharp or JavaScript or a lot of the popular
programming languages out there, they're very deterministic, right? Like this method calls, this method calls that method
and you have your output, right? It's testable. Prolog will allow you to have the same
method signatures and it will non-deterministically pick one of them at random to call with whatever
the inputs are. And it'll even do a
thing called backtracking to where if you didn't get the result that you wanted out of the first
method that it called, it'll back up and then call the next method it has. So, so it will actually do
this non-deterministic call for you, which I thought was really interesting. I'd never heard
of it before. I thought it was pretty cool. I thought it was really weird too. I'm still struggling
with whether I'm working in P problems or NP problems. So many of the problems I deal with
require user input. And that's because I, as the programmer, can't decide for them what should
happen in a given situation. If you're working on some sort of grid and you click 100 items,
does that mean you want every item in this data set, even on other pages,
or do you just want this 100? I can't make that decision for you. So I cheat and rephrase the
question or rejigger things so it pops up a little modal. And I don't consider that an NP problem.
I wouldn't say that is. I don't think it would be. Hey, did I miss it? Did we say that there's
currently no real way to solve problems non-deterministically?
Well, I wanted to come up to that later.
Okay, okay.
I have that later.
But, yeah.
So we'll cover that again.
Scratch that from the record.
Yep.
Okay, so let's move on with our vocabulary here and get to NP complete.
So these are decision problems that we can classify as NP.
And these decision problems are NP-complete because verifying the
answer is easier.
So what's a,
what's an example here?
Is that something like a cryptography?
Uh, you hit me with a trick question. I'm just thinking, you know, cryptography. Like we're breaking some big numbers down into factors or whatever, and it's really slow for me
to kind of brute force that and try to find a solution. But once you have the solution, it's
cake to verify, right?
Yeah, I like that. I like that. The Boolean one, we get back to this later.
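Joe's factoring example is easy to see in code. This is just an illustrative Python sketch with invented function names: checking a claimed factorization is a single multiplication, while recovering the factors from the product by trial division takes work that grows quickly as the number gets bigger, which is roughly the asymmetry some cryptosystems lean on.

    def verify_factors(n, p, q):
        """Verification is trivial: one multiply and a compare."""
        return p * q == n

    def factor_by_trial_division(n):
        """Finding the factors the naive way: try divisors until one works."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None  # n is prime

    # verify_factors(15, 3, 5) is instant no matter how big the numbers get;
    # factor_by_trial_division grinds harder and harder as n grows, which is the whole point.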
Oh man, the problem is when you search for the examples, a lot of
them are things that we're going to be talking about shortly. So we'll come back to this.
And up next is NP-hard problems, which are
problems that other problems in NP can be reduced to, but they are not themselves necessarily
within NP.
So that's like combinatorial problems. These things, gosh, are NP-hard.
The example we kind of talked about, like people choosing where to go to lunch, is just
too many permutations. But what does it mean to have a problem that is not itself in NP,
but that other problems in NP can be reduced to?
So this – okay.
So let's take this back.
This is going back to that example that we talked about with the lunch, right?
If you make it the decision problem versus the combinatorial problem, right? The combinatorial problem isn't
going to scale well at all, right? That's going to be extremely difficult, right? But if you just
make it a simple decision problem, then it becomes much easier to solve. That's where breaking it
down, like what you were talking about earlier in your day-to-day, right? That's what you're doing. If you're taking an NP problem, an NP hard problem,
and you break it down into little algorithms that leverage a lot of P-type problems,
that's what you're doing. You're taking the NP hard and turning it into probably an NP complete
type problem. Not with the grid, not with the grid example.
Okay. I'm seeing that NP hard problems are at least as hard as the hardest problems in NP,
which I don't really understand that either. Like, can't, can't we have an NP hard problem that's made up of a bunch of really easy NP problems? I think that's the whole goal. I mean,
what he was hitting on was exactly that,
was trying to take the NP hard and breaking it into things so that it can be turned into
a deterministic thing that can be done. Are we sure that NP hardness has to do with
breaking problems down? Well, I think all of these, though, you're going to try to reduce it.
That's our goal is to try to reduce from a harder version to a less hard version.
So from like an NP-hard to an NP.
And an NP to a P.
Like, that's our ultimate goal. So from Wikipedia, it says, more precisely, a problem
H is NP-hard when every problem in NP can be reduced in polynomial time to H,
that is, assuming a solution for H takes one unit of time, blah, blah, blah. So
it finishes up by saying, as a consequence, finding a polynomial algorithm to solve any NP-hard problem would give polynomial algorithms for all the problems in NP, which is unlikely as many of them are considered difficult.
So I think that's the real difference between it is when you get into NP-hard, some of them can't be broken down as well.
So I'm still struggling with the difference between NP and NP-hard.
So NP is the class of computational decision problems for which a given solution can be verified in polynomial time.
But NP-hard are decision problems which are at least as hard as the hardest problems in NP, and they do not have to be elements of NP.
Yeah.
I'm lost on the difference between NP and NP hard.
If P does not equal NP, and this is in the consequences section on Wikipedia,
then NP-hard problems cannot be
solved in polynomial time. So that's the difference, right? The NP problems can be
solved in polynomial time if you can take a non-deterministic approach to it, right? Like
we said, you switch the thing to where it's got yes and no's. So that decision, you can't break
down an NP hard the same way. You cannot get it down into polynomial
time is what it sounds like.
I'm going to go with that. It does say
note that some NP-hard optimization problems can be polynomial-time
approximated up to some constant approximation ratio, in particular those in APX, I don't know what that is, or even up to any approximation ratio. So it sounds like you have to,
and we'll get into this a little bit later too, but you sort of have to
say that you're willing to take some level of not completely accurate
in order to attack the NP hard problems.
I think that there's some mathematicians or some students that are in discrete mathematics
classes right now that are like, they're probably screaming at the podcast.
Yeah.
I don't think it has anything to do with the accuracy.
Well, the traveling salesman is what I was thinking of specifically. And there was the whole greedy algorithm thing to where you take the shortest one.
From every point that you're going from, you take the shortest route.
And that's an approximation.
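Since the traveling salesman came up, here's a hedged Python sketch of that greedy idea, the nearest-neighbor heuristic: from wherever you are, always hop to the closest city you haven't visited yet. It's not guaranteed to find the optimal tour (that's the NP-hard part), but it gives an approximate answer quickly. The function name and the dictionary-of-dictionaries graph format are just assumptions for the example.

    def nearest_neighbor_tour(distances, start):
        """distances[a][b] is the cost from a to b. Returns a greedy visiting order."""
        unvisited = set(distances) - {start}
        tour = [start]
        current = start
        while unvisited:
            # Greedy step: jump to the closest city we haven't seen yet.
            nearest = min(unvisited, key=lambda city: distances[current][city])
            tour.append(nearest)
            unvisited.remove(nearest)
            current = nearest
        return tour

    cities = {
        "a": {"b": 2, "c": 9, "d": 10},
        "b": {"a": 2, "c": 6, "d": 4},
        "c": {"a": 9, "b": 6, "d": 3},
        "d": {"a": 10, "b": 4, "c": 3},
    }
    print(nearest_neighbor_tour(cities, "a"))  # ['a', 'b', 'd', 'c']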
Well, let's come back to the traveling salesman.
Okay.
But, I mean, kind of where I'm thinking about this is like thinking back towards Joe's question about the cryptography, right?
Like, you know, NP-hard versus NP-complete. Which is kind of interesting too, because if you look at Wikipedia, cryptography is covered in both NP-hard and NP-complete. So it's like, oh, I don't know which one it is, then. I'm kind of confused. So thanks, Joe.
But where I was going with that, though, is that, like you said, to create it the first time you have some known inputs, so therefore you can easily verify it, right? So there's where it's verifiable in polynomial time. But to try to crack that once you don't have the private key, that's where it becomes extremely...
We don't have the technology yet to do that in most cases.
Right.
If you're using a 2048-bit key... You look like you have a question on your mind.
No, I'm just reading.
I'm having a hard time with this, but I feel like I'm bogging things down.
So I don't think I'm going to be able to figure this out while we're kind of talking on the podcast.
But Rob broke it down as: NP-complete, think of these as decision problems, versus NP-hard, which was combinatorial problems. And that's when I was like, oh, okay, that's,
I guess, where if you have to have a graph of inputs, then that's NP-hard. But if you're just
making a decision, then it's NP-complete. And so if you can break that combinatorial problem down into from NP hard
into a decision problem,
then you can make life easier on yourself.
So NP-complete is the set of problems that are in both NP and NP-hard.
So the thing I'm just struggling with is like the difference between NP and
NP hard. So are we saying that NP are problems that can be verified in polynomial time?
So they can be verified quickly.
And NP hard are the problems that cannot be verified in polynomial time.
I think that's correct.
NP complete could be, NP hard could not be, right?
So the cryptography example where we say, okay, you've given me your secret and I can verify that really quickly, that's an NP problem.
But where are we going to go to lunch?
I can't tell you if we made the best decision unless we actually do the full circuit.
Possibly.
I think that's the way to think about it.
You could reduce it down to an NP problem by rephrasing it, and that's how you can get it into NP.
And then you can verify it, which is how it's also part of NP.
But if you stay with it as it originally was, like what's the most optimal place for the six of us to go to lunch, then it remains NP-hard.
So another interesting thing here is on Wikipedia,
they have the application areas for NP hard and it is interesting.
Like what they have here is approximate computing is one of the areas where
it's done.
Data mining is another one decision support.
So these are,
I mean,
you know,
yeah.
And you see cryptography was cryptography is in there.
So this is,
again,
this is where it's very much, uh, I don't, I don't want to say guessing, but it's not easy answers for any of those things basically is what it boils down to.
And in the same page, I mean, going back to when you mentioned the traveling salesman, it does refer to that as an NP-hard problem.
Right, because it's incredibly difficult to solve.
Yeah. Because it's incredibly difficult to solve. So, yeah, I mean, maybe we'll clear this up as we go.
And like you said, we probably got a bunch of math majors that are sitting here yelling at us right now.
Or they're probably not even listening to us because they're a little bit further ahead.
So, you know, who knows?
So the next one we have is EXP.
It's the exponentially complex ones.
So T equals 2 to the n power. These are hard. They don't scale well because they are exponential.
And somebody put in here, you know, when you have two to the n power, that's a hockey stick, right? As n increases, it just shoots straight up.
That's a really big one.
Yeah. So these are the types of problems that we are asked to solve.
And these are the ones that are usually fun and challenging and frustrating and lose hair and get gray over.
But it's the stuff that we like to do.
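For a feel of what 2 to the n looks like in practice, here's a small Python illustration, not tied to anything specific in the book: enumerating every subset of a collection is a classic exponential-time job, because each new element doubles the number of subsets.

    from itertools import chain, combinations

    def all_subsets(items):
        """Every subset of the input: there are 2^n of them for n items."""
        return list(chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

    print(len(all_subsets(range(4))))   # 16 subsets of 4 items
    for n in (10, 20, 30):
        print(n, 2 ** n)                # 1,024 then 1,048,576 then 1,073,741,824
    # Adding a single item doubles the work, which is the hockey stick.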
You guys ever see Dijkstra's algorithm?
Familiar with that one?
I'm sure he had a lot.
He definitely had multiple of them.
But it's the one for calculating the shortest links between all nodes in a graph.
Oh, yes.
That's actually covered later in the book.
Oh, okay.
Well, yes.
It's kind of funny to me because it's like that was always given as an example of like here's a really tough problem that we thought was going to be really hard to solve.
But someone actually figured out how to do it in exponential time, which stinks.
But it's actually, you know, much better than what we were doing before.
And it's funny. If you look at the algorithm, like you hear all this talk about it,
and you look at it, it's three, four loops. Or it's just like, you know, what looks to me like
brute force, but it's actually a good solution to a tough problem. Yeah, I think there's the
Bellman-Ford. Was that the one that replaced it as being a better one? Something like that?
I don't know. I think, yeah, because both of those are covered later, but yeah.
And we also have a point in here that, you know,
the ones that we've just covered are the main classifications
that you typically hear about.
There's more, but these are the ones that you'll typically hear about the most.
Yeah, I'm looking at just a bunch of them now,
and even like the algorithm looks differently than I expected. So it looks like my little three, four loop thing
was actually much simplified version of what is actually going on. So yeah, don't, don't quote me
on that. It looks like I need to go back to school because this stuff is getting fuzzy.
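For anyone who wants to see it rather than rely on the fuzzy recollection above, here's a minimal sketch of Dijkstra's single-source shortest-path algorithm in Python using a priority queue. For the record, it runs in polynomial time (roughly proportional to the number of edges times the log of the number of vertices with a binary heap) and assumes non-negative edge weights. The graph format and names are just assumptions for this example.

    import heapq

    def dijkstra(graph, source):
        """graph: dict mapping node -> list of (neighbor, weight); returns shortest distances from source."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry; a shorter path was already found
            for neighbor, weight in graph.get(node, []):
                new_dist = d + weight
                if new_dist < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_dist
                    heapq.heappush(heap, (new_dist, neighbor))
        return dist

    example = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(example, "a"))  # {'a': 0, 'b': 1, 'c': 3}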
That's awesome. And here's the great part, right? There's complexity theorists out there, right?
That's a job you want to have.
Yeah.
I mean, do you recall when you were in high school and your guidance counselor was talking about possible jobs that you could have?
And complexity theorists, that was definitely one.
It was high on the list.
Yeah, that they covered, right?
Yeah.
I mean, that's crazy.
But I guess if you think about it, though, it makes a lot of sense in today's world especially, right?
Data mining is huge.
Machine learning is huge.
All that kind of stuff.
Like data scientists and people that think about these problems
and ways to solve these problems,
like they're going to have jobs for years to come,
at least until the computers take over.
So, yeah, complexity theorists, they've actually found ways to reduce problems from one to another, which is what we were talking about breaking down, you know, trying to break down an NP-hard into an NP-complete and then an NP-complete into, you know, P-based type things.
So these classifications, these help us be able to communicate how difficult something is.
So when I was reading that, I kind of thought back to our conversations about domain-driven design.
So it was kind of like, oh, well, this is our ubiquitous language, right?
No?
Yeah, I agree.
That was actually something I liked a lot.
I saw some faces, so I was like, wait, what?
Yeah, this is what I liked in this chapter was when he was talking about it.
And we've all done this, right?
Like your boss comes to you and like, hey, can you get this done?
I don't know.
Let me take a look, right?
Like that's typically the answer.
Or I don't know.
That would probably take about a week.
You haven't really quantified anything. You're just spitballing it. Whereas if you looked at it and you were able to break it down...
Man, I feel like we're talking about... it's all about the runtime, not the development time.
This has nothing to do with how fast it takes me to program something.
Well, no, no, no.
Well, maybe your ability to even accomplish that goal.
Right.
I mean, if something's exponential, you might not be able to accomplish it. You know, I mean, he gave an example of, you know, he was asked to,
you know, in real time in a Ruby on Rails app, come up with a graph where, you know,
a social networking kind of graph where they would have these events, they would all have
certain tags on them. And then every user would have a certain set of tags that they're interested
in. And then, you know, you'd be able to in real time, see, here's the graph of everything that you might be interested
in. Right. And so, um, you know, in the beginning he, he was like, well, okay, I think I could,
you know, I think this is doable. Uh, let me, let me take a look at it. Right. And eventually he got
to the point where he's like, Oh, uh, no, I can't figure out how to do this.
And what he later learned was after studying complexity, he realized that the type of problem that he was asked to solve versus had he broken it down, he could have eventually been able to communicate better that, hey, what you're asking can't be done in real time. That's not to say
that it can't be done, period. You know, maybe if we did it in batch, right, and go ahead and
pre-compute results, right? But if you wanted this thing computed on the fly in real time,
we can't do that. And here's the reasons why we can't do that, right? So there was this quote
that he has in here that where he, that I really liked
where he says, think about complexity in terms of time, as you scale the inputs that go into the
algorithm that you're using to solve the problem. So you're right that it's not necessarily about
how much time you spend writing the code, but just about the execution time. Right. But that's where his
little lessons learned about like, well, you know, it might not perform at all. And so your ability
to even solve the problem in real time is going to be, you know, um, impossible based on that.
So I understand like, if we've got a batch process right now, we've got 10 customers
and I know it runs in 11 minutes. Okay, whatever. And then the boss comes and says, Hey, now we've got 1,000. And you look and you're like, Oh, crap. Now it's over 2,000 minutes to run. You can tell already that it's not running. It's not scaling in polynomial time. So when the boss comes back and says, hey, we've got a big expo next week, we expect 1 million users, you can say this nightly task isn't going to finish in a night.
And so you know that you've got to go and rejigger that algorithm. And so I get why
understanding run times matters. But I kind of feel like you can just do a couple of basic
measurements and see that ain't going to fly. You just need a napkin. Do you really need to
break it down to constituent parts to say, oh, hey,
this is a NP hard.
So I, yeah, I mean, like I get that would be helpful in some situations.
I just feel like that has never come up in my day job.
No, it hasn't.
But I do want to back up though.
What you said, you, you got 10 and then you go to a thousand, right?
Like that's assuming you have something preexisting, but like even going back to the example in
the book, the guy was tasked with building this system. It was new. It wasn't a known thing, it wasn't
iterating on anything else. And his takeaway from that, the important part to me,
was being able to identify the complexity of what was being asked, right? So maybe you don't ever
talk to anybody and say, hey, this is NP-hard. But the key is just like the whole, you know, six people talking about where they're going to dinner.
All those lines, the computation, the permutations and the complexity behind it was going to be really difficult to scale.
And you couldn't do it in polynomial time because it just wouldn't work that way. It was a
factorial problem. Right. And I think his whole point with what he was saying when he was talking
about that Ruby app was literally like, if I had sat down and, and really paid attention to what
they were saying there, I would have been able to look at that and seeing the, the permutations
were going to just blow out astronomically and it
wasn't possible. Right. And, and he even said, you know, didn't matter that, that I got fired,
you know, you hired 10 more people. They still weren't going to be able to get it done because
of the type of problem that they were trying to solve. And so I agree that maybe you're not going
to be talking about this. You're not going to be, hey, hey, boss, this is an NP-hard problem.
You may not say that, but at least just like a pattern in code,
you'll be able to identify some of those patterns and say, oh, yeah,
this is way beyond what is just a simple problem to solve.
Well, okay, let's go back to this, you know, NP-hard versus NP-complete,
right? Like one of the examples that was talked about in the book about, uh, this is the,
the halting problem, right? So the halting problem is you need to write a generalized
program that can determine if any other program will finish running or if it will continue,
infinitely. So will it, you know, and I think it's like, if it will finish running successfully or
if it ends with an error.
I don't remember.
But will it finish running or will it just continue infinitely, right?
Right.
One program that can determine that for any other program, right?
That's an NP-hard problem.
How are you going to do that?
How are you going to figure that out to say definitively whether or not it's going to or not, right?
So this problem, the halting problem is considered undecidable, right?
Because you can't, as an NP-hard problem, you can't determine that it's going to one way or the other.
You can't determine ahead of time that it's definitely going to crash with an error or that it's going to run successfully and never have a problem and keep running.
Right?
Does that help?
Yep.
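The classic argument for why the halting problem is undecidable fits in a few lines. This is only a sketch of the standard proof by contradiction, not anything from the book, and the halts() function here is hypothetical by definition: assume it could exist, then build a program that does the opposite of whatever halts() predicts about it.

    def halts(program, program_input):
        """Hypothetical oracle: returns True if program(program_input) eventually finishes.
        No such general-purpose function can actually be written; that's the point."""
        raise NotImplementedError

    def troublemaker(program):
        # Ask the oracle about a program being fed itself as input...
        if halts(program, program):
            while True:      # ...if it would halt, loop forever instead.
                pass
        return "done"        # ...if it would loop forever, halt immediately.

    # Now ask: does troublemaker(troublemaker) halt?
    # If halts() says yes, troublemaker loops forever; if it says no, troublemaker halts.
    # Either answer contradicts the oracle, so no correct halts() can exist.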
So if you get a ticket like that, you should push back on it.
Yes.
Yes.
No.
I mean, you're laughing, but yes. That's true. That's really
the point of this. And that was kind of his point with the graph example that he gave was that at
the time, it was an NP-hard problem that he was asked to solve, but he didn't recognize it up
front. And so he later had to, you know, he realized it later after, you know, the job was
long gone where his folly was in it.
But that's the point is if you can recognize these things ahead of time, then you could point it out and be like, whoa, wait, timeout.
We got to timeout here.
Because as the way that we're being asked to try to solve this thing, we can't do this.
So we got to attack the problem in a different way.
And that's where you try to break it down from.
So I was trying to think, like, can you think of it as subsets? Like NP-completes are... but I don't even want to... hold on, wait, let me look forward in the notes, because there is the answer to it.
We don't have it in here. I don't see it. So it's called SAT, right?
Oh, no, that's in there. That's definitely in there. Where is it?
Yes. You're talking about the Boolean satisfiability problem.
Where did I put that in the notes?
I don't see it.
Because this is actually where it matters.
So, I mean, I guess we should.
Oh, I see it.
It's way down.
Oh, dude, no, it's not in here.
It's in the resources we like.
Well, if we come to it, are we going to Google for it?
So, here's the thing.
What SAT is, we don't actually have it in the notes in terms of what we need to talk about.
But this is how you take something like that and you break it down. Basically, there's people behind it, and I'll let somebody
look up who authored this thing. But the whole gist of it is, instead of having these hyper complex
problems, you try and take it to where it returns a true or a false. You try and convert the problem
into something that will return you a
Boolean.
And if you can get a true out of it,
just like the whole example of the six people going to dinner is,
will everybody be happy going to this place?
Yes.
Okay,
done.
We're out.
We figured it out.
And that's what SAT does: it tries to short-circuit the problem by
giving you a true or false.
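As a rough picture of what a satisfiability check looks like, here's a hedged Python sketch (real SAT solvers use far smarter search than this, so treat the names and approach as illustrative only): a formula is just a function returning True or False, and the brute-force approach tries every assignment of the variables until one comes back True.

    from itertools import product

    def brute_force_sat(formula, num_vars):
        """Try every True/False assignment; return the first one that satisfies the formula, else None."""
        for assignment in product([False, True], repeat=num_vars):
            if formula(*assignment):
                return assignment
        return None

    # Example formula: (a or b) and (not a or c) and (not b or not c)
    formula = lambda a, b, c: (a or b) and (not a or c) and (not b or not c)
    print(brute_force_sat(formula, 3))  # (False, True, False)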
So I know one kind of common example that I actually have seen come up in some workplaces
is scheduling. Like if you're, say, like you work at a college and you've got a bunch of students
and someone comes to you and says, hey, look, just figure out the optimum schedule. You know,
we've got everyone's like, say, work dates and the classes they want to take. Just figure out
the best schedule and do it. That's an NP-hard problem. It's kind of similar to the knapsack problem. And we have different
solutions based on, you know, what we can do and kind of different algorithms for solving it in a
decent way. But ultimately, you know, if you got this kind of ticket at your day job, this would
be something where you're like, you have to come back and say, look, I cannot find you the optimal
solution for this. If there's any sort of scale going on here.
It's just not going to happen.
But I've never gotten that ticket.
So I'm skeptical.
I do think I should say like I probably sound like a downer here.
But I do think it's really important to kind of like read over this stuff because it does come up.
And I think it's the basis for a lot of things that we have to deal with.
But ultimately, I feel like this stuff doesn't really affect my day life,
my programmer life pretty much ever.
It's good to be able to speak to it, right?
Like if you ever see MP come up somewhere,
then you can at least know that, oh, that's non-deterministic polynomial.
Okay, I heard it.
I have a somewhat rough understanding of what it is.
And then at least you can see the context of it, right?
Well, my takeaway is kind of similar to what it is. And then at least you can see the context of it. Right. Well,
my takeaway is kind of similar to what Joe is saying though.
Like,
yeah,
I'm never,
I've never heard anyone in a meeting or whatever,
you know,
talk about like,
Hey,
this problem is going to be MP hard.
Here's how we're going to break it down to be complete.
And then we're going to go to MP.
Like that's never happening.
Right.
But I think what's important though about understanding
these is that if you can understand it and you can then recognize that something that you're
being asked to do falls into one of these categories, then it does give you some vocabulary
that you can go back to other people and say, here's why this is a problem, right? And then you can say like,
this problem as you were asking it is NP-hard
because of X, Y, and Z,
and here's what NP-hard means, right?
And I guess it adds maybe some credibility
would be the way to phrase it.
I feel like that's not the best way to phrase that,
but I guess it adds some oomph to the point that you're trying to make.
Well, it's a mathematical provable thing, right?
So it is credibility.
I mean, it's just like if you're talking to politicians, you want to speak in their language, right?
If you're talking to computer scientists.
That's wise.
Fair.
But yeah, if you're talking to computer science people that are traditionally trained in computer science, or you're talking to a mathematician, yeah, right? And depending on the field you're in, if you're in data science, I wouldn't be surprised if this comes up. Or if you're trying to win that million dollar prize from the Clay Foundation. It says, as of 2007, heuristic SAT
algorithms are able to solve instances involving tens of thousands of variables and formulas
consisting of millions of symbols.
That's pretty impressive when you think about it.
I mean, that says a whole lot.
This thing is chewing through crazy amounts of,
of complexity and it could do it.
That's,
and it's used for things like artificial intelligence,
circuit design, and automatic theorem proving.
I mean,
that's,
that's heavy duty stuff.
Yep.
So just to round this out,
uh,
the last one here would is simply are,
and we won't spend a lot of time here except to say that all problems that to round this out, the last one here is simply R.
And we won't spend a lot of time here
except to say that
all problems that are solvable
in a finite amount of time
are classified as being in R.
So everything that we talked about so far,
if it's solvable in finite time,
then it's also solvable in R.
So if you were to think about
like a Venn type of diagram here,
like these are all circles within that R.
And then the halting problem, like you mentioned before,
does it end successfully or with an error?
That is outside of R because you can't determine what's going to happen.
It could infinitely go on and you'll never get your answer back.
So the scheduling problem I mentioned, that's R.
Because if you sat there long enough and ran through all the
presentations, you can say, yes, this is objectively the best schedule
we can come up with.
And an example of a problem that's not in R is GIF versus JIF.
Hey, by the way, R, it could finish in a million years from now.
The thing is, it's some sort of calculatable end state, right?
Like it's going to happen.
Yeah, it could be like how many years does the sun have left, right?
Like that could be problem R.
So it could be exponential in nature.
So, exponential problems are still, you know, can still be solvable in R.
Right.
Yeah, remember, Earth calculated the number 42 after seven and a half million years of thought.
That's how long it took to reach the answer 42.
He's dropping, like, all these little eggs.
It's awesome.
Laying them eggs.
There you go.
All right.
Yep.
So we say it all the time.
We very much appreciate you guys leaving us reviews.
Those who have taken the time to go do it.
We read them all.
We love them.
I mean, it's fantastic to see what you guys say. So if you haven't already, if you'd like to give
back to us, please do head to codingblocks.net slash review. Click one of the links that we
have there. If you have iTunes, feel free to do it in there. If you don't, you can do it on Stitcher.
You don't have to sign up or anything. And we very much appreciate it.
And, you know, that's a big thank you for doing it.
All right.
And with that, it's time for my favorite section.
Let's get ready to survey!
Ding, ding, ding.
And in this corner.
All right. So, um, last episode we asked, do you use Docker in your current dev life? And your choices are: yes, so I can install and blow away software while keeping my system clean; or yes, it's part of my build chain; or yes, it's part of my production deployment; and lastly, ain't nobody got time for that.
All right, I think Alan went first last time, so Joe, what you got?
Uh, I think that most of the developers listening to the show work at smaller shops, so I'm gonna say that it's part of ain't nobody got time for that, with 32 percent.
Okay, that makes me sad. I'm also going to say ain't nobody got time for that at, what did you say, 30?
I'm not telling you.
35.
Okay.
And I hope that that's not it, but I'm pretty sure.
All right, so let me verify these numbers here. Both of you are
ain't nobody got time for that. Joe at
32% and Alan
at 35%.
Yep.
Joe?
No way he
won. He lost.
Yeah, I know.
Ain't nobody got time for that: 60%. 60% of the votes. 60%.
Wow. Man, you guys don't know what you're missing. Y'all missing out, you really are.
Oh, so here's the thing, right? It's a daunting world to get started in, because you start looking at it and you're like, man, there's just too much here. But it is life-altering.
It is.
I just published an open-source project.
It's actually kind of two projects in one.
One of the projects spins up four different services. It's got Elasticsearch. It's got Kibana.
It's got an APM server, a
monitoring server. It's got an application.
Man, that readme says
cd into the directory, docker
compose up.
Boom.
You didn't have to install Java or Ruby or Python, all of which are indeed dependencies of this.
But you don't even need to know about it.
They're in the containers.
Yeah, man.
Yeah.
I knew that's what it was going to be.
I mean, because unless you have a compelling reason to drive you towards it.
So it sounds to me like if you are in that 60%, then maybe check out the videos that Alan and Joe mentioned at the top of the show and maybe see how easy this thing is and how you might be able to start putting it to work for yourself. And maybe it won't fit in your day job.
But definitely in your spare time, spare projects, side projects, whatever.
You might be able to find some uses for it.
And maybe you might be able to eventually figure out a way to fit this into the day job. And you know the funny part for me, at least with this, is we talked about Docker almost exclusively.
Because that's what we're familiar
with. And that's sort of the hotness in containers, but man, places like Google been using
containers for years. I saw something that said that they do a billion plus. So it's not a small
thing, right? Like if you ever look at getting into the cloud or any of that, they're all using containers, whether or not it's Docker or not is up for debate, but the containers are
real, man. They make life a lot easier and they are what allow scalability in a lot of places.
What was number two?
Yeah, it's part of my build chain.
Oh, that's cool.
Yeah.
At 5%.
No, it was 17. Oh, that's awesome.
Yeah. Well, we should run the survey one year from today and see what
it says. That's right. We're going to flip this on its head. Yeah. I mean, so
if you look at that, roughly 40%, if you want to take the optimistic
point of view here, roughly 40% are using it
in some way. That's awesome.
So, yeah.
All right.
Let's get with today's survey, which is how important is it that developers have an understanding
of computer science-y topics, much like the one we're discussing tonight?
So, your choices are, uh, no.
Uh, or, uh, yeah, I guess it's good.
Or, it's not mission critical,
but I prefer working with people who know their O from their theta.
And lastly, super important, and I can prove
it mathematically if you
accept my base case.
I feel like
I've been a little extra salty tonight. I gotta
apologize. I guess you guys can probably
guess where I land on the survey.
Let me see. If I can
guess, I'm going to go ahead and predict his
results for episode 82.
Joe is going to say no, but he'll say it, but he won't be too sure of it.
So he definitely won't go beyond a third.
He'll fall somewhere in that 30-ish percent range.
That's not my actual answer.
Despite my kind of salty attitude,
I do,
I am happy to be reading the book and I think it's good to go over this
stuff.
It just,
I think that I'm kind of frustrated from my college experience and like
going and I spent so much time like with this stuff and you get out,
like I got in the workspace and I was even in the workforce at the time
and just not using it.
And I was really frustrated.
I was like,
man,
I got to go home and do this homework that has nothing to do with my day job. Yeah. I guess I just gave my answer. Crap.
You know what? That brings up another thing. Just a random thought. I have a buddy whose son, also a friend of mine, is in school, and he's about to finish college. Right. And I mentioned this to you guys off air. And this is interesting for anybody that's in school
right now and you're taking classes, and you're like, how does this even matter, right? Like, he took a class that was an OO-related class, and it was a lot of design patterns and stuff, and he's like, this is useless, like, why am I even doing this? And he's not doing well in the class because he just doesn't find it interesting. And it's unfortunate, but that's kind of true at that point in your learning:
patterns don't make sense because you haven't even really figured out how to write the code.
Well, yeah, you don't even know what kind of problems you run into in code. So learning
these patterns before you understand how the code even works, doesn't make sense. And it's,
and it's a shame, but just do know that when you're taking
those things, they matter later, because when you start looking at code, you start understanding code, because you can look at the patterns and you know how they work.
Yeah, and I don't even know if we've ever gotten to the most important pattern. Like, we've talked a lot about patterns, but there's one that's like the most important one, and I don't know that we've ever discussed it.
I'm curious.
Hollywood template.
No, we've talked about that.
It's the hello world pattern.
Every project begins with that.
You got to start somewhere.
That's right.
So, uh, to do list.
That's right.
Yeah.
I found this while, while we're in a humorous mood on Reddit programming humor.
I thought I would share.
Debugging, verb, being the detective in a crime movie where you are also the murderer.
Nice.
So true.
This episode is sponsored by Stack Overflow.
Your engineering team already knows and loves Stack Overflow.
They don't need another tool they won't use.
That's right.
Get everything that 50 million people already love about Stack Overflow in a private, secure environment with Stack Overflow for Teams.
Stack Overflow for Teams is a private, secure home for your team's questions and answers.
No more digging through stale wikis and lost emails.
Give your team back the time it needs
to build better products.
Yeah, and with plans starting at just $10 a month,
you can get an unlimited, private, searchable Q&A archive
served securely on a dedicated network.
Try it today with your first 14 days free.
Go to s.tk slash codingblocks to learn more.
That's s.tk slash codingblocks.
All right, so got a question for you.
Will we ever have the ability to solve
problems non-deterministically? Yeah. So this is kind of what we hinted on before, right? So
there are some who think that we will eventually be able to, whether it be because there's some
new algorithm that comes out to solve these, or we're able to put this into a chip
right? Obviously, that implies that there are some that don't. Yeah.
So is that an example here where we've got a quantum computer, and we've got some sort of chip, we've got a little box, we kind of close the box, and there's a bunch of qubits in there, and they go around, they jump around, they're in all different states at different times. And when we open that little box and we observe it, we can see quickly, in some sort of linear or even sublinear amount of time, the answer to non-deterministic problems. Is that, like, one of those solutions? Is that the kind of chips that they're talking about, or am I totally off the rails?
Uh, I wasn't necessarily thinking about it in terms of that kind of chip. I guess that kind of chip could fit, though.
That's like,
that's an example I could think of.
It's like where my understanding of like how that stuff is supposed to work is like,
we kind of like shut that stuff off.
And then when we opened up,
there's our answer and magically,
and it's not that it happened magically.
It's just that these,
these qubits existed in multiple different states at different times.
So it was able to like do things very quickly.
I mean,
when I think about things being done non-deterministically
though do you ever like think of like there's a maze with some rats in it and they're just going
to like run around and eventually they'll get to the cheese and you know at the other end you don't
know how they're going to get to the cheese but they're going to get to it and they're not going
to do it in any kind of rhyme or reason they're going to bang their head into every wall until
they eventually get there right like that's sort of what I – that's what I imagine.
Right.
Now you got to put a rat in a chip is basically what we're going to get to.
And that's going to be Frito-Lay's next flavor.
That'll be the Super Bowl winner.
So I think the big deal here is that if we are ever able to ever in the history of the universe,
if we are ever able to solve a problem non-deterministically,
basically, you know, that means that P is equal to NP, which is, you know, a big problem.
This is the question, like, if you can solve this or prove that it's not true, then you
can win a million dollars with the Clay Institute.
But more importantly, like, you'll, if you're familiar with this type of problem, you'll
be able to recognize like nerd t-shirts or, you know, little jokes in Futurama or whatever
that references sort of thing. But if it's ever in the history of the universe,
the case that NP problems can be solved in polynomial time,
then P equals NP. That means that the set of problems solvable in polynomial time is just the same as the set of problems in non-deterministic polynomial time.
Which makes you wonder what happens to the NP-hard problems at that point.
Do those also scale down?
Yeah, I kind of struggle with that.
And so I watched a whole bunch of videos
on YouTube because I was trying to figure out it's like, so somebody does like they scribble
on the math board or the chalkboard or whatever. And they finally prove at the bottom, like,
oh my gosh, P equals NP. That to me tells me that every problem that we used to think was hard and
not solvable and not in a scalable amount of time actually does have a solution. We just haven't
found it yet. And what I kind of got
from that was, well, if we could figure out that it's the case, then we'll be able to use what we
learned from figuring out that it was the case and kind of like create a kind of a pattern that
will help us solve these problems. But that seemed really like wishy-washy logic to me.
And so I'm sure this is just like a deficiency in my knowledge, but I didn't understand how
proving that these problems have a solution equated to having the solution.
Well, that's the weird part too, right?
When you say that there's a pattern, like that's almost the whole point of non-deterministic though is there's no pattern.
It's almost like randomness.
So imagine someone proves like, hey, actually there was a pattern.
It just, you couldn't see it before.
You couldn't see it.
So they prove that there exists at least one pattern. It was just too complex to see before.
Okay.
All right.
It doesn't mean that there's one pattern,
you know,
it could be a lot of different patterns.
And that's why I kind of thought like if somebody did prove like tomorrow that P equals NP,
it doesn't mean that all of a sudden like cryptography is worthless.
Okay.
Right.
Am I right or wrong?
I mean,
for our sake,
we would hope so.
Right.
But that's kind of where a lot of the conversation around quantum computing has
gone.
Right.
Like is that,
Hey,
if that's thing ever becomes a reality or when it becomes a reality,
then our current cryptography,
as we know it, our current
encryption is done. It's gone. Right? Yeah. But isn't there a difference between knowing that
theoretically I can verify solutions in polynomial time? Isn't there a difference between knowing it's
theoretically possible and then like actually being able to do it? But if you do, if you are able to reduce all NP problems,
all non-deterministic polynomial problems into polynomial time,
it's not just the verifying of them.
You're able, what that means is you're able to solve them in polynomial time.
But isn't there a difference in knowing that I can solve them in polynomial time
and then knowing how to solve them in polynomial time?
Oh, definitely. It's the how-to that would change everything, right?
It's not necessarily the knowing how-to.
It's how it would actually happen or knowing that it could versus how it is.
Yeah, but knowing that it could, knowing that P equals NP
or being able to definitively prove that P is not equal to NP,
that will get you a million bucks if you do that tomorrow.
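To put the verify-versus-solve distinction in code, here's a toy Python sketch using subset sum, a classic NP-complete problem. The numbers are invented for illustration: checking a proposed subset takes linear time, while finding one here falls back to trying every subset.

```python
# Toy illustration of verifying versus solving for subset sum.
from itertools import chain, combinations

def verify(numbers, subset, target):
    """Polynomial-time check: is 'subset' drawn from 'numbers' and does it hit the target?"""
    remaining = list(numbers)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)          # handles duplicates correctly
    return sum(subset) == target

def solve(numbers, target):
    """Exponential-time search: try every subset until one works."""
    subsets = chain.from_iterable(combinations(numbers, r) for r in range(len(numbers) + 1))
    return next((s for s in subsets if sum(s) == target), None)

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))              # (4, 5), found only by exhaustive search
print(verify(nums, (4, 5), 9))     # True, checked in linear time
```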
Okay, let me see
if I can answer Joe's question this way.
In a nutshell,
what little I can speak to about
keys here, right,
for encryption keys,
is that what makes that possible
is that you can take two big primes, prime numbers, multiply them by each other, get
some result, and then that can be your public key, right? But the ability to take that number
and figure out what two primes made it up is what makes it difficult, right?
I believe so.
And so, I mean, there's more magic happening there,
but I'm definitely grossly simplifying the whole process,
so I'm probably going to get a lot of emails.
Because there's also the private key that's involved there somehow, right?
Yeah.
But what I'm saying is to get to that process there, right,
the whole way that the public key infrastructure works is that there's this promise of, hey, we can take really large prime numbers and we can very easily do the multiplication and get a result, and that's verifiable, right?
That we can do in polynomial time. To take that result, that product, we can't figure out what two primes created it.
So therefore, there's the problem.
But if what you were saying, Joe, was that if you could like instantly know like, oh, hey, yeah, we can do this real quick.
And there's this formula.
We didn't know it at the time, but now we do.
Then that becomes, now you've made it polynomial.
If you can solve that, right? If you can solve it.
Right. Yeah. Which is what you were saying.
Yeah.
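Here's a deliberately oversimplified Python sketch of the asymmetry being described; real public-key crypto involves much more than this, and the primes below are toy-sized numbers picked just for illustration. Multiplying is instant, while naive trial division has to grind its way up to the smaller factor.

```python
# Grossly simplified sketch: easy direction versus hard direction.
def multiply(p, q):
    # Cheap: polynomial in the number of digits.
    return p * q

def factor(n):
    # Expensive for a naive approach: trial division by odd candidates.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None                      # no odd factor found (we skip 2 for brevity)

p, q = 1_000_003, 1_000_033          # small primes, for illustration only
n = multiply(p, q)                   # instant
print(factor(n))                     # (1000003, 1000033) -- noticeably more work than the
                                     # multiply, and hopeless at real key sizes
```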
Yeah. It just kind of confuses me because I guess if someone proved it tomorrow and if I were relying on encryption to protect my data, then I would know suddenly that my data isn't really protected.
And that somebody at some point, if they're not already doing it, will theoretically be able to come in and do that.
Yeah.
Be able to solve my problems much quicker.
And actually, I saw a good analogy where someone referred to these types of problems as like riddles.
Like if you hear a riddle the first time, it's really hard to solve.
But once you know the answer, it seems kind of obvious.
And so I actually went and looked up a couple riddles from Batman.
Okay.
From the Riddler?
Yep.
Would you like to hear one?
Yes.
Oh, God, would I?
Absolutely.
All right.
We're five little items of an everyday sort.
You'll find us all in a tennis court.
I mean, do I need to start naming tennis stuff?
Five little items of an everyday sort.
You'll find us all in a tennis court.
I got nothing.
I'll tell you, it's vowels.
Every vowel is in tennis court.
That's nice.
And how long would it take you to sit there and work through that?
But once you know the answer, it's like, well, yeah, okay.
I should have looked at the word.
There's no Y though, but whatever.
It's just sometimes.
Yes.
That is not true, Val.
Can I do one more?
You can.
Yes.
This is from the show Batman, the old 60s show.
What weighs six ounces, sits in a tree, and is very dangerous?
My beer.
No.
I'll just tell you.
It's a sparrow with a machine gun.
Yeah, I'm sorry.
This show is terrible.
It's not even funny. What machine gun is six ounces? Yeah, it's terrible, but it's funny that Batman got it right.
It was like, oh, obviously. Yes. All right. Well, bam, pow. Yeah, I ruined the show. I'm sorry.
So I was salty, and then I told bad riddles.
I do want to make the point, though, of stressing here that one of the goals of turning your complex problems, like when we talked about those combinatorial problems, and trying to turn those into a decision problem, is your ability to verify the answer, right? That's one of our goals. So with that, he goes into
this really cool part, which was Karp's 21 NP-complete problems. So he doesn't necessarily go over all of them,
but he does give a high-level
overview of some of them.
No?
Yeah, he does. I was actually trying to look it up.
Oh, okay. Well, I'll go ahead and start us off then.
So, starting with the knapsack. So, this is a
combinatorial optimization
problem. So, you're given a set of items,
each with a weight and a value, and you're trying to determine the number of each item to include
in a collection so that the total weight is less than or equal to a given limit.
And that total value is as large as possible. So when I was thinking about this,
I was like, well, okay, this sounds perfect for a game show. So like you remember those shows like
shop till you drop, right? Like, you know, you're, you're trying to fit as much stuff into your
shopping cart and, uh, in a given amount of time. Right. And you're trying to get as many big ticket items
into that shopping cart as possible. Right. Right. That's, that's what this is. Right. So,
but then like the little devil on my shoulder also thought like, well, that's also the goal
of every robbery or every heist movie, right? You're trying to get as much of something
that is as much value as possible as quickly as you can,
and you have a finite amount of space to put it in,
like your duffel bag or your car
or whatever your robbery mode of means happens to be.
So that's the knapsack problem.
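For anyone who wants to see the 0/1 knapsack in code, here's a small Python sketch using the classic dynamic-programming table. The item weights and values are made up; note this approach is pseudo-polynomial, which is fine for small integer weights but doesn't make the general combinatorial problem go away.

```python
# 0/1 knapsack: maximize total value without exceeding the weight capacity.
def knapsack(items, capacity):
    """items: list of (weight, value); returns the best achievable value."""
    best = [0] * (capacity + 1)                       # best[w] = best value using total weight <= w
    for weight, value in items:
        for w in range(capacity, weight - 1, -1):     # iterate backwards so each item is used at most once
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# (squares, gold): a sword, two rings, and one big expensive item -- invented numbers
loot = [(3, 17), (1, 600), (1, 600), (4, 1000)]
print(knapsack(loot, capacity=5))   # 1600 -- the four-square item plus one ring beats the rings-and-sword combo
```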
The perfect example for me is managing your inventory in a game like Diablo,
where you've got a bunch of squares, right?
And you've got the sword that takes up three squares,
and it's worth $17.83 if you sold it.
And you've got two rings, and each one is worth $600.
So individually, the rings are worth more per space,
but you only have two rings and one sword.
And so if you've got a whole bunch of items items and as you pick up items as you go along,
you're constantly trying to balance your inventories that when you go back to town to sell,
which is a royal pain in the butt, you want to make the most money.
But you can't just look at it per slot because it gets really complicated.
And you can have really big items that are really valuable,
but they prevent you from getting other stuff kind of worked in there.
So it's a really annoying problem.
You just need to write an algorithm for your game.
Yeah.
And, you know, you say when you said started mentioning games, I started thinking about like other games.
I'm like, oh, yeah, I guess this does kind of apply to a lot of games, right?
Like, you know, Minecraft immediately came up as you were describing it.
But then I was like, well, I guess this could be true of any game because like any first person shooter game, right?
You are typically allowed to carry two guns and you want to get as much bang for buck as possible out of those two guns.
So, you know, the limit is going to be your two here. And your value is going to be how powerful they are
or maybe how good you are with them.
I have flashbacks to Skyrim
where I'm like,
oh, I've got 11 arrows
and they take up this slot.
These are useful.
However, this candelabra
is worth 157 gold.
Money is also useful,
but less useful in this dungeon.
So maybe I should drop the candles
or maybe something else entirely.
And then you spend, next thing you know, like all your gaming time just in the stupid menu trying to figure out what to drop.
Yeah.
And then I thought about like, you know, you could flip the definition of like where this container is around maybe.
And you could even apply it like you totally open up a whole world to me when you mentioned gaming.
Because you could think about it in terms of, like, card games like a Pokemon
or a Hearthstone, right? Where, like,
you know, in Hearthstone you have a certain amount of
mana to use, so, you know,
that, you're trying to get as much bang
for buck out of the cards or the,
you know, what you're going to play,
right?
So, out of each one of those mana that you have, right? So if the mana is your container, you're trying to get the most bang out of that buck, you know, the most value, which
would be like which cards you're going to play or which steps you're going to take.
So yeah.
Yep.
Building the deck too.
Like, you know, you get so many cards and you can't look at the cards individually because
one card is not necessarily valuable on its own.
But if it's in a deck with these other two cards that have combinations and suddenly
the value of the card rises. And this is why, going back to when we were talking about NP-complete versus NP-hard, these are common combinatorial optimization problems, right? Like, you're trying to decide what's the optimal combination of things. Well, you can't actually figure that out. Like in a game like Hearthstone, right, you're limited in time. So you can't actually run through every combination to know what's the optimal one. Chess, you can't run through every possible combination to determine, okay, this is the optimal move I should make next to ultimately win, right? Like you have to take some best guesses, like,
I think this is going to be the best thing to go, right?
So you have to reduce those problems down.
Yep.
And so, you know, we even mentioned the scheduling problem.
That's a variation of the knapsack problem where we've got a finite amount of space.
We've got things that have different values.
Like if you can get more students into one class and that's more valuable
than the smaller class with less. So yeah, they're out there, they exist and they're frequently
referred to as knapsack problems. And one thing that's kind of cool too, is that, you know,
if you can recognize these common types of problems and you know, if you're in say a job
interview and someone says, Hey, all right, we see your solution there.
Is there a faster way to do it?
And you recognize like, hey, this is a variation of the knapsack problem.
You know right off the bat like, well, first of all, no, it's not going to be very fast.
I'm going to have to either solve it heuristically or approximately.
So, you know, take your pick.
Go for a greedy solution and feel good about yourself.
Pat on the back.
Yeah, you know, another thing that came to mind as you were talking about that is,
you know, when you mentioned that scheduling too
would be like airlines, right?
Like any kind of trying to schedule,
like who's going to sit where, you know,
trying to get the most amount of people
so that you ship that plane.
Well, that's not really that.
That's probably one of the other ones
that we're going to get to.
So we'll come back to that thought.
All right.
Well, the next one is called cliques.
And this is really easy to kind of visualize when you think about something like a Facebook or a dating website, where you've got graphs and graph theory involved.
And you've got like a social network where the graph's vertices represent the people and the edges are acquaintances.
And the clique represents a subset of these people who all know each other, and you want an algorithm for finding these cliques: people who have things in common, as well as, you know, mutual friends, but not the people they don't like, or maybe their exes or whatever. And things just get kind of crazy when you're dealing with these kinds of big data sets. I mean, those aren't the problems that you're
gonna be able to solve with a couple of naive for loops. In fact, you're probably not going to be able to solve them at all, really, right, in any sort of reasonable amount of time.
So that's where those heuristics come in.
Cool.
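As a tiny illustration of the clique idea, here's a naive Python sketch over an invented friendship graph. At real social-network scale this brute force is exactly the kind of thing that falls over, which is why the heuristics come in.

```python
# Naive clique search in a small, made-up friendship graph.
from itertools import combinations

friends = {
    "ann": {"bob", "cat", "dan"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob", "dan"},
    "dan": {"ann", "cat"},
}

def find_clique(graph, k):
    """Return any group of k people who are all mutually acquainted, or None."""
    for group in combinations(graph, k):
        if all(b in graph[a] for a, b in combinations(group, 2)):
            return group
    return None

print(find_clique(friends, 3))   # ('ann', 'bob', 'cat')
print(find_clique(friends, 4))   # None -- bob and dan aren't acquainted
```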
And the next one we have up is bin packing, which is similar to the knapsack.
And it's, again, another combinatorial optimization problem, meaning there's too many permutations.
But basically, the gist of it is you're going to try and take objects that have different volumes, and they have to be packed together in a finite number of bins.
And the whole goal is to minimize the number of bins you use.
And so there's all kinds of things out there. It actually reminds me of a tip that I had way back when, which was a JavaScript thing that would pack a bunch of divs together so that they would fill up a space as best as possible.
Right.
So I remember that.
Yeah.
And it was really interesting,
but yeah,
this is another one of those.
Just,
it's a very complex problem.
It was a Pinterestification.
Remember, it would like find images in order to pack them to like a nice square.
So there was like a minimum amount of space between them.
Yep.
Yep.
So that's really nice.
This is more like the airplane thing.
I think I was referring to where we've got like a bunch of people that need to go places, but we want to schedule the least number of flights in order to get people there and maximize our value.
So we're not wasting fuel. Yep. Or another way to say that would be that because it was the minimize the number of bins
used is what was kind of throwing me off here. So if you thought of the bin as the seat on the plane,
then you want to send the smallest plane possible with the most amount of people on it
so that you can use your bigger planes for the larger trips.
Yep. And if you've got UPS trucks, how are you going to pack all the
packages on that thing so that you can have one trip out and about? Yeah. I mean, there's all
kinds of interesting uses for this particular problem. Yeah. Basically, any kind of shipping
service is going to... The bin packing problem is going to be something they're interested in. So
UPS, FedEx, Amazon, or even... let's talk containers again, except this time I don't mean Docker.
Right.
Right.
Like maybe you mean shipping containers, uh, you know, overseas.
Right.
Uh, yeah, those would all be like bin packing problems, even like how you get those containers onto the ship.
Right.
Yep.
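For bin packing, one common heuristic is first-fit decreasing: sort the items biggest first and drop each one into the first bin with room. Here's a small Python sketch with made-up package volumes; it's an approximation strategy, not a guaranteed optimal packing.

```python
# First-fit decreasing heuristic for bin packing.
def first_fit_decreasing(sizes, bin_capacity):
    bins = []                                        # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):         # biggest items first
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)                       # fits in an existing bin
                break
        else:                                        # no existing bin had room
            bins.append([size])
    return bins

packages = [4, 8, 1, 4, 2, 1, 7, 3]                  # made-up package volumes
print(first_fit_decreasing(packages, bin_capacity=10))
# [[8, 2], [7, 3], [4, 4, 1, 1]] -- 3 bins (happens to be optimal here, but that's not guaranteed)
```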
Um, so, uh, another one that he had here, which, uh, I don't actually see it as one of the 21,
because this one was listed as NP-hard, but he had it listed here,
which was the traveling salesman, which was, again,
another combinatorial optimization problem, right?
So for anyone who isn't familiar with the traveling salesman problem,
this is if you're given a list of cities and the distances between each pair
of cities,
what is the shortest possible path that visits each city exactly once and
returns to the origin city.
So think any kind of mapping software here,
which really there's only one Google maps.
Clearly. That was like, we had a survey about your favorite mobile app, and Google Maps was clearly the winner. And it was the only mapping service that existed. So if you know
of something else that you're using, you're wrong. I know. I'm just kidding. I'm just kidding. But,
um, think of, outside of Google Maps or any kind of mapping application, if you were to put this into more practical terms: you mentioned political figures earlier,
Alan. So like any kind of election campaign tour, right? Any kind of concert tour, right? You know,
that politician or that musician, they want to start in the city, but they ultimately want to
get back home, right? So they're going to want to try to, you know, if they're doing a US tour, to go from every city that they need to get
to and eventually get back home, right? But again, it's a combinatorial optimization problem,
because, you know, you're not going to go to every one first to figure out like, hey, what was the
optimal way, you're just going to like, uh,
you know, this is where the greediness comes in is that, you know, you might just approximate that, which I'm kind of stepping on something that Jay-Z was going to get to. So I'm not going to say it.
Oh, go ahead. What were you going to get to? Oh, just the next section. So go ahead.
Oh, the, the thing that was interesting was when they talked about the greedy algorithm
that I thought was cool was whatever city you're in, take the shortest path next, right?
Right.
And you just keep going that route.
But that doesn't take into account whether or not that shortest path was backtracking or whether it was going further out or going further west or east or whatever.
So it's an approximation, and it's good most of the time.
But going back to what you said, it's not like you're going to calculate all 80 different routes to find out which one's the best.
Or maybe you do.
Maybe it's worth doing that if it matters, right? Like if you're UPS, it might matter.
Yeah, I mean, so here's the thing. So in regards to approximation, God, I can't speak tonight.
Approximations can sometimes solve these problems, these NP-complete problems or NP-hard problems in polynomial time, right?
So let's go back to our traveling salesman problem.
So if you consider a map routing kind of program or problem,
you might head to the nearest city first, like Alan was suggesting. And then from there,
you might head to the next nearest city, which, like you said, could be maybe going a little too
far south or one direction or another that might not be ideal, but whatever. It's the nearest one,
right? And this approach is called the nearest neighbor.
The nearest neighbor is a greedy algorithm,
meaning that you do what suits your needs
at your current position and value on the graph, right?
Like you're not necessarily looking at the big picture.
You're looking at what solves my need right now.
And the nearest neighbor algorithm
is usually, let's say, within 25%
of the shortest path on average. So to your point, if you are a UPS, for example,
and you're trying to schedule your trucks, maybe that 25% means that you are being 25%
inefficient, or you're only being 75% efficient. So you might
try to solve these traveling salesman problems. You might be focusing on solving these, like that
could be your bread and butter, right? Because you're trying to get more and more efficient.
If you're any kind of shipping service, right? This is your world of pain, traveling salesman
problem. I do feel like though, you know, we talk about the shipping services, UPS and stuff,
but I do feel like if you're a grad coming out of college and you get hired at a FedEx or UPS or something, you're not going to be given that problem on day one. But just understanding, again, going back to the complexities, understanding
that when you look at these problems that, hey, some of these things
are solvable and some of these things really aren't within a realm of reasonability
then I think that's the important part.
Yep. And I scrolled too far.
Yeah, so approximation is great, and it solves these things in P time. We discussed that already.
Yeah.
So I don't know why I'm repeating myself.
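Here's a short Python sketch of that nearest-neighbor greedy approach for the traveling salesman problem. The city names and coordinates are invented, and the resulting tour is only an approximation of the optimal route, which is the whole point.

```python
# Nearest-neighbor greedy tour: always hop to the closest unvisited city, then head home.
import math

cities = {"ATL": (0, 0), "NYC": (7, 8), "CHI": (2, 9), "LAX": (-9, 3), "SEA": (-8, 11)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def nearest_neighbor_tour(start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))   # the greedy step
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                          # return to the origin city
    return tour, sum(dist(a, b) for a, b in zip(tour, tour[1:]))

tour, length = nearest_neighbor_tour("ATL")
print(tour, round(length, 1))   # a reasonable tour, not necessarily the shortest one
```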
Yeah. So I was just going to sum this up by saying that we can typically call the simple boring problems like, you know, just using a for loop or something
like that, you know, we can solve in p time, in polynomial time, right? The more difficult and
more complex problems, like group decisions, you know, those are going to be solvable in exponential
time. Some problems are too complex to be determined in finite time, and we classify those as undecidable.
And anything that can be solved in a finite time is solved in R.
So polynomial and exponential are subsets of R.
And anything between polynomial and exponential, we have subsets of those problems.
So we've only talked about some of them; there are more classifications out there.
But, you know, the more common ones that we're going to talk about are the non-deterministic polynomial, which are exponentially complex problems that can be solved in polynomial time with non-deterministic methods. And then these non-deterministic polynomial problems can
be further broken down into NP-complete, which are decision problems that we can quickly and easily verify, versus NP-hard, which are problems that everything in NP can be reduced to, but the problem itself may not be in NP. And we're often asked to solve, you know, as our day jobs, the NP-complete and NP-hard ones. These are the problems where we spend our days. These are
the problems that we're asked to try to solve. And, you know, whenever you get that first problem,
I mean, just know that like when you even, especially,
especially if you're starting out, you know, as your first developer job, right. And you get that
first problem and you're asked to solve this thing. Right. And like, it's always going to
seem hard. Like the thing that you're being asked to do, that's why you're being asked to do it.
Because if it was easy, you know, they would have just, someone would have just already done it.
Right. It's hard. And that's why you're being asked to do it.
And the first thing that we started to do is to chip away at that to figure out, like, well, how can I break this thing down?
Like when Joe was talking about, like, well, I always just break it down into a set of P problems, right? But the thing here is trying to recognize these classifications.
Try to recognize these problems when you see them as, oh, well, this is an exponential problem.
You've got to recognize you're not going to be able to solve that in real time.
That's not going to happen.
And approach them from there.
Approach them with care once you can recognize them for what they are.
All right.
Great.
So, yeah, that was a great wrap up.
So, I guess on to resources we like.
Of course, the Impostors Handbook.
And don't forget, we've got that discount code for you.
Happy Impostors.
That's Impostors with an E there.
If you forget, just tweet us or something, or DM us, and we'll hook you up.
Also,
it looks like we got a big old cheat sheet here.
What else we got?
We have the Boolean stability problem. We have a link for that.
Satisfiability.
What did I say? Stability.
Satisfiability. Good call.
And then Karp's 21 NP-complete problems.
Also, I do want to point out
because we didn't mention it about this book at the beginning,
what this book is, and I didn't know this.
These guys, Mike and Joe, were like, yo, let's do this book.
And I thought it was going to be about imposter syndrome.
And it is, but it's mostly not.
So it's about people that got into software and their developers,
like Joe said, like in your day to day, you know, you, you create solutions, like you solve problems
and all that kind of stuff. But the real gist of this book was, Hey, for all you that don't have a
classical computer science background education,
this is the stuff that will help bring you up to speed with some of that. So that when somebody is talking about semaphores or these NP-complete problems,
you don't feel like, oh, man, I need to go Google something real quick
because you're just in the dark, right?
And so this is like a really good guide or thing, supplement.
Here's the way I would phrase it. I love this book because it gives you enough to whet your appetite on a particular topic. So it's like, here's a range of topics that you should know
something about. And you don't necessarily need to have a doctorate in any one of these topics, but you should
at least understand some basics and have some kind of a fundamental foundation on each one
of those.
And if you are, you know, if it does whet your appetite to where you really want to
do a deep dive into any one of these particular things, he has resources that you can go to look at for that purpose.
Yeah, he's got links all in it.
But it stays – it's a super awesome book.
It's not a hard read.
No.
Well, it's not a difficult read, but it will wear your mind down on some of them.
I mean, heck, we were talking through this stuff,
right?
And,
and some of it gets confusing.
It's deep,
but I'm not saying it's not deep.
Yeah,
it's deep,
but it is written extremely well.
But I did,
I wanted to point out that it's not like,
you know,
Hey,
you feel like an imposter.
No,
this is literally something to help augment and supplement your knowledge so
that you can,
you,
you can just take your career further,
right? You can, you can just take your career further, right?
You can approach things in a scientific way.
And again, this book can be found at bigmachine.io.
Yep.
Yeah, they've got chapters like Big O, Lambda Calculus, Functional Programming, like all sorts of stuff that you probably care about.
So even if you are familiar with this stuff, it's nice to have a refresher.
Yep, totally.
All right, so sorry for the derailment.
So now it's that time of the show.
My favorite part of the show is the tip of the week.
And so with that being as how I pulled a comment out of Episode 74 that we could discuss in the news,
I'm also going to steal that as my tip of the week.
So if you've never heard of these things called temporal tables in SQL Server,
they were introduced in SQL Server 2016, although Michael found a link on Microsoft's site that said 2008.
I'm about 99.999% sure it wasn't there. Yeah, I'm pretty sure we need to find the
GitHub page for that, you know, the GitHub project for that particular page so we can
submit a pull request to update their documentation and be like, nope, not 2008. That's right. That's actually a
really cool thing is Microsoft will allow you to pull requests through documentation. That's
amazing. So temporal tables in SQL Server 2016, if you've ever had to deal with audit tables in
the past, you know, it's kind of a bit of a pain in the butt. Typically you end up writing, you create a table that looks exactly like the table that you want to audit. Typically
you have some sort of triggers that say, Hey, on update on delete on insert, whatever, insert
another record into this other table. And you manage that stuff all manually with temporal
tables. SQL server now has it built in to where you can turn this on for a table
and it does all the auditing for you. And there's even some special goodness that comes along with
it. Uh, somebody was telling me the other day that like you can have views that hook up and
you say, Hey, give me where the other versions are and it'll, it'll do it for you. So, um,
there's some really cool stuff that happens there. So if you're working in SQL Server, you got 2016, go check that out.
All right.
And for my tip, I wanted to mention the Hacker Daily podcast.
If you're listening to this episode, then you probably listen to other podcasts and you should check this one out.
And kind of the deal with it is it's a daily-ish podcast where the two hosts will kind of scan Hacker News for you
and kind of write up some of the most poignant stories for the day.
And they'll actually bring in comments and stuff like the best of the comments
that kind of illustrate various points about those articles.
So if you're like me and are an avid reader of Hacker News
and sometimes miss out, either way, it's fun for me to listen to the show.
And if I'm familiar with a story, it's always kind of nice to hear their perspective on it.
And if I missed a day, then it's nice to kind of catch up.
So I definitely recommend checking that out at hackerdaily.co
or just search Hacker Daily on your podcast app of choice.
So I got a couple quick ones.
One of them, which I super love this one.
I found this, it just kind of like came along my feed.
And, you know, that's the beauty of, what's that?
It's not Google Now.
Yeah, Google Now.
Yeah, the Google Now.
You know, where like things just pop up into your feed.
You know, they're just interesting.
And you're like, oh, yeah, of course.
Thank you, Google.
Yes, I did want to read that.
How did you know I needed that in my life?
And so there was this
great story there that was debug CSS. And literally that's the title of it. And the subtitle is
not clickbait. And the guy gives this really great little utility out there.
So if you ever wanted to see where your containers are fitting, what's going where, what's overlapping where, where the spaces are.
Basically, he breaks down this process that he goes through where he was trying to do this manually.
And he would set some overlay styles to put borders around different, all the spans and the divs and et cetera.
And then eventually it evolves into this GitHub project where you can add this as a bookmark to your bookmark bar in like a Chrome or Firefox.
And you can go to any page and click it and it'll show you, you can see where everything's broken out. Like,
you know, draw boxes around everything and you can see, oh, I see where this problem is now
because this thing is, you know, I didn't realize it was spanning that tall or that wide or,
you know, it's overlaying where I didn't think it was going to, right? It is awesome to use. Now, uh, if you do any kind of web development,
uh, and especially like if you're trying to, to, you know, if you ever get stuck in like weird
CSS kind of issues, like why is this thing not, uh, you know, the size that I thought it should
be. Right. Um, so that one was cool. speaking of web development we were talking about um
in the past a couple episodes i think there was talk about like freezing the state of something
so that you could see like you know an element on hover and everything and you know i don't know
that we ever talked about this and maybe we have, um, I feel like maybe Alan had mentioned this in the past, but if we hadn't, I wanted to
bring it up, which is like, you know, you can just right click on any call and say, replay X HR to
replay that call. Right. So if we hadn't already mentioned it, I definitely thought that in the
same kind of similar vein of everything else that we talked about recently with Chrome DevTools, that that should be in your playbook
of tools that, you know, if you want to, you don't necessarily have to reload your site.
If you're testing something on this, if you're like making changes to your server side code,
you don't necessarily need to refresh your client to see if that change took effect or if that got you the result you
wanted, you could just right click in the Chrome network tab on the specific API call and click on
replay XHR and it'll redo that call for you. So it's a really nice way to see the result. And then lastly, it was Build this week for Microsoft. And they just this week came out with this new GitHub project, .NET machine learning. And I am so excited, I cannot wait to actually spend some time to dig into this. You can go to dot.net slash ML, and it's the whole machine learning
site for Microsoft. ML.net is an open source cross platform machine learning framework.
So now, historically, Python and R have been the language of choice, the de facto languages for machine learning. And now there is a serious
commitment to be able to do this in C sharp. So in the GitHub project, we'll have a link to the
GitHub project, but they show you how to use it, how you can build this project. There's rules
about how to contribute to it and whatnot, how to get this thing installed into your.NET Core projects.
And you can see the roadmap too of where they plan to take this thing, like what types of
machine learning algorithms are already available and what their current plan is,
what's on the near term versus long term. Super cool. I'm super excited about this project. Can't wait for this to become more fully featured.
Exciting times. Very cool. You know, I hooked up my source code recently in Chrome. And one of the
freakiest things that happened to me is I didn't realize that changing the files like in my editor
would actually affect Chrome live. But one thing I did is, I was clicking around and thought, oh crap, let me set a debugger statement in this; whatever, next time I refresh, you know, I'll see it. And I was clicking around and it hit the debugger. I had added it after the page was loaded, and Chrome saw the change on disk and went ahead and updated, and the next time that particular function executed, it picked up my debugger statement. It was the craziest thing.
I don't know how they do it. It's magic.
But it's really convenient. So, go
Chrome. That's awesome.
Yeah, some of the stuff that they
have done in that is
nothing short of magic. It's really
you got to applaud them
on the things that they have done in Chrome
and Chrome DevTools.
Remember what it used to be like, right?
Alert, alert, alert.
Yeah, that used to be the JavaScript debug way.
Not anymore.
Yep, thank you.
And that's it for the show.
So we're talking about tackling imposter syndrome
by kind of taking a hard look at the things that we don't know
and kind of filling in the gaps based on the Imposter's Handbook. And this episode we focused on computational complexity. And don't forget about that discount code, HAPPYIMPOSTERS, and just tweet us or something if you don't remember.
Yep. Yeah, and so with that, subscribe to us on iTunes, Stitcher, and more using your favorite podcast app, and be sure to leave us a review. Like Alan said, we love to read your
reviews. It really does mean a lot to us. And we really do greatly appreciate you taking the time
out of your busy day to leave us those reviews. So head to www.codingblocks.net slash review.
Yep. And while you're up there, check out all our extensive show notes, examples,
discussions, and more. And before you take over, we always forget about this.
If you want stickers, you know, we did, we have the self-addressed stamped envelope,
send it to us.
We have that on our slash swag page.
Yeah.
And, um, let us know what you think about the episode.
You can send your feedback questions or rants to the Slack channel, or you can drop a comment,
uh, especially if you've got
something insightful.
Like,
I'm definitely still struggling
with some of these concepts,
so I'd love to hear
your kind of explanations
and counterpoints.
And make sure to follow us
on Twitter, too,
at Coding Blocks.
You can go over to the website
and we've got a bunch of links
you can find
to find us anywhere.
Oh, yeah.
Oldest computer.
No, the Abacus.
1800.
Oh.
What was the oldest computer is the question.
Like computer as we know it computer?
117 years old.
Nah, he's making faces.
Well, if the computer is a person, then it's like, I don't know, 600
BC? Yeah, 2200
years ago.
What?
I hit it on the head, didn't I?
Wait, if the oldest computer is
a person, you're really going back 2200
years? Dude,
yeah, BC didn't mean there
were no people. It was before Christ.
That doesn't mean they're still alive.
But there were people before 2,200 years ago.
I don't know.
That's what I'm saying.
I'm just saying 2,200 years.
I'm good.
That's my number.
I'm sticking to it.
You're both wrong.
Okay.
The oldest computer can be traced back to Adam and Eve.
According to you.
It was an apple.
Extremely little memory.
Oh, why did you do that?
Everything crashed.
Oh, we are recording, aren't we?
Yes, we are.
That's amazing.
You're listening to Coding Blocks, episode 80.
Nope, you're not.
I don't know why I can't read.
It clearly did not say 80.
That's awesome. But I was convinced.
It's 80.