Coding Blocks - Graph Algorithms
Episode Date: July 16, 2018
We continue digging into Rob Conery's The Imposter's Handbook as Joe explains Florida time, Allen likes greedy algorithms, and Michael shares his geography knowledge.
Transcript
You're listening to Coding Blocks, episode 85.
Subscribe to us and leave us a review on iTunes, Stitcher, and more using your favorite podcast
app.
And check us out at codingblocks.net where you can find show notes, examples, discussion,
and a lot of other stuff.
Send your feedback, questions, and rants to comments at codingblocks.net, follow us on
Twitter at Coding Blocks, or head to www.codingblocks.net and find all our social links there at the
top of the page.
With that, I'm Allen Underwood. I'm Joe Zack. And I'm Michael Outlaw. This episode is sponsored by airbrake.io. When your website experiences an error, Airbrake alerts you in real time and gives
you all the details you need to fix the bug fast. And while Airbrake is great for monitoring web
apps, it's not just for that. I recently
hooked up Airbrake monitoring to a batch processing job that I use for QIT. And that
process looks at like 100 or so podcast feeds and aggregates them. And so what I'm doing is I'm
sending custom data along with each time I attempt to parse a feed. And so when one of those feeds
fails, like if the website's down or somebody has a malformed feed, I know exactly which one had the problem. And I can easily see what the
error message was and how often it happened and when things went wrong. And then I can go yell
at the podcast. Right now, CodingBlocks listeners can try Airbrake free for 30 days, plus get 50%
off the first three months on the startup plan. To get started, visit airbrake.io slash codingblocks.
That's airbrake.io slash codingblocks.
So today we're talking about a couple of dynamic algorithms that represent a certain class of problems that are really important and often overlooked in computer science.
But first, we're going to start off with a little bit of news here.
So, Outlaw, you want to tell us about those iTunes reviews?
Yep.
So, not really sure if this is supposed to be Friedelman or Friedelman.
So, either way, I hope that that works.
Just an employee, Orbiter, Monkey, Road Rage, Bud Lee, Ahmed and App, Neononymous, Jones's Dad, and Matt.
Matt.
Got wiki.
There you go. Yes, wiki.
Yep, and over on Stitcher we got
Troy and Abed in the Modem,
which I hope you get out of there.
And Rodolfo, so thank you very much.
We really appreciate those reviews as always.
Even if we butchered your name, we're really grateful,
and we read all of those, so thank you very much.
Even though I, especially, often ruin the names, and I apologize.
Yep. And just one quick tidbit
here. I'm going to be heading out to St. Pete,
Florida. So if you're in the area,
August 7th, I'm going to
be out there. So come up and
say hello, kick me in the shin or something.
It's going to be awesome. I love that. We need to get Joe a Coding Blocks t-shirt that on the back says, kick me. Yes, in the shin. In the shin. Specifically in the shin.
All right. So today we've got, as we mentioned, our topic. And one of the keywords in the intro there was dynamic programming.
So somebody want to fill us in on what dynamic programming actually is?
Yeah, sure.
I don't want to go too deep because the definition is pretty specific.
And, you know, we don't do very well with very specific things, if you've listened to the show for very long.
But just kind of at a high level, it's an optimization method for algorithms developed in the 1950s by, drum roll, Richard Bellman. We're going to be saying that name a lot coming up, because we're going to be talking about the Bellman-Ford algorithm.
And the idea there is that you simplify a problem by breaking it down into sub problems.
And then you iterate over the sub problems and you kind of gather up those sub-solutions and take the best result.
And it works really nice with memoization, which is basically the act of storing calculations
so you don't have to redo them.
So it's basically just a certain way of kind of solving certain types of problems that
come up pretty often.
And I mean, this kind of stuff doesn't really come up too often in the day job, but I wish it would, because it's kind of fun.
Well, maybe these problems, not dynamic programming itself, but the other graph algorithms we're going to be talking about. I was thinking, depending on where you work, they might come up, right?
Yeah, definitely. It depends on the type of work that you do, for sure.
So yeah, one of the key parts of both of these algorithms we're going to discuss tonight is the weighted directed graph.
And we'll have a link in the show notes to Wikipedia that you can actually go look at some pictures and see these things.
But we're going to try and describe this in maybe a kind of fun way.
So, in a weighted directed graph, I'm going to try and break down these pieces for you.
Though I do have to mention that dynamic programming isn't just for graphs. It just so happens that the only two problems we're talking about tonight deal with directed graphs, but you can do all sorts of other stuff with it.
Definitely.
All right.
So in a weighted directed graph, you have a starting point, also called a vertex or a node, and a number of other points, also known as vertices, that are all connected with directional arrows. That's where the directed part of this graph comes from. And the fact that they're connected is what makes this a graph in the first place. So if you think of a bunch of little circles with arrows going back and forth between them, that is essentially a directed graph.
Now, the directional arrows between the vertices can be one-way or two-way, right? Like, you could have an arrow going from node A to node B, or from node B to node A, or both ways. Additionally, any vertex can point to multiple other vertices as well, right? So you're not just pointing to the node next to you. You could go to other nodes in the graph, basically anywhere you want.
And to pile on here, usually those arrows have a number associated with them, and that is the weight. That's where the weighted in weighted directed graph comes from, as that weight could represent anything from time to distance to whatever your use case may be. So that is the definition of a weighted directed graph. And those weights don't have to be symmetric, right? One direction can be one number, and coming the reverse way can be something totally different.
Yep.
Yep.
And we'll get into a little example of that here in a second.
And not only can they be different, they can also be negative in some uses and positive in others.
Right.
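To give a rough picture of what that might look like in code, here's one common way to represent a weighted directed graph, sketched in TypeScript. The nodes and weights here are made up purely for illustration:

```typescript
// Adjacency structure: each node maps to the nodes it points at, with one
// weight per arrow. Note it's asymmetric: A -> B costs 10, B -> A costs 20,
// and C has arrows coming in but none going out.
type Graph = Map<string, Map<string, number>>;

const graph: Graph = new Map([
  ["A", new Map([["B", 10], ["C", 7]])],
  ["B", new Map([["A", 20], ["C", 5]])],
  ["C", new Map()],
]);
```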
So, and we'll talk about the algorithms and what they support in that regard here in a minute. To try and paint a picture of this, I thought maybe we'd use a little story, as opposed to trying to use letters and numbers, because I think we lost some people in the last episode.
Sorry about that.
Yeah, sorry about that.
What? No way. It totally made sense.
Right. 1, 2, 3, 8, 6, 9.
So think about this.
Think of a neighborhood. So we're going to have this neighborhood, and me, Joe, and Michael are going to live in this neighborhood. But in this neighborhood,
we don't have great roads, right? Like there's some really awesome roads and there's some things
that really suck. And so we've got some really hilly terrain, some forests and all that kind
of thing, right? So we're going to make the source house my house. So Allen's house is going to be the source house, and I can get to Mike's and Joe's houses. So I'll have lines pointing out from me to their houses, right? Arrows going that way. But they can't get to me. So I can go slide down the hill to Mike's house, but it's a very muddy hill, so he can't get back out to mine, right? And there's some sort of other crazy obstacle in the way of Joe's house.
But between their houses –
It's uphill both ways.
That's right.
Through the snow.
But both Mike and Joe can get to and from each other's house.
All right.
Now, here's the interesting thing, right?
We could say, and maybe this is a way to paint it so that it would make sense for people: why would the arrow going from Mike's house to Joe's house not be the same value going back, right? Let's say that Mike lives up on a huge hill, right? And so for him to get down to Joe's house takes 10 minutes, but for Joe to get up to Mike's house takes 20, right? Think of any use case that you could possibly come up with, but it's that type of thing.
So really now what you've got is I can go to either one of their houses and
they can go in between their houses.
But now really where these algorithms come into play is what is the shortest
distance to get to their houses or what's the shortest amount of time? I
guess in this case, I should say, would it take for me to get to Mike's house and the shortest
amount of time it would take for me to get to Joe's house? Assuming that time is the weight.
Assuming, yes. Assuming that the weight on the graph is time. Yep. So Allen wants to know, on podcast recording night, what's the latest he could leave, based on what house he's going to, in order to get there exactly 45 minutes late. That's right. Is that fashionably late? Nowadays, that's just normal. Yeah, that's Florida time.
All right. So now, this is where the two algorithms we're going to discuss come into play. The first one we're going to go over is Bellman-Ford.
And who wants to kick us off with that one?
Either way, I guess I started talking.
So getting a little meta there.
Sorry about that.
So the Bellman-Ford algorithm is an algorithm devised by two people. One was Richard Bellman, whom we mentioned earlier, the one who came up with this whole notion of dynamic programming; the other was Lester Ford. But the idea is that it uses a mathematical approach called a
relaxation method, which sounds kind of funny, but all it really means is that we keep looking
for better solutions. And wherever we see a solution that's better than the one that we've
already got, we can replace it. So in our example there, the neighborhood, if we were starting from a blank slate, we would say that since we know nothing, it takes an infinite amount of time to get from Allen's house to Michael's house, and from Allen's house to Joe's house. But as we go along and figure those routes out, we're going to update that. And the first time we see any route, we're going to say, oh, you know what, 12 minutes is better than infinite, so we're going to update basically a table of values here. And that updating is what's called relaxation.
Which doesn't sound very relaxing to me.
I don't know. But I guess, do you want me to go ahead and describe the algorithm? Or Mike, do you want to jump in? Either one of you.
It doesn't matter? Yeah, sure. So we'll start at the home node. So, like Allen said, we will start at his house and try to go from there, and get the weights for each of the connected nodes and keep track of those. So we started as an unknown, so the weights from Allen's house to my house or to Joe's house were infinite. But then we realize that there is a weight. So let's say that it was, like, 20 minutes to get to my house and 30 minutes to get to Joe's house. You know, if Allen went directly to my house, it's 20 minutes. If Allen went directly to Joe's house, it's 30 minutes.
And you might think that would be the most efficient route, but that's not necessarily the case. Maybe you can take the lazy river winding down, whatever, stop through Will's house or something else, and find some better way. But we just don't know that yet, because we haven't traversed through all the potential nodes yet.
Yep. So we'll go through to the next node on the list and get the weights for that. So in that case, let's say we go to my house, and my house only connects to one other house, and that's Joe's. And it's five minutes to get from my house to Joe's house. So we now know that the cost from my house to Joe's house is five. But when we previously tracked the cost from Allen's house to Joe's house, we said that was 30 minutes. However, taking the winding river approach that Joe mentioned, if we go through my house instead, it's 20 minutes to get from Allen's house to my house and five minutes from my house to Joe's. Therefore, it's 25 minutes for Allen to go through my house to get to Joe's.
Yep. And the key part here is, basically, if you visualize this thing, you've got a table with all the nodes in it, right? And then the shortest weight to each of those. So you're going to have Allen as a node, and mine's going to be zero, because it takes me no time to get from my house to my house. And then Michael's would be a node, and it started at infinity. Then it went to 20, and it's going to stay at 20. And then Joe's started at 30, because that was straight from my house. But now, because we can go through Mike's house, we can get there in 25. And so you just constantly update this table as you go around these nodes, recalculating these values until you get down to the shortest possible route from the source node. And that's the important part, right? Every measurement is the shortest from the beginning node to all the other nodes.
And this is where that relaxation comes in. Because the cost to get to Joe's house went through three different values. It started at infinity, because it was a complete unknown. Then we relaxed that to 30 minutes. And then, after going through my house, we realized we could relax it again, to 25 minutes.
Yep.
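That table update, the relaxation step, is the whole heart of it. In code, it's just a comparison and an assignment. A tiny sketch in TypeScript, using the episode's made-up numbers:

```typescript
// Best known cost from the source (Allen's house) to each node so far.
const dist: Record<string, number> = { Allen: 0, Michael: 20, Joe: 30 };
const michaelToJoe = 5; // weight of the Michael -> Joe edge

// Relax the Michael -> Joe edge: is going through Michael's house cheaper?
if (dist.Michael + michaelToJoe < dist.Joe) {
  dist.Joe = dist.Michael + michaelToJoe; // 20 + 5 = 25 beats the direct 30
}
```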
And in a nutshell, that's it, right? We did a very small set of nodes here, so that we can describe it and not confuse everybody. But essentially, with the Bellman-Ford algorithm, you keep going around the list of nodes until you've basically gotten down to the lowest values. Like, if nothing changes on any particular pass, then you know you're pretty much done at that point, right?
I see you going, eh. So I don't know if you can say that. Some particular versions of the algorithm may bail out early if they see that things have stabilized. But the actual trick here is that, at the top level, we know that a shortest path is going to have a maximum number of edges, or connections between the nodes, of the number of nodes minus one. Because if you have two nodes, there's only one line in between. And so it kind of exploits that. So another way to say this algorithm is, we start at a high level and say, from i equals zero to the node count minus one, let's just do this thing. So if we've got five nodes, we're going to loop four times. And then, in each of those sub-iterations, we're going to look at each node, and we're going to compare its neighbors and say, hey, is the route from the starting point through me to my neighbor faster than the time I've got recorded for that neighbor in this table? If it is, then I'm going to record it.
And that's where that kind of dynamic programming piece comes in. Because each node is doing this smaller problem, where it says: is my time to my neighbor, plus the start node's time to me, if we add that up together, less than what I've got stored? If so, I'm going to memoize that result and store it in the table, and kind of forget about the numbers that made it up. I'm just going to stick in that final number there. And then you redo that cycle enough times.
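Putting that loop structure together, a bare-bones Bellman-Ford could look something like the following TypeScript. This is a sketch of the shape being described, not a tuned implementation, and it iterates an edge list rather than node neighbors, which works out to the same relaxations:

```typescript
interface Edge { from: string; to: string; weight: number; }

// Returns the cheapest known cost from `source` to every node.
function bellmanFord(nodes: string[], edges: Edge[], source: string) {
  const dist = new Map<string, number>();
  for (const n of nodes) dist.set(n, Infinity); // everything starts unknown
  dist.set(source, 0); // zero cost from the source to itself

  // Outer loop: node count minus one passes, as described above.
  for (let i = 0; i < nodes.length - 1; i++) {
    // Inner loops: try to relax every edge on every pass.
    for (const { from, to, weight } of edges) {
      const candidate = dist.get(from)! + weight;
      if (candidate < dist.get(to)!) dist.set(to, candidate);
    }
  }
  return dist;
}

// The neighborhood: Allen -> Michael 20, Allen -> Joe 30, Michael -> Joe 5.
const result = bellmanFord(
  ["Allen", "Michael", "Joe"],
  [
    { from: "Allen", to: "Michael", weight: 20 },
    { from: "Allen", to: "Joe", weight: 30 },
    { from: "Michael", to: "Joe", weight: 5 },
  ],
  "Allen"
);
console.log(result); // Allen: 0, Michael: 20, Joe: 25
```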
By the way, that's a very important piece of this. I think some people might be like, well, wait a second, how do I get there? This algorithm does not store the routes to get there. This algorithm is all about finding the smallest weight to get from the starting point to each node. So even though you might have had to go through Mike's house and Will's house and John's house to get to Joe's house, that doesn't matter. The only thing that this algorithm cares about is the end result of what that value is at Joe's house.
Yeah, now if you really want that route, you can keep track of it and stuff. It's just that the algorithm is going to get a little bit hairier. But the main point of this algorithm is that you're going to end up with a table that has all the most efficient times from the start node to all other nodes.
And it's kind of counterintuitive to think that you're going to keep iterating over these same nodes over and over and over again. But that's kind of the trick there. Because we're subdividing the problem, we need to loop over those nodes, trying from basically each one to all of its neighbors, that many times. So you're going to hit that Allen node, you know, if we've got three places in our neighborhood, we're going to hit that thing twice.
Yeah. If you have 10 nodes, then you're going to loop through all 10 nodes nine times. If you have 20 nodes, you're going to loop through all 20 nodes 19 times. So it's always the number of nodes minus one. So it's a fairly expensive algorithm in order to get that.
Yep, for sure.
So it's an outer loop through the node count, literally a for loop between two numbers. Inside there, you're going to hit every single node. And then, inside of that, you're going to look at each node's neighbors.
Yeah. So we're kind of hinting around the complexity, but let's dig into it, though. So, real quick, I want to say that in the last episode, we were talking about the notation as it related to previous algorithms, around sorting and whatnot. Things like, how would you say it, absolute value of V and absolute value of E, and things like that. And it was pointed out, like, oh, hey, no, we're just dumb. Zoily left us a comment on that episode and said, no, no, no, you're just being stupid there. That notation represents the set. So it's the set of vertices and the set of edges, not the absolute values of them. And I was like, oh, yeah, it's been a while since I've had a math class, I guess.
So with that, now we can read these, right? And we can say that the worst case time complexity for this Bellman-Ford algorithm is going to be the set of vertices times the set of edges. So the number of vertices times the number of edges is your worst case time complexity, O(|V| times |E|). And I don't think I mentioned it earlier: the edges are the arrows, the lines connecting the nodes. So it's the number of total nodes you have times the number of lines that you have. Yeah. Or in our house thing, it would be like the roads connecting the houses.
Yeah.
Yep.
Yeah.
That's interesting.
This is the first algorithm that I think we've talked about that split the big O up this way. Maybe eventually we'll do an episode on big O where we dive into this a little bit more. I just think it's kind of interesting that it actually breaks the input space up into the vertices and the edges. Normally, like for the sorting algorithms, we just talked about the size of the array, and we didn't really look at it any other way. So I just thought it was kind of interesting to take that input and actually specify it that way mathematically.
Yeah.
Well, I mean, but it makes sense though, right?
Because you have two inputs coming in, right?
And they're a factor of each other.
So, you know, it matters.
So here, the best case, you know, as Joe mentioned, there are alternative versions of this algorithm. So the best case time complexity is going to be just the number of edges. That's if there's basically one arrow in between each node, right?
Yeah. Well, I'm also imagining, between best case and worst case, as Joe mentioned, some implementations of the algorithm might bail out early, right? So there's this range, right? Your worst case is the set of vertices times the set of edges, and your best case is just the set of edges.
Yeah, I think that the best case there doesn't
deal with the input. It's just
a matter of how it's actually implemented.
So whether you pass something with a lot
of edges or a few edges to it, it should still be
the number of edges for this particular
implementation.
Yeah. And I didn't look at the different implementations for this one. I mainly looked at Dijkstra's, but I know that things got pretty weird. Like, there are data structures that I've never heard of, like a Fibonacci heap. That was a new one for me. Sounds cool, though. I didn't look at it. Yep.
And so this works well for simple graphs, but it's not the most efficient algorithm for complex or dense graphs. Because, you know, as that number of vertices and number of edges grow, the normal versions of the algorithm just perform not so well. But we didn't really give an example of negative cycles. And the example of time doesn't really make sense to think about there, like, unless I cross a time zone.
But wait, did we talk about the space complexity yet?
I think I skipped it.
No, we didn't. Okay. So just to circle back to that before we get to the negative cycles: the worst case space complexity for this is going to be the number of vertices. I found that kind of curious. They're opposites, right? Best case, the time complexity is the number of edges, but worst case, the space complexity is the number of vertices. So this goes back to that memoization table that Joe was talking about, right? You're keeping track of that. So your worst case space complexity is the size of that table, which is going to be one entry per vertex.
Yep. Worst case. That's really good. Yeah. And that seems to be the best case also, right? There's really no change, because you have to keep track of it for all of them. So that's your best case, your worst case, your standard case. It's just your case. Yeah. How would you say it? O sub number of vertices.
Now, there was a statement here, too, though. It was like, if each vertex has at least one outgoing edge, then you can approximate the complexity to just be O of N squared.
Yeah, I guess I'm okay with that. We've got that kind of double for loop sitting up there at the top. And then, if your graph doesn't have a lot of neighbors per node, we don't want to deal with getting things complicated. We just want to define this in terms of N. And N squared is pretty good. Yeah. Yep.
All right. So now, on to negative cycles, which doesn't really work too well with the time example, unless you think about going to someone's... We go to Will's house for the Super Bowl, or whatever it's called. Well, no, no, we can still use this. Ball, you said? No, no, that's what he said. Oh, yeah, it's a baseball thing, whatever it is. Yeah, it's the Major League Super Ball. So no, it could still work, because Will has a time machine. Oh, yeah. Yes. So he could have a negative cost. It could still be a positive cost from Allen's house to Will's house, but a negative cost from Will's house to my house. Okay. And Bellman-Ford deals with that.
It's okay with that.
The way it kind of adds things up, if you've got a negative edge in there, it's just going to factor in. It's just going to work. What you can't have, though, is a negative cycle, which is basically a loop you can keep going around where the total cost keeps dropping.
Yeah. If you basically had a triangle of nodes that each had arrows pointing to each other with negative values, you could just continually go around and subtract values. And so you're always going to be approaching negative infinity at that point.
Basically, the point is that the total cost around any loop needs to be at least zero. You get a negative cycle once a loop's weights sum to less than zero, because with Bellman-Ford, as you iterate back around those nodes again, you'll hit that negative cycle again and you'll just keep reducing it, which is what Allen was talking about. You're just working towards negative infinity.
Yeah.
And so that's the negative cycle, right? If you get into a loop where you can just keep reducing the cost, then you'll never be able to solve it, right? But this is both a pro and a con for the Bellman-Ford algorithm. Because, on the one hand, since the negative cycle exists, you can find it, right? You can detect it and just abort early. So if you're looking for that, then once you find that there's a negative cycle, you can report it and you're done, right? So there's your pro. But it's a con, though, because the existence of this negative cycle means that, for that particular graph, you are unable to find the correct answers for what the shortest paths are using this algorithm.
Yep.
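For what it's worth, the detection trick falls straight out of the algorithm: after the node-count-minus-one passes, every distance should be settled, so you make one extra pass over the edges, and if anything still relaxes, there's a negative cycle. A small sketch, reusing the Edge shape from the earlier example:

```typescript
// Run after the main Bellman-Ford loops have finished.
function hasNegativeCycle(edges: Edge[], dist: Map<string, number>): boolean {
  for (const { from, to, weight } of edges) {
    // If any edge can STILL improve a distance, no finite answer exists.
    if (dist.get(from)! + weight < dist.get(to)!) return true;
  }
  return false;
}
```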
But it is worth pointing out that Bellman-Ford does handle negative edges, which is something we'll find out about Dijkstra's in a minute, that it doesn't handle.
I kind of don't care though.
So in all the examples I saw, it was basically dealing with numbers from 1 to 10.
So what I figured is you could just add 100.
And as long as that negative number wasn't more than 100, then you're always positive.
So no biggie.
Well, what if it costs you money? What if you get paid? What if Allen were to get paid to go from his house to my house, right? Like, he would make money to go in that direction. But for him to then go from my house to your house, it would cost him money, right? So depending on what your weight represents, you might care to have a negative number. It might mean something to you.
Right. Like, we want to know if we're going to run this business into the ground from the first destination, right? Well, I mean, we're there.
Hey, do you expect to see Dijkstra's algorithm with an N plus 100 in there, starting off? If there's any negative values, I just add 100 to all the edge weights and I call Dijkstra's. I've got it. Add 100. Just multiply everything by something until it's greater than zero. That's right. Exactly. It'll be fine.
Awesome.
Heuristic.
So with that, as we do every time, thank you to all who have left us reviews. We read them all. We very much appreciate them. If you haven't, and you've thought about it many times while you're driving, you know, just a reminder, we make it easy for you. If you want to, head to www.codingblocks.net/review, and we have links there to iTunes, Stitcher, and some other places. So, you know, if you would like to give back to us and put a smile on our faces, please do go up there and leave us a review. Greatly appreciate it.
Hey, and you know what?
We've said we ask about leaving a review, but you know another way that would really be beneficial?
Tell a friend.
Oh, totally.
I don't know that we ask for that that much, but that would be amazing.
Tell a friend.
Spread the word.
And you got that friend that needs the show, right? It's like, yeah, man, I saw that pull request. Here's this podcast. That's right. My take on that is, you're going to make your life better as a fellow programmer if you get them listening, hopefully. Yeah, see, on that weighted graph, the first one's free. That's right. Y'all thought we were talking about time? No, we're talking about money. We're starting a little business here. Making it rain.
All right. So, all right, well, with that, let's head into my favorite portion of the show. Survey says. All right. So last episode, we asked: what is your most productive season? And your choices were: spring, it's the best way to avoid the pollen; or summer, let me stay behind my keyboard in the nice air-conditioned room; or fall, the sky or leaves are falling, safer to stay inside; or winter, Olaf scares me, I'm not going out there; or seasons, we don't need seasons here. All right. So Joe cheated. I'm letting Allen go first.
All right. So because he cheated, I'm going to have to figure this out.
First off, the pollen thing is definitely a Georgia thing.
So I'm going to say that nobody chooses that.
No, there's pollen elsewhere.
Honestly, I think it's the winter one just because –
It's the Olaf.
Yeah, it's – Olaf scares everybody.
And unless you have a snowboard,
you're not going out there.
Right.
And the kids are all in school and all that kind of stuff.
So I'm going to say, because summer's got too many distractions, let's go with winter. And I'm going to go 37%.
37.
All right.
And I saw a screen, but it's not my fault. It was a screen share.
I got to teach you a lesson here.
It ain't my fault. Screen share on the wrong monitor, or not realizing that you're still screen sharing when you flip over to something. You know what I'm saying?
Unfortunately, I don't really remember what it was, so I'm just going to say seasons, we don't need seasons here. And I'm going to go with a good 26.
26. So winter for 37, or we don't have seasons for 26. I want to live there. All right, well, by Price is Right rules, you both lose. Oh, really? But we do have a winner who picked the number one choice.
It's me.
And it's Allen.
Oh, man.
Golly.
People don't like Olaf.
He's scary.
Yeah, but what was the percentage?
It can't have been that much lower.
34.
Oh, man.
You didn't miss it by much.
You were both in the neighborhood.
I mean, like, Joe's pick, seasons, was the second, and that was 23%.
Oh, man, these people.
Which I want to know where those people live.
Right.
They have no Seasons?
Yeah, San Francisco to San Diego.
They're all right there in that bunch.
It's got to be.
I mean, I don't leave the house, so it doesn't matter if there's Seasons or not, really.
Well, I mean, you know, you got to go outside and get a taco every now and then.
That's true.
I'm guessing down where Joe lives, there's one season, and it's just hot.
Yeah, it's just hot.
Like flip-flop melting hot.
Our Taco Bell delivers though.
Taco Bell delivers?
Are you serious?
Flip-flop melting hot.
Georgia, get with the program.
Dude, Taco Bell delivers?
Come on, man.
I know, man.
Where do you think I got this from?
That wasn't from vegetables.
Oh, that's awesome.
I've got to imagine it's like an Uber Eats delivery service that does it.
I don't know.
I don't care, man.
You knock on my door with tacos, I'm giving you some money.
All right.
Well, let's continue this theme.
So since last episode was the most productive season, let's find out what your most productive
time of day is. So your choices are: morning, got to get my work done before the rest of the office gets in; or midday, let everyone else go to lunch, I'm getting this commit merged; or evening, I'll let the traffic die down while I close one more ticket; or night, now that all of the meetings are over, I can finally start my day.
I think I know what the answers to these will be.
I don't know.
I'm not going to lead them.
I'm not going to lead them.
This episode is sponsored by airbrake.io.
When your website experiences an error,
airbrake alerts you in real time
and gives you all the details you need
to fix the bug fast.
And speaking of fast,
it's really nice to get those real-time notifications.
You know, I mentioned that batch process earlier.
And because of how I built that process,
it spits out a ton of output to screen, which means that unless you're sitting there watching
it or you're working out your scroll finger, scrolling up, up, up to see if there's any
problems, then it's really easy to miss things. And Airbrake catches that stuff and aggregates
it and notifies you immediately. So then I can go and bug those podcasts about their busted feeds.
Right now, CodingBlocks listeners can try Airbrake free for 30 days, plus get 50% off the first three months on the startup plan. To get started, visit airbrake.io slash CodingBlocks.
That's airbrake.io slash coding blocks.
Okay, so now that we've got Bellman-Ford's algorithm behind us, let's talk about Dijkstra's algorithm. Starting with, who is Dijkstra? Yeah, he's a famous programmer and mathematician. He's the "goto considered harmful" guy that kind of made some waves. Wrote a bunch of books. And we talked about him not too long ago when we talked about structured programming, which is kind of a way of programming using functional decomposition, kind of like a precursor to OO. So, a pretty famous person. And he invented this algorithm that's so famous and so awesome that it bears his name, even though it's not easy to spell, at least for us Americans.
You know what's funny about this algorithm, before we get into it? This is naturally how my mind worked. That Bellman-Ford one, it took me a minute to wrap my head back around it. This one, my brain was like, oh, this is how you would totally do this, right? So to me, the main difference between these two algorithms is that this one is greedy.
It's funny that you say that. Greedy, doggone it, I take it all. Which has a negative connotation, of course, but basically it makes what I think they call locally optimal choices. Well, I guess I'm kind of jumping ahead here, but it's really similar to the first one, where you start out with this table, you initialize it and say, Allen, to get to Allen, is zero, and the rest is infinite. We don't know. We loop through and figure out all of Allen's connections, and we update the table. So, so far, we're exactly the same as Bellman-Ford.
Now, the difference is, instead of having that outer loop that says we're going to loop over this thing like four or five or 20 times, we say: find out which of the nodes that you're connected to that we haven't looked at yet, like a house that we haven't visited yet, that is the shortest distance from the start right now, and go there and check out its neighbors. So rather than looking at every node, not paying any attention to how far away it is, you just go ahead and look at the shortest one. And so it makes this locally optimal choice. That's the big difference. And because of that, it's able to save some time. And so this ends up performing better, but the downside is it doesn't do well with those negatives at all. It's flat out not going to work.
Yeah, I was going to say, it's not that it doesn't do well.
It won't do them.
Yeah.
Yeah, but to kind of say it a slightly different way: we basically start with the unvisited node with the lowest weight. So, that first iteration, using the neighborhood example, we've got Allen. It's the only node we know about, because that's the one that we're defining all the other relations to. So we start there, because it's zero. We look at your neighbors, and we log the time if it's less than what's stored, which on the first run is going to be infinity. So we're going to log both those neighbors. Then we take a look at whichever neighbor is the closest, and figure out that same thing. And we repeat that process by again looking at the next closest neighbor that we haven't already visited. And we do it again. And so, with this one, you're only going to visit those nodes one time.
Yep. So an important thing to note here: as we talked about in the last one, the memoization table that we had, where Allen, Joe, and Michael, those three nodes, show up in there, and then there was another column that was basically the distance, or the time, or whatever we wanted to call that, the cost. This memoization table is going to have one additional column, which is going to be: have you visited this yet? Because you have to keep track of which nodes you've been to, so that you don't go back to them, right? You can never backtrack. You can never touch that same node again. And so, as you visit, like, if I went to Mike's house first, then I'm going to mark that thing off and say, okay, I can't come back here, right? And then I'm going to go from his house to the next house. So it might be Joe's house. And if Joe's house is a shorter distance than Will's house, then we're going to go to Joe's house, and we're going to mark that off, right? And you keep traversing it like that, going around the nodes, just picking: hey, what's the shortest distance? Which one's shorter than the other one? And keep going that route.
Yep.
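To contrast with the Bellman-Ford sketch earlier, here's roughly what that looks like in TypeScript. Again, just an illustrative sketch: it uses a linear scan to find the closest unvisited node, where a real implementation would typically use a priority queue:

```typescript
// Same adjacency shape as before: node -> (neighbor -> weight).
function dijkstra(graph: Map<string, Map<string, number>>, source: string) {
  const dist = new Map<string, number>();
  const visited = new Set<string>(); // the extra "have you visited this?" column
  for (const node of graph.keys()) dist.set(node, Infinity);
  dist.set(source, 0);

  while (visited.size < graph.size) {
    // Greedy step: pick the closest node we haven't visited yet.
    let current: string | undefined;
    for (const [node, d] of dist) {
      if (!visited.has(node) && (current === undefined || d < dist.get(current)!)) {
        current = node;
      }
    }
    if (current === undefined || dist.get(current)! === Infinity) break; // rest unreachable
    visited.add(current); // mark it off; we never come back here

    // Relax each of its neighbors, exactly like Bellman-Ford's inner step.
    for (const [neighbor, weight] of graph.get(current)!) {
      const candidate = dist.get(current)! + weight;
      if (candidate < dist.get(neighbor)!) dist.set(neighbor, candidate);
    }
  }
  return dist;
}
```

Note the one-way trip: once a node lands in the visited set, its distance is final, which is exactly why negative edges break this algorithm but not Bellman-Ford.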
So Bellman-Ford, like, it works. It's no slouch. You know, no one can really give it a hard time. Works great with those negative weights. But Dijkstra's is just really elegant. So I'm a fan of this algorithm. Now, when I actually coded it, it looked a lot less elegant than it does on paper here, especially with all my console.logs in there, because it's JS for life. But it's just a really nice algorithm. And I think that a lot of problems are really similar to it, especially in the dynamic range. So this is a good one to know.
Yeah, I wanted to just briefly touch on it. You hinted about the greedy algorithm, which is what Dijkstra's is, but I don't know that we really put a formal definition on it. I know that we somewhat hinted at this an episode or two back, because I think the traveling salesman came up. Or knapsack. Yeah, or making change. I'm trying to remember the context of that episode, but it came up a couple of episodes back.
And we talked about, like, if you try to create any kind of map routing algorithm, right, where you're trying to get from, say, Georgia to California, then a greedy algorithm would say, hey, I might not be able to determine the absolute shortest path all the way. That's the problem with the traveling salesman problem: there are so many different routes that I could go between Georgia and California. But what I can do is just say, well, what's the quickest way to get from Georgia to, say, Alabama? And then from Alabama to Mississippi, right? And you could just keep going down that path. So you're only looking at: what's my next best, my next most beneficial, step that I can take? And that's how it's being greedy. And in the case of the Dijkstra algorithm, you're looking at the lowest cost next vertex that I can go to, and that is my next destination.
As opposed to the Bellman-Ford, where it's not greedy, because it's going to go through every single possible iteration to get to that final answer. And I think this is a really cool example, because usually you hear about greedy algorithms in places where it's a hard problem, so it's not something you can really solve exactly, and we're usually dealing with approximations. But this just happens to be a case where we do something in a greedy way, and it just so happens to be the most efficient way of doing it. It works out mathematically. It's proven to find the optimal routes.
Either of you guys have where you most often hear about greediness come to mind?
Fake news.
Okay.
I don't know.
I mean, I think of the change problem. Like, you've got to make 78 cents in the least number of coins. My first thought is like, well, I don't know.
I'm not going to do the math ahead of time, but I'm going to start throwing out the quarters until I can't anymore, and then we'll just go from there.
Okay.
That comes up in your day job a lot?
No.
I'm talking about day job.
Oh.
Either of you?
No one has one?
Salary negotiation?
I don't know.
I see Allen is like in deep thought over here.
I got nothing.
Where I see even the word greedy come up at all in any kind of conversation is around regular expressions.
Really?
Yeah.
You never seen that where this particular expression would be greedy? And if you wanted it to not be greedy, then you've got to do blah, blah, blah.
All right.
I don't write those.
I go to Stack Overflow for those kind of regex.
That's what I'm doing wrong.
Google.
Yeah, you can't write regex like that.
That's crazy.
Dude, I do like writing regex.
Oh, man.
Once you get into backtracking or whatever they call it, it is. But that's where, you know, regular expressions are another form of greedy. They can be fast, greedy algorithms. So, yep. And so there are times where you might not want a greedy regular expression. All right.
Well, fine.
I believe it.
I don't know.
I guess I need to bone up on my regex.
You're doing something I'm not.
Apparently, I'm doing it wrong because I'm not going to Stack Overflow.
That's right.
Yeah.
Maybe I just slide those tickets your way.
So, you want to hit us with the results on this, the complexity and those various pieces?
Not really.
So, I mean, the results of the actual algorithm are exactly the same as Bellman-Ford.
You're going to get a table out that has all of the shortest distances or the shortest numbers to all the other different nodes.
The complexity gets weird because there's a bunch of pros and cons,
and there's a bunch of different ways to do it.
You can use, like, adjacency lists or priority queues,
or you can use a matrix, or you can use a Fibonacci heap or something else.
And so there's a bunch of different ways.
They each have their pros and cons.
And so it's kind of hard to really give, like,
one particular value for the speed here.
But I don't know. Like like do you guys have a favorite
well it's still way faster than bellman ford right like yeah yeah and so um like the i think
the most common one i've seen is the uh using the adjacency list and priority q together so you got
big o of the vertexes plus at the edges which is less in parentheses right so you add up the vertices and the uh the edges and you
multiply that by the log of the vertices so you don't need to look at all of them way smaller than
n squared yeah because remember we could um yeah we could approximate that other one to
o of n squared and this one is a you know we got those v's and E's in there, but you can kind of think of it as a sub exponential.
And that's not the word for it.
Basically, an N log N kind of thing.
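To put rough, made-up numbers on that: for a graph with 1,000 vertices and 5,000 edges, Bellman-Ford's V times E works out to around five million edge checks, while (V + E) log V is roughly 6,000 times 10, or about 60,000, since log base 2 of 1,000 is about 10. That's the size of the gap being described here.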
You know, it's funny about that, though. Typically, in big O notation, don't you throw out the factors? So that could be simplified to O of log V. If I remember correctly. I may be wrong.
No, I think Joe is more right. It would be like O of V log V, but he said it as O of N log N. I think that's right, because once you add V plus E together, it might as well be one input. It doesn't really make that much difference. It kind of maps to the input. So it's like input size times the log of input size. But, yeah, you can't just drop terms. You have to drop constants. So if you've got something happening five times in a loop, it's like, yeah, it's just a loop.
Okay, I couldn't remember if it was factors. It's all about how it scales.
But regardless, though, any of these different implementations that you've talked about, whether you're combining a priority queue with an adjacency list, or a matrix, or using the Fibonacci heap, when you talk about their complexity, they all involve logarithms of the vertices. And as we talked about before, any time you talk about logs, you're talking about orders of magnitude. You're not talking about the entire input; a log is the inverse of an exponent. Yeah, that log factor is sublinear. It's like even better than linear scaling, right? So, can you see the motions I'm doing with my arms right now? Yeah, that's what made it all click. Yeah. Well, because the point I was trying to make, though, is that before, with Bellman-Ford, you're either multiplying, which is when we got into the O of N squared, or you're just taking the set by itself, right? That was your best case. Yep. Whereas with this, as the number of vertices goes up, that log factor grows much more slowly, which is interesting.
Yeah, you know, and I just realized we didn't mention the best example of why you would use an algorithm like this. Why do you need to know the distances from a starting node to all other nodes? It's like, well, video games, obviously. So A* algorithms are heavily based on Dijkstra's algorithm. That's somewhere we don't have negative weights, because, you know, the bad guy needs to get to the good guy, or the bad guy needs to get to the turret, or needs to run away or whatever, and needs to figure out what it's able to get to. And A* is basically an adapted form of this, where it prunes down the tree to get even more optimized, to run faster, because that's the kind of thing that needs to run over and over and over again. Awesome.
So, comparing the two algorithms: really, you get the same end result, assuming there's no negative edges, right? So you can compare the results between the two. Dijkstra's is greedy, we said. Bellman-Ford is not. Bellman-Ford handles negatives, we already said. And, I don't know, I guess we're going to repeat ourselves: Bellman-Ford is built on the notion that a shortest path has fewer edges than there are vertices. Is that right?
Yeah, it's just the number of iterations, right? We basically don't need to iterate more than the number of nodes we have, minus one. Those V words. We don't have to iterate more than that. So that's why we're saying, like, there's four houses in the neighborhood, we don't have to do more than three checks.
Yep.
Yeah.
Yeah. I mean, that's about it for those two algorithms. And again, those algorithms are important because they kind of exemplify dynamic programming, and they're used for a couple of different things. I don't know, do you guys see this kind of problem show up? Have you ever Bellman-Forded something?
Well, that's what I was thinking before. Like, no, short answer, no. But it depends on who you're working for, the type of jobs you're working. I mean, I've had friends that worked for companies where they were writing mapping software for delivery trucks within a company or whatnot. If you're working in, like, the Google Maps division, then you're probably going to care about routing-type things. Or maybe you work for a LinkedIn or a Facebook or something like that, and you're trying to do any kind of connected graphs. But, oh, you know what, though? I just thought of an example where I'm wrong. We did need this. I would venture to say, even if you have, like, a warehousing application where you have to try and figure out the best routes to get from one place to another, and optimize flow throughout some sort of area, these types of things would come into play there.
I've done a little bit with A* stuff, just messing around with video games. And there are a couple of cases I can think of, like dynamic problems that I've solved that are kind of similar, where I basically break the problem into various pieces and then store that information, usually in a database or something. And then later I interact with the results of those calculations rather than calculating all that stuff on the fly, because I want to know how things kind of relate to each other. So, kind of similar. But, I don't know, I actually programmed these guys, just like the sorting ones, just to try and get my head around the algorithms. And once you do one of them, the second is almost like a straight-up copy-paste, and you just tweak that last little bit there. I did like Dijkstra's better, though.
Yeah, I just thought about something there. In a previous gig, we did need one of these algorithms, but didn't realize it until we had already moved on. Because in any kind of graph type of programming, right, if we were to talk about a tree, like in our previous conversations about breadth-first and depth-first, there was this kind of, quote, parent-child hierarchy, right? But with graphs, you don't necessarily have that, right? You know, my house pointed to Joe's house, Joe's house pointed to my house, et cetera. So that was bidirectional. You could have multiple houses pointing to multiple, quote, parents or whatever in a graph, right? There's no strict parent-child relationship, is where I'm going with that.
And, you know, this was kind of one of the takeaways. Obviously, we've still been going through Rob Conery's The Imposter's Handbook here, and one of the points that he made in the book was knowing when you need one of these things, right? And he talked about a story where it actually impacted his job, because he didn't realize that he needed to know something until it was too late. He realized it well after the fact, after he'd already lost the job, and by then it was way too late. And that was kind of what happened with us. Well, I mean, not that we lost our jobs because of it, but we thought that we were looking at a tree problem, and so we started attacking the problem in that direction, right? And, you know, imagine you've got massive amounts of code, and then you realize, oh my God, it's a graph, right? And by then we were like, oh my God, how do we go back and redo this to make it a graph problem? It changes everything. It is important to recognize it for what it is, and to be able to apply the right algorithms.
Now, when it comes to graphs, I had to make a decision. I hadn't really thought about a graph, or programmed a graph, in a long time. But as soon as I went to do it, I was like, oh, you know what? First, I need to make an actual graph kind of object. And there's a couple of different ways to do it. I was kind of curious, like, maybe one of you guys shut your ears and the other one say how you'd initially think about programming a graph. I'm kind of curious to see if your default way to build it is different than mine.
But I don't have one. You don't have one? I mean, I don't know. What do you mean? Yeah, I guess not.
Mine would have been basically an array with the vertices in one dimension, and then, probably from that point... I don't know if I'd do two-dimensional arrays, to where I...
You're basically talking about a matrix. You could say, if you've got the rows and the columns for the different nodes, you can say A talks to B by row and column, and B talks to A, and it could be different going the other way. So you can capture all those relationships there. So that's funny that you said that, because that's not the way I did it. What about you, Outlaw? Were you thinking kind of along those lines, or?
No, I really wasn't. I mean, I kind of liked where Allen was going with it, but then he threw me with that last part. Because I was thinking, like, oh, yeah, okay, so what if you had an object that had the list of, here's everything that I can point to? So you could do it multiple ways. You could either have, like, a two-dimensional array, where you have your vertices and the things they point to, or you could literally just have a list of key-value pairs that point from this node to that node, right?
So that's an adjacency list, yeah, where you basically store all the connections in one big list. And that's got some really nice advantages, too. What route did you take?
I basically treated it like a tree. So I kind of had, like, a starting node, and it had its neighbors, and then each neighbor had neighbors. So basically, I programmed it as if it were a tree, except that it could have cycles. So, I don't know.
How do you handle the cycles at that point? Or does each node have its own tree? Well, how do you handle it if it has multiple parents?
That's fine. It doesn't have a notion of parent. It only has a notion of neighbors. So it's like a tree, but it's not a tree. Okay.
So basically, what he's saying is, if we use the house examples, right, like my house to your house, and, correct me if I'm wrong here, Joe, let's say that you two could go between each other's houses, but I could only go to your house, right? So I would be at the top of the tree. Then your house would be next. Then Joe's house would be next. And then your house would be next in that tree again, right?
Yep. Yeah, because the one arrow would be from your house to his house, and then the next node, or child, in the tree would be going back to you. But it would be a child of his, in the direction it could flow your way.
Seems like it would be kind of expensive as the size of that tree grew.
It could be. Now, the interesting thing is, though, when you do it that way, you're not pointing to an object reference, right? It's literally just a named node.
Well, I did pointers. So, like, I did pointers back to my nodes. Oh, you did? Really? Yeah. So, like, you would have your first node, and you can think of it as an object: it's got a value, and it's got an array of pointers, or a linked list of pointers, call it an array of pointers, that would point to its children. And each child had their value and an array of children. But if they point to another node, they don't have to recreate it. It's not duplicated. That pointer just points back to the parent or wherever it needs to go. Like, parent doesn't even make sense. It's just a list of connections.
I mean, originally, one of the thoughts that did come to mind, too, was, like, a doubly linked list. But then I didn't like that idea, because, you know, when you talk about doubly linked lists, or just linked lists in general, you're only talking about, like, one connection point, right? And so that's why, as soon as you said array, I was like, oh, yeah, that'd be much better. Because then you could have, hey, these are the things I can point to, like a whole list of those things. And it could be however many that particular one needs. It could grow to whatever its need is.
So, yeah, I mean, the way I did it is definitely not good. Because, like, if you need to remove a node from this list, you've got to recurse the whole thing looking for, you know, pointers to it. Unless you do that doubly linked list thing, which would kind of get you around the problem. So I knew this was not an optimal way of doing it. It was just kind of interesting. Like, after I programmed it, I went to go look at how other people did it, and I was like, oh, yeah, these people used better data structures than I did. So I was just kind of curious how you guys thought about it.
So, this map that we were making: if we were going to create this for a mapping solution, maybe we would have all the cities with their roads, the fastest roads that would connect the cities. And that would be how we would decide which cities make it into our list. Okay, so let's continue on with this array idea, right? So I want to say, what cities can Atlanta connect to?
Say airports. Where can you get from the Atlanta airport?
Well, the reason why I wanted to go with roads is because I want to pick the roads. If I'm creating a list of cities that Atlanta can get to, I want to only target my fastest roads, because those might be my weights, right?
Well, I'm seeing, like, with the wind and different routes, you could say getting to New York is maybe slower, and you add in time zones and stuff in there, so you can get those weights.
So I'm trying to avoid, like, hey, well, you could technically get anywhere from Atlanta, because you could take, like, back roads. But by using the faster roads, then you're narrowing the list of cities by interstates. Okay.
Yeah, so if you had that, you could... if you did the, uh, what was it called? The set?
Um... the keys? No, dictionary. Ah... hash?
No, you said it. It's the adjacency list.
The adjacency list. Thank you. My brain.
So yeah, you could do it that way, and you'd literally just have key-value pairs of Atlanta to Montgomery, Atlanta to Nashville, Atlanta to whatever. Right.
Yeah.
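(To make that concrete, a minimal sketch of those key-value pairs, one entry per connection; the city names are just the ones from the conversation. As Joe clarifies later in the episode, a structure like this is really an edge list.)

```typescript
// Each entry is one connection (edge) between two cities.
const connections: [string, string][] = [
  ["Atlanta", "Montgomery"],
  ["Atlanta", "Nashville"],
  ["Atlanta", "Chattanooga"],
];
```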
Or if you did the array-type thing, which might end up being more like the tree, like what you were talking about. If you had a two-dimensional array, you'd have Atlanta, and then in there you'd have a list of all the points. Atlanta would be your first index of your array, and then the second portion of your array would be another array of just a list of all the cities that it could attach to, right?
Right.
So going from Atlanta, you might have, what's that, Tennessee?
Nashville.
Chattanooga.
Chattanooga, thank you.
Chattanooga.
See, I'm so good at geography.
Let me tell you about my geography. I don't know if you heard.
All right.
So Chattanooga might be in that list.
Birmingham might be in that list.
And Macon, Georgia might be in that list, right?
And if you're trying to make that routing back to California, going back to what we were talking about earlier, then you've narrowed that list down to only the fastest roads, because those you can get to by interstate. Right.
So now, when you look at it from a greedy point of view, you could say that, well, Atlanta to Birmingham is going to be my most optimal way to get to California, compared to the other two, because it takes me more westwardly than Macon or Chattanooga.
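(A rough illustration of that greedy step; the weights here are made up for the example, standing in for whatever "fastest road heading toward California" works out to.)

```typescript
// Interstate neighbors reachable from Atlanta, each with a made-up
// weight (lower = better progress toward California).
const atlantaNeighbors = [
  { city: "Birmingham", weight: 2 }, // most westward, so cheapest here
  { city: "Chattanooga", weight: 4 },
  { city: "Macon", weight: 5 },
];

// The greedy choice: grab the lowest-weight edge and don't look back.
const next = atlantaNeighbors.reduce((best, n) =>
  n.weight < best.weight ? n : best
);
console.log(next.city); // "Birmingham"
```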
Right.
Yeah, no, I'm looking, and no one really did it the way I did it. Everyone says basically you do an adjacency list or you do a matrix. So I'm like, the way I did it works; I'm sure it's not optimum. It's probably crappy for lookups and crappy for edits. So it's just not a good data structure for this.
But you did yours as a tree.
Well, it's not a tree because it's got cycles.
But I basically have a node, and that node has neighbors.
And I store the weights in that array, too.
Right. But you have a list of neighbors.
That's an array.
No... that's the one we're talking about then. It's an array with a dynamic size.
Yes, but that's what we said. It's a list.
I think we're on the same page.
It's similar to a tree. It's like a tree that can...
No, no, no. Stop using the word tree.
It's not a tree.
I don't think you made it a tree.
It's a list.
A tree would be object, object left, object right, or child left, child right.
Yeah.
A tree would imply that...
It's not really one list.
A tree would imply that there's only... you know, that a node has one parent.
I've got to see your code. That's basically what it boils down to. I'm sure we're going to do a pull request.
Can trees not have cycles?
Well, then it becomes a graph.
Yeah. Well, then what do you call that kind of implementation, if you have a tree and you had a cycle?
That's a graph.
Yeah. What do you call that implementation, if you've got pointers to the other objects and you've created a relationship?
A graph.
Okay, so it's a graph, but it's not an adjacency list and it's not a matrix.
So what is it?
I don't know.
It's just a graph.
I just made a graph.
No, what I'm saying is if your tree has cycles in it, as you're saying, where one node can circle back to a parent, right?
That's what you're saying?
That's what you're calling a cycle?
Yep.
So in that case, that's not a tree.
That's a graph.
Yeah.
But there's multiple different ways to implement a graph.
You can do it with a matrix. You can do it with the adjacency list.
Or you can do this weirdo thing that I did.
Which sounds like it's not that weird because this was basically where the three of us landed.
I want to see the weirdness.
I really do.
All right.
I need to see the weirdness.
But anyways.
All right.
Well, we're going to move on to Alan's favorite portion of the show.
It's the tip of the week.
That's right.
So what you got, Jay-Z?
You're up first.
Hold on, man.
I'm looking at my graph.
Oh, you.
All right.
So, oh, this is a really good one.
While I was working on these algorithms today in VS Code, I did it in JavaScript. JavaScript for life. Um, I kept having to go back and run stuff in the terminal. So I would go type, type, type, then I would go click in the terminal, and then I would, you know, click up and run my thing. It wouldn't work, so I would go back to the editor pane, make another change, you know, add another console.log, back to the terminal. I was like, man, it stinks that I have to keep taking my hand off the keyboard to go click the stupid terminal. So I looked for a shortcut, and there isn't one. But I found a link on Stack Overflow where somebody was able to set up conditional key bindings in Visual Studio Code, which sounds tougher than it is. You basically just paste this stuff into a little file.
And what it does is it lets that control,
what's it, the tilde or back tick,
that normally opens up the terminal,
it acts a little differently now.
So I pasted this into my VS Code,
and now when I hit control tick,
it'll open up the terminal if it's not already open.
If it's open and I'm in the editor pane,
it'll put me in the terminal.
And if I'm in the terminal,
it'll put me back in the editor pane.
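(The snippet Joe pasted in looks something like this; the exact Stack Overflow answer may differ, but these are real VS Code commands. It goes in your keybindings.json.)

```jsonc
// In VS Code's keybindings.json (comments are allowed in this file):
[
  {
    // From the editor: focus the terminal (opening it if it's closed)
    "key": "ctrl+`",
    "command": "workbench.action.terminal.focus",
    "when": "editorTextFocus"
  },
  {
    // From the terminal: jump back to the editor pane
    "key": "ctrl+`",
    "command": "workbench.action.focusActiveEditorGroup",
    "when": "terminalFocus"
  }
]
```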
Oh, nice.
So what it really just means
is I don't have to use the mouse
to go back and forth
between the pane and the terminal.
I like it.
Yeah, and it's the shortcut
that was already being used for the terminal.
So rather than that now
just opening and closing the terminal,
it just toggles back and forth
because I like to have that thing open
all the time anyway.
Now, okay, I was super curious about this.
Do you guys find it inconsistent with Control Tick opening the terminal?
Like inconsistent?
Like you don't want it to do that?
Well, like sometimes you want it to and it doesn't do it, and you're like, why aren't you opening it?
I don't know that I do it enough to notice.
Yeah, we just always have it open, so I don't really know.
That's why I can't ever...
Yeah, I have that problem where it's like,
even like right now, I have code open right now,
and I will do Command-Tick because I'm on a Mac,
and I get nothing. It's looking at me like, you're crazy, why would you do that?
Right. Isn't it Control-Tick?
Well, I thought it would be Command on the Mac.
Oh yeah. All right, you're right.
No, because that'd be the equivalent of Windows-Tick.
Yeah, you're right. So maybe that's what I'm doing wrong. Dang it, I'm doing it wrong. All right, nothing to see here,
moving along.
Hey, how about Azure?
All right, so check it out. I have a couple of tips here. The first one I actually got off a mailing list that I didn't know about. So there's this thing that is free Azure credits for students, which is really cool. So if you are a student and you would like to get a hundred-dollar credit to play with in Azure, go get it. And on top of that, you also get 25 free products: virtual machines, artificial intelligence, databases, all kinds of stuff. Yeah, man, it's definitely worth playing with. You get things like, I believe, SQL databases up to 250 gigs and 750 hours of Linux virtual machines.
And what do you have to do to qualify as a student?
Because I'm a student of life.
Yeah, I just wonder, like, this is worth going back to school for.
I've been studying it for a while now.
Oh, man.
Oh, my guess is you need studentoflife.edu as an email address.
Oh, I need an EDU.
That would be my guess.
Now, I don't know that much, but it's worth checking out.
If you're a student, if you're in school, you are learning, you know, go get this stuff and play with it, right?
I'm going to register an EDU domain real quick.
I hear those are harder.
So, yeah, I'll have a link in the show notes for that. And then also, I've talked about this in the past: I want to say Instagram, I had mentioned their engineering blog was just amazing, right? When they were talking about how they ended up scaling their Postgres implementations, where they sharded things out, and they talked about their complications and all that. Well, our buddy Ryan, Ryan Monster on Slack and Twitter,
he found this resource that is just crazy. It's a GitHub page that somebody put together
that is a list of all the engineering blogs they could find. And there are hundreds of them.
Like literally you could burn days, weeks reading these things.
And there's probably just gold nuggets in here,
like the Netflix ones, the Instagrams.
They got Flickr.
Google Research was in there.
Google Research, Intel, Instagram, Instacart.
It seems impossible to keep up with this, though. You're basically re-indexing the internet of the companies whose blogs you want to follow.
Dude, I was shocked by
how many are in here.
There's an...
No, that's not... Yeah, NVIDIA.
Blogs.NVIDIA.com.
These are companies that are
solving big problems, and they're problems that a lot of people... like, if you ever want to see what kind of interesting things people are having to solve. And then he also lists a bunch of individual blogs as well, and technologies, so it's not just companies. So, like, you might follow a Microsoft blog, but what if, now stick with me here, you want to follow the Microsoft Edge blog?
Doesn't everybody?
You could do that. You could do that.
Yep. So definitely check out this list. I mean, there's probably some just really good information in here. So that's it for mine.
All right. Mine, I'm going to credit to Chris Bennett, who emailed us this great tip. Have you ever been in Visual Studio, and you're editing around, and you hit F5, and the wrong project starts up?
Oh, yes.
Right. So this little tip here, I'll have a link to it in the show notes. But basically, what you can do is right-click on the solution, select Set Startup Projects, and there is a radio button there called Current Selection. Then, based on the file you have selected in the solution, it'll automatically select the project for that file, so that if you do the F5, it'll start the correct one.
Now, this is going to be beneficial if you have multiple independently runnable projects
within your solution and not just like DLLs or something like that.
You know, they're together, but it's pretty neat little tip.
You can see a visualization of this in the link that'll be in that too.
Good stuff.
So thanks, Chris, for writing in with that tip.
Yeah, definitely.
And I have a correction to make from earlier in the episode.
And first of all, I owe a big apology to Will Madison
because I know he's going to hit me on this.
I misspoke.
So an adjacency list is more or less what I programmed.
I read the description wrong.
It's a little different.
It's a little bit better,
but it's basically the same kind of deal
where you keep an array of the nodes,
and inside each array, you keep that list of neighbors.
Okay, yeah, yeah.
So I just had a suboptimal way of doing it. Like, I didn't keep an array of the nodes, which just makes it easier to pop around in there without having to search through the whole list. But it's pretty much what I did; I just didn't do as good of a job.
Now, the thing that I described where you basically keep track of each pair,
the things that's connected, that's called an edge list.
Ah, okay.
Edge list.
And so edge list and adjacency list are not the same thing.
I got him confused when I was reading up earlier.
Wait, say that. I know Will Madison.
He's going to know, and he's going to get me on it.
Say the edge list again?
The edge list is the one where you keep track of all the edges.
So if A connects
to B, you're going to have a big array that has
A, B.
And then A, C, and A, D.
And the adjacency list,
you've got an array with
all your nodes. So A, B, C, D
are in the array. And each
array, each index, is going to point
to a list of all the neighbors
for that node. So that was the
two-dimensional array that I mentioned initially.
Yeah.
You described an edge list, not an
adjacency list. So sorry
about that, everybody.
No, I didn't.
Yeah, I described them both.
The two-dimensional array would be an adjacency
list. Matrix. No, that's the matrix.
So there's three different ways.
Oh.
And that's why when I was Googling earlier,
I was getting confused because I kept seeing adjacency list or edge list,
and I was thinking it was the same thing,
but it's totally different.
So edge list is the list of connections.
Adjacency list is a list of all the nodes
where each node has a list of all its neighbors.
And there's adjacency matrix,
which is the rows and the columns
that show the connections.
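(To pin those three down, here's a small sketch of the same example graph, where A connects to B, C, and D, in all three representations. Variable names are ours, for illustration.)

```typescript
// 1. Edge list: one entry per connection.
const edgeList: [string, string][] = [
  ["A", "B"],
  ["A", "C"],
  ["A", "D"],
];

// 2. Adjacency list: every node, each with its own list of neighbors.
const adjacencyList: Record<string, string[]> = {
  A: ["B", "C", "D"],
  B: ["A"],
  C: ["A"],
  D: ["A"],
};

// 3. Adjacency matrix: rows and columns are nodes; 1 marks an edge.
//                      A  B  C  D
const adjacencyMatrix = [
  [0, 1, 1, 1], // A
  [1, 0, 0, 0], // B
  [1, 0, 0, 0], // C
  [1, 0, 0, 0], // D
];
```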
Okay. The only one I was ever thinking of then was the adjacency list.
Okay. And that's the one that's most similar to a tree. Maybe that's why we got...
No, stop.
No, it is. It can be. Well, I mean, depending. Like, if you had a plain graph... but the difference here is that the graph can allow it to, you know, have circular references or, you know, like, quote, multiple parents.
So, but so if you've got a tree and you have a bug in your code and you accidentally connect one of those nodes to a parent, is it no longer a tree?
Well, that's what I'm saying.
Like, so, okay, let's, let's take this back to real world.
In the example that I was talking about before, when we started the implementation, we had mistakenly worked under the premise that, hey, this is a tree.
Then we weren't even allowing for multiple.
Right, right.
So it wasn't like there was an option for multiple parents.
There was the one parent.
Yeah.
Right.
But you can still...
I could goof that up. You just add that parent as a child and have that parent not have any children. I could goof it up.
And in the real world, you know, I've seen some funky, funky trees.
Sounds like a song, but yeah.
But then the problem is, like, how do you know when to stop? Like, you're trying to create this tree.
Oh yeah. I mean, that tree is not going to do right.
Well, yeah. And it gets way more complicated than that, because even in your scenario, from a space problem, you're way increasing the space, because that parent might point to multiple children, and each of those children might point back to another parent. Now you're just increasing the size, your space, for that tree.
They're all pointers, though; you wouldn't have doubles of that.
Well, maybe they're pointers. I don't know how you're implementing it.
Yeah.
Yeah.
I think we're running off the rails. Because, like, I can do anything really poorly.
We've proven that.
Listen to this conversation.
All right.
So you thought we were going to make it through this episode without getting confused.
Yes, we mostly did.
Yeah. Yeah. We gotcha.
So check it out.
We do have some resources.
One of them is The Imposter's Handbook, which we will have a link to. We've got some Wikipedia links. There's also a couple of YouTube videos that were pretty helpful, just, you know, being able to see exactly what's going on and what we were talking about. Hopefully we simplified it a little bit, but seeing a video with some additional nodes and vertices will help drive it home even more. And I found one article that described itself as basically the FAST, capital F-A-S-T, it's an acronym, method for solving dynamic programming problems.
So if somebody throws a dynamic programming problem to you in an interview,
then you could kind of follow these steps.
And it was a pretty cool way of kind of breaking things down.
Hopefully, I don't get any dynamic programming problems in the interview
because it was definitely harder than it should be for me to implement both of these.
Cool.
All right.
Well, with that, thank you for listening and subscribe to us on iTunes, Stitcher, and more using your favorite podcast app.
As Alan mentioned earlier, if you haven't already, be sure to leave us a review by visiting www.codingblocks.net slash review.
And also, like I mentioned too, please share it with a friend, mention it to a friend who it would be helpful for.
Awesome.
And while you're up there at CodingBlocks, go ahead and check out all our show notes, which are extensive: examples, discussion, and more. And send your feedback, questions, and rants to the Slack channel, codingblocks.slack.com. We hang out there, and also a lot of really smart people hang out there and help us figure some of this stuff out. Usually retroactively, unfortunately. But make sure to follow us on Twitter at CodingBlocks or head over to CodingBlocks.net, and you can find all our social links at the top of the page.