Algorithms + Data Structures = Programs - Episode 220: Graph Algorithms & 7 Bridges of Königsberg

Episode Date: February 7, 2025

In this episode, Conor and Bryce chat about graph algorithms, the 7 bridges of Königsberg, Hamiltonian paths, Eulerian paths, and more!

Link to Episode 220 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: Twitter | BlueSky | Mastodon
Bryce Adelstein Lelbach

Show Notes
Date Generated: 2025-01-22
Date Released: 2025-02-07
Algorithms by Panos Louridas
ICPC
7 Bridges of Königsberg
Hamiltonian Path
Eulerian Path
Elements of Programming
C++20 std::ranges::transform
Haskell iterate
C++20 std::views::iota
From Mathematics to Generic Programming (FM2GP)
PLVM Meetup

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

Transcript
Starting point is 00:00:00 I was thinking that this is very similar to a Hamiltonian path. And I just looked up what the difference is. So a Hamiltonian path is a path in a graph that visits every vertex exactly once. Whereas an Eulerian path is a path in a graph that visits every edge exactly once. And it's sometimes also called an Euler path or an Euler walk, and an Eulerian circuit is an Eulerian path that starts and ends at the same node. Welcome to ADSP: The Podcast, episode 220, recorded on January 22nd, 2025. My name is Conor, and today with my co-host Bryce, we chat about graph algorithms, the seven bridges of Königsberg, Hamiltonian paths, Eulerian paths, and more.
Starting point is 00:01:02 End of episode 219, start of episode 220. So, as I said before, I listened to some of the chapters, and chapter one of this book was What Are Algorithms? And he makes this one other claim that I didn't mention, which is that he says that algorithms must terminate after some finite number of steps, that an algorithm cannot run forever. I'm not sure whether I agree with that. But in general, I like the way that he defines algorithms: an algorithm is a set of steps to do something, and it's a set of steps that you could execute by hand. I'm on board with that. But so chapter two is about graph problems, which is not something we've ever really talked about amongst ourselves. Do you know much about, like, graph problems? Not really. I mean, there was a point in time back when I was a budding ICPC programmer, and you had to then go and learn all the different shortest path algorithms.
Starting point is 00:02:13 Yeah. I can't even remember the name of them, but, like, there's the beginner ones that, ooh, let's look it up. I want to say Mistral, but that's the name of an LLM, not the name of a shortest path. Let's go for, actually, let's actually go to an LLM. Because I want to say one of them is like a three letter as well. It's like Kim or Kip or something like that. List me just the names of 20 very common graph algorithms. Obviously, Dijkstra and ASTAR
Starting point is 00:02:46 but I want, I think they're called like spanning something like that, common graph I do believe that this chapter covers it's got a section on shortest path and I believe it covers the Dijkstra algorithm Yeah, Prim was the so I'll just read out
Starting point is 00:03:01 the 20 that were on chat GPT. Am I on 01? I know they give you like a couple of 01 for free. And I think this is my first. It's 10 a.m. on a Wednesday. You know what I've used my 01s for? Travel planning. I was about to say travel planning.
Starting point is 00:03:20 It's very good. Do you know that through Perplexity AI we have 01? Oh, I did not know that. Yeah. You got to just register, email them. By we here, Connor means we in video. Yeah, yeah. I was about to explain to the listener.
Starting point is 00:03:35 This is what do they call it? Inside baseball talk. I think I'm not actually sure for how long, but Perplexity AI, which is a search engine, which gives you all the top models. Let's actually go. We're taking a tangent, folks. First time ever on ADSP, the podcast. And you can either. Our first tangent ever.
Starting point is 00:03:56 Yeah. First tangent ever. You can either set it so that it says default, which is optimized for fast search. And I think then it dispatches to any one of the models that it currently supports. And or you can hard code it to the one that you want. So currently, I have mine hard coded to 01. But they also give you access to cloud 3.5. Sonnet, Sonar Large, GPT-40, Sonar Huge, Grok 2, and Cloud 3.5 Haikyuu. And I just became self-conscious of the way that I'm saying. Is it cloud or is it clawed?
Starting point is 00:04:31 We don't know now. Anyways, forgive me, internet, for potentially mispronouncing cloud. And it's actually pretty phenomenal. Like, the stuff that I'm able to do now, half the time, if I'm doing personal stuff, I'll just give it a whole script and I'll just be like, I need you to make these changes. And it goes and does it and it works and it's fantastic. Anyways, Perplexity AI, I guess that's a, you know, if you are quote unquote sponsoring a company where some of the employees at that company may have podcasts, you may end up getting a free plug.
Starting point is 00:05:06 I don't know how much it costs because we get it for free for, I think, a number of months. Currently, it's still letting me log in. So anyways, I also have ChatGPT, though, because ChatGPT gives you faster responses. So if it's something that I know that whatever the free model is, we'll do okay with, such as listing me a list of 20 things that doesn't actually need to be correct. We just go to it for that.
Starting point is 00:05:28 And we go to perplexity AI if we're willing to wait five seconds for a response, which is quite a bit. Anyways, top 20 algorithms brought to you by ChatGPT. Could be 01, could be whatever the free tier is. Number one, actually, do you want to play a game? You want to see how many you can guess? Oh, of the top uh graph algorithms i'll give you if you can get five because i technically already said that number six was prims that was the three letter one which actually turned out to be four
Starting point is 00:05:55 letter four letters oh there's it actually and actually you think think of very generic like not even named after someone or with a special name like Like number one and number two, you definitely know. The question is whether you will think of it. It's more of like a category of algorithms. Well, there's probably connectivity or like connected components. It's probably somewhere on there. You're thinking too specific. Okay.
Starting point is 00:06:18 You definitely learned this in school. Like everybody learns this as the two broad categories of graph algorithm breath first depth first yeah so depth dfs is number one bfs is number two and then we'll just go down the list full points for bryce it's gonna be like the shortest path um yeah dykstra's is number three number four is bellman ford five is floyd warshall six prim, Prims. Seven, Kruskal. Eight, A-Star. Nine, Johnson's. Ten, Tarjan's algorithm for strongly connected components. I said connected components.
Starting point is 00:06:55 That counts. Eleven is Kosaraju's algorithm also for strongly connected components. Twelve, topological sorting. Thirteen, Edmund's CARP algorithm for maximum flow. Four, Edmunds-Karp algorithm for maximum flow. 14, Ford-Fulkerson algorithm for maximum flow. 15, we're going to mispronounce this one. It could be, we're going to say Dinnich, D-I-N-I-C algorithm for maximum flow.
Starting point is 00:07:19 16, I'm going to mispronounce this one too. It's getting rough, folks. Heer-Holzer algorithm for, or actually I'm guessing because it's named after Euler, it's going to be pronounced Eulerian circuits. Yes, that's actually, we're just about to talk about the Eulerian circuits. That's one of the... 17 is Flurry's algorithm for an Eulerian path. 18 is the Chu-Liu slash Edmunds algorithm for minimum spanning arborescence. 19 is
Starting point is 00:07:50 Kahn's algorithm for topological sorting. And 20, thanks for staying with us listener, Warshall's algorithm for transitive closure. And Prim's was the one I was thinking of. I think Kruskal. But here, let's ask one follow-up question.
Starting point is 00:08:06 What is Prims' algorithm? It's minimum spanning trees. I was going to say minimum spanning tree. An example of. I knew the word spanning was in it. I remember that from school. So I got into the memorizing of these algorithms at a certain point in time. I think I got through all of the MSTs.
Starting point is 00:08:27 And then I think, you know, Dijkstra, ASTARS, those are all pretty straightforward. But then I didn't get to, I mean, Floyd Warshall. I think I programmed once. I don't remember what the algorithm is. But all of these like hyphenated names, that's why I kind of like fell out of love with icpc is that like you know intelligence and intuitive thinking works up to a point and then like you just need to start memorizing these hyphenated named algorithms for like specific things which uh didn't really i didn't really fancy that didn't really fancy that anyways over to you now let's talk about
Starting point is 00:09:02 your larian circuits because that's where this book opens up on. And I had a flashback to my college years. It's pronounced Euler. And I used to pronounce it Euler and then I got raked in the YouTube comments. It's pronounced Euler. So I believe that means that it's an Eulerian. Unless if they change the pronunciation of Euler's name. We can go with Eulerian.
Starting point is 00:09:28 Come at us, Internet, if it's wrong. So I remember this example from school. Are you familiar with the seven bridges of Konigsberg? Not Kongsberg in Norway, where we've been. Norway. Konigsberg. I have not been. Only you have been to Tech Town.
Starting point is 00:09:47 Oh, okay. All right. Fair enough. I am familiar with the graphic, but I think it's just you need to cross all seven is the problem. So it's, is it, yeah, it's seven bridges. You should go, we'll put a link in the show notes, but you should go look at the bridges. And there, I put a's two islands, and then there's two mainland portions of the city. There's to, I presume, the north and south.
Starting point is 00:10:33 And the question is, can you visit all of the bridges and all of the islands without crossing a bridge twice twice yeah and I'll save you and the listener the the entire lesson
Starting point is 00:11:01 but basically this for some reason, this became a mathematical challenge in the 1700s, I guess, to spur the citizens, the local citizens, to think more deeply about math. And Euler came up with a solution, and his solution is that the answer is that no, you cannot do that. And if you look at this as a graph, if you form a graph from the two islands and the two mainland regions, you'll see that every node has an odd number of connections to all the other nodes. And so because of that, that means that you can't visit every bridge, right? That if one of the nodes of this graph has three connections to it,
Starting point is 00:12:03 if it had two connections to it from somewhere else, two edges to it, you know, if it had two connections to it from somewhere else, two edges to it, then you could go visit it over one edge and come back over the other edge. But if it's got three or if it has any odd number, then you're going to always be left either unable to visit that last one or you'd visit that last one and then you would need to go back on a bridge that you've already taken. Does that make sense? Sort of the intuition there? I think so, yeah. I was thinking that this is very similar to a Hamiltonian path and I just looked up what the difference is. So a Hamiltonian path is a path in a graph that visits every vertex exactly once. Yeah.
Starting point is 00:12:48 Whereas an Eulerian path is a path in a graph that visits every edge exactly once. And it's sometimes also called an Eulerian path or an Eulerian walk. And an Eulerian circuit is a Eulerian path that starts and ends at the same node. It's called an Eulerian circuit or an Eulerian tor. What was your although? Well, no, I'm just trying. I always thought that Hamiltonian paths weren't allowed to visit edges more than once. I think it may be that they can't visit an edge more than once but they
Starting point is 00:13:25 don't have to necessarily visit every edge oh yes yes yes yes yes because i was gonna think by definition if you have like um if you visited an edge more than once you're gonna inevitably end up at a vertex more than once and i i think that i think that an Eulerian path, you just have to visit all the edges exactly once. You don't necessarily, you can visit the same node multiple times. Yes, yeah. That's the distinction. Oh, that Hamiltonian is visiting each vertex exactly once and Eulerian is visiting each edge exactly once. It's interesting because I definitely know that in my programming, competitive programming days that I've solved Hamiltonian path problems. And I'm pretty sure the problem even stated that it was a Hamiltonian path problem.
Starting point is 00:14:18 But I've never come across Eulerian paths. Maybe it was implicit in the problem, but I've never come across the word, which is surprising considering that it is the sibling concept to Hamiltonian. Yeah. So then these are called either paths or walks, and
Starting point is 00:14:37 when you have a path or a walk that starts or ends at the same vertex, it's called a tor or a circuit. So, anyways, a little bit later in the chapter, it starts talking about cycles and graphs and graphs without cycles and the importance of DAGs. And that's when I – I was going to say, are we going to say DAG, directed acyclic graph? Yeah.
Starting point is 00:15:08 Woo! It's everybody's favorite word or acronym. That's when I pulled out EOP because for some reason it just reminded me of, well, I guess it's not shocking, but it reminded me of this part of EOP where we're stepping up talks about orbits. Are you familiar of this part of EOP where we're stepping up talks about orbits. Are you familiar with this part of EOP? I think it's like the second chapter. Nope.
Starting point is 00:15:32 I don't think I ever got past. I'm not even sure if I got past the first chapter. But wait, hold that thought. I'm going to get a beverage. I'll be back in two seconds. Oh, that was quick. That was actually two seconds. All right.
Starting point is 00:15:43 Orbits, EOP, go. Chapter two, apparently. All right. Orbits EOP. Go. Chapter two, apparently. Well, how many chapters did Zach say we're supposed to read? Because I thought I had read, I think maybe it's just the first chapter. He said like the first chapter is like 80% of the value. So maybe I didn't make it to chapter two. So chapter two, and I've always thought that it's an interesting topic for chapter two. Chapter two talks about transformations and their orbits. And when it says transformations, it's not, I think, the type of transformation that you would think, which would, when I say transformation in the context of Stepanov, what do you think? To transform? Right. Which is taking a function and applying it to all elements of some sequence.
Starting point is 00:16:33 What every other language, actually not every, what almost every other language calls map. Some languages call it each. My guess is that maybe Stepanov would also call it math because when he talks about transformations in EOP, that's not what he means. In EOP, a transformation is taking a function, a function that's a homogeneous function. So it's a function that on some particular type T where it takes in a single T and it produces a single T. And a transformation is what happens when you take one of these functions and you apply it repetitively in succession. So like you do like, you know, f of f of f of f of f of x
Starting point is 00:17:31 or f to the nth power of x. Make sense? Yep. And so like in the case of if your operation is multiplication, then literally taking the nth transformation would be taking the number to its nth power. And if your operation is sum,
Starting point is 00:17:59 then taking it to the inf uh uh transformation would be uh multiplying the number by you know n times um and uh so that's the notion of transformations and then he talks about orbits. And an orbit is a structure to a transformation. So a transformation has an orbit. Let me see if I can get this right. Its orbits are the elements that are reachable from a starting element by repeated application of the transformation. So, y is reachable from x under um so like if you can if by it makes sense by calling f of you know repeatedly on x if you can get to y then uh uh that's an orbit um so for for well well actually here's a quick question does this transformation only apply to binary operations,
Starting point is 00:19:28 or is this also applicable to unary operations? So, like, in the examples you use, like the transformation of repeated pluses leads you to multiplication, the transformation of repeated multiplies leads you to power. Those are two different binary operations that lead to a different binary operation. Is the same concept applicable to the plus one function, a partially applied binary operation, which is a different binary operation, is the same concept applicable to the plus one function, a partially applied binary operation, which is a unary operation,
Starting point is 00:19:50 in which case, if you're starting from any value, plus one will get you to all the integer values. All right, technically, if you're starting at one, then it's technically natural numbers. So he, when he's talking about transformations, he talks about, he says that it can be applied to in-area operations, but that mostly we care about unary and binary. So I think it's meant to be applicable to both unary and binary. Although he defines transformations as being something on a unary function. So my examples of addition and multiplication may have been poor ones. I guess what he means when he talks about,
Starting point is 00:20:38 he gives sum and product as examples here. The transformation of sum or product would mean like f of x would be x plus x or x times x. So it'd be a unary operation, not a binary operation. I see. Makes sense. In which case, what I said before about it, about what you get out of sum, it wouldn't be multiplication, right? Because f of x, if it's sum, would be... So I think this is classically... Ben and I chat about this all the time,
Starting point is 00:21:21 that Haskell has this function called iterate, which is the repeated application of a unary function to a starting value. And IOTA is a specialization of this, where IOTA's unary function is plus one. And you give it a starting value, and then it just generates you a sequence of however many times you want to apply that function. But you can generalize this to any unary operation if you want to create all the powers of two you can create a yeah multiply by two unary function and just start with one and then you end up with one two four eight sixteen etc exactly uh and this concept is called different things in different languages but uh i think yeah and haskell is called iterate and that's where we
Starting point is 00:22:03 will which is interesting because iterate is is – it's an interesting name. Stepinov calls it Transform. Haskell calls it Iterate. Huh. Yes, I think that's exactly the same idea. But, yes, some of what I said before may have been not quite correct. But it's certainly – when he's talking about it here, he means it as a unary operation, a unary power operation.
Starting point is 00:22:30 So we explained what orbits are. And then there's a notion of an orbit can cyclic if there is some n such that x equals f of n of x. So if at some n, some number of applications of your function, you get back the original x, then you've got a cycle. Yeah, makes sense. The easiest thing you could think of is modify your plus one unary operation with another unary operation of modulus 10. Right.
Starting point is 00:23:14 So if you start at zero, you'll go up to nine. And then when you hit 10, you'll shoot back down to zero, which is obviously cyclic. And then there's also a notion of terminal transformations, which is it's terminal if at some point you reach an element that is not in the domain. So at some point the sequence ends. And so if it's not terminal and not cyclic, then it's infinite. So he defines four different types of shapes. The first is infinite if there's no cyclic or terminal elements.
Starting point is 00:23:57 Then there's terminating, which is if it has a terminal element. And then it's circular if it's cyclic, and then it is p-shaped if x is not cyclic, but its orbit contains a cyclic element. So think about the p as like there's some starting sequence that you never repeat, but then you start at like zero and then you go to five. And then there's a cycle from like five to ten. Make sense? Like if you start iterating the orbit and then eventually you enter a cycle. So you won't repeat everything, but you eventually could go into a cycle and that's what p-shaped should be okay does that make sense
Starting point is 00:24:54 there's a yeah there's there are diagrams hopefully very nice very nice and um and uh then he talks about some algorithms for determining the cyclic structure, determining where the collision points would be, where the collision point is a point at which you have two different numbers of invocations that give you the same value. So basically, this is how you detect cycles or the length of a cycle. So it's like, you know, if there's, is there some F to the nth power that is equal to some F to the, you know, nth power? Or he says F to the 2n plus 1 power. And then he goes into some algorithms for measuring, for finding collision points and for measuring the size of orbits. And it's a very interesting part of the book and it comes up in a couple later parts of the book. I think it also comes up in the bifurcate coordinates section, which is chapter 18, which I know well because when I was in college, I did an independent study class with Hartman where we – no, sorry.
Starting point is 00:26:38 Chapter 7 is coordinate structures, and I did an independent study class with Hartman where we focused on coordinate structures. And that's what made me remember some of these algorithms. But I was thinking about this notion of transformations and orbits and cycles and whether there's any relation to graph algorithms and cycles and cycle detection and graph algorithms. And I don't really know what the applications of this field of transformations and orbits is. I imagine that random number generators and things like generation functions like IOTA, et cetera,
Starting point is 00:27:27 are one such case. But I don't know what the other applications would be. But reading this first book, this algorithms book, got me thinking about whether this part of EOP has some relation to graph algorithms. Well, this is just another reminder that I need to go read EOP to have something to say here. I mean, this is technically part three of this conversation, which is not going to come out before next Monday, but I may. It is penciled in, although I probably should have announced it by now. It is penciled in that potentially the first meetup for the from mathematics to generic programming book. I'm reviving the programming languages virtual meetup, which has been dormant for over a year. Technically, Leslie was running it for, I want to say, a year, year and a half while I was taking a break.
Starting point is 00:28:22 Anyways, and that is the sibling book to EOP. But yeah, I mean, it definitely sounds like there's definitely some, what do you call it, similarities between those two things. Do you have like further thoughts other than when you were reading the algorithm or the graph chapter that it was making you think of EOP? And that's what led you to reaching for it? That's sort of the reach. That's sort of the point where my reading reached that I was like, hmm, I remember this section of EOP and maybe it's got some relevance. Yeah, that was pretty much it i i mean i suppose i i'm just curious as to what the application of these stepanov transformations and orbits are like why why is this chapter two in his book um and i know it comes up in some of the later chapters um and maybe maybe that's the answer and I just need to reread the whole book. But yeah.
Starting point is 00:29:26 I guess the thing that but the thing is that when he talks about detecting cycles and measuring the length of cycles he's talking about it on not on sequences or on graphs, but he's talking about it on one of these transformations, which is just a function that you call on, you know, one particular and I mean that's useful but is that something that can generalize
Starting point is 00:30:12 to like I have a graph or I have some not just like a function but like I have some data you know some structure some data structure I think that's usually where we want to do cycle detection is some structured data, not on a function itself.
Starting point is 00:30:31 And so really, that's the question that I had when I was reading the other book was that I'd remembered this part of the Stepanov book talking about about these techniques for finding cycles and classifying the shape and structure of applying a unary function to some, a homogeneous unary function to some input. And I'd never thought before that, huh, maybe that generalizes to finding cycles in data structures or things like that. And I'm not sure that it does, but maybe it does. Yeah, maybe. it does yeah maybe that may maybe if maybe if you have an algorithm to detect a cycle and some you know uh uh like function f of x like imagine that i'm trying to detect like find a cycle like in a list maybe my function could just be uh you know like advanced to the next element in the list?
Starting point is 00:31:46 Yeah, I guess it could, right? Like, imagine that my... Well, so that's, if you're trying to compute some kind of, like, reduction on some iterate sequence, if you detect a cycle, you can just convert it
Starting point is 00:32:00 into some kind of, not constant time operation, but it's constant time once you find the cycle, right? Well, so I guess here's one of the applications of it. Imagine that my input is some iterator and that my transformation function just calls next on the iterator,
Starting point is 00:32:23 just advances the iterator right so then i can use these step enough algorithms for detecting collision points and detecting whether something is circular and whether something is a cycle i can use that to see if there is a cycle in my whatever thing it is that I'm iterating. Right. Yeah. Yeah. That makes sense.
Starting point is 00:32:51 So then there's the usefulness of this. I guess I must have – maybe that's something that's obvious. It doesn't seem particularly insightful. And maybe it was just so obvious that he didn't explain that particular application or maybe it comes up in later chapters and i'm just not remembering that but it seems like this generalization of the cycle detection to just an arbitrary function it gives you the most general form of this yeah he really he really was very brilliant in his ability to get to the most generic form of something. Speaking of meetups, speaking of meetups, and this is a good segue slash closing. Well, I was going to say part four, speaking of meetups, and then we're going to get some 2025 hot takes.
Starting point is 00:33:42 I mean, technically, the time this comes out, it's going to be like mid-February. Be sure to check these show notes either in your podcast app or at ADSPthepodcast.com for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion where you can leave thoughts,
Starting point is 00:33:54 comments, and questions. Thanks for listening. We hope you enjoyed and have a great day. Low quality, high quantity. That is the tagline of our podcast. It's not the tagline. Our tagline is
Starting point is 00:34:06 chaos with sprinkles of information.
