Algorithms + Data Structures = Programs - Episode 281: From Hylomorphisms to Boost Ranges to Jello

Episode Date: April 10, 2026

In this episode, Conor and Ben chat about Haskell deforestation, hylomorphisms, Boost Ranges, Jello and more!

Link to Episode 281 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: LinkTree / Bio
Ben Deane: Twitter | BlueSky

Show Notes
Date Recorded: 2026-03-30
Date Released: 2026-04-10
Deforestation
Pearls of Functional Algorithm Design
Algebraic Identities for Program Calculation (1989)
The Algebra of Programming (1996)
Kadane's Algorithm
A Short Cut to Deforestation (1993)
Hylomorphism
Stepanov's "Notes on Higher Order Programming in Scheme"
Boost Ranges
Jello

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

Transcript
Starting point is 00:00:00 Things like fold fusion. And it's called deforestation because of the process of removing trees. And he, using these algebraic laws, transforms that program into a linear Kadane's algorithm solution. And in the midst of that paper, he mentioned something called the fold-scan fusion law. Right. So the anamorphism is the opposite of a fold. Where you're going from a seed value, it's an unfold. Yeah, an unfold.
Starting point is 00:00:28 Going from a seed value to a sequence, let's say. And catamorphism is the generalization of a fold, going from a sequence to a summation value. And so the hylomorphism is where you fuse them both together. You don't need to generate the sequence in the middle, as it were. Welcome to ADSP: The Podcast, episode 281, recorded on March 30th, 2026. My name is Conor, and today with my co-host, Ben, we chat about Haskell deforestation, hylomorphisms, Boost Ranges, Jello, and more.
Starting point is 00:01:13 I mean, I sent you some topics of, like, a rabbit hole that I kind of... Yeah, he sent me some cryptic topics, I have to say. Algebra laws, BMF, Haskell deforestation. Are you familiar with Haskell deforestation? Does that mean anything? I'm not familiar with any of these things, really. BMF, I looked up, is that... Well, I'm assuming it's... Is that Bird-Meertens formalization? Yes. I think formalism, if I'm...
Starting point is 00:01:42 Formalism? Sure. And so, I mean, there were two different... let me open... And this is the process of algorithmic transformation of programs, typically inside a functional language compiler. Similar to deforestation, so it ties in with that. So wait, so you...
Starting point is 00:02:01 Which is the same idea. You weren't familiar with deforestation, but then you just referred to it. So have you heard of it before, or... I've heard of it. I know almost nothing about it, but having read half a Wikipedia page, I can string together a few sentences. Yeah, let me, what is the... So I'm trying to find the paper.
Starting point is 00:02:24 So I gather that it's related to things like fold fusion. And it's called deforestation because of the process of removing trees and simplifying tree structures. Is that why it's called that? Because honestly, I haven't actually gone and read the paper yet. Well, I have to think it's a Philip Wadler thing. Philip Wadler used that expression in a paper on deforestation. Yeah, it was, because that's the thing is, I have too many now of these deep-dive papers that I've had, you know, these research engines go and build.
Starting point is 00:02:56 Yeah. I'm trying to... Well, this touches on, I'll let you search. Well, I'm just looking for, because I have these three different papers, and all of them have to do with Richard Bird's algebra laws. So he's the author of The Algebra of Programming, which I have not read yet, but I know that you've put it on my radar. So I assume that you've read or at least perused it at some point in time. I have. Now, if you're going to put me on the spot, I can't remember what's in it.
Starting point is 00:03:30 But I have read it. Yes. Richard Bird also did a couple of other books. I'm thinking, is Thinking Functionally with Haskell, is that Richard Bird? The tiger book? Is the tiger book the Algorithm Design with Haskell one?
Starting point is 00:03:46 That was written by Bird and co-authored by Jeremy Gibbons. Give me two seconds here. Okay. So here are the books I was thinking of. The tiger book is Thinking Functionally with Haskell. Okay. So there's two tiger books. Well, the other one is a lion book.
Starting point is 00:04:09 Oh, it's a lion book. Algorithm Design with Haskell. That's the one you just said, Richard Bird and Jeremy Gibbons. I actually have, not The Algebra of Programming, but The Fun of Programming, by more of the usual suspects: Jeremy Gibbons and Oege de Moor. Probably mispronouncing that, sorry, it's probably Dutch. And this book is also one I was thinking of,
Starting point is 00:04:34 Pearls of Functional Algorithm Design. This is a fantastic book. Really good. We've got to put that on my list. My ever-growing list. That's the risk of talking to Ben, folks. Don't think he's going to help you get through your reading list.
Starting point is 00:04:50 He's only going to add to it. I literally have books sitting on my shelf that I purchased years ago. I still have The Annotated Alice in Wonderland, and I tell myself every Christmas, because, I don't know why, but it seems like Christmas is the right time, you've got a few days off
Starting point is 00:05:04 to read the book. Still have not. It's been sitting on my shelf. It's a beautiful book. Oh, you should read it. It's great. I know. Every time I mention it, you say the same thing. He's like, ah, you've got to put that at the top of your list.
Starting point is 00:05:15 And I'm actually listening to an audiobook that Phineas Porter, a past guest on ADSP, recommended. He recommended me a book called The Fund, which is a book about the myth of Ray Dalio, and kind of his... I don't actually know much about Ray Dalio, other than he's got Bridgewater, et cetera. Anyways, he told me to read that book like a month and a half... or, a year and a half ago, and I immediately purchased it, and I'm only just getting around to reading it. There's too little time in the day. We need, whatever, Dr. Strange's ability to astral project and read stuff while you sleep. Well, I recently reread The Annotated Alice, and I'm probably going to read it again in the next month, because the talk I'm giving at C++Now is the Plato, McGrath, Sartre, and Carroll talk. So you're doing research. You're doing research for
Starting point is 00:06:10 your upcoming talk. It's one of the best ways, if you really want to force yourself to do something, to sign up to give a talk. There you go. You've already read the book. So, all right. The Pearls of Functional Programming by Richard Bird is now on my list again. I did, while you were...
Starting point is 00:06:24 Pearls of Functional Algorithm Design. Pearls of... functional algorithm design. Oh, there we go. Google knew what I was looking for. And is that a book that's in... So first of all, we've got to figure out what year. It is copyright 2010, which means... It's probably the Haskell phase.
Starting point is 00:06:44 Yeah, looks like it. Yes. It looks like the code in here is Haskell. Yeah, I've got like a little Google preview. And I'm looking at zip, filter, repeat. And so what's curious, I mean, how did I end up going down this rabbit hole? I was doing some work-related stuff, and I stumbled across some stuff that made me think of a paper that my boss put on my radar, like, two or three years ago, in 2020.
Starting point is 00:07:12 And it was a paper called, the 1989 "Algebraic Identities for Program Calculation." And it's only like a seven or eight page paper, I want to say. Okay. It is... Who's the author? Richard Bird. And I didn't actually realize, when my boss put the paper on my radar,
Starting point is 00:07:32 that it was written by the same individual that had written The Algebra of Programming, because that's what I most closely associated Bird with at the time. So I haven't read any of Bird's textbooks, but I am aware of them; The Algebra of Programming is at the top of my list of books to read. And I go back to this paper, and it's a really, really fascinating... I would probably put it in my top five papers that I've read, along with Fractal Forms by Ken Iverson and a couple others.
Starting point is 00:08:00 And it walks through Kadane's algorithm, maximum contiguous subarray sum. And he shows a program transformation from the max of the sum of segments. I think it's max of mapping the sum over segments, where segments are every single possible contiguous subarray. So it's like a cubic algorithm, you know, not very efficient. And he, using these algebraic laws, transforms that program into
Starting point is 00:08:33 the linear Kadane's algorithm solution. And in the midst of that paper, he mentioned something called the fold-scan fusion law, which is that if you have a scan succeeded by a fold, you can merge those two, essentially, and basically end up with a single fold, and therefore you don't have to materialize the result of the scan. Yeah. Which is great, because it's still linear in time complexity, but now it's constant in space complexity, which is a fantastic improvement.
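[For reference: a Haskell sketch of the derivation being described, reconstructing the endpoints rather than Bird's actual notation; the function names are invented for illustration.]

```haskell
import Data.List (inits, tails)

-- The specification: maximum over the sums of all contiguous segments.
-- Cubic-ish work, and it materializes every segment.
mssSpec :: [Int] -> Int
mssSpec = maximum . map sum . segments
  where segments = concatMap inits . tails

-- Partway through Bird's calculation: linear time, but the scanl still
-- materializes a list of running results.
mssScan :: [Int] -> Int
mssScan = maximum . scanl (\acc x -> max 0 (acc + x)) 0

-- Fold-scan fusion: fuse `maximum . scanl f e` into a single fold that
-- carries (running value, best so far). Linear time, constant space:
-- Kadane's algorithm.
mssKadane :: [Int] -> Int
mssKadane = snd . foldl step (0, 0)
  where step (acc, best) x =
          let acc' = max 0 (acc + x)
          in  (acc', max best acc')
```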
Starting point is 00:09:09 Very nice. And I, you know, thought it was a cool paper, but I didn't really think too much more of it. But somewhere in the paper, he says, you know, this is just one of many laws. And at the time I was like, well, I don't have time to go on some deep dive. But now, thanks to these little research engines we have, this time looking at it, I went and did like a little research run. And it came up with a bunch of stuff, which is like the history of this kind of, not necessarily program transformation... So that's what the BMF is, the Bird-Meertens formalism,
Starting point is 00:09:39 and then later on this notation nicknamed Squiggol. It's all about, like, stating these algebraic laws, some that are as simple as, like, the last element of a scan is equal to the equivalent reduction. Right. And some, you know, some of them are even simpler. Like, the first element of some kind of mapping operation can just be the function applied to the first element, or something like that. Some of the algebraic laws almost seem obvious. Yeah. Oftentimes that's the case, of course. Yeah, but you need those obvious laws to be a part of your algebra in order to do, like, the full transformations and whatnot.
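[The two laws just mentioned, written out as Haskell equalities; the property names are made up for illustration.]

```haskell
-- Law 1: the last element of a scan is the equivalent reduction.
--   last (scanl f z xs) == foldl f z xs
scanFoldLaw :: Eq b => (b -> a -> b) -> b -> [a] -> Bool
scanFoldLaw f z xs = last (scanl f z xs) == foldl f z xs

-- Law 2: the first element of a map is the function applied to the
-- first element, so you never need to perform the whole map.
--   head (map f xs) == f (head xs)   -- for non-empty xs
mapHeadLaw :: Eq b => (a -> b) -> [a] -> Bool
mapHeadLaw f xs = null xs || head (map f xs) == f (head xs)
```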
Starting point is 00:10:16 But I am less interested in program transformations per se than I am in the idea of just building, like, libraries or languages, in the case of Haskell, where you can automatically avoid these materializations, which essentially is what, like, ranges in C++ is, right? Like, you're smirking, you've got something to say. No, what confused me was the way you just said that. You said, I'm less interested in program transformations than in making these libraries that do program transformations. So I sort of think those are the same thing, what you just said, right?
Starting point is 00:10:59 Is it the same thing, though? I mean, that's a great question. Maybe it is the same thing. Because is the implementation of a library like ranges in C++, or iterators in Rust, or streams in Java, is that the same thing as programmatically transforming one program into a more efficient program? Well, I think in the best case, it is the same. There are a few examples of that in ranges, I think, and I don't think they are at all formalized.
Starting point is 00:11:36 I don't think they really use program transformation formalisms or laws. I think they are just sort of ad hoc empirical observations in most cases. Things like if you do a reverse view of a reverse view, then what you get back is the original view unwrapped. Right? There's no need to wrap it twice because that double wrapping is the same as if you hadn't wrapped it. Mm-hmm. So wait, what did you say there? You said that it was an ad hoc observation.
Starting point is 00:12:08 What was the... My feeling is that these things are just ad hoc sort of things, as they're empirically observed. You know, I could be wrong, but I don't think that implementers or API designers in C++ oftentimes sit down with the background of these formalisms in their mind. They put together useful algorithms, algorithms that have proved to be useful in their day-to-day life and what people have asked for,
Starting point is 00:12:42 and they see opportunities for optimizations, right? A lot of times when we program C++ in particular, we're always thinking about how can we make this fast. And so we tend to see empirically opportunities for optimizations. In my experience, we don't often get to that point coming from formalisms. That's what I'm saying. I mean, and that's exactly what we're talking about, folks.
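[For contrast, when such a law is formalized, it can look like this: GHC lets a library state the reverse-of-reverse identity as a rewrite rule the optimizer applies wherever the pattern appears. A generic Haskell illustration, not code from any ranges library; strictly, the rule is only valid for finite lists.]

```haskell
module ReverseLaw where

-- Whenever the optimizer sees `reverse (reverse xs)`, rewrite it to `xs`,
-- eliminating both traversals. Rules like this only fire with -O.
{-# RULES
"reverse/reverse"  forall xs.  reverse (reverse xs) = xs
  #-}
```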
Starting point is 00:13:06 I mean, I'm thinking, my gears are spinning. They're turning in my head. And I guess that is what I've become very curious about. And we kind of skipped a couple, or I don't know if we skipped or we just went on tangents and now we're coming back. But, you know, we mentioned Haskell deforestation, which is a term that I had never heard of. And the paper that was put on my radar by the, you know, research engines is "A Short Cut to Deforestation," 1993, by Gill, Launchbury, and Peyton Jones.
Starting point is 00:13:38 And the little summary of it is basically, it introduces this kind of foldr, which consumes and deconstructs a list, and then build, which is like a, you know, generator or a producer. You know, there's a bunch of different names for these things over time. But in this 1993 paper, they're calling it the foldr/build rule. Whereas if you, like, chain these things together, you can basically, you know... it's like, if you're summing up an iota sequence, you don't actually have to build it. Okay. This is like, this is fusing together an anamorphism and a catamorphism. Exactly.
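[A minimal sketch of the combinators from the 1993 paper; `upto` is a hypothetical iota-style producer written in build form.]

```haskell
{-# LANGUAGE RankNTypes #-}

-- build abstracts a list producer over the "cons" and "nil" it uses.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- An iota sequence [1..n], written as a build:
upto :: Int -> [Int]
upto n = build (\cons nil ->
  let go i | i > n     = nil
           | otherwise = cons i (go (i + 1))
  in  go 1)

-- The foldr/build rule:  foldr k z (build g)  =  g k z
sumUpto :: Int -> Int
sumUpto n = foldr (+) 0 (upto n)
-- Applying the rule by hand: sumUpto n becomes the loop `go 1` with
-- cons = (+) and nil = 0, so the list never exists. GHC ships exactly
-- this rewrite rule for the foldr/build in GHC.Base, which is how
-- list fusion works in practice.
```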
Starting point is 00:14:13 If we were to go to that terminology. Yes. And it's so crazy sometimes how this stuff works, because I was talking to... Is that called a hylomorphism? Yes, yes, hylomorphism. And you actually put this on my radar at one point, and I recall you explaining it to me, and I maybe nodded my head. And I was like, ah, that makes sense. But it's one of those times where you're kind of squinting and being like, I mean, in theory, it makes sense. But let's hope you don't expect me to explain it to anybody else, because I didn't really understand it that well. But yes, hylomorphism is the combination of an anamorphism and a catamorphism.
Starting point is 00:14:31 Right. So the anamorphism is, what's the word, the opposite of a fold? Where you're going from a seed value. It's an unfold. Yeah, an unfold.
Starting point is 00:14:54 Going from a seed value to a sequence, let's say. Yeah. And catamorphism is the generalization of a fold, going from a sequence to a summation value. Yeah. And so the hylomorphism is where you fuse them both together. You don't need to generate the sequence in the middle, as it were, and you go to O(1) space, like you were just saying.
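[In code: a list-based sketch of what Ben describes. The categorical definition is more general, but the shape is the same; names here are illustrative.]

```haskell
import Data.List (unfoldr)

-- Anamorphism then catamorphism: unfold a seed into a list, fold it
-- away. This version materializes [n, n-1 .. 1] in the middle.
factorial :: Integer -> Integer
factorial = foldr (*) 1 . unfoldr coalg
  where coalg 0 = Nothing
        coalg k = Just (k, k - 1)

-- The hylomorphism fuses the two: same structure, no intermediate list.
hylo :: (b -> Maybe (a, b)) -> (a -> c -> c) -> c -> b -> c
hylo coalg alg z = go
  where go b = case coalg b of
                 Nothing      -> z
                 Just (a, b') -> alg a (go b')

factorial' :: Integer -> Integer
factorial' = hylo coalg (*) 1
  where coalg 0 = Nothing
        coalg k = Just (k, k - 1)
```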
Starting point is 00:15:15 Yeah. And so, I mean, this is what's kind of interesting: it's 2026 right now. This paper was written in the 90s, but it's crazy to think that if we go back 50 years ago, people didn't have terminology for talking about this stuff. And, like, at a certain point... like, I remember reading, or not reading, but perusing Alexander Stepanov's papers on his, you know, papers.html page. And I think it's the "Notes on Higher Order Programming in Scheme," or something like that.
Starting point is 00:15:44 And on page, like, 86, he's got an implementation in Scheme of reduce. And he's got a little comment that says, like, you know, reduce comes from APL. But just the idea that, oh, this reduction operation is a thing that we need to name, and it's, like, a higher order function. Right. It seems like, what are we talking about? Like, that's obvious. Every language I've ever used has, in one form or another, a reduce function. But, you know, Alexander Stepanov was naming this and, like, pointing at APL.
Starting point is 00:16:13 You're 15 to 20 years younger than me, Conor, right? So the same is not... like, there's no reduce in BASIC. There's no... as far as I know. I guess that's true. There's no reduce in Pascal. There's no reduce in the languages that I learned as a child, as a teenager. Is there a reduce in Fortran? I don't know. I mean, Fortran is pretty bare-bones, at least Fortran 77 as I learned it
Starting point is 00:16:41 back in the day. So yeah, I guess it depends on when you started your programming journey. But I mean, anyways, this is just, like, a side note: I'm reading this kind of history of papers that started in the 90s, of this idea of, you know, call it what you will: anamorphism plus catamorphism equals hylomorphism. This paper's calling it the foldr/build rule. Yeah, in my everyday life, I don't go around throwing these highfalutin Greek-derived terms around. Hylomorphism, that's the one I remember, because of the sort of mnemonic of going
Starting point is 00:17:18 high, then going low, sort of expanding out and contracting back. Yeah. I mean, so the thing is, I agree, very highfalutin words, but we need a way
Starting point is 00:17:37 to talk about this stuff, right? Exactly. When you're defining... like Dijkstra said, the purpose of abstraction is to create a new level on which we can be absolutely precise. It's not trying to be fuzzy about these things. If you are working in that space, you need the terms to work with and be precise. Yeah. And so we have the foldr/build rule from this paper, but then there's another paper that was published in 2007, called Stream Fusion:
Starting point is 00:17:58 From Lists to Streams to Nothing at All, that builds on the foldr/build framework, but then adds other things to the fusion, quote-unquote, rules, like zips, concatMaps, take and drop, et cetera. And so it seems that there's, like, a history of papers showing that you can do more than just avoid materialization and reduce your memory footprint with just, you know, build and fold; you know, maps you can fuse together.
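[A toy version of the Stream representation from that 2007 paper; the names are approximate and the real library is more elaborate. Every operation is a non-recursive step function over a hidden state, so whole pipelines fuse into a single loop, and the Skip constructor is what lets filter, take, and drop join the fusion framework.]

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- One step of a stream: finished, a state change only, or an element.
data Step s a = Done | Skip s | Yield a s

-- A stream is a step function plus an existentially hidden state.
data Stream a = forall s. Stream (s -> Step s a) s

stream :: [a] -> Stream a
stream = Stream next
  where next []       = Done
        next (x : xs) = Yield x xs

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield x s' -> Yield (f x) s'

filterS :: (a -> Bool) -> Stream a -> Stream a
filterS p (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done                    -> Done
          Skip s'                 -> Skip s'
          Yield x s' | p x        -> Yield x s'
                     | otherwise  -> Skip s'

sumS :: Num a => Stream a -> a
sumS (Stream next s0) = go 0 s0
  where go acc s = case next s of
          Done       -> acc
          Skip s'    -> go acc s'
          Yield x s' -> go (acc + x) s'

-- sumS (mapS (* 2) (filterS even (stream xs))) runs as one loop over
-- xs: no intermediate lists are ever allocated.
```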
Starting point is 00:18:26 You know, my favorite, the zip-tail trick that many people have used, including Michael Caisse, I think, Odin Holmes, Ivan Čukić, I always mess up the pronunciation of his name, but, you know, author of Functional Programming in C++. A lot of these, you can avoid materialization by doing these kinds of, quote unquote, lazy operations. Anyways, I've just been trying to sort all this stuff out in my head, because you're saying that, yeah, a lot of people that implement this stuff, they're not going to some set of academic papers and being like, here is an algebra that I can then implement, and here are all the lazy operations. Well, I think in some ways we might come out with better outcomes if people did that more. But my point is they're not working in that highly formal way; they're not mathematicians, right? They're not working mathematicians. They're working programmers. They're working library writers. Their first job is to
Starting point is 00:19:22 produce something that's usable, performs well, and fulfills use cases. Right. So their focus is not necessarily on using formalisms to achieve that. Now, I think at some level it behooves us to recognize what we're doing, right? This is what I always think. The process of writing code, and producing nice code, producing beautiful code, is in many ways recognizing what it is you're actually doing. And sometimes, you know, sometimes the mathematics helps with that. Yeah. It's like we said before: you can be a good baker without studying chemistry.
Starting point is 00:20:02 Yeah. Right? But if you are a good baker, odds are you kind of know chemistry, at least as far as it relates to baking. And studying chemistry can make you a better baker. So I think that, you know, that same analogy applies to programming and this kind of mathematics. Yeah. I mean, I feel like I'm a bad baker and a bad chemist, but I've got a lot of thoughts on, like, the chemistry. Well, I live at altitude, so baking is a little bit trickier usually.
Starting point is 00:20:31 I've had some bakes come out poorly. Really? Altitude makes a difference? It certainly does. Oh, well, that's good to know. I guess I live at roughly sea level, so I guess I have it as easy as it gets then. Well, yes, most recipes you find are designed for sea level baking. Altitude requires some adjustment sometimes.
Starting point is 00:20:52 Really? I had no idea. I had no idea. Anyways, all of this is to say that I feel like there's, I don't know, there's a through line here from these, you know, academic papers and formalisms and algebras and calculi, whatever you want to call them... which, like I said, I'm not a trained mathematician, and I'm doing my best to read the academese and parse this stuff. But then the other day I actually was trying to figure out, like, who was the original implementer of Boost Range v1? I know there's a Boost Ranges 2.0. Because v3 is the one we all know.
Starting point is 00:21:31 Yeah. So did Eric implement v1 and v2 before v3? I don't know. No, I don't think he did. I mean, if you go to Boost Ranges, if you Google it, it comes up with 2.0, and then it says copyright Thorsten Ottosen and Neil Groves. Those names are not familiar to me. But then also, is 2.0 an evolution of 1.0, which they wrote, or is it the same thing that happened with v3? And it has me very curious: at what point did the... like, you know, and you can tell me, you've been programming in C++
Starting point is 00:22:05 much longer, and you've also been doing high performance stuff, because you, you know, worked at gaming companies before and high frequency firms before. Is the idea of operating on, you know, what in Boost Ranges were called fancy iterators, and then kind of evolved to be views in range-v3, is this, like, a modern thing? Because I know SICP had the idea of streams, and those existed in Lisp land and Scheme land decades ago. Yeah, because Lisp had closures, and if you have closures, you can make lazy streams. Yeah. And so is that, like, the genesis of... because there seems to be, like, definitely there's an academic paper trail of people writing about this stuff and discovering this stuff, and then we live in 2026 today, where all modern languages have versions of these kinds of stream-like, fusion-like libraries. And, uh, anyways,
Starting point is 00:23:00 I'm just very curious, like, where did it all start? And at what point did people start realizing that you can do these kinds of lazy things? Did it come from the functional programming influences that languages were adopting? Or does nobody know? And I'm just doing, like, what do you call it? Programming language archaeology.
Starting point is 00:23:29 Well, programming language archaeology is a lot of fun, of course. And, you know, does nobody know? I think, well, maybe the question as stated isn't worth answering. But we can observe what's happened, right? Which is the history of programming languages, and in particular in the last 20 years; depending on which language you use in your day job, you know, you will have noticed the influence sooner or later.
Starting point is 00:23:54 But functional languages have for sure been influencing mainstream languages, if you like, for at least 20 years, probably a little bit longer, you know. And so, you know, back in the earlier days, we had Fortran, we had Lisp; those were two very different takes on computation, right? Fortran embodying the sort of von Neumann machine architecture,
Starting point is 00:24:21 Lisp embodying the lambda calculus. We go forward a little bit, and we can sort of, if we squint, think of those as the pragmatic camp and the mathematical camp; the camp, you know, which is concerned with what hardware does, and the camp which is concerned with maybe more mathematical ideas. That's probably a very unfair generalization
Starting point is 00:24:49 because, you know, it's not like the people who make programming languages are in general not well versed in the mathematics, right? It's not like the folks who made Fortran were not mathematicians. They absolutely were mathematicians. But, you know, then we move through the 70s, and we see C and we see ML, right, 1973. And then as we move into the 80s and beyond, we see a proliferation of the mainstream, sort of C-derived languages. And then we see the web, and we see things coming in from the functional side.
Starting point is 00:25:26 You know, JavaScript at its core has a lot of things in common with a Lisp, right? So it pulls from both camps, if you like. And then it's very, you know, in the last 10, 15, 20 years, very in vogue for the mainstream... what we might have considered, you know, C++ until 2000, maybe until 2005 even, we would have put squarely in the machine-sympathetic camp, right. But now C++, certainly for the last decade plus, 15-plus years even, has had a lot of influence from the functional side, you know. You go to any C++ conference, you'll hear people talking about monads. And, you know, folks who work in functional languages are like, you know, it's nice that
Starting point is 00:26:18 kids are learning about these elementary concepts now. Introducing lambdas. You've never heard of anything like it. Exactly. C++ only got lambdas in 2011, in C++11. Mind you, C++11 is now further away than C++98 was from C++11, right? C++11 was over half a standard lifetime ago.
Starting point is 00:26:45 Oh, wow, yeah. But, you know, the languages that I work in, have worked in for my entire career, I'm happy to see are getting more influence from the functional side of things. Because that is, you know... and generic programming is very functional in nature as well, as a paradigm.
Starting point is 00:27:06 Because that seems to be the best way we've found so far to achieve actual useful composition, to achieve actual useful reasonability of the code. And it's the declarative functional style that gives us that. So I guess in that story, it comes from the functional influences. I guess. I mean, I probably cut tons out of that story.
Starting point is 00:27:34 you know, there are lots of languages along the way that... in a way, all languages sort of influence each other. And so I didn't mention, you know, some very influential languages, which are not... Smalltalkers out there going, no, you didn't mention Smalltalk. There's languages like Self and CLU, and there's languages like Ada. Yeah. You know, that pioneered what we might think of as aspects of object orientation or aspects of generic programming. Yeah. Yeah, I've got to continue doing my archaeology, because I really feel like there's something here, you know.
Starting point is 00:28:13 I mean, what is my goal at the end of the day? I work for a company that runs programs on a different architecture, you know, GPUs. And a lot of this literature that's been written about algebraic laws and program transformation is implicitly designed for CPUs, and the quote-unquote fusion laws that are possible on a CPU are not the same as the laws that are possible on a GPU. Okay. The fold-scan fusion law, I mean, there's a version of it that can work, but scans are not parallelizable the same way that folds are. A fold can be parallelized essentially in the same kind of way that a CPU would do it. Yeah.
Starting point is 00:28:56 A GPU would do it the same way. But due to the way that a scan works... Because the scan has the data dependency. Exactly. Whereas with the fold, if you have just an associative operation, you can execute it in any order. Exactly. And so that is a small difference if you're computing, quote unquote,
Starting point is 00:29:16 you know, left to right sequentially. But it's a massive difference if you're trying to parallelize things. And I think the, quote unquote, what did you say? You called it ad hoc empirical observations? They become more difficult, you know. Observing that you can chain maps together and, you know, avoid materializing stuff, I think, is pretty straightforward and obvious. I don't know. Maybe not for everybody.
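[The associativity point, as a sketch. The chunking here runs sequentially, but each inner fold could run on its own core or GPU block; chunkedSum and chunksOf are illustrative names.]

```haskell
import Data.List (foldl')

-- Because (+) is associative, we may regroup freely: fold each chunk
-- independently, then fold the partial results. A scan cannot be split
-- this way for free, since element i depends on element i-1.
-- (Parallel scans do exist, but they cost extra passes and work.)
chunkedSum :: Int -> [Int] -> Int
chunkedSum n = foldl' (+) 0 . map (foldl' (+) 0) . chunksOf n
  where
    chunksOf _ [] = []
    chunksOf k xs = let (h, t) = splitAt k xs
                    in  h : chunksOf k t

-- chunkedSum 4 [1..10] == sum [1..10]; the regrouping is legal purely
-- by associativity of the fold's operation.
```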
Starting point is 00:29:43 But across different hardware, these implicit algebraic laws actually are less obvious, and there's way more opportunity to leave performance on the floor, especially if you're dispatching to different types of hardware. And so I have this grand vision in the back of my head that there's some truth out there, in some form, whether it's a formalism or an algebra or whatever, or maybe none of that stuff, and it's just a thoughtful implementation of a library that hides that stuff from the user, which is kind of what ranges is, right? Like, you compose ranges together. Yeah. You get an efficient, low-memory program. Anyways, you were about to say something. I agree. I was going to say, I agree 100%. Like, when you write a library, if you are making these observations, if you are, you know, writing the library to do certain use cases, and you notice you can fold things together, that's great, right? But I agree, at some point, unless you are, like, a super genius, I guess, you are going to hit a wall where you just don't see that these things can be folded together, because it's really, really not obvious, right?
Starting point is 00:30:55 It's very difficult, and it's very difficult to recognize what we have written in code, in that sense. As I was talking about earlier, the recognition of what we're actually doing. And to do that, you do need to study a bit of the chemistry, right? You do need to study a bit of mathematics. And then that helps you unlock something in the code. And then frequently, you know, the experience I've often had is that sometimes the code just melts away, right?
Starting point is 00:31:18 When you've decomposed it and you've expressed the mathematics properly, the thing that you had, which you thought was some kind of kernel of complexity in there that you couldn't crack, right? Actually, that's now been decomposed, and parts of it have not just become simpler, but they've actually disappeared. And are you saying that that happens behind the scenes with, like, a well-designed library? Or are you saying that that is something, once you've studied the quote-unquote chemistry and math, that you as a programmer can kind of tear apart?
Starting point is 00:32:00 I'm saying, in my experience, studying the mathematics is the only way I get to that point. Like, I'm not clever enough to write a library that does these things and simplify it enough internally without using the mathematics. Right, right, right. So yeah, it requires taking the time, whether it's mathematics or chemistry or whatever the metaphor... Whatever the analogy is, yeah. Yeah. I don't know. This is what's been dominating my free cycles. Because, you know, with my affinity for array languages as well, there's so much of... I mean, there's different flavors of this that have come up over the years, where at one point I had
Starting point is 00:32:45 the realization that what happens with, like, iterator tag dispatch is basically a form of this, right? You know, you pass a collection to a well-designed, thoughtfully designed algorithm, and depending on the iterator category tags, it'll then dispatch to a certain, you know, algorithm that will give the best performance for the iterator types that have been passed to that algorithm.
Starting point is 00:33:14 And array languages do this kind of thing by attaching metadata to the container. So, you know, if it's only Boolean data, ones and zeros, or if it's a sorted container... You know, that's an example: if you have sorted data and you do first, that's an O(1) operation. And it's like, well, why would you be writing code where you're taking the first of, you know, something that you just sorted? Well, actually, there's a few different operations that end up with sorted data that you can easily forget about. Like, if you do a max scan, that results in
Starting point is 00:33:45 sorted data. You might not immediately think about that, but there are multiple operations that result in these kinds of metadata properties, and if you store them and check them up front before you do some operation, it could lead... we're not talking, like, a quadratic-to-linear or quadratic-to-linearithmic improvement. It could be, what is it, n log n, linearithmic, to O(1), which is, like, absurd. And, you know, you tell people this and they go, I don't know, that would be obvious. Obviously, no one would ever... You would be surprised at how often...
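[The examples Conor gives, written as equalities that a hypothetical metadata-tracking library could exploit; function names are invented.]

```haskell
import Data.List (sort)

-- Taking the first of something you just sorted is the minimum:
--   head (sort xs) == minimum xs   -- O(n log n) down to O(n), and O(1)
--                                  -- if the data is already flagged
--                                  -- as sorted
-- A max scan produces nondecreasing (i.e. sorted) output by
-- construction, so its last element is just the maximum:
--   last (scanl1 max xs) == maximum xs
-- (Both sides of these laws are partial on empty lists.)
firstOfSorted :: Ord a => [a] -> a
firstOfSorted = head . sort   -- what the naive composition says

firstOfSorted' :: Ord a => [a] -> a
firstOfSorted' = minimum      -- what a law-aware library can run instead
```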
Starting point is 00:34:18 Nothing is that obvious. Yeah, I agree. And one of my little side projects from a couple years ago was this Jello program that would enable you to type keywords and convert that into Jelly, which is, like, a code golfing language. And even for me, it's a very hard language to remember all the different symbols, because they use, like, the extended character set of, I think it's Spanish or something, where it's, like, over dots and under dots and accents. Oh, wow. So it's not even symbols.
Starting point is 00:34:54 And I started... there's a file that's called, like, hints or something, where any time it recognizes a composition or some idiom, it says, oh, you can use this instead, because it's so hard to keep track of all the different little idioms and patterns and things like that. And I would love a language... like, it's too complicated with a language like C++, but where you'd get these little syntactic hints, like, oh, hey, by the way, you're doing this, you could be doing this, and that'll be better for reasons X, Y, and Z. Or a language or a library that just hides all of that stuff from you, and whatever you write is the kind of naive, you know, way of expressing your solution. But then, so I guess maybe we are back to program transformations.
Starting point is 00:35:28 But what actually is a program transformation? Does a library that does all that stuff, does that count? Does it matter? I don't know. I think it counts. Whether it happens inside the compiler or in the library. I mean, in C++, expression templates are often touted as this kind of thing, right?
Starting point is 00:35:46 Expression templates count as this thing. They're often cited, I would say. Interesting. So think about, like, a linear algebra library using expression templates. The reason it uses those is to take what you write and transform it into a machine-sympathetic thing. Like, you know, fused multiply-add would be a trivial example, right?
Starting point is 00:36:15 Huh. Yeah, I never thought about that. I mean, this makes my brain hurt, all of this stuff. It's like, it's all the same stuff at the end of the day. And it makes me think that we're too early in the computer science. Like, we don't actually have a well-formed vocabulary to talk about all this stuff, because you find the same patterns... I don't know. This always makes me think that, like, Bartosz Milewski is correct. He's working on his second book, right, about how, you know, everything's category theory.
Starting point is 00:36:41 But then there's some other, like, higher-level category theory, where it's like, music, programming, it's all the same at the end of the day, it's just a different spelling. Yeah, maybe. I mean, we can keep generalizing and generalizing, and we get to a point where everything's the same because it's ultimately generalized. Be sure to check these show notes, either in your podcast app or at ADSPthePodcast.com, for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion
Starting point is 00:37:04 where you can leave thoughts, comments, and questions. Thanks for listening. We hope you enjoyed and have a great day. I am the anti-brace.
