Future of Coding - Worse is Better by Richard P. Gabriel

Episode Date: October 30, 2022

Following our previous episode on Richard P. Gabriel's Incommensurability paper, we're back for round two with an analysis of what we've dubbed the Worse is Better family of thought products:

- The Rise of Worse Is Better by Richard P. Gabriel
- Worse is Better is Worse by Nickieben Bourbaki
- Is Worse Really Better? by Richard P. Gabriel

Next episode, we've got a recent work by a real up-and-comer in the field. While you may not have heard of him yet, he's a promising young lad who's sure to become a household name: Magic Ink by Bret Victor.

Links:

- The JIT entitlement on iOS is a thing that exists now.
- Please, call me Nickieben — Mr. Bourbaki is my father.
- A pony is a small horse. Also, horses have one toe.
- Electron lets you build cross-platform apps using web technologies. The apps you build in it are, arguably, doing a bit of "worse is better" when compared to equivalent native apps.
- Bun is a new JS runner that competes somewhat with NodeJS and Deno, and is arguably an example of "worse is better".
- esbuild and swc are JS build tools, and are compared to the earlier Babel.
- The graphs showing the relative lack of churn in Clojure's source code came from Rich Hickey's A History of Clojure talk. To see those graphs, head over to the FoC website for the expanded version of these show notes.
- Some thoughts about wormholes.

futureofcoding.org/episodes/059
Support us on Patreon: https://www.patreon.com/futureofcoding
See omnystudio.com/listener for privacy information.

Transcript
My first smartphone was the Nexus One, and it was so bad that it turned me off Android forever. I was living in the loft above a house, and there was no insulation. One of the windows was broken, and it was the middle of winter, the kind of cold where you wake up and see your breath. It was not heated, and the touchscreen just wouldn't respond if it got below a certain temperature. So there was one morning where my alarm goes off and I can't stop it, because the touchscreen doesn't respond. And it's just like, no, this sucks too much.

Yeah. So I had the G1, which was also called the HTC Dream. And then I had the Nexus S, because I would go every other generation of phone, basically. And I've done that for a while. And now I have a Pixel 3a that I still use, and I'm considering getting an iPhone at some point, with the Dynamic Island.
Starting point is 00:01:01 I know. Jesus Christ. I just think it would have been really funny if they named it i land oh boo yeah that would have been just a because at that point it's like they haven't i know the iphone's still that but they haven't really like emphasized the like you know the i in so long it would have been hilarious. Lowercase I, capital L. The I naming thing was a Steve Jobs thing. After he died, they haven't introduced a new I thing.
Starting point is 00:01:36 It's one of Steve's whimsy things that the company has distanced itself from. The dynamic island does seem a little bit more whimsical. It has a sort of an impoverished whimsy to it. It's like if you've been living in this bleakness for so long, the fact that it has anticipation and elasticity to the way that the pure black rectangle resizes itself, like, that's not quite... See, this is why I like to mention these things,
Starting point is 00:02:01 so that I can get your scathing reviews of them. And if it were truly whimsical, it would be end-user programmable so that you could put your own little googly eyes on it and make it look like something out of Wattam or one of those games by the guy who made Katamari Damacy. Put little wiggly eyes on it, and then it would be whimsical. Little caterpillar legs or something like that. I don't know. Yeah, I think Apple and end-user programming
Starting point is 00:02:25 haven't been a thing for quite a while. Oh, you kid! You! What little you know of you, what you speak of how you speak it. Like, AppleScript has been a thing for, what, like three decades now, almost? And it is still supported,
Starting point is 00:02:44 and they're still adding features to it when they built shortcuts recently and brought that to the mac they put in integration with automator the thing that it's replacing like apple for a while there were sort of quiet on scripting but they have recently come back around to it and are working on it actively. And they fired Sal Segoian, the guy who created Automator and was like a huge champion of end user scripting. But they've been keeping the flame alive. And they seem to be institutionally of an opinion at whatever level that makes this kind of decision that like user scripting on their platforms is something they like and that they want to nurture and encourage. They just don't make it a tentpole feature because I think they see it as a niche thing,
Starting point is 00:03:31 but it's a niche thing that they do support. Yeah, I see what you're saying, and I definitely see that angle. And shortcuts is kind of impressive. I guess the thing I was thinking about is, I can't execute arbitrary code on my iPad, and I would love to be able to so I could build a tool. Because I really want to program on my iPad in a nicer way, and the only real answer for doing that is cloud computation.
Starting point is 00:04:01 Because they require that you don't have code interpretation. So they can make the end- user programming, but I can't. I think they relaxed that in the past couple years in that one of the entitlements you can get on an app that you submit now is a JIT entitlement and you can get permission to have your app's memory space be read-write rather than just read-only. Two or three years ago you couldn't do that. I'll have to check it out then, because that would definitely be...
Starting point is 00:04:27 That's one of the things, as I build things, that I'm like, you know what, it would be really nice to have this. Do certain things on my iPad with its nice touch interface and pencil and all of that. And for sure, they don't give you much in the way of APIs that you would actually want to build useful stuff, like doing background audio kind of things. If you want to make a music app that can send audio samples
Starting point is 00:04:53 over to another music app that's also running, like if you want to have some effect app that just takes audio from some other source and applies some crazy effects to it and then passes that on to some other app, you can't do that. but maybe that was an intentional choice because they realize that worse is better is this our segue segue the segue oh dear uh all right so okay so if that's where we're starting here's how i'm going to start this better the outside experience is what matters if anything needs to be compromised it should be the internals of the implementation this is apple the outside experience is what matters worse the
Starting point is 00:05:39 internal implementation is what matters if anything needs to be compromised it can be the outside This is Windows. I rest my case. It's so interesting. I agree with you, I think, on this. I definitely think Apple is kind of the quote-unquote right thing approach. For anyone listening, we read some papers about worse is better. In fact, we read some papers about Worse is Better. In fact, we read three papers about Worse is Better. We got The Rise of Worse is Better by Richard Gabriel. We got Worse is Better is Worse by somebody who is totally not Richard Gabriel.
Starting point is 00:06:19 Named Nicky Ben borbacki yeah which is actually richard gabriel writing a rebuttal to himself we should acknowledge is a reference to nicholas borbacki the the fake mathematician oh i didn't know that and then we got is worse really better by rich Gabriel. So we got this like, these are very, very short. It's almost even a stretch to call them papers. They're almost like notes or memos in some ways. The context is really like why Lisp didn't win. And so he's trying to give an explanation for that and argue for this worse is better philosophy. Well, or at least he's considering if that's the reason
Starting point is 00:07:07 like i think the fact that there's this back and forth between nickel bin slash peter um and uh this oh i did it didn't i peter gabriel damn! No, I did not mean to do that, damn it. I've been waiting for it. No. What is it? Richard Gabriel. There we go. Oh, God. Okay, so the fact that there's this back and forth between these different
Starting point is 00:07:38 papers makes it feel a little bit to me like maybe how I felt doing this podcast or other things in my life. I'm sure you've had the same experience where you come out of the gate with some, oh, I had this realization or I had this idea and it's just been kicking around in my head for a while. And I think this is it. And you say it and it goes out there and it makes a big splash and people are like, oh my God, how could you think that? Or, oh yeah, that's so true. I totally know what you're saying. And you think about it and think about it and you realize you know what that's not quite it
Starting point is 00:08:07 And you sort of continue Wrestling with something in your mind even after it's gone out into the world and other people have made a mountain out of it So it's neat to look at this this trio of papers. I call it the worse is better family of thought products because It's not just the initial paper which i have some impressions on i guess we'll get to that when we get there but it's like um but the the follow-ups both of them feel sort of like different ways of saying well that initial one didn't really get it it didn't't really, you know, I made a cut, but it wasn't quite the right cut. I should have maybe measured twice and cut once. Instead, I just made a cut.
Starting point is 00:08:50 And so the follow-ups I thought really do help to dial in what's the point being made, what's the lesson that you can take away from this, what's the right way to think about this dynamic here, and is this dynamic even a thing and i have to say worse is better is worse the the rebuttal has to be the best performance art of like pretending you're a hacker news thread before hacker news threads were a thing yeah right like like it's it's a real there's real argument there's real content in there but the the rhetoric is just it is like the best version of everything you see on Hacker News.
Starting point is 00:09:27 Yeah, yeah. Right? It is the style of argument that programmers often use. And because it's like also satire in some ways, because, you know, it's a fake writer, et cetera. Like, I don't know. There's just some beauty about it. Responding to your own writing under a pseudonym is a great thing. Like if you're a writer,
Starting point is 00:09:48 you should do this too. Because it's it's delightful to read. And it's also very empowering. Because you can criticize yourself in ways that other people can't criticize you. Because of, you know, decency or not wanting to appear to be a mean person on the internet like responding to your own writing in a scathing way is just a gift it's a it's a treasure and so um i think having that like comparing it to hacker news totally like there's some of that in there but there's also some really good like self-biting oh yeah absolutely and especially because uh first name check richard is um he's actually a good writer like he's he's got a style and character and so that when wielded in this sort of playful like oh yeah i'm gonna write a fake response to myself sort of way like it
Starting point is 00:10:40 it really shows you know how much charm he can put into what he's saying and how he's saying it. Yeah, I think if it was anyone else, it might not come across as well. But he has this little tongue-in-cheek way of writing in general that I think becomes even better as he's talking about himself in the third person. The biggest thing I think our audience will want to see out of this is how is this future of coding relevant? Because I think of all the papers we've done, this is the least obviously, you know, future of coding relevant. But it might be the most actionable of all the ones that we've read.
Starting point is 00:11:17 Yeah, I agree. In a lot of ways, this is kind of an advice piece. It's giving you a different lens to look at the work we've been doing and to say, should I be approaching it this way? And what have I been losing out on by approaching it the way I have been? Because I definitely think most FOC things fall on one part of this spectrum. Yeah. All right. So we get this kind of contrast between these two approaches. There's the MIT approach and the New Jersey approach. The MIT approach is also called the right thing. And the New Jersey approach is this worse is better school. What makes the shortest of the three papers, which is is worse, really better, the third one, what makes the shortest of the three papers which is is worse really better the third one what makes it useful is that it actually goes back and explains these ideas in the way that
Starting point is 00:12:13 the first paper failed to they're both trying to build something good the way that they go about building it is ever so subtly different here's an an exact quote, and I'll do radio voice on this, from the third paper is worse, really better. With the right thing. And remember, the two sides are the right thing and worse is better. Those are the two sides. The right thing, MIT, Stanford, that's the Apple-like one, clearly, you know, thumbs up here.
Starting point is 00:12:41 And then worse is better is the New Jersey style. It's windows, thumbs down. With the right thing. No, I can't do radio voice because I'm laughing. With the right thing, designers are equally concerned with simplicity, correctness, consistency, and completeness. With worse is better, designers are almost exclusively concerned with the implementation, simplicity, and performance, and will work on correctness, consistency, and completeness only enough to get the job done, sacrificing any of these other qualities for simplicity. So to me that explanation from the third paper is the
Starting point is 00:13:23 thing that actually speaks to their difference, that explanation from the third paper is the thing that actually speaks to their difference. That there's these four attributes, simplicity, correctness, consistency, and completeness. And on the right thing side, you really want the software that you write to be perfect from the perception of the user. The person who's using the software from the outside, should be treated with the highest respect. And they should have a piece of software that is complete, consistent, correct, and from the outside, simple. And if you need to make the internals really, you know, crazy and
Starting point is 00:13:59 scary in order to satisfy that outside experience, fine. And this matches, not to make more of this joke, to actually make it serious, this matches what I've heard about the philosophy of software engineering within Apple from, heard this from Don Melton and other folks. Don Melton's the guy who started the Safari project, for instance. They are totally okay with shipping software that is an absolutely terrifying goddamn mess on the inside if that's what you need to make something that feels really amazing to use and does all the things that you want from the outside. That, you know, do whatever horrible dark magics on the internals that you need to do in order to make it nice for the outside user whereas the worse is better style is it's almost like running a marathon instead of a sprint it's like
Starting point is 00:14:53 you need the internals to be simple and to be workable and to be improvable and this you know the essays will explain why in the context of the 1980s these things mattered. It's a different context today. But you want those internals to be way, you have to do one thing, and if you want to use it this other way, you have to do this wildly different, unexpected thing that you wouldn't think to do. That's okay if that's what you need to make the internal simple. That's my best understanding of what the trade-off
Starting point is 00:15:38 between the two schools is. I think that's a great explanation. The one caveat I would give that may be a little different from the Apple characterization is if you trade off performance, that's also okay. So like, you know, it might not be the best user experience, because the performance might be not anywhere near as good as it should be. But that's okay. It's okay to give up performance in order to have simplicity of the user experience. So Lisp is the example, you know, usually given here, which, you know, if you think about, especially like early Lisps, it was this beautiful expression, this beautiful way of programming, but it was way slower, like orders of magnitude slower than assembly, which its interface was nowhere near as nice. But the end result could be better because of that performance. So the right thing is the best, except for in terms of performance.
Starting point is 00:16:35 You mentioned that might be different from Apple. Welcome to the accidental tech podcast. That is actually apple like apple stuff has always been slower for things that like you've always been able to go over to a windows machine and get hardware components that are commodity and don't have apple's insane markup and so you can get an edge on hardware performance that way that makes sense um especially since they switched to x86 you know in the mid-2000s and and like for gaming and that kind of thing like it's no contest right windows is always less overhead you can get more direct
Starting point is 00:17:11 access to the screen whatever whatever whatever will make it easier for developers to go in and run their stuff wildly fast and so even there i think the comparison holds up Macs are slow, caveat. I say this as a lifetime Mac user. I'll fight anybody. They are slow, asterisk. Yeah, I think we can probably talk about Apple and how they meet or fail to meet these various philosophies for the whole podcast, but I think we should. I think we've gotten the point across now because I could like, well, what about the first iPhone and the fact that it didn't have apps? And anyways. Yeah.
Starting point is 00:17:49 And we should, right? God damn it. Somebody's got to take this trillion dollar company to task. Take them down a peg. But yes, I think this has been helpful for kind of characterizing this thing. And so the contrast we get here in the paper is not Apple versus Microsoft. No, of course not it's linux or sorry not linux i just said i meant lisp and i was look i said i meant to say lisp and
Starting point is 00:18:11 i was looking at the word unix so linux uh so the l and linux stands for is lisp lisp yes gross so lisp versus unix and c yes right that's the contrast we get here which i think you know especially at that time made a lot of sense right common lisp was this very designed system doing the correct thing making sure that every part fits together making sure that the philosophy is the best it could possibly be you have the most flexible software maybe not as fast and the implementation is going to be quite intense especially like the the common list object system right like there's a lot involved in that uh whereas early unix and c incredibly simple implementation with perhaps a clunkier interface to do that. And so we get kind of a story to contrast these two approaches and to see how somebody solving the same problem could take
Starting point is 00:19:12 both approaches. And this is called the PC losering problem, which I just think is great. And I'll go ahead and explain why it's called the PC Losering. PC here does not mean personal computer. It means program counter. So it's like what program is running on the machine. And loser here is the MIT word for user. They referred to, instead of calling things user mode, they called them loser mode.
Starting point is 00:19:42 Do you know where that came from? I don't know the history on that. No, but it just feels very, you know where that came from? I don't know the history on that. No, but it just feels very, this kind of elitist, like, oh, the users of our programs are, I mean, there's still that attitude, right? That users are the ones who are wrong and our software's right. I bet there's some interesting history here
Starting point is 00:20:03 that somebody in the audience is just screaming at their car stereo. So yes, so losering means making the program counter go to user mode. And so there's this problem that we don't have to read the full details here, but basically this is talking about kind of an implementation detail of the operating system. They're both working on Unix. In this case, there's two people, one from MIT and one from Berkeley,
Starting point is 00:20:31 working on Unix. And they're discussing this problem and how does Unix deal with it. And the problem is if you're trying to make a system call and the program is currently running in the middle of a bunch of user state, how do you deal with that? Because you can't really easily save exactly where the user is, and there's times where you have to exit out and return back to the user rather than dealing with the system call.
Starting point is 00:20:59 The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that the resumption of the user program after the interrupt, for example, re-enters the system routine. So the idea here is like, you need to do this complicated mechanism inside to make sure you bring the program back so it can finally do this system call again, because the first time it didn't work. What we are told is that the MIT person didn't see anything in Unix that did this complicated logic. And New Jersey guy says, yeah, they don't do it.
Starting point is 00:21:42 Instead, every single time you make a system call, it will just tell you, did it succeed or not? And you have to deal with it. You, the person writing the code that's going to make the system call. Yes. There's no like, oh yeah, we'll figure out how to bring your program back so it can automatically make this system call in the right time. It's just like, ah, no, it failed. You better deal with the fact that it failed and like sit here and loop and until it works. And so this is this contrast between the right thing, which requires really complex implementation, and the worst is better approach, which is so easy to make, right? You just set an error code, and you make sure the programmer deals with that error. It's easier to make the operating system in which this takes place.
Starting point is 00:22:28 Like if you're implementing Unix, it's easier to say, oh yeah, system calls can just return an error code rather than system calls are guaranteed to succeed. But if there's an interrupt in the middle of the system call, the operating system needs to do a Herculean task of figuring out exactly how to resolve that so that the program that is calling that system call doesn't have to know or care what's going on. It can just like be written, assuming the system call will always succeed. This is what we're given is kind of kind of this like simplicity trade-off, right? Do we make the interface simple,
Starting point is 00:23:07 meaning you write your program assuming system calls just work. The interface to a system call is really easy. You make a system call and we make the implementation more difficult. We have to uphold that simple interface. Or do we just complicate the interface a little bit? Now you make the system call,
Starting point is 00:23:23 but there's also this error condition you have to check. And now our implementation is simple. We don't have to worry about doing all the complicated things. Who owns the complexity? Is the complexity owned by the underlying system, or is it pushed up to the users of that system? We've done this Apple-Microsoft comparison, but when I was trying to apply this to programming languages today, I have to admit, it was hard to really make...
Starting point is 00:23:51 I could always end up making an argument for both. Yeah. Like, I don't know if this distinction is sharp enough for me to really decide which language is making which approach? Yeah, I don't know that it's something that I could easily point to a language and say, this whole language takes this approach. But I can definitely point to specific ideas. Like, I'm sure you've had the feeling or you've heard people say, how many tens of thousands of developer hours have been wasted because of this one detail in some low-level system abstraction where if they had done something slightly differently, when when designing TCP IP, or whatever, it would have saved thousands and
Starting point is 00:24:50 thousands of hours of debugging implementation. Or I don't want to say implementations, because we're talking about two separate systems here, each of them has their own implementation. But like, the the lower level system, pushing this additional complexity outside of itself onto the users of that system just ends up like being multiplied over all of the people who have to use it where if they had just swallowed that little bit of extra complexity internally when they first started it would have saved all of the users of that system all of that debugging pain so that's definitely a thing that happens but i didn't have anything where it's like oh this is a a language where they did the right thing or this is a language where they did the worst is better
Starting point is 00:25:31 other than you know the examples that gabriel gives us i think if you put things in contrast to each other it's easier to do this rather than like one did this one take the worse is better approach where versus like between these two things which one is more worse is better which one is more the right thing uh so uh you know i know there's been some flame wars going on but i'll i'm not trying to add to them but uh zig versus rust zig definitely feels like feels like the worse is better approach. And Rust feels like the right thing approach. Now, I'm not saying what wins here, right? And the right thing isn't saying it's better.
Starting point is 00:26:15 Because obviously this paper is telling us worse is better is better. But Rust is very upfront design. Let's build the whole thing so that you never have to worry about memory safety we're going to make the implementation as complex as we need to in order to achieve that goal whereas zig says like we can get memory safety but it's not going to be a crazy complicated implementation i'm not going to weigh in on memory safety because I don't really have a pony in this race. I do have a pony, but not in this race. Do you race ponies?
Starting point is 00:26:50 Is pony racing a thing? Pony racing is a thing. Whoa. But not that I do. Racing is a, so my wife's a horse trainer, but we do shows where it's like people go ride horses around in a circle and not race it's just about you know look at the proper form and yes and look at the horse uh and we just bought a uh 19 week old horse a weanling a weanling is that the name for a fan of the late 80s early 90s band wean because it should be yep that's that's what it is. Definitely not a horse that just got weaned off of its mother's milk. Yeah.
Starting point is 00:27:28 Anything ling. I'm a lisp ling. I've just been weaned off of closure and I'm starting to take an interest in racket. Yeah. So instead of rust zig, I could have compared Rust and Go. And I think you would still see that same comparison. But if you compare Go and something else, you might see Go as that right thing, right? Like Go versus C is probably more of the right thing than worse is better, right?
Starting point is 00:28:01 Like there's a much more complication in the implementation than especially early C, let's just say. Now, there's another couple dynamics at play here before we start playing the very fun game of which one is the right thing? Is it JavaScript or Scala? I don't know. That's hard. That's so hard.
Starting point is 00:28:19 Really? That one seems obvious to me. No, not at all. Scala is cursed. It's so... Anyways so anyways oh but scala is the quintessential right thing approach no the implementation is a garbage fire it's a pile of tires so complicated yes yes yes to achieve ends that you might not agree with oh i suppose yeah that is the right thing damn uh-huh you gotta you gotta take away sorry like it's is the right thing damn uh-huh yeah you gotta you gotta take away sorry like it's quote the right thing yeah yeah yeah right yeah it's it's a name it's not it is right because the whole point of this essay is that the right thing isn't the right thing but
Starting point is 00:28:55 it feels so wrong um i just imagine this is your guilty pleasure. You're off over having, uh, like you're like, it feels so wrong, but so right at the same time, uh, you're secretly writing all sorts of, uh, monads and, and Scala, uh, selling monads on the street.
Starting point is 00:29:17 Turn on a red light in my window. There's Scala programming happening here. Um, so yeah, you gotta, you gotta take away the away the the yes you know the right thing that's why like if we said the mit approach right scala is the mit approach so this is why i wanted to say there's a couple of other dynamics at play two two dynamics i want to talk about the first one is that what we need to do when we are comparing things about you know about these two styles this
Starting point is 00:29:48 this the right thing versus worse is better and this comes up i think in the second paper worse is better is worse is not confuse something being designed in a worse is better style with something that hasn't been well designed. Because there's a whole other category of criteria that can be applied when evaluating a system. And that is, to what extent is the design work actually any good? Like, was it something that was thought through? Was it something that was built with care? Was it something where there was a good understanding of the problems to be solved did design thinking happen because there are many cases many many many cases where maybe not even that design wasn't done but that
Starting point is 00:30:32 the problem wasn't fully understood and so the design process that happened produced a result that was not a good fit for the problem and i read some other things that people other than nikki ben borbaki wrote about this worse is better stuff like reflections on a you know blog posts that sort of thing and something that people do where they make a mistake is they might make a project and think to themselves ah yeah i made this project and I made it in the right thing kind of way. And it had these problems. And so I went and I made a version two in a worse is better style. And that worked so much better. And so I get it. I get why worse is better. When what actually happened is the first time they made it, it was the first time they were learning about the problem. And when they went back and made it
Starting point is 00:31:23 a second time, they had such a better understanding of the problem from having built a system to attempt to solve that problem once already so you you have to be careful when doing these evaluations to avoid you know those other factors because those other factors like how well do you understand the problem how much design thinking did you do? What strategies did you employ? Those things are orthogonal to this distinction between the right thing versus worse is better. The right thing versus worse is better is just about where does the complexity go?
Starting point is 00:31:57 Does the complexity get owned by the system that you are building? Or do you push that complexity out to the users of your system? The point of this, what is better about the worst is better? Yeah, that was my thing number two. Yes, in what way is worse better?
Starting point is 00:32:16 And the point is not that it is better full stop. Yeah, it's not like aesthetically better or whatever. Yeah, yeah. It's not some objective, this is for sure the way you should do it. And there's never a reason not ever to do the right thing. Or like a taste thing or any of that. It is drumroll. It's really that it's better for adoption, that people will adopt your software that's designed in worse is better approach over the right thing which seems counterintuitive that seems so backwards uh-huh yeah i mean this is really an explanation of like why is lisp not the most popular language because it's clearly the
Starting point is 00:32:58 right thing it's clearly the most capable the the best language ever, right? In the minds of the Lisp community, especially at the time, which, you know, Richard Gabriel is a big part of, right? He's a designer for Common Lisp. Like, why is it losing? And this is his explanation. And yeah, you can explain like, why does he think that this leads to more adoption, the worse is better approach? I actually don't know. What I do know is that somebody needs to write or maybe title a podcast, worse is better is cope. Because that's totally what this is.
Starting point is 00:33:36 This is just Richard Gabriel struggling against the fact that Lisp lost and C and Unix won and trying to come up with some kind of an explanation for how that could be. And it's weird that it led to this particular like, ah, yeah, I put my finger on it. This is what it is. It's that, you know, people who adopt software actually want to have a sh** outside experience or they're willing to tolerate a sh** experience if it means that the thing performs better it's like it runs faster and it's easier to port and so that that's that's fine even though it means that i have to read a bigger manual and do some more error handling when i call into the system or whatever yeah he he says that the programmer is conditioned to sacrifice some safety, convenience, and hassle to get good performance and modest resource use.
Starting point is 00:34:29 Yeah. And this was actually the key that made me, like, I, in a lot of ways, I do believe this, that worse is better in the sense of like, worse software often gets more adoption. But it's for different criteria today, I think, than what he just said here. And I think it's because, and actually the second article, in some ways hints at that, and not intentionally, but no one's willing to trade off safety, convenience, and hassle to get good performance and modest resource use today. Instead, they're willing to trade off performance and resource use to get convenience, right? Like think of like electron apps, right? We can definitively say that electron apps don't perform as well as they could
Starting point is 00:35:18 on our machines, but programmers use them all the time and we choose them because they make our implementation less complicated. Yes. They make it easier to write software that can be ported or they make it easier to write native software if you have a background in web development and not native development. Yes, exactly. So in some ways, he's right that it's about this portability. But now that we have these platforms that enable portability for us, we can trade off performance and resource use for portability and
Starting point is 00:35:53 convenience. In that case, the thing that is the worst is better in the example of Electron, is that the software being written on top of electron? Or is that electron itself? The software being written on top of electron, it's worse in a different dimension than what he's talking about here. That's why I'm saying like, I do think this idea works, but not in the details, he said, because our values have shifted over time as our machines have become more powerful. At least, you know, really, I have to say, this is really dividing up by different subgroups because there are people who still hold to this performance
Starting point is 00:36:31 at all costs sort of thing. I'll give an example of one of those because it's interesting that this came up, that you mentioned Electron, because mine is also a project that is popular among the JavaScript crowd in the way that Electron is, which is Bunn is which is bun which is a a new sort of competitor with node js and denno as a like javascript runtime uh that you
Starting point is 00:36:55 could build web services or command line tools or other things on top of and the whole reason that bun exploded in popularity this past month when it you know burst onto the scene was because it's so much faster than node and denno even though it has only a small fraction of the features and the developer writing it the solo developer who's making bun has explicitly set a goal of, I'm not going to support all of the node APIs, at least not at first, we'll work towards it. But that's not the goal. We're not going to guarantee that everything's going to work in exactly the same way, we're going to sacrifice that consistency. And you'll have to be responsible for making sure that any code you write that could run on top of bun or node is able to handle those inconsistencies in the API's. And there's some correctness sacrifice, but it is written to be as blazingly
Starting point is 00:37:59 fast as possible. And it's just like tons and tons of work put into the implementation to make it so that it is screaming fast. And people just jumped all over it with giddy excitement because part of our engineering spirit or something like that as programmers is delighted by finding a case where, oh, somebody made the wheeled gear chain pulley turning machine turn its gear pulley chain wheels faster there's some kind of you know delight in oh you made the machine more machiny it's doing its machininess in an even more machiny way than it was before and this gets into so many other philosophical things but it's something that can be measured, right? Like you can measure, oh, the performance is like three X faster. If I use this thing instead of that thing, that's awesome. I'm totally gonna want the three X faster thing. What's the cost?
Starting point is 00:38:54 Oh, well there's some inconsistency. Okay. Well, what inconsistency? Well, I don't know. You'll have to just use it. And if you find that an API works differently here than there, that's the inconsistency. Good luck. Because I didn't write documentation.
Starting point is 00:39:12 I'm over here just cranking the speed knob as fast as I can crank it. Completely agree. I think the JavaScript ecosystem in this case is so interesting for some test cases on, does this philosophy still hold? I'll give just a slight variation that I think is still keeping with it, like SWC and ES build. So, you know, these are projects that build, you know, compiled JavaScript, bundle JavaScript, way faster than Babel, or snowpack, gulp, grunt, pick your thing. Okay, so keep that in mind while I, I read this quote here. So we hear that, you know, worse is better will spread like a virus. All right. And so it says, once the virus has spread, there will be pressure
Starting point is 00:39:53 to improve it, possibly by increasing that functionality closer to 90%. But users have already been conditioned to accept worse than the right thing. Therefore, the worse is better software first will gain acceptance, second will condition users to accept less, and third will be improved to a point that is almost the right thing. Classic Windows. Classic Windows. Yes, yes.
Starting point is 00:40:18 So the question is, though, did that actually play out, or did we accept some worse is better software and we couldn't improve it till it was the right thing and then we went and rewrote the right thing so okay so babble you know which was used to be six to five was six to five was definitely a worse is better piece of software right it did these very particular things they had some inconsistencies, and it started improving, where it increased in functionality, etc. But it never got as good as the right thing. And that's why ESBuild and SWC started with the right way of doing it, the low-level software,
Starting point is 00:41:01 and built it with performance in mind from the beginning and kept the interface. Yes. And that's an interest. I actually just rewrote all my JavaScript build tooling. Don't ask. And was surprised to find that so many of these different JavaScript transpilers all use the same interface and they are literally drop-in replacements for one another, which is a fascinating test case for this kind of, you know, theories about system engineering and design. Yeah, totally. Yeah. So like bun, you gave the example where right now, at least there's definitely some trade offs in terms of interface, but like SWC NES build, there's no trade offs in terms of interfaces, the exact same interface. Well, except SWC, if you go and look through its documentation, has a whole bunch of options that you can pass when invoking it that are accepted but not supported.
Starting point is 00:41:52 So that you can take a big blob of configuration that you're using from ESBuild or whatever, switch to SWC, and it will just ignore those flags. This is why I find this so interesting is because that's why I said like any piece of software in isolation is so hard to do this, is it worse is better? Is it the right thing? But once you contrast to it becomes easier to see. It's more of a spectrum than an absolute. But what I guess I'm interested in is, is it really the case today that the worst is better solution ever approximates the right thing? Or is it that there are some things like Linux, for example, that are just such a huge cost to ever switch away from that we just won't build the right thing because it's too expensive? And is it that like these worse is better things? Yes, they do gain adoption, but they never approximate the
Starting point is 00:42:51 right thing. And we never get to that good point. And someone always has to go and rewrite it from from scratch, taking a more right thing approach. The thing that I see happening in the real world isn't, does the worse is better thing ever get improved to the point that it actually, you know, matches 100% of the functionality that the right thing would have. So it's like, do they ever, you know, approach a convergence where you built as the right thing eventually get compromised and that it's so hard to maintain some kind of purity, some sort of these are our commandments. These are our rules that we're going to adhere to as we design the system. Practical realities of the world be damned. Like, I just don't think that that ever plays out, that eventually the right thing can't be kept simple, that it has to start being a little bit compromised in one way or another. This is all about whether software survives. This is about whether software will be adopted and will continue to be used or whether it will wither and die like Lisp did in the face of Unix and C. And so maybe it is that, you know, the software that I'm talking about that started as the right thing and then gets compromised and gets worse, does that because of survivorship
Starting point is 00:44:24 bias. I'm aware of that software because it survived because it became worse and i'm not aware of the software that never got worse that kept its right thing always and forever but then died in obscurity that second dynamic there that this is about survival this is about adoption this is about how widespread is your software going to get like it is hard to think of you know what are the systems that got to being universally adopted that maintained some kind of purity of design that they are you know internally very simple or externally very simple rather the interface is very simple that they are internally very simple, or externally very simple, rather. The interface is very simple,
Starting point is 00:45:06 that they're very approachable. Maybe like TCP? Maybe something that's a protocol? Maybe JSON? Is JSON something like that? Markdown, maybe? Is Markdown... Markdown, there's so many variations of, right?
Starting point is 00:45:22 And most of the variations are attempts to make it more of the right thing, I think. Ah, yes. Which is weird. It does seem like a corollary here would be that if something is the right thing, it's going to be less popular, right? Yeah, yeah. Like, not only will it be less popular, like, even if you produce it, even if it's total, even if you complete it, you put it out in the world, it doesn't have the properties that people look for and are attracted to.
Starting point is 00:45:53 Performance and portability. See, that's where I still question it. I think he's gotten onto something here about this worse is better. I just don't know if the criteria he's using is quite right. So for example, Clojure. Clojure is clearly a right thing approach. Having read the Clojure core source code, I think I would agree. The implementation is, you know, it depends on the JVM first off.
Starting point is 00:46:22 So we have to like bundle that up together, right? Yes. But also, it is a piece of software that hasn't changed. You can go look at those graphs that have been posted. Yeah, I love those graphs. Right? And it has not changed. It has not been radically refactored.
Starting point is 00:46:42 And it probably never will be. And people who have attempted to go do it... Zach Tellman, pour one out. Yeah, nothing ever happens with those forks, right? Or nothing is allowed to happen because they necessarily introduce complexity into the implementation that is intolerable because it's not the right thing anymore. Yeah, there's this very clear, like it was designed. It is exactly what it needs to be and it cannot be anything else. And so it's not popular despite closure. You know, it's, it depends on who you're, what you're comparing it to, but it has pretty decent performance. Yes. it's not C, but Python versus Clojure, I'm going to win in terms of
Starting point is 00:47:29 performance with Clojure. And portability, it's on the JVM. It's the most portable thing you could imagine for non-embedded software. Well, and that relationship to the JVM is fascinating in this case. It's like Electron's relationship to the V8 runtime. There's this sort of dynamic in the closure in JVM where it's the heaviness of the JVM that allows closure to be so light. And that the JVM can internally be the worst in that it can be an internally very messy implementation. It can do all the chaos that it needs in order to be...
Starting point is 00:48:12 The interface to it is real finicky, and you have to be very particular about it. And it got built up over time of all these little compromises, and it was not amazing when it started. The JVM is the worst is better approach. And it now approximates the right thing. And at the same time, if you compare the JVM to C++, it feels like JVM is more of the right thing than C++, right? This is why I'm saying it's all this matter of perspective
Starting point is 00:48:43 and where we are, especially with the ecosystem we have now. And so I feel like basically what I want to say is I think this can actually be more generalized and I think this becomes more relevant to the future of coding audience in general if we make this less about the particular values that were in vogue in 1989 when he wrote this and more well-designed non-changeable things like things that are not going to morph and modify
Starting point is 00:49:18 and add new features and etc usually end up being less popular than those things that are you can see the potential in you can feel like you're part of making the software better and you have to kind of work around some of the rough edges and you feel more accomplished for doing that that's fascinating because that's, that gets into the culture of it using like, basing your career on a language that follows the worse is better philosophy means you're going to be reading exciting release notes as each new version comes out and says, you know, performance is still great, but we've added this new feature, or we've refactored this part of it to make it even faster than it was before. And I think we also have to just to like,
Starting point is 00:50:12 kind of continue on like where I think this paper doesn't cover the whole things. And of course, it's not. I mean, these are like three, three or four page papers, very short. But we have like, simple implementation, simple interface, complex implementation, simple interface, complex implementation, complex interface. Right, it's a two-by-two if we wanted to do it that way. Exactly, and that's what I was thinking about. There's simple implementation, simple interface. What goes there?
Starting point is 00:50:40 Maybe TCP? Yeah. That might be where your TCP falls. Simple implementation, complex implementation. We know that's the right thing. But what about complex interface, complex implementation? Okay, there's Windows. That's Windows. There we go. We've nailed it. Uh-huh, right? So that's Windows. It's also Ruby. Huh, okay.
Starting point is 00:51:09 In a good way, though. Now, what's the interface to Ruby? Are we talking about the interface to the compiler? Are we talking about the syntax of the programming language? Are we talking about... The syntax to the programming language and the concepts that the programming language are we talking about the syntax to the programming language and the concepts that the programming language introduce right like ruby is a beautiful complexity like i you know i i am now working on a ruby jit compiler so i'm learning more about ruby yes yes ruby is not my language of choice. But the people who love Ruby love it for its expressiveness in being able to use all sorts of different parts, to have all sorts of different features to kind of choose.
Starting point is 00:51:56 And contrast that to Python, where there should be one and only one obvious way to do something. Ruby's like, let's have more. And that's always what Ruby is doing, is let's have more. And there's a project now to rewrite the parser, because it's such a big chore for everyone to maintain their own parsers. And so they're trying to make one parser to rule them all that's pluggable, so that every project can use this kind of, you know, not have to re-implement it because it's a huge undertaking.
Starting point is 00:52:28 So that'd be like all like CRuby and IRB and all the different projects that are like Ruby runtimes. And linters. Or linting tools, yeah. LSP, yeah. Like anything that wants to look at Ruby syntax right now has to kind of re-implement their own parser. And Ruby keeps adding new syntax. So it becomes an even harder problem. And I think this is so interesting
Starting point is 00:52:54 because it doesn't fall on this spectrum that we've been offered because it's an embracing of complexity both at the interface and the implementation. In some ways, I think what we see here is an embracing of the complexity in both, so that you, the end programmer, can have a certain kind of simplicity. And that's what I find, like, I just, I think this paper kind of opens us up to a lot more choices and a lot more ways of thinking about our programming languages, our tools, etc. than just what we have here with the two approaches. that's probably if i had to criticize like if i had to give some negative feedback about this um about these papers which i don't i don't have to give negative feedback about everything but
Starting point is 00:53:52 i'm gonna um it's that the the use of the terms worse and better like there's an inherent value judgment in there and i'm sure this was not done carelessly i'm sure this was this was a something that gabriel deliberated over but it's it makes it hard to talk about because there are so many different aspects that contribute to something being worse or better and in these papers they're meant in a particular particular way. Like it's about survivability. It's about adoption. So it's hard to keep that in mind because these open up questions about much broader aspects of design, about which you can have value judgments and you can make reflections on their effectiveness.
Starting point is 00:54:41 And you would want to use the words like, oh yeah it's it's worse if you don't do any design at all and it's better to do design and i'm sure there's you know contrarian takes that would say well actually no there's some times where it's better to not do design at least not up front because exploring the problem intuitively is better than exploring it in a way where you come in with preconceived notions or something like that right like there's there's so many other aspects to this that you would want to use the words worse and better to describe, but those words are taken and given particular meanings. Yeah, I think it's very intentional. I think part of it is that it, it makes for a, it's kind of like clickbait.
Starting point is 00:55:19 Totally. Yeah. How can worse be better? There's no way you can have worse and it'd be better. That's against the definition of worse. What are you even talking about? Yeah, but he could have said, like, simple made easy, right? It's a... Uh-huh. And all of those, like, simple versus easy kind of things had that same problem, but at the very least,
Starting point is 00:55:42 they started with rigorous definitions of the terms like hickey headed that off at the pass he's like we're gonna use language in a very precise way coming into this yeah and gabriel starts off with two lists of four bullet points each where these defining these terms are the definitions are the same on both sides and you have to read it so closely and it's so hard to tell apart. Yeah, it's intentional, but unlike the other things by Gabriel, like the later papers in the worse is better family of thought products and incommensurability that we read last time. Unlike that, this first paper just feels like it makes a big soup of mud. Yeah, I will say that's one of the reasons I love it. But the rebuttal goes on to kind of give you, you know, like, my first rebuttal is that there really isn't a worse is better design philosophy. The caricature of what
Starting point is 00:56:39 this philosophy is, is so ridiculous that no one would ever express espouse or follow it uh i just love that you know this is richard gabriel talking to himself saying that like this this whole idea this is why i say this is kind of like a hacker news uh you know hot take right like this whole idea is just so ridiculous like what are you even talking about? And I think had he been clearer in the first one, we wouldn't have gotten this masterwork of art rebutting him in the most ridiculous ways imaginable. So, you know, you don't get rebuttals like this from Rich Hickey. No, yeah, because, well, I bite my tongue.
Starting point is 00:57:24 Yeah, like the rebuttal's delightful, but I don't know that it's actually a rebuttal. Like I think the third paper is worse, really better, which is really just a, hey, let me get a do-over of some explanations from the first paper that I kind of flubbed a little bit. I think the second paper's doing the same thing. I think it's, there are some dynamics
Starting point is 00:57:46 in the first paper that are hinted at, but weren't really spelled out clearly enough. So let me just spell them out a little bit, because I just want to make sure everybody's thinking about that too. And so this paragraph you're reading, the first rebuttal, concludes with, the dichotomy that might exist is that of right thing design versus no design. So it's that going back to those two dynamics I brought up earlier, it's the first dynamic. It's that when we're talking about where does a given system put its complexity, does it own it internally or does it push it onto the users of that system? That is different from design versus no design. So I don't agree with the rebuttal that there is no worse is better design philosophy.
Starting point is 00:58:29 I think there is. I think calling it worse is better is bad use of language because it's really about, and we really need some words for, who owns the complexity? You building a system or the people who are going to be the users of the system you're building. And that is such a, such a important decision to make when you're designing a system or when you're designing a feature or when you're designing an interface, let's say interface when you're designing an interface, because that's the boundary between two things, right? That's the boundary between the thing you're making and the people who are going to use it. In interface design, should the complexity be yours or should it be your users?
Starting point is 00:59:10 You have to make that decision over and over and over again, constantly. And there are so many different ways to come down on which side of that decision is the right one in any given circumstance. And what's weird about this is that these papers are asking us to consider entire large systems, like Unix or C, as kind of monolithic in their alignment on that philosophy about who should own the complexity. And I don't think that's fair. Like, that's the rebuttal I'd make. I don't remember if that's a rebuttal that actually came up in the rebuttal paper by Bourbaki. It does, yeah. Not to be confused with his mathematician father. There's my expanded lore of the Bourbaki universe. But should you consider all of Unix to be designed such that complexity is pushed onto the user instead of being in the internals of the system? No, I don't think that's a fair claim to make. I think it's not just between any two systems
Starting point is 01:00:10 that you can make this comparison. It can be, like, between any two APIs on the surface of a system, or one API on one system compared to another API on another system, right? Like, we could start getting into how is this relevant to FoC, if you wanted to, because I think that question, who should own the complexity, is super relevant, and there are some better and worse choices to make there. Because, as Gabriel has explained, survivability is at stake. Whether you choose to own the complexity yourself or push it onto your user does seem to, I think I agree with him in this, it does seem to have an influence on whether your software is going to be adopted and survive. But yeah, worse and better as words are just terribly off the mark in terms of allowing us to talk about this dynamic in a useful way. So I want to make kind of a meta note, because we haven't really gone through the rebuttal. And I
Starting point is 01:01:10 don't think we need to go through the details of the rebuttal. It's actually not a rebuttal in the strictest sense. What this really is, so there's this concept called defeaters in philosophy, right? So if you have a belief, you could have a defeater for that belief; that is, something that should make you give up that belief. And one way you could have a defeater is someone rebuts your belief, right? All the things you thought were true, they show you how they're not. Another one would be to undercut your belief. So for example, maybe you think it's raining, but then you just realized you took a psychedelic drug just a few minutes ago. You don't have any good evidence to say it's not
Starting point is 01:01:52 raining. No one's rebutted that, but you might be like, maybe I don't really have a good reason to think it's raining because I just took a psychedelic. Maybe I haven't been skewered from the bottom of the soles of my feet through my entire body, out my head, and then twisted around the skewer like a flag wrapping around a flagpole. Maybe that's not what I actually exist as in this moment. Oddly specific. Yes.
Starting point is 01:02:17 So yes, right, this is undercutting. And that's really what this rebuttal is trying to do. It's not trying to rebut. It's trying to undercut. It's trying to say all of the reasons you had for thinking there even was a contrast between this worse is better approach and the right thing really don't exist. That there has never been a system that's really like this. That there are just trade-offs, and trade-offs have to be made regardless of what system
Starting point is 01:02:42 we're in. And that all of these, like, differences between Lisp and C are really historical accidents. They're really just facts about computers. They're facts about, like, the VAX, the PDP-11 versus the PDP-10, and the market. And there's really not this distinction, and to try to draw one is to just be sloppy. Yeah. Which I don't agree with at all. Yeah, I mean, I don't agree with the rebuttal, but I just want to, you know, kind of characterize what the rebuttal is. I think the real point of this, though, was actually the last little section. The rebuttal here wants to tell us that the original paper not only was wrong, but that it actually taught people incorrectly.
Starting point is 01:03:31 He says that the advice it gave was that we should aim for less than the right thing. That we shouldn't try to make the best thing, and instead we should aim for something less than great. And I don't think, you know, obviously, since this is Richard Gabriel writing, obviously Richard Gabriel did not intend to give that advice. But that's what a lot of people took from it. And so now, under his pseudonym, we get this statement: This advice is corrosive. It warps the minds of youth. It is never a good idea to intentionally aim for anything less than the best, though one might have to compromise in order to succeed. Maybe Richard means one should aim high, but make sure you shoot. Sadly, he didn't say that.
Starting point is 01:04:16 He said, worse is better. And though it might be an attractive mind-grabbing headline seducing people into reading his paper, it teaches the wrong lesson, a lesson he may not intend or a lesson poorly stated. I know he can say the right thing. And I wish he had. Which is just, come on. Like, yeah, this is Richard. There's so many things, like, I didn't get to give all the little witty things in here. I just do love how this rebuttal is written, even if I don't agree with the rebuttal itself. Of the three, this is the one that I wonder the most about,
Starting point is 01:04:52 just because it i do think it is him trying to help people get a better understanding of what he meant by the first one because the first one like the first one is clearly sloppily written right and that gives him this i can see him thinking oh man i feel kind of bad about how i wrote the first one maybe i could just rewrite it and and publish it as a follow-up and it would be like you know the clearer more refined version that can replace the original but instead i'm going to have some fun with it and i'm going to dunk on myself for being sloppy in the first place and at the same time Try and clarify a little bit of how people should think about it
Starting point is 01:05:31 but you know what what would be even more fun is if I do that by kind of Putting some of those arguments that people are using against it down on paper, you know I'm gonna actually take some of those counter points that people are making and put them down here Because it feels a little bit like that right like this this this rebuttal contains a little bit of the critiques but i don't think it i don't think it helps like i don't think it helps the understanding because yeah it's got this huge digression about the pdp 10 versus the pdp 11 like it's got this historical context stuff that was inserted as like here's an explanation for maybe some more realistic reasons why unix won and lisp lost so it engages with that aspect of the original like
Starting point is 01:06:13 response to that but it doesn't really and other than that ending section and a little bit at the beginning i don't think it does much to explore that dynamic introduced in the first one to either because it's being sarcastic right like it's it's a it's a fake uh rebuttal so it would actually be in effect meant to bolster the original argument so it doesn't actually do that though like it doesn't actually help us get a better understanding of that original dynamic, but it doesn't like point out a critical failing in it either. It's not, I don't think it's a defeater. I don't think it actually engages with that idea from the original paper about like where does the complexity go. I don't really see it doing that all i see it doing is defeating the the flawed interpretation of the original argument
Starting point is 01:07:06 where it's like oh yeah the original argument seems as though it's saying design versus no design and that's clearly stupid nobody's gonna build a system that's not designed everybody's gonna try and do a good job right like you should never look at an organization that's behaving in a chaotic way and assume that the people there are stupid. It's that everybody is doing the best that they can in the circumstance that they're in. And maybe they have incomplete information, or maybe they have more information than you have, and so you don't know what they're responding to. But, like, everybody's doing their best. And so it makes that point, but I don't think that's an interesting point to make in response to the original paper because it doesn't
Starting point is 01:07:46 And maybe this is just like historical perspective allows me to see the original paper in this way Or maybe the third paper allows me to see the original paper in this way But it there's a dynamic in the first that is fascinating that the second one doesn't really Engage with I don't think I totally see what you're saying And I do think this point of where does the complexity lie is interesting. But I think we have to keep in mind with this rebuttal, it's less about where does the complexity lie and does where the complexity lie actually have an important role
Starting point is 01:08:22 in the potential popularity, right? And I think that's what he's ultimately trying to undercut here, saying: no, it didn't. Because there has never been a case where there was really this head-to-head competition between something that moved the complexity in these two different places.
Starting point is 01:08:42 They were really separate use cases. But I agree with your general thing. I actually am not sure if he meant it mostly as a fake rebuttal, or whether he himself is really wrestling with which one's true. I think we need to move on, though, to how is this, like, while the discussion has been, you know, I think, interesting, and I think people who are involved in future of coding will see how it applies,
Starting point is 01:09:11 I'm interested to hear, you know, your thoughts on like, how does this apply to today, future of coding projects? What lesson should we learn for, you know, approaching those things. Because, you know, you, I will say, seem to really be on the right thing approach when it comes to HEST. In that I'm not actually doing any work on it because I haven't figured out what the right thing is yet. Yes, and that you're taking the time to design up front. You're not putting out a partially baked solution
Starting point is 01:09:47 that will evolve and become the right thing over time. You want to start right. I could argue the opposite. I think I actually am doing that in that the thing that I have released so far is like some GIFs. What if you took a NodeWire programming language and put some little dots on the lines and had them move?
Starting point is 01:10:14 What can you do with that? And now I'm just waiting for somebody else to take that idea and actually make it usable. Which is very much worse is better. There is no implementation. All the complexity is pushed outside, onto everybody else. Take this, and see if I can encourage you to explore. Thank you. I don't think I would ever throw bags of my own blood at anything. I could imagine throwing it to somebody because, you know, there's a life or death situation. For some reason, I have a bag of blood
Starting point is 01:11:33 to help save their life. Wow, now I'm okay. I have a bag of my own blood. Do you have a bag of my blood? Everyone has a bag of their own blood. But do you have a bag of my blood? Everyone has a bag of their own blood. But do you have a bag of my blood? This is a, you know, I'm trying to make sure that there's not something I'm missing here. Did you take my blood? Yes, I I'm pretty sure I actually do get
Starting point is 01:11:58 bags of your blood too. Oh, okay. And if you want you can have bags of my blood. Okay. I don't think I would can have bags of my blood. Okay. I don't think I would throw my bag of blood at a beached thing. Yeah. Even if it was a life or death situation? I mean, if it would help?
Starting point is 01:12:19 Because life or death could mean if I throw it, I kill it. Well, that's the point, though. The point is to, well, I guess you can't kill it. The point is to make it go away. Because to me, a beached thing is something I need to save. Hmm. It's supposed to be in the water, and now it's on the beach. Yeah. Probably a nice animal, but I probably shouldn't touch it because I'm not a marine biologist,
Starting point is 01:12:44 and the sea would be angry at me if I did. A nice animal like a whale or a chimera? Yeah. I assume that it's a whale that has a golf ball stuck in the blowhole. That explains a lot. Yeah.
Starting point is 01:13:01 I am definitely down a wormhole with all of this right now. Oh yeah, me too. Me too.
