Programming Throwdown - 184: Asynchronous Programming

Episode Date: September 23, 2025

184: Asynchronous Programming

Intro topic: AI Scams

News/Links:
- Coding Adventure: Ray-Tracing Glass and Caustics (Sebastian Lague)
  https://www.youtube.com/watch?v=wA1KVZ1eOuA
- Boson AI announces Higgs Audio V2
  https://www.boson.ai/technologies/voice
- The Misconception that Almost Stopped AI [How Models Learn Part 1] (Welch Labs)
  https://www.youtube.com/watch?v=NrO20Jb-hy0
- A mind-bending conversation with Peter Thiel
  https://www.nytimes.com/2025/07/11/podcasts/interesting-times-a-mind-bending-conversation-with-peter-thiel.html

Book of the Show:
- Patrick: The Hobbit (J.R.R. Tolkien) https://amzn.to/4mevuzE
- Jason: NYT Word Games

Patreon Plug: https://www.patreon.com/programmingthrowdown?ty=h

Tool of the Show:
- Patrick: Escape Academy https://escapeacademygame.com/en
- Jason: Multi-modal LLMs to make calendar meetings (www.chatgpt.com)

Topic: Asynchronous Computing
- What/Why: multi-threading in between the lines; many of the benefits of multiprocessing without the overhead/complexity
- How: coroutines; thread-local memory; blocking vs. non-blocking operations (blocking: arithmetic; non-blocking: reading from the network card into thread-local memory); interpreter locking (TypeScript: single-threaded; Python: GIL)
- Implementations: polling (not asynchronous); callbacks (interrupts); multithreading (with queues/message passing); promises/futures; async/await

★ Support this podcast on Patreon ★

Transcript
Starting point is 00:00:00 Programming Throwdown Episode 184: Asynchronous Programming. Take it away, Jason. Hey, everybody. This is going to be a fun topic. I think this is something that... I don't think they teach this. Well, you know, this gets back to how I took so many theoretical classes and
Starting point is 00:00:35 didn't really take anything practical in college, and I don't remember learning about this at all in college, but it's extremely important and useful, so I'm excited for us to get into it. But before we do all that, I wanted to talk about something that kind of blew my mind that happened recently to someone in our neighborhood. Okay, so imagine you get a call and it's your son's voice. Actually, I don't know if it literally was his voice or just the voice of a child his age, but basically it's this voice that sounds like her son saying, hey, I am... actually, I'm not going to say her son's name, even if I could remember it, but: I'm so-and-so, your actual son, and I fell and I need some money to help me get to the hospital or whatever. And it's a scam, but it's in your family member's voice. They've cloned their voice and it says it's from them, because they probably went on your Facebook and figured out if you've ever posted publicly like, hey, me and my son, let's just
Starting point is 00:01:46 say Doug. So they call you and it's like, hey, it's Doug. You know, I fell and I need you to wire me some money or something like that. So, you know, the person knew it was a scam, but it really kind of blew my mind. I've been getting a lot of AI scams on my phone where they text me. I won't say I fell for it, in the sense that I didn't lose anything. But the first time it happened, it was like, hey, you know, I'm in Austin at the 6th Street parking garage. Where are you?
Starting point is 00:02:19 And I was like, well, who is this? It's like, oh, you didn't save my number? I'm like, okay, no, I didn't save your number. Who is this? It's like, oh, I'm Jolene or something, whatever. The name was one I didn't recognize. And I was like, okay, yeah, this is clearly a scam. I couldn't prove that that one was AI, but I feel like people are using AI to scam people.
Starting point is 00:02:45 And the one over the phone was definitely some AI thing. And, yeah, I just wanted to throw that out there and get your thoughts. It's pretty mind-blowing. Yeah, be careful. I mean, scams in general are... kind of an well, bad but interesting topic.
Starting point is 00:03:03 I don't know how you say that. Like it's actually really unfortunate and scammers suck. I don't really understand. Anyway, whatever. Listen aside. But I think that...
Starting point is 00:03:12 I mean, it's big money, right? That's whatever. Yeah, but still, like, I, a lot of the people that they scam, right? So part of it that's confusing to me is like, oh, this is an obvious scam. And it's like, well,
Starting point is 00:03:24 that's part of it, though. They want to kind of weed out people who are going to be, like they want someone who's not going to catch on later. And so, you know, they want someone who isn't going to ask questions. Like you're insistent of who is this or potentially looking for you to fall into it. It's sort of the, you know, when you're sometimes when you're young, you tell these jokes where like you're hoping the person will give a certain reaction so that you can sort of like say something funny in response.
Starting point is 00:03:50 And so instead of saying who is this, they say, oh, you didn't save my number, and you go, oh, is it Sarah? And then, yeah, it's Sarah, and then you're like, oh, it's Sarah, but it's not, right? They were just hoping you would say a name first. And like you said, it can be really easy to fall for these. And I do worry, like you're pointing out, about AI on two fronts. One is scaling these out, so they don't even need someone to be attentive to all of these different threads; they can just mass-produce them and have auto-responders keep people busy
Starting point is 00:04:21 until they, like, figure out something. It's scary. And then the voice cloning, and eventually video cloning. The video stuff is getting to the point where everyone goes, oh, it's so obvious, it's so obvious, and they do some frame-by-frame analysis. But who, in the small little window when you're scrolling through social media, can see this frame-by-frame pixel analysis where the pinky finger glitches out for one frame or whatever? I remember you and I discussing on this show the Will Smith eating spaghetti video. Oh yeah, that's right. That wasn't that long ago. Yeah. Now, I saw some video from a football game over the weekend where one of the coaches had an issue, and somebody made an AI video of him basically dropping super vulgarity, saying how bad it was or whatever, and it turns out it was completely fake. They said it underneath, but you had to click it and look for it. A lot of people probably re-shared it thinking it was funny, like, look at the coach basically going off on reporters and stuff. And so
Starting point is 00:05:24 it's becoming more and more prevalent and it only takes you being off guard for for sort of a moment to get wrapped up on this. So yeah, definitely be careful. But I actually have a full observation about this. So you mentioned like, you know, we all kind of go, oh yeah, voice cloning. It's dangerous.
Starting point is 00:05:44 You and I have lots of, you know, audio out there. It's, you know, whatever. But now like you see demos on YouTube or people showing, hey, from even just maybe 10 seconds of an arbitrary sentence, they can sort of You can use a different voice or clone the person's voice, you know, do voice-to-voice sort of transition. Yeah. So I've seen these before. So then I wanted to do this for basically like an audio enhancement thing.
Starting point is 00:06:08 I had some audio I had recorded somewhere and I was like, oh, I want to use voice clone, which I know should be possible to like feed it my own voice from a podcast recording or something high quality, feed it the like not good quality and have it basically reread that stuff in the high-quality voice. which is a use case as supposedly possible. And then you immediately bump into the fact that, like, it's actually really difficult. Like, as much as people say, oh, there's going to be this. It's non-trivial. Like, I guess there's a lot of pay solutions, but it's often unclear if the paid solutions are any good
Starting point is 00:06:39 or if they're just like the open source stuff that someone's charging for. But like cobbling together all of the like, you go download this model, you run it on comfy UI, you do that. Like, it's still in the realm of like, this is reasonably difficult to access. So for all of these AI things, everyone's like, oh, it's so easy to do this. And then, or generating an AI video. Like we all know, we've all seen some crazy AI video that was hilarious or partly convincing.
Starting point is 00:07:05 And then you go try to do it. And I don't know, maybe I'm just not smart enough or paying enough attention. But they seem a little cherry picked. Like people may have gone through a lot of work to cherry pick the one convincing video. And if you go do it, you think it's going to be magical and it's actually a little disappointing. That's been my observation. Yeah, I think there's two things there. One is, you know, definitely it's, the open source stuff is not trivial to get up and running.
Starting point is 00:07:31 I do think that the paid solutions are, like, you know, pretty accessible, but you're also right that you often have to iterate a lot. It's almost a skill in and of itself, getting it to sort of behave and feeding in the right sort of negative prompts so that it doesn't go off the rails and all of that. So, yeah, that's a whole skill in and of itself. Yeah, someday. It is getting better fast, so I think we are getting close. But it's one of those things where I was thinking about this the other day. It's like, if you could describe where we are right now with AI, people would be like, oh, that's going to be so magical. It's going to change everything. The world, like... and then we got here and it's like, oh, huh. You know, like, I don't know. Like, an AI could clearly probably pass the traditional sort of Turing test. Maybe that's controversial.
Starting point is 00:08:24 And then now it turns out like, oh, yeah, but it didn't just solve all of life's problems. Like, you know. Yeah.
Starting point is 00:08:30 Yeah, I think that, to double down on that, I think AI is just not going to create the kind of monetary value that people thought it would. You know, like the thing about it is,
Starting point is 00:08:41 the reason why FAANG got so incredibly rich is that now people can be super productive anywhere, you know. You could be waiting in line for a ride at Disney and working: answering emails, talking to people. That was something that in, you know, 1999 you couldn't do; if you were waiting in line for an hour at Disney, you couldn't do your job, right? And so, you know, a combination of Apple, Google for search, these other things allowed people to be super productive even when they're idle. And for AI, it might do that, but it hasn't done that yet. It might take a lot longer to get there. Okay, we're way
Starting point is 00:09:36 off the AI scam thing, but I have heard some recent interviews along these lines, and they said actually that the inflection point or the turning point will be when all of these advances sort of make it into robotics. I could see that. Do you think that's a different economic equation? Like, not just, oh, you can take the AI and make programmers more effective or make people's email responses better; sure, all of those things, maybe the economic value there is a little... but if you had the level of improvement we've seen there in sort of robotics, is that going to be a major unlock? Totally, totally. Yeah, so here's a very simple example: think how many parents
Starting point is 00:10:24 have to like either changes or work schedule or do something kind of wonky um to just like shuttle their kids around oh yes yeah yeah just driving it's like oh i pick uh one of my kids up from school i drive them to the soccer field and and especially when they get older they don't even want you know they don't want parents to be sitting there like watching the practice yeah exactly It goes in the car, so you can see you. Yeah, yeah. So you either go home then or whatever. Like that's just one example.
Starting point is 00:10:55 Like you could have a robot that goes up and down the street taking everyone's trash to the curb. I mean, there's just so much you get unlocked with robotics. But I think it'll take a while. All right. Yeah, well, let's keep moving through the agenda. All right. So, yeah, don't get scammed. Don't get scammed by AI.
Starting point is 00:11:13 Yeah, do be careful. And warn your parents and grandparents or, you know, family as well who may not be as aware that that's a thing. Yeah, very true, very true. All right, what's the first news of the show? All right, so I have no news this time, but I do have two good video links that I wanted to shout out. And so the first one was a recent video by Sebastian... I actually didn't look up how to say his last name. I know I'm not going to try. I said, Log? Log.
Starting point is 00:11:45 Okay. Anyways, and it's Coding Adventure: Ray-Tracing Glass and Caustics. This is a series of videos that Sebastian's been doing. They're sort of in a similar vein as, I guess, 3Blue1Brown. Like, you know, it's sort of a nice quiet pace, but deep content, and just sort of working through something. So they're not like five-minute YouTube shorts. They're sort of longer-form videos.
Starting point is 00:12:10 And he's walking through the process of coding. He's been working on a ray tracer recently, but going a little step further than normal. And so in this one he gets to talk a little bit about physics, physically based rendering, and sort of how glass has certain attributes and indexes of refraction, and how if you build up from really simple rules you get very complex interactions. It almost becomes a physics simulation, right? Like, the caustics are the very bright, shiny thing: if you have a glass object, you get, you know, sort of rainbow diffusions in some areas. In some areas, you get, you know, very bright, shiny spots or whatever.
Starting point is 00:12:49 And if you play video games, often you won't see those kinds of things because they're not really meant to be simulating physics. They're meant to look nice and run fast. Yeah. But in ray tracing, you can be kind of more physics-based. But a lot of videos on his channel that are just like really, really good. But this one's really nice. And then the reason why I wanted to bring it up here, other than like, you know, I mean, maybe this is really cool to you. It's great background photo.
Starting point is 00:13:17 This kind of tends to be the YouTube I watch. I try to explain to people: there's people who watch YouTube and it's, you know, more the shorts content or entertaining stuff. But there's a lot of this science and engineering kind of stuff out there that I feel is at least a little bit better than what I'll just call brain candy. And it's sort of a replacement for, you know, the TV growing up. We had, what, Bill Nye the Science Guy or something, and even that was high-strung, I guess, but, you know, teaching sort of science concepts in an entertaining way. And there's a bunch of people in
Starting point is 00:13:51 this vein. But I will say that I also began to realize that a lot of the classes I took in university, you can kind of end up not building a curriculum, but if you just watch some of these channels about how microprocessors work or people kind of building their own circuits, some of this stuff about ray tracing, you get a lot of what a traditional, at least when I was in college, sort of like computer science degree would cover. So if you can kind of get through the basics of programming, learning about like graphics, learning about like data structures, he gets into debugging because by necessity he like, oh, here I didn't check the sign properly or I did a dot product and I didn't think about like this set of inputs. And so you actually end up backing your way into a lot of
Starting point is 00:14:37 that education. And so I think that's really an interesting thing: if you find this at all entertaining, I think there's a definite learning benefit that can come from watching this and seeing how they go about problem solving and working through things. And it is a little bit more entertaining than just watching, you know, live streams of someone programming or whatever. I don't know if he does it anymore. Like, I guess we could do that. Yeah. Oh, we should do that. No, no, no, no. I don't know. I was never able to watch one of those. This is a more edited form of that.
Starting point is 00:15:12 Cool. I'll have to check that out. I used to do a lot of that ray tracing stuff back in college, and it's very fun. All right, my news is actually, okay, so it's kind of related to the voice cloning. Oh, okay. There's a new open source model called Higgs Audio V2. And it's from a company called Boson AI. And it's like really, really good.
Starting point is 00:15:45 So historically, this company called ElevenLabs had been kind of dominating the space. So if you want to do voice cloning... actually, ElevenLabs does something I've never seen anywhere else to this day, which is you can describe a voice and then say what you want that voice to say, and it'll do it. So you could say, you know, an evil, smelly ogre says, who's in my dungeon? And it'll actually make up a voice for that description and then say who's in my dungeon in that voice it just made up.
Starting point is 00:16:18 This doesn't do that. But... ElevenLabs is paid, though, right? ElevenLabs is paid. That's right. But this is totally open source, and it can clone a voice. So you give it, you know, an MP3 of you kind of narrating something, and then now it has your voice.
Starting point is 00:16:35 It could say anything. It handles like, you know, you can do kind of like in a movie script or brackets. You say, you know, laughs or chuckles or whatever, and it'll actually laugh while it's talking. It's really, really, really good. It's definitely just like heads and tails above anything I've seen in the open source world. So I'm going to try and play with it. I have some funny things I want to do. I'm not going to run a scam call center.
Starting point is 00:17:05 Oh, no, no, no, please don't. But I have some other funny things that I would like to do with it. So I'm going to try and mess around with it. But I saw a demo that one of my coworkers presented, where they cloned their own voice, and it sounded really believable. So it's like we're getting to that place you talked about where anyone with the GPU could just do this.
Starting point is 00:17:29 Yeah, so what is? is the, I guess, for examples, like a lot of these, they don't need to be, like, super fast real time. Like, I know they could run an SUV, but I don't feel maybe we're just still too early. So for text LLM's open source, and maybe it's just because I've never spent time. There's something like LM Studio kind of gets there. So there's like a GUI that you download, and then from there, it offers you links. And I help someone in my family was trying to use some offline LLM stuff. And so I was able to walk them. through pretty easily. You go there, it offers you models that are appropriate for your computer
Starting point is 00:18:07 tells you if you big, tells you if they're too small, has filtering, then you download it. It gives you the nice traditional sort of like chat GPT style. Like you can type in text in the bottom. It has prompts in. It'll answer. But it has like, you can go deep on it, but it sort of harnesses a bunch of the open source stuff under the hood. So you can do like traditional chat app with one of these offline LLMs. You can switch models midstream, and it'll, like, just feed the context into the new model. Like, it kind of just works as a graphical user interface. But for a lot of these other ones, like the image generation, I don't feel like it's as good.
Starting point is 00:18:42 Video generation, not at all. These, like, audio ones, sort of, I've never seen anything like that. I feel there's still, like, a bunch of this, maybe we're just still too early, but, like, single front end. So we're audio, right? You're either wanting text. You want, like, the thing, the target, the source. They feel like there's only a few moving pieces. Maybe there's just not enough users,
Starting point is 00:19:03 but it feels like it'd be really easy to just build an app that's like a front end and handles like I'm looking at the GitHub link for the one you were just talking about. And it's how to set up conda or virtual environment. But I'm never going to describe to someone in my family how to do these steps. Like they will never be able at this level to use this. Yeah, we're still not quite there yet on the user experience. Totally agree.
Starting point is 00:19:26 I think the quality is getting there though. the flux text-to-image model is really, really good. It's mind-blowing. It just needs a better user experience. And the new one, we don't have it in our news, but the new one, Google did Nano Banana or whatever. Have you seen this one, too? I played with it a little.
Starting point is 00:19:47 Yeah. Specifically for like image editing tasks. So like saying I have an image and I want to change it in some way, like I was asking it to do like people removal from background. and it works really good. It's kind of scary good. Now that one's paid, right? That's a Google product,
Starting point is 00:20:06 but still, like, yeah, you're right. We are getting really close on the image stuff. Yeah, totally. Okay, my next news article is a YouTube channel as well. Okay, that's the thing. Anyways, and this is on my, as people have listened to the podcast before, no, machine learning AI is not my background, but understanding sort of how the back propagation works,
Starting point is 00:20:30 how the gradient descends and sort of the model trainings is all very like interesting to me. I had a book of the show a little bit ago that, you know, kind of similar or maybe it was even a news article. But I think that I'm continuing to try to learn and sort of work on my back, not just as a, how do you call like practitioner, like user of these things, but trying to understand some underlying thing.
Starting point is 00:20:52 And I think there's some concepts there that can cross apply to other. sort of like traditional computer science sort of applications. So it's just definitely like an interesting thing to learn about. It is pretty wild to see AI go kind of mainstream. It's been just a really weird experience.
Starting point is 00:21:09 Like all these people are now talking about stuff that was like really not that cool that long ago. Yeah. Oh, I'll tell you off that. It's just an observation about that. But I know you've been in the field for very long. You were in the field before it was cool.
Starting point is 00:21:22 And now it's cool. So like that makes you cool. Uh, maybe. Yeah. I don't know. Or maybe it makes me too early. I don't know. We'll find out. But too early... everyone's too early for something, right? That's the thing. So even now, you might be too early if it grows another 10x. So yeah, that's a good point. It's like Bitcoin. Uh, yeah. Well, I wasn't... okay. Yes, that was what I was. Um, okay. Back to my focus. So the channel name is Welch Labs. And they have a sequence of videos that they were doing called How Models Learn. So this is the first video that I have linked in the show notes; you can just look up Welch Labs. It's called The Misconception That Almost Stopped AI. Now, I am not Jason, so I cannot vouch for how truthful versus narrative the backstory is or any of that stuff. But they're kind of talking about gradient descent, and how, given the way that I
Starting point is 00:22:18 understand gradient descent to work, the very simple version you kind of heard about in computer science classes, you've seen it on the internet once or twice, and the way that you would kind of assume it might work, it just feels like you would get stuck in local minima all the time with how big these models are, right? Like, these models, the dimensions are really high, the parameter counts are obscene. You would just say there's an intuition there. And that was sort of what I thought too. I didn't really understand the unlock of deep learning and sort of what happened, because in my mind, you know, you kind of picture the 3D
Starting point is 00:22:51 scene with all the noise in it and some well in the middle, but how do you get there? How do you know where it is without just searching the whole space? There's the, what is it called, the... it's called a Gaussian, but, like, that's a sombrero hat function, where you kind of get trapped outside of the hat.
Starting point is 00:23:07 Yes, yeah. So you're right. And the video series kind of walks through the same thing. Like, this is the simple explanation of gradient descent, here's why it would work, except, like, as you point out, gotcha: it doesn't really work. Not really a gotcha.
Starting point is 00:23:23 Well, it doesn't really work, except, wait, it did work. Why, what happened, you know, like, and they walk through how some of the things about higher dimensional spaces about batch, batching, about some of the techniques that are applied, sort of help actually almost always guarantee you sort of get where you want to go to some degree with, with, you know, obviously probably some caveats, but it just a really well-handled explanation. And they do a couple really interesting things I've not seen before. So rather than just sort of give it as a simple explanation, they actually take some deep models, some like early versions of GPTs and extract some of the layers and sort of talk about what is happening. Sort of like
Starting point is 00:24:05 visually saying, hey, at this layer, if we go here in this actual model, maybe not state of the art, you know, I trained it. And this is what you kind of see developing over time as a consequence of this technique. And then later on, even after this, one of the videos is talking about sort of how you would take coordinates and divide them up with a sort of classifier into boundaries for a really complex country boundary. It's one of these ones where like there's a city, which one polygon of the city is a part of one country and the rest is another. It's like some really weird place. And how that as you sort of take these linear equations, stack them up, do these other techniques that you can sort of start approaching a generalized
Starting point is 00:24:50 representation of even very, very complex sort of shapes and spaces. And I just really think it was well handled. It really helped me. It's one of those things like I kind of had to watch it again just to kind of like, oh, okay, I see where it's going. But definitely also kind of entertaining. So if you, I guess like me, have some understanding of some of these things, but there's a, there's like a gap, right? There's a goal from, okay, I kind of know what gradient descent is.
Starting point is 00:25:18 I could have a casual conversation about it, but how that ends up getting applied to giving us, you know, these AI models that we have today is there's some steps, missing steps in between and this isn't going to take you all the way or make you sort of like able to design state of art models or improve them, but giving you better glimpses into sort of some of the techniques that are used to cross that chasm. Yeah, totally. Yes, this is really interesting. I think it is pretty remarkable that it works.
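For readers who want to see the basic mechanic being described, here is a toy sketch of mini-batch gradient descent on a tiny, made-up least-squares problem. It is purely illustrative (two parameters and synthetic data), not the Welch Labs code and nothing like a real deep network, but it shows the core loop they discuss: sample a batch, estimate the gradient, step downhill.

import random

# Toy data: y = 3*x + 2 plus a little noise (made up purely for illustration).
data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in (i / 100 for i in range(100))]

w, b = 0.0, 0.0           # the two parameters we are fitting
lr, batch_size = 0.1, 8   # learning rate and mini-batch size

for step in range(2000):
    batch = random.sample(data, batch_size)   # a different random mini-batch each step
    # Gradient of mean squared error over the batch with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / batch_size
    grad_b = sum(2 * (w * x + b - y) for x, y in batch) / batch_size
    # Step downhill; the noise from sampling batches is part of what keeps the
    # optimizer moving instead of parking at the first flat spot it finds.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should land near 3 and 2

The point of the toy is only the loop structure; why the same recipe keeps working in the obscenely high-dimensional settings of real models is exactly what the video digs into.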
Starting point is 00:25:53 I will say that people get hung up on a lot of these, like, it's not optimal or, you know, theoretically this can't work or doesn't work or, et cetera. And my response to that is, like, it just has to be better than what's out there now. And so if what's out there now is nothing, it just has to be better than nothing. And so the bar is just a lot lower than like the theorists want it to be. I actually think the thing that almost made AI not happen
Starting point is 00:26:26 was people's fear about getting wrong answers. Like remember when, you know, GPT was in its infancy and Facebook released this thing that, like, wrote research papers. I don't know if you remember that. I forgot what it was called, But, you know, it was panned, and someone made a research paper where it's like the benefits of, of, like, drinking urine or something. There's just some bogus thing.
Starting point is 00:26:51 And, you know, because it's like, you can talk past the sale with AI. You can say, like, hey, there's incredible benefits in drinking urine, write the research paper that kind of explains this phenomenon. And it will kind of take what you said for granted and then just go off and do the thing. Um, so you can trick people, you know, that's what people are doing with the research paper thing. Um, but you could also just encounter problems like the famous, uh, how many ours are in strawberry, um, you know, should you put glue on pizza? It was all these famous examples where they eyes gone wrong. And, and people were just really fixated on, um, um, on all these on on the worst case scenario. And there's actually like a, there's a, um, uh, uh, There's like a, I don't call it a mental disorder, but there's like a, I went to this like career coaching thing. It was like a big seminar as a lot of people. And they, the lady who was giving the seminar spent a bunch of time talking about worst case scenario thinking. And she's like, this is one of the biggest problems I see as a career coach and as a life coach is people
Starting point is 00:28:03 doing worst case scenario thinking and not thinking about average case scenario. Or also best case scenario, but more importantly, average case scenario, right? And so there was a phenomenon of worst case scenario thinking with AI for a long time. And people, you know, just didn't want to release something like Chad GPT. And I will give Open AI a bunch of credit that they basically put it out there and got it to be really popular. And as, you know, they got a bunch of pushback, they dealt with that, you know, in a way that allowed that. to eclipse that problem. Yeah.
Starting point is 00:28:45 And I think it is still, like, people talk about the hallucinations and the incorrectness. I mean, I think it's not fully solved, but like you said, that's not meaning it isn't beneficial. We just have to learn. You know,
Starting point is 00:28:57 I think that same to your point, you kind of say, take almost anything we have today. Oh, a car, like a car is dangerous. Like cars are going to kill people. They're going to drive fast. And it's like, yeah, that's true. But, like, there are still
Starting point is 00:29:10 benefits, there are still other things. You know, we have to learn, collectively and individually, how to be responsible with the use of tools. And yeah, I think, too, with new things there's always a debate about usefulness versus danger, I guess, and you say worst case, and you can always hyper-fixate. Anything we do has some danger, right? Going outside and exercising... someone told me, oh, you know, you exercise, you can have a heart attack. Like, people say often when you exercise you can cause a heart attack, and it's like, but by exercising you are also lowering other issues that lower the probability of having a heart attack. So, like, you know, your solution is, like, sit inside, don't go outside, don't exercise, don't move? Like,
Starting point is 00:29:59 you're going to die from that for sure, you know. And so, yeah, these balances can be really difficult to find. Okay, wow, we took that in a different direction, but totally. Well, I guess it segues nicely to my last news, which I won't dwell on because we spent a lot of time talking about news. But it's basically an interview with Peter Thiel. And there's a lot of content there. But the thing that I took away that was really interesting was, you know, at one point the interviewer says, why AI? Why is there so much focus on AI? And I expected some kind of answer like unlimited productivity or, you know, wages can go to, you know,
Starting point is 00:30:40 the price of oil or something, right? But actually what he said was it's the only thing. Like AI is like the only like really exciting thing out there. And that was pretty mind-blowing. It's like is that, you know, if that's true, like why aren't there more exciting things? And maybe like I do feel like there is a tendency for everyone to kind of chase one thing at a time. Like, remember when car play and Android Auto
Starting point is 00:31:13 and just like getting stuff in the car was such a big deal and all the companies were copying each other? And now we're kind of seeing that with AI and it's like we just, for some reason, in society, we get like just fixated on one thing at a time.
Starting point is 00:31:29 But I thought that was really interesting. I wonder if this is like a recent, he's trying to make it sound like this is a recent thing, but I wonder if we've just always just been fixated on one thing. Interesting. I think in a similar vein, what I've heard a bit as well, like the focus, the drive
Starting point is 00:31:52 towards it from a variety of facets is it feels close to something like akin to an escape velocity. So like getting into a post-scarcity world. You can get whatever level of like engagement you want with it. but basically if we can develop an somebody can develop an AI system that becomes sort of super intelligent and then you can assign it to the task
Starting point is 00:32:17 of improving the AI systems somebody will will basically like break past the barrier right get into orbit whatever reach escape velocity and then they it'll be like winner take all and so you see a press from country level from company level
Starting point is 00:32:34 as almost an existential risk because if some other company does that, then they will have a system which can basically build any system. And so search, you know, chat, uh, image. And like we'll just get one by somebody who can basically devote all of the, you know, computational power they have to just building all of the other competitor stuff. And then those competitors will only have whatever resources remain from their sort of war chest to spend to, to, to also get there.
Starting point is 00:33:06 or they'll just be drained of, you know, financial inputs. And so maybe it's a bit the same. It's like the only interesting thing. But like there is also this may and maybe it's like focusing on the danger part you were saying. It's like there's this focus that if you could get to this sort of recursive AI that improves itself, then that will be so transformational that whoever, if someone gets there first like before everyone else by.
Starting point is 00:33:36 enough of a margin that the like everything changes yeah it's interesting i wonder i wonder to what degree that's true um you know i feel like it that's that's a that's a that's a really interesting perspective i think that people i think that okay even if the ai can do general things when you go to evaluate it you're going to evaluate it on your specific things that you care about and so in that sense like that part is not going to scale um but uh but maybe we can get the evaluation to be somehow like okay in our case you know the earth is our evaluator you know it's like it's like we went out and we hunted the tiger and one person you know got the tiger and the other person did it and that second person goes hungry and so we
Starting point is 00:34:36 the planet Earth is our as early man's evaluator but you know now we have to have synthetic evaluators for synthetic problems I wonder if that maybe that's like the next big job title you know AI agent evaluator that's like the new data scientist or something I mean there yeah people are talking about in the similar vein I guess moving to completely synthetic input instead of using sort of the Wikipedia the X tweets Well, X, X, X is what they're called.
Starting point is 00:35:09 Anyways, but like it's X. Yeah. The X's? The X, X, X. Oh, the X is on X? Yeah, I don't know. The Reddit posts, you know,
Starting point is 00:35:18 basically instead of all of the corpus, it's just not enough. Like, can you move equivalent to AlphaGo, you know, moving from the, you know, recorded games of Go to just self-evaluation. Like, can we shift that?
Starting point is 00:35:30 And like, what happens if you invent basically like a parallel universe of input and train on that and like what does that unlock or enable yeah so to use the robot example um you know can the robot effectively know if it if it when it goes to take your your trash can to the curb can it effectively can it know with high accuracy whether it did a good job or not you know i mean i guess it could take a picture of you know your your trash can on the curb but like if the robot did a bad job maybe it also does a bad job of knowing if that picture is correct you know like maybe what if it dumps your trash out on the in your driveway and then takes
Starting point is 00:36:20 a picture of that is like i did a great job you know so it's it's like it starts it's just kind of you have you've sort of moved the problem but maybe you still have a problem but maybe that is actually going to and that's the self-driving thing you were kind of alluding to car stuff like it's a bit difficult fitting in current systems but i mean i think maybe the what about if you think about the whole solution so the job well done is did the trash end up in the dump and that's did the robot system take the can to the curb did the garbage truck robot pick up was it able to successfully pick up without like so maybe the garbage can knows the mass of the trash that was in it and the mass that comes out at the end and if that ratio you know is less than one you get a bad score if it's over
Starting point is 00:37:05 one you get a bad score it needs to be exactly the mass in is the mass out at the end maybe you have to account for evaporation or something but like yeah and then actually it becomes this end-to-end measure and you have to attribute to like you can improve the garbage truck you can improve the like roller outers you can you know whatever it and it tries to iterate collectively so for a while we just end up with like giant dump trucks that back up to your house and try to like go into your garage door and extra you know you're just crazy stuff yeah yeah I think I think that's kind of where we're stuck now is is we're using human even you know open AI all these companies are using people to evaluate the models and and and that's where we have to get beyond it the models need to be evaluated by the earth somehow, by the environment. But I don't have a counter to what else is super interesting right now for everyone to focus on. So I can't disagree with the thesis from.
Starting point is 00:38:09 Yeah, me neither. You know, that's what made it so fascinating. Like, if I could point to like, oh, you know, gig economy is still a thing or something, then, you know, but there really isn't anything else, which is kind of wild. I think VR is dead. Maybe AR potentially. They are. It would be cool. Space travel, but like, it feels high barrier to entry. Yeah, you know, space travel, I have a hard time getting excited about space. I don't know why. It just doesn't really like, it doesn't sit. It doesn't, it doesn't excite me for some reason. Unless there's like gold on the moon. I just don't know about it. You're like that inside, dude. Well, maybe I can only be excited about one thing and that's AI. Maybe it's a me problem.
Starting point is 00:38:56 What about AI in space? Oh, mind is blown. Dyson's your program to power out AI. I can't get back into that. All right. All right. Time for a book of the show. All right.
Starting point is 00:39:09 I will, Confession Corner, I have not had a ton of time to read recently. I have not made a lot of time to read recently. So I went back into my history of books, and I'm going to give a shout out to The Hobbit. If you have not read The Hobbit, if you've not sort of jumped into the Lord of the Rings.
Starting point is 00:39:32 Delved? Delved. Dived. Yes. Spelunked. If you've not picked up your hammer and gone with the dwarves to mine and, okay, I'm going to stop because someone's going to call it out. You haven't gone to Moria. That's what I was going to say, but I was like, I don't know. I'm going to use it wrong. Because like, I don't remember which one's the one that had the problem. Anyways, the Hobbit is a sort of good entry into the world of the Lord of the Rings of J.R. Tolkien, obviously a classic. Hobbit is, I will say, a pretty easy self, relatively self-contained read. So definitely worth picking up if you've never done. I do think there's a lot of folks and disagree who kind of said this. It's J.R.R. Tolkien sort of set up a lot. of the arch type archetypes that we have today of sort of dwarves elves humans and like some of their
Starting point is 00:40:32 characteristics. People complain those tropes are overplayed today, but if you read modern fantasy, there's a lot of built-up stuff that you can trace back to the origins. And I think it is a good story. I think sometimes people will say, you know, like for me the first time I watched the Three Stooges, I didn't find it that funny, I'm like, I've seen these shticks before. But it's like, yeah, but when they did it, they were the people who invented it, and other people took it on. I think The Hobbit, to me, doesn't really suffer the same thing. You can read it and it is still good, even though other people have continued to pick up and sort of run with those concepts. It itself is still a fun and exciting read.
Starting point is 00:41:12 yeah totally i think uh if i remember correctly it's been a long time since i read it but there is a ton of world building. And because it's older, it does go at a slower pace. I feel like, you know, we're such dopamine fiends in the 21st century that, like, it's a, it feels like, like, we just can't have something that goes at a slow pace. If you've ever seen, like, old movies from the 50s or even like Space Odyssey, you know, these kind of movies, they go at such a slow pace. slower pace. It kind of blows your mind. The pace is crept up over time. It's like a boiling water frog type of thing. So it is going to be a slower pace and a lot of world building. But if you stick through it to the end, I think it's a phenomenal book and it leads you into all
Starting point is 00:42:05 the other Tolkien books. Yeah, I've always heard it like called prequel to the Lord of the Rings, but it was written before the Lord of the Rings. So it's, I don't think it technically qualifies as a prequel, like it was the book written and then Lord of the Rings was written after. So I don't know what you can call that. But it does, the story contained does precede the rest of the story of the Lord of the Rings, which is even a larger term. Even I think actually a little slower, a lot more world building. But Hubbit is definitely shorter. I think it's even considered kind of a kid's book. But you know, don't let that scare you off if you're an adult. But it should be a pretty quick read. Yeah, totally. All right, my book of the show is even more of a cop out. It's
Starting point is 00:42:45 It's what I've been doing instead of reading, which is maybe a shame, but it's a New York Times game. So this is an app. A lot of the games are free. You can subscribe to get access to the crossword puzzle, the famous crossword puzzle of New York Times and some other stuff. But it's very fun.
Starting point is 00:43:05 It's a simple app. It works offline. So if you're off the grid, you can still use it. I think it's something where like after a couple of days, it'll stop working. But if you go off the grid for, you know, an afternoon or something, it's fine.
Starting point is 00:43:18 And it just kind of keeps your mind going. It's got Wordle, the famous Wordle, but it's also got, you know, a few other ones. And it's a nice kind of thing to help you build your vocabulary and literature skills if you want to kind of continue to sharpen that axe. I did see something that Wordle will have to end in 2027. Did you see this? They're going to run out of words. Yeah.
Starting point is 00:43:45 So basically they have a dictionary of about 2,300 words. They're on, like, whatever, the mid-thousands. And so in 2027, they'll run out of words. And then people point out, yeah, but they could just expand the dictionary, sort of change the rules. It's only under their current rule set of not being allowed to repeat words and sticking to the words in that word list. But yeah, I guess in its current form, it has a finite number of puzzles to draw
Starting point is 00:44:13 from its sort of dictionary. Man, it's like, we thought Y2K was fake, but it's real. It's just coming in 27. Yeah, the world's going to end. What's going to happen? They'll have to release Wordle v2. They'll have to add an extra letter. Yeah, that's right.
Starting point is 00:44:31 Oh, man. They'll split it. It'll be Wordle light. They'll take away a letter. And then World Deluxe, it'll add a letter. And each one of those is a microtransaction. Oh, man. Oh, okay.
Starting point is 00:44:47 For $2, you can have a Wordle that is not expired. I'm terrible at word games, so thank you for this pick, but I won't be picking it up. You can... I'll do it in space. How about that? That way we both get something. Wordle makes me infuriated. I just look up the answer because it makes me feel dumb.
Starting point is 00:45:07 Like, I can't even get good guesses to keep progressing. Like, I know I have... I just purposely make bad guesses just to try to get more clues. I know I should be able to form a word, but I can't, and then I just rage quit. Oh, man, I have to confess, or admit: I am a Wordle god. I am actually so good at Wordle. I actually sent one to my wife the other day where the first word only had one letter out of place, and then the second word I picked had no letters. So all I had to go on after two words was one letter that was out of place, and I still got it
Starting point is 00:45:41 on the third try, because I just eliminated so many vowels and other things that there was only one thing left. I got it on my first try... I looked up what the answer was. Yeah, take that. Remember, for a while Scrabble was really big, Words with Friends or whatever. Oh, I remember that.
Starting point is 00:46:01 Yeah, I used to play. Oh, man, I played so many games of words with friends. But you're right. It had to be with friends. You couldn't play anonymously or you just get wrecked by a person who's cheating. All right. Let's have been fine. Oh, man.
Starting point is 00:46:17 All right. And if you want to support our New York Times addictions or our trips to space, please give us some money on Patreon. I'm just kidding. Actually, none of that money actually goes to us. We put all of it back into the show. We literally haven't taken a dime of it. But we have used it to do advertising and reach out and get guests and all of that good stuff.
Starting point is 00:46:41 So if you like the show, if you want the show to reach more people, please support us on Patreon. It's not technically a non-profit, but it is a nonprofit. We go and put all that money back into supporting the podcast. All right. Time for tool of the show.
Starting point is 00:46:59 All right. What's your tool of show, Patrick? It's a game, shocker. Escape Academy. Look, I thought for sure I must have shouted out this game like on the show before. And if I did, one, I don't care
Starting point is 00:47:12 because it's worth it. Two, I searched through all of our show notes and the website, and I couldn't find it. So I don't care. I'm double down on it, doubling down on it. Escape Academy is a great game. It's about playing an escape room with a story, a narrative about being in an academy that challenges you to solve puzzles and, like, you know, has a kind of life or death setup situation about problems. but in reality it's just like a really good, well-paced, you know, escape rooms can be, for me, there's some I've done before and it's just like, really, like, how was I supposed to know?
Starting point is 00:47:51 You just need to like randomly kick around until you, you know, find this thing where like the path to progress is non-obvious. I don't like the sort of just search around and find. That's not my, you know, cup of. Yeah, I was talking about this with a friend of mine about my cousin when we were nine years old. He made a Choose Your Own Adventure game in Basic on our Commodore 64. And it was like, you're in a town, there's a bank, and you can also walk down the road, which do you want to do? So it shows, you know, walk down the road.
Starting point is 00:48:24 It's like, you meet a guy, you die, the end. Yeah, and it's like, okay. So like whenever you make one of these, you know, point and click adventures, choose your own adventures, it's like you have to like somehow like very subtly hint to the. the person what the right answer is, but it has to be subtle enough that when they pick the right answer, it's satisfying. And it's like, I feel like there's an art form to that. I've never made one of those, but it feels actually a lot more complicated than you'd think. I'm sure it is. And I'll say for me, the subtlety of the answer can be problematic, but even just understanding like what the puzzle is. So like in your example, like making sure that it's clear that there's
Starting point is 00:49:09 some bit of information you should figure out to make the choice. If you just present it as a like trivial choice and I make the choice and then like, oh, game over. It's like that, that's just, I'll just quit. Like, I don't want to, like, that's not fun. And I was, I did play one the other day where there was like a hidden number, like, you know, sort of in very faint sort of color that you were supposed to see. But I didn't see it.
Starting point is 00:49:34 And then I just kept bumping my head against the wall. I asked for a hint. using the hint, I was able to solve a puzzle I wasn't supposed to be able to solve yet. And then I bypassed like a third of the thing. And I'm like, I only found out at the end when it's like, oh, you were supposed to do that. And I'm like, but I missed a very subtle clue. The game didn't help me. And like a bypassed a bunch of it because I was able to kind of like brute force a problem I wasn't supposed to solve because it was the obvious puzzle.
Starting point is 00:50:00 And I just assumed it was what I needed. So I asked for a hint and it gave me. Okay. All that aside, Escape Academy isn't really like that. it does have a good hint system it's reasonably easy to play through if you're really expert at you're probably laugh like oh this is this is really easy but it is fun it has a cooperative mode which is really cool so for me oh interesting kids and they got really into like getting out the piece of paper to like you know sort of solve some of the problems or write down the notes or you know figure out um
Starting point is 00:50:26 it doesn't require a ton of that um but there is a little bit of it um and some word play and stuff but there's overall i i loved it i it's probably the only the only the only game where I've like paid for the DLCs and like played all the DLCs like wow I don't think any other game I always I hate DLCs like yeah same can't express how much I hate them but it's probably the only game I've ever like paid for DLC like played it and feel like I've gotten my money's money's worth and like absolutely would have done it again oh that is amazing I'm gonna have to try this out you've had some amazing game recommendations like I I never finished Dave the Diver but I
Starting point is 00:51:04 really enjoyed my time with Dave. I haven't finished it either. Yeah, but you know, the first half of the game is amazing. Like, I feel, I feel like I'm super satisfied. I don't think I really need to see the ending. Yeah, kind of the same. An Escape Academy is available on, like, Steam. I think, you know, it's on Switch.
Starting point is 00:51:23 I think it's on Xbox as well. So most of the sort of gaming platforms. And it's not super expensive because it's been out for a while. Very cool. All right. My tool of the show, and this might start to become, more of a trend, but it's basically using AI, like using LLM. So I'm putting a link to chat GBT, but you can use really any of them. But it's multimodal. So for folks don't know, that just
Starting point is 00:51:48 means, you know, combining several modalities, such as text, audio, visual, etc. Multimodal LLMs to make calendar meetings. So, for example, you know, my kids do Taekwondo. And I'm always forgetting, like, what times, because they have different times, they're different ages and all that. I literally just took a picture of the Taekwondo schedule, and I said, create me a calendar, there's a thing called an ICS file. I guess the C is for calendar. I don't know. But it's a file that you can then, when you open it, it creates a calendar meeting. So I said, like, create an ICS file that has like a recurring meeting for me at these times. Like I have one kid who's this age, one who's this age. And so find, you know, from the picture, find the times that I have to be at
Starting point is 00:52:37 Taekwondo with them, and create an ICS file. Done. Done. Like, now in my calendar, every whatever day, there are meetings for that, and on Saturday and everything. You know, one of my kids is, like, a vocalist, right? And so he does a bunch of choir shows. So I took a picture of that schedule. I'm like, just put on my calendar all the choir shows for this grade. Done. And that has been amazing, just taking pictures of things. I took a picture of... I guess I'm getting off topic a little bit, but I took a picture of my trees in the backyard and I was like, are these trees healthy, what type of tree are they, what should I do to make them healthier, and it gave me really solid advice. The whole thing is amazing, but especially the calendar thing. I talked to a few friends about this, and they had never thought to do that. And so I figured it would be a good thing to talk about on the show, because it helps so much.
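For the curious, the .ics file Jason is describing is just plain text. A minimal sketch of a recurring event, written out with plain Python, might look like the following; the times, time zone, and event names here are made up for illustration, not the actual schedule.

# Hypothetical sketch of a recurring-event .ics file (names and times are invented).
ics = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//calendar-sketch//EN",
    "BEGIN:VEVENT",
    "UID:taekwondo-older-kid@example.local",          # hypothetical identifier
    "DTSTAMP:20250901T000000Z",
    "DTSTART;TZID=America/Chicago:20250908T170000",   # made-up class time
    "DTEND;TZID=America/Chicago:20250908T180000",
    "RRULE:FREQ=WEEKLY;BYDAY=MO,WE",                  # repeats every Monday and Wednesday
    "SUMMARY:Taekwondo (older kid)",
    "END:VEVENT",
    "END:VCALENDAR",
])

# iCalendar files use CRLF line endings, so write the text out verbatim.
with open("taekwondo.ics", "w", newline="") as f:
    f.write(ics + "\r\n")

Opening a file like that in most calendar apps creates the recurring meeting, which is all the LLM is really generating from the photo.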
Starting point is 00:53:26 Yeah. I think this is one of those cases... I was tempted to bring up an example where I tried doing this and it didn't work very well. But have you done it recently? Yes. Oh, okay. That's okay. It was a bit more complicated than what you were saying. But I think I won't take that bait, and instead what I'll say is: what do you have to lose? Like, if these ICS files
Starting point is 00:54:12 suck, you just delete them. Like, yeah, the bar is really low. If it screws up, the fallback is, what, you wasted maybe three or four minutes trying to prompt it to give you what you want, and then it didn't work, okay, just go back to doing it by hand anyways. Yeah. So I feel like I have been trying to encourage, like, people I work with... the bar, and I think you alluded to this earlier, the bar for AI isn't the magical, everyone's going to be 10x better, 100x, it's going to replace all the engineers. No. Like, can you be 10% better? Can you find a way or a place or a set of things it can do to make you 10% better, 5% better, 1% better? Can it enable you to do something you wouldn't otherwise do, right?
Starting point is 00:54:44 Like, I think, and to your point there, a lot of the barriers is just trying it. Like, just try it. Like, who cares? And it's not like that stuff's private. Like, it's a Taekwondo schedule is probably on their website. Like, yeah. There's no privacy concern there either. So like, in my opinion.
Starting point is 00:55:00 I guess maybe it knows where you are so it can send you Uber Eats food. No, I'm just imagining the advertising. Hey, we know you're at Taekwondo. Would you like French fries? Oh, man, that sounds delicious. This is getting better and better.
Starting point is 00:55:15 I think that, you know, this is another thing I've done, kind of also off topic but related: I actually put it in a loop. I said, find Python files that don't have a corresponding test underscore file and create a test for them, and then verify that the tests pass. And I actually ran that in a loop, so it just kept doing that, and I just let it go for hours, and when it was done it had created like 80,000 lines of tests, actually probably more than that. And it's just like, that's amazing. These are all, now I'm going to have to go through them today, and a lot of them, you know, might be crap, but some of them aren't, right? And like you said,
Starting point is 00:56:00 these are things I wouldn't have done, you know, because I just don't have time. But if the AI goes and does it, I can check the work. And that's easier than doing all the work. And then eventually we'll get this evaluation problem fixed. I won't have to do that either. So are you saying it helped you asynchronously write the tests? Yeah, that's right. Yeah, I had a global lock on my time.
Starting point is 00:56:26 It's called family. And while I was locked, it went off. So what would you say is the what and why of asynchronous computing? No, I'm just kidding. We're transitioning into our main topic. All right. So here's the reason why I decided I reached out to Patrick to cover this topic. Because I've been asking this in interviews.
Starting point is 00:56:51 We were looking for one more person to add to the team. I think we found them. But I was asking this kind of question, like, what's asynchronous, what's multi-threading, and very few people knew the answer, which really surprised me. So I thought, hey, this is a good topic for us to cover. I like to think of it, and this is a term I kind of made up, but I think it sums it up pretty well: asynchronous programming is multi-threading between the lines. So in other words, you know, you're writing lines of code and only one of your lines can really be executed at a time, but in
Starting point is 00:57:36 between the lines there might be areas where you can do multi-threading. So for example, you might have a line that says, download this image off the internet. And so the system can only read and understand that line sequentially, it can't do other things while it's doing that, but then the part that actually does the reading over the internet and waits for those packets to come from across the world, that actually could be done kind of off to the side. And so you're doing multi-threading in between the lines. You don't have to worry about semaphores and mutexes and all that stuff, because of this constraint. And so that makes your life easier. And so you get kind of the benefit. Because really, what you want depends on the problem you're
Starting point is 00:58:27 trying to solve. But for web applications, for example, what you really want is to not be waiting on the internet. And if you're not waiting on the internet, then you're pretty close to optimal for your, you know, web service. Because usually you're waiting on the internet to get your request, you know, complete, waiting on the internet, or intranet maybe, to send that request to some database, waiting for that database to come back. And so if you eliminate all that waiting, you also don't need to do a lot of multi-processing.
Starting point is 00:59:06 You don't necessarily need to do a lot of instructions at the same time. So that's kind of the big motivation for async. And so you need a couple of like core building blocks to do asynchronous programming. So you need thread local memory. So in that example I gave, where you're waiting to get some data, you know, over the internet, that data needs a place to go
Starting point is 00:59:38 that you know, no other routines can access that data. So for example, one way it's done in Go is through something called goroutines. And so a goroutine is not like a new process on your OS. It's not a whole new process, but it just kind of keeps track of the state of execution in this code. And it also has some thread-local, or routine-local, memory.
Starting point is 01:00:12 So if you say something like, you know, I want to download this packet of data, from the internet, and then I want to store it in this variable, you know, it has a place to go that it knows is kind of protected. And then while it's doing that protected operation, you know, reading from the internet to some protected thread local memory, it can kind of say, other threads can do other things. Other threads can be reading other things from the internet, or they could be doing arithmetic or whatever, because I know that none of those other threads can affect this operation that I'm doing right here.
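As a rough sketch of that idea in Python's asyncio (the names are made up, and the network read is simulated with a sleep so the example stays self-contained): each coroutine's data lands in its own local variable, and while one coroutine is waiting, the event loop runs the others, all on a single thread.

```python
import asyncio

async def download(name: str, seconds: float) -> str:
    # Stand-in for a real network read: while this coroutine is "waiting on the
    # wire", the event loop is free to run all the other coroutines.
    await asyncio.sleep(seconds)
    data = f"payload for {name}"   # lands in this coroutine's own local variable
    return data

async def main() -> None:
    # A hundred routines all "downloading" at once, on a single thread.
    tasks = [asyncio.create_task(download(f"image-{i}", 0.5)) for i in range(100)]
    results = await asyncio.gather(*tasks)
    print(len(results), "downloads finished")

asyncio.run(main())   # finishes in roughly 0.5 seconds, not 100 times that
```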
Starting point is 01:00:51 And so because I know there's no side effects, I can now let those other things go. And so you could have a hundred routines all downloading things from the internet at the same time, even if your program is single-threaded. So you'll hear about blocking versus non-blocking operations. And that basically means, you know, a blocking operation is one where it's not kind of releasing that lock. So imagine you have two kind of routines, and both of them are just running a for loop, adding two really large arrays pairwise, right? Well, those are blocking operations. So what's going to happen is the first routine is going to run, and it's going to do that entire for loop, and that second
Starting point is 01:01:50 routine is just going to have to wait. And then when the first routine is completely done, then it'll finish and the second routine will do its for loop. And so you haven't achieved really any parallelism, right? But now take that same scenario, but instead of just adding things in a for loop, you're going to have a for loop that's downloading from the internet. Well, that operation, that download-from-the-internet operation, has been built to be non-blocking. And so in the case of, let's say, TypeScript, where it's single-threaded, you have a way of telling the TypeScript or the JavaScript VM, like, hey, I'm about to do something now that is isolated, that's non-blocking. And so other routines can go and do work while this routine is just frozen on this operation. And then once you've downloaded what you needed from the internet, then you go back
Starting point is 01:02:50 to the virtual machine and say, hey, I'm ready, you know, I'm ready to block and take some time. And when it gives you that time back, then you can go and copy that information into that variable and all of that. In the case of Python, Python actually has coroutines and threads. And so what that means is, you can actually have multi-threading in Python where you have several different threads accessing memory at the same time. But there is a global interpreter lock. So here's a good way of thinking about it.
Starting point is 01:03:41 In the case of, let's say, TypeScript, if you're inside a TypeScript function, you kind of have to wait until there's a non-blocking call. So in the example I gave with the for loop, you're just stuck, right? In the case of Python, it would be a little different. If you're using threads, what would happen is the first thread would do part of the for loop, and then it would release, and then the second thread would do part of the for loop and then release. And they'll just kind of, you know, checkerboard this.
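A minimal sketch of that checkerboarding with Python threads (the loop size is arbitrary, just enough for the interleaving to happen):

```python
import threading

counts = {"first": 0, "second": 0}

def busy_loop(name: str) -> None:
    for _ in range(1_000_000):
        counts[name] += 1   # pure-Python work, so the GIL is held while each step runs...
    # ...but CPython periodically forces a switch, so two threads running this
    # interleave ("checkerboard") rather than the first one running to completion.

t1 = threading.Thread(target=busy_loop, args=("first",))
t2 = threading.Thread(target=busy_loop, args=("second",))
t1.start(); t2.start()
t1.join(); t2.join()
print(counts)   # both finish, but no faster than running them back to back
```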
Starting point is 01:04:12 And so it's not that the first one goes till completion. There's sort of this, you know, striping effect. Now, it might take the same amount of time in the end as the TypeScript version, but, you know, they're both making progress at the same time, versus TypeScript is single-threaded. And the way Python achieves this is through a mutex. This is called the Global Interpreter Lock, or GIL. And it basically says, you know, you can have many threads that are all sharing your VM, but only one of them can execute at a time. So suppose you wanted to accomplish the same thing with TypeScript.
Starting point is 01:04:50 Imagine the for loop example I gave with TypeScript. So you have a for loop, it goes from 1 to 10 million, adds these arrays together. But after every iteration of the for loop, you put a yield. So yield is a special keyword in asynchronous programming. Yield basically says, I just want to give up my time slice. Like, I'm not done yet, but I feel like the right thing to do is to give up my spot and let another thread do some work. And so as a programmer, you might have various reasons for doing this, right?
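In Python's asyncio, the same move is usually spelled await asyncio.sleep(0); here's a rough sketch of the idea (tiny arrays just to keep it runnable):

```python
import asyncio

async def add_arrays(name: str, a: list, b: list) -> list:
    out = []
    for i in range(len(a)):
        out.append(a[i] + b[i])   # the CPU-bound work
        await asyncio.sleep(0)    # the "yield": give up the time slice every iteration
    print(name, "finished")
    return out

async def main() -> None:
    a, b = list(range(5)), list(range(5))
    # Without the sleep(0), the first task would run its whole loop before the
    # second ever started; with it, the two stripe their iterations.
    await asyncio.gather(add_arrays("first", a, b), add_arrays("second", a, b))

asyncio.run(main())
```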
Starting point is 01:05:28 So imagine if your for loop said, you know, for a from 1 to 10 million, take these two arrays and add that element together, and then yield. So now you're going to get that striping, right? So now the first process will yield, the second one will be able to do one iteration of the for loop, and then it will yield back to the first one. And so, you know, the Python multi-threading is the equivalent
Starting point is 01:05:53 of having a yield after every operation, and so, you know, the GIL kind of enforces all of that. What do you think, Patrick? Any holes in that that we could cover? I think for me my introduction is probably
Starting point is 01:06:13 like a bit different, but I think it parallels, so I don't think it's contrary to what you're saying. But for me it was, and you kind of bump into this if you ever do Arduino programming, or embedded programming generally, and there sometimes the mapping is actually really weird from something like Python to something like running C++ on an embedded microcontroller, because it's just sort of different paradigms. Like networking, well, now it's more common, but when I was doing it, it was very rare that you would do networking-related stuff in these contexts. And there we would talk about interrupts. And so the idea is, if you imagine a microcontroller wants to talk to a sensor, that sensor communicates over something called I squared C, I2C. All it is is a set of
Starting point is 01:07:07 wires and voltages that move up or down in a certain pattern to send data from your piece of silicon to another piece of silicon glued to your thing with wires, or on a printed circuit board, or something else. And so you want to measure, let's say, the temperature of the room, and you want to record this. And the way that it used to kind of always be done was something called bit banging. And so bit banging would be: hey, I'm a microcontroller, I need to read this, so I'm going to set up an I squared C library in code that says, move the voltage on this wire from low to high, wait this amount of time, where a wait is go into a loop that is a certain number of instructions long, timed to be the right amount of wall clock time based
Starting point is 01:07:57 on the rate of the processor. So it's just sitting there doing no-op instructions, but it's running the instructions. It's using all of the power of the full CPU, which has all sorts of peripherals on it, so it's sort of consuming the most. And you go high for a certain amount of time, then you go low, and you're moving along the sort of data to be sent out to ask for a temperature reading. Then you have to wait, and then it starts coming back in. But you can imagine thousands, tens of thousands of no-op cycles waiting for the voltage to move up and down due to capacitance and the rise rate and the data transmission.
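As a sketch of what that bit banging looks like (the pin helpers here are hypothetical stand-ins for whatever the real hardware layer provides, and the cycle count is made up):

```python
# Hypothetical helpers, standing in for the real hardware access layer.
def set_pin(level: int) -> None:
    ...   # drive the data line high (1) or low (0)

def busy_wait(cycles: int) -> None:
    for _ in range(cycles):   # literally burn instructions to pass wall-clock time
        pass

def bit_bang_byte(byte: int) -> None:
    for bit in range(8):
        set_pin((byte >> bit) & 1)   # wiggle the wire for this bit
        busy_wait(10_000)            # the CPU does nothing useful while the line settles
```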
Starting point is 01:08:35 So the CPU is really bored, but it's sitting there doing nothing, right? And so this is the equivalent of polling, or single threaded, and you're doing all of the work in one context. Then people realized, well, we have more sophisticated silicon. Why don't we make a sort of finite state machine, an accelerator, another set of transistors where, you sort of said thread-local memory, the equivalent here is what we call a register. So you have a set of registers and you can write into them: hey, I want
Starting point is 01:09:18 to send out this data stream, and then you would say go. And what you could do then is put the CPU into a low-power state, like go to sleep, and then you're going to save power, right? Or you could go to other processing if you had it, but saving power is also really, really important in a lot of these contexts. What's going to happen is the little bit of circuitry is going to send the thing out, it's going to wait for it to come back, and then it's going to interrupt, and you're going to have a little function that is mapped to say: when this thing is done, it's going to call the instruction that you've loaded into this register. So the instruction there needs to be a jump to a function that you wrote. And then that function, when it runs, is going to push the current thread, the currently running CPU state, onto the stack. It's going to allow you to run some piece of code. And that code could be: copy the data out of the registers into a set spot in memory, or into a queue, or
Starting point is 01:10:06 just handle it, like write it into the flash memory, but that could take a long time, so maybe you don't want to do it there. And you can imagine, as you're handling 10, 15, 20 peripherals all coming in at different rates, doing all of that and making sure that none of it gets dropped becomes harder and harder if you don't have some mechanism that allows this sort of distributed handling to happen. And so the asynchronous context maps pretty cleanly to what you're saying. You're sending some data to the network and saying, hey, I need you to transmit this.
Starting point is 01:10:41 It's going all the way across the world. It's coming all the way back. And you can sit there and wait for it. And that's called blocking. But under the hood, the blocking is still just basically sitting there. Are you done? Are you done? Are you done? Are you done? It's not doing any work. It's not bit banging anymore. It's just waiting on the other little processor to do all the work.
Starting point is 01:11:00 And so I think, as you kind of pointed out, there's this clear mapping between blocking and non-blocking. I think the rest of the conversation, and this is a bit of what you're getting into, is that there are kind of two ways to handle it. One is sort of inline in the code: I want to ask for something to happen, and while it's happening, that execution just kind of pauses there, and maybe other stuff can run, and then when it's done, I'm going to pick back up.
Starting point is 01:11:31 And that is one method. The other method, if you think about an interrupt as a form of a callback, is you could have almost a DAG, which is: I'm going to ask for something to occur, I'm going to transfer control to that thing, and when that thing's done, it's going to call the next stage. Oh, sorry, DAG: directed acyclic graph. So it's just a pipeline. And so you call the send-networking function. And when the networking is done, it's going to call some function. But that function is a completely different part of the
Starting point is 01:11:53 code, but it's responsible, and it has enough context, and maybe you even pass a little bit of context, to say: hey, I'm getting a response, and when I get a response, this is what I'm going to do. So if you think of a graphical user interface, and I say, type in my text to my friend and I click send, I don't really need control to return. When my friend sends a response back, that code just needs to know to write it to the screen, like write it into the next line of the message conversation. And so you can kind of have this feed
Starting point is 01:12:28 forward, and so you don't really return back to the same point of control. And there are these, and they can be mixed, but these kind of two paradigms, well, I guess there's a third one: there's this sort of block and wait, where control returns there, and that can be blocking or asynchronous, allowing for cooperation; you can have this feed-forward, sort of DAG approach; and then the final method, kind of a spoiler, we'll get to it in a minute, is the sort of future/promise, where you're getting some state back that somebody else is filling in, and you're going to go check on it later, you're going to go ask later, hey, was that done yet, or wait for it to be done, or something else. I think that's a little bit of a different handling.
Starting point is 01:13:11 So there are these kind of different methods, but under the hood they're all really the same thing. They're just how the programmer wants to relate to that work, and how you want to set things up, and what other kinds of things are in your system. And if you have nothing else to do, letting it block can be a really good answer in some contexts. But in other contexts, you have other background processing or other things that need to be handled. And so depending on the paradigm of your program, you'll kind of choose a different asynchronous behavior.
Starting point is 01:13:41 Yeah, yeah, that makes sense. I mean, I'm reminded of, and I should have made this a news article, but if you read the history of nginx, so NGINX, it's spelled N-G-I-N-X, but basically it's the super fast, you know, web handler, web service handler, and it does routing, proxying, et cetera. And when nginx was invented or, you know,
Starting point is 01:14:07 was initially being developed, the best thing that was out there for doing web stuff was something kind of like Tomcat. Tomcat is a Java thing; there were others written in other languages. But the way these things would work is, you know, when you started Tom
Starting point is 01:14:31 cat, it would spin up, let's say, eight processes, like OS-level processes on your computer, and it's like, okay, here are your eight processes. And then the main process, when it sees there's a new web request, it would just send it to one of those eight, or all eight of them would listen, I'm not sure at that point what exactly happens there, but basically one of those eight receives a web request, handles it, which might include going to the database and doing other stuff. Maybe it's over I2C, it's talking to a temperature sensor or something, and then it responds with the response. But because you had eight processes, you could only do eight things at a time. And it could be that all eight of those are waiting on the database.
Starting point is 01:15:13 And you really could add a ninth one or a tenth one without affecting that computer's load, because they're all idle, right? And so nginx was pretty revolutionary in that they basically said, okay, we're going to create the same eight processes, but we're going to use asynchronous programming. And so, you know, if one of these eight processes is waiting on a database, then that same process can go and, like, fetch another web request while it's waiting on the database.
Starting point is 01:15:45 And so you could actually have, you know, eight processes handling a thousand web requests at the same time instead of eight at a time. And this blew everybody's mind. I remember we were using Tomcat on something, this is like 2003-ish, and yeah, constantly running into issues with quality of service, where, you know, I was working at an online school, and on registration day there'd be a ton of people hitting the site and they would inevitably bring it down. And so you had this machine that was mostly idle, but then also just blocking everyone's
Starting point is 01:16:35 requests. So nginx, it was amazing. The thing about it is, if you were to do this yourself, like what Patrick was saying, if you were to try to implement this asynchronous thing yourself, what you would have to do is, and I'm going to use the higher-level terms, this probably doesn't work on Arduino, you could help translate: you'd basically have to say, okay, what things am I doing right now? Okay, I'm doing these eight things. Okay, can I sort of peek at each of these eight things to see if one of them is ready? Oh, the sixth one is ready. Okay, execute that function. Oh, now that function got a little bit further, but now it's blocked again. Okay, put it back on the list. And so you could imagine
Starting point is 01:17:22 what a nightmare that would be to have to code up, you know, for every application. So in this case, that whole nightmare of just peeking at a zillion different things all the time and finding the one that's available and running it, someone else has implemented that nightmare for you, so that you can just work at a higher level. Okay, there's a bunch there. That was really good. So I think, to be clear, this is all a bit, how do you want to say it, recursive, fractal? So the operating system is doing kind of what you're saying.
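For the curious, that peek-at-everything loop is basically an event loop. Here is a toy version of the idea, with Python generators standing in for the request handlers (the timings are made up); libraries like asyncio, or nginx's event loop in C, implement a far more capable version of this so you don't have to:

```python
import collections
import time

def fake_handler(name: str, finish_at: float):
    # A toy handler: it is "blocked" until its finish time, and it signals that
    # by yielding instead of returning.
    while time.monotonic() < finish_at:
        yield                        # not ready yet: put me back on the list
    print(name, "finished")

def toy_event_loop(handlers) -> None:
    ready = collections.deque(handlers)
    while ready:
        handler = ready.popleft()    # peek at the next thing we're doing
        try:
            next(handler)            # run it until it blocks again...
            ready.append(handler)    # ...and put it back on the list
        except StopIteration:
            pass                     # this one is completely done

start = time.monotonic()
toy_event_loop([fake_handler(f"request-{i}", start + 0.05 * i) for i in range(8)])
```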
Starting point is 01:17:57 So it has a scheduler. Modern computers normally have a preemptive scheduler, which means exactly that: you have some thread running that gets stuck, or uses up its time, and the operating system will suspend it and run a different thread in order to time-share. And hopefully some of them are just doing busy loops polling for something, and you're getting a win. But it's not magic. None of this stuff is magic. It's kind of one of those two-sides-of-the-same-coin things: in some cases it sure seems like magic, but in other cases you're like, I don't get it, there's still only one CPU. And yeah, you're right, there is really only one CPU. So in, like,
Starting point is 01:18:41 your database example, let's say the database is local on disk and you really are kind of bottlenecked. Like, let's say you're already maxing out the hard drive and RAM transfer speed to your CPU. Whether you have one, eight, nine, ten, or a hundred database jobs queued up, you're not gaining any more throughput by knowing that you have a hundred jobs versus knowing that you have eight jobs. Like, there's nothing more to do. The trick comes in that that isn't normally the case. It's not that every job coming in needs the database. For some jobs, you will already have a cached answer. It just wants to say, what is the list, in your example of, you know, registration, it's like, what is the list of all classes? Well, you could have
Starting point is 01:19:25 memoized it. You could have cached that. Like, you know the list from five seconds ago of all possible classes. You don't know their current state, but you know all the things. And so some requests can be quickly turned back around, so that the server feels more responsive. And so while you're waiting on the hard drive and RAM to transmit database responses, you're sort of getting these things back. And the question becomes a little bit one of scale: are you the only thing running in the operating system? Are other things running? Do you have other threads running? There is still a sort of fundamental limit to how much stuff can happen. And going to an asynchronous solution isn't a cure-all,
Starting point is 01:20:09 but it is one of these cases, as you're pointing out, where often we can know more about the problem than the generic operating system solution would give you. And so letting eight OS threads handle something isn't going to be optimal if we know that the work is very heterogeneous. Because if we have one in ten come in that needs a database, but it takes a long time to get a database response, then as soon as you have those few queued up, everybody else, even those that don't need the database, is stuck waiting. If you have this sort of asynchronous thing that you're mentioning, you can just burn through the entire queue of everybody who doesn't need the database.
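A toy sketch of that split in asyncio, assuming a memoized class list like in the registration example (the names and timings are invented): the cached requests are answered immediately, while only the odd one out waits on the slow database call.

```python
import asyncio

cache = {"class_list": ["Algorithms", "Databases", "Networking"]}   # memoized answer

async def slow_db_lookup(key: str) -> str:
    await asyncio.sleep(1.0)              # the bottlenecked resource
    return f"fresh value for {key}"

async def handle(key: str) -> str:
    if key in cache:
        return str(cache[key])            # fast path: answered right away
    return await slow_db_lookup(key)      # slow path: only these wait on the database

async def main() -> None:
    requests = ["class_list"] * 9 + ["enrollment_for_alice"]   # 1 in 10 needs the DB
    results = await asyncio.gather(*(handle(k) for k in requests))
    print(len(results), "requests answered in about one second, not ten")

asyncio.run(main())
```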
Starting point is 01:20:50 People who need the database may still get a failure, right? Like, you know, they may still have a timeout. But overall, your metrics are better. The responsiveness feels better. And you can kind of guarantee you're maximizing the proper throughput. Yeah, yeah, totally. I mean, this is also like there's a, dynamic at play here, which is like throughput versus latency.
Starting point is 01:21:12 So imagine. Oh, yeah. Okay, good. Yeah, like, imagine like the connection. Imagine like the database is like a giant sewer pipe, but the water can only go, you know, one meter a second. And so, you know, you could put like a ton of, of water through, but it can only go through at a certain rate.
Starting point is 01:21:34 And so that's where, yeah, having a zillion connections to the database is actually better, because you're just hiding all of that latency. But you're right. With asynchronous programming, you do run into a lot of problems around, you know, now you can do a thousand things, but that also means that potentially there's a thousand things
Starting point is 01:21:56 that are in a bad state, or if there's a bottleneck further down in the chain, then now all those thousand things are sort of fighting for resources. So even in asynchronous solutions, like if you look at FastAPI, for example, even though it's asynchronous and you could handle a thousand requests on one thread if you wanted to, even there you would specify a limit. So you typically would say something like, if I have, I'm just going to make up a number, if I have 32 requests simultaneously on this machine and a 33rd comes in, I'm just going to reject it and return an HTTP 503, and then my client will call again and hopefully get a different machine.
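FastAPI doesn't expose it with exactly that knob, so treat this as a rough sketch of the idea in plain asyncio (the limit, the fake work, and the status codes are just illustrative): keep a count of in-flight requests and shed load once it passes the cap.

```python
import asyncio

MAX_IN_FLIGHT = 32   # made-up limit, like the 32 in the example
in_flight = 0

async def handle(request_id: int) -> int:
    global in_flight
    if in_flight >= MAX_IN_FLIGHT:
        return 503                    # shed load; the client retries elsewhere
    in_flight += 1                    # safe here: single thread, no await since the check
    try:
        await asyncio.sleep(0.1)      # stand-in for the real work (DB call, etc.)
        return 200
    finally:
        in_flight -= 1

async def main() -> None:
    statuses = await asyncio.gather(*(handle(i) for i in range(100)))
    print(statuses.count(200), "served,", statuses.count(503), "rejected")

asyncio.run(main())
```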
Starting point is 01:22:57 Because even though the machine might not even be very busy, you just don't want to be in a weird situation where there's a thousand things that aren't getting done, because that's usually a sign that something else is wrong. So asynchronous, to your point, it's not a free lunch. You do have to, you know, now that you've been given this power, you do have to use it responsibly. Yeah, you bring up a good thing, there's an interplay here. We didn't have it in the notes, but, like, queue management, and there are stacked-up queues everywhere. So, as you were mentioning, you're going to have a load balancer, and the load balancer will have a little bit of a queue for sending stuff. If you just consume a thousand things, even though you can't handle them, some other server may have finished everything it's doing and you're just holding those thousand, waiting for expiration, basically. And in reality, as you said, if you returned an error code after a certain amount, it'll just run back to the load balancer and hopefully get assigned to a machine that isn't having an issue or isn't overloaded or whatever, and those have their own queues. So there's this very interesting whole-system dynamic. The other thing about asynchronous programming, though, that is important to know about and be aware of, is you do have to, just like with multi-threading, be concerned about race conditions and data safety issues, depending on what all is happening, right?
Starting point is 01:24:08 So you can imagine I send a network request, and then I have an update from the user, I want to do something different, I've changed my state. I send the new networking request that I want to supersede the old one, but the old one comes back, and I think it's the answer to the new one. You know what I mean? Like, you have to manage that. Wait a minute, I have to be really careful, I'm doing more than one thing at a time. And so you can end up with all of the traditional things you would hear about related to sort of thread
Starting point is 01:24:37 safety, being careful, making sure that, you know, you're aware of whether the data structures, like queues and other things you're using, are multi-thread safe, even if you're only doing async stuff, because you can end up in these weird situations where out-of-sequence stuff is occurring, or overrides. Or just be aware that it does add a lot more complexity to the system. But there are a lot of gains to be made in many situations.
Starting point is 01:25:28 Yeah, totally. And this is really what separates kind of like masters of their craft from apprentices. You know, it's like all of the lessons learned around, oh, you know, we got burned this way or that way, or, you know, the database kind of fell over because we weren't monitoring it. And what it really comes down to is just setting up a ton of monitoring and constantly tweaking the knobs. You know, if the machine is mostly idle, then maybe it can handle more processes, but, you know, if processes are getting backed up, why are they getting backed up? And so there isn't a system you're going to build that has concurrency that's going to work perfectly the first time and forever. You know, as the load profile changes, you'll have to
Starting point is 01:26:20 adapt to it, and it becomes kind of a living system. This is pretty unavoidable. So async, you know, helps with a lot of the mistakes that come from multi-processing; with async you can eliminate a lot of those as candidates. Like, if we were basically building async ourselves, we'd have to wonder, oh, did I get the process pool right? Am I releasing the semaphore at the right time? And so because of async, you don't have to worry about that, but the consequences you still have to deal with. You bring up a really good point, actually. I think if you're going to go with this approach, so there are some UI frameworks that
Starting point is 01:26:50 I feel are much more commonly written async at the framework level, because you want to make sure that the user interface stays responsive. So you shouldn't ever do anything blocking. And so they're actually really set up from the start to be asynchronous. But people who write that, well,
Starting point is 01:27:12 I should say a lot of people I work with don't write UIs, they're not actually even familiar with that. As you pointed out, if you ask them about it, they wouldn't really be aware of the nuances of these tradeoffs and the decisions and the complexities and measuring, and all of the parameters, like having reasonable parameters and monitoring whether or not they're set well, would just be stuff not present.
Starting point is 01:27:33 So it is really weird. Like, on the one hand it's common enough that you would sort of say, it's in every UI framework to have asynchronous stuff, what do you mean people don't know what it is? But it's this very weird division of practitioners, where I think there are whole classes of people who go a very long career without really getting into any of it. Yeah, yeah, totally. Which is why you need to subscribe to Programming Throwdown. It's a perfect segue to our outro.
Starting point is 01:28:08 I mean, we, you know, I feel like we gave a pretty good background. I think, you know, maybe the call to action here is: really learn async programming. I mean, if
Starting point is 01:28:21 you're doing UI stuff, you pretty much have to. I think it's at the point now where I think Android and iOS, if you try to do a network request on the main thread, it'll just give you an error, like a runtime error. It just literally will not let you use the main thread to do blocking stuff. So it's a non-starter. But even if you're
Starting point is 01:28:47 doing anything with distributed systems, anything with queue processing, you'll encounter this in some way, shape, or form. So if this is all new, definitely dive in. This is an area that's worth learning. I think with AI and agents and all of these things, it's going to become even more important. You know, an agent might send a text message and wait to get a response from somebody, and it can't just block the machine while it's doing that, right? You want to run like 10,000 agents on one machine. So, super important area. I think that
Starting point is 01:29:27 it's definitely something that's worth researching. And if you have any questions, you can always go on Discord. The Discord channel is getting more engagement. I always try to reply to stuff
Starting point is 01:29:42 on there. So check us out on Discord, or shoot us an email if it's a topic we could cover. Yep. Thank you everyone. All right. Catch you later. Programming Throwdown is distributed under a Creative Commons Attribution Share-Alike 2.0 license. You're free to share, copy, distribute, transmit the work, to remix, adapt the work, but you must provide attribution to Patrick and I and share alike in kind.
