Python Bytes - #330 Your data, validated 5x-50x faster, coming soon

Episode Date: April 6, 2023

Topics covered in this episode:

- Pydantic V2 Pre Release
- microdot: The impossibly small web framework for Python and MicroPython
- GitHub Actions Tools: watchgha, build-and-inspect-python-package, and pytest-github-actions-annotate-failures
- PEP 709 – Inlined comprehensions
- Extras
- Joke

See the full show notes for this episode on the website at pythonbytes.fm/330

Transcript
Starting point is 00:00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 330, recorded April 4th, 2023. I'm Michael Kennedy. And I'm Brian Okken. And you can connect with us over on Fosstodon. If you're on Mastodon, find us there. I'm at mkennedy at fosstodon.org. Brian's Brian Okken over there.
Starting point is 00:00:22 And the show is at pythonbytes at fosstodon dot org. And if you're interested in the video version, check out pythonbytes.fm/live. Click that, go over to the YouTube channel, subscribe, get notified, you know, get a little pop-up when we start streaming live. It's always fun to be part of it. We encourage people to check the show out that way as well. So Brian, let's start off with something that has been almost exactly one year in the works. I was just going to ask you about that. So was this about a year ago we talked about this Pydantic rewrite? I think I saw something on Twitter of all places from Samuel Colvin saying it was April 4th, 2022 that he started working on Pydantic version 2.
Starting point is 00:01:09 So that sounds like to the day. Okay, well, it's not completely here yet, but it's here enough to try. So I'm pretty excited about it. So, Pydantic v2 pre-release. It's available now, so people can install it. You have to do the pip install dash dash pre pydantic, and then you can say pydantic greater
Starting point is 00:01:33 than or equal to 2.0a1, I guess, if you want to get the alpha one or better. So the big news is alpha one's available, and it's pretty exciting. There's a whole bunch of great changes. I think we talked about it, and you talked about it on your show too. Yeah. But the headline here is, one, it's not complete yet. This is the alpha, we're not even at beta yet. So if you try it out and you see something, go to their GitHub and create an issue. They want to have people use the bug V2 label to create issues around version 2, because they want to hop on those right away. Anyway, one of the big changes was to move a lot of Pydantic, the rules and everything, into a different module called pydantic-core. That one's mostly written in Rust, and so it's like five to fifty times faster overall for performance.
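For anyone who wants to follow along at home, here's a minimal sketch of trying the alpha. The Order model and its fields are invented for illustration, and model_validate is one of the renamed v2 method names mentioned below.

```python
# Hedged sketch of trying the v2 alpha. Install the pre-release first:
#   pip install --pre "pydantic>=2.0a1"
# The Order model and its fields are invented for illustration.
from pydantic import BaseModel

class Order(BaseModel):
    order_id: int
    price: float

# model_validate is one of the renamed v2 methods (v1 called this parse_obj)
order = Order.model_validate({"order_id": "7", "price": "9.99"})
print(order.order_id, order.price)  # 7 9.99 -- inputs validated and coerced
```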
Starting point is 00:02:34 So that's pretty exciting, because, I mean, when you're using Pydantic, it's hitting for every interaction, right? So as fast as possible is great. And I do like the idea of separating the Rust part out into a different module, a different package, pydantic-core, so that they can kind of maintain it and have safety and maintenance around that a little separately. I think that makes sense. Yeah, and people used to have to create their derived classes and put a lot of their customization there, you know, what's called root-level validators and things like that, where it's like, I want to validate the
Starting point is 00:03:14 whole class, not just a certain field, or, you know, if this is set, then that has to be set that way. A lot of those things had to be done in an OOP way, and I think with pydantic-core you have more direct access to a layer below. So it's not just faster, which is fantastic, but it also opens up more ways to interact with Pydantic, which is cool. Yeah. So they've got a lot of stuff working already. They want people to be able to experiment and try out their base model, which keeps a lot of the same features for validation, but there are new method names.
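For instance, that whole-class, "if this is set, that has to be set" style of validation was spelled with v1's root_validator, one of the pieces that gets renamed in v2. A hedged sketch, with an invented Shipment model:

```python
# The OOP-style, whole-model validation being described, sketched with
# Pydantic v1's root_validator (renamed in v2). Shipment is invented here.
from typing import Optional
from pydantic import BaseModel, root_validator

class Shipment(BaseModel):
    express: bool = False
    courier: Optional[str] = None

    @root_validator
    def courier_required_for_express(cls, values):
        # "if this is set, then that has to be set" across the whole class
        if values.get("express") and not values.get("courier"):
            raise ValueError("express shipments need a courier")
        return values
```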
Starting point is 00:03:52 And there are changes to data classes, serialization, strict mode, different schemas: lots of changes for v2. So they'd like people to try it out. There is a lot of stuff still under construction, mostly documentation. And settings have changed: they were BaseSettings, and now they're going to be in pydantic-settings. That's not quite ready. So there's still some work to do, even in the migration guide. So they've gotten a start on the migration guide,
Starting point is 00:04:20 but it's not there. As you see on some of the links, there's changes to data classes, changes to base model. Some of the stuff's already there, but it's still under construction. So pretty exciting. I'm definitely excited. And the five to fifty times faster, that's no joke. You might be, okay, well, what are you doing? I'm, like, parsing a settings file, or I've got a single JSON document, whatever. All of FastAPI is deeply, not all,
Starting point is 00:04:51 but much of FastAPI is deeply based on exchanging rich data with Pydantic, right? So your API layer could get much faster, right? And you can also use this, people maybe don't realize it. You can use this with other frameworks as well. You could use it with Flask. You could use it with Pyramid. You know, FastAPI is cool
Starting point is 00:05:08 because you can just put an argument of type Pydantic model and it automatically fills it all in. But all you've got to do is take the post dictionary and feed it to a Pydantic model, right there on the first line inside the function, and you're in the same place.
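Here's a hedged sketch of that pattern with Flask; the /items route and the Item model are invented for illustration:

```python
# Validate a POST body with Pydantic on the first line of a plain Flask
# view. The /items route and the Item model are invented for illustration.
from flask import Flask, request
from pydantic import BaseModel

app = Flask(__name__)

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item():
    item = Item(**request.get_json())  # feed the posted dict to the model
    return {"name": item.name, "price": item.price}
```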
Starting point is 00:05:22 So you can use this across all these areas. Then for example, pythonbytes.fm is powered by Beanie, which is Pydantic plus MongoDB plus async, which is awesome. But every single database record that comes back goes through Pydantic. And if you're using something like Beanie or SQLModel and FastAPI, your data layer goes through Pydantic
Starting point is 00:05:41 and your web layer, because there's like multi-layered Pydantic operations on every interaction. And so making that part five to 50 times faster is just huge, right? That's a really big surface area to make a lot faster. I got speed ups as well to talk about later in the show, but it's not that much area. That's awesome. One of the things you bring up, which is interesting, is that a lot of people, I mean, there's tons of people that use Pydantic just by itself
Starting point is 00:06:08 with their own code. But the people that mostly touch it through FastAPI or Beanie or something, they may have to wait until those projects bring on the changes then, unless those projects have branches for B2, which who knows.
Starting point is 00:06:24 I hope so. And maybe there's a way, you know. I think the breaking changes in, say, base model, for example, are kind of deprecated already in 1.10.7. They're still there, right? But I think they become breaking changes here. Also, if you're doing model validation, a lot of the function names get changed. Yeah, like things like from_orm go away. There's a bunch of little, I don't think they're big changes
Starting point is 00:06:55 that are going to be a huge problem for people, but they are incompatibilities, as you point out. Both Roman Wright and Sebastian Ramirez seem to be really on top of their projects, respectively, Beanie and FastAPI. So I feel like by the time this becomes fully released, they'll be there. Yeah. And what's one of the reasons why I covered it is to try to promote everybody.
Starting point is 00:07:14 Nudge, nudge. Hey, exactly. Nudge, nudge. Exactly. Please. Yes, indeed. Yeah, it would be awesome if, when this comes out, just boom, V2 is out, you just update all the packages that depend upon it and you can adopt it right away.
Starting point is 00:07:30 That would be great. But to be fair, like let's say I'm working on like a side project that's using FastAPI or Beanie or something and I don't have it in production yet. I'd be like, yeah, let's use V2 right away. But if I've got a production system, I don't want to switch right away. So I get that there's projects at different levels and group projects like FastAPI and Beanie have to keep that in mind.
Starting point is 00:07:54 So, yeah. All right. Well, this is exciting to me. I know this has been coming for a long time, so excellent work. What you got for us? Well, a real quick real-time follow-up. I interviewed Samuel Colvin. He loves to do stuff on the fourth of months, apparently, on
Starting point is 00:08:10 August 4th as well last year, called Pydantic V2, The Plan. So people can check that out if they're interested. But let's talk about something really small. Okay. Okay. From a friend of the show, Miguel Grinberg. He created this thing called Microdot. Microdot, it's very small. It's bigger even than, like, the semi-dot or the regular dot. Very small. No. So this is, yeah, back to web frameworks. This is the impossibly small web framework for Python and MicroPython. So I believe its reason for existence really is to basically bring something like Flask to MicroPython and CircuitPython, which is cool. However, it also runs under standard CPython, which opens up some interesting possibilities as well.
Starting point is 00:09:00 So if we go down here, if you're familiar with the Flask API, you should be real familiar with Microdot. So app equals, instead of Flask, you say Microdot. You've got a function, it says I want to do an app.route on it. Or, I don't know if he supports the direct HTTP verbs there, like app.get, app.post, like Flask added recently, but app.route for sure. And then one of the differences is you have to pass a request object into the functions there, whereas Flask has this thread-local ambient variation of this thing. So you'll get like a 500 error if you try to, you know, just run this directly without adding that request. So it's easy to overlook, but other
Starting point is 00:09:45 than that, other than the fact that there's a request parameter to the views, it's basically the same thing. Okay. So yeah, that's pretty interesting. Now, there's a bunch of compromises that are made here, because MicroPython doesn't support Jinja, it doesn't support Flask, all of these different things. So there is a template language, but it's not Jinja. So there's a bit of a migration if you were going to take this on. So you can run it under CPython, but you can also run it under MicroPython. There's the HTTP methods. I think you see the old style. No, no, there's an app.get.
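A minimal hello-world sketch of what's being described, assuming the Microdot API shown on screen; the route is illustrative, and note the explicit request parameter:

```python
# Flask-like Microdot app; the difference called out above is the explicit
# request parameter that every view receives.
from microdot import Microdot

app = Microdot()

@app.route('/')
def index(request):  # Microdot passes the request in; no thread-local magic
    return 'Hello from Microdot!'

app.run()  # the built-in development web server
```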
Starting point is 00:10:24 You can do the old style where you pass the method names like get and post, or you can just do an app.get. But yeah, again, if you're familiar with Flask, the way you do routes, the way you pass data into the functions, all those things are absolutely the same, which is pretty cool. One thing you can do is you can return JSON responses, and you can even just return a dictionary, which I don't think you can do in Flask.
Starting point is 00:10:47 Maybe you can, but I think you have to JSONify it. I think you've got to say flask.jsonify. So this is kind of an upgrade, I would say. So if you have a little tiny thing, like let's get it, a little tiny thing like this, Brian. Okay. Right here.
Starting point is 00:11:01 However big is that? You know, not about the size of my hand, like halfway across the palm of my hand. Got one of these little tiny MicroPython, CircuitPython things. You can now put APIs on here, and even really interesting things, like it has support for concurrency.
Starting point is 00:11:17 So Flask doesn't support directly having async and await, I don't believe. Not fully anyway. You got to switch over to Cort to do that. I think they partially support it, but not full async and await, I don't believe. Not fully anyway. You got to switch over to Quart to do that. I think they partially support it, but not full async and await. But you can use the MicroPython async IO extension here and get APIs running with full async and await
Starting point is 00:11:35 concurrency support, doing JSON or other things, maybe with Pydantic? Not sure. That's pretty cool, because, I mean, that's just an easy way of throwing a REST API or some sort of API on something to throw back JSON. That would be really cool. And you don't need, I mean, for applications like that, you don't need a lot of templating anyway.
Starting point is 00:11:56 So yeah. And so let's see, go to the core extensions here. You can see there's actually a bunch of cool core extensions. So it's got the async and await support. So all you've got to do is, where am I going to scroll, just write async def endpoint, right? Boom, off it goes. Yeah, because you really want to async that hello world. Yeah, for hello world, not so much. But if you're talking to files or a database or something.
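A hedged sketch of such an async endpoint; the /status route and its payload are invented for illustration:

```python
# Microdot accepts plain async def views, as mentioned above. The /status
# route and payload are invented; await real I/O where the comment sits.
from microdot import Microdot

app = Microdot()

@app.get('/status')
async def status(request):
    # await a file read, a database query, a sensor, ... here
    return {'ok': True}  # returning a dict produces a JSON response

app.run()
```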
Starting point is 00:12:19 Yeah, that's true. You can use, what is that, micro-template? It says uTemplate, but I bet it's pronounced micro-template. Yeah, uTemplate. It's a lightweight, memory-efficient templating engine for Python,
Starting point is 00:12:35 which looks very, very Jinja-like as well. So that's a pretty straightforward thing, but it's not identical, right? What else we got? There's Jinja in it. So, oh, can you, oh, but no, hold on, that's CPython only. Oh, okay. Okay, right, got it. Because it's not supported in MicroPython. Got it. But if you're doing it on CPython, you can add those templates. Yes. This is actually why it's pretty
Starting point is 00:13:02 interesting to me. So you can do TLS, HTTPS support, right? Which is pretty cool. You have WebSockets, you have asynchronous WebSockets, you have CORS, cross-origin resource sharing settings, and you can even deploy it. So it comes with its own little web server, which you can run on your MicroPython device, right? But if you were going to put this into a big data center
Starting point is 00:13:26 on a huge rack of servers, that might not be the best choice. So you can run it on uWSGI, which is awesome. You can run it on Gunicorn. You can even run it on Gunicorn with Uvicorn workers to get the awesome libuv high-performance async support, right? So you can deploy it onto kind of the top-tier Python stuff, put, you know, Nginx in front of it and all that. It's pretty cool, right? Yeah. Now, one of the ways you run web apps on servers, especially in Python,
Starting point is 00:13:59 because the threading support is not as friction-free, I guess, you know, like the GIL and all that, is you'll farm it out into multiple workers, right? So I can't remember what we have for Python Bytes, it's not too many, like two or four different worker processes. And each of the worker processes kind of gets round-robin brought in to handle requests. So like if one of them is busy,
Starting point is 00:14:22 it'll automatically send the request over to another one, right? Usually there's maybe a couple of limits, but one of the limits you really want to consider is, I don't want to run the machine out of memory. So I think the ones we use are like 150 megs per worker process. So at some point, they'll only create so many, and then the server's like, okay, I've had it. But with this little thing that runs on MicroPython, you could scale the heck out of it. You could have a ton of worker processes under uWSGI, oh yeah, or under Uvicorn, right? And I actually did a little super simple test. I wrote the Flask equivalent of hello world, literally exactly the same code with the changes I talked about, and then the MicroPython version, sorry, the Microdot version. And with both running on CPython,
Starting point is 00:15:10 nine megs for this framework, 25 megs for the Flask framework. So maybe you can have twice as much processing horizontal scale with this thing, but deployed to real servers. So there might actually be some interesting advantages to having this really tight framework. I want to run it in a bunch of Docker containers
Starting point is 00:15:30 that are using the smallest amount of memory. I want to farm it out and say, no, I'll have 30 worker processes on this server, not five. I don't know. We'll see. Yeah. So Pamful is pretty psyched about the memory saving out there as well. So I think it's good.
Starting point is 00:15:44 Yeah. And I mean, for lots of jobs, if you don't need the power, don't use the power. So yeah. Yeah. Pretty neat. So well done, Miguel.
Starting point is 00:15:55 Now, Brian, let me take just a moment and tell everyone about our sponsor. This episode of Python Bytes is brought to you by Influx Data, the makers of InfluxDB. InfluxDB is a database purpose built for handling time series data at a massive scale for real-time analytics. Developers can ingest, store, and analyze all types of time series data, metrics, events, traces, in a single platform. So dear listener, let me ask you a question. How would boundless cardinality and lightning-fast SQL queries impact the way you develop real-time applications?
Starting point is 00:16:28 InfluxDB processes large time series datasets and provides low-latency SQL queries, making it a go-to choice for developers building real-time applications and seeking crucial insights. Optimized for developer efficiency, InfluxDB helps you create IoT, analytics, and cloud applications using timestamped data rapidly and at scale. It's designed to ingest billions of data points
Starting point is 00:16:52 in real time with boundless cardinality. InfluxDB streamlines building once and deploying across various products and environments from the edge, on-premise, and to the cloud. Try it for free at pythonbytes.fm slash influxdb. The link's in your podcast show notes. Thanks to Influx Data for supporting the show. Over to you.
Starting point is 00:17:14 What's your next one? I want to talk about GitHub Actions a bit. So a lot of my workflows have moved over to GitHub Actions. And so there's actually three projects that I wanted to talk about that I thought were neat and worthwhile, and they're all kind of in the GitHub Actions genre.
Starting point is 00:17:31 What you got first? watchgha. It comes from Ned Batchelder. This is just a simple tool
Starting point is 00:17:43 to watch your GitHub Action progress from the command line. So I think it's a command line thing. It looks like command line. And so it has a little progress bar thing, progress dots: they start out gray, and then they go white and green and stuff, to see the different things. You've got like, we were running 3.7 on Ubuntu, yeah. So if you've got a big matrix that's doing a lot of stuff,
Starting point is 00:18:11 it's kind of hard to keep up with what all's going on. So yeah, this is kind of neat to watch. Just a little tool from Ned. Thanks, Ned. One of the other things I was thinking about, so my talk at PyCascades was about how you can share packages without actually ever building them, because pip install will build your wheel for you if it's not built already. But you probably should test that. And one of the ways you can test some of that building is with Hynek's build-and-inspect-python-package. So this is a GitHub Action that does a lot of stuff: it does a build to make sure the build works, it also has a linter to lint the wheel contents, and it also uploads the wheel and the source distribution
Starting point is 00:19:05 as GitHub Action artifacts. So it actually does generate the wheel for you as an artifact, which is kind of neat. Also, one of the things that's always a mystery to me is making sure
Starting point is 00:19:20 I have everything that I want in the sdist, the source distribution. And this will lint that. Well, I guess it doesn't lint the contents of the sdist, but it does print them out. So it prints a tree of the sdist and the wheel in the output, so that you don't have to download it to check it. You can just look at it in your GitHub output.
Starting point is 00:19:41 Make sure all the files and resources you might need to send out come along. Yeah. And I had recently made a change to a package and it took out the tests, and I had somebody say, oh look, we want the tests back in. So it's kind of nice. So I guess that's it with that. It's kind of a neat GitHub Action thing. You just put it in, it's one of those actions, so you just specify it and it just works. It's nice. The third thing I wanted to bring up was pytest-github-actions-annotate-failures. So this is just a nice extra thing that I hadn't heard about before. It's under the pytest-dev umbrella, but it is a pip install sort of thing. And what it does is it makes sure that all the proper stuff gets output, so that you
Starting point is 00:20:40 can have nicely annotated failures. Your asserts, if there's failures, it's annotated nicely in GitHub Actions. That's it. Just some fun GitHub Action stuff. Yeah. Once you really start getting into CI/CD, it's fun. You're just like, oh, now that it's automated, we could do this, we could do that.
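A hedged sketch of what that looks like in practice; the failing test is invented, and the plugin needs no code changes once installed:

```python
# After "pip install pytest-github-actions-annotate-failures", a plain
# failing test like this invented one is annotated with its file and line
# directly on the pull request in GitHub Actions.
def test_total():
    total = sum([1, 2, 3])
    assert total == 7  # fails as usual locally; CI adds the annotation
```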
Starting point is 00:21:00 Yeah, but when you automate all the things, and then when things go wrong, you're like, oh, God, then we have to pull it down and check it again. So having some of this debug stuff and things up in the cloud, it's good. Yeah, very handy. Excellent. All right, well, I have a PEP for us to discuss. Okay. PEP 709, inlined comprehensions. Now, this is a debate that I seem to have only on YouTube. I've done some videos about list comprehensions or other sorts of design patterns that might involve comprehensions. And people are like, oh, Michael, you said that a for loop is different than a list comprehension. But look, it says for thing in collection. And so they're the same.
Starting point is 00:21:42 And so, you just don't know what you're talking about. You know what, let's disassemble it. Let's see what it does. Is it the same disassembly? No, it's completely different disassembly. That means the implementations of those comprehensions are different. I don't care if the word for appears in both of them, they're not the same thing. This PEP tries to take the best of both worlds, though. It says there are some things we do to make comprehensions work but look like they're just right there in the same function, or inline even if you don't have a function. But in fact, there's kind of this thing behind the scenes that's happening, where a nested function that you never see gets created by the interpreter and then called. That's the comprehension, okay?
Starting point is 00:22:26 So this PEP by Carl Meyer is basically saying we could get really good performance increases if we just change that implementation a little. And the reason it's created as a nested function, and not just some inline code, is: what if you have a variable x in your regular function, and then you have x as a loop variable, or as the item variable in your comprehension, or things like that, right? You want them to still be isolated. So that's
Starting point is 00:22:58 basically the idea here. It says comprehensions are currently compiled as nested functions, which provides isolation of the comprehension's iteration variable, but is inefficient at runtime. So PEP 709 proposes to inline list, dictionary, and set comprehensions into the code where they are defined, and provide the expected isolation by looking at all the variables, creating a copy of them, running this in place, and then, if there was a variable for that loop variable, just putting the old value back, right? Kind of push and pop them there. And the benefits here are up to two times as fast
Starting point is 00:23:34 as comprehensions are today. So then they said, this is translating to an 11% speedup in one sample benchmark derived from real-world code that makes heavy use of comprehensions in the context of doing actual work. That's pretty cool, right? Yeah. I believe comprehensions were in general slightly faster than for loops that would just do something
Starting point is 00:23:55 and put it in a list. So making it two times faster still is even better. So if this gets adopted, it's in draft form right now, I can go back to my YouTube comments and have even further nuanced discussions about, like, here's yet again how they are not the same thing, but they look similar. So yeah. I never would have thought that I should reuse a variable in a comprehension, though. I don't do that. But I guess, no, I think it's like, let's say you've got two list comprehensions, you know, x squared for x in first set, then two x plus one for x in other set.
Starting point is 00:24:36 And those are two separate list comprehensions. You don't want one of them leaking into the other. You want them to be like, okay, this x is only for this comprehension. That's what it's like. So if you have an embedded comprehension, you might use x in both places, right? Or if you have an x, x and y equals something, and then you generate a comprehension and say x in there. There's a couple of those. Yeah, yeah. I mean, I guess I was just thinking of my own style. The second one I would never do. If I was already using x, I probably wouldn't use x in the comprehension.
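Here's a small sketch of the isolation being discussed; the demo function and values are invented:

```python
# The isolation the PEP has to preserve: the comprehension's x must not
# clobber the enclosing function's x. Names and values invented.
def demo():
    x = "outer"
    doubled = [x * 2 for x in (1, 2, 3)]  # this x is private to the comprehension
    print(doubled)  # [2, 4, 6]
    print(x)        # still "outer": the loop variable never leaked

demo()
```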
Starting point is 00:25:05 But I'll often use i or x in a comprehension, in embedded ones, and don't even think about it. So yeah, interesting. Cool. Yeah, David Poole says, I'm sure there's good reasons for it, but I wonder why comprehensions don't use name-mangling strategies for their loop variable names, everyone's got to be named underscore underscore x. That is a good question. That reminds me of a joke. What it's doing now is it basically says we're going to
Starting point is 00:25:34 create a function, and so that variable is basically a local variable of that function, which has no influence outside it. Were you going to actually tell us the joke? No, it's going to wait. Okay, all right. Oh, or should we do it now? I mean, oh, just, I think it was Ned Batchelder, actually, that mentioned that we often talk about dunder init instead of double underscore init, but it's really underscore underscore init underscore underscore,
Starting point is 00:26:05 so it's really quander. And so I responded to him and said, I don't think so. I think it's dunder init dunder is what it should be. But that would be redundant. Oh, that's pretty bad. That's pretty much on par with the joke we've got at the end. All right. So basically, the way to understand this: you can't look at the code and tell, right? Which is why people incorrectly try to correct me on YouTube. You look at the code and it looks like, oh, it's just a for loop and we took out the line breaks and put brackets, so it's the same thing. But if you look at it, now, you can see, if you create a function that creates a list comprehension, you'll see it creates what's called a code object of type list
Starting point is 00:26:51 comprehension, then it calls MAKE_FUNCTION, then it loads the list, and then it does a bunch of stuff on it. And then you can see there's the sub-function that gets disassembled, and it says, we're going to build a list, load fast, iterate it, list append. And what's really interesting, this is the part that differs from for loops: there's a bytecode called LIST_APPEND. If you do this with a for loop where you have a list and you call append,
Starting point is 00:27:16 it loads the function append and then calls append on the operand. But that bytecode isn't there at runtime in a for loop. In a comprehension, there's a special bytecode that runs, and that's like the primary difference, okay? So the benefit is LIST_APPEND is a bytecode operation, not a function call.
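If you want to see this yourself, here's a hedged sketch for CPython 3.11 or earlier, before PEP 709 lands; the function names are invented:

```python
# On CPython 3.11 and earlier, the comprehension compiles to a separate
# <listcomp> code object that uses the LIST_APPEND bytecode, while the
# loop version loads and calls the .append method. Names invented.
import dis

def with_comprehension(items):
    return [x * x for x in items]

def with_for_loop(items):
    out = []
    for x in items:
        out.append(x * x)
    return out

dis.dis(with_comprehension)  # MAKE_FUNCTION plus a nested <listcomp> object
dis.dis(with_for_loop)       # the append method gets loaded and called
```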
Starting point is 00:27:34 But the drawback is there's this object created, there's a stack frame created, there's a function call over to this comprehension call. There's an issue with all that stuff, right? So the new one just says, what we're going to do is create a new opcode called LOAD_FAST_AND_CLEAR, which is like, I'm going to load the variable x, and if there was one of those before, we're going to hang on to it just in case, you know, so we can put it back. And then it calls BUILD_LIST, and
Starting point is 00:28:01 you can notice there's no function call. So no stack frame, no extra function call, no list comprehension code object, all those things. And so this is the new bytecode operation that manages that variable isolation, and then you just do it directly, which saves you a bunch. We talked about the 2x speed there.
Starting point is 00:28:21 So that's the PEP. People, check it out, see what you think. That's really interesting, Michael, but you had me at it's faster. I know, exactly. I just want people to kind of know what's happening and why it might be faster, and so on. So pretty neat. You can see it does have, the only possible concern, I guess, or the reason they say why is this even a PEP, why is this not just, hey, I made it faster, why do we need to discuss this? Yeah. Just like you said, it's faster. We're done. Let's go. It's that there are user-observable changes, if the user doesn't like
Starting point is 00:28:56 themselves, basically? For example, why would a user return locals() as the item you want put into a list during a list comprehension? Well, if you did do that, you would see that it's not the same as before. I have no idea why you would ever try to do that, but that technically would be a breaking change. The other one, slightly more valid perhaps, is that if there's an exception inside the list comprehension, because it used to be in a separate function call, it would show up in the actual traceback call stack, right? But now you're not in another function, it's just on a line in the original function, so you don't have that. Basically, it's missing from there. So you would have a slightly different traceback.
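A hedged sketch of that traceback difference; the function is invented for illustration:

```python
# Before PEP 709, a failure inside a comprehension adds an extra
# <listcomp> frame to the traceback. The function is invented here.
def reciprocals(values):
    return [1 / v for v in values]

reciprocals([1, 0])
# Traceback on CPython 3.11 and earlier (abbreviated):
#   ... in reciprocals
#   ... in <listcomp>   <- this extra frame disappears once inlined
# ZeroDivisionError: division by zero
```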
Starting point is 00:29:48 Well, the exception would be the same, but the traceback call stack listing would be different. That potentially affects somebody, but not a lot. I don't know, for a 2x speedup, it's a trade-off I would totally think is worth taking. Yeah, but I see why it's an observable behavior change. Yeah. Although I learned from Brett Cannon just a couple weeks ago that locals() often has weird stuff in it.
Starting point is 00:30:14 If you look at locals() a lot, sometimes there's stuff in there that you don't recognize. Interesting. Yeah, the locals one seems like, you know what, don't do that. But the traceback one, I can see: okay, we're always looking at this. And like, if I get an error, I try to look at the traceback to figure out what to tell people. Or, I don't know, it theoretically could, but it still seems unlikely. I feel like you shouldn't depend upon what's listed there, but I'm sure somebody does somewhere.
Starting point is 00:30:41 Well, like IDE makers and things like that. Yeah. Yeah. Cool. All right, well, that's it for all of our items. You got any extras? I thought I didn't have any extras, but I'm going to try to predict the future a little bit, because I have some control over it. So I do have an extra. I'm going to be releasing something either today or tomorrow. By the time this podcast comes out, this is going to be released.
Starting point is 00:31:06 But if you're watching it live, it's not yet released. So I'm going to be releasing a new course, Python Web Apps That Fly With CDNs. It's just over a three-hour course that's all about taking CDNs and applying them to, like, Flask web apps, and also hosting video content and large files, and how do you geo-replicate that. We use a lot of these techniques on Python Bytes to make the website faster, as well as to deliver, you know, terabytes of MP4s and MP3s to people. So check that out. I will put a link in the show notes. Again, if you're listening live, this is not out yet, but it will be out by the time the MP3 hits your podcast player. So if you have the audio-only version, go check it out, links in the show
Starting point is 00:31:48 notes. Nice. Yeah, I think that's a really, really cool course. I think there's so much people can get out of it, in terms of, like, it's really easy, you know, 30 minutes and you're like, oh, our app is so much faster, and we can use smaller servers. That's really great. Well, three hours plus 30 minutes. Yes. Well, once you know the thing, it's probably 30 minutes to apply it to your app, is what I mean.
Starting point is 00:32:10 Yeah. David Poole says the traceback one could be worked around if the debug compile used the old function-style method, like aggressive optimizations in GCC with inline functions. Okay. Possibly. Interesting.
Starting point is 00:32:24 You would have to have people buy into that, but right. I mean, I'm sure, Brian, you're very well aware of the debug versus release builds and optimization levels and all that stuff in C, right? No, I mean, yes, but I don't use them. No, you don't need that. Well, I personally don't like to test in something
Starting point is 00:32:44 my user isn't going to see, so I always test in optimized, released builds. Got it. Yeah. But it can make a big difference. But in the Python world, we don't really discuss that so much, right? Yeah. Except for one thing to be aware of, while we're on it, that we probably haven't mentioned lately:
Starting point is 00:33:12 asserts are awesome in your test code, but they're not that great in your, actually, they're pretty great in your function code also, but just don't depend on them, because assert lines can be completely removed if you have the optimization on. Absolutely.
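A hedged sketch of that gotcha; the function is invented for illustration:

```python
# Run with "python -O" and the assert statement is stripped out entirely,
# so this guard never fires in optimized mode. Function invented here.
def apply_discount(price, discount):
    assert 0 <= discount <= 1, "discount must be a fraction"  # removed by -O
    return price * (1 - discount)

print(apply_discount(100, 5))  # plain python: AssertionError; python -O: -400
```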
Starting point is 00:33:24 All right, you ready for a joke? Yes. You a fan of movies? Like watching movies and stuff? Yeah, I just went to a great movie. Yeah, so. Nice. Well, as a software person, especially if you do a lot with Linux or macOS, you might not be able to watch the movies too much. This one says, I can't watch movies on my computer. All it does is bash scripts.
Starting point is 00:33:39 Bash on the scripts of the movie. Oh, okay. Or run shell scripts, I'm not sure which. Okay, that's funny. I think. Sort of. Yeah, somewhat. You know, that's what I have for you. I have an incredible one that I want to put up, but it's, like, video only, it has music and no spoken word, so I don't think it fits for this format. But you know what, I'll throw it in. I'll throw it in so people can also check out the movie. It's about releasing stuff to production.
Starting point is 00:34:07 So it's pretty epic. I'll put it in the list, but the one we won't play is something that's like 30 seconds long that has nothing but music. Cool. Nice.
Starting point is 00:34:17 All right. Um, is that all we got? That's all we got. It's a wrap. It's a wrap. Yeah. Thanks as always.
Starting point is 00:34:23 Thank you. Later. Bye.
