Python Bytes - #436 Slow tests go last

Episode Date: June 16, 2025

Topics covered in this episode:
* Free-threaded Python no longer “experimental” as of Python 3.14
* typed-ffmpeg
* pyleak
* Optimizing Test Execution: Running live_server Tests Last with pytest
* Extras
* Joke

Watch on YouTube

About the show

Sponsored by PropelAuth: pythonbytes.fm/propelauth66

Connect with the hosts
* Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
* Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
* Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.

Brian #1: Free-threaded Python no longer “experimental” as of Python 3.14
* “PEP 779 ("Criteria for supported status for free-threaded Python") has been accepted, which means free-threaded Python is now a supported build!” - Hugo van Kemenade
* PEP 779 – Criteria for supported status for free-threaded Python
* As noted in the discussion of PEP 779, “The Steering Council (SC) approves PEP 779, with the effect of removing the “experimental” tag from the free-threaded build of Python 3.14.”
* We are in Phase II then.
* “We are confident that the project is on the right path, and we appreciate the continued dedication from everyone working to make free-threading ready for broader adoption across the Python community.”
* “Keep in mind that any decision to transition to Phase III, with free-threading as the default or sole build of Python, is still undecided, and dependent on many factors both within CPython itself and the community. We leave that decision for the future.”
* How long will all this take? According to Thomas Wouters, a few years, at least: “In other words: it'll be a few years at least. It can't happen before 3.16 (because we won't have Stable ABI support until 3.15) and may well take longer.”

Michael #2: typed-ffmpeg
* typed-ffmpeg offers a modern, Pythonic interface to FFmpeg, providing extensive support for complex filters with detailed typing and documentation. Inspired by ffmpeg-python, this package enhances functionality by addressing common limitations, such as lack of IDE integration and comprehensive typing, while also introducing new features like JSON serialization of filter graphs and automatic FFmpeg validation.
* Features:
  * Zero Dependencies: Built purely with the Python standard library, ensuring maximum compatibility and security.
  * User-Friendly: Simplifies the construction of filter graphs with an intuitive Pythonic interface.
  * Comprehensive FFmpeg Filter Support: Out-of-the-box support for most FFmpeg filters, with IDE auto-completion.
  * Integrated Documentation: In-line docstrings provide immediate reference for filter usage, reducing the need to consult external documentation.
  * Robust Typing: Offers static and dynamic type checking, enhancing code reliability and development experience.
  * Filter Graph Serialization: Enables saving and reloading of filter graphs in JSON format for ease of use and repeatability.
  * Graph Visualization: Leverages graphviz for visual representation, aiding in understanding and debugging.
  * Validation and Auto-correction: Assists in identifying and fixing errors within filter graphs.
  * Input and Output Options Support: Provides a more comprehensive interface for input and output options, including support for additional codecs and formats.
  * Partial Evaluation: Enhances the flexibility of filter graphs by enabling partial evaluation, allowing for modular construction and reuse.
  * Media File Analysis: Built-in support for analyzing media files using FFmpeg's ffprobe utility, providing detailed metadata extraction with both dictionary and dataclass interfaces.
Michael #3: pyleak
* Detect leaked asyncio tasks, threads, and event loop blocking with stack traces in Python.
* Inspired by goleak.
* Use as context managers or function decorators.
* When using no_task_leaks, you get detailed stack trace information showing exactly where leaked tasks are executing and where they were created.
* Even has great examples and a pytest plugin.

Brian #4: Optimizing Test Execution: Running live_server Tests Last with pytest
* Tim Kamanin
* “When working with Django applications, it's common to have a mix of fast unit tests and slower end-to-end (E2E) tests that use pytest's live_server fixture and browser automation tools like Playwright or Selenium.”
* Tim is running E2E tests last for:
  * Faster feedback from quick tests
  * Not tying up resources early in the test suite
* He did this with:
  * A custom “e2e” marker
  * A pytest_collection_modifyitems hook function that looks for tests using the live_server fixture and, for those tests:
    * automatically adds the e2e marker
    * moves those tests to the end
* The reason for the marker is to be able to:
  * Just run e2e tests with -m e2e
  * Avoid running them sometimes with -m "not e2e"
* Cool small writeup. The technique works for any system that has some tests that are slower or resource bound based on a particular fixture or set of fixtures.

Extras

Brian:
* Is Free-Threading Our Only Option? - Interesting discussion started by Eric Snow and recommended by John Hagen
* Free-threaded Python on GitHub Actions - How to add FT tests to your projects, by Hugo van Kemenade

Michael:
* New course! LLM Building Blocks in Python
* Talk Python Deep Dives Complete: 600K Words of Talk Python Insights
* .folders on Linux - Write-up on XDG for Python devs.
* They keep pulling me back - ChatGPT Pro with o3-pro
* Python Bytes is the #1 Python news podcast and #17 of all tech news podcasts.
* Python 3.13.4, 3.12.11, 3.11.13, 3.10.18 and 3.9.23 are now available
* Python 3.13.5 is now available!

Joke: Naming is hard

Transcript
Starting point is 00:00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 436, recorded June 16th, 2025. I'm Michael Kennedy. And I'm Brian Okken. And this episode is brought to you by Propel Auth. We wanna say thank you, thank you to Propel Auth
Starting point is 00:00:21 for sponsoring the show. We're gonna tell you more about them later, but TLDR, if you have to do authentication in your app, that can be a huge hassle, it's not the core business of your app. Give them a look, they'll solve that problem for you and let you go back to building or whatever you're supposed to be building.
Starting point is 00:00:36 Speaking of supposed to, you're supposed to be following us on some form of social, I would say, don't you think, Brian? So we got our social media links out on the top of the show notes so check that out there we do our live streaming if you want to be part of the show while we record it we flip the switch and make it go live around 10 a.m. on Mondays Pacific time and all the older versions are there as well and finally if you want a really nice
Starting point is 00:01:02 detailed email summary with extra information and background details on what we talk about, become a friend of the show, sign up for a mailing list. We're not here to spam you or to resell you for sure, just to send you notes about things like what we covered in the show, maybe a very rare announcement of like a new course or some event or something like that. But we'd appreciate if you signed up there as well. And Brian, I've always, always been impressed with the ability to do multi-threaded programming. I've enjoyed it on other languages and platforms where it was sort of full featured.
Starting point is 00:01:40 So that's why I was so excited when free-threaded Python came out. But it only partly came out, but it only partly came out, didn't it? In 313, like partially or with a caveat. Yeah. So, um, let's see. I just, here we go. So, um, and what was that? Pep 70 now 703. So anyway, um, there was, so I can't remember the pep for when they, we had, um, free threaded as an option that you could turn on, but then now there was an announcement, exciting news, I saw this on the socials, from Hugo Von Karmanad. Exciting news, pep779, which was the criteria for supported status for
Starting point is 00:02:22 free threaded Python has been accepted, which means free threaded Python is now a supported build. What that means is they will drop the experimental label for, and so for 3.14 beta three due on Tuesday, it will no longer be experimental. So that'll be really what that means is we're ready to kind of support it. Actually, I wasn't sure really exactly what this meant. So hop over to the
Starting point is 00:02:50 Hugo linked to a discussion and I may have the wrong link there. So this discussion area was talking about the steering council approving 779, which was the criteria, with the effect of removing experimental. And then there's a lot of details of what all this means for phase two. There's a lot of stuff that has to happen in phase two, which is like making sure that the API, ABI compatibility requirements for experimental projects, some performance and memory guard rails, we've talked about those before. I do like this, there's
Starting point is 00:03:27 a section on what documentation needs to be there before we completely like jump in with both feet, we need to make sure documentation is clearly written and maintained. There's some high level concurrency primitives and some benchmark requirements that are needed. If you pop down to the end, it says, we're confident that the project is on the right path and we appreciate the continued dedication from everyone working to make free threading
Starting point is 00:03:53 ready for a broader adoption across the Python community. So there's a lot of work to do. And I wasn't quite sure exactly how much work there is left to do, so I asked some people. So we got a response of like, hey, does this mean that we're gonna be the default? And I knew the answer that it's not gonna be for a while, but I wanted to have somebody more core than me answer that.
Starting point is 00:04:16 And Thomas Wooters says, basically it's gonna be a few years before, it's really the default and really we have, how do we say default it's it's gonna be at least it can't happen before 3.16 because of the stable ABI support requirement and it may take longer so and really default this is squishy concept so good answer from Thomas thanks but this is encouraging I'm excited to move forward with the free threading path. I'm as well.
Starting point is 00:04:48 You gotta go slow with this kind of thing because it's such a change. Especially at the lower level C API and integration like you're talking about. Yeah. Yeah. But I'm very excited for it. I think it opens up a lot of very interesting opportunities.
Starting point is 00:05:03 Like right now, if I write Python code and I don't go do something explicit like multiprocessing, I can get 10% of my computing resources, which is pretty darn low. So the ability to just say run this in parallel and actually get it to run normal without the constraints of serializing over to multiple processes is really cool. And that's kind of where some of the documentation needs are there. And maybe those are already there, but I just don't know where they are.
Starting point is 00:05:29 But the thoughts of, OK, I'd like to have my project. Let's say I want my project to be supported on support free threading. What does that mean? What do I need to look out for? I mean, I obviously need to check all of my dependencies to make sure that they're tested on that, but what do I need to test? And things like that. Those are good things to document. Yeah. Yeah. I suspect if you're doing pure Python, it's pretty straightforward. There's always the concern that whatever you're doing needs to be thread safe, right? And I think
Starting point is 00:06:01 people put Python aside in programming in general. They don't think enough about the thread safety aspects or even error handling sort of consistency type of stuff. I took three steps, there was an error, but the fourth step was required to put it back into a consistent state. I caught the error, it's fine. No, it's not fine, it's really broken. So there's a lot of situations like that I think you might need to consider you know if you're doing multi-line Python things you might need a lock statement. We'll see we'll see what shakes out. We also really like Python to be super easy for people to come on board. I mean people are building you know web scrapers in ten lines of code or something. Yeah. And we don't want to get rid of that easiness, so yeah.
Starting point is 00:06:45 Yeah, I totally 100% agree with that. I do think that the complexity lies down in the libraries less than in your application, right? Because in your application you can decide, well I'm not doing threading, so problem solved. But as a library developer, you can't necessarily decide, not without just saying it doesn't work, you can't decide whether your library is being used in a multi-threaded situation.
Starting point is 00:07:09 So I think the simple use case of throwing together a script there. So Kishan out there says, do we need to use locks in Python now, the free-threaded version? I think yes, maybe, but only if you're doing a multi-threaded code. But you might need that anyway before because of the multi-line consistencies, right? Even though every line might run on its own, like this block of five, you're not guaranteed that it's going to run as a block. Anyway, way more details than we necessarily need to go into.
Starting point is 00:07:39 But it's going to be interesting. And I imagine these conversations are coming back with this being the default thing. And if you want to try it out, UV makes it super simple. UV, create a virtual environment with Python. Just say, VV, I think probably 3.13.5T would do it, although I haven't tried. I have not tried.
Starting point is 00:07:56 Well, yeah, but basically, definitely 3.14T, hopefully. Yeah, clearly we should have tried this before. So we'll get back to you on that one. Yeah exactly. Well, we'll figure it out We'll figure it out speaking to figure out what am I gonna figure here? So let's talk about Actually, I have one other sort of async and threaded thing to talk about later. I mean, let's talk about that first I'm gonna let's let's change the order here. Look at that. Boom. We can do it live So I want to talk about Py leak P. P-Y-L-E-A-K. For like a memory leak, but instead of checking
Starting point is 00:08:28 for memory leaks, what it looks for is async IOTasks, threads, and event loop leaks. So if I run, if I call a function that's asynchronous, and I think it is synchronous, but I call it without assigning it to a variable or a waiting or anything like that, that just creates the coroutine and the un-executed coroutine just chills there until it gets cleaned up and it doesn't actually run that operation. Not good.
Starting point is 00:08:55 So that's what this library is about. It's about detecting those types of things. So let's go look at some examples. So you can do context managers. You can say async with no task leaks. And then if you somewhere within the execution of that context block, if you do that and you somehow call an async function
Starting point is 00:09:16 but you don't await it, like for example here, they say async io.createTask given a sleep. That will come up as an error. It can either be a warning or an exception if you want it to break the the build and a test you can say treat as errors and it'll say look you called this function and you didn't wait for it and tying this back to what you were just talking about Brian you can say with no thread leaks and if you create a thread and start it but you
Starting point is 00:09:40 don't hang on to it as a variable, then it'll tell you like, hey, you're no longer in control of this thread, this could be a problem, right? So basically, I imagine it's probably the thread kept running on the other side of that context manager, I'm not entirely sure. You can also do that for event loop blocking, right? If you, and the event loop is the async IO event loop, right?
Starting point is 00:10:02 So this one's actually really interesting. It's if you're doing blocking work in an async IO event loop, right? So this one's actually really interesting. It's if you're doing blocking work in an async IO event loop, that itself should be async IO aware, right? If I'm calling HTTP endpoint for some kind of external service, I should probably be using HTTPX's async client,
Starting point is 00:10:22 not request, get. Which means basically it says like, I'm doing an IO thing, but it's blocking because I'm not using the AIO native version of it, right? And so the example here is like, if you call time.sleep when you're checking for no blocking, that's an error because you should be using AIO.sleep, which you await and allows other work
Starting point is 00:10:44 to happen at the same time. Right? So basically it clogs up the entire AIO processing across all of the multi- concurrent contexts. They're not threads, but like, right, the stuff that can run side by side, it blocks it. So this will detect that. That's really cool. Yeah, this is a really neat library. There's a bunch more stuff. You can do this as decorators. You can get detailed stack traces. It'll show you details about what has happened here.
Starting point is 00:11:11 There's a leak task called task two on line nine of this example. And it shows you the code of what happened and so on. Yeah, lots of different things going on here. I don't want to go into too much detail. I've kind of gone on and on about it. But it's not just a single thing. This is a pretty comprehensive library and I like it.
Starting point is 00:11:30 I like it a lot. I reached out to the rough folks and said, it would be really great if you could detect when there's an async function that's called that wasn't awaited. And they said, that really sounds great. We have or are considering it, but it's really hard to do. And so, you know, maybe you could just throw this in here as like one more thing and set the error, error version, and then run PyTest and see what happens, you know?
Starting point is 00:11:54 Yeah. So seeing, seeing, I'm just curious how you would use this. I would expect, especially as you're building up an application, especially if it's maybe, maybe all the time, but maybe your first time doing an async application, just to make sure that you're doing things right, putting some of these around, decorators around some of your methods within your code. Once you have things production ready, would you take this stuff out or would you leave it in place just to...
Starting point is 00:12:21 I think I might put it in a unit test, but take it out of production. Yeah, okay. Probably what I would do. You know, the area where this really helps is, it's helpful when you're creating a new project, but when you're creating a new project, you're in the mindset of, I'm gonna call a function,
Starting point is 00:12:35 oh, it's async and you're actively working on that code, unless you're vibe coding, then all bets are off. But if you're legitimately working on it, then you're like in the flow and your editor ideally gives you a little warning about those kinds of things. However, what this really helps is if you're converting from a synchronous situation to an async one. Like for example, when I converted the TalkPython code from synchronous pyramid to async court,
Starting point is 00:13:01 which is basically async flask, there was a few places I messed up because there's so much code and there's all these functions are called and you look at the code and it looks fine but if you convert that function from sync to async but you forget to find every place you're using it and add in a weight then that's that's when this happens. So does it what what happens in like the, so does it work anyway? It's just slower or what happens? So for the thread one, it may work anyway, but like the async one, the async task one, maybe, maybe not, because if you don't await it, it doesn't actually execute.
Starting point is 00:13:39 Yeah. Right. You create the, the co-routine, but then it's the awaiting of it that actually makes it go. And so if you said like, log this message, async-routine but then it's the awaiting of it that actually makes it go. And so if you said like log this message asynchronously, but you don't await it, it's never going to log it. Yeah. Right. Maybe if you call create task, it might start.
Starting point is 00:13:54 I know there's like some different ways in which it could be done, but. But this is, this is great to have some, I feel like this is a, like what scaffolding or training wheels or something to put on stuff just to make sure that things are running right. This is good. Yeah, it's definitely, definitely good. Cool. What else is good? Authentication that you don't have to deal with is good.
Starting point is 00:14:12 Yes, it is. So on that note, we want to thank PropelAuth. So thanks this episode is sponsored by PropelAuth. PropelAuth is the easiest way to turn authentication into your advantage. For B2B SaaS companies, great authentication is non-negotiable, but it can often be a hassle. With PropelAuth, it's more than just functional, it's powerful. PropelAuth comes with tools like managed UIs, enterprise SSO, robust user management features,
Starting point is 00:14:40 and actionable insights. As your product grows, PropelA Auth adapts with it supporting more advanced authentication features. And the best part, Propel Auth has native support for major Python libraries like FastAPI, Flask, and Django. You can easily integrate them into your product. When Auth is effortless, your team can focus on scaling, not troubleshooting. That means more releases, happier customers, and more growth for your business. Check them out to get started today. The link is in your podcast player's show notes. It's a clickable chapter URL as you're hearing this segment, and it's at the top of the episode page at pythonbytes.fm. Thank you to Propel Auth for
Starting point is 00:15:19 supporting Python Bytes. Yes, indeed. Thanks, Propel Auth. And I would like to point out, Brian, that all of the chapters are clickable. So I don't know if everyone even knows that most of our episodes have chapters. If they don't, that's usually because I forgot somehow, which is not the case. But every item on there for the chapters
Starting point is 00:15:40 is also a link to the main resource of whatever. Like, for example, the PyLeak one will link to the PyLeak GitHub repo if you click it. And I don't like people to skip on us, but I mean, I understand if we're talking about a topic that you really don't care about, that's one of the cool things about the chapter markers, you can just skip to the next topic or something.
Starting point is 00:15:59 Yeah, or if you've heard it four times, you're like, yeah, this is the seventh time I've heard about this library, you can skip it if you want. I mean, the show is pretty short so but still None the less nonetheless all right. Let's talk about the thing that I was gonna talk about first, but Kicked it down the down the line which is typed ffmpeg Okay, so I don't know folks know but ffmpeg is a command line CLI
Starting point is 00:16:23 video processing masterpiece it is a command line CLI video processing masterpiece. It is a beast of a library. And it's actually what I use for the TalkPython training courses to generate like all these different versions and resolutions and streaming styles and stuff. And you know, if we had a, let's say a five hour course, probably turn FFmpeg loose on the videos for, I don't know, 15, 20 hours, something like that,
Starting point is 00:16:48 and just let it grind on my Apple silicon. And I've got a whole bunch of automation to make that happen, which is cool. It would be easier if this existed, probably. So this typed FFmpeg is a Python wrapper that supports working with filters and typing for FFmpeg. So pretty neat. And it's kind of like PyLeak. It's more comprehensive than you would imagine. So this one offers a modern pythonic interface to FFmpeg, providing extensive support for complex filters with pictures. It's
Starting point is 00:17:19 inspired by FFmpeg-Python, but this one enhances that functionality with autocomplete, comprehensive type information, JSON serialization, and so on. So like if you look at the repo they show you, if you type FFmpeg.input. and then down comes a huge list of things with stunning documentation and type information. I mean look at that Brian, That's pretty nice there, right? Yeah, it really is. Yeah, I was really surprised. So it comes with zero dependencies,
Starting point is 00:17:51 comprehensive filter support, robust typing. That's the point of it, basically. Graph visualization, which I was talking about, hinting at partial evaluation, media file analysis, and a bunch of things. So easy to use. It shows you how you can, you know, if you wanted to flip a video horizontally and then output it, you can see a little graph of input and then it applies all the operations. There's an h-flip operation in the graph and then
Starting point is 00:18:16 there's an output to the file. You get it more and more. I know this is like you get even that interactive playground where you can drag and drop the filter bits together. What? This is, I know this is like you get even that interactive playground where you can drag and drop the filter bits together What this I know I'm telling is way more than you would expect So yeah, really need to visualize what's happening and yeah, I don't do this kind of stuff where I'm like creating really complex graphs It's more like format conversion resolution conversion stuff that I use it for But yeah If you do a lot with FF MPEG and you do stuff with video check this out. If you pay hundreds or thousands of dollars to cloud providers to like re-encode video for you you definitely
Starting point is 00:18:52 want to check this out. Oh okay. It might be might be saving a lot of money. I used to use AWS they've got some video processing API sort of thing and eventually it was Cecil Phillip actually that convinced me I should just do FFMPEG. Yeah, this is probably just calling FFMPEG in the background. I'm sure that they are which is crazy, right? Yeah. I mean it's a little bit faster if they do it but you know what I only do it once per course it's not very often. Well I guess if you're doing like your all of your courses were like 15 hours like I get why you have to pay people to do that, but you know, whatever.
Starting point is 00:19:26 Yeah. Cool. Yep, yep. Yeah, if you use good caching, then life is much easier. You can just rerun it if you add a new video to re-encode the whole thing. I love cache. I don't know about cache.
Starting point is 00:19:36 I do too. Over to you. I was gonna talk about, what am I gonna talk about? I'm gonna talk about PyTest. Like I kinda like PyT test actually. So this is, it's a fun article by Tim Kemenin and this is short. So I almost put it as a second, like a, an extra, but it's just really cool. Um, so it's a,
Starting point is 00:19:57 his article is optimizing test execution and it's running live server tests last with pi test. Okay. so this is about testing websites, web, you know, using the live server fixture. And so if you're using that, we're using Playwright or Selenium, that's definitely, this is definitely for you. But also really, if you have, the techniques in this are really cool for,
Starting point is 00:20:24 even if you're just, if you have any other, like if you have a test suite that's, there's some slow parts and it's slow because of some fixture that you're using, like some external resource or whatever, any test that uses that is a little slower, you can use the same technique. So I just wanna preface that.
Starting point is 00:20:40 So why run slow tests last? Why do they want to run them last? Well, for efficiency, you get faster feedback for unit tests that allows you to, faster feedback for the fast tests. I don't know why he puts unit tests. It could be any fast test. Allows you to catch and fix issues easier, faster.
Starting point is 00:20:59 You're not waiting for them. Also resource management keeps resources consumed by slow tests like database connections, external services and stuff not tied up through the whole test. So keeping those isolated to the end totally makes sense. So how do we do this? Well, he's talking about installing PyTest Playwright, which also is a great plugin to drive web tests using using by test. So, and PyTest Django. So this application is running a Django app. And then using, so his tests are using live server.
Starting point is 00:21:34 So what does he do? He's adding a new marker and E2E marker for end to end. And then, but he's not actually marking anything with that manually. He comes by and uses one of the PyTest's lovely hook functions. And this one is collect modify items. And it's sort of an oddball. So it's good to have some easy examples like that.
Starting point is 00:21:54 So what this does is it goes through all your tests and looks for all of them that are using the live server fixture. And then it does a couple things. He's adding the marker E to E. So adding the end-to-end marker to all of the tests that use live server, it really, E to E was, you could do live server marker. You could do any marker name you want.
Starting point is 00:22:15 But why do we do that? I'll get to that later. So he's adding the marker to the slower tests. And then he's splitting them up and running all the other tests first and then, and then he's splitting them up and running the, the, all the other tests first, and then the live server tests second. And that's really kind of the trick about PyTest collect, modify items as the way to either you can, you can bail on some tests or you can separate re reorder them. Um,
Starting point is 00:22:39 and he's using the reorder, but since we're having to loop through all of them anyway, he's using that to add the marker. And then, so why do that? Well, he's got a little example with a slow one or a fast one, but you can use that marker then and say, you know what, I'm debugging a unit test. I don't want the live servers once to run. So you can say, Hey, don't, don't run the end to end ones. You can say PyTest-M not E to E. And that will run all of your tests
Starting point is 00:23:07 that are not using the live server. And that's a cool use of markers, auto automatically applying markers, it's a cool thing. And then also, for example, you can, if you just want to run the live server ones, you can say E dash M E to E as well. So a really fun little example of how to automate pushing your slow test to the end
Starting point is 00:23:28 and being able to select them or not select them. It's cool. I love this idea. I think I might adopt it. That's nice. Also, a gentle introduction to hook functions, because hook functions can be a little scary. And something simple like reordering your tests doesn't seem like seem like it would be simple but it's only what 13 lines of
Starting point is 00:23:48 code including a comment some link lines it's not bad. Yeah that's not not too bad at all. Okay yeah I'm definitely gonna look at this because I've got some some tests that are blazing fast and some that are pretty slow for the various web apps I got so you know check it out. I don't use live server or any of those things but you know it's like I want to get the site map and then call a representative subset of things within there just to make sure it's all hanging together. That's definitely an E to A test. Well, and also like, so if I see a lot of cases, if somebody's using a database connection,
Starting point is 00:24:18 they'll have, you know, even if it's just a mock database or a small one, a whole bunch of test data that they've filled in, and maybe it's not really slow, but it's slower than their other stuff. It's often accessed via a fixture, and you can easily select the tests that use that fixture. It's pretty cool. The other thing, the reason I brought this up, is
Starting point is 00:24:43 I wanna make sure everybody... I mean, yes, I write about pytest a lot, but I like other people to write about it too. So please, if you've written some cool pytest documentation, send it to me. So, yeah. Indeed. Looks good. All right.
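The hook being discussed can be sketched roughly like this in a conftest.py. This is a minimal guess at its shape, not the article's exact code, and it assumes the slow tests are the ones requesting the live_server fixture and that the marker is named e2e:

```python
# conftest.py -- a minimal sketch of the reordering trick, assuming the slow
# tests are the ones requesting a "live_server" fixture and that the marker
# is named "e2e" (both assumptions, not the article's exact code).
import pytest

def pytest_collection_modifyitems(config, items):
    fast, slow = [], []
    for item in items:
        if "live_server" in getattr(item, "fixturenames", []):
            # Add the marker while we're looping anyway, so these tests can
            # also be selected or deselected with -m on the command line.
            item.add_marker(pytest.mark.e2e)
            slow.append(item)
        else:
            fast.append(item)
    # Mutate the list in place: fast tests run first, live-server tests last.
    items[:] = fast + slow
```

With that in place, `pytest -m "not e2e"` runs only the fast tests while you're debugging, and `pytest -m e2e` runs only the live-server ones. The same fixturenames check is how you'd select tests by any fixture, such as a database fixture.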
Starting point is 00:24:59 Let's jump over to some extras. All right. We have a new course at Talk Python Training, and this is the very first announcement for it. I haven't even got a chance to send out email about this, but Vincent Warmerdam, who's been on Python Bytes before, created a short LLM Building Blocks for Python course. And so this isn't like prompt engineering or anything like that. It's like, what are some good libraries you can use to build code that uses LLMs for various things? How do you get structured output? Like, for example, how can you use Pydantic
Starting point is 00:25:30 to communicate to the LLM how it should speak to you in a JSON response instead of a text response? Stuff like that. Yeah, super neat. So check the course out. It's just 19 bucks over at Talk Python Training. Just go to courses, or go to talkpython.fm, click on courses. It'll be right there at the top of the new courses list. So check that out, that's super exciting. Also, over at Talk Python, I've done this thing called deep dives, where it goes into a particular episode,
Starting point is 00:25:56 and you can look at it, it'll tell you background on the guests, background on important concepts you might wanna learn to get a better sense of understanding what's going on, or dive into extra details on each of the things we've spoken about and so on. So the news is I have finished a long journey
Starting point is 00:26:16 of getting one of those deep dive analyses for every Talk Python episode for the last 10 years, and the result is 600,000 words of analysis, if you were to go through and read them all. It's 4.5 million characters. That's a lot of content. But that makes the search over there better, because if you search, that now includes,
Starting point is 00:26:35 well, basically, the search engine considers the deep dive as part of the episode, and it looks for content within there, not just within what I put in the show notes and so on. So really, really cool, super proud of that. That was a lot of work, but it is now done. So I wrote a little article about that, and I'll link to it if you're more interested than what I just said.
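Circling back to the Pydantic-and-LLM idea from a moment ago, here's a minimal sketch. The Recipe model and the hard-coded reply are made up for illustration, and this is not code from the course:

```python
# A minimal sketch of using Pydantic to get structured LLM output (Pydantic
# v2 API). The Recipe model and the hard-coded "reply" are illustrative.
from pydantic import BaseModel

class Recipe(BaseModel):
    title: str
    minutes: int
    ingredients: list[str]

# Step 1: include the JSON schema in your prompt so the LLM knows the shape
# to respond with ("reply only with JSON matching this schema").
schema = Recipe.model_json_schema()

# Step 2: validate whatever comes back, so malformed output fails loudly
# instead of silently flowing through your program.
reply = '{"title": "Toast", "minutes": 5, "ingredients": ["bread", "butter"]}'
recipe = Recipe.model_validate_json(reply)
print(recipe.minutes)  # 5
```

The validation step is the payoff: if the model returns prose or misshapen JSON, you get an immediate, precise error rather than a subtle downstream bug.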
Starting point is 00:26:57 Also, remember I had a rant. I even named last week's episode after it: stop putting your .folders in my ~/, whatever. Well, Eric Mesa said, hey, the place some platforms store .files is even worse, because the .files and folders are not even hidden, right? But what about Linux? Well, this XDG standard speaks to that, and so I even put together a little cheat sheet on it. So where do the config files go? Well, they go under your home, whatever your $HOME is, right? Basically ~/.config, so maybe ~/.config/my_app for some settings.
Starting point is 00:27:46 Or there's a cache folder, and then you put that stuff into ~/.cache. There's still a few dot folders in your home, but not one for every single application you happen to have run, or something has run for you. So this is kind of cool, and people can check it out. There's a lot of details I put together here, and even a way to use this XDG library, which
Starting point is 00:28:08 is right here somewhere, in Python, how to use it. Or actually just a function you can use. But pretty cool. That's pretty cool. Any idea what XDG stands for? Zero. I have zero idea. That's fine.
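The lookup rule from that cheat sheet can be sketched with just the standard library; the xdg helper library mentioned wraps this same logic, and my_app here is a hypothetical app name:

```python
# A minimal sketch of the XDG base-directory lookup described above, using
# only the standard library. The rule: honor $XDG_CONFIG_HOME and
# $XDG_CACHE_HOME if set, otherwise fall back to ~/.config and ~/.cache.
import os
from pathlib import Path

def xdg_config_home() -> Path:
    # $XDG_CONFIG_HOME if set and non-empty, else ~/.config
    return Path(os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config")

def xdg_cache_home() -> Path:
    # $XDG_CACHE_HOME if set and non-empty, else ~/.cache
    return Path(os.environ.get("XDG_CACHE_HOME") or Path.home() / ".cache")

# A hypothetical app named "my_app" would keep its settings under:
print(xdg_config_home() / "my_app")  # e.g. ~/.config/my_app
```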
Starting point is 00:28:24 I'll look it up for next time. I did look it up as part of like putting that little cheat sheet thing together, but then it was last week and I forgot. Yeah, that's me. Yeah. Okay. Is that your extras? No, I got a couple more.
Starting point is 00:28:37 I'll go quick. Okay. Every time I think... you know, are you a fan of Scarface? Did you watch that when you were younger? I'm like one of the only people that never has. No, this is The Godfather, actually. This is Al Pacino, though, so that's why I thought of it. Same actor. Every time I think I'm out, they pull me right back in, right?
Starting point is 00:28:53 Well, that's me and the pro version of OpenAI. I thought, like, okay, I'm just going to go back to being a regular, normal user of this thing, and then, no, they go and release o3 Pro. So I'm like, I have to pay the ridiculous money to try that out again, because it's really worth it. Although I will say, I'll take one for the team here: o1 Pro was incredible, and it's starting to be phased out. I don't know how much longer it'll last. o3 Pro does not seem nearly as good to me.
Starting point is 00:29:18 Not even close. I don't know why. o3 is pretty good, so maybe I'm considering going back to being a regular user again, but every time I'm out, Brian, they pull me right back in. Okay, another one, real quick. This is dynamic: when I wrote it down it was 17, right now it's 20, but Python Bytes is the
Starting point is 00:29:37 20th most popular tech news podcast in the world, according to Goodpods. Okay, according to Goodpods, which is decent, and the number one developer news show, period. How about that? Specifically developer, not, like, a show that also covers general tech or AI or whatever. That's pretty cool, so thanks, Goodpods, for pointing that out. I used to use Chartable, but then Spotify bought them and shut them down. Thanks, Spotify. I think it was Spotify, I'm pretty sure. They were definitely bought and shut down. Okay, I want you, anyone out there listening,
Starting point is 00:30:09 do not take action on this item until you hear the second follow-up item in this extra, because it's important. On June 3rd, Python 3.13.4 was released. Hey, right, this is cool. It covered some CVEs that had to be fixed, so we quickly got fixes for things like tarball security issues that could be really bad if you process
Starting point is 00:30:31 tarballs from external input. So you might think, I want to go install that. No, no, no. Just a few days later, hey, 3.13.5 is out, because we had to quickly release this to fix an issue that was in 3.13.4. Okay. So make sure you get 3.13.5 if you have either 3.13.3 or 3.13.4. I don't know what the actual issue in 3.13.4 was, but this one is like, oh my gosh, we gotta fix it again.
Starting point is 00:31:00 That's it. Those are my extras. All right. I only have a couple, but let's pop over. So along the free-threading topic, this was sent in by John Hagan, and it's from a python.org discussion thread called "Is free threading our only option?" This is from Eric Snow, whose opinion I at least want to listen to. So there's an interesting discussion about whether or not free threading is the only way to go. And he does mention he's
Starting point is 00:31:39 not recommending not supporting free threading, but there are other things to think about. So I'm just gonna drop this link. It's kind of a long discussion, but it's an interesting read. Yeah, it's also noteworthy that Eric Snow did a ton of the work on sub-interpreters. Yeah, and that's part of it; it's talking around sub-interpreters. And one of the interesting comments here,
Starting point is 00:32:04 like one that popped out to me, from Antoine Pitrou, who's a maintainer of PyArrow: he says, just as a data point, our library supports free-threaded Python, but I've not even looked at sub-interpreters. And I kind of know that it's gonna be complicated,
Starting point is 00:32:25 or at least it might be complicated, to think about free threading, but thinking about sub-interpreters blows my brain up. So I'm not thinking about them at all. Well, what if you could have each thread have its own sub-interpreter, how about that? Or multiple sub-interpreters. I don't know. Or each sub-interpreter has its own multiple threads. Sure. Yes, hence the brain blowing up. Yeah, anyway.
Starting point is 00:32:50 And another free-threading topic. This is from Hugo van Kemenade: free-threaded Python in GitHub Actions. He actually released this back in March, but it's really about how to make sure... so we're encouraging people now, with 3.14 at the very least, to test free threading for their project. So if you are a maintainer of a third party package,
Starting point is 00:33:19 basically, if I can get your stuff via pip and it's got a library that I can use in other code, please test it for free threading, and then tell people whether or not you're supporting free threading. And this post shows how to do that within GitHub Actions. So a really great write-up. And it's basically just: add a t to the Python version. It's not bad. So this isn't a lot of extra work. Indeed. Not too much. All right. Well, are you ready for a joke?
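The workflow change being described looks roughly like this; a hedged, illustrative fragment, not Hugo's exact file (the matrix values and step list are assumptions), with the t suffix on the version being the key bit:

```yaml
# .github/workflows/test.yml (fragment) -- illustrative only
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # "3.14t" is the free-threaded build; plain "3.14" is the GIL build
        python-version: ["3.13", "3.14", "3.14t"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: python -m pip install . pytest
      - run: python -m pytest
```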
Starting point is 00:33:49 Yes. Close it out with an idea. So naming things is hard, right? There's the famous joke, which will introduce this joke: there are two hard things in computer science, naming things, cache invalidation, and off-by-one errors, right? Yeah. So this one comes from Programming Humor, and it's a bit of a meme. It has two senior devs just, like, fighting, like, you know, literally wrestling, fighting away. It says, here's a code review meeting. The variable name should be number_to_be_updated,
Starting point is 00:34:19 says one of the senior devs while she's throwing down the other senior dev. The variable name should be updated_number. Meanwhile, the junior dev is sitting there, eating popcorn, watching it all go down, while working on a new file with variable names such as AA1 and XYZ. Yeah, and I'm over here.
Starting point is 00:34:39 Do you have naming debates? Sorry, go ahead. You're over there. No, we use linters to do the, to do the argument for us. Um, but I'm looking at this going, it's, it's a camel case. It needs to be snake case. What's up with this? It's gotta be like a JavaScript or C sharp argument. Um, and I'm, I'm one of the worst whenever I see, uh, um, uh,
Starting point is 00:35:03 style guides getting written, I always cringe, when there's a new style guide on the team. But I always make sure that it complies, like it at least adds to and doesn't detract from actual common practice in the rest of the industry. And the other thing is, for the short variable names, you have to allow things like x, y, z, and i and j for loop variables and stuff. Although I do agree that using both i and j is evil, because in some fonts you can't really tell much
Starting point is 00:35:37 of a difference between the two, so yeah. Yeah, but like n for an int, that's steeped in historical math style outside of programming, right? Yeah. And I've had people, like... x and y for algebra, absolutely. And then I've had people gripe about using i as a variable for a loop. And I'm like, that's just so common. It is, like, for i in this, especially if it's not a nested loop, why not? Yeah, well, have you done any C++? Come on, that's like one of the first things you write:
Starting point is 00:36:13 yeah, for (int i = 0; i < n; i++), and you index into the array, because that's how it goes. Yeah, yeah, anyway. Well, so don't invite me to your code review meeting, because I'll be the grump in the background. Maybe you should. I know you don't like what they wrote, but they have a point, right? Let the i be, let it be. Yeah. All right. Well, thank you. Yeah. Thank you as always, and thanks everyone. Bye, y'all. Bye.
