Python Bytes - #477 Lazy, Frozen, and 31% Lighter
Episode Date: April 20, 2026

Topics covered in this episode:
- Django Modern Rest
- Already playing with Python 3.15
- Cutting Python Web App Memory Over 31%
- tryke - A Rust-based Python test runner with a Jest-style API
- Extras
- Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: Django Modern Rest
- Modern REST framework for Django with types and async support
- Supports Pydantic, attrs, and msgspec
- Has AI coding support with llms.txt
- See an example at the "showcase" section

Brian #2: Already playing with Python 3.15
- 3.15.0a8, 3.14.4, and 3.13.13 are out (Hugo van Kemenade)
- Beta comes in May, RCs in September, and the final is planned for October
- But still, there's awesome stuff here already; here's what I'm looking forward to:
  - PEP 810: Explicit lazy imports
  - PEP 814: frozendict built-in type
  - PEP 798: Unpacking in comprehensions with * and **
  - PEP 686: Python now uses UTF-8 as the default encoding

Michael #3: Cutting Python Web App Memory Over 31%
- I cut 3.2 GB of memory usage from our Python web apps using five techniques:
  - async workers
  - import isolation
  - the Raw+DC database pattern
  - local imports for heavy libraries
  - disk-based caching
- See the full article for details.
Brian #4: tryke - A Rust-based Python test runner with a Jest-style API
- Justin Chapman
- Watch mode, native async support, fast test discovery, in-source testing, support for doctests, client/server mode for fast editor integrations, pretty per-assertion diagnostics, filtering and marks, changed mode (like pytest-picked), concurrent tests, soft assertions, JSON, JUnit, Dot, and LLM reporters
- Honestly haven't tried it yet, but you know, I'm kinda a fan of thinking outside the box with testing strategies, so I welcome new ideas.

Extras

Brian:
- Why aren't we uv yet? Interesting take on the "agents prefer pip" problem.
  - Problem with the analysis: many projects are libraries and don't publish a uv.lock file.
  - Even with uv, it's still often seen as a developer preference for non-libraries.
  - You can still use uv with requirements.txt.
- PyCon US 2026 talks schedule is up
  - Interesting that there's an AI track now.
  - I won't be attending, but I might have a bot watch the videos and summarize for me. :)
- What has technology done to us? - Justin Jackson
- Lean TDD new cover
  - Also, 0.6.1 is so ready for me to start f-ing reading the audio book and get on with shipping the actual f-ing book, and yes I realize I seem like I'm old because I use "f-ing" while typing.

Michael:
- Python 3.14.4 is out
- Beanie 2.1 release

Joke: HumanDB - Blazingly slow. Emotionally consistent.
Transcript
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.
This is episode 477, recorded April 20th, 2026. I am Brian Okken.
And I'm Michael Kennedy. And this episode is sponsored by us and you guys. So there's a bunch of courses over at Talk Python Training.
There's a pytest course over at Python Bytes... wait, PythonTest.com. My own site. I forgot the name of it.
Thanks to all the Patreon supporters, and to everyone encouraging me and grabbing copies of Lean TDD; I've gotten some good feedback.
So I'll plug that later in the show as well.
Yeah, if you'd like to reach out, send us topics.
One of the topics I'm covering is from somebody that sent it in and we really appreciate that.
So get a hold of us through Bluesky, Mastodon, or through email or the contact form on pythonbytes.fm,
and all of that info is at pythonbytes.fm. If you are listening and you're thinking, hey, I'd like to watch this live sometime,
you can just head on over to pythonbytes.fm/live or just look around. You can find links to
watch us live on YouTube or watch the past episodes. And finally, please join the newsletter because
we send out links and information and background information. And some people have mentioned to us before
that some topics are a little over their head, but we don't want that.
So we send you some background information so that you can understand every topic we talk about.
And with that, I'll take a rest, and it'll be your turn.
You know what?
I need to take a rest too, honestly, Brian.
I've had a long weekend, big party for my wife and had a bunch of our friends in.
We ran this party van bus thing.
I drove around a bunch of wineries, and I just need to rest.
But, you know, am I the Django
rest type? I don't know. I need legit rest. I need legit rest. But I'm going to tell you about
Django rest. In fact, Django Modern Rest, which is a framework for Django that is type-based. So all the
classes support and use runtime type information to do all of their magic, right? Not just
autocomplete or linting. And it has true async support. So I think it's Django Ninja-like. But with a
different take, okay? So this is a pretty cool project here. And yeah, I just looked at and thought,
you know, this is, this looks like something that's really fun. It actually, one of the differences
than, say, Django Ninja is it supports multiple model foundations, I guess. So Pydantic, which is the
first one listed here, which is great, but also msgspec and attrs. Remember attrs?
Like, attrs is still a thing. Yeah. Pre-Pydantic and, you know, kind of data-class
style. And one of the things that's interesting here is it says if you use msgspec,
it allows you five to 15 times faster APIs than the alternative. Msgspec is all about
ultra-compact exchange on the wire type of thing. Also has true support for ASGI, async applications.
But one of the things that's interesting is it's just good old Django, like nothing too new,
nothing that you wouldn't expect.
So if you're doing Django, it feels just like, yeah, that totally fits in.
There's a getting started page here, which is a little zoomed for all of us,
that we can go down and sort of go through.
It's pretty interesting.
One of the things that's interesting: it also supports PyPy, not the mispronunciation
of PyPI, but literally PyPy, which I think is interesting.
And Django 4.2 or above.
Hat tip to an upcoming topic, the default recommended way to install
it is uv, then Poetry, and then pip, so that's pretty cool.
So you've got to do things like when you install it or you set it up,
you'd say, I want Django Modern Rest as a package, bracket pydantic or bracket attrs or
bracket msgspec, so that you get your various dependencies installed that you're going
to need, or, you know, just whatever, just put Pydantic as a dependency as well, then you're good
to go.
Also interesting.
Remember I talked about llms.txt and how I added that to Talk Python so LLMs can
understand better how to work with Talk Python.
They also understand better how to work with my courses.
So this does this as well.
It has explicitly an llms.txt and an llms-full.txt.
So if you just say, hey, Claude or whatever I'm working with,
I'm going to start this project and it's using Django Modern Rest,
a pretty new framework you might not know.
So you can actually just drop that URL and I'll say,
please read this before you begin this project and make a note that this is a resource.
for you. It also has support for Context7. Are you familiar with Context7? No. So Context7
is, I honestly don't really know what to make of Context7. I thought I understood it, but I
kind of don't necessarily. But what it does is, Context7 is a website where you can enter
different libraries, and then they parse it and turn it into something that AIs can use
to understand that library better. I'm not sure how great it works, but you come in here and it has
different skills, for example.
Like it has a "Django Modern Rest from Django REST Framework" skill.
So you could give it this skill and say, hey, use this agent that understands both of these frameworks
because I want to upgrade from Django Rest Framework, DRF, which has some of the craziness
that we talked about last week, remember?
Or two weeks ago, but last episode.
Anyway, this is a pretty cool project.
Interesting.
Yeah, yeah, yeah.
It's got stuff for, hey, for my.
Fine. Let's see what it says about it. I don't know. Actually, it just takes me to it. But, you know, you can submit your own library to this, by the way. So that's, I think, how this got here. I think I may have submitted this and so on. But yeah, anyway, you can say, hey, I want AI to understand my library better. And this also has an MCP, which you can install. I'm actually, I'm not a super huge fan of it. I got other things that I do for this. But anyway, it's interesting that they explicitly went to that effort to help you get started, both converting and just working with it. All right, so let's look at this showcase.
Like, notice here, oh, actually, I gave them some short shrift here.
Look at this.
They do message spec, which is cool, so you can do message spec.
But they also do Pydantic, Atters, data classes.
We're going to be coming back to that.
TypedDict.
Now, that I did not see coming.
TypedDict.
And straight named tuples as one of your foundations, if you like.
I would never have seen this coming.
Okay.
All right, so let's go down.
If you scroll down, I didn't want to do the, I'm not going to do msgspec.
I'll do Pydantic, whatever.
So we can go down a little bit further, and there's a full example here.
And it just shows you like a one file Django thing.
So it shows you how to set up your Django app, your templates, et cetera, et cetera.
And then you just can create, you create these models for data exchange in your web app,
which I think is pretty interesting because a lot of people think of the model exchange
to be like basically database classes, but that's not really what you want.
You want like, what is this form submit or what does this API receive as data,
regardless of how we store it in the database, right?
Yeah.
So there's a user create model, which just has an email, and a user
response model, which has a UUID, which is the ID of the created user, right?
So then you can create an endpoint and you say this thing accepts a post.
It is, you derive the class from controller of Pydantic serializer or controller of
attrs serializer.
And then for your signature of your function, you say, hey, I want body of user create model.
And then it automatically parses and validates that, Pydantic style.
Then you just use it.
So you'll never get to your code if your Pydantic model doesn't validate,
parse, all those things.
So you don't have to check that kind of stuff.
Pretty neat.
Yeah, that is cool.
Yeah, so there's a lot more to this.
But people can dig into it and explore it.
But it looks like a pretty strong contender.
If you bump over to GitHub, it's got about 1,000 stars, 100 forks.
Pretty active.
How old does it look?
A month?
No, six months.
It was created six months ago.
So I think that's still, that's pretty good growth,
a thousand stars, and six months.
I mean, it's not OpenClaw, but that's,
It looks like there's a lot of active development going on still right now.
Yeah, a commit an hour ago.
Let's look at the commits.
What's going on here?
Four hours, five hours, seven hours, 11 hours.
Yeah, that's just those are the active commits of today.
That's pretty solid, honestly.
Really good.
Yeah.
So, anyway, I throw it out there as another Django area that people can pay attention to, another Django framework.
It's very similar to Fast API and very similar to Django Ninja,
but it seems like it's a little more flexible
in the way you work with it.
Yeah.
Interesting comment here of,
from SunPos,
Django Ninja,
Django Bolt,
Django Modern Rest.
I guess people get back to Django
and enjoy it.
Nice to see.
And I think that has,
I think there's a lot to be said
for using LLMs and AI
is because Django's been around
for a while,
so they know how to deal with it.
So, yeah.
Django is very well understood by AI.
So that's actually,
that's actually a huge bonus in
my mind. Yeah. Yeah. Well,
should we shift gears?
What's new? Well, what's new is
Python. I think it's been out for 30 years. What are you talking about?
Well, so Python,
I'm looking forward,
so there's Hugo van Kemenade.
I'm sorry, I always mispronounce your name,
but we love Hugo.
So 3.15,
there's 3.15 alpha 8 out,
plus a release of 3.14,
3.14.4, and also 3.13.13 are out.
There's a post about that.
But I'm looking forward to 315.
So, Python 3.15.
We're looking at the status of the versions.
We still have like six months to go before we can really solidify and start using it.
But I think that I'm excited to get started sooner anyway.
So we've got, what, a beta's coming out in May,
RCs in September, and the final is
planned for October. But look at what's already in there. Already in the alpha, we've got
explicit lazy imports. And we've been talking about that on the show. And that's already
there. Frozendict built-in type. Anyway, what do I have up here? I've got, yeah, the frozendict
built-in type. This is pretty cool, to be able to, by default, do, like, a dictionary that
can be used as hashable.
You assign it at instantiation time
or when you define it
and you can't change it after that.
So it's hashable.
So that's pretty cool.
Yeah, and one thing I'd like to add to this frozendict
that I think is super interesting.
And we've had other frozen types, like frozenset, I think.
But in this Python t, the free-threaded Python world,
one of the things that can really unlock concurrency
is not having to worry about locking on different objects.
If you work with frozendicts, it's read-only
and you can just have all the threads
ram on it all at once, right?
You don't have to worry about locks
once it's created.
So people just consider
adopting immutable data types in general
when possible.
Like if you're creating a dict but you're not going to change it,
frozendict seems like something cool to put in place
that adds a little more security.
So if you don't expect it to change,
like you can set it up now so it cannot change.
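What Brian is describing can be approximated today. PEP 814's frozendict isn't in any released Python yet, so this is just a sketch of the idea using the stdlib's types.MappingProxyType — note that unlike the proposed frozendict, a mapping proxy is a live view of the underlying dict and is not hashable:

```python
from types import MappingProxyType

# PEP 814 proposes a built-in frozendict; until then, MappingProxyType
# gives a read-only *view* of a dict. (Unlike the proposed frozendict,
# a proxy is not hashable and reflects changes made through the
# original dict -- so keep the original private.)
config = MappingProxyType({"host": "localhost", "port": 8000})

print(config["port"])  # reads work like a normal dict

try:
    config["port"] = 9000  # any mutation raises TypeError
except TypeError as e:
    print("immutable:", e)
```
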
Yeah, and even things that generally
we think of as changing,
you can use data flows, like algorithmic stuff, that's functional. Like you said, it's
a functional model for some part of your system that can be easily async, because it's all
immutable types. So yeah, pretty cool. We get unpacking of comprehensions
getting better, so that's kind of fun. Star and star-star are working better for that stuff. Let's see, what did I have here
that I wanted to talk about.
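The star and star-star comprehension unpacking mentioned here is PEP 798, which isn't in a released Python yet — but today's equivalents show exactly what the new syntax would shorten:

```python
from itertools import chain

lists = [[1, 2], [3], [4, 5]]

# PEP 798 would allow: flat = [*xs for xs in lists]
# Today's equivalents:
flat_nested = [x for xs in lists for x in xs]   # double-loop comprehension
flat_chain = list(chain.from_iterable(lists))   # itertools version

assert flat_nested == flat_chain == [1, 2, 3, 4, 5]

# And for dicts, PEP 798 would allow: {**d for d in dicts}
dicts = [{"a": 1}, {"b": 2}]
merged = {k: v for d in dicts for k, v in d.items()}
assert merged == {"a": 1, "b": 2}
```
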
I don't remember.
Anyway, yeah,
lots of great stuff.
Let's see, annotated type forms.
Oh, Python now uses UTF-8 as the default encoding.
Can't wait for that because I'm tired of typing UTF-8.
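Until PEP 686 makes UTF-8 the default, the safe habit is exactly the typing Brian is tired of — spelling out the encoding. A small sketch; the `-X utf8` / `PYTHONUTF8=1` opt-in mentioned in the comment is existing CPython behavior:

```python
import os
import tempfile

# Before PEP 686 lands, open()'s default text encoding is platform-
# dependent (locale-based on some systems), so you spell it out:
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w", encoding="utf-8") as f:   # explicit, for now
    f.write("naïve café ✓")

with open(path, encoding="utf-8") as f:
    assert f.read() == "naïve café ✓"

# You can opt in early with `python -X utf8` or PYTHONUTF8=1,
# which enables UTF-8 mode before it becomes the default.
```
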
Did you shout out lazy imports?
Yeah, well, explicit lazy imports,
that's probably what I'm most excited about, is being able to just say,
just say lazy import json, or lazy import whatever.
and it doesn't actually get imported
until somebody actually uses it at runtime,
that's going to make,
that's just such a clean interface
and it's going to make everything,
a lot of stuff so much faster.
In my world, it's the testing stuff,
because so much,
because pytest imports everything to start with,
but it doesn't need everything.
The tests only need this stuff when the tests are actually running.
So having test runs will be a lot faster with lazy imports.
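The behavior Brian describes can be had today without the new syntax: the importlib documentation ships a recipe for deferred loading built on the real importlib.util.LazyLoader API. A rough sketch of the idea (PEP 810 would make this a one-line statement):

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module that is only actually loaded on first attribute
    access -- roughly what PEP 810's `lazy import` would give as syntax.
    This helper follows the recipe in the importlib docs."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # defers the real load
    return module

# PEP 810 would spell this as: lazy import json
json = lazy_import("json")
# Nothing heavy has executed yet; the first attribute access triggers it.
print(json.dumps({"lazy": True}))
```
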
And can I add that that's a little bit of foreshadowing right there.
Is it?
It is.
Carry on.
Okay.
No, I just, um, there's a lot of exciting stuff going on in 3.15 that
I'm already excited to play with.
And it used to be, I don't even remember how long ago,
that it was sort of hard to grab an alpha release.
But now with uv, I just said uv self update and uv
python install 3.15, and bam, I had the alpha.
So that's awesome.
Yeah, you could even just say uv venv and just, you know, give it a version of the Python.
And if you don't have it, it'll just go and say, okay, we're getting 3.15.
Yeah.
Pretty cool.
Yeah.
Anyway, that's all I wanted to say, but just I'm excited about 315.
Cool.
I am excited about 315 as well.
And that is because I just, I'm excited about a lot of the things there.
I'm extra excited about this lazy, this PEP 810 lazy imports because I've discovered that it can make a mega difference.
And it lets you write a lot of cleaner code than you would right now.
So I wrote an article that was sort of a guide of some project I did a couple of weeks ago.
And I would have talked about it last week, but we skipped last week because I was at a conference.
So we're talking about this week.
And the title of the article is cutting Python web app memory by over 31%.
And that's across the entire server running Python bytes,
um, running Talk Python,
Talk Python training, all those things.
So I just sat down and said, you know, it's kind of ridiculous how much memory these
apps use.
Like Talk Python training alone.
I'm just going to focus on that, but this applies to most Python web apps or APIs.
It alone was using almost 1.3 gigs to run.
Like, that seems a little ridiculous.
And there's a separate little search daemon process, and it was using 700 megs, just chilling.
Why do you need so much memory bad Python app?
Who wrote you is what I want to know?
So I set about the process of going, like, at least let me understand where this memory is going.
And if there's any way that I could do something to make it better, okay?
It's not like we were running out of memory, right?
I have a 16 gig server running in the cloud, and I think it was using 9 or 10 gig.
So there were six gigs left, but at the same time, it's, what if I want to run other apps?
Like, I want to self-host something that would maybe power the web apps or, you know, be like a CRM or some other thing that I just want to run and not have to set up other infrastructure.
It'd be great if it's like, oh, there's so much RAM, it doesn't even matter.
You know what I mean?
And RAM, by far, is by far more critical and scarce than CPU.
Not, you know, putting this RAM crisis aside, this stuff that AI is triggering,
just straight: you get 16 gigs of RAM, you get 8 CPUs.
For most people, most workloads, the CPU is pretty chill, and the RAM is a lot higher.
You've got to get a lot of traffic before CPU becomes the problem, right?
So thinking about the RAM, I think is important.
So I started working on this and said, well, what can I do?
The starting point was 1,280 megabytes.
And the little search Damon thing that I told you about, that once an hour, once every day,
I can't remember the schedule.
I think it's a few times a day.
It'll pull all the content of Talk Python training and turn it into a search engine.
So, like, you go over here and you're like, hey, I'm interested in, you know, taking some class.
And in your class, you can say, oh, I'm not logged in.
But you can just go over and search and say, well, do you talk about pie tests?
Let's see.
Well, why, yes, we do.
And it has, like, all these really nice, deep understanding of, like, the hierarchy of stuff.
It's not just like a regular search, right?
Like, it's something I put together.
But it's still, this is a ridiculous amount, 700 megs for something that just, like,
reads from the database and writes to the database and otherwise it's doing nothing.
So I'm like, well, how can I do this better?
The first thing I did, there's five things I'm going to talk about.
Number one is I was running two to three worker processes to scale out web requests,
because everything was still based on Pyramid.
It's synchronous.
It's WSGI.
I want to have fewer worker processes, and for that I really want to have better concurrency in the one
worker process that's there, right?
I don't want like one slow request to be just like, well, that's it.
The site's not responding, right?
Because of the Gill.
I decided the first thing to do is to rewrite everything in Quart.
And it could be other frameworks, anything that's async.
I could have rewritten in Fast API, but I really like the Flask model.
And Quart is the true async version of Flask, right?
So that's what I did.
And that let me turn the other worker process off, which right there just cuts your memory straight in half, right?
Because they both use the same amount of memory, and there's two of them.
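The single-async-worker idea can be sketched without Quart at all. This is plain asyncio, not Michael's actual app; it just shows why one event-loop worker can absorb many slow requests that would each block a synchronous worker:

```python
import asyncio
import time

# Not Quart itself -- a minimal asyncio sketch of why one async worker
# can replace several sync workers: slow "requests" overlap instead of
# blocking each other.

async def handle_request(n: int) -> str:
    await asyncio.sleep(0.2)          # simulate a slow DB/API call
    return f"response {n}"

async def main() -> tuple[list[str], float]:
    start = time.perf_counter()
    # Ten concurrent requests in a single process, no extra workers.
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(len(results), f"requests in {elapsed:.2f}s")  # ~0.2s total, not ~2s
```

A synchronous WSGI worker would have served those ten 0.2-second requests one after another; the event loop overlaps the waits, which is what lets Michael drop the extra worker processes.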
And, you know, people think, like, oh, yeah, whatever.
Like, Michael, your site's just a, it's just a blog.
Like, dude, what are you talking about?
Like, you seem to think, like, there's a lot going on here.
But I work on a real app, so it doesn't apply to me.
I ran my little tally thing against Talk Python Training, not any of the podcasts.
So just the courses.
178,000 lines of Python, 300,000 lines total.
Like, that's enough to, like, spend some time to figure out what's going on, right?
That's complicated enough for most apps, I imagine.
At least to be somewhat representative.
Yeah, but you're, like, running courses and podcasts and stuff.
stuff. This is more complicated than just a blog.
Yeah, that's true. That's true. Yeah, but
tell Reddit that. So
then the next thing, the number two
was, I'm going to rewrite this in
the raw plus DC
design pattern that I talked about. Just using
straight queries, not
ORMs or ODMs. Although I have something
and I'm reserving saying more about that, but still
just straight queries and then mapping data
classes, or it could be Pydantic or attrs,
right? It doesn't really matter. Pick a model,
a simple data model. And that actually
made a pretty big difference. That dropped
200 megs, 100 megs per worker, just switching away from an ORM to just raw queries and data
classes.
And it almost doubled the requests per second, which is wild.
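A minimal sketch of that Raw+DC pattern — raw queries plus a plain dataclass as the mapping layer. sqlite3 is used here only to keep the example self-contained; Michael's apps use a different database, and the table and names are made up:

```python
import sqlite3
from dataclasses import dataclass

# Raw + DC: raw SQL instead of an ORM, rows mapped onto a dataclass.

@dataclass(frozen=True)
class User:
    id: int
    email: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("ada@example.com",))

def get_users(conn: sqlite3.Connection) -> list[User]:
    # A raw query; the only "mapping layer" is the dataclass constructor.
    rows = conn.execute("SELECT id, email FROM users").fetchall()
    return [User(*row) for row in rows]

users = get_users(conn)
print(users)
```

There is no ORM identity map, no change tracking, no lazy relationship machinery held in memory per worker — which is where the savings Michael measured come from.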
Okay, so then number three, well, once I had done the Quart thing, I was able to just
tell Granian, like, I want one worker, or two, or three.
For a while, it was three or four, and I've been, like, dialing it back.
It's got faster.
So that, say, 500 megs there.
So now we're down to, yeah, we're down to 500.
And then for the search thing, what was happening is it would load up a bunch of imports,
it's getting exciting here, would load up a bunch of imports, and then it would run a bunch of
code. I don't know exactly where the memory was going, but a lot of stuff
that it would work with by interacting with the database and so on would get cached or just left
in memory.
So it was using 708 megs of memory.
So I said, well, what if, like really the main core.
loop of the app is just start, look at a timer, after a certain amount of time, run a really
complicated set of queries, and then write a bunch of structured data indexed back into a certain
structure so I can query it ultra-fast, right? That main part doesn't need, it was importing,
like, all the library, like all of Talk Python Training, which would pull in everything that
Talk Python Training's main dunder init would pull in, which would pull in all the libraries,
and, you know, it would cascade into this mega import. So I said, well,
What if you just had the loop and then in a separate file, you ran, you would start a process that ran that separate file that did the indexing and then stopped.
And like just finished, like, because it didn't need a response.
There's like, okay, I'm done indexing.
And that temporary subprocess would be the thing that did all the imports.
And when it shuts down, those imports go away.
So that took it from 708 megs to 22.
That's great.
That's insane, right?
Yeah.
So the subprocess is still getting like 700
megs or whatever, but it goes away.
But for like 30 seconds to a minute, not
constantly, right? And these spikes are, I mean, they're fine, but it's like,
you know, not everything is spiking in memory at the same time, just like they don't
in CPU. It's just like, it's not a fixed cost, which is pretty interesting.
You know, the work was just like, I'm just going to do the index, move the indexing function
to a new file, move the imports that it needs to a new file and just do a sub-process
to call it instead of calling it directly, done.
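That isolation trick can be sketched like this. The names and the inline "indexing job" are hypothetical (in the real app it would be a separate file with its own imports); the point is that the heavy imports live and die with the child process, so the parent's memory stays flat:

```python
import json
import subprocess
import sys

# The child process does the heavy imports, does the work, prints a
# result, and exits -- taking its memory with it.
indexing_job = """
import json          # stand-in for the heavy, cascading imports
print(json.dumps({"indexed": 123}))
"""

def run_indexing() -> dict:
    # sys.executable re-uses the current interpreter for the child.
    out = subprocess.run(
        [sys.executable, "-c", indexing_job],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

result = run_indexing()
print(result)  # the child's 700-megs-equivalent is already gone
```

The parent's main loop only pays for the subprocess's memory during the 30 seconds or so the job runs, which is the 708-megs-to-22 move Michael describes.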
Yeah. And it made, I don't know what that division is, but like 20
or 30 times better. It's incredible. And the last thing, this one I think is going to surprise
people. This is the one that really hits the point home for lazy imports. If you type the words
import Boto3, because you're doing something with S3 or something similar, your working memory goes
up by 25 megs per process, per worker. If you type the words import matplotlib, your working memory goes
up by 17 megs. If you type import pandas, your memory goes up by 44 megs. Those three imports right there,
almost 100 megs of memory usage.
Yeah, it's a lot.
So are they needed?
If you're doing core data science, they are.
But for me, there's like an admin section
where I can go and view some reports.
If I view the reports, I use these libraries.
But if I don't view the reports,
which I really don't look at them hardly ever,
just like maybe once a month.
Like, hey, I wonder what that looks like.
Let me go hit it.
It pulls all that stuff in,
and then it generates the report.
But the worker process recycles a couple times a day.
Yeah.
So even if I view the report,
five hours later, that stuff's unloaded again and I get a new version, and it's not another month
until I load up that 100 megs. So I went from like 500 megs to 450 just by saying, well, instead of
importing at the top of the file, let's import in the function that generates the actual picture,
you know, the report that I need from this. And boom, 100 megs less memory usage. And if that had the
word lazy in front of it, I wouldn't have to rewrite my code. It would have effectively the same behavior.
It wouldn't import until I actually run the function,
but I could do PEP8 magic and put it at the top.
What do you think of that?
That's pretty cool.
So now I'm thinking that lazy imports,
it imports when it's needed,
but it doesn't ever unimport.
And I'm wondering if like a future Python will add like,
you know,
something to the lazy import that, like, caches stuff out of memory.
Right, right, right.
Like the runtime could see nobody is caring about it being imported anymore
and no one has a value on it.
so maybe it could just go away 100%.
But that means, I mean, people I think often think of lazy import as something that's a speed,
a speed up, like, oh, it's faster because you don't have to do all the imports until you use them.
I'm sure that's true.
I don't have numbers around it.
But what's really interesting is there are some imports that are mega in the amount,
how much they actually increase your working memory.
If they're lazy and you don't use them very often, they will not run very often.
And I think it will actually make a pretty big difference.
So, you know, like, you just write code, like, just do the import inside the function instead of at the top.
Like, your linters will go, you shouldn't do this.
Like, you leave me alone.
I'm doing this.
This is really good for me.
So, and the final thing, I just moved a bunch of caches to disk caches instead of memory caches, which is good.
And so I put a little picture in there, but it saved a ton of memory across 3.2 gigs less memory used on the server by applying that to Python bytes, Talk Python training.
There's a bunch of other apps, and, oh, the search thingy.
But yeah, those were the four big ones.
Cool.
Pretty cool, huh?
Yeah.
Yeah, it's very cool.
So, a little bit long there, but I thought that people might appreciate some kind of a roadmap and how you can do that yourself.
So the total savings: it went from 1,908 megabytes to 472 megabytes used across the apps.
That is a lot of difference.
And you can run more apps,
you can scale out more and get way better performance because now you can, like, run more workers
if you really needed to or whatever.
I think there's a lot of benefits there.
Yeah.
Cool.
A lot of excitement out in the audience they're talking about this topic.
I think it's cool.
All right.
Well, one of the things that I get excited about is testing.
You don't say.
Pick that up about me a little bit.
So I want to talk about tryke right now, like as in a tricycle.
So this is a new,
a new project, and how new? So it's got four stars, but it, I mean, it just went up like last
month or something, very recent. So taking a look at this, this was submitted by the person
that created it, Justin Chapman, but I'm, you know, I like the idea of, like, thinking outside
the box for testing. So tryke is a Rust-based Python test runner with a Jest-style
API, which is, so I'm not, I'm not familiar with Jest, but is that JavaScript,
Jest? I don't remember. Anyway, maybe. I think so. So, yeah, you can tell how Python-focused I am most
of the time. But so what is it going to look like? It's, let's, let's zoom in a little bit,
uh, getting started. So it looks way different than pytest. So we've got,
let's say we've got a normal function that's add, for example, and we want to test that. We'd say, like,
with describe and add, so with describe and then some comment, this is like your test name, I guess,
and then decorators of test, and then another, I think that's just a description. At first I
thought it was a string that was being parsed to make it run, versus one plus one, but I think it's just
the message that comes out. Yeah, it's just the test case. And so this is a very, very
basic, we're going to get more from Justin to describe this. I've like reached out to him and said,
this is really interesting. I'd like to know more. So I'm going to do more research. I haven't
really played with this yet, but, but I'm intrigued by it. I kind of like pytest.
Pytest uses just assert. And for things like soft asserts, I guess,
I use my pytest plugin called Check, pytest-check. But this is different. By default, all of
these are soft asserts.
So it doesn't stop the test.
You can expect a lot of things.
It uses the expect keyword.
So what do we got here?
We've got a watch mode so that watches to see if you have new things to test.
Native Async support, fast test discovery in source testing.
So you can, the ability to just put tests right in the source code instead of having to
have a separate test.
You can do that with Pytest, of course, but mostly people don't.
Doctest support, kind of like pytest; client-server mode, which is, this is an interesting one.
So the client server, they're all interesting.
But this idea that you can have a server running, so why would you do that?
So one of the things, like I was just talking about for memory wise, if you run Pytest, it has to import everything, imports a lot of stuff.
And then you're running tests.
The server is just doing that so that it's, while you're doing watching, it's done that.
It's got a warm cache of everything so that the individual tests can go faster.
client-server mode, you know, pretty per-assertion diagnostics.
Of course, we would expect no less from a new test framework.
So basically parse all, do the test discovery once, and then just rerun it. Unless,
is that why that is, then?
I think so. For instance, it's doing a changed mode.
Oh, like, I think that's for picking up new tests.
So if you're doing test-driven development and you're modifying a test,
modifying code, it'll pull the stuff that changed into the server.
But it can probably skip discovery, or limit discovery to, like, the one changed file or something
like that. Yeah, that's interesting. And I think it's using Git information
to find out which elements have been modified. Yeah, why not? Most people are using
Git anyway, so I think that's how it's doing it. Anyway, oh, interesting. I commented that I,
Yeah, I just sent him information about pytest-check, and he added that to the...
Oh, yeah.
Soft assertions.
Like pytest-check.
Very cool.
And I'll throw out there that I also...
I'm plus one for them on the fluent API instead of the raw assert.
I really like the expect-this, dot, to-equal-that style, and you kind of put it together as an English-like sentence.
Where, you know, you can say to-be-in-list, you give it an item and a
list or something, rather than just writing the raw expression. I
don't know, I like that fluent style. Do you? Okay, yes. Because that's not something weird.
It shows up a few times. I've seen it in a couple of
different test frameworks, and there's even
an extension for unittest to be able to do this, and I think I've
seen pytest extensions to do this. It's not something I'm used to, but I could get
used to it, and it reads as English fairly well.
Yeah, that's why I like it. It's more writing, but the truth is, if your editor's not
auto-completing when you type, you're probably doing it wrong, right? You're not writing
all those characters, you're selecting them from a short list. Yeah. And for me, I like the
readability. Rather than looking at an assert statement and going, what is it really trying to get
at with this combination of something that resolves to a boolean? You can say, like, you know,
expect this value to be in the list, or to not be in it, something like that, right?
Yeah, I like the readability of it. I'd also just note that this is made with Zensical,
which would tie into previous conversations. At least the website is. Anyway, I do
think there's some interesting work here. I like the async support. I definitely
want to play with this, because I think it's an interesting idea.
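As an illustration of that fluent style, a minimal expect helper in Python might look like the sketch below. These names are made up for illustration; tryke's real API may differ.

```python
# Hypothetical fluent-style expectation helper, loosely modeled on
# Jest's expect(...).toBe(...) API. Method names are illustrative,
# not tryke's actual interface.
class Expectation:
    def __init__(self, actual):
        self.actual = actual

    def to_equal(self, expected):
        # reads as "expect <actual> to equal <expected>"
        assert self.actual == expected, f"{self.actual!r} != {expected!r}"
        return self  # returning self allows chaining

    def to_be_in(self, container):
        # reads as "expect <actual> to be in <container>"
        assert self.actual in container, f"{self.actual!r} not in {container!r}"
        return self

def expect(actual):
    return Expectation(actual)

expect(2 + 2).to_equal(4)
expect("py").to_be_in("pytest")
```

The win is readability and editor auto-completion: typing the dot surfaces the available expectations as a short list.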
So anyway, cool, cool, well done.
It's very new.
We'll have to see where it goes.
And I sent him some questions.
I'm sorry, Justin, I haven't read your reply yet.
He already responded to me with some questions.
I sent some questions yesterday.
Kind of like I was curious about startup.
Apparently there is setup and teardown, sort of fixture-like.
There is a fixture feature.
I just haven't figured it out yet.
So it's somewhere in the documentation.
So anyway, exciting things, and I'll keep an eye on this space.
Indeed.
Let me throw out a meta topic before we get to our extras, Brian.
Sort of inspired by this, but more broad.
I know people are hesitant to adopt new frameworks,
like a new testing framework or a new web framework
or a new database thing or something.
But with the agentic AI stuff that we have these days,
if you pick one and you're like, oh, it turns out it's no longer updated
or I don't like it anymore or whatever, you know,
it's so much easier to just go convert it back into one of these, or on to the next thing I want it to be, instead of having
some huge, oh no, now we've got to take two weeks and we're all rewriting the tests. Like, you
could probably convert to it from pytest pretty quickly. Yeah, you probably could. And also, the
questionable thing, the scary thing, is: this is pretty new. Is it going to stick around?
That's the big one. A lot of people have excitement around something new.
I was a little bit too, so I took a look. This is under the J. Chap handle, and he's a current
CTO, so he's probably using this at work, on his day job.
So this is a good thing.
He's contributed to ty and uv, which is kind of cool.
So that's cool.
There are some hints that maybe this will stick around, especially if he's using
it on a regular basis.
He's probably using it and supporting it himself.
So a little bit more confidence.
Yeah.
So I do check these sort of things out.
I see a lot of new projects that are probably assisted by AI to get created.
It might just be a fun toy for somebody.
And that's not something I really want to cover on this podcast.
But something that looks like maybe they're serious about it.
Yeah, we'll cover it.
It doesn't matter.
Cool.
Yeah, it looks very neat.
All right.
I've got a handful of extras.
Do you hit your extras next or?
Go ahead.
Go ahead.
Okay.
Let's see.
I saw this come up in a couple of newsletters, and I was intrigued by an article called
Why Aren't We uv Yet? It did an analysis of, I'm not sure how
they got them, but some top Python projects. You know,
uv is popular.
Why isn't it being used more?
The interesting thing, and I will link to it,
of course, but I think one of the reasons this gap looks so big is that a lot of people
with requirements.txt are still using uv. I have a lot of projects at work that
are requirements.txt-based, and everybody I know, like, our instructions are to use uv.
Just because we're not publishing a uv.lock file doesn't mean we're not using
uv. Every single one of my projects,
well, that's not true, almost every single one of my projects has a
requirements.txt and no uv.lock, and it's all uv. So I don't know if the
assumptions of this study are correct, is all I'm saying. Yeah, that's a really good point. I very
much prefer requirements.txt with pinned stuff over the uv.lock. I don't know why, I just do.
Well, I do know why. Right now I'm leaving it open for the developer to
choose if they want to use uv or not.
Yeah.
We probably get to the point where we're not making that a choice.
But anyway.
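For what it's worth, here's a sketch of one way to use uv day-to-day while still publishing only a requirements.txt. These are real uv subcommands, but the filenames and workflow are just an example, not what Brian's team necessarily does:

```shell
uv venv                                              # create a virtual environment
uv pip compile requirements.in -o requirements.txt   # pin top-level deps into requirements.txt
uv pip install -r requirements.txt                   # install from the pinned file
```

No uv.lock file ever gets published, so a survey counting only uv.lock files would miss this project entirely.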
I find that requirements.txt diffs a little nicer.
It's easier to read, especially Git diffs. You look and go, oh, yeah, okay,
this is what changed.
Whereas the uv.lock has got so much, especially with the hashes and so on, that
there's so much noise in the uv.lock versus the
requirements.txt.
Yeah, but mine are pretty noisy too, because I'm using, like, a top-level requirements file
and using uv to publish a more detailed requirements.txt.
Yeah, I do the same thing.
But I don't include the hashes.
Maybe I should.
I don't know.
According to what I heard when I was listening to Python Bytes
recently, we're supposed to, so the hashes are there.
OK.
What else?
The PyCon US talk schedule is up.
If you're going, you can check it out.
One of the things I noticed, and I knew this as a submitter,
is that there's an entire AI track.
And I don't know how I feel about that, but whatever.
I think I'm excited about it, honestly.
Oh, yeah?
Okay.
Yeah, I think so.
I won't be there, but for everybody that goes, you're going to be there, I think.
You will be there in spirit, and I will hand out Python Bytes stickers, and they will be
carrying some of you with them.
Cool.
Let's see.
Oh, I wasn't going to cover this, but now I already have it up.
Justin Jackson has a blog post: What Has Technology Done to Us?
I was just reading about that recently.
Oh, that's it for my, oh, I have one other extra.
Here it is.
The Lean TDD book, I hinted at that earlier.
A couple of days ago, I put out version 0.6.1.
I'm still on track.
This is really close to what I want to read.
I'm taking a business trip, and when I get back from it, I'm going to start recording the audiobook for this.
But I've got new, new cover art with the little rocket.
I like rockets.
And I'm pretty excited about the state of this right now.
I'm happy with the flow.
The first iteration of it, I didn't enjoy reading.
And why would I want somebody to buy a book that I don't enjoy reading?
But now I'm like reading it all the time.
I don't want to read other people's books anymore.
I'm liking my own.
Anyway, enough about me.
That is my extras so far.
Awesome, awesome.
Congrats on making progress on the book.
So I was going to cover that Python 3.14.4.
Python 3.14.4 is out.
But you kind of already talked about that before.
Well, we just zoomed right by 3.14.4 though, so I'm glad we came back to it.
Yeah.
There's some nice stuff out here, and I'm not going to go into detail.
It's just an extra and all those kind of things.
But there are CVE fixes, such and such.
There are two security vulnerabilities addressed.
Actually three.
Sorry, I missed one.
So there are at least three security CVEs fixed in just 3.14.4.
And there are a couple of other security issues as well that don't seem to have CVEs.
So that alone
is probably worthwhile.
So I will instead tie back to your Why Aren't We uv Yet?
And just point out that if you type uv python upgrade, it will now do an in-place upgrade of 3.14.3 or .2 or .1 to
3.14.4. And so your virtual environments and all that stuff should just pick that up, I think.
If not, at a minimum, you can now recreate the virtual environment with it.
Yeah, and not only that: any of the versions
of Python you've got installed, if you say uv python upgrade, it upgrades all of them.
Yeah, it's excellent. And in true uv style, it does it in parallel, because it should.
Yeah, actually, after you mentioned that, I just went over and clicked and did it, and it's done
already, so.
Yeah, that's awesome.
All right, one more release.
I just want to give a shout-out here.
I've been kind of like bagging on beanie a little bit.
Although I'm a huge fan of the...
Bagging on beanie.
Beanie bags?
Just saying, like, remember, I've been talking about my raw DC pattern.
Like, there's two reasons I was moving to it.
One, because I think AIs are much, much better at understanding raw query syntax than
ORMs or ODMs wrapped around classes, which then
somehow sometimes turn into raw queries, that kind of thing.
Yeah.
So I've been talking a lot about that and so on.
But also, some of the libraries I was using were no longer updated.
And I'm like, ah, this is such a hassle.
Like, if you looked at the releases for Beanie and waited for one to come out,
you'd see that, like, seven months ago it was, oh yeah, we fixed a couple of things.
And then a while ago, they made some changes that introduced some weird breaking changes,
but then kind of fixed the breaking changes later.
You know, it was not really getting a lot of love.
So there's actually a major release of Beanie that has, like, a ton
of fixes, a ton of contributors, a ton of stuff.
So if you're using Beanie, 2.1.0 is out, and you should definitely check that out.
Nice.
Yeah.
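The raw-plus-dataclass pattern Michael mentions could be sketched like this: run the raw database query yourself, then hydrate a plain dataclass, with no ORM/ODM layer in between. The document shape and field names here are invented for illustration, not from Michael's actual code.

```python
# Toy sketch of the "raw + dataclass" (Raw+DC) idea: the query result is
# a plain dict straight from the driver, and a small function maps it
# into a dataclass. Field names are made up for this example.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

def to_user(doc: dict) -> User:
    # keep only the fields the dataclass cares about; ignore _id etc.
    return User(name=doc["name"], email=doc["email"])

# pretend this dict came back from something like collection.find_one({...})
raw_doc = {"_id": "abc123", "name": "Ada", "email": "ada@example.com"}
user = to_user(raw_doc)
print(user.name, user.email)
```

One claimed benefit: the query itself stays in the database's native syntax, which coding agents tend to handle better than an ODM's wrapper classes.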
I feel like we got a joke maybe.
What do you think?
You want to take this one?
Yeah, I'll take this one.
But on the topic of updates, that's one of the things with agents that I didn't
realize I was going to enjoy. I think I want to write this up for next week,
or the next time we record.
But maintaining an open source project is easier now.
When you can offload some work to an agent, I'm actually a better maintainer now than I was before.
Me too.
There have been some projects where I'm like, gosh, that's kind of tricky.
I don't know if it really justifies the effort.
Some people are asking for features.
And I'm like, you know, we really should support that new feature.
Hey, Claude, how hard would it be?
And it'll sketch it out.
I'm like, okay, yeah, this is totally doable.
Yeah, I was doing some gardening this weekend.
and having an agent work for me while I was doing my own thing.
So anyway, let's, let's have something funny.
And this is, I was going to bring this up.
This is, I think, an April Fool's joke
from MotherDuck, the DuckDB company.
Right, MotherDuck. They're the company behind DuckDB,
and this is like their commercial offering,
so you can run it hosted, basically.
Okay, well, they put out HumanDB instead of DuckDB.
And they did, like, human feet instead of duck feet.
And it says: blazingly slow, emotionally consistent.
The world's first human-powered analytical database.
Why pay for compute when Dave is right there?
And this is just pretty darn fun. You can actually
pip install HumanDB and import it and do queries.
And it just plays along for you,
and it plays audio.
Dave is squinting at this.
And it's like,
yeah.
Can you actually install it?
Yeah, I did.
It did install and ran it.
And it's pretty funny.
Dave is,
yeah, contacting Dave for this query.
The website is very complete about how this all works.
It's got in-brain storage,
Post-it indexing.
Suboptimal but colorful.
Each index is handwritten and stuck to the monitor
bezel. Yeah. OLAH processing: online analytical humans.
Eventually consistent. Dave will get back to you.
SLA is one business day, or three if it's quarter end, or five if Dave's on PTO.
We'll circle back. SQL or natural language. Yeah.
Dave learned SQL first, then English. He understands both. Just ask him anything.
So yeah, I did pip install this and played with it, and it's pretty funny to watch. Oh, there are examples, so: select
average salary from employees where whatever. It just runs it: borrowing Gary's ledger pad for the query.
And then you have to wait for it. Oh, the average engineering salary is
somewhere around $87,000, give or take. I ran those numbers using Gary's ledger pad, but
he wants it back, so, you know, rounding.
So I just had a lot of fun with it. I probably spent 20 minutes playing with HumanDB
a couple of weeks ago. So yeah, that's really funny. Benchmarks! Oh, it's got benchmarks. So DuckDB is
0.003 seconds; HumanDB is 2 to 4 business hours. Nice. DuckDB is pennies to run, but HumanDB is
$49 a month plus snacks.
Nice.
The vibes: clinical versus immaculate.
Gut feeling.
It's got built-in gut feeling.
That's great.
It remembers your birthday.
Dave is thoughtful.
And DuckDB will not remember your birthday,
but it probably will if you put it in the database.
That's funny.
Oh, wow.
They've got pricing.
Maybe they have an enterprise here.
Enterprise, let's talk.
Unlimited human analysts, on-call overnight human.
We'll figure out the SLA,
dedicated Slack workspace,
quarterly team pizza party.
Uh, this is, this is funny.
Dave gets equity.
Dave gets equity.
And the $0 one, the free tier:
emotional support not guaranteed.
So that's funny.
I wonder, can you just buy it?
Upgrade to pro?
Uh, no, because it is a joke, but it would be funny if they carried it farther and
actually took a credit card or something.
So, yeah, it's pretty funny.
I love it.
Anyway.
Hey, I have one really, really quick thought to close things out.
Okay.
A little bit practical.
For those folks out there that are interested in
this Why Aren't We uv Yet study,
you could look at the
requirements.txt file,
and if it was generated with a pip-compile
sort of thing, a uv pip compile,
it would say the command
is uv pip compile.
It'll say generated with command,
and that command will start with uv.
So you could parse the requirements.txt
files and get greater visibility,
but only for the ones that pip-compile,
not plain installs with just unpinned versions.
Well, also, if that is how they're
generating the requirements.txt
with uv,
it would raise the numbers.
I don't know by how much.
Anyway.
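Michael's parsing idea could be sketched like this. The exact header text uv writes is an assumption here, based on typical pip-compile-style output, so check a real generated file before relying on it:

```python
# Sketch: detect whether a requirements.txt was generated by uv by
# scanning the header comments for the generating command. The header
# format ("autogenerated ... via the following command" followed by an
# indented command line) is an assumption, not a verified spec.
def generated_with_uv(requirements_text: str) -> bool:
    for line in requirements_text.splitlines():
        if not line.startswith("#"):
            break  # header comments end at the first real requirement
        # the generating command appears in a comment line
        if line.lstrip("# \t").startswith("uv "):
            return True
    return False

sample = """# This file was autogenerated by uv via the following command:
#    uv pip compile requirements.in -o requirements.txt
requests==2.32.0
"""
print(generated_with_uv(sample))  # → True
```

As noted in the conversation, this only catches projects that publish a compiled requirements.txt; hand-written or unpinned files carry no such header.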
Oh, also, so many
projects are libraries
that don't have either.
So like, yeah, true.
That's true.
Libraries just have their dependencies
in pyproject.toml.
So there's no uv.lock
or requirements.txt, you know.
The world's complicated, Brian.
Do you know what's not complicated?
We are at the end of the episode.
So thanks, everybody.
I'm going to press the goodbye everybody button.
Goodbye, everybody.
Bye.
