Python Bytes - #465 Stack Overflow is Cooked
Episode Date: January 12, 2026

Topics covered in this episode: port-killer, How we made Python's packaging library 3x faster, CodSpeed, Extras, Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: port-killer

A powerful cross-platform port management tool for developers. Monitor ports, manage Kubernetes port forwards, integrate Cloudflare Tunnels, and kill processes with one click.

Features:
🔍 Auto-discovers all listening TCP ports
⚡ One-click process termination (graceful + force kill)
🔄 Auto-refresh with configurable interval
🔎 Search and filter by port number or process name
⭐ Favorites for quick access to important ports
👁️ Watched ports with notifications
📂 Smart categorization (Web Server, Database, Development, System)

Brian #2: How we made Python's packaging library 3x faster

By Henry Schreiner. Some very cool graphs demonstrating benchmark data, then details about the various speedups, each 2-37% faster, with the total adding up to about a 3x speedup, or shaving off 2/3 of the time. These also include nice write-ups about why the speedups were chosen. If you are trying to speed up part of your system, this is a good article to check out.

Michael #3: AI’s Impact on dev companies

On Tailwind CSS (via Simon Willison): Tailwind is growing faster than ever and is bigger than it has ever been, yet its revenue is down close to 80%. "75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business." "We had 6 months left." Listen to the founder: "A Morning Walk". Super insightful video: Tailwind is in DEEP trouble.

On Stack Overflow: See video. SO was founded around 2009; its first month had 3,749 questions. This December, SO had 3,862 questions asked. For most of its life it had 200,000 questions per month. That is a 53x drop!

Brian #4: CodSpeed

"CodSpeed integrates into dev and CI workflows to measure performance, detect regressions, and enable actionable optimizations." Noticed it while looking through the GitHub workflows for FastAPI. Free for small teams and open-source projects. Easy to integrate with Python by marking tests with @pytest.mark.benchmark. They've released a GitHub Action to incorporate benchmarking in CI workflows.

Extras

Brian: Part 2 of Lean TDD released this morning, "Lean TDD Practices", which has 9 mini chapters.

Michael: Our Docker build just broke because of the supply chain techniques from last week (that's a good thing!). Not a real issue, but it really did catch an open CVE. Long passwords are bad now? ;)

Joke: Check out my app!
Transcript
Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.
This is episode 465, recorded January 12th, 2026, and I'm Brian Okken.
And I am Michael Kennedy.
And this episode is sponsored by us and you.
So thanks to everybody that does support us by being Patreon supporters and also through grabbing the pytest course or the Lean TDD book,
or so many of the massively cool courses at Talk Python Training
and also Michael's fun book on deployment and stuff like that.
If you'd like to, we're recording this for the podcast,
but also broadcasting it live.
If you'd like to join the live, you can join us at YouTube,
but you can just go to Pythonbytes.fm.
And there's a link on how to get that and what the timing is and all that.
Also, you don't need to take notes.
You can just sign up for our newsletters.
So go to pythonbytes.fm, grab the newsletter.
And we don't do very many spammy things.
We just let you know what we talked about.
And with all the links and a little bit of extra information.
Yeah.
And we love growing that list.
It grows by a little bit every week.
And it's so fun to see.
So, and with that, Michael, what do you got for us at first?
Let's talk about Port Kill.
The port killer is an interesting Mac app.
And if you're not a Mac, I apologize.
Oh, no, it's a Windows as well.
It's a Windows as well.
So if you're a Mac or Windows, which is a good chunk of us,
here is a cool application.
So let me just lay out the problem for you, Brian.
Either you've got 20 terminal tabs open
or in one of your four IDE windows,
you've got your API running,
or you've got your web app running,
or something like that.
that and you go try to run it again, not knowing that fact. And it says, sorry, that port is already
in use. You're like, oh man, where is it now? And that's on you to know where you ran it. You know,
I mean, there's a button to say run in your IDE versus running the terminal. And so maybe just
press the button. You forgot, oh, yeah, that thing was like one of the running things. But since
2025, there's another reason this is a pain is some of these agentic AI things. They're like,
oh, let me start the server for you so I can test it. And it'll, like, temporarily fire up the web server, make a request, and then lose track of it and not kill it.
And it's not in your terminal.
It's not in your IDE.
It's just there somewhere in the background hidden.
You can't get to it to shut it down.
You go to Task Manager or Activity Monitor and you see Python is like four things.
Like, hmm, well, three of those are supposed to be running.
Which one do I kill, you know?
So it's just a real hassle.
And so this port killer solves that problem, not just for Python, but for applications in general, right?
So it's a powerful cross-platform port management tool for developers.
It monitors ports, manages Kubernetes port forwards, and even integrates Cloudflare Tunnels.
So it's really, really slick, actually.
So what you do is it's got a little notification section menu bar thing that drops down.
And you can have multiple Cloudflare tunnels. We've talked about ngrok, and I've talked about
rathole, like Cloudflare tunnels or a thing like that.
Like put this on the internet.
other people can get to it.
So maybe I can help debug or show off something, right?
Like if you're trying to debug a webhook, the only way to do that is online, right?
Because some other server is trying to get to your dev machine.
So these tunnels are super helpful.
We talked about that.
So that will actually manage these things.
Like for example, you can go to say your Flask app.
And I'm pretty sure you can, I don't see the UI do exactly that, but I think you can go to it
and say, oh, expose this Flask application as a Cloudflare tunnel.
Boom, it's off to the races.
That's pretty cool.
Yeah.
Yeah.
But it also shows you all the local ports that are open and you can go kill them.
So what you can do is you can actually go to either search for a process or a port.
You're like, it says that port, you know, 5000 or 8000.
Let's go with 8000.
It's a little safer.
8000 is taken.
You can't run your app.
Why?
Just go in there and type 8000, find the thing, and press kill, because you're like, stupid Claude
Code left that thing running.
And I can't find it to kill it.
So I'll do it here.
You know what I mean?
Or you can just see what ports are open,
what's running, all sorts of stuff.
And I think it's super neat.
You install it on macOS via Homebrew,
or you can just download a DMG.
So that's pretty cool.
And yeah, I mean, that's pretty much what I got to say about it.
But it is nice.
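As an aside, the core check a tool like this performs, figuring out whether something is already listening on a given port, can be sketched in a few lines of stdlib Python. This is just an illustration of the idea, not port-killer's actual implementation, and the `port_in_use` helper name is made up:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when the connection succeeds,
        # i.e. something is already listening there
        return s.connect_ex((host, port)) == 0

# Before starting a dev server, check whether the port is free:
if port_in_use(8000):
    print("Port 8000 is taken -- go hunt down the rogue process")
```

A real tool like port-killer goes further: it maps the port back to the owning process (the kind of thing `lsof` or psutil provides) so it can actually kill it, which the stdlib alone doesn't do portably.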
That's cool.
Yeah.
Yeah.
I don't, this is obviously, I'm not even sure why I'm commenting on this.
But I at most have like one or two ports open when I'm working on
a web app because I'm not a heavy-duty web developer, but yeah, it looks neat.
Yeah, well, the thing is you might not have very many, but you can only have one app on that
one. And if it gets, if it gets lost, especially if the little AI thing goes off and does something
and then leaves it just dangling, you're like, oh, man. Yeah, you got a rogue agent. You got a hunt down.
Yeah, exactly. So it's, even if you're just doing one, it's like, it's still valuable.
Yeah. Yeah. That's pretty cool.
Well, what am I talking about? A fine question. I'm going to talk about
making things faster and profiling.
So Henry Schreiner sent this to us,
and he's a friend of the show,
and we're fans of Henry Schreiner's work.
So he's a core developer.
The article is how we made Python's packaging library three times faster.
And this is cool because packaging is one of the,
is one of the heaviest hit packages on PyPI,
even though a lot of people might not pip install it themselves, it's used by a lot of other stuff.
And I, let's see, I often reach for packaging if I'm trying to compare versions.
Like at runtime, I'll say, you know, if I'm testing with something at 3.15 versus a different one or something like that,
I'll often use packaging for grabbing versions.
Anyway, there's a bunch of fun stuff in the packaging package.
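For anyone who hasn't used it, the version-comparison trick Brian describes looks roughly like this, assuming the third-party packaging library is installed (the exact version strings are just for illustration):

```python
from packaging.version import Version

# Version compares numerically per PEP 440, unlike plain strings
assert Version("3.15") > Version("3.9")   # correct: 15 > 9
assert "3.15" < "3.9"                     # string comparison gets this backwards

# trailing zeros are insignificant for equality
assert Version("1.0") == Version("1.0.0")
```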
But so how did they make it three times faster?
The article talks about like graphing and plotting
and how he's figuring this out for a while.
And I'm going to scroll past all of that and talk about the actual updates.
And one of the things I love about this is sometimes, I mean,
pre-you know, we often hear about,
premature optimization is bad.
But there are times where you see stuff that's taking a while
and you're like, I really wish this was faster.
And so there is a time for optimization.
And often it goes along something like this.
It's you put some profiling together,
you try to measure the different parts of it,
and then measure it and try to reproduce those,
make sure they're consistent times so that they're not random.
But then you just do like 10% at a time.
So this is an interesting one of 10% speed up
of just stripping zeros out of stuff.
And they had a reversed list with itertools dropwhile as a way to strip the zeros.
And they just, they just made it a little different.
And it's actually more readable, if less elegant,
which is nice.
So instead of one line, there's five lines of code with a for loop.
But it sped up the whole thing by 10%.
And that's awesome.
And then, okay, so that's a little thing, 10% speed up.
And then the faster regex, 10 to 17% faster on 3.11 and above only.
And that's another thing is sometimes newer versions of Python have more tools at your disposal.
So you might have to have two implementations, but why not make it faster for the newest?
So that's pretty cool.
Removing, I don't actually get this, oh yeah, removing single
dispatch, 7% faster.
Remove duplicate version creation.
37% faster.
Awesome.
Removing namedtuples.
20% faster.
Just all these different little tiny speedups and they're like 10 to 20%.
And in the end, it's a lot faster.
So I really like writeups on people just taking a little bit befores and afters.
And that it doesn't mean to say if you've got something that looks like the before code,
that you should speed it up, you should measure, and make sure that it's readable.
I mean, I think the first goal is readability, next measure to see if it's too slow or something.
Yeah.
So write it in the form of readable, understandable, and then figure out how to make it fast.
Yeah.
If needed.
Yeah, but there's, there are tools available if you are ready to try to make things faster.
Indeed.
All right.
All right.
I have a, I don't know how to phrase this.
I think it's not good.
Let's call this a not a good story, but it's quite the story.
So this is AI's impact on dev companies, open source.
We start with a quote from Simon Willison's sort of link blog, which itself is a quote.
Okay, so Adam Wathan is the CEO of Tailwind Labs, which is the maker of Tailwind CSS.
And the story about what's happening right now with Tailwind CSS is bonkers.
Like you would have never predicted it.
But once you hear the story, you're like, ah, yes, okay.
So from Adam, he actually did this 30-minute morning walk where he sort of walks and talks,
and it's kind of like a journal, but public or something like that, I guess.
And he says, we had six months left.
It says, I just laid off some of the most talented people I've ever worked with,
and it sucks.
The direct written version is, this all actually came from this thing called llms.txt,
which actually is pretty interesting, like robots.txt, but for LLMs. I'm thinking about putting
this on Python bytes and stuff to help LLMs be more efficient working with what we produce.
So if somebody wants to ask a question, like, hey, what did Python bytes say about this?
Like what did Michael and Brian cover here and what was their advice?
Right?
If we put that as a, hey, LLM, you could technically scan the entire system and try and understand
that.
Or you could use our search for these keywords and then here's how you access the transcript
and then parts just, you know what I mean?
You could give like a little advice.
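For context, llms.txt is a proposed convention: a small markdown file served at the site root that gives LLMs a curated map of the site instead of making them crawl everything. A hypothetical sketch for a podcast site (the link paths here are made up for illustration) might look like:

```markdown
# Python Bytes

> A weekly Python podcast covering Python news, packages, and developer tools.

## Episodes

- [Episode list](https://pythonbytes.fm/episodes): show notes and links for every episode

## Optional

- [Search](https://pythonbytes.fm/search): keyword search over the show notes
```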
So someone had proposed as a PR to put that onto Tailwind
and said, could you please put that onto the Tailwind documentation
so that we'll be more efficient talking to Tailwind
or asking questions of Tailwind from our AI agents?
And people lost it.
It wasn't good.
And Adam said something to the effect of like this actually is a pretty decent request.
But the reality is that 75% of the people on our engineering
team lost their jobs yesterday because of the brutal impact of AI. And if I add this feature,
it's going to make it worse. All right. So here's the insane aspect. Yeah. If you look at traffic
to the, if you look at downloads, sorry, on NPM, I guess that's traffic of a form. If you look at
the downloads of Tailwind, they're up six times year over year. Wow. Six times more usage.
Don't you think that would make the product more viable? No. Because, because,
if you go to Tailwind's documentation, it's all free, Tailwind's open source, but there's a thing that
says get Tailwind Plus. You see that in the docs. However, even though there's a 6x increase in usage,
there's a 40% drop in traffic to the documentation, and revenue is down five times, or down 80%.
Why? Because instead of going to the docs to get help, you just say, hey, AI, install tailwind and do this.
Or you don't even say that. You say, make it pretty, and, like, Tailwind is the default styling mode
of AI. That's pretty sad, huh?
Yeah, it definitely is.
I don't, yeah, it, it sucks.
I don't know what the solution is.
I don't either.
Avoiding llms.txt is probably not the answer,
but it's also not a solution for sure.
I mean, it might need a different business model.
You know what I mean?
Yeah.
I mean, we've both seen this as well.
Like the, um, I was faced with higher server costs on my blog.
And then I put in, you know, we talked about this earlier.
I put in monitoring to see who it was and it was mostly robots.
And then I cut robots out and we got zero traffic.
So I'm like not zero.
That's not the answer either, though, is it? It's like saying I do not want to appear in Google because I hate the traffic.
I mean, I know that's not exactly the same.
But with Google, they said if you don't let our robots in to scrape your site for our LLM,
then we're not going to scrape your site for search traffic either.
I think that's the thing.
And that's lame.
I think that you should be able to say,
no, I don't want somebody to scrape my site for AI.
But please do keep looking at it for search.
We just like just take a moment and just say that there are really good alternatives
to Google, like Startpage, like Kagi.
Yeah, but a lot of them are all built off of the same data that Google sells.
And Startpage is built off of that.
So it is, in a sense, really, really the same.
Yeah, that's a good point.
Okay, I'm not even done with the story, Brian.
Okay, let's hear the rest of the drama.
Okay, so that's half the story.
There's a really good video by this guy.
I just came across Maximilian Schwartzmuller.
He does a bunch of great developer videos.
It's more of like a...
And an awesome name.
I know.
It is an awesome name.
He's more like a JavaScripty guy, but he does a lot of just like thought pieces on programming
in general.
And he's good.
So he did an 11-minute, 55-second video on this: Tailwind is in deep trouble.
Talks about it.
It actually shows a lot of graphs and data.
So it's pretty interesting.
Then the other is a video from the Primeagen.
And this is the other side of, this is another company in the same theme.
And stack overflow is cooked.
It's, it is, it is insane.
Let me see if I can find a spot here.
Let me see if I can find a spot here with this graph.
So there's an insane graph if you watch that video, but I can just tell you the numbers.
I'll tell it to you, Brian.
So, I don't have my show notes up, so I'm going to do this from memory.
But I believe in the first month of Stack Overflow's life, there were 3,800 questions asked.
Yeah, in 2009, I've got the notes up.
Yeah, okay.
Almost, almost nothing.
I mean, it's just getting going.
It's like a forum.
Like, no, of course there's going to be almost no questions.
It goes up and it stays at 200,000 new questions asked per month, almost through the pandemic.
And it starts going down.
And this is not purely from AI.
It's going down, like, pretty precipitously for two years before ChatGPT comes out.
When it does, in late 2022, it drops even harder.
So here's a, here's an insane stat.
In 2016, there were 200,000 new questions asked at Stack Overflow.
In '26, well, technically last month, in December 2025, there were 3,740-something questions asked,
about the same as the first month they opened.
200,000 to three, or whatever, 3,000-something.
It's insane.
That is insane.
So, I mean, actually, I think the Primeagen guy said to prepare your goodbyes. That's pretty much it.
It's pretty much it.
Now, it does seem to still be going strong, like, selling this data, this pure data in a sense,
to AI companies, but for how long is that going to last, you know?
That's like, you know, we don't have any cows left and we can't get milk, but we got a lot of cheese that's really good in the freezer, so we could sell that for a while until it's gone. Like, without new questions, there's not new information there.
And who's answering? Even if there are new questions, who's answering them, you know?
Yeah, I bet you half of those three or four thousand are, like, AI generated.
Yeah, I honestly, I can't believe this. I knew that they were going down, but 200,000 to a couple thousand is an insane level.
Yeah. And they were actually one of the better ones as far as ads and sponsors and stuff.
I mean, Stack Overflow, we make fun of it and stuff, but it's a part of software, and it wasn't really that spammy.
It just didn't have a bunch of pop-ups and stuff.
So I actually kind of liked Stack Overflow.
I know.
I mean, I don't know.
I don't know exactly how to feel, but it even went so far as we had the Stack Overflow
keyboard that only had three keys.
Yeah.
Right.
It had, instead of the Control or Command, it had the Stack Overflow
logo and then a C and a V. It's a beautiful joke.
Just, you know, that's how important that used to be.
Yeah, you take an error message and just throw it into Google search and you get a Stack
Overflow question answering it. That's how you figure stuff out.
Exactly. There was a period where we read books, and then we copied and pasted from Stack Overflow,
and now, now I guess we just take it from AI. But this, I think this is
noteworthy in a big way in the developer world, right? This is, this is big. Yeah, and I mean,
I know that I, you know, the people behind Stack Overflow, there's a lot of money there. And I think
the top, top of the chain, they're probably fine. But everybody that got laid off,
that's lame. And then, Tailwind has always been a lightweight business model, even though
they're doing some amazing things. And like you said, it's being used by almost
everyone. So yeah, it's crazy. Yeah. Kiva out in the audience points out, like, it's insane how
symmetric this graph is. I agree. It is insane. It's like almost a perfect trapezoid. Not quite,
but almost. Yeah. Well, I'm pulling up stuff from the comments. Henry points out that he's just a
PyPA member and builds many, many Python things, but not quite a core developer. Oh, okay. Setting the record
straight a little bit. Okay. Well, Henry Schreiner is a core asset in my learning journey. I guess
I'll put it that way.
Over to you.
Okay, I wanted to talk about Henry Schreiner's,
no, something similar.
So we talked about Henry Schreiner's article on Python packaging and making it faster.
And it reminded me of something that I wanted to talk about for a while and I just forgot about.
So thanks, Henry, for the reminder.
And it was because I was, and it's around profiling.
So I was looking through not the packaging library.
I was looking through Fast API once looking at their, I like, I kind of do this.
I look, I kind of troll sometimes.
I go through GitHub workflows of different packages to see what sort of tricks they're using in their workflows.
And I remember.
It's like the view source of the modern day.
Yeah.
It's one of the wonderful things about open source packages is you can do this.
So Fast API, plus the folks of Fast API are really great about trying to stay cutting edge.
and helping other people.
So looking through here,
I saw CodSpeed benchmarks.
And I'm like, what is this?
So that's my topic right now is the Cod Speed tool.
And probably maybe it's code speed,
but it's spelled COD.
So I'm going to pronounce it Cod Speed.
It could be measured in like how fast
does a cod swim in the ocean?
And this is like a metric.
Is it a laden cod or an unladen one?
Yeah. Heavily laden.
Bite your kneecaps off. Get back here.
So, I'll talk about the actual tool, but as we're looking through Fast API, there's also
in the merge requests for new features, there's a performance report that is linked, and you can
go and take a look at it. If you can click on that, you go and see, like, you can go and poke around
at that particular merge request. I'm looking at the overview.
because FastAPI on codspeed.io has this graph,
basically, it's not very noisy, of, like, the different times they've measured things
and how the metrics, the performance metrics, have gone up and down.
And it's really kind of cool.
So I was like, can I use this?
Yes.
So I'm going to try this out.
I haven't tried it yet.
But at codspeed.io, and there's a nice rabbit logo, which is nice.
Anyway, it integrates into dev and CI workflows to measure performance, detect regressions.
And if it was just a, like a just a paid tool, I wouldn't probably be covering it.
But it's one of those awesome tools that's actually not that expensive per month if you're actually paying for it for your project, for a commercial product.
But it's free for open source projects.
So I love those sorts of things.
They're given back to the community with free tools.
That's great.
So what I wanted to talk about is we go back to the homepage.
It's pretty cool.
One of the things that talks about is catching performance and not noisy metrics.
So I'm not sure how they're doing this, but they're saying that traditional metrics kind of jump around a lot,
but that Codspeed has got a small variance.
It's possible that they run this on dedicated hardware.
So you're not subjected to other things.
Other noise.
Yeah.
So they're doing flame graphs to pinpoint problems and speedups and slowdowns.
But one of the things I was really impressed with is right there on the front page.
They tell you that you can design performance tests, integrate with your CI workflow, and get reports.
And there's like a Python button.
You can click on it.
It's like, oh, you just do a pytest.
You just mark whatever test with pytest.mark.benchmark,
and those are the ones that are used in your benchmarks.
It's easy.
You're already using them for testing your code,
so you can just grab some of your,
maybe some of your high-level tests that test the whole system
and try to hit and hit, you know,
grab a handful of them that hit most of your system.
Might be good.
You throw them on all of them, maybe, I don't know.
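A marked benchmark test might look like the sketch below. This assumes pytest (and, for real measurements, the pytest-codspeed plugin) is installed; the `fib` function is just a stand-in workload, not anything from FastAPI's suite:

```python
import pytest

def fib(n: int) -> int:
    # deliberately naive recursion: a workload worth benchmarking
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@pytest.mark.benchmark  # CodSpeed's pytest plugin measures tests carrying this mark
def test_fib() -> None:
    assert fib(20) == 6765
```

Without the plugin it still runs as a plain test; with it, the test's runtime gets tracked across commits.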
And then what do you do in CI?
So once you, to try to integrate it in CI,
There's tabs for GitLab and GitHub, but for GitHub Actions, they have their
own action that you can use, and you just say use that, and then run your, run whatever your code is. So
I went back to FastAPI's workflow to see what that was, and their metrics, when they're
running, is just, they're just running pytest, it looks like. They give it a CodSpeed flag. I don't know what that
does, but, you know, maybe it turns on, does some CodSpeed stuff. So, just fun. I thought
it would be good to give a shout out to these folks.
And I'll try it out on one of my projects and maybe report back how it's going.
But you can look directly at how FastAPI is using it.
It's pretty cool.
Yeah, two quick real-time follow-ups.
I'm going to be interviewing Sebastian Ramirez and his team about Fast API Cloud tomorrow on Talk Python.
Nice.
So maybe if I can keep it in my mind, I'll ask them about this as well.
It's pretty interesting.
Yeah.
But if people want to watch that live, that's tomorrow
morning. And then Henry points out that Astral uses CodSpeed, and that was mentioned when people
were asking about proper benchmarking for packaging. That might be where I saw it. So, all right. Well,
those are our topics. Do you have any extras for us? Let's see what I have. I do, indeed,
do have some extras. So remember how I talked about DevOps, Python supply chain security made easy,
and all that.
And I gave some examples using pip-audit,
how to set it up cached build time checks
so that your Docker images won't build
if there's a CVE detected.
Oh, yeah?
Well, boy, oh, boy.
Are there more CVEs in the PyPI space than I realized?
Because just middle of last week,
I couldn't ship Python bytes
because there's a vulnerability in here.
We're not going to let it out.
So here's the cool thing.
It absolutely caught it.
let me release a new one, but it is a hassle. And I've also run into a little bit of a bigger hassle,
not a big, an unforeseen hassle. I said, okay, well, what we're going to do is we're going to
delay updating to the new thing by one week to make sure that that stuff gets fixed. Well,
turns out when they find a problem, they fix it and they release that day. But if I apply that technique,
it doesn't get rolled into my stuff for a week. So got to be a little bit careful to just
sort of pin some stuff for a little bit, or not. Don't update the dependencies until you get that, or, you know, whatever you're going to do about it, right? But yeah, it works, because I couldn't release the website till I, like, manually put different versions in to make it skip those issues. That's good. It's a good thing.
All right, on to the next one. You probably have heard, Brian, that any time you're doing security, you need to have a decent password. You know this as a user, but if you build anything that has user auth, well, you would have some level of complexity. I feel like there's so many web builders out there that just suck so bad, you're like, how do they let you behind a keyboard?
For example, one of my banks, this is a international bank,
limits my password length, not minimum, maximum to like 12 characters.
That's insane.
That is just the stupidest.
I'm like, are you putting that straight in the database?
Because if you're putting that, like,
if that's because that's a database constraint,
you better not be putting my password straight in the database.
You know what I mean?
Really bad.
But in the extreme end.
Does the length, like, affect what the hash looks like?
No, the hash is the same length for one character
and a million characters. That's why I'm suss of it, you know. But there are actually some
nuanced aspects. So if you're using MD5 to hash your passwords, no, don't do it. Please don't
do it. But there's some really nice things you can do. Like you can use more modern ones.
And especially something that you can do that's really powerful is to use what are called
memory hard problems. So cracking passwords is an embarrassingly parallel problem in that like,
I've got a hash and I'm going to try a bunch of different stuff against it.
And if I got a GPU, maybe I run like 4,000 variations in parallel.
And if any of those match, then we call it good, you know, and use all the cores.
That's because GPUs scale compute super well.
But you know what they don't scale well?
Memory.
And so one of the newish algorithms is called Argon2.
Really highly recommended because it is not subject to this brute force attack.
Because instead of using a lot of compute, it uses a lot of memory.
So you can't parallelize that as easily.
You're like, well, each hash takes, I don't know, 15 megs to do an attempt.
So, you know, how many of those can you parallelize before you run out of memory sort of thing, right?
Well, it turns out that this has become a bit of a problem.
So hackers have started using as a distributed denial of service type of thing, very long passwords.
So, like, one megabyte's worth of text of password, because with Argon2 and the memory-hard ones,
the bigger the password is, the bigger the text is, the more memory they use.
So if you just jam like an insane amount of text, all of a sudden, it becomes not just memory
hard for hackers, but even for your server, it's like, okay, this is off the charts, right?
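To make that concrete: Argon2 for Python usually comes via the third-party argon2-cffi package, but the stdlib's hashlib.scrypt is also memory-hard and shows the same idea, including capping password length so an attacker can't weaponize the memory cost. This is a sketch, not a drop-in auth system, and the 1024-byte cap is an arbitrary illustrative choice:

```python
import hashlib
import hmac
import os

MAX_PASSWORD_BYTES = 1024  # cap input size so attackers can't inflate the cost

def hash_password(password: str) -> tuple[bytes, bytes]:
    pw = password.encode()
    if len(pw) > MAX_PASSWORD_BYTES:
        raise ValueError("password too long")
    salt = os.urandom(16)
    # scrypt is memory-hard: n is the cost factor, r the block size
    digest = hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    pw = password.encode()
    if len(pw) > MAX_PASSWORD_BYTES:
        return False  # reject oversized input before paying the memory cost
    candidate = hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

With n=2**14 and r=8, each hash touches roughly 16 MB of memory, which is exactly what makes massively parallel GPU cracking expensive, and exactly why unbounded password lengths become a denial-of-service lever.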
So I just want to put this out there on people's radar. Like Prudis out there says, my bank also restricts password length. Boo, banks, they suck. But maybe they shouldn't allow a million characters; limit it to 100 or 50, something way bigger than you're going to use, but, like, not unbounded, right? Don't just leave it unchecked; it's a super easy check. But this is definitely something to keep on your radar.
Character restrictions are terrible, also the ones that say, like, you can't use some special characters. Like, why would you not be able to use special characters?
Because, exactly. I honestly think, like, a good chunk of this, and maybe this is the bank thing as well, it's like, it's a tech support thing. Somebody who is like 78 doesn't really know what they're doing, can't type in their password. They're like, I called tech support and they're going to help me type it in or something. I don't even know, but I feel like it's, you know, restrict the characters used so they're more legible. I don't know, but why would you allow an L and a one? I don't know. It's all messed up.
Or an O and a zero. Like, you can't use zeros, or you can only use those, or vice versa. Well, the ones that
require you to use a number, too, it just seems weird. It's like these
particular ten characters are special, and of all the possible characters,
we need one of those ten. That's a really good point. It makes me so frustrated. I have 1Password generate
these passwords that are easier to type, if you've got to type them on your phone or something, but
they're like 30 characters: capital words, lowercase words, dashes. And a site says you don't have a number
or a special character. I'm like, do you really think it's going to make a difference whether there's an
exclamation mark in this 30-character random password? Like, no. Anyway, I'm there with you. All right.
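The upper-bound check Michael is describing is a one-line guard in front of the hash call. Here's a minimal sketch using Python's standard-library `scrypt`, which is memory-hard like the password hashes being discussed; the exact bounds (8 and 256) and the scrypt parameters are illustrative assumptions, not anything from the show:

```python
import hashlib
import os

# Hypothetical bounds: far longer than any password manager generates,
# but small enough to cap the memory-hard hashing cost per request.
MIN_LEN, MAX_LEN = 8, 256

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Reject absurd lengths, then hash with scrypt (memory-hard)."""
    if not MIN_LEN <= len(password) <= MAX_LEN:
        raise ValueError(f"password must be {MIN_LEN}-{MAX_LEN} characters")
    salt = os.urandom(16)
    # n=2**14, r=8 uses about 16 MiB of memory per hash call.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest
```

The point is that the length check bounds the CPU and memory an attacker (or a confused client) can make your server burn, while the limit stays way above anything a legitimate user would type.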
Last extra: I sold my computer and bought a new one at this place called Swappa. So I kind of want to
talk about Swappa really quick, because I just think this is a good alternative to buying
computers from Apple directly. It's kind of like eBay, but specifically for computers. So you don't
buy it from the company; they facilitate you buying it from another person, which I think is kind of
interesting. So I had this Mac mini, the super minimal version, that I had bought with some trade-ins, so
effectively I paid like 360 bucks. I sold it for $600 on there, and then turned around and bought a
maxed-out Mac mini Pro version, which has got like 14 cores and lots more memory, a lot
of upgrades, for like $1,700 instead of $3,000. And you apply the $600, and it's like,
hey, that's almost nothing in price. So someone was saying, oh hey, I don't have an
Apple Silicon machine and I need to do builds for it, and what a hassle that is. So even
almost brand-new Mac minis are for sale for like $400 or $500. But if you just want,
"I just need an Apple Silicon thing," you can go to the 2020 ones, and it's like $280 for a,
not new, but mint-ish computer, right? Like a decent M1 if you need to grab one. Anyway, I thought I'd
just throw that out there, because I had a nice experience both buying and selling my computers there.
Now I've got a really maxed-out one, which I used to make my unit tests go twice as fast.
Nice.
That was fun.
Yeah.
Nice segue there.
So as an extra, I've got, well, you all know I've been working on a new book, Lean TDD.
And I'm kind of excited about this update, because I was working on just the next chapter,
and it was getting big, and I was writing a lot, so I decided to break it up.
I split the book into three parts.
There's Foundations, which talks about Lean and TDD.
And I think I want to expand the TDD part.
I've got some questions on test-driven development from people that were new to it.
I just sort of assumed everybody knew about TDD, so I'll probably expand that later.
So I've got Foundations, and then what I released this morning was nine new chapters
in part two, which is Lean TDD Practices.
And instead of dumping it all into one big chapter, I split it all up.
Some of these, to be fair, in PDF form, are a couple of pages,
so they'll be like three or four pages in your EPUB.
But it's a lean book as well, so I think the total right now is up to like
70 pages. After this, I do want to try to wrap this up by the end of this month still, so I've got a
couple of weeks left. The last part is Considerations: I want to cover the role of test
engineers, if you have them on your team, and also talk about test coverage and monitoring.
So it's only a few topics I've got left, and that'll wrap up the first draft.
I've been really excited about the flow and getting this in here.
It is funny, though: things that you think you just know in your head, when you try to write them down in a sequential form,
it's like, wait, how should I phrase this?
I thought I could just talk about this easily, but trying to get it out of your head is, yeah, sometimes tricky.
So that's my thing.
Also, I appreciate, so I had set up a GitHub repo that doesn't have the book,
but it's just a place for stuff, the Lean TDD book repo under okken, for feedback.
And not very many people were using it,
but just recently,
I want to shout out to Timothy Mallee.
He gave me a bunch of feedback,
like looking through for a bunch of typos.
I really appreciate that.
And then some other ideas about maybe expanding some sections.
So I like the feedback.
Awesome.
Congrats on the progress, too.
Thanks.
Yeah, that's my extra.
How about something funny?
Yeah, let's do something funny over here.
So this one is about how amazing agentic coding is,
even if you don't know anything about programming.
You don't know anything about the web, Brian.
You can do it.
Here's a quote, I believe from Reddit:
"Claude Code is blinking insane.
I know literally NOTHING about coding.
ZERO.
And I built a fully functioning web app in minutes.
Check it out at localhost:3000."
Yeah. Nope. No, we're not going to be checking it out at localhost:3000, because you know nothing.
That's funny. I think that captures the end of 2025. Vibe coding is just, like, amazing and terrible.
Yeah. Well, I mean, when do we get AIs that can deploy for you? Are we there yet?
I don't think we're far away. I just had Claude Code helping me set up a special
WebSocket configuration, an HTTP to WebSocket to secure WebSocket conversion on Nginx,
and it was like, here, boom, perfect.
But you've got to know to ask, right?
That's the thing.
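For context, the kind of Nginx setup Michael is describing, terminating TLS and passing the WebSocket upgrade handshake through to a plain-HTTP backend, usually looks roughly like this. All the names, ports, and paths here are placeholder assumptions, not his actual config:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                        # placeholder domain
    ssl_certificate     /etc/ssl/certs/example.pem; # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/example.key;

    location /ws/ {
        proxy_pass http://127.0.0.1:8000;           # plain-HTTP WebSocket backend
        proxy_http_version 1.1;                     # Upgrade requires HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;     # forward the upgrade handshake
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Clients connect with `wss://`, Nginx terminates TLS, and the backend only ever sees unencrypted `ws` traffic, which is the HTTP-to-WebSocket-to-secure-WebSocket conversion in question.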
Yeah.
Yeah.
But these are weird times.
They're very weird times.
They are weird times.
But we're still hanging in there.
And if you're hanging in there too and hanging in with us,
I really appreciate everybody listening to the episode,
listening to Python Bytes, and supporting us through all of the different
means. And yeah, we're going to do this as long as we can. So thanks, Michael. Yeah, thank you,
Brian. See you later. Bye, everyone. Bye.
