Python Bytes - #434 Most of OpenAI’s tech stack runs on Python
Episode Date: June 2, 2025

Topics covered in this episode:
- Making PyPI's test suite 81% faster
- People aren't talking enough about how most of OpenAI's tech stack runs on Python
- PyCon Talks on YouTube
- Optimizing Python Import Performance
- Extras
- Joke

Watch on YouTube

About the show

Sponsored by DigitalOcean: pythonbytes.fm/digitalocean-gen-ai. Use code DO4BYTES and get $200 in free credit.

Connect with the hosts:
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.

Brian #1: Making PyPI's test suite 81% faster
- Alexis Challande
- The PyPI backend is a project called Warehouse. It's tested with pytest, and it's a large project: thousands of tests.
- Steps for the speedup:
  - Parallelize test execution with pytest-xdist: 67% time reduction.
    - --numprocesses=auto allows using all cores.
    - DB isolation: a nice example of how to configure Postgres to give each test worker its own database.
    - They used pytest-sugar to help with visualization, since xdist defaults to quite terse output.
  - Use Python 3.12's sys.monitoring to speed up coverage instrumentation: 53% time reduction.
    - Nice example of using COVERAGE_CORE=sysmon.
  - Optimize test discovery: always use testpaths.
    - Sped up collection time by 66% (collection was 10% of total time). Not a huge savings, but it's one line of config.
  - Eliminate unnecessary imports.
    - Use python -X importtime.
    - Examine dependencies not used in testing. Their example: ddtrace, a tool they use in production, but it also ships a couple of pytest plugins. Those plugins caused ddtrace to get imported. Using -p no:ddtrace turns off the plugin bits.
- Notes from Brian:
  - I often get questions about whether pytest is useful for large projects. Short answer: yes! Longer answer: but you'll probably want to speed it up.
  - I need to extend this article with a general-purpose "speeding up pytest" post or series.
  - -p no:<plugin> can also be used to turn off any plugin, even built-in ones. Examples include nice-to-have, developer-focused pytest plugins that may not be necessary in CI, and CI reporting plugins that aren't needed by devs running tests locally.

Michael #2: People aren't talking enough about how most of OpenAI's tech stack runs on Python
- Original article: Building, launching, and scaling ChatGPT Images
- Tech stack: the technology choices behind the product are surprisingly simple; dare I say, pragmatic!
  - Python: most of the product's code is written in this language.
  - FastAPI: the Python framework used for building APIs quickly, using standard Python type hints. As the name suggests, FastAPI's strength is that it takes less effort to create functional, production-ready APIs to be consumed by other services.
  - C: for parts of the code that need to be highly optimized, the team uses the lower-level C programming language.
  - Temporal: used for asynchronous workflows and operations inside OpenAI. Temporal is a neat workflow solution that makes multi-step workflows reliable even when individual steps crash, without much effort by developers. It's particularly useful for longer-running workflows like image generation at scale.

Michael #3: PyCon Talks on YouTube
- Some talks that jumped out to me:
  - Keynote by Cory Doctorow
  - 503 days working full-time on FOSS: lessons learned
  - Going From Notebooks to Scalable Systems, plus my Talk Python conversation around it (edited episode pending)
  - Unlearning SQL
  - The Most Bizarre Software Bugs in History
  - The PyArrow revolution in Pandas, plus my Talk Python episode about it
  - What they don't tell you about building a JIT compiler for CPython, plus my Talk Python conversation around it (edited episode pending)
  - Design Pressure: The Invisible Hand That Shapes Your Code
  - Marimo: A Notebook that "Compiles" Python for Reproducibility and Reusability, plus my Talk Python episode about it
  - GPU Programming in Pure Python, plus my Talk Python conversation around it (edited episode pending)
  - Scaling the Mountain: A Framework for Tackling Large-Scale Tech Debt

Brian #4: Optimizing Python Import Performance
- Mostly pay attention to techniques 1-3.
- This is related to speeding up a test suite: speeding up necessary imports.
- Finding what's slow:
  - Use python -X importtime <the rest of the command>
  - Ex: python -X importtime -m pytest
- Techniques:
  - Lazy imports: move slow-to-import imports into functions/methods.
  - Avoiding circular imports: hopefully you're doing that already.
  - Optimize __init__.py files: avoid unnecessary imports, heavy computations, and complex logic.
- Notes from Brian:
  - Some questions remain open for me: does module aliasing really help much?
  - This applies to testing in a big way. Test collection imports your test suite, so anything imported at the top level of a file gets imported at test collection time, even if you are only running a subset of tests using filtering like -k or -m or other filter methods.
  - Run -X importtime on test collection.
  - Move slow imports into fixtures, so they get imported when needed, but NOT at collection.
  - See also: the -X option in the standard docs.
  - Consider using import_profile.

Extras

Brian:
- PEPs & Co. PEP is a "backronym": an acronym where the words it stands for are filled in after the acronym is chosen. Barry Warsaw made this one up. There are a lot of "enhancement proposal" and "improvement proposal" acronyms now from other communities.
- pythontest.com has a new theme. More colorful. Neat search feature. Now it's excruciatingly obvious that I haven't blogged regularly in a while; I gotta get on that. Code highlighting might need tweaking for dark mode.

Michael:
- git-bug
- Pyrefly follow up

Joke: There is hope.
Transcript
Hello and welcome to Python Bytes where we deliver Python news and headlines directly to your earbuds.
This is episode 434 recorded June 2nd, 2025.
I'm Michael Kennedy.
And I am Brian Okken.
And I am super happy to say that this episode is brought to you by DigitalOcean.
They've got obviously a bunch of amazing servers, but some really cool gen AI features we want to tell you about as well.
So we're going to be telling you about that later.
The link with a $200 credit is in the show notes.
So a little spoiler there.
If you would like to talk to us on various social things, tell us
what we might want to cover, or give us feedback on what we have,
the links to our Mastodon and Bluesky accounts are at the top of the show notes as well.
You can join us live right here, right now on YouTube, almost always Monday at 10, unless
something stirs up the calendar and breaks that.
But we try to do Monday at 10 a.m. Pacific time.
All the older episodes are there as well.
And finally, if you want an artisanal, hand-crafted special email from Brian with extra information about what's going on in the
show, well sign up to our mailing list. And Brian, some people have been saying
that they had been having trouble receiving emails, like they signed up and
they didn't get them. Yeah. Yeah. Well that's because there's a bunch of jerks
on the internet and they make it hard to have nice things like email that works.
So it's like so many spam filters and other things that I've done some
reworking and I think some people who signed up probably will start getting
email again. But basically their email providers had been saying, hey, you're
sending from an IP address that has previously sent spam, so you're blocked.
Well, we use SendGrid.
SendGrid just round robins us through a whole bunch of different IP addresses.
And if we happen to get one that previously got flagged, well, then you
might get unsubscribed from an email list.
How much fun is that?
So I've done a bunch of coding behind the scenes to try to limit those
effects and just send it again next time.
Cause it will be a different IP address.
Ah, jerks.
So I appreciate the spammers.
Yeah.
Thanks.
And I guess with that we're ready to kick it off.
What you got?
Well, I've been speeding up some test suites.
So I'm interested in this blog post on the Trail of Bits blog, trailofbits.blog.
And I think we've covered some stuff from them before, but anyway.
Yeah, usually they're a security company.
They do really interesting research into a lot of security things.
Oh, really?
Okay.
Well, apparently one of the things they've worked on, or at least that they're writing
about, is... yeah, it says Trail of Bits collaborated with PyPI several years ago to
add features and improve security defaults across the Python ecosystem.
But today we'll look at an equally critical aspect
of holistic software security: test suite performance.
So there was some effort to speed up the test suite.
And this is incredible.
So one of the reasons why I'm covering this
is to speed up test suites,
but also I often get questions about whether pytest is robust enough to test a large system.
And yes, it is.
And Warehouse is a decent size.
Apparently the test suite, as of this writing, has 4,700 tests.
That's quite a few tests. And Warehouse is what powers PyPI.
There's a nice graph on the blog post showing the time spent. So they
went from 163 seconds. So that's what? Two minutes and I don't know, like almost two
and a half, three minutes?
Yeah, almost three minutes. Almost three minutes down to 30 seconds.
So this is a nice speed up.
And even as the test counts were going up,
the time spent was going down.
So how did they do this?
The big chunk of the performance improvement
was switching to PyTest XDist.
So XDist is a plugin by the PyTest team,
by the core team, or at least that's who's maintaining it.
And that is 67% of the reduction.
What it does is it allows you to run it on multiple cores.
So like I think in this one, they had a 32 core machine.
They can run it on almost every.
So it uses multi-processing?
Yeah.
Or threading, right?
Yeah, it is multi-process.
Yeah, I think there's some configuration there that you can fiddle with, but mostly it's multi-processing.
So there is some overhead in doing that; a really fast, small test suite could actually go slower with xdist.
But anyway, this is a larger one. And one of the things I like about this write-up is that it's not a free lunch
with xdist, because it's not always easy to split up a test suite like this one.
So they were talking about parallelizing the test execution.
You can just say --numprocesses=auto, or -n auto, and that just runs it on a bunch of cores.
It doesn't really work if you have things that are a shared resource like a database.
So they also talked about setting up database fixtures such that each test worker gets its own isolated database.
They show some of the code on how to do that.
But this is open source code,
so you can go check out the entire thing if you want.
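As a rough sketch of those two ideas, not Warehouse's actual code: running across all cores is one flag, and pytest-xdist's worker_id fixture makes it easy to give each worker its own throwaway database. The database URL and the sqlalchemy-utils helpers here are assumptions for illustration.

    # Run the whole suite across all CPU cores (requires pytest-xdist):
    #   pytest --numprocesses=auto     (or the short form: pytest -n auto)

    # conftest.py -- hypothetical per-worker database isolation
    import pytest
    from sqlalchemy_utils import create_database, database_exists, drop_database

    @pytest.fixture(scope="session")
    def database_url(worker_id):
        # worker_id comes from pytest-xdist: "gw0", "gw1", ... ("master" when not parallel)
        url = f"postgresql://localhost/test_db_{worker_id}"
        if not database_exists(url):
            create_database(url)
        yield url
        drop_database(url)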
The other thing about xdist is that its reporting is pretty terse.
They increased the readability by using pytest-sugar.
I don't use pytest-sugar a lot,
but it sure is popular.
It gives you little check marks.
One of the things it does is make xdist's output a bit more verbose,
but in a nice way, with green check marks.
It feels good.
It's better than the little dots.
Anyway, so that was a massive improvement with Xdist,
but that's not all.
Python 3.12 added the ability for coverage.py to run faster by using the sys.monitoring
module, and Ned Batchelder implemented that a while ago.
So they turned that on with the COVERAGE_CORE environment variable.
And that sped things up quite a bit as well.
Another 53% time reduction.
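If you want to try that yourself, it's roughly this, assuming coverage.py 7.4 or newer on Python 3.12+:

    # Tell coverage.py to use Python 3.12's sys.monitoring instead of sys.settrace
    export COVERAGE_CORE=sysmon
    coverage run -m pytest
    coverage report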
Then test discovery phase.
This is an easy one.
It didn't reduce the total time that much, but it's just, everybody should do this.
It's one line of config to say: where are my tests?
So that's a good one.
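That one line of config looks something like this in pyproject.toml; the directory name is just an example:

    # pyproject.toml -- limit where pytest looks during collection
    [tool.pytest.ini_options]
    testpaths = ["tests"]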
And then a, a last one is unnecessary import overhead.
Cause it's kind of an interesting thing that I was like, how did they do this?
And through testing, they're using a thing called ddtrace.
And what is ddtrace? It's the
Datadog library.
I don't know what it does really, but I just checked it out.
I was like, how are they doing this? So I looked at the pull request to see how they did it.
And they're using the -p flag, which
allows you to turn plugins on or off in your test suite.
And ddtrace doesn't look like it's a pytest plugin, but it does have one.
So I took a look, DDTrace comes with a couple
of PyTest plugins, so that makes sense
that those are gonna pull in DDTrace
when the plugins get read.
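The mechanism is pytest's -p no:<name> flag. The exact name to disable depends on what the plugin registers as its entry point, so treat the ddtrace name here as illustrative:

    # Disable a specific plugin for this run
    pytest -p no:ddtrace

    # The same trick works for built-in plugins, for example
    pytest -p no:cacheprovider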
So anyway, interesting side tangent right there.
But really interesting read on how to speed up test suites.
And this has reminded me that I really need,
I've got a whole bunch of tricks up my sleeve too,
I need to start a
how-to-speed-up-your-test-suite post, or series of posts. So,
that's super interesting. Good stuff. Give me ideas. Yeah. Anyway,
and actually my next topic,
the one we talk about later in the episode, will be around speeding up
test suites as well. So, okay. Well, it's all about speed this week.
Speed to speed.
All right, this one is super interesting.
And so I came across this, I don't even know how I saw this.
I don't spend that much time on X, not necessarily
because I've got like some vendetta against X,
although you might think that from our reviews,
I think it was on Talk Python,
where somebody absolutely, like, had a moment
because I said, hey, Mastodon is cool.
Like, oh my gosh.
Anyway, no, I don't spend much time on there because I just find that,
like, I feel like the algorithm just hides my stuff and I'm not getting real
conversations or engagement.
So that said, I ran across this thing that is super interesting, from
Pietro Schirano.
And it says people aren't talking enough about how most of OpenAI's,
a.k.a. ChatGPT's, tech stack runs on Python.
And there's this screenshot of a newsletter that talks about it.
Okay.
So the tech stack, this is super interesting.
Python.
Most of the product's code is written in Python.
Frameworks: FastAPI, the Python framework used for building APIs quickly using standard Python type hints, and
Pydantic; it talks about that too. I'll show a tiny example of this in a second.
C: for parts of the code that need to be highly optimized, the team uses the lower-level C programming language.
And then something called Temporal. For asynchronous workflows,
Temporal is a neat workflow solution
that makes multi-step workflows reliable
even when individual steps crash
without much effort by developers.
I actually don't know what Temporal is.
Maybe it's Python, it probably is, it sounds like it.
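To make the FastAPI-plus-type-hints point concrete, here is a tiny made-up endpoint, nothing from OpenAI's actual code; the type hints and the Pydantic model drive request validation and the generated API docs:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ImageRequest(BaseModel):
        prompt: str
        size: str = "1024x1024"

    @app.post("/images")
    async def create_image(req: ImageRequest) -> dict:
        # FastAPI validates the JSON body against ImageRequest automatically
        return {"status": "queued", "prompt": req.prompt, "size": req.size}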
Just to remind me, this is OpenAI's tech stack?
Yes.
Okay.
Did some searching and I came up
with the original article there.
And this comes from a conversation
around building ChatGPT's images, the image generation
stuff, where you say, make me an infographic about what I've
been talking about or whatever, which is incredible these days.
So the article is entitled, Building, Launching,
and Scaling ChatGPT Images.
It's OpenAI's biggest launch yet with a hundred million new users generating
700 million images in the first week.
But how was it built?
Let's talk about Python, right?
So Python, FastAPI, and Temporal.
How cool is that?
For people who are like, well, it's fun to use Python for these toy projects, but
it's an unserious language for unserious people who don't build real things.
Where is it?
100 million new users in a week.
That's pretty epic.
Well done, FastAPI.
Well done, new versions of Python.
All these things.
I've got to know, what is Temporal?
Is this what it is?
Probably. Or maybe it's not even it.
You know, "durable execution."
This is written in Go, so apparently Temporal itself is probably not Python.
Anyway, isn't that interesting?
It's always fun to have a data point. Yeah, I
like that. I think there's a lot that we don't even know, that
people don't talk about that are written in Python and FastAPI now. So it's a
different world. Temporal is nice. DigitalOcean. DigitalOcean is awesome.
Yeah, DigitalOcean powered Python bytes for a very long time. I love DigitalOcean. I highly recommend them. But you got some specific to say, don't you?
I do. This episode of PythonBytes is brought to you by DigitalOcean.
DigitalOcean is a comprehensive cloud infrastructure that's simple to spin up,
even for the most complex workloads, and it's a way better value than most cloud
providers. With DigitalOcean, companies can save up to 30%
off their cloud bill.
DigitalOcean boasts 99.99% uptime SLAs
and industry-leading pricing on bandwidth.
It's built to be the cloud backbone
of businesses small and large.
And with GPU-powered virtual machines,
plus storage, databases, and networking capabilities,
all on one platform,
AI developers can confidently create apps that their users love.
Devs have access to the complete set of infrastructure tools they need for both training and inference so they can build anything they dream up.
DigitalOcean provides full-service cloud infrastructure that's simple to use, reliable no matter the use case,
scalable for any size business,
and affordable at any budget.
VMs start at just $4 a month
and GPUs under $1 per hour.
Easy to spin up infrastructure built to simplify
even the most intense business demands,
that's DigitalOcean.
And if you use code DO4BYTES,
you can get $200 in free credit to get started.
Take a breath.
DigitalOcean is the cloud that's got you covered.
Please use our link when checking out their offer.
You'll find it in the podcast player show notes.
It's a clickable chapter URL as you're hearing the segment and it's at the top of the episode
page at pythonbytes.fm.
Thank you to DigitalOcean
for supporting PythonBytes. Indeed. Thank you very much. All right, let's see what we got next, Brian.
Okay. PyCon. Neither of us made it to PyCon this year, did we? That's too bad. Yeah. But, you know,
c'est la vie. Sometimes that's how it is. And I would venture that most of the people
listening to this show didn't either,
because if everyone listening to this show attended PyCon, it would sell out many times over.
So that would mean most people here are very excited to know that
they can now watch these talks.
Most of them, there's something going on with 40 of them, but there's a bunch,
uh, there's what, 120 of the talks are online here.
So I'm linking to the playlist for the PyCon videos, which is pretty cool.
This came out a lot quicker than it did last time.
Last time it was months until they published these, which was unfortunate.
But you know, this is like a week or something after the conference.
So that was really good.
Yeah.
Yeah.
Yeah.
And I pulled up some that I want to highlight.
It's too hard to navigate the playlist,
so I'm just going to read out the ones that I liked here.
So I found the keynote by Cory Doctorow to be super interesting.
It was basically going deep into his whole
enshittification thing that he's been talking about,
which is a really, really interesting idea.
A little hard to hear because of the mask, but you know, it's okay. Still worth listening to.
There's one talk entitled, 503 days working full-time on FOSS lessons learned, which sounds
really interesting. There's going from notebooks to scalable systems with Katherine Nelson.
And I just had her on TalkPython.
So for all of these, I'm linking them in the show notes.
And when I say, and on TalkPython, I linked over to that episode or that
video or whatever as well, because her talk is not quite published yet.
It's just recorded in advance.
Unlearning SQL.
Doesn't that sound interesting?
Like most people are trying to learn SQL.
Why would I unlearn it?
The most bizarre software bugs in history,
that's interesting, the Pyarrow Revolution in pandas,
I also did an episode with Reuven Lerner about that.
What they don't tell you about building a JIT compiler
for CPython, by Brandt Bucher.
I also did a Talk Python episode about that
and linked to that one.
This one's cool, from Hynek: design pressure,
the invisible hand that shapes your code.
He's got some really interesting architectural ideas, so super cool.
Marimo, the notebook that compiles Python for reproducibility and reusability, and a Talk
Python episode about that.
GPU programming in pure Python, and a Talk Python episode about that one too.
And finally, Scaling the Mountain: a framework for tackling large-scale tech debt.
That looks interesting.
Don't all those talks sound super interesting?
Yeah.
Yeah.
So I've linked to all of them.
I pulled them out.
Y'all can check them out if you want.
They're in the show notes, the most bizarre software bugs in history.
Total click bait, but I'm going to watch it this afternoon.
I've got them.
Exactly.
I can't wait to watch it.
Yeah, no, it's fun.
All right, over to you.
Okay, there's an interesting header on this one,
with a table of contents to expand.
Anyway, I just wanted to find a post
to talk about this, because it's a technique
that I use for speeding up test suites, and it wasn't covered
in the post we just talked about. So: Optimizing Python Import Performance. In the
previous discussion, we talked about using -p in pytest to remove plugins, which
can remove imports of things you don't need. But what if there are things that you
do need, just not all the time? So one of the things I want to talk about is test collection.
Like the other article, this one uses python -X importtime, and you use it by just running
your app, or running pytest, with that flag. This has been
in since Python 3.7, I think, and it prints out a list of all of the imports and how long it took
to import them. It's a little bit hard to parse because it's sort of a text-based tree
kind of thing, but it's not bad. And looking at all of that, you can try to find out which ones
are slow.
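Concretely, the invocation looks something like this; the timing report goes to stderr, and the module names are just examples:

    # See what importing pytest itself pulls in, and how long each import takes
    python -X importtime -c "import pytest" 2> import-times.txt

    # Or measure what test collection alone imports
    python -X importtime -m pytest --collect-only 2> collection-imports.txt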
So one of the techniques is lazy imports. And this is a weird quirk about Python that I didn't learn until recently: when you import something, normally we put
the imports at the top of the file, but when you import a module, it imports everything that that module imports also.
So if whoever imports your module doesn't really need that dependency just for the import itself,
you can hide the import inside a function. So like in this example,
it says: inside process_data, import pandas as pd.
It's only imported when the function runs,
and after that first call
pandas is cached in sys.modules,
so later imports of it are essentially free.
Kind of a weird quirk, but it works well.
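A minimal sketch of that lazy-import pattern; the function name and the pandas dependency are just examples:

    # utils.py -- pandas is NOT imported when this module is imported,
    # so anything that merely imports utils (like pytest collection) stays fast.
    def process_data(rows):
        import pandas as pd  # runs on the first call; cached in sys.modules afterwards
        return pd.DataFrame(rows)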
And I'm just gonna tie this together
to testing right away.
At test collection time, pytest imports everything.
And you probably don't need to import any of your
dependencies at collection time.
So I look for any expensive imports and move those into a fixture,
usually a module-level autouse fixture, so that the import only runs when the
tests run, not when you're doing collection. So that's an important trick.
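Here is roughly what that fixture trick looks like; the import and the fixture name are hypothetical, not from any particular project:

    # conftest.py or a test module
    import pytest

    @pytest.fixture(scope="module", autouse=True)
    def heavy_import():
        # Deferred until a test in this module actually runs,
        # so it costs nothing at collection time.
        import pandas as pd
        return pd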
Next up, avoiding circular imports. Hopefully you're already doing this, but it says
circular imports force Python to handle incomplete modules.
They slow down execution and cause errors.
Well, I knew they caused errors.
I didn't understand the just slow down execution part.
So there might be some cycles that seem legitimate, sort of, but get
rid of them.
That's weird.
It does sometimes mean you have to restructure your code.
The third thing is keeping __init__.py files very light.
And this is a tough one for pytest test suites and packages in general,
because I sometimes have a tendency to shove things into __init__.py,
especially importing everything, but keep those
__init__.py files as clean and fast as possible.
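One way to keep a package's __init__.py cheap, not from this article but a common trick, is PEP 562's module-level __getattr__, which defers loading heavy submodules until someone actually touches them; the submodule names here are hypothetical:

    # mypackage/__init__.py
    import importlib

    _LAZY_SUBMODULES = {"plotting", "stats"}  # hypothetical heavy submodules

    def __getattr__(name):
        if name in _LAZY_SUBMODULES:
            return importlib.import_module(f".{name}", __name__)
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")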
So there's other things in this article as well, but those are the three that I really
hit on to try to speed up test suites, cleaning up the import time.
So yeah, that's really cool.
And you might wonder like, well, what is slow, what is fast?
How do you know?
Well, there are tools to profile imports.
We've talked about one before.
I don't remember which one we covered, but there's one called import_profile. Cool, I was looking
for that. Nice. Yeah, and so you can just run your code with import_profile,
give it the things that you would be importing, and
it gives you a nice little timing of them. And probably a lot of them are super
fast, and you'd be wasting your energy trying to optimize that stuff.
But some are not fast.
And some use a lot of memory and different things like that.
So this is actually pretty interesting to see how that all works, right?
Yeah.
Actually, when I was doing some optimization for a big test suite recently, you've got to
measure, because there were things that were fairly large packages that I just assumed
were going to be slow, but they had their imports optimized already. So those
ones don't really matter. Some packages that seem large, like pandas or something, might actually
be pretty quick, though it looks like in this example it wasn't. But NumPy, it does a lot,
but it's a pretty fast import, so, interesting.
So it also depends, you know.
You only import it once per process, right?
It's not gonna be imported over and over again.
So if it's 100 milliseconds, is that worth worrying about?
Maybe, maybe not.
And also with turning things off,
like, let's say you've got, in a
test or development requirements file
or something, or in your test requirements, there are
two sets of things that I look at often. There are pytest plugins that are
useful for developers to run locally, but they're not really needed
in CI, so you can take them out in CI. And then the reverse: in CI, I often
have a reporting setup, like I might export all the data to a database
and have some plugins to handle that. And that's not needed locally. So turning those
off locally gives you a little faster run. So, things like that. Anyway, speeding up testing is a good thing.
Yeah, absolutely. All right, extras. You got extras? I got, yeah, so I've got a
couple of extras. How about you? Let's go with yours first. I got a few too.
Okay, so this is pretty quick. So this is from Hugo: PEPs and Co., a little
bit of history about where PEP came from. I've just been using the word without thinking about where it came from.
But apparently Barry Warsaw came up with the acronym, and he calls it a backronym.
He liked the sound of PEP because it's peppy, before he came up with the Python Enhancement
Proposal acronym.
So that's an interesting thing.
But it also takes a look at how, since then,
there have been a lot of improvement proposal
and enhancement proposal
acronyms all over the place
in different communities.
So this is interesting.
Like Astropy Proposals for Enhancement.
And you totally know that they intentionally reversed those
so they can make APE.
That's great.
Just a bunch of fun.
The second one is pythontest.com.
My blog has got a fresh coat of paint.
It's got light and dark modes, but it's more colorful now.
It also makes it glaringly obvious
that I don't blog as much as I'd like to.
The fourth newest post is from January of 2024.
Oops, gotta get on that.
But one of the neat things I like about it is,
and part of it is I didn't really like my theme,
so I wasn't really blogging much,
so I think I like it better.
Hopefully I will.
And it's got a neat search feature.
Is it still running on Hugo or what's it running on?
Yeah, it's Hugo with, I don't even remember what,
a JavaScript-based search thing,
and it's pretty zippy.
I just like this.
Yeah, very cool.
Oh, what was it?
I was gonna say, there was one change
that I hopefully maybe will still make:
the light mode for code highlighting looks fine,
but it's a little hard to read in dark mode.
I don't know, that red on the black.
Oh, yeah, yeah, yeah. Okay, yeah.
You could probably change that up with a little CSS action.
Probably, yeah.
Anyway, those are my extras.
How about you?
Oh yeah, very nice.
I got a few.
Just following up on your extra real quick,
I'm doing a stories from Python history,
like through the years panel,
and Barry Warsaw's gonna be on there on the 6th, which is in about four days,
Thursday, something like that.
Okay.
And yeah, that pep story.
I'm going to try to get him to talk about that.
So my extras, this one certainly could be a full-fledged item, but I
don't have enough experience with it.
I don't really know, but here's the deal.
So you could have some kind of SaaS bug system. Could be Jira,
everyone loves Jira, or it could be GitHub issues, or it could be something else.
But what if you just want something low-key, you know, that's going to be
right there with your projects?
Git is already distributed.
So there's this thing called git-bug.
Okay.
And the idea is, this is a distributed, offline-first, standalone issue management
tool that embeds issues, comments, and
more as objects inside your git repository.
So when you git clone, you just get the issues.
And when you git pull, you update your issues.
Oh, cool.
Interesting, right?
And it comes with some kind of UI.
I haven't played with it that much.
A CLI tool, a TUI, or even a web UI in your browser that you can run on top of this.
So that's pretty neat.
And then there's something where
it will sync back and forth with things like GitHub issues
and so on, if you want,
or GitLab, using something called bridges.
I think that's pretty cool actually.
I don't know how it works, but it seems pretty cool.
But I've not used it at all
and I have no interest in using it.
Cause for me, I just, all my stuff's on GitHub.
I'm just gonna use GitHub issues. It's just fine.
But I can see certain use cases. This would be pretty neat.
What else have I got?
Follow up from last week.
It's about Pyrefly. This is from Neil Mitchell.
Remember, we talked about Pyrefly, the new type checker, mypy-like thing
from Meta.
Yeah.
And I said, oh yeah, it's kind of like ty, formerly Red Knot, from Astral, and I said, oh, but Astral also has this LSP.
So we got a nice comment on that show from Neil. He says: hey, from the
Pyrefly team here, thanks for taking a look at our project.
We do have an LSP. IDE-first and LSP are approximately synonyms
nowadays, and we're exploring whether this can be added to Pylance.
So there's more to Pyrefly than I gave them credit for.
Very cool.
Yeah.
What else?
Oh, I think that's it.
You ready for a joke?
IDEs and LSPs are synonyms?
I think the autocomplete feature,
the features that make IDEs more than just a basic editor.
Okay.
Go to definition, refactor, find usages. I think that's what he's saying.
Okay.
Yeah.
I mean, I wouldn't fire up Pyrefly just from a command prompt when
I need to edit this email, like, let me fire up an LSP.
Yeah, exactly.
But I think that's what he said.
Like the features of IDEs are basically just backends to LSPs.
All right, are you ready for a joke?
You did a nice job full circling your whole experience.
So I'm going to do that as well with your PyDesk and so on.
So check this out.
We're all, as programmers, aware by now, surely, that AI tools are good at writing code and
it's going to mean some
sort of change for us somehow, right?
Some of us are worried that, you know, maybe this might be the end.
So the joke is, there are two programmers who are, like, about to be hanged on
the gallows, right?
Like it's pretty, pretty intense.
The caption says: programmers worried about ChatGPT.
And then they look over at the other guy.
He's also up there, and he's labeled: mathematicians who survived
the invention of the calculator.
And the look says, first time, eh?
It's pretty good, right?
Yeah, that was pretty good, yeah.
I wonder if that image was made with ChatGPT.
That would be a sweet sauce on it.
Probably not, but it's probably from a movie I haven't seen.
Yeah, probably, maybe.
But, yeah, first time. The mathematicians survived.
Yeah, gosh
This might show my age,
but I still remember all my math teachers saying you have to learn how to do this by hand because you're
not gonna walk around with a calculator every day. Yeah, you're not going to, are you? Wait...
Well, you know, maybe, yeah. Maybe I'll hold up my phone. Yes, I will. Or your watch or whatever.
Yeah, my kid
is oddly good at pointing her phone at anything like a math problem and getting an answer.
She uses it to check her work, which I think is good. At least that's the claim. Yeah.
Yeah, we live in amazing times, but also interesting times, as the curse slash quote goes.
Yeah, I'm gonna have to read the comments to figure out what async and banana suits mean. Oh.
Yeah, there was a banana suit
in one of the videos, yeah.
Yeah, somebody out there, pulling out some of the talks, pointed out one more talk that was interesting: Pablo Galindo Salgado and Yury Selivanov's.
It was especially fun, talking about async and wearing banana costumes.
Well, it could be better.
Nice.
Yeah.
Well, yeah.
Thanks for being here, Brian.
Thanks everyone for listening.
See you.
Bye.
Bye.