The Changelog: Software Development, Open Source - Reinventing Python tooling with Rust (Interview)
Episode Date: October 1, 2025

Charlie Marsh built Ruff (an extremely fast Python linter written in Rust) and uv (an extremely fast Python package manager written in Rust) because he believes great tools can have an outsized impact.... He believes it so much, in fact, that he started an entire company that builds next-gen Python tooling. On this episode, Charlie joins us to tell us all about it: why Python, why Rust, how they make everything so fast, how they're starting to make money, what other products he's dreaming up, and more.
Transcript
Welcome, everyone. I'm Jared and you are listening to The Changelog, where each week we interview the hackers, the leaders, and the innovators of the software world.
We pick their brains, we learn from their failures, we get inspired by their accomplishments, and you know we have a lot of fun along the way.
Charlie Marsh built Ruff, an extremely fast Python linter, written in Rust, and UV, an extremely fast,
Python package manager written in Rust, because he believes great tools can have an outsized
impact. He believes it so much, in fact, that he started an entire company that builds next-gen
Python tooling. On this episode, Charlie joins us to tell us all about it, why Python, why Rust,
how they make everything so fast, how they plan to make money, what other products he's dreaming
up, and a whole lot more. But first, a big thank you to our partners at Fly.io, the public cloud
built for developers who ship. We love Fly. You might too. We're all about it at Fly.io. Okay, Charlie Marsh from
Astral on The Changelog. Let's do it. What's up, friends? I'm here with Kyle Galbraith, co-founder and
CEO of Depot. Depot is the only build platform looking to make your builds as fast as possible.
But Kyle, this is an issue because GitHub Actions is the
number one CI provider out there, but not everyone's a fan. Explain that. I think when you're
thinking about GitHub Actions, it's really quite jarring how you can have such a wildly popular
CI provider, and yet it's lacking some of the basic functionality or tools that you need to
actually be able to debug your builds or deployments. And so back in June, we essentially took
a stab at that problem in particular with Depot's GitHub Actions Runners. What we've observed
over time is, effectively, GitHub Actions, when it comes to actually debugging a build, is pretty
much useless. The job logs in the GitHub Actions UI are pretty much where your dreams go to die. Like,
they're collapsed by default, they have no resource metrics. When jobs fail, you're essentially left
playing detective, clicking each little dropdown on each step in your job to figure out, like,
okay, where did this actually go wrong? And so what we set out to do with our own GitHub Actions
observability is essentially we built a real observability solution around GitHub Actions.
Okay, so how does it work?
All of the logs by default for a job that runs on a Depot GitHub Actions runner, they're uncollapsed.
You can search them.
You can detect if there's been out-of-memory errors.
You can see all of the resource contention that was happening on the runner.
So you can see your CPU metrics, your memory metrics, not just at the top-level runner level,
but all the way down to the individual processes running on the machine.
And so for us, this is our take on the first step forward of actually building a real observability solution around GitHub Actions so that developers have real debugging tools to figure out what's going on in their builds.
Okay, friend, you can learn more at depot.dev. Get a free trial, test it out.
Instantly make your builds faster. So cool. Again, depot.dev.
Today we are joined by Charlie Marsh, the founder of Astral, a company that makes next-gen Python tooling, maybe you've heard of UV.
Charlie, welcome to the show.
Yeah, thanks so much for having me on.
I'm really excited to be here.
excited to have you here
topping the Stack Overflow survey
most admired or desired
I'm not sure how they broke that out
I've forgotten already but you're at the top
we talked about you on news
a month or two back
but man everyone is either
loving or wanting to love
this tool you've come up with UV
can you tell us about that?
Yeah yeah no thanks it's um
honestly I feel pretty lucky
because I don't really spend a lot of my time
thinking about how to get you know
on top of the Stack Overflow developer survey
I get to just spend my time building the thing
and talking to users in the issue tracker
and trying to make it better
and then it just keeps growing.
So it's kind of like a dream,
you know, a dream job in a lot of ways for me.
So UV is our Python package manager,
Python tool chain manager.
We kind of view it as like the one thing you install
that then gives you everything you need
to be productive with Python.
And there's a couple aspects that I think make it unique.
One is as with a lot of the things we build,
It's really focused on performance.
So we think a lot about how do we make things very, very fast, things like way faster
than you thought they could be is sort of the way that we try to view it ourselves.
But we're also trying to just take a lot of the complexity out of building with Python.
I think like Python packaging has for a very long time been a thing that people have complained
a lot about and often for good reasons, like they're running, users running into problems,
people having trouble trying to understand why things are so complicated or things aren't
working the way they want. And so a lot of what we've tried to do with UV is cut through a lot of
that complexity. In some cases, by bringing more functionality together. So, like, you can just
install UV and then it can manage things that previously you might have needed to distribute across
a bunch of different tools. So maybe before you had to learn, like, four or five tools. And now we
say, hey, here's UV. It can do all that stuff for you. And then partly also by coming up with some
of our own kind of like workflows and APIs that we've put into UV that we think make things
basically make it easier to get things right for users. Yeah, we model UV a lot off of, we're
very inspired by, like, how Rust does tooling. So UV is in a lot of ways modeled after cargo,
which is like Rust's package manager. And with Rust, it kind of feels like you install this thing
and then everything you do is like very high confidence. Like you kind of like know how to like install
dependencies and run code and test code. And like that.
That's what we want to get to with UV.
We want to give people kind of this very high confidence experience of working with Python.
Everything for Rust really revolves around cargo, right?
Like cargo does pretty much all the heavy lifting.
So it's linting, it's doing builds, it's doing all the things.
Yeah.
And it's, I mean, Rust and like other new programming languages, it has this like kind of second mover.
I don't know if it's second mover, but it has this late mover effect where they get to learn a lot about like,
what makes a programming ecosystem nice to work in.
And so they got to do, like, it's a very different position
from when Python was being created.
And it was like decades ago.
And everything kind of evolved very organically.
And it wasn't really clear like how serious things would be or like what was
going to change like packaging kind of just like emerged as this like organic
property of like people needing to share code and like distribute code.
And for us, it's really different.
It's like, okay, we're building a new programming.
language. Let's learn from all these things that people have built. And so in Rust, like,
cargo is very much like a blessed tool. It's like you install Rust with rustup and then you use
cargo for everything. And in a lot of places, cargo is actually kind of like a front end to other
tools. Like Rust actually has a formatter called rustfmt. But you never really think about that
as a user. Like as a user, you just run cargo fmt. And that actually runs rustfmt. So Rust becomes kind of
this like focal point for, or sorry, cargo becomes this focal point for how you work with
Rust and all of that design was very intentional. So for us, it's a little different because we're
like coming into this ecosystem that's decades old and like absolutely enormous, which is like
the Python ecosystem. And we're trying to both like meet people where they are in a lot of ways
and be like, hey, we want to give you better tools that like don't require you to completely rethink
like how you work. But also we sort of think about how do we build towards a very
different experience and give people something that is very different for people who want to
embrace kind of a different way of working. So we try and do actually both those things. Like we try to
build tools that are, um, what we would say are like drop in replacements are like very compatible
with how people do things today. But then we also build tools at the same time that are kind of like,
hey, if you want to work the way that we think you should be working, like here's like a very different
way to work with Python. And you get kind of different benefits depending on which you opt into.
But it's very different from building for Rust, building tooling in Rust, for example, or for Rust.
One thing that, just back on Rust for a second, one thing I read actually on Wikipedia from Graydon, the fellow that created Rust, was just this notion, I think, is important to mention. He said, let me see if I can get my words right.
In Wikipedia, it says this. I'm not sure how he pronounces his last name. Is it Hoare? H-O-A-R-E. I'm not sure.
Yeah.
He emphasized prioritizing good ideas from old languages over new development.
So as he was thinking about Rust,
let me prioritize these good ideas from older languages,
even some obscure languages, over new development.
And I'm going back to the quote,
citing languages including,
and I left that out because I was trying to paraphrase to a friend.
And then it was like many older languages are better than new ones,
and describing the language as,
this is cool,
technology from the past come to save the future from itself.
I just thought that was really, really cool.
Like this show, like as we pull back the layers of software,
as we pull back the layers of, you know,
hey, this is how Rust does it.
And so this is how other folks are doing it or whatever.
It's this learning from the community at large of software,
not so much the Python world or the Ruby world or the Rust world or the Go world.
It's like these ideas that are spread across even other languages that I'm way less
familiar with, if at all, they impact how you build the tooling you build. And I think that's
kind of cool. It's just like this technology from the past come to save the future from itself. That's
just poetic and beautiful. I'm a huge fan of that like general idea of like cross pollinating
ideas. And yeah, I think about this a lot when we're building tooling. It's like I mean,
it's not totally unprecedented that we encounter some problem that no one has worked on before,
but it's pretty rare. Like most of the times when we
go to solve a problem, it's worth looking at, okay, well, like, how do other ecosystems
or how do other tools, like, approach this problem? And, like, before I worked on UV, we built
a tool called Ruff, which is a linter and formatter, sort of, like, in Cargo, in Rust, it would
be like our rustfmt, our Clippy. In JavaScript, it would be, like, some combination of, like,
Prettier and ESLint and all that stuff. So it's like a static analysis tool, it formats your code.
It, like, fixes issues. And when we worked on that, like, so many
of the design decisions and design questions just came down to like, okay, well, like, let's
go look at a bunch of other ecosystems and how they do this. So we looked at like Ruby,
like obviously we looked at like Prettier a lot and ESLint and like decisions that they had
made. We looked at like RuboCop in Ruby. We looked at Clippy. And we still do that today.
And so like I don't know. I think and even with now like we're as a team at Astral,
we're like about 20 people. And as we've like put together that team, it's also been very
intentional that, like, I've tried to suck people in to Python. Like, it's not like we only hired
people who have worked in Python their whole career. Like, it was very intentional that we actually
brought in people who, in some cases, like, had done almost no Python and brought in very
different ideas, like people who had written tons of Rust or a lot of Go, even people who, like,
spent most of their career in the web ecosystem. Because I like bringing in those different
ideas and having those different perspectives and, like, bringing different energy into, like,
a program ecosystem. So I'm a really big fan of stealing basically good ideas from other
ecosystems and like looking at prior art. I think that's like kind of always the first thing that you
should do. Yeah. Anybody who's against that, I just don't understand that logic. Like, why would you
not look at prior art? You know, the only time I'm against it is if it's my art. Well,
I think in programming in particular, like, you know, you look at a package manager or registry,
why would you start from zero? Why would you, I mean, first principles for sure, but based on
the past based on other implementations, not based on, I'm not looking at you because that's your
thing. It's like, no, let me become wise because of what you've done or the road you've gone down
and then begin from first principles based on this just new vantage point that you would
otherwise not have if you didn't look. It just doesn't make any sense to me. Yeah. Let me share
a little history, which illustrates exactly what you're saying. So I was in the Ruby community,
Charlie, Adam and I both were. And Ruby had, you know, average package management
in the pre-Rails day and then Rails got so big
and had so many people using it that it was like
it just wasn't enough. In fact, we had this whole
vendoring thing and eventually there's a couple
fellows, Yehuda Katz being one of them, who was like, we're going to
fix packaging for Ruby. And Carl Lerche, I'm not sure that's how
he says his name, but he was the other one who I remember. And they built
Ruby Bundler, which eventually was like first partied
into the whole thing and became the package manager for Ruby
and it was much better. They learned a lot. They
made a lot of mistakes and they made a lot of people happier than they were.
And then, as you may know, Charlie, uh, Yehuda and Carl went over and built cargo.
Yeah, exactly.
And so they took their learnings from building bundler and the stuff that was good and
then they dropped the stuff that was bad over to that.
Right.
Built cargo.
Cargo became awesome.
Everybody inspired by cargo yourself.
Now UV based on cargo inspiration.
And a cool full circle moment is what I read.
I'm sure you know about this.
Maybe you don't.
RV, which is a new effort by Andre Arco and a few other people, to basically build a new
Ruby thing, which is based on principles and things they like about UV. So, like, it's a full circle
inspiration there that is just really cool and just shows how stealing good ideas from other places
is like, it makes us all better. Yeah, yeah. I do love that story. I thought that's where you were
going to go. Yeah. Which is, yeah, it's very cool. And I don't know. I mean, I think people, it's actually
hard, like, if you're going into a process, like a language design process or something,
this idea that you have to go do a bunch of homework, I think, is actually, like, it's work
to, like, go out and, like, see why, what decisions people made and why, and, like, why they,
whether they've worked out or not. And there's always a temptation to think that your problems
are, like, different and new, that, like, no one has really, like, solved these before.
And it is often the case that, like, you're working on a problem that's not totally new, but is
new in some different way. I'm sure most problems that you work on have some context that makes
them new or different. Right. And so there's a lot of like taking in information, looking at what
people have done, trying to understand like why they made the decision, like what the impact has
been. Like has it worked out or not? But then also understanding like why your context is different
and like why, you know, how you need to adapt it. So it's a big, I'm always kind of like harping on
that, especially now as I've gotten a little bit, not that I'm like hugely involved.
but as I've gotten more involved in Python standards,
and I get pulled into different discussions or different ideas,
I'm kind of always trying to push on,
well, how do other ecosystems like solve this or have we looked at?
There was one proposal recently around sort of like having like default optional features
and packages, and we keep trying to bring up,
well, let's look at how Rust has done it and like there are actually some problems with it.
And so let's make sure that we think about like what those problems are
and like how they will affect like this design or like ways.
we could do it better.
So, yeah, it's not only taking the good ideas.
It's also kind of looking at, like, what could be done differently.
Learning from the failures, yeah.
Or the tradeoffs, you know, sometimes it looks at like a failure,
but actually it was a perfectly reasonable tradeoff given their context.
But the good news is we don't have that context anymore.
And so we can avoid that particular problem.
Adam and I, even though we're not daily Pythonistas,
we felt the pain of Python package management because there's so much tooling that's
useful in Python, just massive, as you said.
And so we had conversations and shows all about, like, help us to get rid of our, what is it?
Not FOMO, the opposite.
Like, our fear of using Python.
Yeah, yeah, yeah, yeah.
When I have to install Python it's like, oh, what do I have to do?
Is it the right way?
What's going to happen?
That's what we're trying to overcome.
Right.
So my question is, like, where, like, how did you pick this problem?
Because it seems like it's rife for disruption, but it's existed for so long.
Like, how did you come to this idea of like, well, we're going to do a new package manager for Python?
Yeah, totally. So like I said, we started with Ruff. So we started building this like static analysis tooling. And, you know, I started that just started as an open source project. And eventually I turned it into into this company. And, you know, when I was looking at what we want to be as a company, it's like, okay, we want to be like the Python tooling company, let's say. Well, if we want to be the Python tooling company, I think we have to like take on the hard problems in Python. And for me, so for me, it was like, okay, we have to do something in packaging because.
Every time you talk to people about using Python, they have this groan, which comes from installing Python or installing packages or setting up the environment.
And for me, it was like a lot of people, I mean, something that's intimidating is like a lot of people have actually tried this.
Like, there are lots of tools.
And that's because like a lot of people have tried to do different takes on it.
And so for me, it was a little bit of like we have to do this, both to prove that we can and because it seems like the most important tooling problem in the ecosystem.
So if we're going to try and, like, really, like, lift up the ecosystem in some way, like, this is the thing that we have to go after.
And it was, you know, we thought a lot about, we think about this with everything we build, but it's like, why, what is like the insight or like, why can we build something?
Why would we succeed here?
Whereas other people, I would,
I mean, I don't think it's that other people have failed, but it's like, why would we, if the space is really fragmented and some users are still having problems, like, what are we going to do differently
that's going to overcome that fragmentation or like overcome those
problems. And I think part of it is just that we had the resources and like the
ambition to try and like do the whole stack. Because if you look at a lot of these other tools,
they kind of build on other pieces and they have to like cut the cord somewhere. And so for us,
it was, okay, we're going to do packaging. We're actually going to do like the whole stack.
Like everything from parsing dependency specifiers, like version specifiers, through to resolution,
through to installation, through to managing Python itself, like the Pythons you install and all the
versions, through to actually building those Pythons for you. Like, we're going to do the entire stack.
And so that, I think, is where a lot of the effectiveness, the reason that we're able to do things
differently, a good amount comes from that, which is we kind of did, we were like, we're
going to do the whole stack. And like everything's going to be aware of everything else.
And that lets us build experiences that kind of work better together and are like more
automatic. Like when you go into a project, you install UV and you run UV sync, like we can do
everything from we figure out what version of Python you need. We go install the pre-built version
of Python that we built. We put it in the right place. We resolve all your dependencies.
We put them in a lock file. We create the virtual environment. We install everything into the
environment and then we run the command that you gave us in that environment. So like all of that
complexity goes away. And that comes from being willing to say, okay, we're actually going to do like,
we're going to try to do like the whole stack. And I think that's where a lot of it came from.
I, you know, I think the other piece is just making good decisions about like where to be
pragmatic and where to be like dog sort of dogmatic, like where to say this is something we really
believe that we're going to like do differently and being otherwise being, other areas being willing to
say, okay, we need to do this for compatibility or like just, it'll break too many users.
And just like on the margin, trying to make good decisions around behaviors is very hard.
I think we've gotten some, I think we've gotten a lot of them, right?
Some we've gotten wrong and some we've like changed over time.
But like a lot of ultimately that too, you know, a decent amount of this comes from having
the resources to like work on this stuff full time, like being able to kind of rally people,
including investors and like bringing investors and say like this is an ecosystem that's like
really worth working on and like we can do something like we think really special and different
here. And so being able to bring people in full time and really pay, you know, put in all the
engineering investment to build this thing, the community investment to like spend all this time
in the issue tracker and understand like what's going well and what's not and fix things for people
and be really close with the community and like try to iterate really quickly. So, you know, I think
I think ultimately it comes from being able to have that level of ambition of like we're
going to do something really different. And then executing that and that in a way that I think
has been really effective. So did you have that goal in mind when you started talking to investors?
Because like you said, you needed n years to do this, or however many years you think it is,
or months. Months or years, probably years, right? However long it took to be able to bite off
the entire thing? Like that's what your goal is, right? It's like everything. And
So if I'm a full-time developer, Python developer, you know, someplace, and I have this like,
you know what?
I'm going to solve packaging for Python.
I got to do that nights and weekends.
Maybe I cut back my hours and work on it.
Maybe I convince my boss.
I can work at it in my 20% time.
But like you said, no, we're going to do it all.
We're going to do it right.
Ground up to a certain extent.
We're even going to handle installing Python through our tools.
And so to do that, you raise money, right?
Like that was because you need to have, you need multiple people for multiple years.
to actually get that done.
And so was that your first step?
Was like, I want to do this and I'm going to go raise money to do this?
Or I'm going to convince people like, how does that whole play out?
Yeah, yeah.
So the first step was, I started working on the tools before I raised any money or started
a company.
So like I was actually, I'd left my last job.
I was at a computational biology company.
I was in charge of all this sort of like software infrastructure, data infrastructure,
machine learning infrastructure.
We wrote a lot of Python.
And I was kind of like figuring out what I wanted to do next, and I was looking
to start a company, but I didn't really think it would be this. This was kind of like my side
project, where actually I kind of, I wanted to learn Rust, so I started building
this because I was like, I think this could be cool, you know, et cetera, et cetera. And that
was Ruff you were building, right? That was Ruff at the time. Yeah, yeah. And that project then
really started to take off and I kind of realized there was an opportunity to take these
similar ideas and extend them to other parts of the tool chain.
Like we could build a package manager at the time it was just a linter and it was like we
could build a formatter or we could build a package manager.
Now we're also building like a type checker and a language server.
Like we're trying to build all this stuff.
And I saw there was this opportunity.
And, you know, I think a couple of things that come to mind.
One is like very, very important everywhere, but especially in this context to find ways
to demonstrate like incremental value and being able to
get things out to users incrementally. Like, Ruff was a really good project for that because it was
a linter. So it's like a set of rules, right? And a set of functionality. And like the core of it
doesn't have to be that big. But like over time, you can do like more, you can add like more rules,
more functionality. So we were able to ship, like, the first version I shipped was like not very
feature complete. But people could actually use it. And over time, we could like extend and grow it.
So it was usable very quickly. And it grew quickly. And
and we kind of expanded the scope of what it could do.
It was something we could ship very incrementally.
The formatter was maybe a good example of something that's not like this at all.
Like a formatter is not useful until it's done.
Like it has to be finished.
Like it's not useful if a formatter can format like function definitions, but nothing else.
A third of your code.
Yeah.
So that was like a much harder, a bigger challenge where it's like we actually had to iterate,
not in private, but like we didn't have like a useful release for like a long time.
Like we were like, it was all in public, but it was like no one was using it until it was done.
And then when we got to the package manager, we took, we thought about that a lot.
And the thing that we did, we actually built the entire, the first, like, I started working on it in, like, October, let's say, of I think 2023.
And then we did the first release in February.
So it was only a couple months, and it was like three of us working on it.
And the first release, we said, we actually found really good ways to kind of cut like what went into that first release.
Like the first release was just a pip compatible CLI.
So all it did was like UV pip install instead of pip install and UV venv to create virtual
environments.
Like it was very, the CLI now does a bunch of other stuff like supports like installing Python.
We have like lock files, like installing global tools, like all this stuff.
None of that stuff was in the initial release.
And so for us, it was like the first release, how do we prove that we can do this?
Let's ship this very well scoped.
It just does PIP install and like PIP uninstall, right?
and, like, creates virtual environments.
And that's actually useful for a lot of people.
Like, that actually solves a lot.
It's not, like, some people looked at that and were like,
oh, it's just faster PIP.
That's not interesting.
And, like, what we're trying to do is much bigger than that now.
But that's what we started with.
And so for me, it's like a lot of the focus is on how can we,
let's think, like, super critically about, like, use cases.
And how can we get something out there as quickly as possible?
It's, like, useful to people that we can actually start iterating on in public and
with users.
and that's kind of been like, that drives, I think, a lot of how we build things.
It's like the type checker is similar.
It's like the type checker kind of has to be done to be useful.
And so it's much, it's actually harder to build and you have to be willing to put in a
lot more investment over a longer period of time.
But when you can find a tool that you can get out a small version of, that's actually
useful, like just ruthlessly asking yourself, what would it take for someone to actually
be able to use this and then like getting to that and getting into an iteration loop,
I think was really helpful because we kind of proved like we could build this
packaging stuff. And then from February to, I don't know, August or whatever, we built this whole
other part of the CLI and released it. And then we had a huge new launch around that. But, you know,
a lot of it is being able to prove that you can do these things and prove that users want them.
And so getting something out quickly and iterating from there, I think, is like the thing that I
always try to find a path to that with the things we build. And proving that you can do it with
a small team is also very helpful to yourself and to investors.
And to investors.
Yeah.
So did you go the traditional pitch deck route?
Like you got a pitch deck that you went around and how did you raise the money?
No, I don't know.
I got pretty lucky.
Yeah, there was just, there was a lot of, I mean.
You had friends in the industry or like, how did you get the money?
It's more just that the open source was really taking off.
And so our fundraisers were, our fundraisers were pretty easy or pretty, I don't want to say easy.
They were relatively frictionless because.
we were seeing so much growth in the open source that it was just clear that we were doing something
right basically. And so for investors who are open to investing in open source, because you will find
very different investor philosophies around this stuff. But if you find investors who kind of have,
either have done open source before or really believe in, you know, the ideas around open source
and also how they can segue into like commercial growth, it wasn't. Thankfully, we didn't have like super
challenging, you know, fundraisers. So, yeah.
Like it's kind of funny because at the time, like I said, I was, I was thinking about starting a company when I started when I was working on this stuff. And so I'd actually started talking to a few investors, but really just as a way to kind of like build relationships and be like, hey, I'm like not starting a company yet. I don't know exactly what I want to do. Here's like four things I'm kind of like playing around with like building prototypes, like talking to users. And the thing I learned is like that stuff can actually like escalate very quickly. So as soon as I started having traction,
And in my head, I was like, okay, if I hired people now, like, I know what they would work on.
Like, like, once I got to that point, then things kind of escalated really, you know, quite quickly.
And, you know, I ended up raising some money and starting to grow the team.
Behind the scenes, I'm over here just plodding away.
I'll reveal a little bit here.
Well, I was thinking, like, what's the, it's like a just in time learning tool, let's just say.
So I was like, well, I created a new directory to play with Claude in.
I said, let's create the most simple CLI-based to-do list in Python,
and let's let UV be the centerpiece of it all.
And, I mean, it's a Python project.
I don't know much about those, because I've never run one,
but I can see the usefulness just in the real time of UV.
It's super fast.
It's obviously one command.
UV run runs the CLI, so it's got the run command.
So it's doing a lot of that user experience direction for things.
And, you know, as somebody who runs, I've been building a few CLIs.
I feel like that's the one thing you want, sort of one easy path to run your project
and keep it that way.
Yeah, I mean, we thought a lot about that just because, like, that was kind of like a major design decision when we were building UV: we want to abstract away like as much of the complexity as possible.
So like UV run, when you run that command, it will make sure that
your dependencies are in sync with the lock file, make sure that your lock file is in sync with
your environment, like, that, you know, basically that your environment matches, like,
what your declared dependencies are. It will do that every time you run UV run. And the thing that
I think is cool is, like, that's kind of an experience you can only reasonably build if you have
a really strong, uh, performant like baseline. Because you can't, if that takes, like,
imagine that took like 10 seconds, then like you can't have that 10 second overhead
every time anyone tries to run a command.
And so, yeah.
So, like, for me, that's one of the cool things where it's like, okay, focusing on performance
actually lets you build, like, kind of different experiences because you can build things
that otherwise would have been like prohibitively slow before.
And so, yeah, we didn't want to have this workflow where, like, you might run a command
and you're actually, your environment is like stale and you're, like, missing all the dependencies
or something, like, that happens all the time
with, like, NPM and Node and stuff.
And I'm like, I don't want that.
Like, I want the cargo version where, like, you do cargo run
and it, like, takes care of resolving your dependencies
and installing them and then runs the command
with all the right stuff.
So, yeah, it was a very much,
that's probably the biggest example of us trying to, like,
provide a different experience for working with Python.
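To make the workflow Charlie describes a little more concrete, here is a minimal sketch of the "sync before every run" idea in Rust, the language UV itself is written in. It is not UV's actual code: the file names, the staleness check, and the helper functions are hypothetical stand-ins for the real resolve/lock/install pipeline he describes.

```rust
// Hypothetical sketch of the "sync on every run" workflow, not uv's real code.
use std::path::Path;
use std::process::Command;

// Cheap staleness check: the lockfile must exist and be at least as new as the
// manifest. A real tool would compare content hashes, not modification times.
fn is_up_to_date(manifest: &Path, lockfile: &Path) -> bool {
    match (manifest.metadata(), lockfile.metadata()) {
        (Ok(m), Ok(l)) => l.modified().ok() >= m.modified().ok(),
        _ => false,
    }
}

// Placeholder for: resolve dependencies, write the lockfile, create the
// virtual environment, and install anything that is missing.
fn sync_environment() {
    println!("re-syncing environment...");
}

fn main() {
    if !is_up_to_date(Path::new("pyproject.toml"), Path::new("uv.lock")) {
        sync_environment();
    }
    // Finally, run the user's command inside the managed environment.
    let status = Command::new("python")
        .args(["main.py"])
        .status()
        .expect("failed to run command");
    std::process::exit(status.code().unwrap_or(1));
}
```

The point Charlie makes is that this only works as a default if the re-sync step is cheap enough to pay on every single invocation.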
I decided to install it with Homebrew as well.
I don't know if that's anti-pattern because docs don't really mention
homebrew or?
Yeah.
UV is also on Homebrew.
That's cool.
That's cool.
My preferred method is Homebrew because that's my macOS, you know, package manager, basically.
So that's my preferred way versus, you know, just the other way, essentially, that script on your, on your...
Yeah, we have this curl script.
The reason that we generally recommend installing with the curl script is only because it lets you do auto self-updates.
Like if you install with the curl script, then you can run UV self-update.
But, but if you install it
any other way, we can't really do that because,
like, we're not Homebrew. So, like,
we can't, like... So it goes stale
over time, maybe? Yeah. Yeah. But you know, it's all the same
binary for the most part.
That's a hidden unknown for me with
Homebrew. I didn't think about that. Like being out of sync
I guess I know that by
nature. But the fact
that you have a self-updating thing
that's built into the package manager
is a nicety that I really want.
And now I'm going to go and undo that and
I'm going to install the right way. So thank you very much.
Yeah.
Can you talk about, you know, we just talked about run, but what about init and add?
These seem to be like the things that really are the magic moments for anyone using Python.
Yeah, sure.
That's not there otherwise.
Yeah, yeah.
So like I said, when we did the first UV release, the things that we launched with were like UV pip install and like UV venv.
Like, and these are kind of like commands that match how people have historically worked with Python packaging.
So it's like, okay, I want to create an environment.
I'm going to run this command to, like, create the environment.
Then I'm going to, like, activate it.
Then I want to install a package in it.
I'm going to run UV pip install torch or like PyTorch or whatever.
And so it's very imperative and it's kind of low level because it's like you have this
directory on your machine that you're sort of like manually managing.
It's like I want to add things to it.
I want to remove things from the environment.
And like ultimately we want to get away from that.
And so like UV init, UV add, UV run, these are all designed to be very declarative.
Like, you tell us in your file, like, what the dependencies should be.
And then we take care of, like, taking that declared state or that declared set of dependencies
and making them correct on disk.
Like, you don't have to think about, I'm going to create this directory and add this
dependency and add this other dependency.
You tell us, this is what my project needs.
And then we take care of the rest.
And so getting there, yeah, UV init will create a new Python project for you.
And then UV add, you can just add dependencies to it.
And, you know, UV takes care of keeping everything in sync.
And yeah, there are a bunch of, we talked about this a little bit earlier, just like taking good ideas.
I mean, there are a bunch of ideas in UV too that we've like taken from other tools, even tools that,
I know I've talked about cargo, even tools that have nothing to do with cargo. Like in JavaScript,
a couple tools like pnpm and Bun, probably Yarn too, although I'm honestly a bit less
familiar with exactly how their design works. But basically the way that we manage like our
cache and do installations is, that's probably one of the most, like, oh, I never thought
it could be this fast kind of thing where people get really confused, which is, we use like a
global cache.
So basically, if you install a package once, we put it in the cache.
If you then go to a different project and install the same package, we effectively just
like symlink that package into your environment.
It's not exactly a symlink, but the basic idea is, for each package you install, we actually only keep like one copy, one real copy, on disk, and all your projects just point to that. Which makes installation incredibly, like, if you are installing something that's already been installed before, it's basically a no-op. It's like it just points the files to the right place. And that's something that like pnpm and Bun do, and there are different tricks you can do on different file systems to like make that faster or different depending on like what's available.
But, you know, again, a lot of it's like taking some of these ideas around like how you can build these kinds of tools in a really performant way and bringing them to Python.
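As an aside for readers, here is roughly what that "point the files at one real copy" trick can look like in Rust. This is a hypothetical sketch, not uv's implementation: the paths and directory layout are made up, and a real installer would choose between hard links, reflinks, or copies based on what the filesystem supports.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Install one cached file into a project environment by linking rather than
// copying, falling back to a plain copy when linking isn't possible (for
// example, the cache and the environment live on different filesystems).
fn install_file(cache_path: &Path, env_path: &Path) -> io::Result<()> {
    if let Some(parent) = env_path.parent() {
        fs::create_dir_all(parent)?;
    }
    match fs::hard_link(cache_path, env_path) {
        Ok(()) => Ok(()),
        Err(_) => fs::copy(cache_path, env_path).map(|_| ()),
    }
}

fn main() -> io::Result<()> {
    // Tiny demo with throwaway paths: one "real" copy in a fake cache, and a
    // project environment that just links to it, so a repeat install is
    // nearly a no-op and the bytes exist on disk only once.
    let cache = Path::new("demo_cache/requests/__init__.py");
    let env = Path::new("demo_venv/lib/requests/__init__.py");
    fs::create_dir_all(cache.parent().unwrap())?;
    fs::write(cache, "# cached package file\n")?;
    install_file(cache, env)?;
    println!("installed {} -> {}", cache.display(), env.display());
    Ok(())
}
```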
Yeah.
That seems like a pretty logical one, though.
The global cache.
Where did you get that one from?
I think probably from Bun was probably the, I think I knew PNPM did this for a while.
But then Bun, I think, is the one that I probably looked at most when I thought about like how our design should work.
But yeah, there are tradeoffs to it.
Like the main downside is there's kind of two ways to do that.
Like one is you can actually use like a hard link, kind of like a symlink.
The problem with that is users can poison the cache.
So like if they change the files in the installed environment, it will actually pollute the cache
and affect everywhere else.
So let's say that you open a file that's in your virtual environment, and you add, like,
a print statement or something to debug, which is something that people have done from
time to time. That thing gets, it sort of like poisons the cache, so that it also affects other
projects. That's one downside. But like macOS and some other file systems support a concept
called copy-on-write or like reflinking, which is much nicer. The idea there is it's like a symlink
until you edit it. And then it creates a, then it actually writes the copy and changes it there.
So that's kind of like the best case scenario. But yeah, it's a very nice idea. And it also means
that you, in addition to being way faster, you save like a ton of disk space.
So, like, in Python especially, packages can be really big because a lot of the time you're
actually working with native code.
So, like, PyTorch, for example, like, compressed, it's almost, like, I think the Linux
CUDA 12.8 builds are about like a gigabyte compressed.
So when you install that package, we're actually like downloading and unzipping like a
gigabyte of compressed data.
And so it's really nice to only have one copy of that package on
your machine. Like if you're installing a new one in every environment, the disk space actually adds up
tremendously. So it's faster and it's more space efficient, which is pretty nice. Is the UVFS when I
browse the crates directory in the project? Is that literally a file system? Is that what you mean by
FS? It does mean file system, but it's really just like utility functions that operate on the file
system. So it does things like creating symlinks and stuff like that. Yeah, we have a pretty like
unconventional structure
to our Rust projects,
we tend to break...
Easy to browse.
Thank you.
We have a lot of crates,
you can probably see,
it's kind of like we have a lot of...
It's very ceremonious.
I mean you got TOMLs everywhere
you got source
directories everywhere which I think
is I've leaned
into Rust a little bit
and you can appreciate
the verboseness of Rust
because there's
explicit returns of types
there's explicit types
Obviously, there's so much in there.
But I think what it offers you is confidence and safety,
which is probably why you chose it as the reason to change what you've done
and go build these, you know, build Ruff and then build this.
Or sorry, build Ruff and then build this.
That's what I see there.
It's like, that just seems to be the Rust way: more than necessary, by means of confidence.
Yeah, it's been a, I think it's a really good language for building this kind of tooling.
I don't think it's the right language for everything.
Otherwise, I probably wouldn't be building Python tooling.
I'd be building Rust tooling, I guess.
But I think it's a very good language for building this kind of tooling.
Which one's your favorite?
Which one do you like more?
Which programming language?
Python or Rust.
I mean, you're building Python tools to bring more Pythonistas.
They're building them in Rust, so you must like Rust.
Yeah, I mean, I would say I prefer writing Rust to writing Python.
But it's just, well, first of all, like, I write a lot more Rust than I do Python now,
so I've just spent a lot more time in the ecosystem. And Rust is, I think, actually
quite a hard programming language to learn. Because, well, I don't know, it depends a lot
on your background. For me it was a hard programming language to learn because my background
was, you know, I'd done a lot of Python, but also a lot of TypeScript,
like maybe like two years of Java professionally, like a little bit of Go, kind of barely,
and then like not really any Rust.
So I didn't have like a systems programming background,
and I wasn't really used to this idea of thinking about, in Rust you would call it like
ownership, but thinking about like memory management.
And so when I first started writing Rust at my,
my last company, someone else introduced it to the project, a really great engineer.
And but I didn't really, like, know it.
And so every time I went in there, I was trying to get in and out as quickly as possible.
And like, I was like, I was trying to, I was thinking in terms of Python and trying to map that to Rust.
I was like, why is it so hard to like do a comprehension over a hash map?
And basically like nothing made sense to me.
And so it took me like working on Ruff and kind of building something from scratch and kind of banging my head against those concepts
to, I think, really understand,
like, what the language is about.
And the thing is now,
Rust has this mechanism called the borrow checker,
which is like the thing that,
the thing that makes sure that you don't break its rules around how memory works.
And it's not really like writing Python or JavaScript
where you can just like create things and like pass them around and do whatever you
want.
You have to follow certain rules that the compiler enforces.
The thing is for me now,
I don't really think about the borrow checker anymore because I've written a lot of
Rust. So I think in terms of that kind of ownership, like the thing that makes it hard,
eventually that kind of goes away in my experience. It's like I don't, I just write my code now
in the way that I kind of know will work for the borrow checker. And I don't really feel like
I have a lot of overhead from it. So that's like the thing that makes it hard, I think, at least
when you're beginning is the borrow checker. And that kind of dissipates over time. And then it just
becomes, it just becomes a lot easier. I mean, there are still plenty of things I complain about.
like I would like compile times to be faster, for example, et cetera, et cetera.
But I do find that the way I think about programming now is better suited to Rust now.
And so that ends up being kind of an easier experience for me.
Can you give a little bit deeper dive on ownership and borrowing in the vein of memory
and memory safety and memory usage and how it just, once a variable goes out of scope,
it just falls off? Can you speak to that a little bit? Do you know much about that?
Obviously, yeah, I can try. I don't know that I'll give the best
explanations of these things.
Give us your best take then. How about that?
That way, everybody who's thinking about Rust, why,
well, what I've learned about Rust is just that.
It's like, the centerpiece of what makes Rust so safe is this borrowing method.
You can't allow a variable to be mutated.
I think you can only have one.
This is where I was hoping you can fill in, Charlie.
But I think you can have the variable used all the places,
but you can only mutate once or borrow plenty.
It's like, that's the rule
that you have to abide by?
Yeah, I mean, I've never, I don't even know if I've even tried to explain this before,
but so I probably won't do a very good job.
But like, you know, if you're writing like C, for example, like I have not written a lot
of C, but I wrote some C in college.
And so when I wrote C, I had to think a lot about like malloc and free, like allocating memory
and then freeing it and thinking about like, okay, who's like allowed to free this?
And like how do I make sure that like after I free the memory, like no one
is, like, using the object and stuff like that.
In Rust, you don't have to like think that way because the compiler does it for you.
But the tradeoff is the compiler enforces these rules at compile time.
So the rules are things like, okay, like whenever you like initialize some sort of variable,
like someone has to own it.
And when that owner, like maybe it's an, maybe it's like an attribute on an object, like when
that owner goes out of scope, that value goes out of scope.
So if other people need that value and they have to read from it, you have to make sure that
the owner lives basically longer or as long as the things that rely on it.
And so whenever you end up like initializing memory, like a string or something, you need
to think about, okay, who owns this and who's going to need it?
And like how long, how do I make sure that that owner lives long enough?
There are a lot of things in Rust, like Rust enforces all these rules and then you can also
break them if you need to in sort of like special ways.
And Rust also has a lot of escape hatches for kind of, like, different ways of managing the memory.
Like, I actually found that a lot of that stuff made the language more intimidating for me.
Like, when you go read Rust guides, there are certain things like refs, like, RefCell and like Rc and like blah, blah, blah.
And it's interesting because, like, those things exist in some ways to help you.
Like, often if you reach for those, it's actually a sign.
if you're an experienced Rust programmer, this isn't true.
But as a beginner, I found that whenever I was reaching for those,
it was actually a sign that I was doing something wrong.
Like, I was thinking about memory the wrong way.
Because like RefCell, for example,
lets you do like interior mutability,
which is just, it's not super important what it is.
But the idea was, I was like, oh, I can't like,
I need to have, like, write access to this object here,
but it's only letting me read from it.
And I would, like, Google that.
And then I would find, like, RefCell.
And I'd be like, oh, okay.
But actually, I was just like thinking about things incorrectly.
Like someone else should actually be owning it, someone else should actually be doing the mutation. So for me it was
like, I tried to start by, like, once I figured that out, I actually like ignored a lot of things in the language for a long time,
and I was like, I'm going to try and write really dumb Rust. And then I kind of grew to understand those things over time, especially as I hired people into the team who are honestly like much better Rust programmers than me and I could like learn from them. But I do, I do sort of maintain that I think it's kind of an intimidating language to learn for that reason. Like the mental
model around borrowing is just very different, there's a lot of stuff in the language to help
facilitate that because it's like one of the most important concepts. But, you know, the upsides
that you get with Rust are like, it's actually, I think, a little bit hard to appreciate
if, like, I suspect that I don't fully appreciate it because I haven't spent a bunch of time
writing in memory unsafe systems languages.
But like in Rust, it makes it hard basically to make mistakes that otherwise would be very easy.
And the cost is you have to play by those rules.
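To illustrate the point Charlie makes about reaching for RefCell, here is a tiny, hypothetical Rust example (not from Ruff or uv): instead of wrapping state in RefCell to get write access through a shared reference, restructure the code so the caller hands out `&mut`, and the compiler enforces the single-writer rule at compile time.

```rust
// Hypothetical example: the "beginner" version might wrap this state in
// RefCell<Counter> to mutate it through a shared reference. Letting the
// function that writes take `&mut Counter` directly is usually the simpler,
// more idiomatic design, and the borrow checker verifies it statically.
struct Counter {
    count: u32,
}

fn record_hit(counter: &mut Counter) {
    counter.count += 1;
}

fn main() {
    let mut counter = Counter { count: 0 };
    record_hit(&mut counter);
    record_hit(&mut counter);
    println!("hits: {}", counter.count);
}
```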
Yeah.
And when you compile, you get yelled at.
So, you know, good luck on that part there.
And it's all happening at
compilation, not in production necessarily.
I pulled up some of my rules.
I'm not sure if this will, this will just maybe pepper and spice up.
what you've shared here.
So the three rules on ownership are this.
Each value has a single owner.
When the owner goes out of scope,
the value is dropped.
That means it's out of memory.
And then the last one is there can be multiple immutable references
or one mutable reference,
but not both.
Yeah,
the not both is also critical.
Yeah.
So basically,
like,
you can't have people who are allowed to read a value
while someone else is allowed to write to it.
Because it would do what you just said
with the global cache before.
It would corrupt it, essentially.
It would, yeah, it would cause, polluted, or whatever word, your terminology.
Yeah. Yeah. Yeah.
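For readers following along, here is a small textbook-style Rust snippet (not from uv's codebase) that demonstrates the three rules just listed.

```rust
fn main() {
    // Rule 1: each value has a single owner.
    let mut owner = String::from("hello");

    // Rule 3: multiple immutable references are fine at the same time...
    let a = &owner;
    let b = &owner;
    println!("{a} {b}");

    // ...or one mutable reference, but not both at once. Uncommenting the
    // println! below is a compile-time error (E0502), because `a` would still
    // be live while `m` mutates the value.
    let m = &mut owner;
    m.push_str(", world");
    // println!("{a}");

    println!("{owner}");

    // Rule 2: when `owner` goes out of scope at the end of main, the String
    // is dropped and its memory freed automatically; no manual free, no
    // use-after-free.
}
```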
And I think I've,
like you,
I've never written C
and I've really never written
systems
languages beyond my dabbling in
Rust recently, ever.
So my only experience with it
is the knowledge I've gotten from Rust,
but I can appreciate what people
have complained about C
and the reasons why people push back
on C-based network tooling
because that's where there's so much,
you know,
critical nature to being safe, being memory safe.
Yeah.
I can appreciate this ownership model, though.
Yeah, I remember recently, maybe a few months ago, there was a big, you know,
there was a story on Hacker News about, like, I don't know, like, an LLM finds a use after
free vulnerability in some C project.
And I was reading the write-up about it, and they were kind of like tracing through the code
where basically, you know, it's written in C,
and like someone was doing something after a variable had been freed that was like not allowed.
And I was just looking at the code and I was like, how on earth are you supposed to keep track
of this as a programmer?
Like this just looks impossible.
There must be so many of these out there.
And anyway, reading that actually, it had nothing to do,
For me, the experience actually had nothing to do with the LLM being involved.
It was just actually looking at the vulnerability and being like, wow, how would you possibly
keep track of this?
And that's why, anyway, I shouldn't speak to it too much because,
again, like, I haven't written a lot of C, and, like, there are other things people do to try
and mitigate against these things, but I do feel like in Rust, I, it was, I took for granted
for a while that basically we built, like, you know, a couple tools that we get, I don't know,
like, a hundred million installs a month, like, maybe more. Like, we just have, we've built really
popular things. And I don't think we've ever had a single, like, memory-related vulnerability,
which is, which is, or error even, like, that I can think of. And it's, and it's,
It just comes from working within the language, which is cool.
Like, basically, it's easy to take that for granted.
And it's, like, not why I started using Rust at all.
Like, I started using Rust because I wanted to build something fast.
But there are these other benefits that over time I've just come to appreciate it a lot.
Let's focus on Fast because that is UV's selling point.
I assume it was a Ruff selling point to a certain extent.
I remember looking at Ruff and seeing, like, how fast it can lint things compared to other things that are linting things.
which really speaks to developers
because like we don't want to wait around for anything
let alone a linter you know
I'll just turn it off
I'm not going to sit there and wait
how much of the speed that you're getting
with Astral tools, I guess we can take
Ruff and UV specifically,
are just fast
merely because you did it in Rust
instead of Python, and then how much of it is fast
because of some, you know,
smarts you got in there, Charlie, like with your
decisions to be, on that one,
like architecturally
different?
Yeah.
It's a great question.
I think it's pretty nuanced
and will be hard to put numbers on.
So I guess the way I think about it is
we get some baseline from being written in Rust
that's probably faster than like the baseline
that we would get from being in Python.
But Ruff,
even as an example,
has actually gotten like significantly faster over time.
And it was always written in Rust.
Right.
So like there's like a big,
there's a huge delta that you can get
even within just like being in Rust.
You know, like an example is like at one point, we rewrote our parser,
like the thing that takes Python source code and turns it into a syntax tree that we can analyze.
And like, you know, the initial version we used was based on something called a parser generator,
where you like, you kind of like write out your grammar and then it generates the code for you
to a certain degree, which can be really handy if you're working with something
for which you're building a parser, because often you can describe a
parser in terms of the grammar, like, okay, you have the def keyword followed by the
function name, blah, blah, and we ended up rewriting it to use, I guess what would be called
like a handwritten parser. So we got rid of the parser generator and we kind of like wrote it
out ourselves. And it became like way faster, like several times faster. It made Ruff as a whole
like 30 or 40% faster. And so that's like the same, that's all within Rust, right? But it's like
about thinking hard about how you're doing things.
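As a rough illustration of the difference, here is what a tiny handwritten (recursive descent) parser looks like in Rust, as opposed to one generated from a grammar. This is a toy for one sliver of syntax, purely hypothetical and not Ruff's actual parser.

```rust
// Toy handwritten parser for the shape `def <name>():` only.
#[derive(Debug)]
struct FunctionDef {
    name: String,
}

struct Parser<'a> {
    src: &'a str,
    pos: usize,
}

impl<'a> Parser<'a> {
    fn new(src: &'a str) -> Self {
        Parser { src, pos: 0 }
    }

    // Consume an exact token, or report where parsing failed.
    fn eat(&mut self, token: &str) -> Result<(), String> {
        if self.src[self.pos..].starts_with(token) {
            self.pos += token.len();
            Ok(())
        } else {
            Err(format!("expected `{token}` at byte {}", self.pos))
        }
    }

    // Parse an identifier: one or more alphanumeric/underscore characters.
    fn identifier(&mut self) -> Result<String, String> {
        let rest = &self.src[self.pos..];
        let len = rest
            .find(|c: char| !(c.is_ascii_alphanumeric() || c == '_'))
            .unwrap_or(rest.len());
        if len == 0 {
            return Err(format!("expected identifier at byte {}", self.pos));
        }
        self.pos += len;
        Ok(rest[..len].to_string())
    }

    // The grammar rule written out by hand: `def` <name> `():`
    fn function_def(&mut self) -> Result<FunctionDef, String> {
        self.eat("def ")?;
        let name = self.identifier()?;
        self.eat("():")?;
        Ok(FunctionDef { name })
    }
}

fn main() {
    let mut p = Parser::new("def hello():");
    println!("{:?}", p.function_def()); // Ok(FunctionDef { name: "hello" })
}
```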
And, you know, it also varies a lot.
Like, even if you look at Ruff and UV, because in UV, we're doing way more I/O.
And so a lot of being fast, not all, but a good chunk of being fast is trying to be smart
about how you do I/O because we're either, like, downloading things from the network or, like,
writing things to disk or like reading things from the cache and writing them somewhere else.
And so like I talked about the caching trick earlier, like you could also do that in Python.
It would probably be, the program would probably be slower if you wrote the exact
same program in Rust and Python for a bunch of reasons.
But you could also do that to write a faster version, you know, in Python.
Right.
In Ruff, that tends to be less true. Like, in Ruff, there is I/O. We have to read the files. But beyond that, it's a lot more like a compiler. It kind of has to parse all this source code and then figure out how to efficiently traverse it and collect diagnostics and report them back. So, you know, I think Rust gives you a better baseline, a faster baseline. How much better? Twice as good? Three times as good? I don't know. It depends what you're doing. But yeah, over the same Python program. Well, a Python-based linter and a Rust-based linter doing the exact same linting, same file. The exact same stuff. I mean, obviously those are rough numbers. No pun intended. Yeah, I don't know. Probably somewhere around an order of magnitude is what I would expect. So that's significant. Can I keep going? Yeah, for sure.
But I think the other thing that I've come to appreciate with Rust is it kind of gives you the tools that you need to optimize further and think really hard. Like, allocating memory is pretty expensive, in relative terms. And so a lot of what we think about when we try to optimize things is actually, how can we allocate less memory, or allocate memory less frequently, or whatever. And in Python, it's actually just pretty hard to have control over that, because you don't really have any control over, like, the allocator and where memory is getting created or destroyed. It's definitely not staring you in the face.
Like it is in Rust? No. But in Rust, yeah, you're kind of thinking about it. Like, we did this whole thing where, I gave a talk at EuroRust where I went through the exact design here, but when you run UV, we parse a lot of versions. It sounds like a silly thing, but just, like, package versions. For a very complex resolution, that code might run, like, 10 million times or something, just parsing versions. And so we saw that show up in a flame graph when we were profiling, and we redesigned how we represented versions, and we came up with a scheme whereby, like, 95% of versions, something like that, can be represented in a single u64 integer. Like, we encoded them as an integer. So it's like, okay, the first eight bits are the major version number, right? The next eight bits are the minor version, et cetera, et cetera. And that actually had a very measurable speed improvement. And that's not really something that would be intuitive to do, or very easy to do, in Python. Like, in Rust, we have to think about how things are actually represented, for example.
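As a rough illustration of that encoding idea, here is a Python sketch that packs small version components into fixed-width fields of a single integer, so comparing versions becomes a plain integer comparison. The field widths and layout here are made up for the example; uv's real representation is a Rust u64 with a different, more careful layout.

```python
# Illustrative only: pack (major, minor, patch) into one integer using 16-bit
# fields, so comparing versions is a single integer comparison. Not uv's layout.
def pack_version(major: int, minor: int, patch: int) -> int:
    assert all(0 <= part < 2**16 for part in (major, minor, patch))
    return (major << 32) | (minor << 16) | patch

def unpack_version(v: int) -> tuple[int, int, int]:
    return (v >> 32) & 0xFFFF, (v >> 16) & 0xFFFF, v & 0xFFFF

assert pack_version(1, 26, 3) < pack_version(1, 27, 0)   # ordering falls out for free
assert unpack_version(pack_version(2, 0, 5)) == (2, 0, 5)
```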
And so, again, it's like, I guess for me, it's like, it gives you a faster baseline,
then it gives you the tools to care about this stuff if you want to, you know.
But certainly, like, some of the things that we did in UV especially could be taken
and used in other Python package managers written in Python.
And, like, that would be totally, I mean, if they want to do that, I think that would be a good thing. Like, I'm very happy for that to happen. But it ends up being a mix.
So your first product is out.
Yes?
Yes.
PYX it's called. Is that how you pronounce it, PYX?
Yep.
Okay, PYX, a Python native package registry.
Registry.
Yeah, registry.
So you've learned a lot from all these other tools, the open source world. How much have you studied npm, Inc.? A bit. A little bit. So you'd certainly want to avoid some of the problems that they've had.
And I'm curious, the design of PYX and, you know, the ambitions here,
and how you're going to go about building a registry that's commercial.
Yeah, I mean, it's a little bit different because, like, we're, you know, at least right now,
we're like, we're not focused on this being like a public registry.
This is a tool that's designed for companies.
Like, we work with lots of companies that either already pay for registries and we think
we can build something a lot better or need to adopt some sort of internal, you know,
registry as they grow or like have problems that we think we can solve with the registry that
we actually couldn't solve like just with the client. And so for us, it's like how do we go and
help more of those users and like fit the needs of those companies while continuing to build
out, like, the open source. So, you know, commercially, for us, we're not looking to charge money for, like, Ruff or UV. We view that as our open source tooling, which we want to remain free forever, very permissively licensed.
And what we're trying to do instead is kind of build these paid services that are
complementary and are sort of a natural evolution if you're already using our open source
tool.
So like I said, we talked to tons of companies who use UV.
They buy registries.
So we're going to go offer them a registry.
And we think it's going to be better in like a variety of different ways than the things
that are out there already.
So, like, a lot of why we built this thing, and are building this thing, comes from, like I said, or as I alluded to, problems that users have brought us that we can't really solve, like, in the open source.
Like maybe people will come and they're using some private registry
and it behaves in a way that's like incorrect.
And we're like, okay, we actually can't help you anymore because like that's a problem
with like the software.
And so just being able to offer them something different,
I think is one manifestation of that.
But the other is, like, sometimes people want to install packages that are broken in different ways, or maybe there aren't builds available for their Python version, or maybe there aren't builds available for their GPU or something like that. And in that case, again, that's not something that we can solve with the package manager, because to solve that problem, we actually need, like, a server that has artifacts that we can manage.
And so, again, it's trying to look at these problems where we've spent a bunch of time in the issue tracker trying to help people and ultimately concluded that there's only so much we can do, and saying, well, what if we had our own registry, our own server? Couldn't we do something pretty different? That makes tons of sense. Is there a world that you can envision in which Astral would want to host a public registry, to make UV better in some sort of way that you can't do as just a client? Maybe. I guess anything's possible. It's not really in our current mission for the problems we're trying to solve here. And I guess what I would prefer to see happen is, like, PyPI, which is sort of the Python equivalent of npm. It's the public registry that people use by default. It's run by
the Python Software Foundation, so it's owned by a nonprofit. I want to make sure that that
has longevity and continues to evolve and be stable because I think it's a really, like, I'm happy
for that to be kind of like the public system of record and for people publishing packages and
the way that, you know, the way that most people are maybe installing packages.
But for us, like, we mirror that stuff in, for example. So you can use PYX to install things from PyPI, and we're not necessarily focused on, like, the public serving. We're more focused on what's the experience we can build around the raw artifact storage.
And are there, also are there, like, other things that we can expand to over time that
aren't a registry, but are related to hosting and serving artifacts.
Like we've also thought about like code execution, for example, like maybe you should be able to execute code through, you know, through our tooling and maybe that would have some tie-ins with the registry too.
So for us, it's not like, you know, we're not like a registry company.
Like we're like a company that's trying to build this whole Python developer experience and the registry is kind of like the first extension into building something beyond like a command line tool.
So it's like a big, a big new thing for us.
But the thing I think is most exciting is just that we can actually solve a bunch of new problems for users that we basically just couldn't solve before.
And, you know, I think one of the things is interesting about building in Python is, like,
the user base is incredibly diverse.
Like, what I like to say is, like, every company on Earth is using Python for something
within some margin of error, but, like, the things they're doing can be super different.
Like, we have, you know, we talk to users who have, like, 15-million-line code bases that are running web applications that are, like, all Python.
And then there's like AI and ML, everything that's happening with GPUs.
That's like super different.
Even if you look at a company like OpenAI, the things that they're doing with GPUs are even super different. Like, half the company is, like, research, and half the company is, like, applied, like ChatGPT, and that's all super different.
So, you know, for us it's also about trying to figure out like where can we have the
biggest impact, like which of those user groups and what can we build for those
groups?
Because the things they need are pretty different.
And so it's always been something that's on my mind: how do we serve, like, all those groups, and how do we figure out where we can have the biggest impact?
And the registry is sort of another example of that where a lot of what we're building there,
not all of it, but a chunk of it, is actually focused on GPUs and, like, people trying to install hardware-accelerated packages, basically things that involve CUDA or, like, NVIDIA GPUs.
And that's, like, relevant to some people, but not to others.
But for us, it's kind of about figuring out, like, where can we have the biggest impact
with this, like, now that we have a server where we can host artifacts, like, where can we
have the biggest impact amongst all these different people using Python?
What does it mean to be GPU-aware, then? Like, why is that such a... take us through some of the details of why that's a problem. How does it manifest outside of, was it, PYX? And then how do you solve that?
Yeah.
So it's a problem for a couple of reasons. So, like, in Python, Python actually has very good support for building and distributing native code, like native binaries. And it's part of why it's been such a success; it's both a result of and a reason why Python has been such a big part of data science. Like, if you think about installing NumPy or something, you know, most of that is not Python. It's a compiled binary, native code that's compiled for your machine. And when you go to install NumPy, you don't actually have to build that from source. You don't have to download the compiler and compile everything yourself. What happens is, when NumPy does a release, they publish, I promise this loops back to GPUs, but when NumPy does a release, they publish builds for Linux, Windows, macOS, like, macOS M1, M2, macOS Intel x86, all the different Python versions. So they pre-build, and they put them on the registry. And the package manager knows how to look at your machine and figure out which NumPy build is right for your machine. So that's very central to Python. It's where a lot of the complexity in packaging comes from, but it's also kind of a superpower, because you can build, like, UV, you can install UV, you can pip install UV, and that's just a Rust binary that we basically turn into a Python package and publish. And you don't have to build it from source. Like, the Python standards let us do that.
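If you want to poke at the machinery he's describing, the third-party `packaging` library (the helper pip itself uses) can enumerate the wheel tags your interpreter accepts; a package manager picks the published build whose tag matches. A quick, hedged example:

```python
# Print a few of the wheel tags this interpreter/platform will accept.
# Requires the third-party "packaging" package (pip install packaging).
from packaging.tags import sys_tags

for tag in list(sys_tags())[:5]:
    print(tag)
# On a Linux x86-64 box with CPython 3.12 you'd see tags roughly like
# cp312-cp312-manylinux_2_XX_x86_64 (exact output depends on your machine).
```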
GPUs make things harder, in part because of some of the gaps in the standards. So, in the standards, there's no way to express, like, I just built PyTorch and it was built for CUDA 12.8. Or, I just built PyTorch and it was built for CUDA 12.6. Or, I just built PyTorch and it was built for, like, AMD ROCm. These are just different kinds of GPUs. There's no way to actually express that. So when they build PyTorch, the PyTorch team builds PyTorch for a bunch of different architectures, and they basically have to hack in the way that they do this. They have to just find ways to encode it that aren't really codified by standards, which leads to a lot of complexity. Like, there's no way in pip, for example, let's say that you have an NVIDIA GPU on your machine, there's no way to do, like, pip install torch and get the right torch version based on your GPU. There's just nothing in the standards that would really allow them to do that.
So that's kind of the problem that we want to solve. What they do is they publish the different versions of Torch, and each architecture basically gets its own registry. Wow. So they create an index for CUDA 12.8, an index for CUDA 12.6. And on the 12.8 index, they publish the builds that they built for CUDA 12.8. And each library also solves this in a different way. So, like, JAX, which is a Google library for working with GPUs, does something totally different. So, yeah, there's been a lot of experimentation around trying to make this work. But it ends up being pretty difficult.
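Concretely, the workaround today looks something like the sketch below: you (or your tool) pick the PyTorch index that matches your accelerator and install from it. The cu128/cu126/cpu index URLs follow PyTorch's published convention; the selection logic is a stand-in for illustration, not how PYX or UV actually implements this.

```python
# Illustrative only: the "one index per accelerator" workaround PyTorch uses today.
# Index URLs follow PyTorch's convention (https://download.pytorch.org/whl/<variant>);
# the mapping and helper here are assumptions for the example, not Astral's code.
PYTORCH_INDEXES = {
    "cu128": "https://download.pytorch.org/whl/cu128",   # CUDA 12.8 builds
    "cu126": "https://download.pytorch.org/whl/cu126",   # CUDA 12.6 builds
    "cpu":   "https://download.pytorch.org/whl/cpu",     # CPU-only builds
}

def pip_command(variant: str) -> str:
    """Return the pip invocation you'd run by hand for a given hardware variant."""
    return f"pip install torch --index-url {PYTORCH_INDEXES[variant]}"

print(pip_command("cu128"))
# pip install torch --index-url https://download.pytorch.org/whl/cu128
```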
And there's sort of a second-order problem, which is that there are packages that build against PyTorch. So it's not just that PyTorch needs a GPU. There are also other packages like vLLM, an extremely popular piece of software for actually serving language models. If you actually want to run an endpoint that uses a GPU to do predictions and give back data, you would use vLLM for that. That has PyTorch as a dependency. And when they build vLLM, they not only have to encode the GPU version, but also the PyTorch version. So it gets very complicated very quickly. And that's kind of the complexity that we're trying to tame with the registries.
So like GPU aware means a few things.
One, it means that UV the client can actually figure out like what GPU you have on your machine.
And then it can map that to basically the right endpoints in PYX.
So in PYX, we have like curated distributions based on the hardware.
And we go through and we not only take like PyTorch, but we also like pre-build a lot of other stuff that people kind of only use with PyTorch.
And we make sure all the versions are compatible and all the metadata is correct.
So we do what I would call the sort of like non-glorious work of like making sure that all the things we put on there, it's like a well curated distribution.
And in UV you can just, like, install; we'll look at the GPU on your machine and we'll pull in the right things from PYX. So that's one tack that we're taking on this problem, which is, we just want people to be able to run on a machine with an NVIDIA GPU or an AMD GPU or whatever, and just run, like, you know, uv add torch, vLLM, FlashAttention, and not have to think about building from source, not have to think about where it's coming from, not have to think about making sure that the CUDA versions are all lined up. That's part of the complexity that we're trying to solve. Is that ability to see the GPU, particularly, is that a Rust-level thing that UV inherits?
No, it's, I mean, you could do that anywhere.
And a thing we're trying to do in parallel, actually, is standardize a lot of this stuff, like, actually evolve the standards so that these things can be encoded. We've been working on that with, like, the NVIDIA team and the PyTorch team.
It will take a long time.
It may also never get accepted.
Who knows?
Standards are hard.
Like we're working on that.
But that would also involve similarly like trying to detect and understand like what GPU the user has installed.
the thing that's a little different
I think like the thing
that is hard for others to do
is like because we work
on the package manager and the registry
we can actually encode that contract
of like okay this is the user's GPU
so like this is where you should be getting
the packages for example or like this is like
constraints that the server needs to understand
so like we can work on
we can keep those APIs in sync
and make the experience really good
ultimately hopefully this stuff gets standardized
but until it does, there's, like... I think it's hard to appreciate how big PyTorch is. Like, we spend so much time helping people install PyTorch. Not in terms of size? Not in terms of, like, the wheel or the artifact, but in terms of how big the user base is and the community is. Like, we just spend a lot of time trying to help people install this stuff. And it's gotten a lot better. And I think the PyTorch team has actually done quite a good job in the face of these sorts of gaps in the standards and, like, having to trailblaze a lot of this stuff.
But for us, it's like we're kind of like user obsessed.
And so it's like I hate seeing people struggle to install this stuff and I want to find
ways to fix it.
And so for us, some of that's in the UV client and some of that's in the registry.
So PYX, your first commercial product, but not necessarily like the future of Astral
because you're not a registry company.
We don't want to be necessarily a registry company.
But it's a good... It's a revenue stream.
It's another opportunity to make UV better for your customers who need it.
but maybe just one of your products
down the road, right?
Like, this is one thing we do.
We have other things we do as well
that we're selling,
making our investors happy.
Without making your investors mad,
like what are some of your other ideas?
You know,
they don't have to like spill all the tea,
but like what other aspects of Python tooling
could you tackle,
whether in open source,
I know you've got your type checker going on
or in the product side.
Yeah, yeah.
Yeah, I mean, in the open source, there's no shortage of things that people have asked us to do. And my philosophy, which we'll obviously stick to as long as we can, is that if there's a problem that we can solve in the open source, then we should solve it in the open source. Like, not with a commercial product. Like, again, we'll see. We'll get tested on that over time.
but like I would like the incentive structure of the company to be such that we're very much
incentivized to build things in the open source and grow the open source and that the paid
and the hosted products represent real value that can't go in the open source for like structural
reasons. Like, you know, in the registry case, it's like, okay, security, compliance, all these things that don't really make sense to put in the open source in that way, right? So
there's no shortage of things that people have asked us to build in the open source I think
the things that we've been, the things that we are building now, which I'll just like
maybe mention again, is we're building a type checker and a language server. Probably,
like very comfortably, the most technically difficult project that we've worked on. It's very
hard. We're looking to do the beta release for that. I can't really say the date, but let's say,
you know, within the next few months. So that's something that we've been asked about forever,
basically: a type checker. The language server you're referring to, or the type checker? Both. Both, yeah. Okay. Because the language server is out there, right? They're all out there. Yeah, we did an alpha release. We've done an alpha release, and we do have companies using it in production. We just haven't gotten to the point where we recommend it for use. Okay, so the type checker slash language server, it's one... Yeah, they're the same thing. Gotcha. Yeah. And it's a little bit like TypeScript in the sense that it's a command-line tool, but you can also run it as a language server.
Gotcha.
Yeah.
The other things that we get asked about a lot in the open source are testing, like a test runner.
That's kind of an interesting one for me.
A lot of people actually really like pytest, which is a very common, very popular, probably the most popular Python test runner. And so I think I would need to think hard about why, like, could we build something that's a significant improvement over pytest?
Like, it has to meet some bar of, like, it has to be much better, hopefully, than alternatives.
Otherwise, why would anyone use it or switch to it?
And the other thing we get asked about a lot, although it's a bit more niche, is documentation tooling, which is not super glorious, but it is, again, something that Rust does very well.
It's also highly appreciated by the Python community, right?
It is.
It is, yeah.
I think the thing is, like, the thing that's a little challenging is that the user base
for that is slightly smaller because it's mostly oriented around maintainers and people
publish libraries.
Most people are using libraries and not writing libraries.
So, but again, this is all just about prioritization. Obviously, we would do everything in the world if we could, but ultimately everything has to be prioritized.
There's sort of a thing that is a little bit more pie in the sky, that, like, we do not have any plans to do this, but it would be cool if we could, which is our own Python runtime. So, like, actually trying to make Python itself faster or different in different ways. Because right now, we actually do our own builds of Python that we distribute. And it has a bunch of modifications versus CPython, but those modifications are really in the build process. And they're all motivated by this same idea, which is we want to be able to pre-build Python for you that you can then unzip and run. Because CPython itself, like CPython main, doesn't really support that, for technical reasons. Basically, a bunch of absolute paths get encoded in the binary. Is it different, like, is it statically linked, or is it something like that? We do statically link things, but ultimately the main problem is that CPython, when you build it, at least on Linux and macOS, encodes a bunch of absolute paths. Okay. And so we have a project called Python Build Standalone, where the core idea is: we build it, and then you can literally just download it, unzip it, and run it on any machine. It makes it relocatable. You can move it around your machine, et cetera, et cetera. So it's basically patches and changes to CPython to enable that.
But we don't change anything about like runtime behavior.
And so, again, we do not have plans to do this, which I have to be like really, really explicit in saying.
If you were going to, though, however.
What a big list that you just described, though.
I mean, that to me would be like just overwhelmingly daunting.
Well, this makes me think about the dreaded GIL, right? Like, I would imagine if you did do this runtime, it would not be written in Python or C, because... Well, the GIL is potentially not long for this world. Yeah. I don't know if you've been following that. Oh, okay. But, yeah. They're free-threaded now, you know. Yeah, free. Yeah, you're not supposed to call it no-GIL. No-GIL was the original name for it, but now it's called free-threaded. The idea, so in 3.13, or before 3.13, they accepted, sort of provisionally accepted, a proposal to remove the GIL. And in 3.13, there's a separate build of Python that has no GIL. And then the idea is eventually that will become the default.
But right now, there's kind of a split world.
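If you're curious which world your interpreter is in, recent CPython exposes this at runtime. A small, hedged example (these checks apply to CPython 3.13+, and the underscore-prefixed function is technically private API):

```python
# Report whether this interpreter is a free-threaded build and whether the GIL
# is actually enabled right now (CPython 3.13+; sys._is_gil_enabled is private API).
import sys
import sysconfig

free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {free_threaded_build}, GIL currently enabled: {gil_enabled}")
```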
How does that work with legacy Python out there, though? You know, that's good for the future, but what about the past? As in, Python code that is written in a way such that it assumes the GIL exists? Well, maybe take this as my imposter syndrome here, not writing a lot of Python, but if I'm writing Python, I suppose, against a version that supports the no-GIL mode, then that's probably just fine. But if I've got code that's legacy, that's written in, you know, Python, but against older versions, am I still at risk of the issue of the GIL, I suppose? You possibly are, and that's part of why they're doing this incremental transition. Whereby also, if you try to install, like, a NumPy, for example, again just going back to NumPy, something that has native code, they have to build a special variant for no-GIL, for free-threading. So basically all the libraries have to add support for this and, like, audit their code and make sure that they work in the no-GIL world. So the extension modules, especially, have to explicitly go through a process of adding support for this.
That's also why it was, like, provisional acceptance and kind of, like, something that got
staged in.
But I guess a cool thing for us is, like, we've made it really easy to install the no-GIL version. Not the free-threaded version of Python? Because I typically like that better, personally. I know, me too. But, you know, it doesn't always work out that way. It's a spade called a spade, right? There's no GIL. I didn't really participate too much in that conversation, I guess, but I think the concern was that a lot of people don't know what the GIL is, and so they wanted something that was descriptive, also in a positive way rather than a negative, not as in negative sentiment, but just in terms of describing what it is rather than describing what it isn't. In a way, like, "free-threaded" just reminds me of, you know, "Free Bird," you know? Yeah, I haven't thought of it that way, but I guess I could. You used to hold a lighter up at the concert; now it's a phone. Yeah, but now everybody holds their phone up anyways.
Yeah, yeah, yeah. So, yeah, we actually have this guy on our team named Carl Meyer, who's super amazing. And before this he worked at Meta. Meta has a fork of CPython called Cinder, and it's sort of a performance-oriented fork. It is open source. But it's not really open source in the sense that people run it outside of Meta; it's just open source in the sense that the code is open source and they use it as a reference. But Cinder was this performance-oriented fork largely, if I understand correctly, built with Instagram in mind, because Instagram is all Python. Instagram is this big, obviously, very high-volume application. And so they did a bunch of things in Cinder to try and make Python faster, or I guess make Python faster, not try. I think they did make Python faster. And it had some very interesting ideas. They tried to upstream some of them. So, for example, Cinder added lazy imports. So in Python, imports are all eager. When you run, like, import whatever, at the time that Python interprets that import statement, it goes in and parses all the code and actually executes all the code. And that cascades down. So basically, when you start up your application, if all your imports are at the top level, you kind of import the world. And that can lead to slow cold starts and a variety of other problems.
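Cinder's lazy imports were an interpreter-level feature, but the manual version of the same idea is a common Python pattern today: defer a heavy import until the code path that actually needs it runs. A small sketch (the module name is just an example):

```python
# Manual "lazy import" pattern: the heavy dependency is only imported the first
# time handle() runs, so program startup doesn't pay for it.
import importlib.util
import sys

def handle(rows):
    import numpy as np  # deferred import; cached in sys.modules after the first call
    return float(np.mean(rows))

# The standard library also has a hook for module-level laziness (documented recipe):
def lazy_import(name: str):
    """Return a module whose real import is deferred until first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module
```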
And so Cinder added support for lazy imports. And then they tried to get that upstreamed a few times, and it failed. But we have a person on our team, basically, who worked on Cinder, and he worked on Cinder's JIT; they did a bunch of interesting stuff to try and make Python's JIT compiler faster. And so basically we have people on the team who understand how to work on the CPython interpreter, but I don't think that we're at the point where we
like, I don't know. So you're saying there's a chance. I'm saying there's a chance. No, no, I don't really want to do it. I mean, for me, it's like, I sort of look at these things from a few perspectives. One is, it's good to be a little bit naive, because otherwise, if you're not a little bit naive, you just won't try anything hard, because it's just too easy to say, well, a bunch of people have tried it, of course it can't be done. Right. So I like being a little naive. At the same time, you also have to have some humility and be like, okay, a bunch of people have tried this, why didn't they succeed, and why could you succeed where they didn't? So you kind of have to thread the needle, I think, a little bit, between being incredibly arrogant about being able to solve any problem and being a huge naysayer who just says, well, that can't possibly be made better. Especially while there's lower-hanging fruit. Like, you have a list of things that you're currently prioritizing, which arguably, I assume you would argue, is higher value
because that one's way up high. And so maybe you could do it,
maybe you should do it, but clearly
right now you're focused on other things.
Yeah. Yeah, I mean, I think...
Maybe when you raise your next round.
Yeah.
Well, I come back to the why. I mean, on this subject, I come back to the why. So you mentioned being naive. I think that's true. There have been some efforts, some prior art, to, you know, do some version of a runtime. I think there's even, like, RustPython out there. I think Dropbox had one that made it faster, but they abandoned it. So, maybe sadly to say it like this, there are dead bodies out there that you have to sort of navigate, like, maybe we shouldn't go there because it's already been tried before.
But coming back to the why, and you mentioned you want to build in the open, you want to incentivize your company to build in the open, so I just come back to that why. Like, hypothesize with us for a moment. If you did do something like this, what would be the why? How would you quantify the why from a business owner, CEO, founder, investors, all the things? How would you prioritize or think about the why of writing a runtime for Python in Rust?
Right.
I think that's probably one of the reasons that I don't know that we'd do it. Well, so, if I think generically about that question in terms of the business strategy: the open source, ignoring the Python runtime for a second and just thinking about our existing tools, the existing tools that we build give us a couple of things. So one is they give us a lot of distribution. So, you know, we're building a registry
and like we have all these people that use UV. And so we have kind of a natural audience of people
who maybe want to try the registry. Also intimately linked to that is also brand. Like I actually
think a lot about brand. I think brand is maybe a little bit of like a, everything around like
developer marketing and brand is kind of like seen as like slightly dirty if you're like an engineer
because it's not technical. But I actually think brand is incredibly important. And for us, it's like, you know, we built Ruff. And then when we came out with UV, people were like, oh, they built Ruff, so this might be good, I should give it a shot. And then we see that kind of,
I view that as kind of compounding. Like I want to like earn users trust and prove that we can build
great things. And so that all accumulates in brand.
which again ties back to distribution it's like if we build a registry people hopefully think okay that
will probably be good because these people built these other good things the other things it gives us
are i think we have interesting like technical advantages across the stack in a few different ways
so like with the registry for example i actually want to like pull in our type checker in some
interesting ways. Like, I want to be able to do things like detecting semver incompatibility within the registry: understanding, if a new version of a package comes out, can you upgrade, or are there breaking changes that affect you? Or, like, security scanning. Like, if a vulnerability comes out in a package and we know that you're using it, and we have a really good understanding of your code, we might actually be able to tell you whether you're affected or not. So, like,
for me, it's also about trying to compound these, like, horizontal, like, technical advantages
across the tools by, like, bringing those things together. I think for the
runtime, it's like, if you think about the why, the why would be like user impact.
And, but like, it doesn't necessarily enable us to do a lot of things that we otherwise
couldn't do, at least right now, unless we built it with specific technical ideas in mind.
Like, maybe we decided, okay, I'm just making this up on the spot.
It's really not something I've thought that much about.
But it's like, okay, we want to build a version of Python.
It's like, like, maybe we really want to focus on like WASM or something.
I don't know.
Maybe we're like, we have use cases where we want, like, WASM, like, we want to run
this like on the edge, like in the CDN or something. And we're like, we're going to go all in
on like that idea. So we'd have, I think we'd have to have some idea that basically
enables us to do and offer things that we otherwise couldn't offer. Distribution is a really good
reason. It is a good reason. I think. More UV out there means more Python out there. More
Python means more UV users. Yeah. I mean, I do think also like if I, I, not to be too,
like, full of, you know, full of myself, but it's like, I do have to think a little bit too
about, like, how do we make sure that Python keeps growing?
Because that's important to, like, that's important to us.
And so if there are, like, big existential problems in Python, like, if we thought
packaging was an existential problem or something, then it's like, okay, let's try to solve
packaging so that Python keeps growing, right?
Like, so there is a little bit of that, too.
I don't think, like, the runtime is, I'm not suggesting that the runtime is in that
position.
I'm just saying that, like, you know, we do, there are benefits for us in helping grow Python.
And so if there are things we think we can do
to help grow Python, like those could be worth doing
even if they're not connected to like a
concrete product offering that we charge
money for. Again, I'm not suggesting that the interpreter or the runtime is in a bad state. I think it's actually in a very good state. Or, I'm just hypothesizing about, I think, you know, learning from the past to save the future from itself. I mean, we're just doing what Rust is doing. I'm just saying, in terms of things we choose to work
on and why they're worth working on, one of them
one of the considerations is just like
how do we help grow Python and like make Python
more popular and bring more
people into Python. One more question on this. Maybe
maybe too far, but I don't think so.
You tell us, Charlie. And the reason why, so yeah, I think, so if you're concerned with
somebody being offended because we're hypothesizing about a future that doesn't exist,
and if it should exist, because you have business interest and vested interest in
Python growing, I think that's silly to get upset with that. So if that's you,
chill out for a second. Do what you can, given how you've improved other Python tool
and to the degree that you have improved that tooling,
if you did undertake a runtime,
what benefits would you,
do you think you could deliver
just by nature of what you've done already with Rust
tooling for Python?
Besides, just simply speed,
like enumerate specifics if you could.
I think there are things, again,
we're just speculating here.
Just speculating.
I think there are things that would be interesting to consider changing around, this sounds small, but sort of how environment discovery works. So, like, right now, basically, I guess the way I'd put it is: take things that actually happen in uv run and see if you could make them part of the runtime, is maybe a way to put it. So if you're in a project and you run Python, run that in the context of the project, as opposed to just trying to find some global Python interpreter. Basically, trying to make Python, the runtime, more environment-aware and project-aware, I think, would be something that's kind of interesting that we could do. So, trying to smooth out some of what we see as the traps that users run into.
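For a sense of what "project-aware" could mean in practice, here is a purely hypothetical Python sketch of the kind of discovery uv run does before launching an interpreter: walk up from the current directory looking for a project environment. This is not Astral's implementation, just an illustration of the idea.

```python
# Hypothetical sketch: find the nearest project virtual environment by walking up
# the directory tree, the way a project-aware runtime might before executing code.
from pathlib import Path

def find_project_venv(start: Path | None = None) -> Path | None:
    here = (start or Path.cwd()).resolve()
    for directory in [here, *here.parents]:
        candidate = directory / ".venv"
        if (candidate / "pyvenv.cfg").exists():  # marker file every venv contains
            return candidate
    return None  # fall back to whatever global interpreter is on PATH

venv = find_project_venv()
print(f"project environment: {venv}" if venv else "no project environment found")
```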
I mean, I think we would, I'm sure we would initially be drawn in by performance if we thought
we had ideas that we could pursue there.
But again, like, there are plenty of people working on CPython runtime performance.
And, you know, I think there's maybe actually a slightly different thing, the thing that Bun is doing, which is maybe interesting, but also maybe potentially a trap; it kind of remains to be seen. Not for users, but just in general: they're building out a pretty large standard library. I guess standard library is probably the right word. So in their standard library, and I'll probably get a bunch of this stuff wrong, I think they have, like, an S3 client. I think they have a Redis client. I think they have a database client that understands, like, MySQL and SQLite, and they have all this stuff. So the
idea is like you can just like work with Bun and like import all those things and you don't
have to worry about going and finding third party implementations. And they're all like natively
implemented. I think there is something interesting there. There are absolutely downsides to that. But I think there's something interesting to it, which is kind of providing, like,
a trusted, if you can build a strong enough brand and build good enough
implementations, providing kind of trusted implementations for all these things that people need
in building modern applications, I think is kind of interesting.
But again, I don't know that we would ever do that.
But the idea of trying to provide really good implementations of all these things that
people commonly need, first of all, to provide better implementations that are faster
and also reduce the surface area of things they depend on, I think is kind of interesting.
You can potentially do that, by the way, without actually forking the C-Python runtime.
But I do think it's kind of...
I think it's an interesting direction.
Let's take a moment to reflect on brand because you mentioned it as something that you think is important
and that a lot of software people don't necessarily think about.
I'm a fan of yours.
I think, specifically speaking, to astral.sh. I think it's a nicely designed website. I appreciate that you have bucked the trend of following Linear down the road of, like, dark mode everything. I like that there's some brightness going on. Talk to us about brand, how you think about it, why it's important. By the way, the font is really sweet. I'm not sure what it is, but your 'y' is really cool looking, and the 'g's are nice too.
Thank you.
You've got some taste, in my opinion,
or at least we share tastes, you know?
There's no accounting for taste.
Yeah, whether they're good or not.
Yeah, we have similar tastes at least.
So I think they're good because they align with mine.
But yeah, talk about brand, why it's important,
how you think about it,
hack it because, in my opinion, that's another thing that's setting UV and astral apart.
And it's kind of one of those things that you don't think about right away. It just is there.
And so your thoughts.
Yeah, definitely. I mean, I think like, I don't really explicitly think in terms of like developer
marketing. But like when I first started, like when I first released UV, I was trying to just
explain to people as quickly as possible, as succinctly as possible why they should care about
this project. And so the README, for me, like, the top of the fold of the README had to capture that. It had to capture, like, why is this interesting? And I remember when I did the launch, too, I had a little graph that was just a benchmark graph of, like, UV or Ruff versus a bunch of other things. And I think that graph was very important for conveying to people
the significance of like what's happening.
So like for for communicating to developers,
communicating to anyone really, it doesn't matter if they're like developers or not.
it's like, you have to assume that you get basically no attention. Like, if you write a blog post, for example, you have to be thinking in terms of: most people might read the headline, they might read the TL;DR, and they might look at the one image at the top, but they probably won't read any of the text. Some people will, and it's important to care a lot about what it says, but you have to be thinking in terms of how do I explain to people why they should care about this as quickly as possible. So, even just by thinking about that, I think you'll, as an engineer, probably be doing more than a lot of people do.
Also, by the way, caveat before we get into the actual interesting stuff, you definitely don't
have to care about any of this if your goal is not to make your project very popular.
Like, you can just build stuff and publish it and not care about this at all.
And I think that's totally cool.
But if your goal is to get people to use your thing and care and then follow along, you know,
I think this stuff matters a lot.
A lot of how I think about brand, too, like, it's very holistic.
So, like, when people come to our repo and, like, they file an issue or, like, they ask a question in Discord or something, like, I would always view those moments as, like, I have a moment.
I have an opportunity here to, like, make a friend or, like, win a fan or something or win someone who's going to support the project.
And so, like, you try to compound that over time.
And so when people would come in, first of all, when I was starting the project and anyone would come file an issue, I would just be so excited that they cared
at all. And I would kind of just focus on how do I give these people a great experience? And over
time, it becomes, even if I'm going to say no to what they're asking for, how do I make sure
that they have a good experience? Like, as in they feel heard and respected and they understand why
I said no. And we've just focused on that a lot. Like, we try to be really responsive in the
open source, and we try to like give people a good experience. And it takes a very long time for that
to have an effect, but like compounding over years, I think it's had a huge effect on like
our open source community and like how people view the project. And you, you have to take a
very long-term view towards a lot of these things. But again, that all connects, for me, to brand. Like, brand is not just the visual identity of the company; it's also what people associate with it. And I want people to associate with us what I hope is true, which is that we want to be good, responsible, approachable open source maintainers
who are demonstrating, like, responsible stewardship for these projects and that people can trust us.
It takes a lot to keep this up, too, like, doing the open source, like, even just trying to be
really responsive in the open source is, like, a huge investment of our time.
Like, we could probably have the whole company, like, just maintain the open source and build
nothing new, and that would still be, like, full-time work for us.
But it's also, like, with everything we release, we want to, like, maintain the quality bar that we have.
You know, I think maybe actually a good example of something that we've tried to put into our brand is, we try to fix things really quickly.
And so even if we release something that's broken, we'll fix it really quickly.
And that actually, I think, has become a really helpful part of our brand because if something's
broken, we fix it quickly.
And then it gives people more trust that like if something breaks, we will fix it quickly.
If we ship something that's not finished, you know, we'll fix it quickly.
So I think I just take kind of a pretty long-term view towards a lot of these things.
And I try and think really hard about like, if I,
were a user, what would I need to hear? What would I need in order to use this thing? On the visual
brand, we worked with, you know, some designers to, like, do the initial branding. And I actually
showed them a bunch of examples of sites I didn't want to be like. I didn't just show them
positive examples of things I thought were interesting. I also showed, and I obviously won't name any of these companies, but I just showed them companies where I was like, these, to me, feel very derivative. Like, Vercel and Linear have really amazing brands and design. Yeah. But then so many companies have tried to just be like Vercel or Linear. Right. And I was like, I actually want to do something that's pretty different. It should still feel professional and well done, but I want it to be a little bit distinctive. And so it's very much intentional. I think that it looks a little bit different than a lot of other developer sites.
Well, that's one of the things that grabbed me. Because as somebody who,
who's steeped in the industry as Adam and I are,
and we see lots of company websites,
lots of open source product websites.
You know, there is this, well, it used to be the old Bootstrap effect, right? Like, you could always tell when you saw a Bootstrap website. Well, now you can tell when you see a VC-backed open source website, because they're basically derivative. I mean, not all of them are. Linear, I think, started it all. Vercel does have amazing work done.
And so a lot of that is just like,
well, those look good. I'm going to copy that. And I got no problem with that. Like,
if there's more important stuff to do, fine, go ahead and do that. But I've been waiting
for a turn in the trend. Like, who's going to come out and be different? So I'm just applauding
you for that reason. Yeah, no, I appreciate it. Yeah. I look like, I don't know,
like Sentry is kind of an example here too where it's like they have a very different brand.
Yeah, Century does have a very distinctive brand. And like some people don't like it. Some people
love it. Like, I mean, like Post-Tog would maybe be another example. They have like a totally
crazy brand. They do. Their website.
You see their redesign?
The website is crazy right now.
I did.
It's like an inbox.
But they really lean into it.
It's like it's everywhere.
Yeah.
It's like they're like real world advertising.
Like everything is like just crazy.
Like you said it like love it or hate it.
Like those are too strong emotions.
Right.
Like you're going to remember it.
Like, I knew exactly what you were saying when you said Sentry and when you said PostHog.
Right.
And whereas there's lots of them where I'm like, I can't remember what that brand was.
But anyways.
Yeah.
I mean, I think it's been interesting for us too to like try to figure out how to kind of like
connect our open source to the brand because there's like I think there are actually a bunch of
people who don't realize that like our tools are connected or that there's like a company
behind this work and so it's been kind of interesting for us to think about like how to how to
like, communicate that. Like, over time it happens, but we definitely have people who are surprised to learn that, like, Ruff and UV are related, for example, or don't know that there's a company behind these things. And so those are sort of separate
challenges, but that comes up too. One thing I go back to, and it just stems from the things y'all are saying, is, like, most of those websites that are not to be named, they're beautiful, but they generally suck in some way. And the reason why I think they suck in some way, really the main thing, is, if you land on it, it's really hard to understand what they do. It's some sort of pie-in-the-sky marketing pitch rather than something as succinct and just, like, compressed as "next-gen Python tooling." And I think that's the promise you're delivering.
I think that's the thing that I think is challenging with a lot of these markets.
And I evaluate a lot of them too because we work with a lot of different brands to by way of
understanding who they are so that we can take their message and help them communicate
to our audience in a way that isn't marketing, but it's a story of who they are,
why they built what they built
who uses it and how they benefit from it
how they the audience may also benefit from it too
too often I'm just like lost
in my journey my personal journey
because that's my job: who are you, why do you exist, why should listeners care, and how can I help you tell our audience
in a way that respects them as developers
respects them in terms of their time
and helps them truly learn
and be educated about that tool
that company that thing
that service. Yeah, I spend a lot of time just figuring out, like, what do you do? And your homepage, not yours, but the proverbial you, yeah, often just misses the mark. It's like, you know, bento box this and sliding thing there, and it looks really beautiful, but it's like, can you please just show me the tool? How does it work? How am I going to use it? I think that's the challenge. I mean, I think the other thing
that I found is, I can't really outsource, like, voice. All the copy on the site, I did myself, because I just think a lot about, I guess just because I've spent my whole career as an engineer, and I know what it's like to land on those sites, and I know what it's like when people speak, especially to engineers or technical audiences, in a way that feels authentic and in a way that feels inauthentic. And so, yeah, I guess the tradeoff is I spend so much time on, like,
like any public messaging, like the PYX announcement blog post or even like the Twitter
thread, like, none of that stuff is like off the cuff. That stuff's like I'm spending like a
week, like writing a draft throwing it away, getting feedback, like writing a draft throwing it
away. So like I, yeah, I spend a lot of time on like, basically on like our public messaging and
the things we say. And I've sort of just accepted that like that's just something that takes me
personally a long time. And like I have to go through a bunch of drafts and I have to like,
have a few chances to look at things
with fresh eyes, which means it takes time
because I need to like step away from something for a day
and come back and read it.
And like, so anyway, I guess my point is,
I don't, my point is just that it takes a lot of time
and it's hard to, I think it's hard to fake
and it's hard to outsource.
Yep, totally agree.
Well, we've kept you here a long time.
We appreciate chatting with us.
It's been super fun.
Yeah, we've gone in a lot of different directions.
I appreciate all the great questions and all the interest
and letting me talk about some fairly obscure technical things.
That's fun.
That's what we do here.
Yeah, that's what we're all about.
That's the good stuff.
Yeah.
We enjoyed it for sure.
Yeah, I've learned more about Python and Python packaging over the past two years
than really than anyone should know.
So like whenever I can find people to listen, I'm like very happy to chat about it.
Well, P-Y-X is what's next. astral.sh, that's A-S-T-R-A-L dot S-H, slash pyx. Yes, go there, check it out. It is the next step in Python packaging. Join the waitlist. There you go. If someone, no, don't brew install, don't... ruin install UV? Yeah, curl install. Curl install UV. No, install UV however you want, but you should use it. Yeah. Come on, it's great. Very good. Thank you, Charlie, for sharing your story. Awesome. Yeah, thanks for having me. Take care.
So Charlie and his team are experimenting with PYX as a money-making product.
We're also experimenting with ChangeLog News to make a little money.
We're playing with the idea of adding a classified section to the news.
It would have a maximum of five listings per week that appear both in the newsletter and in the podcast audio.
They'd be super brief, headlines only, and link to a URL of your choice.
If you'd like to put your startup, your passion project, your big idea, your event, your whatever,
in front of Changelog's classy, well-to-do audience of hackers, fill out the form that's linked
in your show notes and in the chapter data. Thanks for listening and thanks to our partners for
sponsoring, fly.io and depot.dev. All right, that is all for now, but we'll talk to you again
with our old friend Feross, all about npm and those supply chain attacks, on Friday.
Game on
