The Changelog: Software Development, Open Source - Into the Nix ecosystem (Interview)
Episode Date: April 20, 2021. This week we're talking about Nix with Domen Kožar. The Nix ecosystem is a DevOps toolkit that takes a unique approach to package management and system configuration. Nix helps you make reproducible, declarative, and reliable systems. Domen is writing the Nix ecosystem guide at nix.dev and today he takes us on a deep dive on all things Nix.
Transcript
This week on The Changelog, we're talking about Nix with Domen Kožar.
The Nix ecosystem is a DevOps toolkit that takes a unique approach to package management and system configuration.
Nix helps you make reproducible, declarative, and reliable systems.
And Domen is writing the Nix ecosystem guide at nix.dev.
And today, he takes us on a deep dive on all things Nix.
Huge thanks to our partners, Linode, Fastly, and LaunchDarkly.
We love Linode.
They keep it fast and simple.
Check them out at linode.com slash changelog.
Our bandwidth is provided by Fastly.
Learn more at Fastly.com.
And get your feature flags powered by LaunchDarkly.
Get a demo at LaunchDarkly.com.
This episode is brought to you by Sourcegraph.
Sourcegraph is universal code search
that lets you move fast, even in big code bases.
Here's CTO and co-founder, Beyang Liu,
explaining how Sourcegraph helps you to get into
that ideal state of flow in coding.
The ideal state of software development
is really being in that state of flow.
It's that state where all the relevant context information that you need to build whatever feature or bug that you're focused on building or fixing at the moment, that's all readily available.
Now, the question is, how do you get into that state where you don't know anything about the code necessarily that you're going to modify?
That's where Sourcegraph comes in.
And so what you do with Sourcegraph is you jump into Sourcegraph, it provides a single
portal into that universe of code. You search for the string literal, the pattern, whatever it is
you're looking for, you dive right into the specific part of code that you want to understand.
And then you have all these code navigation capabilities, jump to definition, find references
that work across repository boundaries that work without having to clone the code to your local machine and set up and mess
around with editor config and all that. Everything is just designed to be seamless and to aid in that
task of code spelunking or source diving. And once you've acquired that understanding,
then you can hop back in your editor, dive right back into that flow state of, hey, all the
information I need is readily accessible. Let me just focus on writing the code that implements the feature or fixes the
bug that I'm working on. All right. Learn more at Sourcegraph.com and also check out their
bi-monthly virtual series called DevTool Time covering all things DevTools at Sourcegraph.com
slash DevTool Time. So Domen, you're here to tell us all about Nix. Welcome to The Changelog.
Thank you.
We are excited to have you.
I've heard a lot about Nix.
I hear a lot of smart people saying the word Nix.
I also hear them saying Unix.
Not the same thing.
But Nix is a lot of things from my research.
Can you tell us what it is in your words?
Sure.
Yeah, the way I see it, it's an ecosystem of tools
that you can use to develop, build, and deploy software.
In other words, I see it as a kind of a Swiss army knife of DevOps.
Particularly Nix, it's two things, or maybe even three.
So that's what makes it a bit confusing.
It's the language, the package manager, the facade,
and it's a bunch of concepts behind it that are very different from a typical package manager.
Okay, so where did Nix come from? Who created it, and why did it come into the world?
Yeah, so I think it's almost 20 years now since Eelco Dolstra started his research in Utrecht, if I'm correct, at the university there. He eventually wrote his PhD thesis about Nix, and also developed the prototype and the first version there.
So that's where it started. It was essentially sponsored by grants and so on.
So it was a research project sponsored by grants. And what was the purpose? What was its intended use?
The purpose was, again, this is from me talking to Eelco many years ago, to see if functional programming paradigms could be applied to solve packaging problems.
I think that the university there has a pretty big department on functional programming research,
and this was one of the areas that they tried to apply to.
So what was your introduction to Nix then?
It came much later, I suppose, than that 2001, if it was about 20 years ago.
Yeah, it was around that, yeah.
It began as a research project.
When did you find it and what got you excited?
That's an excellent question.
Yeah. I was doing a lot of Python development in 2012, I believe it was.
And particularly, I was working in the community called Plone.
It's a CMS, pretty old now and not that well known as it was before.
But we had a lot of packages there.
It was, I think, about 300 packages to install Plone, and some of it depended on C libraries and so on. And, you know, between Linux and macOS, things broke really, really often. So a friend of mine, Florian Friesdorf, actually discovered it, I don't know to this day where or from whom, and he suggested, look, this is really cool research, and it works already, and we should give it a try to solve these problems. So that's how I was introduced to it, and then I think only a year or two later I finally switched and gave it a try.
And what did you find? Was there an aha moment, or was there a feature or a thing that it did that you appreciated? Because, I mean, you're big into Nix at this point. You're doing the nix.dev website, you've got the weekly newsletter, so, you know, you came to us and said, hey, let's talk about Nix. So this is something that you're excited about. What was it that got you a year later?
I think my first aha moment was, you know, I was back then doing consulting and we had a client in Finland.
I used to use Gentoo.
You know, in Gentoo, when you rebuild everything, you have to recompile, you know, it's called the world.
And that client, I needed to upgrade in order to have the newest package.
And then it was, you know, I needed to compile for like 10 hours.
And, you know, back then, then I switched to Ubuntu at the time.
And, you know, I really didn't like the inflexibility of it.
So when I went into Nix, I was like, oh, this is the best of both worlds.
I can have the source distribution and the binary distribution model at the same time.
So when I tried Nix after Florian introduced it to me and I saw that you can roll back
and there is a binary cache for all the packages from open source that you just download everything
instead of compiling, but you can like apply a patch
and it switches to the source model.
I was like, oh, this is what I need, right?
So I can have both the convenience
and, you know, the hackability, if I may say.
And on top of that, the design, you know,
like one of the biggest features advertised
is the rollback.
So the way Nix does it, it uses a symlink
to switch between the previous and the current version, which is an atomic operation on Linux.
So essentially you can always roll back to the previous version of, you know, a system that you've
activated, and the switch is atomic because of the symlink primitive. Those were two things that really clicked in my head.
I was like, oh, this is something really better than what we have today.
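That atomic symlink switch is easy to model. Here's a small Python sketch, not Nix's actual code, of how a profile link can be flipped between two generation directories with an atomic rename, which is what makes both activation and rollback instant and safe:

```python
import os, tempfile

def activate(profile_link: str, generation_dir: str) -> None:
    """Point `profile_link` at `generation_dir` atomically.

    Create the symlink under a temporary name first, then rename it
    over the old link. rename(2) is atomic on POSIX, so readers always
    see either the old generation or the new one, never a broken link.
    """
    tmp = profile_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(generation_dir, tmp)
    os.replace(tmp, profile_link)  # the atomic swap

# Toy demo: two "generations" of a system, with instant rollback.
root = tempfile.mkdtemp()
gen1 = os.path.join(root, "generation-1")
gen2 = os.path.join(root, "generation-2")
os.mkdir(gen1)
os.mkdir(gen2)
current = os.path.join(root, "current")

activate(current, gen1)      # initial activation
activate(current, gen2)      # upgrade
activate(current, gen1)      # rollback: just swap the link back
print(os.readlink(current))  # ends with "generation-1"
```

Since the old generation directory is never touched, rolling back is just pointing the link at it again.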
If you're on Linux, is this in some cases a replacement
or an augmentation of like apt-get or apt?
Or is this sort of like a whole separate thing, where it's purely for building and delivering software? And how do those two worlds play together, between like an apt or an apt-get or something like that? Are they completely different?
Yeah, they're completely different. So Nix replaces the whole, you know, stack. It exposes a so-called imperative package management model, which is what you're familiar with from apt-get. So we can, like, say, you know, install a package or uninstall a package and so on. But behind the scenes,
it works very differently. So there is a folder called /nix/store. And in that folder,
it will put packages prefixed by a hash, the hash of all inputs that Nix needed to build this package.
So the idea there is that Nix will always guarantee that the result of the binary output,
you know, when you build a package is the result of all the inputs that it needed to
build this package.
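To make that concrete, here's a toy Python model, much simplified from Nix's real hashing scheme, of how a store path can be derived from a hash over all the build inputs, so that changing any input yields a different path:

```python
import hashlib

def store_path(name: str, inputs: dict) -> str:
    """Toy model of a Nix store path: hash every input that could
    affect the build (sources, dependencies, flags) and use that hash
    as the directory prefix under /nix/store. Nix really uses a
    32-character base-32 hash over serialized derivations; truncated
    hex is used here only for simplicity."""
    h = hashlib.sha256()
    for key in sorted(inputs):  # sorted: the hash is order-independent
        h.update(f"{key}={inputs[key]}\n".encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}"

# Hypothetical inputs, just for illustration.
a = store_path("firefox-88.0", {"src": "sha256:abc", "openssl": "openssl-1.1"})
b = store_path("firefox-88.0", {"src": "sha256:abc", "openssl": "openssl-1.1"})
c = store_path("firefox-88.0", {"src": "sha256:abc", "openssl": "openssl-3.0"})

print(a == b)  # True: same inputs, same store path
print(a == c)  # False: a different OpenSSL yields a different Firefox path
```

The consequence is that two builds with different dependencies can never collide in the store; they simply live in different directories.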
Then it will expose, like I said, the command line interface over that.
So you can, you know, install, uninstall, and search for packages. But what allows this flexibility of
rollbacks is that these packages are completely installed in separate folders. Like in make,
you have a prefix where you can say where it will install something, and here this is /nix/store/ and then that hash. It's called like a global store or something like this, where you have all the packages.
Right. I think some of the confusion with Nix is, and I like the way you describe it as an ecosystem, because there are different aspects to this. So there's NixOS, there's Nix packages, there's Nix, which appears to be a language, a Nix language that you use in order to configure things, and then there's also a shell, which maybe is part of all that. Maybe explain the ecosystem, the different bits. Because when I read about NixOS, I think, is this a Linux distribution? Or is this a package manager?
And kind of where Adam is, like, can I use it with Debian and replace apt-get?
Or do I need to be using NixOS?
So help us understand the ecosystem better.
Yes, so usually the answer to this question is yes, everything.
Okay, okay.
Ultimate flexibility.
Yeah, so at the very core,
it's a bunch of design concepts and a language.
The language allows you to write something
to this Nix store and create a folder
or a file and so on.
But then there are building blocks on top.
So you have, as we said, the language,
the package manager,
which can be installed on,
officially supported on any Linux distribution and macOS as well.
And there's people who ported it to FreeBSD.
There's people in high-performance computing.
There's some people who are trying to use it there.
There's a few blockers, but it's been successfully used.
So besides FreeBSD, there's, yeah, a bunch of smaller projects that people use it in, you know, some niche areas. But yeah, the main supported ones are Linux and macOS. Then there's NixOS, which is a Linux distribution built on top of the package manager, so you can deploy it. It's for desktop and servers.
So you can have, for example, I run NixOS on my desktop and my servers. You have, I think,
GNOME and KDE as two desktop environments, and there's a few others as well. On the servers,
it's even bigger. There's tools to deploy, like with Terraform,
or there is a Nix-specific tool called NixOps,
and you can deploy to Amazon AWS and Google Cloud,
and a bunch of other providers.
So it's big on the DevOps side.
And then there's different smaller parts of the ecosystem,
like Home Manager, which allows you to manage home files, dot files, and
that's a separate project, but still
it's done by Nix. And yeah, people do all kinds
of crazy stuff with the Nix API to
build and deploy software, so it can be
applied to any of these things.
Kind of nice because it's approachable in that way.
If you're already just running macOS, or maybe running Ubuntu as your development environment, and you want to use the Nix package manager and have your own little isolated Nix environment, you can do that. So you don't have to go all in. But if you want to go all in, maybe a year later you're loving it and you're wondering, why don't I just use Nix for everything? You can set it up as your desktop environment and run an entire distribution of Linux that has Nix at its core.
Yeah, exactly.
I think the easiest way to get started is to use the Nix shell,
which allows you to, it's kind of like virtualenv, but system-level, or a Ruby environment, or all these language-specific tools. And you can then expose a shell environment for your project with a bunch of tooling, which is reproducible, and you always get the same kind of tools. And, you know, it's pretty nice because you can share that between Linux and macOS. So you just drop that file in, and that's a really good start, I would say. And yeah, the other one is to be able to install a bunch of software that otherwise, you know, your Linux distribution doesn't expose, or something else. Those are the two common paths.
So at the core of Nix is this package management system, which is purely functional, as it says on the tin. And then in addition to that, you have Nix packages. And this, I assume, is similar to what we'd expect with an apt-get or with a Homebrew, where you have like an ecosystem of packages that you can install. Tell us about that. Are they precompiled binaries? You said that
most of them are, but you can patch them and you can do all this different stuff. I did find the package management website and started searching for a few packages, some newer ones. I thought, oh, maybe it's not in there yet, like Deno, and I was like, oh, sure enough, there's a Deno package. So, you know, what all is in there? Maybe what kind of stuff isn't in there? Tell us about that ecosystem, because when you buy into something and you want to use some packages, then you're going to want them to be there, and there's a lot of packages in the world. So how does a package become a Nix package? Talk about that side of the thing.
Okay, yeah, so this is called Nix packages, the part of the ecosystem which I've kind of left out. But yeah, it's on GitHub, so it's kind of easy to contribute. You just open a pull request.
No, we're not there yet, but we should hit 100,000 pull requests pretty soon.
That's probably one of the biggest projects on GitHub right now.
I would say it's pretty easy to contribute.
And there is a project called Repology where they kind of track different distributions and package managers.
As far as I know, Nix packages is the biggest project out there.
Now, to be fair, Debian and others are pretty strict about what goes in and what doesn't, and Nix packages is just kind of an ever-growing one.
But I would say almost any package you'd want to install is in the Nix packages collection.
And yeah, everything that is free is also built from source,
and there is a binary for it, unless it's broken or something.
But yeah, by default, we build all the packages on a part of the ecosystem called Hydra,
which is kind of like a CI system, also built on Nix.
And it's the build farm, which has, like, macOS, Linux,
and also ARMv8, I think, machines to compile these things
and provide binaries for everyone.
It seems like the core tenet of it, it really is around reproducible builds.
It seems like that's the core feature that everything sort of hangs upon, right?
Like even in the documentation, when it talks about Nix, it says, you know, a lot of what
you've already said here, but it says this means that it treats packages like values
in purely functional programming language, such as Haskell.
They are built by functions that don't have side effects
and they never change after they've been built.
So really around this reproducible builds scenario
where you want to ensure that the package you're using
has never been changed, hasn't been altered,
and then some other features, such as, as you mentioned, atomic upgrades and rollbacks, seem like other core tenets of why you might use it. And everything else is sort of similar in nature to, say, apt-get or apt or a Homebrew; a lot of the reasons why you use it are very similar to that. But the core tenet being functional, or being reproducible builds, being sure that the thing you're using has, in fact, never been changed, and what compiled it didn't inject any sort of side effects in the process.
Yeah, that's correct.
I think there is a lot of benefits
and one of the jobs that we haven't been doing
that great as a community
is really enumerating all of them.
Because, you know, one side of it is this aspect of reproducible builds, because of the purely functional model. But I don't like to explain it that way, because I think a lot of people might not be familiar with these terms and what they mean in the context of package management. And we haven't really been able to put up a good explanation of what the benefits are that are the consequences of that design. So yeah, you've enumerated a few, and there's a bunch of others. On nix.dev I've come up with a list that is incomplete, but yeah, rollbacks are the number one feature.
One of the cool things is that you can build your whole system remotely
on a different machine than yours,
and then just copy everything to a different system
and just say activate.
And in a matter of seconds, you have the new system running there.
So the build and deploy or activate phase are completely separate,
which is especially nice when you have more than one machine and so on. So Nix as a language evaluates to so-called derivations. These are the instructions for how to build the package, and then you can copy those to another machine, and then you can realize them. That's the term for going from this derivation into the actual build, which then produces the output. And on the way, you can also, instead of building, substitute; that's the technical term we use when you download a package for this hash, which is the binary you get instead of building.
Yeah, that's the kind of pretty nice benefit, I think.
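A rough sketch of that realize-or-substitute decision, with a hypothetical in-memory dictionary standing in for a binary cache like cache.nixos.org:

```python
# Toy sketch of the evaluate -> realize split: a derivation is just
# data (instructions plus input hashes); realizing it either downloads
# a cached binary for that hash ("substitution") or runs the builder.
binary_cache = {
    "f3a9": b"<prebuilt firefox binary>",  # hypothetical cache entry
}

def realize(drv_hash: str, builder):
    if drv_hash in binary_cache:   # substitute: no build needed
        return binary_cache[drv_hash]
    output = builder()             # otherwise build from source
    binary_cache[drv_hash] = output  # and cache it for next time
    return output

hit = realize("f3a9", builder=lambda: b"<built locally>")
miss = realize("0c44", builder=lambda: b"<built locally>")
print(hit)   # came from the cache
print(miss)  # had to be built, and is now cached too
```

Because the cache is keyed by the input hash, substituting a binary is guaranteed to give the same result as building it yourself, modulo the small nondeterminism sources mentioned later.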
This episode is brought to you by our friends at O'Reilly.
Many of you know O'Reilly for their animal tech books and their conferences,
but you may not know they have an online learning platform as well.
The platform has all their books, all their videos, and all their conference talks.
Plus, you can learn by doing with live online training courses and virtual conferences,
certification practice exams, and interactive sandboxes and scenarios
to practice coding alongside what you're learning.
They cover a ton of technology topics, machine learning, AI, programming languages,
DevOps, data science, cloud, containers, security,
and even soft skills like business management and presentation skills.
You name it, it is all in there.
If you need to keep your team or yourself up to speed on their tech skills,
then check out O'Reilly's online learning platform.
Learn more and keep your team skills sharp at O'Reilly.com slash changelog.
Again, O'Reilly.com slash changelog. So let's say I want to use Nix to install Firefox.
And I type Nix install Firefox.
Or you can tell us what you would type.
Or what do I do?
And then tell us what Nix would do.
And then we'll go from there and talk through what that provides
and why I might want to do it that way.
All right.
Oh, there's quite a lot going on behind the scenes.
So let's go through each step.
If you say Nix install Firefox, Nix will, first of all, try to see where you're trying to install Firefox from.
And by default, it will use Nix packages, which is the official one of the sources it can install from. But, you know, it can be anything, so we'll skip that part for now. By default, it will use Nix packages. And then there is a top-level file in Nix packages called all-packages. And you can imagine this as, you know, the Nix language is kind of like JSON with functions. So in there you'll see a key, firefox, and it will point to a file, which it will import. And inside that Firefox file, firefox.nix or wherever it is, there is a description of how to build Firefox.
So Nix has, in the language, a primitive called derivation,
which is kind of like the core of the whole concept.
And in the Firefox case, it will say, you know, there's a bunch of dependencies.
You have to run make with these flags and a bunch of other things.
And this derivation function is really the core of it. And what it will do is it will first go through all of the dependencies and build those,
of course, you know, all to the bottom of it, which is the bootstrap, something we call the
bootstrap, which where we build the minimum possible environment. And then it will build
all those dependencies up to Firefox.
And all of those dependencies go through this derivation function.
Okay, so what happens in there is the derivation function gets a bunch of inputs, which you
can imagine as like key value pairs, essentially.
And it passes that to a builder, which is some kind of an executable. By default in Nix, all the builders are done in Bash, but you can have any executable as a builder, essentially, and pass all the inputs to it. And this builder, once it is executed, runs in a sandbox environment. So, you know, you can imagine this as something like a Docker-like environment
where it will not have access to the internet.
It will be completely isolated from the file system and so on.
The idea of this sandboxing is, of course, for the build to be reproducible
and only dependent on these inputs.
That is one of the core design decisions.
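The purity that sandboxing enforces can be sketched like this: model the builder as a function of its declared inputs and nothing else, so identical inputs necessarily give identical output. This is a conceptual toy, not how Nix actually sandboxes builds (which uses kernel namespaces):

```python
# Toy sketch of the sandbox guarantee: the builder sees only its
# declared key/value inputs -- no network, no ambient file system
# state -- so the same inputs must produce the same output.
def run_builder(builder, inputs: dict) -> str:
    return builder(dict(inputs))  # pass a copy: inputs and nothing else

def firefox_builder(inputs: dict) -> str:
    # A real builder would run make with flags; here we just
    # combine the inputs deterministically.
    return f"firefox built with {inputs['openssl']} and flags {inputs['flags']}"

out1 = run_builder(firefox_builder, {"openssl": "openssl-1.1", "flags": "-O2"})
out2 = run_builder(firefox_builder, {"openssl": "openssl-1.1", "flags": "-O2"})
print(out1 == out2)  # True: reproducibility falls out of purity
```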
And as I've previously mentioned, all these inputs are then calculated.
There is a hash calculated out of these inputs.
And this uniquely identifies how this package was built.
And what the source of this package is. So not just the source and the binary of the thing, but also all the instructions for it. The builder will take care of the building part, and this is where the evaluation and building separation I previously talked about kicks in. So when you install Firefox, it will find the Firefox file and evaluate it first. So it will evaluate the Firefox derivation, and then everything up to the bootstrapping bit.
And then once that's done, it will start to build. And the building phase is not that interesting.
There's essentially two parts to it. One is that it will use these derivation files to call this builder, as I've mentioned. But before it does that, it will also check, with this hash, if there exists a binary package for it, and it will substitute it if there is one. If not, then it will go and build it. How that works is that when the package is built, as I've mentioned, Nix will put it in /nix/store/ and then the hash and name of the package, and everything goes in there. And the same for all the dependencies. So let's assume now that Nix, you know, downloaded some binaries as a dependency of Firefox, and then it built Firefox. Now it just has a bunch of folders in /nix/store,
and now it will link those into something, you know, we would call a file system hierarchy, something you're used to, the standard that you're used to in Debian, for example. So /usr and /opt and so on.
And it will just layer these things essentially together into something Nix calls profile.
And this profile is really just one snapshot of when you installed a package or a group of packages, you know, completely linked together.
And so that's how Nix goes from this global store into an actual file system here that we're used to.
And it's one big symlink farm, that's the way to imagine it.
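The symlink-farm idea can be sketched in a few lines of Python: take a set of store paths and link their bin/ contents into one profile directory. This is a toy model, not Nix's actual buildEnv implementation, and the package names are made up:

```python
import os, tempfile

def build_profile(profile: str, store_paths: list) -> None:
    """Toy symlink farm: merge each package's bin/ into profile/bin by
    symlinking, the way a Nix profile layers store paths into a
    conventional file system hierarchy."""
    os.makedirs(os.path.join(profile, "bin"), exist_ok=True)
    for pkg in store_paths:
        pkg_bin = os.path.join(pkg, "bin")
        for exe in os.listdir(pkg_bin):
            link = os.path.join(profile, "bin", exe)
            if not os.path.lexists(link):  # earlier packages win
                os.symlink(os.path.join(pkg_bin, exe), link)

# Fake store with two packages (hashes shortened for readability).
store = tempfile.mkdtemp()
packages = ["abc123-firefox-88.0", "def456-git-2.31"]
for pkg, exe in zip(packages, ["firefox", "git"]):
    os.makedirs(os.path.join(store, pkg, "bin"))
    open(os.path.join(store, pkg, "bin", exe), "w").close()

profile = os.path.join(store, "profile-1")
build_profile(profile, [os.path.join(store, p) for p in packages])
print(sorted(os.listdir(os.path.join(profile, "bin"))))  # ['firefox', 'git']
```

Installing another package would mean building a new profile directory that includes it, never mutating this one, which is exactly the immutable build-up described next.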
This is then how you go through.
If you install something, it will add that package. It will install all the packages you had before and then this package on top of it. So it's kind of like immutable; you build up these things. That's how I like to imagine it. And the same when you uninstall: it will not remove Firefox from the /nix/store directory, but it will just create a new profile version without Firefox linked in. And this is very typical of memory management, right?
Where you essentially just allocate
and then you garbage collect when you want to.
So Nix works in a similar way.
So then actually deleting packages
would be an explicit garbage collect operation
which will go through these profile versions,
and you can say, oh, just keep the last one, for example.
Yes, it's the garbage collection bit.
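That generation-aware cleanup is essentially mark-and-sweep with the profiles as GC roots. A toy Python sketch, with made-up store path names:

```python
# Toy mark-and-sweep over a store: profiles are the GC roots; anything
# they (transitively) reference is live, everything else gets deleted.
store = {
    "aaa-openssl-1.1": [],                  # each path lists its references
    "bbb-firefox-88": ["aaa-openssl-1.1"],
    "ccc-firefox-87": ["aaa-openssl-1.1"],
    "ddd-old-tool": [],
}
profiles = {
    "jerod-profile-3": ["bbb-firefox-88"],  # current profiles = roots
    "adam-profile-7": ["ccc-firefox-87"],
}

def collect_garbage(store, roots):
    live = set()
    stack = [p for refs in roots.values() for p in refs]
    while stack:                            # mark phase
        path = stack.pop()
        if path not in live:
            live.add(path)
            stack.extend(store[path])
    for path in list(store):                # sweep phase
        if path not in live:
            del store[path]

collect_garbage(store, profiles)
print(sorted(store))  # both Firefoxes and OpenSSL survive; ddd-old-tool is gone
```

Both users' Firefox versions survive because each is reachable from some profile, which previews the multi-user question that comes up next.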
But let's go back to installing.
So now you have this profile where Firefox was installed in and then Nix will activate it,
which means that the specific snapshot of the profile is now the activated one.
And these profiles can be stacked one upon another as well.
And there is like the user profile.
So each user can have a profile.
Each user can install their own set of packages.
And then on NixOS, as the distribution,
there's also a system profile,
which is the actual OS profile.
That then represents the environment that you access. And then it exposes, like in a typical package manager,
Firefox in the PATH variable, for example.
So how does it accomplish that?
So I understand completely, because I have /nix/store/unique-hash-firefox or whatever. I understand how that provides for multiple versions installed on the same system, right? I can upgrade. And I also understand how, once you have this ever-adding system where you're just adding a new install of Firefox, and you still have the old ones unless you garbage collect, how you could do your atomic upgrades at that point. Because now you're just swapping the symlink between those versions, and like you said, that's an atomic operation in Linux, so it happens in a split second. And so that's really good. But it doesn't explain to me the multi-user support. So you said there's profiles. Is everything stored in the Nix store, and the profiles are elsewhere and point to which versions you're using? Or how does it know, when it's garbage collecting, that Adam's profile has this Firefox but my profile has a different Firefox? How are those segregated?
The easiest way to imagine it is, yeah, like your Debian installation would be one profile, right?
And then you have different profiles in your system.
The way Nix stacks those together, if I understand your question, is it will just append to the PATH, you know, by the hierarchy of the profiles you have activated. So if you have, like, a user one and a system one, then the user one will
append the bin path of the user profile first, and then the system profile bin path will come
second. So then all the packages that are installed in the user profiles come from the
user profile bin path, and then the system one follows.
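That profile stacking can be sketched as ordinary PATH resolution. The directory names below are hypothetical examples of a user profile and a system profile, not guaranteed to match a real installation:

```python
import os

# Toy sketch of profile stacking: the user profile's bin comes first
# on PATH, the system profile's second, so user-installed packages win.
user_bin = "/home/adam/.nix-profile/bin"    # hypothetical user profile
system_bin = "/run/current-system/sw/bin"   # hypothetical system profile
path = os.pathsep.join([user_bin, system_bin])

def which(exe, path, installed):
    """Resolve `exe` the way a shell would: first matching bin dir wins.
    `installed` maps a bin dir to the set of executables it contains."""
    for d in path.split(os.pathsep):
        if exe in installed.get(d, set()):
            return os.path.join(d, exe)
    return None

installed = {
    user_bin: {"firefox"},           # Adam installed his own Firefox
    system_bin: {"firefox", "git"},  # the system also provides one
}
print(which("firefox", path, installed))  # the user's copy shadows the system's
print(which("git", path, installed))      # falls through to the system profile
```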
That much I understand. But how does it know not to garbage collect my profile's version of Firefox
when you run garbage collect?
Oh, okay.
Is there some sort of registry of who's using what, where?
Yeah, so the profiles are symlinks as well.
They're installed in /nix/var/nix/profiles, I think.
And for each profile, there is a name, and then there is something called the profile version, one, two, and so on, counting linearly. And inside there, that's the symlink to the file system hierarchy. So when you run garbage collect, you can do it for the user, or you can do it globally. Oh, yeah, you can pass a link, if I remember correctly, to the profile you want to garbage collect; otherwise it will garbage collect everything. And the way the garbage collection works is you can say: garbage collect everything that is not in the profiles. So, you know, just some things. For example, when you build something, not everything ends up in the profile; there are tools that were needed when you built something but are not actually part of the runtime paths.
So you can just garbage collect those.
Or you can garbage collect and say: delete everything up to the point that 20 gigabytes is freed.
And so on.
There's quite some flexibility in there.
Gotcha.
So the profiles essentially bless what's installed.
And when you garbage collect,
if it's present in a profile,
then it's like, hey, I'm not going to touch that
because that's necessary based upon somebody's profile
that's established, whether it's Jerod's user or my user.
And then likewise, if Jared wants to install something that I've already installed, it's not going to re-download and recompile and rebuild that a second time.
It's going to just use what's already there, which is secure, because you can't kind of go and change that. It's got that hash, it's already been built, it can't be mutated, essentially. It can't be changed and then a Trojan horse dropped in or something like that.
Yeah, exactly. The Nix store is mounted
as read-only, so only Nix is allowed to
manage it, essentially. So that's the guarantee you get.
So since we compared Nix as an
alias to how somebody might be familiar with Apt or Apt-Get,
does Apt or Apt-Get or Homebrew, are they not in this kind of world where it's reproducible?
Is that not a concern in those worlds or is that not a scenario?
When I run apt-get install git, for example, or I suppose I can apt-get install firefox or something like that.
If that's the case, am I just grabbing what's in the registry and pulling that to my machine?
Because I'm not making or building there, right, in most cases, unless it's something that needs to be built.
Right.
So Nix is reproducible in the sense that it runs on reproducible builds, and it makes these build hashes secure by nature, because you can prove the complete dependencies.
You know what was involved, all that good stuff.
And that hash proves it.
And that's the way it works by design.
How does that compare to apt or apt-get?
Did they not do that?
Yeah, that's a very good question.
I think there are actually two kinds of reproducibility that are usually, I would say, mentioned in the package management world. Nix does the reproducibility where it goes from source to a binary, by ensuring that you always kind of get the same binary, minus some discrepancies, like system time getting into the binary output and so on. But the guarantee is that using the same hash,
the same kind of sources and inputs,
you always get the same binary.
On Debian, they have also the reproducibility project,
but that is more about the binary output
so that the actual binaries you get
are identical each time you build something.
The difference is that in Debian, as far as I know, maybe the infrastructure has changed recently, but it used to be at least the case that when you build something, it will pick up libraries from your system, right? So, like, let's say you build Firefox; it will pick up OpenSSL from your system. Now, how this OpenSSL was built, there is no guarantee, right?
Something built it.
then you kind of have this guarantee implicitly on your system.
But anyone, you know, like you could easily swap out OpenSSL
with the newer version or the lower version and so on.
So there is no tracking of it.
The way I reason about Debian is there's just some kind of a file system state where you installed OpenSSL and then you install another library that depends on it and so on.
And then you stack these.
But as I've said, there is nothing tracking what really was used to install this. So of course, Debian probably has some servers where they build this in a sandboxed environment and so on, but when you do it locally, you kind of lose that guarantee. In Nix, everything is sandboxed by default, so everyone that's building anything on Nix
gets this guarantee, and it's enforced.
So yeah, that's the main difference between the two.
This episode is brought to you by CloudZero.
CloudZero is the only cloud cost intelligence platform
that puts engineering in control
by connecting technical decisions to business results.
This is crucial for software-driven teams
focused on growing their margins.
By analyzing cloud services like AWS and Snowflake,
CloudZero provides real-time cost insights
to help you maximize your margins.
Engineering teams can answer questions like,
who are my most expensive customers? How much does this specific feature cost our business? And what is the cost impact
of rewriting this application? With cost anomaly alerts via Slack, product-specific data views,
and granular engineering context that makes it easy to investigate any cost, CloudZero gives
you complete cloud cost intelligence, connecting the dots between high-level trends and individual line items.
Join companies like Drift, Rapid7, and SeatGeek
by going to cloudzero.com slash changelog to get started.
Again, cloudzero.com slash changelog.
So, Domen, one of the things you said at the top, and also what you say on nix.dev, is that the Nix ecosystem is a DevOps toolkit. So there's a DevOps focus in what Nix is providing - not just merely installing Firefox on my local Linux box so I can browse the web, but using this for your DevOps, getting your stuff out there in the world, right?
Taking your software, putting it out there,
whether it's a web app stack or whatever it happens to be.
And so that makes me wonder how it fits in with other DevOps-y things.
Would you use Nix plus this configuration language to create these isolated installs, similar to a universal binary kind of idea, where you just take this folder and put it on another machine and it runs? Would you use it instead of Docker? Would you use it with Docker and Docker Compose? Just help us understand where Nix fits in as a DevOps thing, where I might use it to deploy some software.
That's a good question. Yeah, I don't think there's a definite answer to it. Okay. Essentially, there's
all the options. The answer is yes. The answer is yes again. Yeah, so the way I see it, at least,
first compared to Docker: Nix is really good at the configuration and build part.
And once you build something, then when you run it, it's just an executable.
So Docker is, I would say, complementary to that.
Docker provides the runtime isolation between things. In NixOS, we use systemd, since the very early days.
So that one kind of manages the whole runtime bit, if you use the OS bit.
But if you use containers, then there are people who are using Nix to build containers for Nomad and Kubernetes as well. And yeah, you can build Docker images with Nix.
And I think that's a pretty nice combination as well, because one thing I forgot to
mention is that in Nix you really have two kinds of derivations. One is called a fixed-output derivation, and
the other one is just a derivation, or a dynamic derivation. The dynamic derivation is the one
that hashes all the inputs. The fixed-output derivation is the one that has the hash up front. So you can say, oh, this is a SHA-something.
And that one actually has network access.
So whatever it builds, the SHA you provided should be the hash of the content that this builder produces.
So this is a pretty nice guarantee: everything you get from the internet
has a predefined hash,
and everything else that doesn't access the internet
then depends on that.
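A hedged sketch of the two kinds of derivations described here; the URL is a placeholder, not a real source:

```nix
{ pkgs ? import <nixpkgs> {} }:

rec {
  # Fixed-output derivation: the output hash is declared up front,
  # so the builder is allowed network access. The URL and hash here
  # are placeholders for illustration.
  src = pkgs.fetchurl {
    url = "https://example.org/some-source.tar.gz";
    sha256 = pkgs.lib.fakeSha256;  # replace with the real content hash
  };

  # Regular ("dynamic") derivation: no network access; its identity
  # comes from hashing all of its inputs, including `src` above.
  unpacked = pkgs.runCommand "unpacked" { inherit src; } ''
    mkdir -p $out
    tar -xzf $src -C $out
  '';
}
```

If the downloaded content ever stops matching the declared hash, the fixed-output build fails loudly instead of silently producing a different result.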
And in the Docker world, this is not the case.
So if you have a Docker image
that downloads something from the internet,
essentially if you build it twice
and that content changes,
there is no guarantee whatsoever;
you can end up with a completely different
image with different output.
The reason why people don't really notice that
is because Docker Hub kind of has the history
of all the images,
and people don't usually build them themselves.
But yeah, that's where I think Nix shines:
in this reproducibility aspect.
And then Docker for runtime, for sure,
and all the container stuff that's been built recently helps a lot.
So you're effectively running Nix alongside Docker
or inside of Docker to do all the package stuff.
Is that the way you would use it?
Yeah.
You said they're complementary.
You use them together that way.
So there is an official Nix image
where you have Nix installed
and you can build stuff inside the container,
the Docker containers.
But there is also an API in the Nix language
so that you can build images with Nix,
which is pretty cool as well, because you will get very minimal images
compared to stacking them up as people usually do.
You know, you have whatever you build,
which depends on something else, which depends on Alpine Linux, and so on.
So this quickly adds up.
Whereas if you go through the Nix route,
you just build your thing,
and then you copy that into the Docker container,
and it has nothing else, essentially.
It's also potentially faster,
but yeah, let's not go into those details.
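As a sketch of the "build images with Nix" API mentioned here, nixpkgs ships dockerTools; the image name and command below are illustrative:

```nix
# Sketch: building a minimal Docker image with nixpkgs' dockerTools.
# The resulting image contains only the closure of pkgs.hello - no
# Alpine or Debian base layer underneath it.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "hello-image";   # illustrative image name
  tag = "latest";
  config = {
    # Run the `hello` binary from the Nix store when the container starts.
    Cmd = [ "${pkgs.hello}/bin/hello" ];
  };
}
```

The build result is an image tarball you can feed to `docker load`.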
Yeah, I saw a cool example on an example screencast
where it was setting up a Docker image
that had a specific tool, where it showed three versions.
It was like the stock Nix version, stock Debian,
and then the Alpine Linux.
These are the containers, you know, the images.
And at first, the Alpine Linux one was just teeny tiny, of course, because it's just a stripped-down version.
The Nix one was somewhere in the middle of the other two.
Then the screencast goes on to show how it could, instead of taking just the
default package - that's the Nix package for this particular piece of software,
I can't remember the software, Nginx maybe, maybe it's simpler than that -
instead of merely using the
pre-compiled binary and putting that in its image, it would go in there and just tweak a couple of flags,
like a compile flag, and then it removed some sort of subdirectory that it didn't care about, and
was able to achieve an image that was even smaller than the Alpine Linux one, just through those couple of
tweaks. So that kind
of speaks to the thing that you like about it: it's convenient by default, but it
also has the customizability. You can have the prebuilt binaries, you can just use that, no
big deal, you don't have to compile everything. But when it comes time to say, you know what,
I really want to strip this thing down and make it as tiny as possible, and I know I don't need these
sets of files, or I don't need to compile for these seven whatevers, I can go in and, through
that Nix configuration language, just make a couple of changes to the way that particular
piece of software is compiled, pass some flags, have it compiled for you, and reap the benefits.
That was pretty cool. That is one of the most powerful things. Like, for example,
going back to Firefox:
let's say you would package Firefox in a Docker container.
Each package is essentially just a function of all the dependencies it needs.
So, you know, OpenSSL is a parameter in that function and so on.
So you could say: OpenSSL, override, you know, flip a flag or apply a patch.
That's the most common one, like, here is a patch.
And you could apply a patch to OpenSSL,
which is then provided as an argument to build Firefox.
And that's like one line to tweak.
So I think that's really powerful, compared to going and tweaking those Docker images and trying to rebuild them, which are not exactly reproducible and so on.
It becomes a mess pretty quickly.
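A sketch of the override mechanism being described; `my-fix.patch` is hypothetical, and whether a given package (like Firefox) actually exposes `openssl` as an override argument depends on how it is defined in nixpkgs:

```nix
{ pkgs ? import <nixpkgs> {} }:

let
  # Apply a (hypothetical) patch to OpenSSL. overrideAttrs rewrites
  # the derivation's attributes; here we append to its patch list.
  patchedOpenssl = pkgs.openssl.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
  });
in
  # A package is a function of its dependencies, so the patched
  # OpenSSL can be passed in as the `openssl` argument.
  pkgs.firefox.override { openssl = patchedOpenssl; }
```

Because the patched OpenSSL hashes differently, everything built from it (here, Firefox) gets rebuilt against it automatically.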
So a couple of the other features that I'm not sure we've hit on exactly, that I think play into the ops side: you list remote builds and remote deployments.
What exactly do you mean by remote deployments?
Right, yeah, maybe that's a weird way to put it.
Okay.
As I said previously, you can control where it's built,
and you can then deploy from one machine to, like, 20 others, for example.
Now, Nix will either copy what it needs there from your local machine, or it will
substitute from a binary cache. And so really, remote there means that you're not really doing
anything on that machine except copying things over and then activating the NixOS configuration. I'm talking about the
OS bit here, in case you're wondering. There are also plain Nix profiles,
but those are not as convenient by default.
So that's the remote part.
And what would be an example of why you would want to do that?
Would it be for cost savings on the wire or caching?
Or would it be for... For what reason would you want to do that?
So you're saving all the resources.
Usually the way you're deploying to
is optimized for the runtime features of your thing, right?
So a very good example of this is if you have like Raspberry Pi,
you kind of don't want to compile stuff on Raspberry Pi.
You might want to compile on an EC2 ARM V8 machine.
Right. So then, essentially, no,
it doesn't require any extra disk space.
The thing you copy to the Raspberry Pi
is exactly what the system needs.
So it's really fast, right?
This is as fast as it gets
from getting the system up and running.
You copy it over and you activate it,
you start a script and that's it.
You don't use any of the CPU or memory resources
besides the copying itself.
Right.
Also, it doesn't interfere with the system that much, right?
So if something is running there and it's really on the edge,
it's essentially untouched that way.
Gotcha.
That's really, that Raspberry Pi is a very good example that clarifies to me why that would make sense.
You totally want to do that.
Cool.
Where are the interesting things happening around Nix? We're now at roughly year 20-ish of its inception.
When you look at the ecosystem at large,
where are the cool things happening?
What's happening?
What's being done that's sort of bleeding edge,
that's really got you personally excited, or the community excited?
Right. That's a pretty broad question.
One of the things that I've been doing in the last four or five years
is, I think we need to build infrastructure and documentation.
Those are the two main things I'm working on.
So in other words, I think we should go into commercialization, or, how I like to put it, go mainstream, by really making it easy and accessible for people.
And also, as I said, building infrastructure
so that deployments, builds, and all these things
are done very easily, and companies can just subscribe,
or pay for a subscription, and roll their own stack.
That's one part of it that I'm, you know, mostly concerned about.
There's also, on the community side, a lot going on. We had a couple of conferences,
so the community is growing pretty fast. We're having issues with actually a lot of people
coming in, so we're trying to do more policy stuff,
so that we can grow faster and be less chaotic.
And on the research side,
there is a bunch of new things coming in.
One thing is called the content-addressable store.
This is quite similar to what Bazel does.
Yeah, I'm not sure if I should go into explaining that,
because it's really in the development phase right now.
But essentially, it's an optimization.
In Nix, if you rebuild something
that is at the beginning of the dependency tree,
let's say, you know, Bash,
you then have to rebuild everything that depends on Bash.
What the content-addressable approach allows
you to say is: if the derivation output of Bash is the same as it was previously, then you don't need
to rebuild the rest that depends on it. And this needs a completely different design. So maybe Bash
is not the best example, but let's say you would modify Git,
and Firefox depends on Git;
then the Firefox output probably wouldn't change,
even though you have changed Git.
So Firefox wouldn't change,
and anything that depends on Firefox
then wouldn't need to be recompiled, for example.
That's one of the...
There is a really cool paper called
Build Systems à la Carte
that unfortunately doesn't have Nix inside,
but it compares different build systems
and the different features they have.
And Nix will then tick all the feature boxes
once this feature is complete,
and be, I would say,
essentially better than Bazel in that sense.
So that's one of the areas. The other
thing is, there is a bunch of work on the usability side. It was clear that Nix was a research project,
so Eelco Dolstra and also a bunch of people from the community are redesigning the command line
so that it's easier to use. So I think we're again in the phase of
bringing it closer to a wider audience, and yeah, the command line redesign was a big part of it.
Gotcha. You mentioned a lot of growth is happening now in terms of community. Are you
familiar with where that growth is happening, potentially where it might be coming from?
I know that a lot more people are using Raspberry Pis, for example.
I know that Nix has that support. A lot of home labs are sort of built around Raspberry Pis and things like that. Where do you see the support, or where do you see
the growth kind of happening? What areas of the Nix ecosystem
seem to be the most on fire, so to speak, in terms of growth?
I think the biggest one right now is actually the Haskell community,
because it's so close conceptually to Nix.
I would say most - this is probably a bold statement -
but most Haskell teams are using Nix to deploy or build Haskell
one way or another.
And there's a bunch of other languages
where this is useful as well.
I would say there are a bunch of people in the Rust community,
and other languages as well.
And I think in DevOps,
especially deploying and managing systems,
there are more and more companies using Nix,
because this reproducibility part,
and just assurance
in general, is useful to
them. Well, actually, my friend Nate
told me this one. I really like
this concept.
You know, Nix
is kind of like when we
had PHP and you would hack
on the live server and all of that.
It was, you know, back in the days,
considered accepted practice.
It's the same with Nix now.
So if you go to a Nix machine
and you just try to edit some files, it won't work.
You have to edit the Nix files and redeploy.
So this usually creates a bit of resistance
from people who are used to Debian,
for example. So Nix kind of turns operational tasks into development tasks. So you kind of
have to pay this cost upfront of actually, you know, describing your system in one file and so
on, which takes some time. But once you do that, you save a lot of operational problems. So we see a lot of people
figuring this out in the wild and then coming
to Nix with those lessons learned.
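As a sketch of what "describing your system in one file" looks like, here is a minimal NixOS configuration.nix; the hostname and user are made up, and changes take effect by redeploying, not by editing files on the running machine:

```nix
# Minimal illustrative /etc/nixos/configuration.nix: the whole system
# state is declared here and applied with `nixos-rebuild switch`.
{ config, pkgs, ... }:

{
  networking.hostName = "example-box";        # hypothetical hostname
  services.openssh.enable = true;             # declare services instead of editing files
  environment.systemPackages = [ pkgs.git ];  # system-wide packages
  users.users.alice = {                       # hypothetical user
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Because the file is the source of truth, the same configuration can be rebuilt on a fresh machine to reproduce the system.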
Gotcha. If someone's listened this far, they're like, man, this
is interesting, or somewhat interesting to me, whatever. Where's your go-to
place to get started? Is it nix.dev? Is it another place?
Where do you send people? Obviously they've maybe gotten past the, you know,
reproducible builds, understanding the reliability of that, potentially even the extra security of
what that means. Where do you send people to get started? If they're DevOps, do you send them to a
certain place? If they're from Haskell, do you send them to a certain place? Is there a different place for different camps?
Where is a good place to kick the tires, begin, get started, play around,
and maybe fall in love?
Right. Yeah, I'm still working on nix.dev.
I think it is a great place to start, although it's not complete yet,
so there are parts that are missing.
The typical place to start is to read the NixOS manuals.
There is the Nix manual, which is specific to the language and package manager,
and the NixOS manual, which is about the OS bit.
But those manuals are not tutorial-like;
they're more like reference documentation,
a description of the different bits of NixOS and how it works.
And that's where I would like nix.dev to be the middle ground, where you have tutorials to get
started with. So I think between those two... If you want to go really deep into Nix as a language and
how it works, there is something called Nix Pills, where it kind of goes into different parts of Nix
and explains the concepts behind them.
And there's a bunch of people on YouTube,
a few people who have recorded videos.
There is Nix Shorts,
which someone wrote - a set of short tutorials for getting started
with doing stuff with Nix.
I think that's everything that comes to mind right now.
The main one, I would still say, is the NixOS manuals, if you want to get
your hands dirty.
We'll link those up in the show notes.
NixOS.org is kind of a
good landing page, but we'll link deeper into,
say, the manuals and
pills, and obviously link up
nix.dev, and
we'll look up Nix Shorts
on YouTube. I found something else; I think it may be a false
positive, but we'll dig further and provide awesome links. Listeners, find that in the show notes.
Domen, thank you so much for this deep dive on Nix. It's interesting. I've never personally used
it, but I can certainly see the reproducible builds idea around it, especially the usability around ops.
And, you know, you want systems you're putting out into production to be secure and stable,
and, you know, to be able to count on those.
So I can see where Nix really plays a role there.
But thank you so much for your time today, and appreciate you sharing your wisdom here.
Thank you.
Thank you for hosting me.
That's it for this episode of The Changelog.
Thanks for tuning in.
If you aren't subscribed yet to our weekly newsletter, you are missing out
on what's moving and shaking in software, and why it's important. It's 100% free. Fight your FOMO at
changelog.com slash weekly. Huge thanks to our partners Linode, Fastly, and LaunchDarkly. When we need
music, we summon the beat freak Breakmaster Cylinder. Huge thanks to Breakmaster for all
their awesome work. And last but not least, subscribe to our master feed
at changelog.com slash master
and get all our podcasts in a single feed.
That's it for this week.
We'll see you next week. Thank you. Bye.