Tech Over Tea - Developer Of Distrobox | Luca Di Maio
Episode Date: April 5, 2024
After being delayed for a very long time, we now finally have the developer of Distrobox, Luca Di Maio, on the show to chat about how the project came to be, its use on various types of distros ...and all manner of other things.
==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson
==========Guest Links==========
Github: https://github.com/89luca89/distrobox
Gitlab: https://gitlab.com/89luca89/distrobox
Twitter: https://twitter.com/LucaDiMaio11
Mastodon: https://fosstodon.org/@89luca89
==========Support The Show==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson
=========Video Platforms==========
🎥 YouTube: https://www.youtube.com/channel/UCBq5p-xOla8xhnrbhu8AIAg
=========Audio Release=========
🎵 RSS: https://anchor.fm/s/149fd51c/podcast/rss
🎵 Apple Podcast: https://podcasts.apple.com/us/podcast/tech-over-tea/id1501727953
🎵 Spotify: https://open.spotify.com/show/3IfFpfzlLo7OPsEnl4gbdM
🎵 Google Podcast: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xNDlmZDUxYy9wb2RjYXN0L3Jzcw==
🎵 Anchor: https://anchor.fm/tech-over-tea
==========Social Media==========
🎤 Discord: https://discord.gg/PkMRVn9
🐦 Twitter: https://twitter.com/TechOverTeaShow
📷 Instagram: https://www.instagram.com/techovertea/
🌐 Mastodon: https://mastodon.social/web/accounts/1093345
==========Credits==========
🎨 Channel Art: All my art was created by Supercozman
https://twitter.com/Supercozman
https://www.instagram.com/supercozman_draws/
DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase we may receive a small commission or other compensation.
Transcript
Good morning, good day, and good evening.
I'm, as always, your host, Brodie Robertson.
And today, we have Luca, the developer of Distrobox.
Welcome to the show.
How's it going?
Thank you, thank you, thank you for having me.
We originally tried to plan something, what, when Distrobox first came out?
Yeah, I think a year ago. Yeah, a year and a half, yeah.
Yeah, it was a while ago, that's for sure. And then I think you were busy with some real
life stuff, and then I just forgot to get back to you, and now it's 2024. Yeah, yeah, life
got in the way. Well, I'm glad we're finally getting to do this, and Distrobox is a
really cool project. Like, when I first heard about
this, I
knew about
things like Docker, and I'd used that before.
I hadn't used Podman before, but I'd used Docker.
I just never really thought of using it
as this...
Actually, for anyone who doesn't know, just explain briefly
what Distrobox is.
It's a wrapper around Podman or Docker, so you're free to choose.
And instead of using containers to isolate workflows and workloads,
it creates containers that are tightly integrated, so the complete opposite.
And yeah, it mainly gives you a user land
of whatever nature you want, so whatever distro you want, so that it
decouples the user land from the base distro that you have on your host,
your laptop, your desktop, where you do your stuff. So you can install packages, you can develop, you can do games,
whatever. The important bit is the tight integration and the possibility of, you know,
leveraging, I don't know, your graphical session, audio, and, I don't know, USB devices,
stuff like that, so that it doesn't feel like a container.
It feels more like a fancy chroot or something like that. The idea was to go...
What I was going to say: when I first saw it, I sort of thought of it as, like, a kind of Linux
subsystem for Linux.
You know how like Windows subsystem for Linux is you have this Linux thing running on top of Windows.
It's tightly integrated.
You can modify your files from your Windows system.
Here, you can do the same thing.
So you're on, like, Debian, for example, and you want to install an application from the AUR, and then access all your Debian files.
You could just do that.
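For anyone following along, that workflow looks roughly like this on the command line (container name and image tag here are illustrative):

```shell
# On a Debian host: create an Arch box that shares your HOME with the host
distrobox create --name arch-box --image docker.io/library/archlinux:latest

# Enter it; your Debian home files are all right there
distrobox enter arch-box

# Inside the box you can build AUR packages, and anything written
# under $HOME is the very same file tree the Debian host sees
```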
Yeah, basically the same as WSL,
or I think it's called Crostini on Chrome OS.
It's a Linux subsystem in a VM.
So instead of using VMs, in this case, we are just using plain containers.
So performance is definitely better.
Yeah, so the idea was that.
It's not that different than simply doing a chroot, because I am basically undoing all
the compartmentalization and isolation that Podman and Docker do by default.
But the idea was to leverage their infrastructure.
So they have so many images,
so many cool things that you can use.
Just starting from simple chroots
was just more work at that point.
It was easier to just reintegrate the Docker infrastructure. So it went that way, and yeah,
it's useful for various situations.
Personally, it was born because I needed it.
So.
Right.
Well, I was going to ask you why you made it.
Like what specific thing was it that caused you
to want to go make this?
I was working for a company where they gave me a company laptop.
I asked for a Linux laptop, and they actually gave me one.
But then you cannot be sudo, because policies: you cannot do this, you cannot do that.
It was, like, an Ubuntu, I think 18.04, but it was in 2020. So ancient stuff.
And at one point, I just needed to work.
Right.
So I started with simple containers, then one thing leads to another, and here we are.
And, well, one of the things that I always found weird about the project, and it makes sense at some level,
but the fact that it's all done in Bash scripts. So I'm sure people have
mentioned this, like...
Shell. Okay, sorry, POSIX shell. Different. Sorry, it's harder.
That's actually a fair point. Okay, why did you want to do it in POSIX shell? And, like, you know, a lot of people
would suggest: why not use a common glue language like Python or Perl or something like that?
Obviously doing something like C wouldn't make any sense because you're basically just making
Docker and Podman calls, but why not something that's like an actual programming language?
Two reasons, actually. One is the fact that it needs to run without depend...
Except for, obviously, Podman and Docker.
Sure.
It needs to run without dependencies anywhere.
Because if you see how Distrobox is structured,
you have various pieces.
So you have various utilities.
Then you have the init,
which is actually the init that we inject inside the container, which
runs at the startup of the container
and does everything, let's say.
That one needs to only run on POSIX stuff
because it may have to run on a container that only
has POSIX stuff
let's say Alpine or Void Linux or Gentoo, and I even support Slackware, so,
I mean, I don't know who uses it, but yeah, it needs to run on very lean stuff.
And in the end, the init is the biggest part
of the whole project and the most difficult part for me,
I think.
And the others are pretty much smaller utilities, you know: list, rm, I don't know, start, stop. They're very small scripts.
Create and enter are a bit bigger, but just because we give some, you know, flags, some
interactivity, some options, it's verbose, not difficult. It's long, not difficult.
The init is difficult because you have so many things to do.
Right.
To do that integration.
And so many things to keep an eye on because, I don't know, a fix on Ubuntu can break something on Alpine.
Right. It happened, actually.
And then there's the support for initful systems,
like systemd; OpenRC is actually supported.
It makes it, obviously, difficult. But I
wanted something that can run on anything. I didn't want any library
hell, so that's one big reason. And I wanted something that was fast, interactive,
and easy to contribute to. And in hindsight, probably Golang would be a little bit better in some ways.
Not for the init, right, because that needs to be, you know, a simple shell entry point, to be
fully compatible with everyone. But then again, if that is the most difficult
part and it's done in shell,
I may as well do the rest in shell.
so
so the init is the thing that actually takes
your docker or podman
container and then makes it this thing
that's tightly integrated
yeah so there are
two phases.
In the create, we do some fancy mounts.
Like the key mount, obviously, is mounting the whole root
of the host inside /run/host in the container.
So everything is still accessible inside
the container.
Then the init
is actually like
PID1 of the container.
So it's like literally an init.
And
what it does is
setting up
the TTY devices,
setting up all the various mounts that are needed at runtime,
setting up, I don't know, stuff like /etc/hosts, resolv.conf, all the integration part.
And it has pre and post hooks, so you can customize a bit your init process,
integrates the, I don't know, icons, fonts,
and stuff like that with the host.
And then it either stops there and, you know,
just waits so you can do connection to it,
or it starts another init, so either systemd or
OpenRC. I never tried s6, but, I mean, stuff like that. And, um, yeah, it's more or less the init
in the real sense of the word.
Right, right. Okay, that makes sense. So just, I know there's a lot of people
that might not be aware of what like
Docker and Podman actually are.
So if you weren't to do any of this,
you just left Docker just as Docker normally would work,
what would that actually be like
for running an application inside of that?
And then what sort of changes did you need to make
to actually make it usable for what you want to do here?
So, like, if you were to just do docker run,
I don't know, Alpine or Ubuntu latest, whatever,
what you have is a very bare shell.
So you have, like, BusyBox, I think,
for Alpine, and Bash for Ubuntu,
and you pretty much have just that. So you don't have any init process set up.
Podman supports starting systemd containers,
but then again, you have to use a dedicated image for it, because you need to have,
you know, systemd inside. And you would have no access to anything in the file system of the host,
except for volumes that you specify at creation time. You don't have display
access, so Wayland, X11, stuff like that.
Device access also is inhibited, so you can declare what to access at create time, not at run time, stuff like that.
So a container is not as isolated as a VM, obviously, because we are still using the host kernel.
So you have that. So if you have a kernel vulnerability, you pretty much can do whatever.
But you have complete user land isolation. So with Distrobox, the work was on undoing that. So, um, yeah, that was it: figuring out what to
do. Still discovering stuff, obviously. People have crazy use cases and workflows, and, uh, yeah, so
that's where... I still learn that, I don't know,
Podman and Docker could do that, stuff like that, you know.
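Roughly, the contrast being described, sketched as commands (the volume and display flags shown are the common manual ones; exact needs vary by setup):

```shell
# Plain Docker: a bare user land, no host files, no display
docker run -it docker.io/library/alpine:latest sh

# Manual integration has to be declared up front, at create time:
docker run -it \
  -v "$HOME":"$HOME" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY="$DISPLAY" \
  docker.io/library/ubuntu:latest bash

# Distrobox automates this (plus audio, USB devices, fonts, etc.):
distrobox create --name dev --image docker.io/library/ubuntu:latest
distrobox enter dev
```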
Well, this way of using Docker and Podman is obviously not the main intended way,
but how much of it was fairly straightforward to get done with something,
with this Distrobox project?
And how much of it was, like, you had to kind of bend some stuff
that shouldn't really have worked the way it did?
Yeah, I think all the volume part
is the straightforward way.
Volumes and environment variables
are pretty much straightforward.
When I started Docker... sorry, Distrobox,
when I started Distrobox, it was, I think,
a couple of hundred lines long only, everything.
And it was pretty much a create with a bunch of volumes
hard-coded, and the enter was pretty much, like, a full loop over, you
know, your env vars, so that it exports the env vars and then enters. Which is not that different now.
I mean, it's still pretty much that. Um, obviously, expanding the use cases, we find that, I don't know, how to
actually enter, what command to execute when you enter the container, is actually important. So,
having, for example... I think it was a fix just now in the 1.7, about when you have an initful container
with, for example, systemd,
you actually want to do a proper login,
so it can give you a login session,
so it acts more like a real thing.
Let's say it's halfway between what you expect from a normal container and what you expect from an LXC, for example,
which acts a little bit more like a VM for you,
but it's still a container, right?
Right.
So yeah, that part was a little bit bending
what is advisable to do with these container managers. But, I mean, if you can...
Well, the whole just running, like, tons of applications inside of a container like this is kind of not the way
that it's typically intended to be used. Like, most of the time when people are talking about running
Docker, they're talking about running Nginx in a Docker container, and then, like, Jitsi in a Docker container.
It's usually not, like... you know, because if you have, like, an
Ubuntu container with Distrobox, you might install, like, 10 or 20 different applications with that, exporting all of those applications out of the box and...
It's just... It's a really
different way of working with these containers that
I would imagine has caused some kind of headaches along the way, trying to work out specific problems.
Less than you'd think, to be honest. Yeah, what Docker and Podman are by default
are application containers. So think of it like a Flatpak, but for services. So you
actually just run, I don't know, Nginx, and that's it. Or just run, I don't know, qBittorrent, and that's it. Just that. But in the end, it's just a Linux user land, so you can
actually do what you need. Many containers, like proper ones, only expose one thing, for example, I don't know, the
port of one service, but they are running three or four processes inside, for... I don't know, the
service depends on those, right? So it's not a new concept at all. But I'd say we are,
with Distrobox and other solutions for pet containers like those, which we can talk about later,
more like a halfway point,
as I was saying,
between, let's call it, a user land container,
so your whole user land, and a system container, like LXC or LXD,
where you have a container of a whole system.
It's a little bit different from Docker and Podman.
But, I mean, the technology is the same.
It's just what you do with that.
The technology underneath LXC is the same as underneath Docker.
I think Docker, when it started,
actually was using LXC under the hood.
Like, the first year it was released
or something like that,
before containerd.
So the technology is always the same.
So it's not reinventing the wheel.
Right, right.
One of the things I find really cool about Docker
is not that it's just this neat little content...
Sorry, Docker.
Distrobox.
I'm going to keep doing this.
I'm sure you're going to...
Maybe I should just start saying Podman.
We'll just forget that Docker exists
because I won't mix up Distrobox and Podman.
With Distrobox,
one of the things I find really, really cool
is not that it's just this nice little thing
to manage a bunch of tightly integrated containers,
but you also have this ability
to export applications from the container
and just basically treat it
like it's just an application on your system
Obviously, there are weird caveats with that, with, um, GUI applications. I know on Arch, for example,
I had to... there's, like, a specific thing, you've got it in the, like, FAQ there. It's, like, I think it
doesn't pass in the display or something, I don't remember the exact details. You probably had to report the issue. It's the magic cookie.
Yeah, yeah, yeah.
Some systems don't have a magic cookie by default.
Like, GNOME and KDE for sure do, in /run/user, whatever.
So you can use that magic cookie to have your X authentication working.
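(The workaround usually pointed to for window-manager setups without a cookie is along these lines; treat it as a sketch, since the exact fix depends on the display setup:)

```shell
# Allow only your own local user through the X server's
# server-interpreted access control list
xhost +si:localuser:"$USER"
```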
Yeah, yeah. But for, I don't know, homemade
systems, like, you know, a window manager and stuff like that, you have to manually...
Yeah, so you have to manually do the xauth thing or it won't work. Yeah. Um, but I find it really
cool that you can just export the applications and treat them as if they're just a regular part
of your system. And that, I think, is what makes it really, really useful, especially if you just don't want to deal with... like, you want
to get it set up and then you just don't want to look at the fact there's a container there.
And I get it that some people would want to use it like that, right? Like, it is kind of
complicated to have a system in a system, and once you get something set up, you probably just want it
to be easily accessible and just there.
Obviously, there is that caveat as well where the first time the container opens up when you boot your system, it's going to take a while to do so.
But besides that, once it's already running, it just works.
That's awesome.
Yeah, it was actually one of the first things I started introducing when it was still a
set of scripts on my laptop.
Because it's actually useful.
I mean, it's one of the first things that I needed, actually.
And, as with all software, it started easy and then it became
quickly complicated. At the beginning I was supporting exporting, like, apps, like desktop apps,
binaries, and systemd services. But at one point I introduced, like, init containers, so I removed the exported services, because...
I mean, it was just simpler to maintain, you know, the services inside the container,
as if it was, like, with their own init system, whatever. And yeah, the apps part was a bit of a...
it's a bit of a hack, because, you know,
you have to find the desktop file,
set a lot of things, and, you know,
just introduce, like, a prefix
of all the distrobox commands
and then the original command for the app, and export the icons, because, you know, you have to
show something, and stuff like that. So the binary part was plain easy. I mean, it was, like...
it's, like, just a wrapper that actually runs the distrobox enter command with your original binary thing. So that
was easy. The app part was a little bit more difficult, but it's been working well for a while now.
I don't see many people complaining, so I'm happy. But yeah, things can always improve on that front.
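The binary-export wrapper he describes can be sketched like this; the real script generated by `distrobox-export --bin` carries more bookkeeping, and the box name `dev` here is hypothetical:

```shell
# Generate a host-side wrapper that re-enters the box to run the real binary
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/htop" <<'EOF'
#!/bin/sh
# Wrapper sketch: run the container's /usr/bin/htop via distrobox enter
exec distrobox enter dev -- /usr/bin/htop "$@"
EOF
chmod +x "$HOME/.local/bin/htop"
```

Desktop apps work similarly, except the `.desktop` file and icons are copied to the host with the `Exec=` line prefixed the same way.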
And I think one of the most useful features that we...
because there is a big community, for which I thank everyone... that we all introduced
was the assemble file,
with which you can declaratively declare your distroboxes, with all the packages that you need,
and declare all the bins and the apps that you want to export,
so that you don't have to manually go and export, blah, blah, blah,
and stuff like that.
You can just do distrobox assemble create.
It will create all the distroboxes that you have declared,
export all the stuff.
It's pretty much, like, one file fits all for your environment.
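An assemble manifest looks roughly like this (an INI file, usually `distrobox.ini`; the images, packages, and section names below are illustrative, and the exact supported keys are in the Distrobox docs):

```ini
[main]
image=registry.opensuse.org/opensuse/tumbleweed:latest
additional_packages="git neovim"
exported_apps="firefox"
exported_bins="/usr/bin/nvim"

[alpine-gh]
image=docker.io/library/alpine:latest
additional_packages="github-cli"
exported_bins="/usr/bin/gh"
```

Then `distrobox assemble create --file ./distrobox.ini` recreates and re-exports everything in one go.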
I use it heavily.
For example, in my case, I have, like, a main distrobox which, I mean, is
what my terminal opens when I open a terminal,
and where I live, actually.
And I have a little Alpine distro box
just for one CLI tool, which is GitHub CLI.
OK.
Because the distro box I use as a user land,
which is Tumbleweed, doesn't have it in the repo.
So I mean, I just create another one.
And then a rootful one.
So, like, the container runs as root,
which runs the whole libvirt
suite. So I have libvirtd,
virt-manager... so all my VMs actually run inside
the container. So I don't have to install
anything on the host. It's easily, you know, replaceable.
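That rootful, "initful" setup can be sketched as follows (image and package names are illustrative):

```shell
# A rootful box with its own init, for the libvirt stack
distrobox create --root --init \
  --name virt \
  --image registry.opensuse.org/opensuse/tumbleweed:latest \
  --additional-packages "libvirt-daemon qemu-kvm virt-manager"

# Rootful boxes are entered with --root as well
distrobox enter --root virt
```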
And yeah, with assemble, for me... like, just yesterday I reinstalled my laptop because I wanted to try
the new Plasma, and, I mean, with one command I was up and running. So it was pretty, pretty useful.
Yeah, I had a good connection, but yeah.
Yeah, I still need to try Plasma. The package is finally
available on Arch now, so I'm gonna install it after this. I didn't want to do
it before the show, just in case I broke something. Yeah, so, like, I haven't run, like, a
system update today or anything, just... I want to make sure everything is
good, I don't break anything.
That's a
later problem.
yeah I mean it's
I actually
use one of the
offerings of the Ublue project
because
it's
more up to date than Arch.
I always love when people talk about Arch being,
like, a cutting edge distro. It's not.
No. Like certain
things are cutting edge
and most of it just happens to be
rolling. Yeah, certain. Yeah.
I think the point is
the AUR is cutting edge.
Absolutely. Yeah. If you're downloading the latest Git commit.
Yeah, absolutely then.
Yeah, I try to avoid that with anything
that I consider like very important software.
Like I don't run Git commit desktops, for example,
unless there's like a specific reason for it,
like testing something
or like there's a known good commit or something like that.
The people who just run the latest
commit of a project, they
are living
on... Brave people. Yeah, very
brave people. Like
there was an issue with
glibc. This is a...
don't even consider
trying this. There was an issue with glibc
where people were replacing
the system glibc with a Git glibc with a custom patch on it. Like, that's really living on the edge.
I think, uh, it might be a little bit harsh, but I think people doing that are either doing that on a spare computer, just for fun,
which I completely agree with,
like having your experimental PC on one side. But if they are doing that on their main PC, I really doubt they are doing anything with that PC anyway.
Like, I wouldn't risk breaking my main PC just
for a glibc patch or whatever.
So that's why I'm pretty much on the immutable system train,
mostly for the reliability of them.
If you have a problem, just roll back.
And it's very, very improbable that they break by themselves,
even though... like, on the main PC,
I'm using Aeon, which is the immutable of openSUSE.
And I mean, if you don't touch it, it works.
It's a rolling release, so I have stuff pretty new.
The base is very minimal.
Like if you remove all the Flatpak,
you just have GNOME settings, terminal files.
And I think that's it.
Something like that.
It's very bare bones.
Everything is Flatpak or Distrobox.
So everything is modular from that point of view. And I think I'm doing
the same... I will do the same with Kinoite. That's as close as I would get. Yeah, I'm
not sure how to say it either. KDE Atomic. Yeah, we'll go with that one. That works much
better. On the other laptop. So pretty much, it's very improbable... it's Rawhide, so it's still...
it's like running Arch with testing repos, so it's pretty much experimental. It's pretty solid in the
end, but it can happen that, I don't know, they change something in the repos, something breaks.
We are human, it can happen.
But I have more, let's say, guarantees that my laptop will turn on when I need it, which, in my experience, years ago wasn't
the case with Arch.
There's an issue... not an issue, it's more of a package that was updated. Let me just find what it was. It is mkinitcpio.
There was an update to that which, moving to version 38, changed how the microcode, like, load line is done, and they have basically no explanation in the Arch news about what needs to be done. If you want to know the change that needs to be made, you actually need to go to, like, Reddit and other places like that where people have the list of commands to run.
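(For reference, the change being talked about: mkinitcpio v38 added a `microcode` hook that embeds the CPU microcode image into the initramfs, making the separate bootloader initrd line redundant. A sketch of the migration, hedged since hook order and boot entry paths vary per system:)

```shell
# 1. In /etc/mkinitcpio.conf, add the new hook, e.g.:
#      HOOKS=(base udev autodetect microcode modconf kms block filesystems fsck)

# 2. Drop the now-redundant line from the boot entry,
#    e.g. in /boot/loader/entries/arch.conf:
#      initrd /intel-ucode.img    <-- remove

# 3. Regenerate all initramfs presets
mkinitcpio -P
```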
Yeah, for me it was probably, like, 2016 or '17,
the last time I used Arch.
And I was using it on my trusty ThinkPad
and I was preparing for an exam at university
and it decided not to boot anymore.
So while I was following a session, I created a Fedora USB and, you know,
reflashed it, and that was the tipping point where I decided, okay,
experiments are for something, you know, a dedicated laptop or a VM or whatever.
Sometimes I still, you know, create a VM just to enjoy the ricing, you know.
But for my laptop, I need it to work.
I don't dare.
I just use it as an appliance, actually. Like, turn it on, and, whatever, it does its
job, and turn it off. I experiment on other things.
So it sounds like, for, like, your
main systems, you're pretty much all-in on either Fedora land or something based on Fedora... why am
I getting a call? Uh, no, uh, yeah, give me a second. No, sorry, one sec, I'm just
getting a phone call, I'll just tell them to go away. Um... Okay, we're good.
I got a call from my mom.
It's fine.
It's important to know.
It wasn't anything.
She probably just wanted to chat.
I'll call her again afterwards anyway.
Yeah, so you were saying something about...
I mentioned you're doing a lot of...
Like, using a lot of Fedora stuff, or uBlue or whatever.
And you were saying something, and then...
I use mainly SUSE and Fedora right now.
Right.
Okay, right, right, right.
Another laptop is for Vanilla OS.
And yeah, I really like where everything is going.
Those are the three main, let's say, philosophies of immutability that I see taking steps into the mainstream.
Let's say I like where everything is going.
I like that you can customize either UBLU or Vanilla OS
with Dockerfiles.
You can create your own system pretty easily
and still have the guarantees
of
solidity, of
reliability
that the
system can offer you.
When people say that
that's why
probably immutable is not the right
term. I like atomic.
I think atomic is a better term for it.
Atomic is much better, yeah.
And when people say they are not customizable,
it's not true at all.
What I think is, they are just customizable at a different point in time.
Yes, exactly.
They are not customizable easily.
I mean, you can still do it.
You can do anything at runtime.
But they are extremely, even more customizable,
if you think about it, at build time.
Because then you're just building whatever you want.
You're not adding or removing the bloat,
or that stuff that people do with scripts. You are just doing it.
You create your... what is it called... the golden image, in the cloud, you know.
In the cloud, when we create our VMs for something, like, for example, VMs to run Kubernetes,
where you then run your containers, your application containers.
VMs are pretty much golden images.
You create the image of that VM,
and when you have to update it,
you just create a new image of that VM.
It's not like APT, blah, blah, blah, or whatever, DNF, whatever.
So I think taking this concept back to the desktop,
it's extremely important.
Obviously, desktop is much more difficult from that point of view
than servers, because desktops are much more versatile than servers. Because servers, I mean, they mostly
do one thing, which is serve. Desktops, I mean, you have many, you know, corner cases, many different hardware, many things.
I mean, I don't know.
Just think about NVIDIA stuff or AMD GPU stuff, the proprietary one.
So it's not as easy.
So I like how uBlue and Vanilla are making it more mainstream
to have
custom images for stuff.
Like, I think the concept was
first introduced by
Pop!_OS, actually, to have, like, a dedicated
NVIDIA
image, and I think an Intel one.
Now you have, like, proper image-based
desktops with this concept. I think uBlue
cranks it to 11, where you can choose, even for some laptops,
a dedicated image. Like, I know that they support the
Framework laptop with a dedicated Framework image. So
this opens up, I think, an easy way for OEMs
to create their own stuff.
They just have to create their own Containerfile
and point, I don't know, the distro
to update from a specific registry, and they are pretty much done. They
don't need dedicated, you know, repos for each model or stuff like that. It will make OEM stuff,
I think, pretty much a breeze for them.
Yeah, I've talked about this plenty of times, and I'm sure
people are sick of me saying it. Um, I feel the exact same way that you do about the immutable, atomic images... we're just gonna
call them atomic, because it's a better term. Um, yeah, I feel like the problem that has existed,
and the reason why people do think that they're just not customizable, is there just hasn't been
that same level of exposure and same level of documentation on how to do that customization.
Because if you want to go and, like, if you want to go and install a package on Fedora, it is very
well documented how to do that, how to remove packages, how to add new repos, all of that sort
of stuff. But this whole idea of using, like, an image on the desktop, using this atomic system...
it's been around for a bit, but as, like, a mainstream thing that
people do, really only the past couple of years. And really only recently, because
of things like Flatpak and Distrobox that make it a lot easier to install
applications that aren't there as part of the image, is it becoming more and more
popular. And thanks to projects like uBlue, actually customizing the image is...
It's still obviously different, and it's still obviously a challenge to a lot of people,
but the documentation is being put together, and there is progress being made towards
people understanding that you actually can mess with these systems.
It's just not the way that you've traditionally done it.
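The image customization being talked about really is just a container build; a minimal sketch of a custom uBlue-style image (base image and package names are illustrative):

```dockerfile
# Containerfile: derive a personal OS image from a uBlue base
FROM ghcr.io/ublue-os/silverblue-main:latest

# Bake packages into the image at build time, instead of layering at runtime
RUN rpm-ostree install distrobox virt-manager && \
    ostree container commit
```

Build it, push it to a registry, point the system at that image, and from then on updates follow your Containerfile.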
Yeah, yeah.
I think the important part is, both uBlue and Vanilla,
they leverage something that has existed for the past, I don't know, 10, 15 years,
because Docker images are pretty much
the standard in the real work world, right? And it's a skill that you can upcycle in desktop environments. So they are, like, not doing their own stuff.
They are just leveraging something that already exists.
It's very important for the skill set and the documentation,
because when one searches how to do something in a Dockerfile,
they will find plenty of stuff.
Whatever... Stack Overflow, Google, whatever.
And that's the important part.
Instead of doing their own, whatever,
make-my-image-dot-whatever, whatever it is,
I think that's the winning point: upcycling
existing skills from the server world.
I mean, Linux is pretty much, I don't know, 90-something percent there, something like that.
I wouldn't be surprised if the number's something like that, yeah.
And upcycling it on the desktop, so that you can leverage a pretty large skill set.
So I think that's the most important part. I think the immutability, atomicity and stuff like that is...
They are...
Let's call them good side effects that you find along the way
when you come to these image-based desktops, right? The only way to have an
image-based desktop is to actually then be atomic, immutable, and so on. It's like
a dependency, right? And I think it's just shifting the point of view completely.
But, yeah, you have many, let's say, many good side effects,
but then you have to change the perspective on how you do things.
It's not that you cannot do things, you just have to do things differently.
Well, on the topic of these atomic systems, I did bring up in there that Distrobox has really...
it has changed the way that people are able to actually use these systems. Same with Flatpak... Flatpak obviously has done this as well. But just having full access to a non-atomic system on top of this atomic system...
Because previously you could obviously do overlays, which then cause issues, because if you have a
bunch of overlay packages... especially with OSTree, a lot of overlay packages
tends to cause the update process to slow down quite a bit.
But just not doing that, and instead having all of this done in
the user directory, which is perfectly writable and is not going to cause that to be an issue whatsoever...
I think that has made it a bit more accessible. Because, obviously, you can customize it, yeah, but a
lot of people are not going to take that step. And then on something like... I've got a Steam Deck back there, you know... making that your own image, there's, like, no documentation for that, you'd need to make something entirely custom. So it really does just
make working with these systems quite a bit easier, and at least get your foot in the door,
where then, if you feel more comfortable, you can actually go that extra step
and make your own custom image
that is truly your system image.
Yeah, I think the idea of pet containers,
so a container that you don't
just create and destroy on the spot for your application or whatever,
which is how Podman and Docker usually work, right?
A pet container is a container that you care for, because you manually install stuff in it.
The concept was actually already in Fedora using Toolbox, which is their pet container tool.
And when DistroBox was created,
it was a couple of years ago, I think.
I think 2021.
So yeah, probably something more.
It was still like a Fedora-only thing.
So that's why probably DistroBox was very useful
to other immutable distributions,
so that they had their tool too.
And I remember two, three years ago
that they were speaking about replacing the whole terminal with toolbox at the time.
So it's not a new concept, right?
It's something that obviously was needed.
I think that we are in a transitional phase now.
Like there are various ways where you can work,
let's say, around the immutability when you need stuff to actually get done.
And there are obviously Flatpaks
for graphical applications.
There are Snaps.
There's AppImages as well?
There are these... sorry?
There's AppImages as well?
AppImages, when they work. And there is then the containers part. But containers can be used
in multiple ways: there are dev containers, and there are pet containers, so Distrobox,
Toolbox. And I think what is really important for developers is to start adopting more
dev containers, which is an extension, like a protocol, let's call it that, that Microsoft created,
which you can use with pretty much any IDE or editor, like VS Code, JetBrains, you know, that stuff. Where,
when you clone a project, all the bits and stuff actually run inside said container, so you don't
have to actually install, I don't know, the JVM or Golang or Rust stuff on your host, but inside the container.
And it's not a pet container, so you can destroy it and recreate it.
It's all specified in that file.
So that should probably be what people will need to learn to go to. Because when we speak about the development experience,
Linux is actually the odd one out, where you actually
install stuff on your system. Because the two most popular OSes for cloud-native stuff
are Linux and Mac OS.
Someone uses Windows too, but it's still WSL,
so we are pretty much in the same boat.
And it's very rare that they just, you know, open the terminal
and start installing stuff.
They either use something in their home, like brew, or they just use dev containers.
There are many tools, like the one we do at my company, it's called DevPod, that help using
them in a very easy, approachable way, right?
You just have this application. When you create your project,
you slap in the URL of the GitHub page, and, I don't know,
you choose VS Code, you just start it,
and it starts with the container already provisioned
with whatever it needs to develop.
You don't need to even think about dependencies
and stuff like that.
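The dev container flow he describes is driven by a spec file checked into the repo. A minimal sketch of a `.devcontainer/devcontainer.json` (the image tag and command here are placeholders, not from the conversation):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/go:1.22",
  "postCreateCommand": "go mod download"
}
```

An editor like VS Code or JetBrains, or a CLI tool like DevPod, reads this file and runs the project tooling inside the container it describes.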
On the other side, there are people like me
that develop in Vim,
where you actually need a full terminal,
and that's where pet containers are good. You create your own development container where
you do stuff, and probably the best thing there is to start adopting
assemble files, so that you can easily recreate and refresh your container.
It happened many times that I just mess around and break stuff, so I just need to
rebuild it, and finding back all the stuff that I installed is a pain. So
having some way to recreate it as it was is really useful. And I envision a point where we will come to a stage where actually
installing stuff on the base system isn't even needed anymore, like at all.
We will either have custom images or we will have containers.
That's it.
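The assemble files mentioned above are declarative manifests for distrobox-assemble. A rough sketch, where the container name, image, and packages are invented for illustration:

```ini
# Hypothetical distrobox.ini, applied with:
#   distrobox assemble create --file ./distrobox.ini
[dev]
image=registry.fedoraproject.org/fedora:40
pull=true
additional_packages="git gcc make"
init_hooks="echo container ready"
```

Since the whole container is described in one file, breaking it costs nothing: delete the box and re-run the assemble command.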
So if I'm understanding correctly,
sort of keeping the host system clean of the development environment,
because we already do this to a certain extent, right?
Where you will install project-specific libraries.
Take, for example, like a Python venv.
Like you will install libraries
for that specific environment,
but you're saying take that a step further
and move the development
tooling completely off of the host
and do everything inside of a container
so you don't have
cargo, for example. You don't have cargo
installed on your host system. It is done
entirely in the container, and then all
of the libraries are done in that container as well.
Yeah, exactly.
And that is important not only for, I don't know...
I mean, modern languages pretty much have ways to work around this breaking of the system, right?
Like Rust and Go put your stuff in your home, usually,
and they don't really need many dependencies, right?
You just have rustup or, in the case of Go,
you just have the Go binary, actually.
And with Python, you have pyenv.
But for other things, it's not the same.
I don't know.
Let's think about having to develop a C or C++ project. You still have to
install, you know,
lib-something-something-dev or, you know, that stuff, which
still messes up your
system in one way or another, because
most
package managers don't have an
undo button, right?
You have to remember what
you did and undo it if you want to undo it. And this also leads to a lot of
errors in the development phase, or even the packaging phase, because working with
containers which are pretty much super minimal images, right?
Like the Alpine one is 5 megabytes, something like that.
The Ubuntu one, which is one of the largest, is 40.
They are not big at all.
And starting some apps there, you see how many things are missing, because they were part of the base image
and people just took it for granted. So the packages are actually not declaring all their
dependencies, they're declaring some of their dependencies. And the same goes for the
development part, where you go into a README and it says, oh, you need these libraries, and then it doesn't
work, because they are taking for granted some libraries that were already on the system.
This is something I see especially if something is packaged with Ubuntu in mind and then you want to
install on something like Arch.
They will assume things are there
that were just there on Ubuntu,
whereas they are not necessarily a part of a minimal system.
And it'll just throw some error,
like random things like that.
One example I actually had recently is Hyprland.
I was doing some debugging, and
the dev wanted me to compile stuff, and he had no documentation on
what needed to be
included for the build. So Hyprland wouldn't build without,
for example, Doxygen installed. And so I had to install Doxygen, I had to install this,
I had to install this, I had to install this, because none of that was actually documented,
because on his system he had those installed already, so it's not gonna ever throw that
build error. And I get exactly what you're saying here, that makes sense, yeah.
Yeah, it's a nice thing to have, both on the development side, so that you are less prone to those errors,
and also on the, let's say, distribution side.
So on the user side. Like for you, I think you like your system to be as lean as possible, right?
Okay.
Yes.
Also, it breaking is content.
Yeah, I mean, okay.
But I think for a person that is thinking about setting up their own minimal operating system
with a window manager, rofi and stuff like that,
at least the point for me is thinking about having it as lean and optimized as possible, so that I can
maximize what I do, so for example stream or develop or stuff like that. And then you have to install
Doxygen. So, I mean, it doesn't make sense, right, to have so many compilation dependencies in a system just to run something that then doesn't depend on them.
And then if you just want to install, compile,
and then clean up, the clean up part is not a given.
Yeah.
Not many.
I think just one package manager supports undo,
which is DNF.
I didn't know that.
It's not always possible. Yeah, DNF has history undo, and then you say, like, the transaction ID, and then it will undo that.
The point is that if, in the meantime, something updated, and then, for example, in that transaction you, I don't know,
went from htop 1.1 to 1.2, then it will try to downgrade it to 1.1, but if it's not in the
repos anymore, you cannot undo stuff, right? So it's not a perfect thing. It's something which not many other package managers
do, but still it's an incomplete process of cleanup. While if you just have your own
package compiling environment, you can just scrap that in a second.
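The DNF flow being described looks roughly like this (the transaction ID 42 is a placeholder):

```shell
dnf history list          # show past transactions and their IDs
sudo dnf history undo 42  # attempt to reverse transaction 42
```

As he says, the undo can fail if the old package versions have already left the repos.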
This is not a problem that most normal people have,
but I also have an issue with installing random applications.
I'll do videos on, like, I don't know,
some htop clone or some random thing,
and I will have hundreds of packages that I forgot I installed,
and every time I do a system update,
I'm just like, wait, why do I have...
What is that application again?
Why is that installed?
Why do you have one gigabyte of updates?
One gigabyte's a small update.
Yeah, I just have all of these extra things installed.
I just...
Because I forget to remove them after I do the video.
And then it's not just the application,
it's the dependencies for the application.
And I probably should just do these inside of a container.
And the next time I clean my system, which is just delete the Arch install
and just redo an Arch install or replace it with something else if I decide to do that,
I probably should just start doing those inside of a container
just so I can get rid of it when I'm done with it.
A nice thing would be to also use a custom home for the container. Like, with Distrobox, you can
use a --home flag during the create. It will use that directory as the home directory for the user
inside the container, but it still has access to the
original directory.
So if you enter the container inside your project to develop, you still have access
to that project inside your home directory.
But all dotfiles and configs go to that home, the custom one, so it doesn't litter your
own home.
Right. So if you try a lot of applications,
then you don't have all those dot-blah-blah-blah-rc files, all of that.
Or for example, you want, I don't know,
something to be configured differently.
Like two instances of the same application,
but differently configured.
Let's say you want Discord with multiple accounts, Signal with multiple accounts, you know, stuff
like that.
You can just use a different home for each one, and then the configs go in different
points.
So it's like having a clean installation and then you do whatever you want.
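The --home trick he describes looks roughly like this (the container names and paths are made up for the example):

```shell
# Two instances of the same app, each with an isolated home,
# so their dotfiles never land in your real $HOME:
distrobox create --name chat-work --home ~/boxes/chat-work
distrobox create --name chat-personal --home ~/boxes/chat-personal
distrobox enter chat-work   # configs go under ~/boxes/chat-work
```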
I'm going to send you an image.
This is my .config directory.
I don't know how many of these applications I still have installed.
That's not all of it.
It's just what I could fit on one screenshot.
That's a lot.
It keeps going further.
There's probably, like, another... because that shot, it ends at O.
There's still like probably another hundred more in there.
I think I will just quickly look at my .config
and then work it out.
I have 40 things in my .config.
So it's...
I like to keep it clean.
Mm-hmm.
Yeah, I don't even use custom home to be honest, but yeah.
Yeah, but like, this is my point. Like, I'll install a bunch of things, and
it's not just the application, because then I have to clean up other things that it comes with that might not necessarily be cleaned up by the package manager, like the config files or any other files that might litter across your system,
that aren't managed by the package manager; instead, they're generated by the application.
Um, yeah. I- I-
That's why it's, uh...
It's useful to have this type of separation, so you can keep your system doing its job. I mean, the system's job is to be a system,
not to be your application thing. An application is an application, it's not the system.
I think having this separation, like Flatpak does very well, is extremely important. So I think starting to do that for
other things is also pretty much useful for everyone. It's just a matter of
taking on, you know, the routine of doing it, right? Yeah. And that's also the other
big thing because I can start doing
something and then the second you stop
then it's like well I've stopped
now so why don't I just
yeah keep it going
so
yeah
I also
wanted to ask you about, so, back when
I first looked at Distrobox, you supported
Docker and Podman, which for the most part work in fairly similar ways. But you also
have this other thing now called Lilipod. What is that, and why?
Why? Why not.
Fair.
But yeah, mainly it was a project that I started to really learn all the bits and bobs underneath for how the container works.
So I just wrote my own container manager. And it actually seems to work
decently. And the idea was to have something with as few dependencies as possible, down to one.
It's like a single statically compiled binary in Go. So it doesn't really
have any dependency except, you know, BusyBox. I pretty much think anyone has coreutils
on a system. But the idea was to have a nice fallback for a system where you want to run your Distrobox or whatever, but installing Podman or Docker is pretty huge, right?
You have many dependencies, you have configurations to do, and stuff like that.
So with Lilipod, it's plug and play. You don't have any configuration to do, any dependency.
Obviously, it doesn't do, like,
I don't think even one-tenth of the things that Podman does, probably less.
But the point is doing what Distrobox needs.
It's not doing what Podman does.
Right.
And it does.
And I'm happy about it.
I need to, yeah, update that one too.
There are many things to it.
It's like a really alpha release, beta release, whatever.
It's very new, very new.
And it's more like an emergency hatch for when you cannot run
one of the big two.
And on that front, I also developed this podman-launcher,
which is just using Go's embedded FS to embed everything,
every dependency of Podman, inside it,
and run it as a single binary. So that one also
works. It's like a very big binary, like 50-60 megabytes, because everything is inside.
It works like an exe on Windows, right? It's actually just an archive, a self-extracting archive, but in this case of Podman. So there is that one too.
And then I discovered...
It was mainly for
a system like the Steam Deck,
where you didn't have Podman, Docker,
stuff like that,
and installing them is a bit of a pain,
because the next update,
it gets deleted.
So I wanted
something that runs from your home and, you know, stays there.
But I think now they are shipping Podman by default, so it's already a big thing.
Yeah, I didn't know about that until I saw it. Like, the Steam Deck, I don't know what the
developers are doing, but they just
add random cool things from time to time.
Like, not too long ago they were like,
here, have NixOS package
support. Like, why?
Sure, okay,
let's have it. Then they're like, oh,
okay, thank you. But yeah, then, like, here,
have Distrobox. I know you said it was like a
really old version of Distrobox, but like,
even so. Yeah, it's like 1.4.
Hmm. Like, the fact they
just added it to the system,
like, ah, okay, yeah.
Uh... I was
surprised.
Because before, they didn't say anything about it,
so... Like, before, you had documentation
for it, but...
It's... Yeah. It's still
like, nice to just have it there and just not have to really
think about it.
Yeah, absolutely. Especially on a system which is atomic and image-based, but
you cannot do the image, like in the case of the Steam Deck. I think it's a perfect example
of how this
image-based stuff really benefits
the OEMs,
because it's incredibly
easy for them to have
a known
golden state for the system.
Obviously, they control the hardware,
but this way
they also control the software, because everyone has the same version, more or less.
Everyone has the same set of packages by default, because every update
actually resets them. So they can also just say, hey, something is misbehaving,
just reset it. Which is not a given in the Linux world, or even the
Windows world, to be honest. A factory reset is just not a thing. Which, I think, for the OEMs
will be the first thing: having a factory reset button like Android does, where you just wipe it and only the system is kept
in its original state, which is a known state. It also makes reproducing issues easy. Because,
for example, already now, I think for image-based OSes like Vanilla OS and Fedora,
you can do a bug report where you explicitly say, I am on this version,
and these are the overlaid packages or added packages.
Me, as a developer, I can reproduce the exact system, just using the same version and
same packages.
We are on pretty much the same system, except maybe the hardware.
And you can already figure out, okay, is this a hardware problem or a software problem?
Can I reproduce it or not?
So stuff like that.
It's much easier. And, yeah, I mean, it goes back to
what we were saying before, where, you know, in which state is your system?
You don't know which state your Arch Linux is in, right? And at least knowing it is a good thing.
Well, also just having that really convenient system
rollback there, because most atomic systems do provide, like, at least two or three of the previous
images. So if for any reason the latest image is broken in some way, like maybe they decided to
ship an application, they didn't test something properly, and that application is broken, in most cases you can just roll back, and it just works. And then maybe when the future version is
working, you just go straight to that one.
Your user data is completely separate from that. So, like, obviously you can do this with a
non-immutable system. You can have a root that is separate from your home directory,
and you can reinstall the ISO and go to a different version.
And this is doable.
It's just a lot of extra work that, with a system like this,
there are these ideas put in place
that you can just very conveniently make use of.
And I think it's really cool.
And I do understand that...
I was going to say, for an OEM, that makes a lot of sense why you'd want to do that.
And with the factory reset, not just a factory reset, but just a system reset.
So if the user did break something by installing overlay packages or something,
you can just, like, fix the
system by itself without breaking their user data.
Yeah. And to be honest, there are other systems, not
immutable ones, that do something similar. Like, if you install Tumbleweed, by default they already
have Btrfs snapshots connected to the zypper package manager.
I think you can do the same with Arch also. Yeah. Where every transaction first does a snapshot of
the running system and then executes the transaction. So the difference in that is that
you still have rollback points, which is already a big win, but you are still modifying
your running system. So the difference between having Tumbleweed with Btrfs snapshots and Aeon
or MicroOS, which are the atomic versions, is that you're never changing your running system. Because many times it happens that you, you
know, update a library, and then all running applications start freaking out, because that library
has been updated at runtime. And I think most of the breakages that happen are this, because there is this
conception that you don't need to reboot to update.
I hate these people so much. I really hate anyone who says that.
I mean, if you think about it, stuff like system updates, do you really think you don't need a reboot for that?
You can technically...
Like there's...
There are things that can be done, you're just probably not doing them.
That's the issue.
Exactly.
So, it's, um...
It's very important that the state of the running system doesn't change under the hood
whilst it's running.
So that's why the reboot is important.
I think the introduction of offline updates,
I think Fedora was one of the first,
with GNOME Software,
that actually does the Windows thing,
where it reboots into an update
and then it reboots back,
already improved the reliability a lot, I think. Because not changing... Think about, like, updating DNF using DNF. I mean, how well can it go while you're running your system?
It can break something. That's why it's important to have both the rollbacks and the immutability part, because
that's where the reliability comes from.
Yep.
One place that does certainly cause issues is out-of-tree modules,
out-of-tree kernel modules.
So a while back, I was using v4l2loopback.
If you've never used that before,
it's basically a way to take a camera device
and then create virtual camera devices and loop them in
because you can't have multiple applications that access the same camera.
This just lets you create virtual cameras.
If you update your kernel
and you update the module,
applications will be like,
where's the module?
I don't know where it is.
Yeah, yeah.
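For reference, the workflow Brodie is describing looks roughly like this (the module parameters are real v4l2loopback options; the rest is a sketch):

```shell
# Load the out-of-tree module, creating one virtual camera device:
sudo modprobe v4l2loopback devices=1 card_label="Virtual Cam"
# After a kernel update, the module has to be rebuilt for the new
# kernel (e.g. via DKMS), or the modprobe above fails to find it.
```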
Also, just think about
akmods or DKMS,
out-of-tree kernel modules, and switching to an image-based
system where the modules and stuff are built upstream, not downstream. They are not built by
your laptop, they are built by the servers. And if something goes wrong, like they don't compile, they simply don't ship the update.
So it's even a prevention thing.
It's not only a rollback thing, right?
Right, because you hear people say things like,
I lost power during an update and I bricked my system.
It happened to me.
Various times, actually.
When you're on an old laptop, you know, and you really trust the battery.
They say, okay, it's one hour, and then in 10 minutes it's discharged,
because it's a very, very old HP. Yeah. And it's one hour when the hard drive is not being smashed, and then the second
you're hitting a bunch of data, well,
bye.
Yeah, it happened, and
I think it was like on
Ubuntu
or something like
12 something before,
something like that.
Yeah, the only way to fix
that was to reinstall there because uh it actually
freaked out i don't remember which very important part if it was the init or the
glibc during the unpacking so it was pretty much unrecoverable at the time, so yeah, tough stuff
Yeah, no matter what
distro you use, if you're using something
that is not... Well, this is the big thing about
atomic. Like,
I guess for anyone who doesn't know, just,
what does it mean for something to be atomic?
Because I would imagine some of you
haven't heard that term before.
It's either you do the stuff or you don't. So when we have something that is not in one go,
for example, updating a system, which, I don't know, can take even 500 or 600 steps,
it depends on how many packages you have.
And each package is a step, if you think about it.
There are many ways, at least 600 ways, things can go wrong, and
you will be left with a system that is in a, let's say, hybrid state.
It's not done, but it's not as it was before.
Let's say you need to update 600 packages and it breaks at the package number 300.
The previous 300 packages were updated, the next ones not. But maybe something in the previous 300 packages depends on the later 300 packages. So you're in a broken state. I don't know how much
this is common now, but back in the day it was. I remember at the time, I don't know, it was Fedora 20,
something like that.
I broke Python in an update,
and DNF broke because DNF was in Python.
So it was pretty much unrecoverable
because Python updated before DNF
in the pipeline, you know, on the list of packages.
So having this all or none deal is very important for the stability of the system.
And you can achieve this in various ways. One is the image-based way, like Vanilla OS and uBlue do, where you just download a big image and unpack
it somewhere.
In the case of OSTree, you have your deployments.
In the case of Vanilla, you literally have another partition where you deploy that image,
fully or not.
If something goes wrong, you just abort the procedure
and you don't touch the original files.
Or there are approaches like Aeon and MicroOS,
where you actually create a new Btrfs snapshot
of the running system
and then perform the transaction inside that snapshot, so you don't
touch your running system. And then at the end, if something goes wrong, you just delete the snapshot,
nothing happened. So if the snapshot is there, it means that everything went well.
So you have the atomicity done in this way: either everything goes well and you have a snapshot, or you don't have a snapshot.
So that's where the atomicity comes from and they have the advantages.
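The snapshot-transaction approach can be sketched like this (the paths are invented; on openSUSE, tooling like transactional-update automates roughly this sequence):

```shell
# 1. Snapshot the running root, leaving it untouched:
sudo btrfs subvolume snapshot / /snapshots/txn
# 2. Run the whole update inside the snapshot:
sudo chroot /snapshots/txn zypper --non-interactive up
# 3a. On failure: delete the snapshot; the running system never changed.
# 3b. On success: boot into the snapshot next time,
#     e.g. by making it the default subvolume:
sudo btrfs subvolume set-default /snapshots/txn
```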
But to have true atomicity, you also need immutability, because you must not change the running system in ways
that are not atomic.
and that's where
the
confusion comes from
when we speak about immutable systems
and atomic systems and stuff like that
You touched on this before
but I think it's important to
hammer home, we are seeing a lot
of these sort of cloud-based concepts
coming to the desktop, and atomicity is kind of one of them as well, because
that's very much a very common database concept. Like, you want your
database calls to run in an atomic way, where you don't
want to update part of a user record and not update the entire record; you want that
entire thing to take place in a single action. We're seeing containers come to the desktop and...
I know, like, people have their opinion of, like...
I think a fair thing to have an opinion on is, like, the serverless space where people are getting these...
wild, like... I don't know if you saw this recently, there was someone...
I hate the term serverless, it's really stupid, but they
were running the free tier of whatever platform they're on, Netlify, and they had their
website on there, and it had, like, an MP4 file, and people started, like, smashing that file, and they
were hit with a hundred and four thousand dollar bill because they didn't actually have a cap on how much data could be used.
Luckily, after it got a bunch of attention,
they ended up dropping the bill.
But besides that part of the web,
that's a part that's certainly an issue.
But all of this really cool tech on the web,
I think does have a place on the desktop.
But because we've done things in the way that we've
always done them, I
get why there's some pushback
on the way it's
being done because it is
different and change is
weird, change is scary and
I don't
ever see a point where
immutable distros become the
only thing available. This is Linux.
People are going to do things that they want to do.
But I could see
a place where the more
mainstream offerings,
they are these...
They at least have the option
of having an atomic system. Ubuntu has
that. They're doing their thing.
Fedora obviously has their thing.
And I could see other distros doing this as well.
And as more distros do...
I think of the major ones,
just like the major backed ones,
only Ubuntu is still missing one.
I think they are doing something later in the year,
the Ubuntu core desktop.
But OpenSUSE has one.
Fedora has one.
Many, actually.
There is Vanilla OS.
I think there are also various Arch-based things.
Yes.
I think it's interesting with the Fedora case
because they actually do want to make the
atomic
version, like, kind of the main go-to
version. Like this is the version
you default to. Obviously if you want
to use Workstation in its normal state,
that's going to be there. But for
the average user, there
is this sort of
push towards getting that to a state where it actually is just
the thing that people want to go to.
I think it makes sense.
I mean, thinking about it, who are generally the people using laptops or desktops or whatever? There is the very basic, let's say,
Chromebook-like use, where you just browse, mail, videos, whatever. Do you really need to DNF stuff?
No. You can do everything with Flatpaks. So there it goes. Then there is the developer using, you know, Linux for development.
And from that point of view, I mean, if you are a developer and not using containers in 2024,
I would, you know, suggest some update course, probably.
Because, I mean, it's been the industry standard for the last 15
years, probably. So the developer doesn't have that many problems in using an atomic desktop.
And the problem arises only for people doing either really, really low-level types of development, I don't know,
you develop the kernel or systemd or something. Then, yeah, okay, you
know better than anyone else how to keep your system, probably. Or people that just want to have fun. And for those, the normal spin is the best bet, I think.
But how many use Linux just for fun?
I think it's a minority, a vocal minority.
Obviously, I have fun using Linux, but I also need it to work.
You know, so that's why I think it's a sensible default to have the atomic version be the default.
I don't know if we are there yet, but we are going in that direction pretty fast, I think.
Because if you think about how recent the mainstreaming of this type of distro is, and
think about how slow the adoption of stuff is, especially in the desktop world, where change is bad, right?
I think we are actually not going that slow at all. I said it before, but I think what is speeding
things up is having things like Distrobox and Flatpak, that make it so you don't have to
build your own image. Because the building-your-own-image part, to get applications installed,
that's like a non-starter for a lot of people.
Especially if you are that kind of person who is the,
I'm using the laptop to run Chrome.
Like, for those people,
they want to just be able to easily install applications and not really think about the technical stuff.
I think with better tooling, better documentation,
the people who do use Linux for fun will still have a place on those immutable systems.
Like, if you enjoy messing around, like, ricing your system, I can see you being the kind of person who enjoys making these weird Dockerfiles to have
your system built in a very specific way, so you can deploy that on each of your devices, and maybe have, like, a slightly different version for your laptop and a slightly different version
for your desktop, and all of this sort of stuff.
Yeah, I think it's... I mean, people are already
pretty much invested in dotfiles, for example, right? And it's not that different of a concept. It's just: add this file to your dotfiles.
So the Dockerfile is in your dotfiles.
And I mean, there are plenty of GitHub actions for free
that will build the image for you
and you're pretty much done.
So instead of having dotfiles just be the config of the system,
it's the system.
That's a pretty big shift on one side, but if you think about it, it's the natural extension of what one is already doing.
I mean, many people do the same with NixOS, right? You have this file that pretty much
dictates how your system is. That's the same concept, but in, let's say, a more popular and common technology.
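The "your dotfiles become the system" idea can be made concrete. Below is a hypothetical sketch of such a file, using Fedora's OSTree-native container workflow; the base image tag and the package picks are illustrative assumptions, not something from the conversation:

```dockerfile
# Hypothetical Containerfile: the whole system, described like a dotfile.
# Base image and packages are placeholders; adjust for your own machine.
FROM quay.io/fedora/fedora-silverblue:40

# Layer on what this particular device needs, then commit the layer.
RUN rpm-ostree install distrobox fish && \
    ostree container commit
```

A laptop variant and a desktop variant would just be two such files sharing a common base, which is exactly the per-device split he describes.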
Yeah, the NixOS people... like, for years now they've been like, when are you going to do a NixOS video?
People talk about how...
Go on.
I was going to say people talk about how Arch users
like to talk about how they run
Arch all the time but
you've never spoken to a NixOS user
if you think that's the case
they are very very
very happy to tell you about how
great NixOS is
I mean we are all in the same boat
like if
outside of the Linux world they say the same about
the Linux world
it's relative
but technically
I think it's a good solution
I tried NixOS
in the past
I think it was last year, actually.
And I didn't necessarily find it difficult.
It was pretty much easy to just install
and then take the configuration.nix
that is populated by default, and then start
from there. The problem is that not having a standard filesystem layout brings a lot of
quirks that people then need to account for. It's the same thing: change is bad, you know, and scary. So it's just a different set of
things that one has to account for. Sometimes that's worth it, and sometimes it's not.
What I like about the current situation with atomic desktops is
that there is change involved, but not a change in skills. It's just a change in
mindset, right? While with NixOS, it's also a change in skill, because you need to
learn how Nix works, and it's not a given that everyone is able to program
or properly search what to do and stuff like that.
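For comparison, the configuration.nix mentioned above is written in the Nix language itself, which is the extra skill being discussed. A minimal hypothetical fragment, where the hostname, username, and packages are all placeholders:

```nix
# Hypothetical /etc/nixos/configuration.nix: the system as one declaration.
{ config, pkgs, ... }:
{
  networking.hostName = "laptop";            # per-machine tweak
  users.users.alice.isNormalUser = true;     # placeholder user
  environment.systemPackages = with pkgs; [ git distrobox ];
  system.stateVersion = "23.11";             # pins upgrade semantics
}
```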
It's like the difference between using Vim and Neovim,
where with Vim, the vimrc is pretty much a config file.
And with Neovim in Lua, I need to learn Lua,
which is a good skill.
It does amazing things.
But what if I just want an editor to start programming, and I don't want to program in Lua?
Right?
But it's pretty much the same thing.
It's very good, but very involved.
You need a lot of effort.
On the other side, I feel like the regular distros going atomic is a little bit less effort on the user side.
There's a lot of effort on the distribution development side.
Oh, sure.
So I want to talk about some of the cool stuff that's just been done with DistroBox.
Because you've got a bunch of stuff highlighted here.
Like people running their desktop environment inside a Distrobox like that's just
like
It was an experiment. Many people actually took it pretty seriously.
I mean...
It works.
There is a little difference, though, which is that you actually live inside the container,
because you log in inside the container.
Then you don't see the host stuff anymore,
because, you know, the regular applications are not visible,
stuff like that.
I think it's more useful for testing stuff or, you know, extreme cases.
Okay, but that's why I have the disclaimer that it's a very
experimental thing. Because then people will report bugs about very crazy stuff, which, to be
honest, as the maintainer of the project, I don't necessarily care about. Like, it's not really in the scope
of what Distrobox is or does. I'm happy people experiment with stuff, because that will
highlight problems that were just dormant somewhere, right? But yeah, at one point,
you need to draw a line
and say, okay, this is in scope,
this is out of scope, or
it will become unmanageable
pretty quickly.
Right, right. Because there's all manner of things
you could do with it. I saw this other one where it's like
Japanese input on Clear Linux
inside a distro box.
Like,
you can do a lot of things with this and you can
It's just, like, there comes a point, right, where it's just: what is the goal of the project, right?
Like, are you trying to support applications, like normal applications, that most people are going to
run? But there's going to be these weird
applications that aren't normal. Like, you know, a text editor: it doesn't matter what
text editor it is, most of them are going to run in a fairly similar way. But there
are applications that have very low-level access, there are applications that
run in really bizarre ways and need access to weird things, and you just have to, at some point, say,
well,
okay, it can probably work, but is this something that I want to put my effort into? I'm sure if someone, like,
had specific patches to address some weird case like this, like, you'd probably accept them. But,
you know, at some point you have to decide, like, how much effort do I want to put into these things that maybe, like,
one person out there is using.
Yeah, I mean, as long as they do the patch, I'm happy. Like,
if I don't have to... I mean, as long as it's not like a refactor
of, I don't know, half of the project
just to support one use case.
If it's just, I don't know,
a little patch where
something was missing
or something was
made optional, you're welcome.
That's not the problem.
That's good.
The problem arises when people are trying to really use it,
let's say, not in a wrong way, because I don't think...
I mean, when you have a tool,
you can still use a screwdriver as a hammer.
You're free to do that.
But it's your problem if you use it in that way.
Don't come upstream and say: hi, I used to use this as a hammer.
Now we need to use it as, I don't know, a drill. It's a bit like... I don't know if you know the XKCD about
the overheating CPU key? Oh yes, the "part of the workflow" one.
so yeah there is a point where we are in that situation, right?
And you have to say no.
You have to decline the thing gently, because maybe it's very, very unique and very, very
involved.
So that's not worth it. For anyone who hasn't seen that XKCD,
so it's about fixing a bug
where if you hold down the spacebar,
the CPU overheats.
And a user writes,
this update broke my workflow.
My control key is hard to reach,
so I hold spacebar instead,
and I configured Emacs to interpret
a rapid temperature
rise as Ctrl. The admin writes: that's horrifying. The user writes: look, my setup works for me, just
add an option to re-enable spacebar heating.
Yeah, no, I think it's a good thing.
It sort of relates to what Linus has said in the past many times, where if a bug is being used by users,
it's no longer a bug. It's now a feature.
And that can cause issues.
It depends, like, how seriously you take that, right? Because if it's, like, a serious bug, people might have workarounds
specifically designed around it, and maybe that is a bug you want to fix. But no matter what sort
of change you make, you're likely going to run into some sort of breaking change for a user, if you
have enough people with eyes on the application.
Actually, it depends on the
software we are talking about. Obviously, something as important as the kernel... Absolutely, yeah. You don't want it to break workflows. Something like Distrobox, in the end...
How many people really care about it, right?
Like, I introduced a breaking bug fix in the 1.7
because we changed the way you enter the Distrobox, actually.
So how the script worked before to
export the apps and the binaries, now it's not compatible anymore, so you have
to re-export them with the new version. I think it's an acceptable thing, because
it improved dramatically how entering the Distrobox now works,
how it handles the weird things like character escaping
and all that, really,
things that you don't think of on the spot;
you only think of them when people report the bug.
And I think it's okay
to have these types of breaking bug fixes,
as long as you're not really mission-critical.
So I agree with Linus,
but I mean, that's the Linux kernel,
so that's pretty much it.
Well, I think it also matters if you're properly documenting
what those changes are as well.
If you make a breaking change and then just don't mention it,
you're going to get people that are going to be like,
what do I do now?
What's happened to my application?
Why does it no longer start?
Or whatever reason they have.
That's why in the release notes I wrote this big warning about it.
And also the fix: I put there which command to run to just re-export everything, so that
it still works for you.
So that's, I think, nice etiquette.
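For context, exporting is done with distrobox-export from inside a container, so the re-export fix he describes amounts to something like the following sketch. The container and app names are placeholders, and you should check `distrobox-export --help` on your version for the exact flags:

```shell
# From inside the affected container: drop the stale wrapper, then export
# again so the host-side launcher is regenerated by the new Distrobox.
distrobox enter my-box
distrobox-export --app firefox --delete
distrobox-export --app firefox
```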
But yeah, as long as it's this kind of stuff, fixing bugs is, I think, more important for software than keeping the "don't break stuff" rule, I think.
well I think it also depends on like
what the issue is.
Because there's a difference between it being a bug
and the way something works, right?
Like, if it's actually a bug causing actual issues,
maybe I'm wording this terribly,
but if it's a bug causing actual issues,
at some point that is going to override
the sort of user desire for things to not change in their
workflow. But if it's something like... a good example that I've liked to bring up is: are you aware that
in GNU grep, fgrep and egrep are deprecated? Yeah. And they've been deprecated for about 16 years now.
Most people have no idea about this,
and it's a change they want to make in the project,
but there are so many legacy scripts out there
that are tied to using fgrep,
and if they remove fgrep,
it is going to break so many systems,
and that's one of those changes
where it's something they want to change
It's not fundamentally changing what the application does; it's just shifting the way it
works in a slightly different way. And especially, as you said, it depends on where you are
in that, like, level of importance in the system. Because if something like grep makes a change
like that, there are so many things that rely on grep working a specific way that it is going to
cause a lot, a lot, a lot of changes outside of grep to also need to be made. So in cases like
that, you sort of need to weigh, like, how much of an effect is this change really going to have?
And is this change something that really makes sense to actually do?
Or is it just...
I don't even know the reason why they want to change it in grep.
Like, I think it's...
Yeah, I think it's because it's not part of POSIX or something.
Technically, it never was.
Yeah, probably.
I think, in that case,
I mean, 16 years of deprecation is...
if people didn't notice in 16 years, maybe...
I don't know, maybe there is a communication problem.
They keep saying it on the mailing list
and no one outside of the GNU project
reads the GNU mailing list.
That's probably why.
I don't even... I don't know,
I never used egrep or fgrep.
I just use the POSIX thing. If I can stick to POSIX, I try to stick as much as possible to
POSIX. I don't know if they were printing a deprecation warning.
They were also printing a deprecation warning. So it's gone through multiple steps.
It's gone from just being deprecated
and mentioned in the documentation as deprecated,
to a bigger warning in the thing.
And now I think in the past couple of months,
they added a deprecation notice
when you run egrep and fgrep
that now says: this is
deprecated, stop using this. I feel like it's going to still be deprecated for another five years,
considering the state it's been in. Yeah, probably. At that point, it's okay to remove it, but...
Yeah, I think there are, I think, ways that one can, I don't know, work
on these types of deprecations.
Like, for example, making egrep and fgrep
optional packages that are just, you know,
wrappers on grep -E or -F, right?
So that you can remove them
and then still have the
backward compatibility on older systems
where you just want that thing
to work.
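Such a wrapper really is tiny; a sketch of the idea, shimming fgrep onto the POSIX-specified `grep -F`:

```shell
# A compatibility shim: an optional 'fgrep' package could ship just this,
# forwarding all arguments to POSIX grep -F (fixed-string matching).
fgrep() { grep -F "$@"; }

# -F treats the pattern literally, so 'a.b' matches only the literal line:
printf 'a.b\naxb\n' | fgrep 'a.b'
# prints: a.b
```

The same one-liner with `-E` instead of `-F` covers egrep, which is why shipping them as optional wrapper packages is a plausible exit ramp for the deprecation.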
But I think, like, backwards compatibility is important, unless it's really, I don't know how you say it, holding back the progress and creating a maintenance burden. Because one thing to remember is that maintenance is not easy. It's as difficult as just developing the stuff.
And the problem with bugs is that they are, like, all the possible permutations of all the features that you are offering.
So that's why, if you think about it,
like, the perception of GNOME and KDE:
many people say, hey, KDE is buggier than GNOME.
It's just that, having so many more features, it's more possible to encounter bugs. Yeah, and
it's much more difficult to test everything, actually, like all the permutations and stuff.
There obviously will be some corner case where, if you, I don't know, click on
three things in a different manner than usual, you will trigger a bug. So it's difficult to find stuff.
While, yes, if you have something a little bit more static and monolithic like GNOME, it's a
little bit easier to do quality assurance and say,
okay, this is what we are offering. Let's ensure everything works.
And obviously, I feel for the maintainers when the project starts growing
in features and feature flags, and then you have to test everything. It happened with Distrobox:
It started with, like, four flags; now it's a lot. So I think it's
important to have the freedom of deprecation. The problem is how you handle it.
For me personally, I never thought about it, to be honest. But hopefully the day will come,
because it means that people are using the software.
A great example of that kind of weird edge case that's just hard to test is... I mentioned before the
blind game dev I had on the other week. He talked about an issue he had in KDE where, if you have
multiple displays and they're vertically offset... he uses the
feature where you can zoom in, like, really, really far, and then move around by pushing with your mouse. If they are vertically
offset, you can't actually see the top of the screen that is higher, or, it's either
higher or lower, so it cuts off part of the screen when you try to push all the way to the top,
which obviously breaks menu bars and things like that. And that's just not something... because the
mode he uses as well
isn't the default mode to move around. Because there's one where it follows the mouse, and one
where you push the edge and then it moves. Following the mouse is the default on KDE; pushing
on the edge is another option. So it's a weird setup, where you have offset monitors and you're
using the non-default option. This is just a case where, really,
there was like one other person
that had reported it as an issue
and it's
one of those cases that obviously is just not going
to be tested, it's one of those cases where
the number of users that's going to
affect is very small
and nobody thought
to even try if that was
working like it should
Yeah,
that's the
other set of options, right?
It's...
that's why I like opinionated
projects actually
at least you have a stand
on what your software
should be and wants to
do.
And I know many people will, you know, get
upset when the developers don't accept their way of using their software.
And I think that's why drawing the line is so important
because you risk
making the software unmanageably big,
then you risk having so many bugs that
you burn out, and then you don't develop the project anymore.
It's like an avalanche, where
one small thing leads to many big things.
That's why it's so difficult to find an equilibrium in this environment, right?
It's difficult.
Yeah.
No, absolutely.
Software is hard, basically. Especially if you're
doing it in public. Yeah. Yeah, it's one thing if you just write a script for yourself.
Like, my roommate showed me a script that he has for his work. Firstly, it's Visual Basic
for Applications, so I already don't want to look at it, because it's an Excel macro.
So basically, what it does is it takes a bunch of images you dump into a form and then
formats them the way they should be formatted. And that script is written for that form and no other
form. There's, like, three lines of documentation. The guy who
wrote it wrote it five years ago and doesn't remember a single way it's supposed to work. He's
actually in the process of rewriting it, because it's easier to just rewrite it than modify it.
And he was trying to take that and then get it to fit some, like, other file, and it's just, like,
this is a disaster to look at. And when you're writing
something for yourself, or just something that you know is going to be used for a specific case, it's very
easy to just cut corners. Like, all of my personal scripts are written like that: they cut
corners, it's made for my system, I don't care. But when you're dealing with other people, other people are going to do things in a way that you never expected.
One of my favorite examples is with the Arch automatic installer, when they first made it.
Nobody had tested what happens if you put in incorrect input.
So it would ask you, like, a yes or no question: you put in Y, it goes forward; put in
no, it goes back. Sure, fair, whatever.
If you put in A or S or anything else,
it just crashes, because they didn't actually check for incorrect input.
And that's something where, the second you have other people using it,
you're going to notice that there are problems
that you didn't even consider
were possible problems.
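The fix for that class of bug is a few lines of defensive input handling. A hedged sketch in shell (the function name is made up), separating classification from the prompt loop so every unexpected answer gets rejected instead of crashing the installer:

```shell
# Classify one answer; a prompt loop would call this repeatedly until it
# stops returning "invalid", instead of assuming the input is y or n.
classify_answer() {
    case "$1" in
        [Yy]|[Yy][Ee][Ss]) echo yes ;;
        [Nn]|[Nn][Oo])     echo no ;;
        *)                 echo invalid ;;
    esac
}

classify_answer Y    # prints: yes
classify_answer no   # prints: no
classify_answer a    # prints: invalid
```

The point is the catch-all `*` branch: the bug described above is exactly what happens when that branch is missing.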
And that's just software related issues.
It doesn't even get into different like hardware that people might have, whether it's different
GPUs or CPUs, or they're using a hard drive or an SSD or different monitor resolutions,
like all of these things.
As soon as other people are touching your project, like you're going to realize there's
more problems than you ever could have noticed. Yeah, yeah. One example for me was the
NVIDIA support in Distrobox, because I don't actually own anything NVIDIA, and I don't plan on ever owning anything NVIDIA. So that was
pretty much
relying on
others to do that
so I thank
Mirko a lot, who helped me,
because he has a lot of NVIDIA hardware
so I just
SSH'd into his laptop
to
test stuff. He trusted me, thank you. Yeah, and maintaining it is pretty
much a blind thing for me. Like, as long as it's, I don't know, finding the libraries and mounting
them inside the container, I can just install the drivers inside the VM and just, you know, pattern-match
the files and stuff like that, it's okay. But actually checking if it's working, I cannot.
So it's pretty much a blind test. So that's a little bit scary for me, but I see how
many people really need that. So it's a necessary evil, I don't know. But that's why
it's not on by default: you have to opt in to the NVIDIA integration, because I cannot guarantee it.
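The opt-in he mentions is a creation-time flag. A usage sketch, where the container name and image are placeholders (see `distrobox create --help` for your version):

```shell
# NVIDIA integration stays off unless you ask for it explicitly:
distrobox create --name nv-box --image ubuntu:22.04 --nvidia
distrobox enter nv-box
```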
it right right and um yeah many many like these unintended things are, like, the macOS support, people are, they opened a draft pull request for the macOS good, but then I have to maintain it.
And I don't have a Mac.
How can I maintain that thing?
Right.
It's difficult.
It's difficult.
Yeah, that's a rough one.
I didn't even know that there was a pull request here for it.
There's a lot of back and forth here.
Yeah, okay.
Yeah, yeah. I mean, I'm pretty much open to... I'm happy if people use my software, and I still can't believe that people
are using my software, so that's a good thing. On the other hand, it's a responsibility.
You can't take it lightly,
just, oh, okay,
with a couple of changes it works
on Mac, and then only half the thing
works on Mac.
Right.
I cannot
ship that, because...
or at least not like
this. Because people
then see a guide on how
Distrobox works, then they run it on the Mac and half of the things don't work,
and then they will just open bugs, because they feel like it's a bug. And
maybe some things cannot work at all, because on Mac, a Podman or a Docker
container is actually running inside a VM. There is all this
abstraction and separation from the host, which is even harder to circumvent than the
plain Docker and Podman thing. That sounds like a nightmare to work with. Wait, so it just runs
in a virtual machine? Yeah, they don't have many choices.
They don't have a Linux kernel, so they need to run one in a VM.
Oh, yeah, yeah.
But then the problem is, how do I integrate X11 on macOS,
which doesn't have X11?
There were X11 servers in the past.
Yeah, yeah, yes, there is XQuartz, but I mean, it's pretty deprecated stuff. Like, it's like fgrep for them: it's
very old stuff. And then if, I don't know, the graphics stuff doesn't work, then what? Like, already half of the things are not working, right?
Right, right.
GPU acceleration will never work because it's inside a VM.
And so it's useful only for CLI stuff, which is still useful,
but it's like a third of the thing that the software is expected to do.
Right, it has value as like a development environment, but yeah, at least...
There are dev containers for that.
That's true. Yeah.
Yeah. Okay. It's a lot less useful then, isn't it? Yeah.
Yeah. I mean, the point is integration. If you can't integrate, what am I doing?
Yeah, that's... Well, hopefully that gets dealt with at some point.
But it sounds like there's a lot of issues there that just don't exist on the Linux side.
Actually, the funny thing is that WSL works.
Okay.
Because they have
the WSL
virtual machine thing, which
actually has all the plumbing for
Wayland,
X11, and all that stuff,
so that
Distrobox actually works on that,
which is strange.
did it just work without any extra configuration?
Or did you have to write something
to be sure?
If I remember correctly, there was just one
workaround to do,
which is making the
root mount
rshared,
because by default it wasn't.
And
that was pretty much it. I think in the newer
distro versions it's actually done by default. It was also working on Crostini for the same
reason: they already did the plumbing for the audio, for the GPU, for whatever.
So this was cool to leverage that, right?
And it was pretty much a thing about, yeah,
changing a couple of things, of mounts or flags.
It wasn't that big of a deal.
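For the curious, the workaround he recalls is a standard mount-propagation fix. Assuming it is the usual one-liner (run inside the WSL distro, and it needs root):

```shell
# Make mounts under / propagate into child mount namespaces, which
# container tooling relies on for shared mounts to appear.
sudo mount --make-rshared /
```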
With macOS, we don't have that.
We have plain Docker, plain Podman,
where the isolation is pretty much at its maximum, because you're running inside an isolated VM, not an integrated VM. So it's
almost like I have to do twice the job of integrating stuff back. So I don't think it's as easy.
I will probably need a Mac for that.
If I ever get one, yeah, I can work on that.
So I think it's fair to say you never expected Distrobox
to get the attention that it initially got.
Nope, not at all.
I mean, I can see how people find it useful, but on the other hand,
it's like a wrapper over Podman and Docker, which have existed for 15 years. It's not something that you
expect to have good adoption. You expect it to be useful for corner
cases, not the other way
around, right? So I'm
obviously very happy
about it, a bit scared about it,
but, you know...
well
I think that's as good
a message to end it on we're closing
in on the two hour mark and I know you said you have to
go to work after this, so
let's wrap this
up then.
If somebody wants to get involved in
DistroBox, or support the project,
or whatever, where can they go for that?
GitHub.
github.com
slash 89luca89
slash distrobox.
Or lilipod. Or whatever.
You don't have, like...
I don't know,
GitHub Sponsors? Patreon?
Anything else like that?
Nope.
I'm just happy if
people contribute.
I see there is a Matrix room and a Telegram
here as well.
Yes, if they want to chat, there is a Matrix room and a Telegram here as well. Yes, if they want to chat, there is
the Matrix chat and
the Telegram chat.
Yeah, is there anything else you want
to shout out or anything
you want to say?
I'm very grateful
for having the possibility
of talking with you.
I'm happy we finally got to do this after so much time.
Yeah, I mean...
Yeah, I'm more than happy to do another one in the future
if you want to.
I don't know what we'll be talking about,
but I'm more than happy to do another one.
Whatever, yeah.
I mean, probably in a bit,
maybe we can speak about news of the atomic world.
For sure. Well, if you have nothing else to mention, I'll do my outro and then we can end it off.
Awesome.
Perfect.
So if you like this and you want to see more of my stuff, my main channel is Brody Robertson.
I do Linux videos there six days a week.
I've got my gaming channel, Brodie on Games. Right now, I'm playing through Nioh, The World Ends With You,
and probably still God of War 2. If you're listening to the audio version of this,
you can find the video version on YouTube at Tech Over Tea. If you want to hear the audio,
there is an RSS feed, there is stuff on Spotify. Search Tech Over Tea on your favorite platform and you
will find it. I'll give you
the final word. What do you want to say? How do you want to end
this off?
Thank you, everyone.
And bye.
See you guys later.