LINUX Unplugged - 537: This Makes Us Unemployable
Episode Date: November 19, 2023
Can we save an old Arch install? We'll attempt a live rescue, then get into our tips for keeping your old Linux install running great. ...
Transcript
I feel like we're doing the Linux community a real solid this week.
We're going to put a myth to bed,
or we're going to have to own that Arch isn't suitable for a server environment.
I was doing a little looking online this morning,
seeing, you know, what are the current recommendations?
And you can go all the way back to 2010.
It's the same line we've been getting for years.
Don't use Arch as a server operating system.
It's not clear when applications might break.
More often than not, you have to keep up with what's going on in the wiki.
It's just not as easy as Debian or CentOS.
Well, we have an ancient Arch install that we haven't touched for a year.
And we're going to see today on the show if we can upgrade it,
if we can fix it, and really see if you can
neglect an Arch system, just kind of come all the way up to current and see if the system works.
I don't think we're going to get any admin contracts out of this, though.
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen.
Yep, it's been more than a year or so since we've updated our quote-unquote Arch server.
So we're going to attempt to do it live and then rescue it in whichever ways we have to. We'll truly try to put that myth to
the ultimate test this week. And while we're doing that, we figured we'd have a little sidebar and
attack one of free software's most sacred cows. So buckle up for that. That's coming up a little
bit in the show. And then we'll round it out with some tips and tricks for keeping an old Linux
install smooth. Not necessarily just Arch, but things you can do when you first set it up and
while you're running it to keep
an install running as long as possible.
And then we'll finish it up with our picks, the boost,
the feedback, and a lot more. So let's
say good morning to our friends over at Tailscale.
That's a Mesh VPN protected by
WireGuard. Oh, it
is. It's going to change your networking game.
You won't need inbound ports on your
firewall anymore. If you're an enterprise,
you can throw out complicated or sucky VPN solutions, and you can embrace the Tailscale lifestyle.
Get it for free for up to 100 devices and support the show at tailscale.com slash Linux Unplugged.
And, of course, a big time appropriate greetings to our virtual lug.
Hello, Mumble Room.
Hello.
It's a small showing this week.
We've got a couple up there in quiet listening.
We've got a couple on air.
We're recording on a Saturday.
I'm amazed we have anyone at all, so thank you.
And I'm amazed we've got people in the chat room, too.
That is actually really impressive.
It gets a little lonely doing the show by ourselves.
I will also remind everybody that Tuxies Voting is going on just a little bit longer,
Tuxies.party, to get your vote in.
And we have a feedback form
if we've missed something there,
so we can also collect that
a little bit better this year.
And show mascot, the Golden Dragon,
now has his stickers on an Etsy store,
which will ship internationally, two bucks.
If you go vote and you want to get yourself
a sticker to prove it,
limited time sticker,
we'll have a link to that in the show notes.
Also, while we're kind of planning
about the future here a little bit,
in April, Texas Linux Fest is coming up and we're planning to try to make it. I would love to find a spot to park the RV for a couple of weeks. If anybody in the Austin area has, like, some room in their driveway and wouldn't mind me mooch-docking for a bit, living out of there, doing the shows and whatnot, let me know: chris at jupiterbroadcasting.com, or contact me on Matrix or whatever, or boost in.
I'd like to ask.
I don't often ask this question, but so many times when I then set off on a trip, I get
emails saying, hey, man, I got a spot you could have parked at.
So I'm going to ask this time.
If you're in the Austin area, let me know.
That could be cool.
I really want to get down there to see the eclipse.
I want you guys to get down there too.
I'm in.
Okay, I'm going to remember that.
Yeah, I mean, I'll commit to it right now.
Come on, Brent.
It's an eclipse.
You know me.
I'm in, but I have no idea how I'm going to get there or when, but I'll be there.
Yeah.
I know.
I'm already thinking like how the hell am I going to afford the gas even, but, you know,
it's going to be worth it.
It's a once in a lifetime thing.
And then to like go down there for an eclipse, hang out in beautiful Austin, Texas, and then go to a great Linux event and see friends again, that just sounds like a great way to spend a little bit of our life.
Just sounds like a great – I don't think we're going to regret that one little bit.
So that's coming up.
Go to TexasLinuxFest.org, I think it is, for the deets on when.
I hear the food is a thing, too.
All right, Wes Payne.
So, it's been a ridiculous amount of time.
Oh, wait, you wanted me to update this thing?
I thought you were doing it.
We're not quite actually at the year mark.
If I'm being honest, 359 days since we've done a server update.
Pretty good, pretty good.
Arch is famous for being fine if you just go a long
time with never updating it. Totally fine. That's what they recommend,
I think. We thought we'd transition off the
server completely, but then
storage became tight, and we never
actually took it out of production,
and we just left it thinking, oh, any
day now. And then we thought, okay, once we get
Proxmox going, we'll retire this thing, and then we
decided not to do Proxmox.
And now we're thinking we're going to go with Nix, but that's kind of a re-engineering of things. So now it's
time to revisit this box and keep it healthy. For the sake of time, we have pre-downloaded the
packages. You want to give us the deets there, Wes? We have 538 packages to install or update.
That's a total download size of just about one gigabyte,
a total installed size of nearly four gigabytes after, you know, decompressing,
but a net upgrade size of five megs or so.
Negative.
Yeah, negative five megs.
Excuse me.
I mean, we're slimming down here, even after all this time.
After a year of not installing any updates and downloading a gig, which will expand to
four gigs, the net upgrade size is negative five megabytes.
Now, I am already seeing, I started my -Syu just to get ready.
Yeah.
I'm already seeing some questions, replace this package with extra, you know.
Yeah.
Should be fine, but those are the kinds of things that you look out for.
All right.
Well, let's kick it off, Wes Payne.
And we're away.
We'll let that go for a bit because I know that's going to take a minute, and we'll check back in.
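For anyone following along at home, a rough sketch of the sequence being run here, assuming a pacman-based install that's been offline for a long time; refreshing the keyring first is the usual first-aid for stale signature errors:

```sh
# Refresh the keyring first so year-old package signatures don't block the upgrade
sudo pacman -Sy archlinux-keyring

# Then the full sync and upgrade; watch for "replace X with extra/Y?" prompts
sudo pacman -Syu
```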
Now, Wes, have you been keeping up with the newsletter, the Arch newsletter, so you know exactly all the manual changes you need to be doing here?
I don't know about keeping up over the year. Yes, I did take a look last night, and there are a few items in there that we may need to resolve or run into issues with, but we'll find out what we've actually got installed on this.
A few? It looks like you've identified like half a dozen.
Well, some of these are just, like, noteworthy. They did a Git migration during this time, which I think maybe we might just avoid the fallout from, because we didn't have to update or do anything during the migration.
Let's see.
There's also some stuff you might have to do if you had Ansible on the system or Budgie on the system.
But I don't think that is we're going to run into those.
Right.
I don't have Budgie on the server.
It's a pretty lean install overall.
There are also changes to the default password hashing algorithm and umask settings.
Now, they say there that it shouldn't require any manual intervention.
They're not calling it out as such, but that feels like something that could go wrong.
Yeah, yeah.
That seems like something I'm a little concerned about.
We'll just see as it gets there.
While we wait, I want to talk about one of our sacred cows in the free software community
and you have undoubtedly heard an iteration or a version of this, or you've probably said it yourself. It can be summed up as Linus's Law, and the law, which was really formulated by ESR, is that given enough eyeballs, all bugs are shallow.
Right? You've heard various versions of this. Yeah, that saying's been around for a long time.
I think it's a mantra in the Linux community.
One of the advantages of open source software is that more eyes are on the code
so you can catch the bugs quicker, right?
Sort of a default answer sometimes in security questions.
You're like, yeah, well, it's open source.
But then we see stories that I think
have fundamentally proven
that Linus' law is unfortunately broken.
You may have seen a story this week
that was pretty attention-grabbing,
and the headline was,
for a decade, the Linux kernel
hasn't scaled beyond eight CPU cores. And everybody, this went all
over Hacker News, and it was based on some research that was published that shows that
essentially a sysctl setting, so the scaling of how the scheduler assigns CPU cores, kind of begins to max out at its efficiency around eight cores, and it starts taking additional overhead to kind of assign out tasks beyond that. Is that a fair enough layman summary, do you think?
I don't know, this one's kind of hard to interpret, honestly. I think that's roughly the idea. Part of it is that scheduling is kind of a black box where there's a lot of heuristics at play. So it's hard to figure out, without actually knowing a lot about what's going on, what the implications of this setting are and what environments would lead to performance impacts.
I don't know if that's all clear yet, at least not to the casual observer.
I think there's some tests that show if you put it in the right specific kind of situation, you'll notice a little bit of a performance loss, somewhere in like the 14% range is what I was reading.
But here's a TLDR.
So it isn't actually really an issue so much in the kernel itself.
It's, well, it is, but it's in the scheduler.
And it doesn't mean that the scheduler can't use more than eight cores.
I'm sure many of you out there have used Linux on systems with more than eight cores.
The scheduler controls how to allocate tasks to the available cores and how to schedule
particular workloads efficiently on available hardware.
That's a really complex problem,
especially when you consider the data
some of those tasks have to share.
And some of these settings are hard-coded timings
to control the behavior of how the scheduler works.
And some of it varies with the number of cores,
some of it doesn't.
And it seems like some of that math
has been capped at eight cores.
So, in some situations, the scheduler beyond eight cores is not doing things very efficiently.
And it seems to have persisted for about 15 years.
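As a rough illustration only (an assumed formula, not the actual kernel source): the reported behavior is that the scheduler's tunable scaling factor grows with the log of the CPU count but stops growing past eight cores, roughly like this:

```sh
#!/bin/sh
# Illustrative only: scaling factor ~ 1 + log2(min(ncpus, 8))
ncpus=$(nproc)
capped=$(( ncpus < 8 ? ncpus : 8 ))

# integer log2 of the capped core count
factor=0
n=$capped
while [ "$n" -gt 1 ]; do
  factor=$(( factor + 1 ))
  n=$(( n / 2 ))
done

echo "online CPUs: $ncpus, capped input: $capped, scaling factor: $(( 1 + factor ))"
```

So a 64-core box ends up with the same scaling factor as an 8-core one, which is the surprising part.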
I think that's what's catching people's attention here, right?
As you're looking at like, oh, well, this is something that probably isn't what we wanted or we're questioning now.
How long has it been like that?
Why hasn't anyone noticed? Right. And I think we can think of things like Heartbleed and Shellshock and other issues that have
been persistent in free software for a decade before anybody noticed it.
And so I feel like this, just on its face, kind of calls Linus's Law into question.
Brent, have I built a case?
Yeah, I think you have,
especially, you know, this is not the only example that you bring up. But it does get me thinking about what are all the issues that we haven't found yet that are also a decade old? Because
these are just the ones we know about, right? And surely there's a huge number of them that matter
that are also in there in all sorts of projects, too, not just the Linux kernel.
So that gets me kind of worried.
Well, I feel compelled to say welcome to software, my friend.
I think this is an even bigger problem potentially in commercial closed source software.
There was a little bit of an empirical study done, if you want to call it that, that tested this theory.
And what they did is they compared popular and unpopular projects from the same organization
on GitHub. So, like, a popular project in the top 5% has something like 7,000 or more stars. They had a couple of qualifiers to determine what a popular project was and an unpopular project. And they say bug identification was measured using a corrective commit probability: the ratio of commits determined to be related to fixing bugs.
The analysis showed that popular projects had a higher ratio of bug fixes.
Google's popular projects, for example, had a 27% higher bug fix rate than Google's less popular projects.
So what it kind of demonstrates is more eyes actually does discover and detect more bugs.
But I think you could also just say more users, more popularity.
Well, my argument there is that if there are more people, you know, working in a project, often it's because there are new features
being added to it all of the time.
And those have bugs because they're brand new features.
And so those are being fixed,
but it doesn't necessarily reflect old bugs
that haven't been found in code
that no one's looked at for quite a while
because it's just sort of worked as expected.
So I don't know if these are more correlations.
I don't think they're causations.
So I would bring this into question.
I wonder if maybe there's an element, too,
of, like, unpacking the law a bit in terms of,
you know, we're talking about eyes,
but are we talking about users?
Are we talking about developers?
Are we talking about auditors?
Like, there's bugs you find that, you know, like, you read through the code and you're like, oh, that's a little suspicious.
And then you test it and you find some edge case.
And those are important, but, you know, and sometimes they have dramatic security implications.
But oftentimes they're just sort of like, well, this tiny little corner case over here, you know, we need to fix it.
And there's probably a reason that no one has really run into it or cared enough to like raise it up the flagpole.
So there's sort of a question of, you know, do you end up also then finding a bunch of bugs that matter less?
I want to say it's all of the above. It's more users, more developers, more auditors, more automation. That all is going to find more bugs.
I think, though, that we can't just sit here and say,
because it's free software, because it's open source,
it's more secure because more people can check the code.
I think that's a real shallow take on the situation.
And I pulled up a dozen references
from the last five years of free software projects
that had a CVE that was related to a bug
that had existed in the project
for five years or longer. I put like a little search parameter together and I found like a
dozen examples. It happens over and over again. So I think we kind of have to stop this crap that,
well, Linux is more secure just because it's open source. It's only more secure when there's people
that are actually interested in looking at it. And I don't think it's a bad thing. I don't think it's a weakness of free software, because all software is like this. It's just the reality. We should just be more honest that all software is like this.
Zach Bacon in the chat room here has a good point. He says, I think it boils down to some people working on what they want as opposed to necessarily working on what's needed.
So it gets me thinking, like, contributors to projects have specific interests, and they're going to work on that. But how do you spend the attention on the things that need an audit, or those bugs that are really old that nobody is really excited to work on?
All right, Wes Payne, how is our old Arch server doing?
Uh-oh, that's not a good look.
No, it seems to be so far going okay.
We've got the new kernel installed.
Okay.
We're not done updating packages yet.
I bet.
It's still in progress, but we're on 450 out of 535.
Oh.
At some point, we're going to have to build ZFS, right?
Yeah, and I imagine that's where we're going to run into some problems
and have to do some updates.
Yeah.
Because I think we were on ZFS 2.1.5?
Yeah, that's going to be the tricky thing.
It's going to be a huge jump in ZFS and the kernel version and the kernel module.
Yeah, it looks like there's a 2.2 now over in our buddy the AUR.
Now, it looks like we're running the LTS kernel on this box,
so that's a helper.
We're not worried about some super new kernel
and some incompatibility that we have to wait for.
I'm glad you brought that up.
That was one of our kind of go-to solutions to using Arch as a server
is switching over to the LTS kernel.
That may save us.
That may save us right here.
Because then it is, it's still a big jump in ZFS version, though.
Okay.
I bet it'll be fine.
But we'll stay tuned.
I'm going to, I'll check back in when we're done with everything,
all the updates here.
Linode.com slash unplugged.
Head on over there to get $100 in 60-day credit. You can really try out the service, and it's a great way to support the show. And discover the great news: Linode's now part of Akamai. That Akamai? Yep, the Akamai. But they're keeping all the tools that we use to build and deploy in the cloud, like their nice Cloud Manager with its great UI, their API that's well documented, and the command line tool, which is just like a Swiss Army knife of awesomeness. That's still there, but now they're combining it with Akamai's power and reach and network. They are like the best network out there, and they're investing more in Linode services while still giving us that good old-fashioned, reliable, affordable, and scalable solution that we've all fallen in love with. There's a lot of fly-by-night providers out there,
but the combination of Linode's years of expertise combined with Akamai's pristine global network,
you're not going to beat this at these prices. And as part of Akamai's global network of offerings,
they're expanding the data centers. So that means more locations. We just recently took advantage of one,
which means you can be closer
to your friends, your clients,
whatever it might be.
You can just have a wider footprint.
So don't wait anymore.
It's a great time to support the show
and it's a great time to experience
the power of Linode, now Akamai.
You go to linode.com slash unplugged
to get that $100 and learn more about Linode and Akamai
getting together to scale everything bigger and keep the prices great.
So go see what we've been raving about and support the show and get that $100 at linode.com slash unplugged.
That's linode.com slash unplugged.
So, gentlemen, I have many a box that I've run for multiple years on a variety of systems.
And I was reflecting on this just last night because I'm having issues with my Tumbleweed install, my Dev One, that it eventually just blows up.
But I think that might be a me problem.
But I was thinking, I want to hear some advice on how to get these systems running for as long as possible.
There's got to be some tricks, some knowledge that's been passed down from the wiser.
Chris, you got anything to give us?
Wow.
Are you saying I'm old?
No, no. I'm saying experienced and wise and all of this.
You were also saying last night when we were chatting about this is that it's usually about the one year mark for you.
It's like between a year, a year and a half where I start to at least notice that things are,
let's call it crufty. God, that sounds like Windows. Well, you know, that's one of the reasons I wanted to move to Linux quite a while ago was because I had experienced that on Windows
and I had experienced that on Mac OS. And I thought, surely there's a better way in Linux land, right?
But my experience so far is that, yeah, year and a half.
That's where it all starts.
Stay a while and listen.
Let me tell you a story.
I actually have two machines in front of me that are both installs from 2018.
I've got a 2018 install on this laptop still, actually.
Look at us.
And they're all Ubuntu, aren't they?
Yeah.
So there you go.
I think maybe it starts with how you use it in the beginning.
You sort of set the tone.
And I don't really ever install a lot of software on any of these machines
from the package manager anymore. I haven't for years. Like, Reaper is a download from the Reaper website. What I have added to the system has either been through Flatpak or Snaps, and I don't really install much from the base package manager, and I haven't for quite a long time.
As you can see from our picks, a ton of the stuff we end up picking just happens to be like a Go or Rust tool that you could just download
and stick in a bin folder.
Obviously, that has downsides in terms of auto-updates and the rest,
but it does mean you're kind of easily sideloading onto your system.
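That sideloading pattern is roughly this, with a hypothetical project URL and binary name:

```sh
# Drop a prebuilt static binary into ~/.local/bin (assumes that's on your $PATH)
mkdir -p ~/.local/bin
curl -L https://example.com/tool-x86_64-linux.tar.gz | tar xzf - -C ~/.local/bin tool
chmod +x ~/.local/bin/tool
tool --version
```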
Now, Tiny and I have the exact opposite opinion
when it comes to updates.
So, Tiny, you and I were chatting before the show and you said you think one of the key
things to keeping an older install running smooth is automated updates.
Yep.
So I think having to deal with updates regularly and in small chunks has a higher likelihood
of success than just ignoring updates altogether.
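On a Debian-family box, one common way to get that small-chunks behavior is the unattended-upgrades package; a minimal sketch:

```sh
# Enable automatic (security) updates on Debian/Ubuntu
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Preview what it would do without changing anything
sudo unattended-upgrade --dry-run --debug
```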
I wonder if it's also because the updates are like,
there's a recency to it. So updates that are, I don't know, in the last week or two,
probably have a higher chance of success than, I don't know, if your rolling release hasn't
updated in the last month or 365 days. Correct. And I also think distro choice is a huge part of
it too. Um, I'm still of the camp that Arch is not a great candidate for this.
You've demonstrated that it can be done, but you have to be very particular about what you put on
the base OS and then move all of the applications into confined environments like Docker or OCI or
whatever and keep it away from the base OS. Yeah, I would agree with that.
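A minimal sketch of that confinement pattern, using Podman as the OCI runtime; the image and paths are just examples:

```sh
# The app and all of its dependencies live in the container, not the base OS
podman run -d --name syncthing \
  -p 8384:8384 \
  -v /srv/syncthing:/var/syncthing \
  docker.io/syncthing/syncthing:latest
```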
I mean, we'll see.
We haven't finished the update yet.
We may have disproven it.
I think, yeah, if you choose something like Fedora,
that has a really smooth upgrade process.
They've really refined that over the years.
And so if you can fall into a cadence where Fedora 39 comes out,
you wait a few months, three, four months, then you upgrade from 38.
If you can fall into a cadence like that, that can work pretty well.
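For reference, the Fedora jump itself is only a few commands these days (release number illustrative):

```sh
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=39
sudo dnf system-upgrade reboot
```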
I have gone the route of LTSs when I want a system I know is going to be around a long time.
And I go the opposite side that Tiny does, and I wait about two or three months,
and then I do updates with, you know, I just plan for something to break
and then try to just have a little time on the other side of that to fix it.
You got your maintenance window set up.
Yeah.
It's a little more old school approach, but I feel like the package manager kind of dictates the behavior.
You know what I mean?
Like, I don't do that on other systems, but for my apt systems, these two apt systems.
Ah, yeah, right. I mean, on some systems you might have the package manager integrated with snapshots, which I forgot we have on this server too. And yet apt is sort of the package manager I have some of the least confidence in, in terms of getting into weird situations on an older install, especially one that has maybe seen a lot of randomly downloaded debs or PPAs.
Yeah, the more you do that, the more little edge cases and issues you'll run into. And did you really get all that stuff uninstalled? Which version is it still pulling? Like, this package is from this PPA even though you're not using the other part of it anymore.
Yeah. I think you could always leverage something like Clonezilla too. I think that's worth considering.
I'm going to throw a couple curveballs at you. What if you did an approach, say you're using CentOS Stream, you're using RHEL, you're using Ubuntu LTS, you're using a Debian release, it's going to be around for a while. What if you just did the MVP install, the minimum, or the
MVD, the minimum viable distro install,
and then built everything else up using the
Nix Package Manager? And then you just maintained
basically all of the stuff you want installed from
random server thing to whatever text editor you might want.
You just did all of that with the Nix package manager.
Could that maybe be a kind of bulletproof?
Because you'd still have to run the package manager of the system to do kernel updates and package updates of the base OS.
But if it was really a minimal base OS install,
there's a pretty low chance that's going to break.
And then all the other stuff you install from Nix.
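A sketch of that hybrid idea, assuming the upstream Nix installer on top of a minimal LTS base; the packages shown are just examples:

```sh
# Install the Nix package manager alongside the distro's own (multi-user mode)
sh <(curl -L https://nixos.org/nix/install) --daemon

# The base OS keeps the kernel and core system; user-facing tools come from nixpkgs
nix-env -iA nixpkgs.ripgrep nixpkgs.htop
```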
Yeah, I wonder if you might run into, like,
depending on what layer you do,
like, I imagine that would work well for a lot of stuff,
but I wonder if there's some, like, do you end up with some odd middle ground
where, like, Nix OS has it more integrated
in a way that you would prefer,
or like you need some, you know, hook into the lower level system that right now you're relying
on your base OS, but you have to like replace with the Nix stuff.
That's why I always just end up with NixOS. It's like if you're on Ubuntu or Red Hat,
you probably want that package manager.
But isn't all of this just sort of admitting that distro package managers are failing at their primary job?
I mean, I know it's a complex issue, but the whole point was to solve all of this, right?
I think it's fair to say the longer you stretch it out, the harder the problem becomes to solve.
If you wanted to install current software on this install from 2018,
Flatpaks and Snaps are kind of your best bet because they're not going to have that package in the repo.
So I think it's a harder problem when you want a system to last longer and longer.
And the more minimal the system is, I think the easier the problem is to solve.
So that's probably one of the major tips.
Jeff, you kind of think rolling is probably the best route to go.
So you kind of take the other side of tiny as well and say, don't just update often,
but just go with something that's rolling.
Yeah, just in general, I think, you know, you're not going to hit those brick walls, right?
Like, oh, now you got to do a full system upgrade and sit around for however long that's
going to take.
And then when that inevitably fails, because apt always does for me, at least. It tends to work out better for me
anyways. I, you know, of course I'm an arch user and I've, that's what I know the best. So
that's what I have around my household, including my partner's. And I don't update her system often
at all, maybe two, three times a year. And it's fine. I mentioned, just like you said too, Flatpaks, Snaps, whatever you're more comfortable with. They're not crufting up your system, right? They're going to be a bit more reliable in terms of updating and keeping your base system sturdy.
Uh-oh, this doesn't look good. I see Wes dipping into the AUR over there.
Oh, that's right, we're doing some AUR updates as well.
We did need to manually update Paru, which is, I guess, the AUR helper we ended up on last time we were doing this.
I'm sure there's a couple installed.
Yeah, I think yay is on here too.
Hey, I mean, it's written in Rust.
It did mean we had to compile it, so that took a little bit.
But we've got the up-to-date Paru going now, manually installed.
And we'll see.
In the past, I know we've sort of done the manual package build style,
makepkg, to get the ZFS going.
We can do that if we need to.
But why not see if Paru will just take care of the remaining packages on the system.
The rest, like the base install went pretty well.
It did kind of mess up during the initramfs,
but I think that's a DKMS issue trying to get the ZFS stuff built.
And is that coming from the AUR?
Like, why are we messing with the AUR?
That's what I'm trying to put together.
Well, it's part of our system update.
It's part of the packages we have on there.
But we were so close to a W.
And it is where we get the ZFS stuff.
So if we want updated ZFS, we have to.
That's what it is.
All right.
But it's not a complete update if we're leaving behind, you know, we use the.
Yeah, that's fair enough.
All right.
We'll check back in.
We'll check back in.
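For the curious, the manual route being described looks roughly like this, assuming the zfs-dkms packaging from the AUR (and git plus base-devel already installed):

```sh
# The AUR helper is itself an AUR package, so bootstrap it with makepkg
git clone https://aur.archlinux.org/paru.git
cd paru && makepkg -si

# Then let it rebuild the ZFS module against the LTS kernel via DKMS
paru -S zfs-dkms zfs-utils
```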
Before we move off the topic of keeping an old distro running, I would love to put a call out there to the audience to boost in or email in with their tips, anything from what you're doing when you first set it up to what you do while you're maintaining it, running it.
What are your tips or tricks for keeping an old distro rolling?
Because I feel like that's a conversation that we could keep touching on.
I wanted to just come back to the topic of automation for one moment.
I think also learning some kind of automation either maybe it's setting up your dot files, maybe it's Ansible.
But the one thing that I want people to think about when you are trying to get a lot out of an older distro is imagine a scenario like I'm in right now.
I'm talking to you from the future if you're thinking about doing this.
If I had to rebuild this system right now,
how would I even do that?
Like, how would I bring it back to the state it's in right now?
With software that was originally from 2018,
that's been sourced from various different places,
that's been assembled over time and then updated,
how would I actually bring this thing back to this state if, say, I lost a drive?
That is exceptionally hard when you're trying to preserve a precious environment.
And so anything you can do to make that environment less precious, to make it simpler to just
nuke and pave when something goes wrong and restore, is, I think, going to be essential.
So maybe that's managing your dotfiles properly.
Maybe that's using Ansible.
Maybe that's just having a good backup strategy.
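One low-tech version of the dotfiles piece is the bare-repo trick; the paths and alias name here are just one convention:

```sh
# Track dotfiles directly in $HOME with a bare repo, no symlink farm
git init --bare "$HOME/.dotfiles"
alias dots='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'

dots add ~/.bashrc ~/.gitconfig
dots commit -m "snapshot config"

# On a fresh machine: clone the bare repo, set the same alias, and check out into $HOME
```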
Yeah, I mean, you already mentioned Clonezilla.
Yeah.
You know, maybe that's not where you want to end up,
but it's a great place to start,
and at least then you have some, you know,
quote-unquote golden images you can fall back on,
you can redeploy if you need to.
And in the meantime, you can work on, you know,
maybe you want to set up some more automation
to be able to rebuild it totally
or have more control than just sort of blessed bits, and you want to have a blessed configuration in a Git repo or something.
I think also we should give a little air to the point that we brought up in our pre-production, which was: don't do this, right?
Yeah, just maybe don't build a system like this if you can. Yeah, maybe try to work in automation from the start.
Think about what the life cycle of your system is going to be.
Do you have backup in place so you can try to do upgrades
without having to worry that it's like,
well, I don't really want to upgrade
because I don't know, it might totally blow up
and that might be my whole Saturday
that I have to fix troubleshooting it
or it ruins the system
and I have to do some kind of crazy rollback.
If you have snapshots,
if you have a practice backup strategy, if it is
an important system, so you at least know
okay, well I've got a full disk backup. If something
goes haywire, I just roll it back and we try
it again in a month.
And then from there, at least maybe you feel a little more
comfortable playing with your system. Because I think
one thing that can kind of be difficult is these things
ossify so much.
Even if they aren't, you kind of perceive
them as brittle, and so you're really loath to touch anything.
And then that might mean you're not actually
keeping up with the little bits
because you need to update configuration files.
You need to slowly incorporate some new practices
if you're ever going to get, you know,
to handle the next big upgrade you're running into.
Kolide.com slash unplugged.
If you're in security or IT, you need to hear this.
Man, if I had Doc Brown's time machine, I wouldn't be getting a sports almanac.
I'd be bringing Collide back to when I worked in IT.
Something that drove me crazy was most of the day-to-day problems that were created were generally due to a user getting their credentials phished.
Old out-of-date software not working, so they're just technically out of compliance. Or even just sorting out and keeping track of which systems are in compliance and out
of compliance, especially when it's a regulatory thing, it eats up a lot of IT and security cycles.
And users don't mean to run out-of-date software. They don't mean to have their
credentials phished. You know, it's just the state of technology right now.
Well, Kolide is a solution to this challenge.
For those of you in security IT that are working with Okta,
Kolide ensures that only secure devices that have met your requirements
can connect to your cloud apps.
So compromised credentials, that's not an issue
because those can be caught before they connect.
Software out of date, antivirus not installed, things like that,
that can be caught, and then the user can be guided
using Kolide's system to resolve that problem. They empower employees to fix their own
issue without burdening IT. You got to try it. You got to see it, and it's a great way to support
the show. Go to kolide.com slash unplugged. They got a demo over there. Gives you kind of an idea of
how seamless all of this works and how it really would be worth like skipping the sports almanac
and taking Kolide back in time, because it would have changed the trajectory of my IT career.
I really think this is a great solution.
I want you to go try it.
At least see the demo at K-O-L-I-D-E dot com slash unplugged.
That's kolide.com slash unplugged.
So our email inbox this week has been jam-packed with some great feedback, so thanks, everybody.
We did get an offer.
Spazzy C offered us a little something.
Hey, I have two Dell Inspiron 6400 laptops.
They're circa about 2005 to 2007.
I could send these your way.
They're probably perfectly suited for the 32-bit challenge.
Intel T2300, which Intel's ARK shows as 32-bit only.
It's got four gigs, though, of DDR2.
15-inch.
Luxurious.
Big old 1280 by 800 screens.
And apparently the batteries are still decent.
So it claims they're just new enough that you can throw a serial ATA drive in there.
So maybe an SSD.
So thanks, Spazzy, for the offer.
I think we're still trying to decide
how we're going to tackle this one.
I think we should do it.
Yeah, so I've had lots of great offers,
but this one I really like a lot
because they're two portable laptops.
So like if you and I had them both,
then we'd have identical hardware.
So if we solve the hardware issue,
I think we should do it.
So let's follow up with Spazzy.
We could cover the shipping.
We've had a lot of really nice offers.
Yeah, that's for sure.
Thank you all.
I've like,
I've been wanting to just sit down
and make a spreadsheet
and try to figure out which ones to go with.
But this just seems like such a great option.
And with that option to put an SSD in there,
it makes me think we could build this as like.
It would make compiling for Gentoo a little faster.
Gentoo.
Wes.
All right, man.
All right.
Okay, man. Or building for our weird NixOS system.
No, I'm down, dude.
I'm down.
No, I just think you could put an SSD in there,
and then, like, you could actually build a usable computer,
which I think makes the argument around 32-bit support a little more compelling.
Yeah, like, this is a computer I'm getting use out of.
It's not just some relic that we need to maintain.
Yeah.
All right, Spazzy C., we'll try to follow up.
Let's try to remember to do that after the show.
We're building ZFS.
We're making progress.
Oh, good.
Good, good.
I see we have some more feedback.
Yeah, Joe sent in a little note here that I think is worth pondering.
Regarding the free software discussion recently, I think we should stop calling it free software.
It started as a side project or passion project, but has now changed over the years.
I think it should be referred to instead as common software.
Software for the common good.
Having free in the name generally makes people think
that you don't really have to pay for it.
Just my two cents while waiting for a doctor's appointment.
What do you think of this, Wes?
So changing the language from free software
to maybe something like common software
to change the expectations around paying for it.
Hmm. I mean, the words we use do matter.
I don't know how much this one in particular.
I know we also have Libre software that we throw around as well.
Plus, there's obviously the association with the Free Software Foundation.
So that can also be somewhat problematic for some people.
I think I also like
this idea because I do think of a lot of this stuff, you know, as much as I joke that, why is
this, why is LibreOffice installed on my system? I don't need this. I do believe that the world
needs it in the sense that like, you know, if word processing software is something that must exist
for humans to use, there should be a version that is free and open source and available to be hacked on
and available for free for people to use.
I like the common verbiage.
I'm just not so hopeful at this point
that we can actually get it to take off.
Yeah, it feels like that brand
has been successfully blazed into everybody's memories.
It's already kind of so muddled, right?
Like we say Libre, we say free,
we probably incorrectly often end up mixing
or calling stuff open or free
when it wants to be called the other thing.
Yeah, I think if something's really successful,
the name matters, but it doesn't matter a lot.
So yeah, I think we could, there's a lot to think there.
A Jupiter-approved software?
Yeah, okay, Jupiter software.
I like thinking about that, though.
Thanks for kicking that around with us, Joe.
It's a fascinating idea.
I like what Joe's thinking about while waiting for the doctor, too.
Yeah, right.
You're our kind of folk, Joe.
Picture him there in the waiting room.
And now it is time for the boost.
And we got a baller boost that I think was sent to self-hosted on accident.
So we pulled it over here from deleted.
250,000 sats.
Hey, Rich Lobster.
You are a baller.
He says, did I hear the mention of a 32-bit challenge?
Here's some sats to help you go shopping.
Aw.
Yeah, thanks.
We'll put that towards the shipping.
Perfect.
Yeah, that's great.
Jitty boosts in with 80,000 sats.
Hmm.
Hey, Rich Lobster!
We got a lot of lobsters today.
So, for the 32-bit challenge,
there's an obvious thing you need to run on the machine once it's ready.
The way I see it, a retro machine calls for retro software, and that retro software should serve a glorious purpose with a touch of nostalgia.
My friends, Jupiter Broadcasting requires a BBS.
Mystic BBS is 32-bit and easy to set up with SSH and Telnet support included.
Uh, yeah.
I have a question.
What's BBS?
No.
No. Really? I think so. Yeah, we better get the harp out. Oh, well, you know, I'm feeling like the old man this week, Brentley. I'm feeling like the old man for sure. But stay a while and listen. So back in the day, before we had AOL, or before we had the World Wide Web even, we had these things called BBSs.
And they were a bulletin board system that you dialed up to with your computer.
And you could imagine they could only have as many users as they had incoming lines.
And this was before there was any kind of TCP/IP standard.
This is pretty early stuff.
And everything was done in text.
This is where MUDs became really successful, which were multi-user text games. There's a Star Trek one out there. You can go find it if you search for Star Trek MUD. And yeah, it's estimated, this is according to InfoWorld, that at its height there were 60,000 BBSs active, serving 17 million users in the United States alone in 1994.
That's a big deal.
Yeah. Kind of cute: if you try to go to the Arch Linux forums, the domain name is bbs.archlinux.org. Yeah, yeah, it all makes sense. The name, the bulletin board name, has stayed around. And so those of us that grew up in the day probably have a few stories of accidentally putting outrageous long-distance charges on mom and dad's phone bill because, yeah, it was easy
to do that. That's for sure. You know, that sounds like something really fun, something we could
definitely look into, I think. Thank you for the suggestion. I think that's a great idea,
Jitty, and also really appreciate the support and the boost.
But does that mean we would need, like, a network that other people can attach to? Like, would this become like JBBS?
Well, I don't think we'd leave it running indefinitely, but we could do a little Linode shenanigans to, like, bring through some connections or something to the laptop.
That sounds great.
I mean, we'd have to run it on the laptop.
Yeah, of course.
Well, it's 32-bit, I mean.
Right.
LT boosted in with 51,000 sats.
Coming in hot with the boost!
And it says, um, keep up the good work.
Well, thank you, LT.
Thank you for the support.
Keep up the boosting.
It's always nice to see that.
Torch comes in, also known as Listener Jeff, with 50,000 sats using the index.
Here's a little post-trip refill.
Hope those tacos were good.
The tacos were the best thing I ate on the trip because they were the only food that I ate on the trip besides rice and boiled chicken.
So they were really good, actually.
Well, you earned it.
Hybrid sarcasm boosts in with 50,000 sats.
Hoping this boost buys a humanitarian pause for us Proxmox fanboys.
I mean, fair enough. 50K sats, that's good money. We'll pour a little coffee out here for Proxmox.
Are you sure that was coffee, Brent?
I'm not sure that was coffee. It seemed like. It sounded like. Yeah, the consistency was a little off.
It seemed like something else.
Yeah, who knows.
Kisaria boosted in two boosts, each 22,666 satoshis.
Things are looking up for all but duck.
So for the 32-bit challenge, why not use QEMU's 32-bit emulator?
It's about as performant as any 32-bit machine I've ever had.
Just remember the 4-gig memory issue.
What you call a 4-gig memory issue, I call a 4-gig memory opportunity.
Those lazy devs, come on.
In the second boost, another 22,666, for funding open source myself, I've done a fiat value for value.
So as an example, every seven years, I would expect to spend about $250 on Windows.
So I divide that by seven, then split that up between my DE, between GNU and Linux as well.
Comes out to like $60 a year.
Then on top of that, I try to contribute my skills where I can.
It will help keep it running, and in lean times, I hope it makes a difference.
Yeah, that's a great idea.
I often will do, almost every year, a holiday kind of shopping spree
where I'll go throw some money at some of my favorite open-source software
that I use personally on the holidays.
What I always come back to is I think it's remarkable
that we have solved this for podcasting with Boost and remember streaming sats,
which people stream sats in as they listen all the time.
And it's a real reasonable amount.
You know, they set the amount.
And as they listen to the podcast, they just send a few sats.
Sometimes it's 10 sats.
Sometimes it's 200 sats.
You know, people set their own amount based on what they perceive the value to be.
It just seems remarkable to me that we can't do that for other things.
How have we solved that for podcasts, but we can't solve it for software?
Yeah, well, I don't know.
Now, to your point about using the 32-bit mode in QEMU, that might be useful for some
testing, like maybe if we wanted to eventually keep a BBS and we wanted to run it on a Linode.
Sure, yeah.
Or, you know, imagine that's what folks in the future when all the 32-bit hardware is dead will have to do.
Yeah.
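For that kind of testing, a 32-bit guest in QEMU is roughly this; the disk and ISO names are hypothetical, and note the ~4 GB addressing ceiling on 32-bit x86:

```sh
qemu-system-i386 \
  -m 3072 \
  -hda arch32.qcow2 \
  -cdrom archlinux32.iso \
  -boot d
```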
But we want the full bouquet, right?
We want the whole shebang with the screen and the IO and the entire experience.
Because I think really the idea is,
is what is the value of maintaining 32-bit support at this time?
And really, I think, fully test that.
You need to actually do it on 32-bit hardware.
Southern Fried Sassafras comes in with 27,577 sats.
Best name.
Using the podverse.
Says, I'm guessing the self-harm content apology request was aimed at your joke about cutting your wrist open a couple episodes back.
However, please don't turn it into a show that polices your language and jokes.
Love all the shows.
Keep the great content coming.
Really makes my 40-minute one-way commute better.
P.S. This is a zip code boost.
Oh.
So I wonder, are you listening in two chunks, Sassafras?
Is that what you're doing?
Now, Wes, you brought the map.
Good.
Yeah.
27577.
There's a postal code in Johnston County, North Carolina.
Hello, Johnston County, North Carolina.
Smithfield, Wilson's Mills, or Whitley Place.
We do need to get out there again.
See Alex.
Yeah, we do.
Yeah.
Good idea. Yeah. Thank you need to get out there again. See Alex. Yeah, we do. Yeah.
Good idea.
Yeah.
Thank you for boosting.
Appreciate the support.
Jordan Bravo boosts in with 22,222 sats.
Things are looking up
for old McDuck.
From Podverse
across two boosts.
The first one,
I'm a software engineer
and when I joined
a new company back in August,
I requested a Linux laptop.
Good on ya.
Nice.
But was forced to choose Mac or Windows.
I chose Mac and used Nix Darwin to share most of my config with my personal machine, NixOS,
but it's just not the same.
Now, after months of bugging management, they finally acquiesced and gave me a ThinkPad
with Ubuntu 22.04.
Not quite NixOS, but I'm delighted.
And part two, PS, I for one,
love a show section with Nix tips and tricks.
Here's my Nix config repo that's still a work in progress.
And unfortunately, I don't know if this didn't come through.
It looks like there's supposed to be a link,
but I don't see one.
I think the URL got cut.
I'm not sure.
I'll double check the backend
to make sure we didn't drop it,
but maybe boost us again with that, Jordan Brava, if you would.
We do love the Nix configs a lot.
I would really like to see it if you could send it in again.
Thank you for the boost.
And thanks for sharing.
DucksM came in with a row of mech ducks.
Quacka quacka, it's a treasure.
Yippee!
You know, we're going to need a dictionary soon of all these so I could keep track of them. Hello, gang. Great show. Long time KDE user here. First,
I think for long term adoption, it's best to stick with a user interface, especially if it works,
and introduce end user improvements with each iteration. General users like my wife, kids,
friends, and non-techie users don't like their
user interface pulled out from underneath them. Second, keep talking about NixOS. I used to use
and recommend Debian, but I've been a happy NixOS user, and while I learn new things in the process,
strangely, it's keeping me from wasting time with distro hopping. One thing I haven't figured out yet is virtualization.
Oh, Chris, you can probably send a copy-paste for that, right?
On occasion, I need to fire up a Windows 7 virtual instance
to use some sort of Citrix service,
but haven't gotten VirtualBox to work properly.
Maybe I'll look into virt-manager or QEMU.
Best wishes to all.
By the way, I've been interested in ham radio for years,
and this fall,
I put a few weeks into studying
for the technician license.
I'm now known as
KQ4MAJ.
Congratulations.
Oh, that's great.
You know, a few episodes back,
we linked one of the very first
NixOS configs in the show notes,
has a really nice,
clean VirtualBox setup,
which I yanked and put in my Nix config, and VirtualBox has been working fantastic.
Okay.
So we know it can happen.
Go look at that.
Go look at some of those Nix configs.
A couple of them that have come in recently have VirtualBox in there.
So I think you'll probably be pretty happy with that.
I mean, you could go either way, virt-manager.
You could do them both.
Do it all, man.
That's what I do.
Davalent comes in with 6,000 sats across two boosts using
Castomatic and reminded
me, and I put the call out, what was
Kali, the security distribution,
what was it called originally?
He reminds me it was Backtrack.
Oh, of course. That's when
I used it, back in the old days. Same.
Oh, when you were a lead hacker.
Yeah, it looks like they're, look at this, Wes, their own site's still up,
backtrack-linux.org.
Since 2013.
Some things change
and some things stay the same.
I love it.
Thank you for the trip down memory lane,
Dab, I really appreciate that.
Init6,
boostin' in 10,000 sats
from Podverse.
Managed to get new sats.
Thanks for the tips.
Hey, here's a hot tip.
Strike is now available globally.
One of my favorite ways to get sats, and it's on the Lightning Network,
so you can buy it with Strike in something like 36 countries now or something like that.
It's quite a bit.
They've expanded beyond just the U.S., and it's a great app with a great team behind it.
They have their own infrastructure.
They self-host their own Bitcoin
and it's on the Lightning Network.
So then you could send it
to one of the new podcast apps
or you could top off Albie using Strike.
So check out the Strike app.
New tip.
And thank you, Init6, for the boost.
Gene Bean from Castomatic
boosted in three separate boosts for a total of 9,444.
And you know what that means? Two of them are rows of ducks. B-O-O-S-T. Hey, I don't get the
grumpiness over CLAs. You are contributing to a company's code and they still need to be able
to run a business. A CLA makes that much more straightforward. You know, I think that's an
interesting discussion point, Gene. I believe a lot of the upset comes because people begin to
participate in a project because in part of its licensing, and then things change. Wes,
you deal with this more in the professional field. What's your, not so much your like Linux
advocate hat, but you're like a professional developer hat. What's your, not so much your like Linux advocate hat, but
you're like a professional developer hat. What's your take on CLAs? It seems like there's like,
there's definitely an aspect of trust. You know, there are some projects, I'm pretty sure Closure
has a CLA and it's a rather tightly controlled. Some folks really hate the style that it's
developed because it very much is like the, you know, the founder of the project is still
tightly controlling the reins, but I trust their decision so far, so I'm okay with it.
And I think, as you're saying,
it feels like maybe trust feels violated sometimes
when these CLAs get added or introduced,
and then that change sort of has already started
to undermine your trust in the org behind it,
and then the CLA just gives them more powers
than the open source community, at least in some views.
So it can be tricky,
and I think it depends a lot on who's doing it.
If it's just, like, code that a company developed privately and then threw over the wall and started open sourcing, and it came with the CLA from the get-go, that's probably a little bit of a different situation. At least in terms of, you know, you still might not want to get involved, because you understand that it could change, they're going to do whatever they want with it, you're helping them develop their product, and maybe you're not okay with that. But it's at least a little more straightforward from the start. And I kind of feel like maybe
where Gene's coming from a little bit
is like, well, but we need that company to survive
in order for the project
that you're contributing to to exist.
And that's a thing we're trying to figure out right now,
I guess, in the open source world, right?
It is.
Yeah, it is very much.
I'll add a link here, just a random article.
I thought that at least laid out some of this decently.
So look in the show notes and see if you disagree or agree with any of those maybe.
You know, I had a conversation recently with Simon Phipps at the NextCloud conference.
He's one of the people who was involved in crafting the open source definition.
And one of his pieces of advice for anyone starting out was like, definitely do not sign a CLA.
And he had some interesting points to that.
So I'll see if I can link to that conversation as well.
I thought for someone who's been around and really has been paying attention for that amount of time as that strong of an opinion, maybe there's something to it.
And Gene's third boost with 5,000 sats.
If either of the guys building the website to help promote people in El Salvador to help them find work are doing this as volunteers, I'd love to interview one or
both of them on Volunteer Technologist to help get their story out there. Sounds like quite an
inspiring project. It's a good question. I'm not sure if any of them are doing it volunteer-based.
They might be doing it for free right now as they get it going.
I think they hope to make money.
But I'm planning to stay in contact with them, Gene.
So I'll follow up if I find out more.
Scott sent us a row of ducks, just a little row of ducks.
No message.
Although, Scott, follow up because I had somebody boost in the other day with no message.
But it turned out there was a message.
So we can always check.
We have it in the database.
Soltros came in with 15,000 sats using the podcast index.
Hey, guys.
I've been having fun with NixOS configurations lately, and I wanted to make mine super modular.
So I've taken everything I've added to it, and I've made them each their own .nix file.
Then I wrote a bash script with a dialog checkbox system to turn on and off features I want given my system via imports.
If you're interested, I could pass it along
on matrix.
Yeah, of course we're interested.
That is
very fascinating. I've actually considered doing the same
thing, so I'd very much like to see your take
on it, because I think that could be useful for servers.
I'm a bit surprised here about the bash script,
because surely there's a Nix-native way to do this, right?
Right?
Well, but to have the checkboxes is kind of fun.
Yeah.
That's kind of nice.
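Something like Soltros's checkbox idea could be sketched as follows; the module names and paths are hypothetical:

```sh
#!/bin/sh
# Let dialog build a list of feature modules, then generate a Nix imports file
choices=$(dialog --stdout --checklist "Enable features:" 0 0 0 \
  gaming "Steam and friends" off \
  media  "Media tools"       on)

{
  echo "{ imports = ["
  for c in $choices; do
    echo "  ./modules/$c.nix"
  done
  echo "]; }"
} > features.nix
```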
All right, Wes Payne.
We have stalled long enough.
Where are we at?
Did the AUR packages build and install?
Well, so I guess for some reason we have Ceph installed on here
and that was going to take another
five minutes to download, so I cancelled
the full AOR update and adjusted
the ZFS packages.
And that's done. That worked.
Initramfs rebuilt.
Okay. Is it
time for the reboot? Yeah, I'm gonna
I guess I could do one more check, but yeah, I think so.
Okay, I'm gonna get a ping going.
Oh my god, I'm actually very nervous
because, Jesus.
My whole evening's screwed up if this doesn't
work.
It's always Wes's.
Wait, what did you
sign us up for?
Do you want to
pull the trigger? Let's do it.
We'll come back in just
a moment.
We'll let that thing reboot.
I got a ping going.
You want to take this next boost as a sub in for me real quick?
Yeah.
I'll take JJ's here.
From 15,000 sats using the podcast index, JJ wrote,
Hey, guys, I'm glad to hear Chris is feeling all better.
For dotfile management, I use chezmoi, which is C-H-E-Z-M-O-I.
Chezmoi.
It stores my dotfiles in a self-hosted Git repository.
So I just use Git.
And I can set up a new machine with all my dot files in one command.
And my favorite feature is it supports Go templating with variables based on the machine,
meaning you could include templates or bits of your config based on the machine.
Like, for example, he replaces Find with a tool that's built in Rust.
He says he can also set up his Bash RC,
so he has his aliases dynamically based on the distro.
He says it makes handling configs across different machines
an absolute breeze.
I've started to
dabble with Nix, but I quite like having my dotfiles separate so I can easily apply them to
any machine with or without Nix. It does seem like a neat tool. I've only ever dabbled with
it a little bit. Brent, did you try this one at one point? I remember you did like a few different
.file managers. Yeah, I was looking at.file managers, oh, probably a year ago, but maybe
six months ago. I don't know, memories, you know. Um, and this is one that was right at the top of the list. Uh, if you don't know what Chris is talking about, it's because he, uh... cheese moi? Yeah, cheese moi. It's, you know, naming projects is hard. It's chez moi, which is a French, you know, turn of phrase. But if you wanted to search for it... I'm sorry, did you sneeze?
Yes.
Cheez Moi.
It's chezmoi, C-H-E-Z-M-O-I, chez moi.
But the project seemed like such a nice balance between advanced features and having it be easily accessible by users who aren't necessarily into scripting their own entire Git repo that's going to do this with bash scripts and stuff. So it was like a really nice balance. I would say I haven't used it, uh, so I can't really speak to that part, but it impressed me when I was doing research. So, I don't know, take that and run with it. I think there is some use, you know. Once you're all in on Nix, maybe not, but I think being able to support hybrid systems, having a sort of neutral, independent setup that just works, and Go is quite portable.
Or you just go all in on Nix and use Home Manager.
I mean, you could do that too.
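To make JJ's setup concrete, here's a small sketch; the aliases are made-up examples and the repo URL is a placeholder, but chezmoi really does expose the distro through .chezmoi.osRelease on Linux:

    # dot_bashrc.tmpl -- per-distro aliases via Go templating
    {{ if eq .chezmoi.osRelease.id "arch" }}
    alias update='sudo pacman -Syu'
    {{ else if eq .chezmoi.osRelease.id "fedora" }}
    alias update='sudo dnf upgrade'
    {{ end }}

    # and the one-command new-machine setup JJ mentions:
    chezmoi init --apply https://git.example.com/you/dotfiles.git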
The show mascot, the Golden Dragon, comes in with a row of ducks.
He says, hey, everybody, I just wanted to let you know that an Etsy page has been set up for a digital version of the sticker.
It's a digital version for $2
for those that can't afford international shipping
or for those that don't have the ability to make them on their own.
And he sent us the link, so we'll put that in the show
notes. Oh, nice. It's a digital
download in the spirit of open source and JB, since the great international folks have to pay ridiculously high shipping and landing fees for swag.
Yeah, it is a bummer.
Yeah. Yeah. Thank you, GoldenDragon,
for not only setting that up
but then boosting in
with a little bit of support
to tell us about it.
And before we check back
on that server,
because I do have a ping
going right now,
Greeno boosted in 5,353 sats.
It's a very long boost,
which we just read,
but it's also a zip code boost
there, Wes Payne.
Yeah, though we might have
to do some homework for it
because it's the beginning portion of my UK postcode,
converted from ASCII to HEX.
Okay.
Okay.
Okay.
All right.
You got Chad GPT open over there?
Yeah,
let's pull him up.
Hey,
Chad,
can you answer this for us?
Use the Chad map.
Yeah,
it looks like it may be,
is it,
uh,
is it SS?
SS?
Postal code UK.
Let's find out more.
There's, okay, SS postcode area.
Okay.
Southend-on-Sea.
All right.
How do we do?
Is that useful at all?
We have no idea.
It sounds like you made it up.
There's a lot of stuff in there.
I mean, SS0, SS5, SS22.
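We never read the actual digits on air, but the decode itself is a one-liner; if the hex Greeno boosted was, say, 53 53, it comes back out as ASCII like so:

    $ echo 5353 | xxd -r -p   # reverse a plain hex dump back to text
    SS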
All right, Wes Payne.
We got our ping, which is impressive.
I mean, at least it booted.
It came back.
I'm in.
We are on kernel 6.1.62.
I'm verifying this.
I got to verify this.
I got to verify the applications are running
because that just seems impossible.
And they might still be starting.
Yeah, yeah, I would expect.
I'm seeing our pool is online.
Huh, I see Mono starting, which is absolutely...
And I see Plex starting.
I see Tailscale starting.
Yeah. Yeah.
Yeah, there's a bunch of containers going.
Some of which were created four years ago.
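The sanity pass here boils down to something like the following; these are generic commands, not a readout of exactly what Wes typed:

    uname -r             # confirm which kernel actually booted
    zpool status         # is the ZFS pool online?
    docker ps            # are the containers coming back up?
    systemctl --failed   # list any units that did not start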
Well, there you have it.
I mean, Wes Payne.
I guess I'll get the remainder of the AUR updates going, but otherwise we are up to date.
I mean, I feel like myth busted.
Myth busted.
That is as negligent as you can get with an Arch server.
And then on top of that, we threw ZFS into the mix.
The only saving grace must be the LTS kernel.
Yeah, that was a good call on our part, I think.
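If you want to copy that call on your own Arch box, the LTS kernel is just another package; a minimal sketch, assuming you also have DKMS modules like ZFS that need headers to build:

    sudo pacman -S linux-lts linux-lts-headers   # headers so DKMS modules can rebuild
    # point your bootloader entry at vmlinuz-linux-lts, then optionally:
    # sudo pacman -R linux                       # drop the rolling kernel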
Wow.
I am genuinely relieved. I have a sincere sense of relief that that worked.
Chris, I remember you having a story of hosting an Arch box, I think it was at Angela's place, to play a bunch of media. And I remember a distinct story of you having to go over there, I don't know, late in the evening when everyone's trying to watch, you know, a movie or something, and just spending all night fixing that thing.
So is this one for one?
This is, we just got like a whole free evening, right?
Like, this is great.
I'm a little worried, though.
Does that mean we're going to have to do the rest of the servers as Arch instead of Nix now?
It's just so rock solid.
All right.
Our last boost comes from Zach Attack.
5,555 sats
using the podcast index, and he writes,
Okay, after your explanation of the trouble with
snaps, it got me wondering about
Linux's other big hills people will die
on. Wayland, X11, it seems
like there's a lot of hate for Wayland, and people
think X11 should be upgraded and tweaked
into perpetuity. Any
mention of Wayland is akin to insulting someone's parents.
Audience, let us know out there.
Are you seeing this?
What are the hills you're seeing people die on?
And I think the other question is, is what is your hill?
Because I feel like I'm going to, I'd be willing to die on the Wayland Hill.
I'd take that bet over the X11 crowd.
So I guess that's one of the hills I'd be willing to die on.
What about you, Wes?
System D?
Yeah, well, I'm wondering too,
like I'm wondering if we can,
how many folks, you know,
like we had a certain amount on the System D stuff.
How many folks are just going to try to stay on X11?
I definitely feel like there is this attitude,
but as the more recent switches
and sort of converts and upgrades
and like distro releases
and thinking, you know, Ubuntus and Fedoras, and, like, as more things, as KDE, you know, defaults to it, as more things get Wayland-y, uh, I think it's going to just be a smaller and smaller set, you know, and more things just work. Like, so when you try it now, you know, you can have the Chrome flag that makes your screen sharing actually work. But that said, there's like four or five distros still going that are systemd-free distros.
Well, yeah, there's always going to be some.
So there will always be an X11 crowd.
Yeah.
But eventually, you know, there'll be more and more software that, you know, just like they didn't write the old service files anymore, they're not going to support talking to X.
Yeah.
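The flag Wes is most likely thinking of is the PipeWire capturer, though we never name it on air, and depending on your Chrome version it may already be on by default or removed entirely:

    # paste into the Chrome/Chromium address bar:
    chrome://flags/#enable-webrtc-pipewire-capturer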
Yeah, there are a few hills.
I think another hill I'm kind of willing to die on is that Btrfs doesn't get enough respect.
I'm not saying it's like the king file system.
I just don't think it gets enough respect or consideration, especially for small little home lab setups or especially ARM-based systems or laptops.
I just don't think Btrfs gets enough respect.
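To make the home-lab case concrete, here's a sketch of the kind of low-effort win we mean; this assumes your root filesystem is Btrfs and that a /.snapshots directory already exists:

    # cheap read-only snapshot before an upgrade
    sudo btrfs subvolume snapshot -r / /.snapshots/pre-upgrade-$(date +%F)
    sudo btrfs subvolume list /   # review existing subvolumes and snapshots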
Mine might be like a reverse of the old stuff.
You know,
we were talking about
habits people haven't changed
and a lot of the stuff
in that Reddit post
was mentioning
ifconfig.
Yeah, yeah.
And I'm definitely
in the ip addr crowd myself.
Oh yeah, yeah, me too.
Yeah,
I do have to admit to that.
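For anyone still typing the old habits, the rough translation table; the interface name is just a placeholder:

    ip addr               # what ifconfig showed you
    ip link set eth0 up   # ifconfig eth0 up
    ip route              # route -n
    ss -tlnp              # netstat -tlnp, while you're breaking habits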
All right,
thank you everybody
who boosted in.
We had 22 folks boost in.
29 total boosts,
which is great.
And we stacked 422,437
sats, which is just absolutely
fantastic.
Thank you, everybody who supports the individual production
by boosting. Like I mentioned
earlier, you can get that Strike app, or you can just
get Albie, top it off somewhere
on the Lightning Network, or get a new podcast
app at podcastapps.com. Lots of options.
Fountain and Podverse and Castamatic seem to be the most popular in our audience.
And I think what really makes them stick, besides the fact that there's a whole new
ecosystem of podcasts that have a whole bunch of cool features, is there's just a button
right there to boost in.
And so when we're saying something and you're thinking about it right then and there, you
hit the button, it sends the message in, and you send us a little support.
Because this is a value-for-value production.
And we're always trying to make something
that is absolutely the most optimized for our largest customer,
which is our audience.
So if we say something or share something
or get you thinking about something that is a little valuable to you,
you can send that value back.
You can also become a member.
And we do have a Black Friday sale going.
So if you want to become a core contributor,
we'll have links at linuxunplugged.com, unpluggedcore.com.
Linuxunplugged.com just has the link.
And if you use the promo code BLACKFRIDAY, you get $2 off for a year for your membership.
You can use your fiat fun coupons to support the show automatically every single month.
We really do appreciate that.
So that is probably going to run until a little bit after Thanksgiving here in the States, but not much longer.
So jump on it if you want it.
Promo code Black Friday.
Our pick this week is a replay because I checked and it's been 103 episodes since we've mentioned this tool.
Oh, wow.
And I've had to tell multiple people about it in the last couple of weeks.
I've talked to multiple people about it in the last couple of weeks.
And because we just recommended using Flatpaks to maintain a system over the long haul, I want to amend that and recommend Flatseal.
This is available on Flathub and lots of other places.
It's a graphical utility to review and modify the permissions of your various Flatpak applications.
That way you can break that pesky sandboxing and let your apps break your system like you want them to.
Yeah, like a good old classic app. You know, every now and then there's just, like, so you can't get to the network, you can't get to a folder, you can't take a screenshot. OK, you open up Flatseal, you look, you go through the options there, and generally you can find the setting that'll fix that for you. And I just wanted to give it a mention again because I think it is
really useful. It's a nice way to sort of review, too, take a look at, like, oh, what is this app already getting access to? Yeah, that too. You know, I've discovered something recently, in at least KDE Plasma, that there's a new Flatpak permissions section in the settings which looks almost identical to Flatseal, and, uh, I haven't spent that much time with it, but it's, like, just also there. So I think they inspired some changes in the desktop as well.
Yeah, I could see it built into GNOME settings one day too.
I think it should be probably.
I know it's a lot of flips and switches, but it's really nice to have.
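And if you'd rather script what Flatseal toggles, the same knobs are exposed on the command line; the app ID here is a placeholder:

    flatpak override --user --filesystem=home org.example.App   # grant home folder access
    flatpak info --show-permissions org.example.App             # review what it already has
    flatpak override --user --reset org.example.App             # undo your overrides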
Don't forget you can vote in the Tuxies too, tuxies.party.
I took a peek.
I think I saw 1,600 responses.
All right, not bad.
So keep them coming, folks.
Not bad.
Yeah. You think we could get that to them coming, folks. Not bad. Yeah.
You think we could get that to 2,000?
Tuxies.party.
We also would love to get you to boost in your tips
on keeping old Linux installs running smooth.
Send that in because I'd like to compile a bit more of those
because I think we could visit that topic again
and do it some more justice.
So please send those in.
And maybe let us know, if you are maintaining an old system yourself, how old is it? And yeah, has it broken? Because, like, one of mine here, you know, something breaks about once a year in the upgrade process, and I spend a little bit of time fixing it, and it's just part of using an older system.
Do people run into that? Am I the only one? See you next week. Same bad time, same bad station.
Yeah, that's right.
We're back to the Sunday time.
If you'd like to join us live over at jblive.tv, we do it at noon Pacific, 3 p.m. Eastern.
You can get it robotically programmed in your local time at jupiterbroadcasting.com slash calendar.
And don't forget our love plug gets together every live show, too.
In that mumble room, you get a low-latency opus stream, and you can participate.
And then if you're a member, you get the full bootleg, which is, at this point, a two-hour and ten-minute show.
There's a lot in there.
It's reckless.
It's wild.
And it's also published just about as quick as we possibly can.
So you get the show quick.
Like on Saturdays, you're going to get it Saturday night.
But we just appreciate you listening.
Thank you so much.
And we'll see you right back here next Tuesday, as in Sunday. I can't believe it worked.
I thought for sure we were going to try to figure out how fast we could fix it.
I mean, like that whole password format change.
We got lucky in that it's not a desktop.
Yeah.
And so we missed some of the breaking changes that way.
Yeah, there's not a ton of users on this system.
It's not running any complicated
graphical stacks. The applications are in
Docker. Yeah, for the most part. So it's
kind of a regular system. We've got a
few odds and ends like
yt-dlp is on there and things like
that. Yeah, you gotta have that. And Netdata
is on there, of course. And there's a few other things
like Magic Wormhole and
maybe Samba is installed locally.
Right. And so some of those things might be things these days
we would just run in Nix as a complement
to the, you know, more service-oriented stuff
that's running in containers.
I don't know.
Yeah.
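Concretely, that hybrid approach might look like installing those odds and ends through Nix on top of the Arch base; a sketch, assuming flakes and the nix command are enabled:

    # needs experimental-features = nix-command flakes in nix.conf
    nix profile install nixpkgs#yt-dlp nixpkgs#magic-wormhole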
Wow.
Can you believe it?
After all that time?
I mean, honestly, did you think it was going to work, Wes?
I thought we would be able to get it back online,
but I sort of expected a trip out to the garage.
Yeah.
You know, it just like...
We didn't even have to go to the console.
Now that...
A year.
This thing's been too good to us.
Why do people hate on Arch so much?
That is impressive.
I think there is a feeling of the uncontrollability.
Yeah.
You know, because, like, that quote you read was saying, you don't know when stuff's going to break, because you're not just getting bug fixes, you're getting the new versions. But that feels like sometimes maybe it's more of a theoretical concern, because in practice, if you're okay with, like, okay, yeah, I do have to deal with this, or you know how to work around it, um, it means you just keep moving and you don't have these giant updates. And even after a year, we're still doing all right.