LINUX Unplugged - 331: apt install arch-linux
Episode Date: December 11, 2019
We're myth-busting this week as we take a perfectly functioning production server and switch it to Arch. Is this rolling distro too dangerous to run in production, or can the right approach unlock the... perfect server? We try it so you don't have to. Plus some big community news, feedback, and more. Special Guest: Brent Gervais.
Transcript
An announcement from Microsoft's Office 365 team today, Wes, for Linux.
Microsoft Teams, their Slack competitor, is officially here.
And I'm going to go so far as to say, early prediction, so I'm claiming this one.
It's already on my list.
What?
This is a beachhead for Office on Linux.
Oh, you think so?
The rest is going to follow suit.
I do.
It's been interesting.
You know, I mean, there was an open support issue since Teams launched for Linux.
And for a long time it
didn't look like it was going to happen, there just wasn't
interest. Now it's finally here, although
you'll have to go, like, three directory
levels deep to find it.
If you go to teams.microsoft.com
slash download, they do have a more friendly version,
but if you get where Wes got
linked to, he's, like, browsing through
their web directory. That was from one of their pages.
I know, I know.
Nobody else in the world
except for existing Office
365 users was looking for this.
But you know, if you want something besides
Slack, there's some really great open source alternatives.
However,
a lot of Office 365 users.
Right, and Teams gets bundled
in there, so of course you probably
have to use it. And now there's another Electron app you can use on your Linux desktop.
Yes! Oh, man.
Boy, wasn't that on my wish list.
So while you were spelunking in web directories,
I was just installing it from the AUR,
because, God bless the AUR, it just showed up immediately.
Nice.
All right.
I'm not going to use Teams, though.
Come on.
Come on.
I'm not using Teams.
If anything, I'm going to Mattermost next.
Yeah, but the people who use Teams don't get to choose.
Hello, friends, and welcome into the Unplugged program.
My name is Chris.
My name is Wes.
Hello, Wes.
I've got a smile today because this is one of those episodes where Wes and Chris are going to do something you shouldn't try at home.
Don't follow along.
Don't do what we do this week.
We're busting a stereotype. Fedora worked so well. It blew my mind how great it was as a server OS
and all the stereotypes say don't run Fedora as a server. So we thought, well, let's take that a
little bit further and let's replace our highly critical, perfectly functional Fedora server
with an Arch box and report back what the process is like to migrate a live in-production server from Fedora to Arch,
lessons we learned, and then truly what we think of it as a server platform
and some of the things we're doing to safeguard the fact that it is a rolling distribution.
I mean, what could go wrong?
We're doing this so you don't have to.
You're welcome.
I'm really not kidding.
This box is critically important to us.
But we really wanted to test the stereotype.
After having really good success with Fedora, we thought,
we've got to replace this Fedora 30 install either with Fedora 31 or CentOS.
So let's go to Arch.
We've come a long way from FreeNAS.
Yeah.
Yeah, we really have.
So we'll tell you about that in a little bit.
But before we go there, we do have some community news to get into.
Like, some of this is down the road, maybe going to happen prediction stuff.
Like, we don't really know.
So let's start with some of those far out predictions.
NVIDIA looks to have some sort of open-source driver announcement just around the corner.
Michael Larabel is hot to trot with this one over at Phoronix.
He says, start looking forward to March when NVIDIA looks to have some sort of open-source driver initiative to announce,
which is likely contributing more to Nouveau.
He was tipped off by a Phoronix reader about a session that's happening at GTC 2020
by NVIDIA. Now, the engineer from NVIDIA, John Hubbard, is running a talk titled Open Source,
Linux Kernel, and NVIDIA. GTC, for the unaware, is the GPU Technology Conference, and boy, does
that talk have an interesting abstract.
Here it is.
We'll report up-to-the-minute developments on NVIDIA's status and activities,
and possibly, depending on last-minute developments,
a few future plans and directions
regarding our contributions to the Linux kernel,
supporting Nouveau,
including signed firmware behavior,
documentation, and patches,
and NVIDIA kernel drivers.
Whoa.
Okay.
There's a lot in there.
Signed firmware behavior is huge.
Documentation would be ginormous,
but patches to the actual upstream projects is mind-blowing.
That is mind-blowing.
Has something changed inside NVIDIA?
Perhaps they've noticed that AMD is kicking some ass these days.
So you're going to have to wait until towards the end of March because GTC 2020 runs from the 23rd to the 26th of March in San Jose.
You're going to go, Wes?
I know you're a big gamer.
I don't think I'll make it, but I will be equally awaiting this news.
Hopefully Michael Larabel will make it down there and he'll give us a report back.
All right.
And then one other kind of far out in the future community news,
but one we are so dang excited about.
Your co-host and buddy Jim Salter over at Ars Technica writes
that the WireGuard VPN is one step closer to mainstream adoption.
This is all coming from the Linux network stack maintainer David Miller
who committed the WireGuard VPN project to the Linux kernel's net-next source tree.
He maintains both the net and the net-next source trees, which govern the current implementation of the Linux kernel's networking stack and, of course, the future one.
Yeah, net-next gets pulled into the new Linux kernel during its two-week merge window, where it becomes net.
With WireGuard already a part of net-next,
this means that, barring unexpected issues,
and there's always time for those,
there should be a Linux kernel 5.6 release candidate with built-in WireGuard in early 2020.
Oh, just excuse me, Wes.
Just going to do a little happy dance.
Something tells me we're going to be trying those release candidates when the time comes.
Now, do take a little bit of salt with this one.
I mean, yeah, the kernel community does what they want.
The maintainers have their own priorities and schedules, and we'll see what happens.
We've heard this song before, but we did just recently see the required encryption bits land.
We covered that recently.
So this is sort of the next required piece.
Now, there's
kind of an unfortunate possibility
on the timing here, because if I'm
doing my time math right, and Jim
points this out in the article,
this is probably going to land after
Ubuntu 20.04, the next big LTS.
Yeah, that's unfortunate.
I imagine there'll be long timelines to see it get in
other sort of LTS releases like RHEL.
But WireGuard founder and main developer Jason Donenfeld
offered to do a bunch of the work backporting WireGuard
to earlier Ubuntu kernels directly.
Jason.
That's great.
That is.
We've had very brief exchanges with Jason in the past,
and he seems very passionate about this.
Oh, yeah, obviously cares a lot.
And has put in a ton of work promoting WireGuard
to get it where it is. I hope they take
him up on that. You know, he also teased
that a WireGuard 1.0
is on the horizon.
Well, what? I know.
Wow. Already. How can it get
better than it is? You know, that's a good
thing to say, too. Like, if you could have that
land right before 5.6, then you're
including 1.0. Right, we've got here, like, a nice, stable, maintained VPN.
Mm-hmm, mm-hmm.
Mr. Brent, you just recently had adventures in WireGuard land.
Yeah, it's been on my docket for about, I don't know, six months.
Wes, I've been asking you for, like, hey, Wes, when you're ready, if I go ask you, would you help me?
And it turns out you didn't even need it.
Well, it turns out I am more
capable of things than I think I am. So this morning I thought, ah, I got a pretty wide open
docket and WireGuard seems like the best thing to do in the morning. So dove right in and it was
super smooth. What are you using it for? I mean, are you replacing an existing VPN? Yeah, my idea
was I'm currently using private internet access just to give me, you know, as some people may have picked up, I travel quite a bit.
So it gives me some extra protection basically everywhere.
And so that's been playing fairly nice with my phones and computers.
But, you know, they just got sold recently, I believe.
And if the rumors are true.
Yeah, they are.
And also the apps are kind of buggy, and they're not native.
I don't like them.
So I thought, okay, WireGuard really seems to make sense.
You guys have been telling me it's super stable.
So I jumped in, had to learn a few things,
but there's some great tutorials out there.
And yeah, just spun up a VPS on our trusty DigitalOcean
and got it configured pretty easily, actually.
And it's working pretty good.
How proud are you of Brent right now?
Oh, yeah.
Amazing.
Super proud of you, Brent.
Good for you.
No, thanks.
And did you find any particular documentation useful
or any kind of tips or tricks you could pass along to people
that are also kind of starting from zero?
Yeah, I think my setup is wonderfully simple.
All I'm trying to do is have a kind of a dumb server out there that's just waiting for me to connect to it.
So it's not doing much fancy stuff, but maybe that's a good place to start.
Absolutely.
So I will share in the IRC here the tutorial that I used, and it just kind of like worked.
It was super simple, very straightforward, great, written in a nice way that allows you to learn along the way,
which is the whole idea.
Excellent. Will you grab that for the show notes, Mr. Payne?
I sure will.
And it's great that it's already pretty easy,
and once it's in the kernel, it'll be even easier than that.
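For anyone starting from zero like Brent describes, the whole server side boils down to one small config file. Here's a minimal sketch; the keys below are placeholders you'd replace with output from `wg genkey` and `wg pubkey`, and the addresses and port are just illustrative defaults, not Brent's actual setup:

```shell
# Illustrative server-side config, written to the current directory here;
# it would normally live at /etc/wireguard/wg0.conf. Keys are placeholders --
# generate real ones with: wg genkey | tee privatekey | wg pubkey > publickey
cat > wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY_PLACEHOLDER

[Peer]
# one block per roaming device (laptop, phone, ...)
PublicKey = CLIENT_PUBLIC_KEY_PLACEHOLDER
AllowedIPs = 10.0.0.2/32
EOF
```

Bring it up with `wg-quick up wg0`. The client config mirrors this, adding an `Endpoint` pointing at the VPS and, for a travel-protection use case like Brent's, `AllowedIPs = 0.0.0.0/0` so all traffic routes through the tunnel.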
Yeah, that's the thing.
Part of that setup is just getting it installed right now
and figuring out how you're going to do that.
We just went through the installation process last night ourselves,
which we'll talk about more in a little bit.
Really hope that they do take Jason up on that offer to backport it to the LTS because, like you kind of implied there, Wes, there's also the question of the RHEL release cycle.
The current version of RHEL is using the 4.18 kernel, which is already nine months old,
and they tend to stick with that for
quite a while. Yes.
Even that said, it's pretty straightforward
to get going, even on a system that doesn't have
it baked in. Right, it's not a big module
to load, it's very easy to build
and it's one of the better behaved
DKMS modules I've ever used.
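Until WireGuard lands in mainline, that DKMS module plus the userspace tools is all a host needs. On a Debian/Ubuntu-style system it's roughly the following sketch (package names vary by distro; on Arch, for example, it's `wireguard-dkms` and `wireguard-tools` from the repos):

```shell
# Install the out-of-tree module and tools; DKMS rebuilds the
# module automatically whenever the kernel gets updated.
sudo apt install wireguard

# Load it and confirm it built against the running kernel.
sudo modprobe wireguard
lsmod | grep wireguard
```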
So this next story kind of made me smile
this morning because it felt like the good old days of using a Linux distro.
I knew that Manjaro, which I've been running on a couple of my workstations,
had an update coming.
They've been kind of teasing it on Twitter.
But I didn't know when it was going to land.
And so as I always like to do,
because I think it's the way to keep an Arch system running great,
is I decided I'd check my packages this morning.
Just do a, you know, synchronize my mirror.
It's a show day where you need your workstation to work,
so you better update it.
You're right.
I really am a dummy, aren't I?
I really am dumb.
I'm so dumb.
I think you're just excited.
You know, you have this great little workstation going,
and you're like, oh, there's probably new packages.
You know what it is?
It's that I just love the way it all looks in
Cool Retro Term, that cool retro CRT terminal.
And pacman's so pretty.
Yeah.
So anyways, I'm just synchronizing my mirrors,
and I notice I'm getting like 30k a second from the mirror.
It's just going really slow.
And I'm like, oh, oh, must be a big release day,
because that's how this works.
And sure enough, I check on Twitter,
and a brand new stable release of Manjaro is out.
525 packages needed updating on my system.
Oh, boy.
It includes the new 5.4 LTS kernel and a new version of,
remember I was trying to tell you Manjaro has this package manager that's, like, unique to them?
Mm-hmm.
Pamac?
Pamac, yeah.
A new version of that,
as well as updated desktops.
And just loving,
loving it so far.
I wasn't sure
what the first update experience
would be like,
but this was great.
It was really kind of fun.
Oh, my mirrors are running slow.
And my first thought was,
let's go check Twitter
and say, hey, look,
a new release.
I mean, it's a little bit of a difference
there if you're used to Arch, because it's, I mean,
not quite rolling. Right.
Right. Right. Yeah, it's just sort of fun to have
sort of, like, moments where, like, there's a bunch of
things, but it's not a complete huge thing. It's a mini
new Fedora release. It's mini. But, yeah.
All the time. It's a good way to put it.
It's a little mini update, and I
really, um, I really think they
did a pretty good job of this one.
I haven't actually rebooted yet, though, so I'll reserve my judgment.
Oh, come on.
That's the real test.
I know.
I was like, you know, I can't.
You have to kick the tires on the new kernel.
Well, now that we have the show at the newer time, I was like, I got to run down there and do the show.
I didn't have a half hour.
So you should have rebooted it, and then we'll check at the end of the show.
Yeah.
I could run up there.
Should I try it?
Should I try running up there during one of our clips or something and see? I don't know. I hope it works because
I've really fallen in love with that workstation. Well, actually, you know, we've got a tip later
on in the show that maybe you should set up on that workstation and then you wouldn't be so
worried. You're right. I totally could set it up on that workstation. Oh, Wes Payne, you are clever.
We will definitely be talking about that.
But before we get into all of that, I want to mention,
speaking of how much I love my dang Plasma desktop,
we had ourselves the Making Plasma Brilliant livestream on Friday,
and I'm pretty happy to say I think it went
decent. We had a good attendance,
the mumble room was poppin',
the video after the fact has already been posted
and got a lot of views,
relative to what I expected. I think the sign it went well is there was just
more stuff than we could possibly talk about; we were never
hunting for content, 'cause there's so much
to do in Plasma. Nailed it.
Right at the hour mark.
Still got the livestream.
And there's like six or seven topics we just couldn't go into.
I got that internal clock.
Pow, right there.
So it's up now.
If you are interested in what I do to beautify and make my Plasma desktop,
like I say in the video, from basic to brilliant,
that link will be in the show notes at linuxunplugged.com slash 331.
And it's up on the YouTube channel at youtube.com slash jupiterbroadcasting.
And you can just jump in if you want.
At the very beginning, I kind of tell you what to expect.
So if you just want to know how to tweak your fonts or tweak Konsole, you can just jump to that.
Also, Brent just keeps hitting it out of the park.
Brunch with Brent and Alan Pope.
Mr. Popey sits down with Brent for a fantastic brunch.
Is there anything you want to tease?
I haven't had a chance to listen yet because it just came out this morning.
Oh, there's a lot of stuff in there.
Both of us at the end of the conversation went,
geez, that went in a bunch of directions we never expected.
That's usually the sign of a great brunch with Brent.
I didn't know this about Popey, but he's a very well-practiced fuzzy tester,
and he tells a little bit about
his adventures there.
So that's a pretty good one.
you get some good Popey flavor
I can tell just by looking at the links
there's some good Popey flavor that comes through on this one
and you know we were just sitting around here
and said you know
if there's just one thing we need
is Popey on more podcasts.
So he's on the Ubuntu podcast.
He's on the User Error podcast.
We need him on more podcasts.
So brunch with Brent,
extras.show slash 38.
Maybe someday he'll come back to love.
That's right.
What the heck?
I forgot him.
I remember.
Remember that, Wes?
Wes remembers.
Wes remembers.
Oh, yeah.
But it was really great.
So if you've been missing yourself, some poppy.
Also, there was recently one with Wimpy, too.
I see.
I see how it is.
Now they got time for brunch.
Sorry, guys.
I'm taking, at least, all of your appearances.
No, actually, you know, I've known these guys for a long time,
and just by listening to Wimpy's episode,
I learned stuff about Wimpy,
so I know I'll learn stuff about Popey, too.
So it's really great.
Also, I am so happy to say that our Telegram channel
has really leveled up recently.
Lots of great conversations going all the time,
and thanks to the work of Cheese Bacon and others,
we've got some good spam prevention
in there now.
So it's a really nice,
good, clean chat
at jupiterbroadcasting.com
slash telegram.
Yeah, there's really
a fun conversation
going on there
almost all of the time.
Pretty much all the time
because there's folks
from all different time zones
in there.
But you'll also see
the host popping in there
throughout the day
as well as
network announcements.
So if you've
felt like you want to take the conversation
beyond just the download,
jupiterbroadcasting.com slash telegram.
Even sometimes that Wes Payne's in there.
Oh, yeah.
It happens.
All right, Wes.
So here we were with a perfectly functional
Fedora 30 workstation server,
which is even funny to say.
Actually, we might have used the net install image.
I think we did use the server, yeah.
So it might have been the server at the end of the day.
All right, fine, fair enough.
There wasn't a GUI installed, but it was still Fedora.
I think we had a workstation USB drive
going, though, to troubleshoot.
Did you think I was crazy
when we picked Fedora coming from FreeNAS?
So the background there is we were running FreeNAS.
We had a whole ZFS array that Alan Jude set up for us,
but we found FreeNAS to be limiting, so we went to Fedora.
I mean, I'd say limiting is just that it didn't work for our use case.
We needed less of an appliance because we wanted to manage the server a little more interactively,
and we just weren't that familiar with FreeNAS or FreeBSD.
Yeah, and we also wanted to take advantage of being able to run things from the command lines
for testing for the show and setting up and spinning up things that are a little more
cutting edge that maybe there wouldn't be a pre-cut
something for FreeNAS. I mean, we're Linux
nerds. That's what we do. And honestly, we just wanted
a Linux system. I would have gone
Ubuntu LTS probably myself. Right.
But it didn't seem crazy. One of the first
servers I ever set up when I was starting to make the transition
from playing with Linux on my little laptop
to like, oh, okay, I'm going to try this
on the server.
It was like Fedora, I don't know,
it was like 14, 15, somewhere in that era.
And it was a fantastic server.
So I knew it could work.
For a limited time, though, is usually the concern.
Yeah, I mean, I think I had that thing for a year or two at most,
and I wasn't current on updates.
And it's a lot of updates.
It is a lot of updates.
That is something we had to deal with,
which Arch will be the same way,
is there was sometimes updates
that would then break things like our ZFS support momentarily.
We are running a lot of out-of-tree modules relative to most servers.
WireGuard, ZFS, to just name a couple of really critical ones.
And that was a bit of a struggle, although not insurmountable,
but it did cause probably one outage in total.
But, you know, one something.
And I think we sort of found that while we liked Fedora, there was a lot going on that
we appreciated.
It was almost, it's almost too complicated, too ready for the enterprise, if I'm allowed
to use that phrase, because there were just a lot of systems in place for good reason
that you would want and that were well configured, but that we just didn't need in our tiny,
you know, server use case here in the studio.
But end result was we were very, very happy with Fedora
and we're very much considering because of that
going with CentOS 8 or CentOS 8 Stream
and then just loading ZFS support into that.
However, we got talking about this
from kind of like a philosophical standpoint, and we realized
this is something that we have an opportunity to kind of try and maybe bust a myth here on the
show. We are the Linux Unplugged Mythbusters because we have a theory, and that theory goes
that if you were to build a minimum viable Linux server, or another way to put it is
just enough Linux
so the system boots
and launches containered applications
and really
does almost nothing else.
Right, I mean, we're under a gig.
Some of those container-specific distributions.
And we're not going that far because we still kind of want
all of our usual tools.
Yeah, we go a little bit further because we install things like NetData and Samba
on the host system.
But I think our base install is still well under a gig.
It's a very minimal Linux install.
Very few things are running.
And we wanted something that was,
everything was off by default
and what we turn on incrementally is all that's running
with all the other functionality provided by applications
and containers that are divorced from the host operating system.
That was our theory.
And we thought, well, in these conditions,
if we could come up with a belt and suspenders approach to running Arch,
it would probably be a viable server platform.
Right. I mean, we're both familiar with Arch and have run it many times.
And I think we nailed it.
So I'll tell you what our belt and suspender was as we go here,
because I think you're going to like this.
But I first want to start, I want to set the scene.
I didn't know exactly how Wes planned to pull this off,
because here we are.
We have a Fedora 30 box.
It's in production.
We've set an evening aside with the team.
Hey, this thing's going to be offline for a couple of hours.
We weren't clear on that one.
Or like six hours, but we'll get to that.
But I didn't exactly know how Wes was going to accomplish this.
So I was delighted to learn exactly how we were going to install Arch on top of this
existing Fedora instance.
Wes Payne has decided the best way to load Arch Linux on our server is to boot with an
Ubuntu thumb drive.
Yeah, that's right.
I mean, Ubuntu is just a reliable operating system.
Why not use it to install Arch?
You get a GUI and everything.
That's great.
And I'd just like to point out that the Arch install scripts are packaged in the Ubuntu
repository, so it couldn't be easier.
Oh, I didn't realize that.
That's really cool.
Okay, so we've got the thumb drive.
We've done a lot of the preliminary,
so go ahead and fire it up.
We've done some backups.
We've exported the ZFS pool.
We shut down the Docker containers.
We're going to do one more export
just to make sure everything's nice and clean
of the ZFS pool.
But job one now will be to boot off of this Ubuntu thumb drive
and create an Arch chroot environment. So these Arch install scripts, they're in the Ubuntu
universe repo. So you got to turn on the universe repo. But what is this? Is this some sort of like
backdoor way to get Arch on an Ubuntu system? Yeah. Oh, yeah. Well, I mean, it's really it's
just the minimal tools that, you know, the Arch team has written to help aid you in the install.
Things like arch-chroot or pacstrap.
Just a little, you know, a few utilities that can get things up and running.
Or like genfstab, the handy tool to make you an fstab entry.
Yes, that was really nice.
And most of them, I mean, they just need sort of the core Unix tools, all the stuff you would get in core utils.
So that was fun because I was really surprised to see that.
And it was great, since it supported all of our hardware, including the ZFS disk.
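Sketching out the rough shape of that bootstrap from the Ubuntu live session (device names, mountpoints, and package choices here are illustrative, not our literal command history):

```shell
# From the Ubuntu 19.10 live environment
sudo add-apt-repository universe
sudo apt install arch-install-scripts

# Partition and format first (fdisk, mkfs.btrfs, ...), then mount the target
sudo mount /dev/sda2 /mnt

# Install a base Arch system into the mounted target
sudo pacstrap /mnt base linux linux-firmware

# Generate fstab entries from whatever is currently mounted under /mnt
genfstab -U /mnt | sudo tee -a /mnt/etc/fstab

# Chroot in to finish up: bootloader, users, services
sudo arch-chroot /mnt
```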
And we, of course, had to struggle a little bit, because it's a server, it's a
Supermicro system, so it doesn't have a very fancy graphics card.
But we managed to get that figured out by just going into safe graphics mode, and then
it was off to the races and time to destroy some data.
We're up and running in Ubuntu safe graphics mode.
And Wes was pretty clever choosing 19.10
because it supports ZFS automatically.
So that was really nice.
And we've already made the drastic step
of wiping out the partitions.
And Wes is currently in the process of creating new ones.
We're doing a really simple layout.
The host drive will be Btrfs.
The storage array
is ZFS. So we are doing a
Btrfs/ZFS hybrid
Arch install. We'll
explain more about that. There we go.
That's our three simple
partitions. Have you
written it to the disk yet? No, not yet.
Are we ready? Let's do it.
There's no turning back now.
Go, okay.
I didn't give you much notice. There was no time
for you to say wait.
You were just ready to go.
I had my finger hovering over the keyboard.
It's cold out there because this is Pacific Northwest.
I didn't have the most dexterity.
No, and it's a cold garage.
I mean, data center.
So I thought this
would be kind of fun to just talk about on the show for a moment.
How about a hybrid
Arch file server that's Btrfs
on the OS disk and
ZFS on the data disk.
Radical. The further we got into this
setup, the more I love it
and I think this is how I'm doing my workstation
setups from now on too.
And it really kind of comes down to how you set up the sub volumes for that belt and suspenders
approach I was talking about.
All right.
Okay, so we've done a simple disk layout, but inside that simple disk layout, we've
created a series of Btrfs subvolumes.
Can you give us a quick rundown?
Well, we'd like the ability to take snapshots.
Considering we're installing Arch here,
down the road some packages may go wrong.
And therefore we've got a root subvolume,
and then we're also going to have some data
stored under the home partition,
maybe some of our Docker setup or other configuration.
And we'd like that at probably a different cadence.
We might integrate snapshots of the root file system
with the package manager or perhaps on a daily cadence
and want something different for home.
So we've got those separate,
and then we've actually got a totally separate boot partition
that we can take snapshots of as well.
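A subvolume layout like the one Wes describes, with root, home, and boot split out so each can be snapshotted on its own cadence, might be created roughly like this (the `@` naming is a common convention, and the device paths are assumptions, not necessarily what we used):

```shell
# With the new Btrfs filesystem mounted at /mnt
btrfs subvolume create /mnt/@        # root filesystem
btrfs subvolume create /mnt/@home    # home data, snapshotted on its own schedule
btrfs subvolume create /mnt/@boot    # kernels and initramfs, so rollbacks stay bootable

# Then mount each subvolume where the install expects it
mount -o subvol=@     /dev/sda2 /mnt/target
mount -o subvol=@home /dev/sda2 /mnt/target/home
mount -o subvol=@boot /dev/sda2 /mnt/target/boot
```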
I'm really looking forward to experimenting
with integrating snapshots into pacman.
So before and after pacman actions,
we do a pre and post snapshot.
We'll talk more about that in a little bit.
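pacman's hook system is what makes those pre and post snapshots possible; tools like snap-pac package this up, but a hand-rolled pre-transaction hook would look something like the following sketch (the file name, snapshot destination, and read-only flag here are illustrative assumptions, not our exact setup):

```shell
# Write an illustrative pre-transaction hook; it would normally live in
# /etc/pacman.d/hooks/, but is written to the current directory here.
cat > 05-btrfs-pre-snapshot.hook <<'EOF'
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Snapshotting / before the transaction...
When = PreTransaction
Exec = /usr/bin/btrfs subvolume snapshot -r / /.snapshots/pre
AbortOnFail
EOF
```

A matching `PostTransaction` hook takes the "after" snapshot, and grub-btrfs-style tooling can then turn those snapshots into boot menu entries, which is the bootable-rollback piece that comes up later in the show.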
So we just got done setting up those sub-volumes
that will enable that flexibility
once the system's up and running.
Now we've got to get them mounted and chroot inside.
Now we should probably mention right here
so that way we avoid confusion.
Later we decided it would probably be better
to actually have slash boot.
Right, once we invested a little more
in some of the tooling, we'll talk about, we realized
it just made more sense to do it that way.
But it's a flexible setup, and it wasn't
much work to change. Yeah, and this
is great, because, and we'll have a link to the
wrappers that let you do this,
but this is great, because it lets
you have a
automated system that will take a complete
snapshot before a package action
and after,
much like SUSE does, but for pacman.
That's right.
We didn't have to switch distributions.
And there is a way to extend that into Grub, so it also creates completely bootable snapshot environments.
So we can just, from Grub, choose a previous environment and boot completely into it and
revert all of the changes that happened on
the system. And that's really what we want here, right? I mean, if an update goes wrong, we're
trying to stay on top of our updates for security and features. We want the ability to easily roll
back if we don't have time to deal with any issues right now. Yeah. And to kind of get that boot
environment thing working and the pacman integration,
just because of some of the assumptions the tools make, that's where it was sort of necessary
to have boot on the root file system.
And it kind of makes sense, too, to have snapshots sort of integrated so we have a full snapshot
of basically everything we need for the system, especially the way, by default, how Arch does
kernels as compared to, say, Ubuntu with a whole bunch of versions laying around.
Yeah.
And that's why Home is its own subvolume, so we can snapshot that independently.
So that's kind of nice.
And then all of the data and the container data and all of that is living on ZFS.
So you could completely just unplug the OS drive and plug in a new OS
and then just re-import the ZFS pool, which is kind of essentially what we did.
But we hadn't actually chrooted and booted into it yet.
So once we got all the subvolumes created, Wes used the Arch setup scripts and set up
an environment, and we went through and configured FSTab and generated that.
Arch has also got a handy little bootstrap tarball image you can download that has basically
everything you need.
It really was great.
It was really nice to go through this process again and really just understand how clean
the setup is on the server.
I've never installed Arch with someone else before,
so that was kind of fun, too. It was fun! I haven't
either. It actually was a lot of fun. We installed
it together. And, like, our favorite part, I think, was
doing fdisk, because, like, you know, that's old school
and setting all those up and, like, deciding what
the volumes are going to be and how to lay that out.
That's fun to do with somebody else.
But then there's that moment where you've got to reboot from the host Ubuntu system
and you have to boot into your handcrafted Arch environment.
Did we get it right?
Did we get it right?
Will it actually boot?
We've done everything on the checklist so far as I can tell.
But, I mean, can you ever really know until you push the button?
No, we've just got to find out.
So here we go.
This will be a nice, lean, mean, just enough Linux installation if all goes as planned.
But I don't know, something about this first boot,
it's always like the most special.
I made some sous vide pork shoulder this weekend, you know.
I thought you were going to say something about this.
You did, huh?
That's funny.
We just picked up some pork shoulder, and we are planning to sous vide it.
How'd it go?
Amazing.
Amazing.
Sous vide tips with Wes Payne.
All right.
Selecting the built-in boot disk.
Welcome to Grub.
We've got Grub.
Monitor power save mode at the worst possible moment.
Arch Linux Grub option comes up.
Hit it, Wes.
Boom.
Loading Linux linux...
Those little large details.
Yeah.
A.k.a. the default configuration for all software.
Yeah.
Okay.
systemd is starting up.
Version 244.
This is getting pretty far.
I'm feeling pretty good about our potential.
Oh, yeah.
Our ZFS array is lighting up.
Look at that.
And we're at the boot.
Just like that.
We're done.
Any thoughts? Guesses? Do you think ZFS is going to load?
I do. Because those disks lit up, I think it's going to work.
Go ahead and hit it. Let's see.
A little modprobe zfs.
No problem.
There it goes.
So that'll just scan it. Now we've seen it,
and we can actually import it fully online.
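The export/import dance around a reinstall is pleasantly small. Assuming a pool named `tank` (the actual pool name isn't mentioned in the episode), it's roughly:

```shell
# On the old OS, before wiping it:
zpool export tank      # cleanly detach the data pool

# On the new install, once the module is available:
modprobe zfs
zpool import           # no arguments: scan devices and list importable pools
zpool import tank      # then bring the pool fully online
```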
I think the next step is to set up SSH on this thing,
and we get ourselves out of this cold garage,
I mean server room,
and we set this up from the comfort of the studio.
That's a great idea.
I think in our rush to get out of the cold garage,
we may have made a slight mistake.
We did not unplug the thumb drive
from the back of the server.
No.
You know, in fairness to Ubuntu, which I always complain about, it helpfully prompts you that you should do that.
Yeah.
We didn't listen.
No, we just want to get the heck out of there.
And then we also, during our setup, because we were really just enjoying the simplicity of things,
We made a decision for simplicity that ended up costing us several hours, maybe two and a half hours, if I'm being generous, of time.
Once we went back in the living room, we were troubleshooting this issue where we would
fire up all of the containers and everything would start.
You know, nothing in the logs was weird, but none of them could communicate with the
network.
But at first, it wasn't even none. You know, there were a couple that worked. It wasn't like it was a
total failure. It wasn't obvious because a couple did work. The containers were up, the ports were forwarded,
but we couldn't get to all the services. No, and it was
simply because of a decision we had made hours earlier, but in the order of process
we hadn't realized we were making a critical decision that would affect us later on.
Well, a couple of hours now in the living room of the studio,
working from the couch, much more comfortable than the garage.
But we ran into a couple of snags that we didn't quite expect.
One which we'll tell you about in just a moment,
but the one that we should probably warn you about first of all
is some of the troubles we ran into with systemd networkd
and how it interacts with Docker, which it's documented.
We kind of probably could have, should have known about it,
but just because of the way we built this thing up from the ground up
didn't really hit it until it was already an issue.
Yeah, we were attracted to systemd-networkd because it's so simple.
You make one file with, like, three lines.
We have DHCP.
There's one interface that we care about.
It was easy.
But then you have to start jumping through hoops
to get it to ignore Docker.
It tries to basically manage any interface
it can get its hands on by default.
We've just disabled it and are moving back to dhcpcd.
Yeah, that'll work for now.
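For anyone following along, the swap they describe is roughly two commands on Arch. This is a sketch; the interface name is a placeholder, so check `ip link` for the real one on your box.

```shell
# Stop systemd-networkd from managing interfaces, and let dhcpcd
# handle DHCP on the one interface we care about instead.
# "enp3s0" is a placeholder interface name.
systemctl disable --now systemd-networkd.service
systemctl enable --now dhcpcd@enp3s0.service
```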
And it means that our applications in the containers
are spinning up successfully,
and they're pretty much none the wiser.
They have no idea that we just transitioned
from Fedora to Arch and just turned them back on again.
And the ZFS stuff seems to be working pretty well.
Oh, except for that one thing.
Yeah.
So we had this kind of moment where we thought, you know, we gone done mess with stuff and we broke a couple of hard drives.
We were expecting this to be easy, right?
I mean, that was the whole point of the way we'd set the system up is just lift and shift.
Right.
But a couple of decisions we had made for simplicity had actually screwed us.
Number one, the fact that we use DHCP for our server.
However, we, and I have done this for, oh, wow, ever.
I mean, a very long time.
I set my servers via DHCP,
but I do it with a reserved address.
So everything's mapped to the MAC address.
Right. We could just statically assign it
and it would work just fine.
But I've always preferred to do it that way
so that way, even remotely,
I can change the IPs of systems.
I mean, it centralizes your admin right there.
Yeah.
And we chose to use systemd networkd
instead of just going with a straightforward DHCP client.
And, I mean, I'll take the fall for this one
because I proposed it.
I really like systemd networkd,
at least in the other use cases I've used it for.
I mean, the Linux-based router I have at my house, that's all powered basically by systemd-networkd.
But it also uses nspawn.
So I've used systemd-networkd with nspawn and Podman.
Now that I think about it, not a ton with Docker.
So I really hadn't encountered this issue before. It shows up as a new interface to systemd-networkd,
and it's like, oh, well, let me take care of that for you. And we have a lot of Docker in the
business. And one of those Docker containers is also managing WireGuard, which needs a whole
bunch of more complications involved. So there are definitely ways you can tell systemd-networkd,
hey, don't manage this link.
Make sure that forwarding works as we expect.
But for our case, because we weren't really using any of systemd-networkd's features besides its DHCP support, dhcpcd works very well.
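For reference, the three-line config they mention, plus the opt-out that would have avoided the whole problem, might look something like this. File names and the interface name are illustrative; see the systemd.network(5) man page for the details.

```ini
# /etc/systemd/network/20-wired.network -- the simple DHCP setup
# ("enp3s0" is a placeholder interface name)
[Match]
Name=enp3s0

[Network]
DHCP=yes

# /etc/systemd/network/10-docker-ignore.network -- tell networkd to
# leave Docker's bridge and veth interfaces alone
[Match]
Name=docker0 veth*

[Link]
Unmanaged=yes
```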
And then there's that disk issue that I was implying there at the end of that clip.
When we loaded up, it said, hey, three disks are offline. And
you know, the system
isn't that old, but it did get really
hot in there over the summer despite our best efforts.
And we looked at each other and we thought, maybe powering this
thing on and off again a few times pops some
disks. It wasn't
nearly that bad, though, was it?
No. I mean, they're
probably still old disks, and we should
consider upgrading anyway.
Yeah.
But the pool's fine.
The pool is fine.
It is fine.
All the disks are online.
All 13 drives are actually operating.
But I think in part was it because it thought...
Well, so we were just going through this.
It's not like we had a major plan or a giant sort of sophisticated write-up.
Yeah.
I mean, we may have done this as a sort of...
We're the kind of guys that put Arch on a server.
I think that's enough said.
Right, right.
And so normally when you're setting up ZFS,
you know, I'm normally sort of mapped out ahead,
like how I'm going to shape the pool.
But because we were just sort of importing a pool that already existed...
Yeah, we inherited that.
Right.
And so we just popped over to the ZFS page on the Arch wiki
and entered the first command that you saw.
And it's like, oh, yeah, import the pool.
That looks great.
We did not remember, and I do usually like to do this, to have ZFS use the disk IDs and
not whatever sdX name it happens to be assigned.
And because we still had that USB drive plugged in, all of our drive names were shifted and
messed up.
Well, not all of them, the few that happened to be assigned after that device.
So that's why it couldn't find the disks.
Linux could see them, but they had new names to ZFS.
I was convinced.
I was like, oh, man, we killed them.
Like, just turning them on and off.
Because one of the things we thought we'd do is we'll do a full shutdown.
You know, we'll test this thing.
We'll do a full power off and power on.
And that's when we thought, let's then check the disk. And that's when we saw that, well, three of them are
not responding. But thankfully, it was all resolved. Right. I mean, you just had to export
and then re-import telling it like, hey, label these this way. ZFS is smart. And on the upside,
it, of course, goes and, you know, recheck, scrubs the data to make sure like, oh, does this match
up with what I think I should have for this disk and add it back into the pool, it did find some checksum issues on the drive.
So we do have some homework.
Right, yeah, it does turn out that perhaps it's a good thing we're using ZFS
and that bit rot protection, because we do have some disks that are kicking errors.
Oops. I mean, that's why you use something like ZFS for your important data, right?
So, I mean, now I'm feeling good about that decision.
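The export-and-re-import fix they describe amounts to something like the following. This is a sketch, and "tank" is a placeholder pool name.

```shell
# Export the pool, then re-import it using stable /dev/disk/by-id
# names instead of whatever sdX letters the kernel handed out.
zpool export tank
zpool import -d /dev/disk/by-id tank

# Check health and kick off a scrub to surface any checksum errors.
zpool status tank
zpool scrub tank
```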
So going back to those belt and suspenders, we did decide
to use Snapper from openSUSE,
which is integrated both
as a timer that takes a snapshot every single
week, and also with a pacman
wrapper. And this
is our kind of
insurance. Even though it's a very, very
simple base Arch install that
we'll just update once a
week, this kind of integration of snapshots with Btrfs, which is baked into the Linux kernel,
unlike ZFS, is our insurance policy. Right. So we should be clear, right? The ZFS pool is the
existing pool we've inherited all the way from when the system originally ran FreeNAS. And that's
great. It keeps our data safe, as we were just talking about.
Btrfs we're just using on the one disk that we have that's actually sitting outside of the server case, the one we're running the OS on.
And that's kind of playing this role, right?
Basically, all of our data is on the pool, and we can mix and match OSs as we see fit.
Yeah.
The one thing we need to change is we need to get the Docker Compose files off of the
OS disk. Not a big deal.
We have a backup of it. And I think we might just want to move
the home partition over to the pool.
Could. Totally could. Actually, that's a really interesting
idea. That would be a good way
to do that.
I like having the separation of
OS and data, though, because
now we have, I think, really proven it
three times over. This pool initially started inside of a FreeNAS Mini.
Right.
An actual enclosure with four disks.
And then Alan came along and we said, let's turn this thing up to 11,
and we got a Supermicro enclosure.
Great recommendation.
I mean, it's been a great server.
Super solid.
Really been nice.
And took those disks in there, plus added all the way up to 13 disks total,
and took that to a FreeNAS install on a Supermicro chassis.
Then we said, hey, you know what?
Let's go crazy.
What's the most ridiculous thing we could do?
Let's put Fedora on here.
That's when we put it on Fedora.
And then we got to the end of that.
We said, well, what's even crazier than Fedora in production?
Let's put Arch on here.
And this ZFS pool has moved every single time.
I think we learned some stuff, too, about how we wanted to use the box along the way.
You know, because when we switched over to Fedora, we talked about how we discovered that it was more useful to us.
And I think that helped shape why pick Arch.
Because it turns out we're doing everything in containers
besides like cockpit and admin stuff.
So we don't need a lot in the OS.
Yeah.
As a FreeNAS storage box,
it was a storage appliance that we dumped files on.
When we moved to Fedora,
we expanded the applications that we could run on it.
We really kind of started enjoying using the system.
We realized this thing's got 24 cores.
Like, this is insane.
Like, we can use this for encoding.
It doesn't have to just be a file server.
And it changed the way we use the hardware fundamentally.
And so this go-around, we had all of that kind of experience of,
now, well, this is really, like you're saying,
this is how we use the machine now.
This is how we use it.
And so this super simple core approach, I think,
is the key to a long-term sustainable Arch install.
So we're going to set ourselves a reminder
to check in on this box and let you know how it's doing
because obviously the real, like, I don't want to say
proof in the pudding because you know what I'm saying.
You know what I'm saying.
The real, like—
I suspect that you'll be curious to see how this is working out for us in six months.
Well, yeah, because, like, I can say it's been great for 24 hours, but the real proof
will be in the long-term, I don't know, like six months from now.
We'll talk about it after the show and figure out when's a good check-in date.
Probably sooner than six months because it could go all wrong in 90 days for all I know.
We'll let you know.
It's a lot of updates.
I mean, it's a lot.
I've been very pleased with the tooling we've adopted.
And that was part of the reason to choose Btrfs, right,
is that, I mean, Canonical's working on things like ZSys,
but right now openSUSE and, you know,
the integration with Snapper and YaST,
that's kind of the state of the art
on the Linux side of things.
I know, you know, the BSDs have very neat solutions here too.
And we wanted that goodness.
And using Btrfs, at least on our single disk, no RAID or anything,
right on the root drive, it's been so easy.
I think I'll probably replicate that on my laptops.
Anytime we make a change, you run Pac-Man,
and you get two snapshots before and after.
I've been hard on Btrfs, Wes, but I'm not
expecting a lot here. I'm not trying to do
RAID 6 or anything fancy. It's just
a super simple one-disk install. We also don't care
about the data.
I was trying to make that a nice thing.
Oh. I was trying to make that
like Chris says something nice about Btrfs
and you're just sort of like, yeah, but it's also because you don't
care about the data, which just sort of undercuts
the whole sincerity of it.
Whoops.
I want to talk about something that I think was kind of our key to success, because this did end up going longer, but we also kind of knew when to call it a night. Before we started, and credit goes to you, Wes, you said, all right, before we start, what's our benchmark of success?
When have we successfully converted this thing
to a production server again?
And so we took a hot couple of minutes
before we started deleting partitions
and we said to ourselves,
all right, when it is capable of running
all of the applications the current production server runs,
when it is capable of supporting WireGuard connections
with our existing keys.
Yeah, that's pretty important.
Because we didn't want to have to force all of the team to...
We just got that working nicely.
Yeah, it just got all their keys out there and everybody, yeah.
And also, we wanted to end with something that this system could do that the previous system couldn't do.
And that was where we integrated Snapper with Grub, with Pac-Man, and on a weekly basis.
And that was something I asked you, I was like, are we going to, like, blow out all of our disk space by using snapshots?
But I guess Snapper actually shipped with some pretty sensible defaults.
Right. I mean, you can go adjust settings and tuning to say how many, how often should you take snapshots,
and then how long should you let them hang around.
And there's just some systemd services you can enable
to set up
the automatic prune job.
Not so bad.
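On Arch, the pieces they describe fit together roughly like this. It's a sketch, assuming the usual snapper and snap-pac packages from the repos.

```shell
# Timeline snapshots plus automatic pruning for the Btrfs root,
# managed by Snapper's bundled systemd timers.
snapper -c root create-config /
systemctl enable --now snapper-timeline.timer snapper-cleanup.timer

# snap-pac hooks pacman so every transaction gets pre/post snapshots.
pacman -S snap-pac
```

Retention tuning, how many snapshots to keep and for how long, lives in the TIMELINE_LIMIT_* keys of /etc/snapper/configs/root.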
And we thought
really for something
we're not using that often
from at a base OS level
once a week is enough.
Right.
Plus every package
transaction.
That's it.
That's the majority
of system changes
we'll be making
and then some snapshots
around to capture
anything else
we might mess with.
You know,
configuration in /etc,
for example.
Yeah, because it's, I mean, literally,
it's NetData, Samba, and Cockpit.
Those are the three applications we have installed.
But it's just admin stuff.
I really consider SSH almost like part of the system, right?
I really do.
Like, as far as like we went out and installed something,
Cockpit, and NetData.
Well, we have to install the SSH package.
Yes, okay, fine, fine, all right.
Yeah, okay, I'll count it.
Four.
We've installed four applications.
That's it.
It's just admin stuff we need on the box for ease of life stuff to monitor what's going on.
It's truly a fundamentally simple system.
There's nothing really all that fancy other than the only kind of edge casey thing is the ZFS DKMS stuff.
And that's why I thought it was pretty important we didn't use ZFS on root.
We went Btrfs, so that way the system would at least boot
if something went sideways.
Yes, it's just simpler.
Worst case, I mean, we still have some outage issues
because the applications don't come online,
but we've got our environment set up
to actually deal with that in an easy way.
We can at least log into the machine and begin to troubleshoot
and rebuild those DKMS modules and get the system back online.
Something I'll need to find for the show notes,
and this is a little reminder to myself here,
there's a handy little script because we have Docker sitting on top of ZFS,
which also made this so easy.
Docker has a ZFS driver,
and it meant that all the Docker stuff from our previous install
was already on the pool.
So once we loaded Docker back up on the system,
it just found all of its old containers
exactly like it needed.
Yeah, it was just pick up and run.
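That "just found all of its old containers" behavior comes from Docker's ZFS storage driver. Assuming /var/lib/docker already sits on a ZFS dataset, enabling it is a one-line daemon config, something like:

```json
{
  "storage-driver": "zfs"
}
```

That would go in /etc/docker/daemon.json, with a daemon restart afterwards.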
But as a result, you need your ZFS pool online
before you can have your Docker daemon start.
So there is a handy little systemd service
that just sort of acts as a shim
and doesn't start Docker
until your ZFS pool is actually online.
And that's one thing we did not have on our
Fedora setup. Yeah, no, that did
cause problems. That did cause problems.
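They'll link the exact script in the show notes; a minimal stand-in for that shim is a systemd drop-in that orders Docker after the ZFS units. The unit name assumes the standard ZFS-on-Linux packaging.

```ini
# /etc/systemd/system/docker.service.d/wait-for-zfs.conf
# Don't start Docker until the ZFS pools are imported and mounted.
[Unit]
Requires=zfs.target
After=zfs.target
```

After dropping the file in, a `systemctl daemon-reload` picks it up.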
So that's, I feel like
again, lessons learned on this build.
And then taking what you and I know about
managing Arch boxes and applying it to a
server, I think we're pretty good. I will say when
I got in this morning, I did run updates
on the server, so. That's
going to be it. Like, we talked about, do
we want to automate the updates? And we both decided,
even with snapshots,
probably not. Both
from, like, a storage use standpoint, but
also just to be
careful for a while.
With maybe something we'll try. Maybe if we
get a few months into this and everything's fine,
maybe we'll try it. There is the added benefit of if we have to do updates,
we'll have to at least log into the system,
and that's sort of an incentive to check around on things.
I would like to know,
I'm sure there must be other crazy pants people out in the audience
that are running Arch on a server.
Have you automated the updates?
Have you found a way, like, can you have it ping a Slack channel?
What are you doing? How are you doing that part of it? I'd be curious if anyone's building custom
Arch images out there, too, because that would be my approach, probably, is to set up a job to
bake a new image and then just have the server, you know, reboot into it. Yeah, oh. So how would
you, where would you, you would build that somewhere safe, test it, and then deploy it to the box,
and then just say, next time you reboot, tell Grub,
next time you boot, boot into this instance.
Right.
Like a partition that just the image gets expanded onto,
sort of using like your own home-baked OSTree approach.
Exactly.
Huh.
That's a lot of work to maintain one Archbox,
but I could totally see the value if you had like a whole rack of them.
Right.
Then the testing would matter more and you'd really
want to be sure before you push it out. Yeah, I feel
like really
almost something, just imagine
picture it Wes, Sicily,
1987.
The Arch heyday.
Everybody does. A Slack message comes in
that says, in 24 hours I'm auto-installing
these packages.
But of course by then there'd be more packages.
You know, there are tools that'll help scrape the Arch website, check for things.
You can probably tie those things together.
If there's not been a blog post since the last update on the Arch page,
just install them.
Otherwise, prompt for my approval.
Somebody must have solved this.
Somebody must have.
And so I thought I'd put the question out to the audience.
Linuxunplugged.com slash contact if this is something you've got an idea on how to solve.
Just tell us the weird ways you're abusing Arch.
That's what we want to know.
It's been a lot of fun.
Really enjoyed it.
And I think if we had thought about the DHCP system, the network D thing ahead of time.
Yeah, it was just an instinct of like the last time I used this tool, I liked it a lot.
So why don't we use it?
I think we would have had this thing in like two, three hours.
Once that was done, really, the dream of just being able to move the containers over, it worked great.
Yeah, it really was pretty awesome.
I will say also, installing Arch is just a dream, and I love understanding all the pieces of the system.
It does constantly give you moments of like, oh, right, yep, got to get that and that.
But that's a good reminder of all the implicit dependencies
you have on pieces of the other
operating system. I think for you, adduser
was a good example of that.
I thought that was a,
I didn't realize that was a Debian thing. I went to run adduser
and I'm like, what do you mean you don't have adduser?
What is this? And you're like, it's in the AUR.
I'm not installing it,
it's fine. Also,
nano? I mean, this is 2019. I think we're all civilized here.
Let's install nano in the base image instead of just shipping ed, or at least Vim, for heaven's
sake, please. Watching Wes try to use a system without Vim is painful.
It's very painful.
It's just, it's ingrained in my fingers. It's literally every single time he forgets.
He never remembers that Vim is not installed.
That's how I edit.
All right.
Well, Richard wrote in on some Arch update tips.
He says, you can slowly apply backpatches to Arch Linux
and not apply months worth at a single time.
If you edit a repo to a specific date,
say two weeks out, then
you can run the update against that. And he gives
us an example, which we'll have linked in the show notes.
After you set that, you save
an exit, and you update the system via Pac-Man,
and it will only go back to
that date range. This is
clever. I never
knew about this. No, and it makes
sense, though. I mean, you'll still have breakage. You have to
progress through the updates.
But that should make it easier to deal with sort of one thing at a time.
So you could go back, and if you're six months back, you go all six, just one month at a time.
Huh.
Wow.
He says, once you're caught up, restore your original pacman.conf file to normal without the date ranges in there, and you're good to go.
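Richard's trick works because the Arch Linux Archive keeps dated snapshots of the repos; pinning pacman to a date looks something like this (the date shown is just an example):

```ini
# /etc/pacman.conf -- temporarily point the repos at a dated snapshot,
# then run: pacman -Syyuu
[core]
Server=https://archive.archlinux.org/repos/2019/10/01/$repo/os/$arch

[extra]
Server=https://archive.archlinux.org/repos/2019/10/01/$repo/os/$arch

[community]
Server=https://archive.archlinux.org/repos/2019/10/01/$repo/os/$arch
```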
You just keep updating from there forward. Also, a note for the nervous updater: the program
informant, which is in the AUR, prevents you from upgrading if there is a fresh Arch news item that
you have not yet read since the last time you ran updates.
You might want that for your Manjaro box.
His base Arch image
is still from the Antergos install
he did back during our Arch
challenge, which
was March 29th,
2016.
Wow.
And he's kept it running
ever since. Let's hope that's
the future of our server.
No kidding.
I mean, you and I were kind of joking.
We were like, you know, this took us about five hours or so, six hours, I don't know, whatever, to set up.
But if we were the types to leave something and not fix something that isn't broken.
Which, I mean, fingers crossed, maybe someday we'll become those people.
This could be our forever install.
It's our forever server, Wes.
It could.
I mean, I like it right now.
Wes, we built a little forever server.
And it's working, which there were moments when it wasn't.
When we first started the containers and the networking wasn't working,
I thought we had leaped too far.
And I was legitimately concerned that I would never see my pillow again.
But thankfully, we figured it out pretty quick.
Well, Wes figured it out, really.
I read forum posts and suggested different things
while Wes ignored me and fixed it.
So that worked out pretty well.
What matters is that we fixed it.
Yeah, really.
Anybody in the virtual lug there running Arch on their production system?
Yep, been running it about since April now.
Oh, really? On a server or on a workstation?
It's on my XPS 13, and it's also running Btrfs, actually.
I did this bit of an experiment, and it's gone quite well.
Oh.
Yeah, I mean, if somebody installs a system on Arch,
they've got to tell you about it, so that's why this podcast exists today.
Amen.
We are completing the circle of meme life right there.
I acknowledge that.
Brent, I know you're on Arch as your daily production system.
What did you use for your VPS for WireGuard?
That's something I have yet to solve.
So I didn't actually get WireGuard working on my laptop
because I made a subtle mistake when I installed it
during the Antergos challenge that you gave a long time ago,
which was to put too small of a system partition.
So I'm having a hard time upgrading anything
because I'm out of space there, and it's a real disaster.
So it didn't quite work for me.
Oh, we could help with that.
Because in this process, we realized that we had made boot its own dedicated two-gigabyte partition and then realized, well, crap, we don't need that. It needs to be part of our root partition.
And then, like, what are we going to do with those two gigs?
There are a lot of options. There's some fancy dancing you can do.
As long as you're willing to wait for GParted to drag bytes around.
Yeah.
Yeah.
Well, I admit that my partition is
also encrypted, so that should add an extra level of fun. Atta boy. Atta boy. Gotta learn sometime.
Tell you what, we need to have a laptop support week where the three of us get together. I reload
to Arch, you reload your box, Wes, and we help Brent out with his partitioning scheme. Alex really tried to help me do a fresh Arch install, and I didn't finish it.
So sorry, Alex, but I'll get there.
We'll get there.
The problem is there's work to be done, and that system is functional.
It's just tight on space, right?
It's just hard to update, but it's still working.
I've been there.
Yeah, it sounds like you've been there.
I've been there.
I'm really hoping we don't put the server in that position.
I feel like this minimum viable Linux thing,
just enough Linux, is the key to success here.
What are we going to update on that box that's going to break?
There's not a lot.
I mean, I think the biggest thing, it will be kernel updates,
but we've got snapshots for that.
Yep, and we can just boot into the old environment or boot into the old kernel
or whatever because the host operating system will still load regardless.
And so from there it's just minutes to resolve an issue.
And I think both you and I felt a lot more comfortable managing a Linux box
versus a FreeNAS box simply because of this exact reason.
If we have to SSH into the host,
we're good to go.
No problem.
We're comfortable there.
And, I mean, it just lowers the debug cycle.
It's not that we mind having to fix occasional problems,
but if it's a 10-minute thing versus like an hour thing,
and, I mean, it's a real server belt,
it's not fast to boot up.
So every time you have to mess with trying to reboot and test something,
it's a long wait.
Yeah, all the controllers. It's just everything's slow on booting a server.
But if you've already
booted and you just need
to test if you can
load a module,
that's fast.
Yep.
So we're going to do this
so you don't have to.
So don't do this at home.
You should, you know,
probably use Ubuntu LTS
or CentOS, something.
I think I am going to
rebuild my router
on Arch again.
I know.
I was thinking it'd be
really nice to run
those Raspberry Pi 4s
with Arch.
Oh.
And AUR. You know, I was thinking it'd be really nice to run those Raspberry Pi 4s with Arch. Oh. And AUR.
You know, I appreciate
Flatpak snaps and app images
and RPMs and DEBs
and TARS
and all of that, but
one package manager to rule it all.
One package manager to install a package, to update my
packages. Just so nice.
Like, watching you
legitimately, I'm laughing at you
hunt around on Microsoft's site trying to get
the Teams deb. Meanwhile, I've just,
I literally typed trizen -S
teams, and just hit enter,
and it just goes out and installs it.
And what I love about it is it's so simple,
right? Like, I mean, you can have these complicated managers
that do a lot for you, but at the
same time, I mean, MakePackage
is great. The package config files,
you can mess with it.
They even prompt you,
do you want to make any changes to this thing?
So it's so accessible
because it's like radically simple.
Mm-hmm, mm-hmm.
I think I'm going to enjoy that a lot.
Anyways, we'll come back.
If this blows up on our face,
well, we'll tell you about it.
Yeah, you'll hear about it.
But we'll check back in.
Obviously, we haven't determined
when yet,
but we'll powwow after the show.
Maybe let us know.
Yeah, let us know what you'd like.
That's a great idea.
Look at you, Wes.
jupiterbroadcasting.com
slash telegram
or linuxunplugged.com
slash contact
like Richard did here
with his update trick
on setting the repo dates back.
Brilliant.
Thank you, Richard.
Like I said,
we'll have examples linked in the show notes
if you want to check that out.
Well, Mr. Payne, is there any other bits of business
we need to attend to on today's Unplugged program?
No, I mean, maybe I'll just update the server again real quick.
Yeah, it's like a fidget spinner.
When you feel bored, you just go do a quick update on the old server.
We already got a snapper update this morning.
Did we really?
Yeah.
Wow, look at that.
Isn't that funny?
Something that's really kind of you think of as an OpenSUSE thing.
And here we are, all the way over here in Archland using it.
I just love open source.
Hopefully that update didn't break our update protection.
Guess we'll find out.
All right, well, we're going to wrap it up there.
You know, it is getting towards the end of the year,
so that means our predictions episode.
We've got some special stuff planned.
So please join us live because there's so much more to participate in,
but also just sit back and enjoy probably a whole other show's worth
over at jblive.tv.
We do it on a Tuesday at noon Pacific.
You get that converted over at jupiterbroadcasting.com calendar. I'm
also going to give a personal plug to my buddies
over at User Error. The latest
episode was so great. So funny.
User Error is so fun. Check it out.
error.show. Really, really great. And Brent,
another fantastic brunch
with Mr. Popey over at xers.show
and Wes Payne over at techsnap.systems.
And this here humble podcast
will be right back here next Tuesday. You know, Wes, no matter how much we disclaimed
that people should not do this at home,
we're going to get a lot of crap for installing Arch on a server.
I mean, I think our lawyers advised us to read a long legal spiel about liability.
The Linux Unplugged program does not endorse running your home server
on a rolling distribution.
And will not be held liable for any package upgrades that fail.
That'd be really funny if we actually were in that situation,
and also horrible.
We will fix your server, but it's $500 an hour.
Oh, yeah, a little side biz right there.
A little side hustle.
Thank you, Mamoru, for being here today.
I really do appreciate everybody showing up at the new time.
It's still kind of new.
Thank you for joining us.
It's still new to us.
Very new.
So very much a big thank you for being here.
Appreciate you guys.
And appreciate you live streamers, too.
Man, you know what? It's so much fun live.
While I'm thinking about it, Wes, you know who else I appreciate?
People who download the podcast, too.
Yeah.
You.
Appreciate you.
Not Joe Ressington, though.