LINUX Unplugged - 341: Long Term Rolling
Episode Date: February 18, 2020. We question the very nature of Linux development, and debate if a new approach is needed. Plus an easy way to snapshot any workstation, some great feedback, and an extra nerdy command-line pick. Special Guests: Brent Gervais and Drew DeVore.
Transcript
Well, I think we're having ourselves an emergency podcast right now.
This is like thrown together.
We got to just really slam this one out because I got called into jury duty tomorrow.
You know, you've got to help out the state here, Chris.
You know, Wes, somebody's got to go in and pet the bald eagles.
It does make me feel comfortable that you're the one making the decisions.
Oh, yeah? Are you sure about that?
I got a whole line of things.
I'm going to go in there and give a speech about what I think should be a user-voted type of justice.
Think that'll get me on the old board?
Hello, friends, and welcome into the old Unplugged program where we're celebrating user-created software.
My name is Chris.
My name is Wes.
Hello, Wes!
Well, here we are on a Monday.
No livestream, no mumble room.
By ourselves?
Well, we've brought in a little backup.
You might say some of our
favorite friends, Mr. Cheesebacon and the Drew of Doom. Hello, gentlemen. Hello, guys.
Drew, it's been a while since you've been on the show. Glad to have you on. They've been
loving the Choose Linux. Of course. And Fridays with Drew on Linux Headlines.
Yeah, but you guys know I missed you over here. I just couldn't stay away.
Oh, Drew, I know you're lying. I know you're lying.
You're just helping us out because we really did get ourselves in a bit of a situation this week.
So I apologize if you were planning to make it live and we weren't there.
We'll try to do something. We'll try to do something for the live stream, but I don't know
what. Since it's kind of a special edition, we thought we should do something we haven't done
for a couple of weeks, something we wanted to stay accountable for. And so, Wes, are you ready, sir? I am
standing by. All right, ladies and gentlemen, it's time to check in on the Arch server. So if you'll
recall, we replaced a FreeNAS box with a Fedora box, which seemed like a ridiculous decision.
Level one insanity. So then we thought, well, what could we do that would be even more unreasonable?
So we replaced that Fedora box with an Archbox.
Now, this system is responsible
for a bunch of media storage
that's critical to our operation,
as well as several applications,
more than several, that run in containers
that are vital to our day-to-day operation.
So we thought, what better way
to take care of a box like this than to update it live
right here on the show.
Mr. Wes Payne, are you SSH'd in right now?
Oh, yes.
We're ready to go.
All right.
Kick it off, Wes.
Oh, some new kernels, upgraded WireGuard, new systemd version.
This is a good one.
Oh, God.
All right.
And I kind of love Arch.
Downloading like 1.6 gigs of software for a net upgrade size of two megs.
That is really great.
I always love it when a package manager does that.
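For anyone following along at home, the upgrade itself is just standard pacman usage. A rough sketch of that kind of check-and-upgrade pass might look like this (the zfs-dkms package here assumes the unofficial archzfs repository, and the exact package names are just examples):

    # Preview pending updates without changing anything (checkupdates ships in pacman-contrib)
    checkupdates

    # Refresh the package databases and apply the full system upgrade
    sudo pacman -Syu

    # Confirm the pieces we care about actually moved: kernel, WireGuard, ZFS module source
    pacman -Q linux wireguard-tools zfs-dkms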
Well, coming up on the show today, we're going to read a blog by Richard Brown, the former OpenSUSE chairman, who has a really deep take on the fundamental way we maintain
and distribute software in the Linux community, and he's advocating for micro-rolling servers.
We'll tell you about that and what we think about that idea.
Plus, we've got some software towards the end of the show that is fundamentally designed to help you recover from major system issues like a bad update. It's called TimeShift.
It's a beloved piece of software on Linux Mint. We discovered it during our Linux Mint review.
Now we've extracted it from Linux Mint because you can put it on any distro,
and we're going to focus just on that recovery method. Plus, we've got feedback and picks.
But right now, we're going to check in on that Archbox. Okay, how are we doing, Wes?
Download's going? Download's going. It's always a little nerve-wracking when you see the step.
You're getting rid of all your old DKMS modules, and you just have to hope they come back.
All right, Wes, well, we'll check back in on that in just a little bit.
In the meantime, let's talk about something from the no-crap department in the news.
Google is publicly shaming Samsung for making unnecessary changes to Linux kernel code.
Now, how about this one? This is coming from Google's Project Zero department.
They write that Samsung is creating a more vulnerable Android ecosystem by adding its own downstream custom drivers for direct hardware access to Android's Linux kernel.
Yeah, this is really no good.
Turns out the Linux kernel has, as they put it, a few sharp edges.
And these changes that are being implemented by Samsung and other vendors,
they're not being reviewed at all by upstream kernel developers.
So that means no one outside of that organization is checking them.
The researchers actually found a similar mistake in the Android kernel of the Galaxy A50
and that unreviewed custom driver added security bugs related to memory corruption,
which, that's no good.
No.
Can we just sit here and just acknowledge the obvious?
It is absolutely no surprise that when a vendor is
adding their own patches that are not being peer reviewed by the rest of the community, there
are vulnerabilities in there that they did not catch. It seems obvious on its face, and
somebody in Samsung's position, someone in that organization, must have at least acknowledged
the risk that they were assuming by taking this action.
And as an organization, they must have chosen to either ignore that individual
or actively ignore the risk.
It does get a little more complicated.
I guess some of these are really mitigations or intended mitigations for other security issues.
In this case, the bug affected the company's process authenticator security subsystem.
Yeah, they describe it as a moderate security issue.
But from the Google engineers at Project Zero perspective,
a lot of these things, they're just not necessary.
And so rather than being a mitigation, they just
introduce more attack surface.
Not only are they not necessary because there are
some facilities in Android that enable some of
the same exact functionality, but
at the end of the day, some of these
are enabling arbitrary code execution.
Fundamentally, they're enabling arbitrary code execution.
And that's even on Android 10 devices, which is just such a shame.
You think you're getting yourself a secure device by getting the latest and greatest version of Android,
and they've bolted on these custom fixes so that way they can add value.
Yeah, it is a shame.
And I'm not having it. And Google's been doing a lot of good engineering effort to make Android
more secure. Why throw that away?
It seems so obvious.
That's why I find it frustrating.
It still continues to happen.
I understand that there are
other options out there. This doesn't condemn
all Android devices. But
Samsung is such an important player in this space.
You'd really rather see it happen. I know it's a big organization.
They have many departments,
but you think they at least have some experience
knowing you got to work upstream with the kernel community.
That's just what you should do.
It's also kind of awkward that their partner Google
is calling them out publicly.
It's sort of weird, Wes.
You know what I mean?
All right, Wes, your break is now over.
Let's check in on that Archbox. Upgrade complete? I mean, I don't see any errors. You ready for the reboot? Let's do it.
There's only one way to find out with these things. All right. It reboots so fast, it almost
makes me feel nervous. It's gone. And then it takes so long to get through the BIOS.
Do you think that maybe you should have installed TimeShift first?
We do have a series of snapshots on there.
Yes, and I did see it taking those snapshots, so that's good.
And I'll remind everybody that our philosophy with this Rolly and Arch box
has truly been keep that base install of Arch
absolutely as thin and lightweight and minimal as possible.
That thing's really designed to do one thing, mount ZFS storage and run containers.
That's our philosophy with this.
So we're hoping that this ultra-minimal approach to an Arch deployment
is a safer bet than something that's really built up.
We'll see.
We're going to find out.
Because at the end of the day,
you're still updating fundamental packages.
Right, there is a lot of change
and we do have to upgrade more often.
But the flip side of that is I feel like
I understand the system a little bit better.
That's true.
I think you and I both stay more current
on what that thing's doing.
Well, let's shift gears into Richard Brown's post,
which was titled,
Regular Release Distributions Are
Wrong.
We're going to pull a few bits from this because it's pretty thought-provoking, and we'll link
you to read the rest.
As we record right now, I think the site might be having some troubles, so we'll try to include
both a link to the source site and maybe an archive.org version.
That's what we had to use.
When you want to start with this one, Wes, you think maybe there's so much to it, including his setup here.
What do you say we start with his bit in here about regular and LTS releases and how they mean well?
And this is, I think, a point that we could discuss as a group here.
He writes, the open source world is made up of thousands, if not millions, of discrete free software and open source projects.
And Linux distributions exist to take all of that often chaotic, ever-evolving software and condense it into a single consumable format that is then put into very real-world work
by its users.
The traditional mindset for distribution builders is that the regular release gives a nice,
predictable, planable schedule in which a team can carefully select appropriate software
from various upstream projects. The maintenance often comes in the form of making minimal changes, seeking only to address
specific security issues or customer requests, taking great care not to break the systems that
are currently in use. Right, so this is really an example of a classic LTS system, and often,
I mean, that can be appreciated. You know, you don't always want the rug pulled out from underneath you
and having to adapt to constant changes,
especially if it's software that you're not super familiar with
or you didn't really, you didn't need the new features.
That wasn't your concern.
You just wanted the software to keep working.
Yeah, sometimes it's required by a commercial vendor.
Yeah, and you want a reliable service,
especially for something like a server.
But it is a little more complicated than that
because, I mean, especially here on Unplugged,
we want the new stuff.
You know, we're always talking about the shiny new features.
And so the downside of an LTS often is that you don't have that or you have to do some really, really terrible workaround or build stuff from source to get it extra.
And that doesn't always work.
When we've talked about rolling releases versus traditional, we've often identified that there is a type of user that wants access to current
software, either to stay current with their peers or to stay current with somebody they're working
with in industry or because they like to follow software. Maybe a tool you need for your work,
and it's important that you get the new features as quickly as possible. Yeah. But it's really the
nature of what we consider change in a Linux distribution that Richard Brown is calling into
question here. He continues,
Is this change too risky? It's a common question,
and quite often, highly desired features take years to deliver in regular releases
because the answer is yes.
This often means avoiding updating software to entirely new versions,
but instead opting to backport the smallest necessary amounts of code
and merging them with the much
older versions already in the regular release.
We usually call these things patches, updates,
maintenance updates, security updates.
Richard suggests, though, that we're really avoiding
referring to them as what they really are.
Franken-software.
Okay, so this...
A weird, unholy hybrid of things that were never meant to be.
Yeah, hybrid software would be a little more charitable. But what it is, he says, no matter how skilled the engineers are, is a weird, unholy hybrid of code which was never originally intended to work together. In the process of trying to avoid risk,
backports instead introduce entirely new vectors for bugs to appear.
Okay, let's sit with this for a second, guys.
He's saying essentially that when you have a regular stable release
and you start backporting fixes,
you may be solving one security issue,
but you could be opening up a whole series of bugs.
And I'm no software developer,
but it does make sense that if you're taking two bits of code
that are very close together but not intended to work together,
it could create other possible issues.
It certainly depends on the specifics, right?
But you can imagine simple changes that are just,
oh, change the default config option or something.
But depending on how far the software has progressed,
you're basically having a totally separate fork.
And to make certain security fixes,
it might mean significant re-architecture
or touching a large swath of the code base
because it's changed and you can't just apply the fix
that gets applied to the head of the branch, right?
Right, and I think a lot of times
we'll charitably call these a regression.
Oh, it was a regression.
And we continue to do it as a model of delivering software.
It's not just Linux distributions.
For example, you have Firefox 73 that came out last week,
and version 68 ESR, whatever it was, also came out.
Well, clearly, they're taking some bug fixes and security issues,
and they're backporting it from the current version of Firefox to that old branch.
And it's so common that we don't even talk about this process anymore
because it's just how you do it.
It's just normal.
Yeah.
But he's kind of calling that entire thing into question.
And I think if you narrow this scope, which he is here,
to just servers and how server software and open source software is distributed,
his argument starts to make sense when you consider, and he writes,
the more people involved in working on something, the more eyeballs looking at the code, the better.
That's a fundamental truth.
And yet, when you think about it, it also means you have handfuls of people
contributing software at different paces, at different schedules,
updating all different aspects of a system, with all different interests,
commitments, timelines,
motivations, etc.
It's a massive spectrum.
And perhaps in this reality,
when software is developed in this way,
we're kind of fooling ourselves
a little bit here.
And that if we could build a system
where we could continue to just stay current
with the latest stuff,
like all, like the way Linus considers security issues bug fixes,
just everything's a bug fix.
Everything's a fix.
And we have seen this attitude more prevalent on all kinds of systems, right?
I mean, some would say a lot of the DevOps philosophy.
And if you think about the flip side, have you ever gone, you know, you waited a couple LTSs,
I mean, I was personally involved with 12.04 to 16.04.
That was part of the system D change.
It was a nightmare.
I mean, there were just so many changes to the system.
And you've got to imagine, in some sense, if developers had been given a little more time to get used to those, adapt during the process.
In the end, it might be smoother.
My wife's clinic's workstation is on 16.04.
And a lot of the repos don't work anymore,
so the updates are failing.
And I guess that version doesn't auto-clean the boot partition.
I thought it did, but her boot partition filled up again,
so then it stopped updating.
I mean, it's really getting decrepit,
and the software on that thing is still running Unity 7, right?
I mean, it's really feeling old.
But it works for her, but it's clear that the rest of the software world has moved on.
Drew, since you're our special guest, I wanted to kind of get your take on this from your sysadmin days.
Usually the rolling argument falls down when you start talking about server deployments.
But Richard goes on to make the argument here that, well, you solve that by
creating microserver OSs that just do one thing, much like our Archbox. He's got a point, but I do
think the argument does fall down quite a bit here in 2020. Three years ago, I think this would have
been a much stronger argument. But nowadays, with the prevalence of containers, snaps, and containerized applications, I don't
see this as something that's strictly necessary anymore, especially in server software.
How many people are deploying things on bare metal anymore? Just about everybody is using
Kubernetes and Docker and increasingly even enterprise level stuff is moving that
direction. So to me, the ability to have a very stable base that's been quality assured by the
team who put it together and then stack software on top of that, that is rolling, that's based in a container, why do I need the base system to be rolling at this point?
You flip the argument around is what you've done here.
Yeah, exactly. It's just not something that I buy anymore. I would rather have in production
a system that I know is dependable and has a solid team doing the security patches
like Ubuntu or like Red Hat, and is putting together a product that I know I can rely on
and gets really good tested quality assured updates. And I don't feel like I get that level of quality assurance, testing, whatever you want
to call it, in a rolling distro. It just doesn't feel rock solid and bulletproof like some of these
others do. I think to buy his argument, you have to accept that the distributions are not providing
a ton of value, that they're mostly providing organizational and smoothing of rough edges.
And then when they backport, they're kind of slamming the software together.
That's sort of what he's implying.
And he may be in a better position than I,
having been the OpenSUSE chairman and being involved with OpenSUSE forever.
He may know better than me, but I don't – I feel like it's a little uncharitable to distributions.
I feel like they're sort of underselling what they do.
But I think I might agree with his fundamental argument.
I might see it the opposite way Drew does.
What about you, Wes?
Yeah, it is complicated.
A little more data, I think, would be useful because there's a lot of philosophy in this,
and rightfully so,
and well-made questions and arguments.
But do we see that many problems
of regressions in patches, security patches?
Is this burden of maintaining
these really
that big of an issue? It seems like
at least from Drew and many other admins
that I've talked to, LTS
is appreciated, and they have security
compliance in play.
If that was a major problem,
then they would probably consider
moving to a rolling distribution, right?
Okay, so there's a bit of philosophy in here,
like you just said.
And I think we should touch on that for a second.
And put on your philosophy hat for this question.
Imagine if all of the open source developer time
that is used to backport to old releases
was spent working on new stuff.
Like we took X amount of time and dedicated it to new stuff.
That is part of his argument.
He writes, a small handful of committed volunteers
and those that are employees of companies
selling commercial regular releases.
These are limited resources that are often siloed
with only time and resources to work on very specific distributions with their specific backports and patches that are often hard to even be reused by other communities.
I think he's got a fair point.
For better or for worse, that is true.
What if, what if all of a sudden we had all that time back and it was all just new stuff?
I feel like we probably would be further along in
some areas. Yeah. I mean, I think that we would be further along in some areas if we were
progressive about it, if we continued to take that time from doing
all these backports and ensuring that these LTSs were secure, and moving forward with software.
That would probably fast track us to
just evolve the software itself and create new and interesting software. But in that same vein,
I think you could say that that also introduces bugs, right? And potential security
vulnerabilities. So it's damned if you do, damned if you don't. Right. So, you know, I kind of align
with Drew and what he says about an LTS. I mean, it is long term support and you expect that.
But if you had this additional time and you could use that time to further the software
along and create new and interesting software, you have that opportunity as well.
This whole article is really more of a philosophical article in and of itself,
I believe, and it's really an opinionated piece. I do agree with a lot of
what he says, but there's part of me that thinks that there are LTSs for a reason.
And between these LTS releases, that gives your sysadmin and your developers time to come up to speed with the latest technology. So when do you have a lull so that your devs can get up to speed
with this new technology if you're creating the technology so fast?
Okay, so that's a great point. They got to have an opportunity to learn, which,
okay, all right. But in some ways, an opportunity to learn, which okay, alright, but in some
ways, Wes, couldn't you argue that perhaps rolling
at least at some level could be easier
to understand? Right, it's all about cadence,
right? I mean, a rolling is
just a lot of little tiny releases.
And so you can learn these changes.
There's less to learn when you
have constant changes instead of one
big heap of changes. Right, you're maybe
updating a dozen packages versus a thousand packages,
and there's maybe three changes for each package versus 200 changes.
Right.
And I think of it from maybe a development angle as well,
and that just minimizes your pain as long as you're doing it sometime.
And you might not update every day.
It might be that you update every week or every two weeks
so that you're aware of those changes.
But having a smaller change set to go diff and look back through
when you're trying to debug something
or figure out why something won't compile anymore,
that's really nice.
So where does this leave us, really?
I think there's one way to find out.
We could check on how our server's doing,
and that could be the ultimate answer.
Oh, I was hoping you weren't going to say that.
Has it back up yet?
All right, well, it rebooted, no problem.
It didn't load the ZFS module.
You're kidding me!
I know!
What?
So, no containers were running.
No ZFS file systems mounted.
I'm currently rebuilding the DKMS modules.
We'll see if that just, you know...
Oh, that is sometimes a thing, but usually we have not had that problem on Arch yet.
Hmm.
Well, I think that proves it right there.
I think Cheese and Drew are right.
You guys got that one because, I mean, we'll see.
It's probably not a big deal.
I guess that's a fair part of the conversation
is every now and then these things happen.
I suppose if this was a real, honest to legit,
revenue-generating piece of equipment, we'd probably have the storage on like an iSCSI device.
And it wouldn't be directly attached to this one host where we're using a ZFS kernel module to mount it, right?
It'd be on another host that's running the disk that has it mounted, or we'd be iSCSI-ing it and mounting it that way over iSCSI.
And there's also other things.
I'm sure we could have detected this problem before we rebooted
or adjusted some parameters to ensure that.
However, good on us for going ButterFS on the root
so that way at least the host system gets up
and you can troubleshoot, right?
That's really been working very well.
You were able to SSH in.
Yeah, no problem.
Came up, rebooted normally.
This was part of our belt and suspenders approach to Arch.
It really was.
Is get snapshots,
which we're not going to bother with in this case
because it's probably a pretty easy fix. But also give yourself a way to get in when the system
doesn't work right. And the snapshots here do mitigate a lot. And we're not going to use them.
I'd rather just sort of roll forward in this case. But right, if we, we wouldn't do this unless we
hadn't allocated at least some kind of maintenance window. And with snapshots, it means, okay, it
didn't work. Try it again later, roll it back and then you don't have to miss that window.
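To make that concrete, here is a rough sketch of the kind of pre-upgrade snapshot routine we're describing, assuming Snapper is configured for a ButterFS root (the description text and snapshot number are only examples):

    # Take a snapshot right before the maintenance window
    sudo snapper create --description "pre-upgrade"

    # List snapshots so you know which number you could fall back to
    sudo snapper list

    # If the upgrade goes sideways, roll the root subvolume back and reboot
    sudo snapper rollback 42
    sudo reboot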
We've done some offline,
I mean off-air I should say, upgrades to it too.
We haven't only been upgrading it on the show.
And so it's been
probably a dozen times
maybe?
I know I've probably
done it half a dozen times and you've probably done
another handful of times and it hasn't
had an issue but I'm pretty glad that when we did,
we caught it on the show
because this is how it goes.
This is the real world.
And these are the things you need.
And when you hear why you shouldn't run Arch on a server,
it's like, well, if you don't want to deal with this,
if you don't know how to reinstall a DKMS module,
which is just a Pac-Man command,
but if that goes over your head,
then it's not the right OS for you.
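If you're wondering what that looks like, here's a hedged sketch of putting the ZFS module back after a kernel bump (zfs-dkms assumes the archzfs repository; your package names may differ):

    # See which modules DKMS knows about and which kernels they're built against
    dkms status

    # Rebuild anything that's missing for the running kernel
    sudo dkms autoinstall

    # Or just reinstall the package, which re-runs the DKMS build hooks
    sudo pacman -S zfs-dkms linux-headers

    # Then load the module and make sure the pool is visible again
    sudo modprobe zfs
    zpool status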
I didn't even read the Arch blog before doing this, which I probably should have.
Well, there was that whole bit where I had the game show music countdown playing.
Well, I mean, I think that this is the perfect type of system to test this idea on, though, right?
Because it's not like Equifax's database server or something, right?
It's this personal server that we're using. It disrupts some podcasts.
Right. It might, you know, disrupt
a little bit of a file storage thing, but you always
have your ZFS pool that you can
reattach to. So it's not like
theoretically anything is really lost,
right? Like you've got
some containers you might have to rebuild and
you've got, you know,
let's say if you change the host from Arch
to something else, you might have to rebuild those containers and reattach to your ZFS pool.
But other than that, you still have a security net there that you don't have to worry about the system actually breaking and falling into pieces.
There is one aspect of this conversation that I think he nails before we completely move on.
And that is treating the server a little bit different
than we traditionally have with old school Linux.
In old school Linux, you got these DVDs or CD-ROMs or whatever,
and there was like six disks, and you would do this whole install.
It would be, you'd install all kinds of packages
just in case you ever needed to install another kind of package.
And it was assumed that you'd have all these different things
in a distribution that was much larger than it actually needed to be if that was only doing one job and one job only.
You made a big standalone multipurpose machine.
Yes.
And even if you installed just a mail server on it, it still had all this other stuff.
That has shifted.
And the idea of one server, one function in the world of VMs and containers is very, very affordable
and very attainable.
And I do think that is the way
to just really solid server systems
in the future is
one micro install instance
that only has the bare minimum
of what it needs to run a container
or a VM.
And in that VM,
again, it's an absolute
bare minimum install,
only needs what it has to have.
And you're only maintaining
that small set of packages.
And that's why there's so much room
for all these different flavors of Linux.
From OpenSUSE and MicroOS to
Alpine to the
RHEL CoreOS stuff now. And there's even
unikernels, where it's all just the programming language runtime;
they don't even have a regular kernel.
And in the meantime, the rest of the world will just keep on using Ubuntu.
Well, and
he's also spot on about atomic style updating.
He mentions micro OS, which is the OpenSUSE version.
But the whole idea of transactional updates is fantastic.
It's something that I've really taken a liking to, not just in servers, but also on the desktop with things like Silverblue,
which I think could be the future of how we do a lot of things, having an immutable base,
which is read-only, and then you can layer stuff on top, including containers. That's
really what it's built for. And it really feels like the next generation of server computing,
especially. Maybe desktop.
I don't know.
I'm back and forth on that.
But for servers, yeah, it's the ability to roll back
without having to deal with things like snapshots
because they're baked in.
It's fantastic.
I agree.
That's why we're going to talk about time shift today.
So that way, folks that are listening can implement this on their desktop or laptop or even your server in a way that's really straightforward, just using rsync. Or if you have ButterFS, it uses native ButterFS snapshots. Because these snapshots are really great functionality. We decided to install it on our recording system here that has
an awesome jack setup.
It's a lot,
but amazingly,
we are able to expose all 32 inputs and outputs
from our mixer to jack,
and so we can wire the mixer
in jack
and then send it to Reaper
for recording
and pipe it out
to our remote guests
in the mumble room
with virtual sinks
and whatever you call it.
It's just flexible
and lovely.
It's awesome.
However, you could see how an audio package could come along and break the whole thing,
and we would not be able to record.
We're really dependent on it now.
Yeah, there's PPAs involved and a lot of specific configuration.
It's delicate.
So we thought, Jesus, this is a perfect candidate.
Perfect.
So Wes, he laid his hands on the machine, he blessed it with time shift. And we're going to tell you how that went in just a moment.
Linux Headlines, at linuxheadlines.show, every weekday in three minutes or less.
What's going on in the world of Linux?
I do Mondays.
Drew does Tuesdays.
I do Wednesdays.
Wes does Thursdays. And then Drew's back to wrap us up on Fridays.
So we just sort of shifted around.
So that way, not one of us completely burns out.
But what's really cool behind the scenes, we have a dedicated staffer to doing research,
to collecting stories, to verifying them.
But we also bounce it around the team before it publishes.
And then we do a peer review before we actually release the MP3 file.
So in that three minutes, you are getting super tight, accurate, condensed information with the hype removed, with all of our different perspectives on it, regardless of who's hosting it.
Really proud of that work.
Linuxheadlines.show.
And, of course, one went out today.
And it's a great way to just, when you're going down the road,
why not just start out with a three-minute show
and then get into whatever you're going to listen to?
That's really great.
Also, jupiterbroadcasting.com slash telegram.
Just going great.
Yeah, come join the party.
I think I owe the whole telegram group a beer now.
I'm not sure how we're all going to get in one place.
I think I need to give them tacos.
PBRs.
It's going to be a good night, though.
Not going to be on PBRs.
Don't work tacos in there.
Everybody wants tacos now.
We're not doing tacos again.
At least that's on you.
Unless you want to bring the tacos, you're the taco guy.
I'll be the PBR guy.
You'll be the taco guy.
Dude, this is sounding great, actually.
Now we just need a place.
Also, if you haven't checked out my side thing,
Chris Last Cast,
Chris Last Cast,
me and Brent,
we're launching an empire over there.
No, actually,
I just decided when we were talking about the burnout stuff
to do a little creative outlet,
and I've kind of been keeping it going.
Brent talked me into it.
We sat down,
and I did a thing about how I created myself
a little note challenge.
Look at my notes right here.
I'm sticking to it so far.
You have a lovely notebook there.
So Brent and I checked that out, chrislast.com.
That just went up this morning as we record because we're doing this one a little early.
Hopefully, things will be back to our regularly scheduled program next week, Tuesday, at noon Pacific.
But since I'm at jury duty, we don't really know.
We don't really know until I just go.
So I go tomorrow.
If we're not live, we'll try to figure out another time.
Maybe we'll do like a late night LUP party.
Oh, that could be fun.
We'll do that.
We'll come.
We'll do some pizza.
We'll watch a movie.
We'll watch Revolution OS.
We could really do this.
Yeah, I'm down.
We could grab some clips, watch Revolution OS, have some pizza, and then go record a LUP.
I mean, if you've got to do jury duty, you might as well chill out.
Might as well make it fun.
We'll figure out something.
It might be kind of a weird one, but we'll figure it.
We won't leave you hanging, at least hopefully.
Anyways, that is our housekeeping for today.
Let's talk about Timeshift.
Before we do, I've got an Arch update.
It turns out the latest kernel that we just downloaded,
not yet supported by ZFS on Linux.
There's another GPL-only symbol change that has not yet been fixed.
So, revert?
Go back?
Back in time.
Very good, very good.
Well, that shouldn't be too hard of a fix.
You think you'll have it done before the end of the show?
Oh, yeah.
You think so?
I'm going to do it right now.
Okay, we'll see.
I mean, that thing does take a while to reboot.
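For anyone curious how you back out of that on Arch, the usual trick is to reinstall the previous kernel straight from the pacman cache and hold it there until ZFS catches up. A rough sketch, with the package filename as a placeholder:

    # Older builds are still sitting in the local package cache
    ls /var/cache/pacman/pkg/ | grep '^linux-'

    # Downgrade to the previous kernel build (this filename is an example)
    sudo pacman -U /var/cache/pacman/pkg/linux-5.5.2.arch1-1-x86_64.pkg.tar.zst

    # Optionally pin it by adding the kernel to IgnorePkg in /etc/pacman.conf:
    #   IgnorePkg = linux linux-headers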
So TimeShift is a Linux application that provides functionality similar to System Restore on Windows or maybe Time Machine on macOS.
It protects your system by taking incremental snapshots of the file system at regular intervals.
Now, these snapshots can be restored later to undo the changes.
And it will use rsync with hard links on a system that doesn't have ButterFS.
And the nice thing about those hard links is common files that are shared between the snapshots will just be hard linked,
and then you don't have duplicates, and it saves space.
Each snapshot is a full system backup that you can actually go browse with a file manager.
But here's what's also really cool.
In ButterFS mode, snapshots are taken using built-in ButterFS file system features, which
is really cool.
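That hard-link trick is the same one plenty of hand-rolled rsync backups use, and the ButterFS side is just a native snapshot. A tiny illustration of both ideas, not Timeshift's exact invocation (paths and dates are examples):

    # rsync mode: files unchanged since the last snapshot become hard links, so they cost no extra space
    rsync -a --delete --link-dest=/snapshots/2020-02-17 /source/ /snapshots/2020-02-18/

    # ButterFS mode: the filesystem creates a cheap, near-instant read-only snapshot
    sudo btrfs subvolume snapshot -r / /.snapshots/2020-02-18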
Now, I think the one thing that might surprise people is, if I recall when we were setting
this up, Wes, by default, it doesn't back up user data.
Yeah, and that's actually intentional.
It's designed to just try to keep the system safe.
And so if you break something in your system,
you don't also want to lose all the latest Excel files you were working on, right?
You want those to stay.
And so the idea is with Timeshift, you can just roll that right back.
All those Excel files.
That's your Excel?
I don't know.
I like your picture of the typical Mint user right now in your head.
That's what that was.
Hey, spreadsheets.
All right.
Okay.
Well, that actually makes a lot of sense because a lot of times you'll have maybe your most personal or important stuff in some public cloud storage with the permission set to public.
Or you'll have them in Dropbox with a single encryption.
Or you'll throw them on sync thing that isn't actually syncing because it's always breaking on you,
and they won't actually be only in your
home directory.
Come on, that's
how it goes, right? That's how it works.
But you could add it. It's just a matter of going
in the configuration and adding it to
backup. Yeah, absolutely. So you could do it.
We did not, though. We just had it focused
on the system, and we wanted to... I did add a few.
Oh, yeah? Why? Oh, I did add some of the files that we don't ever really change,
but just some handy binaries that aren't stored in the usual system paths.
Oh, yeah?
Okay.
And then, of course, we actually excluded the recordings themselves.
Absolutely.
We don't need to back those up into a snapshot.
We just really want to restore the configuration state so we can record again.
That's really the main thing.
And because we have committed
to ourselves to keeping these systems
fairly up to date, we don't do it
like every day, but you know, about once a
month we upgrade all these boxes and so
we do want to have an escape hatch
just in case something goes wrong.
Snapshots can be restored by just
either using the GUI, which is nice,
it comes with a GTK GUI, or
you could boot into another live environment,
and you can restore using a live environment too,
as long as you can access.
Yeah, really mess things up.
So I'm going to pull it up.
Let's take a look at it right here.
Yeah, it was very easy to get started with.
I'm going to, and of course,
what really put us onto this was
the people that were writing in when they were explaining
why they use Mint talked about how Timeshift
had saved their bacon a couple of times.
And it's nice because when you go in here, you can look at individual snapshots.
It has a big green badge saying that it's active.
It gives you a big number on how much available storage you have and how many snapshots have occurred.
All just clearly listed right here.
And you don't have to set anything up.
That's just how it shows up.
And it will automatically set up cron jobs for you
to make sure it takes those snapshots on time.
Yeah.
So that was the thing is when people wrote in,
they were like, you really got to go use this
because it really makes this simple.
And now I can see it.
I can really appreciate it because it's so simple.
You can right-click right there and you have mark for deletion.
You can do file management.
You can browse the files,
or it's one-click to just restore that entire snapshot.
And you can have a field in here,
so if you want to go back in later and say,
okay, I'm taking a snapshot, you can add comments.
So you could say this is our pre-upgrade snapshot,
and I can leave a little note in here.
So if you showed up to record TechSnap and something wasn't working,
you could go in there and say, oh, Chris did an upgrade.
Yeah, that's really nice for multi-user systems.
Mm-hmm.
So I had something happen the other day, and I thought,
uh-oh, Jack, it broke.
It finally broke on us.
And I got a hold of you, and we troubleshot it for a little bit and ended up just having to restart Pulse or something.
But it was at that moment I was thinking to myself,
hmm, I really need to get going on this recording,
so I wonder if we had a snapshot system.
So Timeshift kind of came on our radar. I thought, this is going to be a perfect solution.
So we loaded it here. And I don't see any reason why we're going to stop using it. I think we'll
just keep using it on our production Kubuntu system. Oh, no, now it's XFCE. It's a Kubuntu
install with XFCE. I'm curious, though.
You can schedule this in like a cron job so that you can dump,
maybe have like three backups, dump the oldest, create a new one sort of scenario?
Yeah, it has some retention configuration set up already.
So you sort of choose like how often you want and how many you want to keep.
And it'll do all the updating and rolling for you.
Yeah, and it gives you a nice GUI for going and setting the location,
for adjusting the cron schedule so you don't have to be a cron expert.
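And if you'd rather stay in a terminal, Timeshift ships a command-line interface too. Roughly, as we recall, the calls look like this when run as root (the comment text and snapshot name are only examples):

    # Take a snapshot by hand before an upgrade, with a note attached
    timeshift --create --comments "pre-upgrade snapshot"

    # See what snapshots exist
    timeshift --list

    # Restore a specific snapshot; it prompts before touching anything
    timeshift --restore --snapshot '2020-02-18_12-00-01'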
Yeah, I'm looking at the GUI now, and the GUI looks actually fantastic.
It really does look like a quality piece of software.
There is one issue you may run into, which we did
as I was trying to add a couple extra things in the home directory.
There is a nice little filters part, so you can sort of add excludes or include files specifically on sort of Regex-style patterns.
That doesn't really work in the GUI.
It turns out it lets you add stuff, but that never makes it to the config file.
There is an issue open.
It seems to just be a recent regression in the latest release.
And thankfully, the config file is just like a simple JSON file. So you can,
there's already some filters in there. You can see how it has just got to copy and modify a couple lines. I'm close to calling this a must load for any system where you are really serious
about needing to get back up and running or a family member. If you deploy a Linux box for a
friend or family member. The nice thing about it was that, especially because of the rsync support,
you know, we didn't have to do, when we set up the server with Snapper and stuff,
we spent a while considering, like, how are we going to do this file system layout?
What's the right thing?
With Timeshift, it just installed the PPA, got it going, and that was it.
It was probably like 20 minutes of just playing with it,
and now we haven't thought about it, and we have all these snapshots.
Yeah, and I love rsync. You know, that's good.
If you use the ButterFS support, there are a couple of requirements
on how you have things laid out. They're totally reasonable, but just worth checking
totally reasonable, but just worth checking
before you use the ButterFS support.
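The main requirement, as we understand it, is the Ubuntu-style subvolume naming, and that's quick to check:

    # Timeshift's ButterFS mode expects subvolumes named @ for root and @home for home
    sudo btrfs subvolume list /
    # Expect entries along the lines of:
    #   ID 256 gen 12345 top level 5 path @
    #   ID 257 gen 12345 top level 5 path @home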
Well, Brent just joined us.
Brent, have you had a chance to look at Timeshift?
Do you have any thoughts on it? Yeah, you know, I've
seen Timeshift the first time I
saw it was in Linux Mint, I don't know, a few
years ago, and that got me
kind of excited about it
because of the rsync backend. And I'm a huge, huge fan of rsync because of its, you know,
robustness, but also because I know like a tool like this, I love seeing rsync in the backend
because I know that if for any reason, time shift stops working the way I want it to, I understand
what's happening on the sort of on
the bottom end, you know, so I can use tools I'm familiar with to diagnose something that's going
wrong with the backups if that ever happens. So that is some pretty good peace of mind.
Yeah, that's a great point. It's not some magical special backup format. It's just
directories or butterFS snapshots.
Now, it looks like reading around on their website that you'll get unpredictable results
if you're pointing at containers and stuff like that.
So beware. This is more for, like, a workstation snapshot. That's the ideal use case, I think.
Yeah, this looks really cool. I haven't had any experience with this project yet.
But one thing I'm wondering is if they'll integrate ZFS snapshots at some point. I think that would be super, super cool.
I saw someone on their GitHub offering to start working on ZFS support and then contribute it back.
So that could be in the works.
I agree, especially with 20.04 shipping ZFS.
I would love to see that.
Now, Mr. Payne, we do have some feedback to get to, including one from Troy about new users and Linux Mint.
He writes, Chris, I've been listening to your show since the Linux Action Show days when Brian was around,
and I love the shows, love the work, keep it up.
I was just listening to episode 339 of The Mint Mindset,
and I just wanted to give some feedback.
I know with your background that the show is probably focused more towards the geek community than the average operating system consumer.
Most people in your sphere of influence are programmers
or inspired by the bells
and whistles that people seem to love. Now, come
on now. He's, that's right.
Okay, he's probably right. He's probably right, actually.
Most, yeah.
Okay, well, okay, go on.
Continuing on. I got a little worked up there. I felt
like he was accusing me, but, nah, he's probably
right. I mean, we do like the shiny. He goes on.
While I work for a computer service shop in our
area, we make most of our money supporting the various bugs that occur in Windows computers and doing virus cleanups and so forth.
However, we've gotten more and more requests from customers, both businesses and home users, about purchasing computers or upgrading their computers to Linux.
And after trying a large variety of distributions myself and trying with different customers,
I have to say that the feedback they give us about Linux Mint
is how similar it is to Windows 7.
And they love it.
They love it.
Linux Mint is not only easy to use,
but it looks sexy as hell.
It also allows for geek and power users
to get things done as well.
I met several people, including myself,
who run Linux Mint web servers.
So there you go. All right.
Last bit aside, which frankly
scares me a little. I think the point
of like, this is working for users. They
like it. It's an easy transition.
That's important. I agree.
I think that's what matters the most
right there. There's nothing really else to say
about it. He says, I guess, he
wraps up, I guess I'm saying, don't always look at everything from such a high level since most computer users
are just average consumers. They want to use it and get their daily tasks done.
I think we agree with that, actually. And I think that's a lot of times where we've even wondered
if Linux is the right answer. And I think elementary OS and
Mint are telling us yes. I would also wonder,
Troy, if you had a chance,
I'd be curious to know what the feedback would be
for something like Ubuntu MATE 20.04 when it comes out,
maybe try slapping that on a few machines,
because I recently gave the daily ISO a go for MATE 1.24,
and it is looking real good.
And it's not Cinnamon.
Cinnamon kind of does have that Windows 7 vibe going.
But it still has that traditional paradigm.
And 20.04 will be supported for a long time, and it seemed really good.
Daniel writes in with a question we've gotten a couple of times.
It's about these ThinkPad T480s.
He says, in episode 323, you mentioned a shim is being developed for the T480 fingerprint reader.
This was back in October, and it seems that the official drivers were released for almost everything except the T480 and a few other models.
As far as I know, there's no official development going on for the fingerprint reader, but maybe you could get an update on this?
Because if there is, I'd like to know, or if it's been abandoned, thanks in advance, Daniel.
Unfortunately, I don't have any updates on official support. There was a project over
on GitHub to reverse engineer the protocol and add support that way. They made some progress,
I guess, on the specific model that's on this ThinkPad. It will initialize and the LEDs work,
but scan doesn't work. And unfortunately, at least on the master branch, the last commit was like a year
ago. Oof. Oh, see, I
kind of heard a rumor it was in development, but I bet it was that.
Here's hoping, though.
Oh, well, just a couple of more. One more from Jason.
And this one is so cool.
I had no idea, so I'm so glad he wrote it.
And he says, not sure if y'all heard of this,
but there is a community plugin for Cockpit
called Cockpit ZFS
Manager.
He says it's a great plugin to help those that are used to a GUI style of managing ZFS from FreeNAS,
but are moving over to ZFS on Linux.
It's still in the alpha beta stage per the developer and requires Cockpit 201 or greater and Samba,
because it can also do some Samba share stuff.
I think we're going to have to give it a try.
This does sound pretty cool.
He says maybe the JB community could show the developers some love.
I agree.
I think that would be really cool.
And he says, love all the shows.
Hopefully one day I'll find time to join in live.
Well, hopefully it wasn't going to be this Tuesday because we're not here.
I'm in the courthouse.
That is really neat.
I didn't actually need it, but now I want it.
Right?
I mean, just thinking back on our FakeNAS there,
Cockpit was one of our favorite parts about using Fedora on that system.
And having ZFS support would have been perfect.
And you know what was great about that is we totally had no intention of even really trying out Cockpit much,
but we kept getting little notes from the audience saying,
Hey, you should try out Cockpit.
Hey, let's check it out.
And they were so right.
Will writes in with our last one, and it's a shorty.
It's a prediction for the show. He says, follow-up
prediction for Wes's bcachefs
prediction. If bcachefs
does make it upstream in 2020,
Jupiter Broadcasting will
be using it in production before the end of the year.
I think that's pretty solid.
Seems right to me. I mean, I'm absolutely putting it
on my laptop for sure.
I use that for production, so
I mean, come on, how great would that
be? I've just said it too many times, I have to install
it once it's there. Bcachefs for the root file
system, ZFS for the data, ButterFS
for my home. I'm a happy guy.
That's like a perfect setup for me.
I'm all about that. It's all about seeing how many different
file systems you can use at once.
I mean, that's
the great thing about Linux, right?
It's all about choice, Wes.
Okay, we have a really cool app pick for you that I'm going to let Wes pick or tell you
about because he found this pick and he wants to geek out hard on it.
It is pretty neat.
Yeah, okay.
You've probably heard of JQ, the handy little command line tool for manipulating JSON, making JSON queries, that sort of thing, right?
It works really well.
But a lot of the time, at least on the Linux command line, you don't always have JSON.
I mean, we do have the Unix philosophy.
So you have this text output, but I'm sure we've all been in the experience of,
okay, I just want to grab the IP address field.
And now I have to like grep and
awk and cut, and it just gets to be kind of a mess. It's almost a little embarrassing too when
you think about things like PowerShell, where they do have a lot more structured output.
I was going to mention that, yeah.
So, I mean, we're not there yet. Some tools, I learned this today, like iproute2's newer ip
command, have a dash J flag and will output
JSON for you right there. So boom, some of this is handled. Some newer tooling, like systemd,
does have integrated JSON support. But for everything else that doesn't,
there's jc. It has a whole bunch of built-in support. So if you have a, you know, a handy
classic Unix tool, you just pass that as a command line option. It'll ingest the output. It already knows how to parse it
and then give you JSON right out.
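A couple of hedged examples of what that looks like in practice (the domain and the jq queries are just placeholders):

    # Some tools already speak JSON natively, like iproute2's ip with -j
    ip -j addr show | jq -r '.[].addr_info[].local'

    # For everything else, jc wraps the classic text output with a matching parser
    dig example.com | jc --dig | jq .

    # jc also has a "magic" shorthand that runs the command for you
    jc dig example.com | jq .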
Whoa, okay. That is
legitimately geeky cool. I mean, it's
definitely a hack, but I think it's better than
everyone rolling their own little parsers all
the time. We can at least here have a sort of
semi-standardized community to
make sure they work well. Check out the link
in the show notes if you want to visualize what
Wes is talking about because they have some example outputs on there.
This is one of those where people hear about this and be like, huh, okay.
And then one day they'll need it and they'll be like, what episode was it you guys talked about
that had that JSON thing that now I need for some weird arbitrary reason that I never expected
I would need?
It's episode 341.
So you can get those links at linuxunplugged.com
slash 341.
If you'd like to send us a pic or send us some feedback
like we just read, linuxunplugged.com slash contact.
Please do.
I really like that one.
That was pretty good.
And we'll have links to Timeshift and Richard's article
and all the other things we talked about
in those show notes too.
And of course, those are found
in your podcast catcher of choice. And if you haven't enjoyed our chapter markers yet,
why not take a look? Yeah, give it a try. It's pretty nice. And it's also very easy to jump
back to a topic if you ever need to. So those chapter markers are great. And we thank our
great editors like Joe and Drew for making those possible. Thank you, Drew. You're very welcome.
And of course, Joe, but he's not here.
I don't think so. Come on.
Gotta give him a hard
time. All right. Well, hopefully everything will be back to
our regularly scheduled Linux
Unplugged program on our regular
Tuesday day at noon
Pacific. You can get that converted over to Jupiter
Broadcasting dot com slash calendar. I'm
at Chris LAS. He's at Wes Payne. The show
is at Linux Unplugged.
See you next Tuesday. Okay, I think I could use some advice.
Kind of on topic, too, with what we've been talking about.
Lay it on us.
So I've got access for a limited time to Hadea's clinic machine,
like I said, that's running, I think it's 16.04.
It's got to be.
What do I do with this thing?
Do I attempt to, say, replace all of the repos?
It's like the Telegram, PPA, the Chrome repo.
There's several that aren't working. Do I spend the time to get all that working again, if it's even possible on 16.04,
and then just update it and she just stays on 16.04 for a little bit longer? Or do I try to
bring her up to 18.04? Or do I just wait and just skip this window of opportunity and just take her
straight to 20.04 in maybe a month or two.
Can you wait that long? I don't know if I'll get access to it again. How risky is it? But what's it do?
What's the use case for it? So what's it
do and how risky is it are both really good questions.
So what's it do right now is essentially
just information
input to web applications
or
some light printing and scanning.
It doesn't do a lot, so it's been perfectly fine on 16.04 with Unity 7 and all of that for ages.
I think 16.04 was a new release when I put it on there, which is crazy.
Wow.
So I haven't really been super motivated to throw it on the latest and greatest because they like Unity and it works. But things
like Chrome and Telegram are yelling at her that they're going to stop working because they haven't
been upgraded in so long. And those are kind of essential to the workflow. I know you guys have
been playing with 20.04. How crazy would it be to put her on like a development release, considering
we're kind of close? Is that out there and too wild? I don't know if I have
like a really authoritative opinion on it yet, but my initial impressions have been that 20.04 is
close to daily drivable, if not there already. It seems like it's been pretty solid, but I've
been mostly using the MATE version, so I don't know about the GNOME Shell side of it, but I've heard from
others that it's solid. My inclination would be to go ahead and document what all they have
installed on there and actually use, and then take them to like 18.04 MATE. And when 20.04 MATE
comes around, it should be, if you find another upgrade window,
it should be pretty easy to roll them up to 20.04.
But if I don't, no big deal.
They're on 18.04, and it's going to be supported for a while.
Yeah, exactly.
And I'd go ahead and enable live patch, too.
Yeah, okay.
Yeah, that's pretty solid, I think.
And maybe make a backup before you get started.
Put Timeshift on there.
Seriously, I should seriously put Timeshift on there.
I'm going to write a note.
That's actually a really good use of it for her.
That's a perfect use case example is
this is a front office computer
that they want to book appointments on
and scan information into.
And, you know, I got to keep it up to date
because it needs to be secure.
Yeah.
But we can't have it break either.
And that GUI, assuming they can at least get to the desktop,
that GUI is totally usable by them. Yeah, it's very
intuitive. I think that would be
a much easier transition from Unity
than, say, GNOME or
even XFCE or Cinnamon.
It's going to feel
a little different, but not
so much that they can't find
what they need. Yeah, I could use that Mutiny
layout, too. Yeah, could do.
That pre-configured layout. Oh, that might be really nice.
Thanks, guys.