LINUX Unplugged - Episode 277: Skipping Fedora 31
Episode Date: November 28, 2018
Fedora might take a year off to focus on itself. Project Lead and Council Chair Matthew Miller joins us to explain this major proposal. Plus Wimpy shares his open source Drobo alternative, and our final Dropbox XFS hack. Special Guests: Brent Gervais, Martin Wimpress, and Matthew Miller.
Transcript
You remember the Runs Linux segment on the Linux Action Show?
Oh, yeah, of course.
Check out this Runs Linux for this week.
NASA.
NASA Runs Linux.
You know, there's this recent epic landing on Mars.
Heck, yeah.
And on the live stream, I got a screen cap of it here,
I noticed up on the projector a kind of older Plasma desktop running VNC,
looking into another Unix box to pull the images down live from the landing.
All on Linux.
You've got Linux, Linux, and then I think that might be some old Unix possibly.
Chris, when you're landing on another world,
you don't have time for proprietary software.
That's right. You've got to make sure it works right.
And VNC is the old go-to remote control, even for NASA.
This is Linux Unplugged, episode 277 for November 27th, 2018.
Welcome to Linux Unplugged, your weekly Linux talk show that might be slightly distracted by that keyboard over there that's got a Linux box built into it.
My name's Chris.
My name is Wes, and I know I am very distracted.
It's the Keyboardio.
It's a wood keyboard with a pretty eclectic layout
and Arduino boards built into both halves of it,
connected via an Ethernet cable.
And then USB-C back to the machine.
And it's all programmable.
You got LED lights.
You could do RGB if you want to be an elite gamer.
So Wes is going to be putzing with that project.
But that's actually not what we're here to talk about today.
No, no.
We're going to do a bunch of community news.
A nice batch.
Not too much, but a good solid batch.
Wimpy whetted my appetite recently with some really cool home server setup talk.
So we'll deep dive with Mr. Wimpress on that.
And then after the KDE Corner, we're going to talk about this proposal by the Fedora Project.
After Fedora 30, they want to punt an entire release, take a year, and focus on themselves,
and wait for 31. Now that's a massive proposal with huge ramifications
and kind of a precarious timing with the other news out there.
So we're going to bring on the project lead, Matthew Miller,
to explain what's going on, what the objective is,
what might go wrong, and what stage this proposal is in.
We're going to get the whole scoop from him.
That's like everything.
He's going to be on it a little bit to tell us what's going on there.
And then we'll wrap it up with some picks, some
interesting year-end news, including like when we're doing our special for predictions
and stuff like that. We got all that nailed down. And then for one last time, hopefully,
I'll tell you my current Dropbox hack, which those bastards at Dropbox. I sat down on a
machine upstairs and it started popping up that error message. I just got so disappointed.
EXT4.
Oh, so frustrating.
You know what?
Let's focus on something positive.
Let's focus on that virtual lug.
Time-appropriate greetings, Mumble Room.
Good morning.
Good day.
Hello.
Hello.
Hey-o.
Hello, everybody.
Good to have you with us.
Thank you for joining us.
We're going to have a great show today.
And I want to start with a topic that just won't go away. And Wes, you set it up so beautifully the other day in our chat.
You said we're in the ascendancy of these projects. And so we are not fully realizing
all of the long-term ramifications. And what you were talking about is how everything has
its own package manager right now. And we're going to get to a new story that just happened
here. But we've talked about this before. Docker Hub, Flatpak apps that have been packaged with old libraries,
or elementary apps that aren't properly supported and distributed as Flatpaks,
or snaps that had a crypto miner in them.
The most recent one that I think has gotten a lot of attention
is this Node.js package hack.
So for some context, a widely used npm package was discovered to have a dependency
with an encrypted payload in it,
which then tried to steal Bitcoins.
Of course.
I mean, that's what you do.
That's always what they try to steal.
But when you start reading
through the community reaction,
there is tremendous outrage
at the developer
for essentially screwing millions of people.
People are very, very angry.
And they don't really have a lot of ground to stand on.
Because if you look at any open source software, it generally comes with, it's provided as is, use at your own risk.
I think that's what makes it so interesting is because none of these are hard and fast rules.
It's all about community norms and what we expect from software.
And these days, like everything you build is going to be pulling in six,
ten more libraries that you're
linking features and adding value on top,
and that's great, but
we do have to question, how do we be responsible
about this? And yes, it's not your
responsibility necessarily to do any of these things
if you're an open source project maintainer, but
if you're interested in doing that well,
what are some guidelines that can help you do that?
How do you, and when you're ready to not do it anymore,
which is totally reasonable,
how do you transfer ownership in a reasonable way?
Right, so it doesn't just die and become vulnerable.
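For anyone following along at home, here's a minimal sketch of the dependency hygiene being discussed, assuming a Node.js project; the compromised package isn't named in the conversation, so nothing here is specific to it.

```sh
# Install strictly from the lockfile so a malicious patch release can't slip in
npm ci

# Ask the registry for known advisories against everything you've pulled in
npm audit

# Inspect the full transitive dependency tree before you decide to trust it
npm ls --depth=Infinity | less
```

None of that replaces actually reviewing what you depend on, but it at least keeps the tree pinned and visible.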
But, I mean, how deep does this rabbit hole go, though?
Because you've got a distribution package repository
and maintainers, then you've got things
that have their own package managers built into them.
Lots of software does.
Then you've got containers that have hubs that people have automatically submitted software to.
Then you also just have rando drive-by websites.
So like this rabbit hole is never-ending to me.
And we're just building on top of it.
Like building a package manager now is like the keen thing that's the bee's knees that everybody's doing.
And it seems like we aren't appreciating the amount of work it takes to sustain these systems.
The more packages a system has, the bigger it becomes.
Then it's like it's on a Richter scale: it becomes way more work to actually maintain it all.
Maybe we need a miner-free certification process.
There are three layers to this, right?
There's the ethical component,
there is the trust component,
and then there is, you know, the legal one.
Ethically, wrong, just throw it away.
Trust, well, people trusted the repository
where things were sorted out.
So the responsibility really gets attributed to them
in my perspective, not so much the original developer, because transfer of ownership is not immediate transfer of access to
a repository. When I think of that rabbit hole, it seems insurmountable to solve at the management
layer. In fact, it seems like ultimately the responsibility is on the sysadmin who installed
that on his own server or the end user who decided to install that desktop software.
I'd kind of like to get your perspective on this, Brent.
I'm not saying I audit all of the things I install,
but don't you think ultimately if you installed something
that stole your Bitcoin wallet, it's your fault?
Yeah, I think that is the case,
and we have to be careful with the end user perspective there,
is that not all of our end users are educated enough
to know to look at all of that stuff or pay that much attention, right? Like Linux, we sometimes
tout as really quite safe for end users. But in this case, you know, they may not even know what
they're installing in the background. So yeah, absolutely. Especially as a sysadmin, you should
feel responsible for that. But as an end user, you need to kind of
have your eye out for that kind of stuff too. I'm on Arch and as everyone knows, the AUR is great
and can have some pitfalls in that respect, right? So I think, yeah, you are ultimately
responsible for what you're installing. But I will say there's even a bigger catch that is
often missed in with this one, which is companies that tend to use open source packages,
they generally will have some sort of artifactory
or internal repository that they clone.
If they can't just easily pick up from the original source,
they will stop contributing back the little they already contribute back
because at this point, the risk is too high.
So I would refrain from framing this as,
hey, the user is responsible, because a company really doesn't want to deal with that
all the time. At that point it's just worth it to pay a developer to write it in the first place,
and we're the ones who lose.
So I really think that we want to take ownership and responsibility for it
while it's still convenient for us
to have that responsibility.
I think one of the problems that we see here is
in our quest for more software availability, we've made it
easier to get software, and that cuts both ways.
Like with the AUR, it can mean that you can
rebuild things from Git really easily and get the latest
security patches, or things
can never get updated. But I
think some systems are doing it in a little
bit different because you'll never be able to solve, or at least it doesn't seem likely we'll
solve the problem of people just didn't update all their dependencies the right way anytime soon.
But maybe another approach is limiting the damage that can be done, like with snaps, right Wimpy?
Yeah, this is an interesting conversation. So when we think back to 1970s Unix: everything's a string, do one job and do it well, and pipes. These
were profound ideas that have stood the test of time and lasted for 40-plus years. But back then,
when Unix was being created, there were a handful of people that were building the system. And there was
implicit trust that existed between all of those people that were working on the Unix derivatives.
And you can even trace that back to the level of trust that existed in the networking tools
back then. Everything worked on a trust relationship. Everybody knew who everybody else was. Now, when you roll forward 40, 50 years, that's not the case anymore. There are tens of
millions of developers just on GitHub. And there are other places that you can, you know, manage
your source code in the world today. There are millions upon millions upon millions of projects in GitHub and other public source code repositories.
You, as a sysadmin or a DevOps engineer or a developer who are pulling in components,
you may just pull in component foo, but component foo may have dependencies upon dozens and dozens
and dozens of other libraries. You can't trust all of that other stuff.
And this is where snaps are a reimagining of old 1970s AT&T Unix
where you take all of those things, you put them in a container
that can't tread on the host operating system
or the other applications that are installed in that way
because we can't
guarantee these trust relationships that used to exist in the distribution models and the creation
of the distributions in their earliest forms anymore. It's a new era, and it seems like
something that's actually, now that we're this far into it, has been a long time coming.
So, and there's also, obviously, as technology gets better,
there's other ways to solve this.
Like, almost every major repository is built in some sort of basic
scanning for malware or trust verification system of publishers.
Like, there's other ways of tackling this at the distribution end as well.
Somebody could build a machine learning model
which is good at squashing crypto miners or something.
I bet you it's in the works. That
seems like something that would get a lot of hype.
Crypto miners by themselves are not illegitimate
though. True.
Basically scan for anything that
is not specifically flagged as a crypto miner.
Like when you upload your package you get a flag
that says this is a crypto miner.
And it would come up on
when you install the package, saying, hey, this is a crypto miner.
Are you okay with that?
You're like, yes, I know.
I want to get all the Bitcoin.
Yeah.
Let's not talk about Bitcoin right now.
It makes me sad.
All right.
Well, I want to also discuss this story that's coming out of the United Space Alliance.
When's the last time we got to do that?
Too long.
Too long, I say.
May have been never on this show, actually.
We're going to take a look at a page
from SciByte here for a moment. This is
the group that manages the computers aboard
the International Space Station,
and they work with NASA. And they
just recently announced that the Windows
XP computers aboard the
International Space Station have been
switched to GNU slash Linux.
No way. Wow. Isn't that awesome?
That's fantastic.
Yeah.
Yeah, they write,
we migrated key functions from Windows to Linux because we needed an operating system that was...
Stable.
Stable and reliable.
Of course.
Yep, yep.
In specific, the dozens of laptops
will make the change to,
can you guess which version of Debian?
Oh, it's Debian.
I was going to say Fedora 29,
but no, that wouldn't make sense.
No, they ought to wait around for 30. It's going to be supported for a while.
Okay, so Debian 8, let's say?
Yeah, give me the number because I don't remember all the Debian
names off the top of my head all the time.
So, all right. Okay, let's see if anybody
can see if you can guess.
What version of Debian? It's going to be up in space
for a long time, so you're going to want something
probably recent, right? Something
current and well-supported. Yeah, you'll have security updates
going forward.
It's an old one.
It is.
You're right.
It is.
All right, okay.
It is Debian 6.
The laptops will join many other systems
aboard the ISS that already run
various flavors of Linux.
These ones will be running Debian 6.
There's also Red Hat up there,
as you would imagine,
and some Scientific Linux, as you probably guessed already.
And after the transition, there won't be a single computer
on the ISS that runs Windows anymore.
It was kind of interesting that another reason they liked
was that they wanted in-house control,
which is a thing we talk about,
but you don't always actually see people using it, right?
But here they can mess things up as much as they want to
and customize it.
Yeah, well, you know, they got a virus back in 2008.
I guess they blame it on a Russian cosmonaut.
Those Russians and their viruses.
Brought about a worm, I guess, technically not a virus,
which then spread to the other laptops.
And they think that switching to Linux will essentially immunize them
from that particular type of attack in the future.
I mean, I guess against casual malware, not perhaps a more malicious attack.
Yeah.
Yeah, but I don't think any of us are too surprised by this, are we?
Because Linux is the scientific community's operating system of choice.
We know that the Large Hadron Collider uses Linux,
and it uses KDE Plasma.
NASA and SpaceX ground stations run Linux.
DNA sequencing lab technicians use Linux.
It's pretty much anybody that has a serious job.
Yeah, a lot of astronomy software
and packages for analyzing images you get from telescopes,
that's already running Linux too.
So, yeah, kind of par for the course.
Did you see this bit on here about the robot?
They got a robot.
There's a robot.
In other news, yeah, the first humanoid robot in space, Robonaut 2,
which also runs Linux, is due for an upgrade.
So they got Robonaut, and Robonaut
2 is also going to be, like, they got two
robots, but only one of them is in production.
And the one that's up there now,
the one that's being reassembled and built,
has been delivered in pieces.
The last big piece actually went up
in 2011. Right now it's just a torso
with two arms.
But they also have legs that they're going to be giving it soon
and a battery pack that will be delivered soon.
And then eventually it will be able to perform menial tasks like vacuuming,
changing the filters, the air filters,
and possibly dangerous tasks like during spacewalks.
Space butler.
You saw it here first, folks.
Runs Linux, Wes.
How about that?
You know, if you like Runs Linux,
friends, if you enjoy the GNU slash Linux operating system.
And you like talking to people who also enjoy that operating system.
Then I invite you out to LinuxFest Northwest
coming up April 26th through the 28th
in Bellingham, Washington, the beautiful Pacific Northwest.
And it's at the Bellingham Technical College,
which is a great campus because they got all kinds of things
like robotics there and they have gaming labs there.
And they just, so it's great for kids too.
If you've got little ones, they have things for little ones to do.
They have like a gaming room that is a ton of fun
that my young ones like.
And there is tons of talks to attend.
And of course, we're going to be there. We're going to have a booth, and we're even considering doing a little bit
more this year, maybe having a room where we're giving talks all day and things like that. It's
all in the works. It's all early days. But I wanted to mention it now while we're still in 2018,
so maybe it could possibly be on your radar, because we're hoping to have a hell of a party.
It's been a good year for us. We have things to celebrate.
A year like no other.
Not only have we joined Linux Academy,
and now we have full-time people on our team,
and we have new hosts in the works
that are going to be able to join us there
that you haven't even heard about yet.
It's been a really good year for us,
and we're going to have a great party.
Plus, it's the 20th LinuxFest Northwest.
20 years. 20 years of LinuxFest
Northwest. Wow. So they have a
few things, a few surprises
in the works to celebrate that.
And they're doing a big past, present, and future
theme to kind of bring it all together. All this kind of
sounds like if you've been wanting to go to
one of these, you want to go to a conference and hang out with us
or other JP fans, yeah, this is the one.
Yeah, this is the one. And last year
we did a barbecue outside of Lady Jupes.
We had a couple of barbecues going, plus we had the kitchen
inside Lady Jupes going. Just a
tremendous party that was a huge success.
And so I think we're going to do it again.
I haven't talked to the folks at System76 yet,
but I know they thought it was a huge success.
I'm sure they'll be back. And if not,
hey, we've got the crew.
Maybe we'll be lucky enough to see Brent there joining us.
I can carry two barbecues on Lady Jupes alone.
So I'm just saying.
Yeah, Brent, are you planning to make it back this year?
I'm definitely going.
I think after the experience last year
and just the huge connection to the JB community here over this year,
there's no reason not to go, really.
Even if it's across the continent, really.
I'm feeling like this is going to be one of our big years.
I'm really excited.
I've talked to Noah.
He's going to be there, too.
He'll be at the booth, and we'll be doing live shows.
And it's just, oh, it's going to be fun.
JB family blowout.
It's going to be a JB family blowout.
So if you want to come out, and so far,
it even sounds like I've talked Joe into coming out
and doing a live Linux Action News.
That mysterious, mysterious Joe.
The first live Linux Action News we'll have ever done live at LinuxFest Northwest.
And the first time we've ever seen each other in person.
So I'm looking forward to that just alone.
That's going to be great.
So if you can make it, it's one to go to.
Anybody else in the mumble room?
Anybody?
Anybody in particular?
Might make it for the 20th anniversary, yes, thinking about it.
Gosh, that'd be great.
That would be wild.
That would be so great.
It would be great to have a LUG barbecue at the same time, I think.
Absolutely.
Just a crazy suggestion.
A barbecue.
We almost always have an after party at the studio,
although this year I have a restaurant in mind to go to
just because the crowd's getting bigger,
and the studio's not that big. So yeah,
but we almost always have an afterparty. Well, we try to have an afterparty each night, so it gets a little ridiculous, because you've got Friday night, which is setup night.
And then the, you need to blow off some steam after that. And the actual fest is Saturday and
Sunday. So then you got things to do at Saturday night and Sunday night. And then it's like Sunday,
you're all done with the fest,
you've done your live Linux action news.
Well, now you want to go have a party,
so then there's another party.
And they're not like crazy blowout parties.
Well, actually sometimes they are.
It just depends on the crowd every year.
But I'd just love to have you there.
And you can check out and get more information
at linuxfestnorthwest.org.
Also, as we do this recording,
there's still 65 days left to submit papers.
I'm encouraging our team, Wes,
and others on our team to submit papers for talks
because I think we have a lot to contribute
that we don't normally do those things,
but because we're going to have such a big crew,
we'll have plenty of coverage at the booth and stuff
so we can do that kind of thing this year.
Wow, very excited about that.
Well, Wimpy, I really, really do hope you make it.
That would be one of the highlights of the year, I think.
But while I've got your ear, you really kind of piqued my interest on a recent episode of Ubuntu Podcast.
Big fan, as you know.
And you touched on a new setup using some software that I have heard mentioned several times by the audience, SnapRaid, and a couple of other projects to really build yourself what sounds like, oh, I mean, this might, you could correct me if I'm wrong, but it almost sounds like a free open source Drobo, like a home-built Drobo in a way.
Yeah, I think a comparison with a Drobo is fair, yes.
Okay, so this obviously sounds very tempting. I just
had another drive die in my
FreeNAS server. I don't know why this happens to me.
I don't know,
I don't grok in
FreeNAS's UI how I'm supposed to solve it.
I don't get it, so then I get...
That's funny, I just bought two replacement drives on Black
Friday, so I'm going to be rebuilding
the array myself. I didn't even notice until yesterday,
so I didn't even think to look.
So I don't know. I'm
just kind of sick and tired of
this. It's not that FreeNAS is bad. I love
FreeNAS, but when I have disk
die on me, I feel like I can never
clear the error messages out of FreeNAS for the rest
of the life of that install, and it's just because I'm an idiot
because Alan sits down and he has it figured
out in two seconds, so it's
my fault, but I have been thinking about if this was a Linux-based system,
I could probably just take care of it.
And it's because it's this quasi-appliance without a full user land
running on top of a kernel and system tools that I am not as familiar with
that I feel like my data is kind of in peril.
So help me, Wimpy1.
You're my only hope.
Is this something where you could actually hack it on the command line?
You can SSH into your box and mess around with stuff
and you're not going to break it?
Explain this setup to me and how it all kind of fits together.
Okay, so yes is the answer to that question.
What I have started with is just an install of Ubuntu Server 18.04.1.
So just straight up Ubuntu server.
So yes, you can SSH into it and you can do all of the management from that point onwards just over SSH.
With no fancy file system set up beforehand or anything like that?
Correct. Yeah. So this is where SnapRaid is interesting.
So let's just start this by saying ZFS is all well and good,
and it has its place in the world,
but I've specifically chosen these tools of SnapRaid and MergerFS
because I am addressing specific issues that home users would have,
not enterprises with terabytes and terabytes of storage.
So is an example of that, like, mismatched disks, or what's an example of that?
Yeah, so examples of that would be, with SnapRaid, you can
use mismatched sizes of disks. You don't have to create the array up front. You can use your
existing disks with your existing data on them and add those into a
SnapRaid configuration without any reformatting or anything like that. So in
terms of the home user, for budgeting, it's very flexible. And in my case, I lost some disks last summer due to heat, so I'm very sensitive
to disks running and spinning and generating heat. And the way that SnapRaid works is,
so for example, I have four drives in one of my arrays. I just formatted all of them with, well, one of them I formatted with XFS and the other three
I formatted with ext4. So you can choose what file system you want to put on the disks you're going
to use in your SnapRaid array. The way that SnapRaid works is that you just copy files onto a file system. So nothing clever, nothing fancy. But consequently, a file
exists on one of the disks in your array. An array is, like, in air quotes now, because it's not
like a RAID array where the whole thing is a logical volume. Which means, when this is actually in use, only one disk is spinning, and I have smartmon
power down the disks after 10 minutes of inactivity. So this keeps the disks cool, it keeps the power
down, and it keeps the noise down. So these are all, like, important factors for home users, right?
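The conversation doesn't spell out which tool handles the spin-down, so here's a hedged sketch using hdparm's standby timeout, which is one common way to get the ten-minute behavior described; the device names are purely illustrative.

```sh
# -S 120 means 120 * 5 seconds = 600 seconds (10 minutes) of idle before standby.
# /dev/sdb through /dev/sdd are placeholder names for the data disks.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    sudo hdparm -S 120 "$disk"
done

# Check whether a drive is currently spun down (active/idle vs standby) without waking it:
sudo hdparm -C /dev/sdb
```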
That makes sense. Instead of focusing on a constantly available,
high-performance NAS system,
this is more of like, here's my backup server,
and I just want to keep things safe.
It doesn't have to be ready to respond at all moments.
I mean, it's sitting there idle a large chunk of the day.
Indeed.
That's a good point.
So SnapRaid is designed for media servers
where the files on the media server are changing infrequently.
So let's just talk about SnapRaid for a minute.
I had four drives.
I formatted one with XFS, the other three with EXT4.
We'll get on to why I did that in just a moment,
because I'm sure that will come as a surprise to you.
I would have thought of an XFS myself, but, you know.
Yeah, yeah.
A gentleman's file system.
We'll get to it.
You'll understand.
And the drive that I formatted XFS, that I used as my parity drive.
So within the SnapRaid configuration, I say this disk mounted in this location is used for parity.
Oh.
Huh.
The other three drives that I formatted,
I say these disks are data disks
and they're each mounted in their own discrete location.
So one is called HDD1, another is HDD2, another is HDD3.
So, and that's basically the SnapRaid configuration.
That's all it cares about, where the disks are,
which one has got parity, which one's got data. And then you also specify the path to what are called content files, and
these are the files that store the hashes, so the checksums and hashes for the files. And this
is where SnapRaid has a great property, because it offers you the same kind of bit rot protection that ZFS offers you.
With checksums?
With checksums, indeed. And it can recover from files where the data has become corrupt over time
and repair that data from the parity and the hashes.
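To make that concrete, a minimal snapraid.conf along the lines Wimpy describes might look like this; the mount points and file locations are assumptions, not his actual configuration.

```sh
# Hypothetical /etc/snapraid.conf for one XFS parity disk and three ext4 data disks.
sudo tee /etc/snapraid.conf > /dev/null <<'EOF'
# Parity file lives on the XFS drive
parity /mnt/parity1/snapraid.parity

# Content files hold the checksums and hashes; keep copies on more than one disk
content /var/snapraid/snapraid.content
content /mnt/hdd1/snapraid.content
content /mnt/hdd2/snapraid.content

# The data disks, each mounted at its own discrete location
data hdd1 /mnt/hdd1/
data hdd2 /mnt/hdd2/
data hdd3 /mnt/hdd3/
EOF
```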
You probably have heard a lot of people say this sounds similar to Unraid
since you talked about it on the Ubuntu podcast.
It is similar to Unraid.
It's far more robust than Unraid in my experience.
Yes, indeed.
Yeah, and Unraid is more than just a disk manager now.
It does a whole lot more than that.
Is it a web UI to manage it?
What's the interface?
It's just a config file and a command line application called
SnapRaid. Easy. So once I've created that configuration, that configuration defines my
array. And now I can just copy some files onto those disks willy-nilly. So I copied some files
onto hard disk one, I copied some onto hard disk two, and some onto hard disk three. And then I ran the command snapraid sync, and that
then builds the parity file. So it inspects every file in the array, builds the parity file,
and builds the tables of hashes for all of the files that are in there. And at that point my data
is now protected. So I have a parity drive, which means one of my disks can fail
and I will be able to recover.
And I have hashes of all of my data
so if there's any bit rot
I can also recover any data
that's become corrupt over time.
Now explain to me
because I feel like I'm missing one piece
is I understand that MergerFS
is also part of this
and I know MergerFS is a union file system.
Yes. But I'm not clear how that's fitting in with your setup.
Right, okay, well, I'll get to that in just one moment, because
there's one other important feature of SnapRaid: you have to run the sync command
manually to rebuild your parity, unlike, say, something like ZFS, where that's happening on the fly. And this is where SnapRaid now has backup kind of
functions. So in the intervening periods where I haven't run a sync, if I delete files or
overwrite files I didn't mean to, I can now go onto my volume and I can say snapraid fix, and I can undelete the files and recover them from
the parity that I have. So in the times between the syncs being run, it operates as a backup tool,
and you can pull files out of the backup.
That could be nice in a small office environment too, right?
Yeah, or at home, when, you know, if I do a sync every week, I've got a week's
grace that I may have deleted something accidentally, and I can go and recover it without having to, you know, go to actual backups.
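The day-to-day workflow he's describing boils down to a handful of commands; a quick hedged sketch:

```sh
snapraid sync        # rebuild parity and the hash tables after copying new files on
snapraid status      # see how much has changed since the last sync
snapraid scrub -p 10 # verify roughly 10% of the array against its checksums (bit-rot check)
snapraid fix -m      # restore files that have gone missing since the last sync
```

A full `snapraid fix` can also rebuild a whole failed data disk from parity, which is the recovery case mentioned earlier.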
So, because now the files are spread across three disks, that's a bit inconvenient, because now you have to know that this disk has got these files on it, and this disk has got these files on it, and this disk has got these files on it. So then I've installed some software called MergerFS,
and that uses FUSE, and it uses wildcards in your fstab. So you can say, go and mount all of my HDD-star under /media/mergerfs, and now under that directory I
have a union of those three disks presented. And MergerFS then has some strategies about
how files, if you, say, copy a file to the union, how it decides where to place those files. So I've got it in a mode where
it will say, does the directory, the endpoint directory that I'm being asked to put this
file in, already exist? If it does, put it on the disk that already has that directory on it.
Sure.
Or you can say just scatter them willy-nilly. You know, there's a whole bunch of different
strategies you can use there. But because I want to have generally only one disk spinning at a time,
I have it say, where does this data already exist? Let's write it there. And then I've got a
configuration option that says, and if there is not enough space on that actual disk, then fail
over to the disk that has the most available space and start writing those files there.
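Here's one plausible fstab entry for that union; the `epmfs` create policy (existing path, most free space) approximates the "write where the directory already lives, otherwise fall back to a disk with room" behavior, and the paths and minfreespace value are assumptions rather than Wimpy's exact options.

```sh
# Hypothetical: pool every /mnt/hdd* branch under /media/mergerfs via a wildcard fstab line.
sudo tee -a /etc/fstab > /dev/null <<'EOF'
/mnt/hdd* /media/mergerfs fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs,minfreespace=20G,fsname=mergerfs 0 0
EOF
sudo mkdir -p /media/mergerfs
sudo mount /media/mergerfs
```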
And now how does the system respond if you were, say,
to slot in a couple of new drives or put in one that's maybe six terabytes
and another that's two terabytes?
Right.
So the only thing you have to do is make sure that the drive you're using
for parity is at least as big as the largest drive that you're using for data.
Is there a migration?
Can you move parity drives?
You can now spread the parity across multiple drives.
So you could spread it out
and then you could reclaim that parity drive
as one of your storage drives.
Yes.
Aha.
Yeah, exactly so.
Yes, yes.
And, you know, let's imagine
I have this terrible situation where two disks fail
and I've only got one parity drive.
Yeah.
So I can't recover the whole array.
But the two disks that I have left, because they're just file systems, I have that data on those disks.
You know, there's no special file system or RAID mechanism behind that.
That data is just accessible to me.
I can just plug that into an external USB adapter
and just suck that data off.
It's extended for whatever.
Yeah, exactly so.
But yes, if you want to add another drive,
you just plug the disk in, you format it,
you mount it somewhere,
and then you put in the SnapRaid configuration.
Hard disk 5 is in this location. And that's it.
It's very, very simple.
That does sound simple.
And it's surprisingly fast as well.
So again, because this is home stuff,
I have these arrays hooked up over USB three even.
Right, because you're not even looking
for like super high performance.
You're looking for storage.
Yeah, but even over USB three,
I'm pushing 170 megabytes a second
across the array, so
not too shabby. Not bad. No, what kind of hardware
is that? It's their
ICY DOCK Black Vortex
enclosures connected
over just regular USB 3 to a NUC.
Is it like an i7 machine, or is it
a NUC? It's a NUC
with a mobile
i7, so it's a chip, a mobile i7.
So it's a quad-core mobile i7.
That sounds like a perfect home server for what you're going for.
That's a great setup.
Well, that piques my interest.
Wimpy, thank you for sharing with the class.
And we'll have links to the projects that Wimpy talked about with their documentation in the show notes, linuxunplugged.com slash 277.
And he also went into some detail in the Ubuntu podcast.
So go check that out.
And I am definitely looking at SnapRaid.
I wanted to mention for anybody that's listening the week that this comes out,
or if you're listening live right now, I don't exactly know what happened,
but Linux Academy decided to extend the $299 per year deal,
which makes it about $24 a month to get Linux Academy
for five more days as we record this.
I think on Cyber Monday,
the sign-up page went down or something
because of everybody trying to get the deal.
So they're like, all right, well...
And it was only like...
I mean, they freaked out because it was like 20 minutes.
But they're like, we're going to extend it for five more days.
I think that's what happened, but it's a great deal.
And if you didn't jump on it, you're saving 33%.
And the reason I mentioned that is because they obviously don't pay us.
Well, I mean, they pay us, but they don't ask me to do these ads.
But that, I know what they have in the works.
So if you get it at 24 bucks a month, that's going to be worth it for a long time because it stays at that price.
Lock it in. You can lock it in. Well, the big news this week really is that the Fedora project
is considering canceling, or maybe you should say significantly delaying the release of Fedora 31.
So following the release of Fedora 30 in May, the project is considering essentially hitting the pause button
for a year, skipping the traditional 31 release cycle, and working on themselves, retooling,
and those kinds of things. It's one of those that they've gone through in the past when they
began doing editions, and it's one that comes at a kind of precarious time. So we thought the best route to go would be to get it from the horse's mouth, as they say,
and bring the project lead on the show and chat with Matthew Miller.
Matthew, welcome back to the show, and thanks for joining us on the Unplugged program.
Hey, Chris. Glad to be here.
Well, we are glad to have you on a day where we're trying to parse the news
that it sounds like there is a proposal in the works to essentially bump the 31 release, hang out on 30 for a while, while the back end, I guess infrastructure really,
of the Fedora project is retooled. Can you explain really what is being proposed? That's
my understanding, but what's going on from like your actual horse's mouth words?
Yeah, that's basically it. And I don't know if you remember
back to five years ago, we actually did this one time before, and it worked out. It worked out
okay. A lot of the problem is we have a six month release cycle and a lot of the people who work on
our Fedora infrastructure also are people who are engaged in the business of getting that release out the door, the release engineering, the infrastructure stuff that makes the bits ship, QA and QA tooling and things like that.
And so as we're going on that rapid cycle, a lot of technical debt accumulates.
We get a lot of Band-Aids put on things that are problems and not enough time to fix it. And then just because this is Fedora 30 we're talking about working on now, there've been
a lot of releases and a lot of our tools have grown organically in the last 15 years. And so
some of the things are scripts that have another script that they run and that script does some
other things. So there's a lot of redundancy
and performance isn't so good.
So specifically one of the things that's a problem
when we do a compose,
that basically takes all the software in Fedora.
The packages are already built,
but they're put together into one tree
and then put into the different images
for you making Fedora server,
for making Fedora workstation, and then for making all of our various spins,
KDE desktop, the Python lab, the Docker image, all those things.
That's the compose.
And that's one big monolithic process.
And we were up to that taking something like 18 hours to run.
There's been some improvements in that.
But so now it's down to like 12.
But 12 is a very long time.
So that means, you know, from making a change to getting it actually to a place where our QA can test it takes a day.
And so as we're putting out a release, that can get very frustrating because let's say we want to sign off on a release on Thursday.
Well, in order for QA to have time, that release needs to be ready on Wednesday morning, which means it needs to be kicked off on Tuesday,
which means we basically have, during that whole last week, two chances, Monday and Tuesday, to get changes in, which leads to, you know,
20 changes or 50 changes being put in all at once, which is not very scientific.
And if something messes up, too bad, we have to delay for a week. So getting that compose time down, I think the proposal aims for an hour, which is nicely ambitious, but even getting it down to four
hours. So during, you know, one normal workday, you could start off a compose and then if something
goes wrong, do another one would be a radical improvement. So that's one of the things,
but they're kind of a lot of things in our processes that are kind of like that, where it
seems like taking some time to look at it and focus on
actually improving the tooling and processes would be worthwhile.
Now, I noted too, in the pretty well-written problem statement that's on the wiki, that
it says, currently Fedora can't scale for more community ownership of the things we release.
Only a few people can build and push our releases. Is that also a team retooling problem there?
To me, when I read that, it sounds like perhaps the Fedora project is seeking to be able to
independently release bits of software they cannot currently independently release.
Yeah, and some of it's a process thing.
Some of it's actually tooling because the tooling to do this literally has no permission
structure.
So it's a you can build everything or you can build nothing.
And so having the tooling being a little smarter about who can do what,
not because we don't trust people, but just so you don't accidentally rebuild KDE
when you meant to rebuild XFCE and then it caused trouble.
It doesn't have that fine grain thing.
And again,
because it's all one big compose,
basically it's just a,
you could trigger the compose,
but you couldn't say build XFCE.
Right.
And that's one of the things we actually,
like our XFCE spin didn't get built properly for the final release for
Fedora 29,
which was bad because we've got a lot of people who love XFCE,
and we want that, even though that's not our main offering,
we want that to be successful for people.
I don't know what's the matter with them, but they just stick to it.
They sure do.
You know, you like your interface as it is.
It's very personal, so it's important for people to have it.
There's really only one XFCE user out there I'm giving a hard time,
and he knows who he is.
Yeah, that's right.
Actually, some of the Fedora infrastructure team
actually are XFCE users.
Yeah, go figure.
Close to home.
I bet, I bet.
Well, okay, so speaking of XFCE
and other sort of user-facing things,
what does all this internal
infrastructure process work mean
for user-facing
features? Fedora is doing a lot of stuff. In particular, I'm thinking of things like Silverblue.
Does that mean these are going to be delayed, or will that work continue side by side?
Yeah, no, that's a hard thing with this. And yeah, I think we've done a really good job of
getting onto our six-month cadence and making that fire successfully. So I actually have a
little bit of skepticism about this plan of stopping things.
So I want to make sure that if we do this, we have a good plan in place for making sure
that we actually succeed in the things we want to fix.
And then also that we don't break things for people who have become accustomed to and happy
with having fast Fedora releases and good upgrades.
I mean, that seems like a key point.
If you're going to take this break, it's really got to be worth it.
Yeah.
Because I look at the market right now,
and I see there's going to be some uncertainty
around the future of Fedora, misplaced or not.
But also really, like, on the desktop side,
that could potentially mean skipping a GNOME release cycle,
which is kind of like a hard time to miss a GNOME release cycle right now
because so many great fixes are landing in GNOME.
Yeah.
And we did do this for our 20 release,
and I think we put GNOME in a Copr to update to.
Ah.
Mixed results there.
We could do something like that again.
Flatpaks.
Yeah, put all of GNOME
in a Flatpak.
You watch, it's going to happen.
One of the things we have
now that we didn't
is this modularity system,
which lets us have
two different versions of things.
So one idea
is to make a GNOME module
so users could opt in
to the newer version
using the modularity.
That seems brilliant, actually.
Yeah.
And that's kind of what modularity was made for.
So it's also a good test for, hey, can modularity do something major like this?
Let's make sure it can.
And then for something like Silverblue, which is more on the experimental side, the Silverblue
OS Tree Compose could just pick the newer module.
So Silverblue could have the newer version of things.
So we could have a more conservative and a more aggressive release.
And then the idea would be to maintain the 30 packages for that whole year.
So, I mean, in some ways, perhaps as a server user, Fedora 30 could be a great release because it's going to be, in some ways, an extra long supported version of Fedora.
Is that true?
Yeah, if this proposal goes through.
And yeah, we actually see that in our stats for the Fedora 19 and 20 releases, which ended up having a longer cycle because of this.
Those were very, very popular releases and still continue to, you know, there's a long tail of people who should upgrade but never have,
who are still using those releases. And I think that that extra
year definitely helped with that.
So yeah, that's a side effect as well.
Yeah, okay, that makes sense. One thing that stood out to me in some of the top-level
proposals here is defining a base platform. And maybe this is something
Red Hat wants to see some focus on, but we were curious, what does that mean?
Yeah, so I'll talk about it with a Red Hat hat on for a little bit here.
If you looked at the RHEL 8 beta at all,
one of the things that Red Hat did is split into a base OS and an app stream
where they can have different life cycles.
And things are different in Fedora, but the basic concept kind of is helpful. One of the
things in Fedora, I don't know if you followed all of Fedora's history, but one of the important
good moves was merging together Fedora Core, which came from Red Hat Linux and Fedora Extras,
which was kind of the community thing into one big community project. That was great and really
good for Fedora and helped Fedora grow and become something
that was really a whole community distribution
rather than something that was just a corporate thing
that the community could build stuff on the fringes.
So that was great.
But one of the outcomes of that is that Fedora
is one gigantic repository of undifferentiated packages
where some little toy thing
that I am interested in playing with is important to me,
but if it breaks, like five users will care
and I'm one of them.
That is undifferentiated from something like glibc
in the distribution or Gnome
or the things that if it breaks,
a lot of people will have their systems not work.
And I've been talking about this, you know,
since before I was Fedora project leader, with the Fedora rings idea,
basically making different policy levels for some of our packages.
All of our packages are important,
but some of them kind of need a focus in QA.
And we've never had a defined set of,
we've got some that are like critical path, but it's kind of a loose definition of getting up to
boot. But some packages that we basically say, this is a QA set of packages together that, you
know, maybe if we're looking at doing a longer term maintenance kind of situation, these are the
ones we'd focus on.
If we want to make sure, you know, QA focuses on these things. So sort of narrowing down from our whatever 15,000-package set to something like 1,000 packages that we can really focus
polish on, I think is a good idea. That strikes me too, that it might provoke a few deeper
conversations within the project about what's our focus? what does it mean to ship this set of packages and things like that.
So that could be a really fascinating event to watch.
Yeah.
And I think one of the things, and focus is definitely important, is making sure that we
are focused on this enabling core package set, and that people who do have those other cares
can focus on their part of it
and not have to worry about the base part
falling apart on them,
but they can also focus their energy
on the thing they care about
and develop solutions on top of that
and give that to users.
So we should keep in mind that at this stage,
as we record this episode,
this taking some time to refocus on the tools
is in most parts just a proposal at this stage.
So what happens next?
What process does this proposal go through
to either become reality or get rejected?
Yeah, so one of the key things here is like,
there's not secret room cabals in Fedora deciding things.
So a lot of these things, you're seeing the process in action.
So when somebody proposes it, that means this is an idea that several people have worked on.
One of the things we have in Fedora is the idea of an objective.
And so an objective is something that the Fedora Council, our leadership body, has said, this is an area we think the project
needs to focus on. So we have an objective around the Fedora lifecycle and the Fedora
long-term maintenance possibilities. If you go to docs.fedoraproject.org and look under the project docs,
I can provide an actual link for this.
There's a section about our different objectives there,
and you can find a link to documentation.
So this particular proposal came out of that objective
because there are people looking at sort of these problems.
And so the general process is you will have this community discussion and then the Fedora
Council will talk about it.
And the Fedora Council will probably give a kind of a non-technical heads up, yes or
no, we like this idea.
And the Fedora Council consists of me, the community action and impact coordinator Brian Exelbierd, some elected people from the community, and then also people selected by different committees to be on the council.
And so it kind of gives an overview of the whole, you know, we try to take the input from the whole project and reflect that into a definitive decision.
And then also, because this is an engineering decision, the engineering committee will probably also want to give a yes
or no to this. So that's basically the process this will go through.
It'll be interesting to watch that unfold. And how do you feel about it personally?
As somebody who sits atop of this project and has been watching it now for,
I think it was a record-breaking run.
What do you think?
Is this a necessary delay?
Is this something that's sort of like taking your medicine?
Yeah, so like I said before,
I've got mixed feelings about it.
I feel really good that we've gotten ourselves
onto a well-firing Mother's Day Halloween release cadence,
which has always been our goal,
but we've sometimes slipped off of that.
Every now and then.
Right.
So we hit our target schedule a couple times in a row now.
That's awesome.
So just when we've got that going,
it seems like maybe not the best time to miss it out.
The joke almost makes itself at this point.
I'm not making it.
I'm just saying.
Yeah, exactly.
So I've got some skepticism about it.
On the other hand, I think it did work for us when we did that Fedora 20 release.
And I think the problems are real.
The Compose problem has been something that's been frustrating to me for a while.
So spending some time to tackle it seems worthwhile.
But I also do want to make sure that we keep the concerns,
the upstream GNOME community is concerned.
I'm proud of Fedora as a premier GNOME distribution.
I want to keep that.
Sure.
And I want to make sure that people who are packagers
who aren't involved in all of the day-to-day release stuff
are able to keep getting their work to users
and don't feel like they're blocked for a year
from getting the things they're worked on out to people.
So, yeah, it's a balance.
Yeah, it sounds like it.
Well, maybe we can check in at some point.
Yeah, I'd love to.
I have been following the project with more and more interest.
He is at MattDM on Twitter,
and we will have links to the problem documents
they have in the wiki,
as well as the mailing list and the docs
that Matthew just mentioned there.
Is there anywhere else you want to send folks before we run?
You know, getfedora.org is always a great place to check out
if you don't have Fedora already installed.
Good answer, good answer, man.
Matthew, thank you for coming on the show.
You know, one of the things he touched on there
that is a little more obvious
if you follow the links in our show notes
is this is the result of a larger objective
they set a while ago as a project.
And this is how they, as a project,
think they can achieve the objective
that they set for themselves.
And it's kind of in a broader context.
It's something that,
I know every project struggles
with these tooling issues sometimes,
but at the same time
you're like,
this is such a
precarious time to take off
because...
Everyone else is moving fast.
Really fast.
There's lots of good releases.
Yeah, and there's a lot
of good fixes coming down
the pipe to GNOME
that people are going to want.
So I think they're going to
have to come up with a way
to get fresh GNOME
to Fedora users,
at least I would think.
Otherwise people
might get a wandering eye
and go to a distribution that's maybe shipping patches
with performance fixes or something.
But we'll see.
You know, we have been talking about a predictions episode,
and we actually nailed down a date and time.
Oh, and by the way, again,
thank you to Matthew for coming on the show.
No kidding.
It's really nice to just get it from him.
We don't have to speculate, you know,
and just understand it's a proposal right now.
But yeah, so moving on,
we will be recording a special edition of the Unplugged,
a special edition of the Unplugged program.
Do you remember?
Oh, yeah, I do remember what the dates are.
So we'll be doing a predictions episode,
and I kind of feel like maybe we should do the predictions episode
while it's still 2018.
That seems appropriate. Yeah, I agree.
So instead of doing, see Linux Unplugged gets the holiday hammer. It is on Christmas Day and
New Year's Day, the 25th and the 1st. So it's both are live days. So what we're going to do,
and I know this is difficult for some of you, but I'm hoping maybe it'll open up for others,
is we're just going to move the show one day.
So it'll be on Wednesday for those weeks.
So it'll be the day after Christmas
and it'll be the day after New Year's,
so the 26th and the 2nd.
Same time, just move it one day.
So on the 26th, we're going to do our predictions for 2019.
And if you really have a great prediction
and you have good audio,
I invite you to record a clip and then tweet it to me.
And I will try to include it in the show.
But if you can't make it, but otherwise, our mumble room will be open.
You can be here as part of our great virtual council and make your predictions and pontifications for 2019.
And then you will have world fame if you nail it, because we will try to review them the following year to see how we all did.
So that'll be on the 26th and then on
the 2nd, maybe if you're still
in party mode, join us for the Mumble Room
extravaganza where
we will do what we want to see.
Wishes, reviews of
what happened. You don't have to be realistic
in this here more fun episode.
That one's a little more low-key laid back
like you're not going to get reviewed.
And we're going to be also doing like look back at some of our favorite stories
and be doing like the, if I had a magic wand
and I could make Project XYZ do ABC, I would do it.
And then we can sit around and go, that's a great idea
because it would mean more users come on board.
Like it's a fun thing to do.
So those will be just moving one day from our regularly bat time,
but it'll be on the same bat channel, right?
Is that what the message is?
There's a lot of bats involved, so if you're scared of bats, watch out.
But otherwise, just come.
It's a great time.
You're probably going to get sick of your family anyway,
or maybe you're all by yourself and we'll be your family for a day.
Our plan is to try to have something for you to listen to over the holidays
because it's nice
when, you know, not every
we get like the lowest downloads ever, but
it's like for me, like if I'm ever in a situation
where it's like all of a sudden I have like a bunch
of time in the car because I'm traveling for the
holiday or like sometimes
I have like these weird gaps in my schedule
where it's like been rush, rush, rush, rush, rush for the holiday
and then all of a sudden I find myself without
kids, without my significant other, it's like I got half a day to burn.
And then when nobody's releasing stuff, it's kind of a bummer.
So we thought, let's try to stick with it this year,
and we'll just bump the schedule a bit.
So it'll be out a day later those weeks, but there will be a show.
And if you go to linuxunplugged.com slash subscribe,
you can just grab the episode automatically
and don't have to worry about it.
You can also go to jupiterbroadcasting.com
and check out the calendar
if you forget just when that's happening.
Well, really quickly, one last item on the agenda.
It's that Dropbox quote-unquote hack I've been using.
I mean, ultimately, this is a temporary fix.
In fact, big disclaimer here.
I mean, this could potentially even lead to data loss.
So don't do what Chris does, okay?
Is that fair?
Can we make that disclaimer?
Don't do as Chris does.
But I'm using the Dropbox File System fix project. And the thing I like about this is it lets you run
Dropbox and any damn encryption or any damn file system you please. And I think that's what's
important. It essentially fixes the file system detection in a Linux client to restore the sync
capability. So bear in mind, it's doing some trickery here, and if they introduce something in the future
to the Dropbox client, you could have some problems.
Oh, yeah, as you might suspect,
this is an LD preload trick,
so you have to run make,
you got to build yourself a little.so
from the included C,
and then it provides a separate Python script
that will start Dropbox with the custom library.
So if it's making ext4 assumptions
that aren't just what file system is this, there could be issues.
But so far, it sounds like it's working for you.
Yeah, it's been fine.
And I did have to build it,
and then I had to disable Dropbox
from starting at startup.
And then I essentially just created
a new desktop launcher
and added that to my startup items.
So the Python script is kicked off that way.
It's fine.
It's been working good.
It's been going for about half a week,
survived a couple of reboots just fine. So
I'll do this as I transition off.
I might do what Wimpy did and might
toss it on a headless box somewhere.
And that'll be how I begin the transition to something else.
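For reference, the general shape of that workaround looks something like the following; the repository URL, library name, and script name are recalled from the project rather than stated in the conversation, so double-check them against the project's README.

```sh
# Assumed project location; verify before use.
git clone https://github.com/dark/dropbox-filesystem-fix.git
cd dropbox-filesystem-fix
make                      # builds the small LD_PRELOAD shared object from the included C file

# The bundled Python script starts Dropbox with the library preloaded:
./dropbox_start.py

# The equivalent idea by hand, if you'd rather launch the daemon yourself
# (the .so name may differ in the actual project):
LD_PRELOAD="$PWD/libdropbox_fs_fix.so" ~/.dropbox-dist/dropboxd &
```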
A hacky solution to a problem we
should never have had. Yeah, I've sent them my feedback.
They know how I feel, Wes.
So also a little plug here for the TechSnap
show. TechSnap.Systems, Wes
Payne and myself are doing a show over there
that if you're in the IT field, if you are a sysadmin,
if you work on servers or on the cloud and want somebody to commiserate with,
hear about particular topics like WireGuard, eBPF,
what's some other things we've done recently?
Boy, we did.
We had a great breakdown of the whole Spectre meltdown stuff when that came out.
The show is just famous for doing the deep dives.
And we're gearing it up to do even more.
We'll have more details about that soon.
So check out techsnap.systems
because there's a lot of good stuff going on over there
and even more coming.
And then go check out the Gentleman's Show,
Ubuntu podcast, my favorite show
that's not a Jupiter Broadcasting show,
that's not Late Night Linux.
Because I love all of them.
Careful, man.
I have to be really careful.
It's a little awkward.
But I try.
All of our brothers and sisters.
Go check that out at UbuntuPodcast.org.
And go check out Late Night Linux, too,
because they're doing some great work over there.
Thank you so much for tuning in to this week's episode.
You are always welcome to join us live,
hang out in that mumble room,
chat with Wes and I.
Oh, please do.
We miss you already. You get a
little pre-show and you get a little post-show
in the released version, but there's probably
30 times more than what
actually gets released. And to be honest, we say all the best
things when we're not in the release. Especially when
I'm cursing about YouTube. Oh, it gets
me so fired up. Anyways, that's
all for this week. Check out all our links at linuxunplugged.com
slash 277. I'm at
ChrisLAS. He's at Wes Payne.
Thank you for joining us.
See you next Tuesday. The Unplugged Show.
Oh, man.
277.
Rapidly approaching the 300.
Oh, dig it to know.
Wimpy, before you leave tonight, I have a problem,
and I want to know if you've experienced this.
I think it might be because I went with a GPU dock.
A little bit of real-time GPU dock follow-up.
I've got a vertical monitor and a horizontal monitor at 2K resolution
hooked up over DisplayPort.
And ever since I've done that, my window performance has gone in the tank.
Like on my main laptop, the quote-unquote prime display, it's fine.
It's butter smooth.
But on the external monitors, it's choppy.
It's laggy.
Like resizing windows is jittery.
Even my typing is sometimes stuttery.
It's way worse than I expected.
Have you had this problem?
Is this a...
I have not experienced that, no.
Oh, it's got to be something with that multiple GPU setup then.
I would think it's all using the NVIDIA
because I can manage the panels with the NVIDIA control panel.
So it's all going through the NVIDIA
and it should be able to handle three displays.
Yeah, it should.
I've only ever run two until recently, though, and obviously the third display
that I now have is a panel on the laptop. But no, I've not experienced that. I've only experienced
that when I've moved a game off of the prime display onto, you know, an IGP-powered display,
and that's expected. But you've got one of each, and I've never... I also run
both of mine landscape. I haven't done that mix of portrait and landscape.
Yeah, so, and I've
noticed, like, 3D performance on the prime main laptop display still seems perfectly fine. I can
play a game and have those other displays displaying stuff, and it's good enough. Like, if
I put things that are fairly static, like chats, and even a YouTube video, it's fine. But if I want to resize a window or start typing,
it sometimes just can't keep up.
And I just don't understand what's going on.
And I've wondered if it's a Plasma problem even.
But I doubt it.
But I've heard that there's issues with Plasma and NVIDIA
doing multiple monitors.
I'm not going to switch to GNOME over this
because like I said, it's butter smooth on the main display.
And again, I can't speak to anything other than MATE because that's the only desktop
that I've used.
I think we want to try it, though.
Tell you what, it has made me think about doing a live disk and just seeing.
You probably should.
I mean, that way it's fine.
I'm not switching.
And then what do I do?
We can tweak it from there.
I mean, for science's sake.
I got to know.
Okay.
All right.
Thanks, Wes.