LINUX Unplugged - 403: Hidden Features of Fedora 34
Episode Date: April 28, 2021. The new release of Fedora has more under the hood than you might know. It's a technology-packed release, and nearly all of it is coming to a distro near you. Plus the questions we think the University of Minnesota kernel ban raises, and more.
Transcript
You know what?
Screw the pre-show.
It's not the boss of us.
Yeah.
Or you could make the pre-show just you saying,
I think we're going to skip the pre-show.
Yeah, but that's so cliche.
We shouldn't do that.
We got to stop doing that.
Just admit you're failing, Chris.
No, I could.
I could.
I could come up with a pre-show if I wanted to.
If I wanted to.
I'm sure I could, you know, but I figure, why bother?
You know what?
Who needs a pre-show?
Nobody needs a pre-show.
You know, the pre-show, it's not even the actual show.
So if it was, we'd put it in the show. So we might as well just get rid of it.
These folks just want extra free show. That's not fair.
I know.
Hello, friends, and welcome back to your weekly Linux Talk Show.
My name is Chris.
My name is Wes.
Hello, Wes.
Guess what?
What?
Big news.
This episode's brought to you by the all-new Cloud Guru,
the leader in learning for cloud, Linux, and other modern tech skills.
Hundreds of courses, thousands of hands-on labs.
Get certified, get hired, get learning at acloudguru.com.
Coming up on today's episode, this is one of those episodes.
This is one of those.
It's one of those, really.
You know what?
I don't even need to say more than that, but I'm going to.
It might just be one of the most exciting releases in a long time.
Fedora 34 is out today as we record, and we have a comprehensive review coming up.
But loads of new tech are packed into this Fedora release beyond just GNOME 40, which is big on its own.
And basically all of it that we're going to talk about today
will be coming to a distribution near you very soon.
Plus, as always, we've got some community news, some picks, and more.
So before I get into any of that, I got to holla at that virtual lug.
Time-appropriate greetings, Mumble Room.
Hello.
How's it going?
We've got a strong group today.
Yeah.
Look at that.
Jeez, how'd you all fit in here?
Wow.
You know, this is why we had to get the mega studio, because, you know, all the social
distancing and whatnot, we needed the 40,000 square feet.
And the room for the grills.
Right, of course.
Let's get into some community news.
And this week, we wanted to talk a little bit about this University of Minnesota kernel ban story.
Please do check out Linux Action News 186, where we went into some details there.
But if you didn't know, I want to make sure we cover some of this in this here episode,
because this is a very thick and complicated story with a lot of finger pointing and mailing list posts to read.
And if you didn't know the details, you might not fully understand and appreciate the scale of this problem.
You might maybe, you know, just sort of brush it off as drama or just sort of not even really pay attention.
But there is a real problem here.
Three hypocrite patches, as they
are called by the researchers, made it into the Linux kernel around August of 2020. If you didn't
know that, you should be listening to LAN. Because I think a lot of people right now think that
nothing made it, that a maintainer caught something in April and rejected it, and these patches never
made it in. But that's actually not the case. In fact, there's really no evidence that indicates the patch in April is linked to these hypocrite patches at all.
So it's a great example of why Wes and I go above and beyond to really try to get the facts right.
I contacted Greg KH directly to try to get some of the details as accurate as possible for the story.
And, you know, one thing that I feel like hasn't been properly talked about is a pretty valid concern that was going to be raised
by the research. Super quick recap, Wes, you totally correct me if I'm wrong, just jump right
in. But if you're not familiar with the story, back in August of 2020, some researchers at the
University of Minnesota around that time submitted a series of stealthy fixes that weren't really
fixes, that when combined actually sort of created a vulnerability,
but not necessarily each individually on their own. And three of those patches were accepted by the maintainer of that particular subsystem into the Linux kernel in August of 2020-ish time.
We don't actually have all the details yet. That's one of the things the Linux Foundation
is asking for. And it is a little nuanced because it sort of relied on the workflow
of the maintainers. They weren't looking for the code to actually make it into a Git tree.
They were just looking for the maintainer to sign off on it saying like, yeah, this looks good.
Right.
They never shipped in like a distro or even made it into Linus's tree, right?
It never made it that far.
And I think it's also worth mentioning just as a side note here, my understanding is, again, the university has not given us all the details yet, but my understanding is when the maintainer accepted the patches, the university actually reached out and went, oh,
oh, actually, we just caught the mistake. Here's the actual patch instead, and then sent them
working code. That's their claim. I don't know how true that is. But that is important to understand
that. So then we fast forward to April of 2021, when a PhD student submitted a patch and it was rejected by the maintainer for being just crap code.
And not really fixing anything, right?
They're like, well, you're saying you're fixing this use after free thing, but we've already taken care of that elsewhere.
This doesn't do anything.
Right.
And so there was a little bit of, you know, suspicion about like, why did you submit this?
Because your university has a history here of kind of screwing with the kernel
for your own research. And now you submit me this patch that looks kind of like garbage.
This is either just you not knowing what you're doing or malicious. And so that triggered a series
of events that eventually led to the University of Minnesota being banned from contributing to
the kernel and the kernel team going through and reviewing any patch they've ever made.
And the Linux Foundation sending the university a letter saying,
you need to tell us about every patch you've ever committed to open source projects, period,
and you need to kill that paper.
Essentially, you need to shut that paper down that you are about to announce
in less than a month at the IEEE convention that happens virtually on May 23rd through the 25th.
You've got to shut that paper down.
Because the researchers were writing a paper
essentially about doing these hypocrite patches
about how they could gain the trust
of open source developers
and then trick them into accepting bogus code
to build a vulnerability via multiple patches.
And they were going to write a paper and they did
and they were going to publish it.
It's all done.
It's written.
That's not what the recent news is about.
The recent news is just sort of
perfectly timed so that when that paper would have been announced, well, they're going to have to kill it.
That paper is dead, basically. The news is perfectly timed to kill the news story
that was about to land as a bit of a bombshell, potentially, around the end of May.
And the question that I think the researchers were trying to raise is, in one part, no duh.
If you trick somebody for a while and then you stealthily slip in bogus code that builds to a vulnerability,
I think even without doing any studies, pretty good guess, you could probably get that past a maintainer.
But what about that problem?
It's truly possible it could happen at Microsoft or Apple or Oracle or some other proprietary organization.
Absolutely. But it's not as easy.
But it's not as easy
because there's going to be,
like in the case of the macOS kernel,
about four people at Apple
that are allowed to actually commit that code.
And so you'd have to compromise
one of those four people,
at least their GPG key.
And on top of that,
there is a clear repercussion system in place
if somebody were to make a compromise like that.
It's not impossible, but it's much harder to happen
with a Windows kernel or the macOS kernel.
And then imagine for a moment
if this was something higher up in the stack.
If this wasn't a kernel, if this was maybe Nginx
or something even higher up in the stack in the user land
that really doesn't have as many eyes on it.
Could be, you know, just a common library that you have, right?
Who really cares what gets merged into left pad?
And so what do we do about this problem, Wes?
Because it's there.
It's probably not super critical because in reality it would eventually be caught.
It would be worked out the way the systems and tools work.
We'd know when it was committed and who committed it and every commit they'd ever made.
So there are tools in place in that regard. But doesn't it seem like there should
be something in place, screening or doing some kind of automated checking at best we can to see
if what they claim it does, it even does? Well, you know, to some degree, there is some automated
tooling. There's, I think, the question of can we have more things that help? That's always true.
We have to think about it.
There's just bugs that get merged, both malicious and accidental.
That's a problem that happens in all software and in the kernel as well.
I think we need some more research to help understand, like, what's the scope of this problem?
And how does the maintainership process really work?
I think scholarship around that and what's happening there is worthwhile,
just clearly not in the way these researchers did it.
No kidding.
Talk about messing around with a development team
that has very limited time and resources
and messing around with software that literally ships
in mission-critical, life-critical applications.
There is satisfying intellectual curiosity,
and then there is crossing the line.
And I agree with the kernel team that from the evidence that's available to us at this time, in my opinion, they definitely crossed the line.
And they are right to ban future development until they sort this whole thing out.
They've been sent a letter.
The Linux Foundation has made reasonable requests, in my opinion.
And now it's really kind of the ball's in the university's court.
They released on the 24th of April an apology.
But the apology sort of just says, well, if I'd asked you for the cookie, you would have said no, so I just took the cookie.
And so I know I shouldn't have taken the cookie, but I really wanted the cookie, is essentially what their apology says.
Not good enough.
No, and a lot of folks pointed out that, well, you could have tried to come up with other
methodologies. And then there's just sort of the arrogance or isolation of, you know,
researchers in academia who need to publish, they have an idea, they think they just want to go
about it. Clearly, they didn't give that much thought to, you know, how to get the institutional
review or what level or really what the potential consequences of that work might be. And I think there's also sort of a trust between academia and open source,
you know, because there's a lot of principles shared there and a lot of shared history.
So it's sort of, that's just even more rude. Yeah. And I mean, not to like, you know,
bang this drum too much, but the details are in Linux Action News 186. And the reason why I'm
hitting that drum so hard is Wes and I spent our
entire Sunday getting the details of that
story as accurate as we could.
Listening back, I'm pretty happy with it.
I think I would have liked maybe one more
crack at covering it, but
you know, we
spent all day on it on a Sunday
to get that story right. And so
the details, I think, matter in this one because
there's multiple timelines, there's multiple he said, she said. And so it is a fascinating topic.
And the bigger question it raises, I think, is worth considering.
And we should say, I think that question is being considered, maybe not as much as it
should be, or not as publicly, but you did see in the mailing list, like there were even
conversations around people asking, you know, how much review do you give to these folks?
Some maintainers saying like, oh, you know, someone like that, I'd be very skeptical.
And others saying, we have to admit, many of us are very busy maintainers
and if the code looks good, we might just merge it.
I think there's also a cultural thing at play beyond tooling
and that conversation should happen too.
And while all of this is going on, of course, it's business as usual,
and development continues for the Linux kernel.
And version 5.12 was released this weekend.
And Linux can now run as a root partition on Hyper-V.
There's also more support for that lightweight hypervisor ACRN,
as well as some RISC-V support landing.
Broadcom's VK accelerators are supported now, offloading video work from the CPU to a dedicated chip.
Support for the PlayStation 5 controller,
the Nintendo 64, and more, all covered on LAN.
Nintendo 64 console's in there too, which we noted on LAN as well, but just hilarious.
That thing runs at like 94 megahertz.
You can just swap it in for a couple of your pies, right?
You don't need more than 8 megs of RAM.
You know what I found fascinating though around 5.12, Wes, was actually just some of the details of what make up a kernel.
So if you look at it, in some senses,
Linux 5.12 was one of the slowest development cycles
we have seen in the kernel since version 5.6,
which was not released that long ago, about a year ago.
But there's still plenty of things that landed, like we mentioned,
and you still have some serious numbers here.
1,873 developers contributed to Linux 5.12.
262 of those were first-time contributors.
That's about an average number, but it's pretty neat to see that.
Oh, yeah. No, that is nice to see.
Of course, on the other side, there's some heavy hitters, folks like Lee Jones, who was the most active changeset contributor this time around,
working on compiler stuff, docs, and warnings throughout the tree.
Chris Wilson doing a lot of work on the Intel i915 graphics driver,
which, hey, as an Intel user, I appreciate that.
And Christoph Hellwig continues to clean up the code in the block layer and file systems,
also important work.
Yeah, that looks really good, and it's nice to see the io_uring subsystem get some improvements.
The network subsystem is in there as well as some cleanup code in the block layer and
file systems.
So it's a good, solid kernel in there.
It's a handsome kernel, you might say.
And it was supported by 211 different employers that LWN was able to identify.
So LWN went in and looked at this stat.
They say it's a small decrease, but the top contributor by change sets in 5.12
is Intel at 10.9% of the changes.
Interestingly, though,
by lines changed,
Linaro is at the top with 17.4%.
When you go by the pure number
of lines of code change,
Linaro was really crushing it this time.
I guess just due to a flurry
of code removal patches
that they sent out this time.
All right, keeping things clean.
I like that.
That counts, man.
That counts.
Unknown represents 7.7%
of changes by just looking
at change sets.
And then in the number three slot
is Red Hat with 872 change sets.
Red Hat's also in the number three slot
when you look at just lines changed
with 38,000 lines of code changed just in this kernel.
Oh, my God.
Wow.
That's just the scale, Wes.
The scale of this project and the fact that they managed to ship and ship reliably and produce something usable is – well, it needs to be – it's going to have to be studied by historians at some point.
Well, yeah, right. That's kind of why some of the scholarship is warranted here,
because they incorporate a huge number of changes in a reliable way, in a predictable way. I mean,
you know, we're going to be running a pretty new kernel with 5.11 here on Fedora, and I'm not
worried about that at all. And I wouldn't really be worried about switching to 5.12 today either.
Yeah. In fact, 5.13 is shaping up to be a big boy. Torvalds wrapped up the announcement of 5.12 by kind of prepping people for the size and scale
of 5.13. He says, quote, please spend a bit of time running and checking out 5.12 before we start
sending a merge request for 5.13, because despite the extra week, this, 5.12, was actually a fairly small release overall. And judging by the linux-next tree, 5.13 is going to be making up for it.
And in 5.13, as just a reminder, we're going to get initial support for Apple's M1 chips,
the addition of a new wireless WAN subsystem, more RISC-V support,
and Intel's standalone GPUs,
if they ever ship,
well, 5.13,
we'll have support for them.
Just can't wait.
Yeah.
You know what, Wes?
You look at this and you go,
there is so much happening in kernel land
that by the time you could actually get caught up,
like by the time you get to a distro,
it's like a whole other generation of hardware is out.
The kernel team, though,
just continues to just crush it.
And to get, like, again
some of this RISC-V stuff and Intel stuff
in and the M1 stuff in
years, maybe, in some
cases, before anybody needs it, is going to
lay some groundwork for the future.
Can't stop, won't stop.
Linode.com slash unplugged.
Head over there and check out
Linode. They're our hosting provider,
and everything we've built in the last couple of years is on Linode.
So you go to linode.com slash unplugged
to get a $100 60-day credit towards your new account.
And, of course, you support the show at linode.com slash unplugged.
Unlike entry-level hosting providers or the big clouds like AWS
that try to tie your hands and lock you down,
Linode gives you the tools to get the most out of their crazy fast systems.
You get 11 data centers to choose from, and every service level is backed by the best customer support in the business.
And that really matters.
I mean, that really matters.
It makes all of the difference when you're in a tight spot.
And it's not just like one great thing like the support that makes Linode fantastic
and our go-to choice for anything we're building.
It's all of the great things about Linode coming together
that make them special.
And at every step of the way since 2003,
Linode has asked themselves,
how can we use Linux to accomplish this next task?
How can we use Linux to do what we want
in a way that people are not yet using it?
Their love and dedication to that
is baked into the product.
And as a long timer, I can tell it.
And that's something I really love about Linode.
And if you're not catching Linode's new tutorials
by HackerSploit,
you're missing out on something great.
Some chances to learn.
I'll put a link in the show notes
to a video on learning the various tools
and commands for logging and monitoring. These are some great basics. And if you can learn that from a YouTube video in like
15 minutes, well, that's going to change your server game. So get started by going over to
linode.com slash unplugged. Get that $100 for your new account. And try this stuff out. Try out the
object storage. This could be an amazing way to get your configs, your server stuff offsite in a system that's in the cloud,
that's reliable, that's fast,
but doesn't require running an entire server in front of it.
Go build something or maybe learn something.
With that $100, there's a lot you can get access to.
So go to linode.com slash unplugged.
There's a lot of ways to host something.
And there's a lot of various companies
that will do it for you.
Go see why we choose Linode every time.
Linode.com slash unplugged.
So Flatpak has a new version that's in the works.
So you have an interim release as they develop stuff,
and then we're going to have the final release
that is the actual stable version for end users.
And the development version is 1.11.1,
and the release version that we'll actually get our hands on eventually
is going to be 1.12.
Now, why am I telling you about this?
Because this is actually the first time
we've ever covered a Flatpak interim release on the show.
But this time, they're taking
a few steps towards something kind of cool.
Yeah, they have some notable feature
changes already in just
point one here. One of which
that's worth mentioning is allowing sub
sandboxes to have a
different slash user or
slash app. Why
would you want that? Well, right now,
initially, it's being used by the Flatpak
Steam effort to launch games within its own container runtime, showing up with a replaced
/usr. That's why they need the new feature. Basically, the goal is to be able to handle the
Steam Linux runtime within a Flatpak sandboxed environment and sort of merge those two systems
together. Whoa, my mind is blown by this.
And it's not just that too,
like Flatpak's also working on support
for better command line text user interface type programs
like the nano text editor was specifically mentioned.
So that's pretty great.
To make it clear here,
they're making it possible to support
Steam's pressure vessel, right?
They're not using pressure vessel,
they're making it possible.
And that's Valve's project to put Linux runtimes in containers to make old libraries and whatnot
available to games, even on newer OS releases.
And unlike certain other use cases for containers, that one, the pressure vessel, the pressure
vessel stuff is just for compatibility with old games.
They're not really trying to get security right.
They're not really bothering about sandboxing.
I mean, the idea is that you could ship this stuff inside like a Steam flat pack
and have support for these containers and do all of it with bubble wrap and a universal package, I suppose.
The idea is pretty neat.
It feels a bit like Turtles all the way down to me, but I understand that it's necessary
for compatibility.
Really, it's just exciting to see sort of these technologies both working.
And I mean, I'm already using a lot of the Flatpak stuff anyway.
It seems like this is a good sort of test of using it in anger, making it work for more
and more use cases and just ironing out the whole setup.
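To make that a bit more concrete: from inside a running Flatpak, child sandboxes are started with flatpak-spawn, and this 1.11 work is about letting that child see a different runtime than its parent. Here's a minimal sketch of the shape of it; the --usr-path and --app-path option names and all of the paths are assumptions for illustration, not confirmed flags from the final 1.12 release.
  # From inside a Flatpak app (say, the Steam Flatpak), start a child sandbox
  # whose /usr and /app come from a different tree, such as an unpacked
  # Steam Linux Runtime, rather than from the parent's own runtime.
  # Option names below are hypothetical placeholders for the 1.11.x feature.
  flatpak-spawn --sandbox \
    --usr-path=/path/to/steam-linux-runtime/files \
    --app-path=/path/to/game \
    ./run-game.sh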
Okay, Wes, this next one here, it's kind of one for the virtual lug a bit.
I think we've kind of got it wrong.
I think we've all been worried about WSL,
especially now with WSLG, which supports GUI Linux applications.
Right, yeah, everyone's a little worried about that.
Yeah, we're all worried about it.
But if you look at Linux's true successes in the market,
at scale,
it's the server, right?
Like the desktop,
as much as we love it,
the desktop doesn't even really show up on the radar, right?
It's just a teeny, teeny, tiny blip.
But the server is like this.
It's this massive
worldwide phenomenon.
So even if WSL were to
rob all of the Linux desktop market,
every single Linux desktop user, which is never going to happen,
but just for the sake of argument,
if WSL with WSLG now on Windows 10 or Windows 11 or whatever,
one day were to take every single Linux user and get them to convert somehow,
it wouldn't actually radically change the market dynamics
of where Linux is a powerhouse, really.
It wouldn't really change the server dynamics.
So WSL can't really harm Linux.
In fact, if anything, it's probably just going to mean more Windows developers are writing server-side Linux applications.
It's probably going to actually further the Linux server dominance.
Maybe it's not great for the desktop, but overall, it's going to be good for Linux.
But AWS and companies like them, like other companies that have these app platforms.
These sneaky cloud services, huh?
Right, or these serverless services that are working to essentially abstract Linux away.
I think they take this argument that we used to worry about with Webmin or services like Cockpit
that, oh, the GUI makes it so you don't even have to learn Linux, and you don't even know how your
system works. Well, that seems like a quaint concern compared to how things like
serverless application platforms work now, where everything is simply an implementation detail
behind the scenes, and the user and the developer never have to know any Linux,
they never have to know a single command line,
they don't have to know the name of any of the open source applications under the hood,
making it possible.
And this continues to be what all of these platforms push.
It's the primary marketing point of DigitalOcean,
it's the primary marketing point of AWS to small to medium business segments.
It really is true.
That's what, I mean, at the day job, same thing.
I mean, there's a lot of rigmarole I have to do to even get something close to, you know,
Docker exec into the container, despite all of it being running on Linux and powered by container tech.
Right.
And so I think while we are kind of looking over at WSL, worried that it's going to eat our lunch,
it really would only take companies like Amazon
and one or two others to flip a switch and start changing the implementations behind the scenes,
as long as they're still executing the code that end users are expecting, the customers are
expecting. I don't think they're going to care if it's a Mac on a power PC chip executing the code
as long as it runs fast enough and gives them the output they want at the price they want.
The Fuchsia subsystem for Linux, right.
So I actually think that these companies
that are abstracting Linux away
are more dangerous than WSL is.
They're more of a risk to Linux's actual core strength
and position in the market
because you can kind of come in
and sell on compatibility and ease.
And then over time, Amazon could switch it on the back end to whatever they want,
or Azure or DigitalOcean. It could be any one of them. They could just flip a switch one day.
And now it's running on their new platform, but all of the compatibility for the end user is still
there. And so I wonder if Amazon is essentially over time in a position more and more
to kill server Linux,
which is actually the market of Linux that matters.
And I wonder if anybody in the Mumble room has thoughts.
I don't think that they're going to do that.
Like, if you want to talk about anyone
who would have any incentive
to do that sort of thing,
it'd be Google.
AWS, you know,
I know AWS is often the boogeyman here,
and I don't have much skin in this game at all,
but AWS actually makes their own Linux distribution
derived from Red Hat Enterprise Linux.
They actively contribute to Fedora and the Fedora ecosystem
and other distribution ecosystems.
There's folks involved in SUSE Linux distributions, Debian, and so on. And they even work with FreeBSD people. I think, of the big cloud providers, AWS is actually the most open source friendly and the least likely to screw over the Linux ecosystem. People remember how critical it was that Fedora supported Xen properly for them to even launch
EC2. Fedora was the first Linux distribution that was available there. And there's a strong
relationship between AWS and the Linux community as a whole. I don't think we have the similar
strength of that relationship with the other cloud providers. And that's where I tend to be
more worried.
Microsoft has every incentive to do that because they build an operating system platform that is directly competing against Linux on all fronts. And they're winning on one out of three
fronts. And the stuff that they're doing with WSL makes it a lot easier to chip away at it.
With Google, they're making Fuchsia.
That is their project.
Fuchsia has the ability to, if not now,
will eventually have the ability to emulate enough
of the user syscall interface to be able to run Linux applications
in the same way WSL1 did.
But in server workloads, the scale of syscalls
and stuff that's being used is considerably more limited.
And with containers, you already have so much filtering going on that you kind of have a good idea of what kind of surface you need to cover to be able to make stuff work.
And so if you target it that way, you can do exactly what you're talking about.
So AWS isn't the one I'm worried about.
I'm worried about the other two.
Karpino, you think you agree because Google's Fuchsia project could be this bait and switch swap? Yeah, I think so. I think it's not likely to happen anytime soon,
but because Fuchsia is offered under a different license that is not GPL, it could provide a
different kind of value for some companies. Therefore, the idea of investing in it may
sound attractive for some people.
There's kind of two things going on. There are maybe some competitors to Linux like Fuchsia
coming up. But then also, I think the abstraction layer is just continuing to
raise like it does, right? Like we are programming to higher standards. And so Linux is just becoming
more and more truly of infrastructure as it always has been. It's just that the infrastructure is just getting farther away from us
as we go up the stack.
Yeah, and somebody has to set up those systems
to host those applications serverlessly.
Yeah, right.
There'll still be a lot of cloud providers using them,
at least for a while, until it makes sense not to.
And with Linux, they don't have to do that,
and Linux means they don't have to reinvent the wheel.
That seems like a good idea.
Hey, Wes, let's duck over here,
a little production side meeting.
I think we should probably do this privately, though.
The cone of silence.
All right, this is a little awkward.
So before we mention this on the show,
you know, in the past,
I've promoted a lot of the Humble bundles,
and they seem like a pretty good cause, but...
Yeah, I mean, I've definitely bought a few.
Yeah, yeah, I've bought more than a few.
But I think they got, like, bought out.
And I think they're making, like, weird, awkward changes.
And, like, they don't allow you to contribute your payments
to only devs who supported Linux anymore.
And they're kind of, like, restricting how much you can give to charity now,
and like no more than 15% now.
They got a cap.
I don't know.
That's awkward, man.
Do I say something on the show?
I think you might have to.
We should observe it at least anyway, you know?
They've been pretty consistent over the years, so this is a big change.
Yeah, it's awkward.
It's definitely awkward.
Okay, all right.
The cone of silence.
Oh, dang it.
I left my keys in the cone.
Oh, Wes, dang it.
Well, we'll have to go back in later.
But first, we have a public service announcement.
The Humble Bundle pricing situation is getting a little weird.
If you know what's going on, go to linuxunplugged.com slash contact to let us know.
Yeah, it seems like they've forced a certain set
of default splits and you just kind of go with that and you don't have any choices, which was
all the fun before, really. And so I don't know why. Maybe it just wasn't sustainable. Maybe
the new owners need some different revenue goals. I don't know what it is, but I wanted to let you
guys know that they have capped the amount you can give to charities at 15%, and they have changed
it so that you can no longer
adjust your payments so that only developers
who supported Linux
would get paid, which, you know, I understand that
everybody should get some value for their product,
but I really liked trying to give the
bulk of my contribution to Linux developers
to kind of vote with my wallet.
So Wes didn't want to let you
guys know. He said, don't tell you, but I thought you should know.
Well, I thought it would really get him down.
You know, there's enough bad news these days already.
That's true. That's true.
I have some good news.
MailRoute.net slash Linux.
Try out MailRoute today and get 10% off the lifetime of your account
and start with a 30-day free trial, no credit card required. That's right. MailRoute's back for another episode because it just is such
a great fit. We heard from a bunch of you last week that tried out MailRoute. They've been doing
MailRoute for 24 years. Yeah. They've been focused on one thing, providing cutting edge email
security. And you know, I respect that. MailRoute protects your email server with a suite of services designed to remove spam and viruses and prevent debilitating downtime.
And, you know, with our audience who likes to host stuff on their own,
you guys know sometimes it's tricky with your ISP or for your own security.
Maybe you want to run SMTP on a different port.
Or maybe for some stupid reason,
your perfectly legitimate email server has ended up on a blacklist
that you are constantly fighting. Oh, no more. MailRoute solves all of those problems and more. And they make it
super easy for your business to migrate if you use Google Apps or Office 365. It's like one click.
But really, even if you don't, like I just set it up with our mail server that we built over the last couple of weeks,
and it's crazy straightforward.
If you know anything about managing a mail server,
you can make this work.
It's probably, you're probably done in 10 minutes,
assuming you know the logins to your DNS
and to your mail route account.
But the one-click migration is really sweet,
and they do have API-level integration
for getting information in and out of MailRoute,
which I really appreciate,
especially if you want to make sure
that you only allow mail from certain accounts
and you want to sync it with your master mail server.
Their API is great.
If you do business with the federal government
or you're a contractor
where you have to meet certain types of requirements,
well, they got you covered there as well.
And as an admin,
you're going to love the fact that they have real-time log searches,
which was super useful when we were setting up our mail server.
And you also can queue mail up at MailRoute for up to 15 days or whenever you release it,
which is perfect for covering you during an outage or maybe just a window for some maintenance.
Like before that, I don't know what I would have done.
Like what, just take the mail server down
and hope nobody emails us during that time?
This is so silly.
And so having something
just in front of your mail server,
it's just a higher quality,
higher production grade.
And of course, they know how to do this stuff.
So go try out MailRoute today
and get 10% off the lifetime of your account
and get a 30-day free trial by visiting mailroute.net slash linux
to protect your business and your email server.
MailRoute makes email better.
mailroute.net slash linux.
Fedora 34 is a big release.
It's huge, and there's a lot to talk about in here.
So much to cover from both the desktop side,
but also in the other spins and core technology.
Whew, Wes, I don't even know where we start.
Well, before we get too far along,
maybe we should take a moment to address the long-term future of Fedora.
I think it's safe and probably more secure than ever
even after those recent CentOS changes.
But I know there's a lot of people
that have been worried about that,
thinking maybe, you know,
well, look at the changes happening to CentOS.
Could something like that happen to Fedora?
But I mean, it just, I don't see it.
I don't think it really makes sense.
It seems like Fedora's role really is needed, right?
It's the place where everything happens first,
where these things get integrated and tested and tried out,
and the needed development can actually happen.
Packages can be tweaked, bugs fixed.
Core system stuff.
Yeah, new approaches to both the server and the desktop.
They all happen there, and Red Hat needs that.
Right. They want it to happen there.
It seems like it's now sort of integrated
to the process of development in a way that is,
I think you could say it's codified.
It's clear that for RHEL to be a successful product,
when you look at it from a corporate org chart standpoint,
Fedora is square one.
It's like you start at Fedora,
and for the end product that they
make all the money from to be successful, Fedora has to be successful. And this new arrangement
with CentOS Stream sort of codifies that, I think. Yeah, exactly. I mean, it makes it more clear,
at least, to some of the stuff that was happening internally. Now you can just see it laid out in
this pipeline of how things get created and how those changes eventually make themselves into a Red Hat release. But regardless of the details, I think we can just
not worry. Don't be scared to try out Fedora 34. It's great. We should probably just highlight
all the awesome things going on and maybe a few of the things we don't love.
Yeah. And I think it's worth mentioning that there are many spins of Fedora. So we're going
to talk at first a lot about the workstation version, which ships with GNOME Shell, but there's a Plasma spin, there's
a server spin, there's a lot of different versions here. And some of it is core technology that
applies to all of them. If you want us to cover another one specifically, like a specific spin,
totally let us know at the contact page. I would love to, I just don't want to overdo it.
So if there's something you'd like us to look at specifically in this realm, please let us know, because otherwise
we are just sort of restraining ourselves and trying not to overdo it. You know, like the server spin,
I think that's particularly interesting, this release potentially, because I believe,
correct me if I'm wrong, guys, but I believe that the next RHEL is ultimately going to be based on Fedora 34.
So what happens here is particularly interesting for several reasons
beyond the immediate reasons.
And no matter what variant of Fedora you use,
you're going to get the latest in what the open source world has to offer.
And really, all of what we're about to cover
will show up in just about every distro near you soon, at least most of it.
So let's start with the big headline feature in the workstation spin,
which is GNOME 40, which we've talked about a little bit before.
Yeah, I mean, it's not new in the sense that it was already released,
but the first time it ships in a big-name distro, that's something of a milestone.
It is, and it's the first time you're going to really see it land in front of a lot of end users.
And it's predominantly recognized for the change to horizontal workspace layouts,
which is similar to how elementary OS has already been doing it.
It's how I actually have my Plasma setup configured most commonly.
And it's a lot how Mac OS has done it for several releases.
And I think how Windows 10 does it if you enable virtual workspaces,
where everything is left to right, essentially.
And once you learn it, it feels fast and natural.
And actually, because it is so common with the other desktop platforms,
it makes moving between all of those a little less frictiony,
you know, a little smoother on the brain.
I like that a lot.
But I think the other thing that hasn't really gotten a ton of appreciation
in GNOME 40 is GTK4.
And GTK4 is like at step one of getting awesome.
And each iterative release we're about to get, it gets even more awesome.
And the awesome that I'm talking about is performance.
I'm talking Vulkan rendering performance.
And it's looking real good.
And GTK4 apps snap.
They really do.
Yeah, man.
And when you combine that with GNOME Shell 40 itself,
that's a nice little package, right?
I mean, I've just been playing with 21.04 for a while
and was thinking about keeping it around.
It already felt much faster than the Plasma desktop
I'd been using before.
But GNOME 40 is just the next level.
Yeah, and I've been impressed at how fast extension developers are updating to support
GNOME 40.
They have to explicitly update their extension.
Now, there are a few cons that I've noticed.
And I want to preface these cons with, I suspect they're all going to be addressed in future
releases.
But the reality is, it's not as easy to get a quick overview of what apps are on which
workspaces.
In the vertical layout, you would have right there like a film strip on the side of your screen,
and you would see all the different application windows across all of your desktops at once.
With the horizontal layout, you see like 1.3 desktops at a time.
It's just not quite as efficient, but it's so quick to slide through them,
and they do have little tiny, tiny, tiny previews at the top that kind of give you an indication.
Ah, right. That where-did-I-put-that-terminal problem?
Yeah, that app I launched two days ago. Also, it's kind of hard to move the workspaces around
themselves. You know, say you start a chat app, maybe you've got two chat apps on one screen.
And on the second desktop, I open up my browser and then I decide I want to swap those two things. So the browser's on desktop one and my two separate chat applications are on desktop
two. The only way to do that right now in GNOME 40, instead of just like being able to grab the
virtual workspace, the whole workspace itself and move it, you have to like reshuffle all of the
applications. So like move all the apps over to one screen and then move the other apps over to
the other screen in these tiny little boxes at the top of the screen in this little bitty, bitty preview,
you just can't swap the workspaces around easily.
But when you do get that layout right, after you spent the day with it and you figured
out, okay, like these apps on these screens and all of that, man, it rocks.
And it rocks, really.
It rocks in a way that like feels super polished.
And the only thing it leaves me wishing
is that I could then somehow save and restore that layout
so I just log into GNOME Shell
and my applications always open up on those virtual desktops.
Right, something that Plasma makes pretty easy.
Yeah, but man, is it smooth.
It feels very, very professional.
So that's GNOME Shell 40.
Really, overall, pretty much a positive take.
I'd like to see some work with additional external monitors, but I know that stuff's coming.
I do think you hit on something that I noticed as well. It is really great, but it is clearly
the start of a development chain. You know, there's some new ideas that are being explored
and have been recently implemented that are still getting worked out and that changes will happen to.
And that really felt contrasted to 21.04 sticking with the previous release,
which, you know, they'd polished out a little more of those things.
So depending on which way you like to interact with your desktop,
that might be one way to sort of choose,
which of these new distros do I stick with?
In 2007, Fedora 8 was released.
And one of the headline features of Fedora 8,
I listened to my review this morning
from Linux Action Show like 67 or something.
Amazing.
Yeah, was Pulse Audio.
That was the release of Fedora
that they switched to Pulse Audio
and I think everybody knows how that went.
In Fedora 34,
this is the release
where they switched to Pipewire by default.
And it is a completely different migration.
It seems to be far more successful.
I mean, it's only just started, but at least in my testing,
you basically don't notice anything's different unless you use some,
there are picky apps, there's still some pulse modules
that are getting implemented here and there.
But by and large, it just works.
Not only does it just work from an end user experience,
but the project has been much more successful
in reaching out to the individual, quote unquote,
stakeholders, if you will, and getting them on board.
And even making changes, like I think in particular,
there was a lot of feedback from the Jack community
that influenced some of the design decisions with Pipewire
that meant that as Pipewire went along,
instead of turning people off and pissing them off
and kind of creating a divisive atmosphere,
it created this momentum of support.
And we started picking up application support and library support
and distribution support and Linux media support.
And it all kind of really worked well.
It was a good community management and code management.
Which is what you need if you're going to try to be the unifier, right?
I mean, that's part of the promise of Pipewire,
is all of this stuff happening in one go,
your Pulse and your Jack, and you're also all happy together.
And in particular, one thing you get out of this now,
if you're using Jack, is way better and easier access to Bluetooth devices, thanks to Pipewire. Yeah. And really just, I think,
an easier time kind of getting up and going and configuring Jack applications. It sort of solves
some of the fundamental plumbing that you used to have to worry about and makes it just all of a
sudden, like these applications that have been around for a while, even easier to use in some
respects.
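If you're curious whether your own Fedora 34 install is actually doing all this, a few quick checks make it obvious, since the Pulse-compatible server identifies itself as Pipewire to the normal client tools. A rough sketch; the version number and the qjackctl example client are just illustrative.
  # The PulseAudio-compatible layer should report PipeWire as the server
  pactl info | grep "Server Name"
  # e.g. Server Name: PulseAudio (on PipeWire 0.3.x)
  # Live view of the streams and devices Pipewire is managing
  pw-top
  # Run a JACK client against Pipewire's JACK replacement libraries
  pw-jack qjackctl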
It is neat as a moment right now of Pipewire happening.
You know, I think we all had a lot of confidence in the project
just because of the folks developing it
and their history and skill
and just sort of the previews we'd seen along the way.
And we've certainly covered it a lot on this show.
But the fact that it is already deployed in 21.04,
just not used for audio there yet,
and now shipping with audio in Fedora 34.
I don't know, if this goes well,
it seems like Pipewire is here to stay after this.
I think this is the part of the technology stack
of this release that you and I are the most excited about.
And we don't know what the line is.
Should we talk about it a lot?
Or is it like inside baseball podcaster stuff?
And so we just cover the high details and move on.
So let us know.
That's another area we'd like feedback.
But something that's a bit of peace of mind for me in this release
is that this version of Fedora ships with X Wayland standalone.
This is nice because you've probably heard that X.org releases are a bit unmaintained.
The current upstream release has been stuck on the 1.20 branch for years
with no real foreseeable major update, at least.
But what they've done
with this standalone Xwayland package
is they've built it from the Git snapshots
of current code upstream
rather than the stable branch,
so where the fixes have kind of landed,
and they've broken it out from the monolithic X.org server releases.
It's nice to see that,
and it could probably even, I mean,
not only could it result in better performance,
but maybe even in some use cases, better battery life
if you're not running any X applications now,
this won't be running at all.
Yeah, definitely.
And it just sort of, you know, it wasn't accessible.
There were fixes going in,
and that was the main part of the sort of X repo
actually seeing any development.
But when they weren't being shipped,
no one had access to them. You don't want to just pull from that without having tested it. So it's nice to see
that there was enough developer effort to sort of bootstrap this, keep it alive and functioning
nicely. Yeah, I bet you'll see other distributions follow this particular suit. Another nicety, and
for several reasons, is that Wayland is now the default for the Plasma spin. So, you know,
Plasma's Wayland support has been getting pretty good and pretty much daily drivable in 5.20 and
even better in 5.21, which is what's shipping with Fedora 34. And so it's a great kind of
milestone that was set because when the Fedora project said, we're switching to Wayland by default for our Plasma spin, the Plasma project saw that and realized, we got to double down on this code.
We got to get this shippable and working. And not in a bad way, like in a very kind of cooperative,
well, this project set a goal and we're going to work hard to help them succeed at that goal. And
it's going to be good for our project too. And it was a good example of how Fedora 34 played a role
in a wider ecosystem improvement for Wayland support.
Yeah, you can really see that even just in the change here
on their post about it on the Fedora Wiki where they write,
Fedora has long been a leader in advancing the adoption of the Wayland protocol.
Much of the quality of Wayland for GNOME can be attributed
to the work done by the Fedora Workstation Working Group.
It's now the KDE Special Interest Group's turn to do the same for the KDE platform.
So I think there really is this sort of shared responsibility and help and goal to sort of, you know, push things over the hill and get it nicely working now that we've got stuff like NVIDIA proprietary support working, screencasting, middle click paste, all the sort of, you know, 90% little paper cut stuff that needed to be in there before you could really ship it.
Yeah, there's still edge cases.
You know, the other day I had this problem where my, for whatever reason, until I rebooted, my Wayland applications could not copy and paste to my XWayland applications.
It's a problem that's been solved, but it just started hitting me the other day.
And so it's not all there,
but I am such a performance nut
that I will live with those paper cuts
to get the smoothness of Wayland.
On my X1 Carbon with the full Fedora stack,
it's so damn good that I think you could hold that up
as an example of a premier Linux experience.
And it makes me really excited for this technology to land in other distributions as well.
I also decided to give it a spin on something a little different, Wes.
I tried out the ARM64 image on my Raspberry Pi 400.
Oh, yeah, right.
There's a new aarch64 KDE Plasma desktop image available this time around.
Yep.
What did you think?
You know, it's not super fast, Wes.
It's a little bit faster, I felt like, than the GNOME shell images,
which is probably not too surprising,
but not fast enough for daily use.
Not fast enough at all.
The command line interface, perfectly fine.
If you wanted to use this thing to host some file services for yourself
or make it a LAN
server, fine. Great. It'd be
good. But as a desktop environment,
Wes,
it's just not quite there.
We're not there yet. Or maybe you needed something
more lightweight, I guess.
And I was running it from a
SSD over USB 3, not from an SD
card. So I was kind of giving it a performance
advantage. You were trying, yeah.
And the Pi 400 is slightly faster than the Raspberry Pi 4 vanilla.
Someday those things will be a regular desktop.
What I really want, actually, my goal,
and I think they're going to get there eventually,
but my goal is to have a Pi device of some kind
that is on 24-7, that's logged into all my stuff.
So if my desktop ever breaks or an update goes haywire, or I want to reload for a day for some
crazy thing we're doing on the show, I want to have like an appliance that has my chat apps,
my email, and my web browser ready to go for me all the time. Just like a mission critical console,
you know, that's in my
office that takes up very little space. And the great thing about the Pi 400 for this job is that
it's the keyboard and the computer in one, and as silly as that sounds, it means it takes up a lot
less space because I just have an HDMI cable and a USB-C cable and a mouse, right? It's simple.
It's great. I actually do Ethernet too, but it's really nice.
So I'm hoping, and I honestly just,
I think maybe Manjaro has invested more in this area.
They've spent more time here.
But the Manjaro versions on the Pi 400 are significantly better.
Like, absolutely, I can do that job with Manjaro
or Ubuntu 20.04.
Okay, so maybe there just needs to be a little more tuning.
That's hopeful.
Could be, could be. I learned a valuable lesson, though,
thanks to Neil, who responded to a
quick question I had during the week.
To make it work, you need to
really, for the best time, I mean, there's probably
a thousand ways to do it, but you really should use the
Fedora Media Writer to make
the whole thing actually bootable, and there's some
cool options it gives you. Some
nice-to-haves, like turn off the boot splash so you can see all of the output or turn on SSH or some
options that you might use if you didn't have a monitor hooked up. So it's a really nice tool.
Also nice to see that you can use it from the Mac and Windows too, if you need.
Oh, really? They have, oh, okay. I did not know that. So they have Mac and Windows versions of
the Fedora Media Writer? That's great.
Good for them.
All right, well, moving down the stack a little bit,
another kind of headline feature here is Btrfs transparent compression.
And if you're like me and you are around during the drive doubler days of the DOS era,
this is a totally different beast, and it could be particularly great for SSD users.
So where do you want to start, Wes?
Well, maybe we should start and say, this isn't a new feature for Btrfs, but what's new here is Fedora adding it on by default.
You could add it if you'd like, but I don't even think it's a default.
Just when you make a Btrfs file system, you have to manually go in there and add the
mount flags to say, hey, do compression.
So, you know, like with a lot of things in Fedora, this is some trust in saying,
this makes sense, it works in enough use cases that we think all of our users should have it,
at least if they're using Btrfs.
I'm going to use this. Totally think this is a great feature.
But it did cross my mind, like, what if something goes wrong?
Like, am I risking my data here?
Sounds like they checked that out as part of this change.
And as far as we know, known compression-related bugs are just kind of cosmetic.
Nothing that is data loss.
And this has been in the file system for a while.
Seems like it just works.
Okay.
Yeah, I know it's been in Btrfs for a while. The change here is they're turning it on.
And I say good on them.
You know, I say, too, if you're on ZFS, you should consider doing the same.
And they went with this compression algorithm because I assume they must have looked at the impact on the CPU and disk tradeoff and decided, OK, this is the right one to use.
Yeah, they're using Zstandard.
Then they picked a specific default compression level because it's tunable here and you can kind of choose different parameters, you know, speed and how well does it compress and kind of pick the one that made the most sense in terms of like CPU usage and memory and disk space savings.
What do you actually get out of it?
It's just so neat that we, you know, because of the way our architectures are and how fast CPU are and how slow it can be sometimes to talk to the disk, even a fast disk just makes sense to compress things first and you don't ever have to worry about it.
Right, just get it to the CPU, you know?
Just get that data to the CPU and then let it do its thing.
And this is probably available on the distro you're using now.
You could, you know, make a partition and use compression today,
and you probably should consider it.
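For anyone who wants to flip it on themselves, it really is just a mount option. A minimal sketch of what Fedora 34 is doing, using the same level-one zstd setting from their change proposal; the fstab line is illustrative, and your UUID and subvolume options will differ.
  # One-off: remount an existing Btrfs filesystem with zstd compression
  sudo mount -o remount,compress=zstd:1 /
  # Persistent: add the option to that filesystem's entry in /etc/fstab
  UUID=your-fs-uuid  /  btrfs  subvol=root,compress=zstd:1  0 0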
There's just one thing to remember.
It's kind of tricky when you're checking your free disk space going forward
due to the nature of transparent compression.
It's transparent.
So utilities like du are only going to report
the exact uncompressed file space usage,
which is not the actual space that they take up on the disk.
So it gets a little tricky.
I assume there must be tools, Wes?
Yeah, Fedora Magazine had a good write-up about this feature,
and they recommend the compsize utility.
If you're curious, of course, that'll be in the show notes.
And they've also got a good tip if you want to go back
and retroactively compress older files
if you've just added this mount option now,
because it only applies to future files.
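Here's roughly what both of those tips look like in practice, assuming compsize is installed from the Fedora repos; the /home path is just an example.
  # Install the tool and show real on-disk usage vs. uncompressed size
  sudo dnf install compsize
  sudo compsize -x /home
  # Retroactively compress files created before the mount option was added
  # (note: defragmenting can break reflink sharing with existing snapshots)
  sudo btrfs filesystem defragment -r -czstd /home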
Now a little bit further up in the stack,
so in the SystemD layer,
I thought we talked about this feature in episode 351,
but we're getting the systemd out-of-memory daemon now, systemd-oomd, which I guess is different than the earlyoom we've talked about before.
Yeah, it turns out there's a lot of options here.
Yeah, so last year at this time, I guess Fedora 32, it would have been earlyoom that shipped in Fedora.
And that was a big change, and we talked about it.
Of course, Facebook has also done some work in this space, which we talked about. They were interested and also helped get new statistics, like the memory pressure PSI metrics, into the kernel. So they've got a tool that uses that, also kind of enabled by cgroups. And now systemd is in play. Because,
you know, if you have a daemon, it's doing something on your Linux system, you better just do it in systemd.
It's going to work out better.
Managing processes does make sense, right?
It does.
It does make sense to have that in systemd.
Yeah, and it's nice and smart and integrated.
It can do stuff per cgroup, lots of cleverness with fancy new kernel features to try to get this right
and do a better job of not killing stuff that doesn't need to be killed,
but also trying to kill things early enough that it actually helps save your system.
And I actually, I have not looked at this myself,
but I have been led to believe this is simpler than,
at least from a configuration standpoint,
than the previous stuff made by Facebook.
Yeah, sounds like it.
And it's neat that it's using that memory pressure indicator.
And so what it does, audience,
I don't think we've clearly explained this,
is it's monitoring applications and processes and whatever,
and it's checking for memory pressure and availability.
And if it has to, before your system gets really crappy in a low memory situation, which Linux is famous for,
this thing will kick in and essentially kill it
to save the overall system.
Yeah, this systemd-oomd, I guess, can also monitor
similar stuff for swap and act there to try to free up swap.
And according to Neil in the IRC room just now, I guess earlyoom was really just something of a stopgap to try to get this feature out there while systemd-oomd was in the works.
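If you want to poke at it on a Fedora 34 box, here's a rough sketch; the exact option names can shift between systemd versions, so treat it as illustrative rather than gospel:

  # dump what systemd-oomd is currently watching, including pressure and swap stats
  oomctl

  # system-wide defaults live in /etc/systemd/oomd.conf
  [OOM]
  SwapUsedLimit=90%
  DefaultMemoryPressureLimit=60%

  # and individual slices or services opt in with unit settings like
  [Slice]
  ManagedOOMSwap=kill
  ManagedOOMMemoryPressure=kill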
Okay, and then there's one last thing that I was waiting for you to kind of explain to me.
And that is, I think I must be mistaken, but I guess it looks like they've removed support
for disabling the SELinux runtime. What is this outrageous thing, Wes?
Oh yeah, so people might be kind of confused by this, so we should probably note up front that
by disabling SELinux, what they mean is that the kernel doesn't call into the SELinux subsystem at
all. Switching SELinux between permissive and enforcing using setenforce, that's not affected. That's fully
functional. You can still set it to permissive mode. What's changing here is kind of subtle.
You may never have used it. Right now, SELinux can be disabled by passing selinux=0 on the kernel command line as you're booting, or in user space by a config file, /etc/selinux/config.
And then there's a library in user space, libselinux,
that reads that file, and during boot,
if you've told it to disable SELinux,
then that user space library actually writes into the kernel,
unmounts /sys/fs/selinux,
and sort of disables it.
But doing it that way, while very flexible, and it sort of helped out distros or environments, systems that had a harder time changing or adding on to kernel parameters, it did make some security trade-offs when you're trying to use Linux security modules like SELinux.
So now that's getting ripped out.
The only way to do it,
you can't use that config file anymore.
You have to pass selinux=0 on the command line.
Ah, man, I polished this pitchfork and everything.
It was really shiny.
That sounds perfectly reasonable.
Yeah, it sure is.
You'll have to look elsewhere for drama today.
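For anyone who actually relied on the old behavior, the before-and-after looks roughly like this, and on Fedora, grubby is the usual way to touch kernel arguments:

  # old way, /etc/selinux/config, which no longer fully disables SELinux
  SELINUX=disabled

  # new way, put it on the kernel command line instead
  sudo grubby --update-kernel=ALL --args="selinux=0"

  # permissive mode via setenforce keeps working exactly as before
  sudo setenforce 0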
I'm also really, really liking what I see for the future of grub configuration.
It looks like it could be a lot simpler regardless of what platform you're on. Fedora 34 unifies the GRUB config.
So the Grub configuration file layouts depend on the platform you're using.
So if you've got an old BIOS system or an open firmware or an EFI system,
it's going to be different. And that's confusing for your old podcast host here. So the proposal, and I don't
know if it's actually shipping yet, is to always store the grub config and grub environment files
in the same place, the /boot/grub2 directory, that place right there. And then making
small tweaks in another directory and always being consistent across all platforms. Seems like it has
some obvious benefits. Yeah, it's just kind of more of what you'd expect.
And I don't know about you,
maybe there's a lot of users who hopefully don't ever have to mess around with Grub
or what goes on in slash boot,
but for whatever reason,
I'm fascinated by booting the kernel.
So I'm always mucking around in there,
even when I don't need to,
or just checking things out.
And on EFI systems in particular,
there's been a lot of variance around
how does the EFI partition get managed?
How does it get mounted and where?
And then what things go in there versus
go on a /boot partition that's not the FAT one, but like an ext4 version. So now that's being simplified. Everything's going to be consistent no matter where you're running Fedora, and it's going to be under /boot/grub2/grub.cfg, like you'd expect.
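To make that concrete, here's a sketch of how the layout ends up on an EFI install, not an exhaustive listing:

  /boot/grub2/grub.cfg           # the real generated config, same path on every platform
  /boot/grub2/grubenv            # the GRUB environment file lives alongside it
  /boot/efi/EFI/fedora/grub.cfg  # on EFI, just a tiny stub pointing at /boot/grub2/grub.cfg

  # and regenerating the config is the same command everywhere
  sudo grub2-mkconfig -o /boot/grub2/grub.cfg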
And Neil says it has shipped.
That is so brilliant.
You know, and it's kind of thankless work.
Sure, it's like it doesn't really change anything.
It's not a new feature or anything.
I mean, there are benefits,
but it's just that kind of nice cleanup and maintenance
that is really nice to see.
Oh, 34 is such a solid release.
There's so much we could dig into.
If you have any specific questions or area of interest
that you found that you think maybe Wes and I could geek out on,
let us know at linuxunplugged.com
contact and congratulations to
the entire team. It is yet another
fantastic release and I think maybe
an extra special one with
GNOME 40 and Plasma switching
to Wayland and PipeWire landing
and this Unified Grub stuff.
Wow. Just
really just very impressed. Good work
everybody.
Now before we wrap up today, we do just have a spot of housekeeping around here
because MiniMac has a special Luplug announcement.
Yeah, thank you, Chris.
You know, Chris, I had a dream.
I always wanted to do a talk about the Mycroft voice assistant project,
and guess what?
My dream will come true.
I'm in contact with Chris Gessling,
the community manager of the Mycroft project.
And we both hope to be able to put together a cool talk for you.
The talk will be scheduled for the 9th of May, but we'll already be playing around with Mycroft next Sunday during the Luplug. So in case you want to give Mycroft a try, we will give you a hand next Sunday.
Chris himself, so Chris Gessling himself, might not be available for the talk because he is living Down Under, so I mean, for him the talk would be in the middle of the night.
So this is a call for participation for our Mycroft talk of the 9th of May and even for next Sunday.
Of course, we will also invite the Mycroft community for that.
I don't understand, but I'm learning new things every day.
And now I was interrupted by Mycroft.
That will be our special guest.
Hey, Mycroft.
Hey, Mycroft.
Sing a song for us.
I would be happy to sing for you.
She packed my bags last night, pre-flight.
Hey, Mycroft. Stop.
That's amazing.
I'm sure we will have a lot of fun. And that's all for me. Thanks.
Thank you, MiniMac. Check out the Luplug every Sunday.
It's noon Pacific, 3 p.m. Eastern or get it in your time zone
at jupiterbroadcasting.com slash calendar.
That sounds like that's going to be
a really great one.
And it's always fun to hang out in the lounge
in our Mumble server
and just chat with people
that think like you do.
Information for the Mumble server
is at linuxunplugged.com.
And thank you to our Unplugged Core contributors at unpluggedcore.com.
I have a special note for you. So check your feeds. It's also in the member download area
for a very special exclusive deal to get 15% off an item I have created for our members in the
Jupiter Garage at jupitergarage.com.
So go check your feed or the download area.
You'll see it there.
And then you can take 15% off something; the reason it's 15% off is really because that's the cost. So you just get it at cost.
And it's a very limited item.
But our members make this show really possible at the end of the day.
So we thank them at unpluggedcore.com.
And a discount on something like that is just one of the ways we can thank them.
But you also get two feed options,
a limited ad version of the show, same full production,
all the Joe Lovin', just less ads,
or the full live stream, which is like another 1.5 shows, really.
All our screw-ups, which is a lot this week.
I had a liquid lunch.
On my way into the studio, I grabbed me a CNX Tuesday because I just missed lunch.
So I grabbed a CNX Tuesday on the way into the studio and popped it open
and chugged it before I started the stream, and I'm paying the price.
But you know what?
Joe cleans it all up.
If you'd like to see all of the hard work he does,
you can get the full live stream version where you hear all of it,
and you get all the extra content as well.
So thanks to our members at unpluggedcore.com.
And make sure you go look for that special message.
It's like a phone call between me and you.
And it's a discount and more information on what I'm hinting at right now.
So thank you, everybody.
We do appreciate you.
A couple of quick bits of feedback.
Jim wrote in after hosting email for six plus years.
You want to take this one, Mr. Payne? Jim wrote, I've hosted my own email server for six plus
years with almost all that time performing minimal maintenance. There's hope for us yet, Chris.
The server's pretty much remained the same except for maybe compiling Dovecot a couple times to get
new versions. Originally, I'd followed a guide and installed the email server on dedicated hardware,
but as I evolved in my Linux experience, I actually implemented Proxmox and was able to
move that dedicated install into a container without reinstalling. Nice. Interesting. Here's
another part that I had missed out on. I guess two years ago, Jim implemented Proxmox Mail
Gateway Container, which sets up
a backend Postfix server while still using
the web UI of this
Proxmox Mail Gateway to access
it. So you can still keep this existing setup,
but just get a nice web GUI for
free. Awesome. That is nice.
You know what? That's the second
mention I've heard of this Proxmox Mail Gateway,
so that seems pretty good.
He said he has had some blacklist issues, but he just tries to ignore those ones.
He's lucky because I've had a lot worse.
But we also got a neat trick for another self-hosted mail solution,
one that may be a solid replacement for mail-in-the-box if that doesn't work for you.
It's from Sir Lurksalot.
He says, apparently, having been dropped on my head at some
point, I also run my own mail server. And I thought I would mention a neat feature of doing so that I
take advantage of. So here's a reason why you might want to host your own mail server. We also heard,
by the way, side note, Wes, we also heard from a lot of people out there. They're just like, hell,
yeah, I'm still running my own mail server. Good on you guys. Go for it. Awesome. Which I appreciate.
Yeah. But he says that he uses this wildcard feature, which is an alias so that any mail sent to a non-existent address, like somebody emails something at, you know, bob at his server that doesn't exist, it'll just forward to his main email account. The reason why that's nice is he can easily make up an email box name on the spot
when registering for an online service,
like make something specific for them, and then never really have to worry about it because
it'll just show up in his mailbox.
And if he ever wants to reply to it, he just goes and creates a quick alias first and then
sends it off.
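If your server happens to be Postfix, the catch-all he's describing is roughly this; example.com and me@example.com are stand-ins for your own domain and mailbox:

  # /etc/postfix/virtual
  @example.com    me@example.com

  # /etc/postfix/main.cf
  virtual_alias_maps = hash:/etc/postfix/virtual

  # rebuild the lookup table and reload
  sudo postmap /etc/postfix/virtual
  sudo systemctl reload postfix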
But he also wanted to recommend this tool that's called Modoboa.
M-O-D-O-B-O-A. modoboa.org.
And it is an easy way to get up and running with mail,
perhaps a mail-in-a-box replacement.
It gives you Python, a Django-based front-end interface
to manage Postfix, Dovecot, and all of the related components.
We'll put a link to that in the show notes.
Wow, yeah, it looks like it's got webmail, calendars, address book,
filtering rules, autoresponders. Fancy. And, you know, I love that first tip there because it's
kind of like what you can do with, you know, pluses and Gmail or Outlook, but way more flexible,
especially for those picky websites that won't take a plus.
And we have a Rust pick, not just any pick, a Rust pick.
And this one's a workspace that's aimed at developers.
I think they call it like a terminal with all batteries included.
Tell us about this little discovery, Wes.
Yeah, it's called Zellij, Zellij?
I don't know.
Z-E-L-L-I-J.
And of course, yes, it's written in Rust.
And while that part is exciting, I think what's most exciting to me is just sort of this new approach at a terminal multiplexer.
And that's what it is at its core, but that's just the infrastructure layer because it also
includes a layout system and a plugin system allowing you to create plugins in any language that compiles to WebAssembly.
Yeah, Rust and WebAssembly. That's double hype for this project. That's right.
That is pretty great. And it looks good too. And of course, because it's Rust, there's a sketchy,
just pre-compiled, musl-linked binary you can go download to give it a try with minimal friction.
What could go wrong? I would run it with sudo privileges.
I would just do sudo and then I'd run the binary directly.
It looks nice, though, I will say.
Like, I've got it up and it's got fancy fonts
and a really nice banner at the bottom by default.
It might stick around.
Yeah, no, I tease.
It does look really great.
And don't run random binaries from the internet with sudo.
Do I even have to disclaim that?
Should be obvious.
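For what it's worth, a lower-drama way to give it a spin, assuming you've got a Rust toolchain handy and the crate is published under the same name, is just:

  # builds and installs to ~/.cargo/bin as your regular user, no sudo involved
  cargo install zellij
  zellij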
I'm kidding.
Jeez.
Jeez.
Anyways, that looks really good.
It's Z-E-L-L-I-J?
Link in the show notes at linuxunplugged.com slash 403.
You can find a lot there.
Also, you can find our sponsor, CloudGuru, on social media.
They're just slash acloudguru just about everywhere.
That's a social media website.
It's really easy. The Jupiter Garage is rocking.
We have a little more Linux Action Show retro merch in there, including the beloved zip up
hoodie. There's one left in there, as well as a couple of brand-new logo items, all in the garage sale
at jupitergarage.com. Wes, I know you already know
this, but I'm very proud. We have shipped damn near 300 items from the garage sale.
Two NASs in the last week. That really is something. And it shows just if you take a
look at the studio, which is now definitely more of a packing and shipping place than it is an
audio recording studio. We have got, we, I really, I'm very proud of it actually. Like took those
lessons from that robe and we have really gotten our game together. And we had so much old merch
that I was hanging onto because of these emotional attachments that I realized we could be, you know,
sharing with our audience and it's going, it's going out the door and we still have more gear
and more stuff. So check out jupitergarage.com
for all of that.
If you do the Twitter thing,
you can follow this show
at Linux Unplugged.
The network is at Jupiter Signal
and we have a whole network of shows,
fantastic shows,
over at jupiterbroadcasting.com.
Lots of great shows.
Go check those out,
including Linux Action News
where we break down lots of stories
in the Linux world that you need to know about every single week.
See you next week. Same bat time, same bat station.
Keep the Linux rolling and make it a Linux Tuesday and join us live 12 p.m. Pacific, 3 p.m. Eastern at JBLive.tv.
Links to everything we talked about today, how to contact us, the Mumble Room, Matrix info, probably even the garage sale.
We link it up at linuxunplugged.com.
Isn't that a great idea?
It's like one website.
You just go to that one website for all the crap we talk about.
How easy is that?
We do that because we love you.
linuxunplugged.com.
Thanks so much for joining us on this week's episode of the Unplugged program.
And we will see you right back here next Tuesday.
Tuesday! All right, jbtitles.com.
Let's title this here show.
It's a long one, and we covered a lot.
I mean, obviously, mostly the focus is Fedora 34,
but there's a lot of ground.
There's a lot of ground to cover.
No kidding.
Fedora 34, the worst release ever.
Oh, that would be a good idea. Yeah, get them, get them clicking with that one. Oh, geez.