LINUX Unplugged - 426: This Old Linux PC
Episode Date: October 6, 2021. It's the worst time ever to upgrade or buy a new PC, so we cover our favorite tips for getting the most out of your current hardware. Then we pit a 2014 desktop against a 2021 laptop and find out if our old clunker can beat the ThinkPad. Special Guests: Alan Pope, Christian F.K. Schaller, Jack Aboutboul, and Martin Wimpress.
Transcript
It seems there's a rumor floating around that Pop! OS will soon be available for the Raspberry Pi and maybe even other ARM devices.
Yeah, Jeremy Soller of System76, as he does, sent out a teasing little tweet with a cryptic link just to a folder this week.
Hmm, yeah, and you go in there, and it looks like it's a build of Pop! OS and some of its packages for the ARM architecture.
That's pretty cool. I mean, yeah, I'd love to try Pop! OS on a Raspberry Pi,
but I'll be impressed when they get it running on a Launch keyboard.
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
Hello, Wes.
Check out my working robe.
Today is definitely a robe day in the Pacific Northwest.
Yeah, but if you could just wear underwear.
All right.
I'll take that note.
I'll take that feedback.
Thank you, Wes.
I appreciate it.
This episode is brought to you by A Cloud Guru.
They are the leader in learning for Linux, cloud, and other modern tech skills.
Hundreds of courses, thousands of hands-on labs.
Get certified.
Get hired.
Get learning at acloudguru.com. Well, coming up on the show today, you know we love getting a
fresh Linux computer, probably a little too much. But if you're looking for the best performance on
Linux today, you might actually want to buy a used computer. We have run the numbers and done
the tests. And this week, we're going to cover what to look for in a used PC
to get a blazing fast Linux desktop.
Plus, we're going to chat about
Pipewire's renewed focus on video.
There's some great news for Alma Linux.
And we've got all the other goodies
you come to expect from your weekly Linux talk show.
So before we go any further,
it's time we say hello to a packed virtual lug.
Hello, Mumble Room.
Hello.
Hello.
Hello, everyone.
Namaskaram.
Hello.
Good to see some old friends in there and some new friends.
I mean, I say old, it makes them, I mean, it's good to see long-term friends.
I know, I'm trying to find something that doesn't make Popey and Wimpy sound old, but
guys, welcome back to the show.
Thank you.
We are old.
Oh my goodness, that's great.
Well, I'm very glad to have you here.
So let's start with Pipewire this week.
Now, we did cover some of the nuts and bolts of this in Linux Action News 209,
but here's the cliff notes of what you need to know for our chat today.
If you're not familiar with Pipewire, it's a modern audio and video pipeline
that's being developed for Linux.
It's shipping now, and it's been built on the shoulders of giants, so that means it works with your PulseAudio,
ALSA, and Jack applications. And it's already become an important part of delivering video
for a lot of X11 and Wayland users today as we talk about this. But recently, when we talk about
Pipewire, we're generally discussing improvements in audio. That's where the project's been focused
a lot recently. But with audio mostly solved
and now just kind of waiting
for that to trickle down to distributions,
it seems the Pipewire project
is renewing the focus on video again.
And this time they seem to be aiming at several problems,
from improving kernel security
to a critical one,
allowing more than one application
to use a video device at once.
Christian Schaller joins us.
He's the director, I think, is it for desktop and graphics at Red Hat, Christian?
Yes, that's correct.
You have a long title, actually.
I was going to read the whole thing, but I just went for concise.
Welcome back to the show.
Thank you.
Thank you for having me.
So, Christian, I'm reading through the announcement about the new focus
and about aiming to solve video problems on Linux.
From a high level, can you kind of explain
the core issue we have today with webcams and cameras
and connecting them to Linux and running multiple applications?
To just start why we decided that this was the time to do this
was actually triggered literally by our hardware partners
sort of coming to us and saying,
hey, you guys are aware that, you know,
starting probably next year,
you will have laptops coming out
that uses this thing called a MIPI camera.
And the support in the Linux is not great at the moment.
That was actually a starting point.
That's a huge advantage
that the vendor gave you that heads up.
Yeah, no, it's something we've been focusing on
for last year to build those close vendor relationships
so that instead of sort of Linux being that thing
where like, you know, new hardware comes out
and I'm like, oh shit, this doesn't work
and trying to reverse engineer something.
It's kind of a big problem too, isn't it?
Because it's multifaceted, right?
It's how applications currently get access
to video resources on Linux.
That's an issue.
But additionally, there's sandboxed applications,
more and more of them that also have some limitations
on how they access hardware.
And then another problem, it sounds like, just from what you just said, is the cameras themselves are about to get a lot more complex.
Yeah, exactly correct.
So I think triggered by the need to support these MIPI cameras, we also decided that this was a good time to revisit these other things we wanted to resolve.
Like, you know, as you said, multi-camera access is a big one.
But also there's security aspects of it.
And at the moment, right, you need to basically have direct access to get a camera feed.
And we thought, like, we can fix this and make it more secure so that when we say an application is sandboxed, it is, you know, generally sandboxed and not sort of pretend sandboxed.
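The exclusive-access problem Christian describes can be made concrete with a toy sketch. To be clear, this is not the actual V4L2 kernel mechanism — real video capture hands one process the streaming buffers at a time — but an advisory file lock on a stand-in "device node" behaves analogously and illustrates why a mediating daemon like Pipewire is needed:

```python
# Toy illustration of the exclusive-camera-access problem.
# We model the device node with a temp file and an advisory flock();
# this is an analogy, NOT the real Video4Linux mechanism.
import fcntl
import tempfile

def open_camera(device_file):
    """Pretend to claim the camera; raises BlockingIOError if busy."""
    f = open(device_file, "r")
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return f

with tempfile.NamedTemporaryFile() as fake_dev:
    app_one = open_camera(fake_dev.name)      # e.g. a video call app
    try:
        app_two = open_camera(fake_dev.name)  # e.g. OBS at the same time
        print("both apps got the camera")
    except BlockingIOError:
        print("device busy: second app is locked out")
    app_one.close()
```

With Pipewire mediating, one process owns the device and fans the frames out to any number of clients, so the second "open" succeeds.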
Right.
And it seems like a huge part of this is getting developers on board with how they address video devices in Linux,
and part of what you want to do is bring in
a Pipewire plugin
to GStreamer. Application developers
can use GStreamer and that API
and avoid using the Video4Linux 2 API directly.
Is that a correct assessment?
Yeah, that's correct, because, you know,
the way we dealt with this on the audio side was that we
literally re-implemented both the PulseAudio and the Jack APIs so that applications didn't have
to change.
Unfortunately, for various reasons, this is not an option on the video side.
So we need to basically get application developers moved over.
And to be clear, application developers are perfectly free to target Pipewire directly.
But I think, as I said in my blog, of course, if you are starting from scratch writing
a new multimedia application, more often
than not, I would say that you're probably
better off just using GStreamer, and then
of course you get the pipewire for free through
GStreamer. So the plugin architecture, which
that pipewire plugin
for GStreamer, does that actually exist today?
Yes, it does. I mean, we have, of course,
had a pipewire
plugin for a long time, but in some sense, it's sort of been verified to work for audio.
And now we're making sure that it also works fully for video and handles this new use case as we're refocusing.
Because, while in theory the support for capture has been there for a while, we haven't sort of polished it up or made sure it worked well or even had any test applications using it.
So that's what we're trying to make sure we put in place
now to make it all work
and work correctly and perfectly.
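For the curious, if the Pipewire GStreamer plugin is installed on your system, you can check for the source element and try a simple capture pipeline from a terminal. This is a sketch to run in a desktop session with Pipewire active; exact availability depends on your distribution's packaging:

```shell
# Check that the Pipewire source element is available
gst-inspect-1.0 pipewiresrc

# Capture via Pipewire and preview it in a window; once Pipewire
# mediates the device, a second application can capture concurrently.
gst-launch-1.0 pipewiresrc ! videoconvert ! autovideosink
```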
You know, as we were going through this
and reading through your great post,
one thing that kind of confused Chris and me at first
because we weren't really familiar with libcamera
before was exactly who's behind
that and what's the relation between the folks
working on Pipewire and the people behind LibCamera? I'm not 100% sure on the backstory. I was told
that it originally started out as a Chrome OS effort and then it was sort of taken over as an
Android effort. And from what I understand today, there is a group of people working on it with the
aim of supporting Chrome OS, Android and Linux. So they were in some sense just ahead of us in terms
of seeing that there were limitations or challenges that were going to be hit with the Video4Linux API and
decided let's put together a nice user space library to make it easier for developers to use
these things. And then we sort of said, hey, instead of we trying to compete with them or
reinventing the wheel, let's just build on what they have already done and just sort of make sure we expose that through Pipewire
and then add the feature set of Pipewire on top of the camera.
And it's not just Chrome OS.
LibCamera also has Android support as well.
Is that right?
Yes, correct.
And I think, actually, I mean, as far as I know,
the project is actually funded partially by Google for that reason.
Ah, so it's probably going to be around for a little while.
It's a good one to build on top of, I would imagine.
Yeah, that's correct. That's good. When I read through this, what jumped out
at me, and maybe you can help me understand this, is, and I'm no desktop Linux developer, but it
seems like it isn't quite yet clear how developers who want to ship an application as a Flatpak
are going to use this. Because it seems like there's a bit of a debate if this should be like
the function of the camera portal, or if this should be the function
of Pipewire and Pipewire figures out if
it's a sandboxed application and then
figures out how to route that video. Can you
shed any light on where that situation's at and
how it's going to impact developers writing desktop
apps? What we know for sure is that
we want both of them right. We want
the portal there in order to
have a way to punch through the firewall
when the user allows for that. But I guess what we're still trying
to figure out and try to do in the blog post too is that there's
two ways of doing it. One, telling application developers, you know, you call
out to the portal API and then you get a Pipewire handle through that
and then of course you start interacting with Pipewire through it. Or you could
say, start by calling out to Pipewire,
and then Pipewire in the background tries to do the callout to the portal API.
Both approaches work.
We're just trying to figure out, A, what creates the most consistent developer story,
and what is also most convenient for people to use.
So, yeah, so both are going to play a part.
We're still thinking hard about what is the best way and trying to get feedback.
And I think maybe once we start working with Cheese,
as we said, it's going to be our test application.
And for that matter, with the browsers,
that might also help us inform us a little bit
on exactly how we want to do that API setup.
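The two call orders Christian describes can be sketched as a toy model. Everything below is hypothetical stand-in code — none of these function names are the real portal or Pipewire APIs — it only makes the ordering difference concrete:

```python
# Toy model of the two API orderings under debate. These names are
# placeholders; the only point is WHO makes the call to the portal.

def portal_request_camera(app_is_sandboxed):
    """Stand-in for the desktop portal: the sandbox-aware gatekeeper
    that would show the user a permission dialog."""
    user_allowed = True  # imagine the permission prompt here
    if app_is_sandboxed and not user_allowed:
        raise PermissionError("camera access denied")
    return "pipewire-camera-node"  # placeholder for the handle returned

def option_a_portal_first(app_is_sandboxed):
    # Option A: the application calls the portal itself and receives
    # a Pipewire handle through it.
    handle = portal_request_camera(app_is_sandboxed)
    return f"streaming from {handle}"

def option_b_pipewire_first(app_is_sandboxed):
    # Option B: the application only talks to Pipewire; Pipewire
    # detects the sandbox and does the portal call-out behind the scenes.
    def pipewire_connect():
        return portal_request_camera(app_is_sandboxed)
    return f"streaming from {pipewire_connect()}"

print(option_a_portal_first(True))
print(option_b_pipewire_first(True))
```

Both orderings land in the same place; the open question is which one gives application developers the more consistent story.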
That makes sense.
And it seems we're really on the precipice
of something that I got to a point where I had accepted
this was never
going to exist for Linux. Like I thought the Mac had this, this unified audio video API pipeline
pretty well established. Windows had its system that seems to work pretty well for them. But Linux
had all of these various components and projects that didn't really work together. And
Pipewire refocusing on video now that audio has been fairly solved
is actually the first chance
I've seen of solving this problem
in a super modern way
that is really unique
and specific to Linux
without flushing everything
that's been done before.
And so I know for this to fully work,
you've got to get community buy-off.
You've got to get distribution
maintainers to support this.
You've got to get open source projects to support this
and maybe even commercial developers to support this.
What is kind of your thought on how that kind of community participation
can be achieved?
Because I think the Pipewire project has demonstrated a pretty good success rate
with getting audio community adoption,
but this is a whole other level of complexity.
Definitely.
And I mean, part of my hope here is that we can use our success on the audio
to convince people to work with us on the video side.
I mean, in some sense, of course, it was a lot easier on the audio side
because most people didn't have to do anything.
It was like, oh, hey, my Jack application actually works on Pipewire.
Holy shit.
Yeah, yeah.
So it was sort of like a free transition, but this time
we need people to actually do porting of
applications, like they have to move away from using
the Video4Linux API directly and start
targeting Pipewire somehow.
That, of course, requires
us to convince the community to go
in. And I think what we want to do, of course, is A,
making sure that we provide a good API.
B, we want to provide good examples, like I mentioned, Cheese, but we're also going to be
trying to patch in the web browsers as heavy consumers of this to make sure that they work
correctly with Pipewire. And then hopefully, the long tail of applications will follow from there.
Right.
So that's where I guess fingers crossed that we hope to get the community to really
join in with us and help us make this transition. And of course, I think due to, once again, the success on the audio side,
I'm sure Pipewire will be shipping in every distribution soon-ish.
And that will hopefully also make application developers less hesitant
to target Pipewire because they do know that it is everywhere
and you can trust it to be there and available for you.
Well, what kind of timelines do you think end users might start seeing some of these benefits?
I think the work on the audio side of Pipewire progressed a lot quicker
than I thought. And it sounds like from your post, maybe now is not quite the time for testing
but maybe sometime soon?
Yeah, I mean, we hope to, I think during, I mean, we sort of
often in our planning think in terms of Fedora releases. So I think from our point of view,
we hope to start having maybe some of these things ready in Fedora 35.
I mean, not for the release of Fedora 35,
but sort of during the lifecycle of Fedora 35.
And then Fedora 36 is where I think you can sort of start,
maybe for instance, things being ready for application developers
to really start porting over and trying to use this new API
and a way of working with things.
Wow, that's actually pretty quick.
That's, wow.
Well, Christian, thank you for coming on
and helping explain some of this.
And we'll be watching.
We are huge advocates of this project.
And please also pass along our endless gratitude to Wim
and all of his hard work for making this actually all come together.
And thank you for continuing to advocate and spread the word online about it
so we have things we can read to help understand what the progress is.
I mean, the whole thing is huge for us as media production people
that use Linux every single day.
So thank you so much, and keep up the great work.
Thank you for having me on.
On to an item that came out just a little bit before the show started today.
AlmaLinux has announced that their foundation memberships are opening up to the public,
and that seems like a great move.
If you don't remember, the AlmaLinux Foundation is set up as a 501c6 nonprofit,
which is the same model used by the Linux Foundation.
Individuals and organizations will now be eligible to vote for
and be voted
into the Alma Linux Foundation's board of directors, as well as participate in committees.
Ha ha! Jack, the Alma Linux mayor and cat herder over at the Alma Linux community,
joins us in the mumble room most weeks, sometimes in the quiet listening, sometimes in the main room.
And Jack, I'm just super impressed to see this. I think this was a commitment that you made to us early on, and now we're actually seeing
it delivered.
So why don't we jump right to the chase here?
What does this mean for end users?
Well, I think it means for the community, this is the realization and the fulfillment
of something that has been going on since, you know, since CentOS was announced in
2004.
And I did a small blog post about this earlier today where the OS part of CentOS was always there. And the technological bits were, you know,
as good as you can get and arguably better than anything else that you could get. But the community
part was kind of lost along the way. And I think that, you know,
if the plan was from the get go that there should be a distro such as this that had communal
ownership, that had its own real foundation where everyone can participate and everyone has a voice.
I think that is what, you know, that is what we fulfilled today.
So it's kind of hard, especially for people who don't know the whole backstory here, to appreciate the context and what this means, especially those of us who maybe have come into a world where CentOS always existed.
And so if we rewind and bring ourselves back in time, it was a revolutionary idea to take an enterprise commercial operating system
and make a free version of it for the community. That got a little messy. And it never really,
truly, like Jack said, it never truly lived up to its promise because ultimately it was bought and
sold and moved around and it wasn't actually owned by the community. And that became its Achilles
heel. The lesson we all learned was there is a place for something that is a quote-unquote
enterprise grade, which generally means it's built from one of the LTS, you know, long-term
supported distributions out there that have commercial Linux support, right? It's something
that's enterprise grade that is going to be supported for a very long time that is run by
the community. And while it's not for everybody, it's definitely for a set of users. Is that now
true with this foundation in place, with the fact that community members can join? Is it true to say that the CloudLinux company and those people, they could step away and this would still carry on?
But yes, I mean, companies can vanish from the picture.
And, you know, the foundation is here. Hopefully, we get the sponsors to commit there. And of
course, you know, every foundation needs funds to continue. I mean, right now, we are thankfully
funded comfortably, and we hope that that'll continue. But yeah, I mean, anyone can come and go and disappear. But this thing is its own entity.
CloudLinux was originally involved, and they're still involved, and that's all fine. But the idea that this is now, like, it's not going to get sold around as a property, right? It's not going to get built up to a big revenue stream with a large user base and then be sold to whoever wants to buy it at a certain price point. That's pretty impressive. I say congratulations to you. And I note two things about this.
Number one, you said you guys were working towards this, you delivered on that. And number two,
you're showing up in our community. You're participating in the community. You're here
almost nearly every week. You don't always have something to say, but you're here,
you're participating, you're truly invested at that level. And that I think is another data
point which people can assess when they're looking at, you know, which distribution they want to run
for 10 years. So Jack, just, you know, congratulations, really. And we'll have a link to
the details. You guys have a good solid post up on your website that has more information.
I think this is a big milestone. Thanks, Chris. And I just want to say one thing, you know, the credit definitely doesn't go to us.
And I want to put a big shout out there to everyone in the community that supported us,
that's used the stuff that we've put out, that's helped us, that sent pull requests,
that's given us, you know, RFEs and that's given us just enhancements of all kinds,
whether that's on the technical end, whether that's on the governance end, whether that's on
the graphical end, the community end, I just, you know, I, the community deserves a huge thanks. And
whether or not people realize it, they've been a huge part of this. And the people that, that have
done it, know who they are. And I just really, you know, I mean, everything from like cloud images, containers,
like we put out so many containers and just this whole thing, um, it really blossomed into a
movement. And I really want to thank everyone because it is you all
that are making it what it is.
And I just have a tremendous debt of gratitude to everyone for that.
Well, that's great.
I echo all of that here.
Linode.com slash unplugged.
Go there and get $100 in credit for 60 days on your new account.
And you support the show.
Linode's where we host all of our stuff.
The stuff you interact with, the backend services, the things we use to make these shows possible, like our
automated encoding and deployment system. Yeah, it's on Linode. That's where we run it.
We've even recently been experimenting with actually doing project rendering in Reaper up
on Linode. Right now, we're going to get to this in the show, actually. It's pretty hard to get a
decently specced PC that we'd run in the studio that meets
all of our requirements when it comes to noise and heat and CPU performance.
So why not just spin up a powerful Linode and run it there?
And with things like WebTop, it's pretty easy to get a desktop environment going on a Linux
box.
There's limitations, of course.
You don't have the responsiveness.
But when you're just rendering a project, why keep investing in hardware here in the
studio when I can fire up a Linode
with tons of awesome AMD Epic CPUs?
It just, you know, and for you out there,
when you get a $100 credit,
you can mess around with stuff like that too.
Their infrastructure is fast too.
That's one of the reasons we really stick with it.
It's fast, it's reliable.
They have a great dashboard to give you just a snapshot
of like if your system's working hard
or what your memory usage is at, your CPU, your network transfer.
There's 11 data centers to choose from.
So there's going to be something near you or your customer or your client.
They've got over probably a bajillion users.
No, actually, I think it's about a million customers now.
They've been around for 18 years.
That matters.
You know, they've been around for 18 years and they've remained independent.
JB, you know, I'd be really proud to make it 18 years independent.
You know, I'm on my way, but a couple of those years, you know,
I got to take off the calendar.
I just really am impressed by that.
You know, like not taking the VC funding when you're watching
all these other flyby hosting companies pop up.
Could you imagine the temptation but staying true to their mission
of just making great Linux hosting and then refining the product
to remain competitive with these other companies
that are just getting cash dumped into them
while keeping their pricing 30% to 50% lower than those cloud providers themselves.
It's just, it's so impressive.
And they started out as Linux geeks too.
So go check it out at linode.com slash unplugged.
Get $100, play around, try this stuff out, build something, learn something.
I mean, it's just fun to play around with.
It's a great experience.
Linode.com slash unplugged.
And we have a bit of housekeeping for you today.
Thank you to our members at unpluggedcore.com.
Some of the best bits of this show are in the live stream.
We did a breakdown of the Linus Tech Tips Switch Challenge.
That's going to be in the live stream,
although I
may also release that one as an extra at extra.show. But our members get access to the live version of
the show, which has tons of additional content and all of our screw ups, and they get access to a
limited-ad version. So maybe you've got a shorter commute. That's a great one for you.
And you're supporting the show. So unpluggedcore.com to become a member, support the show and get access to that.
If you got feedback, you got ideas, maybe something you want us to check out, we'd love to hear them.
Linuxunplugged.com contact. Also join our matrix community. The new spaces have rolled out. They're
beautiful, looking really good. And you can join the fun and see what's happening over there
at linuxunplugged.com. But here's yet another reason.
If you're a Python or Go developer, you know, if you're familiar with either one of those stacks and you're looking for a job, there's an open engineering position, well, there's several of them, open at Red Hat right now.
Get in our Matrix room and contact Link DuPont at SubPop in our Matrix chat.
He's fielding those and he's looking for Python or Go software developers.
And he's hanging out in the JB chat, link DuPont at SubPop in there. Get in touch with him. Maybe
you could get a gig at Red Hat. Don't miss our LUP plug. That happens every Sunday at noon Pacific,
3 p.m. Eastern, jupiterbroadcasting.com slash calendar to get it in your time zone.
And our mumble server details are at linuxunplugged.com slash mumble. It's a party in there. When I went out on the road,
I talked to people and I said, you know, they'd say, Chris, this is what they say, Wes. They'd
say, they'd say, Chris, there's no lug where I'm at. You probably heard this, Wes. There's no lug.
Oh, I heard that so many times. And what do we say every single time, Wes?
We've got the virtual lug for you every gosh darn Sunday.
Gosh darn right.
And linuxunplugged.com slash mumble has all the deets.
And then when you can, maybe you're home on a Tuesday, come join us in there.
Hang out.
Get your thoughts in the show.
Now, we all love getting a new computer.
I know I sure do.
But these days, if you're looking for the best performance on Linux,
you might actually want to go used,
depending on what you expect to get done
and how you have your work environment set up.
It could be that an older PC with the right design
is actually faster under Linux than a brand new PC
with the wrong strategy for how to use it.
And while it'd be great
if you could spend a lot of money
and be guaranteed a super fast desktop Linux experience,
our testing shows that's not always going to be the case.
And right now, maybe that's just fine.
It's harder than ever to buy or build a
new PC with high-end parts right now. I mean, AMD's CEO Lisa Su said just recently that there's
probably going to be a chip shortage all the way through some part of 2022. I know I'd hoped the
GPU shortage would end sometime and I could upgrade that old 1060 that I've got hanging around, but
it appears we might be stuck for a while.
Might be time to dust off that old desktop.
Yeah, I think you're right.
Man, that's bad.
If AMD is saying it's going to be at least until the first half of 2022 is over.
I wanted a new desktop or machine at the beginning of this year,
and I ended up going the laptop route.
You've got limited options right now,
especially if you want something with Linux first support.
And so I received the ThinkPad X1 Carbon
and I decided to use that as my new daily driver for a bit.
You know, it's a Linux first computer.
Seems like you should have plenty of power to spare.
It's got Thunderbolt for external expansion,
and I've already got several Thunderbolt devices.
There's open source upstream drivers for every major part.
It's got a 10th gen Intel i7 CPU, NVMe storage.
I already own an eGPU with an AMD RX 570 in it, so I can just use that with this laptop.
I mean, I think on paper, Wes, it all looked pretty good.
It all looked solid.
Yeah, I mean, you were buying a computer that was almost meant to run Linux in some ways, right?
It's almost as close to a Linux-first machine as you can get.
And I kind of recall you making a big fuss about getting it
and totally switching over,
packing up that old desktop PC that you built with Noah way back when.
Yeah, I did.
I looked up the parts for this episode
because I thought we built it in 2017, and maybe we did.
But Intel considered my CPU end of life in 2014.
Oh, ouch.
Yeah, it was older than I remember, or at least the parts were.
And so I was done with it.
I wanted something that felt really pretty great and pretty fast, because I had set up an Asus gaming laptop for my son and it had
a high refresh rate monitor, and it was a really quick system. And I'm like, I'd really
like my desktop to feel like this. And then I set off to find, like, the perfect Linux setup.
I thought I had found it. Well, yeah, right. And this thing shipped with Fedora pre-installed,
all the drivers were upstream as you're talking about, like really should have been a best case
for having a modern Linux setup.
Yeah, yeah.
I mean, it has been pretty stable.
I'll give it that, you know,
and it's been nice.
I can mess around
with a lot of the latest stuff
and everything's upstream.
Wayland works pretty good,
but performance and usability,
it has not been very good.
And I was shocked
when I actually dug in
and did some testing,
but before I tell you the results
and how I actually ended up back on my old computer,
I want to tell you a little bit about what I expected.
I expected to have a machine that I would sit down at my desk with
and, you know, after being mobile,
connect it to a Thunderbolt cable
and then use a full-size keyboard and mouse
and a couple of monitors,
just, you know, as a daily driver.
Just sit down with the laptop, use it like that.
Maybe play a video game every now and then.
But for the most part, just basic terminal chat, web browsing, audio editing, that kind of stuff.
It's your office workstation machine.
Yeah, exactly.
Yeah.
And, you know, sometimes I like to have a little fun with it.
But for the most part, I just wanted it to be super rock solid, reliable, and responsive.
And then at the end of the day, I liked the idea of unplugging the Thunderbolt
cable, putting the laptop in my bag and taking my main machine home and having everything I
really needed with me if I needed to use it on one monitor, or I eventually even opted to have
a couple of monitors at home and set up another Thunderbolt setup there. And, you know, I admit,
I made things a little challenging. My external displays were 1440p
resolution, one of them is 144 hertz, the other is rotated vertically, so my external monitors
were mixed refresh rates. You know, slightly challenging. But what I ended up with was a
substandard experience by any possible measurement, and this is on X11, Wayland, Plasma, GNOME, Wayfire.
I mean, really, I'm just, I tried everything.
If I have my external monitors hooked up
and I put any windows on my external monitor,
it was just awful input lag, awful glitchiness,
dragging windows around.
I've complained about it in the pre-show before.
And I mean, and God forbid, the system was busy.
It would just lag out worse and worse,
and it would manifest itself slightly differently if it was on GNOME or Plasma or X11 or Wayland,
like, you know, different kinds of performance characteristics. But the core problem
just seemed to be anytime I was trying to share screen information between my Intel GPU
and my AMD GPU, things just would go sideways.
And Neil, you and I were talking about this a little bit
in the pre-shows before.
It's like there's perhaps like a fundamental plumbing issue
happening here or some kind of decision
that's led to poor performance in this setup.
This is mostly because the way that things work
in the X world is all the displays are stitched together and synchronized to a larger virtual display head.
And so like if you have three or four or six monitors or whatever, they're all stitched together internally into one big virtual monitor.
And that's how everything gets rendered.
So the synchronization issues are taken care of. Your monitors are already downgraded to the same
refresh rate as your slowest monitor, so on and so on. And so because of all that,
moving things across displays becomes very fast because from its perspective, all it's doing is drawing on the
pixels in the virtual representation and then mapping that back to the physical displays.
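To make that concrete, here's what a hypothetical `xrandr --query` might show on a mixed-refresh, mixed-rotation X11 setup like the one described; the port names, sizes, and timings are made up for illustration. Notice both monitors live inside a single "Screen 0":

```
$ xrandr --query
Screen 0: minimum 320 x 200, current 4000 x 2560, maximum 16384 x 16384
DP-1 connected primary 2560x1440+0+0 (normal left inverted right) 597mm x 336mm
   2560x1440    144.00*+  59.95
DP-2 connected 1440x2560+2560+0 left (normal left inverted right) 336mm x 597mm
   2560x1440     59.95*+
```

Both outputs are just regions of one 4000x2560 virtual canvas, which is why X ends up settling on a single effective timing, and why dragging a window across displays is cheap there.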
In the Wayland world, that's not how it works. Instead, each display, so each monitor,
is a separate surface with its own control, its own refresh rates, its own resolution, its own DPI, all those things.
This is good and bad.
So good in that now it means that your displays can actually operate at their native performance levels independently of each other.
But it's bad in that because there's no underlying synchronization, no virtual display underneath, none of that stuff that forces everything to be in sync, moving things across it and handling, you know, keeping things working,
you know, stitched across displays is now very expensive. It is certainly going to be an
interesting challenge to see how this is improved over time. But it is definitely fixable, for sure.
There are a number of strategies to do it.
And I think one of the reasons why it didn't get tackled before now is
because actually multi-GPU just hasn't worked in Wayland, because NVIDIA
didn't work in Wayland very well in the first place.
Sure. Yeah.
So I could see that kind of causing it not to get worked on particularly
until that was solved. And in a way, funny enough, right?
If I just got like a giant 49-inch single screen,
I probably would have been fine.
Absolutely.
Especially if it was a like super high refresh rate one,
you'd get all the benefits of it,
and it would just, it'd work great.
But no one tells you that when you're buying hardware
or trying to set up, you know,
is there like an ultimate Linux workstation guide out there?
Not one that I've found anyway.
There should be.
And I already own the monitors too when I bought this thing because I had them connected to my desktop. So, you know, I played
around with various configs for months. And, you know, like Wimpy will buy a really nice powerful
system. He gets everything working just great and gets great results. So I'm not trying to take away
from building brand new powerful systems here. But I got the combo wrong, even though I thought I
knew all my, you know, all my specs and everything like that, I got it wrong.
And so because of the config that I chose to go with, I was suffering for months as I tinkered with this thing.
I mean, I attempted to run all the monitors off the AMD GPU.
I attempted to run as much as I could off the Intel GPU.
I could go with no monitors and things would be fine, but then I'd feel sad and feel like Linux couldn't live up to my, you know, use case.
And so I attempted to improve overall system performance just to,
you know, reduce load that would cause things to lag out more. So I tried attacking that route,
and I'll share some of those tips with you in a moment. But about two weeks ago, guys,
I just kind of gave up. When I decided to put Fedora 35 on my ThinkPad, I just said to myself,
you know what, this is going to be a single monitor machine, and it's not my daily driver, although I actually
have ended up using it pretty much as such.
But anyways, I'm removing all the expectations that this is my main workstation.
This is just my portable machine now with a single screen.
And I took out my old 2014 era desktop.
The unceremonious unboxing.
It's an i7.
It's got like a one-time upgraded AMD GPU in there now
that's also an RX 570.
And I dusted off quite literally
and I just kind of connected all the monitors to it
and re-hooked everything back up to the machine.
And it had been since the beginning,
I mean, since I got the ThinkPad,
things had just been sitting there collecting dust. And I was kind of, you know, contemplating reusing
it for maybe in the studio or I hadn't really decided what I wanted to do with it yet. So I
was kind of glad I still had it. I fired it up. And of course it's an old Endeavor OS install that
I had put on there forever ago. Like when Endeavor OS first came out, I'm like, oh yeah, right.
So I do the update, and sure enough, I mean, everything updated just fine. And I reboot, and I had the latest Plasma, I had
the latest kernel. It was pretty good. Oh, that's kind of just great, isn't it, Wes? I have to tell
you, man, like immediately on this old busted system that'd been sitting around collecting dust,
it was immediately noticeably
faster with the multiple monitors and everything. Even just launching applications was faster
than this brand new ThinkPad with, you know, a fast NVMe storage. I had to kind of think about
this for a moment. Like, why is this older machine in my use cases, perceptually so much better than
a brand new computer, essentially.
And I think it came down to fundamental architecture issues.
And, you know, the laptop
had a hard time sharing information
between the eGPU and the Intel GPU.
And I think it'd be true
if it was two GPUs,
one was Thunderbolt or not.
And I think also in terms
of desktop usage,
the laptop's single hard drive setup, its NVMe disk,
even though it's NVMe, it still leads to I/O contention sometimes. And the Linux desktop,
while better, doesn't really handle I/O contention with grace. And if you're also
slamming the GPU and you're slamming the network at the same time, it would really lag out bad.
And desktop hardware, it's not only more capable in this area, but you're able
to architect it in such a way that the disks can be set up maybe where you avoid IO contention.
And maybe your PCI bus is such that there's more bandwidth.
And so, like in the case of my old desktop, I have an NVMe boot
disk. It's small, but I recommend, if you want decent performance on Linux, you go with a desktop
and you go with multiple disks. So I have like a 256 gigabyte NVMe for my root. I have a
512 gigabyte Samsung Evo for my home. I mean, it's not like, you know, it's not a killer.
It's not something like screamer, but it works right.
And then I have a RAID 0 of like three or four disks,
I can't remember because I'm old, that I'll put my Syncthing or my Nextcloud or my
editing scratch on, or Steam, you know, any kind of process that might run in the
background and need to write to disk a lot.
I move that over to that RAID 0.
Ah, yeah, stuff you don't care about,
stuff that's already backed up or synced elsewhere.
Yeah, you got it, you got it.
And then it sort of spreads out the workload across the disks.
And so there's not one disk that's generally responsible for everything.
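A layout like the one described might look something like this in `/etc/fstab`; the UUIDs, mount points, and the mdadm RAID 0 device are all hypothetical, and `nofail` on the scratch array is there so a dead disk in the stripe doesn't block boot:

```
# Hypothetical multi-disk desktop layout
UUID=aaaaaaaa-0000  /         ext4  defaults,noatime         0 1  # 256 GB NVMe: root
UUID=bbbbbbbb-0000  /home     ext4  defaults,noatime         0 2  # 512 GB SATA SSD: home
/dev/md0            /scratch  ext4  defaults,noatime,nofail  0 2  # mdadm RAID 0: Syncthing, Steam, editing scratch
```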
And it allows for the package cache to be refreshed
while I'm launching an application,
while Syncthing is pulling down a large file, and I don't experience any desktop lag. Even though
the machinery is from 2014 and some upgraded components, because of the way it's architected,
I don't experience performance issues. And this is just in my assessment here. And I actually
thought, well, let's go benchmark this. Let's see if some of this bears out. And so I decided to just use Geekbench
because you get a simple number, and the X1 does win Geekbench in single-core performance, scoring
1,039, that's the number it gives it. The old Core i7 5820K scores 923 on the single-core benchmark.
So the laptop beats it by 116 points.
In single-core work, the laptop beats the old desktop by 116 points, according to Geekbench.
And Geekbench runs a bunch of different tests at it to come to that number. But okay, that makes some sense.
It's a much newer machine, much newer processor.
It probably should win, right?
Yeah, it's five generations newer.
I would think so, right?
Yeah.
So multi-core, though, is where it got kind of fun, Wes.
I didn't know what to expect here.
The X1 scored 3,773 on the multi-core benchmark.
The 2014-era desktop scored 4,993.
The old desktop beats the X1 by over 1,200 points in multi-core benchmarks.
That's a pretty big gap. Dang. Yeah. I suspect over the years, Linux has gotten pretty good at handling multiple cores and it does a good job of throwing things around to different cores,
even if the applications themselves are single-threaded.
And so you benefit pretty substantially on the Linux desktop by having more cores.
Now, you've got to make sure your workload,
maybe you have a workload that's single-core performance
is what matters all day long.
But for general desktop use,
I think multi-core, having more cores
than having very, very fast cores
may actually pay more dividends
because of the work that's been done for years to manage all of that.
Right, especially on this kind of machine where you're checking your email,
you've got multiple electron apps going,
maybe you're working on an audio or a video project on the side.
There's just a lot of stuff you can spread around.
I also want to mention I ran these benchmarks with my daily driving applications loaded
because I felt like that was a good representation of I almost always have these applications going and then I'm using the system
to do something on top of that. So they weren't fresh systems. They were freshly booted, but then
my base applications loaded. Wimpy, I feel like I want to ask you what your thoughts are on this
because I always drool over the hardware that you get to play with these days. Does this track so far with your experiences?
Not exactly. What does track is that you can get decent performance out of older hardware on Linux for sure. And, you know, I've got not as many as Popey, but I have a modest number of slightly
antique ThinkPads that are still very usable today, and those that are considerably antique.
I have replaced spinning rust with SSDs, even SATA or in some cases IDE SSDs,
and that makes a considerable difference to the oldest of systems.
I think your point about mixing display refresh rates is an interesting one
and potentially one that could come and bite that Linux gaming challenge
that we discussed earlier that Linus Tech Tips are doing
because they have lots of high refresh monitors
and I'm almost certain that they're going to be mixing them
and they are going to run into this, I should imagine, at some point.
But I found modern hardware works very well, extremely well, in fact.
Oh, I agree.
In terms of it's better than it ever has been for Linux support,
and the performance is incredible if you build the right system.
But what I was surprised by is just how boggled this system was,
which essentially came down to plumbing issues. Because if I put a, and I hate to say this, I don't normally do this, but if I put a MacBook Pro in that same position, in the same exact configuration, it works perfectly. There's no lag. Everything's fine. So it's clearly a plumbing issue. Oh, also, what also stung is for like a week or a weekend,
I can't remember, I think it was just a weekend,
I put Windows 11 on that same ThinkPad to try out Windows 11,
and I tried the multi-monitor setup, tried the Thunderbolt stuff.
Perfect.
No lag.
No multi-GPU issues at all.
It all just worked.
It downloaded the drivers.
It set up the displays.
It was pretty good.
So it seems like you can buy a new machine,
but if you do it wrong,
some plumbing issues in Linux can actually make it such that
a machine from 2014 can still kick its ass in some configurations.
I'm not trying to say, though, that all new hardware is going to crap out
because that's definitely not true.
So I've recently got a X1 Carbon ThinkPad.
It's my work laptop. And I've hooked up a X1 Carbon ThinkPad. It's my work laptop.
And I've hooked up a couple of displays to it.
And I find the Intel GPU internally is really struggling with multiple GPUs.
So I ended up using, sorry, multiple monitors.
So I ended up using an external GPU.
And it's only an NVIDIA, I say only, an NVIDIA 1050 Ti in a Razer Core
external box. And that's running three displays. And that helps, obviously, but there's still
something in the plumbing somewhere that makes this slow. And I get what Neil says about
all three displays under X are one big canvas on which it paints, which is great for me dragging windows around.
But there's tearing, and windows glitch out now and then.
And I'm dangerously close to putting Windows back on this thing
because that's what it's shipped with.
But I tolerate it because I'm a Linux nerd and I prefer Linux.
I tolerate this nonsense.
And I close windows when they start glitching out
and I just relaunch them. And I'm a fool to myself because I probably would have a less
stressful life and probably a lower heart rate if I just remove Linux and put Windows on here,
and then I wouldn't have to deal with all this nonsense.
That's how I was feeling. I was getting to that point. I'm like not doing it. I went back to a
desktop and it's just so much better by just going all off of one GPU
and having, I think, maybe simpler desktop plumbing.
And I guess part of what I was thinking is,
you know, maybe it's just not so bad.
If I'm finding this fast now,
now that I've switched back from my new PC to my old PC,
and I find my old PC faster,
maybe I can hang out for another nine months
before I replace it.
That stinks, though.
Yeah. So the other thing to bear in mind, based on sort of the configuration that you were
describing, as in the one that you've created, is that there are
two motivations for things working well on the Linux desktop today. And that's that the configuration that you have
is the same as the developers have.
And by and large, that is somebody working
on a laptop workstation with a single screen attached
to possibly a single IGP, sometimes with a discrete GPU.
That's like the most common configuration.
And then the only other time you're going to run into a configuration where you're guaranteed that it's supported well is that an OEM decides to ship a class of workstation.
And they demand that whichever Linux vendor they're working with makes that configuration work as well out of the box as
it does if they ship Windows on that same model. So those are really the only two ways you're going
to get a good, or guaranteed to get a good, experience, unless, you know, you put in the hours
and do the research and buy components that you know are going to work well together.
And that's less of a requirement in terms of just general compatibility. But when you want
complete harmonious operation of your system, then it does still require a bit of upfront
research and due diligence. Right. And that's where the value that Linux vendors like System76
or the selection of Dell PCs come in.
They've done that kind of selection process and that work.
And, you know, this is, in my case, I kind of thought I was going that route with this ThinkPad,
but I ended up, I think, using it in exactly a situation like Wimpy just said,
in a way that the developers don't really use it.
And that's with multiple screens and that's with an external GPU and an internal GPU
and all these little edge cases and the multiple refresh rate and all just built up.
And I guess I ended up ultimately finding that the desktop was better.
But through this process, through this process, I realized going forward, I'm going to take all these things I kind of already knew and I'm going to apply that to my next decision on what I do.
And I think before I replace my machine here at the studio, I'll probably be
setting up a permanent recording machine in the RV. So that way the next road trip I go on,
I have a recording station that's built into my setup there. And I'll take these lessons learned
and apply it to that rig. And some of the things I did to my desktop, I think I'm going to keep
doing because I just, I prefer the performance when I can. Like, I think I'm going to keep doing because I prefer the performance when I can. I think I'm going to
try to do the multiple disk setup, have home
on its own thing, have my scratch disk,
have root on its own disk.
I think I'm going to try to keep doing that.
If you have the space, why not?
If it prevents you from problems
down the line, get it right at the start.
Some laptops let you do multiple disks,
and of course you can do that with a desktop.
I've mentioned this before, Wes.
I'm just going to plug it again because it's pretty slick.
I'm a big fan of Profile Sync Daemon.
It's that little tool that just moves your browser cache and profile to RAM.
There's a couple extra steps if you have Chromium.
But it moves that cache to RAM, and I think you might be shocked if you actually dug into it.
In fact, you might actually be a little pissed off
if you learned how much your browser writes to your SSD.
It's abusive.
And while it will make your browser faster
by moving its cache in your profile to RAM, obviously,
I'm assuming you've got the RAM to spare.
It's actually, for me, it's more about reducing disk IO and wear and tear on my disk,
minimizing SSD wear, and just leaving the disk IO available for other things on my system.
Than the fact that, you know, I mean, there was literally a time when Firefox had an
intermediate update where, I think it was a bug, it was just constantly spamming the disk
nonstop while it was running. This is a while ago.
And that kind of stuff you just completely avoid.
And you think about it, like RAM is fast, right?
Accessing RAM is in like the nanosecond range,
and going out to the disk, it's like orders of magnitude,
I mean, it's way, way slower to access the disk.
I mean, you already know that, right?
So not only are you going to get something that's just going to make your browser
feel faster, which is nice, but it just, I think, reduces overall wear on your SSD and overall load
on your disk I/O. I recommend it. It can be a little tricky to set up, but it's profile-sync-daemon.
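If you want to try it, the setup is roughly this on a systemd distro; a sketch only, since package and service names can vary slightly between distros (the `pacman` line is the Arch spelling):

```
# Install profile-sync-daemon (Arch example; other distros package it too)
sudo pacman -S profile-sync-daemon

# Run once to generate ~/.config/psd/psd.conf, then edit it
psd
# In psd.conf, pick the browsers to manage, e.g.:
#   BROWSERS="firefox"

# Preview what psd will do, then enable the per-user service
psd preview
systemctl --user enable --now psd.service
```

From then on your browser profile lives in tmpfs and gets synced back to disk periodically and at shutdown.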
I'll have a link in the show notes. And then there's this last one, the one that I'm trying
to get Wes to try out, to sanity check me. Try this out on a VM or something before you buy this tip.
This is one of those maybe don't do as Chris does.
I thought that was this whole segment.
Yeah, I suppose.
But this is my last tip.
If you're comfortable swapping kernels,
I know a lot of you guys out there hate this kind of stuff,
but I've had some great results using like a Zen kernel or a CK kernel, which is really
easy to get going on different distros and really tricky in other distros.
But what they essentially are, it's kind of like the good old days of Linux.
It's a kernel with a set of patches that are just designed to make it a little better under
a desktop load.
Just kind of if you were to tweak the kernel just a little bit to make it a little more
desktop focused and a little less server-focused.
And, you know, I think it improves my desktop responsiveness under load.
I personally have had good results with it.
The people that maintain these kernels and these patch sets
seem like pretty sharp individuals.
But it is swapping out your kernel,
and if you want support from your vendor or you want application support
or you're on an LTS or something like that, I don't recommend it.
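On Arch-family distros, trying the Zen kernel is about this simple; treat it as a sketch, since the boot-menu step assumes GRUB, and linux-ck comes from an unofficial repo or the AUR rather than the main repos:

```
# Install the Zen kernel alongside the stock one
sudo pacman -S linux-zen linux-zen-headers

# Regenerate the boot menu so the new kernel shows up
sudo grub-mkconfig -o /boot/grub/grub.cfg

# After rebooting and picking it in the menu, confirm what you're running
uname -r   # a "-zen" suffix means the Zen kernel booted
```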
I like how you make this sound like it's like back alley patches that, you know, hush, hush.
You know, I pass you a note and then you're slipping me these patches.
No one's supposed to know.
It's just the kind of thing I shouldn't even say.
You know, it's like one of these things.
It's like I'm talking about Fight Club a little bit right now and I'm going to get in trouble for it.
But I got to admit, I have found great results.
I primarily have the most experience,
like a year plus with the Linux CK patch set.
But when I tried out Garuda Linux,
they have the Zen kernel in there
and it's basically kind of the same mix.
And you know what?
I admit I like it.
I know that makes me like some sort of like,
I don't know, like custom Linux tinker
or something like that, that Windows users would roll their eyes at.
Well, you've convinced me enough to try it, I think.
You know, this whole thing, you switching back, I've dug out this old Dell XPS desktop that I had laying around that I stuck this 1060 in.
Yeah.
I think I'm just going to go Garuda on there and make it a standalone little gaming rig.
Have you got a sense of what it's like performance-wise? Like, is it
spinning rust and stuff like that, or is it
SSD? It is. I've got
an SSD on the way to upgrade it.
Ah, good thinking. But boy,
is Windows on there just atrocious.
The SSD makes a huge
difference, and so does
a decent GPU, which is hard to come by right
now, but, I mean, you got at least something, right? That's decent.
Ish. But that, I mean, this is so stupid to say, but that's, it's so great that
in desktops, you can unplug the old slow hard drive and plug in a new fast hard drive, and you
can take out the old slow GPU and you can plug in a new slightly faster GPU. And that's just critical
right now, when you can hardly get your hands on anything. Like, the prices for GPUs are the prices that used to be for an entire computer right now.
It's nuts. And CPUs too. My old crappy CPU is going for $300 right now. That piece of junk
from 2014, 300 bucks. It's crazy. It's crazy.
So it just seems like for a lot of us,
now's the time to try to make the best of what we got.
And I think you can mix and match some of these tips.
Like maybe you can get an SSD off of eBay or something,
or maybe you have one laying around and maybe you could do something like Profile Sync Daemon
and just kind of free your system up a little bit.
Or maybe something crazy like the Linux CK patch set
might make your system feel just slightly more responsive
under load on a desktop,
especially if you can't do multiple disks.
There's things we can do today
to get the most out of these systems
if we got to hunker down
and wait through this whole thing for a little bit longer.
I don't think it's the end of the world.
I think we can do it.
And I'd be curious to hear how it goes for you, Wes,
because I think that SSD will make a big difference.
Oh, yeah.
I'm sure you'll hear me complaining
or
congratulating myself one way or another.
Alright, well we heard back from the developer of
Tube Archivist,
which was our pick last week, and he pointed
out that there actually is a Docker
Compose file ready to go that also
pulls in Elasticsearch, which was my reservation,
and sets it all up for you.
So, there's that.
Ah.
Yeah.
Do you think you'll actually try it then?
I might.
Although we have another alternative that was submitted that we'll talk about in a minute.
I thought we'd save that for a pick.
Did you see that we also got feedback from Tim on why NixOS would be great in our server poll, which we're going to get to as well here in just a moment.
We got a lot of opinions on that.
Tim has some solid reasons here on why we really should try NixOS.
I mean, okay, number one.
I agree.
Number zero, Wimpy agrees.
Number one, we've ran Arch long enough now to show it works, right?
We were trying to prove something that, okay,
maybe you shouldn't run Arch in production, but you can if you try. Number two, Fedora and SUSE
are popular enough that while the rolling variants might be interesting, they're ultimately close to
well-established server distros that just can't compare to the weirdness of Nix. That's number
three as well. Nix is just so off the beaten path. It might actually teach
us something. It's not just about testing the operating system. It's about testing our own
boundaries as well. Nix is the future. Yeah, this is, I heard a compelling argument on the late and
great Ubuntu podcast. And do you want to recap your profound statements here? Yeah, I think that with the growth of, you know,
the container ecosystem around Linux, that will start to penetrate, you know, desktop workstation
engineers more and more and NixOS and the Nix Package Manager. You don't require NixOS to use
Nix, of course, but NixOS and its package manager are absolutely fantastic
if your daily commodity is containers.
And within the circle of container engineers
that I move within these days
who are not, by and large, Linux nerds,
but they care about containers,
Nix is growing in popularity.
Yes, it does seem to be.
Popey, are you rolling your eyes now at this point
every time you hear this spiel about Nix, or do you buy it?
I can see where he's coming from,
and I've noticed that the people who are passionate about Nix
are as passionate about Nix as Arch users are about Arch. So yeah, I can kind of see,
and they have a good compelling argument. I'm not sure I buy Martin's, but I can see why people like
it. Well, Jonathan, the NixOS release manager reached out after last week's episode and offered
to answer your questions to help get us started. I have not looked at the poll results yet because I felt like if I looked at them, I
would influence it, you know, like it's like a Schrödinger's distro, Schrödinger's distro.
Schrödinger's distro.
That.
Thank you.
And so I didn't want to look at it yet.
So we're going to look at the poll results and see what the audience voted for our next
Linux distro on our garage server.
The new server has arrived.
The Dell box is here.
Wes and I unboxed it this weekend.
It's beautiful.
It's amazing.
Yeah, wow.
I mean, you know, I expected a used server with all the pros and cons that comes with that,
you know, just when servers are used.
But this thing looked brand new.
Yeah, I don't think there was a speck of dust inside.
I mean, Mr. Real Zombie Geek hooked us up big time with this thing.
It is loaded to the gills with RAM and CPU.
So we're off to the races now, and we're going to get sleds and disks.
And it takes, you know, I think it's 2.5-inch SAS disks.
So if you've got a batch of 2.5-inch SAS disks
that I would need for a Dell
PowerEdge server, let me know. Maybe we could buy it at a good price. We also need some caddies for
that sucker, but that's all like, we're all getting that ready. So the distro is definitely
going to be a big part, but before we reveal what the results are, I wanted to touch on feedback
that we got from Chris from Linux After Dark or AKA the Green Sys Admin. He wrote in talking about our pontificating on using the Raspberry Pi as a wire guard gateway.
And he says, you know, I don't know if it'd actually be up for the task for what you guys are doing,
but I did want to defend the little box.
And he points us to some great resources, including over at the OpenWrt forum,
where folks did some benchmarks and showed that a Raspberry Pi 4 as a WireGuard VPN gateway
can actually sustain about 500 megabits,
almost all the way up to a gigabit on the WAN link,
while using only about 25% of its CPU capacity.
He doesn't think you need much more than probably the 2GB version
and the 8GB he says would be a big waste.
And the OpenWrt community
has gone all in with the support.
He also mentions another option
might be the compute module,
but you're going to be sharing
some of your PCI bandwidth
with the USB bus.
And all in all, he says,
a final note,
I've been running WireGuard server
on my OpenWrt router.
It's a Flash Linksys consumer unit, and he's had great results.
So if you're out there, you're still thinking about doing that.
He says it works great still.
He says there's really no point in setting up separate boxes.
He can just do it on there.
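For the curious, a WireGuard interface on OpenWrt is just a few UCI stanzas in `/etc/config/network`; the keys, addresses, and peer name below are placeholders:

```
config interface 'wg0'
	option proto 'wireguard'
	option private_key 'REPLACE_WITH_SERVER_PRIVATE_KEY'
	option listen_port '51820'
	list addresses '10.14.0.1/24'

config wireguard_wg0
	option description 'laptop'
	option public_key 'REPLACE_WITH_PEER_PUBLIC_KEY'
	list allowed_ips '10.14.0.2/32'
```

Each `config wireguard_wg0` section is one peer; add a firewall zone rule for the WireGuard port and you're serving VPN from the router itself.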
He says, enjoys the show, and thanks for the shout out for Linux After Dark.
Podcasters, unite.
Go podcasting.
Amen.
Thanks for writing in.
And, you know, I think this seals the deal in my mind.
We got to try it, right?
I mean, 500 megs, that's plenty for the studio.
We sure could. I mean, why not?
What could go wrong, Wes?
All right. I have not yet revealed the results to myself
because I have to hit the little button.
So I'm going to...
Our options, when we asked the community to vote
on what our next distro should be,
we wanted a rolling distribution.
We thought,
should we stick with Arch,
see how far we can take it,
try out Nix,
maybe it's time to give Tumbleweed a go,
or could we somehow get Rawhide to perform?
And the community said,
Tumbleweed,
Tumbleweed.
I don't know how to feel except a little queasy.
I think this is a karma vote
because I gave the lizards a hard time on Coder Radio.
I think the Coder Radio audience ganged up
and came over here and voted Tumbleweed as a spite vote.
I think that's what happened.
Nick's had a pretty good showing, though.
So Tumbleweed got 49.9% of the votes.
Oh, wow.
Yeah, I mean, Tumbleweed got it by a huge margin.
Nix came in second place at 21%,
Arch at third place, 16%,
and Rawhide at 11%.
Oh, my goodness, Wes.
How do you feel about this? Are you ready to deploy a tumbleweed box? No, not really. I was hoping this was going to be close enough we could sort of fudge the
numbers and just go with whatever we wanted. But no, this is a knockout win for tumbleweed. I guess
our fate is chosen. Do you think somebody monkeyed with the results here? Do you think there was some
election tampering happening? You know, some
outside influence, if you will?
I wouldn't put it past our clever audience,
but, I mean, here we are.
I still think, especially, you know,
with Jonathan reaching out from Nix,
we've got to try that in some capacity,
even if it's not this version of the server.
So, I'm sure there'll be more Nix to come,
but I guess we've got to look forward
to Tumbleweed and figure out how to make it our own.
I completely agree.
We'll find something to do with Nix
and we'll have a go at it
and I think it's a great opportunity
to learn more about it
and share that with the audience.
Something else we got asked about on the show
was why not Gentoo?
It's technically rolling.
You guys could have gone Gentoo,
and it just, it really came down to, we also,
and we didn't explicitly say this, but we also wanted something, maybe I did actually,
we wanted something we could update during the show.
So that way we could capture the results for you live as they happen.
We're not like doing it after the show and like fixing it off air and then coming back,
you know, we wanted, we wanted to be held responsible and accountable for it.
So that meant we had to be able to update it during the show. You could update it through a season of shows
if you're using Gentoo. Yeah. Yeah. I'll come back next week when we have the updates installed.
You know what, with all the cores this thing has, maybe we actually could have, but no, for now we,
it looks like we're going with Tumbleweed and I will make peace with that. And you know what?
My commitment is this.
If Tumbleweed performs and it does a good job, I will own that, and I will acknowledge that.
I'm not going to hate on it the entire time.
I will give it a good, honest go and report back our findings.
And some of you who've been listening for a while may recall I was a huge Btrfs skeptic for a very long time, especially once when I had a data loss issue.
But over time, as the software has improved, I've changed my tune.
And now I probably have Btrfs deployed on nearly 20 systems.
That's a huge about-face.
And I'll own up to it here with Tumbleweed as well.
If we get through this and it's a rock solid rolling performer that makes a great on-premises server, I'm going to tell you guys about it.
So you voted for it.
We're going to see what happens.
But that also means that if it disappoints, I'll also address that.
So there is both sides of it.
But I actually, I'm feeling pretty good.
I've noticed a change in tune with the audience.
They seem pretty hyped about it these days.
And I think that played a role in the voting too; it legitimately has become more popular amongst our audience as well.
I mean, it almost makes me think we should put it on one other machine, you know, besides the server so we can get some experience with it.
Hmm.
It's not the distro.
It's how you use it.
All right.
We have two picks to get into this week.
The first one's right on topic with the show.
You found this, Wes.
It's like a patch bay
for Pipewire. Found it.
A.K.A. read Christian's blog post where he linked
it. Yeah, it's written in Rust.
It's a GTK-based
patch bay explicitly
for Pipewire, inspired
by the JACK tool Catia, which
many JACK users will be familiar with.
So we have a bunch of patch bays.
Up till now, though, they're basically all stuff that was written for JACK.
And if we ever want to have similar functionality
for the video side of stuff in Pipewire,
we're going to need something Pipewire-specific.
This is pretty neat.
What it lets you do is connect blocks to wire up your audio system.
So you have applications like maybe Chrome, and maybe you want
to take something from VLC or MPV, and you want to play that audio into a video call that you have
in Chrome or Firefox. You can use something like this Pipewire Graph Editor now to draw connections
between applications or input and output devices and actually make things like that happen.
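For the terminal-inclined, PipeWire also ships a CLI tool, pw-link, that does the same kind of wiring without a graphical patch bay. Here's a rough sketch; the node and port names in the comments are hypothetical examples (yours will differ), and the listing commands only do anything on a machine with a live PipeWire session:

```shell
# List every available output and input port (requires a running PipeWire session).
if command -v pw-link >/dev/null 2>&1; then
  pw-link --output   # all ports that can be a source
  pw-link --input    # all ports that can be a sink
  # Hypothetical example: route VLC's stereo output into Firefox's capture,
  # the same connection you'd draw graphically in a patch bay:
  # pw-link "VLC media player:output_FL" "Firefox:input_FL"
  # pw-link "VLC media player:output_FR" "Firefox:input_FR"
else
  echo "pw-link not installed; nothing to list"
fi
done_pw=1
```

The graphical editor is drawing exactly these links; the CLI is just handy for scripting a setup you want recreated on every boot.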
This is great.
And it's the realization for end users of what the advantage of moving to something like Pipewire is.
And it double-checks the box because, like Wes said, it's written in Rust.
And even if you have no use for something like this,
go check out the link in the show notes
just so you can visualize what we're talking about here
so you can see kind of why people are getting pretty excited
about being able to do that kind of stuff.
Yeah, I mean, it's tools like this that actually make the show today possible.
That's very true, very, very much.
I wish I could show people how we have things wired up so that way we can have a mumble room and a guest and remote hosts all on one system.
Everybody hears each other.
It's all wired up virtually using JACK.
And it's pretty powerful.
And I think it enables a type of production that people might not even know is possible if they're not familiar with the technology.
So go check it out.
Number two.
We have two picks. That's right. Not one, but two crazy picks for you this week
is another alternative that came in over TubeArchivist. It's called TubeSync. Think of it
as like Sonarr or some of those automated PVR web apps where you say, I want this thing to be
downloaded every time it has a new release. And it goes off and it gets that for you. It's like
TiVo for YouTube. So you give it a channel or you give it a feed, you give it some sources, and then it
monitors those sources and then automatically downloads those channels or whatever it might
be, a YouTube playlist to your file server for you to watch later. And it tries to pull down
the metadata information, the thumbnail, and it tries to preserve the original URL for you as well.
You just need Docker or Podman to get it going.
It's pretty straightforward.
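For a sketch of what "just Docker" looks like in practice, here's a minimal Compose file in the spirit of the TubeSync README. Treat the image name, port, and volume paths as assumptions to verify against the project's current docs:

```yaml
version: "3.7"
services:
  tubesync:
    # Image name/tag assumed from the project's README; double-check upstream.
    image: ghcr.io/meeb/tubesync:latest
    container_name: tubesync
    restart: unless-stopped
    ports:
      - "4848:4848"                        # web UI
    volumes:
      - ./tubesync/config:/config          # app state and database
      - ./tubesync/downloads:/downloads    # where synced videos land
    environment:
      - TZ=America/Los_Angeles             # set to your timezone
```

Point the downloads volume at your file server's media path and the rest happens from the web UI: add a channel or playlist as a source, and it syncs on a schedule.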
And you end up with something maybe a little closer
to what I was looking at.
I'm going to play with both of them.
Yeah, this looks great.
Wow.
I know.
Isn't this just so neat to see this?
And I knew other people out there
must be trying to solve this problem.
So if you are too,
we got a couple other folks who sent in like their personal scripts and stuff like that.
If you're looking for, or if you know, a great tool to essentially watch a YouTube channel and automatically download and store releases somewhere on your file system so you can easily watch them offline, let us know.
Linuxunplugged.com slash contact.
We'd love to hear about it.
It's just so neat.
And so that's TubeSync, and we'll have a link to TubeSync in the show notes at linuxunplugged.com
slash 426. Better get cracking, Wes. Get syncing your tubes. You know I'm already downloading it.
And you can find our friends A Cloud Guru on social media. They're just slash acloudguru on
Facebook, on the YouTubes. I mean, if they're up, but you know, when the social
medias are up, you can find them at slash acloudguru. It can be rough there. However,
respect to Twitter and Telegram. Telegram saw a massive wave of new users and mostly held.
And so did Twitter. They mostly held. Yeah, you know, some of them got it right.
This show, our Twitter account, remarkably,
just didn't see a huge flood of new users.
But maybe you want to go follow it anyways.
At Linux Unplugged on the Twitter.
The network is at Jupiter Signal.
And of course, all of the whole network of podcasts,
like Self Hosted, Linux Action News, and Coder Radio,
all over at JupiterBroadcasting.com.
And don't miss Linux Action News.
Things like that Pipewire story about the nuts and bolts of how that's going to work,
that's in there.
Linux Action News is covering the stuff that really matters every single week in a nice,
tight, concise way.
Go get it at linuxactionnews.com slash subscribe.
And then you can just find out about what happens every single week in the world of
Linux.
And we invite you to join us live every Tuesday.
We do this show at noon Pacific, 3 p.m. Eastern.
See you next week.
Same bad time, same bad station.
Yeah, come on over, hang out in our mumble room,
participate in our live stream chat,
or just kick back and watch live while you're,
I don't know, working.
I don't know what you do in the middle of a Tuesday.
Weirdo. I don't know. Hopefully at least you're using Linux, but I don't know.
I mean, if you're not, you could kind of make up for it by watching a Linux stream.
Yeah, right. You're trapped on Windows there. Just pull up the latest Linux unplugged,
pull up the live stream. You'll feel a little better about yourself.
There you go. JBLive.tv on a Tuesday.
Or, you know, download it.
We do go through all the trouble of editing and publishing.
Might as well do that, too.
Thanks for joining us.
See you back here next Tuesday! So JBTitles.com
Let's go pick our title this week.
You know, I hope the Asahi Linux team is successful.
I saw they had another update today, but I didn't get a chance to parse it.
Did you parse through it all, Wes?
There's a lot of stuff in there, mostly a whole bunch of different updates
on the various patches
and driver bits
that they've been working on
to slowly get upstream
which parts are working
and what's in progress.
I'm half kidding when I say this,
but only half kidding.
In a way,
if Asahi Linux got
really great desktop Linux support
on the M1 Macs,
it'd be a viable option
for people like me
who want to get fast hardware right now
that doesn't suck a ton of power. Because I'm looking at this from the RV perspective, where
more and more I'm living off solar, and there's just a massive cost difference for me in terms of
power between 33 watts and 300 watts. And you know, that's maybe like the PC not even working that
hard. I'm not playing a video game, I'm not doing anything that's rendering, right? I'm just,
maybe I have my basic apps open, and it's going to have a 180, 200 watt draw,
versus the absolute max of an M1, which is like around 33 watts. Yeah, that's like days of difference
for me in power usage. But I'm just,
I'm not going to pull the trigger on that until I know I can run Linux on it and it's going to run
well. But it's like this weird situation we're in with these part shortages that might extend out
into, you know, the middle of 2022 if we're lucky. It's weird that an M1 may actually be a
viable option to get a decent performing Linux desktop if they pull it off.
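To make that back-of-the-envelope math concrete, here's the rough daily energy comparison being described. The hours-per-day and battery capacity are assumed numbers for illustration, not figures from the show:

```shell
hours=8                 # assumed hours of active use per day
desktop_watts=200       # the "basic apps open" desktop figure from the discussion
m1_watts=33             # quoted absolute max draw for an M1

desktop_wh=$((desktop_watts * hours))   # watt-hours per day for the desktop
m1_wh=$((m1_watts * hours))             # watt-hours per day for the M1

# A common RV house battery: 100 Ah at 12 V, roughly 1200 Wh nominal.
battery_wh=$((100 * 12))

echo "desktop: ${desktop_wh} Wh/day, M1: ${m1_wh} Wh/day, one battery: ${battery_wh} Wh"
```

At those assumed numbers the desktop burns more than a full battery's worth per day while the M1 uses about a fifth of one, which is the "days of difference" being described.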
Did you see in the responses to Jeremy's tweet
about ARM64 availability for Pop,
people were going, oh, M1, which might be interesting.
You know what? If I were them, I might do it.
Why not? Go for it.
Be like the premier M1 Linux distro for the desktop, right?
Because you're going to have Asahi Linux,
which is going to be this great vehicle for upstream development.
And maybe, I don't know what it turns into.
Maybe it turns into a desktop distro.
Maybe it's a development distro.
But, you know, they could move in.
It's a little bit naive to see ARM and think Apple M1 silicon
because it's literally just the processor core
and the instruction set that's the same.
But what makes the M1 chip what it is
is all of the custom chips that exist in that thing,
which is probably 80% of what it is.
And that's where all the complexity on the bring-up, you know, lives, because none of that's documented. That's Apple IP, and that's what these people are working so hard to unlock. Yep.