LINUX Unplugged - 362: The Hidden Cost of Nextcloud
Episode Date: July 14, 2020. Our team has been using Nextcloud to replace Dropbox for over a year; we report back on what has worked great, and what's not so great. Plus, why Linus Torvalds has become the master of saying no. Special Guest: Drew DeVore.
Transcript
I read email. I write email. I do no coding at all anymore. Most of the code I write,
I actually write inside my mail reader. So somebody sends me a patch, or more commonly,
they send me a pull request, or there's a discussion about the next pull request,
and there's something I react to and say, no, this is fine, but.
And I send out pseudocode, or I'm so used to editing patches that I sometimes edit patches
and send out the patch without having ever compiled it, ever tested it, because I literally
wrote it in the mail reader and saying, I think this is how it should be done.
But this is what I do.
I'm not a programmer anymore.
I read a lot more email than I write.
Because what my job really is, in the end, my job is to say no.
Somebody has to be able to say no to people.
And because other developers know that if they do something bad, I will say no,
they hopefully in turn are more careful.
But in order to be able to say no, I have to know the background.
Because otherwise, my no, I can't do my job. So I spend all my time basically reading email about what people are working on.
Well, that just leaves us one burning question, which email client?
Hello, friends, and welcome into your weekly Linux Talk show.
My name is Chris.
My name is Wes.
Hello, Wes.
This episode is inspired by our email. We have read emails, tweets, and telegrams,
and we'll be doing a whole batch of follow-up on topics we've talked about before on the show and give you an update on how they're doing. Plus, we'll dig into some,
well, not exactly community news, but similar community news, and we'll play a few more Linus
clips from a recent talk he was at. We have a bunch of good stuff, but before we can really
do any of that, we have a little bit of upfront work that has to be done. I got to say hello
to Drew and Cheese. Gentlemen, welcome into the show.
Howdy.
Hello.
Thank you for joining Wes and I.
What do you say we give a holler to that virtual lug. Time-appropriate greetings, Mumble Room.
Hello.
Hello.
Hello.
Hello. We are sneaky recording ahead this week.
So if anything major has happened that you think it's a little strange we haven't talked
about, that's because we're doing a little pre-record while I'm taking a family trip in
Bozeman, Montana right now. But don't worry, we'll pick it up next week. We're just not as
clairvoyant as we used to be, you know. You know, I should have looked at that crystal ball before
we started, but I just, I keep forgetting it, put it in the cupboard and I never go in there anymore
because I need a crystal ball to tell me to go get the crystal ball. I was in Texas. Now I'm on a trip to Montana and then I'll be on a
trip back to the studio. So when it's all said and done, I think it's like 40, 45 days on the road.
It's been a crazy, crazy couple of weeks or many weeks now on the road, but it's almost over. And
so we thought we'd do one more record ahead. Most of them have been live, but we thought we'd do one
more record ahead so that way I could go
do some swimming with the family.
I was enjoying this Linus keynote.
He had a conversation
with Dirk Hohndel,
who is the VP and chief
open source officer at VMware.
It's about a 26 minute talk.
And I have the whole thing
linked in the old show notes
if you'd like to
listen to the entire thing.
But I grabbed two other bits about it that I thought were a little fascinating that maybe we could play and talk about.
And one of them was Linus was asked about the possibility of launching a new project.
So in Jim's introduction, he pointed out 28 years of Linux, 15 years of Git, and he always forgets to mention eight years of Subsurface.
The third most important open source project.
I agree.
I wholeheartedly agree.
But isn't it time for a new project?
No, no, no, no.
I am so done.
I'm very happy with how Git ended up.
But every time Git is mentioned, I want to make it very, very obvious that I maintained Git for six months and no more.
And it's been 15 years.
And the real credit goes to Junio Hamano and a lot of other Git developers.
I'll take credit for the design.
And the thing that makes me happy about Git is not that it's taken over the world.
It's that we all have self-doubts, right?
We all think, are we actually any good?
And one of the self-doubts I had with Linux was it was just a re-implementation of Unix, right?
Can I do something that isn't just a better version of something else?
And Git proved that, yes, I can.
Yes, you can.
And that, to me, having two projects that made a big splash means that I'm not—three projects—means that I'm not, like, this one-hit wonder.
To those in the audience who don't get the joke, so the third project, Subsurface, is one that I now maintain, so I keep trying to talk it up, but it's okay.
I thought that was very human that even
Linus... Yeah, right, the Linus
Torvalds. We all have imposter syndrome
and doubt our own talents, even
when you're, you know, top of the open source world.
I never would have thought that Linux,
you'd think, an accomplishment like Linux
would be so, so clearly
a major
accomplishment that you wouldn't
doubt it, you wouldn't question it. But the human mind,
Wes, finds a way. It's sneaky like that. It'll make you feel bad about anything great you've done.
Dumb mind.
Right? I wonder if there's a world where, you know, Linux was less popular and Git became
his biggest project. I think in some ways, you know, it's touched so many more circles. I mean,
yes, Linux runs everywhere and is great.
And that's the whole point of the show.
But so many people who have no idea about Linux
or couldn't care less about Linux,
well, they all collaborate on GitHub.
There was another bit in here
that I never really paid much attention to
because there's no real public-facing example of it.
But were you aware, Wes,
that the kernel has secret code names?
Secret code names?
Yeah. Each release, like a distro.
Well, now we got to hear some.
A few years ago, you started using codenames for the development kernels.
And in the new release of RC5, you changed the codename of the current kernel.
Can you talk about that?
I've used... How many in the audience know that kernels don't only have version numbers?
They have names, too.
I see most people now.
So it's not just a few years ago.
I've done this since way before 1.0.
The kernels have always, in the main top-level make file,
there's a name variable that gets set,
and it's usually a random creature.
Where I live, there was this one squirrel that kept running in front of my car for a week.
And so I named the kernel Suicidal Squirrel.
And the name is not used anywhere.
It's literally just a variable that gets set and then never, ever used again.
But it's been this ongoing joke for me for the last 20-plus years.
And I think Greg names his stable kernels too.
And if something happens, either in the news or in my personal life, you can sometimes see it in the name change.
So the last release on Sunday, the name changed to Kleptomaniac Octopus.
Because we were diving.
This is what Dirk was aiming for.
We were diving and playing with an octopus and two different
octopus. And the first one tried to steal my flashlight. And the second one tried to steal
Dirk's camera. And the picture of that, if you look at my Twitter handle, that was up a moment
ago. I actually posted a picture of that. It's pretty hilarious when the octopus goes for my
camera. So the name has absolutely no meaning, but sometimes you can guess that, oh, something happened in Linus's life where some random animal did something silly.
Suicidal Squirrel was 3.12 RC1, and Kleptomaniac Octopus was 5.4 RC5.
That's great.
I like this one.
Blurry Fish Butt.
I want to know the deal behind Wet Seal.
What was the story there?
Or deceased newt?
Or one that's not really an animal.
Series 4800.
Don't know.
Don't know.
But how are you feeling, Wes?
Are you ready?
Do you have an SSH session open to our Arch server?
Are you ready for one of those famous live Arch server updates?
This is our follow-up episode, after all.
Establishing connection right now, Chris.
All right, well, we figured we've collected a lot of topics on this show,
told you a lot of things we've switched to or tried,
and it was time to tell you how they're holding up.
So we start with the Arch server.
Of course, something that we're
keeping on top of all the time, right Wes?
I'm sure we've been doing lots, yeah, lots of updates.
That's right, uh, synchronizing
package databases, totally not
173 of these right here.
Including a new
stable kernel.
Jog my memory, did we switch this
box over to the LTS kernel, or did
we not do that yet? Yes, we did.
We got a little less crazy on the Arch front.
Okay, Wes.
Let's do it.
Hit the upgrade button.
Now, this is a little bit more critical while I'm on the road,
because I use WireGuard to connect into the studio to do a lot of job functions.
I do see a new version of WireGuard coming on down the pipe right now.
We got a new kernel, got a new WireGuard. New GCC, all kinds of good stuff. Probably means a
new ZFS DKMS module. Yeah, we're going to need to rebuild some of those, that's for sure. This
is probably the most critical update we've ever done because if it breaks, I'm really screwed.
Yeah, you know we have a show to record this week. Yeah, I pretty much
wire guard in almost every single day to do something. Oh, a new version of Docker. That
doesn't matter for us, right? No, we just run everything in Docker. What could go wrong?
It's going to be fine. Now, for those of you new to the show, Chris and Wes do not recommend that
you run a rolling release distro as a server unless you really
can babysit the thing. But what Wes and I wanted to do was try to build an arch server with as few
moving parts as possible so that we could test the theory if it's possible to build a minimum
viable Linux server and run it rolling. With the applications containerized whenever possible,
With the applications containerized whenever possible, the data broken off onto its own array in ZFS.
In theory, the Arch system is just a user land set of tools, the kernel gets it going, and then it fires up the containers and gets us access to our data.
So this was our philosophy when building a minimum viable Arch server.
Don't recommend it, but we're doing it so you don't have to.
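For reference, a minimal sketch of the kind of upgrade pass described in this segment, assuming the Snapper snapshots and mkinitcpio setup they mention; the config name and exact steps are placeholders, not their literal run:

```
# take a pre-upgrade snapshot first (the "belt and suspenders" Snapper setup)
sudo snapper -c root create --description "pre-upgrade"

# sync package databases and upgrade everything on the rolling release
sudo pacman -Syu

# pacman hooks normally regenerate the initramfs, but it can also be run by hand
sudo mkinitcpio -P

# reboot onto the new kernel once the DKMS modules have rebuilt
sudo systemctl reboot
```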
Where are we at there, Wes?
Well, we're currently removing the current set of ZFS packages and DKMS on its way to build a new one.
However, we're going to need to do that again
because there's some updates on the OpenZFS side
that we need to get in place as well.
You ever wonder if there's a better way to do these kind of updates?
Like, you've already crossed the point of no return now
before we've even installed the new stuff.
We just gotta not reboot, I guess.
Yeah. And, you know, that said, we do
have those snapper snapshots set up
and it did take one right before we started
this upgrade, so we've got
a recovery path. We just don't want to have to
use it. True. That was part of our belt and
suspenders approach, was getting snapshots
in there. If this was a VM
system, I would absolutely
take a VM snapshot as well, but we're running it on bare metal super micro box that I got from
Unix Surplus. Old school. Still would buy it again. I got to say, speaking of follow-ups of all
follow-ups. That has just been such a great recommendation by Alan Jude right there. Yep.
I would buy that again. Unix Surplus was a good way to go. There's probably other ways to do it too,
but it's been pretty great.
So I guess I'll kind of know when you do it
because I'll lose connection to the studio for a bit.
So nothing like doing it live.
I'm only sweating a little bit over here,
but thankfully you can't see it.
That's true.
That's true.
Oh, oh, we're loading our freshly compiled ZFS module.
That built pretty fast, actually.
Yeah, not too bad.
All right, now we are going to go update our initramfs as one needs to do.
I have to admit, this box went a little bit longer than we'd normally go
because there's been some transitions at work, plus I've been on the road.
So this entered the worst case scenario zone, as far as I'm concerned.
I think we intended to probably upgrade this thing at least
every 30 days? Yeah, you know, once a month at least, maybe
more often, depending on how often it comes up. Because, you know, with Arch, most of the time
it's just a couple of packages if you stay up on it. It's not a big deal, and if something does break,
well, you only upgraded six
things, and usually there's a note about
what went wrong. Okay.
I'm not nervous. I'm not sweating.
It's totally fine.
I'm not concerned. We're good now.
So we've moved on. We're using
yay, the AUR helper, on this box,
which I've been very pleased with
and continue to be. So
it's figured out that we also have some AUR upgrades that are needed,
and now we are compiling the newer version of OpenZFS.
Oh.
Yeah, so previously we upgraded to the new kernel. DKMS was smart enough to figure out
that it needed to go rebuild the module
from the existing code base for the module for the new kernel.
Now we're getting a new code base for ZFS,
and then we'll build a module for the new
kernel and that. Boy, that's really good that you mentioned that because my standard move there
would have been to skip AUR updates when you're just doing a core server upgrade. So I'm glad
you're the one driving right now. So you keep an eye on that and flag me when it's ready.
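A rough sketch of what that looks like from the driver's seat, assuming yay and DKMS as described; package names and output will differ run to run:

```
# yay wraps pacman, so one command pulls repo updates and rebuilds AUR packages
yay -Syu

# confirm the out-of-tree modules (like ZFS) were rebuilt against the new kernel
dkms status
```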
I want to move on to some of the follow-up that we wanted to do, because this got kind of kicked off when I was tweeted a question about how our Nextcloud install is doing.
And so I wanted to go look up everything about it so that way we could talk a little bit about how big it is, how it has worked or hasn't worked for us.
Because NextCloud promises for a team like us to solve the horrendous cost of Dropbox.
A Dropbox Pro account or enterprise, whatever the crap they call it,
is $1,000 a year for five users.
And you have to have a minimum of five users.
It's too much.
And our core function here is we record a podcast
and we need to share the files with an editor like Drew.
You know how mad he gets if you don't send him the files right away.
You know what, Wes?
He gets so mad.
He really, he gets so mad that I just have a soundboard clip of it.
I get so mad.
So that's how you know he gets so mad.
We want to send Drew FLACs, for example,
or we have a bunch of assets that we want to work with for putting a show together,
or Cheese is working on an art project and he wants to share it with us and sync it and back it up.
So in the past, we had used Dropbox.
The Dropbox advantage would be they manage the server,
they manage the storage,
and they have a really high-speed CDN network.
So you get a pretty decent user experience.
It works fairly good for transferring large media files, even people that are spread out around the world. And for the most part, other than the cost, you really don't have to even give any thought to the storage. It just grows as you need it.
Nextcloud, when you roll it yourself, doesn't have a global CDN, doesn't have anybody managing the storage.
And for the most part, it's cheaper until you really start using it a lot. And then it can get pretty expensive. So I want to talk about that too, because it sort of depends on how we have
it deployed. And in this case, what we opted to do was deploy Nextcloud on DigitalOcean.
And when we made this decision, we experimented with different DigitalOcean data center regions to try to find a somewhat middle ground between
transfer rates for folks in Texas versus Savannah versus London versus Seattle versus New York and
et cetera, et cetera. Just where we have people spread out, we wanted to stand up instances in different areas in different data center locations and see if they made any noticeable difference in transfer speeds.
Well, yes, they do.
Very much so.
So we ended up kind of, I think, splitting the difference.
Wes, do you remember what data center?
Located in NYC right now.
Okay.
So we went on the East Coast to help Drew in Savannah and Joe in London.
It just kind of made the most sense because folks in Texas still got a decent connection and it was totally serviceable for Wes and I
on the West Coast. And now like right now when I'm traveling, it's sort of more centralized to
where I'm at. We solved that one problem. But then we had the problem of storage because one of the
problems with past attempts to use NextCloud was we would just run out of disk.
Everybody's working away on the team. They're saving files every single day. Maybe you have it
keeping copies of deleted files too, so that takes up space. And it would just grow and grow and grow.
We eventually would extinguish local storage and we'd have to come up with all these different
solutions. I one time famously almost lost an entire episode of Unfilter
because the Nextcloud system started to eat itself.
And I swore off Nextcloud for like a year after that.
But this time around, we had a couple of advantages.
We had an idea of how to solve the storage issue,
and we had people on the team that had used Nextcloud.
I'll talk about what storage option we went with because it's very unique.
But before we get to that, I thought I'd let Cheese chime in on this part because you were there when we were really trying to make the decision about dropping Dropbox and switching to NextCloud.
And you were one of the guys advocating for it.
And I'm curious with, what, over a year now that we've been running it, what your hindsight is on that original discussion and using Nextcloud in production as we have? Overall, it's been a pretty good experience.
There has been times where, you know, syncing all of the things, maybe it's not quite as
intelligent as Dropbox in some regard, seemed like it would flood and just saturate the droplet,
preventing others from accessing or having varied quality with their throughput
to the server. But overall, it's been pretty good. My personal Dropbox, at one point I had,
you know, like a WeeChat and tmux kind of setup, and then I had HoneyRSS for news feeds and just a bunch of random little droplets. And I was able to condense the majority of that into Nextcloud.
So there's a news application that you can add on.
It's been great for me.
The companion app for the Android device for the news app is great.
You use the Notes app, which you can also preview in Markdown.
I don't know what we're up to now with the JB
because the amount of storage that we have there,
but I'm spending 10 bucks a month.
So a $5 droplet and, excuse me, a $10 droplet,
$15 a month, a $10 droplet and 50 gigs of volume storage.
The use of NextCloud for yourself
and a significant other is really great
because as you go on, you've got documents,
you know, just all kinds of things for vehicle purchases or insurance stuff. There's just so
many places where you both want access to something. And a lot of it is confidential.
And I'm not super confident storing that stuff in any cloud service that's on somebody else's
service. So being able to run
Nextcloud on your LAN or on a droplet that you have root access to is, it's a huge feature of it.
And I wasn't sure if it would go from that scenario to a team scenario, but Drew, you're
also a Nextcloud guy and you were advocating for us to give it a shot. I'm curious what you think
now a year plus into using it. And from a perspective of somebody who's often looking to get files from the various hosts that are spread around the world,
you're looking at it maybe in a bit more of a day-to-day production scenario than I am where
I just drop files on there. For you, it makes a big difference in your day. So what I like about
it, which is not necessarily unique to NextCloud per se, but I really like the permission systems and being able to hand a URL to like a contractor or a guest and have it just be a Dropbox for them to upload files without actually giving them access to read anything in the system.
I love that feature.
I can get stuff from people without actually creating an account or having to do any kind of complicated stuff. It's literally just
drop the file on the browser, you're done. And being able to do specific syncs, like specific
folders where I don't have to pull down everything to me is great especially given
that with Dropbox you have to default to sync all and then you have to go in and unselect everything
whereas with Nextcloud you can take the opposite approach and I just sync specific folders which
is great for me because it significantly limits the amount of stuff that I have to
actually pull down because like I don't need cheese's design folder on my local computer
I don't need half the stuff in our team share I really just need these raw audio files and so
that's what I think so the ability to really narrow it down and have that much more control over what we're doing
versus how I felt Dropbox was, to me, it's a significant improvement.
Yeah, I like your point about it's much safer and simpler to just loop in a contractor or a
new team member and just set them up an account and get them going. With Dropbox, it would be a license
and it would be a significant cost. I have found myself in the past creating a contractor account
that I pay one license for and then I just change the password when I share it with different people.
I'm sure that's probably not the way Dropbox wants me doing it.
I don't think that's the way anyone wants you to do it.
No, I don't even like it. So that's one of the things I've definitely liked about switching to Next
Cloud. So right now, as we record, our monthly run cost for 161 gigabytes of data in the cloud
and the droplet that runs it is $305 a month, which obviously is quite a bit. I'm not happy with that, to tell you
the truth, because one of the things we wanted to do was lower the cost by switching from Dropbox.
And any of you who are doing the quick math on $305 a month is quickly coming to the realization
that we're paying significantly more than $1,000 a year
for the storage. But it does bring a lot of other perks and benefits, one of which, of course,
is that we're hosting it ourselves. But I mentioned we're running on DigitalOcean, Wes. It's a box
with 8 gigs of RAM, two virtual CPUs, and 25 gigabytes of local disk. But it's got a database
and it's got kind of a unique storage setup. Yeah, you know, well, we were worried about having to do upgrades, deal with expanding storage down the road,
because, well, some of the team uploads WAV files or just giant FLACs for some reason.
And, you know, that stuff eats up disk space really fast.
And pretty much always from a production standpoint, if you can leave the original sources around for a while,
that's just the better option.
You might need them again down the road.
And once they're gone, they're gone.
So we're actually hooking into the back end and putting all of the stuff there.
That's not just, you know, some of the configuration files and stuff that's on the local disk,
but all the stuff that's uploaded through the UI, through the syncing system,
that all lives on DigitalOcean Spaces,
which if you're not a DigitalOcean user, is basically their S3-equivalent object storage.
Now, did we essentially use NextCloud's support
for S3 storage to use Spaces?
Is this some sort of hack?
What happened?
How did this...
How is this possible?
Yeah, so you can go into the NextCloud configuration,
and they've got a storage configuration,
and, yeah, they support, basically,
you know,
anytime it's going to do some file operations on the backend
instead of writing it to the disk.
Well, in the Nextcloud world,
this is all handled in the database anyway, right?
These are not like, they're not just named the regular name.
There's database entries, there's GUIDs involved.
And then eventually that gets written somewhere on your disk.
So instead, once you've set this up to use S3,
then that call just gets made, sent to the cloud, stored up there.
And from the user experience, you're none the wiser.
It's totally transparent to you, at least when it works.
The nice part here is while the S3 storage protocol is not really a standard,
it's been around long enough now that basically everyone implements it.
So whether you're actually using S3 on real AWS or you're using DigitalOcean Spaces,
they talk the same language, and it pretty much just works.
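For anyone curious what that hookup looks like, this is roughly the shape of Nextcloud's documented config.php stanza for S3-compatible primary object storage; the bucket, endpoint, and credentials below are placeholders, not the actual JB setup:

```
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'   => 'example-space',                 // placeholder Space name
        'hostname' => 'nyc3.digitaloceanspaces.com',   // Spaces S3 endpoint
        'key'      => 'SPACES_ACCESS_KEY',             // placeholder credentials
        'secret'   => 'SPACES_SECRET_KEY',
        'region'   => 'nyc3',
        'use_ssl'  => true,
    ],
],
```

Worth noting this is meant to be decided up front: pointing an existing install at new primary storage doesn't migrate files that are already on local disk.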
I haven't seen this lately. It may have been fixed.
I know DigitalOcean had some issues open on it,
but there's a couple different nuances of the protocol
that they didn't support exactly the same as Amazon did.
And so it meant when some of our contractors were uploading multiple files at the same time
through the web UI specifically, sometimes some of those would fail.
But 95% of the time, you know, uploads from the studio, uploads that I'm doing, it's pretty much just worked.
Yeah, I do recall a few issues early on, like you mentioned, when people were uploading multiple files.
Or if somebody was trying to upload files and somebody else was cleaning up a directory at the same time
and it involved removing files,
everyone's sync operations would just halt
until something had sorted itself out.
And sometimes you have an editor on the other end that's like,
hey, where the hell's my files?
So that was a little sketch, but it seems like it's either
we've all just sort of internalized a way to use NextCloud
that hasn't caused this problem,
or it's just, it hasn't been an issue recently.
The nice part is we just don't have to think about it because, you know, we just pay for the usage on spaces.
So someone wants to drop in a giant file.
Doesn't matter at all.
There's no quota.
There's no limit.
It pretty much just works.
And you still actually have access behind the scenes to the system.
So you can go ask in the NextCloud database and go find out what the long random string name of the file that you want is.
And then you can access it directly in Spaces too and take advantage of their CDN system if you want to.
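As a sketch of what "asking the database" looks like, assuming Nextcloud's default oc_ table prefix, a hypothetical connection string, and a made-up search term; with S3-style primary storage the object key is derived from the file ID:

```
# look up the numeric file id in Nextcloud's file cache
psql "$NEXTCLOUD_DB_URI" -c \
  "SELECT fileid, path FROM oc_filecache WHERE path LIKE '%raw-audio%';"

# the corresponding object in Spaces is then named urn:oid:<fileid>
s3cmd info s3://example-space/urn:oid:123456
```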
Yeah, which is pretty cool.
And it feels pretty performant as far as we've noticed.
You know, the files come slamming down on our machines.
And I generally just use the NextCloud client itself. I think that
tends to work best for me. But something else that is nice is the web UI can be somewhat customized.
So we had some fun with it and we have some branding on the main login page and all the
kind of little things. We have, you know, our own domain name for it that we all have
memorized and all that stuff. So there's other niceties that you just don't get with Dropbox. And if you had a lot of storage and you weren't using spaces like we were,
say maybe you had a system that was on your LAN with terabytes of storage and you didn't need it
on a centralized cloud server, that $305 a month that we pay wouldn't even be an issue for you.
Right. And we should note that some of that cost, too, is because we are using Spaces. We've also got backups,
not our own backups, but DigitalOcean-provided backups.
Those are an extra cost.
And we chose to rely on their managed database.
So it's a Postgres server.
They're running.
They backup.
They upgrade.
They do the security maintenance for it.
That's also one of the major costs that we're paying.
Yes.
That box is a 2-gig of RAM box with one virtual CPU.
And I don't really think we've suffered performance penalties for that.
I was a little concerned that maybe it'd be too slow and we'd have to upgrade it later.
But it seems like it's been fine, don't you think?
Yeah.
The best part to me is that, you know, I have had to muck around with a few things with
this setup.
The database has never been one.
But if you didn't want to have someone else host it, that could just be another container
that we were running on the same box, and that would work fine too.
Yeah, we wanted this to get as close to Dropbox as possible
while not being Dropbox.
And so that meant spaces, and that meant a managed database server.
How have the upgrades been?
Because it seems like every now and then they've required manual intervention.
Overall, not too bad.
One thing that's nice about doing it all in Docker
is you get a little bit more of a consistent interface
just with the image style approach.
We're not just upgrading a system in place.
We're actually just going and fetching a whole new container
and starting that one up.
Where it can sometimes get tricky
is you do occasionally need database migrations.
That's the biggest thing,
is if something in the config has to change
or the database schemas need to change. That said, you know, I think with the Nextcloud community, we've been pretty pleased with, you know, the support for the containers. We're actually
using the ones provided by them right now. And there's usually instructions there to say like,
oh yeah, we made this breaking change. Here's what you need to do, if anything.
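A rough sketch of that flow with Compose, assuming a service named nextcloud running the official image; the service name is a placeholder:

```
# grab the newer image and recreate the container with it
docker-compose pull nextcloud
docker-compose up -d nextcloud

# check whether Nextcloud wants a migration, then run it if so
docker-compose exec -u www-data nextcloud php occ status
docker-compose exec -u www-data nextcloud php occ upgrade
```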
I have, over time, opted not to use Nextcloud for really large media file transfer, like files that are 100 megs and above, mostly because I don't feel like it has super great detection of small changes and just syncing those small changes when you're working a lot. The way I work sometimes is I actually work out of my sync directory for
some things because I never know what machine I'm going to be on and I want my state to be
on every machine. So I actually work from the Nextcloud directory. And that's a non-issue
when you're using Dropbox. Just it works fine. It can do delta transfers. It detects small changes.
It's no big deal. On Nextcloud, it seems to be a little bit more of a traditional crawling the file, analyzing it, and then uploading the entire file if it changes.
And that puts a bit of a strain on a box if you're maybe working on that folder all day.
You know, I think syncing in general has probably been our biggest complaint.
Not that it's bad.
Not that it doesn't work.
And I think it would work really well if we had smaller files. You know, if we were just managing some text files,
stuff like that, small JSON blobs we were sharing around. But with the big media files, and then
especially in diverse geographic locations, everyone uploading at the same time. I know I've
been pretty annoyed in the studio sometimes where, you know, Cheese was just finishing up some design
work. He's got a cool new design that he's uploading to the server, or maybe he's uploading
a new graphic
for the live stream,
which are always awesome.
Thank you, Cheese.
But for whatever reason,
the Nextcloud client here
insists on downloading those changes first
before it will even start thinking
about pushing up the files
I'm trying to send to Drew.
And that's just,
that's not the behavior I want.
Yeah, so what I opted to do
for that kind of stuff
is use something else.
And I didn't want to get sucked back into the Dropbox situation.
I decided to double down again on SyncThing, something I've tried in the past when it was brand new.
Oh, we've talked about that on JB for years.
And in the past with SyncThing, I'd have two things happen that eventually would just drive me nuts and I'd quit using it. Number one was I wouldn't be able to discover my other sync thing nodes,
even if I was on the same LAN. And I just found that so frustrating because there's not really
a way in sync thing to say, maybe there is at the configuration file, but in the web UI,
there's not really a way to say sync to this IP, right? You sync to a
machine ID, which looks like a Bitcoin address. It's not like 192.168.0.5 and now it's syncing.
It's a big old hash. And so when it quits discovering it after that, it's like you got
no recourse. And so that would drive me crazy. And I would quit using it. And then the other thing that was just unforgivable for me was, say, it was just going along, doing its little sync thing, maybe for a few months, just syncing right along.
And then it would just stop.
And then it would never connect back up and it would never sync.
And I wouldn't know that was happening because there was no immediate error messages.
And my machines just would fall out of sync. And that just drove me nuts. Now, keep in mind, this is a couple of
years ago. This is like when BitTorrent sync was still a thing. And I kind of hit the pause button
and said, I'm going to come back and visit sync thing in a couple of years and see how it's doing.
And boy, am I glad that I did, because I am just using the hell out of it now. I use SyncThing
more and more because the beautiful, beautiful thing about SyncThing is it's peer-to-peer.
No cloud service at all.
So your storage is whatever you got in disk space on your machine.
So if you've got machines with terabytes of storage, then you've got terabytes of syncing.
The other thing I absolutely love about it is it essentially creates a network file system for me that doesn't require any fancy VPN or crazy-ass file system.
In fact, it can be across multiple OSs using multiple different file systems, and they just sync.
Because with SyncThing, you can sync any folder, much like you can with the NextCloud client if you set it up right, but not like Dropbox, where Dropbox only syncs the Dropbox folder, right? SyncThing syncs anything. It has its own little world, yeah. Yeah. And so I can point at maybe a media folder
on a server. I can point at an editing working directory over here. It just syncs. And you put
something in one place and it just shows up in the other place, even if it's some five directories deep folder. And what I realized is this was
a brilliant solution for clients of mine that look to move media files around.
So I've launched this chrislas.com consulting arm, which is a part of the Chris LAS empire.
I'll take over the world with my podcasting consulting. Anyways, and I realized that I would
get people all set up with a lot of things going into a podcast, but I wouldn't have a solution for
how to actually move files from where they're recorded to somebody who's editing or transfer
stuff back and forth. And we experimented with Firefox send for a little while. And that was frustrating for people. And
on people that run only Linux, I had them try Magic Wormhole for a bit, but that was too clunky
because it requires dropping to the command line. And then one day it just hit me. SyncThing, you idiot. Just use SyncThing. And there's like a SyncThing tray application you can get for Windows. And there's a SyncThing bundle you can get for the Mac that takes SyncThing
and makes it usable for those platforms.
And then, of course, it's just available on Linux as a package.
And it works fantastic.
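On the Linux side, getting a node going is roughly this; a sketch assuming the packaged systemd user unit, with the web UI defaulting to localhost:8384:

```
# Syncthing is a normal package on Arch and most distros
sudo pacman -S syncthing

# run it as a per-user service so it starts with your session
systemctl --user enable --now syncthing.service

# print this machine's device ID, the long hash you exchange with the other side
syncthing --device-id
```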
I set them up with a little SyncThing instance,
and then I point it at a directory on their machine,
and I just say, when you need to send something to an editor, put it in this folder.
And then I have their editor on the other end, install the bundle, set up a directory and give
me the ID. The two SyncThing instances discover each other over the SyncThing network. They have a little
broadcast system they use and a little relay system. And then they connect peer to peer
and they begin syncing. And it just works now. So far, none of them, nor my own, have lost sync. None of them have failed to discover each other. It's worked every
single time. And I've also started using it as a way to move home media files around. I've got a
Blu-ray player at the studio, or I've got a DVD or something that I've ripped to my hard drive,
and I've made a big old fat MKV out of it like I do.
And I want to get that to the RV.
Well, the RV has a super slow connection.
It's on MiFi.
And I don't want to sneaker net it because that's a pain in the butt to hook up a drive, copy it over, and then you get home and get to my Raspberry Pi and plug it in and then SSH in and copy the file.
It's just – who's got time for that?
Nobody.
I just point SyncThing at the TV or movie folders that I want to have currently at the RV.
Like right now, I recently got into the television show Alone.
Have you heard of this?
No.
You should watch Alone because they film a lot of it in the Pacific Northwest.
So it's pretty great.
Oh.
It's like 10 contestants go out to Vancouver Island
and have no contact with society,
have no assistance
except for the 10 things
they can bring in their bag.
And then whoever lasts the longest
wins $500,000.
It's silly,
but it's a lot of fun
and I watch it with the family
and I just decided,
you know, why sit here
and move these files all around when I could just sync them using SyncThing?
And been able to just set that up, point it at my Raspberry Pi, and it just goes.
And when a new episode comes out and I have it in the folder on my server, it just over time as, you know, however long it takes, trickles down to the RV.
And SyncThing is pretty good about not abusing your bandwidth, about noticing if there's other things going on and sort of
scaling back. So I don't even really notice a hit. I maybe every now and then have gone in and
paused SyncThing for a bit. So that way I have just a little bit more of the kilobits, but it works pretty well.
All right, Wes, it's time to check in on that Arch server upgrade.
How's it doing over there?
Kernel's upgraded, DKMS has been rebuilt.
So we're ready to reboot.
Yeah, that's reboot time.
All right, ping is going and reboot is away.
So Wes, if you were going to set yourself up a home server,
like an actual home server,
do you think you'd go with something as complicated as we've done here?
Would you go with Arch? Would you go with Nextcloud?
What would you do?
Ooh, good question.
You know, honestly, I've been pretty impressed. The Arch system has really not been as painful as I thought that it might
when we first discussed this kind of crazy
plan. Keeping things simple,
not overloading the system, and mostly
just running everything containerized,
that has helped a lot.
I would be tempted, now that it's post-20.04.
I'd be very tempted for my personal stuff
if I didn't need any
customization or if I wasn't trying to do
something really fancy in the Arch setup, you know, where the minimalism and simplicity helped, I would probably just install 20.04.
Just especially because the ZFS stuff is taken care of for you.
Arch has been the best besides Ubuntu, I think, for us by far in that respect.
But you just can't beat Canonical building it for you.
Yeah.
And I think the math has changed here a little bit. In the past, Arch was how we got access to the most recent, freshest packages.
And you and I, doing this show and just kind of being the guys we are, that's what we like.
We like getting that.
We like getting the latest software.
Give me the beta.
Whatever you translated to Rust, I just want to run it.
But do you really need that anymore?
Is that really how you really need that anymore? Is that really
how you need to get software? I mean, with everything either in a container or in a flat
pack or a snap, like, do you really need to be running a rolling distro to get the latest software?
That's kind of one of the things I think we're also trying to test out here is,
is there still a benefit to it? I think it depends too on how high touch a system is.
And this server is sort of in the middle. There's some weeks where we're doing stuff
we're setting up a project that we're playing with
and I'm kind of glad that it's Arch because it's
so simple. We understand the system because
there's not a bunch of stuff configured in the background that we
didn't set up. But
the flip side of that is if it's just a
box that you leave in the corner, you set it up once
you never want to touch it again.
Well, that's not where Arch shines. Yeah, very true. All right. Well, while you're doing that,
because I know it's going to take a minute, let's sneak in a little housekeeping.
I don't have a whole bunch to update everybody on the housekeeping, but I want to give a plug
for the old Self-Hosted podcast. Check it out at selfhosted.show. I say old now because we've officially crossed the 20 episode mark. Thank you. Thank you. And episode 22, Slow Cooked Servers: I give everyone an update on how my Raspberry Pi servers are doing in the intense Texas heat. I will tell you now that yesterday, my server booth was up to 104 degrees Fahrenheit where those Raspberry Pis are sitting.
Oh, man.
So I talk about what's going on there, what I'm doing to mitigate that, and what I think is going to fail first.
And how I monitor all of this is covered in selfhosted.show slash 22.
Plus, Alex goes into this super cool self-hosted image recognition AI tool that you can throw at like a security camera.
And it can recognize cars, dogs, all these different objects in the video feed locally on your LAN without having to go in any third-party service and then generate alerts based on that. But there's some really other cool things it does, including how
it records the video and all of that to save you a ton of storage. So Alex can run 4K cameras
and not burn through disk like a madman. It's a really neat system. So we talk about both those
things in a little bit more in selfhosted.show slash 22.
I just hope by the time I make it back to the Pacific Northwest, I haven't burned through every Raspberry Pi in the garage.
So check that out, selfhosted.show.
Also, please do join us live.
We do this show on Tuesdays.
We'd love to have you here.
Now, we won't be live on the Tuesday that this comes out, will we?
But normally we are live at jblive.tv. You can get that at jupiterbroadcasting.com slash calendar.
And that's why we have the calendar, right? Is that most of the time we remember to update it.
So you'll know in advance if we'll be here or we won't.
That's the theory. I talk about the live show all the time. I do not proportionally talk about how
much we appreciate everybody downloading the show, sharing it with friends and that kind of stuff. That's obviously the vast, vast, vast majority of the audience. That's what makes podcasts tick. You don't see a bunch of podcast ads out there because the only marketing that works for podcasts
is the word of mouth. So thank you everybody who does that too.
It's you, the audience.
That's right.
You're the ones that make this possible.
So we will be back at our regularly scheduled live streaming time, I think the week of the 21st.
If you want to join us, July 21st at jblive.tv at noon Pacific. And then, of course,
one more mention for that LUP plug, which is happening right now as we record this episode.
You never know what's going to happen during a LUP plug. And that's Sundays, same exact time,
the same Mumble server, same IRC server, just different rooms. So check all that out.
You've got two days to try. And the LUP plug is a great chance if you just, you know,
you've got a new microphone,
you've been wanting to join the virtual lug for a while, try it out on Sunday.
If everything goes well, then come join us on Tuesday.
Yeah, it's a good way to kind of warm up too.
Maybe if you're a little shy or something like that.
Right, start talking with some of our wonderful Mumble regulars.
Yep, totally understandable and a great way to do it.
All right.
That's the housekeeping for this week.
And that puts you back in the hot seat, Wes.
How's that server reboot going?
You say hot seat. My seat is nice and cool over here.
All of our Docker containers back up and running.
ZFS remounted, no issues at all, and we're on the latest LTS kernel.
You're kidding me. All like that?
Boom. Arch for the win.
Wow. Not only was that our longest time between updates,
but that's got to be one of the smoothest. I'm very impressed. That was a non-issue because essentially everything in the stack got upgraded. Now, I will say I did have to go build one set of
packages by hand just because, all right, I was previously singing the praises of yay, and I do really like it, but there's like a, the way the package dependencies work between ZFS
utils and ZFS DKMS, you have to install both of them at the same time. Otherwise, you break a
dependency on the previous version. So I just did that by hand in the shell with makepkg,
which was also pretty simple. Hmm. All right, I'll allow it. That's borderline though, Wes Payne.
It's borderline. I know. I know.
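For the curious, the by-hand dance looks something like this, assuming the zfs-utils and zfs-dkms AUR packages; this is a sketch of the general pattern, not Wes's literal commands:

```
git clone https://aur.archlinux.org/zfs-utils.git
git clone https://aur.archlinux.org/zfs-dkms.git

# build zfs-utils normally; build zfs-dkms with -d since its exact-version
# dependency on zfs-utils isn't installed yet
(cd zfs-utils && makepkg -s)
(cd zfs-dkms && makepkg -d)

# install both in one pacman transaction so the version lock never breaks
sudo pacman -U zfs-utils/*.pkg.tar.* zfs-dkms/*.pkg.tar.*
```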
All right. Moving on to the things that folks asked us to follow up on. Thoughts on our ThinkPads, how are they running, anything like that. And Chaz wrote in and says if we could also talk about maybe whether we would spec them differently. He says, hey, just a quick question. I'm in the market for a new laptop and thinking about getting one spec'd out the same way your T480s are.
As you listened to our episode, which was forever ago now, like almost,
geez, Wes, was it almost two years ago that we got the ThinkPads?
Wow. That's crazy.
Is that right?
Yeah.
It still kind of feels like a new laptop. So there's that. He says,
were those the models that had integrated graphics or did they have the MX150 card?
And are there any changes that you would make to their configuration if you could?
Answering the graphics question, they are the Intel integrated graphics.
We did that on purpose to just keep things very simple because these were work machines.
I will say, though, that, you know, those integrated graphics, the modern ones, really not too bad of a slouch.
Like, I'm not playing a ton of games, but 3D shooters, all kinds of stuff on there,
that worked surprisingly fine.
I mean, okay, I turned down some of the graphics settings,
but it was playable without a problem, and then I forgot I'd ever turned those down.
So I was pretty impressed.
On how are they running?
Now, there's not really much to say here.
They just run.
They run great. What a great Linux machine, really.
I would absolutely fully recommend them.
Now, I'm curious, Wes,
would you change the screen
and go from 1080p to 4K in 2020?
Ooh, that's a good question.
You know, the screen is one of the,
let's call it limiting factors on these bad boys.
And you recently spent a little time
with the new XPS 13,
which has incredible screen, just amazing.
Yeah, I would like a better screen.
It's not so bad, especially, you know,
you're going to put it on a dock.
You've already got, you know, monitors on your desk, whatever.
And it's fine.
It works on the plane.
It works at a coffee shop.
Like you can read it.
I would want a nicer screen though.
Yeah, probably.
If it was a very high end 1080p panel, maybe we would feel
differently, but it's just not a great screen. It'd be terrible to like, if you're going to try
to do any serious graphics work where you needed, you know, good colors or brightness. Like I take
a picture on my phone, I look at it there and then I look at the same one in telegram or something
on the think pad. And it's just a joke. And on top of that, it's 14 inches. So I'm not positive 4K would be
the right resolution unless you were pixel doubling everything. I think a 2K screen on that
14 inch laptop would be the absolute sweet spot, but they don't offer it. So in light of the fact
that it's kind of a crappy 1080p panel or a nice 4K panel, at this point, I'd go with the 4K panel.
I don't think I would have a year ago, but I think I would now.
One thing I wouldn't change, keep that 32 gigs of RAM.
Maybe it's not essential, but it's just been so nice,
especially when you're doing serious work,
you've got a lot of stuff going on in the system,
or you're just trying to run a bunch of virtual machines.
Or you're like Wes and you just run out of RAM for a week
and your whole OS is in RAM
and everything you do is in RAM.
It really enables my bad behavior.
It's great.
There is a niggling issue still, which is shameful that it hasn't been fixed fully,
with how the 480 handles thermal throttling under Linux.
And it's an unfortunate issue because if you fix it with the workarounds,
the laptop runs noticeably warmer and the fans run more often and it becomes annoying.
But if you don't fix it, you're robbed of performance. You got two bad options when
it comes to the T480 and thermal throttling. I don't know if with later revisions that's
been solved, but I can tell you that the firmware updates that we were told would solve it did not
solve it. And we can test it. We can, we can verifiably prove it. I'm trying to remember what
we did semi-recently, Wes. You remember that encoding test we did where, like, the 2015 MacBook at first kicked the butt of our T480s, just, like, shamed them on an encoding test.
Oh yeah.
And it came down to the thermal throttling on the T480 was
kicking in too aggressively and reducing the overall speed. And so that older MacBook,
until it heated up, was winning. And then once the MacBook started heating up. Once they were
both throttling, right? Yeah. The super aggressive throttling kicked in on the MacBook and then the
T480 pulled ahead. But meanwhile, the XPS 13 never missed a beat. It remained consistently
performant in our test. That was unfortunate because, you know, here we are like a couple
of jerks with our T480s and they're getting stomped by a MacBook from 2015. Embarrassing.
Yeah. So there are fixes out there. And that encoding test we did was after my firmware fix
was installed. And I was still disappointed with the arrangement they'd come to. And I don't know if they were just
adjusting things and maybe it'll get better. But yeah, the firmware has been kind of annoying in
general. You know, there's also, we ran into issues with the Thunderbolt controller where
you had to do an upgrade. Otherwise, you might eventually break your system. The nice part about
that is that, you know, it works splendidly with fwupd.
That's true.
Good point.
It does.
It's really easy to get the firmware updates.
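For anyone who hasn't used it, the fwupd flow is short; this is the generic pattern rather than anything ThinkPad-specific:

```
# refresh metadata from the LVFS, see what applies to this machine, then apply
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
```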
So, Drew, you ended up with one of these T480s, and I think you got the same unit, didn't you?
Because you came on a little bit after Wes and I, and I don't remember if you got the same exact model or not. Yeah, I've got the Core i7 T480, and it's been a pretty good machine for me. I do have a couple of small things that
I have noticed about it. With the thermal throttling, I've noticed that it has been
better when I've been doing pro audio work than it had been in the past, but it's still not great.
I've always been a little suspicious of putting something as hot as an i7 in a laptop, and this has been kind of proving me right. Now, one thing that I did do that helped immensely is I replaced all of the thermal paste on the CPU in the ThinkPad itself with some higher quality thermal paste, and that did improve things quite a bit. And that
was even before the firmware fix came out. That might be why I'm having a little bit better luck
than you are with my thermal throttling fix is just they use cheap paste in the stock configuration.
Now, the biggest problem that I have is I can't have the laptop in the booth with me while I'm recording.
Because even when it's running at its lowest possible heat and everything else, the fan is still too loud for me.
As a pro audio engineer, it just annoys me to no end how loud that fan is at idle.
And I actually keep it outside of my booth and run an external monitor and keyboard and a Bluetooth mouse just to avoid that issue entirely.
Good man.
Like if I had to do it over again, I think I'd go with something a little quieter, you know,
maybe something that's even fanless. Because my main thing with this is recording and then simple editing.
I don't need a super high powered machine for that.
Most of my actual renders and stuff I can do on higher powered machines.
I think that's my one big thing is just it's kind of noisy.
Yep.
I am talking to you right now on a Pinebook because of that very issue: the Pinebook doesn't make noise, and it's slow, and it's not nearly as nice to use, I would say, as my ThinkPad, just because of the overall performance difference.
But it's silent.
And, I mean, right now my microphone is hovering over the keyboard.
It's right above the keyboard.
So if the fans were going
crazy in my ThinkPad or just making constant noise, it would be getting picked up. And that
has been a problem. I don't find it to be a huge issue for me unless I'm really in a kind of a
cramped setup where my microphone's right next to the laptop. I have kind of compensated too for
some of the shortcomings of the ThinkPad by using an eGPU as well. So I think
that's extended the life of the ThinkPad for me because I have a eGPU with, it's an AMD graphics
card and it's not a great one anymore. I want to say it's an AMD like 580. I could be wrong.
Might be even less. Might be like a cheap 570 or something. But I can plug that in and it acts as a dock.
It's not a dock technically. I got the Mantis eGPU case and it has gigabit ethernet. It has a
SATA adapter too, so you can have storage in the dock. And then it has a full array of ports,
USB and all that kind of stuff.
And so I plug that in just as a matter of course, and then I selectively execute applications on that GPU.
I launch a terminal, I set the environment variable to use GPU 1 instead of 0, because
0 is the Intel GPU, and then I launch the application.
And any application I launch in that terminal session, once that environment variable is
set, launches on the eGPU. This is so much better than how Windows and Mac OS handle eGPUs.
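The environment variable in question here is presumably Mesa's DRI_PRIME, given the AMD card; a sketch of the pattern described, with the application name just an example:

```
# 0 is the integrated Intel GPU, 1 is the offload device (the eGPU here)
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"   # should report the AMD card

# run a single application on the eGPU
DRI_PRIME=1 kdenlive

# or export it in a terminal so everything launched from it uses the eGPU
export DRI_PRIME=1
```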
And it's kind of what System76 is working towards. We talked to them recently about it. And it's
the way to go. I have a big GPU for when I need it, and then I just otherwise run everything off
of the Intel GPU. And I have found that to be a pretty solid setup.
And it remains my primary workflow for the ThinkPad at home today.
And then while I'm in the studio, I have two different Thunderbolt docks, which are ridiculously stupid expensive, and I hate it.
Wes, you see one there, right?
What is the brand on that one?
I can't even remember. OWC. Oh, is see one there, right? What is the brand on that one? I can't even remember.
OWC. Oh, is that Otherworld Computing? Yes, that's where I got it from. I got it from
Otherworld Computing because they seem to make the best dock with the most port options and all that
stuff. And so I have one in the studio and I have one up at my desk upstairs and I just move between
them. I have my monitors hooked up to that now,
and I have my ethernet hooked up to that. And that works really well for me. And I'm kind of just,
in a way, getting myself ready for a pure USB-C lifestyle ahead of time. I don't think I'll get
another ThinkPad. I think my next machine will be probably something different, but I'm not at that
point yet. And that's just because I want to try other things. Maybe there will be a ThinkPad that catches my attention.
That new Oryx Pro from System76
is pretty high on my list right now.
I have to be pretty honest.
But I wanted to get to a lightning round
just to cover a couple more things.
Joshua wrote in and he wanted a recap
on what the hell's going on with the Debian project.
Dang it.
Because I had raised some concerns on Linux Action News.
Well, the Debian project leader elections wrapped up. Jonathan Carter won. He has been the Debian project leader since April, I believe. I've noted that recently Debian chatter seems to be, well, A, it's spread across many, many, many lists. I'll link you the complete index.
Can you believe that list, Wes?
Ooh. But still, with all those
mailing lists, it seems that certain conversations may be happening in other places, not in the
public from what I can tell. But what I have noticed is that Jonathan Carter himself has a
blog and it doesn't get updated all the time, but he does a pretty good job of posting over there.
So if you want to see where he's at,
you can check out his blog.
I'll have that linked in the show notes.
And he has some good stuff in there,
including what's going on with the Debian project,
little mini DebConf updates and whatnots are in there.
And then additionally, speaking of conferences,
DebConf 20 this year has moved to online,
which everything has.
And it's happening August 23rd through the 29th.
And it's going to be a shorter conference. They're going to do DebCamp and DebConf,
but maybe more of us will have a chance to attend now that it's online. The flip side is these
online conferences are kind of a shadow of the in-person event. However, more of us can attend.
So it's also kind of great. So I'm hoping we'll
be able to attend. They currently have the call for papers open. So we'll have a link to that in
the show notes too. And DebConf20 will be going online. And I hope from that, we'll be getting
a more clear picture of how Debian is doing. I think one of the things that will really tell us
is when there's a big decision that has to get made, something that really affects the project and requires them coming together and having a civil discourse on how
to proceed.
That will be the test.
But Jonathan's leadership so far seems to be steady, open.
And from what I can tell so far, it seems like he's doing
a pretty good job.
And I'll link to some of those things that you can read more about. Also, Joshua asked if we ever ended up installing those minimal
distros that we used in our small distro bake-off. And I'm happy to report that, yep, my VM distro
is now Sparky. I just have a Linux VM on every machine, even if it's a Linux box, that is sort
of like a playground where I can go in and
try stuff. In the past, I had used just my machine to just trash it, install stuff for the show.
Then when I switched to Fedora for a while, I discovered Fedora Toolbox, which is awesome.
And it was a game changer for how I decided to experiment with software.
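For anyone who hasn't tried it, the whole Toolbox workflow is just a couple of commands. Here's a quick sketch, assuming a stock Fedora install with the toolbox package; the package and the container name are just examples (the default name follows your Fedora release).

    # Create a container that tracks your Fedora release, then hop into it
    toolbox create
    toolbox enter

    # Inside the toolbox you can install and break things without touching the host
    sudo dnf install htop

    # Back on the host: list containers, or remove one when you're done with it
    toolbox list
    toolbox rm fedora-toolbox-32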
When I moved away from Fedora, I didn't want to go back to just trashing up my system.
And so I went with a dedicated tester VM. And I like that Sparky's Debian based for this
very reason, because it gives me kind of a, I don't know, how would you describe it? Because
it's not... It's a leg up. Yeah, it's not Ubuntu, it's not Arch. I know it's like it's slightly
different, but it's not too far off the mainstream. Does that make sense? It's like, it just, it's a
good test distro for me.
Yeah. It has some of its own uniqueness, but at the end of the day,
there's apt underneath and you know what you're doing on the command line.
Another question that came in, did we ever end up installing i3 on the streaming rig?
Like we said, we'd consider it. And I don't think we tried it on the streaming rig,
but we did try i3 in the studio for a bit.
And I think we did a live stream of me experimenting with Regolith for a little bit.
And then we thought about putting it on the Reaper machine.
Ultimately, on the streaming machine, we've stuck with Plasma and KWin rules.
And then on the recording machine, we've stuck with XFCE.
But that's just because we haven't reloaded that box and it works.
Right.
I don't think we would go XFCE again.
Maybe. It hasn't been bad,
but I think it is missing a couple niceties
that would go a long way.
I would be very curious to see how that new Pop Shell did
on the recording machine
because we've got a lot of tiled windows,
but some of the applications we use,
especially like software to control a mixer,
sort of random apps like that,
well, they don't always play nice.
They've got weird floated windows.
They don't perfectly fit with a tiling thing.
But I think maybe a hybrid environment could find the right balance.
When Wes said that XFCE hasn't been perfect for us,
that's been kind of one of the problems is we run these weird apps
like X32 Edit by Behringer for our mixer.
It's just not a common Linux app.
No, and it's not using regular toolkits or anything.
Yeah, it's very important to us.
It's very important to us.
But at the same time,
it's like most Linux desktop environments
don't know what to do with the window of it.
They just, they can't figure it out.
It doesn't have any traditional borders.
It's just totally perplexing.
And then additionally,
when you have multiple monitors under XFCE,
it seems like some of the applications we run, like Reaper, for example, which again,
love Reaper, but it's weird in this one way in that it just opens up dialog windows on what
seems random, or maybe not random. It's just whatever monitor you're not presently using.
It's almost guaranteed to be the monitor that you don't think it should be on.
I know. And then you're sitting there like, well, why is this application not responding?
What's going on?
Did it freeze?
What's happening?
Yes.
Yes.
And then you look, oh, oh, there's a yes/no dialog on another screen somewhere.
And that's where XFCE hasn't been ideal.
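We haven't actually scripted around it, but as a rough sketch of a workaround, something like wmctrl could yank a stray dialog back onto the monitor you're actually looking at. This assumes wmctrl is installed, and the window title here is purely hypothetical:

    # List window titles so you can find the stray dialog
    wmctrl -l

    # Move the window whose title matches back to the top-left of the primary monitor.
    # -e takes gravity,x,y,width,height; -1 leaves width and height unchanged.
    wmctrl -r "REAPER Query" -e 0,0,0,-1,-1

That's only a band-aid, though; a desktop that places dialogs sensibly in the first place is the real fix.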
And we kind of speculated that i3 might be better, but we don't know yet.
And we will have a link to the i3 live stream in the show notes, as always, if you'd like to catch that, if you missed it before.
Wes, where do they find those handy show notes?
Linuxunplugged.com slash 362.
You got it.
You got it.
All right.
So that's batch one.
We want to do this from time to time.
We like following up on the stuff that we talked about because I think some of the most valuable reviews are the ones after you've used them for a while.
So if you've got other things you'd like us to follow up on in a future episode, send them in. Go to linuxunplugged.com slash contact.
We need your help.
That's right. Or you could tweet me. That's how this whole thing kicked off. I'm at Chris LAS.
Wes, what about you?
At Wes Payne.
Drew, Cheese, you guys, you online, you on the Twitters?
At Cheese Bacon.
I'm at Drew of Doom.
Holy smokes. How about that? Oh yeah, I follow you. I knew that.
Well, you can also follow this show at Linux Unplugged and the network at Jupiter Signal.
Now, we will be back soon.
Actually, by the time you're hearing this, we'll be back at our regular time.
So you can just join us Tuesday or go to linuxunplugged.com slash subscribe and get it every single week.
See you next Tuesday! We'll be right back. Oh, I ran out during the music and turned on the AC again
because this place I'm at doesn't run on the weekends,
so it's 87 degrees in the studio right now.
Chris, how in the world did you manage to last an hour and a half like that?
Well, he's not wearing many clothes right now.
They got an override mode on the old air conditioning,
but it only lasts for an hour,
so it resets to whatever the schedule is after an hour.
So wait, Chris, were you melting for the last 30 minutes?
Oh, at least, because we took like 45 minutes, you know, before we got started, just chatting ultimate pre-show. Yeah, it was pretty much just right at the top of the show.
I'm like, oh, hey guys. Um, just a little, uh, visual: my back is like Niagara Falls right now.
Let's just put it that way.
Oh, Jesus, Chris.