LINUX Unplugged - 411: The Best of Both OSs
Episode Date: June 23, 2021
Is it possible to have Arch's best feature on other Linux distros? We attempt it and report our findings. Plus our reaction to NVIDIA's beta Wayland support. Is this the milestone we've been waiting for?
Transcript
I was going to go on a whole Rocky Linux rant today, but I decided not to put it in the show.
But my thoughts are, and I wish them all the best, and I think it's really exciting,
but it feels like the community has already awarded them the crown without really them shipping yet.
I mean, they just shipped their final version.
They haven't become a – they're not going to be a truly open, free, charitable organization.
They're going to be a for-profit organization.
And they don't necessarily have the experience
that Alma Linux has behind it,
who's already been building a CentOS clone for a decade
and has already shipped a stable version
and has now launched additional products on top of that
and has a really easy support contract
and has a true nonprofit organization behind it.
And I feel like we look at this and I'm like,
yeah, Rocky Linux looks really great.
It's got a good solid team.
It's got an excited community.
But when you look at the rubber meeting the road when it comes to traditional CentOS
replacements, I feel like the community has called it too soon.
They've called it for Rocky Linux when Alma Linux really has all the best features and
seems to be the most viable contender.
Yeah, it seems like maybe there was a presumption here that Rocky seemed like it was more of the community effort at the start,
perhaps because Cloud Linux was so involved in getting Alma Linux off the ground.
But yeah, you're right.
I mean, one, it's just crazy early days for either of these regardless.
But, you know, it takes time to actually figure out
which one is going to be more of the community-minded distro at the end of the day.
How many times year after year do they hit their promises?
And AlmaLinux has checked every box so far.
And I don't, you know, I don't have a dog in this hunt,
but it seems like the cult of personality has won here.
And because a former co-founder of CentOS is involved with Rocky Linux,
everyone's just decided, well, that's the winner.
Welcome back to your weekly Linux talk show. My name is Chris.
My name is Wes.
Hello, Wes. Coming up on the show today, we decided to attempt something they say can't be done.
We want our beer and we want to drink it too.
I'll tell you about that.
But first, this episode is brought to you by A Cloud Guru, the leader in hands-on learning.
The only way to learn a new skill is by doing.
That's why ACG provides hands-on labs, cloud Linux servers, and much more.
So get your hands cloudy at acloudguru.com.
Is it possible to have a workstation with an Ubuntu LTS base,
but with a few select fresh native userland apps?
Can you bring Arch's coveted AUR to Ubuntu?
Is it possible?
This week, I challenged Wes to get the AUR working on Ubuntu.
And to my surprise, he did it.
We'll explain more. We'll tell you where it falls short and where it's going
and all of the exciting things you're going to be able to do with it
and some other ideas a little outside the box
that could work on all kinds of distributions.
But first, before the show goes on,
we've got to say time-appropriate greetings to our virtual LUG.
Hello, Mumble Room.
Hello.
Hello.
Hello.
Hello.
Hello, everybody.
Man, is it good to see you? I've been looking forward to chatting with you all week. And before we get into the news, there is something we
have to take care of. It's been a little while.
Wes Payne, are you ready for an Arch server update?
Oh, boy.
Am I ever ready for these fully?
I don't know.
We've got 206 packages today.
That's like three gigs installed.
But, you know, it's Arch, so it's only like a net 50 MB upgrade.
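For the curious, the moving parts here are just stock pacman tooling. A minimal sketch, assuming checkupdates from the pacman-contrib package is installed; the size figures quoted come from pacman's own pre-transaction summary:

    checkupdates | wc -l   # count pending packages without touching the live sync databases
    sudo pacman -Syu       # full system upgrade; pacman prints Total Download Size,
                           # Total Installed Size, and Net Upgrade Size before you confirm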
I love that.
So for those of you new to the show, from time to time,
we upgrade our production server live on the air
as part of our commitment to you
to do something you should never do,
and that is run Arch on your production server.
And to stay accountable,
as long as we run this server,
we will update it,
and if it breaks right here on air,
we'll have to admit it to you.
Wes, are you ready to pull the trigger?
Standing by.
Okay.
Do we have WireGuard in that update list?
WireGuard tools, yes, we do.
Oh, no.
And a new stable kernel minor version.
I feel like we always have kernel updates,
even though we went with the LTS version.
Yeah, I think they're just smaller updates.
Hopefully that means things break less,
but really, I think what we were hoping for
was just, like, fewer package updates at that level
and I'm not sure how much we've gotten
that. Alright,
you want to pull the trigger? Linux LTS 5.10
here we come.
I swear we have all of these
dang kernel updates, even though
we went with the LTS. So we'll check back in.
That's got to download the packages and start installing.
You will holler, Wes, if anything goes sideways.
Will do.
Otherwise, we'll find out how it went.
And Wes, I have fantastic news.
I have like best news of the week news that I have been waiting to tell you until we are live on air.
Are you ready for that?
What do you have today?
Bombshell.
So you remember last week we announced that we're going to do a road trip to Salt Lake City and to Denver.
And we're going to do a meetup in both those locations.
We're going to do a team reunion.
That's all still going.
Like nothing changes there.
However, now I'm happy to say that Linode is on board to help finance the trip and make it possible from a technical standpoint.
And so we're taking it up a notch
and we're going to call it Linode on the Road.
It's the great road trip at a scale we've never done before.
Now that Linode is helping us,
we're going to create new exclusive road shows,
interviews with the crew, the places we visit,
the stories we'll gather when we get
together. We're going to have independent releases separate of this show that are made possible by
Linode. We'll have details on that very soon. I'm really excited to produce more content and
have something that's unique. And we'll have some conversations on there that you won't want to
miss and some old friends you haven't heard from in a little while. We're making plans right now, though, for two meetups in August. The first meetup will be in
Salt Lake City on August 7th. The second meetup will be in Denver on August 20th.
Not all the details are firm yet, so if you have any insights on those locations, if you're a local
and you know a great spot to meet up, totally open to all that feedback right now.
It's all in the planning phases.
But details as we have them are in the meetup pages at meetup.com slash Jupiter Broadcasting.
And we'll link to each individual meetup in the show notes.
Please sign up and let us know if you're thinking about going so we can plan accordingly.
New, fresh content, old friends, places we're going to visit.
And my intention is to have the tracker going,
so that way if we go past your place on the road and it works out,
we can have a spontaneous little meetup too.
I think it's going to be a lot of fun, and I am super excited
and thrilled to have Linode helping us make it possible.
Thank you to Linode.
I'll tell you more about all that, but that just got all figured out in the last couple of days. And we've been talking to System76 about getting a factory tour while we're there, too, so we can see how the launch keyboard is made. That's in the works. And we're making dinner plans as well.
The last community event I went to wasn't my thing. It just, in fact, that event made me more
profoundly concerned about the state of the cybersecurity industry than anything else.
And it actually sat with me in a really kind of awkward, negative way for a long time.
Not a good community event to end on.
No, it wasn't, Wes. And so this is, oh man, I needed this so bad. And I think a lot of
people, you know, a team reunion, we haven't seen each other in a long time.
That's going to be incredible.
And then a chance to see the audience.
So details are in the show notes for all of that.
In the meantime, I am just trying to get Lady Joops back up and running again.
In fact, I could also use anybody's input on this.
So just a real quick version of the story for context purposes.
Stay a while and listen.
On my way to Montana about a month ago, heading east from the Seattle area, in Idaho
my living room slide broke. So, you know, my RV's got a kitchen slide, a living room slide,
and of course it's got a bedroom slide. And for those of you that don't know what I'm talking
about, it's like, it's like a room on a track that stretches out from the side of the RV and it gives you more space.
And so for a family of three kids, two adults and a dog, you essentially go from having a kitchen and a living room to not.
And it makes it so you can go down the road, you park, you set up, you got a lot more space.
Wonderful idea.
However, in the time around when, well, really still even today, but in 2014 when Lady Joops was built, the Schwintek slide system was not properly installed.
And so we've been fighting that son of a bitch for years now.
Five years, it's just been a constant struggle.
A money pit.
And it's a real damn shame because, you know, Joops is decked out with solar.
She's got lithium batteries.
She's got the coolest internet setup I could ever possibly have.
All these little upgrades.
She's our family home with tons of memories in her. You don't want to give that up. I mean, just the work alone,
let alone the attachment. So all the slides were out when we were in Idaho because we're making
room for the family. I bring in the bedroom slide. I bring in the kitchen slide. Everything's fine.
I bring in the living room slide. It stops halfway. Of course, we're supposed to leave in two hours
and I can't get my slide in and I can't go down the road with a slide out because I'll be clipping everybody going down the road.
So I call a tech.
He says, yeah, we're two hours out.
I'm like, yeah, but I got to leave here in two hours.
So I go to the office.
They say, yeah, you know what, Mr. Fisher, we understand.
This stuff happens.
You guys don't worry about it.
You can stay all night if you need to.
Wolf Lodge, by the way, in Idaho, shout out to them.
They really were really great about it.
The tech comes out, and we determine that we can't resolve this in the field.
There's just no real, no solution here.
And the only option is to disengage the motors because they have a magnetic lock to prevent
the slide from just arbitrarily moving in and out.
Of course.
Disengage the motors and we have to slam it in.
And I hurt myself doing this like an idiot.
I popped a rib out, hurt for weeks, made the whole trip agonizing.
On our way back, we call up our shop that we've been going to for years and we say,
hey guys, and this is where I need the audience's help. We say, hey guys,
the slide that you just fixed in March is busted again. We immediately get the cold shoulder. Like
I can tell they're done fixing this problem because it's a pain in the ass to work on.
And they're sick and tired of this and they're busy. They're packed.
They basically say without saying, you know, we're kind of done fixing this.
So we kind of, we kind of decided we got to find somewhere else.
Found a local guy near the studio who's kind of known for fixing slides.
So it just seems like what a great opportunity.
15 minutes from the studio.
This guy's built custom slide systems.
This seems like the way to go.
Go to his website, even charges less than our standard shop.
Oh, man, that's great because we just had this fixed in March.
Bring Lady Joops in, and I pull in, and I go, this is a little weird.
He's using storage units as his workshop, which is fine.
A little unconventional, but, you know, I used to podcast out of a garage.
Okay, that's fine.
We get there.
We're going through the work order.
He says to me, oh, by the way, our rate now, instead of 85, is 125 an hour. Okay. And you have to pay in cash.
Okay. Like, I don't, you know... okay, all right. I don't generally carry a lot of cash on
me, but okay. He starts doing the work, and days go by, days go by. He then gives us a diagnosis,
kind of seems in line with what I expect. More days, five more days go by.
As days go by here, I mean, that's your home, right? This is where you're most comfortable,
where you've optimized for your work and enjoyment and life.
Yeah. Yeah. That was, that was horrible. That 10 days was horrible. We get it back yesterday.
$4,200 cash out of pocket, right?
I got, like, no check, no cashier's check, it had to be cash.
And he's got the RV, right?
So it's not like I can really argue with him because he's got it in a locked storage bay behind a gate.
Like I got to give this guy his money if I want my RV back.
But he says he fixed the slide.
Sounds like the beginning of a movie, maybe.
I know, right? Get in there. Slide seems like it's working. We close it all up. We drive it home,
set up. First thing I do, level out the RV and go put out the slide. It immediately breaks.
I wasn't even home for 10 minutes before it was broken. It just clearly wasn't fixed.
$4,200 in cash too, right? So it's not like I can just like reverse the transaction.
And I guess why I'm putting this out there is,
A, I obviously have a timeline.
I got to get this fixed before the Denver trip.
Like that's the mission now.
So I'm appealing to anyone in the audience
who might know of a shop or a technician
who has solved these rickety, poorly installed Schwintek slide systems before.
It's a very specific kind of slide type.
It has to be that particular type.
And I'm just putting that out there because this is going to become an issue for this trip to Denver now.
I got to get this fixed.
And I don't have that kind of money.
Like that, $4,200 hurt.
It hurt a lot.
It hurts because it was only yesterday.
It still hurts. It's really
got me wrecked, actually. I mean, that's other things you can't do. Other upgrades, things for
your kids, things for JB, all kinds of stuff. Well, the situation is a little more bleak than
that, even like I'd like to have my savings account for medical emergencies or some kind
of other unforeseen accident, because I'm self-insuring, right? And that's what I'm cutting into with that.
And that's why it matters a lot. And I got to, I got to get it fixed again. I'm going to try to
get this guy to get it right, but I'm not comfortable continuing to dump money into
this hole for multiple reasons. So if anybody knows somewhere that really does a great job,
I'm looking into the room slide systems, but I haven't heard any testimonials of anybody who's
ever tried it. And my goal is to get this fixed, get it fixed right, and then get on the road and have a
hell of a road trip and meet up with people.
So I'm appealing to anybody out there who might have some input or some tips.
I know it's kind of off our normal beaten path here, but man, it's got me down today.
I'm just in a bad way.
And so it's one of those things.
I've been fighting this thing for five years, and I'm beyond the point of wanting this solved.
So if you could contact me, chris at jupiterbroadcasting.com, or on Telegram, ChrisLAS, or on Twitter, ChrisLAS, you'd really be helping us out.
So thank you. And on with the show, because we do have some fantastic community news this week.
I'll give you a little more time for that arch update.
Wes, this is the announcement I have been waiting to make on the show for 10 years. Well, on some show for 10 years.
That is that NVIDIA's beta driver that is currently available to download, the new 470 series,
has out-of-the-box Wayland support, including xWayland. It's here. The first beta driver that makes it possible
to use your full GPU and Wayland at the same time.
They've released a public beta test of the proprietary driver,
which supports hardware acceleration.
This is being brought to us by folks that have worked with NVIDIA
at Red Hat and Fedora Project,
and the Fedora Project is making this announcement today.
Now, you will probably need Fedora or another up-to-date distro here because it requires
some changes and fixes in xWayland, but they're already part of the current build shipped
with Fedora 34, of course.
Yeah, other distributions, you're just going to need to update that stuff.
They're looking for people to help them test this right now.
There's a lot of work
that still needs to be done.
And so they're asking for people
to participate in this.
You also need to reconfigure
your GDM display manager on Fedora
because right now it's said
to automatically disable Wayland
when the NVIDIA driver is loaded.
So you'll have to solve that.
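A hedged sketch of that reconfiguration, assuming GDM's stock udev rule (61-gdm.rules) is what forces the X11 fallback on Fedora 34; masking it from /etc is the commonly suggested approach:

    # mask the rule that switches GDM to X11 when the NVIDIA driver loads
    sudo ln -s /dev/null /etc/udev/rules.d/61-gdm.rules
    # also make sure WaylandEnable=false isn't set in /etc/gdm/custom.conf,
    # since that would override everything else
    sudo reboot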
There's some issues though, Wes,
like we got to be honest,
it's not great at this point. Yeah, if you happen to like running Chromium based browsers, yes,
including regular old Chrome, well, there's an issue with Chromium's GPU sandboxing, which means
you just get a blank window unless you tack on --disable-gpu to the command line,
and you can work around that for now. Not ideal. Yeah, it's not ideal, no.
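For reference, the workaround looks something like this; --disable-gpu is a documented Chromium switch, though your binary name may differ:

    # run with GPU acceleration off until the sandboxing issue is sorted out
    chromium --disable-gpu
    google-chrome --disable-gpu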
And it's not just Chrome.
Applications such as Blender or Steam sometimes just show a black window.
That's an issue which should be addressed later.
They know about that.
There's also some problems with GTK4 applications just showing nothing sometimes.
That's also a known issue.
It's already been reported upstream, but you need to know about it before you're going to use it.
There's even some Firefox problems.
Yeah, if you thought, okay, well, Chrome's not working great.
What about Firefox?
Now, it turns out there's an issue with EGL here,
and I guess like ambiguity in the specification,
and Mesa went one way with their interpretation,
and this new NVIDIA implementation goes the other way.
Of course, Firefox Web Render, it relies on the Mesa version.
So you do have to disable hardware rendering
if you want this to work.
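One hedged way to do that, assuming the about:config preference from Mozilla's WebRender rollout is the relevant knob here:

    # in about:config, force software WebRender instead of the GPU path
    gfx.webrender.software = true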
Yeah, it seems like right now
there's going to be a lot of that.
You know, these are just the identified issues.
Yeah, there's even an issue with KDE's Wayland Compositor
that also has a problem with full-screen GLX applications.
So yeah, it's kind of all over the place,
but this is still a huge
improvement. And these are exactly the kind of issues that, you know, you need to go through
this process to get those fixed and really get it polished. Yep. I mean, it's, it's funny though,
because this has been one of the most anticipated aspects of Wayland adoption for a really,
really long time. But in the free software world, we don't get these big onstage splashes where there's, you know, some high end video production of some famous executive that comes up on stage and says, and today we're enabling Wayland on NVIDIA.
Boom, and a slide drops down.
We don't have that.
We don't have these moments that the tech press then grabs and says, a fundamental aspect of this transition has arrived, enabling
the future of, you know, we don't have that kind of hype cycle that you'll get when there's
something about Windows 11 or something about the Mac.
But these little moments where a Fedora contributor, you know, posts on their mailing list of sorts
that, hey, all, you know, Oliver just says,
hi, all, it's here today, NVIDIA release. It's just, it's like this, it's the most nonchalant,
low-key milestone ever. And yet the ramifications are huge. And I really encourage people to try it.
I've got an NVIDIA box in here. And if we rebase on Fedora, I'll absolutely try it at some
point. I only have one
NVIDIA box anymore because I've sort of either gone Intel or AMD. And I imagine a lot of people
have made a lot of those transitions. So if you have the capability of testing this, you're probably,
you know, kind of a select few. And so we'll link to the beta display driver,
and we get to celebrate the moment here. Quiet and low-key, but we acknowledge that it's happened, and it's here.
Also, in that same vein, the ButterFS train chugs right along.
A little steam-powered there.
Fedora Cloud, another spin of Fedora, is adopting ButterFS,
and they've been thinking about this.
We haven't really talked about it yet because it was just up for consideration, really.
The idea is that Fedora Cloud installs will provide ButterFS by default,
and then you could take advantage of transparent compression on VPSs and other places where you might run Fedora Cloud and get snapshots, of course.
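To make the transparent compression concrete, a minimal sketch; zstd at level 1 is what Fedora settled on for its Btrfs defaults, and the UUID below is a placeholder:

    # an /etc/fstab entry enabling compression on a ButterFS root (UUID is hypothetical)
    UUID=xxxx-xxxx-xxxx  /  btrfs  compress=zstd:1  0 0

    # check what options a mounted filesystem is actually using
    findmnt -no OPTIONS /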
Well, today, the code to actually make that transition was committed. So it is real.
It's actually happening.
Another spin of Fedora is switching to ButterFS.
In some sense, maybe it seems inevitable,
but I don't know.
Like this must mean the rollout's going
pretty darn well, right?
Yeah, I wouldn't expect it to go much further though.
And I don't think people should take that
as any kind of statement on the ButterFS,
you know, traction or results. Fedora Server
and Fedora CoreOS are not built like Fedora Cloud and Fedora Workstation. And they are,
they're kind of off on their own thing. And I think with Fedora Server, there's probably
a lot more politics around that with Red Hat. And Red Hat doesn't necessarily have customers
knocking on their door demanding ButterFS. And that's kind of what I want to talk about here is, what's the line, people?
I'm talking to you ButterFS skeptics right now.
I'm sure so many of you actually use ButterFS in production too, so you actually have informed opinions.
What's the line?
20,000 Facebook servers running ButterFS, very popular distributions using ButterFS by default, Synology shipping ButterFS on
their NASes for a decade.
What's the line when you stop becoming a ButterFS skeptic and you go, oh, maybe there is something
here.
Maybe I should reevaluate my opinion.
Maybe things in the technology space change.
And when you get developers who have a vested interest in improving a file system for their
production environment,
it gets better.
Where's that line at?
Because, you know, what I saw a lot of
was people scoffing at this story.
Well, I bet they'll switch back.
They're going to regret that.
Lol, ButterFS.
Boy, what a joke.
And it's not, I don't care what file system you use.
You know, I use XFS and ButterFS and ZFS interchangeably, really.
Well, not interchangeably, but for different jobs.
I don't care.
I'm not getting paid every time I plug ButterFS or something.
I don't have a stake in it.
But I do, I just intellectually am curious for you,
where's your damn line?
Because how much success does something have
before you're willing to reevaluate?
And if you don't have an answer for that,
maybe you should consider the fact
that you're being a bit of a jackass about this.
And I'm just, I'm sick and tired of
when we have something great in our community,
the bias against it, for whatever reason,
like Canonical experiences constantly as well.
It's like, it doesn't matter how great a technology is,
like Mir, which is still around and kicking,
it just never has a shot because of the Canonical bias.
And ButterFS is kind of experiencing that same kind of branding.
It's like they've been typecast as a bad file system, and they'll never be able to play any other role than that because people have made up their minds.
And I find that so frustrating in a technology community, in a community specifically of
Linux, where things are always changing.
Assumptions are always being reversed.
New code and new people with new reasons and new products are always coming into the free
software community and contributing code and changing the direction of things.
We watch it happen over and over again.
And yet, even when it's happening right in front of us,
some of us can't recognize it.
I find that frustrating.
And I don't mean to get up on a soapbox,
but I honestly would like to open up that conversation a little bit
with some of you out there and go,
so where is that line at?
Where's the I'll reconsider?
Linode.com slash unplugged.
Go there to get $100 in 60-day credit on your new account
and you support this here show.
It's a way of saying, hey, I heard about that ad there on the Unplugged podcast, and I want those guys to keep doing shows.
Linode.com slash unplugged.
Our hosting provider, full stop.
I know there's a lot out there. Every time, I go with Linode.
It is in part due to the performance, no doubt about it.
That is always a factor for me.
Linode is screaming fast.
And I like that they have 11 data centers that I can choose from. So I can put something close to you guys, or I can put something close to me,
depending on if it's an internal thing or an external thing. That's obvious, right? That's great.
But I think, you know, when you look at all the different options out there, I can't just tell
you it's the one thing about Linode that has really made me stick. My transformation with
Linode was just curiosity. Almost two and a half years ago, I just wanted to try them
because I tried just about everybody else.
But I had seen Linode at Linux Fest every year.
I'd seen Linode at a bunch of events.
They always had a great booth.
They always seem like really great people.
They always had a smile on their face.
And they were doing this kind of stuff way before anybody else.
You know, they just turned 18 years old this month.
They've been doing this for 18 years. That's amazing. And they have used that time to really
build the best in virtualized cloud computing. And that's why it's not just like, it's like the
speed, or the UI, or the API, or the command line client. It's like everything,
because when you've been doing something for 18 years and you focused on that,
you know, they didn't like launch all these crazy side businesses and go out and get a bunch of VC
funding and just start dumping on the market. They built a competitive product. And then years
into doing it, a decade into doing it, Linode had to pivot and start competing with every DNS
provider that wanted to give hosting, with every VC-backed company that wanted to go out there
and get you to buy their servers at pennies on the cost,
Linode had to pivot and compete with that,
but stay independent.
They managed to step up to the plate
and exceed what everybody else is doing.
And I can tell you as a long-timer now,
one of the things I really like about Linode
is just that constant, smooth, improved iteration.
Like in this last year, they rolled out cloud firewalls to everybody.
They have deployed Terraform support, Ansible module support, Kubernetes.
They just have stayed current along the way without like screwing up my workflow if I don't want to use any of that stuff.
Like it's not all in my face.
They've managed to keep the UI clean and usable. They have a ton of great options from like $5 a month rigs, which are totally, totally screamers. I mean, for 30 to 50% less than what
you get at AWS, you're going to get something faster and cheaper, right? I mean, that's a big
deal because it's an ongoing cost. And so if you can get something 30% or 50% cheaper,
and it's 30% or 50% cheaper every month,
plus you get $100 credit just by supporting this show,
there's a lot of value in that.
And then, oh, yeah, it happens to be super great.
The UI happens to be fantastic.
They have fantastic S3-compatible object storage too.
Like, check, check, check, check, check.
This is why we use Linode every single time for our back-end stuff and for my personal stuff too.
It's a practical thing, and I think you'd really like it.
So go over to linode.com slash unplugged.
Get $100 for your new account and try this stuff out.
Build something, learn something, deploy something.
There are a lot of ways to host things.
There's a lot of various companies,
but none of them are the complete package like Linode.
That's why we choose Linode every single time. Linode.com slash unplugged. Okay, Wes, I bought you some time.
Let's check on that Arch update. How's it going over there? Well, it's going pretty well so far.
We've made it through the regular packages and we're on to the AUR updates,
which not only includes updates
for the AUR helpers we've got,
but also ZFS DKMS,
which is building right now.
And this is where it sometimes
gets a bit squirrely for us.
You know, you got to make sure
DKMS figures things out right
and not only builds the modules
for the current kernel,
but also that new kernel
we're going to be rebooting into, hopefully. Okay. All right. So we're not done yet. I'll
check back in with you. Oh, that is always the thing. So I know it will boot back up because
our root partition is ButterFS and then our big data storage with all of our disks is ZFS. This
is an example, by the way, of how you can use the two technologies together. It doesn't have to be
mutually exclusive. And we know
that ButterFS root partition
will mount, so it will boot.
We just might not have all of our data and all of our
applications may fail. That's all.
You know, it's just a partial degradation, that's what
we'll call it, once we get one of those
status pages, anyway. We will check back in.
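For the curious, a hedged sketch of the pre-reboot sanity check being described, with hypothetical version numbers; dkms status reports which module versions were built against which kernels:

    dkms status              # e.g. zfs/2.0.4: installed, listed per kernel
    ls /usr/lib/modules/     # confirm a directory exists for the new LTS kernel
    # if the new kernel is missing, build for it explicitly (versions are examples):
    sudo dkms install zfs/2.0.4 -k 5.10.45-1-lts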
You're fighting the good fight there, Wes.
Keep it up.
Keep it up.
Keep it up.
In the meantime,
a little spot of housekeeping here.
Just a reminder,
you can go to meetup.com
slash jupiter broadcasting
to get all of our meetups
that we may do in the future,
including the Salt Lake City
and Denver ones.
We love your emails and feedback.
Send us those at linuxunplugged.com slash contact.
And don't forget about the LUG this Sunday and every Sunday at noon Pacific or whatever that converts to in your local time.
We have it on our calendar in our mumble room right there in the lobby.
Hang out with like minded folks.
Talk Linux.
Ask questions.
If you've got a project you're working on, you'd love to share with the class. They love that kind of stuff. You can find out details for our Mumble
server at linuxunplugged.com slash mumble. You go there, you get it all configured, you join the
LUG and just hang out. You participate as much or as little as you choose. And then gosh
darn it, you know what? You got your Mumble client set up. You could join us on a Tuesday if you had
a Tuesday off or something. You want to come out and hang out with your buddies during the show.
We've got a quiet listening spot where you just get the absolute lowest latency, high quality
audio feed right there in our Mumble room using a whole free software stack because we're on Linux
here. Mumble's free software. You can listen to the show live with almost no latency as close as
technically possible in our Mumble room in the quiet listening.
And then if something piques your interest,
you can pop into the on-air channel and chat with us in the show.
We open it up to anyone in our community as long as you pass the mic check
and stuff like that.
Details at linuxunplugged.com slash mumble for that.
And join that LUG.
It is a great group, and it's every single Sunday.
Talking Linux, people just love that kind of stuff.
And, you know, I need that personal connection from time to time.
And whenever we can, we pop in there on a Sunday as well.
LinuxUnplugged.com slash mumble.
Arch update.
We've finished our AUR packages.
DKMS modules compiled.
I think we're ready to reboot.
Let's do it, Wes.
Let's do it.
Here, while you do that, I'll bring up the ping.
And then when I see that it's come back up or you see it's come back up,
we can check in and see what the status is.
You get your ping going now so you can see it fail.
In the meantime, can you get the AUR on Ubuntu?
And why would you want that?
And just really briefly, the idea would be some sort of long-term stable LTS base.
You know, you have, maybe this could work on your Xubuntus, your Kubuntus, maybe even
elementary.
Something where you could get a nice, clean, highly compatible, well-supported base, but
then get those nice, fresh native packages from the AUR, Telegram and those kinds of
things you just want. And it could be, perhaps for some, the perfect setup. We looked out there
and found a couple of options for doing this on Ubuntu. And some of them are actually pretty
compelling. Let's start with Pacstall, Wes, which has maybe got a lot of potential.
It's new, though.
Yeah, it's very new.
This is kind of what started our little journey here, because it comes with a bold claim.
Pacstall, the AUR for Ubuntu.
And yeah, really, that sounds pretty nice.
Okay, we all love the Ubuntu LTS, super stable, you know what you're getting.
But sometimes the applications, well, they're not up to date, or they're not the latest ones,
or you have to do something else like, you know, get containers involved or virtual machines or compile stuff yourself.
That's where Pacstall might be something you're interested in.
It's very simple, pretty easy to get started with.
And it's written by a very intrepid 15-year-old specializing in Bash.
Yes, this is just written in Bash.
Yeah, Bash can do the job. You can do the job with Bash. And it's got a pretty nice user experience.
It's simple. And it's kind of limited at the moment because they're essentially recreating
their own AUR repository. Yeah, I think the thing we were hoping for when we first started playing
with this was direct access to the existing AUR. Because,
look, the AUR is great, but it's not exactly
sophisticated, right? I mean, it's
a user repository. They're all independently
maintained. MakePackage is pretty
cool, but it's not, you know, it's not crazy sophisticated.
This is a whole lot simpler.
It's just some bash scripts that
kind of help you out, install some
dependencies through apt for you, and that's why it's for
Ubuntu, and then, you know then it's got some steps to figure out
and run make for you, whatever compilation steps.
In that way, I think it is very much inspired by the spirit of makepkg
and the AUR, it's just a lot less far along right now.
There's only like 50 packages available, it's not really mature or tested,
I mean, it's just early days.
I think it was started just by this, the
initial author, but it does seem like there's a
small community forming. You know, you're already seeing more
reviews happening, pull requests, people keeping packages
up to date. So that part's exciting,
but it's a long way from anything
you'd ever want to use in production.
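As a sketch of the user experience, something like the following, with the subcommand letters assumed from Pacstall's documentation at the time and the package name purely illustrative:

    pacstall -S neofetch    # search the Pacstall repository
    pacstall -I neofetch    # fetch the pacscript, pull apt dependencies, build, install
    pacstall -Up            # update Pacstall itself and installed packages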
And maybe there could be a way to mirror
the actual AUR or say like
the top thousand packages or top
hundred packages from the actual
AUR to their own independent AUR. And maybe they could do that automatically somehow. And I would
love to see them match their syntax to pacman syntax. So it's like the same flags to do package
operations and things like that. But it seems like it's really early and it's not quite what
we were looking for, but it could have potential for Ubuntu users at some point,
especially if they can shore up some of their security stuff and whatnot.
But we wanted something that is truly using the AUR,
because the damn beautiful thing about the AUR is that it's just so well supported by the community around it.
Any wild-ass idea you could have for installing some random package, it's probably on there
from Windows applications and games that haven't been around in forever, to your daily
drivers, like your Telegrams and your Discords and your Element chats, all in there, all one
package manager, all updated centrally, all native, no weird loopback mounts and theme issues, all just beautiful
packages that sometimes get built right on your system.
Right.
And when they do, that means you're getting them built on top of Arch, which, you know,
provides you all the latest stuff.
So you get the fanciest options.
You can enable all the flags of compilation time that you might need to.
It's really quite slick once you've got it all set up.
Which is how we got to makedeb and mpm, another set of tools that actually use the real AUR.
And we had mixed results with this, I would say.
You actually did have some success.
Yeah, I got tealdeer, which is a Rust implementation of tldr,
which is a handy program for giving you easy examples that maybe you don't quite find
in the man page, but with a similar spirit.
This was a Rust application, you know,
because Chris makes me install Rust applications all the time.
I'm not allowed to install Go apps.
Yeah, that's in your contract. No Go apps, Wes.
It's tough, but I try to work around it.
And, okay, so it did work.
I was able to use MakeDev to get this going.
Here's the idea.
It takes those package builds,
yes, the real ones from the AUR.
It's even got some nice command line stuff
to go search and find them for you,
which is pretty slick.
Downloads them.
And then the idea is they've got this component,
makedeb-db,
that's like a dependency translation layer
to go figure out, like, alright,
well, the package build has these deps listed,
but we need to go get
apt dependencies in the Ubuntu system,
and that's going to be different. And that's kind of the key here
that I think is really early days and isn't
quite totally fleshed out, because to get
tealdeer to actually compile
completely and get a deb that worked nicely,
I basically had to do dependency
resolution myself,
install the stuff it needed,
delete the dependencies from the package build,
and then run make deb.
But after that, it did work.
So I think that means, like, in theory,
if you could get that mapping from Arch to Ubuntu dependencies actually maintained,
you know, you had a community effort to make that work
and test out some packages, this might just work.
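Roughly, the flow described here looks like this. A hedged sketch: makedeb is a makepkg-style tool, so the invocation is assumed to match, and the manual apt step stands in for the dependency resolution Wes had to do by hand:

    # grab the real PKGBUILD from the AUR (tealdeer is the Rust tldr client)
    git clone https://aur.archlinux.org/tealdeer.git && cd tealdeer
    # today: hand-resolve build deps via apt and trim them from the PKGBUILD
    sudo apt install cargo
    makedeb                           # produce a .deb from the package build
    sudo apt install ./tealdeer*.deb  # install it like any local deb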
Yeah. What I liked about it too, is you could just check out the packages,
pull them down on your, on your local system and build them there, you know, old school style too,
if you wanted to. Where I could see this working is if you had a fleet of Ubuntu systems, but you needed access to a couple of specific apps or versions that are only in something like the AUR,
you could use this and you could support it and maintain it yourself,
and you could run that.
So like for us, we could potentially switch to Fedora,
as an example, if you could do this,
switch to Fedora or Ubuntu base here in the studio, something like that,
and then you put this on top of it, theoretically,
and just get those one or two applications you need,
and you don't have to run Arch on the system itself, on the host system.
However, both of those are early days, so we thought maybe we could look at something
a little more universal that wasn't just targeted at one distribution.
And you can crack this nut with containers.
There are people out there that get X applications running inside a container and go that route.
And by containers here, we mean like, you know, your server side, your Docker, stuff
like that.
Yeah.
And of course, this would be pretty much a non-existent problem if the world could just agree on a universal packaging format like AppImages, Flatpak, or Snaps.
And then everybody just shipped as that because that would actually solve this problem.
But I could also wish that it rains quarters.
Right.
Like if there was a Flatpak for every package on the AUR, like, yeah, okay, we wouldn't be so concerned.
This wouldn't really be the question because, of course, you just go get the Flatpak.
That would work.
But right now, I think the AUR still has got Flatpak, Flathub beat by a mile.
What I do like about the Flatpak and container route is that it's not spewing stuff all over your file system.
Your host box stays nice and clean and fresh.
And that's when we started thinking about, what about Fedora's Toolbox,
which spins up these disposable pet containers with just a command?
You don't even really know it's happening.
You just run Toolbox, and you get a command line,
and it brings all of your user accounts and your permission settings
from your host system into that environment, and it's totally transparent.
You don't even realize that you're making a mess in a container
instead of on your file system.
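The default flow really is about that short; a minimal sketch on a Fedora host, where the container name is the per-release default:

    toolbox create            # builds a container matching your Fedora release
    toolbox enter             # your $HOME, user, and permissions carry over
    sudo dnf install gcc      # make your mess inside; the host stays clean
    exit
    toolbox rm -f fedora-toolbox-34   # throw the pet away when you're done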
But would it be possible to use Fedora's toolbox tool
with another distro besides a RHEL or Fedora distro?
Could you use toolbox with Debian or maybe Arch?
A bold question.
I mean, I played with toolbox a little bit
and it really is pretty neat,
especially if you're trying to build some dependencies.
Maybe you're working on packaging something up or playing with container tech, and yeah,
exactly like you say, Chris, you don't want to pollute your actual main desktop system.
But it really is quite optimized for Fedora. That's all the images that they build for
themselves. That's what it asks you if you want to go download that image when you first set it
up and get going with it. But of course, it turns out it's powered by OCI images
and Podman under the hood.
So you can do basically whatever you want.
Yeah, including building an Arch or Debian image
and using that inside Toolbox
and just spin up a Debian environment on Fedora
or an Arch environment on Fedora
in just, well, as simple as running a single command.
Did you give it a shot?
Yes, I did.
Now, actually, they've added some stuff to the toolbox docs over on GitHub detailing
what needs to exist in the image.
And for most common things, those are basically just taken care of. But Arch, it's a little
bit different because it's Arch.
It's super minimal.
It doesn't have everything filled out and installed for you automatically or configured
necessarily in /etc.
So in the past, I know there's been some bugs around, like, well, you needed the localtime stuff set up, and that's not in the base Arch Docker image.
And there's a few issues like that. But actually, a lot of that's been resolved.
The one thing I had to do was manually touch /etc/machine-id just to get that in, because Toolbox is going to mount that in from the host.
But after I did that, I was able to just sort of install the pacman base packages.
I installed a couple other apps, including Nano, because I wanted to get that fresh,
hot Nano 5.8.
Who doesn't, right?
Just got to have it.
Built it up with a Dockerfile, or if you're in the Podman world, a Containerfile.
And then, yeah, Toolbox, when you do toolbox create, it's got a --image option.
And then you just give it the name of the, you know, the tag
of the name that you built for that image.
It'll create it, and then you run toolbox
enter that name,
and you're in. It totally worked.
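Pulling those steps together, a hedged sketch; the image tag and package list are assumptions, and touching /etc/machine-id just pre-creates the file Toolbox bind-mounts from the host:

    cat > Containerfile <<'EOF'
    FROM docker.io/archlinux:latest
    RUN pacman -Syu --noconfirm base nano sudo
    RUN touch /etc/machine-id
    EOF
    podman build -t arch-toolbox .
    toolbox create --image localhost/arch-toolbox arch
    toolbox enter arch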
Now, I haven't gone all the way and tried out complicated stuff
with, like, audio and graphics yet.
This is still early days, but
boy, having an easy little Arch toolbox.
Sure.
Anywhere that you can get toolbox running,
which is not just Fedora, by the way,
it's packaged in Arch, for instance.
I mean, that sounds great.
Yeah, I love this idea.
You have this fresh, clean Fedora base
or whatever it might be.
Maybe it's CentOS Stream.
I mean, I'm not totally kidding either.
I'm thinking CentOS Stream 9
might make a decent workstation OS.
We'll see.
Maybe not, but it might.
And then you take something like Toolbox
and you have Arch and Debian user lands
that are just in a terminal.
And you know what you've done
is you've kind of created the WSL experience.
The one thing that struck me
as a nice thing about WSL
is I could have a distro per terminal tab.
I had Ubuntu, Debian, and OpenSUSE all running simultaneously, each with their own tab.
And you could replicate that with Toolbox.
And you could, it's all in its own container.
So you mess it all up and your host system stays clean.
I freaking love this little Toolbox tool.
I bring it up as often as I can
because I think it's such a neat idea
and such a nice, practical use
of containers in a way that's nearly
transparent to the end user
and it's a great example of what Linux does
that Windows and Mac can't.
And I freaking love that too. And the fact
that it's using Podman underneath is neat because that's
just cool technology and all that kind of stuff.
Yeah, that's what's neat about Toolbox.
It really is like we've had all these primitives,
kind of like with containers in general, right?
Like we've got all these low-level things,
but especially in the Linux and open source world,
what really makes it shine is that folks have put in the work
to make the UI and the UX appealing.
I could do all this stuff by hand with Docker or Podman,
but to get it actually adopted and be more useful day to day,
it takes that extra layer of polish, I think.
I'd love to see the Fedora project get to the point where maybe they could integrate Toolbox even further into the desktop and work on those kinds of things.
And maybe even start maintaining a couple of compatibility images like that.
Because it sure is a neat idea.
And it's a great example of how we can use this technology if we just connect a few more of these primitives, like you're saying.
And, yeah, you could wish for everything to get packaged up as a flat pack or an app image or a snap, but it's never going to happen.
I mean, that's one of the brilliant things about the AUR is there's just junk in there that is not going to get touched by anybody else.
It's some diehard who loves that freaking piece of software and wants it available.
So he or she is just doing the hard grunt work to get it in the AUR.
And that's why it's there.
Well, and I think it's hit that right balance too,
right where you're right.
There's a lot of users who are passionate
and have the time and skill to maintain them.
That shouldn't be taken for granted.
But it's also simple enough.
There's not a lot of bureaucracy or, you know,
CLAs to sign or other hoops to jump through
to get it actually packaged and done.
So I think it makes it pretty easy.
There's a low barrier to entry
if you actually want to throw something up in there.
Can be bad, of course, but I think it also makes it really easy for folks to get started
and put whatever they want in there.
Yep, and you could mix and match and have it to your preference.
And I think this could be a, forgive the pun, but I think it could be a go-to tool for us.
I could see using this frequently.
It could make it possible to have just one base Fedora and then these pets
right there in my terminal.
Okay, Wes.
I see it's replying to pings.
So the server and the network stack are back up.
The question is,
did everything mount?
And are our applications running?
We are 100%
back online.
Oh, look at you!
Nicely done, Wes.
You know, I think, I'm not positive.
We should have been keeping track from the beginning,
but I think our success rate is higher than our failure rate.
I think you're right about that.
Maybe we'll have to go back and review the episode sometime.
Now, I will say we did have some issues
and there's still some work we want to do
around our existing WireGuard stack
because we're still using subspace.
And last upgrade, two upgrades ago,
that kind of bit us
because the existing subspace was set up
to rely on tools on the host file system.
And it just gets a little weird
when you've got one OS kind of borrowing tools.
And of course, the Arch one got updated on the host
and didn't play nicely.
We were able to update that
and kind of better containerize that setup.
But you never know, something might break again.
Yeah, we've got some ideas.
We've been kicking around
kind of an update to our WireGuard infrastructure
and then kind of capturing the work we do there
and relaying that back to you guys.
Because I think there's some cool tools
that we could share with the class.
So we'll probably do that soon, I think.
I've got a couple of VPN ideas I'd like to do on the show soon, as a matter of fact.
So stay tuned.
That'll be coming up.
Thank you to our core contributors at unpluggedcore.com.
This episode, the last couple of episodes, there's not been a sponsor in this spot right now.
This is traditionally where I might mention a previous sponsor we've had right here.
But there isn't one.
Our members are still making it possible for us to do this show though.
So thank you.
I really appreciate it.
And if you would like to become a member
and you haven't yet,
I've made a summer promotion available
that takes a dollar a month
off the lifetime of the membership.
You can go to unpluggedcore.com
and use the promo code summer
to get a dollar off the lifetime of your membership,
support the show,
and make it possible for us to keep going
when we don't have an advertiser in this spot right here.
And, you know, it's slow growing,
but I could see a day where we are just this,
this is always just this spot of the show.
This half of the show has always been brought to you by our members
or always is brought to you by our members.
We're not there yet, but we're building that way.
And we'd love your support at unpluggedcore.com.
Get a dollar off the membership and get access to the full live stream, no edits, all our mistakes, and plus the pre and post show, which is like a whole other podcast.
Or the access to the limited ad feeds, nice, tight, punchy, and still all the great production quality.
Unpluggedcore.com, thank you to everybody who does that.
We really do
appreciate your support, especially right now. A couple of emails to get to, maybe actually three
emails to get to. Frank writes in with our first one. Yes, he does. Frank's been trying Red Hat 6.2.
He writes, hi folks. For nostalgic reasons, I also tried to run Red Hat 6.2 in VirtualBox.
Although I managed to install it, I couldn't get X11 to work,
which was kind of the essential part of the experience I was looking for.
How did you actually do it?
You mentioned both bare metal and virtualization.
Was there any magic trick involved?
Thanks, Frank.
Good question, Frank.
You know, it's tricky when you go further back.
6.2 is further back than any of the distros Wes and I tried,
and it's going to be harder. But I'm pretty sure I was using just the VESA driver.
It's going to depend on your virtualization software and what it presents to the guest.
So there's that combo you got to get right. But I'm pretty sure during my setup, I didn't push it.
I recall doing like 1024 by 768 and the VESA driver for each machine. Whatever,
I remember that I had to fiddle with it a little bit. And then once I got it working on one of
them, I just used that same exact setup for each one of the distros. And that worked for me.
Yeah, you might try, I don't know, I do like VirtualBox, but it's been a while since I really
used it day to day. So maybe you also give KVM and QEMU a shot.
Maybe you'll have different results.
Joe writes in and he says, it's always great hearing about the latest developments on Wayland and Plasma and Pipewire and so on.
However, I think maybe there is a lack of AMD on the show.
I've tried KDE, GNOME, MATE, and XFCE, and all have issues on AMD Ryzen 5 3500U devices.
Gnome under X11 has been the most stable thus far,
though recently a lot of AMD GPU DRM issues have been causing me grief.
I've submitted bug reports to Red Hat's Bugzilla and Gnome and KDE's different trackers,
and Mate and XFCE issues have also been plaguing me, including tearing on the XFCE compositor. Overall, I think
it's worth highlighting that while a lot of updates are forward-looking and they're great,
maybe a deeper look at AMD hardware could be needed. Love the show. Thanks again, Joe.
That's true, Joe. We don't do a lot of AMD hardware just because of, you know, we'd have to
buy it and costs. We don't really have a lot of great options either when it comes to laptops.
I did, for a good period of time, recently try out the Asus ROG 14-inch laptop that has a Ryzen
CPU and an AMD GPU and also an NVIDIA RTX in it. That's quite the setup. And I have AMD graphics
cards in my workstations upstairs and in my machine at home.
So I do have some exposure to it, and I actually generally find they work fantastic.
In fact, it's sort of my baseline.
Any future system I buy at this point, although perhaps today's news changes things,
but any future system I buy is going to have AMD graphics.
I just don't really have an opportunity to buy a lot of Ryzen systems,
but I always keep an eye out, and if there is a system we can get in
and do compatibility testing and whatnot, I will jump at that opportunity.
And then in the meantime, I guess we just have to wait for West
to build his huge Ryzen box.
Yeah, you know, maybe as the prices team down,
eventually I have been wanting to try that out because, right,
I mean, AMD GPU, we've got this great open source story,
but I think in that it's still kind of early.
There's a lot evolving, and of course, as Joe points out,
it evolves with the kernel versions too.
So I'd be kind of curious to know a little more about Joe's setup,
if there were any tweaks that he did.
And thank you, Joe, for filing those bugs.
We need those.
I'd also be curious if anyone else out there has either AMD issues
or AMD success stories to share.
Neil, it sounds like you might have a note about AMD issues.
It sounded like the person in question had AMD Ryzen 5000 series, which is, wow, you're really lucky.
Yep.
Because it's been pretty hard to get those.
But it's actually been so hard that none of the distros had access to it.
Red Hat Desktop Engineering didn't have it.
SUSE Engineering didn't have it. Nobody had it. Nobody for Ryzen 5000, nobody for Radeon RX 6000.
Consequently, the drivers are in much worse shape than they would be normally because they did not
get driver engineering sign-off first. It was a bit of a botched release compared to the previous generations of Ryzen and Radeon
for Linux, where AMD typically sends out hardware samples and offers engineering resources to
Red Hat and SUSE to make sure that these are all in good shape upstream before they're pulled into
the respective distributions. That didn't happen this last cycle. So the latest
hardware has a lot more teething problems, which is probably going to afflict everyone
for a few more months. I think that's going to kind of wind down and stop being an issue
with the kernels coming out towards the end of this year. I mean, I've certainly heard from other
folks in the Fedora community that things have started settling down with the latest kernels that we pushed out. So take it for what it's worth, but sorry, like blame AMD for not
actually following the normal process this last cycle. I don't know what their reasoning was,
but it wasn't good. Good to know. Thank you, Neil. Moving right along. Austin wrote in,
Hey guys, love the show and all the work you do.
Day to day, I work in a Windows environment,
but I am able to use Linux from time to time for certain tasks.
One task is safely exploring phishing and malicious links that hit our company.
Now, to safely do this, I've created a Docker container,
installed Firefox, and pasted the links into this
Docker Firefox browser. What are your thoughts on this type of security containerization? And am I
actually as safe as I think I am by doing it this way? Have you ever tried something like this before?
Thanks in advance. You're probably fine here, Austin. I am curious, though, this seems like
by default the job for a VM here.
I mean, then you really know you've got true isolation
there. Yeah, I think that's the way I would do
it too. I mean, there's, you know, container security
has come a long way, but I
also think the state of the art means you need to
be a little more involved, you know, using some more
tools. Do you have stuff like SELinux
or AppArmor? Do you have SecComp style
filtering enabled?
Just using a virtual machine,
I think, gets you
in a lot better default place.
And it gives you more easy options
for trying different kernels
and operating systems
if you need to get
into that level of detail.
Or maybe you want to spin up
some Windows VMs
to see how the links
perform over there.
Hmm.
That is a great point,
is if you're using virtual machines,
you could just go destroy
a couple of Windows installs
and find out what happens
and document it.
Yeah, and you feel better afterwards.
It's like bashing a printer.
Just make sure you don't connect it to your main network.
Maybe have it on its own private network once you get the malware package downloaded.
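If you go the VM route, a hedged sketch with libvirt; the ISO path and network name are placeholders, and pointing the NIC at an isolated network keeps anything that detonates off your LAN:

    # assumes an isolated libvirt network named 'isolated' has been defined
    virt-install --name burner --memory 4096 --vcpus 2 \
      --disk size=20 --cdrom /isos/disposable.iso \
      --network network=isolated --graphics spice
    # when you're done, tear it down, disk and all
    virsh destroy burner && virsh undefine burner --remove-all-storage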
I think you're probably fine, but Wes makes a good point.
What Wes is really saying here is for this type of setup, Austin, it's not just one thing.
You know, it's not just you're using Firefox in a container.
In that case, it's going to be you're actually securing the entire system.
You're taking a holistic approach to security on that box, including some sort of mandatory access control.
And you're doing proper permissions provisioning.
And you're not running things like the Docker instances as root.
And you're doing things at a higher level to protect the overall system.
You're likely going to be totally fine in that scenario.
I think you can feel a lot more confident
if you used virtualization.
And I'm just going to mention,
this is maybe a great opportunity
to play around
with something like Qubes
and see if Qubes
could help even more.
It could be like a tool set
for doing this kind of testing.
Maybe not.
It might be too much,
but I definitely recommend
it's a great Linux distribution.
Check out Qubes
to see if maybe that could
take your game to the next level.
And in the meantime, keep doing what you're doing,
but just make sure you're secure all around.
I think there's probably not nearly the kind of risk as, say,
installing Windows on a physical host and using that as like a burner,
because my main concern there would be something jumps to the network, you know,
and that could get dangerous.
Exactly.
Yeah.
Regardless of how you do this, you're going to want to isolate it.
Yep.
Yep.
Very good.
Very good.
If you'd like to send us an email, we'd love it.
Linuxunplugged.com slash contact.
You could also do the Twitter thing if that's your jam, at Linux Unplugged.
The network is at Jupiter Signal, and there's an entire network of fantastic podcasts, because there's a lot more over at jupiterbroadcasting.com. That Self-Hosted podcast is sneaking up on its 50th episode,
which is a nice milestone, you know, nice and round milestone. I'm trying to think maybe if
I want to do a swag item. I haven't really decided yet. So I may do that. So check out
the self-hosted podcast if you haven't in a while, that's a great podcast. And of course,
if you're not getting your Linux action news fix,
then you're missing out on the things going on in the open source world.
So check out linuxactionnews.com.
And then we'd love to have you back here on a Tuesday.
If you'd like to join us for a live show,
we do it every Tuesday at noon Pacific,
3 p.m.
Eastern.
See you next week.
Same bat time.
Same bat station.
Go say hi to Wes Payne.
He's on Twitter, at Wes Payne.
Done right.
I'm on there.
It's true.
At ChrisLAS.
At Jupiter Signal is the whole network,
and the podcast is at Linux Unplugged.
Links to everything we talked about today are in the show notes
at linuxunplugged.com slash 411.
And check out our meetups.
We want to know if you're going to meet up with us.
It's hot right now, but picture a nice, cool beverage
hanging out with your Linux buddies.
Go to meetup.com slash jupiterbroadcasting.
Thanks so much for joining us on this week's episode
of Your Unplugged Program.
And we will see you right back here next Tuesday!
jbtitles.com, let's go vote.
By the way, I know you wanted to mention something on the GPU discussion, but we had to move on.
So I want to just give you a chance to follow up before we wrap up.
Yeah. So maybe you heard about China cracking down on the Bitcoin farms and other crypto farms.
Yeah.
So what will happen when all those farms shut down and dump their cards in the market?
I was wondering that too. Are we going to see like the prices drop on the used market? Because if those prices drop enough, even if they're used,
the scalpers also need to drop their prices and eventually even shops. And also the,
I saw an AMD card already drop prices. It was also mentioned in the news that
some cards are dropping prices. Oh, that is that is really good news.
My only concern there would be that the miners in China were using ASICs or dedicated.
But I'm sure there's some GPU usage.
I'm ready for those prices to drop.
The fact that the Yakuza was smuggling GPUs says enough.
Oh, my God.
Really?
The Yakuza?
Some of their shipments were caught with boats.
Wow.
Turns out they're big gamers over there.