LINUX Unplugged - 639: The Mess Machine
Episode Date: November 3, 2025

After all the AI hype is over, one change for Linux will be sticking around; we put it to the test.

Sponsored By:
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
CrowdHealth: Discover a Better Way to Pay for Healthcare with Crowdfunded Memberships. Join CrowdHealth to get started today for $99 for your first three months using UNPLUGGED.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.

Support LINUX Unplugged

Links:
💥 Gets Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
SeaGL 2025 — November 7-8, 2025 in Seattle, WA
LinuxFest Northwest
LinuxFest Northwest 2026: Call for Speakers
SCALE 23x - Southern California Linux Expo — North America's largest community-run open source conference. Pasadena, CA - March 5-8, 2026
SCALE - Call for Presentations
Planet Nix '26 — March 5th-6th, 2026 @ Pasadena, CA
Planet Nix 2026: Call for Proposals
LINUX Unplugged 481: Just a Prompt Away — The Internet is going crazy with AI-generated media. What's the open-source story, and is Linux being left out?
Red Hat eyes developer workflow efficiency, app modernization gains with new AI tools — An AI assistant specifically designed for application migration and modernization looks to reduce developer toil
SUSE Linux Enterprise 16 Announced: "Enterprise Linux That Integrates Agentic AI" — SUSE's announcement today for SUSE Linux Enterprise 16 proclaims SLES 16 to be "the industry's first enterprise Linux that integrates agentic AI" and "reduces operational costs and complexity through AI readiness."
openSUSE on Hugging Face
ChrisLAS/hyprvibe — A riced up Hyprland desktop running on top of NixOS.
Pull requests - ChrisLAS/hyprvibe
feat: add comprehensive sops-nix integration by shift · PR #15 · ChrisLAS/hyprvibe
Add options for username, group, home directory by samh · PR #8 · ChrisLAS/hyprvibe
Create variables for user and home dir by F-bit818 · PR #5 · ChrisLAS/hyprvibe
LM Studio - Local AI on your computer
Zed — The editor for what's next
Cursor: The best way to code with AI
Nixbook — Convert your old computer to a user friendly, lightweight, durable, and auto updating operating system built on top of NixOS.
nixbook/base.nix — Roast my Nix: Not-a-flake edition!
Jupiter Broadcasting's Colony Events
Pick: duf — Disk Usage/Free Utility - a better 'df' alternative.
Pick: cheat.sh — the only cheat sheet you need.
Transcript
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen.
Coming up on the show today, you know, after all the AI hype is finally over,
I think there will be one real game changer that stands and sticks around.
and it impacts Linux.
We're going to tell you what it is
and we're actually going to test it out
and see where it's at today.
And then we'll round it out
with some great picks,
some boosts, some shoutouts,
and a lot more.
So before we get to all of that,
we've got to do the right thing
and say time-appropriate greetings
to our virtual lug.
Hello, mumble room.
Hello.
Hey, Chris, hey, listen.
Hello, hello.
Hello.
Thank you for being in there.
We'd love to have you join us.
You can join us on a Sunday morning
or whatever it is in your time.
jupiterbroadcasting.com slash mumble
for that. And then, of course, jblive.fm for the stream. It's a vibe. That's what I always say.
You don't think so? No, I do think so. I'm here for the vibe, you know? So live, you know, it's
got a different feel. Uh, you never know what we're going to say, and we don't either. You never know
what Neal's going to say, you know? Hi. Also, good morning to our friends at defined.net slash unplugged.
Go meet Managed Nebula from Defined Networking.
It's a decentralized VPN that's built on a true open source platform that you completely self-host and own yourself, or you can lean into their managed system, which makes it really easy.
You go to defined.net slash unplugged.
You can try it on 100 devices for free.
So unlike traditional VPNs, Nebula is decentralized, and that means there's a certain resiliency you can choose to build into this.
And there's also a community of resources to help you make it more resilient.
So this is great for a home lab, of course.
This is also fantastic for a global enterprise.
Best in-class encryption.
Fantastic community.
And battle-tested.
You know, with history being my teacher and all,
when it comes to something as foundational
as how I network everything,
I want to own that stack,
and I want to own that stack end-to-end,
because when I'm building something
for myself and my wife and my kids to use,
I'm trying to take five-year views, or longer if I can. And I'm not digging on anybody. It's just a fact. It's when you
tap venture capital over and over again, it gets mixed into the tools that you rely on.
And sometimes your core infrastructure tools go different directions over time because the
priorities start to shift as they tap more and more VC money. And when I thought about it long term,
when I think about what I want to be running and what I want to build on years from now for both
J.B. and my personal infrastructure, I want a project that's truly built around the value of
ownership. It's a big deal when you think about it long term. And there are a lot of options
out there. And some of them have like self-hosting options kind of begrudgingly. Like, you know,
there's like different variations of it. Nobody does it like Nebula. And so, you know,
you can make the mistake of like, like I did. Like I made a rookie mistake of linking a big tech
login to my VPN provider. And like I don't like that at all. And now in retrospect, like years
later, I wish I hadn't done that. Like that type of stuff is what I'm talking
about. Nothing does it like Nebula and nothing has Nebula's level of resilience,
speed, and scalability. Go get started with 100 hosts absolutely free. No credit card required.
Go to Defined.net slash unplugged.
Well, we got a wee bit of housekeeping this week because some events are coming up that
we'd like you to know about. First is the 13th edition of SeaGL, and that's happening next week,
8 a.m. to 6 p.m. Friday and Saturday, November 7th and 8th, at the University
of Washington.
What do you know about it, Wes?
Well, it's our local Seattle
Linux Fest, you know?
Yeah, downtown.
Well, it has been kind of downtown
or Capitol Hill area,
but now it's over at the University of Washington,
which is kind of its own little corner.
Okay, that could be nice.
Yeah.
So there should be lots of stuff nearby.
I think they've got some, like,
meetups and stuff going on after the fest.
There's like a tea swap thing.
If you want to get some interesting teas,
that sounds like it's a pretty good time.
Is that, like, gossip, or is that the kind you drink?
Like, I think the kind you drink.
Oh, okay.
Yeah, it's called TeaGL.
That's like a little subcon.
That's so Seattle.
That's great.
All right.
Okay, that's good to know.
So seagl.org slash schedule.
We'll have a link in the show notes.
Over 50 sessions, or somewhere around 50 sessions, are going to be there.
Yeah, it looks like some good stuff, dev stuff, devop stuff, of course Linux stuff,
and then just general, you know, like community and open culture.
And then probably should be on people's radar.
LinuxFest Northwest, SCALE, Planet Nix '26, all putting out their calls for papers.
Yeah, and they all close sooner than you'd think.
So if you're interested in contributing, talking, speaking at any of those kind of events or volunteering maybe too, but you might consider taking a peek.
Yeah, we want to let you know now because what's going to happen to you is the holidays are going to hit you right in the face and you're not going to think about the call for papers and then the events coming around.
And, you know, if you can get in, as in if you can get a talk accepted, a lot of employers will pay to get you there to, you know, do your talk and maybe promote the thing that you do a little bit.
It's totally worth it.
So SCALE, Planet Nix '26, and LinuxFest Northwest all have their calls for papers open, and SeaGL is next weekend.
And we hope to see you there.
I have a couple of other things to get to before we start the show.
We have decided we are going to do another config confessions.
So please send your configs in, either boost with a link or go to our contact page and add a link.
And we'll start collecting those.
We already have some in the bag.
If you already sent them in, we still have those, most of those at least.
Depends if Brent lost them.
and we'll get to config confessions very soon.
So send those in.
And a great way to support the show would be send us a link with a boost and, you know, two birds and all that.
Also, while you've got that boost button hot, I got something I have to just level set with you guys.
I'm having a hard time understanding something.
And I really want to take the temperature here.
When we first started talking about what everybody now calls AI, on Linux Unplugged, it was years ago.
and we really referred to it at the time as machine learning
because the tools that we were really looking at
were machine learning tools.
And in the context of Linux,
it really wasn't a huge topic,
seemingly didn't deserve its own episode.
And then three years ago,
October of 2022,
we finally did a dedicated episode on the topic.
And it was all about AI generation.
Episode 481,
the internet is going crazy with AI generated media.
what's the open source story and is Linux being left out?
And that's when we dedicated an episode to this thing called stable diffusion,
where people could generate images.
And we talked about the morality of it.
Three years ago, we talked about the power use issues in that episode.
And of course, we talked about the Linux story around these tools.
And we even stood up a live instance of a web version of stable diffusion
and unleashed it on the live stream and let them crank out images on our VPS.
Do you remember that?
Yeah, that was a lot of fun.
It was three years ago.
So back then, it just didn't have the hype around the topic.
And it wasn't as charged as it is now.
And you didn't see the counter reaction to the hype that you see now.
And I cannot really, to be honest with everyone,
wrap my noodle around how controversial every little new technology is these days.
From programming languages to technology platforms to AI,
it's just, it's remarkable.
And I'll explain my position on AI next.
week if people are interested, but I wanted to take the colony's temperature on where you are at
with AI. Do you hate it? Why? Explain yourself. Are you ambivalent? Explain yourself. Are you excited?
I want to know why. So help me, help us wrap our noodle around this because as people that have
been talking about this since like 2021-ish, these were just tools and then all of a sudden they got
really heated. And I'd like to know where you're at on this stuff. Because the premise of this episode
this week is despite how much you hate it there's at the end of this what we might call an
AI bubble or AI hype session there's no doubt going to be a few tools that remain standing
you know that's where we're we're going to find what worked and didn't work what was hype
what was silly what what LLMs were horrible at and what LLMs were great at and that might be
a little while from now but the people that hate this stuff have got to realize it's not going away
Some of this stuff's going to stick around.
And what I've realized recently, and I'll get to why soon,
is that Linux in particular is going to be one of the most affected areas.
And you start to see hints of it this week when both Red Hat and SUSE
made announcements around their enterprise-grade distros.
So Red Hat has announced an AI assistant designed for application migration
and modernization tools on the RHEL platform.
They said the launch of the Red Hat developer
Lightspeed Platform, a portfolio of AI
solutions will equip developer teams with, quote,
intelligent context-aware assistance through virtual
assistants. The company said this will help speed up
non-coding-related tasks, including
development of test plans, troubleshooting applications,
and creating documentation.
And then even more recently,
SUSE says, with the release of Enterprise 6,
it is the enterprise Linux that, quote, integrates agentic AI.
Sorry, I can't help not laugh a little.
I know.
I know.
I agree.
Go ahead, Brent.
A release of Enterprise 16.
You mentioned Enterprise 6, but I think we've moved a little further, to 16.
You did.
You did.
See, AI can solve that for you, Chris.
Well, I just, I'm reading with the LLMs, and I'm kidding.
I want to, yeah, I agree with Wes and probably a lot of you listening, like, this stuff that integrates agentic AI.
it's it's okay anyways they say quote this is the industry's first enterprise Linux that
integrates agentic AI and reduces operational costs and complexity through AI readiness so this
is the phase we're in jargon heavy hype heavy got a got to slap AI on your product in order
to sell it do you think it's a coincidence that these two announcements were made so close to
each other no that's no coincidence at all that was very intentional it is very much each
position trying to jockey, or each company trying to jockey their
position in the AI race. And all of that is exhausting and all of that is tiring and all of it
seems unsustainable. And that's all true. That is all true. But what we want to talk about today,
I think, is the stuff that will remain after all of this passes. So for those of you who've been
listening to the show for a while, you know that I have been rolling my own distro that I call Hyprvibe
for a while. I'm just at like the three month mark now, if you can believe it. On average, I've
made two or three notable changes per week as I've used it. Some weeks more, some weeks less
kind of, you know, averages out. And all of this has been done as an experiment using an
LLM. Every little tiny change. And I started as a joke because I thought it was going to be a
total disaster. And I somehow walked away with a working system. And then having refined this more,
as I actually start to use it on my day-to-day to get job work done
and start taking it seriously instead of as a joke.
Actually, depending on it to be your desktop.
I've realized that LLMs are good at text.
And everything I'm doing is a configuration file.
And between the ability for the LLM to do web searches
and to write configuration files
and understand simple YAML or config files,
this is an area it's actually particularly strong at.
And I think we will see a future where if you don't take advantage of these tools,
you're not going to have your job replaced by AI,
but you will have your job replaced by people that are taking advantage of these tools.
Last week, I was having an issue with my network card dropping off the network,
and especially bad in the morning for some reason.
I don't know what that's about.
I'd come in the morning and they'd just have issues for a couple of hours every few minutes.
Looks like my NIC would just drop off the network.
And one of the ways I'd have to fix it is unplug and plug it back in.
And then it would come back to life and it would start working again.
And it was a morning before the launch.
I was rushed to get to the show live because we do it, we do a little bit earlier.
Or no, we do a little bit later, but it's just a more complicated show.
So it feels like it comes earlier.
And I needed to fix this issue because I had to get my job done.
And so while I was going about prepping the show, getting voicemails, doing all that stuff,
I opened up another application
and I just had a prompt I said
I want you to review my logs
look at my network driver
and the recent Linux kernel releases
and figure out
why my network card is dropping
and then I completely forgot about it
and when I came back
it had figured out what it was
there was a change
upstream in the Linux kernel
and now I need to make an adjustment
to my power settings
for my particular Intel NIC
and it did all of that
and figured all of that out
while I was going about my other work.
And I came back and it's like,
okay, when you reboot, the fix is in
and you're good to go.
As it been?
Yeah.
That's great.
And it went through,
it read the log files,
it read the kernel change log.
It did all of that work.
And then it made that change to my system.
And because it's just changing the output
in my Nix config,
and it's apparently very adept at Nix,
it handles it just fine.
And you could extrapolate that out
to a network engineer
who needs to make a modification
so every machine uses a new IP for something, right?
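In a NixOS setup like the one described, that kind of driver power-management fix often lands as a small tweak in the config. This is only an illustrative sketch, not the actual change from the episode; the interface name (enp3s0) and the specific ethtool flag are assumptions:

```nix
# Hypothetical sketch: disable Energy-Efficient Ethernet on an Intel NIC
# from a NixOS configuration. Interface name and flags are examples only.
{ pkgs, ... }:
{
  systemd.services.nic-powersave-off = {
    description = "Disable EEE power saving on the wired NIC";
    wantedBy = [ "multi-user.target" ];
    after = [ "network.target" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.ethtool}/bin/ethtool --set-eee enp3s0 eee off";
    };
  };
}
```

Because it's declared in the config, a rebuild applies it and a rollback removes it, which is exactly the property that makes this kind of LLM-driven change relatively safe to try.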
Yeah, I mean, if you try to, like, you know,
read through some of the buzzwords in the SUSE stuff, right,
they're making an MCP server,
which is basically a JSON API to interface with pieces of the OS,
including, I guess, some more stuff on top of cockpit
to help you do, you know, LLM-enabled changes and updates
or check in on what your system's doing.
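For context, MCP servers speak JSON-RPC 2.0, and the tools/call method is from the MCP spec; the tool name and arguments below are invented for illustration, not anything SUSE has published:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_failed_services",
    "arguments": { "scope": "system" }
  }
}
```

The server replies with a structured result the LLM can read, which is how "check in on what your system's doing" becomes something a model can actually do.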
And just as an aside, doing all of this,
it's giving me an appreciation for how much edge case-solving distro builders
have to deal with and maintainers.
Like, it's constantly like things change upstream.
Sometimes it's just the name of a package,
but you have to go through and constantly make those little tiny changes.
Yeah, the integration is real work, right?
Not for me.
I just have the LLM do it.
I do.
I just have it check it, and then it goes through,
oh, yeah, these things have changed upstream.
Okay, I'll go through.
I'll make sure I change these for you,
and then it's ready to go for the next build.
It makes maintaining my own distribution possible.
And I thought, okay, let's put this to the test.
And so that was one of the things we wanted to do this week,
is we wanted to put it to the test and see how far we could push this
and kind of figure out where it breaks and really how realistic is this now.
But I think this isn't going to work for every Linux.
This isn't going to work for every problem.
It's going to work specifically well for like Ansible, for Nix,
those types of things.
Declarative systems, it already is here.
It's here.
Yeah, especially where it's like a lot of the same common repeated patterns
that maybe just need to be squeezed into particular context.
It is fair to say that, you know, we're at a stage where it doesn't always produce the cleanest system, you know?
Yeah, that is for sure.
There's sometimes things that are a little hacky and a mess.
I mean, it works, but it might not be how you would do it.
And it often works kind of the best for things like that where, like, you care more about and are testing sort of from a black box perspective where you're just like, okay, I can verify the things that I need do work.
I don't care that much about exactly how it's doing it or, you know, or exactly how, if it did it the way I would do it.
And I think the other thing to acknowledge at this point is it can be very expensive if you're doing a lot of this, it depending on how you pay for it.
After we spent hours, we spent all day on a call yesterday working this out, which we're about to get into.
And after we got off the call, I wanted to see how feasible it would be to just do it with a local LLM.
And there's a lot of ways to crack that nut.
But to really make it simple that anybody could do is I went and I downloaded the app image.
of LM Studio.
Oh, nice, yeah.
Classic, right,
which has Hugging Face integrated.
And it supports downloading a,
well, a bunch of different models, obviously,
but one of them it supports downloading is DeepSeek.
And also, like, the Quinn ones,
which are like 8 billion parameters,
you need a lot of parameters.
And you can just download them,
and then in the system tray icon,
you can just enable the server.
And then you go over to something like Zed,
Zed editor, which is great.
And one of the options is just a local connection
to LM Studio.
Zed just automatically connects
to LM Studio and all of the
prompting is done locally
with whatever model you've loaded in LM Studio
be it DeepSeek or Qwen or whatever it is
and it's slow on my system
but it's actually using the AMD Vulkan
acceleration so it's
if you're not in a rush it's usable
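If you want to wire that up yourself, the relevant bit of Zed's settings looks roughly like this; the key names are from recent Zed versions and worth double-checking against Zed's documentation, and the port is LM Studio's default local-server port:

```json
{
  "language_models": {
    "lm_studio": {
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}
```

With LM Studio's server enabled from its tray icon, Zed can then list whatever models you've loaded as local providers.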
so after we got off the call I had a couple
things to fix which I'll talk about more later
I just went and had dinner
while I was doing stuff and came back and checked on like a half hour
later and it was done
So you can absolutely do this depending on the system with local tools, but there is still a quality gap.
Like the big stuff, the big models hosted by your chat chippities or your clods are still superior.
They're still faster.
And they're very expensive.
So these are real limitations.
Right now it works, but it'll build a system maybe the way you wouldn't if you're not really on top of it.
And if you don't know to catch something, there was something you brought up yesterday, West.
So it was like, oh, I'm really glad you said that because I wouldn't have caught it and the AI wouldn't have caught it.
Maybe it was the bit about like all the extra work it was doing with a Rofi set up.
Yeah, it was something like that was something like.
And it introduced like basically an abstraction to deal with having to use multiple Rofi packages that just no longer was needed whatsoever.
Yeah, it was a solution to a problem that no longer existed and it hadn't caught that.
That's a great example.
And so there's still that level of human engagement.
But we learned some lessons with this.
And it's, it is, it is in my mind.
sticking around.
And I don't know what this is going to be called.
This is one of these things I always take a lot of crap for.
It's not obvious to everyone two to three years out.
But as we get two or three years down this road, it's going to be so obvious.
And it's going to be called something like prompt ops or DevOps prompts or Linux by prompt.
Really?
Like, you know, like you're going to have a Linux machine as a system administrator and you're just, you're going to have something on the command line that you work with.
If you go over to GitHub.com and you look at the.
CLI tag. And you look at the most popular projects in the CLI category. Every other project,
practically, is some sort of LLM tool on the command line. I mean, even back at Summit,
right? Red Hat came out with Lightspeed and their little c tool for the command line.
Yeah. That kind of stuff is going to be the norm. It's just going to be one of the tools on your,
you know, like you have some stuff that can suggest commands. There's going to be LLM tools that do that.
And there's just going to be a name for it, like DevOps, became a thing. I mean, that's already one of the
uses just especially that, like throwaway scripts, quick little things to clean up files or
like do a particular manipulation that it's maybe not worth me scripting, but like a script
would be better than me doing it manually. I love that. And I actually think there is a future
where small local micro-LLMs or whatever you want to call them are actually going to be better
at this particular stuff that we're about to talk about than the chat chippities or the clods.
there will be in the future models that you can run in your text editor
that will be possible on a desktop machine
that is just super focused on config files or PHP or whatever it is that you do.
And that's its whole world.
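As an example of the throwaway-script use case mentioned a moment ago, here is the kind of one-off an LLM can generate in seconds. It's a hypothetical sample, not something from the episode: it lowercases .JPG extensions in a scratch directory.

```shell
#!/usr/bin/env bash
# Throwaway cleanup: normalize .JPG extensions to .jpg in one directory.
set -euo pipefail
shopt -s nullglob

dir=/tmp/jpg-cleanup-demo
mkdir -p "$dir"
touch "$dir/photo1.JPG" "$dir/photo2.JPG" "$dir/notes.txt"

# Rename every .JPG file; leave everything else alone.
for f in "$dir"/*.JPG; do
  mv -- "$f" "${f%.JPG}.jpg"
done

ls "$dir"
```

Nothing here is worth hand-crafting a reusable tool for, which is exactly the point: a script beats doing it manually, and the LLM makes the script nearly free.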
That's kind of some of what Red Hat is offering
with some of this new Developer Lightspeed stuff.
Yeah, yeah.
Like they have stuff targeted at enterprise migrations of legacy software
or migrate into containers and stuff like that.
But it's, you know, you have models you can self-host
that they've figured out optimized for these particular types of problems.
And right now, man, if you get LM Studio and you load deepseek or whatever the hell you want
and you open up Zed, it's all happening locally on your machine.
It's all completely self-hosted with just two desktop applications.
You didn't have to start a single container or anything.
And it's got a nice GUI to help you find the new models.
You don't even have to know what the hell you're doing.
And it's approachable right the F now.
And so this is only going to accelerate.
you're seeing Red Hat and SUSE lean into it,
it is going to be so obnoxious for a while.
And I really sympathize for those of you
that are so sick and tired of this stuff.
But I'm trying to give you your medicine
with a little bit of sugar this episode
because what we're about to talk about
ain't ever going away.
1Password.com slash unplugged.
That's the number one, password.com,
and then lowercase unplugged.
Take the first steps to better security
for your team by securing credentials
and protecting every application,
even the unmanaged ones you didn't know about.
There's more to secure than just passwords.
Managed and unmanaged SaaS applications, for instance,
are a huge issue these days.
That's where Trelica, by 1Password,
secures your apps without leaving your employees behind,
without creating that odd and difficult tension
between IT and your end users.
And if you are an IT professional,
or if you're in IT security,
you know about the mountain of assets that's growing
all the time and the sprawling applications that are out there as a service that users are signing
up for all the time. It's a big problem. That's where Trelica will help you to discover and
secure these applications. It'll find out where you have redundancies, where maybe you could
cut back on spend, and Trelica by 1Password has pre-populated app profiles that'll assess the
SaaS risks, let you have a better understanding of really what you're dealing with, and like I say,
optimize that spend, while you're enforcing best practices across
every app. It really is truly the missing piece, something that I have struggled with when I was
in IT because there wasn't anything like this on the radar. When 1Password came along,
getting better password hygiene seemed like a huge leap forward. And of course, you know about
their award-winning password manager, but they're securing a lot more than just passwords now.
So check out 1Password Extended Access Management. You get started by going to 1password.com
slash unplugged. That's where you'll find out more. You'll learn if your employees are bypassing your
practices to use unapproved apps, how you can get your hands around that, and how you can get
one dashboard to manage it all. So take a look at 1password.com slash unplugged. Take the first
steps to better security for your team by securing credentials and protecting every application,
even the unmanaged shadow IT. Go right now, 1password.com slash unplugged. Support the show and learn
more. That's 1password.com slash unplugged, all lowercase. 1password.com slash unplugged.
Well, your three hosts yesterday got on a call because, well, I had an inkling at least that Chris's dear Hyprvibe needed some love on the back end.
We had a couple listeners say, hey, I think things could be better.
We had some PRs come in, and we figured we should have a look at this.
And, Wes, what do we find?
Well, first off, as usual, with our community, we found, like, just incredible pull requests and issues and just a lot of wonderful
engagement, including someone had
like sopsified it with sops-nix and
written just an incredible read-me about
how to use it and work with it. So we'll have to look
more in that for sure. But
you know, Hyprvibe has done well.
It's obviously been working well from Chris's
experience reports here, but
it's really kind of, it started as a one
system thing that got extended to a two system
thing, right? RVB and Nix station
are the names here. Yep. And then at one
point earlier we did try kind of a first round
of this where we were like, well, you have a lot of shared
functionality between these two configs that you
probably don't, including a whole bunch of janky activation script stuff that is sort of...
A bunch of crappy bash.
Yes.
I'll own that.
Yeah, I guess you vibed it into existence to not use Home Manager.
I did not want to use Home Manager.
I will own that.
Yep.
And so each one had their own version of that was pretty much identical, right?
So it was like, took an early stab at trying to refactor some of that into actual modules
that you could then just use in your host configs.
Like I was doing a backwards approach of taking two Linux boxes that were totally separate
system set up years apart from each other with very different display setups. And I wanted to
unify them into one experience. You know, try to create one ultimate Linux desktop for myself that I
use across all my machines. So I essentially tried to backwards integrate them. And I got to a
working point for like those two systems. But it was not in a very good state if you wanted to
onboard a new system. Yeah. Which eventually I do and I wanted you to try it too. It also meant like
if anyone else wanted to use it, right? There was a lot of work. You had to figure out
and, like, copy your structure,
not really doing it the Nix-native way at all.
And, or, I mean, not the, like, easiest way anyway.
And to boot, right, we talked about the problems of the stuff that, you know,
we wanted to refactor.
But there's also stuff we needed to make more variable just in that, like,
you had hard-coded in a few things, including your username.
You know, the wonderful Chris F.
Yeah, I mean, I thought everybody would just run as Chris F,
but it turned out they didn't want to do that.
Yeah.
So we needed to expose stuff there.
And there were a couple poll requests already to do that.
So people had taken a few stabs at basically using the module system,
and exposing those as configuration options for a hypervib module
that would say, like, what user are you using?
Because we need to, like, put that in some of the scripts
and plan for that.
Yeah, so it's something you could add to an existing system.
It'd be a hypervide module that you could add to a Nixbox that already exists,
and then you could define run as this user.
Run as West instead of Chris F.
The goal was, like, you could import this module from Chris
and then enable it and set whatever required options
like what your local username you're going to use it with is and just have it work.
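In NixOS module terms, the shape being described looks roughly like this. It's a hedged sketch: the option names and the "chrisf" default being replaced are illustrative, not necessarily what landed in the actual pull requests:

```nix
# Illustrative sketch of the module interface discussed above: Hyprvibe as
# an importable NixOS module, with the username exposed as an option
# instead of being hard-coded. Option names are made up for illustration.
{ config, lib, ... }:
let
  cfg = config.hyprvibe;
in
{
  options.hyprvibe = {
    enable = lib.mkEnableOption "the Hyprvibe desktop";
    user = lib.mkOption {
      type = lib.types.str;
      example = "wes";
      description = "Local username the desktop is configured for.";
    };
  };

  config = lib.mkIf cfg.enable {
    # Anything that used to hard-code a username now references cfg.user.
    users.users.${cfg.user}.extraGroups = [ "video" "input" ];
  };
}
```

A host config could then import this module, set hyprvibe.enable = true and hyprvibe.user = "wes", and everything downstream would pick up the right username.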
But to get there, we had the problem that in our first attempt, well, one, we didn't have
the, you know, optionality on username at all.
That was hard-coded.
But then in our attempt to simplify things before, it kind of pulled some stuff out into shared
modules, but it put it under a shared namespace instead of a hyprvibe namespace.
So we're going to have to replumb.
Yeah.
That's more of like a find-and-replace almost, so stuff that it should be pretty decent at.
It's not, at first it wasn't obvious to me.
What should be machine-specific configurations and what do I want on all my systems?
Right. And that's where either you have to put your judgment or rely on its quote-unquote judgment.
And it's something I hadn't thought a lot about. So as an example of if you don't think of it, it won't necessarily.
It doesn't mean you can't go back and refactor, which is what we decided to do, is essentially re-architect the way this thing is completely done.
in order to make it possible for, say, somebody like West to run it on their system,
we had to extract out machine-specific stuff and try to put the overall hyper-vibe experience
of Hyprland, the chosen applications, the theming, the Waybar, the performance optimizations,
the Zen kernel.
We wanted all of that to be something that would be a shared thing across all systems.
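In rough Nix module terms, the split being described might look something like this (an illustrative sketch; the option names and file layout are guesses, not necessarily what the hyprvibe repo actually uses):

```nix
# modules/hyprvibe/default.nix (hypothetical path)
{ config, lib, pkgs, ... }:
let
  cfg = config.hyprvibe;
in
{
  options.hyprvibe = {
    enable = lib.mkEnableOption "the Hyprvibe desktop experience";
    user = lib.mkOption {
      type = lib.types.str;
      description = "Local username the Hyprvibe config applies to.";
    };
  };

  config = lib.mkIf cfg.enable {
    # Shared pieces mentioned above: Hyprland, the Zen kernel, etc.
    programs.hyprland.enable = true;
    boot.kernelPackages = pkgs.linuxPackages_zen;
    # Theming, Waybar, and performance tuning would live here too,
    # referencing cfg.user instead of a hard-coded name.
  };
}
```

Anything host-specific then stays out of this module and gets set per machine.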
Yeah, and that's where it's you as like the crafter here is trying to, like,
you've got to figure out enough of what your vision is if you're going to have it be able to be executed
by something else. Part of the problem, too, was the first attempt had pulled some stuff
out, not the way we wanted, but it hadn't necessarily always de-duped that, right? It didn't
always update all the configs to use that new functionality. So then there was now, instead of
de-duping, it just duped it again. So we had even more cleanup to start with, really.
Yeah, that is true. And that's just the kind of stuff you got to, you know, watch for if you're
going to be using some of these tools sometimes. The goal that we wanted to pull off was to
task the machine to re-implement this into a way that could be shared and not break the existing system while doing so.
Because we were basically switching, you were redefining and picking an interface that would be Hyprvibe,
and you needed your two existing machines to still use that and not use their old hard-coded previous version.
Yeah. That's kind of a tall ask.
I mean, that is not managing a simple NixOS config for one workstation.
That is a little more abstract.
And you're really trying to change the plumbing of how this entire thing is built while it's in use.
And the more you realize, oh, I have to solve for this edge case.
Oh, I have to solve for this edge case because these machines have this resolution and these machines allow this resolution.
It just keeps increasing the scope.
And as we were doing it, I know, Brent, you were like, I don't think this is going to work.
This is, we're getting too far out there.
Oh, I was skeptical.
I mean, just the expressions that you had watching.
some of its progress go by.
And you're the quote unquote expert at this particular, let's call it an operating system, right?
And you're like, I don't know if we should be doing that, but I'll say okay.
Yeah, and you were freaking out, like, the higher the number of files touched got.
You're like, well, why is it touching that one?
Why that one?
And all level-headed Wes was like, no, no, it's okay.
It'll be okay.
We'll just fix it later.
It was getting more and more complicated.
And I thought the more complicated this gets, it feels like the more little side edge
pieces it won't think of.
Can you talk a little bit about the setup that we had to make all of this work?
What did you use to actually, like, make these modifications?
I know it seemed like a pretty cool setup from what I was looking at, but give us the nitty-gritty.
You know, there is a thousand ways to do this.
So don't take this as the blessed path, but this is what's worked for me is I got the
Cursor application and I connected it to GPT-5.
And then later I connected it to a local LLM after we got off the call.
But that's a whole other process and it was really pain in the ass.
And I wouldn't recommend it with Cursor.
But back to my point, I wanted something that would work quickly for us while we were on the call.
And so what's great about Cursor is it's essentially a reskinned, modified VS Code.
So if you have used VS Code before, you'll be at home in Cursor, and you know how it can open up entire directories.
Well, that entire directory becomes the context for Cursor and the LLM.
And so it becomes, and that would be all my config files, and they're all in a different hierarchy in one directory, and it knows all of them, and it can read across all of them.
And when you ask it a question, it considers the entire scope.
And so one of the ways I've made this work is I've put all of this for all of my hosts in a build directory in my home folder.
And so everything I'm doing is in that build folder.
That's what gets checked in and out of GitHub.
And so it has the context of all of the machines, because it's looking at that one directory.
And then, you know, you can just open up a chat session and say, tell me about this, or it can be something as simple as what key bind am I using for XYZ to, all right, let's refactor this thing, which we, I would say, it took longer in the sense that I thought we'd be done in a couple of hours.
I thought we'd be done at noon and I think we wrapped up around 5.30.
Yeah.
Maybe 430.
Okay.
However, I think if we had done that manually, I think it would have taken us three days.
Maybe, you know, possibly.
What do you think?
I think one...
Would it take me three days, maybe you one day?
Yeah, there we go.
I think that's about right.
Yeah.
Yeah, I think it probably would have taken about a day maybe regardless just...
Because there was a certain amount of, like, I had to catch up on what had all...
Because I took a snapshot look at it a couple months ago, but you've continued vibing right along.
So, like, enough stuff had changed.
I need to get enough exploratory work to, like, wrap my head around what was happening.
And then just, like, some of the stuff is just kind of mechanical changes that you've got to do, which are actually kind of perfect work for the LLM because it is kind of thoughtless, really.
And then some of it that maybe would have gone faster is if we had used more human abstract reasoning for some of the, like, more stuff that would be obvious to us that it was struggling with.
What really struck me, Chris, throughout was, would you even attempt something like this without this tool?
Because now that you're in it, you're like, okay, well, I could probably have figured it out.
but I'm not so convinced you would have started the project in the first place.
I would have started over from scratch.
I would have started over and just sort of rebuilt the whole thing with this new model in mind.
And I don't think I would have ever had the time for that,
so I don't think that's actually what would have happened.
But if I were to take this on without a tool like this,
that's how I would have had to go about it.
So beforehand, we kind of had like this scattered configuration,
activation scripts that were defined in different machines
and sort of a lot of duplicate efforts,
duplicate paths for stuff.
We had my username hard-coded.
We made it really hard for somebody to come along
and just add it to their machine
and just define a user.
And what we got to at the end of our call
was a single declarative source of truth
for Hyprland, Waybar, shells,
all the system fine-tuning,
and then per-host overrides
for certain config options,
users, maybe resolution and monitor settings,
accelerated 3D driver stuff.
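A per-host file layered on top of a shared module, as described, might look roughly like this (hypothetical file names, option names, and values, just to illustrate the shape):

```nix
# hosts/studio.nix (hypothetical) -- overrides on top of the shared module
{ lib, ... }:
{
  hyprvibe.user = "chris";

  # Hardware details stay per-host rather than in the shared module:
  services.xserver.videoDrivers = [ "amdgpu" ];

  # A Hyprland monitor line for this machine's panel could be set here too,
  # e.g. something like "DP-1,2560x1440@144,auto,1" (made-up values).
}
```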
And there's definitely a lot of work
that could still be done, you know?
Like one, I think just in general will be good,
to kind of just audit the whole thing and delete as much as you can without breaking it just as a, you know, a cleanup item.
But then mostly, yeah, I think there's some improvements to be made on the interface of like what stuff that we have as fallbacks and isn't required,
but it's more just like a good default because I had to override or comment out some stuff in my code to make that work super easily, as easily as we'd like.
So we got really close to the interface we want, but it's not quite there.
But that said, you know, after 250 lines moved from per-host stuff to a single module,
or a couple of modules.
We got a new namespace for Hyprvibe.
Like, we got, we got it pretty far.
So the real test was,
after we got it working in a VM,
could you get it working on physical hardware?
And I'm happy to report.
Wes Payne, ladies and gentlemen,
running it right now on the laptop,
switched during the pre-show.
How's it going so far?
What do you think?
Yeah, it's been fast.
Does it feel faster?
You know, I'm not,
I got to do more tests.
Feels faster to me.
That's all I know.
It was easy, though.
Like, I did have to, like, there's some stuff you have for, like, the garbage collection for the Nix daemon that I had slightly different settings for.
So, like, that's a conflict, right?
So some things like that is what I mean.
Oh, yeah, yeah, I suppose, because you weren't starting from, like, a fresh.
No, I had to put it on.
It's just my existing NixOS configuration.
Right.
But even then, right, it's mostly, there's a couple of changes.
I don't know, under five things I had to come in on my config and adapt or override; you could do a mkForce kind of thing, too.
And then in the flake, you just kind of, you know, I had to add your hyprvibe
as an input. I had that hyprvibe module, and then I set enable equals true, user equals
Wes. So it's four or five lines you added to your Nix config, or your flake, and now
you have a Hyprvibe system. That's pretty great, right? I mean, you took a, it was a plasma system
for years. You added three or four lines to your flake. Oh, yeah, I did also, commented out some
stuff in your... I did also comment out the plasma stuff. Yeah, but if you were on a fresh box,
I mean, you wouldn't even have to do that. Yeah, for our first test, we just got a VM going from like
a default, no graphical environment NixOS install from the installer.
So, yeah.
And that worked pretty well, too.
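The handful of lines Wes describes, sketched as a flake (the input URL matches the repo from the show notes, but the module output attribute and option names here are assumptions, not the repo's confirmed API):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    hyprvibe.url = "github:ChrisLAS/hyprvibe";
  };

  outputs = { nixpkgs, hyprvibe, ... }: {
    nixosConfigurations.laptop = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix                # the machine's existing config
        hyprvibe.nixosModules.default      # assumed output attribute name
        {
          hyprvibe.enable = true;          # assumed option names
          hyprvibe.user = "wes";
        }
      ];
    };
  };
}
```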
The machine had produced for us a successful, reproducible version of Hyprvibe, which we
had not gotten to before.
So that was very impressive, that it actually did manage to properly refactor it into shared
modules, into, you know, host overrides, and then deliver us something that we could
actually use.
It got to a working state, yeah.
And that's on your system right now.
That's incredible.
But to do that, we had to completely rip apart my home system.
You know, the system that was working sort of the original box.
Oh, did you want that to keep working?
And because we were on a call through that system, I couldn't build and reboot and see if it worked.
We could build it.
We just couldn't boot.
I couldn't reboot because I would drop off the call.
And so I had to wait until we were all done.
And then find out if I had a working box or not.
And I don't know how to properly convey the amount of rearranging of the guts of this system that we did.
I don't know if I, it's like an entirely new distribution.
Oh, yeah, we should definitely, I don't think we updated the README at all.
So add that to the to-do list.
Yeah, I realized that as I was looking at it this morning.
I had no idea if anything was going to work anymore.
I didn't even expect it to boot.
So I got off the call, went ahead, did one last rebuild boot, hit the old reboot button, and it came up. I think I had one error message
during like the activation phase, but GDM launched and it came up and I thought, oh my God,
I can't even believe I got this. Okay, that's positive. Like, I got GDM. It booted.
And everything works fine except for one thing. And it's so funny. GDM launches, and instead of saying
my name, it now says Hyprvibe. It's still chrisf for the user. Yeah. Okay, that's another thing we need
to set. Yeah, we're setting that user description, and to make it work, I think we had it probably
uncommented, just like I did. But you don't get your username; you get a generic Hyprvibe
user. Because all my dotfiles were the same and all the software was the same, but the system was
completely reconfigured. That was the only thing that was wrong: the full name for my
username was Hyprvibe. And I was sitting there going, wow, I cannot, I cannot believe we just vibed our way to a
completely re-architected system, and that's the only problem I have? That's the only problem I have? You've got to be kidding me. And I sat there, just, like, my jaw. Like, we can rip this thing apart, and I just walked away. I just couldn't believe it. And so it's working just fine. It's working absolutely. I later on used DeepSeek to fix the username. But we should fix it in a way where it's set for everybody. That would be the better way to go.
Oh, yeah, did you just override it locally?
Yeah, and I just, you can change the display name.
But what do you think, Wes?
What do you think about sticking with it for a little bit?
See what you think trying out the Hyprvibe lifestyle for a few days.
Okay.
Yeah, are you going to do it?
Yeah, I'll do it.
I think if you and I try it out.
I might have to ask you some questions, though, that you can vibe for me.
What do we need? Yeah, I need to update the README, put a cheat sheet on there of all the key binds,
because there's a lot of key binds to know about.
You and I battle test it for a little bit.
Then, you know, the next stage, Brent: we've got to get you to do a bug test on it.
We've got to get you to run it for a little while.
Well, that's really up to you when you want to pull me in if you're ready for it.
Oh, are you ready for it?
Oh, I'm ready.
Are you ready?
I don't know, actually.
I don't know.
I haven't been so good with keeping up with the issues and PRs.
But I do appreciate people sending them in.
One of the things we were able to do is, with Cursor, you can add links to GitHub as context.
So where we actually started is we pulled in a few of the pull requests and a couple of the issues.
And had it analyze those and compare it to the existing system,
because there was a pretty big change since those issues were submitted three months ago almost.
And we had to look at the differences and see what would be practically possible to apply
and how we might apply it to the existing configuration.
And so we started with an analysis of those pull requests and issues that people submitted
and were able to kind of iterate from there.
That's really how we even started down this direction.
And we just had it give us an overview of what they said and what we would need to change.
And that was kind of our launching off point, which is pretty powerful just that right there.
Yeah, thanks to the community, really.
Yeah, it was longer than I wanted, but we ended up with something that is much more shareable, much more reproducible for me on a new system.
So if I one day finally do get a new laptop, this is now completely deployable for me.
In particular here, props to F-bit818, samh, and shift, because they've all made excellent-looking PRs.
Yes, thank you, everybody.
Go check it out.
If you want to see it, it's at github.com/ChrisLAS/hyprvibe.
We'll put a link in the show notes and try to get the README updated pretty soon to explain how to get it
working. It's been a lot of fun, and I've learned a whole new appreciation for the people
out there that are maintaining distributions for us, and how much of it is just little chicken
s that they have to deal with on a week-to-week basis that keeps them busy, along with
all the other stuff. You have to think about it at the architecture level. It's been a great
experiment to kind of not put myself anywhere near their shoes, but I'm in the same room as
their shoes, and I can smell them. You know, and I've got a much better appreciation for the
smell of their shoes now.
JoinCrowdHealth.com, promo code unplugged.
The open enrollment is now.
So take your power back and join crowd health to get started for just $99 for your first
three months.
I struggled to solve health care.
As a small business owner with just a really small team, there wasn't a great option
for me.
And I looked for years.
I tried everything.
But the cost just kept getting absolutely bonkers.
And I needed to make an informed decision.
So I did a deep dive into crowd health.
I have been a crowd health member for over three years,
and it has been a peace of mind for myself and for my wife.
And we've participated in the crowd helping others with their health needs, too.
Don't take my word for it.
Trust yourself.
Go take control of your future with crowd health.
It is a health care alternative for people who want to make their own decisions.
So you don't have to play the insurance game.
You join crowd health, which is a community of people, like myself,
funding each other's medical bills directly.
No middleman, no networks, no nonsense.
And I can tell you it works better than I initially expected, right?
I was just hoping for anything that would be functional, and I would say it's far beyond my expectations.
There's a great app to let you manage all of this, including looking at your status in the community, seeing what the requests that come in, where things are at, and also getting help, including like, you know, taking care of things when they come up, unfortunately, and all that kind of stuff.
It has dramatically saved my family so much money.
This is crowd health.
It's a health insurance alternative.
It's health care for under $100.
You get access to a team of health bill negotiators,
low-cost prescriptions, and lab testing tools,
as well as a database of low-cost, high-quality doctors
that have been vetted by crowd health, and it works.
And if something major happens, you pay the first $500,
and the crowd steps in and helps fund the rest.
It feels like everything has been messed up for the last few years,
and with health care, it's just getting so much worse this year in particular.
So if you join the crowd, you take care of each other.
You get outside that system.
That system is going to be overpriced.
It's not really taking care of your health.
It doesn't incentivize you to take care of your health.
And it's so, so complicated now with all the subsidies and the things that are expiring.
It just is not something I even want to have to participate in anymore.
Crowd Health has saved members over $40 million in health care expenses
because they just refuse to overpay for health care.
They do it right, they figured it out, and it's working for me.
The open enrollment is now, so take your power back.
Go join CrowdHealth, get started for just $99 for your first three months.
Use the promo code unplugged at JoinCrowdHealth.com.
That's JoinCrowdHealth.com, and then our promo code is unplugged.
CrowdHealth is not insurance.
Opt out. Take your power back.
This is how we win.
JoinCrowdHealth.com, promo code unplugged.
Unraid.net slash unplugged.
Unleash your hardware.
Go check out Unraid, the powerful, easy-to-use NAS operating system.
For those of you that want control, flexibility, efficiency,
and you just want to play around real quick with the stuff we're talking about.
Unraid is your gateway to that.
And Unraid 7.2.0 just landed.
Yes, the new stable release of Unraid is here.
New fresh features.
First and foremost, the web GUI is now responsive.
So it's going to look great on a lot of devices.
ZFS users, you're going to love the fact that you can expand a RAIDZ vdev one disk at a
time. Whatever that means, I know you're going to love it. And it's here. That's right. You now
have solid NTFS support. If you have a bunch of disks, like I have a couple of old Windows
disks that I want to use, but I want to copy the data off, boom, NTFS support is in there.
You would be blown away. You would be blown away. Can I just mention how blown away you
would be if you knew about the just excellent file system support in Unraid. I mean, we talk a lot
about the awesome virtualization support for passing through hardware and doing VMs and containers
next to each other. We talk about the luxurious community application catalog and the fact
that they're always maintaining this thing and putting out new versions and making it super
easy to upgrade and safe. Your data is always safe. Like that's stuff I talk about. But what I don't
really mention enough is like, it's got you covered on file systems. And one of the things that a lot of
people who script are going to be happy to see, they now have a built-in open-source API.
And I've already seen the community working on some apps around this. It's chef's kiss.
And I don't know, I guess the community is at a point with maturity where these applications are just bangers.
It's just really impressive. So the new Unraid's great. And if you haven't checked out Unraid yet,
we've got a deal for you because not only can you support the show by going to Unraid.net slash Unplugged,
But you can check it out 30 days for free.
No credit card required.
Unraid.net slash unplugged.
And if you decide to pull the trigger,
they got a lot of nice price options
at different points you're going to like.
And that's just kind of locking in the guaranteed maintenance,
the continued improvement.
I mean, they just hit 20 years and they're still going strong.
So this has got a long runway,
and it's something you can run for a very long time
with the hardware you have today.
It's that great.
Unraid.net slash unplugged.
We have a little piece of
mail here from our dear Olympia Mike. Hey, Mike, it's been a while.
Mike writes, hey guys, I'd love to get in on that roast my Nix config action. This isn't my
personal config, of course, but it's the main Nix module for the Nixbook project that I've
been working on for nearly a year. The Nixbook install script basically just adds this
base.nix as an import. And before you jump all over this, no, the project doesn't use
flakes yet, mainly because, one, technically flakes are still experimental, and I'm trying
to be conservative here. It also just complicates the installer. And number two,
flakes seem to also be very host-specific, but I won't know what host name a user of
Nixbook wants. Either way, this has been running well for the most part. Notable parts of this
config are the Nixbook config updating itself, sending notifications to users when you don't know
the usernames of the users on the system, and the automatic way to switch channels when I bump
the channel version.
Biggest issue Nixbook users are having currently is printing.
Oh, yeah, yeah.
I have Avahi enabled, and it finds the printers, but for some reason, the user still needs
to go into cups, modify the printer, select the driver, and then enter their password.
Curious how this can be more automatic, like the way Linux Mint or other distros do it.
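For what it's worth, the usual NixOS starting point for more automatic printer discovery is CUPS plus Avahi with mDNS name resolution. A minimal sketch (not a verified fix for Nixbook's driver-selection issue; driverless IPP printers are the happy path here):

```nix
{
  services.printing.enable = true;   # CUPS
  services.avahi = {
    enable = true;
    nssmdns4 = true;                 # resolve printer .local names via mDNS
    openFirewall = true;
  };
}
```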
That could be something if anybody out there knows, let us know, because I don't think I have an answer for that one.
Please roast away, boys, call me out on my jank and make the Nixbook project even better.
Well, I think this email seals the deal, gentlemen.
Config Confession, step right in. Tell us where it broke again.
Yeah, I think we're going to do another episode of Config Confessions.
It doesn't have to be a NixConfig.
It could be whatever you are working on, including a Docker Compose file, maybe an Ansible playbook.
I'd love to see a few Docker composes.
and, of course, your Nix config.
Send them in either via a boost link or at the contact page.
Maybe Mike's is in there.
We've got a few others in there.
We'll be talking about.
And, you know, of course, I think you better prepare yourself
because a lot of the answers is flake it up, Mike.
Like, number one, the number one problem you listed
you wouldn't even be having.
And number two, you need to knock that experimental stuff off.
But we'll get to that.
We'll get to that.
He does.
I'm just telling it like it is.
That is, that is.
We got to listen to the episode.
That is for the episode.
That's big channel propaganda is what it is.
It's big channel propaganda, and I'm not standing for it on this show.
I just feel strongly about that.
All right.
But thank you.
And prepare.
Prepare yourself.
Boostagram.
We got some boosts into the show this week as well.
KS Koba comes in as our baller booster with 88,887 sats.
I like it.
I like it a lot.
Thank you very much.
And I feel like that number means something.
But what it really is, is a bunch of ducks.
He says, I watched a great interview with DHH and The Primeagen
about the perfect storm of Windows 11 dropping the ball,
macOS getting stale, and Linux getting good,
or at least good enough to finally make 2025, 2026
the year of the Linux desktop, desktop, desktop.
Kind of echoes what Chris was saying recently
about a new demographic of users.
I've been able to shift fully in my life now to Omarchy
on all of my laptops and desktops.
Still some learning to do. Like, how do I install from a tarball? But overall, it feels so fresh. And I'd say unobtrusive. An OS that just gets out of the way. That's great to hear. And thank you for the boost. I really appreciate that. I love knowing that it's working for more and more people out there. Like, I do think there has been a particular audience that Omarchy has locked in on. And I think that's fantastic. As far as running from a tarball,
Well, that's a little complicated.
Depends on what you downloaded.
You might need to extract it and mark it executable and then run it.
But generally, you want to try to install something from your package manager if you can and not from a tarball.
Because you're not really installing it.
You're just running it.
So there's that little hot tip for you.
But thank you, Kay.
Appreciate that baller boost.
You're the best around.
Well, Tert Ferguson boosts in with 22,222 sats.
Tert Ferguson.
I thought we were already called the Jupiter
Colony? We have ColonyEvents.com. How quickly we forget.
Ouch.
That's a bit of a good point. That's true. And the Matrix server is the Jupiter Colony.
Yeah, I'm going with it. I'm leaning in. I'm leaning in. We're calling the audience to Colony, and I'm fine with that.
Colonized. Yeah. I agree.
Well, Oppie1984 boosted in 4,000 sats.
He's a good guy. He's a real good guy.
No, you're a great guy. Here's a quote lifted from last episode's boost.
Quote, I'm assuming Home Assistant here.
And Chris, that was your response to some feedback.
Um, well, Oppie says, insert Picard facepalm here.
I only happened to leave out the most important details of my feedback.
Yep, I'm switching to Home Assistant.
A little update on my mom using Mint, though.
She's trying a live USB, and so far, uh, she's positive about it.
That's an interesting way to slip it in for old mum there, is give her a USB live stick.
I like that.
She does, of course, like that it looks and feels just like Windows 7.
She's still not ready to take that plunge, though,
but I'm on the lookout for one of those Windows 10 laptops people are getting rid of when moving to Windows 11,
and then I'll just do a full install so she can try the full experience before making the switch.
She needs a new laptop anyways, so two netbirds with one gemstone.
Ah, that's big kidneys right there. I like that. Good thinking.
That's a perfect little snipe. You know, those laptops are still going to be plenty good.
Nice thinking. Thank you for the update.
Oppie, it's good to hear from you.
Not the one comes in with 2,000 sats.
Coming in hot with the boost.
Should be LUP rats instead of lab rats.
Oh, and plus one for another config confession.
I love the deep dives.
Even if it's a topic I don't have a use for.
Now, that is the perfect listener.
Thank you.
We really appreciate that.
Hybrid Sarcasm boosts in with 10,000 sats.
Are you serious?
Make it so.
Thank you, hybrid.
Love the recent baller boosts.
And just a reminder that a
free Jupiter.Party membership goes to the listener that boosts the most total sats in 2025.
Right.
There's still time to get those boosts in.
Yeah.
We're going to have to put something together for the end of the year because, well, we haven't.
We technically ended the Tuxies last year.
That doesn't mean we can't do something, but we should, you know, get together, have a beer, and discuss what we're going to do.
And maybe eat some food, too.
You know what I mean?
We should probably eat some food too.
I mean, do listeners have ideas of what we should do?
I'd be open to that.
I mean, we could party, we could road trip.
We can go to space, whatever you want.
What?
Okay.
All right.
Okay.
The suggestions have to come with funding proposals.
Well, Moon and I boosted in 5,135 sats.
I am programmed in multiple techniques.
This is a live boost from a train running under the San Francisco Bay, 135 feet below sea level.
Is that a new record?
What is the record for lowest and highest elevation?
Live boost.
Anyone?
That's got to be it.
At least below sea level, 135 feet below sea level live during the show.
Impressive.
I bet you somebody could beat them on altitude for sure.
And, you know, we welcome all elevation boosts.
This is a great idea.
If anybody is above 1,000 feet, we're at sea level right now.
So if you're above us, boost in, let us know your elevation.
I wonder if anybody's on a mountain out there.
And can I come live with you?
Thank you, everybody who boosts.
We have the 2,000-sat cutoff just for on-air timing and all of that,
but we save all of them in our show notes and we read them.
We also had a nice batch of you stream.
24 of you just streamed those sats as you listened.
You collectively stacked a nice, humble, 21,058 sats for the show.
It's not a strong week for us, but, you know, it's showing up and it's still appreciated.
The episode gets split between myself, Wes, Brent, editor Drew, the Podcast Index, and the creator of the app.
We all collectively stacked 153,802 sats, thanks to you.
That's like an investment in the future of the show, and we really appreciate that. And of course,
it's also a signal: if you liked that episode, it gives us an idea of what content really works for you.
There's no better vote than a boost, and you can use Fountain.FM to do that, or Alby Hub; there's
lots of options there. And thank you to our members, our core contributors,
and the Jupiter.Party. You put that support on autopilot, and it's our foundation. We appreciate
you very much. Thank you, everyone. And we do have some picks before we get out of here. We've mentioned
this on air once before, it was a sly mention, but it's never made it into the pick category, and I
need to elevate it up, because the team's done great work. It's a very useful application,
and it's got a great name.
They had a release on September 8th.
It's called duf, D-U-F.
And it is a disk usage/free utility that has, I think, the best visualization.
Especially if you've got some media shares or photo shares,
if you have a NAS and you need to kind of get your head around
what on your NAS is eating up a bunch of space.
duf is the way to go.
It's MIT licensed, and they've just been doing great stuff.
And so think less du, more df, in terms of where its role is, because you get just a really nice breakdown of what file systems you have mounted, including, like, it'll call out special devices, like places like /dev and /run and /sys, differently.
So then you kind of get to see your actual, more physical, real disks all in one place, with handy rendering in the terminal, including little progress-style bars to indicate how full your disk is, color-coded.
It's just really easy to read.
And a nice breakdown.
Also, like, the type of the file system is so handy to have just right there.
I just love that.
And you'll love this, Wes.
Outputs to JSON. Packaged for just about all the distros you might possibly want,
including Arch and Nix, Fedora, and Ubuntu, and others.
So duf, D-U-F, that's the first pick this week.
But we've kind of fallen into this habit of having more than one pick
because there's so much good stuff these days.
The cup runneth over.
Oh, fair. It does.
And this one's a little bit different.
It's called cheat.sh, and it bills itself as the only cheat sheet you will ever need. And the idea is, you install
this on your machine, and when you forget a command, you use cheat.sh to pull it up. And one of the
things that the project talks about here is they focus on crazy great performance. Like, they
wanted to be back with an answer in, like, you know, two milliseconds or something like that. And it
covers 56 different programming languages, a thousand of the most important UNIX and Linux
commands; a bunch of other stuff is in there.
And it's called cheat.sh.
It's pretty nice.
And I think it's probably one of the handiest tools I've seen in a while.
It's mostly written in Python.
And did I mention it's MIT licensed?
I'm not sure if I did or not.
No, you got the last one, though.
Yeah.
Now, Wes, on our call yesterday,
I specifically remember you disabling, was it,
man pages on the hyprvibe?
I was wondering, would you install this instead?
You know, I haven't, that's a good question.
As long as you have a system that's constantly connected,
yeah, it might be, might work pretty decently.
Yeah, you want to be able to look up.
It's very fast at it, but it does require an internet connection.
So there's that.
It can be used on the command line for command completion.
It can be used inside code editors.
They aim for sub 100 millisecond response times,
so you don't have to sit there and wait for it.
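[As an aside, not from the episode itself: the simplest way to try cheat.sh is its curl interface, as documented in the project's README. This assumes you're online, per the discussion above.]

```shell
# cheat.sh answers plain HTTP(S) requests, so curl is all you need.
curl -s cheat.sh/tar             # cheat sheet for the tar command
curl -s cheat.sh/python/lambda   # language-specific lookup: Python lambdas
# The project also ships a cht.sh client script that adds shell
# completion and editor integration on top of the same service.
```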
I think there might be a way to do it.
Yeah, there is a way to do it offline as well.
You can store it for offline use.
Oh, that's nice.
I thought so, yeah.
It's pretty great.
So it's called cheat.sh.
So two great apps we'll have linked in the show notes:
duf and cheat.sh.
Yeah, I'll be honest.
I've been leaning into these types of things recently
just because, like, fish showed me the way.
And if I can get another fish-like experience
that makes it just, oh, yeah, that command I do every six months,
that kind of stuff, love that.
I love things that make that simpler.
I don't remember all that stuff like I used to.
I want to print out a cheat sheet, actually, and tape it to my monitor.
For learning NeoVim?
Well done, sir.
Well done.
Maybe one day, Wes, you never know.
Could always be a challenge.
Well, maybe with the LLM helping you.
It'll go a little easy.
Nice.
I think that's a burn.
I think that was a sick burn, actually.
I'll call it a burn.
Yeah.
But topic relevant.
Oh.
Well, let's see.
What should we tell people about?
Should we tell them about our fancy features that we have?
Yeah, absolutely.
Not only do we have chapter markers, so you can go right to the stuff that you like, or skip the stuff you don't, or skip around, or listen in reverse order.
I don't know.
We also have transcripts of the whole thing with who said what dumb stuff.
Yeah, and a lot of that stuff is compatible with the OG podcast apps, the 1.0 apps, like we'll bake it into the MP3.
And if they support the standard for the...
AntennaPod does a great job with the transcripts.
For the transcripts.
If they support the transcript standard,
like Apple Podcasts actually does, of all apps. They support the transcript
standard. So it just kind of depends on the player,
and then if you have a nicer 2.0 player
you get even more features, you get better chapters,
you get perhaps more features with
the transcript depending on the client, and then additionally
you get the LIT (live item tag) support and potentially
the boost support too. So check us out.
Yes, we are live. We do a Sunday/Tuesday
show. Sundays at 10 a.m.
Pacific, 1 p.m. Eastern, over
at jblive.tv.
See you next week. Same bat time.
Same bat station. We have that mumble room
too. You can join us. There's always more
in that mumble room, and if you're a member,
be sure to get the bootleg version.
It's clocking in at over an hour and 42 minutes right now
of content just for our members.
Now, links to everything we talked about today,
those are over at linuxunplugged.com/639.
Ooh, almost to 640.
How about that?
Also, our RSS feed, our contact form,
all that good stuff.
Matrix Room, all linked over there.
You can find it.
It's a website.
It's got links.
You're going to love it.
Thank you so much for joining us
on this week's episode of Your Unplugged Program.
And we're going to see you right back here next Tuesday, as in Sunday.
