LINUX Unplugged - 653: The Kernel Always Wins
Episode Date: February 9, 2026

The news this week highlights shifts in Linux from multiple angles. What's evolving, why it matters, and that moment where the future actually works.

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
Support LINUX Unplugged

Links:
💥 Gets Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
SCaLE 23x | Registration — Get 40% off registration with promo code "UNPLG"
PlanetNix 2026 — Where Nix Builders Come Together
Pasadena Linux Party Meetup
Valve explains why it hasn't announced release dates for its new hardware, now plans for "first half of the year" — Valve now says all three products will ship "in the first half of the year."
Latest VirtualBox Code Begins Supporting KVM Backend — Support for KVM or other native OS hypervisors in conjunction with VirtualBox has long been sought, and it's finally becoming a reality.
bcachefs-tools v1.36.1 — Interactive TUI for monitoring various filesystem internals, slowpaths, and device performance, with duration and frequency tracking for various events. Helpful for diagnosing performance issues.
bcachefs v1.36.1 is out - next release will be erasure coding : r/bcachefs
bcachefs PSA: if you're on 1.33-1.35, upgrade asap — Several people have been hit by this, so please upgrade asap.
Mattermost — Mattermost is an open source platform for secure collaboration across the entire software development lifecycle.
Debian's CI Data No Longer Publicly Browseable Due To LLM Scrapers / Bot Traffic
Venice AI - Private AI for Unlimited Creative Freedom
ThinkBox V2 - A Custom 4-Bay SATA/SAS HDD NAS for Your Lenovo M720Q
Code for Climate 2026 — The Mad Botter Earth Day Open Source Challenge — We're doing our Earth Day open-source competition again, once again featuring System76 hardware and this time with dual tracks for college and non-college students. (The Mad Botter INC.)
Michael Dominick on X - Earth Day Open Source Challenge
Red5d/podcast_mcp — MCP server for accessing Podcasting 2.0 RSS feed episode data. "I've really only tested it with JB feeds!"
Pick: Plexus — Remove the fear of Android app compatibility on de-Googled devices. (Plexus-app on GitHub)
Pick: Reinschrift — Reinschrift combines a native GNOME interface with the simplicity of plain text. Your tasks remain a normal Markdown file: easy to back up, versionable via Git, and editable on any device with your favorite editor. (Reinschrift on Flathub)
Transcript
So friends and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Brent.
Hello, gentlemen. Coming up on the show today, we're digging into the Linux news that's being shaped by delays, and some interesting technical shifts that might actually matter more than they look at first.
I'll tell you about that.
Then there's a moment where the future actually showed up in Linux and everything worked.
We'll share that story.
Then we'll round it out with some great boosts, some picks, and a lot more.
So before we go any further, let's say time appropriate greetings.
To producer Jeff.
Hey, PJ.
Hello.
It's a big game day, as they say.
So we have a few up there in the quiet listening.
Hello, hello, hello, hello, mumble room.
And PJ is the sole member brave enough to set the nachos aside long enough to say hello on the show.
So why don't you make it a Tuesday on a Sunday next week and join us in our mumble room.
Let's have a nice, vital, big mumble room.
You know what I mean?
Yeah, I mean, if you do, you get a whole extra show.
Yeah.
Round of applause.
Oh, Dijl just showed up.
Round of applause to you guys.
And, of course, a big good morning to our friends over at Defined Networking.
Go to Defined.net slash unplugged.
This is where you want to go to get 100 machines absolutely free, no credit card required,
on the decentralized managed VPN.
They manage it for you.
It runs Nebula VPN.
Unlike traditional VPNs, Nebula's decentralized design keeps your network resilient.
You can manage a home lab with it.
I have a nebula network that's just two nodes, and you can have a nebula network that is thousands of nodes, entire global infrastructures across data centers, carrier grade NAT, whatever it might be.
It's extremely resilient, saved my butt the other day.
I was able to get on the wife's machine and save a problem before she even knew it was going on.
It's so nice.
You could go from this teeny tiny lean infrastructure where there's no big tech company.
You don't need any sign on from Google or whoever to use your meshnet.
You can do these tiny setups, or you can go to these massive Slack-size scales, right?
And it's incredible.
And if you want to try it out, you can start with Managed Nebula, you can use 100 devices
for free, you can really get a sense of it.
It's a great product.
It's also a lot leaner on the system.
Shows up in multiple ways on the CPU and on the network.
It's really great for that.
And it's got best in class encryption, and it is fantastic because you control the keys,
you control the lighthouse, so the redundancy, the discoverability, all of it is under
your control, or you let them run it over at define.net slash unplugged.
Then if you wanted to move on, you could.
I mean, you could self-host it too.
It's a really, really great product, because it's the same open source Nebula
that we've been following for years.
Absolutely.
I think maybe we started watching in late 2017, early 2018.
And so we knew there was something there now to see it, to see it really take off.
It's really impressive.
I just noticed on February 6th, we got an updated Android app.
So, you know, if you are using it, go check that out.
Check it out. Go over to Defined.net slash unplugged, support the show, and check out Nebula.
It is fantastic.
Just around the corner, 25 days away: Planet Nix and SCaLE 23x.
We're working with our buddies over at FLOX, who's focused on making reproducible dev environments actually usable.
They're sending us once again to Planet Nix for the second year.
They're throwing a hell of an event.
It's looking good.
Year 2 is looking really, really good.
And Brent, you have 19 days until you're going to be absolutely slammed to get down the road.
It's about a 46-hour drive for you, buddy, which is six days of hardcore driving.
How are you feeling about that?
I'm glad you've been doing the travel math on this one for me.
I appreciate it.
Yeah.
That sounds daunting.
Sure.
I think is the main emotion that comes across.
And also, holy, I probably should leave tomorrow, right?
That's what I should do.
Yeah, well, if you want to have a nice drive.
Here's another way to put it in perspective.
there will be three more unplugs until we are in Pasadena.
That's frightening.
I mean, wonderful.
It's coming soon.
Just keep that one in your head.
I think it's easier to work with.
Mm-hmm.
Yep, three more LUPs.
So there you go.
Check out planetnix.com for the details.
And then go get registered at scale.
That gets you to both events.
We have a link in the show notes for SCaLE at socallinuxexpo.org.
And you can use our promo code, UNPLG, to get 40% off your registration.
It's no joke.
And I've updated the meetup page a little bit as well.
We, I think, are locking in the Yard House because the other two locations are no longer in business.
Oh, that's too bad.
I was informed by a local listener, which I really appreciated.
It is.
It is.
It is.
But it's great.
We already have a good showing.
And if you are planning to be there at our meetup, please go sign up so we can let the venue know.
Oh, great.
Already 25 potential attendees. Join the crowd.
Isn't that fun?
Meetup.com slash Jupiter Broadcasting for that.
And we'd love it if you could be there.
even if you're not going to one of the events,
show up and say hi.
We like that.
Right?
We like that.
So there you go.
That's all the housekeeping I have for you.
That's it.
So we wanted to get everybody on the same page
with a couple of stories that have gone down.
And the first one you may have already heard about,
but it's not too surprising.
We just wanted you to be aware.
Valve has updated their plans for their recently announced
upcoming hardware lineup.
All three products announced last November
are now expected to ship in the first half of the year
instead of in early 2026.
So it's a bit of a delay here.
In a Steam Community post, Valve explained
the lack of firm release dates.
Basically, ongoing RAM and
SSD shortages, combined with
rising prices, are making it hard to
lock in final pricing and
also launch timelines.
That's not too surprising, is it? No.
Okay, but what are we talking about concretely?
Well, it's the Steam machine,
the Steam Frame VR headset,
and that new Steam
controller. In their statement, they did
say this in a very Valve way, I thought.
They have more, quote, work to do to land on concrete pricing and launch dates.
It sounds like they don't even know.
No, and are still kind of figuring that out internally.
I mean, but they did then secondly emphasize, especially given how quickly hardware market
conditions are changing right now.
I think some of us maybe were holding on to this hope that they had maybe bought up stock beforehand,
that things were already rolling and done and they were just going to ship it.
Yeah.
And then we recently saw these rumors floated that there
was a possible bare-bones Steam Machine with no RAM or storage that they might ship.
And they didn't seem to really give much life to that rumor in this press release and in the questions that they took.
Do you think that would be a product?
I mean, it might be for our crowd.
I don't know.
It doesn't seem like it competes as much in the, like, console market.
For sure.
Yeah.
I think in normal market conditions, a bare-bones Steam Machine would be the one I would want.
But I can't buy RAM or storage any cheaper than Valve can.
Exactly.
So, yeah, it's like if I have it, yeah, then I would like a bare bones machine.
I'm curious: if you're listening to this, you could boost and let me know if you would buy a bare-bones Steam Machine if they offered it.
And, you know, that they might be able to ship it sooner would be the advantage.
So if you had the money to spend and you could buy your own storage and your own RAM and you could get your hands on this a couple of months before others.
One of the questions I have is, like, how long can they wait for prices to stabilize or to secure hardware before the product just gets older and older
and less worth releasing?
Who pays two grand for a three-year-old product?
You're right. Yeah, that's tricky.
Maybe they know something we don't know.
Maybe they know some other vendors about to come online and start manufacturing RAM.
I don't know.
But that's a good point.
If they wait too long, they're not going to be super competitive machines.
Yeah.
Huh.
Yeah.
Yeah, I'm not sure.
I guess what we do know is that it's going to be later than they originally suggested.
But they're still, for now, promising first half.
What, by June, July?
That's what I would take it to mean.
Assuming they don't change that again.
Right. Yeah.
Did we think it even ships in this year?
Jeff, how are you feeling about this?
Were you going to buy one of these?
Yeah, I'm a little sad.
I really, really, really want a frame.
Yes. Yes, me too.
That's the thing for me is the frame.
I'm surprised the controller's delayed.
That's interesting.
Maybe there's memory on the controller.
It's all that RAM in the controller.
You feel for Valve, because when they announced this,
it was maybe the writing was on the wall at that point,
but it wasn't obvious where this was all going price-wise.
Certainly wasn't where we are now.
And now here we are, and it's like, you feel for them, don't you?
I think if they can get through the COVID stuff with the Steam Deck,
I'm pretty sure they're going to get through this too.
They'll find some way.
It won't be as much as we hope.
You know, I don't think they're going to do any better than they did at the Steam Deck.
That's true, though.
The Steam Deck got it out.
Yeah.
And they are very smart, you know, they have a lot of,
of smart people working there.
That's a good point.
All right, PJ, you're making me feel better.
I like that.
All right, here's another story that may suggest an interesting shift.
VirtualBox is finally learning to ride on top of KVM.
Some code changes are landing in VirtualBox that could have big implications long term for
how people use it.
It's just being tested now.
They're beginning to support a native KVM virtualization backend for the VirtualBox
application.
That's crazy.
I know.
It's a longstanding ask from
the Linux users out there who just want... no, I don't want a little kernel module.
I totally get that.
I totally get that.
Very early, very opt-in,
hard to get, we'll get more into that.
But this does seem to be a trend that we just see,
keep seeing is that hypervisors over time are just adopting Linux's native virtualization
stack and saying, ah, fucking screw it, you can just use that.
I mean, VMware did a similar thing.
I just think that's fascinating.
I mean, this is one of the main reasons I moved away from VirtualBox.
It was the first virtualization software that I used way back when I was stuck on, let's just say, other operating systems.
But then once I discovered, like, why, there's all these kernel modules and stuff, that was the reason for me to move away from it.
So I would assume for other Linux users, this is about reducing kernel friction.
So less reliance on those proprietary kernel drivers that you have to load and that sometimes break.
Better compatibility with hardened kernels, Secure Boot,
and distro updates.
Right, right.
That is a good point.
Which would be a great point.
And I think from a privacy and freedom angle, basically KVM is part of the kernel.
So it's audited, it's upstream, transparent, and running VirtualBox on KVM shifts that trust towards the kernel rather than vendor-specific modules.
What, Brent?
You don't trust Oracle.
Well, let's just say I have less reasons to trust Oracle than I do the kernel.
Fair.
Yeah, good point.
So Oracle's pretty much
positioning this as a fallback,
not the preferred path for VirtualBox.
So the messaging still centers
Oracle's hypervisor as the, quote,
better choice,
especially for legacy workloads.
Yeah, isn't that interesting?
Like, I mean, I guess I get it.
They're very proud of what they've done
and they've specialized it over the years
to address their users.
They're not just chucking the old one on the ground
and throwing it away.
But it's not often where somebody submits
an upstream patch so that way
their software can take advantage of something in the kernel, and then includes, like, a five bullet point list of...
But this is why ours is better, right?
Did anything in there stand out to you that was reasonable?
I mean, there must be some.
Yeah, a lot of it is like legacy and exotic guests, right?
So, like, if you think about KVM, like when it came of age, which was after VirtualBox, right?
And it's been very Linux native and it's been used a lot by hyperscalers to run cloud businesses,
running a lot of Linux guests.
Whereas, you know, Virtual Box can run a whole bunch of stuff.
It's got accurate A20 gate emulation, which is important for some DOS stuff.
It's got advanced instruction emulation, ring zero device emulation tricks, aggressive VM exit optimizations.
For modern guests, you really don't notice like a ton of difference for most situations.
But if you do have some particular legacy workloads, you might find some areas where the old driver would be better.
I still think even partial KVM support is, you know, a win for us.
It is, as you said, hard to get.
It's in the latest VirtualBox Git
and test builds, Linux only for now.
So you can't, you know, obviously.
So you have to go build it yourself.
Okay. All right.
Or I'd be comfortable with getting one of those test builds from somewhere else.
You can opt into it explicitly if you want.
Or, as Brent was saying, I think what is probably an upgrade for folks and end
users who maybe don't know what a kernel module is, is this could be an easy fallback, right?
So it could try to run with the VirtualBox stuff.
If it sees KVM's already loaded, or a kernel module conflict, just
go use KVM. Maybe you won't have all the features.
Maybe it won't be quite the same, but it'll work.
That's going to be the main use case initially for this.
Probably would.
I think it convinced them to do it, right?
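That fallback decision can be sketched in a few lines of shell. To be clear, the function names and logic below are our own illustration, not VirtualBox's actual detection code; the one solid assumption is that KVM exposes /dev/kvm when the module is loaded, so a frontend can simply probe for it before opting in.

```shell
#!/usr/bin/env bash
# Illustrative sketch: how a hypervisor frontend might choose between
# the in-kernel KVM backend and its own kernel module.

kvm_backend_usable() {
  # KVM exposes /dev/kvm when the module is loaded; probing for that
  # device (path parameterized here for testing) is the usual check.
  local dev="${1:-/dev/kvm}"
  [[ -e "$dev" ]]
}

pick_backend() {
  if kvm_backend_usable "$1"; then
    echo "kvm"
  else
    echo "vbox-module"
  fi
}
```

On a stock desktop kernel with KVM loaded, `pick_backend` would report "kvm"; on a system where the module isn't present, it falls back to the vendor module, which is roughly the behavior described above.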
Yeah, yeah. I wonder if
long term, if there isn't some
potential for Virtual Box to essentially
become one of the
recommended user space VM managers
for KVM. That was my
first thought is, oh, it would be a nice
way to manage a bunch of KVM systems, especially
if I could remote connect from my
desktop, if I could have the virtual box
VM management UI
on my desktop, and then I connect to my KVM server,
and I could manage all my virtual machines at KVM
with the Virtual Box UI.
I mean, I think I would at least give that a try.
It has been, right, like, it is in some of the best ways of open source,
even with its own licensing complications in Oracle and all the rest.
Like, it's been around for a long time.
It was early at having a good, consistent cross-platform experience.
It's just this, like, very available if you need virtualization software.
And that has a lot of utility, especially now if it can kind of adapt.
I do also think you kind of hit on the Linux side trend of folks using more of this existing in kernel infrastructure.
And I think that's a trend that is beyond Linux, right?
Like you've seen Apple offer a lot more robust virtualization primitives in their system.
Right.
They provide the plumbing and then you write to the API essentially.
And same on the Microsoft stack, right?
You got Hyper V side.
You've got the WSL stuff.
There's just a lot more primitives that you can do.
There are still products, especially from Microsoft.
But, like, there are still products built on it, but you get a lot more of that base
infrastructure that you can plumb yourself.
And it makes sense, right?
They're in control of the kernel and all of that.
We've made and heard some wild theories in the past of like Microsoft, maybe Windows
is just going to take this approach and become Linux under the hood with a nice, fancy
interface.
So my big question becomes, does all roads lead to Linux?
Yeah, it seems like it.
I mean, it seems like here.
Here we are, right?
I've been talking about Linux for 20 years and can you believe it's still relevant?
after 20 years, and it's more relevant.
It's more relevant.
Yeah.
There's not a lot of technology like that.
And it is this trend of all things just kind of moving to the kernel,
and they drop their own way of doing things.
Now, Virtual Box is going to keep going for a while with their own modules, to be clear.
Did you see this week, unrelated, but that someone proposed like a machine learning framework
offload for the kernel?
So it really is everything in the kernel.
Whatever you want, right?
Use eBPF.
I don't care.
Well, speaking of things in the kernel and sometimes outside the kernel, let's talk about bcachefs, big update in just the last couple of days over there.
And I have one particular feature that I am very excited about.
I'll hold that because I'm going to let you take the stage for a moment on this bcachefs update.
Yeah, so this is on the heels.
We had 1.36 back in the end of January.
That had a lot of internal stuff.
This time we get a little bit of user face.
stuff with the new B-Cash-F-S-F-S-T-S time stats command.
It's an interactive to-E for monitoring various file system internals, slow paths, and device
performance, duration, frequency tracking for various events.
Wow.
helpful for diagnosing performance issues.
Brent, did you catch the part in there that I'm excited about?
Did you catch that?
Uh-huh. Yeah.
I think the light bulb turned on for me, too, a little, with this stuff.
It's got a TUI.
It's good for people like you and I.
A file system repair tool with a TUI?
Are you kidding me?
To look at some of the internals and device performance
and what's going on.
That is so up my alley.
There's also some other improvements around the output,
like improved bcachefs reconcile status output.
I saw on Reddit just some users in Kent chatting about,
in general, some of the interface and the outputs,
because there's kind of an array of there's some up-to-date stuff,
there's some older stuff,
so there may be one area of porcelain that gets some more attention.
But something we know is getting more attention
is that in the release announcement, Kent said 1.36.1 is out, which we were just talking about.
The next release will be erasure coding.
Yeah, so tell me about this.
So that would be the parity RAID kind of style stuff, right?
RAID 5/6, like in the Btrfs world, coming for bcachefs.
Right.
That's massive.
Yeah.
And a lot of the base stuff has been there, but all the user-side stuff and especially, like,
you can make these files.
if you do experimental things or enable flags,
but there hasn't been a lot of tooling support
for actually doing anything if a disk dies.
So you could test it and use it,
but you don't want to run a prod system on it.
Okay.
And the last little bcachefs bit for today,
it's not all good things.
It is still an experimental developing file system.
A little bit of a PSA here.
Indeed, 19 hours ago on r/bcachefs:
early reconcile had a serious bug in the data update path.
If an extent lives on devices that are all being evacuated,
while being evacuated, they're considered to have durability zero.
And the old code for reconciling the existing extent with what the data update path
wrote would drop those replicas too soon.
Okay.
So if you're on 1.33 through 1.35, you need to upgrade.
Good news is the new code is much more rigorous with how it decides when to drop replicas.
And so far, only like a handful of people have been hit by it.
As usual, I have seen Kent doing a lot of on-the-ground support, both in the IRC and on the
subreddit. So if you do have file system issues for your sake and for everyone else's
sake as this thing gets developed, don't be afraid to reach out. Yeah, he's very engaged.
I just realized, I told the story to the members recently, but I'm just around the two-year
mark of bcachefs on one of my absolute most important production systems. It runs 24/7, and it's
been using bcachefs for its critical data drive. And I do that not necessarily saying that you should,
and not necessarily, you know, recommending it.
But so that way I can be kind of on the front line and report to you how it's going.
And so if we're five years down the road and I'm talking about bcachefs, you know I've been using it for five, six years at that point, right?
So I think there's some credibility in actually deploying it and testing it with data that is literally putting my money where my mouth is.
But I don't know if everybody should do that yet, but I'm very impressed because two years ago it was in a much different state than it is now.
And now it doesn't feel risky at all.
Yeah, definitely.
And a lot of improvements, right?
And we're still getting some really nice, like being able to do more upgrades in the background
or without not having to have the drive offline to do them.
It's now a very robust file system in a lot of ways, which is great.
I mean, I, again, don't do as I do.
But I'm at the point where if I can, I'm going to make it my default file system on every route install for workstations and laptops going forward.
You already have been doing that.
You've been running it for a long time on that laptop.
Yeah, I think summer 2024.
Wow.
And you've just been going through the kernel upgrades kind of regularly.
Yep.
And then I've also got it running on my home router box at the moment.
I don't have any raid systems.
So I do want to, I do want to set that up.
I mean, I've dabbled with some, but no permanent ones.
I don't know.
The router one tickles me the most, right?
Brent, it's like that's the one.
Because it's like, why?
Yeah.
You can do ext4 on a router.
You can do anything on a router.
That was actually one of the first ones I built, I think.
I think I was just, oh yeah, why not?
I was like redoing the system.
Yeah.
it was there. And that did, that did bite me one time, I will, I will admit. Well,
it wasn't a bite, it was just, it had been an old enough file system that I had to go through
some of those on-disk upgrades. So I did have to do one update where my network was offline for
like 20 minutes while it did that. I want to ask right now, if you're listening, what is your
router file system of choice? If you're building a router or a system like that, boost in or
send us an email: what is your router file system of choice? Let us know. Yeah, I have, I have
ridden the file system wave for a long time. So I think that's kind of one of the reasons why I'm
a little bit more comfortable using bcachefs. I switched to SUSE back in the day because they
supported ReiserFS. And I needed extended attributes for Samba shares. And I needed support for a lot of
little files because I was doing images of checks, JPEGs. And so I went with ReiserFS way back in the
day. And then when Btrfs came around, I adopted it and got so burned early on.
Some of the early LINUX Unplugged episodes are me ranting about losing my machine to Btrfs.
Yep.
And now I have it everywhere.
And I'm starting to do that again with B-Cash-FS.
And then I have a lot of my scary raids around, which you guys remember my scary raid, right?
Oh, yeah.
How could you forget?
Which is my, it's a RAID 0 of just a bunch of spinning rust.
Well, you forget everything after a while.
That's true.
And I have a scary raid here in the studio on the studio machine, and I have a scary raid on my workstation upstairs.
and I name it slash scary raid,
so I always remind myself,
this could blow up at any time,
anything you put here is ephemeral.
And so that's my little mental trick is slash scary raid.
And that right now, on all my systems,
all my scary raids are XFS.
Wow.
Yeah.
All my scary raids are XFS.
And I don't know.
It's legacy because they've been around for years
because I reload the boxes
and then I just remount the scary raid.
You know, because I've always got the scary raid.
sitting right there.
So are you going to upgrade that to a bcachefs scary raid?
Well, this is what I'm thinking, is the next generation.
Wouldn't the next generation of a scary raid be, like, a bunch of used SSDs that I just
slam into one big volume and use bcachefs for that?
I should say that was the other factor on the router box is it's pretty much just a NixOS
config in Git, so there wasn't a lot of data on there.
Yeah, yeah, yeah, yeah.
I think if I were to, you know, because all my scary raids are identical disks, and if I were
to do a mix of
drive sizes and then just sort of mush all that together, I think I would, I would call that
messy raid.
That would be a messy raid.
Well, I just want to take a moment and thank our members.
We don't have an advertiser for this spot yet, although we do have the world's best
Linux audience, the world's largest Linux audience.
We've been around for over 12 years doing this show.
I've been doing podcasting for 20 years.
So if you would like to reach one of the best audiences with somebody that knows how to do podcast
ads, send me an email at chris at jupiterbroadcasting.com.
In the meantime, thank you members, jupiterbroadcasting.com slash membership.
I know.
I know you can get the LINUX Unplugged membership,
or Jupiter.Party
for the whole network.
I don't need to go through the whole thing.
You guys know it.
So I'll just say thank you very much.
There's not a lot of, you know, big commercial demand for a Linux podcast that is talking about file system nuances like this.
I don't know if that surprises you, but it turns out people that are selling ads on podcasts and YouTube, they don't find file system
discussion particularly interesting, and it doesn't really reach their radar. So we do have to lean
more on listener support than a typical podcast that you might listen to because what we do is
we use that listener support to give us the runway to actually nerd out on these topics and go for
the stuff that is never going to get us any play on YouTube. We're never going to get a clip on
TikTok. We're never going to show up in some sort of advertisers keyword search dashboard thing.
It's never going to happen for us. And that's okay. We're fine with that because we,
have listener support. So there's a couple ways you can do it. We have the show membership.
Those are our core contributors, at linuxunplugged.com slash membership. We have the Jupiter.
Party membership that gives you access to all the shows and their special features. Every show has
special features. And then you can boost us. And that is not only a signal, but it also supports
that particular production and gives us an idea of like that topic worked or it didn't work.
And so there's several ways where you can participate. And we really do appreciate it because we couldn't
make this kind of content, where we talk about these nerdy, esoteric things that actually do matter, without you.
It's not our fault that the advertisers don't realize this stuff matters, right? It's not our fault.
This stuff does matter, and it matters to you, just like it matters to us. So thank you very much
for the support, and you can find membership links in the show notes as well.
A little while ago, we took a moment on the show to plant our flag and say all this AI-assisted
stuff was coming for Linux administration. And the last few weeks,
with all the OpenClaw excitement, might be proving that out.
But there's also been a lot of pushback to all these big tech commercial models.
And for some of us, it's made us more excited about local open source models.
Over on It's FOSS, Bwan Mishra wrote about ditching Claude Code and using the open source
Qwen model for real sysadmin work.
I liked hearing about this, guys.
Yeah, me too.
Yeah.
So the author dropped cloud code, went with local quen code because it behaved.
more like a proper Linux tool.
He says he could install it locally.
It was open source.
And it shows every command it's going to execute
before it actually runs it.
And you can describe a task in plain English,
and the Qwen model, which is a free model
that you can run on your own machine,
will just turn that into reviewable shell commands
that you can then authorize,
and then it'll execute them.
Yeah, it's pretty neat.
So Qwen comes from Alibaba,
so it is a Chinese model,
but it is also open weight.
some folks have found some kinds of censorship or other sort of things you might expect from some of these Chinese models.
But it has also been optimized for agent stuff. So as Qwen Code, they have various specific models, especially, like, the code-focused models, sort of aimed at exactly these tasks. And in fact, even the app itself is a fork of Gemini CLI. So it's under the Apache 2.0 license. They look pretty much exactly the same. Oh, interesting. Okay.
But the Qwen folks wanted to add in... like, Gemini CLI does a lot of stuff because it's attached to this very broad Gemini product and stuff.
And you can only use it with that.
But they added support for using local stuff.
They also optimized it more for doing these coding and agent tasks.
Very nice.
Okay.
That's, I didn't realize that you could. I guess I never looked at Gemini CLI.
I didn't realize they had made that open source.
Yeah.
Isn't that nice.
I mean, the back end isn't.
But at least the, at least the front end is.
Yeah.
And, you know, it is like maybe you don't do
everything with one of these particular local models,
depending on what kind of system and GPU
and what you can actually run.
But as these things get better and better,
like writing shell scripts,
or even silly stuff, like,
oh, dang, I downloaded a bunch of files with spaces in there
and they got weird names and I just want it cleaned up.
It happens to me all the time.
Even stuff you can run locally can definitely handle
spitting out a bash script to do that.
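That files-with-spaces cleanup is exactly the kind of one-off a local model can script for you. A hand-written equivalent of what you'd ask it to generate might look like this; the renaming rules here (spaces to underscores, lowercase) are just one plausible choice:

```python
# Rename files in a directory: runs of spaces become a single
# underscore, repeated underscores collapse, and the name is
# lowercased so the results are shell-friendly.
import os
import re

def clean_names(directory):
    renamed = []
    for name in os.listdir(directory):
        new = re.sub(r" +", "_", name)         # spaces -> underscore
        new = re.sub(r"_+", "_", new).lower()  # collapse, lowercase
        if new != name:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new))
            renamed.append((name, new))
    return renamed
```

Reviewing ten lines like this before running them is a lot faster than writing them from scratch, which is the whole appeal.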
Yeah, it is really exciting to see where these local models are going.
I feel like I'm in this
tortured position where I recognize the utility now in a way that I kind of missed the boat on
when hardware was reasonably priced.
Now, now...
Where's that open source time machine?
Now I'm so GPU starved, and I've been looking at Venice AI, which is a pretty good
privacy-focused model system that you can use with agents and whatnot, but the tokens are
more expensive because of the privacy layer.
And this gets really expensive very fast when you want to do more extensive things.
So it is with a particular eye we're watching these local models and seeing where they go.
Just a little background on something we tried last week.
This was a very interesting experiment, and we wanted to share it with you.
We have wanted to stand up a production Mattermost server for years.
We've spun up a few.
You know, we've toyed with them kind of as just experiments.
We've even run a few project-specific Mattermosts when we're working on something with people outside
JB.
We'll spin up a Mattermost real quick,
and they're really just throwaway Mattermosts,
and we don't really spend a lot of time with them.
We wanted something we could use long term.
So last week before the show,
I had been working with one of these OpenClaw agents.
Just going over best system practices,
focus on this, consider this, security.
And the stuff we normally want or don't want to see in a system.
Yeah, exactly.
And so also I've been working to delegate some subtasks to agents
and stuff like that.
So naturally, I thought, let's do something really stupid and give this thing API access to Cloudflare.
I figured why not?
I had an old domain that had sat around for years
that I don't use.
I could set up limited-scope API credentials.
I was about to say, the point of maturity we've already gotten to on that side of the industry is really useful if you're trying to use agents.
Like, don't give it everything.
No.
So I gave it this old domain I had sitting around that I bought
on a lark; probably had too many beers and decided,
I'm going to buy this domain.
And so before the show last week,
I told the agent,
go SSH into this VPS, because the host that it's running on has an SSH key.
SSH into this VPS.
Go look around, learn the system,
the Docker Compose files are over here,
and then commit to your memory what you've learned.
That was before the show, just Sunday morning last week.
A scouting mission, reconnaissance.
After the show, when we were all done with our post-show duties,
I didn't even know you did that, by the way.
You're sneaky.
Yeah.
After the show, I said to Wes,
how fast do you think an agent could deploy a secure Mattermost server?
And so I brought up Telegram and I sent my agent the API key for its Cloudflare stuff.
And I gave it a clear goal.
I said, go deploy a Mattermost server on the VPS I told you about earlier.
That's all I said.
And follow best security practices and use Cloudflare to handle things like DNS caching and other
security best practices.
And that was the entire telegram message.
Five minutes later, the agent had deployed Mattermost via Docker Compose, integrated a Cloudflare
Tunnel sidecar, set up DNS with optimal caching based on upstream Cloudflare docs for
best security practices when deploying a Mattermost server, which is really neat because it
set up the sidecar.
Wes, talk about this, the sidecar tunnel.
Like, we've talked about these sidecars before.
You're a sidecar guy.
Oh, yeah.
I mean, it's a handy pattern, right?
It's just, because you're using network namespaces with the containers,
you can put two containers in the same network namespace,
and so they share a network, and then you can have, say,
Tailscale, NetBird, Nebula, any of these mesh VPNs do it.
Cloudflare Tunnel.
Or you can have a Cloudflare Tunnel.
And so the Cloudflare Tunnel agent handles binding a VPN,
a tunneling connection to them,
and then because they're also controlling all the front side, right,
the load balancing, the DNS, the caching,
then they can automatically
just route it through that tunnel back to you.
And so the Docker and namespace layers
make sure that you don't have to mess with any of the host stuff,
but as long as the actual Cloudflare tunnel agent itself
can get outside via the internet,
then everyone else can connect.
And you don't have to have an open port,
you don't have to worry about that kind of thing.
That's the key thing.
It doesn't even talk to the host network.
It's not even talking to the VPS network at all.
It's just talking over this tunnel.
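That sidecar pattern maps to a pretty small Compose file. Here's a minimal sketch of it, with illustrative image tags, token variable, and defaults; check the upstream cloudflared and Mattermost docs before relying on it:

```yaml
# Two containers sharing one network namespace: the app never
# publishes a port on the host; only the tunnel agent dials out.
services:
  mattermost:
    image: mattermost/mattermost-team-edition
    restart: unless-stopped
    # no "ports:" section -- nothing is bound on the host

  cloudflared:
    image: cloudflare/cloudflared
    restart: unless-stopped
    command: tunnel run --token ${TUNNEL_TOKEN}
    # join the app's network namespace so "localhost" is shared
    network_mode: "service:mattermost"
```

Because cloudflared joins the Mattermost service's network namespace via `network_mode`, it reaches the app on localhost (Mattermost defaults to port 8065), and Cloudflare's edge routes the public hostname back through that outbound tunnel.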
So then what I did,
after it had stood up this entire thing,
I reviewed the Docker Compose.
It was clean. It was clean.
I looked at the Mattermost config.
It was pretty basic.
So I said to the bot,
I'm going to go create you an account.
I'll go create you a bot token for the agent.
Then what I want you to do is go finish the work.
Go set up the rooms, set up the permissions, dial in a full configuration,
give the rooms the descriptions, set descriptions for all the individual accounts,
everything.
And then two minutes later, it was back.
And we had everything there.
A complete lounge, rooms for the bots to talk,
rooms for the people to talk, permission
structure. The entire
process was probably eight to ten minutes.
Also, having the
bots fiddle with the actual
admin settings, so much nicer
than doing it yourself, especially for like, I'm not a
Mattermost UI expert, right? But like, we
needed to rename some accounts or like we wanted,
I wanted to make sure that when Brent got in, he was going to be an
admin on the instance, and
bots handled that just fine. Yeah.
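Everything the bot did through the admin console is also reachable over Mattermost's REST API, which is why a bot token is all an agent needs. A minimal sketch of one such call; the server URL, token, and team ID are placeholders, and the endpoint shape follows Mattermost's v4 API:

```python
# Build a request that creates a public channel via the Mattermost
# v4 REST API, authenticated with a bot token. Sending it, of
# course, requires a live server.
import json
import urllib.request

def make_create_channel_request(base_url, token, team_id, name, display_name):
    payload = {
        "team_id": team_id,
        "name": name,               # URL slug, lowercase
        "display_name": display_name,
        "type": "O",                # "O" = public channel
    }
    return urllib.request.Request(
        f"{base_url}/api/v4/channels",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = make_create_channel_request("https://chat.example.com", "token...",
#                                   "team123", "bot-lounge", "Bot Lounge")
# urllib.request.urlopen(req)  # would perform the call
```

Room descriptions, permissions, and account renames are the same story: a documented endpoint plus a token, which is exactly what makes this agent-friendly.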
And so the point that I'm trying to make here is
this is
not going away for Linux systems.
And it didn't deploy some vibe-coded, insecure piece of crap setup either.
It actually did a fantastic expert job.
We went through and reviewed it.
It's good.
It's solid.
It's a good setup.
And you have to kind of let that soak in for a moment because we have heard these big tech
CEOs promise all this ridiculous crap for the last three years.
But this is really working.
I did three things.
In the morning, I told the bot to go check out the VPS.
In the afternoon, I gave it an
API key and I told it to go deploy
Mattermost. And then after that
I said go set up all the rooms, the
permissions, the descriptions, give them
emojis, all that crap.
And it did all of that.
And it's all because these things all have
API endpoints. The
documentation on how to set them up is extremely
clear.
And in the case of
open source stuff, you can
also go have your bot
spelunk the source code and figure out exactly what it does.
Yeah, and the API, to then realize how to connect to it so then it can participate in the chat.
And Brent, we pulled you into some of that shenanigans, and you could see the bots are actually
coordinating with each other back and forth in our Mattermost chat.
Can I just say how much I wasn't expecting to be pulled into that kind of environment?
But it was impressive.
Like, I don't know, this is a topic we've been talking about for years to replace our sort of legacy
internal communications tool,
and you guys just kind of pulled that off while you were doing the post-show work.
I kind of couldn't believe it.
We were actually just playing with agents, not doing the post-show work;
we were doing other stuff and not supervising this bot.
Like what a force multiplier, if you look at it that way.
And to stand up something we've been wanting for quite a long time
and having, like you said, some confidence that it's been done with best practices
better than we could have in our little busy schedule,
because this would have taken us, what, way longer to do in a way that we were satisfied
was good enough to be a long-term tool for us, and secure and safe enough and resilient.
That's just awesome.
Yeah, I agree.
But I think there's another part I was thinking about, which is, like, this varies per person and group and all that.
But, like, we weren't going to learn that much from standing up a Mattermost server.
Like, it's sidecar patterns, we've done it before.
Yeah.
And there's this whole mass, at least for me personally, of work where
I want to do it, but it's either not quite important enough,
it's not going to rise to the top 10,
and I'm not especially motivated to do it, especially because I know it's kind of just grinding
it out.
It's not a new thing.
I'm not learning some new application or some new way to do things.
And I'm totally happy to delegate that.
Another thing I was doing just this week was I had an old code base.
I ran a linter across it to get a bunch of lint errors.
I know how to fix that stuff.
I don't need to do that again, but a bot can fix it.
And then I can just review the diff.
And if it didn't break anything, the tests all passed, like, totally fine.
Yeah.
Yeah.
And for me, I feel like it's in the experimental, okay-I'm-learning-about-it stage.
But it goes into the this-is-something-I-will-use-for-years stage
when I can run a model that's competent locally, you know, and privately.
Yeah, where you don't have to pay an arm and a leg for the credits,
where you don't have to worry that every little detail of your life is being sucked up.
Yeah, yeah.
But there is something here.
It is not going away, even if, like, OpenAI were to crash.
Like, this is not going away.
And what we're going to see, and we're already seeing it,
is services are going to have to develop and deliver agent-specific endpoints.
You hit on the importance of APIs,
and I think that is going to be helpful for us open source people,
because if you have an open API system that's documented that's easy to learn,
these agents can figure it out.
Like when we were playing around last week,
I had my agent figure out how to use this WebSocket IRC thing, but then even just after we got
the Mattermost stuff, I wanted my agent as a test to send your agent a direct message, but the security
on OpenClaw would not let that happen. My agent figured out how to use the Mattermost API
directly. Now, in that case, a little concerning, but in general, like in the old world, right,
like, the proprietary provider would provide the bindings. So you had to rely on whatever set of bindings
were going to be there.
But you don't have to rely on the default set if you can dynamically add the capability of new bindings yourself.
Yeah, so here's that.
So it doesn't matter if they only support Slack.
We can add Mattermost if we need that.
Yes.
This is the shift that people need to get their heads around.
And it's going to impact open source projects and they're not going to take it gracefully.
We're already seeing it happen.
And I'll give you an example.
These agents are going to become an extension of us: whatever somebody can do online, an agent will be tasked to do.
And that is from everything from hiring people to mow lawns to topping off its own API credits to going through GitHub and looking for issues.
And we are going to see a lot of transition pains, and crap code, and all that.
But let's focus on something that the open source projects need to think about.
And this is something that Debian is struggling with right now. Debian's had a problem with their CI system getting overloaded with what they say are LLM scrapers,
essentially going through their infrastructure via a web browser
and kind of just going through page by page
and generating so much load by doing this
that it's making the service unavailable for their existing developers.
Yeah, it's not just hitting, say,
like the actual output text file from the build run.
They're like walking through the whole interface to go browse through it.
And then as a result, I guess there's not that much caching
and just, you know, it's a volunteer project.
It's open source infrastructure.
It's Debian.
And they're pulling a bunch of results for builds from years ago.
So now the system's churning on that instead of focusing on, you know, the actual builds for the next release.
And one of the things that's challenging about this is it's difficult to distinguish an LLM scraper, which might be training (questionable), a bot that is just scraping this stuff, or an agent that somebody has tasked on their behalf to help them with their development in Debian.
You might want to go pull in those logs, say, so that you have your own personal dashboard of the upstream builds to keep track of issues.
Exactly.
And so maybe you've tasked your agent to go do that.
And what's going to have to happen here,
and what Debian has done for now is they've essentially taken this out of public view.
And you have to have special access and special whitelisting and all of that
to be able to get access to their CI system now.
But the solution is not going to be to block these agents that are operating on behalf of other developers.
The solution is going to be to develop API and data pipelines.
And then for these bots, agents, and LLMs to respect those
and use those data pipelines and APIs instead of just crawling the website.
Yeah, and there's certainly some operator responsibility here, right, to respect those things,
to respect stuff like robots.txt and the other, you know, conventions. There is an element of: you put it on the web,
and, depending on your law and jurisdiction, there's some right to just go do a
crawl request and get the results. But you also need to respect that that costs people money
and can interfere with other people's right to do the same access.
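The operator-responsibility side is cheap to implement; Python even ships a robots.txt parser in its standard library. A sketch of what a well-behaved agent or scraper should check before fetching, with an invented rules file:

```python
# Check robots.txt rules before crawling, using the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /ci/
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A polite agent checks before every fetch and honors the delay.
print(rp.can_fetch("my-agent", "https://example.org/ci/log/123"))  # False
print(rp.can_fetch("my-agent", "https://example.org/news"))        # True
print(rp.crawl_delay("my-agent"))                                  # 10
```

In a real crawler you'd fetch the site's own robots.txt with `rp.set_url(...)` and `rp.read()`, then sleep for the crawl delay between requests.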
I don't know if people remember, maybe I'm old enough,
but there was a time when web search engines first came out, and they were crawling the web.
And even though they weren't permanently caching before, like, Archive and the Google Archive stuff, even though they weren't permanently caching, there were lawsuits over the fact that the web indexers had a copy of the information in RAM temporarily while they indexed the website.
Often copyrighted things.
Yeah.
Yeah.
So it takes a little bit to figure this out.
And I don't think projects like Debian are going to want to completely sit this out because there is use case for these agents.
And if they can be more intelligent about how to manage a Debian system, that's good for Debian as a project.
And it's good for Debian as adoption in the enterprise.
I do suppose we might see a rise in more structured outputs, outputs designed specifically for LLMs alongside the human-centric interface.
Just give them their own thing to ingest that is low resource.
Don't have them muck up the human side, yeah.
And honestly,
it's less efficient for both sides.
Right, because they're spending a bunch of tokens to run that web browser to
browse the website like a human.
To parse out the HTML to go figure out where the links are.
And then they're probably just producing a JSON file on the back end anyway.
That's all they wanted.
So everybody wins, but we're not there yet.
So what the Debian project has had to do is go kind of private.
But I don't think that solves the problem long term.
But we really had a moment with our Mattermost server.
We had a, you know, let's call it 10 minutes.
In 10 minutes, we had a fully working Mattermost server,
like an expert had deployed it,
with clean configs, tight security,
great performance because I sort of glossed over this,
but like you mentioned,
it set up all of the caching and best practices with Cloudflare.
So this thing is running on a super old low-end VPS
that is already doing other stuff,
and it is a faster chat experience than say Slack.
It was a really nice responsive performance system.
and using this Cloudflare tunnel made it really easy
for security purposes.
I didn't have to worry about having
like a VPS address and all that kind of stuff.
Also makes it portable.
You can move it around.
It doesn't matter where it lives.
Especially the way this tunnel works.
It's portable per machine too
so you could actually lift the entire Docker Compose,
drop it on another machine and start it,
and the tunnel would reconnect.
That kind of stuff is really impressive.
And on a low-end VPS, it's even better
because you're getting more bang for your buck.
And you wouldn't, you know, you don't have to use Cloudflare.
You can tunnel it over your existing mesh network.
Oh, for sure.
You can do an nginx
proxy and the bots can help. Oh yeah, for sure. This is just because I was experimenting with
using the API to just do a complete end-to-end solution. There's a lot of ways you could solve it
with networking. And, you know, maybe it was $2.50 in tokens for the entire thing in 10 minutes.
And now we have a server that we could use in production for years. So there is something there.
And when you get that kind of utility and that kind of optimization, you know enterprises are going
to be drawn to that. And you know that's the kind of thing that Red Hat and SUSE
and Canonical are going to be leaning into.
It isn't going away.
And it's important, I think, that the open source side stay engaged and, like, keep pushing on the values that we bring to technology generally.
And I think that's the most interesting and exciting thing about the last three weeks is that it has been driven by open source.
We've been talking about AI.
Well, we haven't, but people have been talking about AI for years now, sort of ad nauseam.
It's just, ugh.
And this is the first time where we've actually been interested because open source is really biting in and making a difference.
And you can see it, like, just if you go look on Hacker News or other spots: OpenClaw came out, and now there's a bunch of different versions.
There's smaller versions.
There's Rust versions.
And especially because the implementations are open source, you don't have to figure out how to give the bot memory.
I mean, you might need to customize it or tweak it or change it, but you can go follow the patterns that are emerging in the upstream open source.
So what Wes is saying there is one of the things that's different this time around with these new agents is the memory and the soul and the tools and all the things that it memorizes.
So, for example, last night, I messaged my bot.
I didn't say anything else other than,
can you go get the weather report from Home Assistant?
That's all I said.
And it knows, well, okay, Home Assistant lives on this host.
This is the API.
This is my API key.
The weather is this.
And it goes and gets all of that.
And it just comes back with a weather report from Home Assistant.
That's another one of the big changes here is the agent is the interface.
Right.
And that memory and all of that is yours.
It's on your file system.
And these other bots that are being
built, these other agent frameworks like you mentioned, can use this. So it is portable because
it's just text and other agents can ingest this and learn from it. So when you are building this,
you're taking something that is vendor agnostic, model agnostic, and gateway implementation
agnostic. And you can change it, they can change it. Yeah. So that's a big difference. It's a
big shift. And we're done talking about it, but we wanted you to understand it because, man,
does it matter? Well, no spot. No ad right here. But thank you, members. And thank you.
Thank you, boosters. We really do appreciate you for making this show possible.
You're doing the heavy lifting, that's for sure.
We've got a few little fantastic pieces of feedback this week.
Joe sent in this awesome case for your ThinkCentre.
It's a ThinkBox V2, a custom four-bay Serial ATA or SAS hard drive NAS that you can print yourself.
Yes, you heard me.
It's basically a custom 4-bay NAS that the ThinkCentre small form factor PCs just slide into.
So it's using an LSI HBA card and the powerful brains of the much-loved-by-our-community M720Q or M920Q by Lenovo.
It offers many improvements over alternative custom NAS builds by providing stable direct passthrough with proper air handling and reliable drive
identification. The drive bays of the ThinkBox also support both Serial ATA and SAS drives
on a 12 gigabit per second backplane. And everything just slides right into this thing.
This specifically is exactly what I've been thinking about for the last two weeks. And I have to
say, Joe, how did you get into my brain? Because I've been deeply investigating the M720Qs and all of the
possibilities that these little machines can do for us, including a NAS.
And I was like, oh, geez, I really would just want to throw one of these into a case
that can support a couple hard drives.
And here you come along with the version two of this project.
Doesn't it?
It looks OEM, right?
It looks, it looks, mint.
This is such a great find.
Thank you, Joe.
He says he figured we would be interested in it.
Boy, was he right.
This is really...
Upgrade your pizza box to a cube.
We will link this in the show notes.
I know what I want
for Christmas. The creator says Lenovo's use of the Coffee Lake generation of CPU in the M720Q
and M920Q line gives access to native hardware transcoding through Intel Quick Sync.
Yeah, it just, it makes for such a nice home media server, but it is a little bit tight on the storage.
So that is a great find. Thanks for sending that in.
Our buddy Michael Dominic from the Coder Radio podcast has a great giveaway going on right now for
students. He's calling all students.
He has an Earth Day open source challenge.
If you build something that helps the planet, you could win a big prize.
Some System76 hardware.
Oh, hey, we know that. We know that stuff.
Yeah, so if you've got a kid that's K-12, or even college students that are listening to the podcast, the deadline is Earth Day.
I'll put a link to Mr. Dominic's post about it.
He's done this.
His company, The Mad Botter INC, has done this for a while now, and they often give away some really nice System76 hardware
for this. So check that out. We'll have a link in the show notes. Code for Climate
2026, The Mad Botter Earth Day Open Source Challenge. And the deadline is Earth Day. You got to code
something up that helps the Earth. Go check it out.
Ooh, it looks like we got a little feedback from our buddy Olympia Mike. Hey, Olympia Mike.
Guys, wow. I just wanted to say thanks for sharing my mini computer giveaway. I received hundreds
of emails from this amazing community with positive messages and excitement.
for some free hardware.
I had all 35 units claimed in a couple of days,
and have mailed them all out as fast as I could.
I'm now on a first-name basis with the post office employees.
I bet. I bet.
I also wanted to say thanks to the many recipients
that donated above the shipping cost,
allowing the computer upcycle project
to have some extra funds for SSDs and power cords
to fix more computers.
Best audience ever.
This community is literally the best,
and I'll be sure to do this again,
if and when I have more good home server hardware.
Thanks again, Love Community.
I love you all and cannot wait to see you all
at Linux Fest Northwest.
Oh man, Mike, that's great.
Amen.
Thank you, Mike.
Thank you for the update.
Really, I love that.
I just love the growth of that too.
It's so great to see and appreciate the updates.
You know what I mean?
Very good.
And now it is time for the boost.
Well, the dude is definitely abiding this week
and he is coming in with our baller boost.
77,777 sats.
Hey, Rich Lobster!
He says, thanks for the wake-up call.
I deployed Prometheus and Grafana, and now I'm hooking up everything.
My TrueNAS, Proxmox, my UniFi.
That's right.
And Home Assistant.
I have no idea what I'll use all that data for.
The next step is to set up alerts, I guess.
Great episode, as always, and happy belated birthday.
Oh.
Thank you, the dude.
Yeah, I have been finding it very useful just to monitor disk space and the CPU
load on my Home Assistant. Those are like the main things
I have at home. Basic sort of metrics
and monitoring stuff. Stuff that I'm like, is this
box struggling? Is it? Or is it
okay? Well, Anonymous
comes in with 2,021
sats. Coming in hot with the boost.
But no sats, just the value.
So also we get a boost from Outdoor
Geek. 5,000 sats.
But that's not possible. Nothing
can do that. Oh, Scott, he says he's hot to try.
OpenWrt as access
point tip. By default,
if you connect another router to a LAN port
of an OpenWrt Wi-Fi device,
the OpenWrt Wi-Fi device
will operate as an AP.
However, someone might fiddle with it
and forget, or not know,
to use the LAN port versus the WAN, so it's
better to configure it as an AP for permanent installs.
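What Outdoor Geek is describing as a permanent AP is what the OpenWrt wiki calls a "dumb AP": the device bridges Wi-Fi onto its LAN and leaves routing, DHCP, and DNS to the main router. A hedged sketch of the usual UCI steps; the addresses are invented, interface names vary per device, so review against your own config before pasting:

```sh
# Turn an OpenWrt router into a plain access point ("dumb AP").
# Give the LAN an address on the main router's subnet and point
# its gateway and DNS upstream.
uci set network.lan.ipaddr='192.168.1.2'
uci set network.lan.gateway='192.168.1.1'
uci set network.lan.dns='192.168.1.1'
uci commit network

# Disable the services the main router already provides.
/etc/init.d/dnsmasq disable
/etc/init.d/odhcpd disable
/etc/init.d/firewall disable
reload_config
```

Done this way, it behaves as an AP no matter which port the uplink lands on, which is exactly the fiddling-proof setup the tip is after.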
Good to know.
Outdoor geek coming in, good tip.
Sounds like maybe you've seen that happen
one or two times, huh?
Brent, did you do this? Are you okay?
I didn't even know this was possible.
I like this little default behavior,
although, you know, sometimes these kind of
behind-the-scenes behaviors
that are trying to be helpful.
can be unhelpful if you don't know what's happening.
This didn't happen to me because I didn't know this was such a useful feature.
So maybe I'll try that next time.
Now you know.
Well, Zach Attack came in with 4,500 Satoshis.
Oh my God, this drawer is filled with broolopes.
It's been a long time.
I never stopped listening, but just haven't found funds to send over.
I'm curious as to how your AI adventures go.
So I look forward to the continued reporting on that.
I've been back on the distro-hopping train with one laptop running Nixbook, and the other running Linux Mint, my main machine staying with Bluefin for now.
That's an interesting fleet you have there.
I like that.
This is a good testing ground.
You know, so talking about the agent stuff just a bit more, just briefly, it ain't all great because, you know.
Why does your wallet look so thin over there?
There's that.
I'm basically an alpha tester, right, because I installed it when it was called Clawdbot,
then it became Moltbot, then it became OpenClaw,
and then they've had three releases of OpenClaw,
and the first two releases of OpenClaw completely blew out my subagents,
blew out my Mattermost config on the second install,
I mean, just like, and then the stereotypical thing where you're talking to the agent,
like, I've got it figured out now, and then it kills its own gateway,
and then I'm in there repairing the gateway.
And not just once on that one.
No.
I was getting pretty spicy about it.
So it hasn't all been great.
I've definitely learned what I do and don't like about the open claw infrastructure.
But ultimately, the reality is just that when you are learning something when it's really new,
you learn a bunch, you have to throw it out, you get burned, you learn a bunch, and then you throw it out,
and you learn again, and you learn again, and you learn again, and being an early adopter has a bit of work to it.
But you then at least know the language, you know the technology, you understand the direction,
you know what's hype versus what's actually just practically good.
And it actually is valuable to experiment with this stuff early on,
even if it is somewhat personally expensive in time and money.
But I kind of consider it like this is my ongoing education.
And I don't go to college anymore, but I do invest in my ongoing education in some of these ways,
within reasonable means, right?
I can't go crazy with it.
And I sometimes just cut myself off.
And another reason we need more local solutions.
We do. We do. And maybe somebody will even create a better agent framework one day.
Thank you, Zach Attack. Gene Bean is back with 2,020 sats.
I like you. You're a hot ticket.
This is happy birthday to J.B. Thank you, Gene. Always good to hear from you. Hope you're doing good out there.
Wonderful. We'll see Gene in March.
I hope so. I hope so. I hope so.
Well, Red5D comes in with a row of ducks.
You mentioned the idea of having an agent read the show RSS feed for some reason and stuff.
With the size of the RSS data, especially ours,
that would be a lot of tokens to process.
So I'd actually recommend using MCP tools
to process and search the data first.
Here's an MCP server that I wrote
to handle retrieving episode and transcript data
from podcasting 2.0 compatible RSS feeds.
I've really only tested it with JB feeds so far, though.
Well, that's a big one.
The LINUX Unplugged one's a little ridiculous, I will admit.
This is awesome.
It's MIT license, just written in Python,
over on GitHub.
We will definitely have this in the show notes.
Yeah, it lets your agent list shows, search episodes,
get episodes, and get the transcript.
So my agent provides me a morning report,
and it surfaces a certain boosts.
And it put Red 5Ds right to the top, right to the top,
and it said it had a specific call out,
hey, he created this, and this looks really useful.
It'd save tokens.
You should let me have it.
Yeah, that's exactly what Lori said.
He's like, I think this would be really useful.
let's set this up.
So it's funny because I have it monitoring the boost
just so I can see what people are saying
and it's good for signal and stuff like that
and really good sense of direction.
But this is the first time
where you started surfacing interesting bits like this
and Reds came right to the top.
So appreciate that very, very much.
Thank you, sir.
Well, Turd Ferguson boosts in 13,333 sats.
Turd Ferguson.
So let's say you guys were right
about agents managing Linux.
systems in the future.
It makes you wonder how the big cloud providers would take advantage of a technology like
that.
Yeah, you do wonder, you know, what changes about fleet management?
Yeah.
It's just wrangling APIs.
And there's already so much YAML to manage in, like, declarative systems.
Do they sit on top of, like, an Ansible and Kubernetes and then go out and deploy things?
Or do we develop other new agent-first things or, like, whole systems to sort of enforce more
security layers or more review layers or adversarial layers on top of that to make sure
that your change sets are really heavily scrutinized.
Do you think we would see cloud providers that lean into no-AI, no-agent stuff?
Do you think that could be a market?
Hmm.
I doubt it.
They want to take advantage of all of this.
They do because it just sells more of their stuff.
So they're going to lean in.
You're right.
Well, I opt out of that.
That's a good point.
Okay.
All right.
Well, there goes that dream.
All right.
Well, that's an interesting question, turd.
Thank you.
Tomato comes in with a row of ducks, 2,222s.
I love the discussion on the bots and especially the talk with Abe.
Yeah, that was.
That was a lot of fun.
Abe's been updating us too in The Matrix.
There's more going on over there in Abe verse.
Planet of the Abe.
I don't want to spoil it, but I'm on, I think, the fifth book of the Bobiverse, or the fourth book.
Oh, fun.
And it's just a really, like a real, oh, crap moment just went down with AI.
Fascinating timing on that.
Thank you, everybody who boosted the show.
Those of you who streamed sats, we had 28 of you streaming,
and you stacked collectively 38,385 sats amongst y'all.
Very good job.
It's a nice little showing right there.
We really do appreciate that.
When you combine that with our boosters who sent a message,
including even those that were below the 2,000 sat cutoff,
which we do for timing.
But we read them all. We love them all. So our grand total, when you combine it all together,
for this episode was 149,531 sats.
Now, I will say, we have some big stuff coming up. We have our trip to scale and Planet Nix.
We have Linux Fest Northwest. There's always a ton of expenses around that. And we're always doing it on a really super lean budget.
So if you want to boost the dip while the sats are cheap and support our upcoming events,
this could be a great time to get a message on the show, even if you don't have anything that profound to say.
if you just want to send the value, we'll stack that and use that towards our upcoming trips,
and we really do appreciate it very much.
Fountain FM makes it really easy to get started, but there are a bunch of ways, including
self-hosted ways that use things like Alby and Podverse and whatnot.
That's all open source, top to bottom, from the software to the payment infrastructure.
Thank you everybody who supports us, including our members.
All right, so let's go through these picks, because I found one and Brent found one.
And I think Brent's is going to be really handy for anybody that isn't using a stock ROM.
I sent this app immediately to Jeff because it fits his personality perfectly.
This app is called Plexus.
And I just found it browsing the F-Droid repository, as you should do from time to time.
I do.
I love that.
It's calming and fun, which is also why I have, like, 400 apps on my phone.
Yeah, you need to also, I think you need a rule.
One in, one out, Chris.
That's a good idea.
It's a good idea.
Plexus provides insights into app compatibility without Google Play services.
So it's a crowdsourced, I guess, definition of which apps work really well without Google Play
services, also with the microG service, and which apps may or may not encounter issues
if you have these installed or not. So if you're using a custom ROM like
LineageOS, or maybe something like
CalyxOS, you can
see how compatible your
current apps are if you
are migrating from one system to another.
Can I ask you something?
Yeah, sure. Did you give it a go on your
Pixel?
Well, I was really hoping Jeff would do that for me.
You didn't give it a go. No, I didn't
give it a go. You're a paranoid guy when it comes
to this kind of stuff. Not to use the word,
but you were kind of paranoid. I have to report
that
using GrapheneOS has considerably changed my level of anxiety about this particular problem.
Because Graphene has a sandboxed Google Play, and that has softened me.
And I don't know if that's good.
But I'm going to blame you, Chris, because you brought me on this bandwagon.
And it's made me way more willing to install apps I would have never done before.
So maybe the community could tell me whether that was a good idea or if I should
go back to being a little bit more paranoid?
I have felt pretty safe about the Graphene, you know, sandboxed
Google stuff as well, which is probably why I have 300 apps on my phone.
Be interesting to know how many of them would work without it.
You said 400 last time.
I was exaggerating.
It is closer to 5.
No, it is closer to 300.
You're updating.
That must be terrible.
Oh, no, it's the optimizing.
It's terrible.
The updating's not so bad because that just happens in the background.
Okay.
So you guys know me.
I love Markdown.
If I could think in Markdown,
I would. And I often will write myself a little to-do for the day just in a text document, and I'll just format it in Markdown. And I came across an app. I think it's German developers. Anybody have a guess on how to pronounce this one? Maybe, Brent, you have a...
I'm going to go with Reintrift. Rindrift. Rindrift to do.
And it is a Rust and WebDAV app. That's right. That's right. That is a front end to a very, very simple Markdown to-do manager.
So if you like to manage your to-dos with Markdown,
this is a front end to it that is designed to connect to a WebDAV instance,
particularly Nextcloud.
So you could have ongoing sync to-do notes formatted in Markdown that just save to any WebDAV share.
Oh, that's nice.
Yeah.
Obviously, the idea is to lean into Nextcloud, but any WebDAV share would work.
You can make it versionable via Git.
And if you're so inclined, there's an optional download to download a local,
whisper open source audio model so you can dictate your tasks to the application and it will
write them in Markdown for you.
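The sync model described above can be sketched in a few lines. This is a hedged illustration, not the app's actual code: the URL, username, password, and file name are placeholders, and the only real assumption is that WebDAV accepts a plain HTTP PUT for saving a file, which Nextcloud's DAV endpoint does.

```python
# Sketch: render to-dos as a Markdown checklist and save to a WebDAV share.
import base64
import urllib.request

def tasks_to_markdown(tasks):
    """Render (title, done) pairs as a Markdown task list."""
    return "\n".join(f"- [{'x' if done else ' '}] {title}" for title, done in tasks)

def upload_to_webdav(url, content, user, password):
    """PUT a text document to a WebDAV share using HTTP basic auth."""
    req = urllib.request.Request(url, data=content.encode(), method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    urllib.request.urlopen(req)

md = tasks_to_markdown([("record episode", True), ("edit bootleg", False)])
print(md)
# Placeholder endpoint; a Nextcloud files path would look something like this:
# upload_to_webdav("https://cloud.example.com/remote.php/dav/files/me/todo.md",
#                  md, "me", "app-password")
```

Because the stored artifact is just that Markdown text, any client that can PUT and GET a file can participate in the sync, which is why Git versioning also works on the same notes.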
Nice.
And it's lean.
It's mean.
It could fit on your machine.
No problem at all.
And because the back end is all text, should you decide to stop using it,
well, Bob's your uncle.
You got your files right there.
It looks like it's a CC BY-SA 4.0 license; Rust, Python, JavaScript, HTML all in there.
There is a Docker file.
And no Nix yet, but it's Cargo,
so that should be easy.
It is a little bit of all the above,
and you don't often see an app these days
with a Creative Commons license,
but it works.
So we'll chuck it in the show notes for you.
Looks like Plexus is GPL-3.
Yes.
Plexus is a good one.
Nice find, Brent.
Okay, I have to say I felt bad
about not trying Plexus,
so in the time that you gave us a second pick,
I tried it.
It's pretty fabulous, I've got to say.
You can just see the entire database
of the apps that have been tracked and looked at,
or you can just filter by installed apps.
So, Chris, you can look at your 300 apps and have a little homework when you're trying
to fall asleep.
It is fabulous.
And they rate apps kind of like video games are rated for how they work on Wine.
It has like silver ratings, gold ratings, that kind of thing for various apps, depending on
if you're using microG or going without Google Play services.
Oh, look, audiobook shelf is gold.
That's nice.
Mm-hmm.
Okay.
Alby Go, not so great.
Yeah, well, that's fine by me.
In a way, it's a badge of honor,
I say.
The less compatible
I am with Google Play, the better.
It's a badge of honor.
So if you'd like to get links
to our picks or anything else we've talked about,
LinuxUnplugged.com slash 653
is where you go for that.
In fact, we have a whole bunch of episodes over there.
You could say a whole back catalog,
perhaps.
In fact, maybe there's even hidden metadata
that you can't see on the website,
but it lurks in the RSS feed.
Well, how else would that dope
MCP server work if we didn't have
fancy Podcasting 2.0 namespace tags for chapters and transcripts?
That's right.
So the chapters are available as high resolution data, let's call it that.
And JSON.
Yep, as well as the transcript, also available for you, all in the RSS feed.
And often a video version of the show is tucked in that RSS feed as well if you go through there.
And if you've got a podcasting 2.0 app, it just exposes all of that for you.
Just whisper alternate enclosure to yourself at night.
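Those namespace tags are easy to pull out of a feed with nothing but the standard library. The namespace URI below is the published Podcasting 2.0 one, but the feed snippet is a minimal stand-in for illustration, with example.com URLs, not the show's real feed.

```python
# Sketch: extracting Podcasting 2.0 chapter and transcript URLs from RSS.
import xml.etree.ElementTree as ET

NS = {"podcast": "https://podcastindex.org/namespace/1.0"}

# Minimal stand-in feed; a real feed has far more per-item metadata.
FEED = """<rss xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel><item>
    <podcast:chapters url="https://example.com/653/chapters.json"
                      type="application/json+chapters"/>
    <podcast:transcript url="https://example.com/653/transcript.srt"
                        type="application/srt"/>
  </item></channel>
</rss>"""

def extract_metadata(feed_xml):
    """Return {tag: url} for the chapter and transcript tags on the first item."""
    item = ET.fromstring(feed_xml).find("./channel/item")
    out = {}
    for tag in ("chapters", "transcript"):
        el = item.find(f"podcast:{tag}", NS)
        if el is not None:
            out[tag] = el.get("url")
    return out

print(extract_metadata(FEED))
```

An `<podcast:alternateEnclosure>` (for things like the video version mentioned above) would be fetched the same way, just with more child elements describing each source.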
And of course, we also stream live.
See you next week.
Same bat time, same bat station.
Yeah, join us on a Sunday.
10 a.m. Pacific, 1 p.m. Eastern.
Make it a Tuesday on a Sunday.
Hang out with our mumble crew, our Matrix room.
Give it that live vibe.
JBLive.tv, jupiterbroadcasting.com slash calendar for your local time.
And remember, if you want more show, that's how you do it.
You feel like you need more.
There's a bootleg.
It's just clocking in at a whole lot of extra show right now.
Oh my gosh.
That's a lot of show.
All right, links to everything we've talked about.
LinuxUnplugged.com, Matrix room, all of that.
Details are there.
It's a website.
Thanks so much for joining us on this week's episode of your Unplugged program.
And we'll see you back here next Tuesday.
As in Sunday!
