LINUX Unplugged - 651: Uptime Funk
Episode Date: January 26, 2026

When your self-hosted services become infrastructure, breakage matters. We tackle monitoring that actually helps, alerts you won't ignore, and DNS for local and multi-mesh network setups.

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:
💥 Get Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
Using Experimental Lighthouse DNS with Nebula | Nebula Docs
PlanetNix 2026 — Where Nix Builders Come Together
SCaLE 23x | Registration — Get 40% off registration with promo code "UNPLG"
Pasadena Linux Party Meetup
NACME: ACME for Nebula PKI
uptime-kuma — A fancy self-hosted monitoring tool
telegraf-bcachefs-input — bcachefs collector by ananthb
ntfy.sh — Send push notifications to your phone via PUT/POST
meshSidecar — Mesh network sidecars for NixOS Services
Become a Core Contributor Member
Jupiter.Party Network Membership
NixOS Clinic Config
NixRTR/nixos-router — NixOS Router Configuration
Next Steps for Funding Contributors - Actual Budget
WiFiman - WiFi Analyser
RichARCH Install Guide
RichARCH Hyprvibe Screenshot
Pick: Switchyard — Modern rules-based URL launcher ready to replace your default browser.
switchyard on GitHub
Switchyard on Flathub
Transcript
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes.
And my name is Jeff.
Hello, gentlemen, coming up on the show this week: one Pi-hole, two VPNs, and zero public exposure.
I'm pretty proud of this one.
Then it's our pitch to ditch your GUI-only monitoring system and why we rolled out Prometheus and Grafana.
And then we're going to round the show out with some great boosts, some great picks, and a whole lot more.
So before we go any further, time-appropriate greetings to our virtual lug.
Hello, Mumbleroom.
Hello.
Hello.
Hi.
Yeah, you can join us in the Mumbleroom or at jb-live.tv.
Make it a Tuesday on a Sunday.
We have the times at jupiterbroadcasting.com slash calendar.
And a big good morning to our friends over at Defined Networking.
Go check out Nebula VPN.
They have a full managed product, 100 devices, no credit card required.
Support the show: defined.net slash unplugged.
It is a great service.
I've thought about it a lot. I talk about how Slack uses it,
how they launched it in 2017 to build out the security around the Slack global empire,
and I talk about how Rivian uses it for secure real-time analytics from their cars on the road.
And those are all really big-scale projects.
But recently, I've appreciated how great Nebula is on a one, two, three node network.
And the fact that I can set up an on-demand mesh network that has
name resolution and everything.
We'll talk more about this.
And there's no big tech login.
There's no third-party hosted admin dashboard.
Nothing like that.
It's just two machines using cryptographic keys talking to each other.
It's just a couple of text files, really.
It's so powerful for small home lab stuff,
and it's so scalable to massive enterprise stuff.
And you can try it out with their fully managed product
and support the show by going to defined.net slash unplugged.
You're going to like it a lot.
And I'll tell you, I've been using it on extremely limited bandwidth connections.
And it's so much better and it's so much more resource sensitive.
It's way lighter.
It's way lighter.
Check it out at defined.net slash unplugged.
And thank you to Define for sponsoring the unplugged program.
All right, you know we've got to mention it.
Planet Nix and Scale 23X are 39 days away.
That means 33 days until Brent needs to be going down the road, at least.
Let's just round that to 30.
And six, I believe, or five, actually, more LINUX Unpluggeds, maybe, until we need to be on the road ourselves.
Wow.
So it's coming up.
I think we better get in the Nixie mood.
Yeah, and I am really looking forward.
Planet Nix has a theme this year.
It's where builders come together.
And our Nix coverage is supported again by Flox, who's focused on making reproducible
dev environments actually usable, and it's a fantastic tool.
So check out Flox.
And come see us at Scale and Planet Nix.
You do need to register at SCaLE,
and you can take 40% off that registration with our promo code,
UNPLG.
And we'd love to see you there.
One other item.
The meetup page is now live.
The details are not yet locked in.
The daytime location likely to change.
But you can join the meetup and you'll be the first to get updates.
And if you are intending to join us at the meetup,
please consider signing up for the meetup.
Please, we'd love to see you there.
Last time we had about 80 more people than we expected.
Great problem to have.
It was very stressful on the restaurant staff.
And they thankfully could open up.
They had to open up another wing for us, which they were able to do.
But this time, we wanted to give them a great heads up.
So if you're planning to make it and you want to bring a guest, there's room for that too.
Just let us know and we'll plan accordingly.
Meetup.com slash Jupiter Broadcasting, link in the show notes to the direct meetup.
We'd really appreciate it if you could make it if you're in the area.
Even if you can't go to the event, you're welcome to join us.
at the meetup.
We did get one submission, that I saw, for a swag idea that we could hopefully have together for SCaLE and LinuxFest.
It was a nice one.
I'll show it to you boys after the show, but I'd like to see a few more.
Send them in to unplugged at jupiterbroadcasting.com, or tag Wes in Matrix.
And let us know.
We'll try to put one together pretty soon so we all have a uniform that we can identify each other with and have easy conversation.
Hey, I know you.
You listen to the show.
Are we getting hats?
Ooh, you know, I'm a hat guy now.
That's right.
Mm-hmm.
Mm-hmm.
Well, what is in a name, gentlemen?
In short, convenience, right?
When you set up your home lab or your enterprise network, whatever it is, it is eventually inevitable that you need good name resolution.
I suspect for you there might be a spousal approval factor in the mix for that, too.
Yeah, and also just a memory factor.
It gets hard to remember, especially the Mesh Network VPNs and the LAN IPs.
and of course I have to go and make it hard
and I have multiple mesh networks now,
multiple locations,
some behind carrier-grade NAT,
a couple of them behind double carrier-grade NAT.
So I had to go and make it hard on myself.
And I want sensible name resolution
that works on the LAN
and works across the various mesh networks
so I can identify,
so I can just, you know,
connect by machine name to all of them.
And then I need something also
that does fast forwarding out to the internet and then can cache that so then future queries are
faster.
And then was it something you wanted to, like, I don't know, do you have some of these services
that depend on other services in a way where like DNS is how they find each other?
Yeah, and a lot of the things I've set up are just by name now.
So, you know, I had a basic Pi-hole going on my tailnet, and I had a basic Pi-hole going
on my LAN.
But then we set up my wife's clinic.
And it was the tailnet Pi-hole?
Running, like, as a container on a VPS or something?
Yeah, and it just only had an interface on the tailnet.
So it was just acting as name resolution for the tailnet.
And then I kind of combined that with magic DNS and sort of had the whole tailnet thing solved.
Then I had to go set up another network and all of that.
And I also just kind of wanted to take another look at this and see if I couldn't do this better.
But when I had set it up for the tailnet only on the VPN, I took a shortcut.
Instead of having to worry about exposing a Pi-hole to the internet,
I just bound it only to the tailnet interface.
So I didn't have to worry about a public IP
and the internet banging on my Pi-hole server
that's on a VPS, because it couldn't talk to it.
But if I wanted to make this Pi-hole usable
across multiple mesh networks,
it meant undoing that convenient bit of security I had
and coming up with a better security architecture
to go across multiple networks.
That's where it got a little kind of more complicated
because I went from the easy way to the hard way.
And so there's multiple layers I kind of took to this,
and I'd like to hear your guys' feedback on it.
So the first step I took,
and I wasn't sure if this was the right call,
is that I essentially put the Pi-hole container on host networking.
So it could see all the interfaces.
And then in the configuration,
I limited the application configuration to only bind
to the tailnet
and the Nebula VPN interfaces,
and to not bind to the WAN interface.
So at an application configuration layer, I did that.
And then at another layer,
I also set up ACLs with IP tables,
just real basic IP tables,
that block all traffic on port 53.
So, like, just in case, you know, for a moment,
like when Pi-hole is starting up,
if for a brief moment it bound to Port 53 on the WAN interface,
this would essentially prevent that from happening.
Or if I make a config change mistake in the future,
it prevents it from exposing it to the public internet.
And so that's sort of the multi-layer approach in a way.
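That belt-and-suspenders layer might look something like this as an iptables-restore fragment. The interface names (tailscale0, nebula1) and the file path are assumptions for illustration, not details from the show:

```
# /etc/iptables/rules.v4 (hypothetical) -- load with iptables-restore
*filter
# allow DNS only on the mesh interfaces
-A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
-A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT
-A INPUT -i nebula1 -p udp --dport 53 -j ACCEPT
-A INPUT -i nebula1 -p tcp --dport 53 -j ACCEPT
# drop port 53 everywhere else, so a momentary or mistaken bind on the
# WAN interface never actually answers queries
-A INPUT -p udp --dport 53 -j DROP
-A INPUT -p tcp --dport 53 -j DROP
COMMIT
```

The ordering matters: the interface-scoped ACCEPT rules are evaluated before the catch-all DROP rules.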
And then all the communications just happening over the Mesh VPNs.
I'm not communicating with the Pi-hole over any public interface at all.
No admin interface, nothing.
How do you feel I did?
Is that too risky?
Brian, would you be comfortable with that deployment, I suppose?
I mean, to me, that feels probably more fine than anything that I've probably deployed.
in the past. So it seems okay, but really I'm not the pro or anything like that. But what I'm
getting from you is that this is upping your peace of mind with this. But there's also some hesitations.
I'm curious to hear what Wes has to say. Do you think it seems totally reasonable?
You could, you know, get with the times and use NF tables already. No, I'm just kidding.
I actually did consider it. I was like, this is what I know. But yeah.
I think from background discussions I picked up, maybe you were using a sidecar
before? A Tailscale sidecar, yeah.
So I think maybe another
version, if you were going, like, fully
you know, application
mesh native could be
to just double down on the sidecar.
Do a Nebula sidecar? Yeah, like have it serve
those two interfaces just in its own
containerized networking environment. I like that.
Where things might get more complicated, depending
on exactly what you want and convenience, et cetera,
what matters to you, is what you're doing with
that host, and is that host then wanting to query
the pie hole? And are you going to
let that happen over the local host, or
in this scenario, you'd kind of, you'd either need to replumb stuff and forward it or
rely on it only querying it over the mesh, which would probably be fine, but maybe you don't
want to do that.
The host is also on the tailnet.
So there's that too.
But, yeah, that is a tricky part.
Technically, the host OS can't talk to it over the network, which hasn't been an issue yet.
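The sidecar idea sketched as a Compose file: Pi-hole joins the Nebula container's network namespace, so it can only ever bind to interfaces that exist in there. The images, paths, and service names are assumptions for illustration, not a tested deployment:

```yaml
services:
  nebula:
    image: nebulaoss/nebula        # assumption: upstream image name
    cap_add: [NET_ADMIN]           # needed to create the tun device
    devices: ["/dev/net/tun"]
    volumes: ["./nebula:/config"]  # cert, key, and config.yml
  pihole:
    image: pihole/pihole
    network_mode: "service:nebula" # share the sidecar's interfaces only
    depends_on: [nebula]
```

With `network_mode: "service:nebula"`, the Pi-hole container has no WAN-facing interface of its own, which is the trade-off discussed above: the host itself then has to reach it over the mesh.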
But so that's the basic, that's the core network setup, okay?
And then what I decided to do was I turned off the Tailscale MagicDNS stuff, and didn't like
the results, because I do not have DNS entries for every machine on my tailnet,
and that's what Magic DNS was solving for me.
So my sort of compromise solution was I re-enabled MagicDNS, and then I added this Pi-hole
as the upstream DNS server for MagicDNS.
Totally.
And I think that worked seemingly pretty well, and then I enabled the DNS.
And in that setup, Tailscale will answer sort of right away for the Tailscale hosts, and then forward
to your setup for anything it doesn't know about, where you define your own manual
entries. And that's where you'll find entries for the Nebula devices. Nice. And then you can configure
the Nebula lighthouse to suggest a DNS server to the clients, right? And that's really
simple. It's like two lines of configuration on the lighthouse, and you just give it the DNS server.
And then so that's also helping the Nebula clients discover who they're supposed to talk to for
name resolution. And since I only have like three nodes on this little tiny, or maybe four nodes now,
on this little tiny network I'll talk more about, super easy to just add the entries.
And I don't, this is going to be for a private clinic, so I don't think I'll be adding more hosts.
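For reference, the lighthouse side of that is roughly the following, per Nebula's experimental lighthouse DNS docs; the listen address and port are assumptions you would tune to your setup:

```yaml
# config.yml on the lighthouse (sketch)
lighthouse:
  am_lighthouse: true
  serve_dns: true        # experimental: answer DNS for known hosts
  dns:
    host: 0.0.0.0        # assumption: which address to listen on
    port: 53
```

Clients on the mesh can then be pointed at the lighthouse's Nebula IP for name resolution.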
One thing we should play with, which I haven't yet, but I'd like to get more into is doing either delegation or maybe using an API to trigger updates because Nebula lighthouses can serve DNS.
Yeah.
So you could also, depending on if you wanted to, maybe the static has advantages too, of course, but you could also maybe set it up.
Yeah.
You know, the Pi-hole would just query Nebula and be able to answer for the Nebula hosts without you having to hard-code it.
The advantage was, on the Pi-hole DNS server,
now I also have a bunch of entries for the devices that are on my LANs.
So hosts here at the studio and hosts at the RV are also on this DNS server.
So all the machines, whichever LAN you're on,
or whichever mesh VPN network you're on,
can all resolve the same host names now.
So that's kind of why I didn't go that direction.
But I think that would be an easier setup if you just had a couple of machines.
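Those shared entries boil down to a hosts-style file in Pi-hole's local DNS records. The addresses and names below are made up purely for illustration:

```
# /etc/pihole/custom.list -- one "IP hostname" pair per line
10.17.0.5     nas.studio.lan
10.17.0.9     frigate.studio.lan
100.100.1.3   clinic-server.nebula
100.100.1.4   clinic-printer.nebula
```

Because every network forwards to the same Pi-hole, one file answers for LAN hosts and mesh hosts alike.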
Well, I meant like integrating the two, like keeping the Pi-hole, but
just letting Nebula answer for the hosts it knows about.
Oh, okay.
And then would it upstream to the Pi-hole when it doesn't know?
I see.
Yeah, I like it.
Okay.
Oh, my God, change.
Yeah.
How do you feel, Chris, about the need for Internet access here?
Because occasionally you don't actually have access whenever, you know, a storm comes by or you're traveling, that kind of thing.
So your name resolution internally on your local network would be affected.
Is that a correct understanding?
I did.
Yeah.
And so for that, I still, I kept my Pi-hole on my LAN.
Hmm.
And it just,
it forwards now to this guy.
Nice.
But for the most part,
because that Pi-hole's been around
so long, I have all these same DNS entries already.
So,
but I did keep it for that reason.
And I'm,
I'm very happy now.
It's,
it adds complexity to have two mesh networks and,
you know,
multiple LANs.
But it's seamless to the end user now that I've done this.
So I'm pretty happy.
And the,
and the latency is pretty,
yeah,
it's pretty good even for LTE connections,
really.
Well, you know, it kind of makes sense, too. It'd be one thing if you didn't have the existing infrastructure and all that, but because you kind of have hosts that are positioned to fit into both of these networks, or could bridge them, you didn't actually have to stand up a bunch of new infrastructure.
You kind of just had to reprovision some of it to better work with your new setup.
I would like to actually ask, so if you want to boost in or send us a contact, if you were building this from scratch, so I already had a pie hole going.
But if you out there, listener, were building this from scratch, what would you have used to do this name resolution?
because it did cross my mind.
Like maybe this is just a stupid DNS mask thing.
I just set up a simple DNS mask.
But then I like the idea of a little bit of ad blocking
for the systems as well.
That's nice.
That's a nice feature that comes with it.
And you can do dnsmasq configuration with Pi-hole, right?
Because it uses, like, a forked version.
That was my conclusion, yeah.
I was like, yeah, well, I might.
I kind of get, and I know how to use it.
Yep.
And it's worked fine for me.
And it's survived multiple major upgrades now.
So it's past those tests as well.
So it's a good project.
But I would be curious.
I was like, I think you could probably use Technitium, because I know it can do sort of, like, delegated zones, where it will say, hey, for anything in this subdomain, you know, maybe you have, like, dot-nebula domains or whatever, go query this server for those and then return those. It also has some plugin capability, which I haven't really explored. Or, you know, there's a lot of good options these days. Yeah. I saw some people that were solving this with AdGuard.
Okay. Yeah, yeah. You can totally use BIND, of course. So I'd just be interested to know how people are solving this.
I would also like to know if anybody has a way to solve this declaratively, you know, so that would also be a winner in my book.
But while we were talking about Nebula, you've been working on something that's kind of slick, Wes Payne.
Yeah, it was just an idea we had while we were toying around with setting up the clinic the other week. What if you just had, like, a low-key thing, you know, not crazy production scale, not a whole control plane for Nebula necessarily, but just something to make minting new host certs
easier. Yeah, could you explain that a little bit? So if I'm not using the managed product,
there's sort of some cert exchanges that have to happen. Yeah, right? So you have to, you're basically
managing a CA. Yeah. Right? So you have your own certificate authority and then to get hosts onto
the network. They generate their own private key, but then you kind of have to sign the public
part of that. And that's how they get blessed with a host name and an IP address on the network.
And then that's how anything trusts them. When you try to communicate with something, you have
to be able to present that public side that is signed by the CA that they all mutually trust.
And the beauty is, the simplicity is it's really coming down to files you're moving around
that have keys in them. And that is the totality of the infrastructure actually required to get this working.
And if you sit with the amazingness of that for a moment, it really is very impressive.
These machines are discovering each other. You need a lighthouse, or you can use a public one.
And they're communicating and creating a mesh VPN just by exchanging these key files.
Yep, and, you know, just simple concepts of groups and you have stuff signed by the right thing,
and it kind of all just works.
But for simple static networks, that works pretty well.
But, you know, I've been playing around with my sidecar mesh setup on NixOS.
And especially for, like, the demos I was doing and testing it out, products like Tailscale or NetBird are pretty convenient.
They have this UX where,
basically, all you need is one secret, right?
Like an API key, and you can put that in somewhere.
and then when the client launches,
it goes and uses that against an API
and then can onboard itself.
And I was just like, well,
I wonder if we could get that same workflow with Nebula.
So NACME, or Acme for Nebula,
is my little attempt at that.
It's super early days.
I need to do a bunch more testing.
Eventually, it would be great to do renewals too.
But right now, it's at the initial testing stage
of just being able to, you run a little server,
you can configure an API key
that's bound to certain groups,
and then you have a little client
that can run and go get a new host onboarded.
And so if you can configure that,
I also want to set up a bunch of this stuff,
especially with Nick's side,
but configure it to run before Nebula.
You could have Itgo item potently check
to see if it needs to configure the host for the first time,
set up the keys and everything,
and then have Nebula start and be ready to go.
Or at least that's the idea.
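The idempotent pre-start flow described above can be sketched in a few lines. To be clear, NACME is early-stage: the endpoint path, request shape, and file locations below are all assumptions for illustration, not its real API.

```python
# Sketch of an idempotent "enroll before Nebula starts" check.
# The /sign endpoint, JSON shape, and paths are hypothetical.
import json
import os
import urllib.request


def needs_onboarding(cert_path: str) -> bool:
    """First boot if no signed host certificate exists yet."""
    return not os.path.exists(cert_path)


def enroll(server: str, api_key: str, pub_key_pem: str) -> str:
    """Send the host's public key; the server signs it with the CA
    and returns the host certificate."""
    req = urllib.request.Request(
        f"{server}/sign",  # hypothetical endpoint
        data=json.dumps({"public_key": pub_key_pem}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


def ensure_host_cert(cert_path: str, server: str, api_key: str,
                     pub_key_pem: str) -> bool:
    """Idempotent: does nothing if the cert is already in place,
    so it is safe to run before every Nebula start."""
    if not needs_onboarding(cert_path):
        return False  # already enrolled; no network call made
    cert = enroll(server, api_key, pub_key_pem)
    with open(cert_path, "w") as f:
        f.write(cert)
    return True
```

The important property is that re-running it on an already-enrolled host is a no-op, which is what makes it suitable as a pre-start unit.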
Wes, this is really neat.
So it's automated certificate minting,
and it gives you essentially, like you said,
It's like an API-key type of exchange.
The goal, too, would be like it's sort of best effort, right?
It's meant for, like, HomeLab or, you know, stuff where you're not going to go the full, like, crazy IT automation.
It's great for, like, a small business network like we were just setting up.
And the whole thing with Nebula, right, is, like, there are some trade-offs.
You have less control with those certs, and it's kind of more like a JWT style of trust.
You know, you don't necessarily have this one database that determines all of the truth.
Right.
In a sort of more eventually consistent way.
but the upside is Nebula will just keep working, right?
As long as the certs aren't expired.
Yes.
There's no control plane; nothing happens, nothing goes down.
What freaked me out recently was the idea that maybe my Google account could be suspended because, so PayPal decided to flag my account for like re-verification.
And it's a very complicated process.
It's not just like a regular review.
It's a very in-depth, multiple-types-of-documentation process.
What did you do?
Nothing.
I don't know, man.
That's usually what it is.
I don't know.
But it occurred to me that if our Google Workspace account payment got bounced because of PayPal,
then I might not be able to authenticate to my tailnet anymore.
And that freaked me out a little bit.
And that's where I was like, oh, the simplicity of these keys and the fact that they'll work for as long as I issue these keys for is very reassuring.
And so my thought was this was like worst case, you know, like even if this is down, you can still manually add things.
Like this is just convenience functionality to make it easier to onboard hosts.
So this is not setting up routing.
This is not setting up the networking layer stuff.
This is just keys to get you going to then build that stuff.
Exactly.
That's really cool.
NACME.
I like the name, too.
It's very clever.
I think that could take off.
So we'll put a link in the show notes.
It's on Wes's GitHub.
N-A-C-M-E. MIT licensed.
Indeed.
Indeed.
And version 0.10 was released just recently.
Yeah, we'll see.
I should cut a new version.
It's moving, you know, it's moving fast and needs more tests.
I want to say thank you to our members and our boosters.
Next week, I'm calling it a birthday episode, boys.
I don't know what we're going to do, but check this out.
So Brian and I started podcasting in January right around my actual birthday almost exactly 20 years ago to the day for next week's episode.
20 years of podcasting on my birthday next week.
So if that's not a long-term commitment to the space, I don't know what is.
So send a birthday boost.
We'd love that.
Or become a member and use the promo code bootleg.
We have a couple of, well, we have a handful of redemptions left.
And, you know, you become a member at the party or a core contributor and support the show at a great discount.
And if you'd like to get your company, your product, in front of the world's largest and best Linux audience,
shoot me an email, chris at jupiterbroadcasting.com.
This space could be yours.
And thank you to everybody who supports the show.
We greatly appreciate it.
Now, Chris, for the last two episodes, you've been talking about deploying, you know, a bunch of new machines at a clinic, taking on the responsibility for keeping your wife's business happy from a tech perspective.
And now you're putting up, you know, some infrastructure that you also need to be working at all times.
I would imagine now is the time to make sure all that stuff yells at you whenever it's not in good health.
Right.
He wanted to just leave a business card with my phone number on it, but I didn't think that was a great idea.
And part of me is like, well, we should do this while it's still fresh in the mind, because this stuff fades.
Oh, yeah.
And I thought, well, if we're going to do this for Hadea's Clinic, maybe I should do this for my own infrastructure.
And then I thought, wouldn't it be great if we could build something that if we ever did this, you know, on occasion for audience members or whoever, wouldn't it be nice if we could also offer to monitor their stuff?
And I could build something that was pretty flexible like this.
And I'm sure you boys are familiar with Uptime Kuma.
We actually use it at J.B.
Alex set it up for us a while ago.
And we like it.
It's pretty simple.
And it alerts us when something goes offline, like one of our websites via a telegram bot,
and it creates a nice dashboard.
Super easy to self-host.
And they have a demo.
I'll put a link in the show notes.
And it does monitoring for HTTP, TCP.
It can search website keywords, check websockets, do ping.
Check DNS records, stuff like that.
Very easy, straightforward to get going.
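If you want to try Uptime Kuma yourself, the usual self-host path is a one-container Compose file like this; the image and port are the project's documented defaults, and the volume name is arbitrary:

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"            # web UI
    volumes:
      - uptime-kuma:/app/data  # SQLite database and settings
    restart: unless-stopped
volumes:
  uptime-kuma:
```

Bring it up with `docker compose up -d` and finish setup in the browser at port 3001.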
Yes.
So, obviously, that was the first thing I decided,
because this is what I have the most experience with.
And I thought this would be the way to go.
Can you guess what the problem was?
Well, I think I know because it's been a longstanding issue
we've had with the project.
In fact.
What's that?
A lack of declarative configuration.
Yeah, man, it's just the GUI.
Well, I had to set up something like, you know,
45 hosts and services to monitor.
And I was sitting there creating all the entries and I'm like, this is going to take me two days to do.
And also because I wanted them to be actionable and all this stuff.
So it's like, oh, my God.
And then I also wanted, I wanted tiered escalating alerts.
So first, ping me via ntfy.
Because I have a lot of stuff coming in via ntfy these days.
And so that's sort of a feed of just checking on my systems.
And if I'm available, I check it.
But if I'm not presently thinking about my infrastructure or my system,
I don't check it.
So I needed something
that would break through
to telegram and kind of,
you know, kick it up.
Yeah.
And so I wanted it,
and I wanted it based on different
thresholds and trends.
And so when I started getting into,
I need to add,
yeah, let's just say 45-ish host
and service combinations,
maybe more.
And more complex alerting
with a little bit more nuance.
I really started to hit
two different walls.
Like my GUI exhaustion kicked in.
Like if I was adding like five systems,
I would have just done it.
And then trying to get complex logic
around alerts
started to get frustrating.
Yeah, you combine those two.
That could be pretty annoying, especially if you have to, like, go configure it and then
go run the test and then go see if it did the thing you want, and then go repeat that cycle
a whole bunch.
Yeah, man.
So I decided to break the seal on something that I have never, I've never bothered learning.
I've never wanted to embrace.
It's been a long time coming on the show.
I'm excited for them.
It has. Ladies and gentlemen, I have finally deployed my first Prometheus instance.
And of course, once you have a Prometheus instance, you want pretty dashboards and you want all the details.
So along with that, I have also finally deployed my first Grafana instance.
Yeah, something tells me you really deployed the Grafana, and then you just got the Prometheus so you had something to build it with.
It feels like that deserves a round of applause, really there, Chris.
Give yourself.
Yeah.
Thank you.
Everybody.
I mean, uptime Kuma is good, but I needed a little bit more than that.
And I also have kind of a complex situation that I thought Prometheus was a little better at solving.
I have set up a federated configuration.
And I had a problem where I have my ODROID in my home lab that doesn't have a lot of available resources.
It has some, but now it's doing Frigate and a bunch of other stuff.
And I'm on LTE on two different ends of the connection.
Not all of them, but two different ends are on LTE network.
So they're slow.
And I can't just be blasting a bunch of data.
My original idea was I'd set up a central VPS, connected to all the mesh networks,
and then it would monitor everything.
And I'd just go to the VPS dashboard.
But then I actually started running the math on that.
And I realized the overhead would be somewhere between 40 megabytes and 100 megabytes a day, best case, plus the overhead and latency it adds, slowing down the whole LTE connection while doing that stuff.
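A quick back-of-envelope shows how centralized scraping over LTE lands in that range. The host count, payload size, and interval below are illustrative assumptions, not measurements from the show:

```python
# Rough data-usage estimate for remote Prometheus scraping over LTE.
# All inputs are illustrative assumptions.
targets = 5            # remote hosts scraped across the mesh
payload_kb = 20        # rough compressed size of one node-exporter scrape
interval_s = 120       # scrape interval in seconds

scrapes_per_day = 24 * 3600 // interval_s           # 720 scrapes/day
mb_per_day = targets * payload_kb * scrapes_per_day / 1024
print(round(mb_per_day), "MB/day")                  # ~70 MB/day
```

Tweak any one input (more hosts, shorter intervals, verbose exporters) and the daily total climbs fast, which is what makes federation attractive on metered links.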
And that's only going to grow as you add more stuff that you monitor, surely.
Exactly.
So the solution was a federated setup, where I have local monitoring on my home lab, monitoring
on a VPS, and then light remote monitoring on my wife's clinic network.
And I set up a Prometheus system with a Blackbox Exporter that does some additional HTTP, TCP, and ping checks, and can do some API authentication for me.
And that all feeds into Grafana to give me dashboards.
And then that all talks to Alert Manager.
So let me zoom out.
Prometheus is running on two different systems.
Plus a little Prometheus client is running on my wife's clinic.
and I have a Prometheus integration now running on Home Assistant.
Oh, yeah.
So this is collecting.
Yeah, buddy.
And that's really useful.
I could talk more about that.
But the Prometheus agent is essentially collecting all of the metrics, the CPU, the disk usage.
And it allows me to trend and alert on these over time.
And with Home Assistant, I'll just mention this quickly, that integration is pretty awesome.
Because you can export a lot of different things from automation details, log, all these types of things.
I mean, it basically has a whole bunch of stats in its own stats engine to begin with, right?
Oh, yeah.
That's what it is.
You can kind of just dump those out to Prometheus.
It's a sensor machine, yeah.
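The Home Assistant side of that integration is a small block in configuration.yaml, which exposes an /api/prometheus endpoint for a Prometheus instance to scrape. The filter below is just an example of narrowing the export toward the climate and sensor data discussed here:

```yaml
# configuration.yaml (sketch)
prometheus:
  namespace: hass          # optional metric-name prefix
  filter:
    include_domains:
      - climate            # e.g. electric heaters
      - sensor             # e.g. solar intake, power meters
```

Prometheus then scrapes that endpoint like any other target, typically with a long-lived access token for authentication.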
And so the way I use that now is via Prometheus data export into Grafana.
I have dashboards on how long my different climate entities run.
So since it's winter, we have some electric heat out in different areas,
and I want to make sure that they're not running excessively, because that would tell me the heater isn't keeping up.
I want to know how long we're running electric heat.
I want to compare that to our intake from solar.
And I want to have all that on one dashboard.
And now I have just not only a beautiful display, but good historical data that I can work with.
And all of this is pulled in on my local Prometheus instance that's running on my home lab.
Then I have Prometheus running on the VPS.
That's doing some remote checks to make sure that the remote systems are up.
And it has some logic to understand that if the mesh network is down, don't freak out.
Don't alert about every single host.
You know, I wanted some of this tiered logic in there so it knows if one network connection is down,
and everything that's on that network is down,
which is going to hopefully save me a lot of notifications.
Yeah, you've saved yourself that first night, right?
Yeah.
The network goes down and suddenly everything's on fire.
And then everything it's observing about the VPS itself,
the mesh networks, and the wife's clinic
is getting federated back to my Grafana instance,
so I have just one dashboard to view everything.
And that's running on my local instance.
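Federation in Prometheus is just another scrape job pointed at the remote instance's /federate endpoint, filtered down to the series worth shipping. The target hostname and match filters below are assumptions, not Chris's actual config:

```yaml
# prometheus.yml on the home-lab instance (sketch)
scrape_configs:
  - job_name: "federate-vps"
    honor_labels: true
    metrics_path: "/federate"
    params:
      "match[]":
        - '{job=~"node|blackbox"}'    # only ship the series you need
    scrape_interval: 60s
    static_configs:
      - targets: ["vps.nebula:9090"]  # reached over the mesh VPN
```

Because only the matched, already-aggregated series cross the link, this is where the bandwidth savings over per-host remote scraping come from.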
Oh, man.
So, and the difference is,
according to my calculations, boys,
the difference is about two megabytes a day of data usage.
Nicely done.
Yeah.
That's at least a power of 10.
Mm-hmm.
It was a nice little savings,
and it's less running on my home lab or on the VPS as well.
I do like that you kind of went from zero to a not, you know,
unsophisticated setup.
I think the real deal breaker was everything could be done declaratively,
and I could do this sort of hybrid federated setup.
Very flexible.
Yeah, coming together, those two things are like,
So, like, to create all the dashboards, I didn't create a single dashboard in the GUI.
There's so many community examples that you can modify and get started with.
And then you drop them in a folder and that becomes a dashboard.
And Bob's your uncle, you got all the things and bells and whistles you'd want.
And then I have alert manager, last piece.
I have alert manager running on the VPS and everything forwards to that.
So I only have to have one alert instance.
And it can communicate with ntfy.
You can communicate with Telegram because I set up a little bot thing and all of that.
and it handles all that stuff.
And it will also let me know if any of the remote hosts are down
or if any of the mesh networks go down.
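Wiring one Alertmanager to both ntfy and Telegram might look roughly like this; Telegram is supported natively, while ntfy is usually reached through a small webhook bridge. The URL, token, and chat ID below are placeholders:

```yaml
# alertmanager.yml (fragment)
route:
  receiver: phone
receivers:
  - name: phone
    webhook_configs:
      # e.g. an ntfy-alertmanager bridge listening locally
      - url: "http://localhost:8080/alerts"
    telegram_configs:
      - bot_token: "REPLACE_WITH_BOT_TOKEN"
        chat_id: 123456789
```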
And the results have been,
there's about 31 services that I'm getting real-time visibility
into their performance and their metrics,
all in beautiful Grafana dashboards.
And I also have real-time alerts
anytime any service goes down.
And I'm now very carefully monitoring storage,
which is very tight these days.
And so I have different thresholds for storage alerts
and I have different thresholds for like if Home Assistant's
at 80% CPU for X amount of time
and if it gets to 85 or 90 for X amount of time
to do different styles of escalation
and then there's follow up for when things recover
I get a recovery alert
so that's all working really beautifully.
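Tiered thresholds with escalation are plain Prometheus alerting rules: the same metric with different thresholds and `for:` durations. A sketch with made-up numbers and job names; recovery notices come from setting `send_resolved: true` on the Alertmanager receiver:

```yaml
# rules.yml (fragment; thresholds and job name are illustrative)
groups:
  - name: home-assistant
    rules:
      - alert: HomeAssistantHighCPU
        expr: rate(process_cpu_seconds_total{job="homeassistant"}[5m]) > 0.80
        for: 10m            # sustained, not a momentary spike
        labels:
          severity: warning
      - alert: HomeAssistantVeryHighCPU
        expr: rate(process_cpu_seconds_total{job="homeassistant"}[5m]) > 0.90
        for: 5m             # escalate faster at the higher threshold
        labels:
          severity: critical
```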
Yeah, I'm curious how you tested it.
The entire, well, by shutting things down.
I did.
Excellent.
All right.
Docker Compose down.
You know, the next version of this
is you give your agent access
and you ask it to stop a random server.
Right, just chaos monkey it?
Oh my God, that'd be fun.
Basically, just give Wes your credentials.
Yeah, well, yeah.
He has access to most of it.
But so I measured the whole stack across all the machines,
400 megabytes of RAM.
Totally fine.
Not even making really an impact at all in my home lab system.
And I was worried because I heard some stories about Grafana.
But I really, really like the home assistant integration with Prometheus.
It's something else I've put off for a while.
If you have been tempted to try this, the insights are fantastic.
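For reference, the Home Assistant side is a one-line integration plus a matching scrape job; the token and hostname below are placeholders:

```yaml
# Home Assistant configuration.yaml: enable the /api/prometheus endpoint
prometheus:

# Matching Prometheus scrape job (credentials is a long-lived access
# token created in Home Assistant):
#
# scrape_configs:
#   - job_name: homeassistant
#     metrics_path: /api/prometheus
#     authorization:
#       credentials: "LONG_LIVED_TOKEN"
#     static_configs:
#       - targets: ["homeassistant.local:8123"]
```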
And then the other thing that's really fun is Frigate, which I also set up recently,
also has an API where I can export all the information to Prometheus and Grafana.
So I have details on how my Coral is doing inference-wise and my different cameras and their connectivity
and their detection frames per second.
The camera's overall health, when the automations execute and how frequently for like arming and disarming the recording,
if they are available at all, is all coming into this dashboard.
So I have essentially a camera health dashboard now.
It's so great.
You have dashboards for your dashboards.
I do.
And a lot of the *arr projects, it's great.
Oh, I'm like Inception dashboards over here.
A lot of the *arr projects out there that we don't talk about also have APIs and health API endpoints that also plug into this.
So you can get all that kind of information in there.
And it really struck me like how now I get it why people go through setting all this up.
And yes, it's a lot of YAML and all of that.
But there's so many great examples out there.
And I now have an enterprise grade monitoring stack that I just never even thought I would get into.
But zero dollars spent, you know, one day I've set up really, a lot of documentation and maybe five to eight hours of fiddling to get it all working.
And it really makes me feel a lot better about my self-hosted infrastructure. I have built quite a little empire now of things
I depend on and my family depends on, and my wife and whatnot. And I probably haven't been
monitoring as seriously as I should. And I just thought that, ah, it's fine. But honestly,
it does feel a lot better because I'm getting, I'm getting insights before things go wrong now.
I'm getting ideas of trends, so stuff I know, oh, this is actually something I need to
address. And I know something's a problem before the wife has to tell me it's a problem. So I know
if Jellyfin isn't working for some reason or et cetera,
or I know if her system isn't backing up
before she has to tell me.
So if you have a self-hosted setup,
definitely Uptime Kuma
is a pretty good starting point.
But if you have the ambition and the time,
Prometheus and Grafana
get a double recommendation from me.
I waited way too long.
There is a real learning curve,
but the visibility you get out of it,
it's just so useful.
Plus, like I've re-visualized
my entire infrastructure.
once again.
All the stuff I've built, like, I know how many services and how many hosts.
And I, you know, I know how they're doing.
Like, it's a, I have a much more concrete picture once again of everything I've done over the last five, six years.
Well, and I think as you're discovering, right, a lot of things published Prometheus metrics.
Yeah.
So actually, as Tiny points out in our chat room, Uptime Kuma itself can publish to Prometheus.
Yes, I thought about that for a second.
And Nebula does as well.
I don't know if you've integrated that yet, but.
Hmm.
Hmm.
Mm-hmm.
Should look to see what Tailscale can do, too, but that would be really useful.
Yeah, I'm pretty happy with this.
I did have to kind of, you know, be tight on the retention.
So I don't have, I think I have like 30 or 60 days.
I couldn't go crazy because of storage constraints.
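Retention in Prometheus is set with startup flags rather than in the config file; something along these lines (the size cap is an extra assumption, not something mentioned on the show):

```shell
# Cap history by time, and optionally by disk usage, at startup:
prometheus --storage.tsdb.retention.time=60d \
           --storage.tsdb.retention.size=10GB
```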
And I did, I will admit, Wes, I feel a little guilty, but I did a lot of it with Docker.
And the reason was, and it's always what gets me, is a lot of the community add-ons and plugins.
Yeah.
assume you're using the Docker instance.
And it's one of those things where it's like,
well, yeah, I could come up with a way
to declaratively do that with Nix every single time.
Or I just start the container and it works, right?
And so all of the OS level stuff to make it work
is all declarative, obviously.
But then like Prometheus, Grafana,
and Alert Manager,
they're all in like one big Docker Compose.
Yeah.
So I felt bad about that.
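A stack like that tends to boil down to a short Compose file. This is a rough sketch, not the actual file from the show; images, ports, and volume layout are assumptions:

```yaml
# docker-compose.yml (sketch)
services:
  prometheus:
    image: prom/prometheus
    volumes: ["./prometheus:/etc/prometheus"]
    ports: ["9090:9090"]
  alertmanager:
    image: prom/alertmanager
    volumes: ["./alertmanager:/etc/alertmanager"]
    ports: ["9093:9093"]
  grafana:
    image: grafana/grafana
    volumes: ["grafana-data:/var/lib/grafana"]
    ports: ["3000:3000"]
volumes:
  grafana-data: {}
```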
No, I mean, as long as you're dabbling, I think, a little bit
and seeing that Nix can play very nicely with these things?
And it was a no-brainer for the Nginx.
Like some of the stuff I had to add some more stuff behind a reverse proxy
and get SSL certs for it that I didn't have before,
just so I could keep consistent.
So I wasn't doing IP for some stuff and name for other stuff.
I wanted names for everything.
So they just got all the DNS set up anyways.
And that was really, really nice to just configure all of the Nginx stuff in Nix.
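In NixOS terms, that kind of reverse proxy with a cert is only a few lines; the hostname is illustrative, and `enableACME` assumes you've also set `security.acme` defaults elsewhere in your config:

```nix
# configuration.nix fragment: reverse-proxy Grafana behind nginx with TLS
services.nginx = {
  enable = true;
  virtualHosts."grafana.home.example" = {
    enableACME = true;   # fetch and renew a cert automatically
    forceSSL = true;     # redirect plain HTTP to HTTPS
    locations."/".proxyPass = "http://127.0.0.1:3000";
  };
};
```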
And, I mean, it should be easy, right?
Like any of your Nix host, you can just add in a bit to have it run a Prometheus exporter
for the node, send node metrics too.
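Something like this on each host, plus a firewall opening so Prometheus can reach it:

```nix
# configuration.nix fragment: run the Prometheus node exporter
services.prometheus.exporters.node = {
  enable = true;
  enabledCollectors = [ "systemd" ];  # extra collector beyond the defaults
  port = 9100;
};
networking.firewall.allowedTCPPorts = [ 9100 ];
```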
And I know between the three of us,
I'm also the Git Luddite,
but the other thing I liked and appreciated
about having declarative config for my monitoring setup
is that I could use Git to manage that.
And if I F it up in the future,
you know, I've got some recoverability there.
So the more I can define via text,
I feel like the safer it is to experiment
and the easier it is to roll back.
And so that's something I just,
that's food for thought,
one of the lessons I took away from the setup is that gave me a little bit of comfort level
to experiment with something that I didn't fully understand yet that has a big learning
curve.
And now that you have it captured, you can watch as it evolves too, right?
So you have rollbacks, but you can also then go, well, what did I tweak?
Was that what broke it?
And I did catch a couple of things already, I will say, too.
I had some coral performance degradation issues that I tracked down to Wi-Fi, actually.
But it started a whole process of, like, breaking down where the problem was at.
What is the maintenance going to be like for this?
Like how likely are you if you're just booting up a new container for some kind of new system you're playing with that may last a long time?
Are you likely to add that to the dashboard here in the process of setting it all up?
Or is this just going to fade a little bit?
And then you'll have the problem where you've got a bunch of services that aren't actually integrated in this.
How do you think that's going to go?
That's a great question.
Because my current thinking, so I haven't thought a lot about that,
because I've been thinking I need to really freeze the state of my ODROID,
and I need to stop adding stuff because every single effing thing I add,
I need to migrate to a one-liter PC one day.
And I just went and made it a lot more complicated, right?
So I have been thinking I was actually going to hit the pause button for a while
until I get to that migration.
But you raise a good point of like, what if I find it in my new favorite self-hosted app
and I get it all set up, do I throw it into the monitoring system?
And I think my answer for that is, I don't know if you guys do this.
This is probably just a Chris thing.
But I have two tiers for self-hosted applications.
You know where I'm going?
I think so.
And it's like if I'm just playing around or if it's just something for me or really, you know, maybe I want to, I don't know, for whatever reason.
I'll just put it on a port and I'll just go to the local IP and I'll put the colon in the browser like an animal.
Yeah, yeah, yeah.
But then when it becomes like, oh, this is something that's serious,
then I go ahead and I set up the reverse proxy
and I get an SSL cert for it
and I even register a DNS name.
Because some stuff just doesn't survive
to that level, you know?
Right.
So I think that is the threshold
in which I now need to say
and I'm going to add it to the monitoring system.
There'll probably be a natural point
where you realize that it's down
and you wanted it to be out there.
Oh, right, okay.
Better add that.
Other people do this?
Let us know.
Send a booster contact form
and let us know if you guys do this.
I just wonder if it's a thing.
Because I've definitely seen some other folks
that we know be like, oh, why do you do that?
I think my other question I have for you guys is,
do you honestly think I did it overkill?
Do you think I went too far with this?
I think is that implicit in your question there, Brent?
How many more pie holes would you have run?
Yeah, right?
I guess it is a little bit because there seems like there's such a gap
between where you started solving this problem
versus where you ended up.
But as long as it's, well, your requirements were quite specific as well, right?
So I think had you loosen those requirements, especially with the notifying, you know, the tiered notifications, that probably would have made this much, much easier for you.
However, you probably would have hated your life every day of the monitoring system after that.
So I think if you're looking long term, which it sounds like you are with this kind of monitoring, then I would, it sounds like you made the right choice because your digital life and your maintenance of that life will just get better.
You know, I saw this morning over on the bcachefs subreddit,
a couple different folks working on bcachefs collectors, both for Telegraf and for Prometheus.
Cool.
So a little incentive for you in the future.
So you don't think I've overdone it?
You do, right?
Not if you install bcachefs, no.
I mean, if I could pull bcachefs metrics in, that would be pretty neat.
I think that justifies your setup.
So I'm a little worried about that, but because, again, it feels, it's like one of
of these things where when it's
all declared, I can kind of
pick it up and read through it and
understand it, where when it's the GUI, I have
to really dig through it
and really, really, really have to
like grind again to get
it figured out again. I don't know. Maybe that's just
me convincing myself. Well, no, and I mean
there's a lot of pieces, but I think
one of the benefits of a Prometheus
style setup is, right, you're building on top
of time series, and that's a
fairly universal format for a lot of things.
Yeah, yeah, yeah. Wes didn't answer your
simple question. Do you think that he went overkill? No, I mean, how many services did you say
you had? Around 37 and then, you know, five or six hosts in there or something, something like that.
Yeah. I mean, it seems like maybe if anything, it's more of like a reckoning with the level of
infrastructure you're already providing and that it deserves a similar class of monitoring.
Yeah, and it's like it's not only running a clinic, right, but the home assistant stuff is really
integrated into the function of the home to a degree that like prevents freezing and other damage.
occurring, so it's pretty significant.
Yeah.
I think I probably was underdoing it.
Yeah.
I think I might have been.
And, yeah, it did need a better solution.
I would be curious to know if maybe there was a better way to go, though.
So, and how people are doing it.
I always love to hear that.
I'm not opposed to coming up with a better way.
It could always make for a good segment.
All right.
Well, check this out.
Linux Unplugged has been here for over 12 years.
And I think I figured out why, right?
We focus on a few things, and I think this is one of our strengths, real use cases for Linux,
where you can get value out of it, free software that's actually free, and we talk about the differences there.
And we try to focus on self-hosting that's practical and just works and not like the hype stuff.
And I think you'll also find that we have honest conversations that try to help you make sense of the big shifts in the Linux landscape,
and you can look over the 12-year history of the show.
And we don't chase the outrage.
We don't chase the hype.
We don't go for the drama clickbait.
We just try to focus on the signal there.
And so when you support the unplugged program, you're keeping something that's a bit rare alive.
It's a focused, thoughtful Linux podcast that tries to stay in its lane, respects your time, and treats the community like adults.
And that's probably not as common as it should be.
So this here show, it runs on value for value, time, talent, or treasure, listening and sharing the show, spreading the word, time, participating in the community, helping create maybe show swag.
That's time.
It could be a little bit of talent in there too, right?
Also your feedbacks, corrections, things like that.
Also helpful.
And, of course, treasure.
Boosts, membership, direct support at meetups, all of those things make a big difference right now.
The reason why I'm talking to you right now is because we don't have a sponsor for this slot.
So every bit helps the show continue and ideally thrive and grow.
Better coverage, bigger experiments, more room to explore, what's next before it's obvious, without all the hype.
I mean, look at the history of the show.
So if it's helped you understand Linux better or avoid bad tech decisions or feel more confident about running your own systems, consider supporting the show to keep that going.
You can send us a boost.
You can become a core contributor or Jupiter.Party member or, of course, if you use the promo code bootleg while it lasts, you get it at a great price.
That's linuxunplugged.com slash membership to support this show directly or Jupiter.Party.
You get the perks and you keep the show going.
And of course, you can send us a boost to support each episode directly.
Thank you everybody who does that.
It makes all the difference.
Well, AJ wrote in this week.
Longtime listener, Mobile Linux Survivor reporting in here.
Hey, Chris, J.B. Crew, been a long time member watching and listening since about the Matt Hartley era.
So not quite a Lunduke gray beard, but almost.
I'm extremely jaded about Linux phones and for some good reasons.
I backed the Libram 5.
Lawyers got involved there and still no phone.
I owned a pine phone and a pine phone pro, which were underwhelming at best.
So I assume we're all pretty burned on mobile Linux by now.
But then I heard about the FLX1.
Last spring I learned about it.
It's the FLX1 from Fury Labs.
Didn't expect much there, but I believe in the idea of mobile Linux.
So I backed it.
A month or two later, though, the FLX1 was canceled.
So cue that purism-era PTSD.
But here's where it gets a little weird in a good way.
Fury Labs announced a replacement device, the FLX-1S for Slim.
It offers refunds or a spot in the new queue.
And so I stayed in, fully expecting another disappointment.
But then they delivered.
My FLX-1S just arrived January 2nd,
and I've been daily driving it since that weekend.
not testing, not tinkering, but daily driving it.
So here's a little report.
Does it actually work?
Yes, calls seem to work.
SMS and MMS, mobile data, GPS, most Bluetooth works.
Many Android apps also via Waydroid.
They use a fork called Andromeda.
The battery lasts about a full day with normal use as well.
The software stack, as I understand it, is Phosh,
a customized Debian base built on the Halium project.
Those details might be slightly off, but that's the gist I understand.
Is it perfect?
Nope.
Is it real?
Shockingly, yes, it's real.
There are compromises, of course, but they're shrinking pretty quickly.
Some issues get fixed day to day, not month to month.
I've even submitted a bug fix that'll ship by default in the next release.
That alone felt wildly refreshing compared to my previous experiences.
So why am I reaching out?
Well, I have zero affiliation, no financial interest, and no incentives.
I just genuinely am a happy customer, which feels rare enough to mention.
Fury Labs has restored some of my hope in mobile Linux.
They're active in the Matrix room with a small but engaged community,
and I really think you should get your hands on a device,
maybe invite someone from that team onto the show,
and talk about what they did differently this time around.
Oh, that's an interesting suggestion.
Boy, I would appreciate a contact if you have one, AJ.
That is a good report, right?
Isn't it nice to hear that possibly a Linux phone out there that people are happy with and gets the basics done?
And I think about how much I could do in a web browser if I didn't have an app.
And I start to think maybe it's not crazy.
Maybe the dream is possible.
AJ makes me believe again.
Faith restored.
Thank you, AJ.
Appreciate that report.
Great example of value contribute to the show right there with a in the field report.
Another great example, of course, is Le Boost.
And now it is time for Le Boost.
And AJ is back with...
A row of McDucks is our baller booster this week.
This old duck still got it, things are looking up for ol' McDuck.
And he writes, I just wanted to share a little Linux and self-hosted success story.
Oh, here we go.
I love these.
Yeah.
Thanks to what I've learned from the crew and our community over the last six years, I was able to migrate off of GCP, oh, Google Cloud.
Yeah.
To a combo of our own infra and colo, resulting in a monthly savings of this boost amount.
Wow.
But it was in dollars instead of SATs.
That's a big number.
And all Foss.
You guys are awesome.
Thank you.
Love to hear that.
That really,
you know,
that kind of stuff makes our day.
Also,
regarding the on-site you guys did at Hedia's Clinic,
it was super interesting.
And it's always those small gotchas that get you.
Ain't that the truth?
It's always networking.
That is such a great boost.
Also, I mean,
because it's just really great to hear
that we made a little difference there,
but also appreciate the signal on that type of topic
is the first time we've ever done something like that.
Yeah.
So we always appreciate the feedback.
Optical GRI comes in with 21,703 satoshis.
I hoard that which your kind covets.
I forgot where I live.
So I need Wes to check his map for me and then relay that information to Brent so he can help me with the many unfinished projects if he ever works his way through here.
Uh-oh, here we go.
Zip code is a better deal.
Don't threaten me with a good time.
Did you?
Uh, yes, actually.
Oh.
I keep it in my back pocket.
There it is.
Nice.
Okay, watch out, watch out.
I don't want that on camera because that...
I did sharpen the edges.
Yeah.
All right, do we have a location?
Yes, we do.
21703 looks like a postal code from Frederick County, Maryland.
Oh.
Wow.
There you go, Brent.
Well, what you have to do with these messages is also tell me, you know, some temptations as to why I should come through the area.
Well, mostly food.
Well, you swing by on your way to the Capitol.
If they have a gluten-free pizza, then you got me.
He just has a few allergies he'd like you to know about.
If you consider those and work them into your boost, there's probably a good chance you'll stop by.
Oh, and if you have a plug, outdoor plug, he can make 120 work.
If you have good cat snacks, that usually helps.
Liking cats helps too.
That is true.
Well, Gene Bean sent in, this is just a little row of ducks, 2222.
Says, can you share that Nix config for the clinic?
I'd love to get some ideas off there.
We could.
Yeah.
I think we would probably just want to do a quick sanitization check since it is for a clinic and all of that.
But I think we could give a look at that after the show.
Totally.
And if it passes the sniff test, we'll just put it in the show notes for this episode.
Does that make sense? Is that fine?
So I guess the answer is, if the answer is yes, it'll be in the show notes, Gene Bean.
Good question.
Show notes for 6-5-1.
You know, I've talked, yeah, linuxunplugged.com slash 651.
I've thought about this.
There's not a lot that's going to be revealed because anything that's, you know,
like a secret it's stored outside the main config that goes in the repository. But it is the
type of thing that if I had access to someone's network, I would use the hell out of this to get
everywhere I wanted to go. And I just like this for me when I, I don't mean to be this guy,
but like when I was hired to do penetration testing, this, I would have loved a map like this.
I would have, this would be like, oh, you just gave me the job for easy, free, you know, basically.
First, I'll own his couple of Pi-holes, and then I will.
Right.
It's really, well, it just gives you time to research, and it's easier than ever to drop these configs into a machine and say, hey, machine, what's the first thing I could pick on, right?
Like, you've got to think about the tools that are available to people now.
And so it crosses my mind that there is a level of information that's being exposed.
And so I have some consideration there.
But it relies on breaking into the infrastructure.
And then, you know, most of what gets exposed is just internal non-routable IP addresses and things like that
or perhaps where secrets get stored and whatnot.
But it's something I think about and I would love the audience's thoughts about it as well.
And if you guys are concerned, I know there's a culture around sharing your Nix configs and your Ansible configs.
And I like that.
We've benefited a lot from it.
So I, you know, and I know I could do a sanitized version.
So, I mean, maybe we'll try that.
But then I'm not, I prefer if I'm going to put it up on Git, and put it up in, if
I'm going to put it up on GitHub.
I would prefer to actually use it.
You know, like, Megan, and then it's like anyways.
Good question.
Check the show notes, Gene.
And I would love people's feedback on that.
Thank you very much.
Is it my turn now?
I don't remember.
Cypher Seeker comes in with, two, I got all distracted.
Comes in with 2,500 sats.
Making show.
Coming in hot with the boost.
Guys, I've been kicking around the idea of a NixOS router.
And an example would be greatly appreciated.
There you go.
Would you be willing to share your config?
The router config is interesting, right?
It's in there.
I will also mention that bearded tech in our community
has a really cool NixOS-based router project.
And if you're actually thinking about using it for your home router,
yeah, this might be something to consider.
That might be something worth looking at.
We were kind of doing a bunch of stuff all at the same time.
So we took a peek, but we kind of wanted to start a little more minimal and work our way up.
But it looks great, especially if you just want like a standalone router that is NixOS powered.
Yes.
And you got to remember, we were building something that was a VM first, VM server first.
It wasn't going to be the router first.
Yeah, yeah, right, exactly.
Yeah.
All right.
We'll take a look at that, Cypher.
Thank you very much.
I guess there's some demand.
Hybrid sarcasm comes in with 10,000 sats.
You asked for some feedback regarding actual budget.
Yeah.
It's been a pretty good replacement for you need a budget with spousal approval so far.
Oh, nice.
That's $9 a month I don't have to spend on a cloud service.
You'll also appreciate that the actual devs have a sustainable funding model for their core contributors,
and they are looking to expand it to others.
And then we've got a link we'll put in the show notes.
Oh, interesting.
I'm looking, I'm trying, there it is.
Yeah, so they have in their documentation here.
Hmm.
Thank you.
I did not know that.
That's a good little bit of information there.
Appreciate that.
I appreciate the report.
You're saying those are the actual, actual, actual devs?
No, the actual actual.
No, yeah.
Oh, actually.
Okay.
Well, adversaries came in with 8,441 sats.
Adversaries responding to our question last week, Chris, you were asking for Wi-Fi analyzers for Android.
Adversaries says Unify makes a great Wi-Fi analyzer app called Wi-Fi Man.
It doesn't require Unify gear to work.
It just uses your phone's radios.
Okay.
And I can second this one.
This is the application I've been using for about the last year.
I used it just this week to...
That's good to know.
Hey, hey, you didn't give me time to answer your question,
nor did you ask me while you were working on this project.
Oh, wow, dude.
Anyways, it's fantastic.
I use it just this week to fix my parents' Wi-Fi,
and it's got some sweet features.
So I would say put this on your phone, play with it.
It's pretty amazing.
It took me a few uses to discover all of the different crazy features
that are hidden in it.
It's really quite good.
This makes you the buddy that shows up with a water hose after I put the fire out.
You realize that, right?
You're welcome.
All right.
That makes you, that's what you, I could start a new fire.
No doubt we will.
No doubt about that at all.
Anonymous comes in with 2,021 sats.
No message, just value.
Thank you very much.
And then Tomato comes in with a row of duckles.
And writes, I loved this old network segment.
I'd be curious to hear if Brent started to automate his van yet.
Mine is completely unautomated.
I'm not sure where to start.
Hmm.
Oh, well, he needs to start with sensors, right, Brent?
Sensors is great.
Yeah, I did the opposite this week and pulled my lithium batteries out of my van, so I've unautomated everything, only because it got really, really, really, really cold, and this is not very good for them.
So I feel like this week I went backwards, but I'm going to kind of build all the automation here in the workshop, just as winter's here.
Then I can just plunk it in the van, you know.
But I would say, Chris, you've got...
much more opinions on this than I do.
But I would say start with the problems that you feel like you want to solve or have
visibility into, right?
If you want exterior temperatures versus interior temperatures and that's really important
to you, start there.
If you want to, I don't know, have some other solution to a problem, that's always the best
place to start.
And plus one sensors.
And then if you really want to see what's capable and way, way far out there,
check out smartyvan.com.
He's also a YouTuber.
and he has created some really inspired automations around Van Living.
I mean, absolutely high-end tech stuff that you could build from for years.
He's also released some code.
He has examples and automations and video tutorials and all of that.
So it's smartyvan.com, S-M-A-R-T-Y-V-A-N dot com.
And you can get some good inspiration there.
I think my next step, if you're curious, is likely getting some visibility and automation around keeping these batteries charged and healthy.
One of them is how to keep them warm while they're charging in the winter or, you know, in the after winter season.
So that's the main problem I have that I'm going to solve that will get me, you know, some open,
hardware experience and also diving more into what home assistance can do to automate all this.
So that's my next step. But write in and let me know what problem you're solving.
I would also add, like if you want something to rabbit hole into for a while before you get into all this,
go learn the ESP platform. That is a skill that will pay dividends for years.
Relays, relays. Yeah, ESPs and relays. And little sensors, and there's kits you can get on,
you know, the big box websites for super cheap and all of that. And buy some epoxy too.
while you're at it.
All right.
Thank you, everybody who boosted the show.
We do appreciate you very much.
And, of course, shout out to our SAT streamers as well.
We had 26 of you stream sats.
And collectively, you came in with 26,866 sats,
which does technically make our streamers the baller booster again this week.
Thank you, everybody.
When you combine that with our boosters,
we raised a total of 109,354 sats.
Pretty humble, but we're very appreciative.
And gives us an opportunity to make our birthday.
episode a banger.
There's real,
real easy ways to boost in these days.
Fountain FM is making it easier and easier,
including making it just all kind of
dollar-based, simple stuff.
And of course, there's the entire awesome
self-hosted infrastructure. You can find that.
When you go to newpodcastapps.com, you'll go down
that rabbit hole, you get Alby Hub. It's
really awesome. Of course, we have the membership
program, linuxunplugged.com
slash membership or Jupiter.Party
for the whole dang network.
All right.
Would you guys like a few picks?
Yeah, what did you get in your bag today?
Well, you've heard me mention my Hyprvibe, which is a NixOS-based Hyprland desktop,
and still rocking it, got it running on three machines these days, and it's in a great state,
and I like it a lot.
But perhaps you are an Arch person.
Oh.
Well, RichARCH has a Hyprvibe spin.
They say we at the RichARCH project are re-releasing our Hyprvibe spin.
We now have taken the Hyprvibe configs and enriched them with the Noctalia
shell on Hyprland.
You can try it in a VM and
include some screenshots, or some instructions
and a screenshot, which we'll put a link to in the show
notes. It's better
looking than the way I have it configured. I'll tell you
that. It's really nice. So now you're going to re-Nixify, Hyprvibify,
the RichARCH Hyprvibe-based config?
I like the way you think.
You basically, you start with a base
Linux, or a base Arch ISO.
Just base install.
Then he has some kickoff scripts that you can curl onto that
basic system and turn it into a Hyprvibe desktop based off of what I set up to run on Nix.
But with Arch, kind of neat.
Thank you, Rich, for sending that in.
That's great.
It's beautiful.
It's beautiful.
And then I've got one that, this looks really nice to see.
It's good to have another one of these.
I've talked about Junction before.
Now we're going to talk about Switchyard, a modern rules-based URL launcher that replaces
your default browser.
So wrap your noodle around this.
Instead of having one browser as your default,
you set Switchyard as the default browser.
And then when you click a link,
it brings up a little window
and it lets you choose which browser you want to open in.
But on top of that,
they have added a really nice graphical interface,
a GTK graphical interface,
where you can have rules
to just automatically send some URLs
right to a particular browser.
And this is exactly how I work.
And this is why I really appreciate this,
because there's some stuff I always open in Firefox.
There's one site, and only one site I use Brave for right now.
And then there's other stuff I open in Zen.
And it's always that stuff.
So this is really, really great.
It's a super fast app, and it has a simple configuration.
If you do want to do it by text, they have a Flatpak and a Nix flake ready to go.
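A rough sketch of how that setup would look. Both the Flatpak app ID and the desktop file name below are guesses for illustration, not confirmed identifiers; check the Flathub page linked in the show notes for the real app ID.

```shell
# Install Switchyard from Flathub (app ID is hypothetical; confirm on Flathub)
flatpak install flathub io.github.switchyard

# Make it the system default browser, so every clicked link routes
# through Switchyard's rules/chooser first (desktop file name is a guess)
xdg-settings set default-web-browser io.github.switchyard.desktop
```

Once it's the default, your rules decide which real browser ultimately gets the URL.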
So I thought I might get your approval on that one, too.
Oh, yeah, absolutely.
And it's written in Go, GPL3.0.
Yeah.
So this is so nice if you do live the multi-browser lifestyle, and I do.
I probably would say Firefox is 90% of everything,
but then there are those exceptions,
like if I'm going to do a Google Meet,
I might actually do that in Chrome.
And I maybe don't use Google Chrome
for literally anything else on that machine,
but I use it for Google Meet.
And it's nice to have something.
I just click a link and Switchyard will send it to that.
But if I don't have any rules set,
it gives you a really lean, mean, fast UI
with big icons,
and you just select the browser you do want to open it in,
and it sends the link to that browser.
I think they especially keyed in on
separating work from other stuff. But even just, maybe you're in
your mode where you're doing show notes. You want to make sure
that, you know, when you're clicking some links
Brent sends, they open in the right spot.
Yeah, something that's in private browsing
mode, for sure. Or one with
no JavaScript, either. Container rules, definitely.
All right, well, that's
pretty much the end. I just want to remind everybody
that the Meetup page is up if you're going to be in the
Pasadena area around March 5th.
We'll be at Planet Nix and SCaLE
and hanging out with our buddies from Flox. We'd love
to see you there. Meetup.com slash Jupiter
Broadcasting. We'll get the details locked in soon
for all of that.
I'm very much looking forward to it.
It's going to be nice,
especially as it's very cold right now.
I'm picturing the nice sunshine.
It's a beautiful time to be in Pasadena.
Wonderful audience.
Mm-hmm, mm-hmm.
Good crew down there, too.
Wes, are there some pro tips we could leave them with,
you know, ways they could get more data,
more information around the show?
Hmm, like some sort of enriched XML file.
Yeah, something like that, with links to text and JSON files.
Could be.
Yeah, with like chapter information and transcript information.
Yeah, could have all of that.
like an SRT, for however you want to consume this.
If you've got a podcast client, there are more and more of them that support transcripts.
We have that in the feed for you.
And, of course, if you have a Podcasting 2.0 client, you get all kinds of stuff, like the chapters, the live item tag, funding information, and a whole bunch of good stuff.
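Those extras ride along in the show's RSS feed as Podcasting 2.0 namespace tags. Here's a small sketch with a made-up feed fragment (the URLs and values are invented, just to show the tag names a modern client looks for):

```shell
# A made-up minimal feed fragment carrying a few Podcasting 2.0 tags.
feed='<rss><channel>
<podcast:transcript url="https://example.com/651.srt" type="application/x-subrip"/>
<podcast:chapters url="https://example.com/651.json" type="application/json+chapters"/>
<podcast:liveItem status="pending" start="2026-02-01T18:00:00Z"/>
</channel></rss>'

# List which Podcasting 2.0 tags the fragment carries
printf '%s\n' "$feed" | grep -oE '<podcast:(transcript|chapters|liveItem)' | sort -u
```

A Podcasting 2.0-aware client reads those tags to offer transcripts, chapters, and live-stream notifications.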
And, of course, we are live.
See you next week.
Same bat time.
Same bat station.
Yeah, we love it if you make it a Tuesday on a Sunday.
Join us Sunday at 10 a.m. Pacific, 1 p.m. Eastern.
Jupiter Broadcasting.com slash calendar for your time.
If you want to, I don't know, read about what we talked about,
if you want more show, I don't know, linuxunplugged.com.
This was episode, geez, 651.
So linuxunplugged.com slash 651.
We get together every Sunday with our Mumble room.
That information's on our website as well.
You can get in there and get a low-latency Opus stream.
We tell you about it; try it out.
And last but not least, we have that Matrix Room going 24-7.
You can find the details for that, too.
It's a great community.
And if you're already in the Federation, why not join us?
Thanks so much for joining us on this week's episode of Unplugged.
See you right back here next Sunday.
