LINUX Unplugged - 661: Sink Your Claws In

Episode Date: April 6, 2026

The expensive, challenging, and humbling journey with open source agents.

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:
💥 Gets Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
LinuxFest Northwest 2026 - Back to Root — April 24-26, 2026 - Bellingham, Washington
Preferring Local OSS LLMs
Qwen3.5 27B vs Gemma4 31B
Open Models have crossed a threshold
Gemma 4 — Google DeepMind
Welcome Gemma 4: Frontier multimodal intelligence on device
Gemma-4-31B-JANG_4M-CRACK · Hugging Face
mesh-llm — Mesh LLM lets you pool spare GPU capacity across machines and expose the result as one OpenAI-compatible API.
hermes-agent: The agent that grows with you — The self-improving AI agent built by Nous Research. It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions.
goose — An open source, extensible AI agent that goes beyond code suggestions; install, execute, edit, and test with any LLM.
Lobehub — The ultimate space for work and life. To find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level; enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.
LibreFang — LibreFang is an open-source agent operating system written in Rust.
OpenHarness — OpenHarness is an open-source Python implementation designed for researchers, builders, and the community.
Microsoft's Newest Open-Source Project: Runtime Security For AI Agents — Microsoft proclaims their new open-source project is the first toolkit that addresses all ten agentic AI risks identified last year by the OWASP
Top 10 Risks and Mitigations for Agentic AI Security - OWASP Gen AI Security Project
googleworkspace/cli: Google Workspace CLI — One command-line tool for Drive, Gmail, Calendar, Sheets, Docs, Chat, Admin, and more. Dynamically built from Google Discovery Service. Includes AI agent skills.
Tunarr API Reference
Euro-Office launches as Europe's open-source Office rival
Pick: single-file-cli — CLI tool for saving a faithful copy of a complete web page in a single HTML file (based on SingleFile)
Pick: html-to-markdown — Convert HTML to Markdown. Even works with entire websites and can be extended through rules.
Pick: skim — Turns any HTML page into clean markdown.

Transcript
Starting point is 00:00:11 friends and welcome back to your weekly Linux talk show. My name is Chris. My name is Wes. And my name is Brent. Hello gentlemen. Coming up on the show this week, well, for the last three months, we've been building multiple open source agent platforms. Gains have been real, but the friction has been just as real. Our expensive, challenging, and humbling journey with open source agents. Then we'll round the show out with some great picks, some boosts, and a lot more. So before we get into that challenging journey, let's say good morning to our mumble room. Time appropriate greetings, virtual lug.
Starting point is 00:00:45 Hello, hello. Hello. Hello. Hello. Nice of you to join us on air. Hello, everybody up there and quiet listening. Nice to have you along as well. Look at them.
Starting point is 00:00:54 Aren't they looking nice today, Wes? Wow, dressed up and everything. I love it when they do that on a Sunday. Also, good morning to our friends over at Defined Networking. Go check out Managed Nebula from Defined Networking. Go to defined.net slash unplugged. You'll get 100 hosts absolutely free, no credit card required. It's a decentralized VPN built on the open source Nebula platform.
Starting point is 00:01:15 And what I like about Nebula is it's built the right way. Open source, incredibly reliable and designed to avoid the usual points of failure. And man, it's so great for a home lab or an enterprise. It's resilient. It's incredible because you can have these lighthouses that you manage, the public lighthouses. You can have one system going to one system. You can have a giant mesh network. I mean, it was originally built for Slack.
Starting point is 00:01:38 So you can go big or you can go small, and I just think that's really, really powerful. I just think, once you wrap your head around it, you'll see what I'm saying. So why not dip your toe in? Check it out: 100 hosts for free at defined.net slash unplugged. Nothing else offers Nebula's level of resilience, speed, and scalability. Get started, 100 hosts, absolutely free. Support this show, our premier sponsor, defined.net slash unplugged.
Starting point is 00:02:02 Thank you very much for the support of the unplugged program. defined.net slash unplugged. All right, so we want your feedback for these topics that we're about to get into. For example, this episode is based solely on questions that have come into the show. But we also get questions, or maybe sentiment, that is: don't talk about these kinds of things. Like on March 24th, Matt wrote, and I think it's maybe his first time writing in to the show, you boys seem enthusiastic about AI.
Starting point is 00:02:33 I recommend you create a new podcast dedicated to AI so it doesn't dominate Linux unplugged. Half the audience can't stand AI. It's very polarizing. If you start leaning heavily into it, I'm just going to unsubscribe. I thought I'd give this feedback to let you know why you might lose some viewership in the future. Fair enough, too. Like, all kinds of feedback are appreciated. So we're not roasting Matt here.
Starting point is 00:02:54 Just wanted to share, like, we get both ends of this, right? So today we're going to represent the questions that have come in about agents and whatnot. But I think it's fair to say we want to check the temperature on this just in general with the audience. So send us a boost or go to linuxunplugged.com slash contact. And I do want to also say I think sometimes people, because they are so polarized about a particular topic, don't recognize that we do intentionally try to space this out. So you consider that AI has been the number one topic for the last three years in just about every economic story, every employment story, every tech story. And so we have worked very intentionally to try to space these topics out. Last week, we talked about Ubuntu and their GRUB plans.
Starting point is 00:03:40 The week before that, we talked about Ersatz TV wrapping up. And a little bit before that, I talked about my Keeper calendar program. So we try to space it out. We try to have episodes that don't just hit AI every single week, which means, you know, we're digging whole cloth and building that stuff for you, which is part of the value we try to bring. But it doesn't just accidentally happen that we go two or three weeks not talking about AI, and then one week we talk about AI. That's just not how the topics fall into the show; it's not by accident.
Starting point is 00:04:05 It's very intentional design on our part. And if we weren't doing that, it would look a lot different. Right. It's just if that filter was not in place and all you were doing was going based on, like, what's hot in the world and what news stories are out there and what's changing. And what would get us advertising very easily. Right. Like, we could just lean into it and lose some audience and make some advertising money there, but we're not doing that. And what we're trying to do is when we talk about it, we're trying to talk about in practical ways that are here today, that are the open source,
Starting point is 00:04:30 angle because that's what we cover and really impact Linux users. And we try to bring something even for the skeptics, even if it's an episode that's about AI and you're not an AI fan, we try to bring something for everyone. I would argue also solving actual problems that we have, either in our infrastructure or like some of the reverse engineering that we did of that diesel heater is a good example of using a new tool to accomplish something that we've been thinking about for what in a year, two years, something like that. So attaching it to a real world use case and problem set, I think, hopefully it describes how we're finding actual uses for it, not just burning up much credits. Yeah, we're doing that too. We'll talk about that. But it is a balance, right, because we don't want to lean too heavily into it. We want to make a show that's for as many people as possible.
Starting point is 00:05:17 After all, it's a show we're making for you. So we do want to get this balance, right? And it's something we want to hear from you about. We think that the terrain is still being discovered, right? The map still has a lot of fog on it. And there's a lot of mixed information out there, good and bad ideas and takes, and a lot of interesting technology that is really growing this year, 2026, for open source. And this week was another significant step for open source this week.
Starting point is 00:05:46 We had another one about three months ago. This is another one. And these keep happening specifically in the open source domain. So we're trying to balance all this out. let us know what you think with a boost or, you know, go to the unplug.com. Linux, what is it? Linux Action Show.com? Yeah, that's right.
Starting point is 00:06:04 We got the unplugged, you know, we got the contact page over there. You can figure that out. Linuxunplugged.com slash contact. What? No. Never heard of it. Right? That unplugged.
Starting point is 00:06:13 What kind of show? What, does that show about radio? Yes, it is. Oh, okay. All right. But internet radio. All right. So, let's talk about the good, the bad, the ugly here.
Starting point is 00:06:23 We've got some common questions into the show, and we're going to go through some of these and then talk about our setups and talk about some of the big stuff landing for open source. And it's not all just one particular flavor. So I think let's start with probably the one that everybody's talking about right now, OpenClaw, which is a project that's getting a lot of the attention for something you can run locally and it can use local LMs or cloud LMs. Do we really need to get much into OpenClawe? We've had a couple of people ask, but I don't really feel like we need to spend a lot of time on it. It's a node-based agent's gateway stack. Okay, let's talk about what's a gateway, Wes. Maybe that's what we could explain.
Starting point is 00:06:58 Well, yeah, I mean, it kind of depends on how deep you want to get, but there's a lot of different versions now of what people are calling an agent harness. And, you know, on one hand, you have just sort of the basic model, like the first version of Chad GPT and a browser tab that you're sort of typing into and interacting with. And, you know, at one point, maybe you're copying code in, and it writes code, and you copy it out or whatever. But then we switch to this version where it's like, it's living with us in our projects, in our editors, in the terminal.
Starting point is 00:07:22 and it probably has something like tools, right? And it has, so that gives it like sort of mechanisms that it can affect change, edit files, run scripts. But beyond that sort of core set, then you have like a lot of different other features, sort of like, how is the context assembled? Well, how does the memory system work? Is there a memory system? How do you chat with it? Yeah, and then you have the inputs and the outputs, which is like, what is the control interface and control surface like? How do you trigger it?
Starting point is 00:07:48 Is it autonomously triggered? Does it have mechanisms to, like, pin you anywhere you're at? or it just presents on like an interface on your screen. And so like on one version, you have sort of like an open code or a clod code or codex sort of thing where, I mean, they can do more than this. But the primary thing you first see is just like a 2E interface for you to sort of be human now with better helpers embedded right there.
Starting point is 00:08:10 And open claw you get is a very different experience where like you just sort of are presented with a telegram chat window into this bot who lives in its own entire other sort of universe. A persistent running back end called the gateway. that is connected to the LM that you chose and can run some of the tools like could be basic bash commands, could be other things like a model context protocol
Starting point is 00:08:35 and things like that. But the gateway just sort of does that via like your commands in telegram chat or WhatsApp or Slack or whatever it might be. Yeah, so it sort of is the organizer, right? So it monitors like the telegram API or the Slack API and gets incoming webhooks or whatever. And then from there it says like,
Starting point is 00:08:49 oh, right, let's trigger the LM, assemble its context, all the stuff that it needs. And then it also handles when the LLM comes back with like, I want to run a tool. The Gateway is actually what goes and like executes the tool call, calls the MCP server. And I want to make kind of a common confusion clear that we've seen come into the show. So say you're using Olamma with an open source LM or you're using ChatGPT-5-4 for the back end of your OpenClaught agent.
Starting point is 00:09:14 And maybe you're also using open code. They're the same thing. You're not going to really get dramatically different results because one's open-cloud. and one's open code other than some of this harness that Wes is talking about, like the memories or the skills and those things kind of make it different. And they give the agent the ability to remember mistakes, to remember that when I say custodian, I'm referring to 172.16.0.10. And so I don't have to write 17216.0.10. I just say SSH to custodian. And memory and, of course, DNS, help with things like that where maybe if you just went to a fresh open code session, or a fresh GPT session and just said Picking Custodian,
Starting point is 00:09:55 and have no effing idea what you're talking about. And that's where it's like they kind of all exist in a design, sort of possible design space, and they have a lot of shared components, and then some of them are just optimized for different experiences, like Open Club really started as this sort of personal assistant that could manage your calendar and email and interface with you and, like, you know, help you, you know,
Starting point is 00:10:11 chat with you in your Telegram instance. And maybe something more like OpenCode is, you know, they have customized prompts focused on coding and they have a different style of sub-agent implementation that's focused on orchestrated, multiple agents working specifically on code. I mean, it can do more than that, right? But you can see how sort of the defaults
Starting point is 00:10:27 and the shape of the interface drive what they're primarily used for. Yeah, so... But like all of them, right? Because it's all just an L.M. Under the hood. They can all write scripts. They can all run tool calls.
Starting point is 00:10:37 It's just kind of what you put on top to let them do. So in my case, I'm using OpenClaw, and I have five agents running through OpenClawe that have domain-specific focuses. And we can talk more about that in a moment. So because one of the questions we also get into the show is what the hell is even the use case? What are people even using this for?
Starting point is 00:10:55 I don't really get it. Like I could just write a cron job with a Python script or a bash script and do most of what you're doing. Or I could just use clog code or open code. I can do most of these things. Like what the hell are people actually using this for? I don't get it. And so I think I wanted to start with Brent because Brent so far, I know you've been busy. But I think partially too, you're kind of waiting to see where this goes because it is really early days.
Starting point is 00:11:17 He's watching us frequently post embarrassing, you know, things that our agents messed up. Yeah, and so I thought I'd give you a chance because you probably represent where a lot of the audience is at on this and just kind of talk about like where you feel it is on your adoption curve. Yeah, I'm typically a little slower on the adoption curve than you boys, which is beautiful because I get to watch you intimately screw things up and then learn for your mistakes. So it's lovely. But we do it for the show, right, boys? But my hesitation always around new technologies tends to be, you know, of course, privacy. but also security, because I might not have the same confidence as either of you to either not make the mistakes that I will regret later or to recover from them gracefully. So I like to wait just a little longer to see, let's say, like a project like OpenClaw reach more maturity than adopting it, you know, on week one as so many people have done throughout the internet.
Starting point is 00:12:13 So I would say that's probably pretty accurate that some of our audience fall into the category that I sit in or maybe wait even longer. And I would say that's not a bad thing. That's okay. It means you're falling a little bit behind because the tools are moving so fast these days. Like every other day it seems I just add to my list of things to learn. But that said, it's an evolution that you still need to keep up on. in my opinion. And I didn't have this opinion several months ago because I was still kind of pausing and waiting to see. But having put my own pause on some of these tools, I got to say, it's made me a better open source software user and allowed me to accomplish a bunch of projects that I've had on my to-do list for years and do them at speeds that I never would have anticipated.
Starting point is 00:13:08 So that part, even though I hesitated to start to adopt, it's incontrovertible to me now that it's useful if you pointed at the right thing. Yeah, so I think your take is pretty spot on. I don't know if you agree with Wes, but I think like it is it is breaking and moving fast. And if you're not comfortable going in there and using something like a codex or an open code to sometimes fix it, you're probably going to have a bad time. Yeah, and I think that's where like the difference in model sort of matters, right? Like open code, I tend, or cloud code,
Starting point is 00:13:48 I tend to like, you know, you open it. You run it on your computer or you run it somewhere. It sort of has a, maybe you run it for a long time and it runs persistently for days and weeks or whatever. But, you know, versus the gateway for OpenCla is a system D service that runs on a server. And so just the models are very different and the introspection and the default of how much info
Starting point is 00:14:06 and sort of the interstate that you're exposed to of the system is very different. And then on top of that, you know, you're so there, you're sort of pressing the bounds of like how little interface can you have and still have this thing managed productive work, which is its own question, but that sort of imposes a lot on the whole model. And just the nature of the project, yes, like it just moves crazy fast, so fast that we've both now had to sort of fork the upstream Nix code,
Starting point is 00:14:33 which wasn't updating fast enough to keep up with the proper upstream source code. And it's just, you know, it's trying to do a lot. There's a ton of features. So I think we've maybe both been continuing to run it because we have been curious about, I mean, we run other things. But just because it has been sort of the locus of a lot of the frontier. But if you don't care about that aspect, you can still even have this model style of approach and have much more stable things or things that are moving slower or, you know, aren't based on the Node ecosystem but have like a go core or a Rust Corps or a Python car. I want to talk about a couple of those tools because it's not, I think you're touching on the, good point is it's not going to be just OpenClaw.
Starting point is 00:15:10 And some of them are going to be more stable, more LTS style. Yeah, there's going to be enterprise versions and like Debian style versions and, you know, I think all manner. Here's my, here's how I boil it down is right now it's not worth burning a lot of money on tokens to run OpenClaw. I just, I generally don't think it is because you'll spend a lot of those tokens fixing it. And if you have a plan where you have access to a lot of AI tokens or you have local AI hardware, where it costs you nothing, well, then go for it. Because if you do, you will learn so much.
Starting point is 00:15:44 It is an incredible learning experience, but also you do become a better operator. Like I know where their deficiencies are now, so I prompt better and better and better, and I get better and better results, and I have them doing more and more things, which will get in a couple of those because I do want to talk about use cases.
Starting point is 00:15:59 But that's my hot take, is I don't think it's worth spending a bunch of money on OpenRouter or going and getting some $100 a month plan to run OpenClaw right now. I think especially if you're going to try to use it as like the only thing you're doing. Like if you're trying to push through
Starting point is 00:16:14 building the entire thing through that. It's going to be a bad time. It's an interesting experiment. It's just not there yet. But yes, versus like if you're kind of having it orchestrate battle tested open source services or scripts that you have open code right for you. Like that is a very different experience.
Starting point is 00:16:28 That's it. So let's get into that's it because that I think gets us to our use cases and our setups. If you're comfortable using a superior model and a superior tool to build the infrastructure around these things and then have them operate it with guardrails,
Starting point is 00:16:42 you're going to get great results. If what I just said doesn't make sense to you, it's going to be a hard time. And that's where we're at. And I just, if I, it is a really tough thing. It's like, it's like when it took you days to get Linux running. I mean, this has a lot of this same energy
Starting point is 00:16:59 I've been putting into this where like, I don't have time. But yeah, just like I didn't have time 20 years ago when I was learning Gen 2, I somehow did it, right? Like, I just, the drive is there. Yes, because you can see there's potential, there's a lot of fun in it and a lot of frustration.
Starting point is 00:17:14 And so I think it's worth knowing that and like, it's almost like a pet. You know, it's like a pet. It's a big side project. But don't go in necessarily expecting to convert it into a production thing that's going to be rock solid that you forget about. I think pet is one, you know, like the Tomogachi is one thing. And that's fine if that's what you want.
Starting point is 00:17:30 And you've got a way to do that economically. I kind of look at it more as an intern. or a really kind of basic producer. And I've gotten it pretty close to that. So in my case, I have it doing a lot of analysis for the show. Every episode it pulls down the transcripts. It does sentiment analysis. It keeps track of everything we've talked about.
Starting point is 00:17:53 I can also pull down emails and like that email we got earlier that I read in the show. It matched some of the sentiment analysis of the things we actually talked about in the surface that email because, hey, this is actually, you guys, you know, this email is kind of on point. Like, I don't think Matt likes to hear this, but it was the agent that surfaced his email to me that said, hey, he's complaining and you have been talking about it in this episode, this episode, you mentioned this. And it's like, he might have a point here. It's worth reading this. And that's why his email made into the show today. And that's something I would task a producer to do if I had, you know, I don't know. A budget for producers.
Starting point is 00:18:22 What would it be in Washington State? $100,000? I don't know. It's crazy here. So it's just not going to happen. I'm not even paying myself at the moment. So it's not going to happen. It also generates news briefings for me every day.
Starting point is 00:18:34 both in text with sources, but also in audio that uses my fresh RSS server feeds. So the feeds I've curated for the last decade that I have in my local fresh RSS server, every morning it goes, and it does analysis on that, and it generates me a seven to 15 minute long report of the stories that are relevant to our shows, and then it marks them red in my fresh RSS feed. So then later, when I go to read my news stories, because I'm always trying to stay on top for the shows, the ones that have been in my audio briefing that I listen to on the drive to the studio
Starting point is 00:19:05 are marked red for me now. Little things like that. Or when a sponsor emails me and I'm trying to get a sponsor going, I have that surface, that alert to me using GWS. So GWS CLI lets me check in on these things without going crazy with permissions in my inbox. And these are little tasks that I have it to,
Starting point is 00:19:22 but I think the stuff that I would find, if you're asking what's the use case, it's for giving you an easy interface to manage all the crap you've set up. There's a lot of, you know, home assistance is a great example. It really can be a great accelerator to your setup there. So at home, I have the Frigot DVR. The Frigot DVR, when it notices an event or a face via MQTT, sends an alert to Home Assistant.
Starting point is 00:19:45 Home Assistant has an automation that wakes up my agent O'Hura. O'Hura, and that analyzes the image from Frigot, and then sends me a report with the faces, the people identified, and a description of the situation. and its estimate if it's the severity level of the situation. I just have a telegram chat, and whenever I'm away, because Home Assistant automatically activates this detection system, home assistance is doing the lift here, but my OpenClaught agent is doing the final analysis and report. The wiring, the hard stuff is the infrastructure with Frigget, MQTT, and Home Assistant.
Starting point is 00:20:20 Then OpenCla just sits on top of that as a layer to give me access to all these APIs and features. So when I wanted to start getting these alerts, I tasked the OpenClaught agent to finish up the YAML in Home Assistant to add the sensor for face detection and then to expand its reports that come back to me with image analysis. But I didn't have it build the entire system from WholeClaught. I had to do the last 10%. And it's working great.
Starting point is 00:20:43 I've been sending West the results all week. It's been a lot of fun. It's a lot of fun to have it analyze and learn the family and pass silent LM judgment on a room being cluttered. It likes to think you're, I mean, does it not get that you're in an RV? No, it does. That's what's also interesting is it does recognize. It's figured out that it's an RV.
Starting point is 00:20:59 Oh. That's great. Yeah. So that's probably one of a million use cases. But again, I wouldn't do it unless you could find a way to economically get access to the AI, which we're going to talk more about in a moment. But I'm curious if you want to share any of the things that you're kind of routinely using these things for. Well, yeah, you're just making me think like, I mean, anything you do want to orchestrate that you don't want to have to go sit at a computer to do. So it could be stuff that's routine is like monitoring your systems or. reporting on, you know, like you don't need it to go collect the metrics necessarily, right?
Starting point is 00:21:29 But you can have something that's collecting metrics and it can look at it and give you an assessment about things or look at anomalies or you could have it if it has permissions and you're willing to do this, go run updates on some of your servers and report back on how that goes. And it also is just useful to have it flip the script and like, you know, I think many of us recognize that if you leverage web search with LLMs, they can be very useful researchers, right? It can be a little, if you just rely on what's in their pre-trained data set, then you can get much different results. But if it has access to fresh good, you know, reg style sort of data and information, it can be quite useful. And so instead of having to go sit at your computer and pull up
Starting point is 00:22:02 some web interface to go do all that, you know, I can just, like I spent a while making sure that my bot had really good access to whisper transcription so that, you know, I could just send it. I could record voice in telegram and shoot that over. And then it could go spin up and have subagents that go use search engines to go pull a bunch of stuff and then sort of recursively analyze that. and then at the end produce a markdown report, and then kind of like you've been doing, I can have that then spit back out into sort of like a podcast form
Starting point is 00:22:30 with a pocket TTS voice. Which is open source, runs on your CPU. And that's all stuff I could do myself or sit at a thing and task an LM to do. And I might not even bother if it's like, oh, I got to go interrupt what I'm doing, but if all I have to do is fire off a quick voice request. But you can also burn a lot of time
Starting point is 00:22:47 if you're trying to set that up fresh every time or if you haven't properly sort of ossified it and tested it and made sure that it works, reliably and this is where you're talking about being a good operator is understanding because by default the lLM that's sort of operating within this harness doesn't necessarily and this varies per harness but open clause not great at this doesn't necessarily have a very good understanding of how it works like how it's a moan model of how it functions and so if you don't have that model and aren't sort of you know passing some of that info or haven't spent time doing that then it can get really
Starting point is 00:23:17 confused and you have situations where it works great a couple of times and then a week later you're like oh, it has no idea that it was ever able to even do that. Yeah, I think one of the benefits of playing with this now is I've learned how both capable and dumb LLMs are, right? They're starting with a fresh world every single time. And when you experience them without an agent harness around them, that's always your primary interface. And so once you have an agent harness and it starts to become a little more personalized, I'll give you one more use case example that happened on Saturday, yesterday.
Starting point is 00:23:47 The boy and I were sitting there and we went to go to Tuna, which I talked about recently. to watch some streaming TV. And we went to the regular show channel. And it's one of those dad moments you hate where you hit the button and you get stream failed. And we're like, oh, I've just been telling them I just set up a regular show channel. Stream failed. I'm sorry, Dylan.
Starting point is 00:24:07 I don't know why this isn't. Oh, wait a minute. Lour, go see why Tuneard says there's no episodes for a regular show. I've got the entire series. There's eight seasons. There's plenty of episodes. Fix it, right? And what Laura identified was, is that Ersats apparently didn't care, but Toonar does care how the files are organized on disk.
Starting point is 00:24:26 Even though it's getting the information from Jellyfin, Tunarr is still sensitive to how the files are organized. And if you don't have them in individual season folders, it doesn't see any episodes. So there's eight seasons, and, you know, some of these seasons have like 36 episodes, and they're all just in the root directory. So Laura identified that and said, well, here's the problem. Do you want me to SSH into Custodian, create the season folders, move them all for you and organize them, and then ping the Tunarr API and have it rescan? Yeah, do it. And then five minutes later, we go back to the channel, hit play, and it works. And I mean, I never had to get off the couch.
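The reorganization described here can be sketched in a few lines. This is a hypothetical reconstruction, not the actual script the agent wrote: the filename pattern, the paths, and the idea of following up with a Tunarr rescan call are all assumptions.

```python
import re
import shutil
from pathlib import Path

# Matches the common SxxEyy episode naming convention.
SEASON_RE = re.compile(r"[Ss](\d{1,2})[Ee]\d{1,3}")

def season_folder(filename):
    """Derive a 'Season NN' folder name from an SxxEyy-style filename."""
    m = SEASON_RE.search(filename)
    return "Season {:02d}".format(int(m.group(1))) if m else None

def organize(show_dir):
    """Move loose episode files in a show's root into per-season folders.
    Returns how many files were moved."""
    show_dir = Path(show_dir)
    moved = 0
    for f in sorted(show_dir.iterdir()):
        if not f.is_file():
            continue
        season = season_folder(f.name)
        if season is None:
            continue  # leave anything we can't classify alone
        dest = show_dir / season
        dest.mkdir(exist_ok=True)
        shutil.move(str(f), str(dest / f.name))
        moved += 1
    return moved
```

After moving the files, you would hit the Tunarr API to trigger a rescan; the exact endpoint depends on your install, so check the Tunarr API reference linked in the show notes.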
Starting point is 00:25:05 I never had to stop interacting with my son. I just sat there and tasked the machine to go fix it for me because that harness tells it all the information it needs to know to go do those things. And, of course, it can execute tools. It's so useful as an organizer and sort of default cis-admin interface for a lot of stuff like that, where it's like, I know what needs to be done. I just don't really want to do the TDM of doing it. And it can present me like a plan that I can audit and approve and then tell it to go do that. So I think the reason why it kind of shifts this week is Gemma 4 has landed. This is from Google DeepMind.
Starting point is 00:25:37 It's their open source Apache 2 licensed model. And so it's based on their bigger commercial Gemini model, but this is their open source. source play. So they have a high-end commercial play, and now they're trying to have a high-end open-source play. And it does seem to have pretty advanced reasoning. It's been trained with Agentic workflows, and people have been running it on their iPhones. And it seems to be performing more than 20X models its own size. It's, you know, competitor models. And you can try it out right now in LM Studio. I downloaded it on my 2016-era NICS station upstairs, which is, it's got like an AMD radion maybe from
Starting point is 00:26:15 2018 at best. I think I upgraded it once from like an old invidia card to a radion just for compatibility. Maybe. And it's got an I-7 from 2016. And it's got 32 gigs of RAM. Oh, maybe 64. And I was able to load Gemma 4
Starting point is 00:26:31 on that 2016 era system and actually have it do accurate image analysis in, you know, under a couple of minutes. It's slow on that old system. But people on iPhones and even newer boxes are seeing incredible results with tool call capability, reasoning, and you can run the entire thing on your local machine.
Starting point is 00:26:50 PJ, you were trying it out earlier on your machine. What are the specs for the computer that you were running it on? The machine I have is a, we'll see, AM4, 5700, I think. The nicer one, the one of 3D cache, and it's got a 1070, GtX 1070, so another older GPU. Could you tell if it was using GPU acceleration? Yeah, it was. I loaded up NVTOP, and I know that the Olamo WebUI that I've got loaded up is, it's all set up for GPU use. And I just threw a picture of my dog, put it in the chat, and then it disguised my dog within just a few seconds. Oh, really? That fast. Yeah, I'm even using the 12B model, which is probably too big for the amount of V RAM I have, but it tends to work fine when I throw stuff like that at it.
Starting point is 00:27:35 Yeah, this is, I think, beginning to be at a model scale where you could play with the Zagentic stuff and not blow a bunch of monies. money on big tech cloud tokens. Then something else that landed this week, which we haven't had a chance to try it because we want to give it a proper try, if you guys are interested, is Mesh LLM. And it lets you pool your spare GPU capacity across your land and then expose it as one open AI compatible API. This is from Block, and it's a peer-to-peer system that lets anyone pool spare GPU compute.
Starting point is 00:28:09 And what it does is the system loads an LLM, And when it consumes the VRAM of the local host, it goes out to the mesh and it distributes across the mesh and then continues loading the model. I really want to try this. I, you know, I have been saying this is coming. And I am so excited to try this because it really opens it up for folks like us that have spare hardware. They're not great, but if you really pool it all together, I don't know, or maybe even just a couple of cheaper VPSs. We'll see. We'll see how much it really needs a great GPU or whatever.
Starting point is 00:28:40 But I'm really excited about Mesh LM and it's open source by block and I think it's going to be really great. They also have an agent harness they've open source. So we've talked a lot about OpenClaw. Loeb Hub is another one I've played with. I think it's out of China. It is open source and it's a lot more user-friendly. It is the chat interface itself. It's also the skill store, the MCP store, different agent personality stores.
Starting point is 00:29:06 and it connects to every freaking kind of provider that ones I've never even heard of. I had no idea there were that many plus all the local stuff and all the free stuff. It's crazy. And it's just a built-in agent orchestration platform in one UI. It's called Loebhub. Goose is another one that's out there. And then there's the Hermes agent, which is being described as a self-improving AI agent built by newest research. And they say it has a built-in learning loop, which I've also created for my own.
Starting point is 00:29:32 I call it the reflection loop, which I recommend. end. It's a nightly job that scours the JSON chat sessions and looks for mistakes and corrections that the agent made and then documents those in the memory. And they've built that in. So it self-improves. It nudges itself for persistent knowledge. It searches its own past conversations and it builds a deepening model of who you are across sessions and you can run on a $5 VPS. Love it. Yeah, there's so many cool options. I put two in here that are sort of on opposite ends of the spectrum. So one is called Libre Fangs. This is like a Rust, It's an agent operating system, a full platform for running autonomous agents, built from scratch in Rust, not a chatbot framework, not a Python wrapper.
Starting point is 00:30:13 And as they put it, traditional agent frameworks wait for you to type something. This thing runs stuff that just is supposed to work for you, 24-7. So it comes with like a researcher, a collector, a predictor, a strategist, a traitor, a Twitter personality, LinkedIn, browser, API tester, DevOps personalities. And so you can spawn all these things that can persistently run. Wow. You've seen similar things with like Gats. town and there's a bunch of it. But then on the other side is something called open harness,
Starting point is 00:30:39 which is a lightweight agent infrastructure, sort of tool use, skills memory, multi-agent coordination, but that's kind of it. It can connect to other things too, but it's intentionally built to be an open source Python implementation designed for researchers, builders, and the community to help you understand how it works, experiment, extend. So rather than something that's like a product or a crazy fast open source moving thing, here's something maybe you could get more comfortable with and actually play with and like build understanding.
Starting point is 00:31:05 Okay, nice finds. I think a couple other trends. I also, I have like my biggest takeaway for both the haters and and the hypers that I want to get to. But I think we're about to see this massive enterprise shift towards agents. And Microsoft's leading here. I predict here on the show. You're also going to see Red Hat make a big deal about this at Summit this year. You're already seeing it sort of in like job postings and stuff, you know, way more stuff like MCP tool mentioning and just like agent pipelines.
Starting point is 00:31:34 That's one of the things we watch for the show, and it's clear. So Microsoft has released the agents' governance toolkit. Really, the takeaway here isn't this particular product. It's that this is a thing that these companies are going to make now. And it's their new open source project. It's a toolkit that has identified all kinds of different types of risks and tried to create essentially a sandbox and OS system around it that does governance, monitoring, communications, monitoring, all that kind of stuff.
Starting point is 00:32:06 Because you can imagine, right, like when these things have access to various tools and various implementations of them, and you're going to want to have, you know, some person over here in sales be able to do certain things, and another person in IT do more things, and you're going to want to tie it into all your existing ancient enterprise tools. It ties into your policy engine. It intercepts every agent action before execution. There you go. And also, interestingly enough, includes a mesh network.
Starting point is 00:32:31 component so you can have agent-to-agent communication. But presumably with policies. Yeah. That's interesting. Yeah, yeah. I think you're going to see other companies lean into this a lot. And then there are, I think, probably two big takeaways that we should probably talk about. But first, thank you to our members for making this possible.
Starting point is 00:32:52 You can support the show at LinuxUmpug.com slash membership. We just have to find networking right now. We could really use the support to keep us going. And if you like where the direction we're going and how to be. we go about it. It sure supports one of the best ways to keep it that way. Linuxunplug.com slash membership or jupiter.com party. And a big thank you to our members for supporting the unplugged program. Now, we've seen a lot of open source projects recently describe how they want AI to be used in their projects.
Starting point is 00:33:22 We've even seen some projects just outright try to block AI. But the real question is, is that a futile gesture? Or are they just going to have to find a way to play? nice with these AI agents who are trying to make our world's a better place. Yeah, I mean, there's a few different ways to think about that and, you know, sort of the game theory or strategy side and the real, you know, sort of whatever happens in the market and where things go. But I think it's interesting that, like, the downsides and sort of like, who are you
Starting point is 00:33:51 fighting and who are you trying to serve? Maybe changes. Like, there's one version where, like, it's a bunch of scrapers sucking down the entire internet to train things and, you know, is making it hard for you to run your web website. We see that and that's a real problem. Yeah, for sure. And the other version is, like, I'm trying to compare some open source projects for the show, and I want my bot to go clone each of them so we can write a report about how to, you know, make me a table to think about the different ones and see which ones I want to actually try. And, you know, I want it to respect rules and
Starting point is 00:34:23 rate limits and all those things, but it's also sort of just doing what I would do myself. And so having it blocked just sort of feels like it's limiting the stuff that I want to do, which is probably to go on and, like, highlight an open source project that I've tried and I'm excited about or maybe contribute to or, you know, engage with in some way. Yeah, and so I think what we're seeing, and this is going to be a tough one for the community to wrap their noodles around because it's a big transition. Because for as long as I've been on the internet, bots are bad. And, you know, maybe you want your site indexed for Google or something, but that's about it.
Starting point is 00:34:54 Otherwise, bots are just traffic and putting load on your system that you don't want. But we're shifting from mass web scraping bots, which still exist and maybe are even worse than they've ever been in some regard, to something that's a lot more personal, something that's an extension of the user's hand acting on the direct intent of the user. And so if you block that, you're blocking new users to open source software. You're kind of saying, well, we want users, but we don't want the users that are asking the machine to do it for them, which is kind of a moral call there. And I don't think we've really thought through that as much. And I think by just saying what we're going to do is we're going to just put a proof of work, you know, anime bot thing up first. And then you look at the cute little anime thing and you do proof of work and then you get to, you know, view the source code. And while that is quaint, it's easily defeatable.
Starting point is 00:35:48 I mean, there are so many projects now. A lot of these aren't harnesses are just coming with things that just defeat it built in. And so you're just creating more load and more work and burning more. more CPU cycles and using more electricity for no result. And I'm not saying that that's a good thing. I'm saying the way to solve it is instead of bearing your head in the sand, it's to engage with the process. You know, free software didn't get as far as it did because we never engaged with licensing debates and legal actions. We had organizations that sprung up around free software to protect and fight for it in the legal system to give it a legal space where it had to be respected.
Starting point is 00:36:27 And now you have organizations that are ginormous the size of countries all around the world that are following free software licenses for the most part. It didn't happen because we just ignored commercial software in the legal system and buried our head. That happened because parts of the community intelligently engaged and advocated. And that's what has to happen here. You're not going to get away from these bots. They're going to just walk right around your cute little anime proof of work splash page. And they're just going to get the information they want. or they'll go to a project that makes it available via an API, JSON, markdown file, or just doesn't block them.
Starting point is 00:37:02 And because software is going to be easier to create, easier to extend, and easier to patch, it's going to be easier to make the projects that block that more irrelevant. And because it's going to be the new generations coming on board that are going to be using these tools, you're going to block new adoption. And we sure hear a lot of lip service about trying to draw in new users to free software, but when the opportunity actually comes along, we're actually gatekeeping, and we're blocking them and preventing them because we don't like the tools they're using. I think it's, you know, there are certainly valid concerns and arguments around sort of impacts to and risk to sort of the commons from some of these developments.
Starting point is 00:37:38 But I think what we don't talk about enough is the extent that these same tools can enable that. Like, I'm making and publishing more open source these days with some of the help of these tools than I ever have before. Now, if anyone wants to run it, that's up to them. But, like, I think there's a version of this where we can sort of embrace the good parts and you, use that to build more of the open source stuff that we need. But I think, you know, your story is a good one, and you really have, like, leveled up. It's incredible. But my story, I think, is maybe more what the community should think about is I have
Starting point is 00:38:07 been using computers for 40 years. I'm getting to be an old man, and I've been using computers since, like, that you hooked them up to TVs. And, you know, like, like, a long time. I used cartridges, and, like, it's been a while. And in 43-ish years of using computers, because I'm getting old, I never once wrote anything more than a line of bash code that I use myself. Now you can go to my GitHub repository, and I'm releasing software like crazy. Now, a lot of it's for myself.
Starting point is 00:38:41 Some of it are upstream patches. Some of it's for the J.B infrastructure or something like that. But I am now writing open source code that people are using. And it's good. It works for me. and a lot of it's been running for months in production. So I never created software for 40 years until these tools came along. And now I'm creating GPL'd software.
Starting point is 00:39:02 And like we just see like, okay, there are proprietary things, right? Like, you know, cloud code and et cetera. But there's just, as you expect in our wonderful community, like immediately you see all of these various open source harnesses and, you know, whatever you think of GitHub itself. Like if you just go look at activity and various things on GitHub, it's clear that there's just a lot of people excited. to work on and tweak and share ideas and get inspired.
Starting point is 00:39:24 It's like there is a sub-aspect of this that is all of the great things we love about open source. And I think that probably doesn't get enough attention. So that's, I think, one aspect of this I want to talk about with you guys before we move on. And the last one is I really want to stress this point. The agent is not the magic part. The real reward in this is the infrastructure you build along the way. And the more I work on these AI agents like OpenCla, the more I think the really valuable part is the infrastructure I'm setting up around it.
Starting point is 00:39:58 Because you got to remember these things are often starting with a blank brain for the most part. You can't trust them with complete tasks or jobs. Actually, that's the breakdown. I don't think you can trust them with a complete job, but you can trust them with a complete task if you give them the skills and the guardrails. And what is that when I say that? It's generally a Python script that they call with certain forms. flags depending on the task. It's maybe a CLI Rapper, a Raptor. So like, for example, I use GWS. And GWS is a command line client that Google is put out to interact with Google workspace
Starting point is 00:40:32 in an agentic safe way. And I have, I have like six GWS inboxes. And I have created a wrapper for GWS, UUS unplugged, GWS, you know, XYZ. And so when the agent goes to check the inbox, there's no ambiguity of what inbox they're checking because they're calling the unplugged, wrapped GWS client. And then there's a skill, which is a markdown file that just says, to check the inbox, do these steps. I wrote that once. And then I can, for the rest of internity, just ask the agent to go execute that task.
Starting point is 00:41:04 And that's kind of when I say you need to build the infrastructure around. Maybe it's a script. Maybe it's a CLI. It's probably a skill, something that gives the agent some instructions from a completely blank brain. And when you build this a little bit of scaffolding around it, you get incredible results. This can also be where it's helpful to say maybe you start in the browser talking to something like Cloud or your favorite assistant and you build the spec and then you use something like open code to actually implement and test it and get it ironed out and then you can load it in and run it in Europe. There's a lot of ways to sort of combine these and not just shove it all through the claw as well.
Starting point is 00:41:36 I think actually, you know, for you, most of your work isn't in a claw, right? You're generally interacting through some other application or your app or some interface. It's really just what you're trying to get out of it. I mean, I think that's interesting, like, my primary interface is probably the open collage, but I don't think yours is. No, I mean, I usually have open code going on a couple of things, and then I'll have YAP. I especially like using YAP for, like, getting skills going because it's a very clean
Starting point is 00:41:59 environment, so it's just, just what I've put into the prompts in the history that it has, but it has full access to a lot of tools, especially now that it has, like, direct search built into it. So that helped a lot. And then, yeah, right? And then once I've sort of, often I'll build a lot of services, maybe it's a new MCP server, maybe it's some new scripts. Like I just worked on something to better as a fallback to sort of the public convert sites to markdown stuff like markdown.
Starting point is 00:42:23 I wanted a mechanism I could run on the command line just as a fallback. And so that was useful for the agent as a fallback. It's built into one of the search MCP servers I'm using now. And I have it as a tool that I can just also run. So you can kind of like, you know, shop them all around. I think another thing to remember is they're probably going to disappoint you the first couple of times. you task them to do something because there's going to be little bits that you've missed in your skill or in your script. And so I, when I designed to do something new that's going to do routinely, I expect the first couple of times it's going to screw it up.
Starting point is 00:43:01 Because I think of it as a new hire that I've just trained to do something for the first time. And you've got to expect the new hire is going to need a little handholding a few times they do the task. So the first time the agent runs through the task and they screw it up, I then use something like open code to go review the logs. and figure out where the agent went wrong, and then go harden up the skill, quote unquote, to address that. And then I have the agent run through the cycle again. And one of the tricks I do here is I reset the session,
Starting point is 00:43:27 so it's always a fresh context. So it's not using memory because you always want to plan for a fresh context. So I'll reset the context, and then I'll run through the process again. And if it makes a mistake, I'll have open code, analyze the session logs, figure out where it went wrong, and harden the skill again.
Starting point is 00:43:42 And I iterate on that a few times, and usually by the third pass, I've caught all that stuff, and from that point forward, the thing just runs on its own forever until I want to modify it. Or OpenClaw screws something up with some massive update. I could always happen. You know, it did strike me.
Starting point is 00:43:56 It was very slow, but I was playing around with Gemma 4. And even just on the CPU, I could get it to run. Not super fast. No. But, like, I've been doing this parallel work, and you're talking about the infrastructure. And something that was really clicking for me was just this, like, I've been using Searching, Search XNG more.
Starting point is 00:44:10 Yeah, yeah, that's really hand. It's a great example of building on something that's open source and self-hostable. And so at first I was just using it, right? But then I needed to search. I didn't want to sign up for a brave API key, which is the one built into OpenClaw. You know, there's various mechanisms to do it.
Starting point is 00:44:24 I was like, but I have this infrastructure. And so I got open code to help me develop an MCP server for it, and that's baked into my injector setup. And so now anything, any LLM that makes a call that uses the injector has access to search automatically that's routed through my local infrastructure. And then so I hooked the Gemma 4 model up to that. And so then I was able to have this local model doing direct calls to my local search engine provider to then go prepare the report from here or whatever I had to do as a test desk.
Starting point is 00:44:52 Now, it took four minutes because it was running on the CPU to do a handful of tool calls. But that's just going to get better. That was all local using local self-hosted search. Now, of course, that search reaches out to duck, dot-go and various of the providers. But like, yeah. Yeah, but you're controlling that aspect of it, right? I get to set all of that. That's configured declaratively in NixOS, right?
Starting point is 00:45:10 And that's what I'm trying to come back to. It's like, oh, yeah, you have a search XNG, whatever it is, instance. Well, guess what? It just got a lot more useful. You got Home Assistant, guess what? It just got a lot more useful. You got Frigot DVR, guess what? You got TuneR, you got Jellyfin, you got anything that has an API or a config file just got more useful.
Starting point is 00:45:29 Yep. That's how it works. And so that's the exciting part, but it is very early days. And I think you should wait if you can, and things like Gemma 4 are going to make it a lot more possible. And then just because I can't not, but I think NixOS or some kind of declass. or some kind of declarative infrastructure is... Ways and keeping everything can get that you can, all very useful for this stuff.
Starting point is 00:45:49 Because they can mess things up, you can mess things up, config files change a lot, or new things happen, and just having a lot of snapshots you can roll back or reference is great. The way I try to do it with a budget is I did subscribe to the Minimax token plan, which is a one-time annual charge, and then it's so much capacity that I've been throwing everything I can at it to try to use it up. audio generation image analysis, everything, and I just cannot use up the tokens.
Starting point is 00:46:15 So it's a great problem to have because it lets me really experiment, but it's not the most advanced model. It's good. Minimax, it's an open source model. It's very good. I would love to run it locally. It's not there yet, though. But it's good, but it's not great. And it will often mess up Nix config. And the great thing is, is the Nix Config has to build and verify. And so then the agent sees the build fails, goes and fixes it. its syntax and runs the build again. And I often think if I was on a Debian system or a Red Hat system, would it have just injected a bogus config option, and then I would have restarted the service, and the service
Starting point is 00:46:51 would have just failed or whatever, or the OS wouldn't have booted. And so what Wes is saying is the reason why it's nice to have it in a declarative environment, maybe it's even just a container. I don't know that you are, you know, you can take image snapshots up or GitHub backups of config files. Whatever you're doing so you can iterate is really useful because they're not great yet. And, of course, the fancier models are, but if you're trying to do this on a budge. Yeah.
Starting point is 00:47:13 And having some kind of feedback mechanism, whether it's a linter, a format, or something they can just tell you, you know, or do a smoke test of any kind as just a fast feedback mechanism, too. So you catch mistakes before you're, like, way down six steps and it's moved on. That helps a lot, too. We're starting off our boost this week with a spaceball boost from Kangaroo Paradox. One, two, three, four, five, six, Satoshis. The hell was that? Spaceball 1. They've gone to Plaid. Thank you, Gungaroo. Here's the message fell behind on the shows. I'm slowly catching up, though. Your pre-show rant on open source projects versus AI really resonated with me, so I had to give it some value back. I was mostly on board with open source software maintainers and their approach to block big tech AI bots, and it seemed reasonable to me. But as usual, your words and passion, Chris, made me feel that I should only be at most a short-term solution because letting OpenAI and others hammer your forge doesn't really seem viable. However,
Starting point is 00:48:24 on the long term, you're absolutely correct. Yeah, that is, it's a tricky thing, right? Because there is a real problem of resources and open-source projects are limited in resources. For sure. They don't have time to be chasing server infrastructure problems. But again, I don't think you fix that by by blocking them because they just go right around you. Thank you very much, Congaroo, for being our baller booster. Appreciate you very much. Hybrid
Starting point is 00:48:48 sarcasm comes in with a row of mick, ducks 22,22 sats. Things are looking up for all but duck. Having set up my own clanker with open claw, I concur that what I want to find, or is APIs for all the things, starting with Lou Blogger. Did you see that also
Starting point is 00:49:04 Tuna has an API? Lou Blogger is the one I need to set up. Hey, add the oil change. That's another really nice workflow of the thing in your chat app is, you know, adding to the grocery card, adding to something, tracking this and writing it up nicely for me, whatever. Unbelievably, both Wes and I have grocery stores that have APIs. And so we both have wired up our clause to talk to the API. And I find this very useful.
Starting point is 00:49:27 Actually, the most, when I come home from the grocery store, I come home and I'm unloading the groceries and I go, oh, crap, I forgot to get. And then I open up Telegram and I go, add this, add this, add this, add this to my chopping cart. And then the next time I go, it's just there in my shopping cart. Or, like, Dylan comes in and he's like, Dad, I'd like to get some more bottled water. I'm like, bottled water, Dylan. Dad, I really want some bottled water.
Starting point is 00:49:45 All right, let me in. All right. Add bottled water to the shopping cart. Fine. And now it's in the shopping cart, right? So because if it has an API, that's what I'm doing. But don't get the expensive stuff. Yeah, right.
Starting point is 00:49:54 Well, and so what I set up is a preferences file. So the first time I choose a brand, it remembers that that's my preferred preference. And then, you know, size and flavors and stuff like that, too. But you're absolutely right. Tunaar does have an API, and it's glorious hybrid. because it'll let me fix a problem this weekend. You also sent us a row of duckies to say, happy Easter. Happy Easter to you.
Starting point is 00:50:15 The show is here on Easter Sunday, and we appreciate the value. Nyquist comes in with 5,000 sets. I am programmed in multiple techniques. No message, just value. Thank you, sir. And then E-Massie O-1 comes in with 4,096 sets. You're doing a good job. For a few years, I've been using Secure Boot with my own keys on my laptop with Arch Linux
Starting point is 00:50:36 Windows and Mac OS. Ah, excellent. Okay. I generated my In-It RamFS with Drop Cut, then generated a UkiI, and signed it with SBCTL. I used OpenCore as my boot manager, and I signed that and the Windows Boot Manager as well. After each kernel update, I would regenerate and re-sign the Ukii, and after every Windows update, I would re-sign the Windows Boot Manager. I also encrypted all three operating systems with B-Cash-FS, encryption, bit locker, and then it kind of gets cut off.
Starting point is 00:51:02 Interesting. This is incredible. Thank you for the experience report. Yeah, that's exactly what we wanted. I'm wondering why. I'm wondering what the motivation was. Is there a corporate requirement? Because that's a work.
Starting point is 00:51:12 It is a lot of work. Every kernel update? That guy's not messing around. Emacy, let us know. Why? I mean, other than just because it's cool to be secure, which I agree with. And then come over and set up all our bootloaders for us. There we go.
Starting point is 00:51:24 Thank you, sir, for the boost. Appreciate it. Well, our dear Gene being boosted in a series of boosts here. There's a couple rows of ducks, some elite boosts for a total of 9,340 cents. I like you. You're a hot ticket. A little comment on Linux Unplugged 658 saying, I too am thankful for your scale coverage. Oh, thanks. We missed you, Gene.
Starting point is 00:51:53 We sure did. At home, I'm using systemd-boot in most places because it's the default in NixOS. Yeah, yeah. And the more I think about it, the more I feel that I'm very against any age verification that is done off device. I'm not a fan of the idea in general, but can live with something that isn't sending me to a third party for that verification. These inherently won't work as well, but it's okay considering the alternative is to give up any semblance of privacy.
Starting point is 00:52:21 If I have to prove myself to my computer, I've just effectively registered it and everything it does with the government. Yeah, I guess if you're thinking it's going to happen one way or the other, and if it doesn't happen the way it's been talked about now, it's probably going to happen through some sort of third-party verification, that does seem kind of a bad direction. Good point, Gene. I read a really neat contrast to this saying, hey, the content should tell us what age it's appropriate for, not us telling them who we are and what our age is. That's what's always made me think it's really just about waving the hands on, like, the advertiser's side.
Starting point is 00:52:59 Well, we checked, right? We did something. If they really were trying to prevent it, they probably would do it at the content side. But maybe that gets more legally murky, I don't know. Gene continues here with an elite boost. Do you know of any podcast clients or other clients that will show the video version that you mentioned being in your feed? Podverse and Fountain. Yeah, Fountain.
Starting point is 00:53:19 I know there's a couple of others, but I don't have direct experience with them recently. But they will let you see the video version of the show. Gene, thank you for asking. Gene also wants to make sure we saw an article here about Euro-Office launching as Europe's open-source office rival. And links to a nice little source here. Yeah, there's also been quite a nasty breakup between Collabora and LibreOffice. It's not good. It's never been a better time to be a markdown user and not need an office suite,
Starting point is 00:53:47 which is a privileged position. I agree completely. That is true. Thank you, Gene. Appreciate you very much. It's always good to hear from you. Theo Mall comes in with 6,000 sats. Oh, my God, this drawer is filled with broolopes.
Starting point is 00:53:58 Oh, he's a long-time listener. First-time booster. Hey-ho. Love the show, and so does my son, 15. That's great. Running Linux and loving every minute. I'm wanting to migrate away from Google Workspace and was leaning towards Proton.
Starting point is 00:54:13 Now I'm thinking maybe NextCloud might be a good way to go, hosted on a local server. What are your thoughts? Hmm. Well, I think if it's only a handful of users, NextCloud, well, hmm. Depends on what kind of work you want to do in the long term, I would say.
Starting point is 00:54:28 Yeah. Yeah, I do like this idea. I was going to maybe suggest, what's the one that you like? Zoho. Zoho. I know it's not self-hosted, but it's a nice alternative
Starting point is 00:54:40 to Google workspace. I think you should try it, to be honest with you, Theo, because it's good for most people, but it really comes down to users and how they interact with web apps and how they take to the performance
Starting point is 00:54:52 of NextCloud. True. Exactly what you're doing with it. Yeah. I wish it could be a solid yes. I really want to be able to say that, but I just don't think it is if I'm being honest with you,
Starting point is 00:55:02 but I think it's worth trying. Is that fair? Yeah. I mean, different people have different standards, different needs, different performance characteristics that they're okay with. And honestly, we'd love to hear feedback
Starting point is 00:55:12 from people that have it out there working successfully as a Google Workspace alternative. No, not just talking file sync or your darn photos; talking a full Google Workspace alternative. It's also hard because, I don't know, maybe there are, there probably are some, but I don't love the UI in Google Workspace
Starting point is 00:55:27 or the Microsoft offering. So I don't know what the best version even is. Forward Humor boosts in with 4,444. Things are looking up for all but duck. Hey guys, I'm enjoying hearing the compliance conversation with Determinate Systems in episode 657. Have you heard of anyone running NixOS in a CMMC or ITAR environment? I'm not sure if it's even possible to meet FIPS requirements on NixOS
Starting point is 00:55:51 and would love any input. I imagine these are the exact kind of problems Determinate Systems and Flox are trying to solve, right? This is the value add that they can bring to enterprises that are using Nix. Yeah, I could probably go ask around, maybe on the NixOS Discourse. That could be one spot. Maybe also, and you may have done these things already, you know, go troll some recent Nix conferences.
Starting point is 00:56:11 There might be folks talking about that kind of thing, because there definitely are people exploring the space; exactly where the progress is and the exact state of things, I don't necessarily know. And then, yeah, third, maybe go reach out to folks like DetSys or Flox or various folks who are more interfaced with, like, people who might be in those environments. Yeah. And you might be able to find some people who would be willing to have a talk.
Starting point is 00:56:30 Yeah, it is an area that, you know, Red Hat is focused on and SUSE has focused on for a very long time, you know, just trying to get each one of those checked off over the years. And I think you're seeing the same process start with Nix, but I'm not sure. It might also depend, too, right? Like, are you trying to run something that ultimately builds a container that has an SBOM that runs on whatever? Or are you trying to run
Starting point is 00:56:48 like full-fat NixOS? Yeah, yeah, yeah, yeah, exactly. All right, well, thank you everybody who supported the show with a boost. We really do very, very much appreciate it. Thank everybody who also streamed sats. 18 of you did that. And collectively, you stacked 33,366 sats.
Starting point is 00:57:04 Not bad, not too bad. You combine that with our boosters: 208,174 sats. Thank you very much for supporting episode 661 of your Unplugged program. If you'd like to send us a boost, I think Fountain makes it probably the easiest. There is a whole self-hosted route you can go with Alby Hub and a bunch of apps, which is a lot of fun. We talked some of that in a recent episode of This Week in Bitcoin, if you want to check that out. And thank you also to our members.
Starting point is 00:57:30 One pick this week. That's a rarity. Weird. Which is technically the rule of the pick segment. It's only supposed to be one. It's been almost a year, I think. But we wanted to talk about single file CLI because it solves a pretty, pretty handy problem. Or whatever.
Starting point is 00:57:47 It's a CLI tool that solves a problem that I've had probably, I don't know, forever, because I used to solve it with something built into Netscape and then Firefox. It saves a complete copy of a web page in a single HTML file, based on SingleFile. And this is single-file-cli. Yeah, that's right. CLI tool for saving a faithful copy of a complete web page. And crucially, right, like you can do that in a variety of ways. And there are probably better or different tools.
Starting point is 00:58:12 So, you know, boost in, right, if you have a preferred version of getting this task accomplished or archiving websites. But I liked sort of the idea that maybe for your own archive, for processing somehow, whatever you're trying to do, of just, like, a single HTML file per site instead of having stuff that's, like, scattering a bunch of images into a folder, which is better for some use cases for sure, but not for simplicity. And this could be good or bad depending on your opinion, but I like that it uses Chrome or Chromium, and then it uses Deno to run as a standalone script injected into the web page using the
Starting point is 00:58:46 Chrome DevTools Protocol to actually render it through Chrome. So if it's a website Chrome can render, you're going to be able to capture it with this, which means everything, right? So I use Firefox as my daily driver, but that's absolutely valuable for me. You just have to have Chrome or Chromium installed, or a Chromium-based browser, and then be able to support that remote...
Starting point is 00:59:09 It is. It's very easy these days with the current version. So single-file-cli, and it is... AGPL-3.0. Thank you, Wes. AGPL-3.0. We could always use your suggestions on some picks, too. If there's something that you find very handy that you run on your Linux box that we haven't covered or makes your server more useful,
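For the curious, basic usage looks something like this. The URL and output name are placeholders, and the invocation style is going off the project's documented `single-file <url> <output>` shape; you'll need a Chromium-based browser installed for it to drive:

```shell
# Save a complete page into one self-contained HTML file.
# Guarded so it degrades gracefully if single-file isn't on PATH.
url="https://example.com"
out="example.html"

if command -v single-file >/dev/null 2>&1; then
  single-file "$url" "$out"
else
  echo "single-file not found; install single-file-cli plus a Chromium-based browser"
fi
```

Check `single-file --help` for the flag that points it at a specific browser binary if it doesn't auto-detect yours.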
Starting point is 00:59:27 send it into us; we'd love to cover it because we're always looking for great useful tools. And I feel a little bad that we only had one pick for you, even though that's technically the rule. Here, I got a bonus pick. What? No way. Okay.
Starting point is 00:59:37 It's called HTML to Markdown. Okay. And it's just a single, it's a Go project. This was one of the inspirations for me making my own tool, which you can use too if you want. I'll throw a link in there, but really, I just wanted it more as a library. So this is a sneaky double pick?
Starting point is 00:59:51 Yeah, it is. Oh, my God. And this was one of the inspirations along with the Mozilla readability, like for their reader mode stuff. Wow. You know what, Wes, you make me want to be a better man. You make me want to be a better man. A robust HTML-to-Markdown converter that transforms HTML, even entire websites, into clean, readable Markdown.
Starting point is 01:00:08 It supports complex formatting, customizable options, and plugins for full control. But it can handle, you know, tables and complicated nesting and a lot of nice stuff. So you're saying I can take, like, those Libre documents, and I can rage quit LibreOffice because they got beef with Collabora, and I like Collabora a lot, and now I can just convert them all to markdown and have beautiful markdown-rendered versions of documents, even if they have tables in them? You know, I don't know, but it depends on how dynamic. If it's all JavaScript rendered, then maybe not. I think you just should have said, no, that'd be, yeah.
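For a flavor of the conversion: the Go project also ships a command-line frontend, which, as I understand it, reads HTML on stdin and writes Markdown to stdout. Treat the command name and behavior here as assumptions and check its help output; the pipeline falls back to plain `cat` when the tool isn't installed, so it runs either way:

```shell
# Convert a small HTML snippet to Markdown via the (assumed) html2markdown CLI.
# If the tool isn't present, cat passes the HTML through unchanged.
printf '<h1>Notes</h1><p>Some <b>bold</b> text.</p>' \
  | { command -v html2markdown >/dev/null 2>&1 && html2markdown || cat; }
```

The same conversion is available as a Go library if you'd rather embed it in your own tool, which is how it gets used for the reader-mode-style cleanup discussed here.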
Starting point is 01:00:38 Wait a minute, hold on. There's JavaScript in LibreOffice documents? Well, if you're talking about a website. Oh, oh, no, I'm talking LibreOffice. Okay. Yeah, you're not, you're in office world. HTML? Because, you know, I thought they could save them as HTML.
Starting point is 01:00:49 I don't know. Oh, probably. Yeah, well, that should work. Okay. Well, I thought you could. I thought you were thinking of some sort of online interface. I don't know what you're... I'm trying to rage quit LibreOffice
Starting point is 01:00:57 because I got beef with my boys at Collabora, and I thought maybe you were bringing me a tool to make that easier, and you're just shutting me down there. I'm trying to help your claw read stuff on the internet. That's useful and relevant and on theme for the show. Or make your archive, you know; instead of some stuff you want full HTML, some stuff you just want markdown,
Starting point is 01:01:13 because do you need the full HTML of the Phoronix article? He's doing it again, Brent. We're bringing a show-relevant thing. It's on theme. I'm kind of with Wes on this one. You probably should be. All right, links to that are in the show notes over at linuxunplugged.com/661. You can find all of that and our contact information, RSS feed, the Mumble Room.
Starting point is 01:01:33 But you know what, if their claw does want more information, more metadata, or maybe they're just using their own meat hooks, we got something for them, don't we, Wes? Oh, we got a structured-data-rich RSS feed. It's an XML file, don't you know? I like XML. Well, I don't, but the machines do. Well, and you can have namespaces, which is pretty cool, because then you can put the podcast namespace in there, and that's got all kinds of fancy goodies. You could just ask your claw to expose the MP4 that snuck in there that your podcast client doesn't show you.
Starting point is 01:02:00 Yeah, that's right. What about, like, information that was in the, for the content of the show, Wes? There's got to be something I can sync my client to. Like the description tag? No, I mean, yeah, I guess that's a starter. Or, like, the iTunes keywords? No, that seems old. No, I want, like, I want to know all the brilliant things Brent said.
Starting point is 01:02:17 Oh, for that, you want chapters? Well, I mean, that would get me close. That gives you the Brent section of the show. Okay, okay. I thought maybe you'd have a transcript for me. Oh, yeah. Yeah, no, we do actually. Oh, we do. Yeah, VTT and SRT. You might even say we've had it for a couple of years now, but it is handy more so than ever. All right, and last but not least, a little bit of metadata for you.
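Those Podcasting 2.0 transcript entries look roughly like this inside a feed item. The snippet below is a self-contained sample with made-up URLs, plus a small pipeline to pull the transcript links out, the kind of thing you'd point a claw (or a script) at after a `curl -s` of the real feed:

```shell
# Illustrative feed-item fragment carrying podcast-namespace transcript tags.
# URLs are hypothetical; a real run would start from: curl -s "$FEED_URL"
item='<item>
  <podcast:transcript url="https://example.com/661.vtt" type="text/vtt"/>
  <podcast:transcript url="https://example.com/661.srt" type="application/x-subrip"/>
</item>'

# Extract just the transcript URLs from the transcript tags.
printf '%s\n' "$item" | grep 'podcast:transcript' | grep -o 'url="[^"]*"' | cut -d'"' -f2
```

For anything beyond a quick grep you'd want a real XML parser, since namespaced tags and attribute order aren't guaranteed to be this tidy in the wild.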
Starting point is 01:02:38 We are live every single Sunday over at jblive.tv. See you next week. Same bat time, same bat station. We do it at 10 a.m. Pacific, 1 p.m. Eastern. And, of course, we got it over at jupiterbroadcasting.com/calendar. Say it like that. Jupiter. And if you go to jupiterbroadcasting.com/calendar, the script will just convert it to your local time zone
Starting point is 01:03:01 so you don't even have to do the math. You come hang out in our Mumble Room, our chat room, tell us we're a bunch of goofballs. We love it. And help title the show as well. Shout out to our members for supporting this episode and to everybody tuning in that shares it. We always appreciate that as well.
Starting point is 01:03:14 Word of mouth is the number one way to promote a podcast. Also, Brent better leave soon because LinuxFest Northwest is coming up, and the schedule is actually live now. LinuxFest Northwest. Go check it out. Schedule's live, and we'll see you there. It's going to be great. We're going to do a live show.
Starting point is 01:03:25 Thanks so much for joining us. See you next Sunday.