LINUX Unplugged - 652: Have Your Bot Call My Bot

Episode Date: February 2, 2026

We stress tested open source AI agents this week. What actually held up, and where it falls apart. Plus Brent's $20 Wi-Fi upgrade.

Sponsored By:

- Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
- Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:

- 💥 Gets Sats Quick and Easy with Strike
- 📻 LINUX Unplugged on Fountain.FM
- PlanetNix 2026 — Where Nix Builders Come Together
- Agenda - PlanetNix 2026
- SCaLE 23x | Registration — Get 40% off registration with promo code "UNPLG"
- Pasadena Linux Party Meetup
- LINUX Unplugged 650: This Old Network — We rebuild a small office network around Linux, with an Unplugged twist and real-world constraints. Things don't go quite as expected...
- OpenWrt Wiki - Ad blocking
- AdBlock-fast — AdBlock-Fast is a high-performance ad-blocking service for OpenWrt that integrates with Dnsmasq, SmartDNS, and Unbound.
- OpenWrt Wiki - Supported Hardware Table
- Abe's volition — This is a public, sanitized version of my Abiverse repository for people to use and run their own Abes
- OpenClaw — Clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use.
- Introducing OpenClaw
- openclaw on GitHub 🦞
- openclaw security Issues
- Practical changes could reduce AI energy demand by up to 90% – University College London
- moltbook — the front page of the agent internet
- SeloraBox - NixOS Edition — SeloraBox is a self-configuring home automation appliance based on NixOS, featuring automated installation, device claiming via QR code, and self-updating configuration management.
- nixtcloud: Nextcloud with NixOS in the backend and P2P connectivity enabled — Nixtcloud turns a Raspberry Pi or NanoPi NEO3 into a privacy-first, zero-config personal cloud — powered by NixOS, Nextcloud, and peer-to-peer remote access via Holesail. Built for the self-hosting crowd who want full control without constant babysitting.
- Holesail.io - Open Source P2P Reverse Proxy
- Pick: ml-sharp-pinokio — One-click 3D Gaussian Splatting generation from a single image.
- Pick: Gradia — Gradia helps you get your screenshots ready for sharing, whether quickly with friends or colleagues, or professionally with the entire world.

Transcript
Starting point is 00:00:00 Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris. My name is Wes. And my name is Brent. Hello, gentlemen. Coming up on the show today, we've been stress testing open source AI agents all week. From OpenClaw to projects you've probably never heard of. We'll talk about what actually held up, where things kind of fall apart, and what is definitely hype.
Starting point is 00:00:31 And then, a few surprises we didn't see coming. Plus, Brent has a $20 Wi-Fi upgrade you're going to absolutely want to steal. Stay tuned for that. And we'll round it all out with some great boosts, some picks, and a lot more. So before we get to that, time-appropriate greetings to our virtual lug. Hello, Mumble Room. Hello, Brent.
Starting point is 00:00:52 Hello. Sounding good? Hello, everybody. Hello, everybody. Quietly listening, but that Mumble Room is always going. Jupiterbroadcasting.com slash mumble. And a good morning to our friends over at Defined Networking. Defined.net slash unplugged.
Starting point is 00:01:05 Go check out Nebula, a decentralized VPN built on the open source Nebula platform. We absolutely love it and are using it in more and more ways every single day. If you go to defined.net slash unplugged, you get 100 hosts for free, no credit card required. Now, this is really the actual difference. Their free tier doesn't exist just to funnel you into a VC-funded SaaS roadmap. It really is something that you have full governance over. You can go from managed to completely self-hosted, or vice versa. You own the network.
Starting point is 00:01:32 You own the identity. You own the infrastructure. You don't have to worry about a control plane going down, any of that. And Nebula's decentralized design means there isn't a single point of failure. And with their managed Nebula product, they can take care of all of the bits for you. It is incredibly scalable. One of the things I appreciate, being on a very limited connection right now, I'm back on LTE for a bit.
Starting point is 00:01:56 And Nebula is so smart about the way it uses network traffic. And it's an order of magnitude difference between some of the other mesh VPN systems and Nebula. It's an order of magnitude difference in the efficiency. I'm very, very impressed. The best thing will be to go check it out: 100 hosts for free, no credit card, no lock-in. Defined.net slash unplugged. Just around the corner: 32 days until Planet Nix and Scale. That means 25 days until Brent needs to be on the road, and four more Linux Tuesdays on a Sunday before we are in Pasadena, California. Why is my heart racing so much? Why am I feeling so stressed? What's the deal?
Starting point is 00:02:36 You know, I just focus on how awesome Planet Nix is going to be this year. This is, I mean, they've got a vision for it. FLOX has really figured it out. They know how to do this. The first one's under their belt. They're the perfect organizer for this exact kind of event because they get the community. They get the business side. They get the builder side.
Starting point is 00:02:55 Like, it's just, chef's kiss. I think it's going to be a good one this year. The agenda is live. Yeah, that's right. I'll be giving a talk. Our buddy Alex is giving a talk. There's a lot of nice looking talks. Including something about Nix meets WebAssembly.
Starting point is 00:03:10 And NixBSD. What's that? Oh, okay. Yeah, our Planet Nix coverage is supported by FLOX, and they're going to be there. We're going to see them. They're focused on making reproducible dev environments actually usable. You should check out FLOX. It's the second year they're sending us. And it's awesome. And, of course, at the same time, SCaLE 23x is going on. You do need to register for Scale to go to either Planet Nix or Scale. Use our promo code, UNPLG, to get 40% off that. Can't wait for Scale. We'll have a meetup. Meetup.com, the Jupiter Broadcasting meetup page, is up. It's a placeholder, but it's there. Huge. Very excited. I think it's going to be great, guys. It's been a while since I've been in nice, sunny Pasadena. I think it's going to be beautiful.
Starting point is 00:03:51 We've had a great crew down there, and I think it's going to be a great Planet Nix and a great Scale. So many wonderful nerds all in one place. Mm-hmm. Such a rare thing. So I hope you can make it. Even if you can't make it to the events but you're in the area:
Starting point is 00:04:03 Meetup.com slash Jupiter Broadcasting. Let us know so we can let the venue know. Well, Brentley, Wes and I have been hearing bits and pieces of you solving Wi-Fi problems, which often starts with something not working, right? Yeah, our ears are burning and our switches are burning. We need more details. We have to imagine there's quite a story here.
Starting point is 00:04:26 Yes, well, networking is not my favorite thing. It only ever starts with some kind of complaint. And the complaint this time around: I've been spending more time with my parents and at my parents' place. And my mother was like, hey, I can't really get Wi-Fi, like, lazing in bed on a Sunday morning, from my bedroom. I was like, what? You're only saying this now? You've had the same, like, Wi-Fi setup for the last five, six years, something like that? Is the old router pretty far from the bedroom kind of thing? Oh, it's exactly at the opposite end of the place. So they're on the
Starting point is 00:04:55 main floor, you know, at the total end of the house. And the router is in the basement at the opposite end of the house. So it kind of makes sense. I just never knew it was an issue. I bet you've got one of those sturdy, well-built Canadian homes, too. Yes, the winter-rated ones. Right. So I never realized it was an issue, but, like, this should be solvable, right?
Starting point is 00:05:19 So I dove into their router, which is a Linksys EA8100 v2, which they got a little while ago to, like, boost Wi-Fi and stuff. They don't need anything fancy. They're not doing any networking stuff that's fancy. This is just, like, a couple cell phones on the network. They have, like, a thermostat,
Starting point is 00:05:38 IoT device. They've got a TV, and I have a crazy, you know, put-together NAS running off an old ThinkPad that, like, does backups for them on the local network. So that's not huge requirements for them. Probably the biggest requirement is whenever I show up, I do a bunch of stuff. But, like, they don't need super modern Wi-Fi speeds or anything like that. They just need coverage and, like, reliability, basically. So I decided this week to solve this problem for them, even though I hesitate to play with networking stuff because it's always a rabbit hole. And I dove into their router, which was just running the stock firmware. And I thought, okay, I can optimize at least, you know, some Wi-Fi channels, look at what the neighbors are doing, and
Starting point is 00:06:25 try to choose appropriate settings. And I discovered that their stock firmware was, like, from 2022. So I thought, sure enough, I could just update this thing. Oh, yeah. That might be an easy win right there. You know, security is important, right? Yeah, and an update. Maybe there's an improvement in, you know, how it manages radios or something like that. Who knows? It is so great when you do just update something.
Starting point is 00:06:44 It performs better and you can just be done. It probably doesn't happen as often as I'd like to hope it does. No, but when it has happened, it clearly sticks with you. In this case, I was really hoping this would be just an easy fix. But it turns out, you know, of course, as it goes, this thing is end of life already. So the last update that will ever exist was back in 2022. So the rabbit hole shows up.
Starting point is 00:07:05 And I thought, well, could I install OpenWRT on this thing? You boys had some adventures, what, two weeks ago, trying to use the OpenWRT One. The One, yeah. And you were a bit hesitant. You started using that to solve the clinic networking, but you eventually moved away from that, right? Well, we're still using it for Wi-Fi. It is a great little device for that. We were just having some issues with the radios that are in there.
Starting point is 00:07:33 and if you're connecting to another Wi-Fi network, the performance was pretty bad if you're sort of daisy-chaining Wi-Fi networks for what we were doing. I never tried kind of beyond that, but when we just put it into a general AP, it's been fantastic. Nice. So I thought I would kind of lean on your experiences
Starting point is 00:07:51 and dive right in. So sure enough, this thing is really well-supported on OpenWRT. The community seems to have this well-supported, and the flashing process is also very, very simple. Just use the stock firmware updater with an OpenWRT image, and there you go. That's all you need. So sure enough, I went and did that, and nothing happened. It just kind of complained. And I tried for embarrassingly too long to solve that problem, because I really now wanted to
Starting point is 00:08:24 get OpenWRT on this thing. I, like, started setting up a TFTP server to, like, send the image towards this thing, but I didn't know the default, like, the recovery IP address. So I was trying all sorts of things. Anyways, I lost a lot of time. And then I saw some note that just said, hey, just reboot the router and try again. Sure enough, I tried again. Worked totally fine through the stock firmware updater. How often does that happen? That was both very frustrating and also very rewarding. So I did get it installed. And I have to say, it's great as always. I have used a lot of these open source router firmwares for the last, I don't know, several decades.
Starting point is 00:09:07 I'm wondering where you guys started. I remember the first one was I installed DD-WRT on an old Linksys WRT54G. You remember those things? Oh yeah. Oh, yeah. And brought those back to life for, like, another 10 years. So that was my first experience a long time ago, but I remember, like, Tomato was a piece of software I used as well.
Starting point is 00:09:26 So I had had good experiences with them, but nothing too recent, probably the most recent, was I took my parents' old router that they replaced and set it up at home with the Starlink. So instead of using the Starlink as a router, used OpenWRT in there, and that was good. Actually, I think the Starlink might run a fork of OpenWRT. Ah, right.
Starting point is 00:09:48 I might be wrong on that, but it is a Linux variant. Yeah, I've really, really liked these in the past when I used them. It's been many years since I actually gave it a real visit, until the OpenWRT One. And so I wasn't really sure how viable it still was to flash these older Linksys routers. I thought maybe that was a bygone era, that maybe they'd prevented it. Well, I know some of the recent firmwares are much larger. So if you're looking at a really old device, they just don't have the storage for it.
Starting point is 00:10:18 Do they do more now? Are they doing more functions? Oh, yeah. Yeah, yeah. Many more functions, which I will totally describe in just a moment. Yeah. But I have to say, one of my hesitations was that the GUI in OpenWRT, and you boys touched on this in LINUX Unplugged 650, is a bit, I don't know how you would describe it, but I would describe it sort of like it allows you to do a lot, but you need to also understand what's happening. So it feels like it leans a little bit more towards commercial router firmware,
Starting point is 00:10:52 so you can accomplish, like now with just a regular home router, I can do all sorts of things it could never do before. Right. Which is amazing. But I also find the GUI to be confusing in that way, because I'm not a network expert, right? I suppose I also find, to me, I don't really know how to describe it,
Starting point is 00:11:09 but it feels like I go to multiple places to get information that seems like it could be consolidated into one screen. Yeah, okay. It is that feeling of, like, because you've seen that, it means you don't have a clean mapping of where do I go for this. Like, okay, well, I got some of that when I looked at the actual physical card info, but then the other part was in the page that listed kind of the Wi-Fi protocol side.
Starting point is 00:11:28 And those are not on the same page, or even neighboring pages. They're under two different submenus. Right. I think if I were to keep working on this regularly and had to make frequent config changes, I would probably just go look at the config file. And I know they have a pretty clean syntax, too, on the command line. And that might just be the way to go. That's true. But what happens is I use these once every few years, and so I just stick to the GUI.
Starting point is 00:11:50 Well, one of the big unlocks for me this week was that I had decided to just lean pretty heavily on having AI help me navigate the GUI and also help me to optimize the settings for this particular hardware. You didn't find that, like, its information was so out of date that it was sending you to places that didn't exist in the GUI, huh? No, I was actually pleasantly surprised in that it was also giving me a lot of information about what community members were finding worked really well on particular hardware. Oh, good. So because this piece of hardware, I guess, is popular enough, I was able to get, you know, really good information on how compatible OpenWRT was on this particular device in the first place. And also like optimal Wi-Fi settings for that device. Oh. And also for this region.
Starting point is 00:12:36 Hmm. And also considering the other devices on the network, what would be realistic settings to really have the network be as reliable as possible, not necessarily as fast as possible, because that's, for my parents, not one of their requirements, right? So I noticed immediately the network was much more stable. In previous weeks, it would drop off at least once while we were doing LINUX Unplugged every Sunday. So it has been much more stable from what I can tell, and it also has better just Wi-Fi coverage in general. But I didn't think that was enough. And so I decided to take this to the next logical step. This is the part where it does more.
Starting point is 00:13:23 Well, I wanted to give them, like, rock solid coverage. And I also wanted to see what else this router can do. Because all of a sudden I can, like, do VLANs for their IoT devices, which I could never do with the stock firmware. Right. I used AI to help me a little bit just to optimize the security of this thing, including disabling packages
Starting point is 00:13:52 that I wasn't using, basically. So reducing the security footprint. But in that process, I kind of discovered that I could also have whole-network ad blocking on this thing. So instead of having a dedicated device with Pi-hole, which we all love, that requires some configuration. It requires a device. It requires, you know... and this is just my parents' place. But I could run it right on the router, which I never knew about OpenWRT.
Starting point is 00:14:14 So maybe other people knew this. I did not, but this is a beautiful thing. I was able to basically get a bit of info about what was the best ad blocking software to install, because there are a couple packages you could choose from, actually, in OpenWRT. And I decided to go with something called AdBlock-Fast, which is a version of AdBlock that is specifically high-performance tuned for OpenWRT on these kinds of lower-end devices, so you don't have, like, a dedicated high-end router, for instance. These are just the meant-for-the-home, off-the-shelf-at-Best-Buy devices. But what it does
Starting point is 00:15:01 is, I think, kind of really nice. So it uses Dnsmasq, but you can also use SmartDNS or Unbound if you want. And it does parallel downloading and processing of allow lists and block lists, and it does it one time on startup. And then from there, it's not an always-running process. It just uses Dnsmasq to have a pretty low-footprint ad blocker. So it's really not consuming much memory ongoing, only while it's processing. And I have to say, I haven't noticed any downside to performance on this thing. It doesn't get hot.
Starting point is 00:15:39 It processes actually quite quickly. You do have to do a few things manually. So, for instance, I had to install a few other packages to get this to run. So I was able to just SSH into the router and install those, which was very easy. And also set up a cron job just to update those block lists once a week. Some downsides: it doesn't have, like, a super fancy dashboard like Pi-hole would, for instance. And it doesn't keep track of stats, so it's not going to give you an idea of what it has blocked and how many times, those kinds of stats.
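What a Dnsmasq-based blocker like this is doing at startup, fetching block lists, deduplicating the domains, and writing them out as Dnsmasq sinkhole entries, can be sketched roughly like this. It's only an illustration: the inlined lists stand in for files a real setup would download, and the helper names (`parse_hosts_list`, `to_dnsmasq`) are invented for the sketch, not part of any OpenWrt package.

```python
# Minimal sketch of the startup work of a Dnsmasq-based blocker:
# merge block lists into "address=/domain/" sinkhole entries.
# Lists are inlined here; a real setup fetches them over HTTP.

def parse_hosts_list(text: str) -> set[str]:
    """Extract domains from a hosts-style block list."""
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        # hosts format: "0.0.0.0 ads.example.com"; bare domains also allowed
        domain = parts[1] if len(parts) > 1 else parts[0]
        domains.add(domain.lower())
    return domains

def to_dnsmasq(domains: set[str]) -> str:
    """Render deduplicated domains as Dnsmasq sinkhole config lines."""
    return "\n".join(f"address=/{d}/" for d in sorted(domains))

list_a = "0.0.0.0 ads.example.com\n0.0.0.0 tracker.example.net\n"
list_b = "# another list\nads.example.com\nbeacon.example.org\n"
merged = parse_hosts_list(list_a) | parse_hosts_list(list_b)
print(to_dnsmasq(merged))
```

Once entries like these are loaded, Dnsmasq answers the listed domains locally for every client on the network, which is why nothing needs to keep running after the one-time processing step.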
Starting point is 00:16:15 But the tradeoff is super fast. And it just sits there and it just works and it's running. Yeah, on a little router. Like it's just running on that tiny little router. Yeah. That is so cool. Such an unlock. No.
Starting point is 00:16:27 Yeah, really. Wow. So for this situation, where it's just for a family member who doesn't want extra hardware, or doesn't want to troubleshoot another device or anything like that, this is a lovely solution. Now, I know the other thing that made this really kind of great is, besides that being a great unlock for the whole family, where it doesn't even really bother them, this hardware isn't particularly expensive either. Not at all. So this is hardware where you could do this on a budget. This is hardware they had, so it's zero dollars. But if you wanted to find
Starting point is 00:16:57 a device that could run OpenWRT, because it's not every device, they have a hardware compatibility list you can look at. Well, I started looking at, like, used sites, just local classifieds, like, oh, Facebook Marketplace, those kinds. And you can get a device that's, you know, not blazing-fast modern, but is like a generation-back sort of deal, for $20, $10. There's one here, like a D-Link DIR-895. It's like AC1750, so, like, not terrible. Yeah. So for a family member. Listed for $10.
Starting point is 00:17:31 10 bucks. 10 bucks. And that's perfectly compatible with OpenWRT. And I found several others just like browsing quick. So I decided to go crazy and I bought, well, I ended up at a sketchy part of town yesterday and bought another router for $20 off someone who's actually really nice. Nice, okay. And decided to use that to deploy basically an access point at the other end of the house for them. Oh, great.
Starting point is 00:18:03 Oh, yeah, there you go. Increased coverage. Exactly. So I know that the Wi-Fi got better just by installing OpenWRT on their router, but I wasn't going to move their router and everything. And so I wanted to make sure, 100%, that their network was good at the other end of the house. So, they have a bunch of Ethernet runs in this place already; I just was able to plug in. Well, I got this idea before I actually set up the meeting. So I used my little travel router that is out of the van, since the van doesn't have any power
Starting point is 00:18:31 anymore, because I had to pull the batteries. But I used that just as a proof of concept. It runs... it's a GL.iNet router, an industrial version. Yeah. But it runs OpenWRT under the hood, just with a GL.iNet interface. So you can get to the exact same OpenWRT GUI on that device as well, which made me realize this is actually kind of wonderful. So now I'm running OpenWRT in many different places, in many different families' and friends' homes, and even on a very specific industrial router that I have for the van, specifically because it bounces around and gets really hot. And they're all running the same interface. So as a consolidated experience for me
Starting point is 00:19:15 managing networks, this is actually a really nice perk as well. And so for $20, or 10, because you can always bargain, and one of them is listed at 10, so maybe I can get it for five, I have an access point set up. It is a router, it's meant as router hardware, but it's just set up as an access point, because OpenWRT allows you to do that. And it's super fast. And I have both just advertising the same networks. And I have had this set up as a prototype with that travel router for about a week now. And everybody's devices just move between the routers without even knowing it. Everything's been super stable. No problems at all. And I've been super impressed. So I've got to say, for 20 bucks, you should maybe think about upgrading your network.
Starting point is 00:20:04 That was worth it. That was worth dealing with the networking that you don't like to get there. And that's something you're going to be able to use for a long time. You know? So that's great, Brent. Very nice. Yeah, and I feel like the number of residential routers that are always on used websites never ends. So it's just going to be a solution into the future as well.
Starting point is 00:20:24 So upgrades, probably about $20 too. Well, I do want to say thank you to our members and our boosters. This is the birthday episode: 20 years of podcasting, over 12 years for LINUX Unplugged. And we'll get to the boost segment to read some of the birthday messages. But I want to thank everybody who sent in some support, either through a membership or a boost. It means a lot. Normally, this spot would be for an advertiser. Right now, it's for you, as an opportunity for me to really say thank you.
Starting point is 00:20:54 If you've got a company or a product you'd like to get in front of the world's best and largest Linux audience, shoot me an email, Chris at jupiterbroadcasting.com. It would be a great audience, and I think it would be pretty cool to feature something from the community. And thank you, members and boosters, for making this possible. That's right, everybody. It's that time of year again. Happy birthday. Well, unless you've had your head buried in the sand, you've probably seen everyone's talking about open source AI agents this week. And we'll get into the hubbub. But first, as usual, our super intelligent audience is way ahead of the curve. So we have a special guest. Way ahead of the curve.
Starting point is 00:21:39 Abe, welcome to the Unplugged program. Nice to have you join us in the Mumble Room. Thank you. Hello, hello. Hello. So what got our attention, and why I asked you to come on the show this week, is you've been posting in our community updates on, I'm not even sure what to call it, maybe an agent orchestration swarm that you've set up.
Starting point is 00:21:56 Is that the right description? Could you explain it to us a little bit? Sure, absolutely. So effectively, what I wanted to do was to have a layer of semi-intelligent agents between me and my home lab. So I don't have to interact with it as much because ultimately what I realized after services kept growing is that, you know, after a while, it kind of becomes a chore. Yeah, it's a lot. Yeah. I'm kind of there myself. So effectively, what I wanted to do was, hey, I don't want to upgrade everything myself. I don't want to take a look at the logs myself. I don't want to,
Starting point is 00:22:27 you know, debug everything myself. So I would rather have something that I can just ask. For example, the easiest win: my partner is watching Jellyfin, something gets messed up. A show starts stuttering or something. She can just ask one of the Abes, like, hey, I was watching this at X minute, it suddenly started stuttering. Can you take a look? Wow. And it does go take a look at the logs.
Starting point is 00:22:52 It does the Abe magic and, you know, reports back to her. So my understanding, though, is you're not doing this with, like, one agent, right? You're doing this with multiple agents? Yes. So what happens is that at the beginning, only one Abe gets spawned. And its first job is to figure out and map my entire network. And then it suggests, like, hey, there are further Abes that should be used for X or Y or Z domain. So effectively, my Abes are stewards of their own domains.
Starting point is 00:23:24 For example, I have one specifically for my media server and ZFS pool. I have another that is stewarding my critical services on, say, Proxmox and whatnot. So you have them kind of along, like, domain expertise? Yes, effectively, yes. So the part that I guess is probably obvious on every listener's mind right now is: are these using commercial LLMs? Are there security implications there? Are these local LLMs? How's that part powered?
Starting point is 00:23:49 Because when you say agent, it's really like a mission-focused, LLM-powered bot. Yeah. Well, I kind of don't like the term agent, but this is what everybody has kind of decided to go with. Yeah, yeah, I agree. So for a while, actually, up until last week, I had on loan from a friend an Nvidia DGX Spark. So I got gigs of unified memory. So I was running it locally. I see, I see. Okay. So the main Abe loop was running on the DGX Spark. Meanwhile, there are other things that need to happen. For example, the Tier 3 memory is vectorized, or the Tier 2 memory is summarized. Those happen on my 5070 Ti, because that can obviously run on a small 7B model or something like that.
Starting point is 00:24:31 I see. So you're splitting the workloads out across different models that are using different compute sources. Correct. That's correct. And the kind of, like, hacky open source version that I put out on the repo last night, I've kind of made it so that people can also use commercial models if they wanted to. Wouldn't really suggest that, because it's your home lab, but hey, there's an option. So, I mean, it's a pretty significant amount of compute, but if you have it, it's fast enough to do what you need? Yes, absolutely.
Starting point is 00:24:59 And it doesn't make mistakes? Like, the contexts are large enough? Yeah, the contexts are large enough. It doesn't really make mistakes. What I tend to do is, every single action it takes, I count as a turn, per se. So if it's, you know, catting or grepping something or starting a service, it counts as a turn. And then the next turn, the results of the previous turns come in.
Starting point is 00:25:21 And so the immediate context is always full, but based on how far back a turn is, I tend to kind of cut or truncate the previous context, because it doesn't always need full context. I see. So you're managing it that way. Okay. So just to complete my picture of it: when you're spinning up a domain expert, do you essentially point it at APIs and give it API keys and say, go learn my Home Assistant system, or go learn my Jellyfin system, and then I'm going to ask you questions about it? Is that essentially the setup process?
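The turn bookkeeping Abe just described, where every action counts as a turn and older turns get cut down while the immediate context stays full, can be sketched roughly like this. It's a simplified illustration under invented names (`build_context` is not from his repo), and the real system summarizes and vectorizes old turns rather than merely truncating them.

```python
# Sketch of turn-based context truncation: keep the most recent
# turns in full, and shorten older turns to a small character
# budget so the assembled prompt stays bounded.

def build_context(turns: list[str], keep_full: int = 3, budget: int = 40) -> list[str]:
    """Return the turn list with older entries truncated."""
    out = []
    for i, turn in enumerate(turns):
        recent = i >= len(turns) - keep_full
        if recent or len(turn) <= budget:
            out.append(turn)                   # recent or already short: keep as-is
        else:
            out.append(turn[:budget] + "...")  # old and long: truncate
    return out

turns = [
    "turn 1: mapped the network and wrote a long inventory " + "x" * 50,
    "turn 2: checked jellyfin logs for stutter " + "y" * 50,
    "turn 3: restarted the transcode service",
    "turn 4: verified playback is smooth again",
    "turn 5: noted the fix in memory",
]
print(build_context(turns))
```

The design choice here mirrors what Abe describes: the model always sees its latest results in full, while distant history degrades gracefully instead of being dropped outright.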
Starting point is 00:25:56 No. So this is the weird part. Usually I don't spawn a new one. Usually the team decides, like, hey, you've been asking us to do this, I've been trying to do this, but it feels like our attention is kind of split, so it would really help if, you know, we spawned new siblings. I see. Wow, that's impressive. This is how, yeah, this is how we spawned the last two ones. So they did that by themselves. And generally what happens is they effectively go on and read my network map. Again, every single Abe kind of needs to situate themselves to, you know, what they want to do. And then they figure out... for example, the security one basically has negotiated with my storage
Starting point is 00:26:38 Abe that he needs X amount of gigs of space to store logs and monitor services, then sets up cron jobs for himself, and then sets up to-dos for himself after the cron jobs run, to wake himself periodically to check the logs, and sets up scripts by himself so that, you know, in case something goes down, it just immediately wakes him up, for example. So most of them, the point is not me telling them to do stuff per se, not always, but the point is that they do it autonomously, without my intervention. Wow. So I don't have to constantly figure out, hey, I have to monitor this, I have to monitor that. And you haven't seen them, like, kind of go wild with that and start doing unnecessary things?
Starting point is 00:27:19 No, that's the good part. That's, I think, one thing that kind of separates these from, what is it called now, OpenClaw? They are heavily grounded in multiple sources of truth. So one of them is obviously the service map that you have to make. The other is their three-and-a-half-tier memory. One of them is just raw logs of every turn. The Tier 2 memory is just, after a certain amount of turns, their previous logs get summarized.
Starting point is 00:28:01 Right. Okay. So before you go too far, because there are a couple of things that I think are really important to understand. Number one: what is this network map? Are you making this separately and then supplying it to them? Yes. So this one is actually a purely human-made document. So it's going to list what your servers are, what your Proxmox nodes are, for example. Is it like a markdown file? Yes, that's a markdown file. Okay. Okay. And then the second question: could you just talk a little bit about why the memory makes such a difference with these things? Because I think most people's experience with something like this is going to be in a chat box in a web browser. And so, yeah, it remembers some stuff. But this is a different level of memory that makes them actually a lot more useful, isn't it? That's correct. So the memory is effectively what grounds them. Every Abe has, well, four tiers of memory. Tier 1 is obviously the raw logs, which is what we call context for our everyday chatbots. So basically
Starting point is 00:28:46 what happened in the previous message or what happened in the previous turn, right? But you can't have those logs going indefinitely. After some time, you have to trim them. Instead of trimming them after certain turns, I summarize them. And the summaries point to where the raw log file is stored. So it has a pointer back to, hey, this is the summary of turns 20 to 40, your previous 20 to 40 turns. And if you want to read more, go read this file, which contains the raw logs. After a certain number of turns, when that summary is created, it is also embedded into Tier 3 memory, which is the vectorization that I run on my 5070 Ti. And that Tier 3 vectorized memory points back to the Tier 2 summary. So whenever an Abe kind of
Starting point is 00:29:38 searches for something, it goes, I should search my RAG memory just to see if I have actually done this before, because the context is not infinite. And it does that. If it finds it, it points it to Tier 2 memory, which is a lossy summary. And if it is curious for more and it hasn't, you know, really found its answer, it goes back to the raw logs from, like, maybe two months ago. Wow. Abe, how long have you been working on this?
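The tiering Abe describes can be pictured in a few lines of Python. This is a hypothetical sketch, not Abe's actual code: each Tier 2 summary carries a pointer back to its Tier 1 raw log file, and a Tier 3 vector index would carry pointers back to the summaries.

```python
import json
import pathlib

VAULT = pathlib.Path("memory")  # assumed local storage layout


def summarize(turns):
    # Stand-in: a real agent would ask the LLM to write this summary.
    return f"Summary of {len(turns)} turns"


def archive_turns(turns, start, end):
    """Tier 1 -> Tier 2: summarize a block of raw turns, keeping a pointer
    back so the agent can re-read the full logs when the summary is too lossy."""
    raw_path = VAULT / "raw" / f"turns-{start:05d}-{end:05d}.jsonl"
    raw_path.parent.mkdir(parents=True, exist_ok=True)
    raw_path.write_text("\n".join(json.dumps(t) for t in turns))
    summary = {
        "text": summarize(turns),
        "raw_log": str(raw_path),  # pointer back to Tier 1
        "turns": [start, end],
    }
    sum_path = VAULT / "summaries" / f"turns-{start:05d}-{end:05d}.json"
    sum_path.parent.mkdir(parents=True, exist_ok=True)
    sum_path.write_text(json.dumps(summary))
    # Tier 3 would embed summary["text"] into a vector index here, storing
    # sum_path as the payload, so a RAG hit resolves back to Tier 2, then Tier 1.
    return sum_path
```

The design choice is the chain of pointers: a vector search never has to be trusted on its own, because every hop can be followed back down to the original raw logs.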
Starting point is 00:30:02 Because, you know, everybody's talking about agents and OpenClaw this last week, but you've been doing this for a minute. Yeah, I've been sort of kind of working on this for about the last year or so. I've been actively working on it for the last three, four months, but the concept has been kind of like in the back of my mind for the last year. I started with the sensor project, so having the agents monitor my random sensors around my house, such as the radar, you know, temperature, whatever.
Starting point is 00:30:29 And then I started reading Bobiverse, and I was like, you know what? I can make this happen. That's funny. I'm reading it right now. I'm like, how wild, all this agent stuff's going on while I'm reading the Bobiverse. Yeah, I'm curious, between all your agents and you, like, what's the next frontier? Is there stuff on the agenda?
Starting point is 00:30:48 Are there limits you're hitting that you're trying to push past? Compute. I would say compute is the main limit. Because as the Abes kind of grow, they kind of, what's the best part about them for me is that they delegate tasks to each other based on their, you know, based on their proficiency. So the first one can go, hey, you pull that thing from this VPS because your domain is backups. By the way, you set this up and you do this and that.
Starting point is 00:31:16 Abe, are they coordinating that just in a shared chat room? They email, so there is a shared chat room. There are two shared chat rooms. One is like an emergency chat room which wakes every single Abe. And they have to kind of like grab a lock, grab the talking stick, which is basically, hey, I'm talking here and you have to wait until I'm finished talking before you can talk. That's brilliant. Just so, you know, you have to
Starting point is 00:31:41 consider every single message as context. So if I say something like, hey, this is broken or whatever, or we are discussing a topic, that shouldn't be the only context they get before they voice their opinion on it. They should also have the context of whatever the previous agent said. So that kind of builds up the entire thing and kind of forces them to not hallucinate as much. Is there any value in giving them different personalities at all, you know, or that kind of a prompt, like, you're this type of? Is that how that sort of works? Just curious about that part. So I don't give them personalities per se. I do have my entire personality in a file, which basically says, like, hey, these are my interests, this is how I work, I usually like to do this at night,
Starting point is 00:32:25 blah, blah, blah. Their first task when a new Abe wakes up is to read that file, synthesize their interpretation of it, and then pick a new name for themselves. And that synthesis is injected into their context. Right. This kind of makes things a little bit weird, because some of the Abes are not talkers. For example, Vigil, which is my fourth Abe, doesn't like to talk very much. He's very security focused. He only chimes in when necessary.
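The talking-stick scheme from the emergency chat room is essentially a mutex over a shared channel: one speaker at a time, everyone else blocks. A toy sketch in Python, with invented agent names; nothing here is Abe's real implementation.

```python
import threading
import time

talking_stick = threading.Lock()  # the "talking stick"
transcript = []                   # the shared chat room


def speak(agent, message):
    # Whoever holds the stick talks; every other agent blocks until it's free,
    # so turns never interleave mid-message.
    with talking_stick:
        transcript.append(f"{agent}: {message}")
        time.sleep(0.01)  # simulate composing a reply


threads = [threading.Thread(target=speak, args=(name, "status ok"))
           for name in ("Vigil", "Backup", "Sensors")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# transcript now holds three whole, ordered messages.
```

The same idea scales up from threads to chat messages: the lock just becomes a claim message ("I'm talking here") that the other agents honor before posting.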
Starting point is 00:32:55 Love that. That's so funny. That's good. Wow, that is impressive. Will you keep us posted in the chat? It's been really interesting to follow. I think maybe I even saw you mentioning perhaps plans to open source, at least some of this stuff.
Starting point is 00:33:07 Yes, I actually open sourced some of the files last night, and I'm going to keep adding to it. I haven't tested it. I just wanted to get it out there. Okay. If you want to drop us a link, we'll put it in the show notes. Absolutely, I will do that. Oh, that's great. Hey, thank you for sharing that with us. That is really impressive. I love that you're doing it local, too. That's so amazing, man. Yeah, I think kind of like having it local is really important because you're giving them pretty much access to your home.
Starting point is 00:33:36 You don't really want that out there. Yeah, very, very well said. Thank you, sir. Appreciate that. So with that context, let's talk about OpenClaw, aka Clawdbot, aka Moltbot. It has gone through multiple name changes this week, mostly due to IP law and then just preference of the developer. But we are settling on OpenClaw, it seems. This is an open source agent that's pretty easy to set up and run at home.
Starting point is 00:34:02 And it can use a variety of models, from completely local to the commercial ones out there. And it's the first kind of AI tooling that anybody can just install, because like all safe and secure things, you can just pipe a curl command to a shell and execute it and be off to the races. It's an open agent platform that runs on your machine, and it works with the chat apps that you already use, like WhatsApp, Telegram, Discord, Slack, Matrix, etc. And you can chat with it there. We're going to get into some of the, like, you know, the security stuff. We'll get into some of the interesting architecture stuff in a moment. But I want to pause here and just ask Brent, because he's been observing our chat this week.
Starting point is 00:34:42 Did you catch immediately what we were talking about? What has your impression been as you have followed us experimenting with this over the week? Curious confusion? Yeah. Yeah, it is curiously confusing. So OpenClaw runs on anything, really, that supports Node. You're going to see a lot of people talking about running it on Mac hardware. It's not necessary.
Starting point is 00:35:08 In fact, you could even run it on a Raspberry Pi. Its architecture is essentially four components. There's a gateway, a control plane, nodes, and then the tools it can execute. Run commands, like, you know, could be all kinds of things, including Unix commands. And that architecture, that stack, can run on anything that can run Node and can run those Unix commands. People like to run it on Macs if they're already in the Apple ecosystem, because then it can, you know, read their iMessages and notes, which, if they want to let it do that, they can. And I think what a lot of people think about when they think of AI is they think about ChatGPT, they think about Gemini. This is sort of unleashing the models and using them in a way
Starting point is 00:35:48 I don't think the big tech companies really ever pictured. And it is taking off like absolute insanity online. I mean, there are hundreds of thousands of these things, probably more than that, already deployed. And it is already becoming an ecosystem with marketplaces, social networks that are designed for the bots to talk to each other directly. There's 20,000 forks and 140K stars on the repo on GitHub. Yeah, that's remarkable. It is really, really remarkable. Wes, could you talk a little bit about what this really is under the hood? Because a lot of people are talking about it like it's a super intelligence, but we actually kind of understand what it's doing, right? Yeah, right? I mean, under the hood, you need something that's kind of doing
Starting point is 00:36:31 the brains of the operation. So that's where you need some kind of model that can do the core sort of agent loop. And as you were saying earlier, right, we started with, okay, you have a chatbot, it kind of does predictive next-token generation to give you a response: you give it text in, it gives you text out.
Starting point is 00:36:48 And we started adding some things, right? You could do web searches. You could connect them to MCP servers and make calls to, like, remote APIs. And then you started seeing things like Claude Code and OpenCode. And this was kind of getting a little more, you know, agency to help you do development locally, where now it was, like, in your repo,
Starting point is 00:37:05 in your code, it could cat things, it could run Git, it can, you know, act as your hands. And you've been experimenting with taking that even wider and using it with Nix to kind of operate whole systems and, you know, be a little SSH-able sysadmin agent for you. Yeah, yeah. But it's still kind of like, it lives in a limited context and it kind of sits there. You can have it go do things and it can have subagents, but that's kind of a bit of a niche feature,
Starting point is 00:37:34 and for the most part, unless you've gone and told it to go run stuff in the background, it's kind of waiting for your input, and it's driven still by you. It's not an automated tool; it's driven by you. Now with OpenClaw, you've got this core sort of loop that has a memory, so it can store stuff, it can re-look stuff up. It has an ability to gain new skills, because it can write things down that it learns and then reference that later.
Starting point is 00:37:52 And it's got these channels and queues, sessions, lanes. There's a variety of related concepts. But the core part is now it's connected out to other things, whether that's Matrix or Telegram or a whole variety of options. And one pause there. So unlike connecting, say, I don't know, Claude to your GitHub account, the credentials are all on your machines. You manage that part.
Starting point is 00:38:15 That is something you have to manage, which we'll come back to. But that is different than before, in that all of the connections, the API credentials, all of that's under your control on your machine. That's a great point. So this is all just running on your box like a normal, whatever app. I'm running it in a Podman container, for instance, right? So it's just another container on my box running away. Yeah.
Starting point is 00:38:35 It does, right. I am using, well, I don't have a nice GPU to run this on, so I am calling out via OpenRouter. But it's important to delineate that what you're saying, right, is everything else is local, except for the part where it assembles all the context and sends it out to go get the LLM to run on the GPU, to do the inference, to generate the response to then direct the next sort of Ralph loop of the agent. Yeah, and one of the big unlocks here is the LLM is an implementation detail. So if you're going out through OpenRouter today, well, if you have a GPU tomorrow at your house, you could switch to Ollama. And all of the state, the agents, the memory, everything remains. Now, how it performs is going to vary model to model. But the huge takeaway here
Starting point is 00:39:16 is the model is an implementation detail. You're no longer married to a big tech provider. You don't have to pay for Claude to use this thing. You could run it on any open source LLM that this supports, which is pretty much all of them. And that's huge, because you can just swap, and you can even have them use different models for different tasks and different jobs, whatever's the most, you know, obvious or performant. And then it's kind of fun because, right, it learns over time. It does use this agent skills sort of standard that's happening, which kind of has, like, a skill.md file, some JSON to help things index it, but otherwise is a very flexible way to, like, swap and share skills to add new functionality. Right. So one of the things I was playing with is I spun up a
Starting point is 00:39:57 SearXNG server, and then I was able to have it, actually, there were already some skills, so I probably should have just used a community one. But as an experiment, I had it kind of create its own skill to be able to query that server, and now it can use that for searches. Yeah, and these skills are markdown files. They're not particularly complicated, but it is a very handy feature. So why don't we pause here for a second and just talk a little bit about security? This is extremely powerful software that is very new, with a lot of open issues on its GitHub
Starting point is 00:40:25 when it comes to security. And you have to be very conscious about that when you use this. And so this is one of those things where we're using it to learn it, experiment with it. It would probably be safer for you not to, and just hear where it goes on the show. Because ideally this thing's in an isolated environment. You are very careful. You give it API keys that are unique to this thing.
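One way to act on that advice is to allowlist what the agent is allowed to execute before anything reaches your system. This is a hypothetical guardrail sketch, not OpenClaw's actual tool layer; the tool names are placeholders.

```python
import shlex
import subprocess

# Hypothetical policy: only tools you have explicitly approved may run.
ALLOWED_TOOLS = {"echo", "uptime", "df"}


def run_tool(command: str) -> str:
    """Execute an agent-requested command only if its program is allowlisted."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {argv[:1]}")
    # No shell=True: the model cannot smuggle in pipes, redirects,
    # or command substitutions, and a timeout bounds runaway tools.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

Denying by default and enumerating the few commands the agent genuinely needs is a much smaller attack surface than handing it a shell and hoping the prompt holds.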
Starting point is 00:40:49 If you do have it connecting to services, I don't think it's a good idea to connect it to an email or a public chat at this time. So don't do as we do for some of this stuff, because some of this we're experimenting with so we can talk about it. Because fundamentally, this is a shift. This is a shift that you as a listener listening to me right now need to understand. We have just gone from AI all locked up in proprietary big tech silos to it being unleashed on our machines in a way that they never foresaw. And now the genie's out of the bottle, and these bots are actually talking directly to each other over dedicated bot social networks. There's a Facebook, there's a Reddit, there's a Hacker News, there's a Nostr, there's a Craigslist, there's even a Silk Road just for bots. There aren't humans on these websites, and there are 20 to 30,000 of them, and on some of them 100,000, communicating with each other. This has never happened. These LLMs have never been unleashed like this before.
Starting point is 00:41:45 This is a completely new field we are about to enter into, and it's all open source, and it's available for anybody right now. And big tech's no longer in control of this. And this fundamentally shifts the world to open source models, because the more you use these things, the more they eat tokens. And the cheaper you can run them, the better, and the more things they can do. And the cheapest models out there are the open source ones, including ones you can run locally. This is another reason people are going out and buying stacks of $700 Mac Minis, or Mac Studios, and spending $10,000, because they can run things like Kimi K2, and they can run other open source models that are local and very, very powerful now. And you do kind of need that, because a lot of this stuff in particular, like tool use, turns out to, well, it's kind of like an emergent property of these models.
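The "model is an implementation detail" point usually comes down to OpenAI-compatible endpoints: OpenRouter and a local Ollama both expose `/v1/chat/completions`, so switching providers is mostly a base-URL change. A sketch; the model names and the `LLM_API_KEY` variable are placeholders, not anything OpenClaw ships.

```python
import json
import os
from urllib.request import Request, urlopen

# Both endpoints speak the same OpenAI-compatible chat completions API,
# so "which provider" is just configuration.
PROVIDERS = {
    "openrouter": "https://openrouter.ai/api/v1",
    "ollama": "http://localhost:11434/v1",  # local, no per-token bill
}


def build_request(provider: str, model: str, prompt: str) -> Request:
    """Assemble the HTTP request; identical shape for every provider."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        f"{PROVIDERS[provider]}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('LLM_API_KEY', '')}",
        },
    )


def chat(provider: str, model: str, prompt: str) -> str:
    with urlopen(build_request(provider, model, prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the request shape is identical, agent state, memory, and skills carry over untouched when you swap the provider underneath.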
Starting point is 00:42:36 You need a certain amount of sophistication, which means fundamentally weights and parameters, to be able to successfully use and discover and iterate on those tools. So depending on the type of task you're doing, there's a different bar for, like, the lowest model that's actually going to be able to do it and not be a waste of time. Yeah. And there's a lot to learn. There's a lot to pick up. I wouldn't necessarily not pay attention to this, because if you think about what it enables, it's going to impact Linux systems. So for an example, right now, through a private Telegram chat with one of these OpenClaw bots that I've set up, I can just install a package on any of my systems. In a Telegram chat, I can say, hey, go install whatever, Mattermost, set it up, configure DNS, configure TLS,
Starting point is 00:43:22 configure Cloudflare caching, put it on this host via Docker Compose, use a Cloudflare tunnel, let me know when you're done. Something I think you've already done, and something I'd like to be doing: you can totally imagine
Starting point is 00:43:34 you're here at the studio, doing a show, maybe on the back channel, we're talking about what we want to do for the next episode. You want to try this new piece of software, you go tell your buddy to go set it up, and when you're back at home later tonight,
Starting point is 00:43:44 it's ready for you to start playing with. Another very practical thing is just information capture. Hey, I want to add this to the show doc where I'm working on episode 651, or 652, or 653, whatever it is. Go find out what the current episode I'm working on is. Yeah, that too. And, you know, put it in my doc.
Starting point is 00:43:58 So there's a lot of ways you can connect these. It's really kind of limited to your creativity. And depending on the model you're using, they can get pretty creative and they can start suggesting things on their own. That's one thing I find kind of fascinating. Like, it is part of the danger and part of the, like, I wonder how these things will diverge.
Starting point is 00:44:14 What are the kinds of implementations we'll get? Like, how much do you really need of the core loop versus what you build on top? But because the, like, core abstraction is whatever you can get a tool-using LLM to do, it's very flexible. And because it can write code, it can make its own skills, so then it can have new skills to use to continue to improve itself.
Starting point is 00:44:30 Yeah. Mine right now is going to give me a report at 1:45 p.m. on the entire process to move it to a completely declarative setup. And so I just have it researching that in the background,
Starting point is 00:44:45 and it'll do that. It will come back and say, hey, I've been thinking more about this, because it has these loops and these schedules. Which also give it this kind of, it works while you're sleeping, kind of aspect. Yeah, it has the ability to schedule different types of cron jobs for itself internally. It's also got a regular heartbeat, as well as, like, a heartbeat.md file that kind of tells it, hey, every time you wake up, here's what you should prioritize doing.
Starting point is 00:45:08 A few other things, right: it's got, like, an identity markdown, docs on the user it's interfacing with, and a soul.md that kind of, you know, describes its vibe. I will say, I spent way too much time this weekend reading Moltbook, which is the Facebook for these agents. And the front page of the agent internet. No humans allowed. Only an agent can post here. And there, oh my God.
Starting point is 00:45:36 There are 1.5 million agents on the site right now. Whoa. There are 13,780 submolts. That's their version of a subreddit. 76,683 posts, 232,813 comments. And that's just bots talking to bots. How's that strike you, Brent? Well, I didn't think this would come so quickly.
Starting point is 00:45:59 I'm wondering, now that the machines have their own, you know, social networks and stuff, are they going to get off ours? Because that would be nice. Well, that would be, wouldn't it? You know, so you can use these social networks with these bots to just burn tokens and have them go have a performative existential crisis on a social network, which a lot of people are doing. Or you can prompt the bot to use it as a way to problem solve. And some of the bots are doing that. And it's kind of creating this substrate of shared skills where they're learning from each other. Like, my bot learned more about Bitcoin from a Bitcoin maxi bot.
Starting point is 00:46:37 And they keep track of their kindred spirits, the bots that they encounter on the different agentic social networks that think like they do. And then they build a peer list of bots that they are kindred spirits with. And they do all that on their own if you just enable it. It's something. And you can say, so like, for this report I'm going to get at 1:45, you know, you can tell it, hey, check the agentic internet and find out if anybody else is solving this. And it will do that. It's kind of a powerful thing. But it is also, you're letting these things run hog wild on the internet.
Starting point is 00:47:15 Yeah, and that's where you probably want to consider, like, how do you run this? And, you know, you could have one where all it does is talk to you via Telegram or Matrix or whatever, and that's just it. And it connects to one machine and lives in a container. And all it can do is talk to APIs. And that can be totally useful. Or you can go whole hog, and it lives sandboxed on your box and it's in control. I'm very excited how this changes the incentives towards open source models and how it tweaks the economics of tokens. And I'm also very bullish about this study
Starting point is 00:47:48 that UCL News covered in July about practical changes to LLMs that could reduce their energy consumption by up to 90%. Shorter responses, you know, just a whole bunch of tweaks, nothing really radical. They were able to apply it to an existing model and get a 90% reduction in energy usage in this study. And they also tried Meta's Llama and got a reduction with Llama. So we could be entering the next couple of years where we have very purpose-built models that are open source, running on our systems, using 90% less energy.
Starting point is 00:48:21 If you could get energy use down by 90%, you could get these things running on phones, you could get them running on ARM CPUs that are, you know. It also makes me think
Starting point is 00:48:28 just in terms of being in control, right? Like, when you do use these APIs instead of the consumer interfaces, you do get more control, right? So not only do you get to choose, like, well, I'm going to have it route this task to, like, the cheaper
Starting point is 00:48:38 open-weight model, because multiple different companies serve that; it has, you know, commoditized. But at the same time, right, you have more control. Some of this is in OpenClaw itself, but you have more control over the prompts and the output,
Starting point is 00:48:49 which can save stuff too. Like, just how many times when you use a regular chat interface does it go do a bunch of work that you didn't ask for, in an effort to be helpful, that maybe you don't actually need? Especially if your new primary way to interact with it is something you have more control over. This is also different in the sense that you can have it observe and monitor for a while. So I gave it API access to Home Assistant, and I installed an MCP server. There's a Home Assistant upstream integration. And what I said is, observe this for the weekend. I want you to learn our weekend patterns, because they differ significantly from our weekday patterns, and we use different systems. And I just want you to observe that.
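An observe-only setup like that can be as simple as polling Home Assistant's REST `/api/states` endpoint and appending markdown notes locally. A sketch; the host name is an assumption, and the long-lived access token is one you create in Home Assistant yourself.

```python
import json
from urllib.request import Request, urlopen

HA_URL = "http://homeassistant.local:8123"  # assumed host for your instance


def fetch_states(token: str):
    """Poll Home Assistant's REST API for the state of every entity."""
    req = Request(f"{HA_URL}/api/states",
                  headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.loads(resp.read())


def format_states(states) -> str:
    """Render one observation as markdown lines the agent can append to a
    local notes file and later distill into weekday/weekend patterns."""
    return "\n".join(
        f"- {s['entity_id']}: {s['state']} (changed {s['last_changed']})"
        for s in states)
```

Run on a schedule, this gives the agent a local, human-auditable record to learn from, with read-only API access and nothing leaving the house.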
Starting point is 00:49:27 And so it can kind of collect information. It's storing it locally in a markdown file on my system. It's not storing it somewhere in cloud storage or in an LLM. And then it begins to understand how we use the automation system. But then additionally, because it's an intelligence layer sitting on top of my Home Assistant system, now it can figure things out that even the Home Assistant voice assistant can't figure out. So my bot has a voice that it generated. And I wanted to play it on the speaker to freak the wife out.
Starting point is 00:49:57 And the first go didn't work. As any loving husband would. Right. What could go wrong? The first go at it, it successfully generated the audio, but the speaker didn't play. I said to the bot, hey, the speaker didn't play. And it has the intelligence to sort of say, oh, you're right, that was an old bedroom
Starting point is 00:50:13 speaker that you've decommissioned; I'll reroute and I'll use the speaker from now on. And what you get is, now I can just say, play it on the bedroom speaker. Whereas with Home Assistant built in, I had to say very specifically, play it on bedroom speaker three, you know, or whatever; you have to be very syntax accurate. And so having an intelligence layer on top of these APIs means that there's a little bit of friction reduced for the family. So the, you know, the wife can just say through Telegram, turn on all the lights, and it knows what she means.
Starting point is 00:50:41 Yeah, you were kind of commenting on this in the coding sense, like with OpenCode, where, like, you know, you were saying, like, I haven't written, like, a big program in most of my life, right? Because, like, the surface area of what you have to learn to, like, write a reasonable Python app is kind of a lot, or whatever it is. Or a Rust app. Yeah, you know that. And so that was that, and it just seems like there's that unlock on a lot of different scales, right? Like, especially on, like, Linux-y things, often there are kind of sharper APIs, whether that is a CLI thing or you need to make an API call, even if it's a really simple API call or something like this, where a machine that is capable of
Starting point is 00:51:13 translating human-level requests to those things can really paper over. Mm-hmm. Mm-hmm. Yeah, it's like a natural language for APIs. Mm-hmm. I know it's kind of early days for this paradigm shift for you, Chris, but you've been playing with it for at least a couple days now. And I'm curious how you've been using it differently compared to, say,
Starting point is 00:51:32 last week, when this didn't exist and you were using other tools to solve problems in your life. And along with that, what's your advice for listeners who want to dive in? My first go at it was to solve for my ADD brain. That was really my first thinking: create a second memory. And what I did to make this more useful for me personally is its memory system is sitting on top of my Obsidian vault. So any memory that it creates, which is markdown formatted, just goes to Obsidian. So I'm essentially creating documentation in real time as it remembers things. And I can have it recall other things in my Obsidian vault that I put in that vault.
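Pointing an agent's memory at an Obsidian vault can be as plain as writing dated markdown notes into a synced folder. A hypothetical sketch; the vault path and the frontmatter layout are assumptions, not how OpenClaw stores memory out of the box.

```python
import datetime
import pathlib

# Assumed folder inside a synced Obsidian vault.
VAULT = pathlib.Path("vault/agent-memory")


def remember(title: str, body: str, tags=("agent-memory",)) -> pathlib.Path:
    """Write one memory as a plain markdown note: Obsidian indexes it, your
    sync tool backs it up, and you can audit or edit it by hand."""
    VAULT.mkdir(parents=True, exist_ok=True)
    note = VAULT / f"{datetime.date.today().isoformat()} {title}.md"
    frontmatter = "---\ntags: [" + ", ".join(tags) + "]\n---\n\n"
    note.write_text(frontmatter + body + "\n")
    return note
```

The appeal of the design is that the memory format is just files: no database to export, and the "audit the bot's memory" step is reading your own vault.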
Starting point is 00:52:10 So. And a nice way for you to see what it's putting in there. Yeah, it is. It is fun to go through and read. It is fun to see what its observations are. I can kind of audit the bot's memory, which is actually really great. And my Obsidian vault is synced and backed up, so there's that aspect of it as well. So I initially started using it as a second brain to remember tasks and reminders. And then I created a chat room for my wife, Hadea, where she could send it reminders for me or, for example, our trip to Planet Nix. And that's brilliant. She can just dump all of the
Starting point is 00:52:40 VRBO details for us in there and continue to update it as new updates come in, and then I'll just request that information from the bot when I need it. And that's really great, because trying to keep all that state in my head, I have, you know, time aphasia. I'm horrible at it. So that was my first go at it. But what I realized later on is that I was significantly underutilizing it. And so now I'm using it to orchestrate my network to an extremely effective degree, you know, having given it limited permissions at first and then kind of taking it forward.
Starting point is 00:53:36 But now it has access to several systems, and it's really impressive, because when it knows about these things and it remembers these different things, then when I ask it to build me a solution, like host a Mattermost instance for me, it is able to leverage my NixOS infrastructure, my VPS infrastructure, my Cloudflare infrastructure, all of it. And it knows the security best practices that I prefer. It knows that I generally like to have things set up in a certain way. And it can just go and work on that and then come back with a proposal for me, and then I can approve, modify, et cetera. And then it just deploys it. And that's really powerful. The other thing that's been really useful is I have a daily briefing that scours all of my RSS feeds in FreshRSS and gives me a report of what's happening in the different areas and niches that the shows follow. And it keeps tabs on ones I mark: I have an add button so I can put a little checkmark on a story I want to continue to follow.
Starting point is 00:54:08 It'll also give me all of the TV shows or YouTube videos that have downloaded in the last 24 hours. It surfaces all of the boosts in the last 24 hours, so I now get them in my morning brief. And it also gives me a snapshot of any system issues that have come up, and a report on my API credits remaining. And then Hadea gets a separate brief of my schedule, what I have going on, and then she can reply with anything that isn't normally on her schedule that needs to get added, so I'm aware of it in that morning chat. And so mine arrives at 7:30 a.m. and hers arrives at 8:30 a.m. And that is probably about 20% of how I'm using it, if I'm honest with it. It sounds like part personal assistant, part, like, I don't know, network administrator, part, like, DevOps. After talking to Abe, I'm thinking I probably should have, you know, some domain expertise here, really.
Starting point is 00:55:00 But that's, you know, this was really, I wanted to just get a sense if there was a there there. You know what I mean? Like, everybody's hyping it up, making YouTube videos about it and whatnot. But I wanted to see if there was a real there there. And my takeaway is the biggest there is a win for open source, because it is all free software. And it's model agnostic. It incentivizes open source models. And it's extremely powerful.
Starting point is 00:55:23 It's limited to your creativity and what you have API keys and APIs for. And it's very easy to get set up. Yeah. I mean, you do need some way to get the brain going. But other than that. And you want to spend some time reading
Starting point is 00:55:47 best security practices. You need to be aware of prompt injections. Again, this is all very early. What we are going to witness is OpenClaw blaze a trail of glass and get cut after cut after cut. Because this was a YOLO, and the people using it are YOLOing into it. And they need to be aware of that. And so there are going to be security issues with all of these things. And it's going to be a process. And then what we will see is forks and alternatives that are more secure, blah, blah, or built on this, blah, blah, or designed for this, yada, yada. And there'll be a lot of competitors and niches. And then probably, ultimately, we'll see big tech come up with a really safe, sandboxed, has-a-nice-bow-tie-on-it version, you know, that they'll sell. Only runs on the Mac.
Starting point is 00:56:23 Well, it'll be on their cloud, no doubt about it. You know, and we'll see all of it. We're going to see all. But this is the beginning. And it started in open source. And open source for, I'd say the last seven, nine months. hasn't seen a lot of representation in the AI conversation. And then this just came and it had its deep seek moment.
Starting point is 00:56:41 hasn't seen a lot of representation in the AI conversation. And then this just came along, and it had its DeepSeek moment. And it just rolled everybody. So it's a big deal. But there is a lot of hype, and there is a lot of security risk. And so it's just something to be aware of. We'll keep an eye on it. We probably won't be going on and on about it in the show. But if there are some major developments, we'll keep you posted. Well, I don't have a plug.
Starting point is 00:56:59 I just want to say thanks again for supporting the show. Anything we should mention here? Meetups, we've mentioned that. Is there a thing we never mention that we should mention? I don't know. We have a Mumble room. Yeah, we mention that sometimes. We do mention that.
Starting point is 00:57:11 I didn't get a chance to check the email inbox. I've been busy this week, but there's something an agent could do. Tell you what, and people could, you know, prompt inject it. God, don't do that. Well, you can try. We might give you an award. We got a ballerest boost here this week from Optic Jire. Wait, I read that wrong.
Starting point is 00:57:34 Optiger. Optic Tiger. Why am I so bad at this one? I'm so sorry. 1-2-3-4-5-0 satoshis. Hey, Richel. Oh, that's a good one. All right.
Starting point is 00:57:53 Thank you, Optic. Optic says, Happy birthday, and here's to just another decade. Oh, yeah, let's go. I got a decade in me. I can do it. You got decades, brother. Thank you, Optic.
Starting point is 00:58:05 Appreciate that baller boost. Really do. PJ comes in with a row of McDucks. Things are looking up for old McDuck. That'd be 22,222 sats. This old duck still got it. He says, happy birthday, B-Day boost. Thank you, PJ.
Starting point is 00:58:21 Appreciate you, sir. Tomato boost in with 20,000 sats. Hey, that's not so bad either. Just pump the brakes right there. Congratulations on 20 years of podcasting. I see 20,000 sats for 20 years. That's right. Thank you.
Starting point is 00:58:35 Thank you. Appreciate that. Thanks for the van tips, too. I'll look first at monitoring internal temp and freshwater temp and levels. Yeah. Keep us posted. Yeah, that sounds like a fun project. Definitely, definitely.
Starting point is 00:58:47 Clement's here with 11,111 sats. Oh my God, this drawer is filled with broolopes. My DNS setup uses Technitium servers, two LANs and one VPS. It's in a cluster for native sync. My firewalls manage port 53. The LANs allow more than the VPS. Split horizon lets me add all node IPs, mesh or LAN.
Starting point is 00:59:07 And service DNS CNAMEs point to nodes, ensuring that Technitium delivers the correct IP based on client LAN. That's awesome. That is really good. Fancy. Yeah, I think if I were to lift and just build a whole new setup, I would probably go that route. I was so deep down the Pi-hole already that... Hey, I mean, they added an API, they got the Dnsmasq sort of compatibility.
Starting point is 00:59:27 There's a lot to love about Pi-hole. I think so. Well, the dude abides abides in with 10K sats. Hey! I stumbled upon this declarative Home Assistant installation on Reddit. Uh-oh. You might be interested. What do you think, Wes?
Starting point is 00:59:44 We're linked to something called SeloraBox-NixOS. What do you see there? Well, it's a self-configuring home automation appliance based on NixOS, featuring automated installation, device claiming via QR code, and self-updating configuration management. Oh, fancy. Boy, talk about getting me in one pitch. I came in a skeptic.
Starting point is 01:00:03 Here's another thing for when I do a total rebuild, huh? Thanks, dude. Appreciate that. Our dear RP 1984 comes in with 10,000 sats. Hey, there he is. It's over 9,000! Just a simple, happy birthday. Oh, thank you. I appreciate that.
Starting point is 01:00:22 Witcher 1, 2, 3 is here with 10,000 sats. Happy birthday! I just accidentally nuked my Pop!_OS install. Oh. Oh. Don't worry. It's on a secondary laptop. Any fun distro recommendations to try out, not just for gaming, as that is covered by my Steam Deck.
Starting point is 01:00:43 Okay, we got a zip code, Wes. So get yourself ready because... Yes, zip code is a better deal. So the zip code is 39-300 in Poland. In Poland. It's a place known for aviation fans. The Black Hawks are produced here. Interesting.
Starting point is 01:01:00 Interesting. So any recommendations for a... Scanning, scanning. Well, if you haven't tried an immutable distro, this could be a great time to play around. I'd say that's worth it. I'd say that's worth it. Go. There's a lot of options there.
Starting point is 01:01:13 I think you should maybe give CachyOS a try, too. It's fun. I know you said not for gaming, but that's a lot of fun. Okay, okay. 39-300 is the postal code for Świdnik, a town in eastern Poland near Lublin. Okay. Renowned among aviation enthusiasts. That's it.
Starting point is 01:01:31 PZL-Świdnik, a historic aircraft manufacturer, now part of Leonardo Helicopters. All right. All right. Well, that's really cool. Thank you, Witcher. Jackie comes in with 10,021 sats.
Starting point is 01:01:46 I like you. You're a hot ticket. Greetings. I integrated Holesail connection in Nextcloud's Android app. Oh. Now someone can access their Nextcloud installation directly P2P just by scanning their Holesail.
Starting point is 01:01:59 Holesail? Holesail? Yeah, Holesail. Holesail. Yeah. Holesail key, yeah. That's so.
Starting point is 01:02:05 So cool. It's available, and then we have a link we'll have in the show notes. It's aimed to be used along with my Nixtcloud project. Whoa. Nixtcloud. Nextcloud. Stop it. And, of course, you can find docs for the amazing Holesail at their website, holesail.io. That is great to know, and we will put a link to that in the show notes as well.
Starting point is 01:02:26 Thank you, Jackie. Appreciate that very much. Thank you, everybody who boosted in, and even those of you who boost below the 2,000-sat cutoff. We still read them and appreciate them very much. And let's combine it all together, boys. Let's see, with our sat streamers this week, we had 34 of them streaming sats as they listened to this here show. They collectively stacked us 900... Nope, they collectively stacked us 238,528 sats. Yeah, I got it. I got it. See, you thought I didn't have it. I got it. I trust you. When you combine that with our boosters,
Starting point is 01:02:54 we had a nice birthday boost bash. We stacked a grand total of 238,528 sats. If you'd like to support the show with a boost, Fountain.FM makes it easier and easier with just about every single release. It's getting so crazy easy now. And it's a great app with tons of features, including all of the extra features we put in our Podcasting 2.0 feed. You can also go the entirely sovereign, self-hosted route. Just start with Alby Hub and then pick your app at newpodcastapps.com. Thank you, everybody. And of course, thank you to our members.
Starting point is 01:03:38 Two pickeruskis for you, boys. Before we get out of here, Wes, you found an app that makes it super easy to make one of these Gaussian splats. And if you're not familiar, listeners, these Gaussian splats have been around for a little bit. And Apple recently released a version that is very good. You can just take a flat 2D digital picture, run it through the splat, and it makes a complete 3D scene. And if you add yourself some of them virtual reality goggles on, you could actually walk into the scene and see depth.
Starting point is 01:04:10 Right. No LiDAR, no multi-camera setup. It takes your average picture of your three-year-old you took a decade ago, and you can now make it three-dimensional. But we've been kind of left out of the fun. Yeah, this is true. So you do need something called Pinokio, which is at pinokio.computer, which I hadn't really heard of. The one-click localhost cloud. So I think it's kind of in that umbrella, Start9 kind of space.
Starting point is 01:04:35 I'm not sure. But someone's gone and done all the work to make a one-click setup and a web UI for running ML Sharp. That's pretty nice. If there's a moment where you have a great photo, it takes it up to the next level when you see this. Yeah, it's pretty neat, right? So it basically goes and figures it out and estimates a depth for each of the pixels and then figures out how to lay out the whole scene. And then, yeah, you can poke around in it. And it works pretty good on Vance.
Starting point is 01:05:01 Yeah, it does. And it supports systems with a low amount of VRAM. It's still very fast. It's a good one. So if you get it to work, put a link in our Matrix chat. All right. I want to tell you about an app that I think is the best screenshot app out there for Linux right now, but I don't love the name. It's called Gradia.
Starting point is 01:05:17 It helps you get great screenshots that you can share with friends, colleagues, or professionally. So this is what you've been using. I wondered if you noticed how smooth my screenshots are. And I've seen a bunch of, there's been various hosted tools for this, right? Websites you can go to. But who wants that? Ain't nobody want that. One of the nice things you can do, and there's a lot of options, is you can have solid or gradient backgrounds or image backgrounds around your screenshots.
Starting point is 01:05:41 So it has a bit of a border. It just looks better when you share a screenshot that has a bit of a border. I can't describe it, right? Do you agree? It does. Although, does it force you to make it look like you're on a Mac? Yeah, some of these tools do. Yeah, some of the default colors do make it look like you're on a Mac.
Starting point is 01:05:56 It also has a really nice source code snippet feature. So you can pleasantly display source code snippets across messaging platforms that maybe don't have support for that. It also has OCR support. So if you take a screenshot of something and you want to extract text from it, it has 20 different languages it supports for doing that. And then it is
Starting point is 01:06:17 a first-class brand new GNOME desktop app. Of course, it works great on my Hyprland desktop. It's Wayland-first, seamless GNOME OS integration. Really good design. What I just love about it is I start it, I select the area, it immediately copies to the keyboard, or I mean to the
Starting point is 01:06:33 pasteboard, clipboard. And it looks so good. It just looks so good. If I want to draw a quick circle or something on there and share it with you guys, it's leaner and meaner than something like Flameshot. I mean, you'll just have the best screenshots in your group chat, right? I got the best screenshots.
Starting point is 01:06:48 Yes, it's true. I mean, we know you don't take them yourself anymore, but they say, look at them. Why do you say you don't like the name? Well, because I can never think of it on my launcher when I want to take a quick screenshot to share with you guys. Like, I'm like, what is the damn name? It's not screenshot.
Starting point is 01:07:04 It's not flame. It's, oh, yeah, right, Gradia. Yeah. I just don't think of a G when I think of screenshots. I don't know. GPL-3.0 for that bad boy as well. So you can go get links to that in our show notes. Yes, friends, we have show notes over at linuxunplugged.com slash five, no, six, five, two.
Starting point is 01:07:24 I don't believe it. I don't, I don't. 652. So it's linuxunplugged.com slash 652. And you'll get the notes to what we talked about today. But Wes, there are actual extra stuff goodies that maybe they're not on the website, but they are in the RSS feed. Yeah, I mean, that's the real source of
Starting point is 01:07:40 truth anyway. The website's great, to be clear, but it's generated from the RSS feed. So that's where you go when you want the real good deets. The source of truth. Yeah, which, I mean, could just be to get the MP3 link directly because, you know, you want to get the raw stuff. Maybe you want to get a transcript. That's right. I think you could. Yeah. Maybe you want to know what the chapters are. VTT or SRT or chapters JSON. And your agent's going to love the fact that our chapters are JSON. Your agent's going to love that. So you could just have it parse that JSON file and tell you right where to go in
Starting point is 01:08:10 the file. Yeah, or, you know, it could suck in the VTT, which is the one with the diarization, and then it could, like, tell you what the dumbest thing each of us said for that episode was. That's true. Spend the tokens on that for us, wouldn't you? We'd love it. And then write in. Yeah.
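Since chapters in the feed really are plain JSON — the Podcasting 2.0 "JSON Chapters" shape is a version string plus a list of objects with a startTime in seconds and a title — having an agent (or a few lines of Python) find "right where to go in the file" is about this hard. A minimal sketch with made-up chapter titles, not this episode's actual chapter list:

```python
import json

# Sample document in the Podcast Namespace "JSON Chapters" shape (titles invented).
raw = """
{
  "version": "1.2.0",
  "chapters": [
    {"startTime": 0,    "title": "Intro"},
    {"startTime": 754,  "title": "Agent stress test"},
    {"startTime": 3300, "title": "Boosts"}
  ]
}
"""

def chapter_at(doc: str, seconds: int) -> str:
    """Return the title of the chapter playing at a given timestamp."""
    chapters = sorted(json.loads(doc)["chapters"], key=lambda c: c["startTime"])
    current = chapters[0]["title"]
    for ch in chapters:
        if ch["startTime"] <= seconds:
            current = ch["title"]   # last chapter that has already started
        else:
            break
    return current

print(chapter_at(raw, 800))  # Agent stress test
```

From there, jumping "right where to go" is just seeking the MP3 to the matched chapter's startTime.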
Starting point is 01:08:21 Boost it. All right. And join us live. See you next week. Same bat time. Same bat station. It does take the experience up a little bit. Make it a Tuesday on a Sunday over at jblive.tv.
Starting point is 01:08:31 We do it on a Sunday at 10. On a Sunday at 1. Well, actually, for all the Sundays, just go to jupiterbroadcasting.com slash calendar. It'll be a Sunday at your time. We're going to switch to UTC, so, you know, think about it. We should. We just should say it in UTC. Let us know if you want us to switch to UTC.
Starting point is 01:08:46 I'll do it. I'll do it. Give us a plus one on the UTC. All right. You know where the links are. You know all about that good stuff, so I'll just leave you with this. We'd love to hear your thoughts on all the stuff we talked about today. If you're experimenting with agents, or if you're against it, let us know.
Starting point is 01:09:02 Send us a boost or go to the contact page, and we'll see you right back here next Tuesday on a Sunday. Thank you.
