LINUX Unplugged - 655: Speeding Up Mistakes
Episode Date: February 23, 2026

Planet Nix and SCaLE are just days away, and we're getting a head start with two guests, the tech, and the trends shaping open source. Our trip starts here!

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:
💥 Get Sats Quick and Easy with Strike
📻 LINUX Unplugged on Fountain.FM
SCaLE 23x | Registration — Get 40% off registration with promo code "UNPLG"
PlanetNix 2026 — Where Nix Builders Come Together
Pasadena Linux Party Meetup
stahnma (Michael Stahnke)
Flox, Nix, and Reproducible Software Systems with Michael Stahnke - Software Engineering Daily
Pro OpenSSH (Expert's Voice in Open Source): Stahnke, Michael
Nix and AI, are we there yet? | SCALE
stahnma/mandatoryFun
Post by @kelseyhightower.com — Bluesky — I don't care about LLMs or AI. I care about people. AI is just a tool to me. Tools enable people to help or harm at a scale beyond their natural abilities. I care about how people treat each other using these tools.
distrostru's gatus config
Status Page On Demand | Gatus
MimiClaw — MimiClaw turns a tiny ESP32-S3 board into a personal AI assistant.
hrwtech/openclaw-esp32: OpenClaw on your Arduino ESP32 devices
Pick: WebsocketCAM — Access Android camera via WebSocket client API and run AI / computer vision tasks with ease on live camera stream
Pick: ChrisLAS/openclaw-mcp-bridge — OpenClaw plugin that bridges any MCP HTTP server as native tools. The missing piece between OpenClaw + Ollama and the MCP ecosystem.
Pick: ChrisLAS/freshrss-mcp — A simple MCP server to stand in front of your FreshRSS server
Pick: wechsel — A CLI utility to replace home folders with symlinks to project folders
wechsel-extension: A GNOME Extension for Project Switching
Transcript
I just about drank all my Red Bull.
Oh, I did drink all my Red Bull.
Uh-oh.
Wes brought me a winter iced edition.
They're pretty nice.
The iced vanilla berry.
It's quite pleasant.
It's a nice winter treat.
Do you think they'll have this in the summer?
Because I think it could be refreshing all year round.
I think you're right.
Maybe poured over a little ice in the summer or sitting on the deck.
I think it could be our drink of choice on the road trip.
Do you think they'll sponsor us?
You know, Brent's talking about flying into Vancouver and then coming down with us in the car.
To California.
Yeah.
Great.
I support that.
I know.
I know that wasn't plan A, but plan B sounds fabulous.
Boy's trip.
I'll miss the van, but it'll be solid.
Well, I know.
I know.
I know.
If it weren't for, like, so many mountain passes that are really snowy to get there,
plus like a four, five-day trek.
I understand.
Of course.
We do want you fresh.
Every Airbnb we booked is only for two people, so you will have to sleep in the car.
But that's fun.
Hello, friends, and welcome back to your weekly Linux talk show.
My name is Chris.
My name is Wes and my name is Brent.
Hello, gentlemen.
Well, as we prepare to hit the road for Planet Nix, we're going to chat with two individuals
who have seen some major shifts and transition in Linux over the years, and we're going to
get their take on where we stand right now.
Then we're going to round out the show with some great boosts, some picks, and a lot more.
It's a big episode, so before we get there, we got to say time-appropriate greetings to our
virtual lug, which is PJ.
Hello, PJ!
Hello!
Hello!
No shade to all the folks up on quiet listening.
To be honest with you, if I was listening to the show live, that's how I'd be doing it.
I'd be up there in the quiet listening lounge too.
Low latency, no risk of accidentally showing up on stream.
Yeah.
That way I can make my fart noises and stuff and it doesn't go out.
And we get to see it, right?
That's true.
Everyone else watching, it's a little hard to tell.
We know you're there, but it's a little mumble.
It's right there.
It gives it the live vibe.
We are live every single Sunday over at jblive.tv?
Is that what it is?
Or jbbbb.com.
Is that where you go?
Or jabbylive.
Wes.
What if I just want to do it in my podcast app?
You can do that.
I can.
Both live audio and live video if it supports an alternate enclosure.
Whoa, in the podcast app?
That's crazy.
You've got to be on a Podcasting 2.0 app.
Okay.
Well, let's say good morning to our friends over at defined.net.
Go to defined.net slash unplugged and meet Managed Nebula from Defined Networking.
It's a decentralized VPN built on the open source nebula platform that we love.
And it's optimized for speed, simplicity, and security.
And unlike the other systems out there,
There isn't a central provider or a central control plane that can knock you offline.
In fact, you can build out the network as resilient as you like.
You can let them manage it entirely.
You can have one system talking to another system for backups
or you can have a worldwide enterprise system.
It's best in class.
It's very impressive.
It's one of the projects that we have followed from the very beginning,
and we're super excited to have them on the show because we love it.
We've been like totally dialed into where this thing's going and we're using it.
Nebula was built in 2017.
That's how long we have been fanboys.
So go check it out because you can get started.
100 hosts for free.
No credit card required.
You go to defined.net slash unplugged
and you deserve a better VPN experience,
one that you fully control,
especially when it's your infrastructure
and you're going to use it for a while.
Defined.net slash unplugged.
And thank you to Defined for supporting the Unplugged program,
our one and only sponsor right now.
Well, Planet Nix is just one episode away.
We will be in studio, although I was thinking it would be fun to just hit the road early and do one show from...
You're right.
Just to do it.
Just to do it.
But I don't, then I think about what a pain in our behind that would be.
And I think...
No, it's a lot more work for no reason.
We'll just drive harder and faster.
So we have one more episode in studio.
And then we're hitting the road to Pasadena for Planet Nix and, of course, to scale.
Planet Nix is bringing engineers from Anthropic, Spotify, Microsoft, and AWS for
practical, getting-things-done
Nix talks and nerd workshops.
The agenda is live.
March 5th through the 6th, check it out.
Planet Nix. It's going to be amazing.
Flox is sending us there.
I'm very, very grateful for that
because this is something that we would have
so much FOMO on if we couldn't make it.
Oh, my gosh, I know. So much.
It would be so intense.
And, you know, Flox is the perfect company
to do this. They're in the right position.
They're trying to make reproducible
dev environments that are actually usable for everyday folks.
They're kind of, you know, in the mix here.
So we're going to have a meetup.
It's going to be great.
Meetup.com slash Jupiter Broadcasting.
The details aren't necessarily final there,
but we will get them locked in.
We do appreciate if you're going to make it
if you can sign up because it lets the venue know.
One more unplugged, boys.
I'm getting nervous.
Hopefully the weather's great.
I hope so.
Why do you say things like that?
You'll just jinx us.
I know, right?
If you can't make it too,
we would appreciate any support you can provide.
A boost or becoming a member would be a big help
because the show is running hyper-lean
at the beginning of the year.
And these trips are pretty expensive for
us. And maybe also
let us know what you want to see out of some of our
coverage. The things you're curious about, stuff we can
check in on. We're going to be at SCaLE too.
And that is the largest Linux event
in North America. And so there's a lot there.
We will be on the ground.
We do have a promo code. You can use
UNPLG to sign up and
get 40% off your tickets to SCaLE.
And if you're nowhere near the area,
you're not going, you're just sick and tired of us
talking about this. The good news
is we're almost done. And then you won't have to hear us talk
about this anymore. And we do appreciate your patience. Really, we do.
Well, we had a chance to sit down with Michael Stahnke, the VP of Engineering at Flox,
and I did a little of my internet stalking before we sat down with Michael.
And I discovered that he's been responsible for some projects that we have all used and benefited
from in one form or another. So he joins us now. He'll be at Planet Nix in just a couple of weeks
as well.
Giving a talk.
Yep, giving a talk.
Good talk.
We'll have that linked
in the show notes.
So let's welcome Michael
to the program.
Okay, Stahnke.
Welcome to the Unplugged program.
It's great to have you, sir.
Thanks for having me.
So you are the VP of Engineering at Flox,
and I think that is particularly interesting.
I'm just going to get right into it
because this is my bad.
I didn't realize that you were the co-founder
of Extra Packages for Enterprise Linux (EPEL)
back in the day in 2005.
And now here you are all these years later at Flox doing Nix stuff.
Still involved in packaging, turns out.
Yeah, I mean, that was kind of what came full circle was when Flox kind of reached out to me.
I was like, wait, you want me to help run a whole business based on packaging?
Hell yeah.
I'm so in.
So, yeah, I mean, I've loved packaging my entire life.
It was just one of the coolest things I have ever gotten to do and work on.
And so why would I not want to keep doing it?
And, you know, there's different ways.
there's different theories, but I've packaged for basically every Unix and Linux under the sun.
And they all have advantages and disadvantages.
And Nix was actually something I had looked at first in, I'm going to say 2007, 2008, somewhere in there.
And I just thought, okay, a bunch of Haskell people decided packaging just wasn't difficult enough.
And they made Nix.
And so I kind of dismissed it for a long time.
And, you know, obviously kind of sitting on the sidelines, I saw things mature and stuff like that.
But I really hadn't given it a shot until Flox had reached out.
And I started playing with it again.
And I was like, oh, there's actually a lot here that I really like.
So, and looking back on it, I would have changed my career completely if I had known Nix.
Because some of the work that I was hiring, you know, dozens of people to do probably could have been removed.
Oh.
Yeah.
So I guess also for a little bit of background, you spent some real interesting time at Puppet during a real key moment in history and CircleCI.
So that really feels like a very interesting background to now bring to Nix because
it's a totally, totally different world and sort of almost a 20 year journey to kind of bring you to this point.
It still must have been quite the leap, though.
I have to imagine from that background going to what you're doing today, that must have been a bit of a mental learning process.
The tech wasn't too bad.
I mean, there's new words, you know, you hear things like derivation or you hear things, you know, like provenance, you know, spoken about so regularly and things like that.
And like, that's fine.
And it, but so there was new vocabulary to learn, but most of the principles were basically the same.
You know, it was like, why do I chattr the immutable bit on my distribution server?
Well, because it should be read-only so you can't ship the same thing with the same version with a different shasum.
And it's like, well, it just handles that internally.
Like, you know, so there's a lot of those principles made total sense once I unpack them.
And there are other things that didn't make sense and frankly still don't to a lot of degree.
You know, there's just choices that are made.
I'm like, I have no idea why this.
decision was made or for whom this was made, you know, was it made to make one developer's life
easier or every user of Nix easier, you know, things like that. And I'm, you know, and again,
there's just not, every technology has these moments. So like this is not unique to Nix.
Yeah. But, you know, kind of from from the puppet world where we wanted to declare the state,
we wanted to have, you know, state enforcement and all of that, looking at Nix, you actually end up
in a very similar kind of, you know, graph theory, I'm going to walk my dependencies
kind of mode. It's almost the same stuff. It's just applied through a, you know, different language
lens, different client server models and, you know, things like that. So.
Yeah, and maybe you flip it a bit, right, where the system's working with you instead of sort of
having to impose idempotency from afar after the fact. Right. Right. It does kind of
stand out to me that, you know, between EPEL and Puppet and Circle, like, you've been involved clearly
with a lot of the different life cycle in software, not just, you know, building a back-end service
that runs a SaaS. It's like actually having to do good
builds and get them out there on infrastructure and get them configured and running properly
onto end machines, which I think probably connects a bit to Flox's mission too.
Yeah, I mean, you know, at Puppet, we built basically a whole distribution of this thing
called Puppet Enterprise.
It had to run on all these different platforms.
You know, we're testing on 100 different targets or whatever.
And so through that testing, we had to build custom CI systems because there was nothing
off the shelf that did anything that we needed it to do.
And eventually that's where CircleCI comes in.
They'd seen me give presentations on the way we build and test Puppet, and they're like,
we'd love to have you. And so I get to go do that, which is super fun because the cost of being wrong at Puppet was you had to ship a new, you know, a new version. You had to wait for your customer to take it up. And that might be a change window that's nine months from now before they even get it installed and all that. You go to CircleCI and it's like we can ship literally 85 times a day. And it was just awesome. You know, like that is so cool. And then at Flox, there's a little bit of both because it's like a client piece and a server piece. And the servers, we can be moving all around and we can be updating and, you know, adding new observability capabilities or, you know, whatever, all the time. And then on
the client, we ship it every two weeks and roughly, I mean, almost always every two weeks,
but every now and then we take a week break, if it's like a holiday week or we're all out at an
off-site or something. But, you know, and that gives you that client on your computers, whether
it's, you know, client server, laptop, whatever. But and, and so it has a little bit of both
of those like backgrounds that I really, you know, got to work in. And so it, I would say generally
it leverages a lot of my background pretty well. So then, okay, where is the gap now for regular
enterprise adoption? Because you kind of just made a great case.
but it still seems a little slow.
I mean, it's picking up.
Yeah, I think a lot of the friction is,
it's like sociotechnical, it's not just technical.
And so you have, you know, I think Nix is, it's cool.
It's, it's terrible.
The documentation and the getting started guides are still a little bit esoteric,
or maybe they're like, this one's weird if you're doing it
in this way versus that way.
But a lot of companies end up evaluating Nix or using it.
And what we find is there's kind of one person who's the champion
of the Nix, you know, experience.
And if that champion is no longer available for whatever reason, you know, they win the lottery, they leave, they get hit by a bus,
they go on PTO, they have a baby, whatever it is, the Nix effort kind of stagnates.
And then it basically gets a bad rap because people are like, oh, this Nix stuff, no one understands it.
But like, we rely on it.
This sucks.
It's always broken.
No one knows how to fix it.
Why do I have to learn all this?
That doesn't get a feature shipped.
Yeah.
Right, right.
And so, you know, when Michael, our founder,
you know, was at D.E. Shaw, which is a giant hedge fund, you know, big financial services.
The cost of being wrong there. Very high, by the way. You know, you're talking millions of dollars a minute at times.
So, and so for them, they needed a lot of precision, which meant they need to know where a software came from and everything.
And so they found Nix as this build system. And he was like, this is awesome, but my developers can't use this.
And so he started working on, what are the abstraction layers, what are the patterns, what are the workflows that people need?
And that was eventually spun out to become Flox. And as we've, you know, been developing Flox, it's been, well, how do we
work on team collaboration and team usage and team understanding versus kind of that superhuman
individual that, you know, maybe eats, sleeps, and breathes this stuff and sees it more religiously
than probably pragmatically. And I think for us, you know, if you're religious about it,
like, I love Nix, cool, that's fine.
work done and move on. And that's cool, too. And so we want to help you do that with these principles
that are awesome from Nix. Yeah, it kind of reminds me of editors. You know, you could do something
where you have a custom NeoVim set up,
and it really fits you,
and you've made it bespoke, and it's great,
and it works really well,
but you can't really onboard someone to that,
and it's kind of, Nix is the same way, right?
As a private superpower, it's incredible.
But yeah, trying to make sure everyone else
is on the same page with your particular way
of injecting the right modules.
Different story.
For us, like, one of the more interesting things
I would say about, like, Flox right now
is we're not just trying to be like Nix Plus Plus or Nix with a veneer.
You know, like there are other capabilities
within flocks that are more about,
portability of usage and about acceleration of environments and like all these kind of environmental
things that are, I would say, more Flox primitives than, like, Nix primitives at this point.
And that's really important to us as well because we don't just want to be like Nix Plus Plus
because if all you're going to do is make Nix better, just go work upstream and make Nix better.
Like, that's the right thing to do.
But we were trying to build it more with a commercial eye toward adoption from, you know,
businesses that actually want to get work done and have a build system that's consistent.
Any of those primitives stand out?
I'm just kind of curious.
I've dabbled with Flox, but I haven't used it
as part of a shipping team yet or anything.
Yeah, I mean, I think the Flox environment
is kind of the thing that we would say
is different. You know, in a lot of ways, it
resembles a Nix profile, but it's like a Nix profile
that is a big superset of what a Nix profile can do.
You know, we have service management within there,
so you can have developer services
kind of keyed into those, you know,
you could say flox services start or flox services status
and kind of get, you know, I have my Postgres running,
I've got Redis running.
So the second I can clone the thing I'm going to go work on,
I can just clone, activate, start services, and now I've basically got a working developer setup, you know, within moments.
And then more on the runtime side or, you know, builder side, you can build custom packages, you can add them in there.
You can activate things where you have libraries available to compile against, or you only have binaries in the bin path.
It works with ZSH and not just bash.
Like, so there's all sorts of enhancements that we've done there, but some of them are also just not things that, like, a nix-shell sets out to do.
And that's totally fine.
And, you know, for us, we want the Flox environment to be the unit of change.
And so, like, it's got history. It's got generations. You can roll back to previous versions of it, you know, like everything like that.
So that is more of an enterprise capability. So, yeah.
Yeah, that makes a lot of sense of kind of taking some of the ideas in the broader ecosystem in ways Nix gets right and maybe applying them to a broader scope of ideas.
That's neat. Yeah, if you kind of think about, like, the container was the unit of work and all the CNCF workflows and, you know, we built all sorts of tooling around it and everything.
and it's kind of similar like that,
and we just, we use the Flox environment.
It's not saying we can't work with containers, by the way.
We totally work with containers, but you don't have to.
It's optional.
Yeah.
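The workflow Michael describes can be pictured as a short terminal session. This is illustrative only: clone, activate, and the services commands are the steps he names, while the repository URL and the generation/rollback subcommands are assumptions, not verified against the flox CLI:

```
# Clone a project that carries a Flox environment with it (hypothetical repo)
git clone https://example.com/team/project && cd project

# Step into the environment and bring up the developer services
flox activate
flox services start     # e.g. the Postgres and Redis defined for this project
flox services status    # confirm both services are running

# Environments keep history and generations, so a bad change can be
# rolled back (these subcommand names are assumptions)
flox generations
flox rollback
```

The point of the sketch is the "working developer setup within moments" claim: one clone, one activate, one services start.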
Okay, so I want to shift gears and talk a little bit about another gap
that I think maybe you're bringing up in a talk on Friday.
So I kind of wanted to get a sense of, oh, I don't have a picture.
I kind of want to get a sense of what you'll be talking about.
Nix and AI, are we there yet?
And then you have a line in here.
Why aren't more ML teams using Nix?
Yeah, well, so there's like so many threads to pull out in that topic.
And I mean, I think part of it is model distribution is expensive and it's bespoke to a large degree.
It's like your home directory has a, you know, .ollama directory or whatever.
And it's just got gigabytes and gigabytes of stuff in there.
And if you have another user on the same system, they have this same thing.
They also have gigabytes and gigabytes of stuff in there.
And it's just like, I mean, just from an efficiency standpoint, it makes me angry.
But, like, also, is it versioned?
Is it packaged?
Like, how do you know what's in it?
Where did it come from?
Was it from hugging face?
Was it, you know, did you get it straight from somewhere else?
Did you just go download it off GitHub?
Like, there's all sorts of needs there for those AI and ML teams.
And I think you can just look at it as, well, what if they were just packaged?
And, like, you know, they should be read only.
Like, those models shouldn't be being written to.
So, like, a lot of the things that Nix does by default, you kind of want those, those defaults.
And so packaging some of that, but also, you know, just being able to pass
it around and be like, you know, a lot of those models also don't care what architecture they run on.
So you don't need like a package-specific, like, this is on aarch64 and this is on, you know,
PowerPC or whatever. It's like it's just inputs. So not all of them depends on the model exactly.
But like, you know, there's just different ways of doing that stuff. And so I think that's part of it.
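The "what if models were just packaged" idea maps naturally onto Nix's fixed-output fetchers: the file lands in the read-only store once, content-addressed, and every user on the machine shares the same path. A minimal sketch, with a hypothetical URL and a placeholder hash:

```nix
{ lib, fetchurl }:

# A model weight file treated like any other fixed-output package:
# immutable, verifiable, and deduplicated across users of the system.
fetchurl {
  url = "https://example.com/models/example-7b.gguf";  # hypothetical model URL
  hash = lib.fakeHash;  # replace with the real sha256 reported on first build
}
```

Because the store object is read-only, two users pulling the same model get one copy on disk, and any tampering changes the hash, which addresses both the "gigabytes per user" and the "where did it come from" complaints above.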
But also one of my favorite things I've been doing since I've worked at Flox is I have a program where I like to say, generate a Nix expression to build this program.
And I'll throw it at an LLM.
It's a really, really simple Go program.
I'll be honest.
Like, it's maybe got, I don't know, maybe 15 dependencies,
and like most of those are really well-known Go stuff.
Like, not, nothing esoteric.
And when I first did it,
I'd iterate with ChatGPT 30, 35 times to get a build that worked.
Yeah.
And it was like crappy.
And like now, you know, if you pull up like Antigravity or Claude or whatever
and just be like, hey, give me a Nix expression,
it one-shots it.
And like, that is a huge difference.
And so now when it one-shots it, it's like, is this the best Nix expression I've ever seen?
No.
Right.
You know, does it do everything the way that I would prefer it to be done?
No.
But does it get me the package?
It does.
And like, that's huge.
And so some of this is you want to look at like evals over time.
And the way that most models are, are kind of scored is they have these things called formal
evaluations.
And we've never built one of those for Nix.
I just had this program that I've been using for three years.
And like, that's what I've been doing.
That's been my eval.
And so I'll probably end up talking about that.
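For a sense of what "one-shotting" a package means here: a simple Go program is conventionally packaged in nixpkgs with a `buildGoModule` call of roughly this shape. Everything below is a generic sketch with placeholder names and fake hashes, not the actual program Michael uses as his eval:

```nix
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "example-tool";   # hypothetical project name
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    rev = "v${version}";
    hash = lib.fakeHash;    # the first failed build reports the real hash
  };

  # Hash over the vendored Go module dependencies, discovered the same way
  vendorHash = lib.fakeHash;

  meta = {
    description = "Placeholder description for a simple Go program";
    license = lib.licenses.mit;
  };
}
```

A model that gets the `src`/`vendorHash` dance right on the first try is exactly the improvement being described, even if the result isn't the prettiest expression.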
I love that.
Yeah.
That is really.
Okay.
I'll be there.
Yeah, all right.
So I have a question unrelated,
but I have to ask you before we let you go.
Because, you know, I was doing my due diligence,
and I always like to check out a guy's GitHub
before he comes on the show.
And I noticed that you have a project called Mandatory Fun,
a repository called Mandatory Fun.
What's this about?
So it's basically this idea that I had
with a buddy of mine or maybe a couple buddies of mine
that was, we love to make different collaborative,
collaboration software.
And basically,
you can think of it
as like shit posting at scale.
I don't know if I can say that,
but you can beep it out, I guess.
But like some of it was,
we wanted a pipeline where you could
shoot your AI generated images
as fast as you can.
It would drop it into a channel in Slack.
And like, it's just fun that way.
Or like we had, you know,
like I think a lot of my Hubbot extensions
are in there.
We have a lot, you know,
a lot of like bot related activities in there.
And then some of the stuff
I was also playing with how does a monorepo
work, because that is a giant monorepo with several subprojects.
And honestly, the more that I've played with it, the less I like mono repos again.
I go back and forth every time.
Just like the drawbacks.
Yeah, the grass is always greener where I'm not standing.
So, but I mean, generally, I also have a thing where, like, if you have a permanent Zoom meeting and somebody joins, I can fire a webhook and, like, notify you through Slack so they can say, hey, somebody's joined the Zoom.
And so like, my buddies and I have like a standing Zoom.
And if somebody's there, you get a notification and you can be like, I have time to join right now and say hi or whatever.
And so it's just stuff like that where it's a bunch of, how do you collaborate with your friends?
And I mean, not to say that it wouldn't work in a work setting, but it certainly wasn't designed that way.
And that's why we pulled the name Mandatory Fun as just an idea.
So I think it's just a great title, Mandatory Fun.
That's always how it works.
And a nice mix of, you know, right, these sort of collaboration, corporate-y workflows.
But yeah, like real people want to do it in a more casual way to get fun stuff done.
Yeah.
Yeah, well, and there's no rule that says you can't have fun while you're working.
So I encourage people to do that.
Well, I think we're going to have a lot of fun at Planet Nix.
I'm really looking forward to it.
Yeah, I certainly hope so.
You know, I think we have several hundred people registered right now.
We usually get about several hundred more during the last week.
So, you know, it should be four or five hundred people, which is amazing.
And there's so much going on this year.
It's going to be just like the energy there, I think is going to be crazy.
People are going to be laptops open building stuff.
Yeah, I mean, we have work.
work areas. We have, you know, kind of workshop tables. We have talks. We're going to have
stickers. We're going to have giveaways, trivias, all sorts of stuff.
Gosh, we've got to get ourselves to California.
Like, thank you for coming on and chatting with us. And I look forward to seeing you in California
in just a few days. All right. Well, thank you so much.
Thanks, sir.
Well, if you would like to support this here production, you can do so directly, linuxunplugged.com
slash membership and you get access to the bootleg feed, which is nice and long, and or you can
get the ad-free, which is cut tight and, you know, great, I guess, right? It's still got
Drew's touches, which is really the best part. Yeah, you get it, yeah, clean, super well produced,
well mastered. You get all the shows, ad free and bootleg, at jupiter.party. If you want to support
the whole network and help us get down to scale. And of course, you can send us a boost to get us down
there as well. Appreciate that.
Well, Kelsey Hightower is back. He joined.
us about a year ago. He's a software engineer who calls himself retired, although for somebody
who's retired, he still works quite a bit. He hits the speaker scene a lot. He is renowned
for his Kubernetes work, open source advocacy, really an insightful speaker. And he's going to be
joining us at Planet Nix, but we had a chance to sit down and talk with him before Planet Nix.
Kelsey, welcome back to the Unplugged program, and happy pre-Planet Nix to you.
Oh, happy to be here. You guys have that whole broadcast studio.
I'm glad I broke out the RE20 today.
Like, I was literally going to go pure AirPods.
Oh, my God.
Now I understand why I got the mic pre out for you all.
You fit in.
You look good, right?
Yeah, we got the same mic.
Yeah.
We appreciate it.
So the last time we talked, it was just before Planet Nix.
And, of course, we saw you at Planet Nix last year.
But it feels like since then there has been a just next level of, I'm trying not to use the word hype, Kelsey, but it's all I can think of hype around AI system management.
agentic development and management of systems.
And it strikes me, you made a really, I think, keen observation
in an interview you had with JetBrains back in December.
And you said that you spent your whole career
trying to make these systems as deterministic as possible,
right, to know exactly what they're going to produce every single time.
And this is, in this last year or two,
it's kind of flipping that on its head a little bit.
And I wanted to know if you could expand on that comment
you made in that JetBrains interview
because I thought that was pretty insightful.
Yeah, I think for a lot of people who have been doing either software development,
IT operations, even the lights in your home, when you push the button to turn the lights off,
you want the lights to turn off.
You don't want to push the button and the lights say,
I'm thinking about turning off your lights, but given that you haven't eaten dinner yet,
I'm going to keep the lights on.
Like, that's not what you're expecting.
We expect these systems to be deterministic.
So these are machines first and foremost.
And when we build machines, we kind of want them to do exactly what they're designed to do.
Light switch on, light switch off.
And so when you turn that into the software world, anytime we get behavior that we ourselves didn't intentionally design,
we call those bugs or defects or security vulnerabilities if they're ever exploited.
And so we've been spending the last, I don't know, a couple of decades trying to build reproducible software,
deterministic software, right?
we want these things to be predictable.
So then we can make promises.
Like we have SLAs. A whole discipline around observability is saying that if this thing
doesn't do what it's supposed to do, then we have a problem.
And so what you get with an LLM, so let's say you take away some of the deterministic work.
To me, the value of an LLM is it's somewhat of a human persona that we can lay on top of these systems.
So instead of just telling people what the temperature is today, maybe this thing can, you know,
say it in a very polite way or maybe in a kind way. It's like, oh, that was a fun interaction.
But it also does another thing. I remember the first time I wrote a regular expression.
I wanted to write a regular expression to filter out all the email addresses. And so you craft this
thing you think is the best regular expression in the world. And then you find out that it's not
actually capturing all the email addresses because either, A, you didn't understand what's a valid email
address or it's a really hard problem to solve. And so now what was the saying back then? If you
use regular expressions to solve a problem, now you have two problems. Exactly. And so I think the
regular expression was the first time. I think a lot of us have run into something that just
looking at the regular expression, you can't really tell what it's going to do, but there's a lot
of power in this kind of Swiss army knife approach. An LLM can say, look, maybe you didn't write
your software to create XML output and JSON output and YAML output.
But if you just code the data structure, you now have this universal tool that can convert
between these different formats.
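To make that concrete, here's a toy illustration in Python, using nothing but the standard library (the invoice structure is made up): one plain data structure, projected into JSON and XML.

```python
import json
import xml.etree.ElementTree as ET

# One plain data structure; the serialization formats are just projections of it.
invoice = {"id": "INV-42", "customer": "Ada", "total": 19.99}

def to_json(data: dict) -> str:
    return json.dumps(data, indent=2)

def to_xml(data: dict, root: str = "invoice") -> str:
    # Flat dict -> one XML element per key.
    el = ET.Element(root)
    for key, value in data.items():
        child = ET.SubElement(el, key)
        child.text = str(value)
    return ET.tostring(el, encoding="unicode")

print(to_json(invoice))
print(to_xml(invoice))
```

The point being: once the data structure exists, "give me this as XML instead" is a mechanical transformation, which is exactly the kind of glue work an LLM is handy for.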
Making a JPEG or an MPEG or a PDF out of an invoice, that is an amazing capability
that is in some way subjective on how to format a PDF from an HTML doc.
And so the LLM allows you to do all of that.
So I understand why people are super excited.
But at the same time, though, then we see ourselves now with the flexibility building a bunch
of guardrails to make sure that it doesn't generate something that's out of bounds from our
expectations. So that's kind of what I meant by it. We spent our whole careers making these deterministic
predictable systems. And now we're letting them a little bit more loose. And then we're all like
crossing our fingers that they don't turn off the internet. It does surprise me. And we're kind of
rushing into it too in a way. You touched on some of the like how it can manipulate outputs or transform
data or maybe, you know, shuffle things on a pipeline or between deterministic tools. And I've also heard you
touch on maybe thinking of AI as a like a surface level interface. I wonder if you could maybe expand
on that. So this part I think is very important. If you just take a time out, you don't want to bash the
technology. We don't even have to take a position of whether it's helpful or not. But the reality is
the CPU architecture is the same. The programming languages are the same. So that means the things that
you could do, you could have always done given these particular tools. And for me, that takes the magic
out of all of this. So then if you use an LLM to write code, what is the LLM doing? Well, the LLM is producing
syntax in the way that you would have produced syntax. So that means it sits at the surface,
meaning what it outputs is still targeting the same output you would have done, which is also at the
surface. Now, what would a below the surface technology look like? It will look like, let's just
bypass Golang and the JVM. Let's go straight to assembly. Right? Let's just go and program
the logic gates directly.
And that's not really what most people are doing.
Most people are literally just providing themselves
with a better user experience.
And I'll give you another example,
which is I think of more applicable
to the average person.
If you ask an LLM to give me a summary
of this particular document,
or in my database, the customer that spent
the most money in June of last year.
Now, you could have done that with SQL,
but SQL is very cumbersome,
it's very strict.
You have to know the dialect,
and sometimes the SQL is very far away from your intent.
And so the ability just to make that one statement and then have the LLM at the surface,
translate that into the correct SQL query, and then return a response.
That is the perfect level of abstraction for what most people want to do,
which makes it highly effective on being able to interface to lots of existing systems.
And I think this is why LLMs in some ways are deserving of some of the hype
and the attention that they're getting, because they can be very useful in many domains.
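That gap between intent and SQL is easy to show. Here's a sketch with a hypothetical orders table: "which customer spent the most money in June of last year?" becomes a filter, a group-by, an aggregate, and a sort (schema, names, and the year are all invented for illustration).

```python
import sqlite3

# Hypothetical schema; the point is how far the SQL sits from the plain-English intent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL, order_date TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("Ada", 120.0, "2025-06-03"),
        ("Ada", 80.0, "2025-06-21"),
        ("Grace", 150.0, "2025-06-10"),
        ("Grace", 30.0, "2025-07-01"),  # outside June, should not count
    ],
)

# "Which customer spent the most money in June of last year?"
query = """
    SELECT customer, SUM(amount) AS total
    FROM orders
    WHERE order_date BETWEEN '2025-06-01' AND '2025-06-30'
    GROUP BY customer
    ORDER BY total DESC
    LIMIT 1
"""
print(conn.execute(query).fetchone())  # ('Ada', 200.0)
```

One English sentence versus five clauses of SQL: that translation step is the "surface" work the LLM takes over.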
So I wonder with this established and your thoughts on going from very deterministic systems to sort of
not so deterministic, you don't really always know what you're going to get, is there a layer
here at the environment level to kind of manage this? Is this an area where you think maybe particularly
Nix could have some strengths? Curious on your thoughts on almost solving it from a different
direction in this instance.
So if we were to talk about Nix in particular.
So what does Nix represent?
Below the surface, Nix has tried to work really hard to give you at least a reproducible
outcome.
If your app that you're building requires a specific set of libraries, we've tried for a long
time to get by with semantic versioning.
Like this is 1.1.2.
But it turns out those are just suggestions, right?
Nothing stops a developer from re-releasing 1.1.2
that does something totally different.
Your version of a minor is not mine.
Exactly.
So in that world, it's almost like,
you can't really trust these labels.
So I think what Nix does really good is it says,
look, let's just turn our file system into a data store
and make these things predictable.
Sometimes you need a different version of Ruby on the same system.
And the fact that it acknowledges that,
leans into that, and just gives us a tool to deal with that kind of complexity.
So we take that diamond problem,
and we just try to give it an honest solution.
These are the things you depend on.
Here's the things that make it work.
And just give you a way to articulate that,
express that, discover that.
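The "labels lie, contents don't" idea can be sketched in a few lines. To be clear, this is only an illustration of content-addressing, not how Nix actually computes store paths (Nix hashes the full build inputs, not just output bytes):

```python
import hashlib

def store_path(name: str, version: str, contents: bytes) -> str:
    # A toy version of the idea behind /nix/store: the path is derived
    # from what the package actually contains, not from its label.
    digest = hashlib.sha256(contents).hexdigest()[:12]
    return f"/nix/store/{digest}-{name}-{version}"

# Two releases both claiming to be "1.1.2"...
a = store_path("libfoo", "1.1.2", b"original build")
b = store_path("libfoo", "1.1.2", b"quietly re-released build")

print(a)
print(b)
print(a != b)  # True: same label, different artifacts, different paths
```

Because the identity comes from the contents, the quietly re-released "1.1.2" can't masquerade as the original, and two versions of Ruby can sit side by side without colliding.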
And so if you think about it from the AI perspective,
then what you have is guardrails.
So if you want to express a build,
if you want to express output,
what you have now is at least a framework
where you're not starting from scratch
and say, hey, at the surface,
I would like you to build my application.
Well, if you were to target maybe the Nix ecosystem,
then you get something super reproducible.
And I always like to think about the quality of these tools as,
if I were to remove the LLM, what am I left with?
And I think you're left with something that's still usable,
even if you chose not to use an LLM again,
maybe downstream, your build system, your deployment systems, generating SBOMs.
Those things can be done without the magic of AI and all of its complexities
because you've kind of targeted something that's going to be a little bit more strict
and domain-driven.
So that's where I see tools like Nix still having
a ton of value. And this is why I think the explosion of things like skills and MCP, where you can
just say, hey, let's not invent package management or build systems. Let's just teach these tools
how to leverage them. Something else I saw you talking about recently with our other guest today,
Michael Stahnke from Flox, is you're looking back at the original Borg paper and kind of pointing out
that, you know, they weren't really using Docker for Borg, even though Kubernetes these days does,
right? They were using a package manager that turns out kind of shared a lot of properties with
Nix and you've been also making the point which I really like that, you know, okay, I can do
this stuff at the surface, but if we haven't necessarily mastered 20-year-old tech from Nix to
just a lot of other things, like maybe we need to be looking at that too.
You make a really important point. I mean, I have this thing now where I just try to take
all my technology offline from like eight to eight. Twelve hours of un-computer-assisted
thinking. And so those are the shower thoughts, the
you know, the prep time before you go to bed, you know, while you're cleaning.
And I thought about it's like, you know, most companies have struggled with even deciding what to build.
And a lot of times when you ship something, you can't just go take it back.
And the part where you say, hey, now AI, let's just move 10 times faster,
sometimes I hear 10 times more tech debt, 10 times more bugs, 10 times more things you shouldn't have shipped in the
first place.
And so when I think about our ability to understand what's available in our toolbox, I'm
staring at a collection of tools.
And a lot of times, I think people never really understood the core decision that
happened with Docker.
Docker comes after decades of decisions around trying to reinvent package managers,
RPM, Debian files, gems.
It's just so many of them.
And they're all kind of solving the same problem.
There is nothing you can do with one programming language
on its own. At some point, you're going to have a foreign function interface to some C library,
SQLite, libc, just something's going to happen. And we all try to pretend that that wasn't real
or that deb and RPM could get us there. And so we just kept rebuilding these package managers
trying to solve problems we've created. Docker is born from that world. Docker just says,
well, if that's what it's going to be, let's just let you keep doing that. And look, it was fun.
it did speed things up, but in many ways, it sped up the mistakes.
There are now far more things to scan than before.
So before we were scanning servers and VMs.
Now we're scanning apps.
We're literally scanning apps for OS vulnerabilities.
We're scanning apps for all of these things that now are getting packaged
because we're still doing the same mentality as before.
I think if you rewind the clock back 20 years and say,
hey, what's the real problem?
The real problem is we don't really even know what our dependencies are.
The real problem, we don't even know what we're linking out to these days.
And the vendoring approach, it just kind of created a mess.
So that original Borg paper was like, you know what?
The best way to think about software is in a holistic way.
You can't think about the Ruby ecosystem independently of the C ecosystem.
And so what you do is you just say at some point, this is just how computers work.
Apps will start, and then they will reach out across the file system,
loading things that they need in order to run. So that MPM package manager is much closer to the Nix philosophy
around intentionality around what's on your machines, intentionality around what we link to,
and then you manage the system from that regard. And then I think it's a little easier to think
about secure software supply chains, but you need to know what the software supply is before you can
secure it. And so I think that's what we're dealing with. And again, I said this at the Nix event,
nothing stopping someone from saying, hey, Docker's a really awesome way to package and distribute
things. What if I was using Nix inside of a Docker image? So I still get the portability.
I still get access to things like Heroku and, you know, Lambda and all these container-only
platforms. But that container image, when combined with intentionality, I do think gets us
there much faster than starting over from scratch. Yeah. Well said. Kelsey, I'm really looking
forward to Planet Nix and your talk.
First up.
First up, you and Ron, and we're going to be there.
Chat, right. It's going to be a chat.
Thank you so much for taking a little bit of time and joining us on a Sunday to talk about
this and get us in the mood and the mindset for Planet Nix.
I hope we get a chance to talk at the event, too.
I'll be around. Thanks for having me.
Yeah, thanks, Kelsey.
And now it is time for Le Boost.
We have a row of grandpa ducks.
And now it is time for Le Boost.
You said it. I had to do it.
Things are looking up for old McDuck.
Starting us off, we've got a row of, what are these?
Grandpa ducks: 22,222 satoshis from Chuck Run Amuck.
Chuck says, how are you now?
Good?
Hey, how are you?
Fine and you?
I was building a new VM server the other day.
What's your go-to remote desktop app?
It used to be VNC for me.
I use it all through my corporate career.
Then X2Go was the new hotness for a while.
That got janky, though, so I switched to NoMachine, which I think is the proprietary mother of X2Go.
What do you use now to connect to your headless desktops?
You mean besides my psychic intuition, I got to give a shout out for my old buddy, KRDC.
Are you guys familiar with this one, KRDC?
Yeah, I haven't used this.
It's nice.
You get them all in a list.
It'll do RDP.
It'll do VNC.
It'll do what you need.
These days, a lot of people are using
some kind of VM host that has a web interface as well.
So you've got yourself like a Cockpit option
if you're going with the native stuff,
or the Proxmox type thing
where you get yourself a console interface.
That's pretty popular.
Right? Is that good?
You think that's, yeah?
You got any...
In a different flavor, you know,
we've used RustDesk quite a lot
to do remote access.
It's not quite, you know, for a virtual machine necessarily.
but that can work in a pinch
if you need to do that kind of thing.
Yeah, I still like RustDesk.
And they've updated for Wayland
quite a while ago now,
so that's been working, which is really good.
And we've seen a lot of Wayland improvement
generally, I think, in terms of remote desktop,
both like in Gnome and Plasma,
but more broadly, too.
Well, thank you, Mr. Runamuck.
We appreciate you.
Distro Stu's here with a Spaceballs boost:
that's one, two, three, four, five sats.
So the combination is one, two, three, four, five.
Chris, you may be using
Uptime Kuma in your network setup.
I've been using Gatus, Gattis, Gastis,
which is similar, but has a declarative config
for endpoint monitors.
There you go.
Here's my Nix setup and links me to that.
Side note, I set the service up
to debug some networking issues.
Turned out
switching my router from OPNsense to Nix
was the solution.
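For anyone curious what Gatus's declarative config looks like, an endpoint monitor is just a small block of YAML; a minimal sketch (hostname, group, and thresholds are hypothetical):

```yaml
endpoints:
  - name: jellyfin
    group: lan-services
    url: "https://jellyfin.lan/health"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[RESPONSE_TIME] < 500"
```

Because it's just config, it slots naturally into a Nix module, which is presumably why it beat out the click-to-configure options here.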
Hey, we've been there.
Very nice.
You know what?
Honestly, Linux makes a great router.
Why do you think all these embedded systems
are based on Linux?
Especially, right,
if you're not trying to push
like tens of gigabits per second
where you need some like hardware offload
or something in like an enterprise use case.
Like you just have a gigabit or something.
Like modern Linux will be fine.
More than fine.
It'll be fantastic.
And so Nick boosts in with 5,000 sats.
I'm glad to hear you're doubling down on Matrix.
You had me going for a moment there, Chris.
I thought you were going to announce a switch to Discord.
You might have thought that too.
Yeah, I did.
It does make me wonder, though,
why don't you just use Mattermost?
After the glowing review in the previous episode,
is it not a better, more stable fit?
You could even use your AI minions
to set up a separate instance for the community.
Mattermost has sort of a slack vibe.
It doesn't really seem to be the right tool for the job.
Do you have that impression as well?
It's a good tool.
Yeah, I think maybe for some communities.
You know, I've seen some folks kind of do that where...
Well, use Slack for some communities.
Right, yes, exactly.
But I think for the style we're going for,
especially with the amount of like self-hosting,
interop, like,
something like Matrix
and the Federation
and the ability
for other people to run
their own servers
or use ours
or like another
community one
and be able to all
connect together
is pretty killer.
You know,
there are maybe other options
XMPP.
We used IRC for a long time
but those had
at least closer
to some of that.
You know,
I'm surprised
XMPP hasn't come up
more recently
with the discussions
of agents
and agent-to-agent
communication.
Seems like XMPP
should be getting
a little bit more love
than it is.
Yeah,
I think,
don't you think,
Brent,
the federating
aspect is pretty big for us.
I think it's big and it's also quite fun.
Like as an audience member to have your servers
federated to other audience members and the studio,
that's just a really fun networking.
Paradime?
I'm going to say paradigm.
Well, Kiwi Bitcoin Guide boosted in 2,500 Satoshis.
I got an OpenClaw question.
What are the pros and cons of self-hosting OpenClaw
on a server in a homelab?
After looking at my options, I thought I'd
try hosting the agent on a pie at home, but a friend pointed out malware and security vulnerabilities
and suggested a VPS instead. What would you suggest? And can you recommend a good VPS service
suitable for this particular task? We've sort of been debating this very question behind the scenes.
It depends a lot on what you task it to do and what you allow it to do and are hoping that it does
and what it can reach out to. Like on the VPS side, a lot could be suitable because it depends on your
workload. You know, especially if you're using a LLM provider that's not you, that stuff's all
offloaded. So besides just like the OpenClaw gateway, which is a Node service running on a server,
it's only ever doing whatever commands it needs to do from the work you're putting into it.
On the land side, it might depend on like what access do you want it to have. It can be really
useful if maybe you want it to be able to run scripts to make reports or control your
Audiobookshelf or your Jellyfin or something. Or maybe you want to do some network segmentation. So all it can do
is reach the outside network, except for some blessed paths that you allow with tightly controlled
API keys. So there's a lot of different ways you could go, but you need to kind of understand
what you wanted to do and what you're willing to accept the risk for. I think there's a couple of
easy yes, no branches on this path. If you have a machine that can run a local model, yeah, run it
locally because then you don't have a cloud dependency when it comes to the LLM reasoning. I think also there
is an argument to be made to run it local if your intention is to have it interact with Jellyfin
and Home Assistant and your *arr stack and things like that.
If a lot of its work is going to be executing local APIs,
it will have to still do reasoning and instructions with the LLM,
but then a lot of that execution can actually take place via scripts that it creates,
and those run persistently local.
And a lot of us actually have systems that are capable of running models
that can at least be used for memory systems and things like that.
So it's something that I think we're both debating.
Really, like West said, comes down to a big depends.
But there are a couple of yes/no paths.
And the number one yes path is if you can run a local model.
If you don't intend to integrate with local self-hosted services on your land a lot,
I think that's another vote for going with the VPS.
Yeah, and it might depend.
Like, is it mostly pulling strings on APIs?
Is it doing research and making reports for you?
There's a lot of different modalities you can use with this stuff.
Yeah, good question.
Let us know what you do.
I'm going to go the complete other way as a suggestion
and suggest that you install this on an ESP32.
I saw two projects. One of them is
called MimiClaw, and it uses an ESP32 to get this running.
And there's another project out there, OpenClaw ESP32.
So if you want to go the Jeff route, we will send this your way.
Wow, that's crazy.
Yeah, that way it's not powerful enough to really harm anything.
That's something else, isn't it?
Or at least you can catch up to it, right?
All right, I'll take the next cue.
McEwa comes in with 2048 SATs.
Fun will now commence.
This says keep up the good work.
Thank you very much.
Appreciate that value.
And Sam H comes in with a row of duckariskis,
that's 2,222 sats.
I'm definitely interested in hearing you talk about the claws.
I'm very conscious of the security concerns.
But I thought your basics in LUP 654, isolated setup, dedicated accounts, etc.,
were a helpful starting point.
It'd be nice to see OpenClaw variants that defaulted to local models and an open chat platform.
True.
Yeah, a lot of these are based around Discord or Telegram or Slack.
Like I was saying just a moment ago, I think XMPP has been left out in this.
Yeah.
There are some other solutions out there, and there are plug-in systems available.
But I agree.
I think the two other projects that haven't really solved this, but that I'm watching with some interest, are Iron Claw and Nanoclaw.
Iron Claw is, surprise, surprise, rust-based.
And Nanoclaw is Python-based, but a nice tight little Python project.
And they're doing different things, security-wise, and so we're seeing several of these now develop.
You know, and the other thing maybe to think about, and Kelsey touched on this, is consider what you have the
agents do and what you make deterministic.
Because if they're just triggering scripts that you've already blessed and aren't going to
surprisingly call the delete API, you'll probably have a better time than if it just has
full access to everything and you expect it to kind of do that dynamically every time.
Pete Zinger boosts in with 7,680 sats.
I'm relatively new to the show and only started listening last year.
Well, welcome. Thanks for listening.
A few weeks back, I got talking to Abe and he showed me a conference.
video from 1967 called
1999 AD,
where the home computer
would talk to other computers to do things.
He's also been helping me run my own
Abes. I have two, and I just
let them be. I don't call them Pets,
because Pet-iverse doesn't have the same
zing to it. I am more
sure than ever that this separation of duty
is the way to go. Separate AIs
that have, oh no, the boost got cut off.
Cut off there, yeah. But I'm imagining
you saying that have separate tasks or jobs, right?
Because if you keep
them relatively focused, they don't wander off
and get lost in the context and whatnot
like that. And you can also then scope the
permissions and things like that.
And probably easier to manage the context
window. Yeah. Yeah, and memories and all
kinds of things. Yeah, absolutely.
Thanks for the report. And
keep us posted. That's exciting. Let the Abes spread.
I've thought about that too. I wish I could call mine Chrises
and have a Chris-verse, but it just
doesn't have the same ring.
Southern Fried Sass boosts in
with a row of ducks.
Condolences to Brent.
Silver is a nice consolation prize.
Oh, this is a sports ball reference, isn't it?
This is a hockey reference in both genders.
We played the U.S. and lost both games in overtime to you folks.
So darn you, shaking fist.
Shaking fist over there at you both.
Yeah, well, we personally had something to do with it,
so feeling pretty good about that.
Didn't even know about it.
Actually, I did.
I saw the cheating, at least.
I saw the embarrassing, disgraceful cheating.
Brent didn't seem to bring it up on his own.
No, interesting, huh?
I didn't see any of that.
All right, well, thank you everybody who streamed or boosted, even below the 2,000 sat cutoff.
You sat streamers, 22 of you, collectively sent in a nice little bit more than a row of McDucks.
This old duck's still got it.
22,649 Sats sent in by our sat streamers.
And you combine that with our boosters.
It's a pretty light week, and we are getting ready for some big trips, big coverage, and very light on the sponsors.
So we'd love to see some support from you next week if you are.
in the mood. We got 80,388 sats this week.
It's all right.
It's all right. It's all right. Don't cry.
It's okay. Are you all right?
It's so sad.
It's all right. It's all right. This happened, especially because we had a great new year and end of year.
We really do appreciate that. But your boys are getting ready to head off into, what do you call it?
The Rough. It's not a battlefield. It's a nerd field, really. We're getting ready for it either way.
And the boys could use some support.
The great conference crud.
Yeah.
We sacrifice our health so you don't have to.
No, actually, I think we've been really lucky the last couple of years.
We have done very well.
It's not...
You've got a whole regimen, I think, that's helping.
You know what?
I should think more about that.
I should get that locked in is what I should do.
Thank you everybody who does support us.
And, of course, thank you to our members who put that on autopilot.
Keeps us going.
It means the world to us.
Well, we got too many picks this week.
Most of it's my fault.
But you boys also found some...
really good ones.
And let's talk about WebSocket Cam.
It's an app for Android that exposes your camera.
That's right.
Access Android camera via WebSocket client API.
You can also run AI and computer vision tasks with ease from a live camera stream.
Written in 100% Kotlin, GPL-3.0.
Oh.
And you can get it on Obtainium.
I think you should be their ad man, you know, written in 100% pure Kotlin.
I have yet, full disclosure, I haven't had a chance to try this.
yet, but as someone who's used
droid cam more than a handful of times
over the years, having more options
of taking advantage of a probably decent camera
you've already paid for and is in your pocket,
I think is really neat. I agree with you,
Wes Payne. Good pick. Stay tuned for
Brent's fantastic pick in a moment. Well, I don't know
if it was Brent's, but I'm giving it to him. I want to
just mention that I've created a couple of projects.
If you are playing around with the claws,
one of the things that OpenClaw doesn't have built in,
which is a little disappointing, and I'd love to see them fix
one day, is an MCP
bridge or an MCP client.
But you can add one via a plugin.
So I have built one.
An OpenClaw plugin that bridges any MCP HTTP server as a native tool to OpenClaw.
They don't have one yet, but MCPs are big.
The model context protocols out there, they're big.
And OpenClaw is kind of not really built to solve it directly.
They recommend you use MCP importer, which uses a CLI interface.
So I created a plugin that lets you use MCPs directly in OpenClaw.
And you're already running it in production, is my understanding.
I am. It's working well.
And to that end, then I created an MCP server for FreshRSS, which definitely should have one.
But since they don't have one, well, now we have one.
So it's an MCP server I created that sits in front of the Fresh RSS Google Reader API,
exposing the feed management as tools for your AI agent.
It supports streamable HTTP transport.
And it's very easy to get going on Nix.
The flake is ready for you,
and the details on how to set it up in your config are all there for you.
It just requires a password that you set up on the fresh RSS server.
And then you plug it into your gateway once you have my other thing working.
And there's even a guide on there for your agent if you're too lazy to do it for yourself.
And you should see significant token savings when you do this, because it uses a lot less reasoning
than pulling down the whole XML feed and going through it just to sort out the few things that you want,
because the MCP server exposes particular options and tools and makes it a lot more efficient.
So it's a great way to get like a daily briefing
without burning a bunch of tokens.
Yeah, do you have any workflows you're willing to share
around like what you have your agent
using these tools to pull from Fresh RSS?
So mostly on-demand stuff for when I'm prepping a show.
I can be like, hey, what are the headlines in this category
because I've broken the shows down in individual categories now.
And then a supplement to that in my morning brief,
which has a lot of stuff in it,
I have headlines from each, I have the top headlines from each category
for each show that shows up in a morning brief
that's pulling from Fresh RSS.
Very nice.
And the thing that's really great about
that is all I have to do is add and delete feeds in FreshRSS, and it won't show me the stories I've already read in FreshRSS.
So I'm only seeing the stuff that's unread that's being exposed to me, because all of that is just in FreshRSS already.
So it's really cool.
And I like it a lot.
All right, who wants to tell us about this utility that promises to replace your home folder?
Tell me about this.
Yeah, Wesch.
Weschel?
What is it?
Brentley, did you find this?
Is it a Wes project?
Did you make this one?
This is great.
Why would I want to replace my home folder?
I think I've found this and I'm not sure if it's a great idea or not, but I love it.
It's super inventive.
It's a clever idea.
And for some folks' workflow, maybe it is the right idea.
Yeah, okay.
Give us an example, a little taste here, why you might want to use this.
Well, organize your computer by replacing user folders with symlinks to project folders.
Okay.
It's a tool that helps you by creating individual Downloads, Desktop, et cetera, right,
like the XDG standard folders you're used to from your home directory,
for each project.
It replaces the original folders with symlinks to the folders of the currently active project.
Now the random files you download will be placed in the Downloads folder they belong to.
Also, all the like dot-config files you might have for project-specific settings, maybe for
IDEs or tools or whatever,
those all get sandboxed too.
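The core trick can be sketched in a few lines of Python; this is an illustration of the idea, not wechsel's actual code (which is Rust), and the folder and project names are made up:

```python
from pathlib import Path
import tempfile

def activate_project(home: Path, projects_root: Path, project: str,
                     folders=("Downloads", "Desktop", "Documents")) -> None:
    # Point each standard folder in $HOME at the active project's copy:
    # the swap wechsel performs when you switch projects.
    for name in folders:
        target = projects_root / project / name
        target.mkdir(parents=True, exist_ok=True)
        link = home / name
        if link.is_symlink():
            link.unlink()  # drop the previous project's link
        link.symlink_to(target)

# Demo in a throwaway directory standing in for $HOME:
base = Path(tempfile.mkdtemp())
home, projects = base / "home", base / "projects"
home.mkdir()
activate_project(home, projects, "podcast-trip")
print((home / "Downloads").resolve())
```

Switching projects is then just calling it again with a different project name; anything that writes to `~/Downloads` lands in the new project's folder without knowing the difference.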
Brent, I think you ought to set this up right before the trip so that way you could give us a report on how it goes.
We could talk about it while you're on the trip.
I don't think I'm going to go wrong.
It is written in Rust and has the MIT license.
So that makes you feel better.
What do you think, Brent?
You give it a go?
The hardest decision is to decide what a project is?
Because I feel like our trip is like a meta project
and there are multi-projects within it.
That's the most Brent answer possible.
Isn't it?
Well, I'm true to form.
True to form.
Yeah, I feel like the hard question is where do you sim link too?
You know?
Do you put it on your scary RAID?
That's probably not a good idea.
But it's fast.
I think it might already have a spot that it uses.
Oh, that's lame.
But you probably can figure that.
Well, see, what I like about being able to do it myself is I won't be consistent over the years that I use this.
So I'll start with one spot.
And then a couple years go by.
I'll start doing it another spot.
And you give me four or five years.
And I'll have symlinks all over the place.
It's really kind of, it's like a web of knowledge on my
file system, the symlinks.
You can't find anything, but it's all there.
Yeah, it's all linked.
It's all there.
Somebody could probably put it together, you know, if they came back and pieced it all
together.
All right, just one more episode before we hit the road, meetup.com slash Jupiter Broadcasting
is where you go.
If you're going to be there, we want to see you.
And don't forget, you can get 40% off your SCaLE registration when you use our promo
code UNPLG, U-N-P-L-G.
40%.
That ain't nothing.
and ain't nothing.
All right.
With that,
links over at linuxunplugged.com
slash 655,
if you can believe it.
You got any pro tips
for people before we get out here,
like maybe fancy JSON chapters
or transcripts, things like that?
We have both those things.
Yeah, that's right.
We have cloud chapters.
So it's a JSON file
that your podcaster can download
or you or your agent.
And it tells you
where we talked about the things
in the show.
But if you need more granularity,
then all of the words
that we said,
you know, up to some possible error rate,
those are included in SRT and VTT format that you can get in the feed.
And these days, we're trying something that's, you know, new:
we're adding an MP4 as an alternate enclosure as well.
You know, we ought to start, we ought to just advertise to sponsors.
Hey, yeah, we're the agentic-ready podcast.
That's right.
We've got our chapters in JSON, and we have our transcripts in two formats.
We have rich podcasting 2.0 metadata via the tags.
That's right.
And that also means...
See you next week.
Same bat time, same bat station.
That means we're lit.
We are live every single Sunday using the live item tag,
so you can listen to your podcast app of choice,
or you can go to jblive.tv.
If you're on the go, check out that clean audio feed over at jblive.fm.
Links to everything over at linuxunplugged.com.
That's the source of truth,
and I'll give one more shout to our virtual lug.
Details at jupiterbroadcasting.com slash mumble.
You never know when we're on the road.
It can be hit and miss.
So you probably want to join next week,
just so you can try it out,
Because you know if we're in studio, there will be a Mumble room.
So you're probably going to join us.
Next week.
Make it a Tuesday and a Sunday.
All right.
Thank you so much for joining us on this week's episode of the Unplugged program.
And we'll see you right back here.
Next Sunday.
