The Changelog: Software Development, Open Source - The state of homelab tech (2026) (Friends)
Episode Date: January 24, 2026. Techno Tim joins Adam to dive deep into the state of homelab'ing in 2026. Hardware is scarce and expensive due to the AI gold rush, but software has never been better. From unleashing Claude on your UDM Pro to building custom Proxmox CLIs, they explore how AI is transforming what's possible in the homelab. Tim declares 2026 the "Year of Self-Hosted Software" while Adam reveals his homelab's secret weapons: DNSHole (a Pi-hole replacement written in Rust) and PXM (a Proxmox automation CLI).
Transcript
Well, friends, it is Changelog & Friends, the weekly talk show about, well, you know what?
Whatever you want to talk about.
And this week, it is the state of homelab, 2026.
A massive thank you to our friends and partners at Fly.io.
Launch your Sprites, launch your apps, launch your Fly Machines, launch everything at Fly.io.
Like us.
Okay, let's talk.
Well, friends, I'm here again with a good friend of mine, Kyle Galbraith, co-founder and CEO of Depot.dev.
Slow builds suck. Depot knows it.
Kyle, tell me, how do you go about making builds faster?
What's the secret?
When it comes to optimizing build times, to drive build times to zero, you really have to take
a step back and think about the core components that make up a build.
You have your CPUs, you have your networks, you have your disks, all of that comes into play
when you're talking about reducing build time.
And so some of the things that we do at Depot, we're always running on the latest generation
of Arm CPUs and AMD CPUs from Amazon.
Those in general are anywhere between 30 and 40% faster
than GitHub's own hosted runners.
And then we do a lot of cache tricks.
Way back in the early days when we first started
Depot, we focused on container image builds.
But now we're doing the same types of cache tricks
inside of GitHub Actions, where we essentially multiplex
uploads and downloads of GitHub Actions cache
inside of our runners so that we're going directly
to blob storage with as high of throughput as humanly possible.
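The multiplexing idea Kyle describes — splitting a cache archive into parts and pushing them to blob storage concurrently rather than as one serial stream — can be sketched roughly like this. Everything here (the chunk size, the stand-in `upload_chunk`) is illustrative, not Depot's actual implementation:

```python
# Sketch of "multiplexed" cache uploads: chunk the blob, upload parts in
# parallel. upload_chunk is a stand-in for a real PUT to blob storage
# (e.g. one part of an S3 multipart upload).
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB parts, a typical blob-store part size

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Break the cache archive into fixed-size parts for parallel upload."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def upload_chunk(part_number: int, chunk: bytes) -> tuple[int, int]:
    # Stand-in for the network call; returns (part number, bytes sent).
    return (part_number, len(chunk))

def multiplexed_upload(data: bytes, workers: int = 8) -> list[tuple[int, int]]:
    """Upload all parts concurrently; results come back in part order."""
    chunks = split_into_chunks(data)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_chunk, range(len(chunks)), chunks))
```

The win is that several parts are in flight at once, so a single connection's throughput stops being the ceiling.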
We do other things inside of a GitHub Actions Runner,
like we cordon off portions of memory to act as disk
so that any kind of integration tests
that you're doing inside of CI
that's doing a lot of operations to disk,
think like you're testing database migrations in CI.
By using RAM disks instead inside of the runner,
it's not going to a physical drive,
it's going to memory.
And that's orders of magnitude faster.
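The RAM-disk trick is easy to approximate in your own test suite: on Linux, `/dev/shm` is memory-backed, so pointing a test's scratch directory there keeps disk-heavy work (like database migration tests) off the physical drive. A minimal sketch, with a fallback for systems without `/dev/shm`:

```python
# Prefer a memory-backed directory for test scratch space, so disk-heavy
# tests (migrations, bulk writes) hit RAM instead of a physical drive.
import os
import tempfile

def scratch_dir() -> str:
    """Return /dev/shm when it exists and is writable, else the normal temp dir."""
    shm = "/dev/shm"
    if os.path.isdir(shm) and os.access(shm, os.W_OK):
        return shm
    return tempfile.gettempdir()

def make_scratch(prefix: str = "ci-test-") -> str:
    """Create a per-test scratch directory, in RAM when possible."""
    return tempfile.mkdtemp(prefix=prefix, dir=scratch_dir())
```

A test harness would then point its database's data or temp directory at `make_scratch()` instead of a disk path.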
The other part of build performance
is the stuff that's not
the tech side of it; it's the observability side of it. You can't actually make a build faster if you don't know where it should be faster.
And we look for patterns and commonalities across customers.
And that's what drives our product roadmap.
This is the next thing we'll start optimizing for.
Okay.
So when you build with Depot, you're getting this.
You're getting the essential goodness of a relentless pursuit of very, very fast builds, near-zero-time builds.
And that's cool.
Kyle and his team are relentless in this
pursuit. You should use them. Depot.dev. Free to start. Check it out. A one-line change in your
GitHub Actions. Depot.dev. Well, friends, we're back. It is a new year. Not a new Tim. Same Tim.
Got your hat on Tim. That's right. I heard you got some strife on the internet recently.
Yeah. About mid year back. You took your hat off and you started to like just have your non-hat
Tim going, you know, and a slight uproar. What happened there? Yeah. People freak out.
when I don't have my hat on.
And in my last video, no glasses, no hat.
I broke my glasses and I didn't have a backup pair.
And I thought, yeah, I'll go no hat too.
And so like, you know, you got this crazy person on Tim's channel that kind of looks
like him and sounds like him, but doesn't really look like him.
So yeah, people, people get really confused.
Every now and then I do it on purpose, though, to try to like throw off the algorithm for
YouTube because I don't know.
They're like, maybe we target people who like glasses and now we'll target people who don't
like glasses, you know, so the backward hat, you know, I've done that forever.
But I've noticed some people are like, take your hat off.
You're inside, you know, and so sometimes I switch it up because maybe the algorithm will
now target those people that think that, you know?
So, you never know, you know, those games you play with the algorithm.
Well, of course, man.
You got to A/B test, and
A/B your A/B tests.
You know, it's what I've been doing.
Yeah, me too.
Yeah, A/B test, yeah.
And then a C, then you take your C as your B in your next.
You know, it's a
constant game, Tim. You know this. It is. It is. Yeah, fighting for ears and eyeballs. You know how it goes.
Ears and eyeballs. Well, we love these people out here listening to our pods and our. That's right.
Content. We just go on this journey because we're, we're just nerds. We can't help it, right? We just have to pursue the, you know, the inevitable, I suppose, and somehow put ourselves that pain slash pleasure and share it.
That's right. And that is what you call what we do. Man.
did you think we would be where we're at right now last year, Tim?
No. No, not at all. A lot's changed.
Even I feel like for HomeLab, it's changed even more.
Yeah, a lot has changed.
What do you think has changed?
I mean, it's obvious, but I want to hear in your own words.
What do you think is changed for HomeLab in particular?
If I could sum everything up in one word, it'd be availability, just availability.
And that goes a few ways.
You know, availability in parts.
You know, it's very difficult right now to get your hands on server parts.
Oh, my gosh.
Whether that be used server gear, motherboards, CPUs, it's very hard to get your hands on those because, you know, I suspect that, you know, most of these companies have contracts with really big companies.
and so your onesie-twosie orders or even your mass orders from stuff like Newegg aren't as big as, say, you know, Microsoft or something like that.
It's really hard to get your hands on, you know, server-grade hardware.
And also, that also has been true for the secondhand market, too, when it comes to CPU's motherboards and everything like that.
I mean, homelabbers have always been used to paying the homelab tax.
Well, let me take this back.
We used to not pay the homelab tax.
We used to get the used server gear free, take it away, really cheap, you know, type of hardware.
Then people started realizing, hey, there's homelabbers out there and I can make money off this.
Or it still has some value.
I guess I should put it that way.
It has a lot more value than it used to because, you know, people are using this stuff in their home for home labs, which was awesome.
Second market, you know, was great.
And now we're at a point where not
only do we have the homelab tax, because people realized that they could start, you know,
making money off their secondhand gear. Now you can't even find it. And so that's, that's one big
change I see is just availability of, of server parts, you know, and the same goes for RAM. RAM, as you
know, prices are through the roof. If you can even find it, you know, prices are through the roof.
you're paying, I don't know, double, triple.
I'm scared to even look anymore of how much RAM costs.
Hard drives too.
Hard drives up.
I was looking earlier today and I paid $159 a year ago for a 14 terabyte hard drive refurbished.
That same one now on eBay, if you could even find it, you know, is almost $100 more.
Really?
Yeah, anywhere from $70 to $100 more.
And, you know, storage is just gone through the roof, too.
It's easy to find storage, but you're paying a lot more than you were before.
What about CPU?
CPU is the same?
It is the same if you can find them.
Yeah, secondhand CPUs, they're expensive.
I mean, most people aren't, I mean, let me take this back.
If you're building, you know, server with server-grade hardware, a lot of people are buying them used.
We can't afford new ones.
Yeah.
Or don't want to afford, I should say, in some of the cases.
But even the secondhand ones are through the roof because people are still getting a lot more life out of them.
Or they can't buy the ones that they want to upgrade to, you know, the latest, whatever, epic CPU.
You know, most of those are allocated to some big customers.
So even the mid-sized customers can't upgrade because they can't get those.
So they're not releasing that gear to then trickle down to the rest of us.
So it's tough for CPUs.
CPUs, too.
Yeah, DDR5, CPUs, hard drives, motherboards.
I feel like the only thing that's really cheap right now are cases.
Enclosures, because no one can buy them.
I'm sorry, no one can build them.
No one can build them.
So I think like, you know, enclosures are really cheap right now
because they're like, please build something, but we can't.
So it's tough.
Who was it?
Gamers Nexus was talking about cases recently.
That was about four months back, saying that
cases were actually up because of tariffs.
Yeah.
Yeah.
So then there's that.
Then there's tariffs on everything,
which is, you know,
whatever percentage increase across the board for everything.
There's that for sure.
But a lot of this has to do with, you know,
just the AI race that's going on.
And, you know,
build, build.
And DRAM prices are through the roof because there's a shortage,
shortage of hard drives.
There's a shortage of everything.
because, you know, everyone's building data centers right now.
So yeah, there's there's definitely, you know,
tariffs across the board on everything.
But this is like beyond that.
This is like beyond, you know, hard drive prices, RAM prices,
GPU prices, I mean, through the roof.
GPU is another one.
Like you can't even like get your hands on them.
So, you know, if you bought one four years ago, you're pretty lucky.
You know, if you bought a 30, whatever,
I still have my 30 90.
It's actually back there.
It's the one I'm testing with.
Yeah.
Right at the beginning of COVID.
Right at the beginning of COVID.
And I thought, how am I?
I don't want to pay $1,300 or $1,100 for a GPU.
Like I thought, you know, am I really going to pay this price?
I'm so glad I did now.
Wow.
You know, I've had it for four years.
You know, I paid retail for it.
And so, you know, if you think, you know, the 4090s, 5090s, they are more expensive than that, too.
So.
That's funny.
I bought my 3090 last year for 300 bucks less than your retail price.
Wow.
Yeah.
Yeah, yeah.
Yeah, yeah.
Because that's when the probably the 5090s were just coming, just about to come out.
That's right.
4090s were out.
Yeah, 5090s were being announced and everyone was dumping on.
But now it's like you can't even get your hands on them.
But yeah, that's a good deal.
That's a good deal.
Yeah.
It's a good GPU.
I may have some fun things happening on that GPU as we speak.
right now. Training, training a RAG system. Just doing some cool stuff with RAG right now.
Cool, man. I was just looking up that stuff too. Yeah. Yeah, yeah. So I thought you would say that,
yeah, I'm glad you mentioned the hardware shortage, because that's key. But I thought you would
say this, your availability remark would have been not unavailability, but abundance of
availability in terms of capability. This, this new augmented homelab
that can now tend its homegrown vegetables, aka software.
I mean, I thought we were at this homelab garden, so to speak.
Oh, yeah.
So that's my second piece is the explosion.
So then my second piece to availability, that was the other side of the coin.
And I'm not just saying that.
It's in the notes right here.
It's the explosion of self-hosted software that we can now run at home.
Like it's incredible.
And so that's my prediction for this year.
is the year of self-hosted software. We can't get hardware. We've got to make do with what we have.
And so this is the year for software. And it's the year for software for many reasons.
You know, first of all, we have way more capabilities at home. Like I've been running Ollama at
home, Open WebUI, you know, to play with models and do chat. I've even done, you know, some
coding assistance with some agents. You know, that stuff's fun. Models aren't as good as the open ones.
I'm sorry, models, the open models aren't as good as the closed models, the ones you pay for.
Right.
Obviously.
Or you wouldn't be paying for them.
But they're good enough to do a lot of tasks, especially just, you know, like you were mentioning rag and stuff like that.
Like I've been playing with Paperless, Paperless-ngx.
What is that?
Paperless-ngx is a self-hosted document scanning solution.
So if you think about, you know, you think about, you have
lots of documents that you want to store.
You want to store on your own hardware.
You know,
they might be private documents,
whether they're,
I don't know,
financial statements or subpoenas or marriage license.
You name it.
Yeah.
Whatever you have that,
you know,
it's private that you keep.
Just think of what you keep in your
My Documents,
you know.
Think about keeping that on your own servers
and then being able to scan those documents
and then getting metadata and data about those documents.
That's kind of what paperless does in a nutshell.
Well, now all of these, not really sidecar,
but these kind of sidecar solutions are starting to pop up
where you can feed them to a model and get better data out of those.
So right now, Paperless-ngx is super cool.
That's actually my next video, so you're getting a sneak peek.
Nice.
That's why I'm so like, yeah, that's why I'm so, like, gung ho about it right now.
Paperless uses traditional OCR, so optical character recognition.
And for the most part, it's okay, right?
It's okay.
It's way better.
It's faster than humans.
But it is nowhere near the accuracy of a model that's been trained for vision.
And so this is kind of the next evolution in OCR.
It's: don't use optical character recognition.
Use a model that's been vision-trained, or
multimodal, is what they're saying now, where you can feed it text or feed it an image and you get
text out. And so I've been playing with this thing called paperless-gpt and paperless-ai, which
hooks into paperless. And now I can scan documents, scanned images and get high fidelity data
out of those images. So for example, I scanned a serial number on one of my devices, you know,
took a picture, scanned a serial number. OCR did terrible. It got,
like, "Made in Japan," right? That's about it. Serial number wrong. Everything was wrong. You feed it to an
LLM that, you know, has been trained with vision, like a super small one from Ollama. And everything
works perfectly. It even, it even was able to figure out that the FCC trademark, their little
FCC logo, actually said FCC, even though it was an F with circular Cs inside of it. So it's really cool,
it's really cool solution, self-hosted solution. And so, you know, that's where I'm, I'm thinking,
that this year is the year for software, not only because people are making all of these,
you know, awesome solutions to self-host. It's that people have a lot of assistance now
to get those ideas to make them come to fruition. You know, they have agents to help them.
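The vision-OCR approach Tim describes — feeding a scanned image to a small local model instead of traditional OCR — maps onto Ollama's generate endpoint, which accepts base64-encoded images alongside a prompt. A minimal sketch; the model name and prompt are assumptions, but the request shape matches Ollama's documented API:

```python
# Build a one-shot vision-OCR request for a local Ollama instance.
# Ollama's /api/generate accepts base64 images in an "images" list.
import base64

def ocr_request(image_bytes: bytes, model: str = "llava") -> dict:
    """Return the JSON body for transcribing text out of a scanned image."""
    return {
        "model": model,  # assumed: any locally pulled vision model
        "prompt": "Transcribe all text visible in this image, exactly.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Sending it would be a plain POST of this body (JSON-encoded) to
# http://localhost:11434/api/generate on a machine running Ollama.
```

The response's `response` field would then hold the transcribed text, which is what paperless-gpt-style tools write back as document content.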
I was just talking to a guy the other day who just built the piece of software he always dreamed
of, but never had the ability to do it because he's not, you know, he's not a developer.
He's not a developer.
And say what you want about, you know, that code and whatever.
I'm a developer too.
Yeah, yeah.
Good.
Because I think that's, that's how it works.
Does it solve a problem?
That's good code.
Does it solve my problem that existed before this moment?
That's good code.
That's right.
And so people who are driven by results or who want, you know, code is just means to an end for a lot of people.
You talk to product developers.
You talk to people.
people who are in IT but don't do code but have these ideas.
Those are the people right now who are creating these awesome solutions and finally being able to get those ideas out of their head.
So anyways, long story short, I just talked to a guy who built this whole solution on top of, like, UniFi's API.
It's actually Chris from Crosstalk Solutions.
He built this whole solution on top of UniFi's API.
And it's like the solution he's wanted for years but never took the time to do it or paid a developer to do it.
So anyways, we're seeing lots of that now.
And I'm hopeful.
And it's super exhilarating to see these ideas coming out.
Because, you know, when developers, when you have people who are generally software developers,
you know, they're, I don't mean to bucket people up.
But, you know, a lot of times they're, you know, deep-focused on some super technical
solution that looks perfect, runs perfect, you know, structured in a certain way.
But now I'm seeing these solutions by people who, you know, think way outside of the box.
and they're just trying to solve a problem.
And, you know, I get to see how they solve that problem.
And it's really cool to see because, like, you know,
I might not have approached the problem the same way that they did.
And so I get to see, like, a new perspective on coding.
So I don't know.
That's the other piece is availability.
So I think this year is the year for self-hosted software,
for open-source software, or for solutions in general,
to be able to run on your homelab.
Because a lot of people are just going to be okay with the hardware that they had
for a couple of years.
And, you know, people like me who love to self-host stuff are always looking for,
like, the next container app to run on your server.
Yeah, I feel you on that front there, man.
I've been scratching little itches, is how I'd say it.
Question on paperless is, can it handle books?
I suppose this does the OCR or the vision version of that, books are cool, too,
because one thing I'm doing is I'm trying to figure out how to get knowledge out of certain
books I only own in paper.
that I can't even get in digital.
There's a lot of books that are just not like that.
And I'm thinking, great.
I have this book.
It's on my bookshelf over there.
I've had it for years.
I've paid the,
you know,
I've paid the author.
I've paid the publisher.
It's my copy.
But I'm not going to go pick it up because that's the old way.
You know,
that's not that I don't read books.
I still,
I still read,
okay?
I still read.
But my preference is,
I can read good,
you know.
I was trying to build a little center for people who can read good.
But, you know, I'm just not there yet.
I'm not Zoolander yet.
That's a school for ants.
That was a good clip, man.
That was a good clip.
So can you do books with this paperless world?
Yeah.
So you can, you can because you can have multi-page documents.
So you could.
I mean, you would have just basically have this, you know, this whatever, 300-page, you know, document.
Right.
And then you can feed that to the LLM, which has vision.
And yeah, it should be able to parse it all out.
Like, paperless by itself should do pretty good on books as long as you get a good scan on it.
But I'd still feed it to the LLM anyways for it to use its vision because it's going to be,
you're going to go from whatever, 80% accuracy, probably a lot lower, to like 90, high 90s accuracy.
OCR in general, you don't realize how bad it is until, like, you actually try to scan something in the real
world and you're like, oh, yeah, this used to be amazing, but it's not amazing anymore because we
have vision-based LLMs that are amazing.
Yeah, absolutely.
So, yeah, you absolutely could.
So I'm thinking, okay, you would scan it, you would get it in paperless.
It would put it in PDF form.
And that's the thing that's cool about paperless, too.
It tries to get everything in a PDF form.
So you would basically get it in a PDF.
I assume the reader that you're using uses PDFs too, or maybe it uses ePub or whatever
the weird extension is that I can't think of.
But I think if you got into PDF, that would probably be good enough, I think.
Yeah.
Yeah.
My preference is Markdown.
I want to get things to Markdown.
Yeah.
You know, and I got some solutions I'm working on around transcription and just
really pure, good stuff, let's just say.
And that's my next goal is like, is to be able to transcribe with really good accuracy.
because there's a lot of jargon out there and whatnot.
And I'm close.
I'm like 98%.
Let's just say 99% there.
So I got some curiosity in that front.
It's so funny you mentioned that too because I've been going down this rabbit hole on document scanning.
This is kind of how it goes when I start researching the video or something that I'm doing.
Like this is like the third rabbit hole within this whole paper list thing.
You know, I learned that there are solutions, and people
probably know this. I don't, because this isn't the world that I live in. But for document scanning,
there are solutions out there that prepare your documents for AI. And so it will take, say,
you know, a document and identify it and break it up into its parts so that you can feed it to,
like what you're trying to do, a system for RAG. So it will understand a title, a footer,
and all of these pieces of the document and not just the text itself. So there's two
solutions out there. One's called Docling, which is from IBM and it's open source. And it takes
any document you want, whether it be MP3, PDF, Excel, and will break that up into its parts
and then feed it to your LLM so that you can do RAG against it. The other one is Paddle. And so
Paddle is another one, PaddleOCR, that I don't want to say does the same thing, because people who
know this stuff are going to be like, it doesn't do the same thing.
But for me, from the outside looking in, it's a solution trying to solve the same problem where it's trying to, you know, not only get the data out of the document, but lots of metadata about it too.
So those are two solutions that might help you.
And I say that because you're saying you want everything in Markdown.
That's going to help you big time because, you know, if you scan a document that has a table and you do OCR against it, the text you get out isn't a table, right?
And so same with even an LLM.
So what you want to do is, you know, use Docling or Paddle to do the, the transformation or the recognition of the individual parts.
So if you took a picture of a table, you know, workbook table, Excel table, then your output could still be a table, but in Markdown.
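The payoff of that recognition step is exactly this final render: once the scanned table exists as rows and cells rather than loose OCR text, emitting a real Markdown table is trivial. A minimal sketch of that last step (the function name and input shape are illustrative):

```python
# Render recognized table cells (header + rows) as a GitHub-style
# Markdown table -- the structure plain OCR loses, preserved in output.

def to_markdown_table(header: list[str], rows: list[list[str]]) -> str:
    """Turn recognized cells into pipe-delimited Markdown."""
    lines = [
        "| " + " | ".join(header) + " |",
        "| " + " | ".join("---" for _ in header) + " |",  # separator row
    ]
    for row in rows:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)
```

So a photographed spreadsheet could come out the other end as, say, `| Item | Qty |` rows instead of a jumble of words.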
And so this is like the next, I don't know.
I feel like this is like on the frontier of document scanning.
And anyone who's doing document scanning in the industry,
they're probably like,
this has been around for four years.
This is coming from like a web developer who does infrastructure at home.
And so this stuff is new to me.
So those are two things I would look into.
And I'm going to mention them in my next video because they're pretty cool.
But I think that these two solutions,
paperless-ai and paperless-gpt are trying to solve that thing.
And the funny thing is paperless-gpt can
hook into Docling to do that for you too.
So it's getting wild, man.
I was just thinking about architecture there.
So if I, maybe you'll go here with me, Tim.
I've been thinking about ETL pipelines.
I feel like the world is an API.
The world is a CLI.
And the world is an ETL.
And that means extract, transform, load.
And I feel like that's exactly what you're doing there.
So if I were building that pipeline and I were, you know,
using paperless and I was, you know, behind the scenes,
your little nerd research lab there or whatever, you know,
I would want to keep the original images.
And the reason I would want to keep the original images,
I would want to extract whatever the purest original copy of it would be,
which would be an image, right?
Let's not take the transformed version of it.
Let's extract literally what we get from it, the raw data.
So let's have a raw layer.
That's, if you went with the, I think it's the medallion architecture,
I believe is what it's called.
You got bronze, you got silver, you got gold.
And so the bronze layer would be this original raw layer.
And so that would be simply images of every page you have.
It could be the simple image of your serial placard, you know,
or it could be all the pages of the book.
Store those in the raw copy as an image.
Boom, you got that.
That's your bronze layer.
Then the T comes into play, the Transform comes to play.
And you say, okay, let's now take all those images.
And this is great as technology or models change or vision models get better is you can go back to that original raw source.
It's almost like how they do mastering for films.
They go back to an original film that was shot on film, remastered for 4K, but they're going back to those original slides, you know.
And so that's kind of the same process.
I would want the original image, though.
Oh, I agree.
I agree.
I like it.
This is similar to like meta fields, you know, in developers and APIs when you scrape stuff, you know, it's like let's pull out the stuff.
that we can use and put it in our API.
But, oh, by the way, we're going to have this meta field that has everything we found
to begin with just in case we need to come back and process it a little bit later, a little bit
better.
Yeah.
Well, you leave it on the table if you don't do that.
You put them on the floor.
You're not capturing it.
So you tend to throw away in that process to get to the pristine.
You throw away what was not really that good to you.
But in the ETL world, you want to keep that original raw source.
Now, insofar as it does hold value.
You know, but if you go back to that original raw,
if you need to ever,
as your technology changes in the transform layer,
well, then you've got lots of things you could do
that goes back and gets more accuracy if you can't get full accuracy.
Now, if your score is 50 out of 50,
you're 100 out of 100 in your quality score,
and your raw is not really needed anymore.
We'll throw it in a reference pile, you know,
but I want the images.
I want the images so that when it comes down to that table,
I can actually have the LLM examine the image of the table,
and then the markdown we get from it and be like, that's good.
Let's go, you know.
Yeah, I like it.
Man, the ETL is taking on such a different, I guess, a different perspective.
You know, last time I was talking about ETL was, you know, it was SQL,
like trying to, you know, pull data out of, you know, one database and put it in another,
you know, in the perspective of this.
Yeah, it really makes a lot of sense because, you know, LLMs in general are like best effort,
best guess every time, you know?
Yeah.
And so that best effort, best guess, is going to be different every time.
And it could be better in the future.
Or it could be worse.
Who knows?
But yeah, saving the source image.
Yeah, that's awesome.
That's, it sounds like a fantastic way to treat, you know, analyzing images like this.
Yeah, the pipeline is medallion.
And like I said, it's bronze is the, you know, the base layer, which is your raw, your silver,
which is like a, you know, maybe an augmented version of that that's been kind of cleaned up a little bit.
Then you finally transform it in the final
layer, in your gold layer, which could be your production database, for example, or your production
layer.
So you've got, you know, that first layer raw, that sort of middle layer where you're sort of
evaluating things.
Maybe you're doing some joins.
Maybe you've got multiple databases.
And the final transform is in, is in the gold layer where you're taking maybe two or three
different databases or two different data sources and you're merging them in production.
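Adam's bronze/silver/gold flow for scanned documents can be sketched as three tiny functions: bronze keeps the untouched raw scan (so you can re-process when models improve), silver holds a cleaned extraction that points back at its raw source, and gold merges silver records into the production document. All names and the cleanup step are illustrative, not an actual pipeline:

```python
# Toy medallion pipeline for scanned documents:
#   bronze = raw bytes, never modified; silver = cleaned extraction;
#   gold = merged, production-ready record.
import hashlib

def bronze(raw_image: bytes) -> dict:
    """Store the original scan plus a content hash for later lookup."""
    return {"sha256": hashlib.sha256(raw_image).hexdigest(), "raw": raw_image}

def silver(bronze_rec: dict, extracted_text: str) -> dict:
    """Attach a cleaned-up extraction, keeping a pointer back to the raw."""
    return {"source": bronze_rec["sha256"], "text": extracted_text.strip()}

def gold(silver_recs: list[dict]) -> dict:
    """Merge several silver records into one production document."""
    return {
        "sources": [r["source"] for r in silver_recs],
        "document": "\n\n".join(r["text"] for r in silver_recs),
    }
```

Because every silver record carries its source hash, a better vision model later can re-run the transform against the exact bronze image it came from.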
That's cool.
Yeah.
Yeah.
That's cool stuff.
Let's go to
Let's go back to
Chris from Crosstalk
and unleashing,
let's just say, Claude,
Claude Opus 4.5,
on your UDM Pro or whatever.
If you're Tim,
Techno Tim, maybe you've got the latest greatest.
I don't you know Tim
I'm out of luck with that over here.
I got to buy my own things
And I know you probably buy your own things
But you get gifted a lot of stuff
I don't mean negatively
No no yeah I mean
You get to play with the fun stuff
I'm envious
So here I am with my UDM
Pro that's not even the special edition.
It's just the one that's not special.
And so that's what I'm using.
Yeah, yeah, it's, yeah, I wouldn't worry about it.
Like, at the end of the day, like, their software continues to evolve.
And so you're getting the latest and greatest everything, even though your hardware might
not be up to snuff.
That's the cool thing about UniFi in general.
Well, yours does like light dances and stuff.
You got all your RGB stuff, bro.
I mean, like, I want mine when I put on my beats and I take my dance break.
I want it to dance and, you know, do a light show for me.
I just don't have that capability like you do.
Oh, yeah, no, no.
Or you can play snake.
Did you see that video of someone playing Snake on there?
Yeah, it's pretty wild.
On their display.
That's right.
So this world, something that happened recently with me, one of our neighbor friends came by,
one of my son's friends came by.
And he brought his switch, his switch to, as a matter of fact, after Christmas.
It was one of his presence.
And like anybody who's inviting somebody with a device into their home, where do you think I said?
I said, you got to be on my guest network.
Well, for some reason, they just couldn't get on.
Like the authentication happened to the Wi-Fi network, you know, all things checked out as good.
Couldn't get DNS.
And so I thought it was my newly homegrown Rust project, which is called DNSHole.
So I rebuilt Pi-hole in Rust, if you didn't know this.
It's not available yet.
dnshole.dev in the future very soon.
I'm waiting for one or two more things to happen before I can do that.
But right now, even as we speak, my DNS is being resolved by my own DNS server
that has fully replaced what Pi-hole is.
I think you'll love it when I can release it.
Matter of fact, I'll share it with you soon if we can.
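The core of what a Pi-hole replacement like DNSHole does can be reduced to one decision: before forwarding a query upstream, check the name (and each of its parent domains) against a blocklist, and answer with a sinkhole address on a match. DNSHole itself is in Rust and unreleased, so this is just the idea, with made-up blocklist entries:

```python
# Pi-hole-style blocking decision: match the query name or any parent
# domain against a blocklist; blocked names get the 0.0.0.0 sinkhole.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # illustrative entries

def resolve_decision(qname: str, blocklist: set[str] = BLOCKLIST) -> str:
    """Return the sinkhole address for blocked names, else 'forward'."""
    name = qname.rstrip(".").lower()  # normalize trailing dot and case
    parts = name.split(".")
    # Walk up the hierarchy: a.ads.example.com should match ads.example.com.
    for i in range(len(parts)):
        if ".".join(parts[i:]) in blocklist:
            return "0.0.0.0"
    return "forward"
```

A real server would wrap this in a UDP listener and forward the non-blocked queries to an upstream resolver.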
All that to say is that he couldn't get on via DNS.
And I'm like, gosh, DNSHole.
Maybe you messed up here.
It was not DNSHole.
Okay.
DNSHole is perfect.
You know what it was?
What's that?
It was VLANs, man.
It was my VLAN rules.
Okay.
So,
of course,
I popped open Claude and I'm like,
what's going on here?
Because I couldn't figure it out on my own.
I'm like,
gosh,
why didn't you just pull out Claude?
And like,
let it just log into your UniFi
and just check out some things.
And so through investigation,
it turns out I had some jacked up VLAN rules.
And once it was in,
it was like,
you've done this all wrong.
You know,
this is great work here,
but like,
you got old rules.
You got these rules that conflict.
You got this one rule that does nothing.
You got this whole set of rules.
It doesn't make any sense.
Can I fix this for you, please?
Sure, Claude.
Please help me out.
Five, ten minutes later, beautiful VLAN scenario again.
All the world's great.
He's on the internet.
They're playing and having fun.
They're playing Mario Kart.
Life is good.
So, I mean, like, that's the world we live in, Tim.
You know, I can't even get my VLANs right, but Claude can.
That is awesome.
So I haven't used Claude in that way.
Like, to be honest, I haven't used Claude.
all that much.
I, you know, I use, uh, Copilot, you know, with, with models, you name the models.
Uh, but no, that's interesting.
So did it do it through the CLI?
Yes.
And the API.
Okay.
Yeah, yeah, yeah, yeah, got you.
Well, it knows the IP address.
It has my auth.
I've got an SSH key.
So it'll SSH into my UDM Pro.
Gotcha.
So it's me.
It's me.
Yeah, yeah, yeah.
So it logged in.
Dude, listen.
Okay.
Hold your, hold your seat.
It reads the Mongo database directly.
It will update the Mongo database directly.
I know that's,
it's not cool in production.
Don't read and write from the database directly.
Yeah.
But I've done it before.
And so I was confident.
And then it will trigger whatever it does to, like, let the UI catch up, essentially,
like the cache layer that's in the UDM Pro or whatever.
Like, just because you change the database doesn't mean that the reads come back quickly.
You got to sort of re-cache the cache kind of stuff.
And, oh yeah, man, it's so cool.
It will log in to the UDM Pro via SSH as if it's you, or in the case of Chris, which I'm sure he did or was thinking about doing, you can use the API, the UniFi API, or you can just log right into it and just SSH around, just cd into directories, and, you know, like you're on a system, like you're a sysadmin, no different.
I think that's such a wild world.
I think that's what's making HomeLab more special to me.
Proxmox has gotten a little more fun.
Tim, if you, I'm going to have to show you some things.
Okay.
I have a CLI.
I have a CLI built called PXM.
It stands for Proxmox.
And a one liner, in a one liner, Tim,
I can have a brand new Ubuntu machine running.
I can specify the IP address.
I can specify the CPU, the RAM,
and the disk.
It already has my SSH key.
And literally, in less than 10 seconds,
it's reporting the IP address back to me.
Yeah, man.
Via the CLI.
Yeah, yeah.
And one line later, with that same CLI,
I can do PXM info and then whatever the VM ID is.
So PXM info 104, for example.
And it reports back to me.
SSH user is, you know,
ubuntu at whatever IP address,
all that good stuff.
Whatever the details are of
that machine, and moments later my agents can be building on brand new infrastructure.
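For a sense of what a one-liner like that wraps, here is a hypothetical sketch in the shape of PXM. It only echoes the underlying Proxmox `qm` commands as a dry run; the function name, template ID 9000, gateway, and key path are all illustrative assumptions, not Adam's actual tool.

```shell
# Hypothetical PXM-style wrapper; it only ECHOES the Proxmox `qm` commands
# (dry run). Template 9000, the gateway, and the key path are assumptions.
pxm_create() {
  local vmid=$1 ip=$2 cores=$3 mem_mb=$4 disk_gb=$5
  echo "qm clone 9000 $vmid --full"                      # clone a cloud-init template
  echo "qm set $vmid --cores $cores --memory $mem_mb"    # CPU and RAM
  echo "qm resize $vmid scsi0 ${disk_gb}G"               # grow the disk
  echo "qm set $vmid --ipconfig0 ip=${ip}/24,gw=192.168.1.1 --sshkeys ~/.ssh/id_ed25519.pub"
  echo "qm start $vmid"
}

# The one-liner: VM 104 with a fixed IP, 4 cores, 8 GB RAM, 40 GB disk.
pxm_create 104 192.168.1.104 4 8192 40
```

Drop the `echo`s on a real Proxmox host and the same five `qm` calls do the actual clone, configure, and start.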
Yeah.
Isn't that cool?
Yeah.
No, it is awesome.
This is exactly what I'm talking about.
This is exactly what I'm talking about.
It's like, you know, AI and agents in general are just letting people, you know, get these
ideas out of their head and tinker way more and go way deeper than they used to before.
And, you know, this goes, this reminds me of, you know, when I went from IT to
being a software developer, you know, I went from
using other people's tools to using my own tools.
And for me, that was like a light bulb.
I was like, I don't need the UI anymore.
Give me an API or even a CLI.
And I can figure it out.
And for me, that was just, you know, like a light bulb went off.
And it was just like this moment where I was like,
I felt like so much freedom, you know, to be able to build software that I wanted.
And so now it's just so awesome to see like other people being able to do that.
you know, take that step from using other people's stuff to using their own stuff.
And so now I feel like, like you, you know, I mean, would you have ever written that thing
out in Proxmox, you know, five years ago? Maybe, but it would have taken a long time.
It'd take me way too long. I wouldn't have the time. I probably wouldn't have,
it's just too daunting a task to do. Exactly. Because it's really time. It's not necessarily
ability. And I suppose that's probably both time and ability. But, yeah, I would have just never
tackled it because it would have been too hard of a mountain to climb,
really, because, I mean, even with the augmented AI tools, it was still hard.
I mean, it didn't get easier.
It got easier to move faster and to get past the hurdles.
But gosh, I had to solve so many problems and figure out so many ways to deal with,
how do you store the image on Proxmox?
Well, that's kind of obvious to most people.
But, like, getting through the whole life cycle. And that was one of the first things I built with Claude.
So I've learned a ton since then.
So I want to rebuild it.
I want to, because now
I know what the tool should do.
And before, I wasn't trying to make this, I didn't know what I wanted to make.
I was just trying to explore, really.
And now I know exactly what I want it to do and what I don't really care that it does,
what I just don't need.
And so I just wouldn't waste my time on those parts of it. Because I was trying to make
this, I guess, I didn't want to have to log into my Proxmox machine every single
time and navigate the web UI and click all the things.
And it's, it's not that it's a bad UI.
It's just that that's just the, that's not the way
anymore. This is the year we almost break the database. Let me explain. Where do agents actually store
their stuff? They've got vectors, relational data, conversational history, embeddings, and they're
hammering the database at speeds that humans just never have done before. And most teams
are duct-taping together a Postgres instance, a vector database, maybe Elasticsearch for search.
It's a mess.
Our friends at Tiger Data looked at this and said,
what if the database just understood agents?
That's Agentic Postgres.
It's Postgres built specifically for AI agents,
and it combines three things that usually require three separate systems.
Native Model Context Protocol servers, MCP, hybrid search, and zero copy forks.
The MCP integration is the clever bit your agents can actually talk directly to the database.
They can query data, introspect schemas, execute SQL, without you writing fragile glue code.
The database essentially becomes a tool your agent can wield safely.
Then there's hybrid search.
Tiger Data merges vector similarity search with good old keyword search into a SQL query.
No separate vector database, no Elasticsearch cluster, semantic and keyword search in one transaction.
One engine.
Okay.
My favorite feature.
The forks.
Agents can spawn sub-second zero-copy database clones for isolated testing.
This is not a database they can destroy.
It's a fork.
It's a copy off of your main production database if you so choose.
We're talking a one-terabyte database, forked in under one second.
Your agent can run destructive experiments in a sandbox without touching production,
and you only pay for the data that actually changes.
That's how copy-on-write works.
All your agent data, vectors, relationships,
tables, time series metrics, conversational history lives in one queryable engine.
It's the elegant simplification that makes you wonder why we've been doing it the hard way
for so long. So if you're building with AI agents and you're tired of managing a zoo of
data systems, check out our friends at Tiger Data at tigerdata.com. They've got a free trial
and a CLI with an MCP server you can download to start experimenting right now. Again,
tigerdata.com.
I wanted to be able to do a CLI version of it.
I wanted to get JSON back and feed that to my agent.
And now that's all possible, really.
So have you played with or heard of this latest thing,
which is called Ralph Wiggum?
Ralph Wiggum. No.
That name sounds familiar, though.
From The Simpsons.
Okay, yeah.
Gosh, what is, why is it called Ralph Wiggum?
I forget.
I think it's because he just keeps trying despite setbacks,
is how I, if I can paraphrase Ralph Wiggum,
it's keeping the loop going despite setbacks.
And so I believe it was, yeah, I don't have it here.
I was going to try to figure out who actually created it.
I think his name was, I don't know, I can't remember,
but it was somebody who discovered this loop essentially.
So you essentially keep feeding back the loop of the input output that you would normally do
with your own typical cloud scenario,
which is you know, you entering the prompt,
it doing some thing and returning some sort of, you know,
response back to you and doing work in between.
Well, they have found this way to create this Ralph Wiggum loop
so that you can essentially define a pretty clear instruction set.
You might call it a spec,
but they actually just call it prompt.md.
And so in this prompt.md,
which you would feed into Ralph, it can do a loop.
It could be a small loop, like, you know,
build this one part of the feature end to end, and you just go until it's done.
Well, the reason why I'm telling you this is because I feel like now, if I,
now that I know what it could do and what it should do,
I would want to, and if hardware was more available,
I would be more inclined to do this,
but I would build a test-subject hardware machine that is Proxmox.
And then I would just set loose, now that I have a pretty clear vision,
I would set loose this thing on that machine,
just have it build this Proxmox redo, I suppose, potentially,
you know, because I'm just trying to get the value out of it,
not so much the pristine code.
Sometimes that's the value part too and you enjoy the process,
but just for an exercise,
because I want to automate Proxmox,
why not do it via this Ralph Wiggum loop,
which is to just do it until it's done.
And you can kind of give it repetitions.
You can say, okay, do, you know,
one version of it, not a version, but I think like tries.
I forget what the terminology is for it.
Let me see if I can find real quick.
It's like one iteration, two iterations, and you can specify because you have only so many
dollars to spend.
I don't want to spend more than 20 bucks on this feature or 10 bucks on this feature.
So either spend 20 bucks in the feature or, you know, 10 or 15 iterations until you get to
some result.
And then I'll come back and examine it.
And to run it again, all you do is just run it again.
It's, like, idempotent in that way.
It'll just go back and do it again.
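The loop being described can be sketched in a few lines of shell. This is a generic sketch, not the actual Ralph Wiggum implementation; the `AGENT_CMD` variable, the `DONE` marker file, and the way the prompt is passed to the agent are all illustrative assumptions.

```shell
# Sketch of a Ralph Wiggum-style loop (illustrative, not the real implementation).
# AGENT_CMD and the DONE marker are assumptions; swap in your actual agent CLI.
MAX_ITERATIONS=${MAX_ITERATIONS:-10}   # cap the spend: N tries, then stop
AGENT_CMD=${AGENT_CMD:-claude}

ralph_loop() {
  local i=1
  while [ "$i" -le "$MAX_ITERATIONS" ]; do
    echo "--- iteration $i ---"
    # Same prompt.md every pass; the agent sees its own prior work in the
    # working tree and keeps pushing toward "done".
    "$AGENT_CMD" prompt.md || true
    # Idempotent exit: stop once the agent leaves a completion marker.
    [ -f DONE ] && return 0
    i=$((i + 1))
  done
  return 1   # budget exhausted without finishing
}
```

Re-running it is just calling `ralph_loop` again: if the `DONE` marker is already there, it exits on the first pass, which is what makes the "just run it again" workflow safe.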
Like that's such a cool world to be in, man.
Yeah.
Building a little home lab garden with that kind of loop.
So cool.
Yeah.
Yeah.
No, that is for sure.
Because, yeah, a lot of times, you know, agents will stop.
And I know exactly what you mean now.
Yeah.
Because, you know, you can tell it to do something to completion,
but it's either going to stop or check in or do something, you know, and I know,
I know the prompts that I give.
I have a lot of my prompts saved because
they're, you know, annoying to keep explaining to AI, you know, like, you know,
fix all unit tests and run all linting and do all this, you know, do these things,
don't do these things, go, you know.
And to be able to not be a human in the loop anymore until the very end is pretty cool
to think about is like, no, you loop, you figure it out, you do so many iterations of this
piece of software, or you do one really good iteration.
And let me see the final result.
At that point, you're just like a director.
You know what I mean?
You're just like, you know, a director saying.
That's right.
Yeah, give me, give me something.
That's right.
A parent.
Go, go clean your room.
Don't come back until it's all the way clean.
And you know what?
I'm going to check under the bed.
I'm going to check in the closet.
So, you know, make sure you don't put stuff under the bed and in the closet.
And when I come back in, you know, an hour, it better be done.
Right.
I'm not a parent, but, you know, I remember those days.
You know, thinking, thinking I could outsmart
my parents by gaming everything.
You got your pups, man.
You got your pups, right?
Don't you have your dogs?
That's right.
Oh, yeah, yeah.
They, they're pretty clean though, you know, but they don't clean up after themselves.
I mean, not yet.
Well, generally, unless, you know, something.
Yeah.
I won't go there.
But, but yeah.
What is the centerpiece of your home lab right now?
Like, what are the center pieces?
I imagine Proxmox and TrueNAS are still there in the center.
UniFi hardware is obviously probably part of the center.
What's around that center?
What are you building on?
It is.
So this kind of goes into, you know, another one of my predictions for this year, too, which is, if we could ever build anything, it's one big box.
I think people are going to return to this one-big-box idea, yeah, only because things are hard to get a hold of.
And, you know, while you might be able to get a hold of lots of little older machines, you know, the one-liter machines, you know, to do,
you know, clustering, I feel like now that things are so scarce, people might be going back to
one big box that's your storage, that's your AI, that's your compute, that's your, you know,
virtualization, that's your NAS, that's your everything. And it's kind of the way I've been going, too.
So I do have my TrueNAS box. It's one big box, you know, has a video card, has RAM, has 10 hard
drives, you know, GPU, all that stuff. And so that's, you know, not only my NAS running ZFS, but it's
also where I'm running my applications now, too.
So I've moved a lot of my applications onto my NAS.
Yeah.
So I watched that video of yours where you were doing stuff with containers.
You had to do that sort of sidecar load, which I did follow.
But I didn't get the same results you did.
Maybe is that the way you're doing it with that whole you have to create the YAML file and
then it knows about it in the app container world?
Yep.
Well, if you're talking about TrueNAS, yeah, yeah.
So you don't have to use YAML and do it that way.
I do it because I want YAML, because I'm a developer, but also,
I'd much rather edit YAML than fill out a form any day.
And also because then you get CLIs and you get help from AI, you get all
the stuff you get with YAML. And I can do it in VS Code. So there's that too. So yes, I'm now running my
applications on top of my NAS. So I've always gone back and forth. Like, you know, do I want my NAS to just be
a NAS and just be storage? Or do I want my NAS to be an application server too and then run those
applications on top of my NAS. And so I've done both. And I'm still kind of doing both. But for the
most part, what I'm calling now my home production is on my NAS. And so my home production, you know,
I've gotten a little bit wiser over the years and a little bit crazier. But, you know, I have a home
production now. And my home production is where the services are that need to be up. They need to
work or I'm going to hear about it. You know, that's Plex. That's my NAS. That's, you know,
whatever else I have running, which is a lot of stuff.
And when I say I'll hear about it, I don't just mean my wife, because she will say something
if Plex is down, because we record a lot of stuff.
And if it doesn't record Survivor on whatever night, I hear about it.
So that needs to be up.
But also, you know, alerts and stuff I have set up and running too.
So, you know, my own production.
That is right.
Yeah.
Grafana is on there.
Oh, I've been getting so deep.
And Prometheus is on there.
Oh, yeah.
I've been getting so deep in Grafana and Prometheus.
now. And it goes back to what you were saying. Like, you know, I do a ton of, well, I've done a ton of
Grafana and Prometheus in the past. A lot of observability stuff, you know, in the enterprise world or
corporate world. But at home, I was always kind of like, man, that's, that's a lot of work.
That's a lot of work to get that going. You know, now that I have help from, you know, LLMs, it's,
it's work I want to do. Uh, because while I could muddle through it and spend a week getting,
you know, scraping, working on one machine, you know, the tradeoff
just wasn't there. And so now that I can scrape metrics on one machine in about 10 minutes,
the tradeoff is there. And so it's worth it to me. And so, yeah, tons of it.
I'm monitoring everything now. I have metrics on everything now. You name it.
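That ten-minute "scrape one machine" job is mostly this: a node_exporter running on the target plus a minimal scrape config. The IP below is an illustrative assumption; 9100 is node_exporter's default port.

```shell
# Minimal Prometheus scrape config for one machine (the IP is illustrative).
cat > /tmp/prometheus.yml <<'EOF'
scrape_configs:
  - job_name: "nas"
    scrape_interval: 15s
    static_configs:
      - targets: ["192.168.1.50:9100"]   # node_exporter's default port
EOF
# prometheus --config.file=/tmp/prometheus.yml   # then point Grafana at :9090
```

Each additional machine is just one more entry in `targets`, which is why the per-machine cost drops so quickly once the first one works.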
And I'm going to show off some of this pretty soon in my home lab tour I do every year. It's coming
soon, both the hardware and software, everything I host and run. Excited. Yeah. So that's coming soon.
But I need to make it really good for YouTube.
You know, people have certain expectations.
And for some reason, it's always got to be a little bit better.
People will be like, oh, that's what you did last year.
So, yeah, so I'm running my applications on my NAS now.
And there is a reason for that, not just because I want one box,
but ZFS is a really, really good file system.
And when you layer on stuff like, you know, caching and then metadata,
special vdevs and separating out your metadata,
putting that on ultra-fast storage and then putting, you know, your app data on fast storage too,
you get this really good bulk storage that can perform like NVMe storage.
And so that's been my idea, this kind of crazy idea they have.
I'm putting 10 14-terabyte hard drives in an array, you know, I'm doing striped vdevs,
kind of boring.
But and then on top of that, I'm, you know, augmenting the things that need to run fast,
like metadata lookups and app data
and putting that on super fast storage,
but anything in that ZFS pool can also use it.
So it's kind of a tiered approach to storage.
You have ARC, which is RAM.
RAM is going to be the fastest.
Then I have this special vdev
where I can put files on there too
that are below a certain file size,
and then I have bulk storage.
So my idea is run hybrid, hybrid ZFS,
you know, all of my video editing goes on there,
but also all of my databases still go on that pool too.
And I still get, you know,
NVMe-like performance for most of the things that I'm running.
So it's pretty cool.
And so it's been a challenge for me to get that working.
So that's why I'm doing it.
And mainly because, you know, I always look at it like this.
You know, if Plex is running on one machine,
which it used to and my media collection is on another machine,
now I have two chances for it to be down.
If I reboot my NAS or I reboot my application server, right?
And so now I've doubled the chances of that service being down in my home.
And so I co-located everything onto one box.
So now it's like, well, you know, if the NAS is down, that means the apps are down too.
But my NAS should never be down.
And if it's down, we have big problems.
So yeah, that's kind of what I've done.
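The "doubled my chances of downtime" intuition checks out as simple series-availability arithmetic; the 99% figure below is illustrative, not measured.

```shell
# If Plex and the media NAS are separate boxes, BOTH must be up: uptimes multiply.
awk 'BEGIN {
  a = 0.99                                      # assume each box is up 99% of the time
  printf "two boxes in series: %.4f\n", a * a   # downtime roughly doubles
  printf "one co-located box:  %.4f\n", a
}'
```

With 1% downtime per box, splitting the service across two boxes takes expected downtime from about 1% to about 2%, which is exactly the doubling being described.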
I still have a Kubernetes cluster at home.
You know, I still have three Proxmox nodes in a cluster running Kubernetes, and I still have that on mini machines.
That's kind of my home lab test kind of lab where I test stuff before I actually run it in production,
which I have a co-location too, where I'm self-hosting Proxmox and another Kubernetes cluster there that's
running techno-tim.com, plug for my website.
Like that's all self-hosted in a co-location on hardware I own running on proxmox in a cluster,
running on Kubernetes that I maintain myself.
So, you know, my home kind of cluster is kind of a test bed for that too.
So, yeah, I actually run and manage three Kubernetes clusters.
I have a lot going on, but it's fun.
That's a lot of stuff to run.
I was going to ask you about your clusters because I went the route
you did a while back when you built that cluster from the NUCs.
I had only bought one because I could only afford one, but you bought three.
And I thought that was cool.
And I think that's where you ran your Kubernetes,
or you ran Proxmox in high availability
there, one of the two.
That's right.
Yeah.
So I don't have Proxmox in high availability.
I have them in a cluster.
I'm like, I get it why people run Proxmox HA.
I don't need to do that.
Yeah, yeah.
Because, like, I don't need HA VMs.
I push that further down, to the right, to the left,
I don't know which way it is.
I push it further down to the services and then run HA
services, right?
Like, I don't need an HA Kubernetes node.
Right.
You know, I just build more nodes.
It's the whole cattle, you know, approach where I'm like,
Like, you know, it's great for, you know, if you have a single VM, something super important,
that's an old legacy app that you need to run and it needs to auto-migrate somewhere.
But what Kubernetes, you know, as you know, you don't worry about that.
You just worry about the services.
You run three replicas.
And if one node goes down, one node goes down.
So that's the approach I take.
I don't have Proxmox in HA.
It's just clustered.
So I have one UI and I can migrate stuff easily.
You got your, uh, your TrueNAS, which is running your home production services.
That's right. Yep. You got your colo, which is in a data center.
That's right. And that's all of my public facing stuff. Yeah.
Which makes sense because you want bandwidth there and, you know, maybe no firewall poking, stuff like that.
Although you could probably use tail scale or something else to do that. Yeah, yeah. No, I used to, I used to host it out of here. No problem. You know, update DNS dynamically. It was it was all fine. And I could today. I just.
wanted to kind of expand, and I had an opportunity from a local person here in Minneapolis to join their colo.
And I thought, hey, why not?
Why not?
You know, let me do some super fun, you know, site-to-site networking stuff and backups back and forth.
So pretty cool stuff.
What do you run on your Kubernetes clusters then?
Are these applications?
Like, what do you run in there?
So Kubernetes clusters, so I have, yeah, they're applications.
Anything from Discord bots, websites.
So I host, you know, my own documentation
site that's, you know, multiple replicas.
I have my own links site.
I have some APIs that I run, two or three APIs, because I have this mobile app that I use
that I built many years ago that's still running.
And this mobile app then has, you know, APIs, which then needs, you know, databases.
Databases are in there.
Other people's websites, like, you know, my brother has a website.
I built it for them.
You know, my other brother has a website.
I built a forum,
a whole bunch of, I'd have to look, but just random stuff.
But it's basically web dev stuff, a lot of web dev stuff.
And Proxmox was mainly just skunkworks stuff, like lab stuff then?
Well, those are, so Proxmox is actually the host.
My Kubernetes nodes are VMs in Proxmox, right?
So I'm not running Kubernetes bare metal.
Okay.
I'm running Kubernetes as virtual machines.
So I have nodes that are Kubernetes nodes running on Proxmox.
And the Proxmox is also running some LXCs like DNS, Postgres.
So I have Postgres in a cluster running on an LXC because, you know, kind of mixing that into Kubernetes.
While it does work, depending on your IO, it can go bad really quick.
And I don't have a ton of IO.
And then, yeah, LXCs, you know, I'd have to look at the list,
but I use LXCs too.
I was always kind of against them.
I know they're doing containers now too,
which aren't really great.
It's kind of like a hack of how they're doing containers now.
But I am using LXCs for small things.
And when I say small,
I mean like I don't need a full OS for them.
Right.
Most of the time I don't.
It's interesting the way you're using TrueNAS,
because I was always in the camp of let my NAS box just be a NAS box.
But I'm kind of boned because it's,
it's a Xeon CPU.
It's a ton of RAM.
And so I look at it.
I'm like, well, you're not doing really much.
I mean, like, a file server is not taxing.
It is in an enterprise, maybe, with like thousands of users.
That's the box you want.
And so I've never really been happy about that scenario.
But then I'm like, you know what?
One problem, one issue.
It's a NAS.
I don't want to conflate what's on there because if I start putting applications on there,
different things on there, well, then the uptime may go down or I may have an issue that is not
NAS-related.
So then I've mainly been like thinking like NFS mounts.
So why, what made you want to put applications there versus just NFS mounts?
Yeah.
So NFS mounts are great.
You can get into some trouble with NFS mounts, like SQLite.
For databases.
Yeah.
Terrible.
But not like other apps.
They just need to have storage. You know, for, like, a database application, you want to be
closer to the actual storage.
for sure. Yeah, yeah. And it's not even just the latency piece. It's the locks and everything like that. Like,
NFS just doesn't handle it so well. Like the latency I can kind of get over, but like SQLite in
general, you'll have these locks that are like locked that you can't unlock and you'll get corruption
and stuff like that. But NFS mounts were great. Yeah, I mean, you got to figure out permissions and
stuff like that. You know, and I went down that route, you know, too for a while. But then it's like,
I have to back up, I have to, like, you know, take care of
applications over here, you know, set up all those mounts and do all that stuff.
And then also still, you know, care and feeding for the NFS mounts and snapshots and
keeping that connection up. And again, you're back to, you know, you've just, you know,
doubled your chances of downtime, you know, too. You know, it takes two, right? It takes two to make
one. So, you know, and again, like I've gone back and forth with this like so many times, you know,
It's like this whole generalized versus specialized, you know, that you see all over IT and, you know, enterprise in general.
I've generalized and specialized my servers so many times that, like, I think I'm ahead of, like, corporate entities that do this with their employees.
And so I've specialized and generalized my NAS so many times with so many different things.
But right now I'm landing on this.
And I think a lot of it has to do with two things.
Well, actually three.
one, TrueNAS ditched Kubernetes
and went back to plain old Docker,
which was awesome, because I never would have done it
if they were still running the whole Kubernetes bit.
But they went back to just standard containers,
quote-unquote standard containers,
you know, Docker images, Docker containers,
I guess I should say.
Two, that I'm able to do it with the YAML,
because I'm not going to fill out their forms.
Like, if they ever take that away, I'm bailing.
I'm going to find something else to do,
because that's just not me.
Like, I don't want to, like, fill out a form,
you know, to be able to, you know, put in my environment variables.
Like, with the .env file, I copy and paste them, and they're there.
Like, why should I have to do that in a form?
I get why they exist because people aren't developers.
But that's not how I want to manage my containers.
But a lot of that has to do with Kubernetes too.
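The .env-plus-YAML workflow described here looks roughly like the following. The service, port, and variable names are illustrative assumptions, and this is plain Docker Compose rather than anything TrueNAS-specific.

```shell
# Illustrative .env + compose workflow (made-up service and variables).
mkdir -p /tmp/apps/whoami && cd /tmp/apps/whoami

# Paste environment variables straight into .env, no web form needed.
cat > .env <<'EOF'
TZ=America/Chicago
PORT=8080
EOF

# The compose file references them with ${VAR} interpolation.
cat > compose.yaml <<'EOF'
services:
  whoami:
    image: traefik/whoami
    ports:
      - "${PORT}:80"
    environment:
      - TZ=${TZ}
    restart: unless-stopped
EOF

# docker compose up -d   # uncomment on a box with Docker installed
```

Because it's all files, the whole setup is editable in VS Code, diffable in git, and easy for an AI assistant to read and modify, which is the point being made here.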
And then, you know, the other piece is this whole, like, hybrid.
Well, it has a lot to do with what you just said.
It's like, hey, you have this beast of a machine just,
sitting there doing nothing.
You know, I feel like it's that meme where that guy's like poking that thing with the stick
and he's like, do something.
You know, that's kind of how I feel if my NAS is like, you know.
It's kind of boring to watch the metrics.
Yeah.
Like, yeah.
Dude, tell me about it.
You know how much power is over there, Beastie.
Oh, it's funny to say that because now you should see my metrics.
Because now I post all my app metrics on the screen.
Like you can put little widgets.
You know, my, my, uh, Traefik reverse proxy, you know, is doing five or six
megabytes per second, you know, which is,
You know, not a lot, but you think like that's all day, every day.
Yeah.
You know, and then, like, you know, I look at my, I can see my MariaDB database, you know,
going.
It's doing, you know, three, 400 megs per second of queries and stuff like that.
I'm like, yeah, this thing is like, it's humming right now.
You know, and if I look at the CPU differences, you know, you know, it's only a couple percent,
you know, more than it was before.
And so I probably have 50, 60 containers running on there.
And again, like, the reason why I ended up doing this is because I figured out
a way with ZFS to create this hybrid pool.
I mean, people in production probably won't do this.
I think it's cool.
But, you know, to make my hard drives, normal hard drives, you know, be as performant and
as responsive as NVMe.
So not only the speed that you get, but as responsive.
And that's, again, like, I've layered in NVMe to handle all of the quick writes, quick
files, quick access, and for anything else it's going into RAM, which is another huge case of like,
yeah, I want to store my applications on something that has tons of RAM. And if it's storing,
you know, bits and blocks in ARC, which is, you know, ZFS's RAM cache. If it's storing that stuff
in RAM, yeah, do it. I want it to. So I'm like getting like the super performant, you know,
apps that are mostly reading out of RAM. If they aren't, then they'll read out
of NVMe storage.
And then worst case scenario, they read from a slow disk, which is worst case scenario
because I have all those tiers.
Can you tease what your RAM is, your vdev, which is NVMe, and then your, your
disks?
I know you're probably reserving some of this for your future videos, but.
Oh, no, this is, no, I've done videos on this too.
So, so my, so I have striped vdevs.
So, let's back up.
So for bulk storage, I have 10
14-terabyte hard drives.
Okay.
And of those 10 14-terabyte hard drives, I'm doing striped vdevs, or kind of mirrored pairs,
where I have two that are in a pair that are mirrored.
And so, which means I have a 50% loss of capacity.
So you got seven terabytes essentially.
Yes, per pair is seven terabytes.
Yep, that's exactly right.
And so, but I do that for two reasons.
One is because, in mirrored pairs,
you can stripe them across.
So you're basically kind of getting like a RAID 10.
I don't want to say a RAID 10.
Think of it like a RAID 10 where, you know,
you have these pairs, but the data is striped across.
So you get the performance on reads and you get performance on writes,
which is good for me when I edit videos because, you know,
I have a bunch of sequential reads and writes and I don't know where they're going to be.
But on top of that, you can also expand by pairs.
And so traditional ZFS,
it's super complicated on expanding,
and I know that they've been adding features
to be able to expand vdevs and do all this stuff.
But, you know, when I started my array years ago,
I realized that, you know,
buying two drives at a time is a lot cheaper
than replacing all four drives with four different size drives
just so I can get a bigger pool.
And so in the traditional sense with ZFS,
it's kind of what you have to do
is you got to plan up front and buy up front.
But doing pairs lets me build incrementally
and buy incrementally.
So anyways, that's my, that's my slow disk array.
Then I've done this thing called special vdevs,
which is basically you say,
hey, all of the metadata about that data,
instead of storing it on the pool itself,
move it off onto super-fast NVMe drives.
So if you end up in a folder,
like I have some folders that have thousands of files,
you know, that can take a really long time for it to parse that metadata
and retrieve that metadata because it lives on slow spinning disks.
Well, if you have a special vdev, you move it off of there so I can look it up on NVMe.
Then on top of that, another thing you can do with special vdevs is you can say,
oh, by the way, don't just store my metadata there.
If you find any files that are, I don't know, below 64K, just put them on there too.
That's pretty cool.
So if you think about it, like, now
any small file, which usually takes a little while to find on spinning disks, is now stored on there too.
And so it's like I've just given my whole entire array a huge boost, because now most of the stuff is happening on NVMe.
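In ZFS terms, the layout described here boils down to a few `zpool` commands. The device names are placeholders and only two of the five mirrored pairs are shown; this is a sketch of the idea, not the exact pool.

```shell
# Mirrored pairs striped together (RAID 10-like); device names are placeholders.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Special vdev: metadata lives on mirrored NVMe instead of the spinning disks.
# Losing this vdev loses the whole pool, hence the mirror.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Also send any file/block of 64K or smaller to the special vdev.
zfs set special_small_blocks=64K tank

# Later expansion, two drives at a time: add another mirrored pair to the stripe.
# zpool add tank mirror /dev/sde /dev/sdf
```

The `special_small_blocks` property is what turns the special vdev from a metadata-only tier into the small-file tier described here.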
There's one huge caveat to that whole thing.
If you lose that special vdev, you lose all your data.
But I'm listening.
Yeah.
Yeah.
So you lose all your data.
You're definitely cowboying this.
I mean, yeah.
But you know what you're doing.
Yeah, but I've done the same thing.
Mirrored vdevs.
So I have four NVMe drives, you know, and so two would have to die for me to lose all my data.
Right.
But then on top of that, you should have a backup, right?
And so, you know, I am definitely going cowboy because I want the performance.
But I think that's, I'm being a careful cowboy.
I don't know if that's possible.
You got spurs on, man.
That's right.
You got your gun on, you got your slinger.
That's right.
But my slingers are on safety.
How about that, you know, where I'm still, you know, taking precautions.
I'm still building in, you know, redundancies.
Yeah, if you were rocking one NVMe, I was going to call foul, Tim.
But you got four, of course.
That's right.
True Tim fashion.
So not only four, but you got them in pairs.
So you can lose two of the NVMe drives.
Now, did you also go with different brands, buy them from different places, things like that, too?
Or did you just buy a batch?
So, on top of that, there's these old Intel Optane drives.
So Intel used to make these Optane drives. They're ridiculously fast and they have like the lowest latency ever. Intel stopped
making them. Yeah. But these things will, like, outlive earth. You know, the read
and write performance you get on them, and the longevity, how many times they can read and write, is, like,
ridiculously long, that I ended up buying four of them because of how fast and because of how responsive
and because of how long they're supposed to live.
So no, I bought, I bought, well, I bought two from, I think, New Egg,
and then two more from Amazon, like later on.
They're hard to find, but they're still the best drive for that specific use case.
And so when those die, what I'm going to do is just replace them with Samsung, you know,
consumer grade.
Yeah.
I hope they never die only because they're the fastest thing out there.
They blow away any NVMe that's on the market.
How did you get four NVMes in this build?
Because that's, that's different.
It is different.
So, so I use an adapter, you know, a PCI Express card.
So, you know, traditional NVMes, this is going to get a little complicated too,
but NVMes want to use four lanes of PCI Express where they can talk directly, you know, to the CPU
or talk directly to something.
I can't remember.
I'm not a true infrastructure person.
I just play one on YouTube.
So, but anyways, the mask is off.
That is right.
That is right.
Hey, I coined the term infrastructure as a hobby.
That's kind of my thing.
IaaH or something, that's what I call homelab, because, you know, my actual career is, you know, I'm a software developer.
You know, infrastructure is my hobby.
But I love it.
But anyways, NVMes can use four PCI Express lanes.
And on your motherboard, typically, you'll have one big slot that's x16, you know, 16 PCI Express lanes.
You have to buy a card that can actually split that out into four individual sets of four lanes,
and then you can address all four drives individually.
Right.
There's also this thing called bifurcation.
And so you need to make sure that your motherboard can do bifurcation.
And what bifurcation is, is exactly what I just talked about.
It's able to split that x16 slot up into four individual x4 links.
And your motherboard has to support it
and the card has to support it.
Server motherboards, generally speaking,
like Supermicro ones,
a lot of them do.
Desktop consumer boards generally don't.
And so if you're going to do it on a board
that doesn't support bifurcation,
then you can buy a card,
which is really expensive
that can do it for you on the card.
And those cards are,
I don't know,
four or five hundred bucks,
maybe even more.
But if you get lucky
and you have a server grade motherboard that does it,
that's where you want to do it.
So anyways,
that's how I get four NVMes
in one PCI Express x16 slot,
and they're each getting x4 bandwidth.
And then you still have lanes for your GPU.
That's right.
Yeah.
So I have a lane for my GPU.
Your GPUs in the primary, probably, right?
Yep.
And maybe one of your secondaries,
which you're bifurcating.
You probably have a workstation
or a server grade motherboard,
I'm assuming,
because you got that ability.
Yeah.
Workstations are great.
Workstation motherboards are a great hack to not have to go server grade and not go consumer grade. You kind of get that middle ground. You're still in the 600 to 800 bucks, maybe a thousand dollars for the motherboard, but, you know, that's because it's just the world we're in right now. But you do get the capability, and you usually get ECC RAM available as well, versus non-ECC RAM, which you don't really need in a NAS world, but you should have if you want peace of mind, I guess. Yeah, yeah, yeah.
A lot of people, yeah, that's like, it's like, you know, a lot of people will go back and forth whether you need it or not.
People will say, well, if you care about your data, you do because you don't, you know, if the memory gets corrupt.
Well, that's where it writes first.
It's the source of truth.
Yeah.
Yeah.
So in ZFS, it's less important.
I'll say that because of how it checks, how it does its checksums, and can verify the data.
Some people still swear by it.
Like, they'll say, if you don't use ECC, you shouldn't even run ZFS.
You know, people go to the extreme.
I'm not there, but I'm glad I have it.
How about that?
That's an interesting world.
So when you think about a problem like this, so let's zoom out to, you know,
not scare the homelabbers away, those curious folks.
Yeah.
When you think about this problem, do you get out your big old whiteboard?
When you're specking this world out, how do you think about it?
Because you're a YouTuber too.
So you think about it probably in the story arc.
And you also think about as a technologist, how do you map this and plan this and test this world?
How do you do that?
You know, first and foremost, like, I've been doing this for a long time even before YouTube.
Like, I've had a server in my basement since I don't even know.
I'll probably date myself, probably like 2004 or five, you know, going back to like this old piece of junk that when I was in tech support, I asked my manager if I could take it home.
so I could learn about Linux and learn about Active Directory.
And he looked at me like I was crazy because the thing was, you know, 10 years old already.
And he, you know, told me, yeah, you can take it.
Just take the hard drives out because, you know, I had data on it.
And so I've been doing this for a long time.
I've had a server in my basement for a long time.
So I, I generally think about, you know, what do I want to do?
What capabilities do I want to have?
You know, generally speaking, you know, I want to provide file services.
So I want a NAS to be able to do shares and store my files on.
I don't want to store my files.
Obviously on my system, I want to store them on my NAS because then that gets backed up.
I want to have some compute, you know, and it takes very little compute to do most of the things you want to do.
A lot of people will say, get server grade, you know, but I could spin up this i3 right over here that's, I don't know, from five years ago.
And this will, like, destroy any, you know, self-hosted container I throw at it.
Like, laugh at it. So you don't need much compute at all. RAM is great, but again, like,
you don't need tons of RAM. And so I just try to think of, you know, what services do I want
to provide? For me, it's storage. For me, we record a lot of TV and, you know, stream stuff at home.
So I think about Plex. Plex then brings in a video card if I want to transcode. Most of the stuff,
I can direct stream here. But, you know, when we travel or whatever, iPads, you know, phones,
they want to transcode.
So having a video card in there that can do that transcoding for me on the fly is good.
You know, that's a decision point too.
You could get by an Intel stuff.
But then I start thinking, well, I want to run models at home too.
So I kind of want to, you know, have shared infrastructure.
So I want my video card to be shared too.
So that's what I do now.
Share my video card with Plex, with Ollama, with the stuff we just talked about, you know, with Paperless-AI and stuff like that.
Technically that's going through Ollama.
But even some of my, you know, I do some transcoding for some of my, you know, video cameras too.
So, yeah, I try to find a video card that will work in all those scenarios.
And so, you know, I never want people to think like, oh, I need to go and buy this big thing before I can start homelabbing.
It's never like that.
Like, if you have an old PC in your basement, use that first.
figure out what you want to do with it.
You know, use it as is.
If it had windows on it from 10 years ago, wipe it, put Linux on it.
And if you're scared of Linux, that's fine.
You can put windows on it.
But I would say, just try Linux because you're going to find things are a lot more compatible.
Yeah, for sure.
And just try it.
I mean, you might find it sucks to be a sysadmin at home.
You know, I enjoy it.
I enjoy it.
When things go wrong, my wife says something doesn't work.
That's when I'm like, all right.
You know, it's Deval.
Something broke.
I have a job.
My hat is on.
That's right.
This is why I'm...
Turn it backwards if you're Tim.
Oh, that's right.
That's right.
Yeah.
But, you know, I enjoy doing it.
You know, a lot of the comments on my YouTube videos are,
bro has a full-time job at home.
And it kind of is right, you know, to an extent.
But if you build things right, you got things working,
you know how they work, and you document it in case things go wrong, with a little help from AI.
Like a lot of things are hands off.
So you're focusing on the next thing you want to do.
Yeah.
So, you know, again, like I don't have like, you know, a recipe of how I do things.
I just try to think of, you know, what are my base services I want to have?
It's always going to be storage, streaming, some compute, and some kind of transcode capability.
and from there I just try out a whole bunch of containers.
You know, I treat containers now.
Just think of like apps on your phone, you know.
I mean, I can't spin them up that quick.
I can spin them up, you know, probably about five, ten minutes,
all said and done, working with a proper certificate.
But they're like apps on my phone.
If I want to try an app, I try the app.
Does it do what I want?
Yes, it does what I want.
No, it doesn't.
There's this other app that I should try.
And so I'll go try that.
And, you know, that's kind of the world I live in now is, you know, my self-hosted services are basically apps at home for my home to use.
When I say at home, I mean pretty much me, but, you know, there are some that my wife uses.
Yeah.
Yeah.
And, you know, and it's as long as I have a platform to do all of that, it really doesn't, you know, matter what parts I use.
Because you can get by with so very little nowadays.
They really can. I agree. I don't think you have to go so big. I mean, this is where it leads, okay? It's like, I don't know if you're a golfer, Tim, but that's what golf folks say as well. It's like, hey, you get invited to play golf, you're like, nah, I don't want to go. And then somewhere during that first round, you're like, oh my gosh, this is the best game ever. I'm going to go buy all the clubs I could possibly buy. And golf tech, like any tech, is just limitless, really, what they can, like, fine tune and dial in. And so if you've ever become
a golfer, you start with, like, this small itch, and the next thing you know, you've spent
$10,000 your first year in some way, shape, or form. I'm being facetious, but, you know, golf rounds aren't
cheap, golf trips with friends aren't cheap, golf clubs are not cheap, and then you have to have
special clothes, or you want to have special clothes, because, hey, why not dress the part?
I kind of feel like that's the same thing with the homelab.
It's like you can begin with, like I did, I can remember the day when I got my Raspberry Pi.
I can remember the day when I spun up the TrueNAS, or actually it wasn't TrueNAS, it was just, like,
whatever 45 Drives sent me, because, like, way, way, way back in the day, I want to say probably six or seven years ago.
I mean, eight years ago, they sent me an AV15 to try out.
And they said, hey, no, you know, you can just keep it.
It's just yours just to have them play with.
And this is when they were first launching that line to homelabbers, or what would become homelabbers.
And I was like, really?
They're like, yeah, we don't want it back.
It would cost way too much to ship it back to us.
And at that time, I guess hardware was just cheap enough.
They were like, yeah, we don't even want it back.
Just use it and enjoy it and tell people about your experience with it.
I'm like, okay, cool.
And so that today is my TrueNAS box.
You know, and it's got the Xeon Silver, I think,
4112, I believe, the CPU in there, if I recall correctly, or 4212 maybe.
But I began, that was gifted to me.
I didn't even know what I was doing with it at first.
But then I began on a Raspberry Pi and started to experiment.
And so that's where I began, with everything running on my Raspberry Pi.
Now I wouldn't, because, not my needs,
like what I actually need for compute, but my desire to play with bigger things
has grown. You know, the playground, I can't just have the carousel on it. I've got to have the
slides. I've got to have the swings. I've got to have the rope climb. I've got to have,
you know, all the things. You know, so my playground for homelab has just grown a little bit. I'm curious,
though, when we look at maybe this potential dichotomy between TrueNAS and your desire to put
all things there, which maybe somebody out there is having similar feelings.
And then this world of Proxmox.
I feel like I'm with you.
I kind of want my TrueNAS box to do everything.
But TrueNAS the software is not quite there yet.
Where do you see...
Do you have this vision or purview into that world where you can see
TrueNAS being this all-in-one big box software?
Because it traditionally has been great, you know, great for what it is.
You know, ZFS, storage pools, a little bit of applications if you need it, but not a tremendous
amount. I kind of want my Proxmox and my TrueNAS in one box. Is that how you feel? Yeah, I do. And
you kind of can. So you could virtualize TrueNAS. So let's take, let's take your...
You did that before. I've done it for years. And it's totally fine, totally fine.
It's totally fine. But don't you have to, like, map them to, like, weird
drive IDs and stuff?
Isn't it, like, weird on the uptime, I suppose,
if you had some major issues with your drives?
No, so it's easier than you think.
So let's take, for instance, you have that TrueNAS box.
It's running TrueNAS.
Just pretend you want to run Proxmox on it now,
but then you create a TrueNAS virtual machine,
and you give that TrueNAS virtual machine,
you pass through the hardware of that HBA controller.
You give it the whole piece of hardware.
You say, nope, this hardware controller,
hard drive controller, HBA, is now assigned to this virtual machine.
And what that does is now the virtual machine has direct access to all of those disks.
There's no IDs to map because it thinks it's the true owner of the disks.
And then you do that, then your life is good and then whatever.
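On Proxmox, passing a whole HBA through to a TrueNAS guest boils down to a couple of commands. The VM ID and PCI address below are placeholders, and IOMMU (VT-d or AMD-Vi) has to be enabled in the BIOS and kernel for this to work:

```shell
# Find the HBA's PCI address (e.g. an LSI SAS controller).
lspci | grep -i sas

# Pass the entire controller at 0000:03:00.0 through to VM 100.
# The TrueNAS VM then owns the controller and sees the raw disks directly.
qm set 100 --hostpci0 0000:03:00.0
```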
That's one way to do it.
No, that is one way to do it.
But I agree.
So let me put it this way.
And this is not a dig to either product.
They're better at some things.
And so, you know, True NAS is leading as a NAS.
They're leading with, I'm a NAS, but I could also do apps.
And I can also do virtualization, not that great.
And that's my own opinion.
But it can do virtualization.
It's just not that great at all because it's not a hypervisor first.
It's leading with NAS.
And when you think about Proxmox, Proxmox is like, hey, you know, I'm a hypervisor first.
Like, that's what we do.
We do virtualization.
you could install apps like LXCs, although I'll get into that,
but you could install a whole bunch of apps and run them on the machine itself,
but, you know, and that does work, but like if you think about it as a NAS,
not great NAS experience.
Sure, you can install a Samba server, assign it some, you know, pool,
and then do all of the Samba config in a CLI and all that stuff, like,
figure out permissions.
You can do all that.
Like, they're both capable of doing
each other's job. I'm kind of like, I want something in the middle, and there is HexOS, which is
kind of coming, but that's, that's kind of like, you know, some joint venture between some people
and Linus and, you know, TrueNAS, or iXsystems, creating this more, I guess, consumer-friendly
version of TrueNAS. There's that coming, but, you know, I played with the beta.
Looks pretty cool. It's, it's kind of going to be, you know, a facade on top of
TrueNAS because, you know, they're going to use TrueNAS APIs and it's really TrueNAS under
the hood, but that's something different.
I mean, they're not going to do this. But if I, if I could design, I guess, the perfect
NAS at home for me, it would be, you know, it would be like a TrueNAS-like experience for
the NAS piece and maybe even for the application piece. But give me, you know,
give me the virtualization capabilities that Proxmox has, you know, and the networking
capabilities that Proxmox has.
Or, you know, if I couldn't do that, I would love for Proxmox just to run Docker containers.
I don't know what's going on there.
They're so against, like, running Docker on the host.
And I know you could shim it in and do it yourself.
But they've even got to the point where now they're, like, converting, you know, OCI containers
which, you know,
Docker containers, quote unquote,
to LXCs to then run as an LXC
because they don't want to run
true OCI containers like Docker.
Like, I don't get it.
Is it licensing maybe?
I mean...
It would have to be licensing.
I don't know.
Docker CE, does it cost money to...
I mean, I don't know.
I'm not a lawyer.
I know that there's been a lot of change
in the Docker world in the last, you know,
six years.
I mean, they were almost a dead company.
And then they were revived.
It was Docker 2.0.
I talked to their CEO on a podcast here, on the same podcast you're on, a few years back.
It was a fun conversation about Docker 2.0, where they went from zero revenue to revenue.
And I mean, at some point you have to protect your moat.
Really, you do as a company.
And I imagine it's probably licensing.
It's probably something there.
But as a user like you are, I want that world to marry.
So figure it out.
Yeah, yeah. Like, and Docker's done like now that they're, you know, they have model runner.
They're doing all the scout stuff and scanning containers and like really building up their Docker desktop.
And I'm just focusing on the Docker CE part.
And maybe there is licensing around, you know, maybe you can't ship this with your own product.
But let's take that all the way out and just go down to, like, containerd.
Like, just get, you know, OCI images.
Podman. I mean, something. Like, at the end of the day...
It's true.
Podman is available.
More license-friendly.
Yeah.
And so at the end of the day, like, I wish I could run OCI containers as first-class citizens on Proxmox.
And I, again, like, I don't know the reasoning behind it.
I honestly feel like it has something to do more with their strategy, you know, and how they're, you know, trying to be highly available.
And LXCs are highly available, although I don't think that's there yet.
And VMs are highly available.
I just don't think they want containers spinning up on the host itself.
I get it.
You can get around them.
Friends, you know this.
You're smart.
Most AI tools out there are just fancy autocompletes with a chat interface.
They help you start the work,
but they never do the fun thing that you need to do,
which is finish the work.
That's what you're trying to do.
The follow-ups, the post-meeting admin,
the I'll-get-to-that-later tasks that pile up until your Notion workspace
looks like a crime scene.
I don't mind it.
I've been using Notion Agent,
and it's changed how I think about delegation,
not delegation to another team member,
but delegation to something that already knows
how I work, my workflows, my preferences,
how I organize things.
And here's what got me.
As you may know, we produce a podcast, it takes prep,
it's a lot of details, there's emails, there's calendars,
there's notes here and there,
and it's kind of hard to get all that together.
Well, now my Notion agent helps me do all that.
It organizes it for me.
It's got a template that's based on my preferences,
and it's easy.
Notion brings all your notes,
all your docs, all your projects,
into one connected space that just works.
It's seamless, it's flexible, it's powerful,
and it's kind of fun to use.
With AI built right in,
you spend less time switching between tools
and more time creating that great work you do,
the art, the fun stuff.
And now with Notion Agent,
your AI doesn't just help you with your work,
it finishes it for you based on your preferences.
And since everything you're doing is inside Notion,
you're always in control.
Everything Agent does is editable.
It's transparent.
And you can always,
undo changes. You can trust it with your most precious work. And as you know, Notion is used
by us. I use it every day. It's used by over 50% of Fortune 500 companies. And some of the
fastest-growing companies out there, like OpenAI, Ramp, and Vercel. They all use Notion Agent to help
their team send less email, cancel more meetings, and stay ahead. Doing the fun work. So try
Notion, now with Notion Agent, at notion.com slash changelog. That's all lowercase letters,
notion.com slash changelog, to try a new AI teammate, Notion Agent, today.
And we use our link.
As you know, you're supporting your favorite show, the changelog.
Once again, notion.com slash changelog.
Do you mess with Fly.io by any chance, that world, in, like, prod? Fly.io?
No, I haven't.
I mean, if you love containers, then you'll love Fly.
I mean, Fly is where we host changelog.com.
They're a partner of ours.
We love them, obviously.
This is not technically paid.
I'm not paid to love them.
We just love them anyways.
Fly is like that.
You're running containers, right?
You're running a container in production.
Yeah.
It's Firecracker VMs.
I'm not familiar with everything behind it,
but Fly Machines, essentially.
They spin up very fast.
They spin down really fast.
I would love to have a version of Fly in my own lab.
And it sounds like that's what you're describing there,
which is,
I want an OCI container.
I want, you know, as close to bare metal as I can.
I don't have to spin up an Ubuntu VM to then throw Docker on to then launch my
Docker container.
I would like to just have the entire system be container friendly.
It sounds like you're saying.
Yeah, yeah, yeah.
I mean, you know, similar to Cloud Run on Google or any of these things, you give it a manifest,
you spin it up and it, you know, it's running, you know, container only.
And I mean, that's kind of what I'm getting with TrueNAS right now.
I feed it some YAML.
I've got to create a dataset for it,
I've got to tell it where to put the data
it's going to use, then I use a Docker Compose file, and I'm done.
So that's kind of what I'm getting with TrueNAS now.
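That workflow really is not much more than one small Compose file per app. A minimal sketch, where the image, port, and dataset path are all illustrative placeholders:

```yaml
# Hypothetical app definition; /mnt/tank/apps/whoami would be a
# dataset created first, so the app's data lands on the pool.
services:
  whoami:
    image: traefik/whoami:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - /mnt/tank/apps/whoami:/data
```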
I just wish, you know, Proxmox would just build that into their CLI or UI or whatever,
so that people just don't have to do this:
well, I'm going to run an LXC, and I'm going to run it as root so then I can install Docker
to then, like, run, you know, an OCI container inside of this LXC container, or, you know,
or pay the virtualization tax and run it inside of a VM.
I just want, you know, as bare metal as possible.
And I don't know.
I'm sure Proxmox has a lot of reasons why they don't do it.
Probably, you know, has something to do again with their strategy.
It doesn't fit in there.
But I just don't see how you could ignore OCI containers in general.
Like, yeah.
Yeah, I agree with that.
I do agree with that.
Well, the closest I've been able to come is the CLI I built,
which in this case has got the cloud image,
an Ubuntu or even a Fedora cloud image, on the server already, which it uses
as its base, like you would.
And so rather than going that whole route of creating the template, including the template to
create a new machine, the CLI does a version of that through automation.
And you're able to, you know, through cloud-init, you're able to define the network, right?
You've got all that there: the user, the network, the SSH key.
And then everything else is the Proxmox API,
which I really wish Proxmox and TrueNAS did a better job of documenting their API.
It's just not, it's not super.
I mean, it's good documentation, but I just feel like they don't treat it like a first-class citizen.
And me, the kind of developer I am, is I want to play with your API.
I want to build my tools on top of your system, not be forced to go to your web UI.
Like, even with TrueNAS, you're the same.
I don't want to fill out a form to spin up a new thing.
Well, I would much rather automate through whatever layer it is, whether it's me or an agent, some sort of CLI.
And then if the agent's using it, then it can just easily use the CLI built.
But I want to be able to automate those things on those kinds of systems.
And the closest I've come is exactly that: it's, like, PXM, you know, VM, new, and then specify all these things, send it a template, which is super easy.
And those are YAML files.
Those are YAML files, like two or three YAML files to define a couple things.
And you're off to the races.
You can define a minimal Ubuntu, you know, brand new machine.
Now, you could define it to be bigger,
but I just have found that it's just easier to layer on a post-install Bash script
than try to script it all.
Like, you get into Ansible land.
It was just really nasty.
It got really error prone.
So I was like, you know what?
Forget all that.
I just want to define a base VM.
A base VM that is blessed with an IP address, with the RAM I want, with the CPU I want, with the disc I want, and, you know, protect it.
You know, it's got a dash P on it, which means if I try to delete it accidentally or my agent tries to, it's got to go through this whole entire dance and, you know, send across like documentation and social security numbers stuff like that to like delete a VM.
Like, you can't just accidentally delete a VM, you know what I mean?
Like it's, you got to do some things, you know.
but it's pretty, I mean, like literally within,
so your 15 minute scenario that you defined earlier with a new container,
less than 60 seconds, Tim.
Yeah, maybe even 30.
Maybe a minute to get the IP address back, because it's got to, like, launch the VM,
get all the updates, right, from Ubuntu or whatever, which takes time.
And then actually launch the actual machine itself and then get an SSH key.
That's the thing that takes time: the boot, the update, and then finally, you know,
QEMU giving you that IP address back.
But, like, that's all, like, instant.
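Under the hood, a cloud-image plus cloud-init flow like the one described here maps onto a handful of Proxmox `qm` calls. Everything below (VM ID, image filename, storage name, user, addresses) is a placeholder, a sketch of what a CLI like this would automate:

```shell
# Create a VM and import an Ubuntu cloud image as its disk.
qm create 9001 --name base-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9001 ubuntu-24.04-server-cloudimg-amd64.img local-lvm
qm set 9001 --scsi0 local-lvm:vm-9001-disk-0 --boot order=scsi0

# Attach a cloud-init drive and define the user, SSH key, and network.
qm set 9001 --ide2 local-lvm:cloudinit
qm set 9001 --ciuser adam --sshkeys ~/.ssh/id_ed25519.pub
qm set 9001 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1

# Protect the VM so it can't be deleted accidentally (the "dash P" idea).
qm set 9001 --protection 1

qm start 9001
```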
Yeah, that's, that is awesome.
And so when I said 10, 15 minutes, that's me figuring out, okay, data mounts,
environment variables.
Like, that's me.
That's, like, research I had to do anyways.
If I knew it, yeah, it'd still take me five minutes, but that's not 60 seconds.
I should, I should look at doing that with TrueNAS too.
It's just automating.
I'll show you what I'm doing behind the scenes.
It's not ready.
It's, it's ready
to be used if you don't mind warts.
It's not open source ready yet because I think people have a different expectation of what
that might be.
I want to do one more rewrite because I now have more clarity on what it is.
But I think we could layer on what I'm already doing with the idea of already saying,
okay, there's true NAS on the system here and there's a separate layer that you can spin up
that same VM, but then also say there's mounts elsewhere.
There's NFS mounts elsewhere, whatever you want to do to do your world.
what you're talking about there.
So, yeah, I'm curious why you don't use Ansible, though,
because I have my, I have my post-install Ansible, like I have, you know...
Bash.
Bash and agents. Agents like Bash.
Yeah.
I didn't fight the system.
I just mean like, no, I know I get it.
But like, I have my Ansible playbook for new VM, you know,
and it has 50, 60 tasks that are all there.
Yeah.
And it does it intelligently, like, hey, if I say, you know, stop the firewall service.
Well, if it's not running, it's not going to stop.
it, you know, and, hey, if I tell it to install this one package, it knows that doesn't run on
this type of machine, so it's not going to try, you know what I mean? Like, I have my Ansible
playbook where it's, I just click go. Like, anytime I create a brand new machine, I have a standard
playbook I run, and it's going to apply updates, reboot, install Zsh, configure Zsh
with Robby Russell's Oh My Zsh, because that's what I like. You know, it goes through the whole
shebang of, like, this is my standard VM. And I,
I just don't even pay attention to it, you know, a couple seconds.
You know, it probably takes, you know, a minute or two throughout all the reboots and applying and installing packages.
But after that, then it's like, it's ready, you know, ready for production.
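A standard "new VM" playbook like the one Tim describes might start roughly like this. The host group, package list, and firewall service name are illustrative, not his actual playbook:

```yaml
# Illustrative baseline playbook; each task is idempotent by design.
- hosts: new_vms
  become: true
  tasks:
    - name: Apply all pending updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Install baseline packages (no-op if already present)
      ansible.builtin.package:
        name: [zsh, git, curl]
        state: present

    - name: Stop firewall service (no-op if it is not running)
      ansible.builtin.service:
        name: ufw
        state: stopped
      ignore_errors: true

    - name: Reboot to pick up kernel updates
      ansible.builtin.reboot:
```

Rerunning the playbook is safe: Ansible checks current state before each task, which is exactly the "it knows not to stop a service that isn't running" behavior described above.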
Yeah.
So I think the, I'll defend that by saying I have never been, not for any reason, an Ansible guy.
I just never really got into it.
I didn't understand the world.
I know what it does and how people use it.
I get all that client server.
I get the recipes.
I'm not foolish, but I never got into it to need infrastructure automation.
And the lingua franca of agents is Markdown files and Bash.
Bash is everywhere.
You have to add on Ansible in your client or on your server somewhere, and it's baggage compared to native Linux.
Right.
So it may work for you because that's been your history, right?
But in that world, when you're trying to automate that kind of thing, I just tell the LLM,
hey, I'm launching a new instance of DNS hole.
And here's the specifications for it.
It will write a script.
That script can be item potent.
And it will rerun it.
And it's just as good,
if not probably better and faster than ANSI.
Oh, yeah, yeah, yeah.
I'm not, I'm just thinking about like if you're building,
you know, if you're building a CLI, like at some point, like,
you know, it has to,
it has to be able to scale to different things.
And so, like, are you recompiling the CLI every time it needs to install one more thing, or just writing a new script, or
how does that work? You know what I mean? Well, it has versions. It's written in Go. So it's got
versions. And so if there's new features or a patch release, I'll patch-release it and, you know,
throw a new version out there, which you can then, you know, run PXM update and it will get the new
version of it. Sure. I just mean, like, the source of truth for your list of things, you know,
say, like, one day you want to install Zsh. Here's the beautiful thing, Tim. Those aren't in there.
Yeah, that's what I was going to say.
Those things are in user land.
Those templates are in user land.
You can define them.
And so that's why I want to rewrite it because I sort of like, I haven't been doing this as a day job.
This is my little scratch, my little itch.
And I really haven't, I probably haven't changed the code in three months, honestly.
Like it does what I needed to do.
This Adam wants to change it to release it and make it better.
But the world I wanted to build was this really interesting CLI.
But most of those things live in user land.
And so I have a separate repository.
that the world can define their own minimal Ubuntu,
minimal Fedora, minimal, you know, Debian,
you name it, whatever you want to do.
And all the patterns are there.
And all you've got to do is clone the repository,
put it in a certain place, update your config.
And when you run PXM, you know, new VM or whatever it is,
and you do dash, dash template or dash package,
and you say the name, well, it knows that because config says
all of your templates are over here.
So you won't have to recompile the Go binary to get it to do that.
All that will live in user land.
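Since PXM's internals never come up in detail, here is a hedged sketch of what that user-land lookup amounts to: templates are files in a directory the config points at, so adding one is a data change, not a code change. The paths, filenames, and `resolve_template` logic are all invented for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of user-land template resolution as described:
# templates live in a directory, the CLI resolves names against it,
# and adding a template never requires recompiling the binary.
# Paths and lookup logic are invented for illustration.

tpl_dir="${TMPDIR:-/tmp}/pxm-templates"
mkdir -p "$tpl_dir"

# Stand-in for cloning the community templates repo into place:
echo "image: ubuntu-24.04-minimal" > "$tpl_dir/minimal-ubuntu.yaml"

# What a CLI lookup boils down to: resolve name -> file in user land.
resolve_template() {
    name="$1"
    f="$tpl_dir/$name.yaml"
    [ -f "$f" ] && cat "$f"
}

resolve_template minimal-ubuntu
```

In real life the lookup would live inside the Go binary, but the point stands: new templates are data in user land, not code in the release.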
Originally, they did.
Okay, I was a fool.
Okay, I put those things in there, and I was going to extract them out, because I wanted
to give people a good bootstrap.
Well, then I learned, well, that's just probably not the best way.
And so it's better to have a user-land repository that people can commit to and update,
and that PXM can be pointed at.
And sure, it's one more step to clone that repo down and put it in a config, but I feel like
the tradeoff is better long term.
So that's where I'm at.
I haven't gotten to the point where now I can put a lot of those minimal templates, or even more expressive ones that use Ansible, which is totally possible.
I just didn't want to do it.
And I just wanted to get a base image, get my agent on that image, and be like, make the world.
And it goes and it makes the world, you know.
Yeah, no, that sounds awesome.
No, it sounds awesome.
I'd love to check it out.
Because, yeah, again, it was like, you know, I did it in Ansible only because, like, that was way better than running, you know, CLI commands over and over and over,
or even writing one big bash script.
Now that's changed because LLMs love bash.
You know, so now things have changed.
But yeah, I should definitely revisit it.
Yeah, I'd love an agent.
Yeah.
Yeah.
At some point I'd love to hook up my open web UI,
so my internal chat that I use with Ollama, and build, you know,
use open agent or something. Is it that new open agent that's out?
Open code.
Yeah, open code.
Run that agent and tell that agent to do stuff in Proxmox
for me all via chat.
Like I just don't even want to run a CLI.
That's what I'm saying.
Like I don't even want to run the CLI.
Sure, I could do that and expand the variables and figure out what it can and can't do.
Well, the problem with it is that ProxMox has some challenges.
Like, it would have to navigate.
The reason why I went the CLI route was one.
I thought I wanted it for me.
Now the whole world shifted to be agent first.
Yeah, I want to build an MCP.
I want to build an MCP for Proxmox, where the MCP understands the Proxmox API.
So, hey, MCP, you speak Proxmox API; LLM, you speak human, but you also speak to the MCP.
You know what I mean?
And so now it's like, I speak human to, you know, an LLM that speaks MCP to that, you know,
MCP that speaks API to Proxmox, and it just does the stuff.
You'll get to a limitation at some point, though.
So the CLI is the glue layer in the middle there.
For sure.
So the CLI is kind of the important piece.
And then you layer on the MCP server on top of your CLI.
and so the MCP can speak native ProxMox as necessary,
like if there's an API and using Native API,
or the things that it doesn't do that you've taken from 15 steps to one command.
Yeah.
And you've got tests against it.
Well, your CLI, in addition to the ProxMox API with your MCP server
and your agent in open code or in Claude, well, that's the beautiful world.
Yeah, man.
Yeah, that sounds awesome.
That sounds awesome for sure.
Because, yeah, the proxmox binary, whatever it is, PXM.
PXM.
That can do so much.
I mean, that's how people are doing even their Ansible scripts right now.
They're like, you know, shelling into it, running a PXM command.
So, yeah, it sounds awesome for an MCP, too, to be able to say that.
But then that's a lot of words for me to type.
You know, I'm getting lazy developer.
Like I'd rather, like, you have a CLI and run a command.
Like, do I want to sit here and describe, you know, what to build?
No, just build a standard VM.
Let me know when it's done.
Call it this.
Right.
Well, the determinism there is the challenge.
You can do that, but, you know, the reason why MCP came into play was because LLMs are
nondeterministic.
You can say that, but every time you might get a different version of it.
Or if you're using Sonnet versus Opus now, and like Opus 4.5 is the default, you know, per
Anthropic and Claude, they want you to use Opus 4.5.
it's non-deterministic.
It may know that and you get great results every time,
but it's not the same path to Rome.
The MCP server is what helps you create determinism, or even skills.
So you layer on skills, MCP, and CLI with a decent API,
which is why I would love Proxmox to just bolster that.
Make it more expressive.
Give me more in there, better documented.
Because if you give us those tools, we'll use your thing more.
And I got to imagine if I'm an enterprise,
buying a support license, you know, I'll, that's how they make their money.
Right.
Isn't that how ProxMox makes their money on ProxMox?
Oh, yeah.
Support licenses.
Yeah, that's how most open source, you know, are making it.
Now they're starting to do premium features.
It seems like that's like a huge trend now.
It's like, oh, you can, you can self-host the normal version, but then premium features.
You know, it's a combination of I'm either going to be your SaaS.
I'm going to give you extra features or and or I'm going to,
I'm going to be your support.
Yeah.
That seems how most open sources,
projects are monetizing now.
In large installations,
I can see that.
For me,
I would buy a support license as a means to sustain
and just to get rid of that thing
that pops up every single time,
really.
I would just be like,
I would honestly give Proxmox
a hundred bucks a year just to get rid of that,
you know,
realistically.
Yeah,
there's a,
yeah,
I agree.
I asked them one time,
Hey, do you have a home lab license?
You know, do you have a cheaper version of the home lab license, you know, that I could use so I could get, you know, legit updates versus bleeding edge updates?
Because their price per core, whatever it was, you know, was kind of cost prohibitive for a homelab person.
It's like five, six hundred bucks.
Might be more now per year.
And I'm like, yeah, that's a lot.
Not a lot for enterprise, but a lot for Timmerprise, you know.
So.
And they were like, don't worry about it.
You know, they were like, don't worry about it.
Just, you know, use our latest updates.
Anyways, I say that because you can get rid of that nag screen really easy.
You can.
I don't know.
Yeah.
Oh, man.
So since we're talking so much about Proxmox, have you heard about Proxmox helper scripts?
No.
I did a video on it.
It doesn't matter.
You should check out ProxMox helper scripts.
I pay attention to your channel, Tim, and I missed this somehow.
That's all right.
That's right.
No, it's going back to the algorithm.
You know, it's fighting for ears and
eyeballs, and, you know, Google says, like, hey, this does not deserve ears or eyeballs. And plus, like, dude,
there's no way you can keep up anyways. That is a collection of tons of scripts, single-liners you can run to
do anything you want to do on Proxmox, and all written in bash. There's one that's like, get rid of nag screen,
that you can run. But there's also, like, the default script you run is so good. It'll, like,
get rid of nag screens, uh, set it to, you know, remove the enterprise
repositories. You know, it'll do everything you want to do as a home labber, which is: give me
bleeding edge updates, because that's all I can get; turn off nag screens, because I can't really
afford a license; and, you know, disable some other things, because I'm never going to use them.
So really cool, really cool repository. It was actually made by someone who passed away and
he passed it on to the community. So really cool website and a really awesome group of
people. What does it call it again one more time?
Proxmox Helper Scripts.
Okay.
It might be proxmoxhelperscripts.com. I'm not sure.
But if you check it out, it's, it's so awesome.
I mean, it's worth looking at.
ProxMox helper scripts.
It looks like a GitHub.io website.
Easily search that on the web.
We'll link it up in the show notes, of course.
I know we're getting close to our time, Tim.
I'm happy to keep talking to you.
But.
Oh, yeah.
Go view the
scripts. I mean, anything. So they basically built almost like an app store for LXC containers,
which is pretty cool. Like, hey, do you want to install, uh, do you want to install home assistant
as an LXC container on your ProxMox? Yes. Click one shell script. You're done. Do you want
install? No, yeah. Do you want, do you want Ollama with GPU enabled? Yes. Okay, run the shell
script. Yeah, go to Proxmox VE helper scripts. It's community-scripts.github.io.
And that's what they've done is they've basically built almost like an app store for Proxmox,
because they're able to either create an LXC for it or create a VM for you. Like databases,
like, hey, do you want to run Postgres? Yes. Do you click this button? Run the shell script,
you know? And this is cool, man. Yeah, it's really awesome. I did a video on it. It deserves so much
attention. I had no idea this even existed. Yeah, man. It deserves so much attention. And
It's not building VMs.
It's building LXCs for the most part because that's what you want, right?
You don't need a full fat VM to run, you know, Maria DB.
You just need Maria DB running, you know, with a little bit of storage and a little bit of RAM.
Yeah.
Yeah.
Like, like, think about setting up, you know, Bitwarden, how hard that is, you know, or Authelia. You know, this does it for you.
And, you know, very simple shell script.
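Mechanically, those one-liners follow a fetch-and-execute pattern: you paste one command into the Proxmox web shell, it downloads a bash installer, and hands the text straight to a shell. A sketch of just the pattern, with a local stand-in in place of the real community URL (the URL shape in the comment is illustrative, not exact):

```shell
#!/bin/sh
# The community helper scripts are invoked roughly like:
#   bash -c "$(curl -fsSL https://.../ct/homeassistant.sh)"
# i.e. fetch a script's text and execute it directly.
# Below, a local here-doc stands in for the download, so the
# pattern can run without network access or a Proxmox host.

stand_in="${TMPDIR:-/tmp}/helper-stand-in.sh"
cat > "$stand_in" <<'EOF'
echo "pretend: creating LXC container with sane defaults"
EOF

# Same execute-the-fetched-text pattern, reading locally instead:
sh -c "$(cat "$stand_in")"
```

The convenience, and the thing to be aware of, is the same: whatever that fetched text says, your shell does, which is why you only want to run scripts from a repo you trust.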
Well, all right, then.
I'm glad to talk to you, Tim.
Yeah, man.
Like Home Assistant, like, you want to run Home Assistant in an LXC container?
You just want to try it out.
Click the script two seconds later.
It's running.
You don't like Home Assistant, delete it, you know?
Are you using this for those?
You're probably not using this for those things,
that you're using it just for the simple things, though.
It sounds like.
I am using it for Pi-hole. I installed Pi-hole as an LXC container, because I'm like,
yeah, why not?
Like, do I want to run, you know, sudo apt install and do all this stuff?
No.
Do I want to run as a VM?
No.
So yeah, so I've done it for certain things, for certain things in my lab.
Yep.
Red, did I do it for Redis?
I might not have done it for Redis because I run Redis in a cluster.
I do run Redis as an LXC to run it in a cluster.
But I don't think I use their scripts.
But anytime I, you know, anytime you want to test something out, I mean, this is almost
faster than trying it with a Docker container only because, you know, they fill out all the defaults
for you.
So you go from, you know, you go from running the script to it actually running.
The challenge here, and the one thing, the big takeaway here, is you don't run this script
in a shell prompt.
It's kind of weird.
You have to run it from the Proxmox web terminal.
Does that make sense?
You got to log into your web UI.
Yeah.
And open terminal.
Things might have changed.
Yeah.
Things might have changed since then.
but if you run it in your shell,
you know, you're not executing it,
I think is the right way to put it.
So, and that might have changed.
They might have made changes.
But I know for sure if you run it in the terminal from the web,
it works.
So yeah, set up, set up Grafana, set up Prometheus.
It's all right there.
Wow.
Yeah, it's pretty awesome.
That is pretty cool.
I didn't even know this existed.
I've been using Proxmox forever.
And learn something new every single
day. What's left? What's left on your thought list? I know you made a list.
I did make a list. Anything else on your list is like, man, did we can't end the show without talking
about this? No, a lot of it was just, you know, I don't want to say complaints, but you know,
just just the state we're in with how expensive things are. Nothing's available. But software is
getting better. So that's a huge upside. And, you know, people are making awesome things using a lot of
tools and getting ideas outside of their heads.
A lot of awesome self-hosted stuff in the open source world.
And it's, I love it.
I love it.
I went from this,
I feel like I went from this drought of software.
I don't know,
a couple of years ago where I'm like,
yeah,
you know,
I've seen all the Docker containers people run at home.
To now I'm like,
oh my gosh,
there's this whole like new world of people building these new containers of
things I can run that I've never even heard of before,
you know,
based on AI or maybe,
maybe not.
And I feel, like, so refreshed, because now I can try all these apps again.
So yeah, these are like for me, it's like server apps.
Think of it like that.
Mm-hmm.
So.
I think this year's going to be a wild year.
I would definitely encourage you to unleash Claude on your UDM, whatever you might have.
And just say like, help me just examine my VLAN scenario, examine my rules and my profiles.
And it might be like, you know what, Tim, you're pretty good.
Or it might be like, you know what, Tim, I can help you here.
you know.
Oh, it's probably going to say like, why do you have these duplicate rules?
Like, I, I, there are probably rules in there that I either never deleted.
Like, I am not a firewall rule expert.
I'll be the first to say is like, I try it until it works.
You know what I mean?
Like, it's, it's always guessed for me.
Well, let me just say that I set up my VLANs based on your video and it was upset with me.
Just saying.
It's operator.
Just saying.
Just saying.
And I really didn't know much about VLANs.
I was like, you know, I've been told, hey, if you run a network, you should have VLANs, because
you want your kids to be on one thing, which I totally agree. You know, you want your guest to be on
another, and I totally agree with that too. And then everything else is on this trusted network,
which I totally agree with. But then all this intermingling, I'm like, well,
just because it's an Nvidia Shield, should I put it on my IoT thing? And then the answer is no.
The answer should be, it should be in trusted. It just needs access to too much, you know?
And there's other things, like the dumb things
that you definitely don't want on your trusted network; those go on the IoT VLAN.
So the VLANs I'm for sure I want are trusted, kids, IoT, and guests.
Yep. Yep. Yeah, that's that's kind of where I've settled too. But then I have all of my
networking equipment, you know, on a separate one. Like it's, you know, it's on the default one.
My trusted is its own VLAN. I do have one more and then cameras. All my cameras go on one
VLAN. But yes, I agree. You've got
to think about like it's not necessarily what the device is. It's kind of the role that it plays
and how much do you trust it? Yeah. And so some people might say, yeah, put put my home pod on IOT.
I don't want my home pod. You know, that's IoT. Well, if you're asking, if you give it access to
your schedule and you're telling it to turn on your lights in your home, I think you kind of trust that
thing, you know, enough to put it on your trusted network. And so that's the way I kind of think of it now is
Like, what does it need access to and how hard is it to go to, you know, cross the chasm, you know, like home assistant.
Like, I do trust that thing, but I don't want to write a thousand firewall rules for it to go talk to everything in IoT.
So I put it on the IoT because I don't want to do the opposite.
So, yeah, it's, even my Xbox for a while.
I was like, die hard.
Like, no, you are IoT.
But I'm just like, you know, I put my password into this thing and I play games.
it's going to go on my trusted network because I don't want to cross the network just to do other things, you know?
I think you just got to worry about if that thing ever gets circumvented.
That's the main thing is like if it gets.
Yeah, yeah.
It's all about limiting your blast radius, limiting your blast radius and how comfortable you are with that, you know?
Yeah.
I think this year, though, is the year of AI in the Home Lab.
I know it has been already for me.
And I do lots of stuff across machines with Claude.
I don't just do it on the single machine I'm on.
I'm not only using it to build software or to build little itches and scratches and stuff like that.
It really is like the moment, for example, when PXM gives me that IP address back,
I take that info report it gives me, which I've designed to be agent friendly.
In that case, I'm the CLI or I'm the API.
I copy and paste it into the agent.
I say, here you go.
Here's your machine.
And then it logs in because it's got my SSH key.
And it's like, okay, sweet.
It's a brand new, brand new, you know, base image of Ubuntu.
Let me build your world here for you.
Here's the bash script it wants to write here.
It stores it in, you know, in Git in the repository and maybe a deploy file or deploy
directory.
And we always make it idempotent, you know, so that way if we want to rerun or something like
that or if it needs a different one, maybe a post or a pre-install, who knows what.
But that thing has just been so cool to just unleash like that.
So I think this year will probably be your year, too, of AI and your home lab.
And that's kind of fun.
New worlds, new capabilities, Tim.
Yeah, I definitely need to unleash some agents here and try some of that, too.
I've been, you know, I do run a ton of AI stuff.
And I've been, you know, doing a lot of LLM stuff, especially.
But agents, yeah, it would be cool to turn some stuff loose in my proxmox cluster and just say, go build some stuff.
rather than using an Ansible playbook.
Just go for it.
Rather than one year later, Tim,
we should talk in three months.
My prescription, if I'm your doctor,
Dr. Stacoviak here, okay,
is your prescription is go unleash Claude
on your home lab network
and do some cool stuff.
And come back in three months
and tell me some tales.
Okay.
Because I guarantee you,
I guarantee you come back a whole different Tim.
I bet.
You know, it's just like, again,
like a lot of people probably,
going through this.
Like, you know, I've had the tools that I've used and I've designed the tools and I
know they work and I'm comfortable with them and I write CICD pipelines.
But maybe I should just kind of just give that to AI and tell it the outcome I want and not
worry so much about how it gets there.
Especially on the low-stakes stuff.
You know, if you just got this little thing you want to do, what's the harm?
You wouldn't have written it anyways.
Who cares about the code?
Why do you need a code review, dude?
All you need is a code viewer.
Your VS code is now, that stands for code viewer now.
I'm just kidding.
I'm just kidding.
It stands for, uh, continue, continue, continue.
Right.
Yeah, exactly.
Well, I mean, you can also automate a lot of that stuff too, to be in YOLO mode.
So a lot of people will say YOLO mode for some things.
That's kind of Ralph Wiggum, that loop there.
The Ralph, the Ralph Loop is if you, if this is your first time hearing about it, Tim,
you're going to hear a lot more about it soon.
Because it's the, it's the beginning of what's going to come when it comes to being
well-engineered.
Like, it doesn't remove you as an engineer.
You still get to be an engineer.
So be a good engineer with a good to-do list or a good spec and unleash Ralph, that Ralph loop on it.
And, you know, if the code works and it pass tests and these different things around security,
well, then who cares really?
In the end, is it the best idiomatic Go?
I mean, I kind of care if I'm, if I'm, like, maintaining this thing.
but if the agent's maintaining it and I didn't have the software yesterday,
I need it today and now it's here and it solves my problem.
And I don't have the time to maintain it anyways.
It's like, did the tree fall in the woods, and did you hear it anyway?
Like it's that whole thing, you know?
It doesn't really matter.
Well, kind of related to that.
One thing I did learn too is, you know, after having AI write some code, having it
write tests too is super important.
I've noticed.
Yes.
So I've noticed like anything I want to keep, have it write tests, because not only does it prove that it works,
it's a good hint for it to understand how the code works, just like humans.
You know, anytime, like, when I review someone's code, I'm like, where are the tests? Not because I'm asking where your tests are,
but because in your tests, I can kind of figure out, like, what you were thinking when you wrote this and what you're trying to do.
So I've noticed that too is like anything you care about, you know, that you have AI writing, have it write some tests too.
It costs you what?
Some tokens, not really any brain power. But you tell them to do it too, because it's
good.
It just helps, you know, your agent understand in the future what it was doing before.
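Even throwaway shell scripts an agent spits out can carry a tiny test harness in that spirit. A minimal sketch; the `assert_eq` and `normalize_host` helpers here are invented for illustration, not taken from anyone's real scripts:

```shell
#!/bin/sh
# Tiny assertion helper: compare expected vs actual, fail loudly.
assert_eq() {
    expected="$1"; actual="$2"; msg="$3"
    if [ "$expected" != "$actual" ]; then
        echo "FAIL: $msg (expected '$expected', got '$actual')" >&2
        exit 1
    fi
    echo "ok: $msg"
}

# Function under test: normalize a hostname the way a provisioning
# script might (lowercase, strip any trailing dot).
normalize_host() {
    printf '%s' "$1" | tr 'A-Z' 'a-z' | sed 's/\.$//'
}

assert_eq "pve01.lan" "$(normalize_host "PVE01.LAN.")" "lowercases and strips dot"
```

The assertions double as the documentation Tim is describing: reading them tells the next agent, or the next human, what the code was supposed to do.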
YouTube.com.
What is your, what is your, do you say what your YouTube is out loud?
Just Techno Tim.
I usually just, yeah, just tell people to Google Techno Tim.
It's technotim.com now.
Huge, huge change. No more technotim.live.
I paid the squatter. Now I'm big time. I have a .com now. I had to put tons
of redirects in place. I was like, redirects everywhere. Actually, I did them on the edge, on Cloudflare,
most of them. But then I had to, like, find my old links. Then I have link shorteners, then my email.
And then I set up aliases and cut that over, and a lot of DNS stuff. I actually did it in about a day,
with help from AI. I did it in about a day. So it wasn't, it wasn't that difficult. It was just
A lot of things to remember to do.
And still, I think I still broke something.
But at least I didn't,
at least I didn't lock myself out of my own email.
So if you ever change domains,
it's, it's, it's a lot.
Do it as early as possible or never at all.
Yes, my gosh.
It's like renaming something.
It's the worst, especially if, yeah,
it's just the worst.
Pick a good name from the beginning.
Do what you can to never.
have to change your domain ever, ever, ever.
Yeah, yeah, never name your business after a street or a product unless it's like
Main Street.
Yes.
I noticed that with so many, you know, local businesses.
It's like, you know, we're, I don't know, we're whatever boulevard, but they're not on
whatever Boulevard anymore.
It's like, oh, wait.
That made sense.
Yeah.
We actually have here in dripping springs, you have a Mercer dance hall.
And it used to be on Mercer.
and it's not anymore because Mercer's real estate got more expensive.
Now they're on like Route 12.
Yeah, see?
I was like, well, you're not Mercer Dance Hall anymore.
It's like, where do you go to Mercer Dance Hall?
Route 12.
Everybody knows Mercer Street here.
You know, it's like, well, I'm here at Mercer.
Where's the dance hall?
It ain't there anymore.
It's somewhere else.
Yeah.
Yeah.
Yeah.
That and a product.
Like, you know, yeah.
Or like a price, you know, dollar store.
They're not even dollar store anymore.
It's like the dollar twenty-five store, you know, but
that one's kind of generic, but
you know, if it was like
everything's a dollar or everything's
$5 and it goes up to $10, then you're
in big trouble. Well, there was another
story called something five.
Oh, a five below?
Five below. Yeah. You can
go in there and buy things above five bucks.
That is right.
What is up with that? Yeah. Now, I get
the color is blue and they make it like it's cold,
but that's right. Like, your
double entendre is, no, it's a single entendre
now. You know, like, come on.
You know.
That is right.
Yeah, yeah.
I get, yeah, you better keep a five below in there, you know, temperature-wise because, you know,
not everything's below $5.
That's right.
That's right.
Well, everyone, technotim.com.
Check that out.
Thank you, Tim, for just exploring this fun world of HomeLab with me every year.
But my prescription is, go away.
And instead of coming back a year later, come back in three months and tell me about your new
world. I want to see Tim's new world in three months when you just unleash AI, even more so,
these agents on your home lab. I'm here for it, man. I love talking to you too. It's always a pleasure.
You give me so many ideas. And now I think I have a lot of ideas. I want to do something right after
this next video. But yeah, I'm excited. I'm glad to be here. And it's always nice talking to you, man.
Yeah, same, same, Tim. Good seeing you. Glad you're well. Bye, y'all. Bye, friends.
Bye friends.
A new year, fun time with Techno Tim.
Always fun digging in and talking about the future of HomeLab with Tim every single year.
Hopping into this show.
This is the year of software.
More software getting built.
More software developers coming in.
More people building more software.
More, more, more.
I don't know about you, but my Home Lab is super active.
My Proxmox is basically on fire over there.
It's warm in my office because my stuff is always going.
so hot. It's kind of fun. I want to hear from you, though. What's your homelab like?
Hanging out at my new fun hangout, howitworks.club. That's a fun place to be.
If you're not there, you're wrong. You should check it out: howitworks.club.
And of course, hang out in Zulip: changelog.com/community. It's free to join.
I'm there. You're there. Well, you're going to be there. I mean, go to that URL and sign up.
It's totally free. And hang out in our HomeLab channel and talk HomeLab tech with us,
because that's fun. But we have some awesome
sponsors: depot.dev, our friends over at Tiger Data, and our friends over at Notion.
depot.dev, tigerdata.com, and notion.com/changelog. Those are fun URLs to go to.
Check them out. They love us. They support us. And of course, to our friends, our partners,
our hosts, where we host our sprites, our machines, our apps, our everything. Fly.io.
If you're not using Fly, well, that's just sad, man. Again, fly.io.
And to the beat freak in residence, Breakmaster Cylinder: the banging beats keep flowing. Love those beats, and I love Breakmaster.
All right, friends, the show's done. We'll see you next week.
