The Changelog: Software Development, Open Source - Kaizen! Mop-up job (Friends)
Episode Date: October 24, 2025. It's our first Kaizen after the big Pipely launch in Denver and we have some serious mopping to do. Along the way, we brainstorm the next get-together, check out our new cache hit/miss ratio, give Pipely a deep speed test, discuss open video standards, and more!
Transcript
Welcome to ChangeLog and Friends, a weekly talk show about Rattlesnake Encounters.
Thanks, as always, to our partners at fly.io, that's the public cloud built for developers to ship.
We love Fly. You might too. Learn all about it at fly.io. Okay, let's Kaizen.
Well, friends, I am here with a new friend of mine, Scott Dietzen, CEO of Augment Code. I'm excited about this.
Augment taps into your team's collective knowledge, your code base, your documentation, your dependencies.
It is the most context-aware developer AI, so you won't just code faster.
also build smarter. It's an ask me anything for your code. It's your deep thinking buddy.
It's your stay-in-flow antidote. Okay, Scott, so for the foreseeable future, AI-assisted is here to stay.
It's just a matter of getting the AI to be a better assistant. And in particular, I want help on the
thinking part, not necessarily the coding part. Can you speak to the thinking problem versus the
coding problem and the potential false dichotomy there? A couple of different points to make.
You know, AIs have gotten good at making incremental changes, at least when they understand the customer's software.
So first, the biggest limitation that these AIs have today is they really don't understand anything about your codebase.
If you take GitHub Copilot, for example, it's like a fresh college graduate: it understands some programming languages and algorithms, but doesn't understand what you're trying to do.
And as a result of that, something like two-thirds of the community, on average, drops off of the product, especially the expert developers.
Augment is different.
We use retrieval augmented generation to deeply mine the knowledge that's inherent inside your codebase.
So we are a copilot that is an expert and that can help you navigate the codebase, help you find issues and fix them and resolve them over time, much more quickly than you can trying to tutor up a novice on your software.
So you're often compared to GitHub Copilot.
I've got to imagine that you have a hot take.
What's your hot take on GitHub Copilot?
I think it was a great 1.0 product, and I think they've done a huge service in promoting AI.
But I think the game has changed.
We have moved from AIs that are new college graduates to, in effect,
AIs that are now among the best developers in your codebase.
And that difference is a profound one for software engineering in particular.
You know, if you're writing a new application from scratch, you want a web page that'll play
tick-tac-toe, piece of cake to crank that out.
But if you're looking at, you know, a tens of millions of line codebase, like many of our customers, Lemonade is one of them.
I mean, 10 million line mono repo, as they move engineers inside and around that code base and hire new engineers, just the workload on senior developers to mentor people into areas of the code base they're not familiar with is hugely painful.
An AI that knows the answer and is available 7 by 24, so you don't have to interrupt anybody, and can help coach you through whatever you're trying to work on,
is hugely empowering to an engineer working in unfamiliar code.
Very cool.
Well, friends, Augment Code is developer AI that uses deep understanding of your large code base
and how you build software to deliver personalized code suggestions and insights.
A good next step is to go to augmentcode.com.
That's A-U-G-M-E-N-T-C-O-D-E dot com.
Request a free trial, contact sales, or if you're an open-source project,
Augment is free for you to use. Learn more at augmentcode.com. That's A-U-G-M-E-N-T-C-O-D-E dot com. Augmentcode.com.
All right, Kaizen! Gerhard, we are here to Kaizen, the first Kaizen after our on-stage Kaizen. Yeah, how's it
going? How's your life? Your life has changed since then. It has, yes. Yes, a new job that started in early
September. Actually, September brought in that change. It was a good change. So that's, as you know...
new jobs are always exciting. There's always, like, so many things to do, and you know, we have to
meet everyone, onboard, do things properly. So that was really the whirlwind.
I really enjoy that very much. Um, well, obviously before that, even we had like our family holiday,
which was amazing. I always, you know, love going outdoors and spending time in the proper outdoors,
you know, like not the, um... well, the mountains. That's what I basically mean by the proper outdoors.
Exactly. The proper outdoors for me, it's mountains and lakes and, you know, so that just goes together.
Yeah, which is why Denver was really, really nice.
really enjoy that. You know, it was close to what I enjoy. But otherwise, I mean, how,
how is it October already? That's what I don't know. Right. It was July and then now it's
October. How did that happen? Well, there's this thing called time. Every 24 hours. It just
keeps going. Yeah, I'm with you on that. I don't know how it's October. I feel like
it was just January. I know, wow. I feel like it was a very fast year. Yeah.
this year for some reason.
It was going slow, and then it got really, really fast.
That's kind of how life goes, right?
It starts off slow, and you're like 10 years old wishing you were an adult,
you know, and you're like, when am I going to be able to drive?
When am I going to be able to drink legally?
I don't know what the 10-year-olds want to do anymore, but back when we were kids.
And then you get there and you're like, slow down life.
And then now you're all of a sudden you're in your 40s and you're like, holy cow.
I am this Daisy, man.
A vapor, totally.
Tell us about the new job.
What are you doing now?
You spent years at Dagger.
Yeah, it was almost four years.
It was time for change.
The new company is called Loophole Labs.
They're focusing on infrastructure primitives.
Some really interesting things that revolve around live migration.
That is definitely the central piece.
And when I say live migration, I mean the memory, the disk, the connections.
How do you live migrate connections from one host to another?
I find that really interesting.
What's most interesting is, like, the sheer
size. So if you have 64 gigabytes to migrate, how do you do that in milliseconds? I mean,
what does your connection even need to look like? Migrating between physical hosts. Between physical hosts?
Really? Yep. 64 gigs? You might need, like, a 100-gigabit home lab to do that. Exactly. That's coming
up. Yes, that is coming up, indeed. So that just sent me down that rabbit hole. Like, okay, so I need a new
home lab, basically, because 10 gigabits is just not enough. So what would 100 gigabit look like? Yeah.
Yeah, I know what it looks like at 400. I've been watching your channel, dude. All right, what did you think?
I'm loving it. I love seeing the... you got lots of engagement, you know. That video blew up, the
hundred-gigabit home lab. I love the production quality. I know you sweat the details.
It's fun to watch, and the narrative is cool, and yeah, I'm just happy for you. It's going well. Yeah.
No, I'm just so pleased, and it's part of today's Kaizen as well.
Oh, it is.
Everything connects.
Everything connects, seriously.
Continuous connection.
Adam, have you seen any of Gerhard's recent videos?
Not as recent, but I've seen them before.
Not this 100 gigabits.
I'm a little jealous.
You'll like this one because it's kind of up your wheelhouse, although his home lab is better
than yours.
I mean, that's the sad part.
I mean, happy for him, but sad for you.
So I think this is going to be almost like a challenge.
like, how can we improve the home labs? So Linux, Ubuntu, Arch, a couple of things are
going to come up. Networks, because they're really interesting. GPUs. Like, how do you run all this
stuff in a way that does not break the bank? Because that's the other consideration. I'm not
a data center. I wish I was, but I'm not. I'm very sensitive to noise. So I can't have like fans
blaring and, like, you know, 1U, 2U servers, the really shrill ones. So I'm very
much a Noctua person that goes for whisper-quiet everything, even fanless
if it's possible. I tried fanless for 100 gigabit and it runs really hot. That's the one thing
which I was not expecting, just how hot these things run. 400 gigabit is even more crazy and 800. Wow.
So that's really like the next frontier. That's what I'm looking at, 400, 800 and beyond. And all this
to service some workloads that have very sensitive latencies in terms of throughput, latency,
I mean, how do you run a remote GPU?
I mean, that's crazy.
What I mean by that, the GPU's in a rack, you're on your laptop,
and you're running against the GPU.
So you have an Nvidia GPU on your laptop.
How does that even work?
So, like, the software on your laptop believes there's a GPU available to it,
but it's over the network.
The kernel.
Oh, the kernel extension.
And it's presented as a local device, but it's actually a remote device.
So it intercepts all the calls, and it has to have very low latency and a very fast
network to carry all the stuff around, and it makes it appear as if it was local.
That's amazing.
Nvidia GPUs in MacBook Pros.
I never thought I would see the day.
That's crazy.
Yeah, I know.
So that's like a preview into the new job and what's coming.
It's only been like a month.
I can't believe it's already been a month.
But it all connects.
Really all connects.
And that's a really important part.
So all roads lead to Changelog.
Isn't that what Adam keeps saying?
That's right.
All roads lead to Changelog.
Yes, sir.
There are many ways to get to Changelog.
Indeed.
Do you have a visual aid for us this Kaizen?
Do you have a deck?
Oh, yes.
You bring your deck?
Oh, yes.
Always.
You know, always.
So you need to try screen sharing.
October 16th, I'm going to try an intro.
See if you like it.
Okay.
I'm going to try and make the job easier for, like, the intro.
Okay.
It's October 16th, 2025, and you're listening to and watching Kaizen,
where Adam, Jared, and Gerhard do some mopping.
Not moping, mopping, double P, mopping.
Mopping.
Yeah, not moping.
So, yeah, so stick with us to the end so that you can rate our mopping performance.
That's the plan.
I'm so good at mopping.
I'm going to find out how good I am.
Let's do that.
Well, the audience gets to rate us.
So at the end of this, this is all, again, leading to,
some performance rating, some mopping performance rating.
I'm getting nervous already.
Here we go.
Launching Pipely together was one of my 2025 highlights.
I mean, seriously, it was just so amazing.
After 18 months of building Pipely in the open with friends,
we shipped it on stage in Denver, and it was so awesome.
Seriously, such a great feeling.
The audience was clapping.
Jared and Adam were smiling
I was so proud of what we have achieved
really really really proud
but do you remember
what happened
a few hours right after
we did the stage bit
hiking
yes
And in between that, before that, or something else?
lunch
lunch yes yes that's right
Adam help me out here
I don't know, what do you remember?
A selfie?
Yeah.
Well, this is not a selfie, technically.
What are you trying to get?
Take us where you're, this is a gangster selfie right there.
This is like a gangster selfie.
I really like it.
I think it's one of my favorite pictures.
Technically not a selfie, but that's a great picture right there.
Yeah.
A Kaizen for next time, an improvement suggestion for next time.
Emo Night. Emo Night, Brooklyn.
I think we have to just.
Oh, that's the other.
That's the other thing.
Underneath the marquee, which we made sure
that they put The Changelog Podcast on their marquee.
And underneath there,
apparently that night is going to be emo night, Brooklyn,
which is a weird thing to have in Denver, but...
Yeah.
And it reads as if the changelog podcast, that's what's happening.
It's our emo night.
There should be like a line between the two.
All right.
So we wrapped it up at the venue.
We went to lunch and after we went to a hike.
That's right.
But right after lunch,
as I was getting ready for the hike,
I thought to myself, let me check the Fly.io metrics.
Okay. Now I know where you're going with this. All right. So for those who are listening,
we are looking at a point-in-time snapshot of the Fly.io app Grafana dashboard for
Pipedream. So for the CDN that we just launched, we're just looking at that screenshot.
So I think I was mentioning Pipedream and Pipely. Do you still
remember, Jared, the difference between the two? I do. So Pipely is the software, it's the open source
project that allows us to run Pipedream, which is our instance of Pipely running on Fly's
network that actually serves as our CDN. Correct? That's right. Yeah. Yes, that is correct. Yes. And Pipedream...
Pipedream is... I think I just told you what Pipedream was. Sorry, yes, you did. That's a trick question.
You want me to pop that up for you here?
You just want me to say it cleaner?
Yeah, Pipedream is our Pipely instances running as a distributed network around the world on Fly's network.
That's it.
Yes.
Okay.
Pipely, the generic one.
Pipedream, the specific one, just for us.
Because it was our pipe dream.
It was, yes, exactly.
But Pipely is for everybody.
It's not just for us.
Exactly.
That's the, yes.
That's where we're going with this.
That's right.
So we're looking at this Grafana dashboard.
And clockwise, we see PoPs by traffic in North America (that is the top left),
network I/O in the top right, and CPU and memory utilization at the bottom. What stands out
to you? CPU utilization stands out to me, because at about 14:37, it went nuts. That's it. Yeah,
yeah, yeah. So that's exactly what happened. 100% nuts. Yeah, yeah, yeah. It was crazy.
There's a title: 100% nuts. 100% nuts. Like, CPU just freaked out. So after we updated the
DNS on stage, once DNS propagated and more traffic started reaching our shiny
new CDN, some instances were just getting overwhelmed.
And users hitting those PoPs, they started experiencing a slow and unresponsive changelog.com.
We're going to pretend that there's a bad boys, bad boys meme here.
Okay.
We bad boys did.
So, basically, what's the bad boys meme?
I don't even know if I know this meme.
You know, like when, when like stuff blows up and they just walk away without walking.
Oh, they're walking in front and the explosion.
And stuff blows up behind them.
Okay.
So that's exactly what, what I did.
I have to take responsibility for this.
You exploded everything.
Pretty much.
Pretty much.
And you walked away.
You went to lunch.
Yeah.
Yeah.
So honestly, I under provisioned.
Okay.
So our CDN was running on these tiny instances and just didn't get very far.
So we're looking at an impossibly tiny bike.
that someone is actually riding.
That's what we're saying right now.
I can't even believe it can ride that bike, by the way.
We're watching a video.
I don't know.
Is this on, okay, it doesn't matter.
There's a video with a bike and a dude,
a very small bike and it's impossible.
If you're watching this,
if you're watching this,
maybe this will make it part of the B-roll.
Who knows?
We'll see.
We'll see what the edits will decide.
But I under-provisioned.
I mean, that's really what happened.
Okay.
And I know that you've heard me say this many times
over the years in the Kaizens.
I'll say it again.
Always blue-green.
Always.
And in this case, it meant that all the previous infrastructure remained in place.
We just deleted some DNS entries onstage in Denver.
And because we took that approach, really all I had to do was to add the DNS records back
and confirm that the traffic was coming back to life.
Everything was healthy again, so it just took minutes.
It took maybe 10, 15 minutes for everything to propagate up to 30, but that was just like the edges.
And everything was back to normal.
And because we blue-greened, there was nothing that could not be undone rather quickly.
Explain the concept of blue-green for those of us who aren't SREs.
So blue-green in this context means whenever you introduce a change, try to introduce it alongside what you already
have, what you're already running. Because if you run the new system in parallel, it means that
it's very easy to go back to the previous system. In this case, one is blue and one is green.
So rather than doing an in-place replace, replacing in-place whatever you're running, doing an in-place
upgrade or taking things down and, you know, putting the new things up, you don't want to do that.
You want to run two of the same setup and then either gradually, which is what we did, we were
gradually migrating traffic.
That's why we knew that 20% was good, 30% was good.
So again, we're not noobs.
We've done this before.
But in this case, I just could not estimate how much traffic would hit the new
instances, and especially bot traffic.
I mean, there was, like, a lot of bot traffic.
I don't know whether it was LLM scraping.
I don't know what exactly it was, but there was a lot of traffic hitting specific
instances and they were just like falling over.
They didn't have enough memory.
they didn't have enough CPU.
So in this case, because all of the previous infrastructure was in place, updating the DNS record and in this case, adding some DNS records was enough for the previous IPs to start propagating and then everything continued working as before.
So the actual mechanism that you used in order to do a blue green with a CDN was running both CDNs concurrently and using DNS to just point different directions.
And so your rollback was literally just to go delete the new DNS entries, or a few of them.
We had options.
So when we were on stage, we were serving, I think, five IPs.
One IP was the new CDN and four IPs were the previous CDN, which meant that 20% of the requests, based on how DNS would resolve, would hit the new CDN.
Right.
And then everything else would go to the existing
CDN. Now, I think when we were on stage, I think we were maybe one in three, so we were
about 33%. It was more than one in five, because that's how we started, like 20%. Well, we took it to 100%.
We took it to 100%. We went to lunch. I checked the numbers. Like, crap, this thing is on fire.
Right. So, like, it's really, like, blowing up. So all I had to do was add back the previous IPs,
so that when DNS queries would hit... so that Fastly would serve some of our requests and Pipely would
serve others.
Exactly.
So we went back to about 33%.
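(For listeners picturing the DNS side of that blue-green split, here's a rough sketch. The addresses are documentation placeholders, not Changelog's real IPs; the mechanism is simply the mix of A records the apex resolves to.)

    # Illustrative only: placeholder documentation IPs, not the real records.
    # The share of A records pointing at the new CDN roughly equals its share of traffic.
    dig +short changelog.com A
    # -> 203.0.113.10        (Pipedream, the new CDN)   roughly 1 in 5 resolutions
    # -> 198.51.100.1 ... .4 (previous CDN)             roughly 4 in 5 resolutions
    # Rollback is just re-adding the previous IPs (or removing the new one) and waiting for TTLs.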
So they couldn't handle 100%
with the under-provisioning
that you had done with our fly VMs.
That's correct, yes.
They were too small.
They didn't have enough memory, not enough CPU,
and there were too few of them.
There were certain hotspots that needed more than one instance,
and that's what we did.
Okay, mop it up.
Gerhard, mop it up.
Exactly.
How was the mopping in that case?
Well, instead of dealing with a changelog.com incident,
we went hiking.
That's right.
Like nobody knew that this thing had happened, right?
And this is video.
So it's actually playing.
So we were relaxed.
We enjoyed great conversations.
Jared was looking for great drone footage to shoot for news.
That's right.
We were in the amazing Red Rocks area,
which is just where Jared is pointing very soon.
He's coming up to pointing it.
He was like, see, looking around,
where can I fly this thing?
Right.
And look at that.
let's go there. That's the Red Rocks area. Did you use any of the footage that you
shot, Jared? I did create one episode that featured B-roll from Denver. I'm not sure how much
or if any of that was actual drone footage. I had to fly the drone all the way across the
valley to get it over to Red Rocks, which was really kind of boring. Even if you speed it up,
it's kind of just like, there's a highway. And I can't remember. It's been a few
months.
I know we made a Denver version of Changelog News with the B-roll from Denver, but I
combined a bunch of stuff.
And probably a little bit slipped in, but I'm also not the best.
I'm getting better at flying the drone.
In fact, I have a show coming up next week by the time this publishes.
It'll be this week's Changelog News, where I'm following around the tractors
as they harvest.
I did the soybean harvest and now I got the corn harvest.
And I'm getting good.
Like I'm flying at like 10 feet above the tractor and
trying, you know, keeping it center frame and stuff.
So I wasn't very good at flying it then because it was pretty new.
But a little bit probably snuck in, but not enough to be exciting.
I mean, it was great to, to know that, to basically see how news is being put together,
where you just like literally take this drone out, fly it around, and then you decide whether
it was something worth using or not.
So basically, seeing that production.
So I think we had, like, a two-in-one at that point,
and that was really cool
on this hike
I would have missed
this rattlesnake
was it not for Matt
Matt Johnson that almost walked into it
you don't want to walk into a rattlesnake
by the way. No, you don't.
so yeah
but it was my first time seeing a rattlesnake
up close
and it may be a happy
memory because Matt was quick to react, but things could have turned out very, very differently.
Now, you have to imagine, this is summer, we were in shorts, we were engrossed in our nerdy talk,
right, and nearly walked into this rattlesnake. So it was a very, very close call.
It's the exact same thing as shipping an under-provisioned system into production.
Almost the same. Almost the same shorts even, probably.
Yeah, yeah, yeah.
Yeah. So, yeah, exactly.
Exactly. Same shorts.
Yeah, your shorts.
Shorts once again.
Yeah, that was a close one.
So this is the analogy to what we avoided, both in real life, but also on stage.
Now, the nice thing about a rattlesnake is, you know, God gave them those rattles.
And those rattles are very useful because a rattlesnake doesn't want to be messed with.
And so when you get close to a rattlesnake and don't know it, they'll let you know that you're close.
And that rattle, that shake is saying, get away from me.
However, I don't think we got any notifications, you know, to draw the analogy.
Our fly machines didn't say anything, did they?
Like, I didn't know.
There's no rattle on that side of the equation, was there?
Well, I did receive some emails that instances were running out of memory and crashing,
but it was happening after a while.
So that was maybe the equivalent of that.
Right.
But in this case, because we were so engrossed in the conversations, we never heard the rattles.
There you go.
Okay.
It was loud.
We weren't checking our emails.
There were drones buzzing around.
And, you know, it was a bit crazy.
So, yeah, we did not pay attention to that.
All right.
So if you're watching this next piece, and if you don't like meat, look away.
Oh, boy.
Oh, I know what this is going to be.
Just, just, just listen.
I'm going to play it for you.
Okay.
Adam, what you would like to describe what is happening?
Well, we asked very politely,
can you show us the kitchen and how things are done? And they said yes. And then they took about 20 minutes to prepare the tour, and they allowed us to come in there and behold exactly how the chefs prepare and prep the meat to come out to the very awesome Fogo de Chão patrons, which is kind of cool. And so there's Jared, uh, holding it like a beast about to eat it, but not, because that's somebody else's. Yeah, that's not my food. That's not your food, but you just had some versions of them.
It's not the snake.
No, it's not the snake.
That would have been
apropos, wouldn't it?
Yeah. This is not the snake.
Yeah.
Oh, man, that was really good.
Really, really good.
So, yeah.
That was a good thing.
It was like, yeah.
The funny thing about that one, too, though,
is that you never get what you don't ask for.
That's my... if there's one piece of advice I give anybody in the whole entire world,
it's like, you don't get what you don't ask for.
So ask, and you might get it.
And what Adam's referring to is the tour of the kitchen.
at Fogo de Chão.
Exactly.
Because I didn't, I wasn't going to ask.
I was ready to pay the check and go home.
And it was like, we had very kind, uh, wait staff.
And they were discussing with us talking about, did you know the actual chefs are the ones
that deliver the food?
I was like, I had no idea about that.
So they're giving us an insider look.
And Adam just said, can we see the kitchen?
Could you give us a tour?
And then he's like, yes, I can.
And we're all kind of awestruck because normally we're used to hearing no.
But yeah, don't, uh, if you don't ask, you don't receive.
So thank you, Adam, for getting us this sweet shot of the Changelog t-shirt on me while I eat that rattlesnake.
Or no, that's a pork chop, I think.
I don't know if that is.
That's a joke.
Looks like maybe, I'm going to guess.
Who can guess this?
I'm thinking of rib-eye.
Is that rib-eye?
Sirloin.
Looks like maybe sirloin.
That's a little too.
That's what I would have picked.
If they gave me the choice, it would be the sirloin, because that was to die for.
Potentially, picanha.
Is that picanha?
I don't know.
You're the connoisseur.
It's like pecan.
I can see the fat cap there on the left hand side.
I remember that.
That's so good.
And with fruit, I mean, that's how I like my meat with fruit.
And it was just amazing.
It was, like, one of the most amazing meals I've had in the U.S. ever.
Like, seriously, it's just like above, above them all, just so, so good.
The thing which I wanted to convey is that even though we had a short trip, it was only two days.
It was a very rich trip.
Many things happened.
It wasn't just about tech.
It was literally friends coming together and spending a bit of time.
And it just felt so good.
So natural.
It flew by.
I mean, we were joking about how fast time flies by.
But those two days were like... even, I remember when the border
agent was asking me, so how long are you staying for?
And I said, two days.
And it was really?
Two days?
Where are you coming from?
I said, the UK, really?
Can I see?
Like, he just would not believe me that I'm flying in for two days.
It was so worth it.
I'm happy to hear that, because that's a long trip for
you. It was really, really good. So this was the end of my Denver trip. And it is my
favorite thing that we did together, ever. So it was really good. And this is obviously leading
to something. What are your thoughts on doing this again next year? 2026. And I'll give you,
I'll give you a couple of moments to think about that. I didn't need any moment. There's a reason
I'll say, a hundred percent.
Jared, Jared.
Sorry, Jared, you answer.
Oh, I'm for it.
I'm already, I'm for it.
I mean, Adam was like, let's do it four times a year or something.
So we're talking about how many times, not if.
I'd say two to three.
Two to three, four's too much.
Two is for sure.
Three might be pushing it, but I'd love to do more live shows like this.
And twice a year is enough, in my opinion.
Three, four, maybe.
Two for sure.
Honestly, I'd be very happy if we do this again, like, at least a repeat. That'll be very nice.
It doesn't have to be the same place, but at least once a year would be a good habit, I think, to start
having. Because so many things happen around it, and it's not just a show, it's everything else.
And, um, I don't know how many people would want to join us now. Well, there's an opportunity to comment.
Here is an opportunity to comment. Yeah, you do have to be close to Denver or willing to travel.
And we had many people travel, which is kind of cool.
Now, you know that I like to do things well in advance.
For example, most of my next year's holidays are already booked.
The only one which I couldn't book is actually end of October
because flights only become available like a year before.
So I need to wait a few more weeks.
You're such a planner.
Holy cow.
Wow.
I have.
That's why I'm asking.
That's why he's asking.
Adam.
He wants to get something on the calendar.
Right.
So I think, I think it would be a good time, like, in the next...
Well, what's your opinion on Denver, Gerhard? Is that, like, our place now?
is it more like, that was fun. Let's go somewhere else and let a different subset of our
audience and friends have an easier time getting there. And we know Denver's not accessible for
everybody. Austin, obviously, would be another good choice because it's close to Adam and it's
close to lots of people that we know. I'm not sure if you got directs to Austin, Gerhard, but
I do. Yeah. You do. So thoughts on a new location versus right back to the mountains.
I'm all up for it. I've been to Austin. It's a really nice place. I had a good
time there. The Colorado River, which is a different Colorado River, was very nice to
kayak on. That was like a good experience, for example. And yeah, I mean, there's so many things
that we could do. The more interesting question would be, who would like to join? To see where is,
like, where do we have like the most loyal fans so that, you know, we were there for them as well
and they can join us and we can, you know, do something together. And do you want to do any
interviews? What does that look like? Will there be a conference before or after? So how do we
structure it so that it makes sense on all accounts? What about holidays? Because all of us have
holidays and other conferences. So what would make sense? Well, I think thank you for getting the
conversation started. We're definitely in to do it again. And I think it's time now in October
to start discussing the details of what that might look like. I don't think we have to wait until
next summer. But I haven't seen your calendar, Gerhard to know. As you know, I'm going to be
crossing the pond in May with my family, taking my daughter and wife over to France and Italy
and to celebrate her graduation. So that's going to be a big trip for my family. But I don't know
what yours looks like. I don't know what Adams looks like. So let's get talking and get some stuff figured
out to our listeners, you know, comment. Let us know, where should we go? When should we do it?
and what should it look like?
What would you like to be a part of?
Should it be just like this last one?
You know, an interview episode, a Kisen episode,
and then, you know, festivities around that.
Should it be more?
Should it be less?
Please let us know in the comments.
I think if you ask Adam, he'll say,
ChangelogCon.
The first change log conference ever.
Go, go, go.
Maybe.
I kind of like this as it was, though.
Honestly, I really,
I wouldn't mind having like some trusted
not like demos, but like some show and tell.
You know, I think there's a lot of like pontification from the stage.
I'd love to have some show and tell type stuff if that was a thing.
And maybe that's demos.
I'm thinking like oxide with their racks and stuff like that.
You know, that's kind of show and tell.
But, you know, I don't know.
I don't know.
I really just enjoyed exactly as it was.
Honestly, I think a lot of it was really good.
I think it was just short enough and just sweet enough that it was
doable and repeatable, and it was just an interview and just Kaizen on stage and just hanging out
with friends, and it was more community than it was, like, brands or, you know, what's-new-in-tech
kind of thing. I think it was just the right kind of feels, honestly. So, I don't know. Sure.
I haven't thought a lot about the design of it enough to want to change it much. Okay.
Well, we got the conversation started. We got some thoughts. We can put it on the back burner
and come back to it, maybe between now and the next Kaizen, see if we've reached any conclusions
or any suggestions which are firming up.
But this was good.
I think we just set a goal and say by the next Kaizen, we will have a date and a city.
And that way, we can at least save the date, start booking flights or whatever we have to do.
And then the details of what we're going to do while we're there, we can figure those out from
there, but at least in the next quarter
we should know for sure by the end of the year
what we're doing
where and when. I like
that. Okay. I like that very much actually.
While we're on that subject, I did have
my friend here that could
tell me a couple of details about Austin at least.
Which is Claude.
You know, the latest model.
They're actually suggesting Sonnet 4.5 right now over
Opus 4.1. They're calling it legacy.
They're calling Opus 4.1 legacy. That's kind of
funny. I was like brand new a month ago, and
it's legacy all of a sudden.
Anyways, it's hot here in Austin.
So the best month they say to come here is March through May or October through November.
And we're obviously in October now.
So we can't do it next month.
So I think if it was Austin, I would agree with this that it's before summer, not during
summer.
So March through May, somewhere in there, if it's Austin.
Yeah, I think that's a good time.
What if AI agents could work together just like developers do?
That's exactly what agency is making possible.
Spelled A-G-N-T-C-Y.
Agency is now an open source collective under the Linux Foundation, building the internet of agents.
This is a global collaboration layer where the AI agents can discover each other,
connect and execute multi-agent workflows across any framework.
Everything engineers need to build and deploy multi-agent software is now available to anyone building on agency,
including trusted identity and access management, open standards for agent discovery,
agent-to-agent communication protocols, and modular pieces you can remix for scalable systems.
This is a true collaboration from Cisco, Dell, Google Cloud, Red Hat, Oracle,
and more than 75 other companies all contributing to the next gen AI stack.
The code, the specs, the services, they're dropping, no strings attached.
Visit agntcy.org, that's A-G-N-T-C-Y dot org, to learn more and get involved.
Again, that's agency, A-G-N-T-C-Y dot org.
We're looking at all the different steps that we had
to take between being on stage in Denver...
Do you know which RC we were on there?
Just looking at this list.
It's a list on the Pipely repo, by the way; we're looking at the README.
All the various release candidates of 1.0 before going to 1.0.
I thought it would happen on stage,
or soon after. It didn't.
But it did happen now.
So we are beyond and we are running on 1.0.
If you look at 1.0 RC4, limit Varnish memory to 66%.
and that's the one commit which I pushed
that was on stage
there was next one
RC5
handle of orange JSON response failing on startup
and bump the instance size
to performance.
That was the scale-up that needed to happen.
So really, we didn't need much
in terms of resources: one CPU,
a performance CPU,
and 8 gigabytes of RAM. That was enough.
And then we could send
50% of the traffic,
and we were
on that 50% for quite some time.
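(As a rough sketch of what that RC5 scale-up translates to with flyctl. The app name is a placeholder, and the exact commands and sizes used aren't shown in the episode; flyctl's scale subcommands are the usual way to change machine size and instance count.)

    # One performance CPU and 8 GB of RAM per machine (placeholder app name).
    fly scale vm performance-1x --memory 8192 -a pipedream
    # And more than one instance for the hotspots.
    fly scale count 2 -a pipedream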
So RC7... there was, like, a lot.
RC6: more locations,
backend timeout.
Now, the one thing which was failing,
and this was discovered after,
I think, we were routing more and more traffic,
is that uploads were failing;
MP3 uploads were failing.
And that was pull request 39.
Do they work now, Adam?
I think so, yeah. I mean, I've been uploading.
The answer is yes.
Nice. Great.
I can't say no.
What... I can't say... I guess I can say yes.
Yes, it does.
Yeah.
They work.
Yes.
They work.
Yes.
Great.
So that was, like, the one thing, which I had Fastly hard-coded for a while.
Yeah.
Because it wasn't working and we needed to keep uploading MP3s.
And then when you asked us to test it, I removed that from my /etc/hosts.
That's right.
And I have had no issue.
I didn't report back because I was saving it for Kaizen, to tell you: yes, you fixed it.
Thank you.
Nice.
Great.
That's what I wanted to hear.
So you probably were hoping that I'd report back, but I didn't
say anything at all. I did test it, though. Well, as long as you don't have the hard-coded IPs...
That's what I was trying to hint at. Now it's a good time to remove them from everywhere.
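(For anyone wondering what "Fastly hard-coded in my /etc/hosts" means in practice, it's a one-line override like the sketch below; the IP here is a placeholder, not Fastly's real address.)

    # Temporary workaround (now removed): pin changelog.com to the old CDN so MP3
    # uploads kept working while the Pipedream upload path was being fixed.
    # 203.0.113.50 is a placeholder address.
    echo '203.0.113.50  changelog.com' | sudo tee -a /etc/hosts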
And because everyone else is going to Pipedream, so everyone's using the new CDN, and uploads
are also working, so there should be no more issues. And 100% of the traffic is being served
with 1.0. So 1.0 was tagged yesterday. This was only yesterday. Okay.
However, however, the traffic has been served through the new instance on, I think it was on the 5th.
Yes, on the 5th of October, everything switched across.
All the traffic... we're looking now at a screenshot from Honeycomb, which is showing the requests going to Fastly.
And we can see that after October 5th, they dropped.
And there's a few... there will always be a few hard-coded IPs, whatever the case may
be. It's not human traffic, that's for sure. It's just that... that sounds wrong. It's not people hitting
the website. Yeah, exactly. It's bots. So we have been running 100% on the new
system for more than 10 days now. And you may be wondering why it took so long, because, like,
that was July. Yeah. I wanted to be certain that everything worked fine. Like, after the last time, I
went extra, extra, extra cautious. I needed all the metrics. There were the summer holidays. I joined the new
startup. I was a bit busy. And I had to build the most insane home lab ever, which took me a while. But
that's going to come a bit later. So this is what it looked like. Now, do we remember, now that
everything is said and done, why did we need to build our own CDN? What's the reason behind it?
Frustration. Okay, that's a good one. Plus the hacker
spirit, plus our cache hit ratio was out of our own hands. We wanted it in our own hands.
Yeah. Yeah. It was like the previous screenshot. So this is the moment I turned off all traffic,
from, like, forever in this case, from Fastly. It was only a few days. But you can see that in those
few days, we had 155,000 cache hits... sorry, cache misses. 155,000 cache misses,
and we had 370,000 cache hits.
So the ratio does not look right.
That green line, the cache hits...
there were days when there were more,
or, like, periods, not days;
there were periods, up to maybe half an hour, an hour,
when there were more misses than hits,
and you do not expect a CDN to behave that way.
And by the way, this is across both changelog.com
and cdn.changelog.com,
so it includes both the static assets, everything.
It's just a small window, but it just shows the problem.
Now, as a percentage, that translates to 70.5%.
So 70.5% cache hits, and that is really not great.
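(Where the 70.5% comes from, if you want to check the math on that screenshot:)

    # cache hit ratio = hits / (hits + misses)
    echo 'scale=3; 370000 / (370000 + 155000)' | bc   # -> .704, i.e. roughly 70.5% cache hits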
Okay, I know you've been expecting this.
So let's see.
What do you think is our current cache hit versus miss ratio?
This is across all requests.
So now that we switched across, we have 10 days to measure this.
On the new system, what do you think?
is the cache hit versus miss ratio.
Now you're giving us four choices.
This is a multiple choice question.
Only one.
85% is A, B is 89%, C is 95% and D is 99%.
Adam, what are you thinking?
I'm locking in C, 95%.
Okay?
I'm going for the gusto.
I'm going 99%.
Whoa.
Straight and D.
I love to be wrong, but if I'm right, I'm going to be...
Ah, I'm so wrong.
89.5%.
So close. Both of us were wrong.
That's the answer.
Yeah.
How good would that have been, though?
Yeah, well...
Well, you know, some stuff is fresh.
It just is.
Yeah, I mean, we can improve on this.
We can't even...
I mean, it's not...
Now, the important thing is it went from 70% to 90%.
Mm-hmm.
Okay?
That was a big jump.
Now, 100%...
And it's in our hands now.
We can actually affect it.
Exactly.
Which is the last one before, it was just like...
We could only complain.
Now we can actually do stuff.
And even that, after a few years of complaining,
you just become tired of complaining.
Yeah.
Yeah, we grew weary.
We just build our own.
Okay, so can we do better?
Can we do better than 89.5%?
So I think everyone is thinking this.
But really, what I think we should be thinking is,
do we need to do better?
Do we need to do better than 89%?
Okay, so let's have a question.
Feeds. If you look at all the
feeds, we are at a 99.5% cache hit ratio; before, they were at 96.8%. So what are feeds, for our listeners
and watchers that don't know? Who wants to answer that? What are they? Yes, what are they?
These are XML files that represent the current state of our podcast syndication, our
podcasts, our episodes that we're shipping and have shipped. And so they're hit often by
robots who are scraping feeds in order to update their podcast indexes and let people know
which episodes are available. And they should be at 99.5% because they only change when we
publish a new episode, which is, at this point in our lives, three times a week: on a Monday, on a
Wednesday, and on a Friday. And every other request, every other day and time, is the same
exact content. That's it. So I would say that this is possibly the most important... yeah,
I would say the most important thing to serve, because if we don't serve feeds correctly,
how do you know what content Changelog has? How do you know when content updates? And this is,
like, worldwide. So I think this is pretty good. And improving on 99.5%, I don't think we
should do it. No. The homepage: before, the hit ratio was 18.8%. Oh my God.
I don't understand that.
This was my biggest issue for as long as I can remember.
Today, it's 98.5%.
And to our listener who's probably out there thinking,
you guys were certainly doing it wrong,
we spent years trying to do it differently.
Like, go back and listen to all the Kaizens
of us trying to change the way that we actually configure
and respond and our headers and our, I mean,
we tried.
And we ended up with 18%.
That's it. That's the best we could do. And so here we are. Oh, so bad. So bad. Now,
MP3s, which I would say are the second most important thing, were at 86%. Now it's 87.5%. Maybe this can be better.
Maybe this can be improved. We just need basically more memory or store them on disk, do a bunch of
things. Because by the way, caching is in memory. And I still need to understand why memory, once it gets
filled, it doesn't remain filled. So I've seen this weird behavior where after, like, a few
hours, memory starts dropping. But why? Because there's no pressure on memory. Why are objects
getting evicted? My assumption is that we're storing some large objects. And if there are
smaller objects that need to be stored in memory, the larger objects get evicted, which means that
the cache drops. But I would need to understand that a little bit better. Still, MP3s aren't worse
than they were before.
News.
I know this is something
that's very important to Jared.
It was 52.6%
cached before.
Now it's 83%.
So
improvements
across the board.
Now, we could improve it
and I think we,
especially news and MP3s,
I'd like to look into that,
but I think news is like
top of my list.
But is there anything else
that you think
that we should pay attention to
or in terms of
the cache hit ratio? Any other resources? No, I mean, MP3s are static assets. You could go look at
other static assets, images, etc. But I just don't think we want to squeeze this radish too hard.
I agree that news and probably taking a low-hanging fruit pass on the MP3 endpoints and seeing
what we could do there would probably bear some good fruit. But even those, I wouldn't,
like, spend hours and hours trying to make them much better.
What I'm thinking, I'll just basically double up the memory and see how it changes things,
which is just the config setting.
It'll take me maybe a minute.
That will be my first action item.
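(For context on what "just a config setting" means here: Varnish's in-memory cache is sized by the malloc storage argument when varnishd starts, and the RC4 note above was about capping that at 66% of the machine's RAM. A generic sketch, not Pipely's actual startup line:)

    # Generic varnishd sizing sketch (not Pipely's exact invocation).
    # 66% of an 8 GB machine is roughly 5.3 GB; doubling the cache means a bigger
    # -s malloc value, and in practice a bigger machine.
    varnishd -a :9000 -f /etc/varnish/default.vcl -s malloc,5g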
Okay.
Okay.
Yeah, I imagine news could be very similar to feeds because once news is published, it's similar to the feed.
It's not really, it's changing once a week, and once it's published, it likely never changes again.
It never has changed, right?
Now that we've moved all of our comments and everything exist elsewhere.
I mean, they're in Zulip, they're on YouTube.
So there's no comment feed there.
There's no reason for anything to really be new except on publish.
Yeah.
Okay.
So I'd say news is the one, like the feeds: pushing that to the boundary, because it doesn't change much.
I'd love to explore that when you do the MP3 exploration of large objects getting pushed out.
I'd love to just sit on your shoulder, I suppose, or as a fly on the wall kind of thing.
Just explore that with you.
I'm super curious about what makes that cache get purged out of the memory myself.
Yeah.
Well, pairing up is something that I'm getting better and better every day.
Recorded and published pairing sessions.
So Jared has the experience, not Adam.
I'm sorry.
Yeah, that sounds great.
Now, it gets better.
Oh, my God.
Okay.
It gets better.
What does?
All of it.
Do you recognize the scene?
Um, okay. So that was like, this is Johnny Mnemonic, right? No.
Is again? Johnny Mnemonic? Is this, uh, nope?
No, that's, uh, is that Hugh Jackman?
Yes. This is not, uh, this is not Wolverine. Swordfish.
This is swordfish.
Oh, no. I'm not sure. I'm very nervous right now, Gerhard.
That's okay. It's fine. It's, it's recorded.
Swordfish broke the show like previously.
Yes. Okay.
I'm ready for it
That doesn't mean that I'm ready for
You're going to play this live
For me right now on camera
I will, yes
So this clip is going to blow you away
Okay
I'm going to watch it
Okay
I'm hoping this is Sora
Okay
Halle Berry,
John Travolta...
Hugh Jackman has a gun to his head.
He's typing
He has to hack something
In a certain amount of time
Was it 30 seconds?
45 seconds, 45 seconds.
45.
Look at this, fingers.
Types very, very furiously.
He's typing furiously.
Oh, access denied.
He gets very disappointed when that happens.
He gets very disappointed.
Yes.
If access gets denied, he dies.
All right.
Okay.
So that was a moment of fun.
I wonder what the hell is for.
What?
It was, it was, it was, it's, things get better.
Okay.
Okay.
So that's a moment of disappointment.
Exactly.
But it's going to get better.
Even if you're under pressure and you have to deliver, things will be better.
So pipe dream gets better.
Okay.
Now, we looked at the cache hit ratio.
What I would like to look at next is the response time in seconds.
Okay.
These are all the feed requests before and after.
So the P50 for all the feed requests used to be two milliseconds.
And you would think, wow, that's pretty good.
Well, the new system is like half a millisecond.
So it's a four-time improvement.
The P-75 is 13 times better.
So for 75% of the users, the feed responses get served 13 times as quickly as they were before.
The P90, 95, 99, like, it gets progressively better, which means that the requests are
served much quicker at least four times as quick as they were before.
And you might be thinking, hey, that's bots. What about humans? So what about the homepage?
For 50% of the users, the homepage is 860 times quicker. That's nearly three orders of magnitude
quicker. That's a crazy amount quicker. Now, obviously, it is the fact that it
was not cached, like only 18% of the requests were cached.
Right.
But the page is instant.
Like now it's instant.
It's 0.0003 seconds; like, three zeros, then a three.
That's like a third of a millisecond.
That's nice.
So.
You're welcome, humans.
You're welcome humans.
Now, what does that look like?
I think 863 times is really difficult to imagine.
So I'm going to play something for you to see what it means.
So what we have here is one second at the top.
That's how long it takes.
No, hang on, I'm not playing it.
I should be playing it.
There you go.
Now I'm playing it.
Okay.
While 863 seconds at the bottom is still loading.
And it will continue loading for so long that we're not going to wait 15 minutes for this thing to load.
Okay?
We're not going to wait that.
So that's the difference between how far.
the homepage loads now
versus how it used to load before.
This is for the majority of the users.
So the cache hit ratio,
the connection there was that
everything was slow
and there's nothing we could do about it.
And I think slow is relative, right?
Because when you think, when you're talking about milliseconds,
I think there's about 50 or maybe 100 milliseconds
when things are nearly instant.
But in our case, the homepage was taking 150 milliseconds to get
served. And the more like the tail latency is really crazy. Like the tail latency was over a second
for the homepage to serve. That was a long time. By the way, this thing is still going and it's not
even like 10% there. What's the what's the rationale behind this video? Explain to me how this
is supposed to explain things. So the top one shows you how quickly it takes for one second
to go by. Finish. Yeah. Right. So the response, the one second response,
it just visualizes it how quickly that gets served.
The bottom one shows you the previous CDN: how long, in comparison, 863 seconds is.
So we have now, like, things loading in a second, or the equivalent of a second.
And before, things were taking 863 seconds to load.
They were?
The same request.
Longer.
Relative
Not absolutely
Okay
Exactly
So the one second
It represents
Like one unit,
A loading speed
Which is like a millisecond
But we can't visualize that
Because it's too fast
Exactly
And 863 seconds,
That's how much slower
It used to be
So it's a relative
That's great
Relative of one second
versus relative to milliseconds
Right
I understand
That way we can actually
visualize it
And now it's at 2.
Oh he reset it
Okay
I'm going backwards
It's like loading again
because it's finished.
So it's like 15 minutes versus one second
and then reduce that down to milliseconds
and we're basically that much faster.
Exactly.
15, yeah, waiting 15 minutes versus waiting a second.
That's exactly the speed difference
between what we used to have and what we have now.
Right.
It's just really fast.
And that was really, really fast.
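(A quick sanity check on those numbers, since the ratio and the visualization are two views of the same thing; the P50 figures discussed a bit further down are roughly 259 ms before versus about 0.3 ms now.)

    echo 'scale=0; 259 / 0.3' | bc   # -> 863, the ~863x median speedup
    echo 'scale=2; 863 / 60' | bc    # -> 14.38, so 863 seconds is the ~15-minute bar versus 1 second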
That was a CDN thing.
That was not, that was traversing DNS into CDN,
getting a cache hit or miss,
serving, you know,
rehydrating, getting new
from cache, that's where all the time was spent.
So in this case,
these were the responses from
the Varnish perspective.
So if I go back here
and we're looking at this table,
so this is the homepage,
the response time, how long does it take
to serve the homepage from Varnish's perspective?
Whether it's in the cache, or it needs to go
to the application and request it and then
eventually finish serving, the actual
response. So before P50, 259 milliseconds. That's how long it used to take. Yeah. And now it takes a third
of a millisecond. What changed specifically with varnish then? Well, caching. Most of the, like if we go
back to the table, if you go back here, now 98.5% of the requests are served from cache. We don't need to
go back... I mean, the homepage is almost always in cache. We very rarely have to go back to the
application to fulfill a request. Before, only 18% or 19% of the requests could be served from
cache. Yeah. So Varnish had to go to the application. The application had to serve the response
so that Varnish could serve the response back to the user, to the end user. You keep saying Varnish,
don't you mean vinyl? Yes, I do. I do.
Well, not yet.
Vinyl, yeah, vinyl is coming up.
There's a curveball.
Yeah, in January.
So it's not here yet.
For those not in the know, they're renaming the Varnish software.
They are, yeah.
Because of legal disputes, to Vinyl Cache.
So Varnish Cache, the open source project, will be renamed to Vinyl Cache.
That's correct.
Whereas Varnish Software, the company, will continue as Varnish Software, the company.
I'm going to jump ahead.
I mean, this is like, yeah, Jared is coming from the future now.
I am.
So, PHK, Poul-Henning Kamp, he posted on September 15th.
This is on varnish-cache.org.
It was just about a month ago.
He wrote: 20 years old, and it is time to get serious, sir.
That is the title of the blog post.
And he talks about the open source... yeah, the open source Varnish Cache rename.
Some legal disputes, indeed.
You can go and read it up.
But basically, Varnish 8.0, the
open source Varnish, 8.0 will be the last one that will be called Varnish Cache.
So from March next year, and I think this is very interesting, it will be Vinyl.
The name of Varnish, the open source Varnish, will get renamed to Vinyl.
Okay, now I know we have to have, as a guest on our interview in Austin, PHK,
launching Vinyl Cache live on stage. I think he lives over on your side of the ocean.
Or Berlin, yeah, because.
Yeah, he probably wouldn't come to Austin, but we can try.
So that's, yeah.
All right.
So speaking of speed, this one's for Adam.
Love speed.
You know that I joined Loophole Labs.
That's it.
So what we do, it requires a really fast low latency network.
And by that, I mean at least 100 gigabits.
Okay, it needs to have sub 1 millisecond latency.
So about two months ago, I started building a new 100 gigabit home lab.
And I know that Adam has been asking me for a real long time to do a video on my home lab.
So Jared already watched it a few weeks ago.
This is live.
One of the easiest ways is to go to yt.makeitwork.tv;
that will take you to YouTube, and there you can go and watch it.
And I take you through the entire journey, why I had to build it, a couple of interesting things.
Yeah, it's... how did you... I mean, what did you think about it, Jared?
The portions that you managed to watch.
Yeah, like I said earlier, I thought it was really good.
It's cool. You get... what did it miss? How would I improve it? Maybe that's a better question.
What would have made the video better? What would have made the video better... I don't know, in terms
of, like, content that was missing. I feel like the way that you do your summarized
intro is compelling, but sometimes it, like, jump cuts so fast that it's sometimes hard for me to track
exactly. So it's, like, a hard thing to give feedback on. I'd have to give you specifics
for you to actually know what you're, but you do, even though I'm like compelled to continue
watching and I do enjoy it, there are times where I'm confused. And so I think as you continue
to refine that, because I know it's a, it's a style that you're doing and you've gotten better
at it since, because I've seen some of your videos from a year ago. And it's definitely
getting there where I'm like, I look forward to it. But I also sometimes am not sure what's
going on. I'm not sure if that's on purpose. Maybe it is to keep me.
intrigued as there are all kinds of techniques to keep people watching on social media.
But I think you can continue to refine the narrative that kicks off from the beginning
in making it more cohesive or just laying more breadcrumbs perhaps for the person who's not
initiated because you're so deep into what happened and you know the whole story from front
to back and you're telling it and that's great.
But I think that's one thing that could be improved.
And first, I don't know about stuff that's missing, but because I wasn't there for what I missed, what, you know, what got dropped.
That's great.
That's great.
So, yeah, okay, that's excellent feedback.
All right.
Well, Adam, when you get a chance to watch it, if you get a chance to watch it, I would love to get some feedback from you too into what, in terms of what would make it better for you.
For sure.
As you watch it, anything that could be improved because the next one is coming up.
So, how fast is the pipe dream?
That's what we're going to answer now.
How fast is the pipe dream on this 100 gigabit home lab?
Because you remember we talked before,
we were looking at how much, like when we benchmark these things,
fly.com, it itself, is limiting us to how much bandwidth we can push.
We wouldn't want to be pushing tens of gigabytes or hundreds of gigabytes.
That will be crazy because that costs someone money.
So running benchmarks like that is not great.
But also, the WAN, there's like a limit there.
But in this case, the limit is 100 gigabit.
We know that.
So this is a new home lab.
This is what it looks like.
It's running Ubuntu 24.04.
I think the most interesting thing is the CPUs about it.
So it has a Threadripper 9970X.
And the reason for the Threadripper is because I need a lot of PCIe lanes.
So I couldn't use, like, a regular consumer-grade CPU.
I needed 16 lanes for the motherboard, for the GPU, the RTX 4080, which is there at the bottom,
and I need another 16 lanes for the network card.
Even though it's PCIe 3.0, if you give it, for example, eight lanes of, in this case, PCIe 5,
it can only use eight lanes of PCIe 3.
So you would basically halve its speed.
And eight lanes of PCIe 3.0 means 64 gigabits.
So really I need a full
16 lanes doesn't matter
I mean it needs to be at least PCA3
to achieve its full speed
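For reference, a quick back-of-the-envelope check of those figures, assuming PCIe 3.0's 8 GT/s per lane and its 128b/130b encoding:

\[
8\ \text{GT/s} \times \tfrac{128}{130} \approx 7.88\ \text{Gbit/s per lane}
\;\Rightarrow\;
8\ \text{lanes} \approx 63\ \text{Gbit/s}, \quad
16\ \text{lanes} \approx 126\ \text{Gbit/s}
\]

which lines up with the 64 gigabit figure for x8 and the roughly 128 gigabit ceiling for x16.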
And the card is the thing that you see with the green LED lights; it's right below the fan for the CPU.
all right
so that's one
This is the server; we're going to be running Pipe Dream on this host. And this is the other part of the home lab, and this is my older machine; it's about three years old now. It's a Ryzen 7 5800X; it has 16 threads, not 16 cores, 16 threads. A small GPU, a GeForce GT 730, a fanless one; it's the one at the bottom. And again, the reason why I had to do this is because I need the 16 lanes, which are right under that CPU cooler again, for the network card.
If I was, for example, to put an NVMe in a specific slot, or if I was going to fill any other PCIe slots, the first slot, because it shares bandwidth with the first PCIe slot, would be limited to 8x, which again would create that 64 gigabits limit. So I needed to give this the full 16 lanes, and that's where this limitation came from.
Now, this is the star of the show.
This is what those cards look like.
You can see, I mean, there's like a DAC cable.
Obviously you need two.
Now they have two modules, which is interesting
because the card itself, in terms of the PCIe bandwidth,
will max out at 128 gigabits.
So it can't go beyond that
from a PCIe slot perspective.
Even though each module could do 100 gigabits,
really in combination,
you can only push 128 theoretical maximum.
And these are older cards.
It's an NVIDIA Mellanox ConnectX-5,
so they've been around for many years now.
I think 2020 at this point,
so they're like about five-year-old cards.
What's the price range on these?
Are they expensive?
They can be, yes. These specific ones I got off eBay, and I paid 500 pounds for both of them, which I think is about $700 to $800 roughly. Now, in the US, I think you can get them for even cheaper, because you get more hardware, right? Data center hardware that just gets, you know, sold for a good price. So, refurbished stuff; or in this case it's just used, it's not refurbished. And a DAC cable. The cable can be a bit expensive; you can pay anywhere up to close to $100. I think this one was about 60 pounds.
And then you need two.
And the reason why you need two is here
because I'm configuring the two modules in a bond,
in a network bond.
It's an LACP bond.
And in this case, what it means is that
I'm basically having two cables
and I'm creating a single virtual connection
that uses both modules.
So again,
Theoretical maximum is 128 gigabits, but in reality, it's more like 112.
I was not able to push it beyond that.
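For the curious, an 802.3ad (LACP) bond of this shape can be sketched with iproute2 roughly as below; the interface names and the address are placeholders, and the other end of both cables has to speak LACP too:

```bash
# Create an LACP (802.3ad) bond and enslave the two 100G ports.
# enp65s0f0 / enp65s0f1 and 10.0.0.1/24 are placeholders, not the real setup.
modprobe bonding
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast xmit_hash_policy layer3+4
ip link set enp65s0f0 down && ip link set enp65s0f0 master bond0
ip link set enp65s0f1 down && ip link set enp65s0f1 master bond0
ip addr add 10.0.0.1/24 dev bond0
ip link set bond0 up && ip link set enp65s0f0 up && ip link set enp65s0f1 up
# Note: a single TCP flow still tops out at one link; it takes many parallel
# connections (like a benchmark with 64 clients) to spread across both modules.
cat /proc/net/bonding/bond0   # verify both ports joined the 802.3ad aggregate
```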
So, httpstat. What I'm showing here is that I do an httpstat from the client, h22. By the way, the 22 in h22 is the year, home lab 2022, and w25 is workstation 2025, but it's also a home lab machine, but it's also a workstation. It's a combined thing, because of the many cores that it has; it's a multi-role host. All right. So from the client, from h22, I'm basically going to that private IP, 10.25.10.141, and on port 9000 Pipe Dream is running. And you can see here the response coming from the app. It's just basically Pipe Dream running in a container locally, as it would run on Fly.
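That kind of sanity check can be reproduced with httpstat, or with plain curl timing variables if httpstat isn't installed; the address below is a placeholder for the server's private bond IP:

```bash
SERVER=10.0.0.2   # placeholder for the Threadripper host's private address
httpstat "http://${SERVER}:9000/"
# or, with curl only:
curl -s -o /dev/null \
  -w 'connect=%{time_connect}s  ttfb=%{time_starttransfer}s  total=%{time_total}s\n' \
  "http://${SERVER}:9000/"
```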
All right.
let's see what this baby can do
I'm using oha.
I'm running 64 clients
and I'm sending a million
requests to the homepage
let's see what it will do
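For reference, that whole run is a one-liner with oha; again, the address is a placeholder:

```bash
SERVER=10.0.0.2                                  # placeholder private IP
oha -c 64 -n 1000000 "http://${SERVER}:9000/"    # 64 clients, one million requests
```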
that was it
that was one million requests
sent to the homepage
oh my gosh so it took less than five seconds
to complete
We pushed 225,000 requests per second. In terms of data, we transferred 12.5 gigabytes in five seconds.
Wow.
We reached 2.8 gigabytes per second, and that's just over 22 gigabits per second. So, if you were to guess, what would you say the bottleneck is in this case? You just have to guess.
It's not the network. We know it's not the network. It has to be...
The CPU? Maybe the container?
The CPU.
Yeah, so the container, I mean, it doesn't have any overhead. It binds to the local network, so there's no NAT-ing, no bridging, nothing happening from a networking perspective. It binds to a port on the local network. So when you go to 9000, it's port 9000 locally on the host. Now, I'm wondering what happens if you fetch the master feed, because the master feed,
it is really big. It's 13
megabytes in size. So it's about
100 times the size of the home page.
So we're going to do something very similar.
The difference is we're only going to run a 100,000
requests, not a million requests.
And this is what that looks like.
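In command form, the feed run is the same shape, just fewer requests against a much bigger response; the path here is a placeholder for whatever large endpoint you want to hit:

```bash
SERVER=10.0.0.2   # placeholder private IP
oha -c 64 -n 100000 "http://${SERVER}:9000/feed"
```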
Okay.
It's taking a little bit longer.
Mm-hmm.
What do we see?
What do we see here?
So here we see the CPU and the network for the client, h22. And what stands out is that many of the cores are at 80-90%, close to 90% usage. We can see the network throughput. It's 4.59 gigabytes per second, which in this case is 36 gigabits per second. So we're getting close to 40 gigabits per second. But in this case we can definitely see that it is the CPU that seems to be the bottleneck, but the CPU on the client. So this is where we're running the benchmark from. So I'm wondering, what does the CPU look like on the host, on the big one, the Threadripper?
The Threadripper... it looks pretty quiet from this.
Pretty quiet, pretty chill.
That's it. Yeah, I think the peak is one core which is at 11%, but otherwise everything is less than even 5%. So most of the cores are chilling. So this confirms that we have the bandwidth. I mean, we went from about 20 to about 40 gigabits per second, and we can see the CPU is fine. So Pipe Dream could serve more, from the instance perspective. But the client, where oha runs, the one that benchmarks, seems to be approaching a limit. Even so, we are able to send 100,000 requests, and I think it takes about 40 seconds, something like that, roughly. We can see again the host, the client, which is running really, really hot, 36 gigabits constant, so that's nice and constant. And we'll see it now at the end finish 100,000 requests. It transferred about 40-something gigabytes. This is where, in terms of traffic, it would cost a few dollars, just this benchmark. We transferred 200 gigabytes of data for this one, and the peak requests per second was 2,200.
That's pretty good. That's pretty good.
So I'm wondering what would happen... there's one more thing which I would like to do.
What would happen if we move the client from the host with the slower CPUs to the host with the faster CPUs, right? So we do a swap-around. We run Pipe Dream on the slower host, but we run the client that benchmarks things on the faster host. So what would that look like? All right. So, same 100,000 requests, but now we've reversed where we run these things. And this is what it looks like. It's looking faster. This is the client... this is where Varnish runs. Sorry, this is where Pipe Dream runs. The CPUs are at 100%, basically. And this is where the benchmark runs. We see some... this is really... so let me just go a little bit back. There we go.
Again, up to 80 gigabits per second now.
80 gigabits per second, yes.
So we're able to nearly saturate the network.
And we can see the CPUs.
We still have plenty of CPU room on the new workstation, the Threadripper, which has 64 threads.
So plenty of CPUs there.
Which is now running oha.
It's not running Pipe...
Correct.
Not Pipe Dream.
But we haven't maxed the client out.
Now we're actually maxing the server out.
That's it.
So in this case, if this host had faster CPUs, it could go faster.
But now we're just basically bottlenecked on the CPU.
Well, you're going to have to upgrade your host, Gerhard.
I think I will.
We need answers.
We need answers.
I think we will.
But this just goes to show that the setup scales really, really nicely.
And this is what we've been all working towards.
Fly.io and changelog.io. So, sorry, changelog.com.
So it's a good combo.
In this case, honestly, fly is sometimes like throttling us on the bandwidth.
That makes sense.
I don't think they were expecting a CDN to run on fly, honestly.
I mean, at some of the peaks we can be pushing up to 10 gigabits; I've seen that in the metrics so far. That's the deal, like, only the peaks.
I don't know what the limit is on fly.
But I think talking to them about this would be a good idea.
I've been mentioning this, but I think it's worth following up on, because, I mean, maybe throttling is what we want, but it will affect other users. So are they okay with us running a CDN? What can we expect from this setup? So yeah, Vinyl, we already know, we've been here. Vinyl is the rename: the open source Varnish Cache will become Vinyl from March next year.
by the time we meet next,
Pipely and Pipe Dream will be Vinyl.
All right.
We're closely approaching the wrap-up.
What's next?
Let's talk about what's next.
The first thing on my list is
I want my BAM.
What does BAM mean?
Oh, your big-ass monitor.
That's it, the big-ass monitor.
So now when I switch my big-ass monitor behind me,
what I see is all of Pipe Dream, all of it. I see all the traffic going through Changelog. It's beautiful.
Look at it. I see all the
areas which are like the odds. It's just so nice.
Seriously, this is the best
painting I could hang
in my study. And I have it
on all the time. So I just see
it refreshes periodically. This is
the fly.io one. On the left, I have the edge.
How does the edge behave? This is the fly proxy
for our CDN.
and on the right, depending on how you're looking at it, on the right,
I see the Fly app, which is in this case a pipe dream itself,
in terms of memory usage, CPU usage, all those things.
And it's a thing of beauty.
Now I understand when there's a problem.
I see the memory when it drops, like all the things are just there.
And it works well.
So the BAM is done.
That was a quick one.
And thank you, Fly, for a great dashboard.
It was really good.
But the one thing that keeps coming up is out-of-memory crashes.
They happen rarely, but they still happened.
And even though we have the limit set up,
I need to understand what exactly triggers those crashes,
how to prevent them from happening.
So sometimes, and I say sometimes,
I think the last time it happened maybe a week ago, two weeks ago,
it hasn't been this week.
So an instance, when it gets overloaded,
it crashes out of memory. All it means is that the requests get routed to a different instance. And when the instance restarts, it starts with an empty cache. So it takes a while for the cache to fill in memory.
So I'd like to dig into that.
Remember the logs, our events, our metrics, Jared, that we've been talking about?
Where currently they're being stored in S3.
Yeah.
And then you have the job, a cron job, which processes them. I'd really like to sort out that pipeline, because part of Pipe Dream, part of rolling this out, I just realized how everything is put together, basically, and I think we can improve on that.
So what I'm thinking is
if we were able to ship all the logs
in a column-like format,
so like a wide format,
in something like ClickHouse, and I'm thinking ClickHouse specifically,
this would just make everything so much easier,
like the whole metrics and analytics pipeline that we have.
Not to mention that, in a way, that would be two of everything: Honeycomb could be one of them, but ClickHouse, if it stores all these requests, could be our other event store, right? And we could visualize all those events. So that would be good. Now, do we know anyone at ClickHouse, at ClickHouse Cloud specifically? Do we have any ClickHouse Cloud friends?
Danny? We talked to Danny a couple years ago
remember that Jared at
Open Source Summit in Vancouver
I don't remember
that, but that's why there's two of us.
I'm pretty sure we know some folks.
If not through acquisition, directly.
So we'll hunt our contacts and come back to you.
So ClickHouse would be really interesting, or anything that can store lots of events, because every single request in our case would be stored there. We would batch them and do all of that, but we would write quite a few events. I mean, basically every single request would end up in that data store, and then we need to be able to aggregate them and read them back really, really quickly, which I think would eventually do away with the analytics we're currently doing in Postgres with the background jobs. All of that can just come from that.
Exactly.
You know, the challenge there is: why stack on ClickHouse in particular? Because that's tying us to yet another behemoth that might make us hold it wrong, potentially, and we'll be forced to build something else. ClickHouse, spelled H-A-U-S instead of H-O-U-S-E, you know, the German version of it. What exactly do you like about that flow? What makes that the first-class citizen for you for data transporting?
So I've used it for a couple of years.
We've been using it at Dagger and continue doing so.
And it works, it works really well at a large scale.
So it processes not billions, like trillions of events.
It scales really nicely and it's really fast.
And the Click House Cloud team specifically, they've been very supportive and they seem to be innovating and doing things in a very thorough way.
So it's something that has always been dependable.
Now, we currently use Honeycomb for like the whole UI thing.
But we also store a subset of the requests in S3, and then we process them, and I think we store them in Postgres. So we duplicate these things in a couple of places. If we had a single place where we store them, and this could be ClickHouse, that would be the primary store; we could read any metrics. Whenever we need, we can create materialized views. It's just so flexible in terms of how we can slice it and dice it.
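To make that concrete, here is a minimal sketch of what a wide events table plus a materialized-view rollup could look like in ClickHouse; the table and column names are illustrative, not the real schema:

```bash
clickhouse-client --multiquery <<'SQL'
-- One wide row per CDN request (columns are assumptions for illustration).
CREATE TABLE IF NOT EXISTS cdn_requests
(
    ts          DateTime,
    host        LowCardinality(String),
    path        String,
    status      UInt16,
    bytes_sent  UInt64,
    cache_hit   UInt8,
    edge_region LowCardinality(String)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(ts)
ORDER BY (ts, path);

-- Pre-aggregated daily totals per path, maintained as events are inserted.
CREATE MATERIALIZED VIEW IF NOT EXISTS daily_downloads
ENGINE = SummingMergeTree
ORDER BY (day, path)
AS SELECT toDate(ts) AS day, path, count() AS requests, sum(bytes_sent) AS bytes
   FROM cdn_requests
   GROUP BY day, path;

-- The raw archives could also be queried straight out of S3/Parquet, e.g.:
-- SELECT count() FROM s3('https://<bucket>.s3.amazonaws.com/logs/*.parquet', 'Parquet');
SQL
```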
It would be our alternative to Honeycomb. We still love them, and I definitely see us continuing to use them, but it wouldn't be the only one. And we would have the same view; in terms of alerting or monitoring or anything like that, we would need to do something separate. So it just centralizes every single request coming from every single instance. Yeah, and obviously we'd still store them in S3; ClickHouse can read from S3, which is really nice. It supports the Parquet format, which means that you can have the data in long-term storage in an S3-like system, and then if whatever was to happen with ClickHouse, it's down or there's a problem with it... it hardly ever happens; I think I've seen it happen only once in almost four years, so it's been very reliable in that way. But it means that we can revamp how we do analytics. I'm not sure how you think about that, Jared, because, I mean, you've been mostly using the analytics that you get in the app. How does it... I mean, do you need to upgrade it? Do you need to work with it? Or is it like, do you just forget about it mostly? You set it up and you
forgot about it.
Mostly forgot about it. I think the biggest drawback is how infrequently it updates at this point. And so we don't... I mean, maybe that's a feature, because we don't check our analytics on the daily, on the secondly, obsessively. Maybe we could. That being said, if we can get that information faster and learn something along the way, I think it's worth it. And I'm certain that this would change the way that we do things enough that we could get that information much faster than via a cron job. So I'm interested in it. Would it bring huge value to us to be able to see the downloads faster? Probably not, because we've habitually not done that. You know, we just check it every once in a while. But I'm up for learning and trying and improving. So I definitely think it's worth the R&D budget.
Okay. I mean, again, it comes as a suggestion. We've been doing this for such a long time, in terms of discussing these things in public. And I think this keeps coming up. We can defer it. It doesn't need to happen. It's just something that I had to work through in the context of Pipe Dream, as I was looking at where the metrics are going, the feed requests, how that is all put together, what we write to S3, all those things, like the different buckets, all that stuff I had to go through, basically. Part of this I would love to remove, to just chop that part off. It's just there as a thing that we do, you know, but it doesn't have to be that way. Okay, that's just how we did it.
So, do you have a thing that you'd like to improve between now and the next Kaizen, either of you? Because I can go... I can keep going through this list, but I'm wondering if there's something that you're thinking about, or something that's bugging you, that you would like to see improved.
I've kind of just become content of late with the way things are, Adam.
Okay. Hmm. I think the only thing I think about, really, when I think about this system, and it's kind of sad in a way, is that
if eventually
and it's not the case currently
but if eventually the case is
that our true traffic comes from
one of the spokes versus the hub
like how important does the hub remain
and the one thing we're not tracking
really is the MP4 file that we upload to YouTube
in terms of the system right
So we don't have a store for it. We have the MP3, the Plus Plus version and the public version, side by side, in perpetuity, for all of time, back to episode one of all podcasts.
What we don't have is that corresponding video file
because it's not part of this pipeline.
It's not part of the serving pipeline.
And that's what I think about.
And I think about the analytics
and the effort there.
You know, how does the change
to watching, viewing, listening patterns
change the show over time,
change the system.
That's what I think about.
So I'm not really sure.
I've certainly considered a modification where we upload the MP4 to our system.
Yeah.
And let it redistribute to the various places that we want it to live.
That's a much heavier lift.
And I think there's benefit at the moment at least.
I agree.
Yeah.
We still have the MP4.
So it's not like we don't own that content.
We just don't own it, you know, on R2 alongside our MP3s.
And it's kind of a sad spend of money to store a file you never really use. You send it to YouTube one time.
Right.
You know, so I could also start to get, like, diverse with it and say, well, we're also going to be on PeerTube, and we're going to upload to Rumble, or I don't know what all these places are now, because people kind of scatter from YouTube and then they gather.
But so far, our video, the point of our video is to be on YouTube at the moment.
Right.
And so we haven't thought beyond that.
But if YouTube changes in some sort of dramatic way where, I mean, we've seen changes over the time where it's like, let's be in more places, similar to how we've, we're not just on Twitter slash X anymore, we're on more social networks.
There may be a day where that happens with YouTube and we'd be happy to have the pipeline set up to where we can just, you know, add another connector to it and say we're also on this video watching platform.
Or we can view our stuff directly, you know,
watch it directly on our website and we'll let Cloudflare bear that burden.
If it's passing through PipeDream,
we're going to have some serious bandwidth going through fly.io.
So those are things I've thought of,
but I've never been even, at this point,
I'm not even close to pulling that trigger.
Gerhard, you're on YouTube now,
but you also have Make It Work.TV.
So you're actually tackling this to a certain extent.
How do you do it?
Jellyfin.
That's one.
Yes, Jellyfin: if you have a client, that's right.
You connect to the server and you can download things.
You can store them like offline on your device.
So it's like a media library, proper media library and a media server.
I find that works really well.
I mean, I always prefer watching it that way.
But I also store it on a CDN, in this case Bunny. So Bunny has Stream, I think it's called, and I think Cloudflare has Stream as well, where you just basically upload media content,
in this case, MP4 files.
So the CDN part works well.
I would love to replace that, to be honest,
because I'm not entirely happy with how that system works.
The chapters are a bit clunky.
There's quite a few things that are clunky.
The trick play isn't great.
Again, lots and lots of things.
Just the way you upload things is just too much work.
So I'd love to automate that.
But YouTube really is like the main distribution mechanism,
not just because of how easy it makes it to just upload the file, and then it gets redistributed everywhere, but also, it's almost like you have something to sell. Do you go on eBay, or do you go to a flea market or elsewhere, or do you build your own shop to sell that thing? And eBay, a lot of the time, is easy, because that's where a lot of the buyers are. That's where people are looking and searching. Or Craigslist, or Gumtree, which we had in the UK; I think it's still a thing. So YouTube is a place where lots and lots of people already are. So in terms of distribution, it just makes it so easy. Now, I like to give the option of, hey, if you don't want YouTube, that's okay. You can also download it from a CDN that I pay for, that I set up; in this case, I haven't built that yet, but that's coming. Or Jellyfin. Now, I don't know how many users would set up a Jellyfin, to be honest, for Changelog. But I like the idea of basically
having that one-to-one relationship
between the creator
and the watcher, the listener, the viewer,
there's like nothing in between.
So YouTube can't push its ads,
YouTube can't ban certain content
in certain places if it happens,
however it happens.
Again, it's very difficult to know that
because you need to be in those places
to know how that works.
And also, I like the idea of people being able to download the content. And again, podcast players make it easy; with YouTube, you have to pay that premium, you have to go YouTube Premium, which I do, and I have for many years, and I think it works great. But again, my YouTube experience is very different from most people's, because I don't think many people pay for YouTube; it's just like an extra expense. So that's my take.
Yeah, it'll be cool if there was a vibrant community of people that
consumed video via open standards like they do a podcast.
I mean, the coolest thing about podcasting is that phrase,
you know, get this wherever you get your podcasts.
And it's like, that's because it's an open standard
that you can just directly subscribe to a feed
and people can build apps for those.
And that's amazing.
That doesn't exist for video.
Will it someday?
Maybe there are nerds out there.
And we get the emails about,
open standards for video podcasts. And actually, Apple launched the iTunes podcast section with video podcasts as, like, a first-class citizen. It's just that nobody... the bandwidth was so expensive back then. This is like either pre-YouTube or right around the time that YouTube started, and people just weren't watching. They didn't have the... we just didn't have the technology to actually make that a thing that you just watched. You didn't have the phone; you had to, like, move the files around. They were large. I think MacBreak Weekly was like one of the only ones we ever saw.
They were like shipping 4K video podcasts like in 2007 or something.
It was crazy.
Wow.
That's hardcore.
Because they come from the TV side, you know,
where they're used to putting out video and where most of us are just coming from the audio side.
Anyways, those things existed.
Apple obviously just kind of like, it's still actually part of their RSS spec,
but no one uses it.
Even Apple Podcasts, I'm not sure if it even uses it.
Spotify has their own deal for video.
It's not using the open way.
It's using their own proprietary way.
There's weirdness there where, if your video file duration or details differ from your audio file,
you may end up serving one or the other, even to audio listeners, which you don't want to.
And so it's just kind of murky right now.
And I think it would take some sort of a black swan event and maybe a sea change in opinion
and some sort of new tech that makes it feasible, at which point I'd be all about it.
I just think right now it's like a lot of effort, and there's a lot of effort to re-encode your video into all these different formats depending on blah blah blah, and then serve that. You know, a lot of cache misses if you've got six versions of your video depending on the client. And so it's like money...
The storage, too.
Yeah, it's like money, time, and effort for, right now, a very minuscule advantage. Let YouTube pay that price, with their tech and their tech stack and their developers and their bandwidth and their servers, etc.
Yeah. That's... that's my only concern
is, really, like: where do we begin? Where do we reach diminishing returns in innovation, you know, with the CDN? Beyond MP3 and smaller files, it doesn't seem to naturally scale to the video, because the incumbents have that solved in ways where we just don't see that we need to solve those problems. The only problem, like I said, I see is the fact that we don't have this video file artifact alongside the MP3 artifact, which is the same thing, but a different flavor of it. It's elsewhere in our archiving stack. And I would say largely inaccessible, certainly not via an API. You know, potentially, but it's not.
Right.
I think building a standard or contributing towards a standard takes a really long time.
Yeah.
And lots of effort.
And that's why no one wants to do it.
And they're waiting for someone else to do it, because they know how much investment that takes.
No one's in the mood for that type of investment.
I think it's going to be very interesting what happens with AI
because there's a lot of money right now in AI
and I think it will change before long.
So where will that money go next?
We'll see.
I'm definitely curious for the next big thing, which is coming.
But one thing that may work well
is if ChangeLog had an app, had a native app,
whether it's an Android or iOS app
and then you control how you display the video and the MP3
and if you had something like that
then you'd be in full control of how you would expose your MP4 files and how you would integrate them in the player. And that's like a more holistic experience, where, you know, you can do transcripts really well, you can do comments really well, maybe integrated with some sort of Zulip or something like that, where it feels like a sister community, and it's more like a community of people that are interested in this type of thing,
rather than
you know, just some content that gets distributed
on different platforms. Again, the
problem in that case is that most people
are already on those platforms. So
it's easy for them to consume things.
But there will come a point
where they just get, I mean,
they just want something different. Floatplane, I mean, that's a newer thing; that's been running for a few years. So that's one example. There's another one; I forget what it's called.
I know that you can... you can pay for... man, I wish I remembered the name. I was doing research maybe like six months ago, more like nine months ago actually, it was more than six months ago, and I was looking at YouTube alternatives, and there's this other platform for media which stores and distributes higher-end, like 4K videos, 8K videos, but for that you end up paying, so it's not free. And then you get creators that publish only on that platform. I forget its name. I can look it up. Maybe we can add it in the show notes
because I have it somewhere.
But that is another interesting thing.
Vimeo, I remember when Vimeo was a thing,
but I still, I know it's around,
but I don't think many still use it
in terms of like people going and browsing Vimeo.
I don't think it's even a thing anymore.
Yeah, they pivoted quite a bit.
It's a very successful business to this day,
but it's like serving enterprise
and more professionals
who are using videos for various purposes.
not as a general consumer product at all.
And video is hard because the transcoding part is really hard.
And yeah, I mean, you always have like the tradeoff.
Do you pay storage or do you pay for compute?
As in, do you transcode on the fly, like Jellyfin and Plex do, and then you need GPUs?
Right.
Or do you pre-transcode and, you know, you save multiple versions, you store multiple versions, and then you serve those. And then you have so many codecs, I mean, it's just not even funny. Like, AV1, H.265... what do you pick? Different phones, different devices. It's not easy. Like, MP3 at this point is almost like a universal format.
There isn't a video equivalent.
That's a hard problem.
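To give a flavor of the pre-transcode route, something like the following produces two renditions up front with ffmpeg, one H.264 for broad compatibility and one AV1 for newer clients; filenames and quality settings are placeholders:

```bash
# 1080p H.264 rendition (widely supported)
ffmpeg -i episode.mp4 -vf scale=-2:1080 -c:v libx264 -crf 22 -preset medium \
  -c:a aac -b:a 160k episode-1080p-h264.mp4
# 1080p AV1 rendition (smaller files on modern players)
ffmpeg -i episode.mp4 -vf scale=-2:1080 -c:v libsvtav1 -crf 32 -preset 6 \
  -c:a copy episode-1080p-av1.mp4
```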
Yeah, anyway.
That is what makes video dramatically harder.
And that's, I think, where the divide is at.
And that's why I brought that up: because you've got this divide. You know, the precursor is that almost everyone that I'm aware of, at least, is still paying attention to podcasts, just not really via a podcast client anymore. They're usually on some sort of platform. They ask me what show I produce, and I tell them, and they immediately open up YouTube and start searching for it. And, oh, is it this one? It's like, yes. Okay, so that's where folks are tending to go. And here we are, optimizing for this. And the migration... maybe... do the worlds eventually collide? How do they work long term, etc.? Still don't know. And I think...
Yeah, exactly. Do you transcode it on the fly, or do you make multiple versions of it? I think you just don't do that unless you know for sure. You should just transcode if you can. Yeah, I know the Jellyfin thing, that's exactly what it does, and I really like it for that, because it's very good on storage,
but the CDN, the one that I use,
it does transcoding,
so they store multiple versions.
Now, luckily, I cap the maximum, and I think I only publish 1080p on the CDN, because of the multiple versions which they transcode and make available. But on Jellyfin, it's just the 4K one. So that's my approach to it.
What are your thoughts on number four here,
4 or 5 and the question mark here?
Well, changelog.com new wiring.
What I was thinking is there's a few utilities that we're using that need an upgrade. For example, Dagger needs an upgrade really badly. It's such an old version on the changelog. Upgrade or replace? Still undecided. We'll see how that goes. The deploys, I wanted to improve them for a while; there was always something else. So, improving that time to deploy. I think it was like four minutes last time, or three minutes; it was two minutes at some point and went back to three minutes again. And I know that at least a minute and a half of that is fly.io. So what do we need to optimize there so the deploys are a little bit quicker? Postgres, I mean, it's stuck on, I think, 16 at this point. 16-something. So maybe we want to upgrade something there, so we don't fall behind
too much. And replacing Overmind with runit. So Overmind is a supervisor that runs, for example, the log manager; it runs Varnish, Vinyl; it runs the proxies. It runs multiple things in the context of Pipe Dream, and Pipely really. But with Overmind, sometimes some things can get stuck because of just how it's configured. There's some duct taping there, especially with how the logs are streamed. So that's something that I would like to improve. And I know runit, I've used it in the past; it's very reliable. It's a very old supervisor, a very Unix-y supervisor. So I'd like to replace Overmind, which is Go-based, with runit, which is much more old school and does everything we need.
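For reference, a runit service is just a directory with an executable run script that keeps its process in the foreground; a minimal sketch, where the varnishd flags are illustrative rather than the real Pipely invocation:

```bash
# Assumes a standard runit layout where runsvdir watches /etc/service.
mkdir -p /etc/service/vinyl
cat > /etc/service/vinyl/run <<'EOF'
#!/bin/sh
# runit restarts this script whenever the process exits, so running the
# daemon in the foreground with exec is all the supervision logic needed.
exec 2>&1
exec varnishd -F -a :9000 -f /etc/varnish/default.vcl -s malloc,256m
EOF
chmod +x /etc/service/vinyl/run
sv status vinyl   # control and inspect the service with sv once runsvdir picks it up
```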
So that's like an improvement,
but that's a Pipely, Pipe Dream improvement. And the question mark was, like, what else? I think we tackled the question mark in the conversation.
Gotcha.
All right.
So we're almost at the end.
If you like this as a listener,
as a viewer,
you can like,
subscribe,
you know the drill and comment.
Right?
That's something that is also an option.
I mean,
we touch on many things.
Maybe you have a few ideas of how to do things better.
or maybe there's a few suggestions
that listeners and watchers have.
I'll be more than happy to answer any follow-up questions, as we all will. Suggestions, maybe, for the next get-together, for the next Changelog get-together.
You can do it on the YouTube video, I think. Do people, by the way, comment on YouTube videos?
Yeah, a little bit.
The ones that you post?
Yeah.
Okay.
And do you reply to those comments?
Oh, yes.
That's a lot of work.
I just discovered that recently last week.
Oh, man.
Like when you get like 100 comments, it takes a while to go through them.
But it's a good problem to have for sure.
Zulip also works.
I think all of us are there.
Or even GitHub.
That's also... we have the discussion for this Kaizen.
Right.
And just remember to tag me because otherwise I will miss your message.
So I have many things on mute.
And so unless you CC, I'll miss it.
Absolutely.
Same.
Too many inbounds; must get tagged, CCed.
One more thing. Last thing. Last thing. Last thing. And then we're done. Okay.
Okay. Last thing. So you know about Make It Work.TV? Make It Work Club is something new. So the 100 gigabit home lab comes from Make It Work Club. It's on Skool. It's a community of the most loyal Make It Work.TV members, but also those that want to go beyond just watching, the ones that want to interact. We meet every two weeks.
Both Adam and Jared have an invite. While you were talking, I sent you an invite, so you can join.
you can see the various conversations
which are happening there
and the next one is tomorrow
and it's usually every other Friday
it's usually 9 a.m. Pacific time
Tomorrow's one is going to be 7 a.m., because some of us have kids and meetings and other commitments. It's going to be before work for some people. We'll be talking about a smart garage door
opener. We talk home labs. We talk Talos Linux. That comes up quite a lot. Quite a few things.
Kubernetes, it's all there. You can go log in and check it out. Adam and Jared.
You are part of it. I just wanted to get it to a point where there's enough to show and enough for you to see. So it's been going on for about a month now, a month and a half. There's plenty of threads. There's only, I think, 17 members, so it's not that many.
It still feels like a small community, has a small vibe to it.
But it's a bit like this, but with more people.
So it can be a bit more chaotic.
I think for the 100 gigabit one, we were maybe nine or ten people. So it was quite a group discussion, but still with, you know, a presentation and focusing on...
I mean, you've seen the video, Jared, so you know what that was like.
Cool.
But did you get your invites?
Just double-checking that, in your email. I just want to make sure they went through.
I got mine right here.
You got yours.
Adam, did you get yours?
Let me see if I got mine.
And then you can decide whether you want to accept it or not, but I just wanted that to be out there.
I do have my invite.
I do see it.
Yes.
Thank you.
I have my invite.
So this is the change log plus plus equivalent.
You can think of it like the change log plus plus equivalent.
Gerhard plus plus.
Yeah.
You can drop in any time when we meet every two weeks, and you're more than welcome to look at the threads, comment, ask questions. For example, Misha, he was asking... he wants to build his own router; like a router, how the Americans pronounce it.
Router.
Router.
Yeah.
So he wants to build that.
And Nabil, for example, built a smart garage door opener. He just didn't want to get out of the car for the door to open. So now he has all of that programmed.
So he's going to talk about it tomorrow.
Interesting.
And, yeah, there's quite a few things.
So check it out.
Yeah.
All right.
That was me.
How do we want to wrap it up?
You put a bow on that present there.
That was cool.
Talking about a garage door opener would be kind of cool.
I just talk to my phone.
I just tell Siri, open the main garage, and she makes it happen. And it's just... for us, it's Apple HomeKit and the fact that my garage door opener is on the network, and it has those kinds of... I really didn't do anything to make that happen besides just flip a switch and talk to it.
And that was kind of cool.
Now, sometimes she's like, you don't have a main garage door. And I'm like, no, no, let's try this again. Hey Siri, do this. And she's like, okay... gosh. Don't... shush! Don't be opening my garage, girl.
I have to stop her right now. Yeah, she was hearing me say her name. She's excited.
She's always like, kill me up in the garage place.
Or maybe not.
Or maybe not.
What are you keeping there?
What are you keeping?
Like a Breaking Bad sort of situation going on.
Yeah.
Yeah.
No, no, no.
It's not that.
Yeah.
So, but someone had to set up the smart garage door for you, right?
I mean, did you set it up?
Because you need to have the whole, it needs to be hooked up, right,
to your home network.
All I did was enable the Wi-Fi access to my network, and the garage door opener is on the network.
It has an app that runs it, and the app allows me to install, I guess.
It's been so long as I've touched it, so I don't remember how I did it, but it was like shortcuts, I guess, essentially.
You can create shortcuts on your iPhone, and so that's all I did was just leverage the shortcuts that talk to the app that has the authentication.
to the thing via the network, and that's whether I'm at home or not at home.
So it's not even LAN-bound. It's WAN-bound.
So it's really awesome.
I can be literally in the mountains with very little service,
and I can tell my garage door to open or close.
And I can even tell if it's open or closed because if I say, hey, close it.
She's like, I'm closing your main garage.
She's like, oops, already done.
You know, it's like, she reminds me that it's already closed.
so I didn't really have to do much to do that, thankfully.
But to set it up... so, to take a regular garage door...
Right, a normal... that part.
A normal one, that's what Nabil did.
A non-networked one. Yes, that would be... that'd be dope, honestly.
That's what he's going to talk about: how he set up the whole thing, all the devices, what did he pick, how did he connect everything. And it was just a regular garage door; it had nothing, and then he made it smart.
Well, the good thing about those garage doors: they tend to have an outlet which has two plugs, one being used by the garage door opener, and then one that's used for nothing, basically. So thankfully, if you needed an outlet for your device to pair it to... I'm assuming the answer is maybe. Yeah, right. We'll find out.
Make it work.
Mine... mine is just a regular one.
Exactly. So that's what I want to do. Like, if I wanted to set this up, what would I need to do to make it work?
Indeed.
Yeah, as a smart one.
I like the thought pattern around how to tackle that. You know, I simplified it by just having one that was already networked. But if you have one that is not networked, then there you go.
You've got to create the network.
I know how we can end this show.
I can just ask Gerhard to review my new hat.
Your new hat.
Yes.
I thought it was there.
Yes.
Oh, I'm so happy.
What do you think?
I'm so happy there was not.
I think it looks amazing.
Do you like this hat?
I know you like blue.
I love it.
This hat's blue under the brim.
But I was looking for it everywhere. I thought I left it in Jared's truck.
You did.
It makes me so happy to know that you have it.
I've got it here for you.
And we get together again, I will bring it to you.
There you go.
It's yours.
It's yours.
It's mine.
I bring you a gift.
It's yours.
It suits you so well.
I know that it does look good on me.
You're right.
It does.
Thank you.
It's yours.
It's your hat now.
So happy.
Hats on to you, Jared.
Hats on.
Hats off to Gerhard.
Hats on to me.
I think we have... I think we may have a title there.
Hats off to Gerhard.
All right.
All right, Kaizen. Good stuff, y'all.
Kaizen. It was awesome. Bye, friends.
Oh, man, our Kaizen episodes are always so much fun.
You just never know what Gerhard might have up his sleeve.
If you enjoy these, let us hear it in the comments.
And tell your friends, too.
Even after 16 years of doing this, word of mouth is still the number one way people find out about the Changelog.
Thanks again to our partners at fly.io
and to our sponsors of this episode,
augmentcode.com
and agency.org.
That's A-G-N-T-C-Y dot org.
And thanks to Breakmaster Cylinder,
we have the best beats in the biz.
Next week on the pod,
news on Monday,
Adam Jacob from System Initiative,
on Wednesday,
and on Friday,
Adam and I hit you with something spooky.
Have yourself a great weekend.
Lips of knowledge are a precious jewel.
And let's talk again real soon.
Game on.
