The Changelog: Software Development, Open Source - Kaizen! Pipely goes BAM (Friends)
Episode Date: February 28, 2025
It's Kaizen 18! Can you believe it? We discuss the recent Fly.io outage, some little features we've added since our last Kaizen, our new video-first production, and of course, catch up on all things P...ipely! Oh, and Gerhard surprises us (once again). BAM!
Transcript
Welcome to Changelog & Friends, a weekly talk show about birthday presents.
Thanks as always to our partners at fly.io,
the public cloud built for developers who ship.
Learn all about it at fly.io.
Okay, let's Kaizen.
Well friends before the show,
I'm here with my good friend, David Shue over at Retool.
Now David I've known about Retool for a very long time.
You've been working with us for many many years and speaking of many many years Brex
is one of your oldest customers.
You've been in business almost seven years.
I think they've been a customer of yours for almost all those seven years to my knowledge
but share the story.
What do you do for Brex?
How does Brex leverage Retool?
And why have they stayed with you all these years?
So what's really interesting about Brex
is that they are an extremely operationally heavy company.
And so for them, the quality of the internal tools
is so important because you can imagine
they have to deal with fraud,
they have to deal with underwriting,
they have to deal with so many problems basically.
They have a giant team internally, basically just using internal tools day in and day out.
And so they have a very high bar for internal tools.
And when they first started, we were in the same YC batch actually, we were both at Winter
17, and they were, yeah, I think maybe customer number five or something like that for us.
I think DoorDash was a little bit before them, but they were pretty early.
And the problem they had was they had so many internal tools they needed to go and build,
but not enough time or engineers to go build all of them.
And even if they did have the time or engineers, they wanted their engineers focused on building
external-facing software, because that is what would drive the business forward.
Brex mobile app, for example, is awesome.
The Brex website, for example, is awesome.
The Brex expense flow, all really great external-facing software.
So they wanted their engineers focused on that, as opposed to building internal CRUD UIs.
And so that's why they came to us.
And it honestly has been a wonderful partnership for seven, eight years now.
Today I think Brex has probably around a thousand Retool apps they use in production, I want
to say every week, which is awesome.
And their whole business effectively runs now on Retool, and we are so, so privileged
to be a part of their journey.
And to me, I think what's really cool about all this is that we've managed to allow them
to move so fast.
So whether it's launching new product lines, whether it's responding to customers faster,
whatever it is, if they need an app for that, they can get an app for it in a day,
which is a lot better than, you know,
six months or a year, for example,
having to schlep through spreadsheets, et cetera.
So I'm really, really proud of our partnership with Brex.
Okay, Retool is the best way to build,
maintain, and deploy internal software.
Seamlessly connect databases,
build with elegant components, and customize with code.
Accelerate mundane tasks and free up time for the work
that really matters for you and your team.
Learn more at retool.com, start for free,
book a demo, again, retool.com.
We are here to Kaizen, which means Gerhard Lazu is also here.
What's up, man?
In the house.
Gerhard Lazu in the house.
Yes.
Welcome.
Everything's up.
Everything's up.
That's right.
That's the DevOps response, isn't it?
That's it.
Or the sys admin.
I don't know what you call yourself these days.
Well, it's just, I know titles, right?
They're always hard.
Infra engineer.
I mean, what is your title, Gerhard?
Officially Head of Infrastructure for Dagger.
Okay, cool.
Yeah.
It's a big role.
Yeah, it is. Right?
I'm enjoying it.
I've grown into it.
Are you on pager duty?
Always.
I'm responsible for everyone that's on pager duty.
Okay.
And I'm responsible that pager duty is set up correctly.
We are alerted when the right things go down, so yeah.
So you literally use PagerDuty?
No. It's the placeholder. It's the Kleenex.
Oh dang. Was that a burn? Was it just a fact?
It's a fact.
Yeah. Okay. Well, I know it's a fact, but was it a burn as well?
A burn?
A PagerDuty burn?
Uh, I don't know. Maybe.
Okay. Maybe. I never really loved PagerDuty, I have to say.
And it's not what's behind it.
It's like the whole setup.
It's just too complex, I think.
I will say this about it because this is all I know about it.
Great name.
It's got a great name.
Right.
That's all I can say about it.
I prefer Incident.
Incident.
I know. I think that's even a better name when
there's an incident.
Why? Because we don't have pagers anymore.
Pretty much. Yeah.
Who has pagers?
And it's true. I guess it's a terrible name.
But well, now it's just "page that person",
which means call that person, or
email that person, or Slack that person, or
Zoom them, just get a hold of them however possible.
Yeah, exactly.
And if anything, if you only use a pager,
it means you don't have a backup.
And if something goes down, you definitely want your whatever is
monitoring to have multiple layers of redundancy.
Right.
I can just wear two pagers.
You know, they slide onto your belt.
So you can just clip a second one next to it.
But it's using a single network, so you need redundancy: you need cell phones, you need
emails, the whole thing.
Well two of everything I guess.
Can't silence it, that's my biggest issue.
I forget I silenced my phone and then I'm like why didn't I get that text?
Oh because my phone is on silent.
Do you normally not have it on silent?
My phone's been on silent for 12 years.
Same here.
You know, I don't know, man.
I don't know.
That's what I got to watch, right?
The watch will alert.
Yes. Yeah, exactly.
I feel like the phone is such a hard thing, man.
I'm just like when to make it, you know,
alertable, let's just say, or like something where it can bother me.
Cause I miss critical texts or emails,
or not so much emails, more like texts or phone calls.
You know?
I wore the watch for a couple of years
and thought that I needed it in my life,
and then the watch broke, and that made me ask the question,
do I really need the watch?
And I just decided 300 bucks or whatever,
I'm gonna go without it for a couple weeks and just see.
I never felt more freedom than after my watch broke.
Oh, I haven't bought one since.
I've hadn't had a watch for over a year now,
and I don't think I'm gonna go back.
What kind of watch you got, Gerhard?
It's the Apple Watch, so.
But which one?
Which one?
The Ultra 2 or the Ultra 1?
I bet it's the biggest, most expensive one.
Well, it is the Ultra. I was waiting for that. Got it. I love the extra GPS and everything. So it has a couple of things in it.
Ultra 2, that would be the new one. And this would be the backup.
That's what we're working towards. But I do like, especially when I drive, I love Apple Maps.
That integration is really, really good.
Not sure if you've tried it, but when you have to take an exit,
or you have to take a turn, it just vibrates.
It's very, very helpful.
Yeah, you know, I'm with you there, but I'm not with you there.
I feel like I like the Apple Maps, and I go there, but I use CarPlay instead,
rather than the watch.
Let the car be the alert.
And she'll just talk to you.
She'll just be like, take your next right.
Or just pay attention to the map.
Yeah, but you gotta pay attention to the road, Adam.
Also, you got the game on your handheld.
You gotta watch the game while you're driving.
That's right.
I'm playing PlayStation 2 while driving.
I'd say Fast and Furious throwback, Jared.
Oh, I thought maybe it was a Silicon Valley reference.
No, man.
You know, I got more.
Me, though.
You're not a one-trick pony.
This guy has more than...
Fast and Furious the very first episode or the very first,
I guess, movie.
That's the one that I remember.
Yeah.
Before the race, the kid was playing PlayStation.
It was actually PlayStation 1.
In his car, in the console, prior to the race.
And it was like a flex, you know, it was like,
oh my gosh, I've gotta trick out my car.
I have to have a PlayStation console in my dashboard
That's not realistic, because that sucker did not have, what was it called, when the CDs would just... anti-vibration?
Yeah, like, you know, the old Walkman that took an actual CD and you walked around with it.
It would skip constantly. Skip protection. Yeah, I'm pretty sure PlayStation 1 had the same problem.
If you're driving a car and playing it,
you're probably skipping all over the place.
Gerhard, get us on track here.
We're here to Kaizen.
We'll talk about movie references, the entire show.
Kaizen 18.
So I realized that this was, or will be, when it comes out, my 111th episode on the Changelog.
Oh wow. You like that number. It's not round, but it's symmetrical. I don't know what it is. It's all ones.
It's three ones. I mean, that happens rarely. The next time, like all twos, I think it's going to be such a long time. If we only do the Kaizens, I think that will last me to the end of life, honestly.
It might, yeah. I mean, two and a half months... 111 and two and a half months between each, that's a lot of years, I think.
How many Kaizens do you think we're making it to before one of us, you know, kicks the can?
Well, hopefully, that's what I would like to see, we'll get to a hundred at least. That would be awesome.
Yeah, I mean, we won't stop, like Ship It at 90. This one has to go to 100.
That's right. That's what I'm thinking. Wow. So we've got 75 more episodes to go. And that's, I think, what is it, 40 years?
No, let's just acknowledge it and move on. Yeah. Yeah.
It's a lot of years. That's a lot.
What about yours? Do you know what episode appearance this will be for you?
Mostly all of them.
Well, we could look it up easily because it's on the person page.
Yeah.
It is, yes.
I love that page. I don't know if anyone is aware of it, but if you've been a guest on the Changelog, or even, I think, if you replied, I'm not sure about that part, it will show all your interactions, all your references, on the changelog.
So I use that quite a lot. So changelog.com forward slash... what is it for the person?
Person slash slug: changelog.com/person/slug.
All right, Gerhard, cool. So there you go, 110 episodes. And I've been on 909 episodes.
909? Wow. Yeah. Wow, that's a lot. Yeah, 909.
Crazy. It'll be 910 for me, or maybe 911 by the time it comes out.
I don't know, because the Wednesday show is Adam by himself. So this will be 910 for me. Yeah.
Do you think this is the year that you'll crack a thousand? Is this it?
Good question. Three a week... no. Three a week, yeah. Three a week times 50. Yeah, but
we're there. It's February already. Yeah, I keep thinking it's the start of the
year. It's March, actually. Time is compressed. Yeah. Maybe. So it's possible. Maybe
our final episode of the year will be the 1000th.
Wow.
Okay, and 802.
What happened there?
How come you have more episodes than Adam?
What's this all about?
News.
He's got a hack.
I was on J's party for a while.
J's party and news.
Yeah.
Okay.
Okay.
Alright.
So I'm winning.
Alright, so far.
Yeah, let's see if I can catch up.
I think you're more losing depending on how you think about it.
Yeah, I guess I couldn't catch you, could I? It would probably be pretty hard to do that.
You can take over news if you want.
It's like it's not worth it.
I mean, uh...
Funny news, maybe.
You know, scaling is a people thing.
So, let's talk about something that happened.
Let's start with a low.
Well, Changelog was down for four hours.
Oh, let's not talk about it.
Did anyone notice?
Well, I'm really wondering,
did you notice that Changelog was down?
You did.
Okay.
How did it happen for you?
Well, I went to the website and it wasn't there.
Right, okay.
Okay, cool.
The classic way.
All right.
I assume it was signed in people only
because I didn't actually check,
but I'm always signed in.
And so we do cache with Fastly
if you're not signed in,
but if you have a signed-in cookie,
we pass it through to the app every time,
and the app was down.
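To make the caching rule described here concrete: anonymous traffic is served from the CDN cache, while requests carrying a signed-in session cookie bypass the cache and hit the origin every time. This is a hypothetical sketch of that decision in Python; the cookie name and labels are illustrative assumptions, not Changelog's actual Fastly configuration (which would live in VCL).

```python
# Hypothetical sketch of the cache-vs-pass rule described above. The
# "session" cookie name and the return labels are assumptions for
# illustration, not Changelog's real Fastly setup.

def route_request(cookies: dict) -> str:
    """Decide whether the CDN can serve from cache or must pass to origin."""
    if "session" in cookies:   # signed-in users get per-user content
        return "pass"          # forward to the origin app every time
    return "cache"             # anonymous traffic gets the shared cached copy

# Anonymous request: served from cache, so it survives an origin outage.
print(route_request({}))                     # cache
# Signed-in request: passed through, so it fails if the app is down.
print(route_request({"session": "abc123"}))  # pass
```

That tradeoff is exactly what showed up in the outage: cached pages kept being served while pass-through requests hit the dead origin.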
And so I noticed because I went to go share something
and wanted to look at something and I don't know,
it was down.
Although I think I already knew that, because maybe you posted it, I don't know. But I definitely just went to the website and it 503'd or whatever.
So for anyone, if you're wondering if Changelog is down, go to status.changelog.com and you will see what is down, when it is down.
So in this particular case, we had a previous incident.
There is a bit of red right there.
And this is the origin.
So the origin was down.
And if you click on that, it takes you to the status.
And you can see the whole history.
So this is something that I do update whenever
there's an issue like this, especially when it's a big one.
We had a few small ones, just like a few minutes.
But those don't show up, but this one was
significant. February 16th, 10 a.m., actually it was before 10 a.m. So that was a Saturday...
Saturday or Sunday? No, I think it was a Sunday, February 16th. Yeah, it was a Sunday. So that was
half the Sunday looking at this. So what happened? Well, if you go to the discussion, 538,
that's where all the links are.
But basically, as it happened, it was a fly issue.
And the fly.io issue, it wasn't fly.io itself.
Fly.io, and I'm going to scroll down
to that particular message, has providers. So in
this case, it was one of the upstream networks. So let's see,
where is the part I'm looking for... It was a far upstream
issue. And I'm now looking at a post from Kurt, the CEO of Fly.
And he was saying that the failure was "far upstream from us"
and "a single point of network failure". So one of their vendors let them down, basically, and there's not much that
they could do about it.
they could do about it. So this is what happens when you know, because we all depend on other
systems and other systems are always upstream systems, you have internet, the internet provider
I'm sure you know has transit links and peering links and all of that.
Some of those can be down if you don't don't run to everything.
Everything in this case, they didn't have to have two of everything.
The switch went down and it took four hours for someone to fix it.
And it was, I think Sunday, very early morning on the East coast, which just made it a bad Sunday.
Yeah.
Well, lots of us did, but one person in particular probably.
Yeah.
So their virtual pager went off.
Yeah.
So that was not great, but I think one of the key takeaways for us is the final impact,
in terms of how many requests didn't go through. So I posted again on the Fly
community.
For the whole outage, our SLI for successful HTTP requests, over the
last 24 hours, dropped to 97.40%. So well below three nines, even four nines, but still, 97% of the requests
were served. Most of them go to our object storage, right? All the MP3s, all the static
assets, all of that. The website itself, I mean, some of the pages, the most visited ones, they're
being cached, and they were served from the CDN,
Fastly in this case. So if you were not signed in,
most likely you will not have noticed this. And I think for many people that consume
the content through their podcast players, or from YouTube, wherever you get the Changelog content from,
I don't think you'll have noticed this. This was very specific
to the app. And if you have, let us know.
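The 97.40% figure is just the share of successful requests over the measurement window. A minimal sketch of that availability SLI, with invented request counts purely for illustration:

```python
# A minimal sketch of the availability SLI mentioned above: successful
# HTTP requests as a percentage of all requests in the window. The
# request counts below are made up for illustration only.

def availability_sli(successful: int, total: int) -> float:
    """Percentage of requests served successfully, rounded to 2 decimals."""
    return round(100 * successful / total, 2)

# A ~4-hour regional outage in a 24-hour window can cost a couple of nines:
print(availability_sli(974_000, 1_000_000))  # 97.4
```

Three nines (99.9%) allows roughly 86 seconds of full downtime per day, which is why a four-hour partial outage drags the number well below it even when most traffic is still served from cache.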
MP3s continue to serve, right?
Yeah.
So, yeah.
Exactly.
Unlikely that people really noticed, except for the people who noticed.
Yeah. Well, I mean, there were some for sure, because we can see that a bunch of requests
failed, but in the big scheme of things, it wasn't that much.
Now, did we run two Changelog application instances? We did not. Actually,
well, we did, but they were both in the same region, so this was a regional failure. All of Ashburn,
Virginia, in this case Fly's Ashburn, Virginia region, IAD. That's the one that went down.
And unfortunately, that's actually the primary one.
What made this worse is that fly itself,
the control plane for the machines,
was running in that single region, which meant that no one
could scale their apps.
So if you happened to have a single or multiple app instances running only in that region,
everything would have been down.
If you could have scaled it while this was ongoing,
you could just basically spin up another application in a different region.
But that was not possible.
So again, there were a couple of things that failed in surprising ways.
And for me, what was surprising is that,
well, we did have two application instances, but they were both in the same region, and the region went down.
So now we have another one running in EWR, which I think is
somewhere in New Jersey. So yeah, we're good. We're good to go.
Why don't you put that one somewhere closer to yourself, Gerhard?
Well, if I did, it would still need to go to Neon, that's where the database is, so that would
introduce a lot of latency. Now, if we could distribute the database
and have a couple of read replicas,
which is something that I'm thinking about,
this would make more sense.
Do we want to do that?
Oh, I don't know.
Do you have other stuff to work on?
I do, but yeah.
I don't think we need to do that.
Chasing the nines is fun. Cool. What's next?
All right, so there's a thread:
linkify chapters in Zulip
new episode messages.
Remember, we talked about that in the last episode, and I think like a day or two before, it just landed.
How amazing was that?
So amazing.
Probably the coolest thing that happened in this whole Kaizen.
So far, so far, hang on, hang on.
So far, you've had an outage and a feature.
So what was it like to implement it?
It was not very hard, if I remember.
51 additions and 18 deletions, so that's a small feature.
You know, just a little bit of code
to go ahead and linkify those suckers.
So for those who don't know what we're talking about,
when a new episode is published,
our system automatically notifies various social things,
one of which is our awesome Zulip community.
If you're not in there, what's wrong with you? Changelog.com/community, get yourself a Zulip account. It's really free, and
you'll be able to chat about
the shows
after they come out. And so every time a show comes out, it posts in there:
hey, new episode. It has the summary, and the link to listen to it.
And we've also now embedded the chapters
as Markdown tables.
And that was already there, that's not this feature.
What I didn't do prior was I didn't linkify
the actual chapter timestamps.
So you could click on timestamp
and immediately start listening to it.
And so that's what I added,
was I made those timestamp links so you can click
and listen from that spot, which was requested by the both of you on the last
Kaizen.
And so since we have a three day turnaround between recording and shipping
each episode, I actually shipped the feature out, I think prior to that
episode dropping, because like I said, it was half an hour of coding, but it's useful.
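The feature amounts to rewriting each bare timestamp in the chapters table as a Markdown link that seeks to that offset. This is a hypothetical Python sketch of the idea; the `#t=<seconds>` URL scheme is an assumption for illustration, and the real feature lives in the Changelog app itself, not in this code.

```python
# Hypothetical sketch of "linkify chapters": rewrite bare chapter
# timestamps in Markdown as links that start playback at that offset.
# The "#t=<seconds>" scheme is an assumption, not Changelog's real URLs.
import re

TIMESTAMP = re.compile(r"\b\d{1,2}(?::\d{2}){1,2}\b")  # MM:SS or HH:MM:SS

def linkify_timestamps(markdown: str, episode_url: str) -> str:
    """Replace each timestamp with a Markdown link seeking to its offset."""
    def to_link(match: re.Match) -> str:
        seconds = 0
        for part in match.group(0).split(":"):  # fold H/M/S into seconds
            seconds = seconds * 60 + int(part)
        return f"[{match.group(0)}]({episode_url}#t={seconds})"
    return TIMESTAMP.sub(to_link, markdown)

row = "| 01:23 | Intro |"
print(linkify_timestamps(row, "https://example.com/episode"))
# | [01:23](https://example.com/episode#t=83) | Intro |
```

A small transformation like this is exactly the "51 additions" scale of change described: the table already existed, only the timestamps needed wrapping.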
Yeah.
Those are the best features, right?
Little bit of work, lots of value.
Exactly.
I'm wondering if anyone else is using it or if they noticed it.
And do they think it's useful?
Good question.
Let us know in the Zulip comments.
Useful, useless, or do we revert it?
The revert commit.
Not going to happen though.
Yeah.
I just don't see any reason why you take the links away.
If one person likes it, and we know Gerhard likes it, then...
Yeah.
Why not?
That's really cool.
Did you click one of those links Adam, since the feature landed?
I would say no.
I like that you would.
You said you were going to click on them.
Did I say that?
Something like that.
Go back and quote me, I wanna hear.
Jason, pull up a quote.
If I said that I wanna know I said it.
I'll eat some, what do you say that, eat crow?
You say eat crow?
Yeah, eat crow.
I wanna eat some crow, man.
I wanna eat some chicken.
Eat more chicken, okay.
I do like the feature being there.
I think that I'm just a go-there-and-do kind of person, not a stay-here-and-click-around kind of person.
Although, what I do like about Zulip, and what I like about what this offers,
I believe, is that we tend to have thick conversations, much thicker than we had in Slack. And so one of my biggest
excitements, I would say,
if that's even a word... happiness levels...
pick your 8:34-in-the-morning word.
This is an earlier recording time,
as you can probably tell.
I'm gonna do a network chug, drink a coffee.
Are you apologizing for your lack of sharpness or what?
Yes, yes, yes I am. I am not very sharp, and this conversation must be boring Jared. That's what it is.
We're not good hosts, and he's boring himself over there. Jeez. No, I was drinking the coffee. All right.
Well, yeah, did you bring some crow? I got more to say. I got more to say. What I enjoy,
if I couldn't tell, I'm getting there,
is how thick these comments are in Zulip.
So back to what Jared said, you are missing out: changelog.com/community. When you say thick, you're talking
about the quality? Thick, good. Yes, sorry. I mean, is that not clear enough, thick comments?
I'm just making it clear, because big comments actually might mean they're not very good. Yeah, like how thick? Well, depends.
You like thick or not? Yeah, I think thick is always better than thin.
I mean like, go choose yourself a Reese Cup or whatever, right?
What? You say it like the car, Reese? You pick your nomenclature for your Reese's Cups, man. I like the big cups, okay?
Did you say a Reese Cup?
Can you show us Adam?
Can you show the viewers what that means?
It's this, okay? That's what it is.
It's big. Is it? They have a big cup. Okay, we'll call them Reese's.
My wife makes fun of me too. I used to say Nike, like "bike". Had no idea it was Ni-key my whole life. Okay?
You said Nike? Yes!
Like a fool. I've never heard anybody say Nike like that. This is amazing. How many years?
How many years did you say that?
That's amazing you never realized it till you were 35. I just thought that's how you said it.
Totally blushing, way too early in the morning. Okay. All right. Sorry. The comments are vast, lots of them.
They are plentiful. They are thoughtful. And there's lots of commentary in our Zulip.
So, back to the links, what I think they provide
is, if you are there and you're in conversation and you're using that table
as a reference point, well then you're obviously
gonna be able to go and click directly from there,
which I think is super cool,
because you have the useful tool
where the conversation's happening.
Okay, well I found a quote from our previous episode. Oh boy.
Did you bring your crow?
Cause you might have to eat a little bit of it.
What did I say?
I said, I could make those links clickable,
and maybe I'll do that.
And then Gerhard Lazu said, I would love that.
And then Adam Stacoviak said, I would concur and plus-one that,
because that would make me click a chapter start time easily
because it would be clickable for one.
And I want to now.
So you said it would make you click it
because it would be clickable and you want to now.
Yeah, and I have, but I'm not like,
I'm not a daily clicker.
No, I'm not in there.
I thought you just said you hadn't.
I clicked at least one.
Okay, all right.
Well, controversy solved. My gosh, this bus is heavy,
I'm under here. All right, so I just said this one to close that loop, and then we can move on.
So, I love this feature. Great job.
All right. I use it daily. I'm a daily active user of this feature. Awesome.
Well, before the show, I'm here with Jasmine Casas from Sentry.
Jasmine, I know that Session Replay is one of those features
that just, once you use it, it becomes the way.
How widely adopted is Session Replay for Sentry?
I can't share specific numbers, but it is highly adopted:
if you look at the whole feature set
of Sentry, Replay is highly adopted.
I think what's really important to us is Sentry supports over 100 languages and frameworks.
It also means mobile.
So I think it's important for us to cater to all sorts of developers.
We can do that by opening up replay from not just web but going to mobile.
I think that's the most important needle to move.
So I know one of the things that developers waste
so much time on is reproducing some sort of user interface
error or some sort of user flow error.
And now there is session replay.
To me, it really does seem like the killer feature
for Sentry.
Absolutely, that's a sentiment shared
by a lot of our customers.
And we've even doubled down on that workflow,
because today, if you just get a link
to an issue alert in Sentry,
an issue alert for example in Slack
or whatever integration that you use,
as soon as you open that issue alert,
we've embedded the replay video at the time of the error.
So then it just becomes part of the troubleshooting process.
It's no longer an add-on.
It's just one of the steps that you do.
Just like you would review a stack trace,
our users would just also review the replay video.
It's embedded right there on the issues page.
Okay, Sentry is always shipping,
always helping developers ship with confidence.
That's what they do.
Check out their launch week details
in the link in the show notes.
And of course, check out Session Replay's new addition,
Mobile Replay, in the link in the show notes as well.
And here's the best part.
If you want to try Sentry, you can do so today with $100 off the Team plan. To
try it out for you and your team, use the code CHANGELOG. Go to sentry.io. Again, sentry.io.
So yeah, that was a good one. I enjoyed that it landed.
We will talk about the YouTube videos for sure. That's going to come up. We can talk about it now, by the way, because really, for me, that just took the highlight in terms of features.
Okay.
So once the video podcast landed, that was just so amazing. I am still watching Adam's video with TechnoTim. Ah, I'm almost at the end.
That was such a great conversation.
Thank you, man.
Had it not been for the video part, I would have missed, for example, Tim's background, the little mini rack that he was building, the little body language.
It was just so good.
I'm enjoying that a lot more than if it was just audio only,
because there's so much more detail in that content.
Cool.
Well, that makes me happy.
Yeah.
So that's the one that, again, and it doesn't often happen
that I listen to a Changelog episode from start to finish.
I usually hear parts, which is where the links come in very handy.
Yeah, chapters. Exactly, like the chapters. But this one
episode, I'm near the end, and I just cannot wait to see how it ends.
Let me ask you a question: if Tim and I did that more frequently,
do you think that'd be a good thing?
Yes.
But I think that you need to up your game.
Okay.
And start delivering on some of the ideas.
Start implementing some of your ideas to see how they work in practice.
Such as?
Such as.
So you were talking about building a new PC.
So I'm curious.
What did you do about that?
I built the thing.
Did you build the thing?
Yeah.
Oh, wow.
Okay.
I got a beefy home lab right now.
Very nice.
What are you running?
Oh, you want the words here.
Okay, fine.
I will tell you.
I will tell you the words.
Let me see if I can... what's the case?
Did you go for Fractal?
I know you're a big fan.
Yeah, I did go Fractal.
It feels like we need some pictures.
I mean, if this will be in the B-roll, Jason, I would love to see that. What GPU did you go for? I'm very curious
about that, like, that was something. Well, so I repurposed. As you do with anything, you start with what
you have. So rather than go out and spend the five grand that I would really love to spend on
something, all I did was just go pick up a 3090
and add it to the existing machine I already had.
So I had just built this beefy machine
to be my Plex machine, which was just overkill.
I just wanted to build something.
So my motherboard is an ASUS workstation-level motherboard.
It's a W680-ACE, and it's got four DIMMs of DDR5 RAM available, up to 128 gigabytes
of RAM. So I've got that, I've got the 13900K, so it's an older generation CPU, but it's
still very, very capable.
Coupled that with the TUF Gaming RTX 3090, the maxed-out RAM, and an NVMe SSD,
well you've got yourself a really fast machine.
And that's my stack right there basically.
That's very nice.
Network?
It's 2.5 gig by default.
2.5, okay, okay, okay.
Are you thinking of going higher on the network?
So the motherboard doesn't offer it by default,
but I can add a card.
I don't know if I've maxed out all my PCIe lanes, though, with my 16-lane requirement for the GPU.
Yeah, well, if you have NVMes, it means that you can have only one or two, you can't have more than two.
There are three slots on the board. I'm only running one. I only have a need for one.
Right. So the reason why I ask is because, I think as soon as you fill
the second slot, you'll lose it. You'll halve the lanes for your GPU, going from
16 to eight.
And I don't want to do that.
Those lanes are shared with the NVMe drives.
Yes.
Actually, in practice it's not as bad as you would think. I did the same. So I maxed out
the NVMes in another machine, and because I maxed them out,
I have like four or five, and
because of that, my GPU, which is a 4080, dropped to eight
lanes. But that's enough. The drop in performance is so little, because I don't game on it heavily.
Yeah, it's so fast already. What you really want is the storage, right? You want the VRAM, not so much the speed, necessarily. Unless you really are pushing the speed, like
you're doing AI stuff and you've got a serious-parameter, you know, LLM sitting there or
whatever, then maybe you want those tokens to be as fast as possible, because that's
the whole point.
The actual difference is more like a few percent. So if you go from 16 to eight, it's just a few percent.
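For a rough sense of why x8 still holds up: PCIe 4.0 moves roughly 1.97 GB/s per lane after encoding overhead, so halving the lanes halves peak host-to-GPU transfer bandwidth. That mostly matters while streaming data (say, loading model weights), not during compute on data already in VRAM. A back-of-the-envelope sketch, with the per-lane figure as an approximation:

```python
# Back-of-the-envelope numbers for the x16 vs x8 discussion. PCIe 4.0
# carries roughly 1.97 GB/s per lane after encoding overhead (an
# approximate figure), so halving the lanes halves peak link bandwidth.

PCIE4_GBPS_PER_LANE = 1.97  # approximate usable GB/s per PCIe 4.0 lane

def link_bandwidth_gbps(lanes: int) -> float:
    """Peak one-direction bandwidth of a PCIe 4.0 link with `lanes` lanes."""
    return lanes * PCIE4_GBPS_PER_LANE

# x16 gives about 31.5 GB/s, x8 about 15.8 GB/s. Workloads that stream
# data constantly feel the halving; compute-bound gaming or inference on
# weights already resident in VRAM mostly does not.
```

Which is why giving up x8 for an extra NVMe drive usually shows up as only a few percent in practice, as described here.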
This is where I would love to geek out at.
Like this is like what I love about these conversations with Tim.
It's just they're so infrequent.
That's once per year.
So we're more catching up versus digging deep.
Yeah.
I would love for you to have these more often, and especially, you know, you had that
conversation where you talked about some of your plans, and a lot of the things that you mentioned now,
I remember you mentioning when you were talking to Tim.
Yeah, and so how did you follow through on that?
Like, did you stick to what you said, or did you change your mind as you were building it?
It sounds to me that you didn't. I don't remember Tim mentioning the 3090.
So I'm very curious, did you buy it off eBay? Because you were mentioning your good experience with eBay.
I got it. Yeah, I got it on eBay. Really good experience on eBay. I think I
got it for like 800 bucks. Right. You know, it's not the worst price ever. US dollars. Yeah. It's
basically brand new. It's super clean. You know, I tested it the moment I got it. Like I just did it.
I did like all these parameter tests. Initially I spun up in a boon to installation ran to issues with Docker and
GPU so I went to the dark side. I installed Windows 11 Pro and
So my AI home lab right now is being powered by Windows 11 Pro
I know Jerod is slapping the trash emoji on me over there, which I'm cool with, but man, you've got to explore. I love the idea. Like I told my son, it's been 20 years since I've played with Windows, that long, and I feel like there's a lot of cool stuff there. But man, they've got some really terrible warts over there. You know, it's just so bad. It's developer-hostile now, not just user-hostile.
There are ways to clean it up, though. Chris Titus has a script that you can run via the terminal, an administrator-level terminal, to remove a bunch of stuff and sort of make some things nicer, which I think is super cool. It makes it a little easier, as a non-Windows user, to get to a certain state.
But yeah, I played with it at first.
I did some benchmark testing against it.
I really pushed it as hard as I could
to just confirm it was a good buy and it was a good buy.
But I started with where I was at, versus, okay, let me get a brand-new motherboard, brand-new sticks of RAM.
And I would love that.
That's the fun side of building PCs. I really wish there was a better operating system that wasn't... gosh, will I get punched in the face for saying this? ...that wasn't Linux. I will say, though, this is the first time I've played with Ubuntu desktop in a long time, and that has actually come a very, very long way. Ubuntu desktop, I think, is probably the closest contender to a non-macOS operating system that's fun to play with, GUI-wise.
Now, albeit I have not explored Pop!_OS and others. You only test things that you're curious about, and I just haven't been curious about desktop-level Linux stuff yet, mainly because it's been the year of the Linux desktop forever and it's never truly come.
I'm hopeful though that one day, I think it's probably the closest it's been in a very long time.
So when I started with my adventure in GPUs, I needed it to do the video editing properly.
I went Linux first.
I was saying, you know what, I'm not going to go to Windows. What is the best Linux distribution that has good support for GPUs out of the box, where it just has the drivers pre-installed and everything just works? And where the tiling manager works as well, because that can sometimes be a pain. Pop!_OS was the one that kept coming up very high, and I said, let's just try it. So I did, and I think I've been running it for two years now, coming up on two years. And before, I had NixOS. So this was a machine that went from NixOS to Pop!_OS, and I'm enjoying Pop!_OS more. It just feels like a more natural way of using it.
What's it based off?
Ubuntu.
Okay.
So it's Ubuntu-based, but a lot of the little things that maybe don't work in Ubuntu seem to be a bit more polished in Pop!_OS, specifically the NVIDIA integration and the tiling manager. Things are just, I don't know, a bit more cohesive. It feels like it's a bit more cohesive.
So I'm using this machine for a bunch of things. And while I started editing the videos on it with DaVinci Resolve, Pop!_OS is not a supported operating system for DaVinci Resolve itself, and then I was forced to go to Windows. So when Tim said that you have to try them all, in that interview, I realized, yes, I actually went through the same journey.
That's what compelled me to.
He was like, Adam, you've got to try them all, man.
So I use Linux for something, I use Mac for something else, and I use Windows for editing, because apparently the editing software has the best support there, like codecs and things like that; they work really well on Windows. The operating system itself? Oh, wow. I don't know what words to use that would be politically correct but also accurate. They make it so hard to do everything.
Manage your own user. Like, even to manage your own user, there's Control Panel, and there's User Accounts in there. Then you've obviously got System Settings, or just Settings, which has those things too. There are like three places to do pretty much anything, and you've got to do three things to change one thing, and they're all in different places, and some of them are legacy-looking applications. Good luck even finding it in the sea of things you can find.
I just think somebody there is not empowered to fix it, or somebody doesn't care. I'm not sure which one it is. But they really could, because this is the reason you're seeing this lack of out-of-the-box support. I had such trouble getting my GPU to play well with the operating system, and then being able to pass it through to Docker. I had to go and add some things, and the documentation seemed, well, seemingly foreign. I just felt a little lost on Ubuntu Linux, trying to get to the initial state I wanted to be at.
And that's my default. So I didn't try to go to Pop!_OS or explore. And I could have, but because I had that conversation with Tim, he's like, you should try them all. So I was like, well, you know, I tried Windows 11, and it's not the worst. But man, I felt successful just SSHing into it. I had to post this video to our general channel, and then Jerod had to go in there and slap the trash emoji on me. Cool, love that. Just because it's such a success to SSH into the machine.
You had to go and install the OpenSSH server.
The client was there by default, but the server was not.
And then I think my original username had a space in it. So when I was SSHing into it, it wasn't adam, it was something else, I don't know. Trying to find the slug for my username, I don't even know if I found it. I think I luckily found something to swap it out, restarted the SSH server, and I was in.
Well, if you have multiple NVMe drives, you could always dual boot, and you could try, you know, another Linux distribution. I can recommend Pop!_OS. They have a new graphical environment, I think, COSMIC. That's new; it's not as stable, but I hear very good things about it. They rewrote everything in Rust. Apparently it's amazing. I haven't tried it yet; I'm still on the old one.
What are your thoughts on WSL 2 and how it integrates? I haven't explored that deeply, but I have a lot of hope that there's cool integration. Like, one thing I know I can do is rsync files, state from one machine to another, via Ubuntu in WSL 2 on Windows, and I can run Linuxy things, or whatever I have installed into a WSL distro, I should say. What's your experience there with that?
I tried it. It's okay. I mean, it gives you a close enough Linux experience; it's much better than it used to be. PowerShell, I just can't get along with it.
I just wouldn't use it. Command Prompt, seriously? That was like 20 years ago. The thing is still around. So yeah, legacy, but I think good legacy. So WSL 2, I think, is a good feature, but Windows itself as an operating system, as a package, like the outer package, just feels wrong to me. And I use it only for specific reasons.
Yeah, so DaVinci Resolve,
I have a decent experience with that.
If I had to do this all over again,
I would get a Mac, like an Ultra 2, an M2 Ultra,
or an M4 Ultra when they come out,
like a really powerful CPU and a GPU. But the RTX, like
a 4080 or 3090, that is the level of hacking that you just don't have in the Mac world.
So I just wanted to try it out. It was okay. I mean, the Windows workstation, for example, that has a 4090, it's a very loud system. Like, really, I don't think people realize how quiet the Macs are. Whether it's a laptop, whether it's a Studio, whether it's a Mini, you know, they're like whisper quiet. And this first Linux workstation which I built is a fanless one. The PSU has no fans; there's no fan in the system, and I love it for that. For example, the NVMe, that has no spinning disks, which, you know, is just great: fast and silent.
Yeah, exactly.
So it's a great feature in comparison. The Windows machine is just the opposite. It's just loud, it's just hot, very hot. It's a 13900KS. It's like the top of the range.
Okay, 13th gen. The KS is overclockable, I believe.
Exactly, yeah.
It goes all the way to like 6.6 and something.
So we have the same CPU then.
Yeah, yeah.
Except for you got the overclockable version of it.
Yeah.
It also has 192 gigs of RAM, so it's fully maxed out. NVMe, the whole thing; it's a fully maxed out, or it used to be a fully maxed out, PC maybe about a year ago. So yeah, it's okay.
But, like, trying that world... it is my editing machine. And I love when Tim said that, right? You need to have roles for your machines. And that's what it is. And if it were to break down, that's okay. There's another machine to use to replace it.
So your comparison, though, the fans and the noise level, I think... The exploration for me is not, okay... I think for now I'm like, yes, let's make this a great PC of some sort. Let's explore this world. I don't know if I'll stay there forever, but I'm enjoying the exploration. I'm not enjoying it because it's Windows, necessarily. It's enjoyable because it's new territory. It's newfound. How does this work? Does this fit for me? If it does, where does it fit?
I will 100% concur and agree that while this machine,
its fans just spin up opening applications,
it doesn't need to, it's got this beefy CPU.
So for whatever reason, the front three fans spin up
for 10 seconds, just enough to hear it.
And it goes back down and it kind of cools off.
Or if you ask a big question in Ollama or whatever, it's obviously gonna spin up for the duration of that question, so it's by design doing that.
Will the Mac world supersede this?
Any smaller, easier package that's silent
and less power hungry?
That's cool.
What they're doing there is super cool.
But you can't build it yourself and it's so sad.
I know, I know.
Anyways, we can probably move on.
But I think that's what I love about building PCs: just the exploration of the hardware. How does it work? You know, what works together?
Yeah, me too. Me too. And I think we're at a stage where it does make sense to have a few lying around. Have a Windows machine, right? If you have to do testing or anything like that, use the Linux machine.
I think it's a very eye-opening experience
as to what is possible.
And then, if Mac is your default,
or if not, if you have the opportunity
to get maybe a Mac mini, do that as well.
And then you will find the one that you love
and the one that's your daily driver,
and you have a couple as backups
when something goes wrong, because it does. It does happen.
Cool. Well, talking about podcasts, video podcasts, because that's how we started and I don't think we finished: there are so many new features around YouTube, and around content on YouTube. I think the reactions have been mostly positive. There was a whole Zulip discussion about it, which I don't contribute to many of, but this one I did contribute to. And February 1st, I even got some love hearts from a few of you, Nabil and Marsh. So thank you very much for that. But what do you think about launching video podcasts? Like, how was that transition? How's that new chapter?
how was that transition? How's that new chapter? I think it's going pretty well. I guess I don't consider it to be over with. Maybe it is.
Because I guess a lot of what we think about is production workflow and
we're constantly trying to improve that and make it better.
I would say that we successfully went video-first now, and we have systems in place so that we can do that reliably. I had to build a few things, and we had to figure out a lot with regards to chapters and timestamps, and how we handle the videos on YouTube versus the podcast episodes and audio. And all the nuts and bolts, I think, were fine. We just kind of figured it all out and did it. Nothing really was too difficult there.
The response has been positive.
I think a lot of our audio listeners
have a little trepidation
because they think is it gonna become a YouTube show?
And you know, they never wanna listen to it on YouTube,
which I don't either, honestly.
We're doing this for people who like that kind of thing,
like Gerhard, I guess, and others.
And I acknowledge that you all are out there
and we appreciate that you are.
And we want you to watch it on YouTube,
which is why we came there.
But our existing audience, very few of them
find much value in the videos I think,
or the ones who at least are vocal don't.
And I get that. And of course the trepidation is like, well, will the audio suffer? And will we start to pull a thing up on the screen and have reactions to it without explaining what we're looking at? I don't ever want to get there. Hopefully we can, you know, be self-aware and always remember that we have a listener, not just a viewer, and explain what we're looking at if we are looking at something. So for them, I understand, because
if you love something and it's changing,
you just hope that it doesn't change for you for the worse.
And so hopefully we haven't done that.
I think most people who had trepidation, at least so far, have been fine with the change. They haven't noticed much of a difference. And for those who love video podcasts, or watching conversations on YouTube... because is it a podcast, actually? I guess YouTube thinks it is. We're there now, and people are watching. You know, we get 500 to 1,000 watches on a video. We hope to grow that. And no real complaints there, I don't think, besides your random YouTube troll. We've had trolls our entire career, so we don't feed them or care about them very much. Those are my initial thoughts.
Adam, anything to add or subtract?
Adam, anything to add or subtract?
I ran into... because I was actually talking to my son last night. I was like, dude... because my son's nine, and I'm about to give him an Ubuntu desktop machine to play with; I'm gonna start teaching him Linux. And I was excited, because I had just, like literally maybe earlier that day, SSHed into this Windows machine, and I was like, success, you know? And I was referencing embedded systems and why it's so cool, how Linux is so cool, and I'm like, you wanna see something cool? And I went to YouTube and I searched "embedded changelog", I just searched those two things, and it came up with the embedded podcast we did, Jerod, that you're aware of.
And I go there, and there's this comment that's like 500 words deep. And I'm like, I had no idea, one, that this comment was here, and two... it was like, you know, I had a revelation, I suppose, in terms of how cool this move is: we've got this new commentary level. And the person's like, I like this podcast, I'd love to hear more, and they kind of go into all this stuff. Now, the person doesn't have a username, they don't have an avatar, so that's kind of sad. But, you know, I'm still hopeful that there's more like that that are thicker. Geez, y'all don't like that word. I just like it; I think thick is a good thing. Anyways, I won't go back there. It's an exhaustive, you know, thoughtful comment that I haven't even read the whole of yet. But I was like, wow, there's this super huge comment where somebody's actually talking about relevant things, and not how we suck. So that was cool.
I love that.
You know, I was pushing for this, because I was like, this is something we need to do. There's a whole audience there that we can tap into that we're not. And clips are great, but they're not the full-length podcast. I'm now sad that when I share with people that we're on YouTube, they're like, hey, did you just start producing this podcast? I'm like, nah, man, it's been like forever, basically. And so we have this huge archive that's not there. And that kind of makes me sad, because
there's a lot of visuals, and a lot of just, like, seeing the reactions, like Gerhard mentioned with Tim: just being able to see his pause or his thinking, you know, or my thinking whenever I'm talking, or him pointing to his mini stacks behind him. I think it's not for everybody, but I think there's a large majority of people who are gravitating more and more towards that, who do listen on YouTube, pay attention when they want to, but when they want to, they can go and look at the screen, you know. And that's been my use case for it personally. And so I wanted that for us for so long. And I just felt, not so much bored, but there was a missing, necessary, humanistic component that was visual that wasn't there. And so when you're audio-only, I feel like you're stuck in this box, and I feel like now we're the genie out of the bottle, you know, the cat's out of the bag, so to speak. We're able to explore the bigger world of YouTube and capture... not so much more of an audience, but, like, I think there's a lot of people there waiting, wanting what we produce, and now we're there in full form.
Yeah.
So YouTube is here to stay, a new way to interact, for sure. And more and more integrations on the website. I quite like that. For example, the watch button. That was one of the new things to drop on an episode. It's getting a bit crowded... maybe this one. And there, you can click on it, and it'll pop in there and just start playing. How amazing is that? Cool.
Yeah, it's cool. Right.
That's the stuff right there.
And on the play bar, if you go to an episode's page, the play bar got a little wider and has a watch button, which is the same thing. It'll pop, it'll embed it underneath, once you click on it. We don't auto-embed, because, you know, only when you want it, on demand.
Should it say "listen", since this one is "watch"? Yeah, maybe. Let's say "listen" and "watch", maybe. Play and watch, maybe listen and watch. Yeah, that'd be a good improvement. But these are nice. Like, you can watch it right here, and it just gets automatically expanded.
I like that we are a commit-driven company, by the way. A lot of the features that get dropped, I just find them through commits. This is so amazing. No pomp and circumstance, you know, no blog posts, nothing.
No, just: we are a commit-driven company. So if you want to know what is happening at Changelog, follow the repository and just look at the commits.
That's right. We're very committed.
Well, friends, I'm here with Samar Abbas, co-founder and CEO of Temporal.
Temporal is the platform developers use
to build invincible applications,
but what exactly is Temporal?
Samar, how do you describe what Temporal does?
I would say to explain Temporal
is one of the hardest challenges of my life.
It's a developer platform and it's a paradigm shift.
I've been doing this technology for almost like 15 years.
The way I typically describe it: imagine all of us writing documents in the 90s. I used to use Microsoft Word. I loved the entire experience and everything, but still, the thing that I hated the most was how many documents, or how many edits, I lost because I forgot to save, or something bad happened and I lost my document. Back in the 90s, you got in the habit, when you were writing a document, of hitting Ctrl+S literally every sentence you write. But in the 2000s, Google Docs doesn't even have a save button.
So I believe software developers are still living in the 90s era, where the majority of the code they are writing has some state which needs to live beyond multiple request-responses. The majority of the development is: load that state, apply an event, then take some actions and store it back. 80% of software development is this constant load and save. So that's exactly what Temporal does. It gives you a platform where you write a function, and if during the execution of that function a failure happens, we will resurrect that function on a different host and continue executing where you left off, without you as a developer writing a single line of code for it.
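As a toy sketch of that load/apply/save loop (this is NOT Temporal's API, just an illustration of the pattern it automates): state is loaded, an event is applied, and the result is saved after every step, so a crashed run can resume where it left off.

```python
import json
import os

# Toy illustration of "load state, apply an event, store it back" (the manual
# Ctrl+S era). A durable-execution platform does this checkpoint/resume dance
# for you, transparently, across hosts; here a JSON file stands in for storage.
class CheckpointedWorkflow:
    def __init__(self, path: str):
        self.path = path  # stand-in for durable storage

    def _load(self) -> dict:
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {"step": 0, "total": 0}

    def _save(self, state: dict) -> None:
        with open(self.path, "w") as f:
            json.dump(state, f)

    def run(self, events: list) -> int:
        state = self._load()                 # load that state
        for i in range(state["step"], len(events)):
            state["total"] += events[i]      # apply an event
            state["step"] = i + 1
            self._save(state)                # store it back
        return state["total"]
```

Re-running `run` against the same checkpoint file skips already-applied events, which is the resumption behavior described above, minus all the genuinely hard parts (distributed hosts, retries, timers) that a real platform handles.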
Okay, if you're ready to leave the 90s
and build like it's 2025,
and you're ready to learn why companies like Netflix,
DoorDash and Stripe trust Temporal as their secure,
scalable way to build invincible applications,
go to temporal.io. Once again, temporal.io. You can try their cloud for free or get started with open source. It all starts at temporal.io.
Now that you've got video-first going on, it's time to get CPU officially launched. Turn that frown upside down into a smile and an index, you know, something that's cool. Yeah, very nice.
Okay, any infrastructure that we need to think about, to talk about, for CPU.fm? You could share with him, Jerod, what your thoughts are on the application.
The plan is just to have a bog-standard web app with RSS feeds, right? Nightly-style, like, that is a bog-standard app.
Or in terms of the actual software?
Do you have a database?
Do you need a CDN?
What would that look like?
There'll be a database, a CDN will probably be smart,
but maybe we just drop it on R2.
Probably similar to what we're running now for us,
only it's gonna be simpler
and it's gonna be a separate software stack.
So probably gonna go back and give Ruby on Rails
another kick down the road and see.
Just because it's been a long time
and I've been in Elixir land for almost 10 years now.
And every time I write a little bit of Ruby code,
I'm like, you know what, this is my first love.
And so probably gonna be a Rails app, deploy it on fly.
It'll be pretty simple, have a backend,
write out HTML pages and RSS feeds.
That's the plan so far.
I haven't written a lick of code yet, so these things may change, but that's the plan.
Nice.
Keep it simple.
Okay, yeah.
Neon for the database, I'm imagining.
Yeah, I would probably just reuse all the stuff
that we've been using over here.
OK.
Yeah.
Public repo, private repo.
Good question.
Good question.
Probably public.
I don't see why not.
Yeah.
OK.
I'm not going to promise that, but I can't think of a reason why it wouldn't be public.
Mm-hmm.
It's mostly... the admin's gonna be for, like, just managing the podcasts that are part of it, and then the code, the actual logic of it, is gonna just be in building, basically, a super feed for people, and maybe custom feeds too, so you can get the CPU pods that you like, and maybe if you don't like one, uncheck it or something. I built that already for us, so rebuilding it over there would be straightforward.
Okay.
Which will require user accounts, of course,
but, or would it?
Maybe not, I don't know, I'll figure that out,
but that's the plan, pretty straightforward,
not much code.
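The "super feed with per-user unchecks" idea sketches out to something like this (hypothetical names and fields; the real CPU.fm code doesn't exist yet): collect episodes from every member feed the user hasn't excluded, and sort them newest-first.

```python
from datetime import datetime, timezone

# Hypothetical sketch of a super feed with per-user exclusions; the data
# shapes here are made up for illustration, not taken from any real code.
def super_feed(feeds: dict, excluded: frozenset = frozenset()) -> list:
    """feeds maps podcast name -> list of (published: datetime, title: str).

    Returns (published, podcast, title) tuples, newest first, skipping
    any podcast the user has unchecked."""
    items = [
        (published, name, title)
        for name, episodes in feeds.items()
        if name not in excluded
        for published, title in episodes
    ]
    return sorted(items, key=lambda item: item[0], reverse=True)
```

A per-user custom feed is then just `super_feed(all_feeds, excluded=user_unchecks)` rendered out as RSS.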
I don't see why we wouldn't open source it. Unless I'm really bad at Ruby now; you know, it's been a long time, and it's embarrassing.
I think that's more of a reason to open source it. You can ask for help. Contributions welcome.
I haven't typed "rails new" since probably 2015, so I'm kind of excited to just type "rails new" and see what happens.
Well, make sure you record that. I think many people will be interested in your reaction.
I will.
See, Gerhard's thinking.
He's thinking about the content.
That's what he's trying to ask you about, Jerod.
I know.
He's like, how can we promote all these cool things?
Yeah, that's what I'm thinking.
Yeah, I mean, maybe record it.
I don't know.
I guess if you want that kind of content from me as I build out this new web app, which honestly is not a super exciting web app... but still, maybe I'll use Cursor the whole way, you know, and then I'll just curse my way along and then just rewrite it myself. If you want that, let us know in Zulip.
Yeah, it's a new world, and I think seeing how you would approach that with your Rails knowledge, the things that have genuinely improved from how you remember it... What is better, what is worse? Because you have a unique perspective, which is the Elixir one, running an Elixir application for so many years. How does that compare to Ruby on Rails? I don't think many people did a switch back. I keep hearing about people going from Ruby on Rails to Elixir, but going back, I'm not aware of anyone doing that. It didn't make Hacker News, it didn't appear on the Changelog; this would be news.
Well, I wouldn't be ditching Elixir
because we'd still keep changelog.com over there.
So I would be going back for a new app,
but living in both worlds from then on,
which I'm happy to do.
Yeah, that could be interesting.
Old man fumbles around in the dark with Rails. He yells at Rails, then yells some more. Okay, yeah, that sounds interesting. Cool.
So, Pipely. Let's see how long this is going to take. And by the way, this is where the screen sharing will come into its own. So, there was a question that we had from Tim Uckun, I'm not sure if I'm pronouncing that right: why do you need a CDN if you have Fly.io? And I replied in Zulip; that's the sort of conversation that happened there. And I went through all the various things, the reasons why we need a CDN even though we have Fly. So you can read it either in that GitHub discussion or in Zulip; it's all there. So you can go and check it out.
But the thing which I would like to talk about is that we are starting to have contributions to Pipely. And you may be wondering: Pipely? What Pipely? Well, what is Pipely? We renamed from Pipe Dream to Pipely. Why? Because Pipedream is taken. We can't get pipedream.com; we've already established that. That's a big company, very successful, I think VC-funded. So yeah, pipely.tech I think is here to stay, and pipely is the name of the repo. Whichever one you go to, it will just redirect you; so changelog/pipely or changelog/pipedream, there's a redirect. And now we've had two contributions. If you go to the roadmap, the first one was "make it easy to develop locally": pull request 7 from Matt Johnson. That took a while: writing some Dockerfiles, explaining how the pieces fit together.
There's a README. So if I go to Pipely, we have docs, which we didn't have before: local dev. All of this explains what we're testing, how we're testing, quite a few things there. So if you wanted to try Pipely, running it locally, there is a doc that explains all of it. So thank you, Matt, for this contribution. This one's great, and I'm sure that we will build on top of it.
So Matt, did he do this himself and just document as he went? Or, like, do you know how he went about this?
So I think there were moments when we got together. So we had... okay, let's go to pipely.tech. Pipely.tech has the whole story. There are no more three mages or three wise men; it's just a world. So the image has changed. But we had "Let's build a CDN, part two" with Matt and James. So that's there, and we'll link to the video so you can go and watch it. And "make it easy to develop locally", so this was kind of a follow-up to that. So Matt did a bunch of things. If you go to pipely.tech, you can read the whole story right now: it's the second article, "Let's build a CDN, part two", and this one, "make it easy to develop locally", is the first one. So in preparation for that, Matt had to do a bunch of work, right, to understand how the pieces work, what they are, try running it locally, and he cleaned all of those notes up and contributed them to the repo. So if anyone else wants to try this, now they can. Now that's there. So let us know what you think. So that was one.
The second contribution, which was completely unexpected, is resolving the Varnish TLS issue. So this was a big issue, and we went deep. So Nabil Suleiman, he's someone that you may remember from a Ship It episode; we talked about KCert. It was a simpler alternative to cert-manager that Nabil wrote, because cert-manager was too complex. I forget which episode exactly it was, but you can go and look it up. He heard us talk about the issues that we had when it comes to Varnish connecting to TLS backends, or TLS origins, and he wrote something that solves the problem. It's called TLS Exterminator, and now Pipely is using TLS Exterminator to connect to origins that require TLS termination. How does it work, in a nutshell? We now spin up two processes.
We spin up Varnish and TLS Exterminator. Varnish connects to TLS Exterminator, which then proxies requests to the HTTPS backends and does the TLS termination and all of that. So with that, if I go back to... actually, I was here... with that, we can now add Fly backends. Now, these URLs, they have HTTPS. We could disable it; we could go via HTTP as well. This is something to discuss: do we want to disable it? I think we should keep HTTPS on. And if we want to keep HTTPS on, we need a component that terminates TLS between Varnish and the origin. So, keep TLS on? Yeah. All right, so we're keeping TLS on. Great. Because again, HTTP currently is available, but I think we should disable that, so it's HTTPS-only.
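The sidecar arrangement can be sketched as config generation: Varnish itself only ever speaks plain HTTP to 127.0.0.1, and the TLS-terminating sidecar carries the connection on to the HTTPS origin. The backend names, hosts, and ports below are hypothetical, not Pipely's actual configuration.

```python
# Sketch: emit Varnish VCL backend blocks that point at local TLS-terminating
# sidecars instead of the HTTPS origins directly. All names/ports hypothetical.
def sidecar_backends(origins: dict) -> str:
    """origins maps backend name -> (https_origin_host, local_sidecar_port)."""
    blocks = []
    for name, (host, port) in sorted(origins.items()):
        blocks.append(
            f'backend {name} {{\n'
            f'    # sidecar on this port terminates TLS toward https://{host}\n'
            f'    .host = "127.0.0.1";\n'
            f'    .port = "{port}";\n'
            f'}}'
        )
    return "\n\n".join(blocks)
```

The key property is that no HTTPS hostname ever appears in a `.host` field: Varnish stays TLS-free, exactly as the "why no SSL" rationale discussed below intends, and the sidecar process owns all the TLS complexity.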
We learned quite a bit with Nabil about why Varnish doesn't have support for TLS. So if we go to Varnish Cache, "Why no SSL?": there is a page on the Varnish site that talks about why SSL was not implemented. And you may be thinking, who wrote this? Poul-Henning. If anyone doesn't know who Poul-Henning is, let's look him up. This was 2011, by the way. TL;DR, before we move on: OpenSSL is too complex. And when it comes to the implementation: if this had been implemented in Varnish, it would have complicated the code significantly, and the SSL proxying would have required a separate process, which is basically what TLS Exterminator is, a separate process. The difference is that not only would Varnish have been more complicated, it would have been slower. Right, so all the code that deals with SSL would have slowed down Varnish, and that makes a lot of sense.
So who is Poul Henning? Poul Henning Kamp.
He's a Danish computer developer and he's known for work on projects such as FreeBSD and Varnish. So he's the guy that you can thank for FreeBSD; he had a significant contribution. And he is the top contributor on Varnish. Some would say Varnish is his idea.
But what's really surprising, so let's go to Poul-Henning again.
And he has not a FreeBSD.
There's that.
OK, so we'll go through GitHub.
phk.freebsd.dk
So freebsd.dk apparently is his domain and he just has a subdomain on it. I think that's really cool. So apparently Varnish has a moral license. I had no idea about that. And he's very transparent about the accounting. Like, he runs a lot of the software behind Varnish, like the docs and a couple of other things. And he's very transparent about how he spends his time
and how much he charges for it.
And like, I was fascinated as an open source project,
how transparent it is and who contributes the most
and things like that.
So popping the stack and just keeping going through who he is. So FreeBSD, apparently, MD5crypt, jails, nanokernels, timecounters, and the bikeshed. He invented the bikeshed concept.
Yep.
He's the guy behind the, so, and look at this. When you refresh it, the color changes. So bikeshed.org. Bikeshed.org is what we're looking at. Poul-Henning Kamp has to come on the changelog.
Oh my gosh.
I think it does.
I think it does.
And the Pipely connection is just too strong to ignore.
It's so strong.
All right, so that explains why Varnish doesn't have SSL. And Varnish Enterprise does, right? So there's like the whole commercial aspect, but Varnish open source does not have SSL, and there's a couple of ways to solve it.
And we may have talked about this on a recording that's not public yet, so
with Nabil, so we will wait for that to land. But what does this mean in practice?
And this is where I go to the terminal. So we're looking at Pipely. Everything has been merged, and anyone can follow along, and we'll do the same here. Okay, so let's do this. Alias j is for just, right? So just is something that I love, and I think I mentioned it. Just do it, right? It was, that's right, Kaizen 16 I think, not the last one, the one before last. So there's a bunch of recipes that people can run, and just debug is the one that we'll look at now. This is in the context of Pipely. So Pipely, as you download it right now, today, this is what it has. So what it does behind the scenes, it's using Dagger.
And the reason why it's using Dagger is because it needs to create a specific environment
with different tools and it has to wire everything together.
So we can use, in this case, we're using Dagger to publish, package the container and publish
the container, even deploy the container.
So deploy is our thing now. Deploy is wired up, so any commit to Pipely will go out and it will deploy the Pipely application. And we'll see that in a minute. But now I just want to look
at debug. So what debug does, it adds some extra tools on top of the application container.
So what are the tools?
Let's just open it up and just have a quick look at what debug does.
So debug actually, I forget it's not here.
It's in Dagger, main.go.
So debug.
So for example, we get curl on top of the application container, which has just Varnish; tmux, htop, Neovim, httpstat, sasqwatch, which is an interesting utility, it's like watch with some extra features; gotop and oha. And oha you will remember. And then just, obviously.
So it's just a way to interactively debug the container and try a few things out without
polluting your system.
I think that's the key takeaway there.
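As a rough sketch, the kind of just recipes being described might look like this (the recipe bodies and the port are assumptions for illustration; the real Pipely justfile differs):

```just
# Hypothetical recipes mirroring the ones mentioned in the episode.
debug:
    dagger call debug terminal    # app container plus extra debug tools

backends:
    varnishadm backend.list       # list configured backends and health

check:
    curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:9301/
```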
So let's just run that.
Let's run debug.
And the terminal function is what puts us in that container.
So I ran the command, and right now I am in a container
and I have a bunch of toolings available to me.
So what are the tools?
If I do just, again, just is there. I have a couple of commands to run.
I could run these things locally but really I just want all that to be wrapped because typos and a couple of other things.
So what would you like me to run first?
Just backends.
Just backends, the first command. So let's see what just backends does. It just wraps varnishadm backend.list. Because Varnish isn't running, there's no backends to list. What would be a backend? A backend would be, for example, the changelog origin, the changelog application. A backend would be the feeds origin or the assets origin. So this is where backends get plugged into Varnish, and Varnish provides caching for those backends. Cool.
So how do we start Varnish? Let's see if just up... look at that. See, that's why we have something like this. So just up, boom, there it is. It's tmux, a terminal in the terminal. So there's quite a few things there. So just backends, there's nothing there. And if I do just check, that's the one, it does the first request, it fails, and on the second one we can see we got a 200. Right, again, it's really fast, and again, this is messed up. All right, it'll have to be horizontal, I'm sorry, it will have to be horizontal. So just check won't fit a lot. But there you go. There it is. We can see HTTP 200. Okay, and we can see where the request came from, we got a hit. It's a second hit, from local. So this came from Varnish. Cool. And if I do just backends, we see the two backends, which are healthy. Cool.
What other commands should we run?
Let's do bench cdn.
Actually, I think, let's do bench origin first.
So bench origin, and you will recognize this.
This is using oha.
And we are benchmarking.
That's beautiful.
Yeah, it's not as good as it runs locally.
There's a bit more detail.
But it's pretty decent, I have to say.
So we have just benchmarked the changelog, I guess that's what we're doing here. So the benchmark runs locally, but we are benchmarking the changelog origin application, which is production. Right now, this is production. Okay. Yeah, so we're benchmarking production. How many requests per second? 93.7. 93. So about 90 requests per second. Now, I'm in London. This is actually split between New Jersey and Ashburn, Virginia. So there's two data centers, and it can go to either one. It goes through the edge and then eventually connects there, which then has to connect to, I think, the database. It hits the database. 90 requests per second, not great, but unlike the CDN, this goes to the application directly.
So let's bench the CDN, and we are sending a hundred thousand requests per second. Sorry, a hundred thousand requests, not per second, a hundred thousand requests, and let's see how long it takes. So that took just under 10 seconds, and we completed 10,000 requests per second. So the CDN, we can see, is doing its job. I'm connecting to it locally, the latency is low, and this is our changelog.com CDN, right? So now let's benchmark.
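As a quick sanity check of the numbers just mentioned (100,000 requests completing in roughly 10 seconds):

```python
# Sanity-check the benchmark arithmetic: 100,000 requests
# completing in roughly 10 seconds ("just under 10 seconds").
total_requests = 100_000
elapsed_seconds = 10.0  # rounded for illustration

print(total_requests / elapsed_seconds)  # 10000.0
```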
Let's go to CDN2. CDN2 is Pipely deployed on Fly that now proxies to the origin.
So this is the new Pipely that we're setting up.
And I think we already had this, but how does it behave with the TLS proxying, with all of that, right now?
We have all those things in place, and we're almost complete. Remember, we had about, I think, 10,000, 11,000, something like that. This one has 4,000 only, so it's slightly slower. It's going to cdn2.changelog.com. Now, the application itself has a shared CPU, only 256 megs of RAM. So it's like the smallest, lowest, cheapest CDN instance, sorry,
it's the cheapest fly.io instance application that we can run.
So we could make it quicker.
We could make it bigger, but that's not what I would like to show.
What I would like to show is if we benchmark varnish directly.
73,000.
73,000 requests per second. And actually it's quicker, it's 132,000. The problem is the benchmark, we're only sending a thousand requests. So let's just make it a little bit more, let's just send a bit more. Let's send to Varnish.
Let's go via HTTP 1.1.
Just need a couple of things and let's go a million.
So let's just go a bit more.
So let's benchmark Varnish.
I messed something up.
Let's see what did I mess up.
Bench.
That is 1.1.
There we go.
I just made a typo.
All right.
Let's benchmark Varnish.
So we are sending it a million requests per second.
Where is this running?
Everything is running locally. It's running inside of Dagger. Max requests total.
How many requests total? Max requests total, you said, right?
We're sending a million. Oh, I made a typo actually, 10 million. We're sending 10 million requests.
So let's see how it behaves exactly. Oops, we'll send it 10 million requests,
and we are more than halfway there.
So if I go to this instance, remember,
the same Pop OS instance.
And if I run Btop, I can see how this instance,
like what's happening here.
You can see the CPUs.
There's a lot of red.
So this is now CPU bound, actually.
And everything is local.
So there's no network because it happens in the same container, in the same namespace,
same everything. Which means that this is really as fast as it gets.
And there's our result. That's how many requests per second Varnish can serve.
211,000.
211,000 local.
So Varnish isn't slow, it's caching well. We can look at the distribution. And because we're right there where Varnish is, there is TLS Exterminator that it needs to talk to, which terminates TLS, right?
So that's an external process that connects to the origin, and it can connect to multiple origins. So right now we have only changelog configured, but we'll have feeds, we'll have a couple more.
This will run next to varnish.
And I think the pieces are starting to come together.
Any thoughts?
This is cool.
Man.
Would we like, and here's a question,
would you like to scale those instances up
to see how much faster they will go?
If we provide bigger instances, well, what are we getting right now?
Against our current setup? So our current setup, which uses Fastly, we're getting, yes, between 10 and 11 thousand requests per second.
So we're about halfway there. We're about halfway there, yes.
With the cheapest, smallest instance, we're at about 4,000. You're saying that's all Fastly can do, is 10 to how many thousand?
10,000?
It was about 10 to 11,000.
So there's a couple of things at play.
This is like the pop, which is closest to me.
I have seen it go faster.
So I've done a couple of other benchmarks.
Sometimes it goes to 16,000, 17,000.
So it can go faster. I think it just depends on network conditions, load on their system. But we are sharing, right, network with everybody. But if I can push 11,000 per second, that's a lot of requests per second, by the way. I think, yeah, it doesn't matter that it's 4,000.
Yeah, so how fast is fast enough? If we look here, you can see that right now I'm downloading 1.27 gigabits per second, and my network connection goes all the way to 2 gigabits. Okay. Right, so right now I'm at about 10,000, 11,000. So basically, Fastly is limiting my one connection, I mean, I say one connection, one IP, to about one point something gigabits per second. And maybe we could benchmark it elsewhere. But the point is, you don't want one user to use all your available bandwidth, right? So you need to apply some throttling.
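A back-of-envelope check of those numbers: if roughly 10,000 requests per second saturate about 1.27 Gbit/s, the implied average response size is around 15-16 KiB (the response size here is derived, not measured):

```python
# Back-of-envelope: what average response size would explain
# saturating ~1.27 Gbit/s at ~10,000 requests per second?
gigabits_per_second = 1.27
requests_per_second = 10_000

bytes_per_second = gigabits_per_second * 1e9 / 8
avg_response_bytes = bytes_per_second / requests_per_second
print(round(avg_response_bytes / 1024, 1))  # 15.5 (KiB per response)
```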
So let me show you something interesting. If I, for example, let's just do bench. So I do just bench. And if I do bench, let's do, remember Bunny? We have bunny.changelog.com. Remember, the other CDN. This is how the other CDN behaves. 1,700. So let's go, and I would like to go, let's go 100,000 requests per second. Let's see how that behaves.
100,000 requests total.
Mm-hmm. 100,000 requests total.
You keep saying per second.
Yeah you do.
That's what Adam was trying to fix earlier too.
Yeah.
Yeah, sorry. 100,000 requests. We're sending 100,000 requests, and we want to see how many it can serve per second. And what we see here is that it stopped at 2,000, at about, just about 3,000 requests.
They block you. Exactly. They throttle. And this was a surprise. This was a surprise,
right? So they have some sort of protection because I could be DDoSing them. Sure. Imagine
it would be like a hundred of us doing the same thing. Yeah. I mean we'd be sending hundreds of
gigabytes and they would just, sorry, hundreds of gigabits. They have to consume that bandwidth too
like as a... Exactly. ...infrastructure. Exactly, yeah. Okay. So as you can see, I mean, I'm not liking this behavior. I mean, I can't benchmark it.
So from a benchmarking perspective, now it's resumed, see?
So there must be-
And then it stopped again.
And then it stopped exactly.
So it was just about a hundred requests it let through.
So it's just blocking me and then letting more requests-
Are you doing this via an API key?
How do you authenticate this?
Is this just a-
I'm not, I'm just hitting it as public like anyone.
Okay, I was gonna say, if you can do this via an authenticated way, then you can always just pass a benchmark flag or something to get past this, maybe. I'm just hypothesizing how I would build it if I was building it, you know. Yeah, yeah. I would allow somebody to benchmark my system, because there's gonna be times you want to benchmark your system. Yeah, I would definitely look into that. But I didn't have to do any such thing for Fastly or for Fly.
I could just send it.
Is that a good thing though?
I wonder if that's a good thing.
Cause it just like letting anybody just like benchmark them.
You just sent 10 million requests to them.
If they can handle it.
Yeah.
If they can handle it.
Yeah.
It didn't even break a sweat.
Yeah.
You almost sent 10 million per second.
Yeah.
No, no, no, no, no.
That's too much. I need more computers. I, no, no, no. That's too much.
I need more computers.
I need my Windows machine for that.
That's right.
Two of everything is not enough in that case.
So my question, brass tacks, is Pipely fast enough?
That's the question.
Correct.
So let's scale it up.
And I feel like 4,000 requests a second versus the 10,000
to 11,000 you get on Fastly. Is that going to noticeably impact anybody?
And I would assume the answer is no.
Well, he's scaling that machine right now to test it out.
I have.
I know he is.
I'm doing it.
He's always scaling stuff.
Let's pause for one second.
Remember earlier in this show, when we were talking about Fly being down and stuff like that?
That wasn't hate, that was just facts.
What he's doing right now, in the moment of having a conversation, is essentially upgrading that machine to be more performant, taking it from a cheap box to a slightly more expensive box. And this is all via the Fly command line. So cool. It is the coolest tech, man. They really are doing some cool stuff there. I love it. Yeah, I'm just wondering, if we were to replace Fastly with Pipely, would we get the same thing, and how does it compare?
But for that, I promise one more thing. So I'm going to deliver on one more thing, right?
Alright.
Did you notice anything different about my setup?
Like in here in the terminal?
No, no, no. Like looking at my camera, did you notice anything different?
It's super black behind you.
Are you in a whole different space?
Well, it was black before.
Yeah, it was always black.
Better camera?
Yeah, well, there was the black, it had some more detail, right? So before, the black wasn't quite as dark.
Okay. Whoa.
You just green-screened yourself?
Not quite.
All right.
So what's going on right now?
So something just went behind your head and it looks like Grafana.
Yes.
Some sort of dashboard.
Grafana is behind your head, Gerhard.
So that's one of the early birthday presents, which I couldn't wait to open.
Yeah, your birthday is coming right up, isn't it?
It is, yeah, it is. I think by the time this is out, it will have passed.
Alright.
If you're listening to this, find Gerhard, tell him happy birthday.
Thank you. I appreciate that.
So I always wanted to have a big ass monitor. Like really, really big.
So big.
A BAM, as they call it.
A BAM.
There we go.
I always wanted to have a BAM.
BAM.
BAM.
He's got one.
And that's what happened behind me.
Like the whole screen, like the whole background
is actually now a one giant screen.
This is a real screen back there?
Yeah.
Is it a TV screen or is it a-
It is a TV screen. It's a BAM. It's a BAM. Tell us more about this big monitor. Yeah, I've heard of this. So what is it? It's a Samsung S95D. Okay. And it's a 65-inch TV. So it's big. And what it means is that I can talk
to anyone and I can see exactly what's happening across
every infrastructure.
Right now I have the changelog infrastructure running there.
And do you see those spikes right there?
Do you know what that spike is?
You just now.
Yeah, exactly.
That's me just now.
I did that.
I created a spike.
He's so proud of himself.
I did that spike.
Yeah. Yeah, you did. So that spike right there is the benchmark that went directly to fly. Now,
on the left-hand side, as you look at it, that's all the metrics coming from the fly application,
our Fly changelog application. On the right, it's all Honeycomb. And because it's a bit blurry,
you can't see the details, which is exactly what we would want, right? We don't want to
advertise all the details. But really, what's interesting is the shape of it. So honeycomb,
I can't figure out how to automatically refresh. Grafana in fly has that capability. So I just
need to manually refresh it. I have to click the refresh. So let me
do that. There you go. He's leaning over. I'm hitting refresh and then you should see
that other half refresh, right? Like half of the background refreshed. Yes. And actually
it's the same timestamp. So I need to go to the last 24 hours. There you go. Now we're
looking at the last 24 hours.
So do you see those spikes there?
Yes.
Those spikes are the benchmark, which I did against Fastly, against our CDN.
So you can see that we never hit those levels under normal operating conditions. Right.
That's like a hundred X what we normally operate.
So maybe being able to serve 10,000 requests per second doesn't make that much difference, since really we never hit those levels.
I feel like we've gone to Target, the three of us. Y'all took me to the toy department and you said, pick a toy. I chose my toy. We went to the checkout, we, Target is a popular store here, by the way, Gerhard, if you didn't know Target, we checked out, we successfully paid, we've left, we've gone home, and you've not given me my toy.
Right.
Where is my toy?
Well, I can't get the monitor for you, you need to get it for yourself.
No, I mean, Pipely. Pipely. Pipely.
It's the toy.
So, Pipely, if you go to cdn2.changelog.com, it runs.
It now uses a component that terminates TLS to origins,
and now we need to add more origins.
While it's half as fast as the current CDN is,
we know that it can sustain all the load that we need to replace our CDN.
So the toy is, the toy will work. In terms of what comes next, we need to configure more origins. We need to add, for example, the feeds one, the assets one, and we need to scale the instances in such a way that they can handle the traffic. Right now we only save the responses in actual memory, so we need to configure disk. There's a couple more things, a couple more knobs to configure, but this is getting closer and closer and closer. I feel like the real toy is the Samsung S95D,
65 inch OLED HDR Pro, glare free with motion accelerator.
Gerhard, that sucker is expensive, man.
Well, eBay, half price.
That's what I say, brand new.
So you just need to shop around.
You would do what Adam does, you know?
Okay.
Do what Adam does, basically.
Well, I was just gonna let you know, you know,
my birthday is July 12th.
All right.
Just in case you're wondering.
Cool.
Mine's sooner, March 17th.
March 17th.
I wasn't really jealous of all your computers
you were talking about earlier.
But now you are.
That screen is amazing.
Holy cow.
Yeah, it is.
All right, so you scaled up our Pipely to, well, performance 1x.
Mmm, it didn't work.
Yeah, it failed. So maybe... did Fly let us down? Oh, dang, man. It was cool until... I didn't know, I didn't know what would happen. Maybe we...
Yeah, let's see. Can you do flyctl...
Let me just do that. Let me go flyctl machines list, and let's see what's going on.
Live debugging. Why not?
So we see performance 1x, two, only two really.
But the rest could not be scaled, and I don't know exactly why. So let's just do that again.
Let's do VM scale. Updating machine...
See, this other one just couldn't update, and I'm not sure why exactly.
That's the one in Heathrow. Waiting for machine to be happy... sorry, to be healthy. To become happy.
A happy machine is a healthy machine indeed.
That's right.
So that's still waiting for machine.
Okay, so now it's moving to the next one.
So just how many Pipely instances are we running right now?
One, two, three, four, five, six, seven, eight, nine, ten.
Ten.
So there's ten of them in different regions around the world.
And we've got two of the ten upgraded to performance 1x.
The other ones are on shared CPU 1X.
Exactly.
Which has also 10X, the RAM it looks like.
So the shared CPU is at 256 megabytes,
whereas the performance 1X is at 2048.
Exactly.
Yep.
So that's quite a scale.
Yeah, it's about 10X.
And I'm wondering, if we do that 10x, how will it behave? And how would that affect the bottom line of running Pipely? Because we're now 10x-ing our costs, probably.
Because you upgraded every instance around the world.
I'm not sure how much it changes the cost. I mean we can check it exactly to see how much that would cost.
And maybe we don't need 10, maybe we need just one per continent. Maybe that will be enough or one per like East Coast West Coast
Like this is like you remember like the old one. So there's a couple of optimizations which we can we can change there
So what's the question? How much will it cost? Well, I was just wondering how much extra it is. We don't need to get the exact answer. Okay, these are just concerns that I have as we move forward. And then, is ten even the right number, is a question. I mean, maybe it's smarter to leave it at the shared CPU 1x,
but have 30 of them versus 10 at the performance 1x,
for instance.
Yeah, I think, so we can see that we went with the cheapest one, the smallest one. So shared CPU 1x, you get, I think, a bandwidth which depends on how big the instance is, right? You get like a fair share of the bandwidth. So these instances were costing less, about $2 per month, just in compute costs.
We went to performance 1x, which is 31.
So that is more than a 10x jump.
Maybe if we went to a 4x, sorry, shared CPU 4x,
which is about eight, that would have been a 4x.
Yeah, a 4x and also like a more realistic upgrade.
But I wanted to make sure that we get like,
we get the higher tier ones.
Performance 1x is like the lowest high tier one,
which means that you get a full core, it's not getting throttled, and my assumption is you'll
also get more bandwidth. And that's what we're testing here. If we go to the next tier of instance,
which is like compute, optimize in a way, it's like a huge jump. But does that translate to
bandwidth performance? So we're still going
through that. I mean, we can try benchmarking it again to see how it behaves. And the reason why you could benchmark it again is because the pop, the one in Heathrow, has already scaled. So let's see, you know, how this one compares. We are pushing 480, 470. Okay, so I think we'll get a similar result, I think.
You did 10 million again.
480, I did, I think 100,000 requests in total.
100,000.
100,000 requests in total, yeah.
Just to throw some load its way,
and see how that behaves.
And we're 4,000.
So apparently scaling up the instance
did not increase the bandwidth.
Interesting.
So the question would be, is this as much as we can get? And should we, could we go higher? I don't know.
Is the limitation then network? Is that what we just resolved it to, because CPU and other things didn't really influence it? RAM didn't influence it. Yeah. So I would ask, for example, Fly:
Like how do they allocate network bandwidth
based on instance size?
Like how do those limits work?
Yeah, that's not clear.
And so that would be one question.
And what I'm wondering is, is 4,000 enough, right? Because we're looking at the graph, we're seeing the spikes, and apparently we never even hit 4,000 requests per second on our existing CDN.
And it means that the ceiling is lower,
but since we're never hitting that ceiling, maybe that's okay.
Not to mention that we've seen Bunny, for example, and this is a perspective which I haven't seen on Bunny before, where we can see the throttling kicking in, right? We can't even benchmark it properly, because it throttles you much earlier. And I looked through the config, I went through the settings. CDNs, apparently they're not all configured the same. Which is why I was looking at Varnish to see what Varnish can do, like where exactly is this bottleneck coming from, and are we okay with the ceiling?
Is it necessary to have this throttling in place for, I guess, just the system, the uptime of the system, for the vendors?
Yeah.
Would it make sense for us? Like, I mean, Pipely would not have a lot of users, I would say. We would deploy Pipely on-prem, basically. It would not be a service we're consuming. Pipely would be software we deploy for us to use.
And so do we really need throttling
if we're our own user and we control our systems?
Oh, I see what you mean.
You see what I'm saying?
Because Bunny has it probably as a safeguard
because they're public
Whereas Pipely would be deployed for just our use case, right? So we don't need throttling. Now, it's deployed for us, but it'd be hit by randos around the world. True.
So we can get DoSed.
Yeah, we could.
That is a real possibility. You send us 5,000 requests a second, we're DoSed, to one pop at least. To one, yeah. Yeah, exactly. So I think some form of rudimentary throttling makes a lot of sense. I don't think it would add very much in terms of software on our side. You can make it very rudimentary: this IP can only have so many requests a second, done. I think you're at least avoiding that low-hanging fruit.
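The very rudimentary per-IP limiter described here could be sketched like this (a sketch only; whether and how Pipely actually throttles is still undecided, and in Varnish itself this would more likely be a VMOD than application code):

```python
from collections import defaultdict

# Rudimentary fixed-window, per-IP rate limiter, as described:
# "this IP can only have so many requests a second, done".
class RateLimiter:
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.counts = defaultdict(int)  # (ip, whole second) -> request count

    def allow(self, ip, now):
        # `now` is a UNIX timestamp; in real use pass time.time().
        key = (ip, int(now))
        self.counts[key] += 1
        return self.counts[key] <= self.max_per_second

limiter = RateLimiter(max_per_second=3)
results = [limiter.allow("203.0.113.7", now=100.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A new one-second window resets the budget, so the same IP is allowed again at the next tick.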
I haven't looked into that,
but having this discussion is valuable.
I mean, this is why, you know, like,
I don't think we can have the toy
because we're still debating what the toy should be
and how it should behave.
But it's exactly, we're building our toy.
And I think this just makes it more real
because these are the steps that we would go through
before we take this toy into production.
And I think this is the perspective which is valuable to have, right? Like the level of care
and attention and detail that we go through to make sure that what we put out there will behave
correctly. And the comparison that we have right now is Fastly, which, you know, from some perspectives,
it behaves really well. we see performance is amazing
Caching not so good. But again, it makes sense why they know don't keep content in memory as
For example, we would because we would optimize for that
Which means that because we optimize for that we want to store as much of it in memory as possible memory that we pay
for or right now discs that we pay for wherever they may be
and then I think this will also we pay for or no discs that we pay for wherever they may be.
And then I think we will also have questions about how we should size those instances. Maybe the performance 1x is a bit too expensive, because we need to run a bunch of them. And how many? If you remember, the first time we were running 16. Maybe that's a bit too many, maybe 10 is a better number. But even that might be too much.
Now, if we're looking at the cost, right,
we're paying $30 per instance.
And if we have 10 of those,
we would be paying $300 per month for the compute.
I think that's okay.
I think that's not crazy in terms of cost.
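The cost math here is simple enough to write down (prices as quoted in the conversation, rounded):

```python
# Rough monthly compute cost, using the prices quoted above.
instances = 10
shared_cpu_1x = 2     # ~$2/month each, as mentioned
performance_1x = 30   # ~$30/month each ("which is 31", rounded)

print(instances * shared_cpu_1x)   # 20
print(instances * performance_1x)  # 300
```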
Let me ask you a question.
Maybe this is a stupid question, but let's ask it anyways.
We want to store, are we designing the system to be memory heavy
where we have terabytes of memory available to the system
so we can store all of our data in memory?
I don't think so.
Or just on disk?
Yeah.
And have lots of memory available if we need it.
So I think that we need both. I think that the data which is hot should reside in memory. And think about how ZFS, the file system, works when you have an ARC. So this would be exactly that. Yeah, we would want to store the most often accessed data in memory, and the least accessed on disk. So I think we need both. Because the memory, we can scale it. I mean, if we go back to two gigabytes of memory, right, for $10... let's say we keep the 1x, we can get 8 gigabytes of memory. That doesn't seem a lot of memory. For example, I wouldn't know, and this is like where
the cache statistics would come in handy. How much data do we frequently serve? And I know that we
have the peaks, right? When we release something, there's like a bulk of content that we serve
often. How much is the bulk? How much is the hot content? I don't have an answer to that. But all these things are getting us closer to those concerns, shall I say, that the system will need to take care of.
Honestly, I don't think that we should give it more than, for example, 16 gigs, for instance, and even that might be a bit big. And I'm wondering whether all regions should have the same configuration. And I'm thinking no, because in South America, and I know this for a fact, there's less traffic than, for example, in North America. And maybe Oceania, I'm sorry, Asia, let's go with Asia, again, it's less traffic than we have in North America and even Europe. So then I think the same configuration across all regions doesn't make sense. But knowing how much data is hot, I think that's something important.
How do we know that? Just based on like stats to the direct data itself?
Yeah, stats from the cache to see how much of those, like how much cache is being used.
And are there any configurations, and I haven't even looked into this, are there any configuration
terms of evictions? Like how frequently should
we automatically drop content? I think this is where our cache hit ratio will come into play.
Right, so if you don't store enough of it in memory, you will have a lower cache hit ratio.
While if you store too much, maybe you're being wasteful. I mean, having a high
cache hit ratio while a lot of
that data is infrequently used, you're paying for memory that you don't need. The other
thing, the other question which I have is: are the NVMe disks fast enough? And if we
think about Netflix, Netflix does the same thing, right? They put those big servers
in ISPs, they cache the content on those big servers so that they can deliver
them really quickly to customers wherever they may be. We're not going
to go there. So this is not that. But that's one pattern that they apply
because they realize the importance of having lots of content close to
users. Memory is not big enough, you need disks. Again, we're not there. We don't
have that problem.
While we're getting there, I think we have some decisions to make as we go.
I think, roughly speaking, the dog hunts. I think 4,000 requests per second, well managed,
will be fine.
Yeah.
And I think we'll find out otherwise and be able to scale one way or the other
around such issues. What else is left on Pipely's roadmap, you know, as we look towards the future? Because let's do next steps.
So I think that now we are finding the place where we can add the feeds backend. The feeds backend, and we also need the static assets one.
So I would add both.
When we add them, we need to figure out, do we store all that in memory?
And I think the answer is no, because especially static assets, they'll use a lot of memory,
but maybe disk.
And I think we should look into that.
Can we configure different backends?
Like how does that work?
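In Varnish terms, multiple backends are declared in VCL and each request is routed with `req.backend_hint`, with per-backend behavior hung off that routing. A rough sketch; the backend names, hosts, and TTLs here are assumptions for illustration, not Pipely's real config:

```vcl
vcl 4.1;

# Hypothetical backends -- hosts and ports are made up.
backend feeds {
    .host = "feeds.internal";
    .port = "8080";
}

backend statics {
    .host = "static.internal";
    .port = "8080";
}

sub vcl_recv {
    # Route by URL: feeds traffic to one backend, everything else to statics.
    if (req.url ~ "^/feed") {
        set req.backend_hint = feeds;
    } else {
        set req.backend_hint = statics;
    }
}

sub vcl_backend_response {
    # Different behavior per backend: feeds stay fresh,
    # static assets can live much longer.
    if (bereq.url ~ "^/feed") {
        set beresp.ttl = 1m;
    } else {
        set beresp.ttl = 24h;
    }
}
```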
We're basically getting, like, to the hard part of configuring Varnish for our various backends,
and each backend needs to have a different behavior, I think. So that's something to look
into. Logs, sending logs to Honeycomb, I think that is a much easier problem to solve because
we would be using Vector. And now we have the building blocks:
we have the first sidecar process, if you want to think
about it like that, which means that there's Varnish and there's a couple of other
smaller processes that support it. We have TLS Exterminator, which terminates TLS to origins,
to backends. The second one would be, in my mind,
it will be vector.dev, which is what we'd use for these logs.
So vector.dev would get the logs from Varnish
and send them to Honeycomb.
So it's an integration which I've used before.
I know how it works.
It's very performant.
It's fairly easy to configure.
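As a sketch of what that configuration could look like: Vector tails a log file as a source and fans events out to multiple sinks. The paths, dataset, and bucket names below are made up, and the exact sink options should be checked against vector.dev's reference:

```toml
# Hypothetical vector.dev config: tail the varnishncsa log and fan it out.
[sources.varnish_logs]
type    = "file"
include = ["/var/log/varnish/varnishncsa.log"]

# Sink 1: ship events to Honeycomb for querying.
[sinks.honeycomb]
type    = "honeycomb"
inputs  = ["varnish_logs"]
api_key = "${HONEYCOMB_API_KEY}"
dataset = "pipely"

# Sink 2: archive the same events to S3.
[sinks.archive]
type   = "aws_s3"
inputs = ["varnish_logs"]
bucket = "pipely-logs"
region = "us-east-1"
encoding.codec = "json"
```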
And then we'd have another helping process that would work in combination with Varnish
to accomplish a certain task.
And Honeycomb and S3, like all those, it supports multiple sinks.
So collecting the logs on one side and just sending them to multiple sinks, that is very
straightforward, because it just handles all
of that itself. And then really the last hard bit is purge across all application instances.
And I think that one is maybe a step too far to think about now. But I think the way we...
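For a single Varnish instance, the purge itself is the easy half; a minimal sketch using Varnish's built-in purge support (the ACL here is an illustrative assumption). The hard part the conversation points at is broadcasting this to every running application instance:

```vcl
vcl 4.1;

# Hypothetical: only trusted callers may purge.
acl purgers {
    "localhost";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Drops the object for this instance only; fanning the PURGE
        # out across all instances is the open design question.
        return (purge);
    }
}
```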
So first of all, now we have an image to publish.
We are deploying the application automatically through CI.
That's like some plumbing that you want to have in place.
We have support for TLS backends, and that was an important one, especially when it comes
to other origins.
Because let's say, if we are running in Fly, we can use the private network to connect
to the changelog instance. But
for external origins, like feeds, we would need to go to
HTTP, because we didn't have HTTPS. Now we have HTTPS. I
think that's also, like, an important building block. And
now we're hitting the... there's, like, benchmarking. I
won't say we got sidetracked by it, but I think it's
something worth considering,
because you may end up building something that won't work, and we won't be able to use this to replace
our current CDN. And the goal is to be able to say with confidence that Pipely is able to do the
work that currently our CDN is doing. And what does that mean from a configuration perspective, from resources perspective? I think
everything adds up, and it feels like we're more than halfway there. For real. I don't
mean, like, will this work? No, no — we're more than halfway there to replace Fastly with Pipely for us.
All right, take us to the promised land. Here, give us that toy.
It won't be Christmas.
It will be before Christmas, Adam.
When is your birthday?
March.
March?
Okay.
Oh, that's too soon.
That's too soon.
Jared, yours is July?
Okay.
All I want for my birthday is Pipely
and a Samsung S95D.
All right.
Well, I behaved very well, I think.
And my wife must love me very much.
Because it was a present from her to me.
So it's nice.
Yeah.
So she loves the nerd in me.
She has no choice.
They come as a package.
If she doesn't love the nerd, I mean, what's left?
Well, no.
That's what you see, Jared.
Don't answer that.
But this is not the show for that.
Don't answer that.
Well, that's cool. I love this exploration, though.
I like that there's a possibility to run your own thing like this, you know, to configure it the way we want to. I mean, to zoom out, the challenge has been that it's been hard
to configure Fastly, not as a CDN, but as the CDN we need.
Our particular use case, it's not that Fastly is not good
as a CDN, it's just that it has not been highly configurable
by us, it's been challenging over the years,
mainly just because they're, I think,
designed for different customer types.
We're a different customer type.
And we've been holding it not so much wrong, but it's just been a square peg, round hole kind of thing,
where it's not perfect for our particular CDN needs.
I think we've had lots of cache misses over the years.
Like, why is our stuff not cached?
It seems like it should be, you know,
we're not prioritized as a, you know,
as a thing to serve because the way the system works,
you know, and that's just it.
And we're designing something that serves that kind
of system where it serves the data,
holds more memory, has more available to it.
It's not a mess.
That's right.
I think it's cool.
Very cool.
Man, this tooling you build is so cool, man.
I can't believe how, how cool this stuff is that you built.
And it's really awesome.
Thank you. It's coming together.
It's, um, I love the TV too.
Yeah.
It is like a well rounded experience, right?
So the idea is to be able to have the TV on now.
It's a bit bright and it's running a little bit hot.
I can feel it.
It's not winter yet.
I don't need heating in the office.
Well, it's the end of the winter,
but I'm able to see a lot of metrics.
I think that's something that I always loved,
to be able to see how things behave
and when they misbehave,
to be able to see and understand at a glance what is wrong. Are we getting DDoSed? Am I running
out of memory? Like, which instance is problematic?
And
I think this is just like a starting point
I literally threw the two dashboards that we have over there, but I haven't optimized them in any way
And I think having something like this just makes it a more, I don't know, like breathing living system.
Well yeah, you can see in real time what's happening. The metrics are the life.
It's each organ, so to speak.
It's like your own NOC.
Yeah.
Yeah.
It is super cool. Super cool. I'm jealous. I want one.
I inspired you. Christmas is far away, right? People can save. I know that we did.
And it's been in the making for a long time. So before I could get this, I had to get a system that is able to power it.
That's what PopOS does. One of the things which it does. It has a GPU, right, which is powerful enough
to be able to power it.
I have another monitor here, which does like screen mirroring
so I can change things here and set things up.
A system that's able to be on and that it's not too loud.
That was like another consideration.
A black wall so it blends nicely.
It was like so many things, like years in the making.
As Pipely is, years in the making.
And it'll be just as beautiful as that.
I would love a tour, a Home Lab tour.
I would love that.
Coming soon to-
Well, I'm working towards that,
but I can show you one more thing, which was not planned.
I know we're a bit over time,
but if you want to see one more thing,
I'll show you my M25.
Every year, I basically take one of these machines online and I have
like an M24, an M25. So this is this year, this is the machine that came online. It's running
TrueNAS. As you can see, it's an i9-9900K. It has 128 gigs of RAM, so it's fully maxed out, basically.
This is how, let me just move that screen a little bit.
This is, you can see all the storage has two pools.
It has an SSD pool and an HDD pool, so spinning disks, and some slower SSDs.
They're the EVO 870.
It's something that you need to have in place to be able to have decent storage
between Linux and Windows and Mac and everything to just work. So that was one of the projects
and I didn't have time to talk about it. Maybe next time. I know that you are a TrueNAS user,
Adam. ZFS and, like, all that stuff. So yeah, several pools, several things similar to this.
Slightly beefier machine — it's a
Xeon processor, a 4210, I want to say Silver. Yeah, I think there's
100-and-some gigs of RAM. I want to say 192, maybe 128.
Okay, it's something like that. It's not 256, I know that for sure.
Right. I just don't have a need for it. I mean, it's nice.
It's a tinkerer's dream to have lots of RAM and a ZFS system, but I just don't need it.
Yes. You know, I caught myself just wanting to have it just to have it, and I'm like, yeah,
that doesn't make any sense. You know, like, you spend all that money on the RAM — spend it on the disks instead.
Yeah, you know, cuz disks are expensive. Yeah, for video storage.
No
Something that you would need,
to be sure, to do editing, and especially if you edit from multiple machines.
You need, like, a 10-gig fast network, a couple of other things. But yeah, my home lab is suffering right now.
I don't have 10 gigabit everywhere. I do have it in the network, I just don't have it everywhere.
So I'm in the process of fixing that. There's some slight life updates for me that will make it more important, I should just say. I had a flood in the studio and I don't think I can stay
here anymore. Let's just say I got to go home. So I'm turning my home office into a true home lab
and work lab and that's in the making.
So it's a bummer.
Any more Techno Team?
Yeah, you know, I mean, I'll be close to the things
I play with more frequently.
I feel like I've always been, like, too location-bound,
and it's been challenging.
Cause like right now I can't access TrueNAS, it's at home.
I can't access that Windows PC, it's at home.
It's in the home lab, you know.
And I've just sort of, like, stripped away more and more here, to the point that it's like it doesn't make sense to stay any longer.
Well, your background will change. It's a pretty cool background that you have. Yeah, that's the thing.
I got to make sure it's, you know, video-ready, and, you know, I got a month to do that, basically.
Well, and I know we went a bit long. I know we covered a lot of stuff. I dig this, I love it.
I'm glad you showed this. I would love, on Make It Work — do you mind if I promote that? No, no, go for it.
Go for it. Yeah, go for it. I would love to see a
tour of whatever you can share. It could just be iPhone, it could be low-produced, I don't really care.
I just want to see what you're doing.
Cuz, you know, what I love talking to Tim about in particular — I like him as a human being, so cool.
And I truly think we're not just friends on the podcast,
but I think if he wasn't 2000 miles away,
I would hang out with him and spend time.
Same with you.
And you're a lot more than 2000 miles away.
But I love, there's not many geeks I can meet
that nerd out on hardware.
Like you do and like he does and a couple others do
out there that are friendly
in the world like Tom Lawrence. We met him years ago at a Microsoft something or other
I think in New York. I haven't reached out since then he's become more and more famous
since then so now I just watch him on YouTube you know and I appreciate his takes and stuff
like that but there's not a lot of geeky nerds
who nerd out on hardware, like for no reason like we do.
Like we build things we wanna need,
and so we make a need for it, you know what I mean?
Yeah.
Maybe there's some true need, but you're like,
you justify it like this TV behind you,
cause like why not have a NOC, like Jared said,
why not have this big thing behind you
and not let it be a green screen,
let it be a real thing?
Yeah.
Just cause, you know?
So makeitwork.fm to not hide the URL.
.tv, that's what I would say.
.tv, oh sorry, .tv, .tv.
That's all new, by the way.
That, oh gosh.
Yeah.
And that is, geez, I keep fat fingering it.
I put a comma there instead.
I haven't been there in a bit to dot FM.
So this is still running from your home lab, right?
Uh, no, actually this is running on fly and it has a CDN in front.
Yeah.
Cause last time it was on your home lab stuff.
Oh, yeah.
Uh, what was it?
Um, Jellyfin.
Well, Jellyfin is still on my home lab.
Oh, Jellyfin.
Like, the media is still served from there because of the iGPU.
But actually, like this one, if I would just click on this one.
So there's a couple of things here and I'm logged in.
So it's obviously the episode, like the audio, which is coming from Transistor.
There is like an embed and there's obviously the embed video.
And as a member, when you sign in, you
get the whole thing. This is served from the CDN directly, so this is, like, the CDN content.
And there's also Jellyfin. So once you log in... See, the quality for this one wasn't very good.
That's something that I'm still working on. That's why I mentioned that I have to record my screen
locally, which is what I did for this, because Riverside is not great with screen recording.
They improved it, but it's not there yet.
The quality is not as high.
Distributed podcasting is so hard.
It really is. Like, cause you want to share that screen with us, but then counting on Riverside
to record it in a resolution that is good for long-term use.
Yeah.
Yeah.
Makeitwork.tv and.fm if you wanna go the audio route only,
but.tv is where you said to go, so go there instead.
Yeah, man, I want a studio tour.
I want something, don't take six months.
Do the simple version, Gerhard.
Or just, hey, listen, we can just zoom
and just show them in the real, you know?
Okay, I mean, that would be-
We can just FaceTime.
You can just show it to me. Yeah.
We could definitely do that, that's much easier. It's the whole, like, log that I have to go through.
Yes, so I'm still working on that.
It took me such a long time to find, like, a good editor, and I think I finally have him. It took me
at least four months, five months of proper searching,
to get someone that I'm also able
to afford, because this is still, like, everything self-funded. But it works. And first, I
need to make Make It Work work before I can, you know... But even, like, Make It Work
TV — now there are subscribers, and there's, like, all of you, like, members. People can pay
for it.
So that's up and coming though. So I want to put to the spot
We did talk about CPU for you, so I'm hoping you're still excited. I am, I am. Yeah, we're making steps.
So yeah, maybe. I'm keen to be part of that, I just did not have time between everything.
I'm saying, you know, we really just... I've been focused on getting, like, the agreement solid.
I wanted to make a solid promise
and have it be clear to folks.
And so I think that's like a simple thing,
but understanding your terms between the people
you're gonna serve, I feel like is,
you gotta examine that and have clarity there.
And so, cool.
Man, this has been a fun Kaizen, a deep Kaizen.
If you've stuck around to now, holy moly,
I'm not sure what's getting cut, but wow.
You are a trooper, you're a super fan,
and you should be a Plus Plus member.
I'm not gonna force you, but changelog.com slash plus plus.
It is better.
Bye friends.
See you in the next one.
See you in the next one.
Kaizen.
Kaizen.
Kaizen!
All right, that is Changelog for this week.
Thanks for Kaizening with us.
For the entire saga, head to changelog.com slash topic slash kaizen.
There you'll find all 18 kaizen episodes for your listening and now watching pleasure.
Thanks again to our sponsors of this episode.
Please support them because they support us.
Also because they have awesome products and services.
Thanks to Retool, to Sentry, and to Temporal.
Links in the show notes.
You know what to do.
And thanks as always to Breakmaster Cylinder,
the best beat freak in the entire universe.
I think so, do you?
I'm sure you do.
Next week on the Changelog,
news on Monday,
antirez — yes, the creator of Redis — on Wednesday,
and on Friday, our first ever game of Friendly Feud.
Have a great weekend, like and subscribe on YouTube if you dig it, and let's talk again
real soon. Finally the end of changelogging friends With Adam and Jared and some other rando
We love that your love didn't stay until the end But now it's over, it's time to go
We know your problem should be coding And your deadline is pretty foreboding.
Your ticket backlog is an actual problem,
so why don't you go inside?
No more listening to changelogging friends
with Badam and Chairman and Silicon Valley.
No one gave a gad what come to an end,
but honestly that will probably be our finale
You best be slinging ones and zeros
And that makes you one of our heroes
Your list of to-dos is waiting for you so why don't you go inside no more listening to
change lock and friends batman jerry and people you know change line friends time to get back
into the flow change all your friends change lock your friends it's your favorite ever show
Change, love, new friends. Change, love, new friends.
It's your favorite ever show.
Favorite ever show.