The Changelog: Software Development, Open Source - Kaizen! There goes my PgHero (Friends)
Episode Date: April 5, 2024. This is our 14th Kaizen episode! Gerhard put some CDNs to the test, we've taken our next step with Postgres on Neon & Jerod pushed 55 commits (but 0 PRs)!...
Transcript
Welcome to Changelog and Friends, a weekly talk show about Pirate's Booty.
Big thanks to our partners at Fly, the home of Changelog.com.
Launch your app as close to your users as possible for peak performance. Fly makes it easy. Learn how at fly.io. Okay,
let's talk. Yes, let's talk about Cloudflare's Developer Week happening all this week,
literally right now, April 1st through April 5th virtually.
They also have a meetup here in Austin that I'll be at on Wednesday, April 3rd in their ATX office.
Check for a link in the show notes to register for that.
Spots are limited, so secure your place right now.
And I'm here with Matt Silverlock, Senior Director of Product at Cloudflare.
So, Matt, what is this week for you? Launching for developers a bunch of new tooling, a bunch of new things that gets the next year or the next several months revived, a resurgence for new things happening. What is that to you?
Internally we call them innovation weeks, which is kind of the way we think about it, which is: how do we ship a bunch of stuff that is meaningful to developers? Both getting some things over the line, getting some early things out, sharing some ideas, some things that maybe aren't actually fully baked, but kind of getting that out there and talking about it so that we get earlier feedback.
That all kind of comes back to, like, how do we think about innovating? And I think, candidly, what's really, really helpful is setting those deadlines, setting that week to rally the team and get things out. It actually helps us get things done, right? There's always that tweaking for perfection, you know, another week here, another month there. It's nice when you set an immutable date. You get things out, and it gets into the hands of the developers much faster.
Well, we're diehard R2 users. We had an S3 bill that just set us absolutely on fire.
It kept growing and growing.
And I was like, this can't happen anymore.
We've had an affinity and a love for Cloudflare, you know, from afar in really a lot of cases, until we're like, you know what? R2 is pretty cool. We should use R2, you know? And so we did. And I think I tweeted about it about a year ago. And then over time, a relationship between us and Cloudflare has budded, which I'm excited about. But, you know, we're opting for it, but for R2 in those cases. Why are developers opting for Cloudflare products over Amazon Web Services or other providers out there?
There's a lot of answers to this, but I think the one that I find kind of connects a lot of folks is we're building a platform that makes it easy to deploy, you know, reliable distributed services without being a distributed systems engineer. Because it turns out, if I want to go and build something really reliable on sort of an existing cloud, I want to build it across regions. When I've got egress across regions,
got to pay for that. I need to make sure I'm spinning up shadow resources, right? When you
deploy to workers, for example, we just call that region earth, right? We take care of actually
deploying all of those instances, keeping them reliable, spinning them up where they need to
be spun up. If you've got users in Australia, then we spin one up there for you without asking you to
think about it, without charging you extra to kind of do that.
That ends up being really, really powerful.
You get to compute closer to users.
You don't have to think about that kind of coordination.
In practice, it's just really, really hard
to do that on existing providers.
So we find a lot of teams coming to us
so they can build applications at scale like that.
There you go.
Celebrate live in Austin with us
on Wednesday, April 3rd.
Again, check for a link in the show notes
for registering to that.
Spots are limited and I'll be there.
Otherwise, enjoy Cloudflare's Developer Week
all week long from April 1st through April 5th.
Go to cloudflare.com slash developer week.
Again, cloudflare.com slash developer week.
Well, should we Kaizen?
Do we need a pregame at all?
I'm prepared.
You never know.
I'm always ready.
I was born ready.
I came out of the womb and I was like, Kaizen!
See?
That's why we are recording.
That's right. You cannot make this stuff up.
That doesn't have to be a wasted joke. I kind of did just make it up.
But I know what you're trying to say.
That wasn't true, Gerhard. I didn't actually say that out of the womb.
Really? I thought it was true.
So you can make this stuff up, is my point.
Yes, of course. But you can't make it up twice.
And so good thing we hit record.
All right, Gerhard, take us on a journey. Take us on a ride.
Tell us where we're headed.
Well, I want to start with a question and an answer as well.
I'm going to answer it.
Sweet.
I love it.
I always feel like I'm coming to a quiz or a test.
Yeah, or like this is going to be more painful for us than it is for him kind of a thing.
Oh, this is going to be good.
This is going to be good.
So do you remember what was the question
that we asked
or the proposal?
Actually, yeah,
that was a question
that we asked
in the last one,
in the last episode.
Yes.
It was like goals for the year.
Yes.
So we started with that.
Yes.
There was something else.
Oh, shoot.
Adam, do you remember that?
I do.
Yes. What was it it was
uh what do you want to do this year something like that no it's something else the episode
all right oh should we start a cd should we build a cd that was the question okay yes i'm well aware
i've been marinating here on this on this topic for a bit okay should we build a cd yet so the
follow-up to that is did we build a cdn did it happen oh did we build we build a CDN? So the follow-up to that is, did we build a CDN? Did
it happen? Oh, did we build a CDN? Did we build a CDN? Yes. Gosh. And you want us to answer that
question? Well, I think I can answer it. The answer is no. I didn't build a CDN. Adam,
did you build a CDN? I tried. Yeah. No, we didn't. Gerhard, did you build a CDN? I tried.
You both tried, but I have a feeling Gerhard tried a little harder than Adam.
Thank you, Jerod.
I had some help. Just a feeling.
I had some help. It wasn't just me, by the way.
Okay, let's hear about it.
Do you know someone called James A. Rosen?
We do, because he's been instrumental in our community lately.
Now, our CDN saga... We're name-dropping him just constantly.
Yeah.
So he was very kind to give me an hour of his time,
maybe a bit longer.
And we tried building it.
Oh, okay, cool.
And so, yeah, so we did have a go at that.
And we stopped when we realized
that we cannot terminate TLS in Varnish
without something called Hitch.
Okay.
What that means is that if we build a CDN built on Varnish,
we also need Hitch,
so that Hitch is the component that connects to Fly,
to our Fly app,
because that puts up TLS.
And we need that component before we can do
even like the simplest Varnish config.
And our app has HTTP disabled.
So it only serves HTTPS.
And for that, we cannot do it without Hitch.
So that's where we stopped.
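(An aside for the curious: a minimal sketch of the moving parts they describe — Hitch terminating inbound TLS in front of Varnish. The hostnames, paths, and ports here are made up for illustration, not taken from their repo.)

```
# hitch.conf -- Hitch terminates inbound TLS and hands decrypted
# traffic to Varnish over the PROXY protocol (illustrative values)
frontend = "[*]:443"
backend  = "[127.0.0.1]:8443"            # Varnish started with -a :8443,PROXY
pem-file = "/etc/hitch/changelog.pem"    # hypothetical cert+key bundle
write-proxy-v2 = on

# default.vcl -- the "simplest Varnish config" they were aiming for
vcl 4.1;
backend default {
  .host = "changelog.fly.dev";           # hypothetical Fly app hostname
  .port = "80";
}
```

Even with this in place, the problem they hit remains: open-source Varnish doesn't speak TLS to backends, and the Fly app is HTTPS-only, so yet another TLS component would have to sit between Varnish and Fly as well.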
So you ran into a Hitch.
We ran into a Hitch.
There you go.
Yes.
That's exactly.
So why didn't you just put a Hitch in there?
Why didn't you just go grab a Hitch?
Well, it took us an hour to get to that point.
I see.
So you said, if I can't build a CDN in an hour, I'm not doing it.
I'm trying something else.
I did promise I'm not going to spend a lot of time on that.
So, you know.
That was true.
Okay.
We're just trying to see how far we can get.
Serious talk now.
We went for Varnish.
James was there.
You know, a couple of... like, the feedback through the last episode was really good.
We had a bunch of people basically get back to us.
I'm reading some names here.
Matt Johnson.
I think he's the one that wrote the most.
So thank you, Matt.
You know, I went through all of your comments.
I thought about them.
I also replied to them.
I don't have a lot of time,
but I did make time for that.
Those are some very good comments.
So thank you, Matt.
There was Lars Wikman.
Of course.
I still don't know how to pronounce his name.
Let me try again.
Lars Wikman.
There you go.
That's closer.
Closer.
His idea was actually on point.
Like, hey, have you heard of Bunny?
And I was thinking, I haven't.
All right.
I haven't heard of Bunny CDN, but Easter is coming.
Well, it's not about the bunny.
You're right.
It is not.
I was expecting you to say that.
However, changelog has a bunny.
What does that mean?
I have no idea.
Gosh.
Is this a mascot or something?
Try this.
In your browser, bunny.changelog.com.
Oh, my goodness.
Bunny.
bunny.changelog.com.
Oh, it loads.
Lars Wikman, thank you very much.
We tried it.
It loads our website.
Yep.
So it's a temporary CDN
that sits in front of changelog.
It's just there so that we can compare it.
The comparison was,
let's do synthetic probes,
synthetic HTTP requests
as we used to. Remember we had Pingdom
and then we stopped using Pingdom? Yes.
Well, I'm trying a new service. It's called
HyperPing. HyperPing.io.
Okay, lots of name dropping. It's going to be
a fun one. Nice. And I
tried it and I
liked it. HyperPing.
What do you like about hyperping.io?
Uh, the whole idea. There's like a single person behind it. Leo... hang on, give me a second, I have to find his surname.
DiCaprio? No, I don't think so. But, uh, that would be close enough. That would be sick, but it's not. What a surprise that would be. Get him on.
B-A-E-C-K-E-R. Leo Baecker.
We got like a couple of emails back and forth.
I like the whole story.
It seems fairly simple and it works interestingly well, like surprisingly well.
Much more than Pingdom.
Having used Pingdom for many, many years myself, I went shopping and we used Grafana Cloud for a long time.
I think we still have Grafana Cloud, by the way, and the synthetic monitoring.
So I used quite a few over the years.
Uptime Kuma, big fan of Uptime Kuma.
Again, all these things we've used.
Uptime Robot, did you use that one?
Yes, I even paid for it for like a whole year.
Yes.
So I pretty much went through most of them.
So I liked HyperPing and I tried it
and I compared Fastly, Fly, Bunny, and Cloudflare.
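(An editorial aside: the kind of synthetic HTTP probe that HyperPing, Pingdom, and friends run is conceptually tiny. Here's a hedged Python sketch of the idea — not HyperPing's actual code, and the URLs in the usage comment are just examples:)

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> float:
    """Time a single HTTP GET to `url`, returning latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so we measure the full response
    return (time.perf_counter() - start) * 1000

def average_latency(samples: list[float]) -> float:
    """Average a list of latency samples, as a dashboard would."""
    return sum(samples) / len(samples)

# Hypothetical usage against the endpoints under test:
# for url in ("https://changelog.com/", "https://bunny.changelog.com/"):
#     samples = [probe(url) for _ in range(10)]
#     print(url, round(average_latency(samples)), "ms")
```

Real probe services run this from many geographic locations and aggregate, which is what the per-continent numbers below come from.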
Oh my goodness.
And we have more than a month's worth of data.
Who wins?
Do you want to guess?
I think I know the answer because you're very excited about Bunny.
So I'm thinking it's Bunny.
Yep.
I'm going to guess Bunny.
Yep.
That's it.
But by how much?
That's the question.
That I don't know.
So should I share my screen?
What do you think?
Or at least a window so that we look at some numbers and we talk about that.
As long as you read them out loud for our listeners.
I will read them out loud.
I will. Yes.
Okay.
Who names their CDN bunny though?
I mean, honestly.
I know.
I wasn't like, when I heard the name, I was thinking seriously, is this like a serious thing?
And apparently it is.
Bunnies are fast.
Yes, you're right. I think that's the whole idea. Yeah, it's in there, we can see that.
So are rabbits, which is...
Okay, so I just clicked on this link, changelog on Fastly, and it basically puts me right in HyperPing. This is the interface.
Okay.
Okay, so what we're looking at is the last 24 hours,
pings from all over the world,
basically every single location that Hyperping supports,
it's been running against changelog fastly.
This is changelog.com.
The average response time across all continents,
Europe, North America, Australia, Asia, Middle East,
South America, and Africa, all of them.
That's not all the continents, but okay.
All the continents that HyperPing runs it.
Okay.
And you said all of them, so I just want to make sure.
All the continents that HyperPing has probes in,
and the average response time is 422 milliseconds.
That seems slow.
It does, right?
So what we see is that Europe and North America is fairly stable, right?
So they're around 150 milliseconds. So fairly stable, fairly responsive, 150 milliseconds.
Where's our slow continent? So the slowest one is Middle East at 681
milliseconds and it's about the same.
So Europe and North America, 150.
All the other continents are around 500 or more,
between 500 and 700 milliseconds, okay?
Okay.
Average response time for the whole month?
372.
372.
So that's the number that we are comparing, okay?
So remember this is Fastly.
Okay.
This is Fly.
This is when we go directly to the app.
So we're not going to the CDN.
The average response time for the last 24 hours
is 268 milliseconds.
This doesn't make sense to me because...
Okay.
I don't have to tell you why.
Please explain.
But you know why.
Because Fly is... there's no caching at all there.
I mean, it's literally running through Phoenix
every time you hit it.
Not quite.
There's a proxy in front.
And they have edge locations.
So whenever you hit a fly endpoint,
doesn't matter where you are,
you will hit the edge location,
which is closest to you.
Yeah, but how does the fly proxy know that we have stale data or fresh data?
So as far as I know, it doesn't do any caching.
And by the way, if someone from...
That's my point. I said no cache.
Right.
However, it's traversing the fly network.
Okay.
So we are traversing the Fastly network versus the fly network.
Right. And we don't have multiple fly hosts
in multiple locations.
So it's all going back to a singular location
like Fastly is, correct?
Exactly, yes.
Okay, so Fastly had, so in Fastly's case,
we don't use the shield, right, for the app.
We had that issue, remember?
And we still don't have shielding.
So if the edge location doesn't have the page cached, it will go to the origin, right? And the origin, you're right, it's in a single region. It's in Ashburn, which is the Fly origin.
Don't tell everybody, they're gonna find it.
Right, we can edit that part out.
Okay, it's not like... this has been versioned in our repository for at least six months, maybe even more. So, yeah.
Now you're telling them way too much.
Okay, all right, all right.
Because people can't read.
TMI.
Okay, so Fly will do the same thing, and the problem that we're getting here is the caching, right? The caching that doesn't seem to be working as we would expect it to on Fastly. I think this is basically the heart of the problem. We are proving, using an external service, that the caching doesn't seem to work the way it should.
We get a lot of misses, which means that there's a lot of... like, it enters through the Fastly network, it has to go to the origin. And the Fastly network from HyperPing, again, this is the perspective. The perspective is wherever the HyperPing probes run, which is why we're using the same synthetic monitor to monitor all three destinations, all four destinations. So we're trying to do like for like, to have as few variables as possible. Anyways, if we go to the last 30 days,
263. Okay, so we can see that, um, North America is like Europe, while Iran or India, Middle East, Asia is high again.
Yeah, but it's lower than Fastly.
We're looking at 400, between 300 and 400 milliseconds,
not 600 to 700 or 500 to 700.
So the average response time across all the probes,
when we go directly to the app,
and basically we're going through the fly network,
it's 263 milliseconds.
That is the average response time.
So putting a CDN makes our app slower.
That's what I'm trying.
That's like the bombshell.
Putting a CDN makes our app slower.
That is the bombshell.
That's a shame.
Does that mean they're using faster switches?
Is this a hardware thing or a software thing?
What exactly do you think,
what's your hypothesis on what impacts this seemingly small
but relatively big in the grand scheme of things difference?
100 milliseconds or so is kind of a big deal.
Honestly, I don't think the network is as optimized as it could be.
Or, well, I just wonder how many POPs they have.
I mean, could Fly have more?
I don't think so.
I'm pretty sure Fastly has more, yeah.
Fastly is like a CDN, like first and foremost.
Yeah.
And they're a publicly traded company.
Like they are well deployed.
Yeah.
And before this, we didn't have another CDN to compare,
but now there's Bunny, right?
Bunny.changelog.com.
So let's have a look at that CDN.
Average response time.
Oh no, I'm seeing the numbers already.
Gosh, this is a massacre.
Like I'm literally reading the numbers.
Okay.
All I did was set up some audit rig,
let it run,
and let's see what the numbers tell us.
Say the number real quick.
53 milliseconds in the last 24 hours.
Average across all the continents.
Let's look at the last 30 days.
66.
66 milliseconds.
These are all from the same locations.
Yeah, exactly the same.
Same configuration, same everything.
So they're smoking on the high end, on the low end,
like in terms of the fast ones are faster,
the slow ones are faster.
Europe and North America are there like 20 to 50 milliseconds.
Really fast.
Australia, the same.
Australia is about like 50 milliseconds.
Asia is a little bit slower, like 60 milliseconds.
Middle East and South America is 120 milliseconds. Okay. Do these numbers give you pause, Gerhard? Do you know what
I mean by that? Did you stop and think, can this be right? Because this is a massacre. And so then
I turned to like, am I benchmarking wrong? I did. So I checked and I would really like to present
these numbers to others. And I'd like to basically find
what I'm doing wrong. Like, we have been at this for years, literally years. This is not a data point. And we're not complete idiots. Exactly. So, I mean, I was thinking
how much, yeah, how much of our requests are cached versus not. So if I look at Fastly,
okay, I mean, we're looking at Honeycomb now. We're looking at all the logs as events that come from Fastly. We're looking at the home page, because that's the only one that the probes are going to. So they're going only to our home page. They're not going to feeds, they're not going to any... like, it's just the homepage. So we're looking at the homepage, and we're seeing that in the last day, the last 24 hours,
we had just about the same amount of misses as we had hits.
What that means is that our miss ratio is 50% today.
So half of the requests do not get served from Fastly.
Fastly is just in between.
We still have to go to Fly and it has to come back.
So of course it's going to be faster.
Sorry, it's going to be slower.
Right, because it has to go through Fly.
It's going to add Fly's time on top of its time.
Yeah, exactly.
Roughly.
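(Editor's aside: the reasoning here — a miss pays roughly the edge hop plus the origin round-trip, a hit pays only the edge — can be made concrete with a back-of-the-envelope model. The formula and numbers below are an illustrative simplification, not figures from their dashboards:)

```python
def expected_latency(hit_ratio: float, edge_ms: float, origin_ms: float) -> float:
    """Blend edge and origin latency by cache hit ratio (a simplification:
    a miss pays roughly edge + origin, a hit pays only the edge)."""
    miss_ratio = 1.0 - hit_ratio
    return hit_ratio * edge_ms + miss_ratio * (edge_ms + origin_ms)

# Illustrative numbers only: a 50% hit ratio with a 50 ms edge
# and a 250 ms origin round-trip
print(expected_latency(0.50, 50, 250))    # 175.0
print(expected_latency(0.9989, 50, 250))  # just over 50 -- a 99.89% hit rate
```

This is why a CDN with a 50% miss ratio can end up slower than going straight to the origin: half the requests pay both legs.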
I was looking at this other dashboard, which is Bunny.
You can see the top right, the cache hit rate is 99.89%.
And that's with the exact same headers and everything that we're sending back.
It's exactly the same configuration.
Everything is the same.
It's not like we've configured this differently.
Do you do any config on this sucker?
All I did, serve stale.
That was it.
Basically, serve from CDN and go in the background and go and do fetches.
That's it.
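(Editor's aside: Gerhard's "serve stale, refetch in the background" is the standard stale-while-revalidate behavior from RFC 5861. A small Python sketch of the decision an edge makes — an illustration of the semantics, not Bunny's implementation:)

```python
def cache_decision(age_s: int, max_age_s: int, stale_while_revalidate_s: int) -> str:
    """Decide how a CDN edge may answer, per RFC 5861 stale-while-revalidate:
    fresh -> serve from cache; stale but within the revalidate window ->
    serve the stale copy immediately and refetch in the background;
    otherwise -> go to the origin synchronously."""
    if age_s <= max_age_s:
        return "hit"
    if age_s <= max_age_s + stale_while_revalidate_s:
        return "stale (served, background refetch)"
    return "miss (synchronous origin fetch)"

# e.g. Cache-Control: public, max-age=60, stale-while-revalidate=86400
print(cache_decision(30, 60, 86400))     # hit
print(cache_decision(3600, 60, 86400))   # stale (served, background refetch)
print(cache_decision(90000, 60, 86400))  # miss (synchronous origin fetch)
```

A wide stale window is how an edge can report a 99.89% hit rate: almost every request is answered from cache, and freshness is maintained by background fetches.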
This episode brought to you by bunny.net.
Gosh.
Sorry.
I'm paying for it.
I'm paying for Bunny, by the way.
This is running on my account.
So yeah, not sponsored.
Somebody at Bunny is pretty happy right now.
Thank you, Gerhard.
Yeah, it cost me a dollar
for the last month.
And by the way,
Gerhard.io is also on that same CDN.
So yeah, like when I've seen this...
So, Lars, thank you. You're a genius.
So where's Cloudflare in this mix?
Exactly.
So let's move on
because I think we gave Bunny enough time.
Yeah.
I think we should talk to them if they want more.
So there is an epic, and I say epic, issue.
It's 486, and this is a public one.
It's in our GitHub repository.
And you can see a bunch of things there.
So I did like screenshots.
I did like a lot of details, not what we discussed today
because I didn't have the data.
My last comment was February 25th,
but I captured a lot of details. So where's Cloudflare?
Cloudflare, we need the enterprise account to be able to set the header override, sorry, the host
header override. Without that, we get infinite redirects: Fly redirects back to Cloudflare,
Cloudflare goes to fly, so we get like an infinite loop. So to be able to try Cloudflare, to compare Cloudflare to Fastly and Bunny, we need an enterprise account.
So Adam, what are your thoughts there? I am working on it. Amazing. So hard. So hard.
Literally worked on it this morning prior to the show, worked on it last week, worked on it weeks
ago. We have a buddy relationship there, but that was not given to us yet to give this comparison, which is super unfortunate, because I would have loved to have had those average numbers in this mix. Because it would be nice to know how these large behemoths truly compare to what was thought of, potentially, as a joke by name.
Bunny, like that. That is a joke.
Bunny is funny and fast.
No joke.
You could use that tagline if you like it, Bunny.
Bunny is funny and fast.
Yeah, I wish we had the enterprise account too, to make this comparison. Because obviously, I guess within an hour, you hit the roadblock of Hitch to build our own CDN.
Ultimately, my desire would not be to
build our own software. I think we're not in the business of making software necessarily,
although I think it makes sense when it makes sense. But as a media company, we're in the
position to promote those who are trying to innovate. We're promoting the innovators,
you know, and in some cases help them innovate by feedback loops and partnerships and
usage like this. That's where I think we really fit in the mold of the grand scheme of developer tooling, right? So my desire is really not to build a CDN. Yeah, I would like to use the best CDN and promote that phenomenal CDN, because that's what we do. That's our main thing. Our main thing is delivering a singular object across the globe
as fast as absolutely possible.
That's the name of the game.
So building a CDN, again, it was a joke.
We had a bit of fun.
Okay.
We weren't serious about it.
I wasn't serious about it.
Okay.
I was a little bit serious about it.
Well, we can do it,
but there are other options.
Yes.
Right.
And like, in the heat of the moment, you're like, you continue the joke. But seriously, I mean, Adam is super adult about this, and he's on point.
Adam's adulting. Exactly. He's adult.
I have to be. I have to keep it straight here, you know? Otherwise we'll just, you know, we'll be engineers and just have...
Just nerds. It's just nerds.
I like the idea of having this 20 line varnish config that we deploy around the world.
And it's like, look at our CDN, guys.
It's so simple and we can do exactly what we want it to do
and nothing more.
But I understand that that's a pipe dream
because that varnish config will be slightly longer
than 20 lines and we run into all sorts of issues
that we end up sinking all kinds of time into.
And then we need to become varnish experts.
I'm a nerd, but I'm an old nerd.
I've made the mistakes.
I was a little bit serious.
I was hoping you would get further.
Yeah, I mean, we still can.
That's why I didn't want to spend too much time on it.
I timeboxed it.
We talk about it, and we figure out,
do we want to invest a bit more time?
And that's fine.
It's not a problem. We give it another hour or two and see how far we can get. I mean,
it's not an insurmountable problem. It's just one that maybe want to sidestep. And I was excited
about Adam's proposal to try Cloudflare and I had a look at it, right? So we just basically keep,
you know, picking at this problem from different perspectives.
And the solution, which is simplest,
and it means that we have the least amount of work to do,
that's what we would like to pick, please.
Because we are not in the business of building CDNs, you know? In the meantime, by the way, I'm kind of in a holding pattern
because I have, you know, big things in the works
that I would like to roll out.
But a lot of them, specifically the custom feeds I'm working on, are dependent upon CDN changes.
And so I don't want to go make CDN changes inside of Fastly and then have to port those
over to somewhere else and or our custom CDN as I was, you know, thinking about how you
might roll something out.
So kind of blocked in that regard.
I have other stuff I can work on, so it's not like block, block.
But I would love to have our CDN figured out here sooner rather than later.
Go ahead.
So what is a good next step then, given that we want to solve this CDN problem?
To me, the good next step is we have to compare Cloudflare. We have to truly give it a try
to get the synthetics done right and feel good about that. And I think, you know, maybe as a
group here, are we pursuing the fastest possible CDN? Like, is that the true benchmark? Is that
what we want? Ultimately, we want speed, but is that, I guess, is 150 milliseconds down to 26
milliseconds in the North American region, just as an example?
Is that a big enough gap to pursue whoever gets to sub 50 milliseconds in North America?
Is that the goal for us?
I would say speed is obviously one of a handful of factors that we would take into consideration.
And it's probably near the top of the list.
I mean, because when you want a CDN, you kind of want a fast CDN, right? Not even kind of, but you do. There are other things, like how hard is it to hold, for instance, because that's important. What does it cost? Do they have other offerings that are compelling? Like, there's all kinds of things. Like, do they unlock stuff that we couldn't previously do, or that we might want to do? Are there partnership opportunities?
Obviously with our business, that's a huge aspect of it.
And so I think that's not just a singular variable.
And so I don't think we're just going to say,
well, Bunny's the fastest CDN,
so therefore we're going to use it.
But it's certainly a high watermark
and something that we wouldn't take lightly.
I think slow is a problem, right?
There's a fast enough,
but then there's also a not fast enough.
And at a certain point,
like we need to be fast enough.
Shipping MP3s around the world
doesn't need to be the fastest thing in the world
because people aren't waiting on them.
Their apps are downloading them, generally speaking.
Now, if you're on the website listening, the faster you can get to whatever that JavaScript event is, canplaythrough... there's an event at which it can continue to play the rest of the thing, so it's downloaded enough to start. You want that to be as fast as possible, versus sitting in their app and, like, watching it download seven percent, eight percent, nine percent. That's not good. But that does not have to be, you know, the only thing that matters. Of course we want our website to be as fast as
possible, because that does matter.
Okay, friends, the on-call scene is getting hot, literally. Our friends at FireHydrant have their new solution out there called Signals,
which you're about to hear are real reactions
from PagerDuty users
after seeing FireHydrant's on-call solution
called Signals for the first time.
PagerDuty, I don't want to say they're evil,
but they're an evil that we've had to maintain.
I know all of our engineering teams,
as well as myself,
are interested in getting this moving the correct direction.
As right now, just managing and maintaining our user seats has become problematic.
That's really good, actually.
This is a consistent problem for us and teams is that covering these sorts of ad hoc timeframes is very difficult.
You know, putting in, like, overrides and specific days and different shifts is quite onerous.
You did the most
important piece, which is didn't tie them
together, because that's half the problem with
PagerDuty, right, is
I get all these alerts and then
I get an incident per alert. And
generally speaking, when you go sideways,
you get lots of alerts because
lots of things are broken but you
only have one incident.
Yeah, I'm super impressed with that, because being able to assign to different teams is an issue for us. Because, um, like, the one alert fires for one team, and then it seems like it has to bounce around, and it never does, uh, which then means that we have tons of communication issues, because, like, people aren't updated.
No, I mean, to be open and honest, when can we switch?
So you're probably tired of alerting tools that feel more like a headache than a solution, right?
Well, Signals from Fire Hydrant is the alerting and on-call tool designed for humans, not systems.
Signals puts teams at the center, giving you the ultimate control over rules,
policies,
and schedules.
No need to configure your services or do wonky workarounds in just data
seamlessly from any source using web hooks and watch as signals filters out
the noise,
alerting you only on what matters.
Manage tasks like coverage requests and on-call notifications effortlessly
within Slack.
You can even acknowledge alerts right there.
But here's the game changer.
Signals natively integrates with Fire Hydrant's full incident management suite.
So as soon as you're alerted, you can seamlessly kick off and manage your entire incident inside a single platform.
Learn more or switch today at firehydrant.com slash signals.
Again, firehydrant.com slash signals.
We could very smoothly and gently transition to Neon Tech Postgres.
Please smoothly and gently transition us.
So I would like to start by giving a shout out to Brendan Stevens from Neon Tech Support.
He spent, I think, at least two hours, maybe close to three.
We were pairing on a very specific issue, which has to do with how to configure SSL certificates, CA certs specifically, in Phoenix when connecting to Neon Tech Postgres.
A couple of things there. The documentation was
almost correct, but not quite. So we went through a few things there. This is issue, I think this
was the one 492. So this is the one that we covered at length in the last episode. But I
added a couple of more things from the pairing session going through that last bit. We need to do this so that we verify peers in the SSL options.
Basically, we check that the endpoint that we're connecting to
has a valid SSL certificate or TLS certificate.
So that's what this was about.
That was fun.
So thank you, Brendan.
Funny thing, we used to work at Pivotal slash VMware.
Never met, but we met in this context.
So that was fun.
That's pretty cool, man.
That was pretty cool.
And Stephen Berry.
So if you hear your name, Stephen Berry, your name came up.
He was also in the support org at Pivotal slash VMware
doing RabbitMQ support.
And I think Stephen was on Greenplum, I think.
Anyways, well, that was a good session.
So pairing felt very natural.
We had a good session and we figured it out.
So big shout out to Brendan.
And he taught me a few things,
interesting things about Postgres and extensions.
So there's more that we can use to dig into.
A cool thing that he mentioned is PgHero.
I hadn't heard about PgHero,
but it looked really interesting.
So one of the things that we improved for this Kaizen,
we deployed PgHero,
we connected it to Neon Tech Postgres,
and now we can have insights.
This was pull request 507.
We deployed it on fly
and it's available on our private network.
So you can't access it.
And if you can,
let us know so we can fix it.
But no,
you definitely can't access it.
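For anyone wanting to try the same thing at home: PgHero ships as a Docker image, documented in its README. A minimal sketch (the connection string is a placeholder; the actual changelog deployment details live in pull request 507):

```shell
# Run PgHero's official image, pointed at your own Postgres.
# DATABASE_URL is a placeholder; replace it with your real connection string.
docker run -ti \
  -e DATABASE_URL=postgres://user:password@hostname:5432/dbname \
  -p 8080:8080 \
  ankane/pghero
# The dashboard is then available at http://localhost:8080
```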
So,
did you have a chance
to play with it, Jared?
I did.
I'm running it right now
in another browser tab.
Okay.
What do you think?
Very cool.
It's showing me some duplicate indexes
that we have
so things we can improve on. Overall, very green, not very much red, so I'm assuming that's good. It checks on what's healthy. And then there's all kinds of tabs:
queries, space, connections, live queries, maintenance, etc.
Which I haven't dug into those yet.
I just got it running and read the homepage.
So tell me more.
Well, this was mentioned as something that we can use
to basically get a dashboard into Postgres.
And anyone can, by the way. It gives you a couple of things. If you haven't spent years and years tuning or learning Postgres, this is a quick way of seeing, at a glance, what is green, what is orange, what needs your attention. And yeah, I like it. I just clicked around.
Everything seemed very interesting. So if we do suspect any issues with Postgres, I think this
would be the first place to start. Apart from the two orange things. One is 11 duplicate indexes. I mean, it says here: duplicate indexes exist, but they are not needed. Remove them for faster writes. So we'll do that.
And the other one was slow queries.
We have two slow queries.
What does it mean? It means queries that take more than 20 milliseconds. We have one whose average time is 62 milliseconds, and it was executed almost 29,000 times. And the other one was 29 milliseconds. So we have two slow queries, but we're talking milliseconds. Maybe we should dig into those.
I don't know.
But interesting.
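Under the hood, PgHero's slow-query numbers come from the pg_stat_statements extension; roughly the same check can be run by hand. A sketch, assuming the extension is enabled (DATABASE_URL is a placeholder, and the column is mean_time rather than mean_exec_time on Postgres versions before 13):

```shell
# List queries averaging over 20 ms, similar to PgHero's "Slow Queries" tab.
psql "$DATABASE_URL" -c "
  SELECT calls,
         round(mean_exec_time::numeric, 1) AS avg_ms,
         query
  FROM pg_stat_statements
  WHERE mean_exec_time > 20   -- the 20 ms threshold mentioned above
  ORDER BY mean_exec_time DESC
  LIMIT 10;"
```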
Yeah, I actually rewrote that first slow query as a part of something else that I was doing.
It's not out there yet, but that one's not going to exist.
Okay.
And that only runs in background jobs anyways,
but not in page requests.
The second one is loading episodes by download count.
And I think it's loading a lot of them.
And I'm thinking that that is either our,
it's tough because this doesn't tie it back
to like what page is being requested
or anything like that.
That's either going to be our episode popularity page,
which is public,
or it could be a statistic in page
that's inside our admin
that shows downloads across multiple podcasts, in which case it's way less important. And the fact that it only has 6,300 calls, versus the other one, which I think runs in the background and has, you know, 4x that, makes me think that's probably an admin page. So maybe not even worth addressing. But got it, cool to know. And definitely I'll go remove those duplicate indexes, for sure, because I'm a clean freak, you know.
Yeah, and seeing like what else is going on here.
So, you know, the queries that run,
do we expect those to,
again, it gives you like a good view.
The space that's being used by various things,
you know, what is unused, things like that.
The connections, right?
We're saying that we suspect connections.
I'm not sure whether we will be able to see,
and I'm clicking on that.
It's taking a while to load.
Maybe we'll just see what the connections are doing.
But if we suspect some connections
that are terminating prematurely
or hanging or anything like that,
maybe this can help.
I don't know.
What would it take for these queries
to tie back to like an error trace kind of thing
where it ties back to a page?
Is there a config in this?
Do you think it's an option?
I haven't looked, to be honest.
I'm not sure.
I think as a human.
As a human.
I think that a human would have to do that.
And I know nothing about PgHero,
so I could be talking completely out of thin air here,
but I just don't see how
it being a general Postgres diagnostics tool
would have hooks back into anything beyond that, that silo.
I see. I suppose when you call
Postgres, you're not saying, here's the page
I'm calling from, or the URL structure
I'm calling from. It's just simply a database call, right?
Yeah. Right, and this tool
is basically talking directly to Postgres, and it's
using a bunch of Postgres'
queries and tooling inside of it, and exposing
them via a web interface for you.
It's kind of just... you can get all this stuff with SQL queries, basically.
Is this like an open source tool that you could just use in any instance, or is this like a Neon thing?
No, no. PgHero is open source. It's coming from, uh, I think Spotify or Shopify. I sometimes get them confused.
Okay. Me too. I'm loading the page up. PgHero. So his GitHub username is Andrew Kane.
He's apparently the one that made the last commit.
I'm not sure whether he's the one that has the most commits.
Actually contributors.
Yes, he's first, which means he's the owner.
There we go.
So a performance dashboard for Postgres.
See it in action. And it's battle-tested at Instacart.
Okay.
I've seen a handful
of these Instacart
open source projects
that aren't owned
by Instacart proper,
but they all have
that little battle tested
by Instacart thing on them.
And so I'm guessing
he was allowed to create this
as part of his work
and open source it
under his own name,
which is pretty cool
if that's what they're doing.
Very cool.
Yeah.
Yeah.
Big thanks to Craig Kerstiens and Heroku for the initial queries, and Bootswatch
for the theme. So, credits.
So this project seems to have
some history. Love this.
Everyone is encouraged to help improve this project.
Oh yes.
Oh yes. Here are a few ways you can. Get in there, Gerhard.
That's very cool. Yeah, we can Kaizen it as well. If we don't have enough Kaizen to do on our end.
Yeah. MIT license as well, which is good.
All right.
So ready to move to the second one?
Or do we still want to PgHero it?
Let's do it.
Let's do it.
Cool.
So one of the things that I was very excited about when we transitioned to Neon, Neon Postgres,
is the ability to create branches,
database branches, where we can try things out.
I think that's a killer feature.
There's nothing to export.
There's nothing to import.
You just create a branch and off you go.
So pull request 508.
Enable changelog.com devs
to create prod db forks with a single command.
Now, that sounds very royal.
It's just Jared, let's be honest.
It is me.
I like how you royal'd me.
So what do we think about that?
What do we think about that as an idea?
Love the idea.
Would love to see it working with a single command.
And we merged this PR minutes before recording.
Right.
So no time to test it.
It's live, baby.
I don't know.
It looked pretty innocuous.
I went through the files,
changed,
and there wasn't much
that looked like
it was any sort of danger.
So I went ahead
and just merged that sucker.
And I do have
my WireGuard all set up.
I have Dagger 0.10.3 installed, as you requested.
That's amazing. Okay.
I did all the things, you know. So you prepared.
Yay! I'm ready for this to rock my world. I know last time you surprised me and I immediately started asking for more, and you were like, please, please continue to be happy about this surprise... And so I felt bad, you know. I want to really embrace what you've done for me here.
Cool. So let's do it. So in a nutshell, this builds on a recently released Dagger, I'd say, functionality. But it's almost like this is the third generation of Dagger, which enables anyone to write functions and then reuse those functions.
So use each other's functions.
You can think of it like the GitHub marketplace or Docker Hub, but this is for functions that
you could use in your pipelines and locally.
So more like NPM?
In some ways, yes.
In some ways, yes.
But so what's really cool about this
is that, for example, I wrote a function
that's called dbBranch.
And Jared is now going to run it for the first time.
The only thing he did, he just installed the prerequisites.
There's only two, really just one CLI,
the Dagger CLI for us,
because I already configured an engine,
a Dagger engine, which is running on fly
that we are connecting to again again, privately. This is not
exposed publicly, so it doesn't need to run
Docker or any other container runtime,
which is required for the engine. That's where all the
operations run. So,
there's a changelog directory
in the changelog repository. I'm assuming you're there.
Wait a second. There's a changelog directory
in the changelog repository? In the changelog
repository, that's right. Changelog in changelog.
This doesn't get me excited anymore, Gerhard.
I don't know about this.
It's changelog all the way down.
Oh my goodness.
So it is a bit meta.
So the idea with this is that we will put
any changelog-related functions in this directory.
The first one is dbfork.
It doesn't matter how it's implemented, it's there.
The next one, maybe, I was thinking,
it could be fresh data if you need it, right?
So you want to pull fresh data in your local Postgres instance,
and I could add it,
but let's see first how this works.
The other idea, because you have dagger call,
I was playing with this again,
this is a joke, is booty.
Excuse me?
Dagger call booty. If you need to edit it out, it's okay, but I thought it was funny. I don't know what it will do, but it was too good of an opportunity not to make the joke.
Let's just assume that you're referencing treasure, you know, like a pirate's booty, and that's what you're calling. So that's good to me. That's rated G. You made it safe.
I'm done with booty duties. All right, so back to db fork.
Okay, db branch.
Db branch. So you're in that directory, and you have WireGuard running so that you can connect to the engine. And there's an envrc file, which, by the way, is not hidden.
You can just source it.
All it does basically sets an environment variable,
which tells it, hey, the engine is running at this IP address.
And because you have WireGuard running, everything works.
So the next thing that you could do is use Netcat
to check that you can connect to that port.
It's a TCP port, and this is using WireGuard,
and it's using that private IP address.
Sorry, private host name.
Okay.
So I'm in the directory,
and now I'm going to source the envrc.
Yes.
And I'm going to run NC.
I haven't run Netcat for years.
Tell me the command.
nc, and then there's a dash, v for verbose,
z for... I can't remember what,
and capital G, which is a wait flag.
And then you do one.
So you want it to time out in one second.
And then you give the name, sorry, the host name,
which is dagger-engine-2024-03-28.
It's yesterday's date, right?
It's not difficult.
Got it.
Dot internal.
This is a convention.
Port 8080.
Port 8080.
And see if it works.
Missing host and port.
So I'm not doing the command correctly.
I put them as the same thing.
Do I do like a dash P or something?
I'm just going to zoom in.
I'm still screen sharing.
You can see it right there.
And by the way, this is in the pull request as well.
So they wanted me to...
Okay, succeeded.
Nice.
That's perfect.
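Put together, the check Jared ran looks something like this (macOS netcat flags, as described; the host name follows the date-based naming convention mentioned above and is only reachable over the WireGuard network):

```shell
# Verify the private Dagger engine is reachable over WireGuard.
#   -v    verbose output
#   -z    scan only, don't send any data
#   -G 1  one-second connection timeout (macOS netcat)
nc -vz -G 1 dagger-engine-2024-03-28.internal 8080
# "succeeded" in the output means the engine's TCP port is reachable
```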
So that basically confirms that you're able to connect to the engine.
Perfect.
If you run dagger version locally, you should be able to see 0.10.3. Want to confirm that's the case?
That's correct.
Nice. So the next thing is to run dagger functions. Dagger functions is a command that shows you all the functions which are available in that repository, in that path.
Dagger functions, initializing.
Nice. So this is going to set up the connection to the engine.
It's going to upload all the code and the engine will do the work.
So very little runs locally.
All right. dbbranch is the only one listed.
And the description is create a dbbranch on neon.tech and return the connection string.
That's it. Cool.
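As a recap of the listing step (the envrc file name is taken from the episode; it only exports the variable pointing the CLI at the remote engine):

```shell
# From the changelog directory inside the changelog repository:
source envrc        # exports the variable telling Dagger where the engine runs
dagger functions    # lists the functions in this module; currently just db-branch
```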
All right.
So the next step is to configure
a neon API key
environment variable because
that's what's needed to be able to talk to neon.
Okay. Do I have one of those?
I probably should.
I've been up in there.
You just log in, go to your account.
I'm going to do the same thing now.
neon.tech, log in.
I think continue with Google.
I haven't logged in, so let me go through the login now.
Okay, I'm there.
And you click on your username.
You go to, I think it's account settings.
And then you go API keys.
There you go.
Account settings.
API keys.
Create new API key.
That's the one.
Dagger branch.
Cool.
Okay.
Copied.
Nice.
So now if you go back.
Now I'm going to.
You do an export.
Neon underscore API underscore key.
All caps.
And then the value of that key that you copied.
Done.
Cool.
Rock my world, Gerhard.
Rock my world.
Dagger will.
Rock it.
Dagger will rock your world.
Dagger call.
Dagger call booty.
Not yet.
We're still working on that, remember?
Dagger call db-branch,
and then you give it a single flag,
which is dash dash neon dash api dash key.
Okay.
And the value is env, E-N-V, colon,
and the name of the environment variable
for the neon API key.
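The full invocation, as walked through above (the key value is of course a placeholder; you create it in the Neon console under account settings, API keys):

```shell
# Export the Neon API key created in the Neon console:
export NEON_API_KEY=<paste-your-key-here>

# Create a DB branch on neon.tech and print the connection-string exports:
dagger call db-branch --neon-api-key env:NEON_API_KEY

# Then copy-paste the printed exports into the SAME terminal and boot the app.
```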
All right, cool.
I think I've typed it correctly.
I am now executing.
It's connecting.
It's initializing.
Nice.
Change log.
Oh, is it done?
Before starting the app,
run the following in
order.
That's it.
It's done.
It's done?
That was fast.
So this made a snapshot
or it's made a new
branch on neon.
Exactly.
This is using like the
neon API to do that.
So now you copy those
values, the exports,
paste them in your terminal,
go level up, and then boot the app.
Booty the app.
Oh, my app's already booted, though.
So I'll come back.
So this will now connect my app to that snapshot?
That's exactly what it will do, yes.
I'll believe it when I see it.
Is this the default way for all Neon?
Or is this like the way that we had to do it for our app because of circumstances?
No, I mean, all of Neon.
Basically, you create a branch, you get a connection string, and you connect to that branch.
And that branch is a fork of your primary branch.
And that's your production in our case.
And in everyone's case, like the primary one that you don't want to mess up.
And then you work on your fork.
And when you're happy, you basically delete the fork
and then merge the code, do whatever you have to do,
push it, and then it will use the main branch.
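The same branch lifecycle can also be driven from Neon's own CLI. A sketch based on Neon's CLI documentation (flags may differ by version, and the branch name here is made up):

```shell
# Fork the primary branch; Neon branches instantly, nothing to export or import:
neonctl branches create --name dev-experiment

# Get a connection string for the fork and point your app at it:
neonctl connection-string dev-experiment

# When you're done, delete the fork; the primary branch was never touched:
neonctl branches delete dev-experiment
```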
If you do like schema changes, how does that work?
As far as I know, everything happens on the fork.
It will not modify the main one.
So the assumption is that you will have some code, maybe run a migration, that when the app boots (by the way, that's our case) it will make the change on the main branch.
Well, I don't want to pee in your pool here, Gerhard, but it didn't work.
Okay, what didn't work? And you can't, because we're not next door.
but i like the expression
the proverbial pool
I like the expression
well because you were
you're using the
Boy Scout rule
and while you were
doing this work
you also upgraded
all of our
dependencies
Oh, I upgraded.
And so I pulled the code down, and I don't have the correct Node.js installed. So now I have to upgrade my Node.js.
Oh, I see. Yes, yes, yes. So it's going to work.
I'm forcing you to stay updated.
I'm terrible.
You're just going to have to wait a little bit longer, I guess.
I'm terrible.
Keeping all those dependencies updated.
I'll stop doing that.
That's okay.
No, I like it.
It's just not good for coding together.
I know.
The changelog.com development team does not agree.
I will comply.
Okay, the app is now booting.
Nice.
And I am now going to load the homepage.
Sweet.
And I'm going to expect to see today's episode.
No.
The most recent episode was in December of 23,
so I'm still on old data.
Maybe my app isn't configured to use the connection string, or something, to the snapshot.
So did you do the copy-paste of the...
Yeah, you need to do the copy-paste.
Oh, you know what?
I did, but I did it in a different tab.
I got to do it again.
When in doubt, copy paste.
Exactly.
Yeah, and then copy paste again.
And if it doesn't work, did you paste?
And if you did, try again.
Copy paste.
Take two. I'm booting the app, and I'm loading the home page. I'm expecting to see today's episodes, which I just shipped. A Ship It! There it is. So cool. No way. Ship It 97, just shipped today. So I got fresh data here. No pee in the pool. I say this is a pee-less pool, only water, no pee. I got the freshness. I'll have a drink to that.
I'll have a drink to that. Of water.
So that's cool. So I guess, rewinding back to my question: when anybody integrates with Neon and they want to do this kind of integration, where they want to have DB forks to give devs this superpower, basically... this is a feature of Neon, but is it only living in the API, and you've got to do your own coding at the application level to enable your devs to use this feature? Because that's kind of what you did here, right? You've had to add some variables and make the application use different strings. Is this what everybody has to do?
So, no. You can do it via the UI. Go in the UI, click on
the branch, boom. It works. You could also use the Neon CLI to do the same thing, but for that you
need Node, you need a couple of things, you have to install it, things like that. The function which I added is an interface to the Neon API.
And I'm using the Neon Go SDK,
which by the way is a community contribution.
And I'm looking for the name: kislerdm. So kislerdm, he's a Neon community member, and he wrote the Go SDK for Neon. So it's not a...
Dmitry Kisler.
Dmitry Kisler. That's him.
So thank you, Dmitry.
That's basically the SDK.
And in the pull request,
the 508, Jared,
you can see exactly how I integrated it.
I'm pulling it down.
So again, it's an implementation detail.
I could have used something else.
I could have called, for example,
the Neon CLI,
but this seemed most elegant.
The idea is that the implementation
doesn't matter.
And to be honest, it doesn't even matter whether you use Neon. In our case, that's what we do.
But if we change, for example, back to Fly Postgres, it's possible, right? One day, who knows?
The idea is that this interface will not change. So you want to give simple, convenient interfaces
to your developers
so that they do not worry about implementation details,
which are ops or infra.
All Jared cares about is a fork of the database.
All he cares about is fresh data,
which was what we discussed in the last episode.
Right.
And I try to deliver, so.
Well, this shares the,
or shows off the relationship between ops and dev, right?
Because as, in quotes, ops, your role, you care about good DX.
And the way you've given good DX is not necessarily the Neon way. You've given the way you think our application should deal with this, as an interface to any database provider that has forking as an option.
Right? That's cool.
Okay. I'm glad you highlighted that because that's important.
It just shows off just the love for DX.
And really, I suppose, Jared, did that pass the test?
Are you a happy dev that your ops provided good DX?
Absolutely. I would say, but I don't like the folder changelog in my changelog.
We can call it something else.
That's okay.
What would you like to call it?
I can never be 100% satisfied.
It just wouldn't be me.
No, no, that's good.
I mean, we are kaisening on this, okay?
It was just like my first step,
so we're going to make it better.
Yeah, yeah.
I know you're not tied to it.
I just know that when you create folders,
they live for years.
So let's get this one right.
I think ops maybe.
Is it ops?
Are these ops?
Sure.
We can call it ops.
We can call it functions.
We can call it whatever you want.
Ops is fine.
Because that's the kind of functions it is.
Like we're going to put one in here called
DB
what did you say it was going to be called?
Fresh data.
I was basically going off your name.
You said you want fresh data
you do dagger call fresh data
boom you get fresh data
in your local Postgres.
Right.
And that one would actually pull it
into my local Postgres.
Correct, yes.
I probably would prefer that, but I'm happy with this.
Okay.
I just want to let you know that I'm super happy.
That makes me very happy.
You being happy makes me very happy.
All right.
Which makes Adam very happy.
Yeah, this is awesome.
We're all very happy.
I think this is, what I like too is this, I suppose, care, right?
Like we don't... we're Kaizening, but I think there's a level of care here from one individual in the team to another individual in the team.
And so, Gerhard, you have particular expertise and specificity when it comes to how you implement things.
And you don't just get the job done.
You think about the other person in the job roles. And that's what you've done here is
let me think about and have empathy for Jared and the other devs who may join our team in the future
to give them an elegant way to sort of like pull in fresh data as an example. That to me is kind
of cool. You know that I like to highlight that because we're humans making software for other
humans using the software to enjoy podcasts, to make more software.
What a many-layered onion there.
But that, to me, is kind of cool that you've got this humanity in the process.
That's a good Kaizen episode there, too.
Like, just a Kaizen thought process that you care.
And so, therefore, you put good DX in the processes.
Not just the work.
I appreciate that.
Thank you very much.
Let me ask you about neon costs.
If we had 10 devs doing this and they were all branching off of that one
and they were all coding against it,
aren't we paying more money
than if it was running on their local Postgres as an org?
I haven't looked at that, to be honest.
Okay.
That would be my next thought.
If I'm paying more money out of pocket
in order to do this this way,
I probably would just do the fresh data into my local and not run a branch.
And I get it that it scales to zero, but that's when you're not using it.
And maybe I go to lunch and leave my laptop on and my DB connection is still open.
And so just that idea of I'm paying money to use it nonstop,
it bugs me when I have so much raw horsepower just sitting here in this laptop.
Your DB connection being open during lunch, wouldn't that go idle?
If there's no background jobs that perform any actions,
yeah, if there's nothing running in the background,
then I think it should be fine.
But it's a good point, right?
We have a lot of horsepower locally.
The M1, the M, they're beasts.
So it would make sense to get that data running locally.
The question is, when we do that,
how good are the connections of those devs?
Do you want to pull all the data down?
And if it's fine, then yes, we pull all the data down
and it's running locally.
How often do you do this?
Do you have, locally, you only have one, right?
And if you want more than one branch, then you need to have, like, multiple instances of the changelog database, I think, in your case. I think you only work on one dev database.
Yeah, you don't need multiple. So that's fine.
I will, like, as a follow-up... this is exactly what I was going to ask. My primary worry is that it's not fast enough,
that means you do like a lot of back and forth, back and forth,
so you're always paying the penalty of the database not being local.
And that was the main feedback that we got on my point last Kaizen.
When I said I'm excited about working this way,
there was a handful of people, which probably represent a much larger group of people,
who were like, I would never do that because I work on airplanes,
I live in the sticks, I go offline a lot. I want to code
wherever I am. I don't want to be connected to a remote Postgres. And I was like, I totally get
that. Where I'm mostly doing it, I have a fast internet connection. I'm wired in. I have gig
internet. I'm not paying extra to use it. And I kind of like the idea of developing against something
that feels more production speeds. So I was kind of like, I'm developing against something that feels more production speeds.
So I was kind of like, I'm cool with it.
That being said, the more that I thought about it,
I was like, yeah, I probably would just want to develop
against my local Postgres, generally speaking.
I just want fresh data all the time.
And so I think that's a good point,
and it's kind of a to each their own.
I would like to emphasize an approach that was very, uh, dear to me when XP was still fun and young and everyone used to love extreme programming. You always want to do just enough and keep asking: is this enough? Does this deliver on what you wanted? So rather than, you know, going and spending a lot of time on coming up with a perfect solution, do just enough that looks like it's just enough and ask: is it enough?
So with that in mind, I think what we have now, this was very simple to implement. It embraces
Neon and the whole branching model. I can see some people wanting to use it and it was easier
to implement than the other option,
which has certain requirements from Postgres.
It has to do checks,
like am I importing the same Postgres version?
Are you running 16.2 or 16.1?
I mean, we had issues in the past where even, like, small minor versions had breaking changes between them.
There was like one bad indexing one,
which I remember it's still somewhere in our issues.
So it's a bit more complicated
because we don't control the local environment.
The other option, and I'm very glad we are discussing this: we could spin up a Postgres in the Dagger engine. There is that option of running Postgres as a service, importing all the data. But then the question is, well, where is the Dagger engine running? Currently it's Fly, and you want it to be local. And to get it local, currently you need a container runtime. It means Podman or Docker or something like that, and I don't think you want that.
Don't do that to me. Okay, so I won't do that
to you. You don't have Docker on your machine, Jared?
I don't run Docker on my machine.
I have it on my machine, but I only
power it up in desperate scenarios.
So you don't have the application launched.
Correct.
Just trying to understand, because I think that Podman or Docker
seems to be a prerequisite for most dev environments,
in some cases, shapes or forms, so that it's likely there,
but is it running, I suppose, is the next question.
I knew this, which is why I set up that remote engine,
so that Jared could actually test this.
So I knew that he doesn't have a container runtime locally and that's fine.
That's perfectly fine. So
as long as
you will run a local Postgres
which is the same version as a Neon
one, this will be very straightforward.
Or more straightforward than if we have
to do checks and ensure that it's the same one, then we
have to do failures and things like that.
So that's fine. No need for that. So I will always assume it's the same
version. I will pull the data down.
I know that you have a gig internet,
which will make it nice and fast,
even if we compress.
So next Kaizen.
That's what I'm thinking.
Cool.
Fresh data.
Are you still happy with fresh data?
Or booty.
Or booty.
I think fresh data.
I think we'll go for fresh data.
Dagger call fresh data.
It is. Fresh data it is.
Well, friends, I'm here with my good friends over at Synadia, Byron Ruth and David G.
And as you may know, Synadia is helping teams take NATS to the next level via a global multi-cloud, multi-geo, and extensible service fully managed by Synadia.
They take care of all the infrastructure, the management, the monitoring and the maintenance, all for you.
So you can focus on building exceptional distributed applications.
But before we even get there, we have to understand exactly what NATS is.
So Byron, when you get asked to describe what NATS is, what do you say?
How do you explain it?
It allows an application developer to adapt and evolve their application over time and
scale it based on maybe the unknowns that they have at the time that they started the
application. How that manifests is really in infrastructure complexity, developer components
that you have to bring in, whether they're streaming things, storage things, whether
there's operational complexities about multi-tenancy and considering the security, a consistent
security model around how your client components can talk to one another.
And once you start evolving your system, scaling your system, you're inevitably going to have to,
especially at the connectivity layer, bring in load balancers and proxies and network overlays
and things like that. And NATS provides you that foundation that allows you to not need to
introduce those additional things. You can Lego-brick your NATS servers together to scale out to the sort of topology that you need,
or use a managed service that Synadia offers, for instance.
And you don't have to add primitives
that are very common when building an application
like KV and object store and streams and things like that.
That's all baked into NATS.
And so what inevitably will happen
is that you start out with something simple,
simple request reply, and that's fine.
But you're going to need to adapt and scale depending on, you know, your use case and your needs.
If you start with NATS, I think it gives you a better foundation, not needing to introduce additional dependencies as you adapt and scale your system.
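The simple request-reply Byron mentions can be tried with the nats CLI against a local server (a sketch; the subject name is made up, and a running nats-server is assumed):

```shell
# Terminal 1: a responder that answers requests on the "greet" subject:
nats reply greet "hello from the responder"

# Terminal 2: send a request on the same subject and wait for the reply:
nats request greet "hi"
```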
I've got a total curveball one here.
A curveball?
Yeah, total.
Okay, let's hear it.
Oh, please do.
I'm not going to talk about NATS.
I'm going to describe what NATS is.
NATS is an intergalactic ready video conferencing system
like your favorite sci-fi show
when two ships pull up side to side,
they open a hail channel, they talk.
They might not know what the language is,
but the channel opens, communication happens.
So what happens when you open a video call?
Two people might talk. multiple people might talk,
the ships themselves might talk.
So what we're saying here is NATS gives software
the ability to have point-to-point
and point-to-multipoint communications
irrelevant of where they are,
irrelevant of how they're connected
and irrelevant of what languages
the connecting software is written in.
It's a video phone system for software.
And if you want voice recording,
you can have that as well.
We can add the ability to make sure things
and conversations are recorded for playback.
Yes, that is a curveball.
Thanks, David.
Well, there you go.
Today's tech is not cutting it.
NATS, powered by the global multi-cloud, multi-geo, and extensible service fully managed by Synadia, is the way of the future for application developers.
Learn more at Cinedia.com slash changelog. That's S-Y-N-A-D-I-A dot com slash changelog. Again,
S-Y-N-A-D-I-A dot com slash changelog.
So I'd like to spend a little bit of time
talking about your 55 commits
and no pull requests, Jerod.
What is lurking in those 55 commits?
Oh my gosh.
That you've made since the last Kaizen.
This is like, we're kind of like a yin and a yang,
you know, we're kind of like opposites.
That's why it works so well, I think.
Yes, because you document everything and create pull requests.
It's Jian-Yang.
Isn't that what I said?
No, you said yin and yang.
Oh.
It's Jian-Yang.
I'm kidding, it's a Silicon Valley joke.
Oh, yeah, I don't watch the show.
It's a Silicon Valley joke.
You're asking what my last 55 commits did?
That's a hard thing to answer.
Anything noteworthy in those 55 commits
that we want to be Kaizen-ing?
I'm sure there must be a bunch of things
that are noteworthy.
There were some things I saw in there.
Well, I think one thing is that you pulled out
Turbolinks.
That's true.
That's kind of like a big deal.
I suppose.
So the reason for that wasn't because it wasn't working or anything like that.
We are designing a new changelog news homepage, landing page design,
and we want to be able to do a clean break from our current assets,
which is our CSS and whatnot.
The weird thing about Turbolinks is when you are hopping back and forth
between pages that have different styles, or a page that supports Turbolinks
and one that doesn't, you'll often have the wrong style sheet applied.
Not often, that's not fair, but you will sometimes have either no style sheet applied or the
wrong one applied, based on whether the destination page supports Turbolinks or if it's navigated
to directly.
There's just this weird uncanny valley of page navigation.
And the reason for Turbolinks was merely to allow for our player
to run across the site with the idea of people reading our website
like you would read a StumbleUpon-style website
with links and information and news.
Back when we were posting a feed of news on the homepage, maybe you're listening to a podcast, maybe you're looking at the comments, you click
over to this news item, you're going to read this story, etc. And we want that player to persist
throughout. We've since simplified our website and moved away from that model for various reasons,
which I could dive into if it's interesting, but we just don't think that we really need that
anymore. And what we
really want to be able to do is to navigate between different app layouts seamlessly. And in
preparation for that, I just took Turbolinks out. It also reduces our overall page weight because
there's a lot of JavaScript, relative to how much JavaScript we use, supporting the Turbolinks-based
navigations and form submissions, etc.
So that was one thing I did. And that was really just in preparation for this
new news page. Everything else is just house cleaning.
Our Twitter embed broke at one point. I thought it was
just Elon that did it. Changes to the X API and stuff.
And so I just left it for, oh great, they broke audio embeds.
No, I broke audio embeds at one point
when I was upgrading our paths
to the new way that Phoenix wants you to do verified routes.
So I fixed that.
A lot of house cleaning, minor changes,
like putting our Changelog++ album art
on the homepage as if it's another podcast.
Why not?
That's the kind of stuff I do as I'm just working on other stuff.
I'm just constantly improving.
The big stuff that I'm doing, which also is not a pull request,
but could be more of a pull request because it's one big feature,
is the custom feeds work that I referenced earlier.
That I think will land soon,
but probably just for us to use internally
and make sure it's all working right.
That's a good idea.
And then we'll roll it out as part of our
Changelog++
revamp, which is
pending.
And that's all I've got to say about that, unless you have a specific...
No, no. I was just wondering, because I didn't look
through all the 55 commits.
Right.
Little things, you know, people write in, they say, hey, it'd be cool
if this would happen, or
the way you're sending, I mean, we have the nerdiest
listeners and readers, I love it, so
our conversations are always very technical
and with advice, you know.
The way our plain text Changelog News
email was rendering was suboptimal.
And so
we had a few back and forths, and I realized
we're taking markdown content
and turning it into plain text to send a plain text email, which actually munges everything and
puts everything on one line. And I didn't know that because I don't read plain text email. I'm a
normal nerd, not a super nerd. You know, I just read HTML email like most people do. But the ones
who have their plain text email, they're like, this is all one long line.
And then I realized, I was like,
yeah, but I have to switch it to plain text.
Then I was like, wait, Markdown basically is plain text.
Why don't I just send the Markdown as plain text?
And it's going to be much better formatted for them.
And so I did that and two people were super stoked.
So that felt good.
Stuff like that.
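The fix Jerod describes — send the Markdown source itself as the plain-text part instead of flattening rendered output — can be sketched with Python's stdlib email package. This is a hedged illustration, not the site's actual Elixir pipeline; the addresses, subject, and pre-rendered HTML string are all hypothetical stand-ins.

```python
from email.message import EmailMessage

# Hypothetical newsletter source; Markdown already reads well as plain text.
markdown_source = "# Changelog News\n\n- NATS deep dive\n- The CDN saga continues\n"

# Stand-in for rendered output; a real pipeline would generate this
# from markdown_source with a Markdown renderer.
html_body = (
    "<h1>Changelog News</h1>"
    "<ul><li>NATS deep dive</li><li>The CDN saga continues</li></ul>"
)

msg = EmailMessage()
msg["Subject"] = "Changelog News"
msg["From"] = "news@example.com"
msg["To"] = "reader@example.com"

# The fix described above: ship the Markdown itself as the text/plain part,
# preserving line breaks and list markers for plain-text readers...
msg.set_content(markdown_source)
# ...and attach the rendered HTML as the alternative for everyone else.
msg.add_alternative(html_body, subtype="html")

print(msg.get_body(preferencelist=("plain",)).get_content())
```

Plain-text readers get the nicely line-broken Markdown; HTML readers never see it, because their client prefers the `text/html` alternative.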
That's very cool.
That's a real Kaizen-ing.
Just Kaizen-ing, man.
Just Kaizen-ing.
Kaizen-ing.
Very sweet.
Yeah.
Well, before we go, I do want to mention there is one other CDN to consider.
Oh, really?
I've had a conversation with their CEO.
They are deeply integrated with Fly.
In fact, they're built on top of Fly.
Some would say they listened to our episode before we even recorded it and shipped it.
I'm sharing it in our shared Slack as we speak.
So maybe as we circle back to this seemingly never-ending CDN saga (as you've said in our PR, with the TM, so we're trademarking that), it's the pursuit of who is the holy grail,
who has and holds the holy grail of CDN.
So, Tigris. I talked to their CEO.
Really cool.
Obviously big fans of Fly, S3 compatible.
They have some big ideas.
Not all big ideas that we would totally embrace,
but definitely big ideas for an S3 compatible object storage
that is intended to be a CDN for developers.
So I would add that to your list, Gerhard,
as we continue down this path.
I'll keep working with our friends at Cloudflare.
We've gotten closer, I should just say, with Cloudflare.
We're now promoting their developer week
happening April 1st through April 5th.
There's a meetup here in Austin I want to invite everybody to.
I can drop a link in the show notes.
April 4th, I'll be there.
It's in the Austin office here in Texas.
So if you're in the area, come and say hi.
There are limited seats.
So I'm obviously pulling for Cloudflare.
There's so much relationship investment there in terms of how we work,
but no ink on paper, no enterprise plan in hand.
So therefore they haven't paid their deposit, basically, but they've definitely shaken the
hand, let's just say, if that's a way to say it. So I'm leaning towards Cloudflare, and I'm
hoping that there's no embarrassment whenever we do the testing with them as well.
Because that would totally suck, basically.
Build all this up and it's not really any better than Fastly performed, or even Fly with no real CDN, just simply the Fly network.
So anyways, I'll leave it there.
Dig into that.
Have you heard of Tigris before?
I have, and I'll check it closely.
I just heard about it. I haven't looked into it, but it's on my list.
So thank you for that. I'll check it out.
Yeah. I would add just to your curiosity
list. Don't go too deep.
One hour. Unless you really want to. One hour.
Whatever it takes. And hopefully
next time we come around to Kaizen,
we'll have gotten
our enterprise keys
from Cloudflare
because we're using R2.
We just need to move to the CDN and do a true test
before we really go deep on that relationship.
I feel like we need to do that.
And we're missing one thing.
Otherwise, it would have been this test.
What a shame.
To be continued.
Indeed.
The CDN saga, I like it.
I like it.
It's going to be amazing.
Whatever comes out of it is going to be amazing.
I'm going to add a couple more details
in discussion 499 in our GitHub repository.
If anyone wants to, for example,
to set up some monitoring on our public endpoints
to see how they perform,
to look at them, to see if they spot anything
interesting, different, unusual,
like the more eyes on this, the better. But I'm going to share my results and we'll see where we
take it from here. Discussion 499, by the way, is the one for Kaizen 14. They don't put the
499 except on the show page of the discussion. They don't put it on the index. So when you're looking
at the index, it's like, which discussion is 499? So that's why I clarified.
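For anyone taking Gerhard up on the invitation to monitor the public endpoints, here is a minimal, stdlib-only Python sketch of the idea: time repeated GETs and summarize the latency distribution. It's an illustration under assumptions — the sample values are canned stand-ins for live measurements, and any endpoint you'd actually probe (e.g. the changelog.com homepage) is your choice.

```python
import statistics
import time
import urllib.request


def time_request(url: str) -> float:
    """Return wall-clock seconds for one GET to `url` (not called here)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start


def summarize(samples_ms: list) -> dict:
    """p50/p95 over a batch of latency samples, in milliseconds."""
    cuts = statistics.quantiles(samples_ms, n=100)
    return {"p50": statistics.median(samples_ms), "p95": cuts[94]}


# Canned samples standing in for repeated time_request(...) calls
# against a public endpoint; note the one slow outlier.
samples = [42.0, 45.0, 44.0, 300.0, 43.0, 41.0, 46.0, 44.0, 47.0, 43.0]
print(summarize(samples))
```

Comparing p50 against p95 like this is exactly what surfaces the interesting CDN differences: medians often look identical across providers while the tail tells the real story.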
Kaizen 14. Got it. Yeah.
There's one more thing which I want to mention.
And I want to say that Jared was right.
I like that. The thing which I was telling you about in January
shipped on February 29th.
Jared was right. Oh, yes.
It shipped on February 29th.
It did. It did. This was your
life project.
Exactly. Yes.
On your birthday, on your 10th birthday.
On my 10th birthday.
Not your fourth birthday.
I did the math wrong.
On your 10th birthday.
Exactly.
On my 10th birthday, it went live.
The name is Make It Work.
Make It Work.
Make It Work.
And if you look for that, you may find it in Apple Podcasts.
Oh.
But what you should do instead is go to video.gerhard.io
That should be the
entry point.
video.gerhard.io sends me to a YouTube channel.
That's it. Cool.
Make it work. What exactly is this again?
Give a one minute refresher.
Yeah.
It's a new content space
that I created.
You know, I was big on screen sharing.
I was big on video.
I was big on conversations that just go through all sorts of rabbit holes.
So talking of holes, the square hole, we'll talk about that in a minute.
So this is various conversations from various places.
And some of it is audio only.
Some of it is screen sharing.
And I'm slicing and dicing it based on the format.
So for example, with Eric,
when we talked about BuildKit
in the context of Dagger and Docker,
we talked about how much of BuildKit is in Dagger.
How does that relationship work?
What is good about BuildKit?
How did he discover BuildKit? Things like that. So I think we talked for maybe an hour, I think,
45 minutes. Part of it was video and part of it was audio. And that is a format that many are
familiar with. But what people are not familiar with is, for example, talks. Like how can you do
like a talk only online? For example, the square hole was a talk
that we submitted for KubeCon EU that didn't make it through the CFP stage. So we thought,
you know what, we will just go ahead and do it anyway and just put it online exclusive,
not even rejects conf, like KubeCon rejects. So that was a talk that I was very excited about.
I thought it was like a very good idea
in terms of how it fits.
And it was a bit of Argo, it was a bit of Dagger,
it was a bit of Talos.
It was a combination of all of it.
And it's online only.
It's on YouTube.
So you can go and check it out.
The other one was like when I was at KubeCon,
I brought my 360 camera.
I bought a new one.
I was thinking, you know what?
Let's try how this would work. So I did some 360 recording and we were in the booths of various
companies, including Dagger's booth, and we were recording, having a conversation,
we show the 360 video. I think it's cool. The idea is that for this content space, I'm thinking of Once. Do you know
Once.com? The idea: pay once, own it forever, right? I'm thinking of doing something similar for
this. So makeitwork.gerhard.io is just a placeholder. It's for people that want longer-form content.
It's basically the things that I go deep on, and it takes me a while to go through that.
And then I condense that in maybe an hour or two. I'm thinking of publishing it there
and charging once for it, and then you get access to all the content. So it would be paid content,
but it'd be more than a book or a course, because it's something that spans a long time.
And again, I mentioned the project of a lifetime. I love producing a certain type of content
that includes screen sharing
that takes a long time to produce
and I had fun doing it
and AI is helping.
I mentioned that.
It's really helping with a lot of things.
One day it could even edit the videos.
Who knows?
I don't think we're there yet
but I'm curious.
It's definitely generating some great art.
It's definitely doing some good summaries, like a lot of the heavy lifting that I used to do in the past it's helping
with, and I think it's only going to get better.
Yeah, I agree. I mean, we have an AI feature inside of Riverside that does summaries of descriptions, and I don't take it verbatim, but I allow it to
create my list of sorts, so that I'm like, okay, what did we actually talk about?
Because it kind of creates the list for you and is essentially your AI brain to remind you what the conversation was about.
And I'll pull that out and recraft it in my own way, in the human way, of course.
But it's definitely helping me remember this table of contents we kind of put into podcasts that helps write the description,
voice the description, plan for the show, you know, the publishing of it and whatnot. So I can definitely
see that. And that's cool. Like, even, I'm not sure if this is AI from Apple, but like, I'm still...
I mean, sometimes I'll even come to tears for the videos they put together, of September last year
kind of thing, you know, whatever kind of reel they give you
for your videos on the iPhone. You know, I'm like, wow, that had pretty decent music. It showed all
the cool videos and clips and photos of me and my family, and here I am, a dad in tears essentially,
you know. Like, I didn't edit that. They did that. So I imagine at some point it gets better and better. And then you just rely on and trust, really, the editor, the AI editor.
Well, I still want to review it for sure.
But at least it doesn't take 10 hours to get there.
You get there like in an hour.
There's a certain level of trust that comes there, though, right?
You just preview it.
You're not actually in the details.
No, no, no.
The cut is here, not there.
You know, you're more like, okay, that's good taste. That's it.
Yeah. Yeah, cool.
I had to use this because of Kubernetes. I had to use this... Kubernetes, sorry, pod, right? You have to use pod. So there's like a pod.gerhard.io, and that's where
just like the small, or like the audio-only, portions are. But some of this content doesn't fit audio.
I say a lot of this content doesn't fit the audio format
because screen sharing is very specific
and you need to see things.
And there's like a certain level of detail
that you would miss in an audio only conversation.
So yeah, every conversation which I do
needs to have screen sharing
because without it,
I feel a lot of detail is lost.
My perspective.
And anyways,
that was it.
So Jared was right.
It was 29th of February.
That's when it went live.
And we're exactly a month later,
29th of March.
Right on schedule.
There's a bunch of KubeCon conversations
that will go live
and including like the whole
like pavilion, the whole
like solutions showcase, like the whole, like a 10 minute walkthrough, like the entire thing.
So it's video only. Audio is just noise. If you were to listen to it, it's just lots and lots of
noise. It was a noisy floor, but yeah, it was fun. KubeCon was amazing, by the way. This was my
favorite one. I haven't been to a more immersive KubeCon.
There were like so many people, so many great conversations. My voice left me twice.
Like in the span of three days, it was like full on morning till evening and then parties. Luckily,
I didn't stay up too late. I went like into my nice quiet hotel and just like had a bit of time to, you know,
downtime. That was good. But it felt very personal. This KubeCon felt very personal, in the
sense that you weren't talking to vendors, you were talking to friends. There were 16,000 friends.
Can you imagine? So many. 16,000. It was an amazing conference.
So yeah, that's awesome.
It was only a week ago. I literally came back one week ago.
So, yeah. It was a busy month.
Cool stuff. We'll link up to Make It Work, I don't know, video.gerhard.io, pod.gerhard.io. Just send us all the links, Gerhard. We'll put them in there for folks, so they can follow along with your life project.
Sounds good.
do you want to talk about the next Kaizen?
Or is this the bombshell that you want to end on?
I'm thinking Jeremy Clarkson.
I still love the man.
I still love the man.
The artist.
I don't know Jeremy Clarkson.
Jeremy Clarkson, from Top Gear, The Grand Tour.
Kelly Clarkson.
Yeah.
No, I'm not a Top Gear guy.
You know I'm not a car guy, Gerhard.
I am.
So yeah. Is Adam a car guy?
I was when I was younger, and I did watch Top Gear, yeah. But there were just such expensive cars, I was just like, forget it, I will never be able to do this. I
grew up absolutely poor in a small town in Pennsylvania. Like, my hope was very low. I just
love the silliness. I mean, you can be silly in any car
and the fun of it.
You know, they have some good shows.
Those are really like bangers as they call them.
I think those were like the most funny ones.
And in some countries that you would
maybe not even visit
because they're dangerous.
Anyways, it was a funny show.
So yeah, that's where the bombshell comes on.
He always ends his shows on a bombshell, and I'm not Jeremy Clarkson, far from it.
What exactly would you describe as a bombshell?
A bombshell is something, uh, that is like, you want to know more about. It's almost like, Hey,
like you've said this thing, like, no, no, you can't stop here. Like keep going. Like there's,
you know, it's almost like, yeah... We were just like... A cliffhanger?
Yeah, a cliffhanger. Yeah, I think so. Yeah, a cliffhanger. Okay. Something unexpected, something like, oh, interesting. Okay. But, um, we can end there as well. I think we ended on
the bombshell. Then we can end on the bombshell. It's called Make It Work. There you go. Yeah. Cool. All right. Happy Kaizen, guys.
Happy Kaizen.
See you next time.
Kaizen.
Kaizen.
All right.
This has been Kaizen 14.
That means we've released 13 other Kaizens prior to this one,
and you can find them all at changelog.com slash topic.
We would love to hear your thoughts about this conversation and what we should do next. Let us know in the comments.
There's a link in your show notes.
Thanks once again to our partners at Fly.io, to our Beat Freakin' residents, Breakmaster
Cylinder, and to our friends at Sentry.
We love Sentry and have been using their service for years.
If and when it's time for you to check it out, use code changelog. That helps us let Sentry know we're making an impact on their business,
and it helps you because they'll give you 100 bucks off the team plan. Once again,
use code changelog, all one word. Next week on the show, news on Monday, Scott Chacon,
co-founder of GitHub and now GitButler, on Wednesday. And Breakmaster
Cylinder, yes, BMC is coming back on Friends on Friday. Oh, I want to do that. I so badly want
to do that. Have a great weekend. Leave us a five-star review if you dig the show. And let's
talk again real soon.