The Changelog: Software Development, Open Source - Kaizen! Should we build a CDN? (Friends)
Episode Date: January 12, 2024. It's our 13th Kaizen episode! We're back from KubeCon, we're making goals for the year, we're migrating to Neon & we're weighing the pros/cons of building our own custom CDN...
Transcript
Welcome to Changelog and Friends, a weekly talk show about CDN shopping.
Thank you to our partners at Fly.io, the home of changelog.com.
Launch your app close to your users.
Learn how at Fly.io.
Okay, let's talk.
Gerhard is here once again.
We are Kaizen-ing in 2024.
Yeah.
Great to be back.
2-0-2-4. We're in 2024.
Here we go.
We made it.
We're here.
Yeah.
The first Kaizen for this year.
And it happened so soon.
We all made it back from Chicago.
No crazy stories on the way home.
We already shared all of our crazy stories on the way there.
So here we are.
Did we actually share those stories though?
I think it was like in a...
I've learned how to say cheers, Adam and Jared style.
It involves a single glass.
Oh yes.
That was so funny.
That was one of my highlights.
Say more.
Say more.
So apparently the way you say cheers is both of you hold the same glass.
You hold it up.
So it was like you're almost holding hands. And you say cheers. I haven't seen that one before. That was so fun.
That's funny. That's funny.
That is funny.
It was so inconsequential to me, I don't even remember it. No offense.
I think it was a picture moment. I think we have a picture of that somewhere.
We're holding the same glass? Pretty much, yeah. We're pretty close. We get pretty close around here.
We're not holding
the exact same glass.
We're holding our own version
to the glass
and we're clinking them, right?
Is that what you're talking about?
No.
No, no.
Gosh, maybe I am missing it.
There is a picture of this.
The picture didn't happen.
Yeah, I don't remember that.
The picture didn't happen.
No, no, it didn't happen.
That's okay.
What happened in Chicago
can stay in Chicago.
Unless you have a picture
and then it can come out.
I'd have no problem with it.
I can look it up.
It's there somewhere.
I'll take your word for it.
So the receipts are in the show notes.
If Gerhard can come up with receipts, they will be in the show notes.
If not, then we just know he's just fabricating evidence.
Yeah.
Yeah.
Since this is the new year, can I just say that I remember when Gerhard used to version
our infrastructure by the year.
Yes.
And now it's sort of versioned, I guess, every two months or kind of continuously in a way,
really.
That was crazy, right?
Like we're a whole new era of continuous improvement.
Yeah.
I mean, I do, it's almost like a generation.
So for example, our Fly app, the one that is currently running production is 2022-03-13.
And guess how I remember it?
I just remember it just sticks with you.
And the next one,
the one that we're currently experimenting with
is 2023-12-17.
So 17th of December.
Okay.
This is the new generation of the ChangeLog app.
But it's already old and busted.
It's 23.
We're on 24 now.
Yeah.
Well, guess what?
We can delete that one and set a new one up and that's okay. It's too easy, right? Yeah. It's too easy. I like your propensity
to date stamp things because it's very nice for like remembering like, Hey, when did I do that
thing? What I don't like about it is it makes things feel old. For instance, one subdirectory
in our code base that I do not appreciate. And I'm here to air my grievances, is 2022.fly.
A, that's just an ugly folder name.
B, that's forever ago.
C, I have to go in there to do stuff with fly when I could just have all that in the root and just be chilling.
So I'd just like you to defend the decision-making process there, Gerhard, and explain it to me.
How did that come to be?
So it really was the generation of the app.
It was 2022.
We've set it up for 2022.
And I just created the fly folder.
Because before, if you remember, we had the various Kubernetes clusters.
And we had them versioned by year.
And we were kind of straddling for a while, weren't we?
Yeah.
And exactly.
This was like our migration to Fly, which happened in 2022.
That's how long ago it's been.
And since then, we really haven't changed
the app. We've done a bunch of other things,
but that app, in
its implementation state as is.
There's a new
Fly.io directory where we're starting
to capture apps. Yeah, that's been there
for a while. Is it year-stamped?
It's just fly because the apps
are time-stamped in the directory.
And the one that we have there,
you'll see it's the Dagger engines.
Gotcha.
Because our CI also runs on fly,
the workers themselves.
And that's what that is.
So the new app,
which is basically part of a PR,
492, and we'll get to that in a minute.
In-flight, in progress.
Exactly. It's in flight.
It's also in that directory. So the app is timestamped and we have multiple apps because the idea is we have more than one. And like changelog social, for example, it's another app
that we run on fly, but that's in a different repo. Maybe we consolidate, maybe we don't,
I don't know. The point is it's a nice way to store all your apps because we have more than one
and then you know which one you're targeting.
It makes it very simple
to not make mistakes
when you want to work against a specific app.
You can't basically be in the root
and the root has changed and then maybe
you're targeting a different app instance.
This way it's very clear which
app instance you're working against.
Makes sense. Well said. Good defense.
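To picture the layout being discussed: the two directory names mentioned on the show (2022.fly and fly.io) are real, while the app subdirectory names in this sketch are assumptions for illustration, not the actual folder names.

```
changelog/                      # repository root
├── 2022.fly/                   # legacy Fly setup from the 2022 migration
└── fly.io/                     # newer home for Fly apps
    ├── dagger-engines/         # CI workers that run on Fly (name assumed)
    └── changelog-2023-12-17/   # new app instance from PR 492 (name assumed)
```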
Are those tied to the machine then, like you said?
Or does that make sense to tie it to the machine?
Or did I miss that part while I was trying to grok everything you're saying?
No, it's just the app instance.
So each app is backed by multiple machines.
Okay.
So that is like a subdivision of the app.
And this fly directory, fly.io directory,
is part of the 492 pull request, or this is predating that?
It predates it.
Okay, because I didn't see it in master.
It's there. I got it in my code base.
Is it there in master?
Mm-hmm.
Yeah, it's just hidden.
Oh, I see it.
Because it only has one directory.
It's not year. There's no year sub.
Yeah, that's why.
Okay, cool.
But more are coming.
More apps like this one, for example, the second one.
Because we've been doing this for a while, right?
We have two apps, like two change logs,
running at the same time.
And we don't want that to be part of a pull request
for too long.
This 492 is a special case.
Again, we'll come back to that.
But that's the idea.
You can have multiple apps running at the same time
and you do like a long blue-green.
Awesome.
Dig it.
One thing which I would like to do now,
because it is the beginning of a new year,
is take a step back and take a bigger take on this.
Okay.
So what I'm thinking, and we have time.
Okay, this is edited, so it's okay.
It's a big idea.
When I answer this real quickly,
everybody will know that there's like six minutes of silence
that got edited out when I was thinking about my answer.
What is the one thing that you want to achieve this year with regard to changelog.com? You can make it as big or as small as you want.
Okay. We have some big ideas. They're more like features, though, not infra.
We can go there. This is basically so we don't constrict the creativity and the space, right?
He wants this as open-ended on purpose.
He's setting us up here.
Open-ended on purpose.
Yes.
And mine is big.
I can tell like mine is really, really big.
Okay.
Oh, wow.
Why don't you go first?
Okay.
It's as if I'm prepared.
Yeah.
I am.
So I'll go.
No edit necessary here.
Yeah, go ahead.
My birthday doesn't happen every year.
That's right.
You're a leap year, baby.
And this one is special because it also kicks off a new decade for me.
Oh.
So just to put it into perspective, the next time that my birthday coincides with a new decade, I'll be 60 years old.
So this is like a once a score, you're scoring.
Yeah, pretty much. So after two decades of hands-on experience, which is well over 10,000 hours, I have this urge to produce something that I haven't done before. Something in the content space, something that combines audio and video and AI. And AI is a very important element. And 2024 is a combination of so many things for me that makes me really excited for it, because it doesn't come often.
No, this is a lot of pressure.
I think this is it. The next one, I'll be 60. So it's big. I told you it's big.
Do you have more than that, or is that all you're saying?
That's all I'm saying, because...
So big that you're not going to put any sort of box around it yet.
Remember last time when I've done this? Let's see if this time it works better.
Bigging something up and then disappointing you.
So I'm not going to say any more.
Yeah, don't build it up too big. So content and AI.
And video and audio.
Yep.
Okay. Is there any more details?
That's it. Just content space.
And it's going to ship on your birthday?
Before. But yes, I'm going to do something special for my birthday, for sure.
It's a one-time thing, or is it an episodic thing?
I think it's going to be an episodic thing, but I have all these interests in hardware and software and combining things. And it is the long term that I'm thinking about. Not months, not even years. Decades. Something that can be tracked over decades. Something that when I'm 60, I can look back
and I can say, wow, the last 20 years have been amazing.
So that's the timescale that I'm thinking at.
Okay, so you're going to start something,
but you're not going to finish it.
It's going to be a new thing you're starting.
Yeah, something like that.
Okay.
How do you start and not finish?
I'll finish when I'm 60.
Or when he's done.
I can take, yeah, or when I'm done, exactly.
Okay.
But I'm, you know, this is enough.
I had fun.
How frequent are these episodes?
Are they yearly?
Are they monthly?
Are they weekly?
Let's see what happens.
Okay, wow.
He's not going to say.
He's not going to build it up.
That's a good goal.
I mean, I feel like I shouldn't have any goals shared after that one.
I mean, I'm going to sound like a mere piker no matter what I say.
We can make a smaller one. I mean, this is big, right? There's like different timescales in the context of... This is like a project of a lifetime. Yeah, something like that. It feels
that way. Almost like a next evolution of something that I've been working on for a long, long time.
And ShipIt was part of it, by the way. That was just a small part. It was a stepping stone on
your way to this other thing. Pretty much. And before that, it was the RabbitMQ videos.
TGIR.
Those were fun.
That was like a whole year of videos.
Well, I hope you achieve that goal.
It's still live.
TGI.RabbitMQ.com.
People can go and check it out.
I was terrible.
But I've learned so much.
So go and have some fun.
See how not to do videos.
I was learning. That's how you learn.
All right. I like that one. Adam, what's your goal for the year? Big or small. Doesn't have to be as big as Gerhard's. Probably won't be.
I have two.
Oh, he always does this. He'll end up with seven.
Yeah, well, I think we know what one goal is, which is to finally get Plus Plus in-house, meaning not on Supercast. And I think that there's some things that we'll all gain from that: how we promote it, how listeners understand it, how it can grow, how it can be embedded in the application process, and just how all the workflows work.
I think there's a lot of gain there.
And I know we've been taking incremental steps towards that.
And I think one that you and I were kind of passionate about, Jared, so I think if you
don't mind me talking about the one we just talked about at the tail end of the last year,
which has the word J-O-B in it, and I guess an S.
Is that cool?
Can I mention that?
Well, we're goaling. So yeah, go ahead. Doesn't mean it's going to happen, but it's a goal.
Yeah, I think so. I think it's worth talking about. And it feels kind of even weird to put this as a
goal because it seems very simple, but I think to execute at the level
we like to execute at, it is not very simple. And so
we were talking with our friends at GoTime about just different ways
to sustain that podcast.
And during the conversation, the idea of a job board came back up as a way to
alternatively sustain a podcast. So that show has not had the best track record of being
well-sponsored, but it also is a really awesome podcast. And that's not its fault that it has
trouble gaining and maintaining
sponsors. I think it's just a challenging thing for the podcast industry. And I was like, well,
what if we found a different way to like give value back to that community? And so the conversation
sort of stemmed towards, you know, it may be a go focus job board. And then I think afterwards,
Jared and I had a brief conversation or it was in Slack or something like that. Like, what if it was just changelog.jobs?
So there is a .jobs TLD.
And so if we had changelog.jobs and we made it a SaaS product where you can subscribe to it to have jobs there frequently being the job promoter, not the job seeker, and we leveraged our podcast network and found a way to automatically or systematically
pull those job promotions in and out of the podcast to make them dynamic, basically, then
we have a real interesting way to have a job board that has an interesting economic footprint
behind it where it's SaaS based or one-off based.
And we really do a pretty good job of this job thing. We're not just a put-it-up-and-go-post-a-job kind of thing, but far more embedded into the network. I think if we can execute on that well, then we have a decent, I would just say, money maker on our hands that helps us sustain whenever sponsorships slim, or, you know, as we prop up Plus Plus and that becomes more and more of a leg. Which, honestly, for those who support us on the Plus Plus side, we want it to be more of a leg to our chair, I suppose, in terms of stability. But we never really anticipated it being that. Traditionally, sponsorships have always trumped the amount of revenue we can gain from Plus Plus, but I think there's an untapped market on subscribers supporting you, and I think that's where bringing Plus Plus inside gives us that chance. But then also this opportunity for changelog.jobs being a great indie-dev-centered place to get and look for cool jobs. And I think one part of that is maybe the vetting process.
So lots of interesting things on how to execute,
not just throwing it up and, you know, there you go, post a job,
but something that's a bit more well-executed
and really for the indie market,
because most of the indie markets in the job space
that have been like boards have been bought up by the big guys,
you know, the big folks.
Or neglected.
Yeah, or neglected. I mean, GitHub Jobs is pretty cool, but obviously GitHub is not jobs, and I think it went by the wayside last time I checked. Early, early on, that was one of our first sponsorships here on this podcast.
And it's kind of cathartic, Jared, to say that my dream thing for this, I suppose, goal is to promote jobs. I said it before: we will never promote jobs on this podcast.
I'm never saying never again, man. Why are you bringing... why are you telling people again? Why are you bringing it back up? It always bites me.
I mean, I don't mind. That's cool.
I really don't mind. I like being wrong. Honestly, I love being wrong when it's right to be right, I suppose. Because I think if we do this right, it could be a cool, fun thing for the community, and it could be a good revenue driver for us. And it'd be kind of cool to put that infra behind it, like the front end, the back end, all the things that we've been building. It would just be kind of, you know, easy to extend what we're already doing.
Well, yeah. So that's my one thing which I would like to add to this, because it does connect.
I was exchanging some DMs
with someone through our community.
Her name is Mary Hightower.
And I'll just read the one sentence
which is very relevant to this.
This was end of November.
So a few months back.
One thing, and I'm quoting Mary,
one thing I've seen in the changelog crowd is the perspective
of how to build software and teams well. I think that's something important because it is in
changelog's DNA to care about those things. And it's not me saying this, it's like someone that
has been in our community, and I'm sure that others must feel similar to this because there is a perspective on
what does it mean to be a good team?
What does it mean to have a successful community,
a successful relationship?
And coming back to Changelog & Friends.
Look at us, what we're doing now,
how open we are,
how we're trying to support those
that maybe are less fortunate than us
when it comes to their work
environment.
Yeah, well said. I think that's on point and entirely relevant, and a reason why something like this, which to me has always seemed like potentially a bolt-on, you know, could actually be very integral and valuable, you know?
Yeah.
If we execute it right, which is always for us, you know,
strengths and weaknesses.
Our strength is our weakness.
We know that perfection is the enemy of progress
and progress over perfection.
And that's why we Kaizen.
And that's why we do MVPs and all these kinds of things.
Because Adam and I both desire the perfection
and sometimes we just don't build the thing
because we're like, well, we can't figure out how to do it perfect
or even well and so we're not going to do it right now.
Hopefully we can eschew that and get a jobs thing going
in order to provide that value
and to sustain shows like GoTime
and really our entire network when advertising wanes.
So to me, it's feeling less and less like a bolt-on moneymaker and more and more like
true value for all of us.
So I'm into it, whereas in the past, I've kind of poo-pooed it.
So I've had a change of tone.
Same.
Me too.
That's why it was so cathartic that we would actually go back to it, or just that it would become an idea again, I suppose. Like, how in the world does that even make sense? And somehow it does make sense now that that's even something we're promoting or suggesting. And it has always seemed like a bolt-on that didn't really provide the value. It always seemed like, well, this would only be so that we can find one more way to sustain, right? Whereas now I feel like if we can embed them into the shows in the ways we think we can, and swap them out when necessary, dynamically, I think that's a big win for us and a big win for the folks trying to find the right folks, you know? And I think if we could do a good job on vetting who comes into that pool, and just some way to provide, like you were saying, Gerhard, with, you know, quoting Mary Hightower (hi, Mary, by the way), I think that's great. Like, building great software, building great teams, I think, has always been the fun part of the conversations. We just had that conversation with Dan Moore on letters to developers. A cool conversation. The first shot out of the gate this year.
And I think it's going to be a big hit for the year.
And it's show number one for 2024.
So Adam shared two.
Therefore, I will share zero.
Three.
Three.
No.
Yeah, one, two, three.
No, I'm thinking he stole mine.
So changelog++ 2.0 was exactly what I was going to say.
Sorry, Jared.
No, it's all right.
You're doubling down.
It's definitely going to happen.
We're on the same page.
That's a good thing.
We have, I mean, we should have similar goals, shouldn't we?
Yeah, bringing changelog++ on site in our control
and making it way better.
We have lots of ideas
and we've kind of been inching towards that.
We haven't gone all in on it because there's always been one more thing that pops up is more
important. For instance, even our big conversation today about Postgres is a thing that is currently
just more important than that. Although you're doing the bulk of the lifting on that, there's
other things that are popping up, I'm sure we'll be talking about soon, which are more time sensitive
than that. And so it kind of always gets pushed off and I just want to stop pushing it off and actually get it done
because a, we've had more subscribers recently. So thank you to all of you who joined. Yes. Big
time. It's been very uplifting to see so many people joining on, even in its current state,
which we know is not as good as it could be. And 90% of the people are there just to support us, and we love that.
But we also want to quid pro quo that and provide value back and make it awesome.
So it's been a thing that I think I even wanted to do last year and just didn't do it.
So it's like enough is enough.
Let's do this, and let's do it well.
And not perfect, because then it'll never ship,
but ship something and do the bulk of the work and then refine from there.
So that's my goal.
That's a good one.
That's a good one.
What's up, friends? I'm here with one of our good friends, Feross Aboukhadijeh. Feross is the founder and
CEO of Socket. You can find them at socket.dev. Secure your supply chain, ship with confidence.
But Feross, I have a question for you. What's the problem? What security concerns do developers face
when consuming open source dependencies? What does Socket do to solve these problems?
So the problem that Socket solves is when a developer is choosing a package, there's so much potential information they could look at, right?
I mean, at the end of the day, they're trying to get a job done, right?
There's a feature they want to implement.
They want to solve a problem.
So they go and find a package that looks like it might be a promising solution.
Maybe they check to see that it has an open source license, that it has good docs.
Maybe they check the number of downloads or GitHub stars.
But most developers don't really go beyond that. And if you think about what it means to
use a good package, to find it, to use a good open source dependency, we care about a lot of
other things too, right? We care about who is the maintainer? Is this thing well-maintained?
From a security perspective, we care about, does this thing have known vulnerabilities?
Does it do weird things? Maybe it takes your environment variables and it sends them off to the network,
you know, meaning it's going to take your API keys, your tokens, like that would be bad.
The unfortunate thing is that today, most developers who are choosing packages and
going about their day, they're not looking for that type of stuff. It's not really reasonable
to expect a developer to go and open up every single one of their dependencies and read every line of
code, not to mention that the average NPM package has 79 additional dependencies that it brings in.
So you're talking about just, you know, thousands and thousands of lines of code. And so we do that
work for the developer. So we go out and we fully analyze every piece of their dependencies, you
know, every one of those lines of code. And we look for strange things, we look for those risks
that they're not going to have time to look for. So we'll find, you know,
we detect all kinds of attacks and kinds of malware and vulnerabilities in those dependencies.
And we bring them to the developer and help them when they're at that moment of choosing a package.
Okay, that's good. So what's the install process? What's the getting started?
Socket's super easy to get started with. So we're, you know, our whole team is made up of developers. And so it's super developer friendly. We got tired of using
security tools that send a ton of alerts and were hard to configure and just kind of noisy. And so
we built Socket to fix all those problems. So we have all the typical integrations you'd expect,
a CLI, a GitHub app, an API, all that good stuff. But most of our users use Socket through the
GitHub app,
and it's a really fast install. A couple clicks, you get it going, and it monitors all your pull requests. And you can get an accurate and kind of in-depth analysis of all your dependencies.
Really high signal to noise. It doesn't just cover vulnerabilities. It's actually about the full
picture of dependency risk and quality. So we help you make better decisions about dependencies
that you're using directly in the pull request workflow, directly where you're spending your
time as a developer. Whether you're managing a small project or a large application with
thousands of dependencies, Socket has you covered and it's pretty simple to use. It's really not a
complicated tool. Very cool. The next step is to go to socket.dev, install the GitHub app, or book a demo.
Either works for us.
Again, socket.dev.
That's S-O-C-K-E-T dot dev. Okay, well, my next goal is to encourage you
to open up GitHub Discussion 485.
It's in our changelog.com repository
because the bulk of this conversation
is going to happen around that.
And if you're listening,
you can go there.
By then, it should have been done,
but you can see all the topics,
all the links, everything's there.
Links and show notes too, by the way.
I think the biggest thing for us,
and we mentioned this a couple of times,
it is pull request 492,
where we are migrating Postgres to neon.tech.
So that's the big thing. And it's the biggest
change I think that we have since Kaizen 12
is to set up
neon.tech as a managed
Postgres alternative to our current
Postgres which is running on
fly.io.
Let's open up this pull request
and let's take a look at it.
I'd just like to load some of the context
Let's do it.
So when I started this, the first thing which I did, and this is almost like the Boy Scout rule: update dependencies. And I know that we should have bots that do this automatically, but sometimes, especially when it comes to the major versions, you would want to do that yourself. Like, for example, Erlang. That was an okay one, 25 to 26, with that upgrade. Postgres, this was a bigger one, from 15 to 16. Nothing changed, so it was still good. But those types of upgrades you would want to, you know, supervise. You wouldn't just want a bot to do it for you, and then you figure out, oh, there's all these things which I missed. And to be honest, ends of years are really good for these big upgrades. So that's 490, which is a precursor to this pull request. We can come back to the Elixir upgrade, because, by the way, that's the one thing which didn't work very smoothly, but we can come back to that later and just focus on this one. So we have a new app instance, as we discussed, the 2023-12-17, which is running on Fly, and that is configured to use Postgres.
So Adam is the one that set up Postgres for us.
How was that, Adam?
How was the whole initial setup of Postgres?
You mean Neon?
Yeah, on Neon.
It was actually pretty easy.
Barely an inconvenience, for my Screen Rant fans out there.
It was pretty easy. I mean, I think I just went in there. The only confusing thing was there wasn't the idea of orgs. You know, you create a project, and inside that project you invite people. So that was kind of, I guess, the only oddity. I mean, I did nothing besides... you're giving way too much credit, honestly. I just talked to the folks behind it. The folks behind Neon are amazing.
You started it. Without that, this would not have been possible.
Well, if you want me to give you the real getting started story,
I began at All Things Open, really.
And so their CTO was there and their team was there.
Ralph and others were there.
And I was like, Jared, we should just go over there and talk to them
because we want to have a managed Postgres.
Like, Gerhard's been pushing for this.
And my first, you know, you might want things, Gerhard,
but then I go and ask Jared, do you also want this?
Do you bless this?
Because Jared really is our, you know, our CTO, really.
And so I would never make a tech choice without conferring with both of you guys
that that's what we should do.
And so I asked Jared, and he's like, yeah, that works.
Let's do that. And so I went over and we ended up getting them in a pod, and we talked further, and then we talked further afterwards. And I just laid it out like, hey, we love Fly, big love to Fly, but we want something that's future focused. And I think in my discussions with Kurt, Kurt Mackey, who is the co-founder
and CEO of fly.io, he was always like, you know, we have different ambitions
and databases are part of it,
but we know we're not providing the state-of-the-art thing.
It's good. It's good for everybody.
But this isn't something that we're sort of leaning further into.
Now, that may have changed since that conversation a year and a half ago.
But I was always like, I know after our conversation,
Jared and I's conversation with Nikita Shamgunov,
the CEO of Neon,
I think about a year back,
right,
Jared,
about a year and some back now,
that he really laid out a lot of good promise
and he had experience in databases before.
Like he had been previously successful around databases
with MemSQL, I believe. Or... I forget exactly what his previous startup was that was acquired. But he had had some success, and impressed us in that podcast about where they're taking Postgres, in particular serverless managed Postgres, and then the idea of maybe getting to geo, which they're not quite there yet. And then I think what really impressed me recently, talking to them, was around the way that they plan to bolt in, bringing in this dev mode to Neon and Postgres,
really where you're,
and y'all can probably speak to this more than I can,
but the way you interact with a database
is one in production, but also in dev.
And so to innovate and to experiment
with the database at the dev level
always requires some sort of like cloning
of the production database and this weird flow.
And they've made it a way, because it's serverless and because it's sort of ephemeral,
to allow you to just branch off the database and this isn't a new concept necessarily for
databases. I think... who is it out there? Gosh, their other name... it's the SQL one, the MySQL one.
PlanetScale.
PlanetScale, yes, thank you. I think PlanetScale really began a lot of this branching idea with Vitess and whatnot.
So it's not a new concept, but it's a new concept to Postgres.
They have upstream commits.
They have a lot of promise.
And so we're like really enjoying the process of where Neon can go.
So that's sort of the precursor backstory.
Well, then all things open, talk to them,
talk to them about partnerships and stuff like that.
And they're like, let's do it.
And so they gave me the keys.
I went in, I opened up the project and I invited Gerhard.
That's what I did to kick off Neon for the long story short.
But it really began a year or so ago, really the idea of Neon being something that we can use.
And just knowing we like to play with cool things, managed serverless Postgres is something we should be playing with.
And now we are.
Yeah.
So I'm very curious to see what Jared thinks about connecting
to a branch for his local development.
Would you do that?
Do you see yourself doing that?
Absolutely.
Is that weird for you?
You expect me to say no?
No.
I mean, do you see yourself?
Generally, I'm a naysayer.
You are.
Also, it's not local, so it's going to be slower, and you need an internet connection and all of that.
I agree, it will be. Not slower like Docker for Mac slower, which... for me, I was a long-time naysayer. Like, no, I'm not going to run my development environment through Docker, I already have it set up. I mean, that's a years-long thing with, like, how should people contribute? Let's set up Docker containers. Jared won't use it, so it's not going to be good.
You know, that whole deal?
That's right.
I'm way less concerned about some slower query times in development
because I have a recurring pain with development
where I do like to have fresh data as I'm coding.
It's just more realistic. It's more enjoyable. It's just I
prefer that. And so I am often doing a fly proxy, a PG dump to my local and a PG restore or whatever
the actual command is, in order to get fresh data. And I'll do that once a week. Every time I'm starting up a new coding session, sometimes I'll be like, oh, this is fine. It's last week's data. No big deal. Other times,
especially like there's a bug. Well, the bug often has to do with data that's in production.
That's not in development, of course. And so I want freshness. And so I'm just constantly doing that. And it's just part of my workflow. You know, I go get a cup of coffee. It's not a very large database, but it's large enough that you're going to wait for it. And that's a pain that I live with.
But I do want that snapshot to be relatively recent.
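To make that workflow concrete, here is a rough sketch of the refresh Jared describes. The Fly app name, credentials, and database names are placeholders rather than the real ones, and flags may need adjusting for your flyctl and Postgres versions.

```bash
# 1. Proxy the Fly Postgres instance to a local port (app name is a placeholder)
fly proxy 15432:5432 --app changelog-db &

# 2. Dump production over the proxy; custom format keeps the restore flexible
pg_dump "postgres://postgres:$PGPASSWORD@localhost:15432/changelog" \
  --format=custom --no-owner --file=changelog.dump

# 3. Restore the snapshot into the local dev database
pg_restore --clean --if-exists --no-owner \
  --dbname=changelog_dev changelog.dump
```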
Being able to connect to a dev mode,
which is just a branch of production
that I'm assuming I can either re-sync
or just do a new snapshot whenever I'd like to.
And it just is somewhere else.
And I just changed my connection string.
I don't have to version Postgres locally.
It's one less dependency on my local box.
I'm here for it.
I haven't tried that yet.
I haven't used it.
Obviously, we are in flight with even doing this.
So maybe I'll end up hating it and be like,
nah, I'll just run my local Postgres
and do a snapshot, and everything will be fine.
But I'm definitely not naysaying it yet.
I'm excited to try it, and I think it's going to be better. But I'm definitely not naysaying it yet. Like I'm excited to try it.
And I think it's going to be better
than what I currently do.
I think that's a really cool idea
because it helps me figure out
what else is important part of this pull request,
the 492.
And my most important take on this was like,
okay, so if we do this,
what will this unlock?
What will this enable us to do differently
or better than we're doing today?
And what you're saying to me sounds like
that's like a great goal to work towards
because it will simplify things a lot.
You don't need Postgres locally.
One other thing,
it's almost like a complication to this.
What about contributors?
What about people that don't have access
to our production data?
And we will not be able to give them access to production data,
even if it's a branch.
They're currently in the exact same box.
They're already there.
They live there right now.
Yeah.
And that's one of my pains is people are like,
I'd love to contribute.
Cool, go clone the repo, check the contributing guide.
And they're like, awesome.
Can I have some data that's real?
Because they don't even have podcasts when they're, you know.
And we had seed data in the past.
And it's just like, we are not an open source project like most open source projects, where there are dozens, if not hundreds, of strangers working together. We have a fly-by contributor once in a while, and we want to enable them, but oftentimes that person who comes maybe once every few months is not worth maintaining seed information for. I... I had, on my to-do list, right at the bottom of it, find a way of just taking production and sanitizing it and reducing it down to what they could use, and provide that for people. And I don't have it done. I don't have answers for them. I'm like, yeah, you can just do it without data. It's no big deal, hopefully.
And one guy was working on the player,
which he couldn't play an MP3,
so he couldn't actually do,
I can't remember what he was trying to do,
and I'm like, well, it's going to take me hours
to get you going.
So that's where we already are,
so we aren't losing anything.
We're not solving that problem, though,
it sounds like.
Maybe we are.
Maybe there's a way you can provide them
a branch with
a sanitized branch, you know.
Yeah, I think this is where Neon will be great. A conversation with Neon to see, okay, so when we do create a branch, can we add some extra stuff that runs as part of that branch, so it puts it in a state which is okay to share? And then we can automate that in some way, so that
whenever someone wants to contribute, they basically
connect to the latest one and they don't have to do anything
because the connection string doesn't change
and what we make available.
So that would be an interesting one.
And I kind of got that far. I'd have to go back and find it, but I do have at least the start of what is the series of SQL commands I would run to take production and sanitize it and reduce it to useful but not real. And I started writing some deletes and stuff. I probably have that somewhere, but I never actually got to a place where I could... it was all ad hoc. Like, okay, I'm gonna go get a snapshot, I'm gonna delete stuff, I'm gonna give you the SQL file via Dropbox, or something lame, right? So this could be cool in that way, maybe.
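Purely as an illustration (the table and column names below are hypothetical, not the actual changelog.com schema), the kind of sanitization pass Jared describes might look something like this:

```sql
-- Hypothetical sanitization pass; table and column names are made up for illustration
BEGIN;

-- Drop anything private that contributors don't need
DELETE FROM oauth_tokens;
DELETE FROM email_subscriptions;

-- Scrub personal data on the accounts that remain
UPDATE users
SET email = 'user' || id || '@example.com',
    password_hash = NULL;

-- Shrink the dataset down to something useful but small
DELETE FROM episodes WHERE published_at < now() - interval '2 years';

COMMIT;
```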
Yeah. So that sounds almost like a step four. We're still at step zero, where we're still migrating towards it, in that the pull request is open, and one of the first observations was that the latency increased. And if you think about it, it makes sense, because with Fly, Postgres was local, so we get sub-millisecond latency. In Neon's case, the Postgres is remote.
It's running in AWS, still the same region,
but it adds a couple of milliseconds.
And when you have lots of queries,
which we do on some pages, they add up.
So for example, when we started this,
the homepage latency just shot up by 3x.
And Jared, you came here and did some Elixir-fu and reduced the number of select statements. We had 70-plus; now we have 15. So while it was 3x before, now it's maybe 10%, which is 0.1x. So that's a huge, huge improvement. So how do we feel about knowing that the latency of all our database queries will increase? Are we okay with that?
Yes, because we are leveraging cached information most of the time.
And also that I can now be more diligent as well.
So a lot of the reason is I never had a good enough reason
to go optimize that particular page.
And then I did, and I spent an hour or two,
and now it went from 70 to 15 queries.
And I could do that on other things as well.
I know you posted /feed is also super slow.
477 selects, I think, which is too many for anything.
But that page is never live.
It's always pre-computed.
And so, I mean, when you hit it on fly directly, of course
it's going to hit, but when you hit it through
changelog.com, it's going to
a pre-computed XML file that's on
R2. So we've already
kind of solved for that in other ways.
And we can
use Honeycomb and know when stuff gets
slow, and then we go optimize it just like
developers do. So I'm
not really concerned with that. I think it's kind of
it sucks having network latency when
you don't need it. Like we could avoid it with
this other thing. But
I think the wins
outweigh the drawbacks.
What do you think,
Gerhard? Is there a way to reduce it
natively? Like you said, they're in the same region.
Is there a way that
from an infrastructure standpoint, we can put them closer, even though they're different networks? Like, how can we get them, in quotes, closer, to not have that much latency?
So there's nothing that we can do, that this team can do, to improve that, because we are already in the Fly region which is closest to the Neon region. So we can't basically pick another region, either on Fly or Neon. Maybe there are some improvements that Neon or Fly can do, but it's the speed of light that we're working against here. So let's say we make it a millisecond quicker. It will not have the same impact as, for example, if we optimize some of the queries, so we don't
have to run 400 plus.
If we could reduce those, that would help.
I think those are the biggest wins or the bigger wins that we should be looking at rather than physically getting these two things closer.
But are you saying that our fly machines, so we had a fly instance, multiple fly instances
that are running app servers, and we had one fly instance that was running Postgres.
And are you saying that those did not have network latency between them? Are you saying now there's more network latency?
They have a much lower network latency. So we have...
They're still traversing the network stack, though, right? Like, they're not co-located on the same machine.
Correct. But it doesn't leave the Fly network, so it's all happening within the Fly network. And we have two Postgres instances,
so a primary and the replica.
This is on fly.
And we have the same setup on neon.
We have the primary, it's called a read-write instance,
and we have a read-only, which is a replica.
And the next point is like, maybe we should look into that.
Maybe we should configure to use read replicas.
But before we talk about that,
again, same setup in Neon as we have in Fly. The difference is that the physical distance is greater, and there are more network hops. And when I say network hops, some of them are invisible, because you don't see all the network hops that happen. But anyways, we're just basically adding one, maybe one and a half milliseconds of latency. And again, these aren't always the same, they're variable, but basically we're adding more latency to every single SQL statement.
Per query.
Exactly. And they just add up. The more you have, you're basically paying the network latency penalty for each of those queries, rather than having one query that does more and then comes back with all the results. This goes back and forth, back and forth.
Right. And is there any sort of connection pooling
or other things we could do
in order to reduce that per-query cost?
We have all that set up.
We do have that.
It's literally, you run one,
you have to wait for the response,
you run another one,
and some of them do run in parallel,
but eventually you've run all these things
and all the responses have to come back
for you to be able to rebuild the page.
While you use fewer request-responses, it will be quicker. It's just a law of math and physics. Doing less costs less than doing more.
Yeah, pretty much. But again, it's the speed of light that we're dealing with here, you know, physical distances, and we're doing many round trips back and forth.
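To put rough, illustrative numbers on that: at the one to one-and-a-half milliseconds of extra round trip mentioned above, a page that runs 70 queries pays somewhere around 70 to 105 ms of added latency, while the same page at 15 queries pays closer to 15 to 22 ms. That is why cutting the query count matters far more than shaving a fraction of a millisecond off the network path.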
Well, let's work on that, Gerhard.
What can we do about that?
Let's kaizen, speed of light.
Can we slowly make that faster?
You know, iteratively?
I don't think in my lifetime, but I don't want to say never.
What about by 60?
You know, this could be your next 20-year project.
Yeah, maybe, maybe. A shorter project, I think, would be to look at the read replicas. I think they would help. So having some read replicas, and having some... I'm not sure they're in the same region, but distribute them a little bit. Because Fly, I mean, we have this option of distributing our app. We haven't used it.
We're still like in a single region.
And we haven't used it because we haven't configured read replicas yet.
If we had a read replica in every single location,
this would be a lot more interesting.
So what do you think about read replicas in the context of our Phoenix app, Jared?
I think it's interesting.
I wouldn't put it high priority, just because of the obvious reason: most requests are never hitting our app.
You say that... you say that, but remember the issue with Fly... sorry, not Fly. Fastly. Oh my goodness me.
I was leaving that for later, because that's not a fun one, but we'll dig into it.
That's bad again?
Well, it's been bad since October,
and we can't seem to get anywhere with the Fastly support, that one.
So our hit ratio, it's really tanked.
It's way down.
Exactly.
And we've been trying to figure this out with Fastly,
what is going on, and we can't get the clear answer.
No changes on our end that we can identify.
No changes on our end, no.
Can you zoom out a bit and give a one minute
version of that problem and exactly what's happening so there's context okay so let's
talk about that um no he's excited you can tell to talk about this yeah i'm prepared for this
i really like man this took this burned a lot of my budget that I have for changelog. That's why this hurt.
This burned almost like a whole month of work budget.
This whole Fastly CDN thing.
It was really that bad.
And there's an issue.
It's issue 486.
It's a long one.
If you open it up to see just how much we talked, and James A. Rosen was there.
So thank you, James, for helping out.
It's honestly like it'll take you at least 30 minutes to read it. So can you imagine how long it took? And this is only the public stuff. There's also something even longer, which is a whole Fastly support thread that I wouldn't even want to open. But anyways, October 8th, this is when it started. Our CDN cache misses increased by 7x. So we had about 750,000 cache misses in a two-week period. And after October 8th, we had 5 million cache misses.
That's a crazy number.
Now, this has improved since.
So we didn't do anything. As of December 28th,
we are now at 900,000.
Now we see requests go up and down,
but we still have more than we should do.
Most of these requests are to the homepage.
80% of them are HTTP/1,
19% of them (one, nine) are HTTP/2,
and only 1% are HTTP/3.
75% of all text/html requests
are cache misses.
So this is like highly cacheable content
that there shouldn't be any misses
and we get no explanation
for why this just started happening.
I got so frustrated
that I want to build my CDN.
That's not a 20-year project.
So three years ago,
Kurt posted about this. He wrote "The 5-hour CDN" on the Fly.io blog.
I already caught on about this, I think, on a Kaizen, briefly.
Yeah. And actually, it wouldn't be that difficult, honestly. That would be easier to do than deal with all the Fastly issues. That's where I'm at now. And this has been years. This is not the first time, by the way. This is a long, long, long, long story.
I'm using a similar approach.
I have something like this configured in my Kubernetes clusters.
I have quite a few.
NGINX caches everything.
I have origins configured, and it works.
And you can serve stale content.
It's not rocket science, but at least we would have full control over it.
So what I'm thinking is, let's deploy some NGINX instances all over the world using Fly.
Let's serve all requests from those.
They'll have some local disks.
We cache all requests there.
Problem solved.
We're done.
That's it.
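To give a sense of what that would involve on the Fly side, here is a hedged sketch; the app name, regions, and volume size are made up, and exact flags can vary between flyctl versions (check fly help):

```bash
# Create the CDN app from an NGINX image/config (app name is a placeholder)
fly launch --name changelog-cdn --no-deploy

# Give a couple of regions a local disk for the ephemeral cache
fly volumes create cache --region iad --size 10 --app changelog-cdn
fly volumes create cache --region ams --size 10 --app changelog-cdn

# Deploy, then add machines in more regions as needed
# (e.g. via `fly scale count` or `fly machine clone`; exact invocation varies by version)
fly deploy --app changelog-cdn

# A dedicated anycast IPv4, so cdn.changelog.com resolves to the same IP everywhere
fly ips allocate-v4 --app changelog-cdn
```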
Worth a try.
And I'm thinking cdn.gerhard.io.
I even have a name for it.
Not a logo yet, but I can ask ChatGPT to create me one. What do you think about that?
Well, I don't know what it takes to build a CDN. I think, in the conversation, one piece of it is streaming logs. That is what we have built around, and the question was whether or not Cloudflare had similar support. Because the obvious answer here would be, okay, we're having challenges with Fastly, and they're aware of this stuff.
We've brought it to their attention that we have had challenges.
Multiple times.
And it's strange to me because we obviously have such, it's not like we're here trying to badmouth anybody.
But we do have a mouthpiece of the developer community.
And we're using the technology to showcase the technology.
So it would make sense, in my opinion, if you had that kind of relationship with such a content, I guess, media company is probably the better way to say it, that you would want to put some effort into ensuring that they get the right help to ensure
that these problems aren't there. And maybe it's just a Fastly thing, maybe it's an us thing. I don't think it is us, because we seem to have exhausted every single possible thing we could do around it. And so the obvious next choice would be, okay, maybe we're just... we're not holding it wrong, it's just we can't hold it right, and we can't figure it out because there's no support to hold it right. And so we go and talk to Cloudflare, or we decide to build our own thing. And I think it really comes around to: what does it really take to build a CDN for the kind of company we have and the kind of content we have that we need to cache globally? Does it make sense to build something in-house? Does it make sense to move to the next key player in the industry, which is Cloudflare? They've shown desire to work with us. We're talking with them. It's not come to full fruition, but there's a lot of desire. But I don't like to bet on desire necessarily, so I don't want to say there's something happening there, but it's definitely on the table to talk about, and they're talking with us. We just haven't landed the point of the deal. And I think for us, we look at infrastructure partners like this, like Honeycomb, like Fastly has been, like Linode has been in the past, like Fly is, like Typesense is. We want integrated, embedded partners, not because that's what we necessarily want, but because we see that's where they get the best benefit. We get the best benefit, because we get to have that deep relationship and that conversation back and forth to improve. And I'm sure if Neon succeeds with us and we, you know,
fully migrate our Postgres there and we're super happy with all the things
we've been sort of talking about,
that there's going to be a deep embedded relationship.
I've kind of come up with this idea over the holidays.
This embedded sponsorship is different than just sort of flying by
and throwing some money at content
and hoping that you can talk to their audience.
It's far more of a partnership and embedded. And so that's why I go that route.
And I think Cloudflare has an opportunity to work with us if that works out. We've given Fastly
years to work that out and they haven't done it. And that's just a shame. I really would love to
have them figure that out. I've begged them in email, in conversations, and I don't mind saying that because I've worked it personally to the nth degree that I'm kind of sad and upset that that's where we're at. They are amazing, maybe not amazing for us, but we've just not gotten the kind of support we need to get past these challenges over and over and over. So I guess my question to you is, does it make sense
for us to build our own CDN? What does it really take? Should a small operation like ours try to
do that? Or does it make sense to go to the Goliaths and the behemoths like Cloudflare and
Fastly like we have done? Should we try something different? What should we do?
One thing which I want to mention here, and this is really important, is that
if we didn't have Fly, and
if we didn't have the partnership that we have with Fly,
I wouldn't be suggesting this.
So that's the first thing.
The second thing is, as crazy
as this idea was three years ago, when
Kurt laid it out,
having sat on it for years
and understanding
what we need, we're not that
complicated as like from a technological perspective. Like our app isn't that complicated
and it's not changing that much. We're not a big team. And what that means is that our needs are
fairly simple and straightforward, which means that some of the big companies, they can't really
meet them because they're too big. There's
too much there. There's like a lot of complications that we, 99% of the stuff we don't even care
about. We don't care whether it's Varnish, we don't care whether it's NGINX, we just care
about the experience and the experience is too complicated. So I'm sure there's a way that,
you know, we can make this work, but is it worth our time? And the answer is no. That's what I keep
coming back to. What we need is something really simple and we don't have that really simple thing so even like our config what
we need in terms like streaming logs it's such a simple feature that we that we require and yes
sure we can go and start the conversation with someone else but just back to jared's point it
would take him a few hours to explain to someone or to do something for someone what he
could do himself in like five or ten minutes there's like an equivalent there to what we would
need and it's really not that complicated and we're leveraging something someone like fly
which have have come light years in the last three years like they're like light years apart where
they were as an organization,
as like the services they offer.
Can you gush a bit about that,
that Lightyear change just real quick?
I mean, they are a partner.
They're not sponsoring this message I'm asking you to say,
but can you gush a little tiny bit
about their improvements?
Because that is the home of the change
of changelog.com.
I almost said the changelog.com, Jared,
accidentally.
You know, Fly is the home of changelog.com.
Let me change this question.
Since we went from Kubernetes to Fly.io,
how many issues did we have because of Fly.io?
Was Postgres a problem for us on Fly?
Not really.
No.
I mean, we had some issues, like minor issues,
but nothing big, nothing of the scale of Fastly.
How many times did we reach out to support and they couldn't help us?
I can't even count on a hand.
I can't exactly fly.
There you go.
From a technological perspective, the machines, the way they work, the deploys, I mean, they
just work for us.
They just kind of like meet our needs exactly where they are.
And things are fairly fast.
It's very easy to spin up new apps.
I know that not everyone has this amazing experience with Fly,
but we've served billions of requests in the last two years.
We're still good.
We didn't have anything big or anything bad to say about them.
I mean, I can talk, for example, about why our Dagger on Fly has been failing, and there's some problems with the WireGuard. I mean, it's not all great, and we can talk about that, but that's a very specific use of Fly in a very specific context.
And it's not their core competency, necessarily. Like, their core competency is what they provide to us. It's the edges where they're sort of moving and innovating that still need work, which is par for the course.
Yeah. So, I mean, this basically has to do with Fly... there's intermittent Fly.io WireGuard gateway issues when you're connecting, for example, from CI, from GitHub in this case. Sometimes that whole setup... and it's
very difficult to say whether it's fly or whether it's GitHub or Microsoft Azure where this runs. So it's difficult to say what exactly
is happening. We just know that specific combination isn't working well. But because
we have two of everything, it's okay, because we've been falling back to the GitHub runners.
Builds have been a bit slower, but they worked. So, you know, deploys were taking 10 minutes rather than... And I get the GitHub action
run failed emails
when my deploy goes out successfully.
And I'm like...
So I just want to like balance this out
in that we have had some issues with Fly,
but not in the path
that we really care about.
Like production hasn't been down
because of them.
And again, knock on wood,
it doesn't happen.
But, you know, it's been good.
Now, should we put all our eggs in one basket? You know, two of everything. If we run everything on Fly and Fly goes down, we're down.
Yeah. Let me ask a different question, then. If we did decide to build our own CDN, this is one more thing for a small team like ours to maintain, uptime and all. What will we be taking on in terms of burden? It's one thing that we don't have the need for, you know, let's just say 99%, like you'd said, of a Cloudflare or Fastly feature set. We really only need the good 1%, because our needs are just limited and we don't have exhaustive needs. If we did decide, okay, let's build our own CDN, again, eggs in one basket, we're going to build it on Fly, if we decide to do that... what would it be in terms of build time, burden to maintain, you know, if it's down? How do we... I mean, it seems like we'd have to probably have more of your time. I mean, I don't know. It just seems like we're taking on way more responsibility, because Fastly is in front of everything, and while there's some challenges there and there's some misses of recent, we're relying on them to do their job, and they kind of do their job for the most part. You know, we've had some issues, obviously, but we would be taking all that on ourselves. Does that make sense?
So let's break it down in terms of, like,
the big pieces that we need to get into place.
We have one new application,
which is our CDN application.
All that is NGINX,
exactly as it's described in Kurt's blog.
We have an NGINX config that has all the rules
that currently we are defining in Fastly.
We distribute this app across all the fly regions,
maybe not all of them, but most of them.
So a couple like US West, US Central, US East, South America, a few in Europe. And all of this is literally: run a few commands in Fly and you have all these app instances spun up. They're the same config everywhere. We get one dedicated IP; it's an anycast IP, again a Fly feature. So regardless of where you are, you use the same IP. That would be cdn.changelog.com, and it will hit one of those Fly instances. If an instance is down, the way it works is that the Fly proxy, which you're basically hitting at whichever edge is closest to you, will redirect you to a running instance. And then you have some very small rules which basically tell it what to do.
So let's say that you're serving an MP3.
If you don't have the MP3, it will stream it from wherever it is
and it will cache it locally.
So you have some disks attached to every single Nginx instance
so that you have like a local ephemeral cache
of all the content that's requested in that region.
It's just simple config, you just add a volume, boom, you're done.
That's it.
I mean, there's not much more.
I suppose it's like the config for the NGINX, right?
So that Jared gets the logs.
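To make that shape concrete, here is a minimal sketch of what such an NGINX pull-through cache could look like. It is not the actual changelog.com config; the origin host, cache path, sizes, and TTLs are illustrative assumptions.

```nginx
# Hypothetical edge-cache config for one Fly region (illustrative values only).
# A Fly volume is assumed to be mounted at /data/cache for the local ephemeral cache.
proxy_cache_path /data/cache levels=1:2 keys_zone=cdn:50m
                 max_size=10g inactive=7d use_temp_path=off;

server {
    listen 8080;

    location / {
        # Origin to pull from on a cache miss (placeholder; could be the app or R2).
        proxy_pass https://changelog.com;
        proxy_set_header Host changelog.com;

        proxy_cache cdn;
        proxy_cache_valid 200 206 7d;                   # cache full and range responses
        proxy_cache_use_stale error timeout updating;   # serve stale if the origin hiccups
        proxy_cache_lock on;                            # collapse concurrent misses into one fetch
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The anycast IP and the region placement come from Fly itself; NGINX only needs to know how to fill and serve its local cache.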
That can't be it.
What about logs and stuff like that?
What about the things we need for stats in the application?
Logs, exactly, yeah. So NGINX logs, we'll get them in the format that we need to.
We'll write them to a disk.
I mean, Fly has NATS.
That's how they distribute all the logs.
I know that's not always reliable.
There's small issues, and I know because I've been using this
for another project for the past year.
This is for Dagger, by the way, so I know exactly how NATS works,
how log distribution works in Fly. And the challenge would be to get those logs reliably from the Nginx instances to
S3. I think that's the one thing which is like an unknown, in the sense that I know the limitations
of NATS, which is internal to Fly, but maybe there's something more that we can do there.
We cannot do this without a little bit of Fly's help. What I mean by that is we don't want our logs to get lost, right? Fastly has been very reliable, as far as I know, when it comes to delivering those logs. I know that we can get them in the right format, because NGINX is super configurable. What I don't know is how reliable it will be to get those logs from Fly into S3. One tool that I've used and I love is called Vector (vector.dev). It's an open source tool, very lightweight, written in Rust. It consumes what it calls sources, which can be anything from a log file to standard in, whatever; it has a bunch of sources. Then it does transformations, and it has sinks.
So we could co-locate some of those vector instances
right next to NGINX.
They're super lightweight.
Think megabytes of memory usage,
hardly any CPU usage,
and they could distribute those logs reliably.
They have back-off mechanisms.
They have all sorts of things.
So even that, I would have an idea of how to do.
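As a rough illustration of that setup (not a tested config; the log path, bucket, and region are placeholders), a Vector instance co-located with NGINX could be wired up along these lines:

```toml
# vector.toml sketch: tail the NGINX access log and ship it to S3.

[sources.nginx_access]
type    = "file"
include = ["/var/log/nginx/access.log"]   # hypothetical log location

[sinks.object_storage]
type        = "aws_s3"
inputs      = ["nginx_access"]
bucket      = "cdn-logs-example"          # placeholder bucket name
region      = "us-east-1"                 # placeholder region
compression = "gzip"

  [sinks.object_storage.encoding]
  codec = "text"                          # forward the raw access-log lines as-is
```

The buffering and retry with back-off mentioned above happen in the sink, which is the reliability property being counted on here.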
Time-wise, we're talking days of my time.
Preach. I like it.
So I think, by the next Kaizen, if I set myself to do this, it would be done by the next Kaizen.
What about costs? I mean, we would have to compare apples to apples of Fastly pricing versus Fly pricing.
It looks like it's about $0.02 per gigabyte. Mostly I'm worried about outbound data transfer: 100 gigabytes per month free (that's North America and Europe), and then 2 cents per gigabyte for outbound data transfer. So I think we would do some sort of analysis
of what we are doing currently on Fastly
and what that would cost with our own CDN on Fly.
And that would be interesting to compare.
Yeah.
So if we did, let's go 5 cents per gigabyte, we would still be within our sponsorship, because Fly sponsors our infra. We would not exceed our sponsorship limit. Oh, sorry, no, hang on, I may be wrong. Hang on, hang on, let me do the math... I have an extra times in there. Maybe it would be slightly over.
Slightly over. But then we have a bunch of redundant infra that we can shut off.
Maybe we can increase the sponsorship a little bit
or Fly can increase the sponsorship a little bit.
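For a rough sense of scale, with hypothetical traffic numbers rather than the real ones: 5 TB of monthly egress at $0.02 per gigabyte would be about 5,000 GB x $0.02, roughly $100 a month, and the same 5 TB at $0.05 per gigabyte would be roughly $250. The actual comparison would plug in the egress Fastly currently reports.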
Right. We can always go back to them with
a new cause and idea
to say, I had an
idea, I suppose, and this may not
fly.
But I was thinking like the idea
of really simple syndication.
What if it was a really simple CDN,
like RSCDN,
like a repo we started up
where you could do the same thing
we're going to do
if we decided to do this.
And it became a template,
you know, via open source as it works.
Called RSCDN
and it's meant to run on fly
and you can spin up your own
really simple CDN essentially
and kind of follow our blueprint.
And I think that that's promotion for Fly.
That's obviously promotion for open source,
dogfooding in a way.
Because that's what we're asking for
is like just a really simple CDN.
Don't give us all the extras.
I mean, if we think of it as an experiment
to try out and see how far we can get,
maybe we can invest a little bit of time and see,
will this work?
I mean, we have the blueprint.
We have a couple of things which are out there.
I think we're relying a lot on NGINX, and NGINX caching, I know, is partly an NGINX Plus feature, especially managing the cache and having visibility into the cache. So maybe there are other tools that are more CDN-focused and open source. Traefik, I know, is popular in the cloud native world as an alternative to NGINX. I don't know the reasons; you probably do. But just as an example, maybe NGINX isn't necessarily the solution.
Yeah. I'm thinking something battle-hardened that has been used for this purpose for many, many years, even decades at this point. And there are only really three options: there's Varnish, there's NGINX, or Apache. Apache I would discount, because again, I don't want to go into that. So it's either Varnish or NGINX. Varnish is a beast.
Why don't we just export our Varnish config and import it into our new thing? We've already written the code. I mean, I've learned Varnish. I know VCL now.
I heard. I know VCL.
That might just work.
Really? I get lost in those thousands of lines of
stuff. That's what makes me think, is this really a simple CDN? Because when I look at our Varnish config on Fastly, I think it's actually doing more lifting than we think it's doing. But maybe a lot of that is generated based on us turning on a few features, and they boilerplate out some stuff. But when I start thinking about replacing Fastly with anything, I go back to that Varnish config and I realize, okay, I do have, and would like to deploy, more rules as we take Plus Plus on-site and stuff. It's going to get more. I'm happy to write an NGINX config. I'm already writing VCL, so I'm not against it. I just think, like, you've used both; which one do you prefer at this point?
Well, it's tough, because I've only ever used Varnish through the Fastly admin. And so it's like this weird, you know, thing that you're doing: you kind of write it directly, but then it exports it to the right place, and you've got to set priorities in order to get the code where you want it to be.
And so that's never what I want.
And I've written Nginx configs
the way I want to write them,
in Vim or in Sublime Text.
So I like Nginx better
just because I've never just gone
and downloaded Varnish and ran it.
So it's tough for me to compare,
but they're both fine.
I mean, I used to know Nginx very well.
I haven't run it personally for years.
But for me, Nginx configs are pretty straightforward stuff.
You can still screw it up good.
And I will say that ChatGPT led me astray
a couple times on Varnish stuff.
It's gotten it right,
but it also got it wrong a couple times
where I was like, nope, that's not how you do it.
I had to learn the hard way.
One plus is that ChatGPT and all the GPTs know NGINX configs very, very well. So when you're lost, you can be found.
I don't know at this point. All I'm trying to say is that there's a lot of frustration that has built up over the years.
It doesn't seem to be getting any better. And someone's like, I want to do something about it.
And maybe this is not it. I mean, it's close to the heart of a hacker. A hacker has to hack. The easy button, to me... I mean, I'd love to do that, some sort of hacking. I think I would love to investigate further what it would really take for us. Because, I mean, I love to tinker just like you do, but do we want to hold a CDN forever as our own responsibility? That's not really the business we're trying to be in. I think that we are in the business of partnering with great tech stacks and great infrastructure partners and helping them evolve to fit our needs, more so than us trying to, like, tinker.
I mean, I would totally tinker with this RSCDN kind of idea, but I think at the end of the
day, I want a great partner as a business.
You know, I want to promote a great partner to a great developer audience that makes sense
for them to try out and use on their own.
To me, Cloudflare seems like the winner of what we should try next, unless you investigate further and, in quotes, sell us on the idea that this makes sense for us to build and hold ourselves.
Because if there's legs there, then that's kind of cool.
And maybe that's kind of fun.
It would put us more in the fly basket, which I'm not against because we can certainly circle back with Kurt and the team there and showcase our ideas.
And they love that.
They love the hacker spirit.
So I can't imagine we would get turned away with this idea.
I think my primary concern would be going against the grain in terms of infrastructure partners
and then going against the grain of building out a service that we may not actually want to manage ourselves.
But I like the idea of the tinkerer.
It would almost be fun to do just for the fun of it, really.
There's a way to limp into this as well that we could deploy, which is that we could leave cdn.changelog.com completely alone. We have two domains on Fastly: cdn.changelog.com, and then changelog.com, which is fronting our app servers. Those are two different things inside of Fastly, and obviously one has the bulk of the traffic and the other one has way less traffic.
The feeds are going to be big, but even so, those logs we don't care about as much, right? The MP3 download logs are the ones that we want.
That's the bulk of the traffic.
We could leave that alone for now
and tinker with changelog.com,
which is really just fronting our app servers anyways
and has a bunch of logic,
like where the feed rewrites are and go to R2.
There's lots that you could get done there,
but it's probably like 20% of the work
that it would be if you took them both on at the same time.
So you could build kind of a poor man's version of this
as a tinker, which maybe takes one day
for Gerhard versus three or something.
And we could roll it out and leave CDN alone,
and then if it doesn't work, turn it off
and go back to what we were doing.
So I think that's a way we could do it with way less risk
and probably more fun.
What about Cloudflare, Jared?
Have you looked at the logs?
Just enough to know that I think that we need
the enterprise plan before I can even play with the features,
which is kind of weird to me.
And they don't tease them where I would expect.
In the Cloudflare UI, you expect it to be like,
here's a feature you can't use, hit a button here.
But this feature just doesn't exist until you get to the docs.
And they're like, oh, log push, which seems to be exactly what we need.
It just writes your logs out in real time to R2. That's the feature we need for our analytics.
And then I haven't looked at it for rewrite rules
and all the other stuff we're doing fancy.
How could I recreate the Varnish
functionality over in
Cloudflare? I haven't got that far yet because I figured
why do it if we're not sure yet.
I'm pretty sure we can get everything done
there that we got done in Fastly.
I just don't know exactly how, but the log push is an enterprise feature,
which we're just on a standard plan right now.
And so I can't even...
I'm sure we can get that blessed.
Like, hey, just turn that on for us.
Yeah, I just can't even look at it.
I haven't even looked at it yet because you just can't.
And that's been the main hangup, really.
Because, I mean, to zoom way, way back,
we wanted to actually run Cloudflare and Fastly side by side.
And I think, Jared, I can't recall, remind me why we did or didn't do that,
but we had the idea of doing it.
And it came around that we were always unsure of how to do essentially what LogPush does,
which is move those logs streaming to another service
so that we can consume them and use them for the stats and whatnot.
Or any blessed way that we could get the data that we need from Cloudflare.
The first time we looked at it, which was probably five years ago now, they just didn't
even have it.
Like they had your dashboard and they'll show you what you've done.
And that was it.
Like you can't say, yeah, but how many requests to this endpoint did we serve?
Like they just didn't have that kind of stuff back then.
They seem to have that kind of stuff now.
There's other stuff, called Web Analytics, which is in beta, which has even more granular data. So I think they've been adding that over time, and then the Log Push service seems to be exactly what we would be after. Maybe there's an even easier way that they have, like, this is the Cloudflare way; I haven't asked, I can just ask them. But the question is like, hey, if I wanted to count downloads to an MP3 endpoint, how would I get that done? I'm pretty sure most Cloudflare engineers would be like, oh, here's how you do it. I just haven't asked.
And maybe the answer is,
you do it with LogPush.
Okay, well, we don't have that.
So that's where that is.
But I would be down tinkering
with this personalized Fly CDN,
even if it's just for changelog.com,
which just fronts our app servers,
we don't really care about the data.
We don't need to stream the logs.
We just need the rewrites to work
so it gets the feeds from the right place on R2
and the basics there.
And if that works great
and nothing works out with Cloudflare or Fastly
and the costs make sense,
then you just do the other part, which is going to be harder. But once you've done the easy part, the hard part becomes less hard.
I think it's worth trying a couple of things. If Cloudflare will work from a certain perspective, we should definitely try it out and see how far we can get. I think this Fly thing has some merit to it, at least trying it out and seeing, again, how far we can get. Maybe you will come across things that are blockers, like real blockers. Or Kurt, after he hears this, says, hey, you guys are crazy, don't do it. That was a joke. Yeah, actually, I wrote that three years ago and I do not believe it anymore. Please don't do that.
Yeah, that's true. You guys are crazy. Or maybe he's like, you guys are crazy, I love it.
Maybe. Yeah. Let's do it.
What's up, friends? This episode is brought to you by our friends at Neon. Serverless Postgres is exciting, and we're excited.
And I'm here with Nikita Shamgunov, co-founder and CEO of Neon.
So, Nikita, one thing I'm a firm believer in is when you make a product, give them what they want.
And one thing I know is developers want Postgres, they want it managed, and they want it serverless.
So, you're on the front lines.
Tell me what you're hearing from developers.
What do you hear from developers about Postgres managed and being serverless?
So what we hear from developers is the first part resonates.
Absolutely.
They want Postgres.
They want it managed.
The serverless bit is 100% resonating with what people want.
They sometimes are skeptical.
Like, is my workload
going to run well on your serverless offering? Are you going to charge me 10 times as much for serverless as I'm getting for provisioned? Those are the kinds of skepticism that we're seeing. And then people try it, and they see the bill arriving at the end of the month, and it's like, well, this is strictly better. The other thing that is resonating incredibly well is participating in the software development lifecycle.
What that means is you use databases in two modes.
One mode is you're running your app
and the other mode is you're building your app.
And then you go and switch between the two all the time
because you're deploying all the time.
And there is a specific, you know, part when you're just building out the application from zero to one, and then you push the application into production, and then you keep iterating on the application.
What databases on Amazon, such as RDS and Aurora and other hyperscalers,
are pretty good at is running the app. They've been at it for a
while. They learned how to be reliable over time. And they run massive fleets right now, like Aurora
and RDS run massive fleets of databases. So they're pretty good at it. Now, they're not serverless,
at least they're not serverless by default. Aurora has a serverless offering. It doesn't scale to zero. Neon does, but that's really the difference. But they have no say in
the software development lifecycle. So when you think about what a modern deploy to production
looks like, it's typically some sort of tie-in into GitHub, right? You're creating a branch,
and then you're developing your feature, and then you're sending a PR.
And then that goes through a pipeline, and then you run GitHub Actions, or you're running GitLab for CICD.
And eventually, this whole thing drops into a deploy into production.
So, databases are terrible at this today.
And Neon is charging full speed into participating in the software development lifecycle world. What that looks like is: Neon supports branches. So that's the enabling feature. Git supports branches, Neon supports branches. Internally, because we built Neon, we built our own proprietary... and what I mean by proprietary is built in-house. You know, the technology is actually open source, but it's built in-house to support copy-on-write branching for the Postgres database.
And we run and manage that storage subsystem ourselves
in the cloud.
Anybody can read it.
You know, it's all on GitHub under Neon Database repo,
and it's quite popular.
There are like over 10,000 stars on it and stuff like that.
This is the enabling technology.
It supports branches. The moment it supports branches, it's trivial to take your
production environment and clone it. And now you have a developer environment. And because it's
serverless, you're not cloning something that costs you a lot of money. And imagining for a
second that every developer cloned something that costs you a lot of money in a large team,
that is unthinkable,
right? Because you will have 100 copies of a very expensive production database. But because it is copy-on-write and compute is scalable, now 100 copies that you're not using, that you're only using for development, actually don't cost you that much. And so now you can arrive into the
world where your database participates in the software development lifecycle, and every
developer can have a copy of your production environment for their testing, for their feature
development. We're getting a lot of feature requests, by the way, there. People want to
merge this data or at least schema back into production. People want to mask PII data.
People want to reset branches to a particular point in time of the parent branch or the
production branch or the current point in time, like against the head of that branch.
And we're super excited about this.
We're super excited.
We're super optimistic.
All our top customers use branches every day.
I think it's what makes Neon modern.
It turns a database into a URL, and it makes that URL similar to a GitHub URL. You can send this
URL to a friend, you can branch it, you can create a preview environment, you can have
dev test staging, and you live in this iterative mode of building applications.
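As a sketch of that branch-per-change flow from the command line, here is roughly what it could look like with the neonctl CLI. Treat the exact flags as assumptions and check neonctl's help output; the branch name is made up.

```sh
# Create a copy-on-write branch of the project's default branch for a feature.
neonctl branches create --name feature-login

# Get a connection string that points at the new branch, for dev or preview use.
neonctl connection-string feature-login

# Clean up once the feature ships.
neonctl branches delete feature-login
```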
Okay, go to neon.tech to learn more and get started. Get on-demand scalability,
bottomless storage, and data branching.
One more time, that's neon.tech.
I mean, I think, to be honest, I think Fly should have a CDN
because that's one of the first things that are fairly easy to run as distributed systems worldwide.
Because the state is decoupled.
It's the simplest use case, right?
Yeah.
So if Fly invests in something next, I think a CDN should be it.
The thing which we haven't talked about, and maybe we should, is Supabase on Fly.
Oh yeah, because that popped up just recently
after we were already starting with Neon.
I mean, we wanted managed Postgres for a while
and they weren't doing anything about it.
And so we're like, well, let's go talk to Neon
and then tell them the rest, Gerhard.
There is a Supabase Postgres on Fly.io. It's in the Fly docs. I think this was December 13th or something like that. Yeah, it was fairly recent. Supabase partnered with Fly.io to offer a fully managed Postgres database on the Fly.io infrastructure, with low latency. I mean, it's just right there in the intro. I think that makes a lot of sense. So yeah, I think it was bad timing, I suppose, in a certain way, or good timing, depending on how you look at it. I think I really want to see the Neon thing through, but it's interesting to see something like Postgres appearing on Fly as a managed service through a partnership. So I'm wondering, maybe a CDN is next, and this is my wishful thinking.
Yeah, maybe. It's definitely an obvious move. I mean, it's not obvious that they would partner with Supabase. I think that, for me, was kind of a pleasant surprise. It makes sense. Like, oh yeah, this is a great partnership. I think both companies are very impressive and aligned in that way, and it benefits both. So I thought it was a good idea. Obviously, I felt like it was late to the game, because we had been wanting managed Postgres on Fly for a long time. So much so that we made a different move, you know.
That's right. And I'm still interested in maybe, you know, trying and comparing the two, obviously, depending on how tightly Supabase is integrated into Fly's infrastructure. I'd expect them to have that advantage in terms of performance.
Yeah. Maybe they go out and find a CDN-focused upstart that could integrate into Fly.
I don't know, maybe.
I mean, if I was to pick a CDN,
and I haven't tried them,
but I did a bit of research.
Key CDN looked interesting.
And not because it's based in Switzerland.
That has nothing to do with it.
But there's that as well.
So Key CDN.
It was real fast for you.
Real close.
Yeah.
One of your favorite places. Yeah. I haven't shopped CDN. It was real fast for you. Real close. Yeah. One of your favorite places.
Yeah. I haven't shopped CDNs for a long time. I just
have been happy for the most part
until October the 8th.
It's almost like a yearly
thing. Like every year something like that
happens and then we spend a few days with support
and we get nowhere. I end up going
in circles and say, you know what?
Flip the table, I'll build my own. And then I calm down, and I'm like, I don't really want to build my own.
Yeah. Yes, yes, we should. Here we are, like, yes, you should, Gerhard. You should build it.
This is like the third time this thing has happened over the last couple of years. So I think there's something there, and it will happen again, I'm sure. Just a matter of time.
I guess just to layer one more on, like, thank you, James A. Rosen,
for helping us out,
but to have to reach out to an ex-Fastly person
or for them to actually reach out to us,
probably with like,
oh my gosh, you guys are feeling so much pain.
I just need to step in and help you all.
That is just not cool, really.
That's not great.
But did you know that Vercel Postgres is powered by Neon?
No. Is this an advertisement?
No, it just sounded like that.
Did you know that Vercel Postgres is... is this a product placement? Where's the jingle?
Yeah. Well, the reason why I say that is because, you know, Supabase is available on Fly, and it just makes sense to say, well, maybe Neon at some point will also be available on Fly.
Yeah, that might be to Fly's advantage to do that.
Right, right. It makes sense. And, you know, at the same time, I've had this back-of-the-head thought that maybe Neon will be acquired by Vercel.
Yeah. Are they the only database provider on Vercel now?
Well, Vercel Postgres is Neon. So Postgres on Vercel is Neon.
So Postgres on Vercel is Neon.
You don't need an account.
I'm just reading from their docs.
I'm not at all advertising.
It is not SOC 2, Type 2 compliant, coming soon.
I'm just reading from their docs.
But it just makes me think, like,
maybe Neon will be acquired at some point.
I don't think so, but it just gave me this feeling, because when I talked to Nikita for these ad spots we did with them, which were sponsored, his perspective was really around the JavaScript developer, and you never bet against JavaScript, this idea that he had said. And, you know, they're quite embedded. I just wonder if there are, like, fruits happening there where eventually they might get acquired by them. I don't know, because Vercel is such an acquisition behemoth these days.
They're acquiring a lot of different stuff.
And just a thought there.
But maybe at the same time we can expect to have a Neon Postgres inside of Fly, where we basically have the same great features we love, or that we're thinking we'll love, with dev mode and whatnot, and branching and copy-on-write and all the fun stuff they provide. Maybe it's just like, well, now the network latency is gone. It's just not there anymore, because it's within the Fly infra. And that's going to be a good thing for us.
The good thing, really,
is that we have choice. We have so much choice
as developers. And that really is the fun part
of it, right? There is a lot of choices here.
It's almost the paradox of choice. Yeah, the paradox of choice. In the grand scheme, we'll end up doing nothing again.
Like, yeah, we didn't do anything. Build your own: there are 14 choices and none of them is the right one, so now there will be 15 choices. 15 standards. We'll release our open source CDN, and there'll be 15 of them.
Right. Yeah.
So I kept one more thing for last.
All right, one more.
Like an Easter egg. It's not Easter yet, and it won't be Easter next time we record, I don't think. But still, as part of pull request 492, I snuck something in that I wanted to have for ages.
Oh my goodness. Did I notice it? I don't know. Let's have a look.
This is a test. See if you can notice a feature which I snuck into pull request 492.
Okay.
I've now switched to the file changes tab.
I'm going to just scroll through the file.
Is that where I'll find it probably snuck in?
It's just some sort of file change here.
I think it's actually, if you look at the pull request of the conversation,
it's actually the second comment.
Actually, it's the comment, the first comment, which I've made after the description.
Don't give me all these hints, man.
Yeah, too easy. That was for Adam. You keep looking at the color, Jared. It's okay. Let's see who gets there first.
Is it this video?
No, that's actually a surprise: how the auto-scaling slider works in Neon, which is very counterintuitive. So I left that gotcha there, and I've given feedback to their product team about, you know, how that could be improved.
Is it 1Password?
Yes.
Well, I'm glad you mentioned that, because I know I love 1Password, and you're doing more with this. And what's happening here? What's this about?
So, in a nutshell, our application needs a single secret now.
Shh, don't tell them.
OP_SERVICE_ACCOUNT_TOKEN.
Single secret.
And during boot, the application uses the OP,
the one password CLI,
to inject all the secrets that it needs at boot time.
So it pulls them down from the 1Password vault
when it boots. And is that
hosted by 1Password Cloud
or where's the vault? Correct.
That's all 1Password Cloud, yes.
And so we don't have any
additional infrastructure for that? Nothing
additional, no. Spell it out
for us really detailed. Why is this cool?
I mean, I think I understand why it's cool, but spell
it out. We have a single secret that gives the app access to all the secrets that it needs,
and there's a dedicated vault for that app. What that means is that that secret only allows
the app to access just-in-time secrets when it boots. We don't write them anywhere. We could,
but we don't. It's all in memory. When the app boots, it has access. Boom, it pulls them down.
The secrets never leave 1Password, apart from loading into the app's memory.
We don't configure them in Fly, which is what was happening before: every single secret the app needed, we configured in Fly.
Remember how we rotated secrets, Jared?
That's a pain.
So we no longer have to do that process, because if you want to update a secret, you update it in 1Password, you restart the app, and boom.
At boot time, the app picks up the new secret.
That's it.
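For anyone who wants to picture the mechanism, a minimal sketch of the pattern could look like the following. The vault name, item names, and release command are hypothetical; the point is the general op run plus op:// reference shape.

```sh
# env.op holds secret *references*, not secret values (names made up here):
#   SECRET_KEY_BASE=op://Changelog-Prod/phoenix/secret_key_base
#   DATABASE_URL=op://Changelog-Prod/neon/database_url

# The machine only holds OP_SERVICE_ACCOUNT_TOKEN. At boot, `op run` resolves
# the op:// references into environment variables for the app process.
exec op run --env-file=env.op -- bin/changelog start
```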
Does 1Password Vault have some sort of a webhook or something
that they could trigger?
Because then you just take step two out.
You know, that's what I want.
Yeah.
Just let the app restart itself.
Like reboot my app when I add a secret kind of thing?
We've done step one, so please continue being excited for step one before we talk about step two.
Don't you love how I'm never satisfied? I'm like, no, not cool, this would be cooler.
You and every other developer. That's why we keep Kaizen-ing this. It never gets old.
You know what would be cool? If you prove this right... and before you know it, Gerhard's like, can you just appreciate this for a second before you ask for more?
That's cool, Gerhard. I'm loving this. I'm loving this.
And it hasn't been merged yet. So again, let's merge it first, let's start using it, let's get it merged.
Okay. That's a nice Easter egg.
Well, he did ask if this covered all the secrets, and you said looks correct, so I think that's all we needed to worry about in there. That is kind of cool.
The cooler thing, I think, is that it's limited.
Even if it could somehow leak,
it's only the secrets that we store in 1Password
for that vault for the infra, right?
So there's a barrier, there's a perimeter
to its touch point of secrets.
That's it.
And if this was leaked, yeah, rotate the service token,
basically rotate all the secrets in the vault,
and we're good.
Again, that would be like a step number three where could we automatically rotate all the secrets in the vault, and we're good. Again, that would be like a step number three
where could we automatically rotate all
the secrets that were leaked from 1Password?
And that's almost like a 1Password request.
Yeah. This is where I also
say that we're working with 1Password
behind the scenes to make this
embedded partnership more apparent
as well. We're using this tech,
we're paying for this tech, we're not promoting
it because they're paying us, And we're actually pursuing them to
pay us. Not so they can keep promoting it because we love it so much. And we love to
work with them to share more of this story on the inside and maybe even have
that relationship where we're, hey, this is how we're using it. And Jerry's
response was, could there be a webhook? And maybe they're like, yes, there could be a webhook.
Reminds me of this book I read to my kids. But anyways,
that's cool. So hopefully
we can get a 1Password sponsorship here soon because
of just how we keep using it
and improving it in terms of our
infrastructure. That's awesome. I love that.
Been using 1Password since the dawn of time, basically.
I just adore it. It's awesome.
So do Fly Secrets then go by the wayside?
Pretty much, yeah. The only secret which we set is this 1Password service account token, and then the 1Password CLI loads all the secrets directly from 1Password. So when I want to add a new secret, let's say I integrate a new service, I go add it to the 1Password vault, and then I go restart the app. I push the code that references it, and by the time the thing boots up, it's going to have access to it.
That's pretty cool, man.
I love it.
Yeah.
There's still a file there.
There is the env.op file
where we put what secrets you want.
That's part of the pull request.
I'll add it to there.
Exactly.
Because that's what gets it in the environment,
in the app's environment,
just in time when the app boots.
Okay.
What about dev?
Are we still using direnv for dev?
So, yes. So, for example,
part of this, I have an envrc.op
and basically that one I template just in
time, which does exactly the same thing.
But in this case, I write it
locally to my file.
I wouldn't need to. I could, for example,
run op every single time, the one
password, to load them in the env,
but I don't do that but it's an
option.
Say that again in different words.
Right now, if you wanted to use this in dev, you would need to run the command locally, to read the... the op command.
Exactly. So, like, to read the env.op file and maybe template it, like maybe write it to a disk or load it into your environment. You would need to run things through the op CLI.
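Concretely, the dev-side flow being described could look roughly like this. File names follow the conversation; treat the exact invocation as an assumption rather than the team's actual setup.

```sh
# Render the .envrc.op template (full of op:// references) into a local .envrc,
# then let direnv load it as usual.
op inject -i .envrc.op -o .envrc
direnv allow
```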
Can I continue to ignore that
and just use my direnv as I have been?
Because my secrets obviously in dev
are going to be different than the secrets in prod anyways.
You could, yes.
What I really want to know is when this gets merged,
is my setup going to be hosed or not?
Oh, you have no secrets.
No, we shouldn't
because this just configures
it for prod.
So whatever you're doing
in development.
This is additional.
Additional, yes.
Gotcha.
I could use it if I wanted to
in dev, but I don't have to.
Correct.
Sweet.
Cool.
Awesome.
That's awesome.
Anything else?
I feel like that was
the coup de grâce.
The Easter egg.
That's why I left it last.
That was it.
Awesome, awesome. So my question is, do we build a CDN or not? That's what I want to know.
It's almost like a title: Let's Build a CDN. That might be a show title right there.
Yeah, that's the show title. Kaizen: Build a CDN, question mark.
Yeah, I like that. To be determined, I think. Well, let's tinker. I think that's the answer. Let's tinker.
I like it. And we'll talk about it again on the next Kaizen.
Yeah. And we're merging the Neon thing; we're going to take that into production.
Okay. So we are all good with the latency? All good?
There are some issues with the Elixir configuration. I've left a couple of things with Neon support.
I have a support case open,
so we're still back and forth on that.
I have a workaround which works,
but the official documentation doesn't work for us.
It's the official Neon tech documentation
for Elixir configuration.
Some issues with the SSL, with the SNI.
It doesn't work as advertised.
So we'll be on neon.tech
as of the shipping of this
podcast. So when people
listen to this, we'll be on
neon? I think so.
Depends when we ship it. That's a week from today.
Yeah, a week from today is fine, yeah.
So if you're listening to this,
go to changelog.com and see if
things are snappy or if the latency upsets
you. See if it loads.
So Fastly is in front.
So by the way, Fastly will be serving your requests most likely.
Sign in to the website and we'll give you a cookie.
And if you have that cookie, Fastly just passes through to the apps.
And you'll enjoy slower response times because you're going to be hitting neon.
But we hope you enjoy that cookie.
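The cookie behavior being described boils down to a small amount of VCL. An illustrative version of the idea, with a made-up cookie name rather than the actual Fastly service config, looks something like this:

```vcl
sub vcl_recv {
  # Signed-in users carry a session cookie, so skip the cache and go straight
  # to the app servers (hence the slower, Neon-backed responses mentioned above).
  if (req.http.Cookie ~ "changelog_session") {
    return(pass);
  }
}
```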
An easy way to do that is for free, right?
Just go to changelog.com slash community.
That's right.
And hey, while you're doing that,
come and say hi in Slack
because we want to say hello to you.
Lots of cool people in there.
Lots of good conversation.
Home Lab's been active.
TV and movies has been active.
A lot of,
I think you got your
Wordle channel still yet, Jared?
I'm tracking that.
Oh, we picked up some Wordlers.
Thanks to State of the Log, we got a few new Wordlers.
Still going strong.
I'm still keeping my streak alive.
So, a lot of fun.
All right, y'all.
Bye, friends.
Bye, friends.
Kaizen.
Kaizen.
Kaizen.
That's it. Our 13th Kaizen episode. If you have a long road trip or a marathon to run, you could go back to the very first one and binge our entire journey along the way. Find them all at changelog.com slash topic slash kaizen.
Oh, and you've probably heard that we're bringing Ship It back real soon,
but not with Gerhard on the mic.
Maybe you're wondering how he feels about that.
So was Adam.
So for the Plus Plus folks,
how do you feel about us relaunching Ship It?
Change Log Plus Plus members, stick around for that bonus.
And if you haven't signed up yet,
now is a great time to directly support our work with a Plus Plus membership.
Ditch the ads, get free stickers and discounts on merch,
and hear about Gerhard's feelings at changelog.com slash plus plus.
Changelog plus plus. It's better.
Thanks once again to our partners at Fly.io,
to Breakmaster Cylinder, and to you for listening.
We appreciate you spending time with us.
Next week on The Changelog:
news on Monday,
Allan Jude talking FreeBSD on Wednesday,
and Techno Tim joins Adam
for the State of the Home Lab on Friday.
Have a great weekend.
Share The Change Log with your friends
who might dig it,
and we'll talk to you again next week.