The Changelog: Software Development, Open Source - Kaizen! Slightly more instant (Friends)
Episode Date: October 13, 2023. Gerhard joins us for the 12th Kaizen, and this time talks about what we DIDN'T do. We were holding S3 wrong, we put some cash back in our pockets, we enabled HTTP/3, Brotli compression, and Fastly WebSockets, we improved our SLOs, we improved Changelog Nightly, and we're going to KubeCon 2023 in Chicago.
Transcript
Welcome to Changelog & Friends, our weekly talk show about making continuous improvements.
Kaizen!
Thank you to our partners for helping us bring you world-class podcasts every single week.
Fastly.com, Fly.io, and Typesense.org.
Okay, let's talk.
Kaizen 12. We are here to iteratively get better.
Welcome, Gerhard, to Kaizen.
Hey, it's good to be back.
It feels like I haven't gone anywhere.
I don't know where the two and a half months went.
It just went by so fast.
This one definitely snuck up on me.
So whereas last time around you were celebrating all my wins,
this time around I don't feel like I did very much,
so I'm not sure what we're going to talk about.
But I think you're being modest. I think this is a cue for Adam to compliment me.
I celebrate you. Now it's Adam's turn to celebrate you, to celebrate your wins.
I'm fishing for compliments.
Well, I celebrate Jared often, and in public, on podcasts. I did ask our neural search engine
on Changelog News the other day.
I asked it to answer me, from the database of Adam Stacoviak, to talk about
how cool his co-host is.
And it said, Adam has never
done that on a show.
Busted.
Did you hear about this, Gerhard?
We had a listener build a neural search engine all using our open source transcripts.
Pretty cool.
Where is it?
I want to check it out.
Let me grab that URL.
While he's digging it up,
I want to defend myself here
in the fact that in the back channel,
he also asked the same neural search engine
my opinions on Silicon Valley,
and there was none.
So that's a lie.
It's broken.
It's true, which is incorrect.
It's alpha.
It lives at changelog.duarteocarmo.com.
Let me just send you the link.
Gerhard, put it in the show notes.
Oh, yeah.
I was going to ask you, is there a W or a U?
Okay, it's a U.
And you can use the search to just like search for what people have said.
You can look at a transcript, or click over to the chat and you can ask a question to
Adam, for instance.
And I asked Adam, how cool is your co-host?
And he responded, I have not mentioned my co-host's coolness in a given context.
Now, as you said, Adam, Duarte has been furiously working on this.
It's a side project of his, so it's getting better,
but I think it did have some missing data because, yeah,
I also said Adam doesn't talk about Silicon Valley.
So maybe you say I'm cool a lot and just didn't find it in your transcripts.
Yeah.
I also asked the same neural search engine about our good friend Adam Jacob.
I said... I just said, thoughts on Mongo, question mark.
And it comes back with this like multi-paragraph argument.
He says, I have some experience with MongoDB from building my business, Chef, and thinking
about open source communities.
I believe that the soul of why we create software should not solely be about monetization. It's
also important to differentiate and it goes on and on and on. But like it came back with a good response
and I was like, wow, that's so cool. As if Adam Jacob himself. It felt like I was
talking to Adam Jacob, right? Let me ask it again now that
Duarte has fixed a few bugs. By the way, when you go to this
URL, if you go there, switch from search to chat,
defaults to search.
Yes.
I think chat is the one that's a bit more interesting.
And then you can ask a question to,
we have a dropdown.
You can ask the question from a bunch of people.
Yeah.
Everybody who's ever been in transcripts, basically.
Very nice.
So let me ask Adam, what is Kaizen?
That's a good one. You're going to be disappointed. Generating response... I'm not sure.
Sounds about right.
Yep. It's like live coding, in a way, but not really.
Yep. Well, we've definitely mentioned Kaizen. I've definitely mentioned Kaizen. I'm not sure I've elaborated on the meaning of Kaizen, so that could be accurate.
Say it more. This sentence, we'll get it into the search engine now that you've said it, you know, specifically.
Well, I'm a lot more narcissistic than you guys,
so I just ask questions about myself. So I asked Gerhard Lazu, how cool is Jared? And it answered,
I'm not sure, but Gerhard mentioned that he was impressed with Jared's knowledge and appreciated him creating something.
Gerhard also mentioned that Jared did a lot of work and has lots of commits.
All true.
Based on this,
it seems like Gerhard holds a positive opinion of Jared and thinks highly of
his abilities.
Approved.
It's Gerhard-approved.
So there we go.
There's some reinforcement learning from human feedback.
That's a good response.
Gerhard approves this message.
Pretty cool.
He also open sourced all the bits.
So if you're a changelog news listener slash reader, you already know this and you have
the link.
If not, check out the links for that in our show notes so you can go play with it.
And probably even more instructive,
you can go check out how he built that.
He used SuperDuperDB,
which I was not previously aware of.
And yeah, pretty sweet little side project for Duarte.
Should we begin most podcasts now just gushing about each other?
Just to adjust the engine.
At least the Kaizens.
Yeah.
Yeah, at least the Kaizens.
Yeah, we have to improve our perception of ourselves
and our impression of ourselves.
Navel gazing was mentioned in the past.
I think it's something different, but still.
The sentiment is similar.
Well, in good practice of podcasting,
let's give them what they came for.
Right.
What did they come for this time?
Some Kaizen, or some navel gazing?
I think some Kaizen. I'm pretty sure everyone is here for the Kaizen.
The one thing which I would like to do differently this time, or at least try and see what people think, is to talk about, or start talking about, the things that we didn't do.
Oh, goodness. You always talk about the improvements: we did this, we did that. How about what we procrastinated on for so long that we didn't have to do it? I think that's part of the question.
That's kind of one of my favorite things to do in life.
That's your influence on all of us, Jared.
Some problems do solve themselves over time, or become obsolete. You know, like, well, it turns out we don't even need that. Such as a caching solution that spans multiple Erlang nodes, or clustered Elixir thingies.
Because, I don't know, we rolled out static-generated feeds to R2,
put them behind Fastly,
and just regenerate all the feeds whenever we need to.
And, you know, object storage is a nice cache when you're caching things that don't change all that often.
And that problem kind of solved itself.
I mean, I'm happy.
I'm grinning ear to ear over here.
There's probably a time in the future where I will want that again.
I'm starting to think of one.
But for now, you know... don't do it long enough, and you may never have to do it.
The procrastinator's way.
When you add a cache like this... If you added this new cache variant to the multi-node cluster, like the discussion... is it discussion 451, or is it a pull request?
Yes, discussions.
Yeah, there are still pull requests either way, right? If we did that, what would have been involved to produce it, as new code, new infrastructure, and then what would have been required to maintain it?
Well, our good friend Lars Wikman definitely showed us the way to write the code.
So it wasn't very much code if you looked at his pull request,
which we ultimately closed, much to his dismay.
We did not merge that one. His solution was using Postgres pub/sub in order to notify all the various app servers of a need to flush, or refresh, or whatever, delete this particular key from their caches. And that's cool, but Phoenix has pub/sub itself built right in.
so he mentioned you could do it with that,
and that's probably how I would build it.
So basically just a fair bit of spiking it out and then coding it out to be robust,
and then maintenance would be minimal.
It'd be minimal.
But we would have to get releases involved
with our deploy process
because that's how you can actually achieve the clustering
is via releases, which we don't currently use.
So it would have required a little bit of infrastructure
changes from Gerhard in order for those nodes
to actually be able to cluster and talk to each other.
Otherwise, I think you pub sub through Phoenix
and just nobody else hears it.
Is that right, Gerhard?
That sounds about right, yeah.
It wouldn't be much.
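The cross-node invalidation idea discussed here, whether done via Postgres pub/sub as in the closed PR, or via Phoenix's built-in PubSub, can be sketched language-agnostically. The app itself is Elixir, so this is only a toy Python model of the broadcast-invalidation pattern, with made-up names, not the actual implementation:

```python
# Toy model of cross-node cache invalidation via pub/sub.
# The real app would use Phoenix.PubSub (or Postgres LISTEN/NOTIFY);
# Bus and Node are hypothetical stand-ins for illustration only.

class Bus:
    """Stand-in for the pub/sub layer connecting clustered nodes."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, key):
        # Deliver the invalidation message to every subscribed node.
        for callback in self.subscribers:
            callback(key)

class Node:
    """One app instance with its own local in-memory cache."""
    def __init__(self, bus):
        self.cache = {}
        bus.subscribe(self.on_invalidate)

    def put(self, key, value):
        self.cache[key] = value

    def on_invalidate(self, key):
        self.cache.pop(key, None)  # drop the stale entry, if present

bus = Bus()
a, b = Node(bus), Node(bus)
a.put("feed:master", "<xml v1>")
b.put("feed:master", "<xml v1>")

# Content changed on one node: tell every node to drop the key.
bus.publish("feed:master")
print(a.cache, b.cache)  # both caches are now empty
```

The point of the sketch is the shape of the problem: without some bus connecting the instances, a publish on one node is heard by nobody else, which is exactly why clustering (and therefore releases) kept coming up.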
Yeah, I think the nodes can still cluster, but, um, again, it's a bit more involved. I remember doing this with RabbitMQ, so you can definitely do it without using releases. I've done it in the past, so I know it is possible. Maybe there are a few things different in Phoenix, but there shouldn't be; at the end of the day, it's still an Erlang cluster.
What are Elixir releases? What are they? Yeah, do you understand?
Is it like some sort of a,
I think of like a tarball or something.
Like, is it some sort of a...
You're packaging everything in a way that's self-contained
and you have the option of doing hot upgrades.
So like in-place updates, live updates, hot code reloads.
It just opens up the world
to a whole new set of possibilities.
Also, you no longer need Erlang to be available. I mean, it's all packaged as part of the app, and when you release it, it's almost like a Go binary, but a bit more involved, because you have a bunch of other pieces. But it's all self-contained, so that, you know, when you start it, it runs. Everything's there; you don't need extra dependencies.
Right. Versus what we're doing, which is effectively booting up a Docker image and then telling it to start its Phoenix server and go, right?
Yeah. So we, like, we install Erlang and a bunch of other dependencies, so it's already in the image, and then we just, like, add the app code on top, which gets compiled. So it just boots up and it runs the code. While this one, we wouldn't need Erlang separately; it would be all part of the release.
Now, I did mention this, I think, last time, and I don't want to, you know, go too much into it, but I mostly solved it. Like, 90% solved it. But then, you know, the idea was, let's just finish the migration to whichever version of Dagger it was at the time. I think we went from the CUE-based one to the Go SDK, a code-based approach.
And the focus was like, let's get that done.
Let's leave releases.
Like basically it was de-scoping,
so I would like get things, you know, out the door.
And if you look in the code, it's still like there,
like commented, like, hey, we have releases,
but, you know, Jared really needs this. I think it's actually in the to-do,
so let's just get it out there
and then we'll figure releases later.
So I know that they will come.
So it's just a matter of time until we land releases.
But yeah, I think releases were linked to this clustering.
The bigger issue is that now you have a cluster of instances,
which is both a good thing and a not so good thing.
Like how do you push out updates?
You have to roll through every single instance
and it just complicates things
because you're no longer updating a single instance.
And because we have a CDN in front,
which is able to basically serve 95 plus percent
of all the traffic,
especially with all the recent changes,
do we really need a cluster?
I think it's like the whole thing of simplicity.
We can keep things as they were,
and they've been running reliably for a bunch of years.
We don't really need to go to cluster.
We don't really need to run multiple machines and then, you know, have all that communication between them.
And what if something,
I mean, network is unreliable.
Right now we have a single instance.
Everything is happening there.
Again, Erlang is amazing at this.
So I'm sure you have handled everything
really, really well,
but it's a big change.
From one to many, it's a big change.
Right.
We didn't have to do it.
Ultimately, we didn't have to do it.
That's the beauty of this.
One thing we have to change, though,
is how we mention Fly, because...
I... I think, Jared, you've resisted.
I thought we were going there, so I was preempting
the going there. I've always said
we put our app and our database close to our users
with no ops, which is a
true statement on their behalf, but
that's not what we're doing now.
And we were trying to go there, and
now I guess we're not. We're a single instance, not a geolocated application.
Although we could be, right?
We're just not going to do that.
Well, we have a CDN in front.
So what that means is that we are close to our users.
Most of the responses are cached.
Just not via fly, right?
Exactly.
Well, I'll have to stop saying that.
I'll just tell them that they can do it.
That's what I say.
I say, put your app servers and database
close to your users, right?
We do have multiple nodes though, right?
We have two fly nodes.
I don't know what the particular term is.
It's just one.
The app instance is only one.
We only deploy one instance.
I thought we had two at one point.
Oh, you're right.
We did go to two. I forgot about that. You're right, but they're not clustered.
Right, they're just not clustered. I forgot about that. We could go to as many as we want; we just haven't.
Yeah, they're not talking to each other. That's correct, yes.
Because all that matters is, if they don't have shared cache state, then they can just be... I mean, they're basically pass-through app servers, right? Like, they don't have any...
they're all talking to postgres they They're all uploading to R2.
And so that's where our,
they have no local anything.
Yeah.
So we can, and we have,
we have two, you know,
because Gerhard always wants to have two,
even when he doesn't know it.
That's true.
I remember that, yeah.
Even when I forget it,
I still have two.
Where's the second one?
Just keep looking into our bag,
it's there.
I think they're both
in the same data center right now.
They are, yes.
But we could scale that horizontally at this point.
I just figured because we have Fastly
and because we're not, I mean,
we're a mostly static web app,
it's just kind of overkill,
but we were going to do it because it's fun.
Yeah.
If we didn't have Fastly,
then we would leverage the built-in no ops
geolocate inside of Fly.
Well, that's the thing. Like, with Fly, and with services like Fly that will do this, they say you don't really need a CDN, you know, because you're running these app servers all around the world. You're basically...
But we already got one.
Yeah, there's an extra thing there, by the way. There's also the proxy, the Fly proxy, which is what intercepts the requests,
and that is distributed. So even when I connect to the Fly app, which our instance is running in Virginia, I'm not connecting directly to the app. I'm connecting to a proxy, whichever is closest to me. That happens to be in London, and from there it's within the Fly network. It talks to the app instance which is closest. Because we have two and they're both in the same region, it goes from Heathrow through the Fly network into Virginia, and then the app eventually serves the request.
Right.
Now, the other thing, I mean, there's like two parts here. It's the app that needs to be clustered, but separately from that, if, let's say, the app instance is in Tokyo, to give an example, but the PostgreSQL database is in Virginia, now you need to have an instance of the database close to where the app is.
Yeah. Latency would be too much, right?
Yeah. And then use that for the reads. And then the writes would still go through the master. So
it just complicates things. And by the way, we're not doing that. We're using the master for
everything for both reads and writes. Right.
The primary.
Mainly because I think we don't have a lot of people
contributing content to the database, right?
So it's pretty much located within the Midwest here in the States.
Obviously, you're in London,
but we don't have a lot of global users
generating content, creating content.
It's mostly reads.
It's read-heavy.
Very, like, 95% is read.
Which is why R2 made sense,
which is why S3 made sense,
which is why a CDN makes sense.
Yep.
Well, we did punt the need for all this extra stuff,
but we also gained some dollars back in our pocket.
In June of this year, our AWS bill was $155.21.
And the most recent bill from AWS S3 was $16.46.
Nice.
So moving to R2 really shaved off quite a bit.
And how much is our R2 bill?
I think it was like less than five bucks.
Yeah, I didn't even pay attention.
It was so small.
I think it was sub $5.
It was like three or four bucks for a month.
Now, we were holding AWS S3 wrong.
I have to mention that. Okay, so the integration that we had, when it comes to caching...
When you say we, who's we?
Me, mostly.
Jared... oh, you.
I'll take this one. It's okay.
I gotta get... the neural engines gotta know how we're feeling about Gerhard, so we gotta put that clear.
I get things wrong a lot, but sometimes I fix them so fast people don't even realize they're a problem unless I admit it. So I'm going to admit it now.
He's going to admit it this time.
So I want to give a big shout out to James A. Rosen.
Oh, yes.
He's James A. Rosen on GitHub.
So he listened to the previous Kaizen, Kaizen 11.
He reached out via Slack
and we had two amazing pairing sessions.
If you go to the GitHub discussion for this episode, which is GitHub discussion 480, you will see a lot of screenshots, a lot of details about how we debugged this, and James was super, super useful. He worked at Fastly in the past, and he had specific insights, including some very nice, like, diagrams and formulas. They're all there; go and check them out. And we went through a few debugging sessions, and what that meant is that I not only understood very well how Fastly works, how the shielding works, what the various things mean. Again, it's all captured in the discussion.
The problem was that the headers that we were getting from S3, in the Fastly configuration, we were not processing them correctly, which means that the Surrogate-Control and the caching was not being respected. Therefore, Fastly was hitting AWS S3 more often than it needed to, and just hitting it from the shield. Because the shield still had, like, that content cached... The shield is a series of very big, beefy servers, like, I think, vertically scaled, and they keep everything in memory, so they're super, super quick. But sometimes they need cycling, so based on the server which gets cycled, the content which was cached, maybe it's no longer cached. Therefore, it has to go back to S3.
So that configuration was not very good, and the shield would basically have to keep going to S3 to pull content which it had already seen, way more than it needed to. So we have three to four X improvements across the board now that we're caching things correctly with R2.
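The header problem described here boils down to how an edge cache picks a TTL. As a rough illustration, not Fastly's actual code, a CDN honors `Surrogate-Control: max-age` for itself and falls back to `Cache-Control`; if your configuration drops or mishandles Surrogate-Control, the edge falls back to a short (or zero) TTL and keeps re-fetching from origin. The function names below are made up for the sketch; only the header names come from CDN convention:

```python
# Illustrative sketch of edge TTL selection from origin headers.
# Surrogate-Control is meant for the CDN; Cache-Control is the
# fallback. This mimics the general behavior, not Fastly's VCL.

import re

def max_age(header_value):
    """Extract max-age=N from a header value, or None if absent."""
    if not header_value:
        return None
    match = re.search(r"max-age=(\d+)", header_value)
    return int(match.group(1)) if match else None

def edge_ttl(headers):
    """TTL the edge should use: Surrogate-Control first, then Cache-Control."""
    ttl = max_age(headers.get("surrogate-control"))
    if ttl is None:
        ttl = max_age(headers.get("cache-control"))
    return ttl if ttl is not None else 0  # treat as uncacheable by default

# With Surrogate-Control respected, the edge caches for a day;
# if the config loses that header, it only gets the short fallback TTL
# and hammers the origin far more often.
good = {"surrogate-control": "max-age=86400", "cache-control": "max-age=60"}
bad = {"cache-control": "max-age=60"}  # Surrogate-Control mishandled
print(edge_ttl(good), edge_ttl(bad))
```

That gap between a one-day edge TTL and a sixty-second fallback is the kind of "holding it wrong" that quietly multiplies origin requests and the S3 bill.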
Let me ask you this question.
How long have you been holding it wrong?
Like how many years?
All of them.
This is 2023, right?
We established this relationship with Fastly back in 2016.
I want to say, right, Jared?
2016?
All of the years.
Well, we weren't always on S3.
We had local files for a long time.
I think it was somewhere six to 12 months, I think, roughly.
But again, the problem wasn't that big that we would see it.
I mean, you know, as we kept driving improvements,
we kept, you know, getting, you know,
closer and closer to like getting these things,
you know, improved, the latency reduced,
the cache hit ratio.
Basically, we were trying to get it high.
And there's also like a lot of noise.
So when you look at all these requests flying through,
it's not immediately obvious what the problem is
because things are kind of working.
And okay, the bill is slightly higher,
but it's not as efficient as it could be.
But it's not broken in that sense.
We moved to S3 when we left Linode, right?
We had all of the assets stored locally on disk with Linode, if I recall correctly. When we moved to Fly, we moved to S3. Is that correct?
No, I don't think so.
I think we were on S3 prior.
So no, this is for the
MP3s. Yeah. Well, we had MP3s
stored locally. We were uploading them locally.
Remember Gerhard? And you had a volume,
a Linode volume that always had issues
with read latency and crap.
And then we switched to S3 while we were still on Linode.
And then later on, we moved to Fly.
And it was simpler because we already had our assets on S3.
So there was no moving of those.
I recall the order of operations.
It makes me also think about these problems at scale.
This is a small scale.
And I think it's part of the beauty of Kaizen and, you know, continuous improvement at our scale.
Like we've done things like Linode and Kubernetes.
Not Linode in particular being a bad choice, but Kubernetes is not necessarily a smart choice for us because we were on a single node for a long time.
It's obviously better at multi-node and there's much bigger problems that Kubernetes solves.
But, you know, that we get to sort of expose these very small problems, really. I mean,
we're talking about like a hundred bucks really in cost that was incorrect. And we've hunted it
down through like, why is this bill growing, paying attention, iterating, all collaborating.
I just wonder like at larger scales with larger teams who just have multiple teams with legit AWS billing issues, like hundreds of thousands of dollars,
hundreds of machines even, hundreds of instances.
And how this problem
permeates in a team at scale. I mean, how much money is being wasted
really by holding it slightly wrong or completely wrong?
Yeah, every system has inefficiencies, and unless you look at them, they can be growing. They can be, worst case, not getting fixed, and it's always been a problem you don't even realize. And with the benefit of hindsight, of course, I should have thought of that; it's a simple thing. But unless you're paying attention to these things, or making a conscious decision, okay, we will be improving and we'll be looking at this thing, we'll be trying to drive these small improvements... It took us a while to
get here and I think the details aren't exactly clear because we went through so many of these
cycles. In my mind they're starting to blur at this point. I know we talked about clustering for
so long to the point that it stopped being relevant. You know like, hey we can solve this
differently. I think that's the beauty of it.
But at the same time,
you should be driving improvements constantly.
I think that's why I want to emphasize this.
I want to share our story in that,
hey, even us, as amazing as we are,
again, going back to how this episode started,
we still get it wrong.
And it's okay to admit it publicly
and have a laugh about it
because otherwise you'll get miserable.
You really will.
Right.
Well, you get to, you know, laugh and learn, right?
Laugh and learn.
Pretty much.
I'm looking at this chat history between you and James and just seeing, like, equations of, you know, like, Fastly edge, shield, combined, and all this offloading to R2.
And, like, this went deep, this collaboration, and the learnings that came from this are pretty deep.
Yeah. Big shout-out to James. I really enjoyed it. It's all there for you to enjoy and see and learn from, if you want. If you're using Fastly, or any CDN, I think that would really help.
The SLO improved. I mean, if you open up Honeycomb... That's the last thing which I want to mention here. And we have a link there.
So we're going to open up the dashboard.
I have to log in.
Of course, I have to log in.
All right.
Give me a second.
Do you have the same link open, Jared?
Which one?
The one... So this is an SLO:
96% of all feeds served within one second,
last 60 days.
And I was saying, notice the gain since August 31st. So this is us moving the feeds to R2, and we're seeing how we're using caching correctly, and we're seeing this, like, boomerang, almost, graph, where we were just under 95, and then it shot up. Literally shot up, like a 45-degree angle. This is the SLO budget, you know, going up. We're at 96.8, and we're looking good. We're looking really, really good. So this is a combination of feeds being now published to Cloudflare R2, and us consuming feeds from Cloudflare R2, with proper caching, which means that really Fastly delivers most of them.
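An SLO like the one on that dashboard, "96% of all feeds served within one second, last 60 days," is just a ratio over a rolling window of requests. A toy sketch with made-up numbers:

```python
# Toy SLO attainment calculation: fraction of requests served at or
# under a latency threshold within some window. Numbers are invented
# for illustration; real dashboards (e.g. Honeycomb) compute this over
# actual traced requests.

def slo_attainment(latencies_ms, threshold_ms=1000):
    """Fraction of requests at or under the latency threshold."""
    if not latencies_ms:
        return 1.0  # no traffic, nothing violated
    good = sum(1 for ms in latencies_ms if ms <= threshold_ms)
    return good / len(latencies_ms)

# 97 fast feed requests and 3 slow ones in the window:
window = [40] * 97 + [2500] * 3
attained = slo_attainment(window)
print(f"{attained:.1%}")  # 97.0% -- above a 96% target
```

When nearly every feed is served from the CDN in a few milliseconds, the "good" bucket dominates and the attainment curve climbs, which is exactly the boomerang-shaped recovery described above.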
And that just improves everything.
Question, Jared: how do we expire the feeds, these various podcast feeds, in Fastly? How does that happen?
We hit their Purge API.
Nice. That's the bit which I was missing. So there's, like, two parts to this: upload the feed to R2, and then hit the Fastly API to purge that specific endpoint.
Exactly.
Nice.
Simply by setting the request method to PURGE, which, I think... is that an HTTP method, or did Fastly make that up?
But it's not like a POST with a special parameter or anything like that.
If you just do curl and then the endpoint,
it'll obviously get you that endpoint.
If you do curl dash capital X space purge,
then the endpoint,
you're just telling Fastly to purge that thing,
which is kind of weird
because couldn't you like DDoS a CDN that way
by like just continually purging somebody's URLs for them?
I mean, don't do that, dear listener.
Edit this out.
That's like a Fastly feature.
That's kind of cool.
I mean, it makes it really easy when you're like, I want to make sure that this request is going to be completely pristine, so before I actually curl it,
I'll just curl dash X, purge it,
and then the next curl is
going to be fresh. Can anyone do that? I thought like you need like some sort of a key to do that.
Really? Anyone can do that. That's why I said, please don't do that. It seems like a,
Are the endpoints hard to guess, though?
No, man. It's just whatever URL you're getting.
That doesn't sound right. I think we're holding it wrong. It works.
Okay.
Well, these things are beasts.
Like just like everyone listening to this,
there's such complicated systems that they all come together, right?
So there's a bit of appreciation
for how complicated these things get.
There's all sorts of edge cases.
There's always edge cases.
I don't know whether we are hitting one,
but this doesn't sound right to me.
No, like, no one should be able to purge our cache, our Fastly cache, except us, if we have the correct key.
I agree, but you can just do that.
So let's put a pin in that and follow up, rather than live debug it.
What would happen if you purged the root of changelog.com?
Well, it's just going to regenerate it on the next request.
Just the homepage, right?
Yeah.
Whatever assets are there for the homepage.
No, just that URL.
It's just a single URL purge.
So you can't do like a wildcard.
No, you can't do a wildcard.
Well, what's in the cache for, let's just say, the root of changelog.com?
It's just whatever that response is.
Oh, I see.
So maybe we talk to James next session and see if this is, did I find a bug in Fastly,
a global purge bug that would allow very simple CDN DDoS by anybody against any Fastly endpoint?
I'll check the documentation first.
So this is like a feature, like known feature.
I think it is because I read about it.
I didn't learn this by trying it.
I read about it, but I don't remember.
I know that, now I'm Googling it.
Maybe we should just put a pin in it like you said.
That's what I'm thinking.
This rabbit hole is all deep.
Super handy.
It's super handy.
How many years have we been digging at Fastly,
this specific rabbit hole?
And that basically shows what it takes
to achieve mastery in any one thing.
In this case, it's Fastly.
Kubernetes, I think, is like another special hole.
And there's a few others.
Fly.io, there's like so many things within Fly.io.
Every single hole needs a shovel and some digging
before you can call yourself, I know this hole.
So anyways.
I found the docs.
URL purge.
A single URL purge invalidates a single object
identified by URL that was cached by the read-through cache.
This can be done via the web interface
by using the Fastly API
or by sending a purge request to the URL to be purged.
For example: curl -X PURGE http://example.com/path/to/object.
There's no authentication or anything on that.
So that's right there in their docs.
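As the docs describe, PURGE is just a nonstandard HTTP method on the URL itself. Here's a minimal sketch that builds, but deliberately does not send, such a request using Python's standard library; the URL is a placeholder in the spirit of the docs' example. Don't purge URLs you don't own:

```python
# PURGE as a custom HTTP method, per the quoted Fastly docs
# (curl -X PURGE <url>). We only CONSTRUCT the request here; actually
# sending it would purge that object from the cache.

import urllib.request

def build_purge(url):
    """Create (but do not send) a PURGE request for a cached URL."""
    return urllib.request.Request(url, method="PURGE")

req = build_purge("http://example.com/path/to/object")
print(req.get_method(), req.full_url)
# To actually issue it (the equivalent of curl -X PURGE):
#   urllib.request.urlopen(req)
```

Whether Fastly accepts an unauthenticated PURGE depends on the service configuration, which is exactly the open question the hosts put a pin in here.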
Okay.
So it must be a feature then.
It must be a feature.
I'll link to it.
It must be a feature then.
It's a cool feature.
I appreciate it as someone who's developing against their system.
How do we disable it?
That's what I want to know.
Is there a button to disable it?
There has to be a reason why this isn't dangerous.
There has to be a reason why this doesn't matter all that much.
We can dig into that.
Speaking of buttons and Fastly.
He doesn't want to talk about it, Jared.
He's done.
That's all right.
I'm pushing your buttons on Fastly.
Okay.
That's great.
I pushed a few as well.
Not yours.
Fastly's buttons.
Appreciate that.
what's up friends there's so much going on in the data and machine learning space.
It's just hard to keep up.
Did you know that graph technology lets you connect the dots across your data and ground your LLM in actual knowledge?
To learn about this new approach, don't miss Nodes on October 26th.
At this free online conference, developers and data scientists from around the world will share how they use graph technology for everything from building intelligent apps and APIs to enhancing machine learning and improving data visualizations.
There are 90 inspiring talks over 24 hours.
So no matter where you're at in the world, you can attend live sessions.
To register for this free conference, visit neo4j.com slash nodes.
That's N-E-O, the number four, j.com slash nodes.
Finally, we have HTTP/3 enabled. So if you have a client that supports HTTP/3, the website should be quicker for you.
Nice. 20 to 30 percent?
It was just a button. Literally, I just, like, oh, this should be enabled, and I just enabled it. There's nothing in the VCL, by the way. I was a bit surprised; I was expecting some sort of a config to change. It didn't. Okay, so we have, I think, about 30 percent. I'm just going to click this link now to look at the, uh, the beta edge browser in Fastly, and again, I have to log in. I'm doing that now, just to see how many requests in the last few days are now HTTP/3. 5,000. This was in the last one hour. So in the last one hour, 5,000 requests were HTTP/3.
Huh. Do we know who's making those requests?
Well, we can dig into them, but basically, we're serving HTTP/3, which is much, much quicker. And that was just a button push.
Just a button push.
I'm going to say much quicker: 20 to 30 percent speed. I mean, for me it's mostly instant, but HTTP/3 is, like, just, like, slightly more instant. So I could, I could appreciate that.
Slightly more...
Slightly more instant. Exactly. That's what happened. Like, four milliseconds to... I know. Or, like, six to five, something like that. Or six to four, which is nearly instant. Also, we enabled
Brotli compression. So they claim 10 to 15 percent reduction in traffic over gzip.
How do we measure that?
I'm not sure. I was looking at different ways of measuring it, looking at the responses. Some don't seem to be smaller. I mean, even though they're genuinely Brotli-encoded, they're not smaller, so I'm not quite sure what's going on there. Maybe... I know that if it's already encoded with gzip, it won't re-encode it as Brotli, but in our case it's not. I can see there is a difference in bytes, in the size of bytes.
Also, we are redirecting RSS at the edge. So we have an extra config in Fastly, which means that most users should get responses in one millisecond versus 400.
Most users that do what?
That go to /rss.
Which is an old URL that we no longer serve from, right?
Well, it redirects to /feed.
Right.
But we still get thousands of requests going to it. So, you know, no need to hit our app, basically.
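For illustration, an edge redirect like that is typically a small VCL snippet using Fastly's synthetic-response pattern. This is a hedged sketch, not the actual snippet from our service config; the path and status code here are assumptions:

```vcl
sub vcl_recv {
  # Answer the legacy feed URL at the edge; the origin app never sees it
  if (req.url.path == "/rss") {
    error 601 "redirect-feed";
  }
}

sub vcl_error {
  if (obj.status == 601) {
    set obj.status = 301;
    set obj.http.Location = "https://changelog.com/feed";
    return(deliver);
  }
}
```

Because the response is generated at the POP nearest the client, that's where the one-millisecond-versus-400 difference comes from.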
And there's a bunch more improvements there in Fastly. So there are a few low-hanging fruit like that. I pushed a button in Fastly.
What did it do?
I pushed the enable WebSockets button.
Oh, tell us about that. Because it's a trial, by the way, and it's going to expire.
It is. So I'm waiting for a phone call on October 29th. Yeah, we'll see what happens. So this is part of our Oban Web story, which is a bigger story, but includes this subplot, which is that when we rolled out Oban Web, which is a web interface for our background job processing library, Oban... it uses Phoenix LiveView, which uses WebSockets to be continuously, automatically updated and do all those cool things that modern web apps like to do. And it worked great, until it didn't work, because you have to allow WebSockets through Fastly. Because we're sending all changelog.com traffic through Fastly, and Fastly requires you to push a button for that to work. And then here's what was kind of funny. I went and pushed the button, activated it, went and tested it. Didn't work. Dang it. I'm thinking, did this free trial actually activate? You know, maybe it takes somebody on their end to actually go do a thing for it to work out. So of course I immediately blame somebody else. And then like a half hour goes by. I'm thinking, you know, it's just going to take them a minute to actually activate it. I go do something else, come back. It still didn't work.
So I go back to the web UI, and it has WebSockets turned off. And I was like, well, maybe I didn't push the button, you know? So I do it again. Same process. Reload the page. WebSockets are turned off. This, it turns out, is a bug in their web UI.
Nice.
Not the feature; an actual bug, where WebSockets were on, but for some reason the configuration UI just wasn't recognizing that they were on.
You didn't push it hard enough.
That's what it was.
Yeah, that's right. So then I go back to the docs and realize it was two steps, and I'd only followed the first step, which was: turn it on.
And the second step?
The second step was: don't turn it off on accident.
No, the second step was you have to go update your VCL and put a snippet in to allow HTTP upgrade requests to immediately pass through, versus hitting the rest of your config. So I went and did that, and it still didn't work. And then I was really at the end of my wits.
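For reference, Fastly's documented shape for that second-step snippet is roughly this; a sketch, and the exact VCL in our config may differ:

```vcl
sub vcl_recv {
  # Hand WebSocket upgrade requests straight to the origin,
  # skipping the rest of the caching and routing logic
  if (req.http.Upgrade) {
    return(upgrade);
  }
}
```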
But I remembered a hack that you and I did years ago, when it came to just manually setting the backend on those cookied requests. Remember that?
Yep.
And I was like, you know what? This code looks a lot like that code. I'm going to copy the same hack. And now we have two hacks, and they both work.
Okay. I was going to say, no wonder nothing works.
Well, I mean, why only have one Gerhard when you can have two? So, not hacks: features.
You like to have two of everything. So now we have two hacks. Lovely. And they both work perfectly, and Oban Web works, and the WebSockets pass through. It's a beautiful thing. And I'm now monitoring... I'm watching our emails get sent without connecting to the production instance's database.
Yay!
So, do you still feel like a boss? Hang on. Because you connected to the production DB, and that was the moment when you felt like a boss. You had all the power. I can rm -rf everything.
I did. Well, you know, I can still do that whenever I want. So I still have the power. I just don't have to wield it if I don't want to. So that's cool. So I clicked the WebSockets button a bunch of times, and I finally got it working. And now it's enjoyable.
My favorite thing is to send off Changelog News, which has over 20,000 subscribers, and watch them get their emails. You know, because the queue balloons up and then it's just flying, and I'm just imagining, you know, emails just flying everywhere.
That sounds amazing. Okay, you have to show me when it happens. That sounds really cool.
It is kind of fun.
Okay, I want to give a big shout-out to Parker Selbert, sorentwo on GitHub. You can check the debugging chat in our Slack dev channel. Remember, don't archive the dev channel.
Okay.
Disable that, but still, don't do it.
It's a bunch of spammy robots.
We think we disabled it, but you never know. You never know which buttons stick and which ones don't.
Right. As long as you don't press that one, we're good.
And I also want to give a small shout-out to Lars. He's also helped a bunch. So go and check it out. The PR is 472: all the details, what went into it, the integration, everything.
Thanks, guys.
Thank you both.
All right, I think we're getting closer to the best bit.
Okay.
Because just before we started recording this, I opened the new PR on Nightly. Remember what we talked about, Nightly?
I saw this pull request come through.
Okay, I hope you didn't look at it, because I wanted to see your reaction live. I saved it on purpose just before the show.
Oh, is that why you waited?
Yeah, exactly.
I thought you were just procrastinating.
No. No, I was waiting. It was there for ages and I kept improving it.
Not in a branch?
The branch is called daggerize. So if you containerize and you use Dagger to deploy, it's daggerized. Let's see if it sticks. It's in the Nightly repo, pull request 42.
Okay, so this was my marching orders for you on the previous Kaizen. We're doing Kaizen-driven development, which is why I thought the PR got opened so late, like literally 10 minutes before we hit record.
Yep.
And this was to take Changelog Nightly, which is an old Ruby-based, cron-job-based program...
Yeah. Monstrosity. I wouldn't call it a program at this point.
Yeah. Oh, thanks.
No, it's a script.
It's an app. We're calling it an app. The Changelog Nightly app.
Yeah, the app. Changelog Nightly, which sends out thousands of emails every night, just dutifully, you know, like a script should. For years, right? For years, on an old DigitalOcean server.
Which is pre-Linode then, I guess, Adam, because this predates Linode. I'm seeing 2015.
It does, yeah. I mean, it's probably been there since 2015, so for almost a decade Changelog Nightly has been sending me emails, and thousands of other humans. It's just amazing. We should add up the math on how many emails Nightly has sent out. Anyways, I said, please get this off of DigitalOcean, let's get it on Fly, daggerize it. And you know, 10 minutes before we hit record, you finally got it done. That's it.
I was waiting. I'm like, I'll get this done.
Sweating. Will it be finished?
It will be finished.
I am surprised. I just thought you weren't going to do it.
No. I spent so much time with this, and I took my time. Sorry. Sweet, sweet time. And I even did a Cloudflare Pages experiment. So it's in the wrangler, by the way. That's up and running. You can go and check it out: nightly.changelog.place.
Nothing to purge. I also did a Cloudflare R2 experiment; there were some issues with index.html. And if you go to this pull request, you'll see: obviously there's a Dagger pipeline, which is captured as code, as Go code. There's a GitHub Actions workflow that runs all of this. There's an nginx conf, so we are using nginx to serve this. There's something called Supercronic, which is a modern way of running a crontab.
Nice.
Go and check it out. It's there.
Okay, I'm excited.
Procfile support! Who remembers Foreman, from the audience?
I remember Foreman.
Exactly. It hasn't been updated in three years, but it doesn't matter. It's amazing. It's old tech at this point.
It's new to us, I guess.
So we're using Foreman. We tried, for example, to use multiple processes in Fly, but when you do that, you get multiple machines, which we didn't want. We want a single machine.
This is really small, really simple. We have a Supercronic process, which basically runs this cron. It's really nice. And the crontab is now in the repo; we are versioning our cron. It finally happened. And it works, you can debug it, a bunch of things. This is cool. And the last thing remaining is the 1Password service account integration.
Okay.
Which is the secrets, right?
So, 99% done. Let's see how long this is going to take to merge.
Too much.
Yeah.
That last mile.
So it's there.
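To make the Foreman-plus-Supercronic arrangement above concrete, here's a hedged sketch of the two files involved; the process names, paths, and schedule are illustrative, not copied from the actual repo:

```
# Procfile: Foreman (or the Fly entrypoint) starts both processes
web: nginx -c /app/nginx.conf -g 'daemon off;'
cron: supercronic /app/crontab

# crontab: plain cron syntax, versioned in the repo, executed by Supercronic
0 22 * * * bundle exec ruby nightly.rb
```

Supercronic runs the crontab in the foreground and logs to stdout, which is what makes it container-friendly and debuggable.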
And this just gave me a bunch of ideas.
I'm wondering... do you want to try it locally, Jared, to get some reactions out of you? Or are you not feeling brave?
I mean, you mean right now here on the show?
Yep.
Right now, right now.
How hard do you think this is going to be?
If it takes more than two minutes, we should skip.
Okay.
Let's try it and then we'll just skip.
All right.
If we must.
All right.
Walk me through this.
Do you have the Dagger CLI?
Oh, goodness. No.
Let's skip. Let's skip.
Okay. No, that's okay. There's brew.
I actually just reinstalled Docker on this machine.
Well, you've got to wait two minutes for brew update.
No, I just ran it, because I literally installed Docker the other day.
Okay, good.
Much to my chagrin, I've got Docker back on this sucker.
Docker... okay, we need to talk about that, but not now. Okay: brew install dagger-cli, or brew install dagger... basically it's brew install dagger/tap/dagger. That's the command.
Boom.
Oh, do you have Docker running locally?
Yes.
Good. Okay, you'll need that.
Oh, good, we'll talk about that. I thought you just said we should talk about getting me off Docker.
Yeah, I should. But again, it's a bit more complicated than that. But for now... All right, I have Dagger and I have Docker.
Excellent. So now check out the branch.
Okay, check out the branch.
And then, in the repo, at the top level... I'm assuming you have Go installed.
Oh, so many assumptions. What's the branch name?
daggerize.
Check out daggerize. Okay. Which Go?
Doesn't matter. 1.20. If you have 1.20, you're good.
I know, I just literally typed it in: which go... go version... 1.20.
That's good.
Okay, I'm there.
dagger run go run .
Dagger, space, run, space, go, space, run, space, dot.
Correct. You got them all.
I'm going.
Oh, I like this little UI you have here. That's a little like...
That's Alex, aka vito. That's really cool. Alex built this. Ship It! episode 64; I still remember it. How do you have Go installed, by the way?
Yeah, I actually have it installed via asdf. Do you want me to set it as a global or local or something?
I think you'll need a global one, or you can commit it; both work. I forgot that you use asdf too. So you can either do global or... I think global is the easiest one. Dagger run, go run, dot.
Dot. Yeah, that's it: go run dot. We're just basically wrapping it.
Okay, it's downloading. We are... we're connected. We started the engine. We started the session. We're connecting. Oh... success.
Name: nightly. Usage: nightly global options, command. Version... author: Gerhard Lazu.
Gerhard,
come on,
give me some credit here.
I will.
The PR hasn't been merged yet.
You don't have to approve it.
By the way.
I wrote some of this stuff.
You did. You did. Actually, no...
Okay.
None of the Go code.
Did you rewrite the whole thing?
Yeah, but isn't this the Nightly app right here?
No, no.
It's just the automation for the Nightly app.
Oh, I see.
So you get a CLI that does all the automation.
And it's called Nightly.
Okay, so you authored the Nightly CLI.
Correct.
Which basically is a Dagger pipeline. It basically wraps Dagger and has a bunch of commands.
What are the commands that you see there?
What does it list?
Build, test, CICD, and help.
Great.
I think we should try build.
So now do I have to run dagger run go run build?
Yeah, or you can press up.
Yeah, I just pressed up. Space, build.
You just append build to the previous command, after the dot, the current directory.
Yeah.
So you have the go run, which basically looks for the main... it gets the main package; that's just how Go works. And then you tell it what to run. Again, if we had the CLI bundled, it would be just a binary, so we'd run the binary. But in this case, we haven't built it yet. And maybe we shouldn't. The idea is, all this code is there, and we're running it from there
and it goes and does its magic.
It's doing its magic. For those who are listening, this UI is in the terminal, but it looks like a git commit graph, you know, where the merges and the branches are. It's a lot like that, only it's multicolored. Right now it's executing bundle install, which is why it's taking a little bit. And it's very shiny. And if everything goes correctly, then I'll be very happy. But if it goes incorrectly, I'm going to look for this author, Gerhard Lazu.
Yes, if you know him, we need him.
I'm going to blame him.
So this is the beauty or one of the advantages Gerhard Lazu. Yes, if you know him, we need him. I'm going to blame him. So
this is the beauty, or
one of the advantages of
packaging pipelines this way.
They work the same on my machine, they will
work the same on your machine, and they will work the same
in GitHub, or any CI that you use for the matter.
GitLab, whatever you use, Jenkins,
even Jenkins. Even Jenkins.
Even Jenkins. It makes a couple of,
for example, like the whole provisioning
for the engine to be automatically provisioned it makes a an assumption that docker is available
and it's because you know it basically needs to spin up a container or where all this runs
so if you don't have that then you get like into issues where our platforms on the links is
supported anyways so it just basically shortcuts a lot of things in production we run these engines
in kubernetes in our case we run them changelog we run them on fly so we have a bunch of engines
of dagger engines deploys on fly we spin them up on demand they're just machines we suspend them
when we're done when the pipelines start we spin them up we run the code against those engines
they're stateful we have you know everything They're super fast because everything is cached.
And then we spin them down when the job is finished.
So that's why we don't have to worry about this.
And we're not using the built-in Docker that comes with GitHub Actions.
So we don't make use of that because we run our own engines.
And by the way, all that code is in changelog.
In this case, it's slightly different because the nightly repo is different.
So we do use the docker and github actions it automatically provisions the dagger engine and
then everything runs there and by the way you can look at the actions because part of this pull
request we added a new action it's called ship it and you can go and check it out this is a github
action you can see how that runs how long it takes a few things so go build worked now i ran go test while you were talking because i got bored
No offense, Gerhard.
No, it's all good. I talked for a while, though. All good.
Yeah. And it ran for you... just for our listeners, sorry, I had to play that back: 50 examples, zero failures. Okay, so all my tests are passing, I just want to point that out. But it took 20 seconds to run, and the test runner itself took 0.85 seconds to run. So is it doing a lot of setup every time you run, because it's inside of this whole pipeline, or is it not caching gems? I mean, it's faster than the build was, but it was still 20 seconds to run the tests.
So I think it will show you what was cached and what wasn't. If you look at the output, it tells you all the operations that were cached. Can you see that?
Yeah.
So which parts weren't cached?
It looks like exec bundle install was not cached. It ran it; it fetched three gems, reused the rest, installed a few.
Maybe it's because some of them were test-only dependencies, so that's probably why it had to run it again. Let me run it now a second time and see. Because by default it says without test, if you look at the output, and when you want to run tests, it says with test. There you go. So there it is: 3.57 seconds, and those were all cached. Much more reasonable.
Pretty cool. So by the way, I ran the same thing, and for me it was nine seconds. I ran exactly the same thing as you. The gems had to be installed. Sometimes the internet connection has something to do with it.
Sure. We're talking seconds, it's not a lot, but still.
This is rad. So what if I want to hack on this now, and I want to, like, run the web server, or the Rake file, et cetera?
That's the new stuff that's coming. It's not out yet. That is the cutting edge.
Oh yeah, I always know where the edge is, don't I? I can always find the thing...
Yeah, you can't do that, but you can spell it.
Yeah. All this stuff is on the edge. So there's a Dagger shell coming, there's a dagger serve coming, and by the way, these are all experimental features which may change. But exactly to your point: you want to be running this locally, you want to drop into that environment that's been created for you, you want to do a bunch of things in these contexts. Now, there is something special about the build, and this is, again, in the pull request, in that you can use --debug. And what --debug does... you can open the code and see exactly what it does, but in a nutshell, it builds a local image and exports it. This is a container image that you can then import into Docker, and then you can get into the context of that. This is a temporary workaround until we get the Dagger shell available. It makes a couple of assumptions: it asks you to have a .env... it basically requires these files to exist so that it works locally. I mean, they don't need to be valid or anything; you don't have to have production credentials, but they need to be set. And it also needs github.db. That's the other thing that I wanted to talk about: how this is basically wired together, and what else we need from Nightly to finish the migration. So I know we have the secrets that we need, but there's also github.db, which, in my understanding, is just SQLite. So as long as we, you know, stop it and move that across... actually, we don't even have to stop anything, because there's nothing to stop. Just move it across. And that's what I did, just manually. This is the database, by the way, that we backed up thousands and thousands of times...
Yes.
Over to S3, and realized we had gigabytes of backups of Nightly.
Right. So we have plenty of copies. If you need github.db, I can find you thousands. So that's all good. The question is, as you know, in Fly there's the dist directory, which is stored on a volume, but the database is currently stored in the context where the app code is, and that is not using a volume. That's just the container image. So I think we'll need to relocate this database to a place which is persistent.
So Fly has some SQLite features, don't they? Where you can just use their SQLite?
It does. LiteFS, yes, that is correct. We can use all of that. But again, we need to put this database on the volume, the one which has the LiteFS feature on. So I think we'll need to make a change around, you know, where we configure this database and where it's stored. Because right now I think it's hard-coded, the github.db part. I think it's just local to the app code. So the app code will have to change to find where the database is, basically.
Right.
So we need to have some sort of flag which is able to configure where this database is stored, and then we put it on a LiteFS volume.
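That flag could be as simple as an environment variable with a fallback to the current location. Here's a minimal Ruby sketch, where GITHUB_DB_PATH is a made-up variable name, not anything Nightly has today:

```ruby
# Resolve the SQLite file location: prefer an explicit path (e.g. a
# LiteFS/volume mount on Fly), otherwise fall back to the historical
# location next to the app code.
def db_path
  ENV.fetch("GITHUB_DB_PATH") { File.expand_path("github.db", __dir__) }
end
```

With something like that in place, pointing the app at the volume is just a matter of setting the variable in the Fly config.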
The other thing is that, in addition to that... so, .env is easy to replace on Fly, obviously, because you just declare environment variables. But specifically, this uses BigQuery, which requires a key. You can't do environment variables; you have to have a file. And I know that's been an issue in the past with read-only file systems, or things that are going to get wiped away. How do you actually get that file into place? Is that going to be a problem?
So I was thinking, well, it needs to be a secret, right? So we could store it on a persistent volume, the same one as the database, maybe. But I think it really should be an environment variable.
I think it's a binary file, right? It's not like a text file; you can't read it. I don't think we have that option, because it's an old Ruby gem that's reading this file. It's not code that I wrote that loads it into memory. So if we can somehow put it into the secrets, and then, when the app boots, write that into a file, then the app would read that file.
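One common way to do exactly that, sketched in Ruby: keep the key base64-encoded in a secret, and decode it to disk at boot, before the gem goes looking for its file. BIGQUERY_KEY_BASE64 and the target path are hypothetical names, not existing config:

```ruby
require "base64"

# At boot, turn the base64-encoded secret back into the key file that
# the old gem insists on reading from disk.
def materialize_bigquery_key(path)
  encoded = ENV.fetch("BIGQUERY_KEY_BASE64")
  File.binwrite(path, Base64.strict_decode64(encoded))
  path
end
```

Base64 sidesteps the binary-file problem, since the secret itself stays plain text.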
Maybe. I mean, the other thing which we can do... again, I know it's not ideal, but the container image only gets pushed directly to the Fly registry.
Sorry?
Yeah, exactly: to the Fly registry. And this is the app's Fly registry, so with authentication, really only Fly can read this image. It's not on GHCR or anything like that, and even there it could be a private image, but this goes directly to Fly.
So could we embed this secret in the image?
Yeah, if the image is not going to be distributed, then we could certainly do that.
Yeah, it's just distributed for deployment purposes; otherwise it won't be public, it won't go anywhere. Now, if CI is doing that image creation, then it would have to have access to this.
Yeah. So that would have to be private somehow. However, that's where 1Password comes in. With the 1Password integration, when the pipeline runs, it could read this file directly from 1Password; it wouldn't even need to be stored in CI. That's the improvement which I've talked about doing for a while, for Changelog as well. The idea is that we don't want to be storing all these secrets in GitHub Actions. We just want one secret, which is the 1Password service account key, or token; I think it's called a key. And then with this, we get access to specific secrets in 1Password, which are read-only. And then, once you have this one secret stored in GitHub Actions, the pipeline can get access to everything else it needs.
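As an illustrative sketch (the vault, item, and field names here are invented, and this isn't runnable as-is), that flow with the op CLI looks something like:

```shell
# The single secret CI holds: a 1Password service account token
export OP_SERVICE_ACCOUNT_TOKEN="<the one GitHub Actions secret>"

# Everything else is fetched on demand, read-only, including whole files
op read "op://Infra/BigQuery/credential" > bigquery.json
```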
That's how I do it, including files, right? Because you can store files in there, too.
Is the key all that's necessary, or is there a cert involved, kind of similar to an SSH key kind of thing? Or is the key itself the key?
It's more like a token. If you think about an API key or a token, that's what this is. This is a new feature that 1Password introduced. They're called service accounts. And before, you had to have a Connect server running, which then connects to vaults. It was a more complicated setup; we had this extra component. And I was very excited about the service accounts, which were announced, I think, in January of this year. And they're finally generally available. The idea is that as long as you have this token, this API key, you can use op, the 1Password CLI, to talk to 1Password.
What's up, friends?
I'm here with Vijay Raji, CEO and founder of Statsig, where they help thousands of companies
from startups to Fortune 500s to ship faster and smarter with a unified platform for feature
flags, experimentation, and analytics.
So Vijay, what's the inception story of Statsig? Why did you build this? Yeah, so Statsig started about two and a half years ago. And before that, I was at Facebook
for 10 years where I saw firsthand the set of tools that people or engineers inside Facebook
had access to. And this breadth and depth of the tools that actually
led to the formation of the canonical engineering culture that Facebook is famous for. And that also
got me thinking about how do you distill all of that and bring it out to everyone if every company
wants to build that kind of an engineering culture of building and shipping things really fast,
using data to make data informed decisions,
and then also informed like what do you need to go invest in next. And all of that was like
fascinating, was really, really powerful. So, so much so that I decided to quit Facebook and start
this company. Yeah. So in the last two and a half years, we've been building those tools that are
helping engineers today to build and ship new features and then roll them out.
And as they're rolling it out, also understand the impact of those features. Does it have bugs?
Does it impact your customers in the way that you expected it? Or are there some side effects,
unintended side effects? And knowing those things help you make your product better.
It's somewhat common now to hear this train of thought where an engineer developer
was at one of the big companies, Facebook, Google, Airbnb, you name it. And they get used to certain
tooling on the inside. They get used to certain workflows, certain developer culture, certain ways
of doing things, tooling, of course. And then they leave, and they miss everything they had while at that company. And they go and they start their own company, like you did.
What are your thoughts on that?
What are your thoughts on that kind of tech being on the inside of the big companies?
And those of us out here, not in those companies without that tooling.
In order to get the same level of sophistication of tools that companies like Facebook, Google,
Airbnb, and Uber have,
you need to invest quite a bit. You need to take some of your best engineers and then go
have them go build tools like this. And not every company has the luxury to go do that,
right? Because it's a pretty large investment. And so the fact that the sophistication of those
tools inside these companies has advanced so much that it's left behind most of the other companies and the tooling that they get access to. That's exactly the opportunity where I was like, okay, we need to bring that sophistication outside, so everybody can benefit from it.
Okay. The next step is to go to statsig.com/changelog. They're offering our fans free white-glove onboarding, including migration support, in addition to 5 million free events per month. That's massive. Test drive Statsig today at statsig.com/changelog. That's S-T-A-T-S-I-G dot com slash changelog. The link is in the show notes.
OP is such a weird one. Just now, when you said 1Password, I heard the word one... password, which is OP. For the longest time I've been like, what does OP stand for? Like, what is that? I mean, I've used it before, and I'm like, why is it OP?
I know.
Finally, it makes sense.
Why wasn't it 1P? I don't know.
No idea.
Well, in certain contexts, can you even start command names with a number? You know, in programming languages, certain variable names cannot start with a number; they have to start with a letter. And so op might just be more globally useful than, like...
And that's probably why, honestly.
Up until now, though, literally when he said 1Password, I'd spell out the word one in my brain. I think, okay, that's why it's called OP.
I just thought their command line was overpowered this whole time. So I thought they were just calling it OP, you know.
There you go. No, there are a few binaries like that. I have, for example, 2to3: the digit two, the letters t-o, the digit three. Apparently it's in a Python package, lib2to3.
I had that as well.
Oh, it's the upgrade-from-Python-2-to-Python-3 kind of thing. You could just alias 1p to op if you wanted to, of course, but that's the easy button.
I like that idea.
That's what we're doing in our pipeline, 1P.
1P.
This key that you have for 1Password, it's in GitHub Actions and no one can see that, right?
That's not something that's public.
It's a secret, correct.
It's a secret secret.
That's cool.
That's cool that we could do that on the fly via CI though,
because, you know, that's the way you want it.
Exactly.
And then we can modify the secrets in 1Password, and we no longer have to update them anywhere else, because whatever is connected to 1Password is able to retrieve the latest values.
This will be so much nicer.
That is the way to do it.
I sure hope they win, okay? I sure hope they win, because there are just some... as a daily active user of 1Password for a decade or more, there are some oddities with how it operates from a UX standpoint.
Yeah.
The application, great. Even the application I have some issues with, but whatever. It's a little strange. So I just hope they figure out how to win long term.
Yeah.
Because that's a great feature.
Well, we need another password manager. We need two. We've already established that.
Passbolt. Okay, tell us more.
Well, what I know about them is they're open source. We did talk about that via DM. And I did reach out to the CEO, who I spoke with through our ad spots. I was impressed, and I hadn't considered the license. So this is one of those moments where I was like, okay, you say you're open source; I'm going to assume you're the best version of open source. But I think it's AGPL, which is frowned upon in some cases. It's not that it's not open source; it's just not always viewed favorably.
It conforms to the open source definition, does it not?
Right, it does conform, but some businesses have issues with it. And I think the TL;DR response from the CEO was: because this is used like an application, it doesn't have an issue; whereas if you're trying to consume the software and repackage it, that's when AGPL actually has more of a compliance issue. And in his defense, I think that's pretty accurate. Although the premise of Passbolt seems quite awesome, where they actually have, if I understand correctly... different than 1Password, where we have
one secret and you get access to the whole vault, which is encrypted. Passbolt encrypts more like ACLs, where you have more fine-grained access controls to particular buckets, basically, or folders, similar to a file directory. And it's really designed for teams who may have multiple projects with hundreds of passwords per project. Whereas if I go into 1Password now, if I'm not in my particular Adam-only 1Password, it can get a little messy with all the extras in there. Like, I can search for something, and now I'm seeing our organizational secrets, where I don't really care to see them. Like, I would love to have the same kind of access, but I don't really care, when I'm searching for, you know, adamsfavoritepassword.com or whatever, to find all of our GitHub secrets and other things. It's just messy. So I think there are some things Passbolt is doing to compartmentalize where you keep your secrets, who you give them to, who has access to them, and different stuff. Their approach toward cryptography and sharing and decrypting and encrypting is a bit more fine-grained.
Well, I'm all for trying it as a second alternative.
Always need two, right?
What if the primary one fails?
Need to figure out how to sync things between the two,
but that sounds fun.
I've got to have two, right?
You have to have two, correct.
You should have two.
I'm not sure you have two.
You should have two.
You should have two, yes. Even when you don't know it.
Yeah, that's right. Exactly. Even when you forget about it.
Oh, hang on. You're right, I do have two. Yes, I didn't forget about that.
We could fix that problem in 1Password as well, by the way, Adam. We could create a new vault. Currently, we have a shared one, which is shared amongst all of us. I say all of us... the three of us. But we could create another one which is infra-specific, or whatever-specific, and then just, you know, a few of us can be part of it.
I mean, it's not that big of a deal. It's more like... I suppose if you're in a larger organization... Again, we're a smaller team, so we have smaller-scale problems.
Yeah, I think they're more like warts in this scenario. Like, it's not that big of a deal, I can operate around it. But I use 1Password personally, and in business, and then also in the secrets context. I've got like three different contexts I use my 1Password in. And so I was just complaining a little bit that whenever I'm using it personally (like literally Adam, in this context, not Adam as part of Changelog, just my own secrets in there) I've got to wade through, you know, SSH keys and just different things that are part of our infrastructure stuff. Which isn't that big of a deal, except it's just not relevant in that moment.
Got it. Maybe it would make more sense to have an actual separate vault. Aren't there separate vaults? Like, I have a private and a shared vault in mine. I don't use it personally, so I don't have this problem, so I'm not sure exactly the context, but it seems like you could just activate your private vault and not see any of our stuff.
I think in 1Password you can disable shared vaults.
But I'm not sure.
Because you just basically toggle visibility.
You say, I don't want to see this vault in my 1Password.
I have a private, Jared.
We have changelog.com.
We have a shared.
And then you can have more sub-vaults.
So everything that I'm talking about is in changelog.com.
But whenever I search... this is all kind of almost TMI, in a way...
But whenever I search in 1Password, it's...
It's everything.
Yeah, it's all vaults.
That's on iOS and desktop.
What was your master password set to?
I forgot.
TMI?
Is that TMI?
That is TMI.
Yeah, you almost got me.
Just kidding.
So close.
So close.
Mother's maiden name?
Here you go.
It's capital B...
Or the name of your first pet.
Yeah.
So yeah, when you search in there, it's everything.
But I know what you mean.
I mean, there's a couple of things like in the UI,
which, you know, I wish they were improving just as much as we are.
I really wish that 1Password did that.
They should Kaizen things.
They really should Kaizen things.
They should consult us on Kaizen things.
They should hire us to come there
and tell them the Kaizen stuff.
Don't you guys know
that you're supposed to be improving continuously?
Here's the t-shirt to prove it.
Okay, so I'm excited about ChangeLog Nightly.
This is really cool.
We're in the last mile.
It'll probably take us to the next Kaizen before we get this over the finish line, it seems. No, Gerhard?
Oh, well, I think there's just a few things which we need to figure out. I don't think it should be that long. I mean, I was sitting on it for a while, I have to say. But I think it'll be more interesting to figure out whether we want to serve these assets from Fly.io, like the files, or whether we want to upload them to R2.
I'd just serve them.
Okay.
I mean, these are like as low traffic as you can get.
It literally just, we send one version of this
to Campaign Monitor and say,
email this to this many people.
And then we serve the index file
for anybody who hits open in web.
Okay.
So I just don't feel like any more than that's necessary.
What about buffer?
Do we keep buffer or can we remove that part?
No, we can remove buffer.
Okay.
We don't even use buffer anymore.
Yeah.
Because I know, like, the scripts... whatever it's called, script-something... if you look in the crontab, it does the generate, the deliver, and there's one more, buffer.
So you want generate and deliver.
Yeah, I think the buffer is probably a no-op at this point.
I definitely disabled it.
Okay.
So it probably just isn't doing anything.
Yeah. Maybe there's a bunch of cleanup to do there as well. Maybe. Or just leave it as is. It's all good.
Okay. And then we want to deliver after we generate, and don't deliver if we don't generate.
Correct.
Okay. What if something fails? How would you know that something fails?
I won't get the email.
Okay, that's a good one.
I've enabled the Sentry DSN.
Okay, what's that?
So Supercronic has this built-in integration with Sentry, where you can see your crons that fail directly in Sentry. So I've set up the same Sentry DSN that we have for the app, for the Changelog app. So you should be able to see failures in cron using the Sentry integration.
That's cool. I'll take it.
I thought so too.
It's like, oh, I can enable this variable. I enabled it. It's a public one, by the way. Don't use it, because we don't want to see your failures, basically.
I mean, Sentry says it's okay for it to be public, and we have had it public for a while. No one has, you know, thought about this.
So yeah... why are you telling everybody?
Well, we don't want to see your failures. That's the point. I mean, why would you spam us with your failures? Go and get yourself a key. That's the better idea, I think. Right? I mean, I don't even want to see my own failures, let alone somebody else's.
Right. Exactly. Maybe only if someone wants us to Kaizen their failures. That's the only reason why they would share them.
That's right. So that's cool.
We have to figure out what we're going to do here, though.
For what?
Because we're going to get back another $30.70 when this move is complete.
I think maybe we go out at KubeCon and we treat ourselves, you know?
To have a drink.
Hang on. How are we getting $30 back?
We pay for DigitalOcean.
You know, DigitalOcean...
that's what can go away.
Oh, I see.
Yes.
Yeah, I don't know why
it's $28 a month,
but it is.
You beefed it up at some point
because that's not
bottom of the barrel.
That's like...
No, I don't know
what's making $28.
You can usually get in
on those things at $5, $10
on every VPS provider,
pretty much $5 entry fee.
So $20, that's a
beefy machine.
I don't think I created that server. It's been so long. I'm sure that whoever created it forgot. It's been 10 years, almost. In fact, I'm almost certain I didn't do it, but...
Well, I think that's the cleanup. I think that's something that we can follow up on. But the important thing is that we can go out for drinks at KubeCon, when we see each other, November 6th through 9th, in Chicago.
Oh yeah. Finally, after seven years, we finally meet one another.
Oh my gosh. I'm not sure what I'll do.
Well, you'll realize there's more than a head. Like, wow, there's more to you. Look at the rest of you. Hold still... there's your arms! Oh my gosh, you've got arms.
We've only been seeing each other's heads for seven years now. It'll be so weird.
It will be weird, yeah. So we're excited. We're going to be there. We're going to be doing some recordings at... not at the venue, because reasons. Mostly reasons that I don't understand. But regardless of that, at the Marriott Marquis, which I assume is right next to the venue. It's connected via breezeway. Because people aren't going to want to Uber or Lyft over to talk to us.
And we'll also, of course, be at the main show and just enjoying conversations.
Gerhard, you've been to this event before.
Adam, you have too.
I have not, so I'm not really sure what to expect.
Maybe you guys should talk because I don't even know what it's like.
It's big.
It's lots of people.
It's a lot of energy.
A lot of things are happening.
Well, he's been there more recently. It's been bigger since I was there, which was just before the pandemic, which was probably the biggest it had been since I'd been there.
I think the last year I was there was 2018.
Seattle.
That's been a while.
Big conference.
A lot of fun stuff.
Imagine nearly 20,000 people.
Is it that big?
20,000?
Close.
Yeah, it's tiring too.
People overload. I could
only imagine being like a superstar
there having to talk to everybody about Kubernetes
and the cloud native
computing foundation and this
direction of everything.
Like Brendan Burns for example. When I talked to him
there last time, I think he
almost fell asleep during the conversation.
And it's not because I was boring.
See how fast you had to qualify that?
You knew it was coming.
He preempted it.
No, it wasn't because of that.
It was the last day.
Actually, it wasn't him.
It was somebody else that almost fell asleep.
But I can't remember the person's name.
They're from Weaveworks.
Point being is that people were tired.
Very tired.
Yeah.
It is high-energy. You need to pace yourself, for sure. I've been to a few.
You do. It's a marathon. Like, four days for some of these folks: the trainings, the workshops, the pre-meetings, the deals that might happen as a result of being there. You know, if you run a company, for example, you're probably not just going to be there talking to people on podcasts; you're probably going to be selling your thing.
Yeah.
I think you need to be very deliberate
about the people that you want to talk to.
And it's something which I've learned.
And there's a couple of people that, you know, if you really want to talk to them, make sure you talk to them before the conference, and you set something up.
I want to give a shout out to Frederick
from Polar Signals.
He reached out. They launched Polar Signals Cloud today, and I'm so excited to be talking to him. Ship It #57 was the last episode when we talked, and we had a couple of conversations, starting with KubeCon 2019. That was our first one. You can, you know, find him on Changelog, and we'll be talking about Polar Signals Cloud, Parca, a bunch of things. We had Shipmas... we had a couple of things with Frederick.
Solomon. I really want to record with him. Really, really want to record with Solomon.
Yeah. So that's another one. And Eric.
Have you ever recorded with...?
You have. It's been a while though, right? You had one on Ship It a while back.
We had a few, but it's been a while. But never solo. It's always been with somebody else.
Never solo. That is correct.
I'm thinking one-on-one, maybe.
Maybe. Well, you have to talk to the person.
I know, which is you, right?
No, it's us, technically. But, you know, just saying.
Yeah, could be fun. Putting it out there. But again, I would really, really like to talk to him also.
Eric, I mentioned, he's a BuildKit maintainer.
He's been doing some really cool things with distributed caching.
Think R2, S3 on steroids, V2, basically all the, like some really cool stuff there.
All the twos.
All the twos, exactly.
He's on all the twos.
So, yeah.
And I'll be at booth N37. Dagger will have a booth, so you can come and say hi. I'll definitely be there. Or you can also record with us in the Changelog room. So we can talk in a bunch of ways. But I'll say: be deliberate. If you're hearing this and you want to talk to us, make sure that you reach out, because we may not meet.
Yeah, that's how crazy it gets.
Not because we don't want to... because we didn't know.
Yeah. I would say Twitter DMs, or Slack, that's free and open, or email, that works as well: changelog.com/community.
Mastodon. You can hit us up on Threads.
Or on Instagram.
Or just leave a comment on this episode.
That also works.
Right?
We have comments on the
website.
That's true.
Yeah.
Does that cover all of
our communication media?
LinkedIn?
Yeah, we do have a
LinkedIn.
You can get us there.
Gerhard's on LinkedIn.
I deleted it and I had
to create it.
All the spam.
Anyways.
Well, we do have the phone number as well,
that they can call. You can call us as well.
If you want to. Do we have a phone number?
Yeah. Really? We have a phone number.
Oh, yes. We do have a phone number. It's right
there in Fastly's cache. I would say hello.
Adam is looking it up. He doesn't know.
Well, I know it.
I just forgot the middle numbers
there. So it's been, and always
has been, 1-888-974-CHLG...
or 2454.
So 888-974-2454.
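As an aside, the CHLG-to-2454 mapping follows the standard phone keypad, which is easy to check. A throwaway sketch, not anything from the Changelog codebase:

```python
# Standard phone keypad letter groups.
KEYPAD = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
          "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}

def letters_to_digits(word: str) -> str:
    """Map vanity-number letters to their keypad digits; pass digits through."""
    lookup = {ch: digit for digit, group in KEYPAD.items() for ch in group}
    return "".join(lookup.get(ch, ch) for ch in word.upper())

# letters_to_digits("CHLG") -> "2454", matching 1-888-974-2454.
```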
Again.
I'm just kidding.
No worries.
And if you call that number, Adam will answer it.
I will.
It'll be awkward unless you have something to talk about.
So if you're going to call it, think of it ahead of time.
What would I want to talk to Adam about?
That's right.
Otherwise, it's just going to get super awkward quickly.
And we've had some of those conversations that haven't turned out.
It still might get awkward.
You can give him some kudos.
Obviously, we haven't praised each other enough in this episode.
So we can always do that.
No, I think if we're going to get kudos,
you should put them directly into our transcripts repo.
Oh yeah.
So we can train our neural search engine on them, you know?
That's right.
Well, KubeCon, we're coming for you,
whether you like it or not.
Yep.
It's been many years since I've been there
and I'm excited to meet you face to face.
Yeah, me too.
Gerhard.
Me too, me too.
To see the rest of your body beyond your head.
That's too funny.
You said it.
I know.
You see, that's how it happens.
Just plot the idea and then see what happens.
Oh my gosh.
Chicago.
We got 30 more bucks to spend because of DigitalOcean going away.
So we're coming there on fire, with some money to spend.
30 bucks per month. Amortize that thing for a year.
Yeah, we have to switch it off first. It doesn't count otherwise.
Okay. Well, you better wrap that PR up then. Come on.
I know, the pressure is on us. That's the Kaizen.
Yeah, Kaizen. Will you be wearing your Kaizen t-shirt, do you think?
Who is that question addressed to?
Yeah, who are you asking?
We all have Kaizen t-shirts.
I don't have one.
Both of you.
What?
Nope.
You don't have one.
We sent you one.
No.
How is this possible?
Well, maybe you did,
but I never received it.
It never came?
Well, you know,
there was a couple years
there where shipments
were not easy to land,
but I'm pretty sure
didn't I at least
give you a coupon code or something for the merch shop at some point?
Yeah, I don't... we talked about that. But everybody that we work with has our shirts, and the fact that you don't is a crying shame. I think we should just bring some to Chicago with us or something.
That's a good idea. It's just a month away. Then we don't have to worry about all this shipping across the Atlantic.
Or the Pacific.
Well, I'll go to merch.changelog.com and I will order a Kaizen t-shirt and have it
shipped to the Marriott Marquis because that's where you'll be.
Yeah.
That sounds dangerous.
Do we have the size?
Because, oh, we don't have the size.
That was the problem.
Oh.
And that, yeah, that was the case for as far as I can remember.
You think so?
Yeah.
We better get to work on that.
That's what I said last time.
Well, this is embarrassing to end the show like this.
I guess we'll have to iterate.
Well, that's one thing to improve.
And by the way, we can talk about all the other things in person
about what else you want to improve,
unless there's something specific that you want to shout out now. What happens is, I listen to these not only to realize which jokes didn't work out, but also what should I do next. So if there's anything that we want to shout out, now is a good time.
I think we just want to get Nightly shipped, so that we can switch off DigitalOcean, spend money at KubeCon.
Okay, nice. Anything else?
I've got 30 bucks that want to burn a hole in my pocket.
That's a quick one. I'm pretty sure we can do that this week, even before this goes out.
Challenge accepted.
I don't have any other marching orders at the moment, Gerhard. I think we'll think of some in the meantime, for our next Kaizen. But right now I haven't put much thought into what we should do next... okay, or what you should do next, in that regard.
Yeah.
I know Passbolt came up.
There's like a bunch of things like that.
Erlang releases.
So there's always things.
I think it's just a matter of priority.
Yeah.
And if you can't think of anything, oh, there's so much to do.
Right.
It's just like nothing obvious.
That's good.
Cool.
Cool.
Well, let's end it right there.
If you're going to be at KubeCon in November,
hit us up via any of those channels and let us know.
We'd love to record.
We'd love to just say hi.
We'd love to see the rest of everybody's bodies in Chicago.
Gosh, that'll be amazing.
Maybe too much.
So many bodies.
Everybody.
We look forward to seeing everybody.
We want to see everybody there.
Yeah, that's it. All right. Kaizen.
Cool. Kaizen. Bye, friends.
So if your body is going to be, or wants to be, in Chicago for KubeCon 2023 in November...
We want to meet you.
And one of the hidden secrets here at ChangeLog is we have a free Slack community.
Yes, you can go to changelog.com slash community
and tons of folks are in there
always talking about something.
It's a lot of fun.
So if you want to be in there,
just go to
changelog.com/community and sign up. It is free, of course, and we want to see you in Slack. So hey, if
you're going to be at KubeCon, hop in Slack, say hello on Twitter, send us a DM, whatever it takes.
Just let us know you want to meet up, and we will happily give you instructions. We'll likely also put these instructions on social
media that you'll see. So that could work too. But either way, we'd love to say hi. We'd love
to meet you. If you're a fan of Kaizen like we are, we also have an awesome Kaizen t-shirt. You
can find that t-shirt and it's actually one of our most popular t-shirts. You can find it at merch.changelog.com.
And sadly, we learned on this episode that Gerhard does not have his shirt.
And I think the reason why is we only have 2X and small shirts available.
We have to do a restock on this.
But this is our most popular: the Kaizen t-shirt. Continuous improvement. merch.changelog.com. Threads for fans of Ship It and continuously improving. Love that t-shirt, so check it out.
But hey, friends, the show is over. This show is done. Next week we will be at All Things Open.
Monday, News will happen.
And I'm not sure of the rest of the schedule, but we're playing it by ear right here in post-production.
If you're going to be at All Things Open, say hello.
Say hi.
DM us.
Hop in Slack.
All the good things.
We'd love to meet you.
Well, that's it.
This show's done.
We'll see you again soon. Kaizen. Game on!