The Changelog: Software Development, Open Source - Kaizen! Just do it (Friends)

Episode Date: September 20, 2024

Gerhard Lazu joins us for Kaizen 16! Our Pipe Dream™️ is becoming a reality, our custom feeds are shipping, our deploys are rolling out faster & our tooling is getting `just` right....

Transcript
Starting point is 00:00:00 Welcome to Changelog and Friends, a weekly talk show about the perfect name. Thanks to our partners at Fly.io. Over 3 million apps have launched on Fly, including ours. You can too, in five minutes or less. Learn how at Fly.io. Okay, let's Kaizen. What's up, friends? I'm here with a new friend of ours over at Assembly AI, founder and CEO Dylan Fox. Dylan, tell me about Universal One. This is the newest, most powerful speech AI model to date. You released this recently. Tell me more. So Universal One is our flagship industry leading model for speech to text and various other speech understanding tasks. So it was about a year-long effort that really is the culmination of the years that we've spent
Starting point is 00:01:10 building infrastructure and tooling at Assembly to even train large-scale speech AI models. It was trained on about 12.5 million hours of voice data, multilingual, a super wide range of domains and sources of audio data. So it's a super robust model. We're seeing developers use it for extremely high accuracy, low cost, super fast speech-to-text and speech understanding tasks within their products, within automations, within workflows that they're building at their companies. Very cool. So Dylan, one thing I love is this playground you have. You can go there, assemblyai.com slash playground, and you can just play around with all the things that is Assembly.
Starting point is 00:01:51 Is this the recommended path? Is this the try before you buy experience? What can people do? Yeah, so our Playground is a GUI experience over the API that's free. You can just go to it on our website, assemblyai.com slash playground. You can drop in an audio file. You can talk to the playground. And it's a way to, in a no-code environment,
Starting point is 00:02:10 interact with our models, interact with our API to see what our models and what our API can do without having to write any code. Then once you see what the models can do and you're ready to start building with the API, you can quickly transition to the API docs, start writing code, start integrating our SDKs into your code to start leveraging our models and all our tech via our SDKs instead. Okay. Constantly updated speech AI models at your fingertips. Well,
Starting point is 00:02:36 at your API fingertips, that is. A good next step is to go to their playground. You can test out their models for free right there in the browser. Or you can get started with a $50 credit at assemblyai.com slash practical AI. Again, that's assemblyai.com slash practical AI. Kaizen 16. Gerhard, what have you prepared for us this Kaizen? I think every time I don't know what to expect. And this time I do know what to expect. So what changed?
Starting point is 00:03:14 What's new? What's fresh? Well, I share the slideshow. I mentioned last episode, I have a slideshow with my talking points, a couple of screenshots, things like that. This time I shared it ahead of time and I prepared ahead of time as well. But also I've been making small updates to the discussion,
Starting point is 00:03:35 I think more regularly than I normally do. Discussion 520 on GitHub. I mean, we always have one for every Kaizen, but this time I just went a little bit further with it. And I think it will work well. Let's see. All right, well, take us on this wild ride. Adam's also here.
Starting point is 00:03:51 Adam. What's up? Hey, Adam. Everything's up. Whenever someone asks me that, everything's up. That's the SRE answer. Everything's up. Everything is up.
Starting point is 00:04:00 Otherwise, I'm not here. If something's down, I'm not here. You know it's up because Gerhard's here. Yep, so everything's up. Otherwise I'm not here. If something's down, I'm not here. You know it's up because Gerhard's here. Yep. So everything's up. I like that. Well, last Kaizen, we talked towards the end about the pipe dream. Oh yeah. That was the grand finale. So maybe this time around, we start with that. We start with a pipe dream. We start with what is new. Start where we left off. Exactly. Love it. So we mentioned that, or at least you mentioned, Jared, that, was it Adam? Can't remember anyways.
Starting point is 00:04:29 We will clarify this after I mention what I have to say. Wouldn't it be nice if we had a repository for the pipe dream, self-contained, separate from the application? Whose idea was it? I think it was both of ours. Adam said, can this be its own product or something? And I said, well, it could at least be its own repo, something like that. That's right.
Starting point is 00:04:48 So github.com forward slash the changelog forward slash pipetree is a thing. It even has a first PR that was adding dynamic backends. And we put it close to the origin, a couple of things so you can go and check it out, PR1. And what do you think about it? Is the repo what you thought it would be? Well, for those who didn't listen to Kaizen 15,
Starting point is 00:05:13 can you tell us what the pipe dream is? Well, I think the person whose idea it was should do that. However, I can start. So the idea of the pipe dream was to try and build our own cdn how we would do it single purpose single tenant running on fly.io it's running varnish cache the open source variant and we just needed like the simplest cdn that we needed which is i think less than 10% of what our current CDN
Starting point is 00:05:48 provides. And the rest is just most of the time in the way. And it complicates things and it makes things a bit more difficult for the simple tasks. How the idea started, I would only quote you again, Jared. Would you like me to quote you again? That was Kaizen 15. So many quotes. Sure. Let's hear it. I like hearing what I have to say. I like the idea of having like this 20 line varnish config that we deploy around the world. And it's like, look at our CDN guys. It's so simple and it can do exactly what we want it to do and nothing more. But understand that that's a pipe dream. That's where the name came from.
Starting point is 00:06:27 Because the varnish config will be slightly longer than 20 lines and we'd run into all sorts of issues that we end up sinking all kinds of time into. Jared Santo, March 29th, 2024. Change it on my friends, episode 38. Okay, so there you go. What's funny is, you know how when you're shopping for a car and you look at a specific car, maybe you buy a specific car and then you see that same car and color everywhere. After this, I have realized not just hearing the word pipe dream, or maybe the
Starting point is 00:06:57 words, if we can debate, is it two words or one? But I actually realized I say that a lot. I call lots of things pipe dreams and I didn't realize it until you formalized it. And now I'm like self-conscious about calling stuff pipe dreams. I think I did it on a show just the other day. I was like, dang it, because now it's a proper noun. And I feel like it's a reserved word, you know? It's almost a product. Yeah, it's almost a product.
Starting point is 00:07:19 If you could package up and sell 20 lines of varnish, we would do it. But if you can't, we would at least open source it and let the world look at what we did. So it has its own repo and it has its own pull request. So it's going to be a real boy. Does it work? Does it do stuff? I mean, I know you demoed it last time and it was doing things, but does it do more than it did before or is it the same?
Starting point is 00:07:42 Yeah, I mean, the initial commit of the repo was basically extracted what would have become a pull request to the changelog repo that was initial commit and we end up with 46 lines of varnish config the pull request one which added dynamic backends and it does something interesting with a key with a cache status header we end up with 60 lines of varnish config. Why dynamic backends? That was an important one because whenever there's a new application deployment, you can't have static backends. The IP will change. Therefore, you need to use the DNS to resolve whatever the domain is pointing to. So that's what the first pull request was. And that's what we did in the
Starting point is 00:08:26 second iteration. Now I captured what I think is a roadmap. It's in the repo. And I was going to ask you, what do you think about the idea in terms of what's coming? So the next step would be to add the feeds backend. Why? Because feeds, we are publishing them to Cloudflare R2. So we would need to, you know, proxy to that, basically cache those. I think that would be like a good next step. Then I'm thinking we should figure out how to send the logs to Honeycomb exactly the same as we currently send them. So that, you know, same structure, same dashboard, same structure, same dashboard, same query, same SLOs,
Starting point is 00:09:07 everything that we have configured in Honeycomb would work exactly the same with the new logs from this new CDN. Then we need to do implement the purging across all instances. I think that's slightly harder because as we deploy the CDN in like 16 regions, 16 locations, we would need to expire, right?
Starting point is 00:09:24 Like when there's an update. So that I think is slightly harder, but not crazy difficult. And then we would need to import all the current edge redirects from our current CDN into the pipe dream. And I think with that, we could try running it in production, I think. Good roadmap. I dig it. So our logs currently go to S3, not to Honeycomb, in terms of logs that we care about. And I know that I previously said we only care about our MP3 logs,
Starting point is 00:09:58 not our feed logs in the sense of statistics and whatnot, but that has since changed. I am now downloading, parsing, and tracking feed requests like I am MP3 requests. And so we would either have to pull that back out of Honeycomb, which maybe that's the answer, or somehow have it also write to where S3 is currently writing to in the current format for us to not have major rewriting on the app side.
Starting point is 00:10:28 Thoughts on that? So we can still keep S3, whatever intercepts the logs, right? Because in our current CDN, obviously the CD intercepts all the logs. And then some of those logs, they get sent to S3 indeed. But then all the logs, they get sent to Honeycomb. So you're right, I forgot about the S3 part. So on top of sending everything to Honeycomb, we would also need to send a subset to S3 exactly as the current config.
Starting point is 00:10:56 So yes, that's an extra item that's missing on that roadmap indeed. Alright, cool. So we add that item to the roadmap and I think it's all honky dory. Do you know how you're going to implement Purge across all app instances? Like what's the strategy for that?
Starting point is 00:11:12 No idea. No idea currently. I mean, based on our architecture and what we have running so that we avoid introducing something new as a new component, a new service that does this, we could potentially do it as a job using OBAN, I think.
Starting point is 00:11:34 Because at the end of the day, it's just hitting some endpoints, HTTP endpoints, and it just needs to present a key, right? If we don't use it, anyone can expire our cache, which is a default in some CDNs. Yeah, we found that out the hard way. Exactly. So that's something that we need.
Starting point is 00:11:52 I think an O-band job would make most sense. It's actually pretty straightforward. We already have a fastly purge function in our app that goes and does a thing. And then we just change this to go and background Java reset on all these different. Now there has to be some sort of orchestration of like, the instances have to be known. Maybe that's just like a call to fly or something. I don't know how... DNS. Okay, DNS based, yeah. We can get that information by doing a DNS query and it tells us all instances and then we can get all the URLs. Yeah, that sounds like a straightforward way of doing it.
Starting point is 00:12:22 Where's the data being stored? We upload? Currently? In PipeDream. PipeDream is just a cache, so you mean where's the cache data being stored? Okay, so PipeDream is just, what exactly does PipeDream do? So PipeDream is our own CDN, which caches requests going to backends. So imagine that there's a request that needs to hit the app and then the app needs to respond. So the first time, like let's say the
Starting point is 00:12:54 homepage, right? Once the app does that, subsequent requests, they no longer need to go to the app. Pipedream can just serve because it already has that request cached. And then because PipeDream is distributed across the whole world, it can serve from the closest location to the user. To the person. Exactly. And same would be true, for example, for feeds, even though they are stored in Cloudflare R2. The PipeDream instance goes to Cloudflare R2,
Starting point is 00:13:21 gets the feed, and then serves the feed. Gotcha. And so Varnish is storing that cache locally on each instance. Correct. In its local disk storage, or however Varnish does what it does. So by default, we're using memory, but using the static backend like a disk backend would be possible, yes. I was just thinking about expiring because we just did this yesterday
Starting point is 00:13:42 where we had to correct deployed slash published episode and we ran into a scenario where fastly was caching obviously because it's the cdn and then i went into the fastly service and purged that url and then it wasn't doing what we expected and i bailed on it and handed it to jared and j Jared checked into R2 and R2 was also caching. And so we essentially had this scenario where our application was not telling the CDN that this content is new, expire the old, purge, etc. And I just wonder, in most cases,
Starting point is 00:14:19 aside from the application generating new feeds, which happens usually at the action of a user. So me, Jared, somebody else publishes an episode or republishes. Couldn't the expiry command, so to speak, come from that action and inform the CDN? Yeah, exactly. Which is how it works right now with Fastly.
Starting point is 00:14:44 Like after you edit an episode, we tell Fastly to purge that episode. The problem we had yesterday is that Fastly purged it, but then Cloudflare also had a small cache on it. And so Fastly would go get the old version again and say, okay, now I'm fresh. And so we had two layers of cache
Starting point is 00:15:00 that we didn't realize. And so that's probably fixed now, but yes, it would be basically everywhere in our app that we call Fast realize. And so that's probably fixed now, but yes, it would be basically everywhere in our app that we call fastly.purge, we would just replace that with pipedream.purge or whatever, which would be an O-band process that goes out to all the app instances.
Starting point is 00:15:16 I see. So the question was mechanically how to actually purge the cache, not so much when. Yeah, because we already have when pretty much figured out. Gotcha. Which is pretty straightforward really, because when we publish, because we already have when pretty much figured out. Gotcha. Which is pretty straightforward, really, because when we publish and we edit or delete, those are the times that you purge the cache. Otherwise...
Starting point is 00:15:32 What's the point? Yeah. Otherwise you don't do it. Please don't. It doesn't make any sense. Change hasn't happened, so don't change. Okay. How plausible is this pipe dream? Should we rename it to something else because it's not this pipe dream like is it should we rename it to like something else
Starting point is 00:15:48 because it's not a pipe dream anymore or more or less of a pipe dream obviously i'm not suggesting that naturally but like it becomes real does it become an oxymoron when it becomes real i don't know i quite like the name to be honest i think it has a great story behind it, you know? So it just goes back to the origin. And the CDN is a pipe, right? I mean, it is a pipe. Yeah, exactly. Yeah. Yeah, I like that pipe idea.
Starting point is 00:16:12 That was like one of the follow-up questions. Do we keep a space or introduce a space? Or no space? That's a really important decision. Space or no space? What about a tab? Should we put a tab in there? We can.
Starting point is 00:16:24 Camel case, no space, about a tab should we put a tab in there we can't camel case no space space what would the listeners think i mean you've you've been hearing this story for a while and you've you've heard us think i think we should have a poll and that's how you know i know that's that's how we end with names like boaty mcbote face yeah i'm very aware of that no this is not that we're just asking like how do we how what would be the the way to spell it that would make most sense pipe dream one word pipe space dream pipe tap dream i'm not sure about that i think we can do one like us for fun or camel case indeed i'm leaning towards one word the miriam webster dictionary and the Cambridge dictionary
Starting point is 00:17:05 both say that it's two words I'm seeing it two words everywhere except for old English where it was pip dream all one word I'm leaning towards one word though just pipe dream one word and I'm leaning in the other direction
Starting point is 00:17:20 so we need a poll great well the repo name is already like lowercase pipe dream no spaces no nothing no no dashes nothing like that so you know i think it would make sense so yeah all right we'll run a poll see what people think see what people want what give the people what they want correct and when it comes to when we do switch it into production whenever you know that that happens i think we could maybe discuss again whether we rename it when it comes to, when we do switch it into production, whenever that happens, I think we could maybe discuss again,
Starting point is 00:17:48 whether we rename it, when it stops being a pipe dream for real. For now, it's still like a repo. It's still a config. It runs. I mean, if you go to pipedream.changelog.com, it does its thing, but it's not fully hooked up with everything else that we need.
Starting point is 00:18:01 I have a new name. Pipe reality. Pipe reality. Just let it marinate not now not yet media pipe media i don't know pipe log pipe log oh oh here's a better one change pipe pipely i think that's the winner i think that's the way there. I think that's the way there. Oh, quick by the debate before someone else buys it. Pipe dot Lee.
Starting point is 00:18:34 Oh yes. That one's almost too good. Almost. Yeah. Is this really where we're marching towards? I know this began as literally a pipe dream and it's becoming more real. You've had some sessions. You've according some sessions.
Starting point is 00:18:52 You've, according to the, maybe I'm jumping the gun a little bit on your presentation here, but you've, you've podcasted about this slash live demo this. We've been talking about the name. We've been talking about the roadmap. Like, is this really a true possibility to do this successfully? Well, based on the journey so far, I would say yes. I mean, it would definitely put us in control of the CDN too. A CDN is really important for us. So it's even more important than a database because we're not heavy database users and we'll get to that in this episode, I'm sure. So a CDN really is the bread and butter. Now we need something really simple. We need something that we understand inside out. We need something that I would say is part of our DNA, because we're tech focused and we have some great partnerships. And we've been on this journey for a while. You know, it's not something that one day we woke up and we said, let's do this. So this has been in the making for a while.
Starting point is 00:19:46 We're almost forced. In a way, yes. I would say encouraged. In a way, we're pushed in this direction. There are other options. But I think there is this natural progression towards this. And it doesn't mean that we'll see it all the way through. But I would say that we are well on our way
Starting point is 00:20:05 to the point that I can almost see the finish line. I mean, even the roadmap, right? Putting the roadmap down on paper, it made me realize actually the steps aren't that big and we could take them comfortably between Kaizens. And I don't want to say by Christmas, but wouldn't it be a nice gift, a Christmas gift? What do you think?
Starting point is 00:20:25 I mean, I think that's a bold roadmap let me add this to the roadmap or maybe I'm not seeing it in the repo and it's their test harness is there a test harness no there isn't a test harness now I would love to be able to develop against this with confidence especially once we start adding those edge redirects and different things I would love to have that as part of the roadmap so that I can fire it up and create an issue I would love that yeah go for it cool open source for the win cool
Starting point is 00:20:53 so I'm going to open source the issue and then you open source the code amazing I love that just making sure you didn't say PR is welcome you're moving on cool yeah can we revisit the idea of this being a product? Single tenant, single purpose, simple, seems like a replicated problem set.
Starting point is 00:21:13 Honestly, I think so. Honestly, I can definitely see this being part of Flutter.io. Well, there's this name for which we cannot name in regards to Fly. It's more of a class class of people i would say it's probably that and i'll be i'll be even more vague sorry listeners that's so vague that i don't even know what you're talking about there is some inside information i'm not sure how much we can share but then there's like tigris that has led the way in a lot of ways and i just talked to
Starting point is 00:21:40 ovaise because by the way they may even be sponsoring this episode. Fly is not only a partner, but also a sponsor of our content. And I had a conversation with OVACE, who is one of the co-founders of Tigris. And he shared with me that if it weren't for Fly, it would have taken them years to build out all of the literal machines across the world with the NVMe drives necessary to be as fast, to be what Tigris has promised. And I don't want to spoil it for everybody, but Tigris basically is an up-and-coming S3.
Starting point is 00:22:14 And because of the way that Fly networks, and because of the way that Fly handles machines across the world, and the entire platform that Fly is, very developer-focused, Tigris was able, I think within nine months, to stand up Tigris. And so you can deploy Tigris via a single command in the Fly CLI. And then you can also have all your billing handled inside there.
Starting point is 00:22:41 This is not an ad, I'm just describing it. But when I said that back in the day i was thinking about tigers because i first learned about them and knew about this story and i knew they were built on fly i knew their story was only possible because of what fly has done and i think that this pipe dream is realized or capable of being realized because of fly being what fly is and i feel like we have this simple nature, sort of the, I said really simple CDN,
Starting point is 00:23:11 but I'm not tied to that because RSS is, you know, kind of one, the really simple part of it. But I think that's kind of what it is. It's like, I feel like other people will have this and it can certainly live in this world of fly. I don't know.
Starting point is 00:23:23 There's a possibility there. I think we build it for ourselves and then we'll know more are you thinking make it private the repo it's still not too late you're gonna rug pull these people before it's even there's a rug down well yeah no one's using it so yeah private rug and we're it's a 60 lines of varnish i think we're getting ahead of ourselves right i think so. But once we start adding the test harness, once we start adding the purging,
Starting point is 00:23:50 which by the way is specific to our app, but maybe that would need to be generic, by the way. So if this was to be a product, we would need to have a generic way of purging. It doesn't matter what your app is. So there's a couple of things that we need to implement to make this as a product. And in that case, it would be in this repo, I think.
Starting point is 00:24:09 But it could also be like a hosted service, like Tigris is, maybe. Especially if we get the cool domain, why not? I can see that. And this can be our playground, like the pipe dream can be our playground. like the Pipe Dream can be our playground. But then the real thing, with all the bells and whistles,
Starting point is 00:24:29 could be private. Yeah, I think we build Pipe Dream in the open and then if we decide there's a possibility there, then you genericize it in a separate effort. The one thing which I do want to mention is that there's a few people that helped contribute. So I'd like to this is also time for shout outs of course to matt johnson uh one of our listeners shout out to matt
Starting point is 00:24:52 and also james a rosen he was there from the beginning so the first recording that we did that's already live the second one as well that we recorded i haven't published it yet i still have to edit it but that was like basically the second pull request that we got together and even though a bunch of work you know went obviously in the background before we got together when we did get together was basically putting all the pieces you know so we did like in this very open source group spirit and um yeah so there's that so i think keeping that true to open source would be important. And if not, then we would need to make the decision soon enough so we know which direction to take. But you're right. Rug pulls, not a fan at all. We should never do that. And even the fact
Starting point is 00:25:37 that we're discussing so openly about this, I welcome that. I think it's amazing, this transparency. So that we're always straight from the beginning what we're thinking, so that no one feels that they were misled in any way. Agreed. Agreed. I like it. Well, the last thing that I would like to mention on this topic before I'll be ready to move on is that we live stream the CDN journey, a change log with Peter Banugo. There'll be a link in the show notes. We got together and we talked about where we started, how we got to the idea of the pipe dream
Starting point is 00:26:09 and where we think of going. So if you haven't watched that yet, it'd be worth. There was a slideshow. Not as good as the last one, the last Kaizen, but it was, I'm happy with it. Let me put it that way.
Starting point is 00:26:22 Awesome. Cool. We'll link that up. Okay, friends, here are the top 10 launches from Supabase's launch week, number 12. Read all the details about this launch at superbase.com slash launch week. Okay, here we go. Number 10, Snaplet is now open source. The company Snaplet is shutting down, but their source code is open. They're releasing three tools under the MIT license for copying data, seeding databases, and taking database snapshots. Number nine, you can use pgreplicate to copy data,
Starting point is 00:27:05 full table copies, and CDC from Postgres to any other data system. Today it supports BigQuery, DuckDB, and MotherDuck with more syncs to be added in the future. Number eight, Vect2PG, a new CLI utility for migrating data for vector databases to SuperBase or any Postgres instance with pgVector. You can use it today with Pinecone and QDrant. More will be added in the future. Number seven, the official Supabase extension for VS Code
Starting point is 00:27:34 and GitHub Copilot is here. And it's here to make your development with Supabase and VS Code even more delightful. Number six, official Python support is here. As Supabase has grown, the AI and ML community have just blown up Superbase and many of these folks are Pythonistas. So Python support expands. Number five, they released log drains so you can export logs generated by your Superbase products to external destinations like Datadog or custom endpoints. Number four, authorization for real-time broadcast and presence is now public beta.
Starting point is 00:28:10 You can now convert a real-time channel into an authorized channel using RLS policies in two steps. Number three, bring your own Auth0, Cognito, or Firebase. This is actually a few different announcements. Support for third-party auth providers, phone-based multi-factor authentication. That's SMS and WhatsApp. And new auth hooks for SMS and email. Number two, build Postgres wrappers with Wasm. They released support for Wasm WebAssembly foreign data wrapper.
Starting point is 00:28:44 With this feature, anyone can create an FDW and share it with the Superbase community. You can build Postgres interfaces to anything on the internet. And number one, Postgres.new. Yes, Postgres.new is an in-browser Postgres with an AI interface. With Postgres.new, you can instantly spin up an unlimited number of Postgres databases that run directly in your browser and soon deploy them to S3. Okay, one more thing. There is now an entire book written about Supabase. David Lorenz spent a year working on this book, and it's awesome.
Starting point is 00:29:23 Level up your Supabase skills and support David and purchase the book. Links are in the show notes. That's it. Superbase launch week number 12 was massive. So much to cover. I hope you enjoyed it. Go to superbase.com slash launch way to get all the details on this launch or go to superbase.com slash changelogpod for one month of Superbase Pro for free. That's S-U-P-A-B-A-S-E dot com slash changelogpod. What's next? Custom feeds. That's one of your topics, Jared.
Starting point is 00:30:03 Custom feeds. So tell me about it. I don't know what it is. I know what it is, but I don't know what exactly about custom feeds you wanted to dig into. So custom feeds is a feature of changelog.com that we wanted to build for a long time. Probably not quite as long as we waited on chapters, but we've been waiting. Mostly because I had a false assumption or maybe a more complicated idea in mind. We wanted to allow our Plus Plus members
Starting point is 00:30:31 to build their own feeds for a long time. The main reason we want to allow this is because we advertise ChangeDog Plus Plus as being better. Don't we, Adam? Yeah, it is better. It's supposed to be better. However, people that sign up and maybe only listen to one or two of our shows,
Starting point is 00:30:48 whereas they previously would subscribe publicly to JS Party, for instance, and maybe ship it, they now have to get the Plus Plus feed, which was, because of Supercast, all of our episodes in one ad-free master feed. And so for some people people that was a downgrade because they're like, wait a second, I want the plus plus versions,
Starting point is 00:31:10 but I also don't want all your other shows, to which we were quite offended, but we understand. And that's been the number one request. I would call it a complaint, but actually our supporters have been very gracious with us. They ask for it, but they say it's not a big deal. But it would be nice. In fact, some people sign up for Plus Plus and continue to consume the public feeds
Starting point is 00:31:32 because that's what they want to do. We wanted to provide a solution for that for a very long time. And because it was Plus Plus only, I had it in terms of blockers. I had this big blocker in front of it, which was we need to get off Supercast first, because that's the reason why it's a problem is because Supercast works this way, which is our membership system that's built all for podcasters and it's served us very well, but it has some technical limitations such as this one. So moving off Supercast is a big lift and not one that I have made the jump yet because there's just other things to do and it works pretty well and lots of reasons.
Starting point is 00:32:12 And so I didn't do custom feeds for a long time thinking, well, we got to get off of Supercast first. And then one day it hit me. Why? Why do we have to get off of Supercast? Can't we limp into this somehow? Can't we just find out a way of doing it without getting off of Supercast? And the answer is actually pretty simple. It's like, well, all we need to know is, are you a plus plus member or not locally to our system, which lives in Supercast?
Starting point is 00:32:33 And then I remembered, well, Supercast is just using Stripe on the back end and it's our Stripe account. And that's awesome, by the way, they give us direct access to our people and no lock in and stuff. And so kudos to them for that. And so I was like, no, all we actually have to know is, do you have a membership? And all the membership data is over in Stripe. And so it's simply a Stripe integration away from having membership information here in changelog.com. So I built that, worked just fine.
Starting point is 00:32:59 And then I realized, okay, now I can just build custom feeds and just allow it to people who are members. And so we build out custom feeds. And it's pretty cool. Have you used them in Gearheart? Have you built a custom feed? No, I still consume the master feed, the master plus plus feed with everything. Master plus plus feed. Yeah. Okay. That's fair. But do you know what I would love to do to build one now? Oh, you would. Yeah. on the air. Let's see what happens if we do that. So changelog.com. How do I do that?
Starting point is 00:33:28 Like, run me through that, Jared. I sign in. Are you a Plus Plus member? Of course you are because you have the Plus Plus feed. Yeah. Okay. So sign in to changelog.com. Yep.
Starting point is 00:33:39 And go to your home directory, the tilde. Yes, I'm there. And there you should see a section that says custom feeds i do see it okay click on that sucker get started okay new feed all right there you go add a feed now you're going to give it a name that's required you know call it gerhard's feed yes sure gerhard's you can write your own tagline and that'll show up in your podcast app okay you can be like it's better hang on or i'm still a tagline jared made me do this okay okay moving on uh then you get to pick your own cover art because hey you may be making a single show feed maybe you're making all the shows uh you can pick the plus plus one you can pick a single show pick your cover art or you can
Starting point is 00:34:23 upload your own file uh-huh uh you get to pick a single show. Pick your cover art or you can upload your own file. You get to pick a title format. I know. So this is how the actual episode titles come in to your podcast app. Yes. So maybe you want to say
Starting point is 00:34:32 like the podcast name colon the title of the episode. Maybe you just want episode titles. You know put a format in there.
Starting point is 00:34:41 And then you can limit your feed to start on a specific date. Some people want like fresh cuts between their, like the old days and the new days. And so they want to start it on this date because it doesn't mess up their marked as red or whatever. Okay.
Starting point is 00:34:55 September 13th, start today. It'll start today. Okay. It's going to be empty. And then pick the podcast you want. Okay. So hang on. I used, see okay okay i see i see so upload cover art that's the thing which was messing with me because i wanted to add mine
Starting point is 00:35:17 but then it said or use ours and when you say or use ours, I'm basically changing the cover art, which I uploaded with one of yours. Interesting. Right. Ours as in a changelog cover art that previously exists. Got it. So you can like use JS parties or upload your own file and you'll have your own cover art for your own feed. Okay. So I've made a few changes. First of all, the name is Gerhard and friends. Okay. Description is Kaizen 16, this episode. Okay. The cover art, I uploaded one, but then I changed the changelhard and Friends. Okay. Description is Kaizen 16, this episode.
Starting point is 00:35:45 Okay. The cover art, I uploaded one, but then I changed it to ChangeLog and Friends. Okay. Starts today, 13th of September. Yes. Title format, I will leave it empty. And for the podcast, I'll choose ChangeLog and Friends.
Starting point is 00:36:00 Okay. Yes. And this feed should contain ChangeLog++ at free extended audio yes bam and automatically add new podcasts we launch i'm going to deselect that because i only want to change your friends save perfect boom it's there there you go you build a custom feed you can grab that url pop it into your podcast app, subscribe to it. Got it. And I found the first bug.
Starting point is 00:36:27 No, you didn't. So the bug is, if I upload my cover art, and then I select another cover art from one of yours, it uses my cover art, but not in the admin. In the admin, it shows me that it's using yours, but when I create the feed, using my cover art okay so you did both i did both yes and then submitted the form correct yes okay you are the first person that's done that i think of course of course people usually pick one or the other yeah so okay uh open an issue i will get that fixed i will let me take a screenshot so that i remember boom there awesome cool looks great actually hang on the picture which i want for this cover art is us three recording right now so if adam looks up i'll take a screenshot there you go that will
Starting point is 00:37:16 be my cover art okay good so well too good too good so custom so cool you know one thing i was gonna do which i haven't done yet, and this is a reminder, is I want to put the ChangeLog Legacy cover art in the list. Don't you think so, Adam? Like you can have the old ChangeLog Legacy logo if you want. That would be cool, actually. Yeah. Super dope.
Starting point is 00:37:36 Actually, that's an idea we had is like to expand these, you know, to like a bunch of maybe have like custom artists come in and create new cover art you can select from that might be cool very cool but yeah it's been kind of a screaming success honestly we have currently we have 320 changelog plus plus members and those 320 people have created 144 custom feeds so far including mine including yours i see yours Amazing. And the cover is your face. Correct. Yes.
Starting point is 00:38:07 Cool. So cool. So that's the feature. That's amazing. It worked very well, I have to say. I just still have to load it in my podcast player. But once I do that, amazing. Well, let's stop there then, because that's where I'm at.
Starting point is 00:38:19 And that's where I'm stuck, Jared. You're also stuck? Yes. So Gerhard's next step is to do what I've done. And I think he may have the same outcome. I don't know. My outcome was I loaded the URL into my clipboard on my iPhone, opened up Overcast, add podcast via URL, did that, clicked add URL,
Starting point is 00:38:43 and it says not a valid URL. Does yours have a start date? No. Okay. I don't think so. So yours has a bunch. The URL only contains feeds. It's forward slash feeds, forward slash Asha.
Starting point is 00:38:59 So it doesn't have the full. Oh, I might have changed. I might have screwed that up yesterday when I was fixing something else, when I was giving you your account. This been weeks for me i just haven't reported it to you yet and you've been waiting for this to do it why would you wait this long uh are you waiting for this yes public embarrassment okay no just the fact that i just haven't done it yet i'm sorry okay no that's all right i think that that copy url button should have copied the entire url
Starting point is 00:39:24 did it just give you the path, Gerhard? It did, yes. No wonder it's not a valid feed. So I literally fussed with that yesterday because I was giving Adam a different copy paste button and I might have broken it yesterday. Now, interestingly, if I hover over it, I can see the correct link. Yeah. But when I click on it, I only get the path. Yeah, the href is correct, but the data-copy value is incorrect. Yeah. But when I click on it, I only get the path. Yeah. The href is correct, but the data-copy value is incorrect. Right. So I'm pretty sure I broke that yesterday. So that used to work because all these other people are happy, but you're sad because I broke it yesterday. So I have a quick fix. You right click the get URL. Yeah. And you say copy URL rather than relying
Starting point is 00:40:02 on the click action. Right. And then you get the proper URL. Try that, Adam. Let's see if that works. Let's see here. Copy link. Did it solve my problem? Let me enter it. Boom goes the dynamite.
Starting point is 00:40:16 It's at least not yelling at me. It is taking its time though. So, well, the other reason why that was happening probably a few weeks ago is because if you loaded a feed that has all of our episodes, for instance, 1,000 plus 12 megabyte XML file, we would serve it slow enough that Overcast would time out and it wouldn't think it was a valid feed.
Starting point is 00:40:35 But then I fixed that by pushing everything through the CDN. Because at first, when I first rolled it out, it was just loading directly off the app servers. I know it's just a little bit too slow for Overcast. Right. Okay, next question then. This is a UX question. I am not a Plus Plus subscriber,
Starting point is 00:40:52 but I can click the option, and I assume it does nothing, to say this feed should contain Plus Plus ad-free extended audio. I haven't clicked play because I just literally loaded it for the first time now. But I'm assuming that I won't have plus plus content
Starting point is 00:41:09 because I'm not a plus plus subscriber. Is that true? No. I do have plus plus content. I'm thinking you are an admin and so it doesn't matter. Okay, gotcha. Yep. So does this check then only show up
Starting point is 00:41:26 for people who can check it? The entire UI for building custom feeds only shows up if you are an active Plus Plus member or an admin which is literally the three of us. Okay, that makes more sense then. You can't even build custom feeds. Now I did consider custom feeds for all.
Starting point is 00:41:41 Let the people have the custom feeds but Plus Plus people obviously would only be be the ones who get the checkbox. That's something that I'd be open to if lots of people want it, but for now I was like, well, let's let our plus plus people be special for a while. Is there a cost center with these custom feeds? Is there an additive to the cost if we were having to deal with costs? Marginal. Every custom feed has to be updated every time an episode is updated.
Starting point is 00:42:09 And so if we had 100,000 of them, there would be some processing and maybe hit some R2, too many put actions versus, it's free egress, but it's not free all operations. And so there's like class A operations, class B operations. And the more you edit those files and change them, I think eventually those operations add up to costing you money, but it's marginal on the margins. If it got to be a huge feature where,
Starting point is 00:42:36 I mean, if we had 100,000 people doing custom feeds, we'd find a way of paying for that. You know? Yeah, that's a different problem. But yeah, it's a marginal cost, not not worth considering gotcha okay so the copy can be updated pretty easily it's probably a fix going on already for that because it's so simple for the ships it'll be out there good well because i mean i was like well how do i get this url to my iphone i guess i can like copy it and like airdrop it to my iphone maybe it'll open in the browser and like, well, let me just go on the web
Starting point is 00:43:05 and get URL, essentially. Yes, our user experience assumes that our users are nerds. And so far, before I broke that copy button yesterday, there's been zero people that are like, now how do I get this into my podcast app? No one's asked us that, because all of our Plus Plus members completely understand how to copy and get into their whatever, you know,
Starting point is 00:43:28 they are smarter than me. Most of them. Now if it was a firm broader audience and this was a baking show and we're going to provide custom feeds for bakers or aspiring bakers, then I probably would have to have more of a handholding at and supercast actually does a really good job of handholding you into your private feed because it's not a straightforward mental process for most people, just for nerds. Yeah, I agree. It kind of requires some workaround. There's really nothing you can do about that, right? I mean, you're adding literally
Starting point is 00:43:57 a custom feed via URL that no index knows about, so it's obvious you have to do some sort of workaround to get there, to get your feet into your... Yeah, I mean, a better UX would be after the custom feed's created, we send you an email. That email contains a bunch of buttons. Each button's like add to Overcast, add to Pocket Cast, add to Apple Podcasts,
Starting point is 00:44:20 and depending on... I like that idea a lot. That's how Supercast works. Yeah, I like that idea a lot. Email them how SuperGas works. Yeah, I like that idea a lot. Email them every time it changes, that they go upon creation, and now that is immutable until, well, theoretically mutable until they edit it again,
Starting point is 00:44:34 and then it's muted, you know? So it's in stone. Yeah, it's mutated. We could certainly add a button that says, email this to me. You know, next to the get URL, maybe like email me the URL. That's a good idea. And that's like a fast way to get it into
Starting point is 00:44:48 your phone without having to do phone copy paste or airdrop like Gerhard did. Yeah, because you don't know about the email happening. So that's a good feature, even for nerds, because it's just easier that way. Well, that would have solved the problem of me having to get the data onto my iPhone. Totally. Which my email is.
Starting point is 00:45:03 Exactly. I think we should add that as a feature. It's a good idea. Hey, Jared here in post. That email it to me feature just shipped today. And that copy paste bug fixed. Kaizen. Custom feeds are here, y'all. If you're a Plus Plus subscriber, by the way, changelog.com slash plus plus.
Starting point is 00:45:25 It's better. If you are not a Plus Plus subscriber and you desperately want this feature, let us know. Because, you know, squeaky wheels and oil. Must be in Zulip. I don't know. It's the other catch, right? Anyways. Well, not even, Gerard's not even Zulip yet, so let's not get ahead of ourselves.
Starting point is 00:45:44 No, but what's the URL? Because I would like to join. changelog. Zulip yet so let's not get ahead of ourselves no but what's the URL because I would like to join changelog.zulipchat.com okay but can you just get on from there I don't know it's new to us
Starting point is 00:45:52 zulipchat.com I'm doing it now let's see login okay log in with Google go there you go
Starting point is 00:46:01 yes continue okay sign up you need an invitation to join this organization there you go yes continue okay sign up you need an invitation to join this organization alright go to our Slack
Starting point is 00:46:10 go to main scroll up a little bit you'll see there's an invite link to get into Zulip you have to go to Slack it's a Trojan horse that's how you do it that's right
Starting point is 00:46:20 you install one through the other listeners you can do this too you can follow these same instructions it is in Maine. I think it's Friday, September 6th. Okay. Jared posted it as a reply to after that conversation, now we're trying out Zulip in earnest.
Starting point is 00:46:34 And there's a link that says join Zulip here. And it's a long link that I could read on the air, but no one would ever hand type that in. No. I agree. You can put it in the show notes though, so it might be there. So there you go.
Starting point is 00:46:48 Yeah. We've shared our thoughts already elsewhere on friends with this, but you know, I'd be curious. We'll be so many Kaizens away. Well, at least one more Kaizen away
Starting point is 00:46:55 multiple months before we get Gerhards. By the next Kaizen, we may be like transitioned over to Zulip. We might be self-hosting it, but I don't think we should do that. No way. There's a Kaizen channel. This makes me so happy.
Starting point is 00:47:05 And it's for all ideas about making things better and stuff. I even put one in there, you can read it. Oh, wow, okay. I'm definitely going to check this out. This is nice. This is a very nice surprise. It was worth joining just for this. Oh, wow.
Starting point is 00:47:18 So cool. This is nice. Yeah, isn't that cool? I thought a Kaizen channel would be on point. So cool. So I was kind of thinking like, well, how do we replicate our dev channel over here? And it's like, well, dev is just one thing.
Starting point is 00:47:27 Like let's have a Kaizen and then different topics can be based on. Big thumbs up. Yeah. Big thumbs up. So amazing. All right. Awesome.
Starting point is 00:47:34 Custom feeds, Zulip chat, Kaizenine. What's next on the docket? Well, I'm going to talk about one very quick improvement, actually two, which I've noticed. The news. Yes. noticed, the news. Yes. I love the latest. Oh, you like that?
Starting point is 00:47:50 That graphic is so cool. I really like the small tweaks, also the separators, the dividers between the various news items. They just stand out more. I really, really like it. And it feels like more, the play button is amazing, by the way. I love it.
Starting point is 00:48:03 I can definitely see it. The play button stand out. Yeah. i love it i can definitely see the play button stand out yeah it feels so polished thank you it really does but the latest is so amazing and the news archive it's there and it works yes it is amazing so i appreciate your enthusiasm to tell everybody what the latest is i literally put an arrow and the words of the latest on our homepage that points to the issue because it's kind of, it could be discombobulating. Like you look at it on a desktop, at least on mobile, it goes vertical, but like on the left is kind of the information about news and the signups and stuff. And on the right is the latest issue, but you may not know,
Starting point is 00:48:42 like, what am I looking at when you land on the page? What's the thing on the right-hand side? And so I just put this little arrow, hand-crafted with SVG, by the way. And the word is the latest, like someone just scratched them on the page that points to that issue. So it's just kind of giving you a little bit of context. And Gerhard loves it, so I appreciate that.
Starting point is 00:49:01 It gives it another dimension. It's playful. It's, you know, like, there is some fun to be had here it's not just all serious it's not like another news channel but it's really really nice like the whole thing it feels so much more polished compared to last time i can definitely see like the tiny tiny improvements yeah very cool so much kaizen indeed polishing cool well the next big item on my list is to talk about twice 2x faster time to deploy this is something we just spend a bit of time on i was surprised by the way of the latest deploy it was slower than 2x but we can we can get there okay the first thing which I would like to ask is, how do you feel about our application deploys in general? Like, does it feel slow?
Starting point is 00:49:50 Does it feel fast? Does it feel okay? Do you feel surprised by something? How do application deploys when you push a change to our repo feel to you? Historically or after this change? Historically. Historically, I would say too slow.
Starting point is 00:50:06 Too slow. Okay. Adam historically i would say too slow too slow okay yeah i don't yeah historically too slow okay okay so what would make them not too slow like is there like um maybe like a two x duration that's so leading though i didn't be like i literally meant like how many minutes or seconds? I think we talked about that. Would it feel that it's, it's good enough? There's like this threshold that I'm not sure exactly. It's probably fuzzy, but it's the point where like, you're waiting so long that you forget that you're waiting and you go do something else. And I think that's measured in single digit minutes, but not necessarily seconds.
Starting point is 00:50:44 Like I can wait 60 seconds. Well, but not necessarily seconds. Like I can wait 60 seconds. Well, that's my seconds. I can wait one minute and maybe I'm just hanging out in chat waiting for that thing to show me that it's live. But as soon as it's longer than that, I'm thinking, well, I should come back in five. Then I forget what I was doing. I don't come back and I've lost flow basically.
Starting point is 00:51:04 So I would say around a minute, you know, 30 seconds would be spectacular. It doesn't have to be instant, but I think two, three, four, five minutes, it's getting to be where you're like, yeah, it's kind of like friction to deploy because you deploy and you're like, now I got to wait five or 10 minutes.
Starting point is 00:51:19 That's my very fuzzy answer. Okay, that's a good one. So what used to happen before this change, we used to run a dagger engine on fly so that it would cache previous operations. So that subsequent runs would be much quicker, especially when nothing changes or very little changes. The problem with that approach was that from GitHub Actions,
Starting point is 00:51:46 you had to open a WireGuard tunnel into Fly so that you'd have that connectivity to the engine. And what would happen quite often is that tunnel, for whatever reason, would maybe be established, but you couldn't connect to the instance correctly. And you would only find that out a minute or two within the run and then what used to happen you would fall back to github which is much slower because there's no caching there's no previous state and the runners themselves because they're free they are slower two cpus and seven gig which means that you have to when you have to recompile the application from scratch it can easily take seven eight ten minutes and that's what would
Starting point is 00:52:27 lead to those really slow deploys so what we did between the kaizens since the last kaizen let me see which pull request was that it was pull request 522 so you can go and check it out to see what that looks like. So when everything would work perfectly, when the operations would be cached, you could get a new deploy within four minutes, between four and five minutes thereabouts. And with this change, what I was aiming for is to do two minutes or less. And when I captured, when I ran this, like the initial tests and so on, so forth, we could see that while the first deploy would be slightly slower because, you know, there was nothing, subsequent deploys would take about two minutes, two minutes and 15 seconds, the one which I have right here, which is a screenshot on that pull request 522.
Starting point is 00:53:21 So how did we accomplish this? We're using namespace.so, which they provide faster GitHub actions runners, basically faster builds. And we run the engine there. And when a run starts, we basically restore everything from cache, the namespace cache, which is much, much faster. And we can see up there, basically like per run, we can see how much CPU is being used. We can see how much memory again, these are all screenshots on that pull request. And while the first run, obviously you use quite a bit of CPU because you have to compile all the links are into bytecode and all of that. Subsequent runs are much, much quicker. And the other thing which I did, I split the the let's see if is it here it's not actually here we need to go to honeycomb to see that so i'm going to honeycomb to look at that we i've split
Starting point is 00:54:12 the build time basically the build test and publish from the deploy time because something really interesting is happening there so let's take for example before this change let's take dagger on fly one of the blue ones and have a look at the trace so we have this previous run which actually took four minutes and 21 seconds and all of it is like all together it took basically three minutes there's like some time to start the engine to start the the machine, whatever, whatever. All in all, four minutes and 20 seconds. So in your run, for example, this one, which was fairly fast, it was two minutes and a half. If we look at the trace, we can see that, diagram namespace, the build, test, and publish was 54 seconds. So in 54 seconds, we went from just getting the code to getting the final artifact,
Starting point is 00:55:03 which is a container image that we ship into production. In this case, we basically publish it to GHCR.io. And then the deploy starts. And the deploy took one minute and 16 seconds. So we can see that, you know, like with this split is very clear where the time is spent. And while the build time and the publish time
Starting point is 00:55:21 is fairly fast, I mean, less than a minute in this case, the deploy takes a while because we do blue-green, new machines are being promoted, the application has to start, it has to do the health checks. So there's quite a few things which happen behind the scenes that if you look at it as one unit,
Starting point is 00:55:36 it's difficult to understand. So this was ideal case. This is what I thought would happen. Of course, the last deploys if i'm just going to filter these dagger on namespace by the way we are in honeycomb we send all the traces and all the build traces from github actions to honeycomb and you can see how i do that integration now repo you can see that we had this one 2.77 minutes minutes, right? Which is roughly 2.40. But the next one was surprising,
Starting point is 00:56:07 which took nearly five minutes. And if I look at this trace, this was again, nothing changed. Like stuff had to be recompiled. But in this case, the build, test and publish took nearly three minutes, which this tells me there's some variability into the various runs when it builds it.
Starting point is 00:56:24 I don't know why this happens, but I would like to follow up on that. As a TLDR, this change meant that we have less moving parts. And when namespace works, and this is something, again, that we need to understand, why did this run take longer?
Starting point is 00:56:41 It should take within two minutes, we should be out. Like a change should be out in production. Half the time is spent in build and half the time is spent on deploys. So when it comes to optimizing something, now we get to choose which side do we optimize. And I think build, test, and publish is fairly fast. The slower part is the actual deployment. So how can we maybe half that? How can we get those changes once they're finished and everything is bundled? How could we get it out quicker?
Starting point is 00:57:09 I love it. I think, do you have ideas on that? Well, I think the application boot time could be improved, right? Because it takes a while for the app to boot. When I say it takes a while, it may take 20, 30 seconds for it to be healthy, all the connections to be established.
Starting point is 00:57:30 Now, I'm not sure exactly which parts of those you know would be the easiest one to optimize but i think the application going from the deploy starting and the deploy finishing taking a minute and a half is a bit long so i'll need to dig deeper like is it when it comes to connecting to the database is it just the application itself being healthy like which part needs to be optimized but again we're talking this is like a minute and a half we're optimizing a minute and a half just to put this into perspective yeah and that's why i started with the question like how fast is fast enough yeah i mean i think if you're at 90 seconds you're probably right about there i would still go in and spend an hour thinking, like, is there a low-hanging fruit that we haven't looked at yet that we could squeeze 10 more seconds off?
Starting point is 00:58:10 And then I would stop squeezing the radish after that. I see. That'd be my take on it. Adam? Well, the flow, it seems, is every time new code is pushed to our primary branch on the repository, a new deploy is pushed to our primary branch on the repository, a new deploy is queued up. And this process happens for each new commit to the primary branch. A new application is spun up.
Starting point is 00:58:36 It's promoted. So if I deploy slash push new code, and then a minute later Jared does the same thing, my push does this this process my application is promoted jared's commit does the same thing his application is then promoted and that's via networking and then these old machines are just you know like thrown off and then the new machines are promoted they just fall by the wayside correct which totally makes sense i think you have things happening
Starting point is 00:59:05 that we want to happen. I agree with you on the low-hanging fruit, but on the app boot process, we've got even things like 1Password being, those things being injected from their CLI. I'd imagine that API call is not strenuous, but it's probably seconds, right?
Starting point is 00:59:23 So there's probably, in each thing we're booting up as part of the at boot process for every commit, there's at least one to several seconds per thing we're instantiating upon boot. Well, that's just me hypothesizing how things work. No, that's a good one. That's exactly what, you know,
Starting point is 00:59:41 we're like trying to hash it out so that we share the understanding that each of us holds so that we can, you know, talk about like what would, because we talked about this in the past and I really liked Jared's question. He was asking, we're talking about like Heisen Inc. and, you know, I was thinking about like, okay, what would the improvement look like? And can we, I mean, we can measure it and we can check, have we delivered on this? And until like the last deploy that went out, I was fairly happy with the time that the duration that these deploys were taking.
Starting point is 01:00:19 But based on the one which I have right in front of me, the build going from one minute and a bit to almost three minutes, I think that variability is something that I would like to understand first before optimizing the boot time. Is it the CPUs then that's impacting it, you think? Like the CPUs and the horsepower behind the build test?
Starting point is 01:00:39 Well, let's open up namespace. Let's go to instances. We can see the last build, which you can see here, like all the builds. This is inside Dagger, is that right? This is namespace. All this is namespace, Let's go to instances. We can see the last build, which you can see here, like all the builds. This is inside Dagger, is that right? This is namespace. All this is namespace, by the way. So we're using namespace for the runners. And I would like...
Starting point is 01:00:54 This is a third-party service? It is a third-party service, yes. That you just found or someone told you about? Exactly, yes. I am paying attention to various build services and, you know, depot.dev. I love it. Namespace.so? Namespace.so, yes. I am paying attention to various build services and, you know, depot.dev. I love it. Namespace.so.
Starting point is 01:01:07 Namespace.so, yes. Our trial is almost over. Exactly, yes. Now, how much will it cost us, by the way? Every minute. Three days left on your trial. Three days left on our trial, yes. I'm getting nervous here.
Starting point is 01:01:19 So hang on. Per minute, we're paying $ 0.0015 dollars which means that for 40 minutes like okay for an hour we're paying less than 10 cents for an hour of build time so you know pay as you go it's really not expensive so it's okay if i have to put my card because we're talking cents per month for our builds that makes sense what does a single build cost us then so when it's five minutes let's see i'll do the math now really quick hang on thank you hang on uh it's less than a cent a build which takes five minutes is less than as less than a cent is that right yeah less than a cent like 75 what is less than a cent zero cents no there was like another unity in the past
Starting point is 01:02:08 i forget what it's called whatever like it's satoshi no that's a different the 70 75 percent of a cent okay okay so it's like definitely reasonable that's reasonable yeah very very reasonable i would say if we get down faster, it's even less. Exactly. What exactly does Namespace do, though? Is it just a machine that has proprietary code on it that we send something to to do a build process? So Namespace, basically, it runs custom GitHub Actions much quicker because they have better hardware, better networking than GitHub Actions themselves.
Starting point is 01:02:45 So you can literally use Names better networking than GitHub Actions themselves. So you can literally use namespace to replace GitHub Actions. So they're just like, they just use the Actions API, but you're running on their infra. Exactly. Smart. Or you can use like faster Docker builds, you know, and they also have preview environments,
Starting point is 01:02:59 which I haven't tried in code sandboxes. That's something new. Sponsor. That's what I'm thinking because i have a shout out here and hang on let me just get the name straight to be clear they are not a sponsor but we're saying they should i think they should be i think uh hugo i just know his first name and i'm trying to find um because our credit card is expiring right we need those six cents don't we we need those six cents that's gerhard's credit card for now exactly you could use mine it's okay
Starting point is 01:03:24 hugo santos no relation no relation yeah no relation no relation but I think if there is someone that you should talk at namespace I think it would be him like as I was like setting all this stuff up he was very responsive even on the weekend you know to emails and I think he's one of the founders by the way so I thought that was like a very nice touch and he really helped like go through all like the various questions which I had and the various like, does this look right? So even like he even looked at the pull request to see how we implemented it.
Starting point is 01:03:52 And all in all, like the promise is there. We can see that it does work well when it works like two minutes, we get those two minutes, but sometimes it takes more. And then the question is, well, why does it take more? So that's something which I'm going to follow up on. Cool. Cool. Well, I'm excited for the follow-up and for this progress.
Starting point is 01:04:12 Indeed. Cool. Well, our friends over at Speakeasy have the complete platform for API developer experience. They can generate SDKs, Terraform providers, API testing, docs, and more. And they just released a new version of their Python SDK generation that's optimized for anyone building an AI API. Every Python SDK comes with Pydantic models for request and response objects and HTTPX client for async and synchronous method calls and support for server sent events as well. Speakeasy is everything you need to give your Python users an amazing experience integrating with your API. Learn more at speakeasy.com slash Python. Again, speakeasy.com
Starting point is 01:05:07 slash Python. And I'm also here with Todd Kaufman, CEO of TestDouble, testdouble.com. You may know TestDouble from our good friend, Justin Searles. So Todd, on your homepage, I see an awesome quote from Eileen. You could tell She says, quote, hot take, just have test double build all your stuff, end quote. We did not pay Eileen for that quote, to be clear, but we do very much appreciate her sharing it. Yeah, we had the great fortune to work with Eileen and Aaron Patterson on the upgrade of GitHub's Ruby Rails framework. And that's a relatively complex problem. It's a very large system. There's a lot of engineers actively working on it
Starting point is 01:05:48 at the same time that we were performing that upgrade. So being able to collaborate with them, achieve the outcome of getting them upgraded to the latest and greatest Ruby on Rails that has all of the security patches and everything that you would expect of the more modern versions of the framework while still not holding their business back
Starting point is 01:06:05 from delivering features, we felt was a pretty significant accomplishment. And it's great to work with someone like Eileen and Aaron because we obviously learned a lot. We were able to collaborate effectively with them, but to hear that they were delighted by the outcome as well is very humbling for sure. Take me one layer deeper on this engagement.
Starting point is 01:06:26 How many folks did you apply to this engagement? What was the objective? What did you do, et cetera? Yeah, I think we had between two and four people at any phase of the engagement. So we tend to run with relatively small teams. We do believe smaller teams tend to be more efficient and more productive.
Starting point is 01:06:45 So wherever possible, we try to get by with as few people as we can. With this project, we were working directly with members from GitHub as well. So there were full-time staff on GitHub who were collaborating with us day in, day out on the project. This was a fairly clear set of expectations. We wanted to get to Rails, I believe 5.2 at the time and Ruby like 2.5. Don't hold me to those numbers, but we had clear expectations at the outset. So from there, it was just a matter of figuring out the process that we were going to pursue to get these upgrades done without having a sizable impact on their team. A lot of the consultants on the project
Starting point is 01:07:23 had some experience doing Rails upgrades, maybe not at that scale at that point, but it was really exciting because we were able to kind of develop a process that we think is very consistent in allowing Rails upgrades to be done without like providing a lot of risk to the client. So there's not a fear that, hey, we've missed something or, you know, this thing's going to fall over under scale. We do it very incrementally so that the team can, as like I said, keep working on feature delivery without being impacted, but also so that we are very certain that we've covered all the bases and really got the system to a state where it's functionally
Starting point is 01:08:01 equivalent to the last version, just on a newer version of Rails and Ruby. Very cool, Todd. I love it. Find out more about Test Double's software investment problem solvers at testdouble.com. That's testdouble.com, T-E-S-T-D-O-U-B-L-E.com. So, I would like to switch gears to one of Adam's questions. And he was asking if Neon is working for us as expected and the state of Neon. So, is Neon working for us as expected? Based on everything I've seen, it is. Like I was looking at, for example, the metrics. I was looking at how it behaves in the Neon console. This is us for the last 14 days. So what we see here in the Neon
Starting point is 01:08:52 console, we see our main database. We can see that we have been using 0.04% of a CPU. So really not CPU, but in terms of memory, we eight allocated we're using 1.3 gigabytes 1.3 gigabytes used out of eight allocated so we are over allocating both cpu and memory so fairly little load i would say and things are just humming along so no issues whatsoever do we need to push this harder somehow like do we need to get the vector search in our database or something? Weren't you going to set us up an AI agent, Gerhard? Yes, I was. I didn't get to that, but that would not use this database, by the way. That would be something different now. PG vector, man. PG vector. Get it in there. Right. I would, but not in this production
Starting point is 01:09:41 database. So this is special, right? I mean, this is exactly what we want to see. If anything, we can, because we have the minimum compute setting set to two CPUs and eight gigs of memory. And I know that Neon does an excellent job of auto-scaling us when we need to. We didn't need to get auto-scaled because we are below the minimum threshold.
Starting point is 01:10:02 So we could maybe even lower the threshold and it would still be fine. So we're not using this to its fullest extent, is my point. No. So we need some arbitrary workloads in order to push it. Well, to see where it breaks. We wouldn't need it to break.
Starting point is 01:10:15 I think if anything, one thing that I would like us to do more is use Neon for development databases. And I have something there I haven't finished, but I would like to move on to that as well, if everyone's fine. Adam, further thoughts or questions around Neon? This was your baby. I think the question I have is, you know, while the thresholds are low and we're below our overallocation, you know, what should we expect? And this is good news. This is good news that we're not. Yeah, I'm just saying it's hard for us to use it
Starting point is 01:10:47 and see if it's good or bad because we're not heavy database users. And I was just saying we just need some more arbitrary workloads to actually flex this thing, but I was mostly just being facetious. Gotcha. I'm in the dashboard too, and I'm looking at a different section of that same monitoring section, which is like rows. I believe rows being added, which is kind of cool
Starting point is 01:11:06 because over time you can kind of see your database updates essentially, deleted, updated, inserted. So there's definitely obviously activity. We're aware of that. I think the other things that we should pay attention to in terms of is it working for us as expected, I would say some of that is potentially on you, Jared, and you too, Gerhard,
Starting point is 01:11:27 is that we've got the idea of branching. Gerhard, I know that you're familiar with it because you demonstrated some of this last time we talked, but being able to integrate some of those futuristic, let's just say, features into a database platform. This is managed. It's serverless. We don't have to manage it. We get a great dashboard. We get the opportunity for branches.
Starting point is 01:11:52 Have you been using branches, Jared? Do you need to use branches? Does that workflow not matter to you? I think that's the DX and the performance is the two things I think I care about. So I have a custom branch, which I use to not develop against, but to sync from.
Starting point is 01:12:12 I guess it's not mine, it's that dev 2024. That's the one I use. Maybe Gerhard created that, but that's the one that I do use. And so I pull from that. So I'm not pulling from our main branch, because there's just less load on our main branch to do that. And so I'm using that, but I synchronize it locally, manually, and then develop against my own Postgres
Starting point is 01:12:35 because I have a local Postgres. The one thing about it is because it's a neon branch, I will have to go in here and synchronize it back up to main and then pull from it. And I'm sure that's automatable, but I just haven't done that. I've been waiting for Gerhard's all-in-one solution. Yes, that's coming.
Starting point is 01:12:55 That's exactly, that's my next topic to share. What exactly is that? Well, that would mean tooling that's local to make all of this easy. Jared wouldn't need to go to the UI to click any buttons to do anything. He would just run a command locally and the thing would happen locally. He wouldn't need to even open this UI. Shouldn't that be a Neon native thing though?
Starting point is 01:13:18 It is. It does have a CLI. But the problem is you need to, first of all, install the CLI, configure the CLI, like add quite a few flags, connect the CLI to your problem is you need to first of all install the CLI configure the CLI like add quite a few flags connect the CLI to your local postgres like all that glue that's the stuff which I've been working on and I can talk about that a bit more and so the idea would be to just automate some of that not have to go through all the steps still do the cli installation like any normal user would correct but maybe a neon setup script that probably populates a file with credentials or something some command that yeah that you run locally that knows what the sequence of the steps is
Starting point is 01:13:57 and what to do it's for example maybe you don't have the cli installed well install the cli you need to have some secrets well here's the one the 1Password CLI, and by the way the secrets is here, like in this vault. So stuff like that, like all that. Yeah. Speaking of 1Password, did you notice their new SDKs? Did you pay attention to their new SDKs they deployed?
Starting point is 01:14:18 TypeScript, Go, a couple others for native integrations. Obviously we're Elixir, so it doesn't really matter to us, but maybe in some of the Go pipelining I know you've probably done would it make sense to skip op and go straight to go with the sdk because op is their cli right it's it's uh same it's not an sdk the sdk lets you native integrate into the language so it's possible to use something else, but at the end of the day, it's like the integration needs to work and the implementation,
Starting point is 01:14:51 whether you use the SDK or whether you use the CLI, is just an implementation. Just doesn't matter, yeah. What we care about is like, is it reliable, our implementation? Do we have any issues with it? So far, no. Yeah.
Starting point is 01:15:02 Are we using like service accounts? And that's something that we've been waiting because without service accounts you would need to set up a connect server which i didn't want to do so that was a big deal for us whether you use the cli or the sdk we could but it wouldn't make that much of a difference now if the application itself while it runs it was doing certain things maybe Maybe that's interesting. Maybe we could get like change some of the boot phase so that we wouldn't inject the secrets from outside the application.
Starting point is 01:15:34 The application itself could get them directly. But I really want to get Elixir releases going. And once we have those, things change a little bit. but it's all just like maybe shuffling some code from here to here, but ultimately it will still behave the same, you know, just like you would maybe bring it into the language. So I haven't seen their latest SDKs, but I would like to check them out. That's a good one for me to look into. Okay. So the tooling that Jared was mentioning to make things simpler for him, I've been thinking about it from a couple of perspectives
Starting point is 01:16:11 and I realized that to do this right, it will be slightly harder. And the reason why it's slightly harder is because I would like to challenge a status quo. The status quo is you need a dagger for all of this. Maybe you don't, right? So I'm trying a different approach. And the mindset that I have for this is Ken Beck,
Starting point is 01:16:37 September, 2012. For each desired change, make the change easy. Warning, this may be hard. Then make the change easy. Warning, this may be hard. Then make the easy change. So what I've done for this Kaizen, I made that change easy, which was hard, so that I could make the easy change. How hard was it?
Starting point is 01:16:58 So that's what happened. Well, let's have a look at it. So we're looking now at pull request 521. And 521 introduces some new tooling, but I promise it's just a CLI. And what's special about it is that everything runs locally. There's no containers. There's no Docker.
Starting point is 01:17:20 There's no Dagger. Everything is local. And I can see Jared's eyebrows go up a bit because that's exactly what he wanted all this time so what pull request 521 introduces is just which is a command runner it's written in rust but it's just a cli okay and if you were for example jared or even adam could try this if you were, for example, Jared, or even Adam could try this. If you were to run just in our repository at the top level, you would see just calls them recipes, what is possible. And the one which I think the audience will appreciate is just contribute. So remember how
Starting point is 01:17:59 we had like this manual step, like install Postgres, you knowlang get elixir get this get that i mean that's still valid right you can still use that manual approach or if you run just contribute it will do all those things for you running local commands it still uses homebrew it still uses asdf but everything that runs it runs it locally and the reason why this is cool is because, I mean, your local machine, whatever you have running, it remains king. There's no containers. Again, I keep mentioning this because that adds an extra layer. And what that means, stuff like, for example, importing a database in a local PostgreSQL is simpler because that's what you already have running. Resolving the Neon CLI again, it's just like a brew install. It's there and you wire
Starting point is 01:18:52 things together. You don't have to deal with networking between containers. You don't have to pass context inside of containers, which can be tricky, especially when it comes to sockets, especially when it comes to special files. So I i'm wondering how will this work out in practice and the thing which i didn't have time to do i didn't have time to implement just db dash prod dash import which would be the only command that you'd run to connect to neon pull down whatever needs to pull down, maybe install the CLI if it doesn't have it. And then just in your local Postgres, import the latest copy. Same thing for just dbfork,
Starting point is 01:19:33 which would be an equivalent of what we had before. The difference is that was all using Dagger and containers. And, you know, it was, I mean, have you used it, Jared, apart from when we've done it? There you go. Adam, have you ever run Dagger? Never. In the three years that we've done it? Mm-mm. There you go. Adam, have you ever run Dagger? Never. In the three years that we've had it?
Starting point is 01:19:48 Never. Not one time. There you go. How many times did you have to install things locally for you to be able to develop changelog in the last three years? Well, that's where my personal angst lies. It lives right there in that question how many times what is the pain level it's high for me so adam might be more excited about this than i am pull request
Starting point is 01:20:11 five to one i mean even you jared if you want to try it i mean if you do dry run it has a dry run option by the way it won't apply anything but it will show you all the commands that would run if you were to run them yourself for example and there may be quite a lot of stuff, right, when you look at it that way. But it's a good way to understand, like, if you were to do this locally, and if you were to configure all these things, what would it do without actually doing it?
Starting point is 01:20:37 So I tried it on a brand new Mac, and I think that's the recording that I have on that pull request. I might need to get a brand new mac so i can try this look at that but that's very very waiting for a good reason to upgrade you know there you go and honestly within five minutes depending on their internet connection everything should be set up everything is local the postgres everything what we don't yet have and i think this is where we're working towards, is how do we, first of all, cleanse the data so that contributors can load a type of data locally.
Starting point is 01:21:13 But I think that's like a follow-up. First of all, we want Jared to be able to do this with a single command, refresh his local data. And after I have the bulk of the work done, this step is really simple. How simple? Maybe half an hour at most. That's what I'm thinking.
Starting point is 01:21:28 So not much. So it should be done before the day's over. Yeah, it should be done. Exactly. It should be done. The one thing I'm noticing is that you're switching back to brew install Postgres. I'm just curious about that change.
Starting point is 01:21:41 So I mentioned it in one of the comments when I committed. Basically, when I was installing it via ASDF the problem was with ICU4C. I just couldn't compile Postgres from ASDF correctly. And since then in Homebrew we can now install Postgres at 16. So you can specify which major version, which was not possible, I think, two years ago when I did this initially. So there is that. Now, let's see where this goes. I'm excited about this. If anyone tries it, let us know how it goes for you. If you want to contribute a changelog, like how far does it get? And by the way, I tested this on Linux as well. The easiest way, there's like something hidden there in the just,
Starting point is 01:22:27 it's called ActionsRunner. What it does is exactly what you think it does. It runs a GitHub ActionsRunner locally. For this, you need Docker, by the way. And it loads all the context of the repository inside of that runner. So that's the beginning of what it would take to reuse this in the context of GitHub Actions. And what I'm wondering is, will it be faster
Starting point is 01:22:55 than if we use Dagger? That's me challenging the status quo. The answer is either yes, it is, and maybe we should do that instead, and it will shave off more time, or no, it's not not and then I get to finally upgrade Dagger because we're on a really old version. So you still work at Dagger right? I do yes very much so
Starting point is 01:23:11 yes. Okay I just want to know how much you want to challenge this status quo No no no that hasn't changed. I'm just kidding Cool. So for our listener if you want to try this github.com slash the changelog slash changelog.com. Clone the repo.
Starting point is 01:23:27 Brew install just. Just contribute. That's it. Try those three steps if you're on macOS. If you're on Linux, it's not brew install just. It's apt-get install or yum install. The installations are there. Yeah.
Starting point is 01:23:43 And just contribute. And what should we expect to see when we type in just contribute is instructions or no no we do actually run the commands it's gonna do it for you man if you do just dash n now what if you have an exact an existing repo like adam does can he do it and it should pick up where he yeah yeah give that a shot there adam i'm so scared. What you could do is if you want maybe start a new user, it shouldn't mess anything up, to be honest. It just installs. Maybe it does things differently or does things twice.
Starting point is 01:24:13 I don't really know. But it should be safe. I like this. I mean, this is... I did run just in our repository. You get contribute, devs, dev, install. These are all the actions or recipes. Correct.
Starting point is 01:24:28 Install, Postgres down, Postgres up, tests. And each of those have a little hashtag next to it, which is a comment, essentially, of what the recipe does. So over time, we can expect to see more of these just recipes if this pans out to be you know long term these recipes will potentially get more and these will be a reliable way to do things within the repository and it's all local that's the big difference because before i mean even now right because we still kept dagger we still have everything that we uh that we had so far that would always
Starting point is 01:25:01 run in containers which means it won't change anything locally and in some cases that's exactly what you want especially when you want to reduce the parity between test and production or staging and production but in this case it's local right so you want something to happen locally and local is not linux it means it's a mac so then you have that thing to deal with in which case brew helps and asdf helps and a couple of tools help but you still have to know what are the commands that you have to run in what order what needs to be present when and this basically captures all those commands it's a little bit like make which we had yeah right and we removed but this is a modern i would say version of that much simpler much more
Starting point is 01:25:43 streamlined and a huge community around it. I was surprised to see how many people use Just. By the way, huge shout out to Casey, the author of Just. I really like what he did with the tool, like 20,000 stars on GitHub. A lot of releases, 114 fresh releases, 170 contributors. Yeah, it's a big ecosystem I have to say without one more question on this without me having to read the docs thank you if you can help me on this can I do just dash
Starting point is 01:26:13 n install so I can just see what it might correct what it might do exactly okay and dash n it basically stands for dry run. Right. The reason why you have to do it before the recipe
Starting point is 01:26:27 is because some recipes can have arguments and if they don't, like if you do the dash N at the end, it won't work. So it has to be the command just, the flags,
Starting point is 01:26:36 and then the recipe or recipes because you can run multiple at once. Very cool. But yes. I assume that because like any good hacker that writes a CLI that's worth his weight in gold would always include a dash n right a dry run yeah good good job what was his
Starting point is 01:26:50 name the maintainer casey let me see if i can pronounce his surname he's casey on github by the way rodar more the blue planet apparently casey rodar more you can You can correct us, Casey. Shout out to Casey. Yeah. C-A-S-E-Y. GitHub.com slash C-A-S-E-Y. GitHub.com slash... I'm just kidding. I was going to say it one more time.
Starting point is 01:27:16 Thanks, Casey. Are we stuck in a loop? Rodarmor. Rod armor. Rod armor. Rod armor. Rod armor. Yes, I like that.
Starting point is 01:27:23 That's how we're pronouncing it. Casey Rodarmor. Correct us if that's. That's how we're pronouncing it. Casey Rod Armor. Correct us if that's correct or correct us if it's not correct or don't correct us, but go to getup.com slash Casey, C-A-S-E-Y. Just do it.
Starting point is 01:27:36 Just do it. Just do it. That's a good one. I like it. That's cool, man. Thank you for doing that. Not a problem. I enjoyed it.
Starting point is 01:27:42 It was fun. Okay. Homelab production. Homelab to production. So next week on Wednesday, you for doing that not a problem i enjoyed it was fun okay home lab production home lab to production so next week on wednesday it's talos con and i'm calling it justin's conference it's the garrison con the garrison con exactly i'll finally meet justin in person i'm giving a talk it's called home lab to production it's i think it's 5 p.m so one of the last ones we'll have a lot of fun
Starting point is 01:28:06 I'm bringing my Homelab to this conference so we will have fun I almost commented on that it's not quite a Homelab it's more of a mobile lab it is a mobile lab but I will have a router with me so it will be both the actual device and the router and yeah we'll have some fun
Starting point is 01:28:21 now you're bringing two of them with you or just one the device the Homelab plus the. Now, are you bringing two of them with you or just one? The device, the Homelab, plus the router. So two devices. Okay. Well, you want two of everything. Two of them, yes. Well, we are going into production. So we're going to take all the workloads from the Homelab
Starting point is 01:28:34 and we're going to ship them into production during the talk and we're going to see how they work. We're going to use Talos, Kubernetes, Dagger is going to be there. So yeah, we'll have some fun. So this is a live demo then, basically. It's a live, yes. Well, it's recorded because, you know, I want to make sure that things will work,
Starting point is 01:28:52 but I will have the devices there with me. You never know what Wi-Fi is like. And that's the one thing which I don't want to risk. Yeah, you can never. Even like 4G, 5G, even mobile networks are sometimes unreliable. But I'm looking forward to that. So that's like,
Starting point is 01:29:08 and it will be a recorded talk as well. So yeah. Well, that's good because TalosCon is on-prem free and co-located with SRE Day. However, it's also over with. By the time this ships, it'll be two days in the past. And so happy to hear Ger, that there'll be a video because certainly our listener will want to see what you're up to
Starting point is 01:29:28 and it's in the past tense. So there you go. And guess what? What? I'm going to be recording myself as well. Okay. What are you holding up there? I'm holding a Rode Pro.
Starting point is 01:29:41 Do you know the Rode Pros? Like the mini recording microphones? Yeah, You can like clip them to your shirt, something like that. Exactly. So I have two of those. Boom. And two cameras. I'll take them with me. They're 361. So I'll be recording like the whole talk and then editing and publishing it. So that's the plan. Cool. So whatever the conference does, great. But I also want to do my own. Yeah. So that's the plan. Full control. Indeed. Awesome.
Starting point is 01:30:06 Well, great conversation. Good progress this session. What do you call it? Disguising. Disguising, yes. What do we want to accomplish for the next one? Are we on the right trajectory? Like in terms of the things that we talked about,
Starting point is 01:30:19 in terms of what we think is coming next, did we miss anything? It'll be Christmas or just before Christmas. I think the just stuff with the database and branching with Jere being able to pull that down to be a small but big win. Okay. I think, you know, continued progress, obviously, on the Pipe Dream. Pipely.tech.
Starting point is 01:30:39 Pipely.tech. I like it. Did you buy the domain? No, but it's available. Not available? It is available for $10. Pipely.tech. I don't know.
Starting point is 01:30:47 I think we've got to get pipe.ly. Otherwise, we're just posers. But I like pipely.tech as well. So we might have to raise some money for this. If we're going to have to buy pipe.ly, we might need $50,000. The future's coming, and we're going there. Kaizen.
Starting point is 01:31:03 Kaizen. Kaizen. Bye, friends. What do you think about our pipe dream? Should we turn it into a pipe reality? A pipe-ly, if you will? Let us know in Zulip. Yes, we are hanging out in Zulip now.
Starting point is 01:31:24 It's so cool how we have it set up. Each podcast gets a channel and each episode becomes a topic. This is great because you no longer have to guess where to share your thoughts about a show. Even if you listen to an episode way later than everybody else, just find its topic and strike the conversation back up. There's a link in our show notes to join ChangeLog's Zulip. What are you waiting for? An engraved invitation?
Starting point is 01:31:49 Hey, it's still September, which means we're still trading free ChangeLog sticker packs for thoughtful five-star reviews and blog posts about our pods. Just send proof of your review to stickers at changelog.com along with your mailing address and we'll ship the goods directly to your mailbox anywhere in the world. Let's do this. Thanks once again to our partners at Fly.io,
Starting point is 01:32:13 to our beat-freaking residents, The Goat, BMC, and to our longtime sponsors at Sentry. Use code changelog when you sign up for the team plan and save yourself $100. That's almost four months free. Next week on The Change Log, news on Monday, Ryan Dahl talking Dino 2 on Wednesday, and a fresh episode of Change Log and Friends on Friday.
Starting point is 01:32:36 Have a great weekend. Leave us five-star reviews if you want some stickers, and let's talk again real soon.
