The Changelog: Software Development, Open Source - Kaizen! Pipely is LIVE (Friends)

Episode Date: August 8, 2025

Gerhard calls Kaizen 20, 'The One Where We Meet'. Rightfully so. It's also the one where we eat, hike, chat, and launch Pipely live on stage with friends. ...

Transcript
Starting point is 00:00:00 Welcome to Changelog and Friends, a weekly talk show about doing big things. Thanks to our partners at fly.io, the public cloud built for developers who ship. We love Fly. You might too. Learn more at fly.io. Okay, let's Kaizen, live in Denver. Well, friends, I'm here with Damian Schenkelman, VP of R&D at Auth0, where he leads the team exploring the future of AI and identity. So cool. So, Damian, everyone is building for the direction of GenAI, artificial intelligence, agents, agentic. What is Auth0 doing to make that future possible?
Starting point is 00:00:58 So everyone's building GenAI apps, GenAI agents. That's a fact. It's not something that might happen. It's going to happen. And when it does happen, when you are building these things and you need to get them into production, you need security. You need the right guardrails. And identity, essentially, authentication, authorization, is a big part of those guardrails. What we're doing at Auth0 is using our 10-plus years of identity developer tooling to make it simple for developers, whether they're working at a Fortune 500 company or at a startup that right now came out of Y Combinator, to build these things with SDKs, great documentation, API-first types of products, and our typical Auth0 DNA. Friends, it's not if, it's when, it's coming soon. If you're already building for this stuff, then you know, go to auth0.com slash AI.
Starting point is 00:01:51 Get started and learn more about Auth for GenAI at auth0.com slash AI. Again, that's auth0.com slash AI. Okay, we're recording. Yeah, we're good. All systems go? Ready? Ready. Well, we're here.
Starting point is 00:02:19 How did Kaizen begin? It was a long time ago in Japan. Yeah. I don't know. How did our Kaizen begin? 1,300. That was a year.
Starting point is 00:02:31 Something like that. It began with this crazy idea that, let's improve, but in a consistent way, so that every 10 ship-it episodes, we'll talk about the improvements that we drive in the context of change log. Share them publicly, and that was our reminder that, hey, 10 episodes will be talking about everything that we've done. Some called it navel-gazing. I don't think that's what it was.
Starting point is 00:02:57 That was me, probably. It was good fun And yeah, it's been four years It's been four years. It's been four years. It's been longer than four years. Has it? A decade. We've been chisning a decade.
Starting point is 00:03:10 We've just started calling it as a Kaiser. So, Kaiser 1 was about four years ago. Kaiser as it is materialized roughly four years ago, but the relationship, that's right. I bring that up, is I want to say one thing. It's like, these guys are my ride or die people. Gerhard's amazing.
Starting point is 00:03:26 Jared, you know you're amazing. but like the the magic and the beauty that comes from this relationship has just been tremendous and to be here and share them with you all and to share kyson 20 this navagazing approach to our platform but this constant attention to detail of improvement and i think particularly with what we'll talk about today is unique to us specifically in that we built some infrastructure that's used by us specifically, where a bohemist, not negatively, is just not maybe the right fit, and we've been holding it wrong to some degree. But this, what we're doing here is just a wild ride, and I'm excited for this moment.
Starting point is 00:04:04 Is there a way to hold it right? That's the question I continue to ask myself. It turns out, if you gaze at your navel long enough, there's cool stuff in there. Definitely. So thank you for joining us. But you have to do something about this. You have to have a proactive approach to it. We pull out a few of our treasures that we've found along the way,
Starting point is 00:04:26 and something that we've built over the last, when did we start Pipely? Pipely was, I think, 18 months ago, roughly. The idea was, shall we do this thing? I mean, we've been talking about it long enough. Shall we actually try doing something about it? And the beginning was, as all beginnings, like, can we do this? How long will it take? What will it take?
Starting point is 00:04:50 Do we know what even needs to happen? And that's how the conversation started. And many of you that have listened to those conversations remember us, right, how we were pondering. Should we, should we not? Are we crazy? Three wise men, the question mark was really the emphasis. Like, are we wise doing this? We have no idea.
Starting point is 00:05:10 Right. And then along the journey, the best part was the friends that joined us. So it turned out there was not such a crazy idea. And just the idea of improving something in public. so that others see how we do it, and it's our approach, and maybe it will inspire others. And I think that worked really well, and here we are today with friends.
Starting point is 00:05:32 With friends. And that's, again, that's the focus. So improving with friends. Makes me so happy. Thank you all for being here for that. Thank you. We appreciate you, very much. Absolutely.
Starting point is 00:05:43 So it all started with a dream, a pipe dream. Indeed. On a Kaizen number that I can't remember. 13, I think. Kaisen, 13. I think so. When we were lamenting our cash miss ratio on Fastly. But they had this really nice varnish as a service that we've been using for a long time.
Starting point is 00:06:05 We just didn't like the way that we had to use it through Fastly, which is through a web UI and a strange comment-based version and control system that we invented on top of it. Yes. Put the name of the person who does the thing and the thing you're doing as you update the config. which would produce this gnarly, I don't know, 1,000-line varnish config that would work mostly sometimes. And it was great when it worked, but when it didn't work,
Starting point is 00:06:32 it was very difficult for us to have visibility and debugability in order to fix that. And so I said on Kisen 13, wouldn't it be cool if we could just have this 20-line varnish config in the sky, just deployed around the world, and it could just be just for, us, everything we need and nothing we don't. And I, and Gerhardt got a twinkle in his eyes.
Starting point is 00:06:57 Yeah. I love a challenge. 20 line. I can give you 500 lines. Well, I think it's close to a thousand now, but anyway, anyway. So it was a pipe dream. But that began our journey down this particular path, which we are at the end of, or a milestone at least, you're never at the end of this kind of a path. Nope. Where we decided on our Kaizen 19, hey, we're ready. We're ready. to run this, we call it Pipely is the open source project. Pipe Dream is our instance of that because this has been our pipe dream.
Starting point is 00:07:29 And we're there. We're almost there. We are there. But on Kaiser 19, we were like right at the precipice. We are there. And I said, what if we just get together and do it together? Yeah.
Starting point is 00:07:40 And you guys said, yeah. And I was like, how about on stage in Denver? And they're like, sure. And so here we are. Yeah. There's your setup, Gerhard. The way Kisen's usually work is Gerhard works way harder than Adam.
Starting point is 00:07:50 and I do. We show up. Yes. And I say something like, Gerhard, take us on a ride, tell us what we're going to do. And so, Gerhard, take us on a ride. What are we going to see today out of Pipely and Tyson 20? Thank you very much. So we will start with a little bit of history so that everyone is able to visualize a couple of very important milestones. Then we're going to ask you for your questions. So you better think of some good questions to get back at B or at us. So let's see how that goes. And then we'll do something special because everyone took their time, this took considerable effort on all your parts to be here,
Starting point is 00:08:33 and we want to recognize that effort by doing something special on stage live. So let's see how that goes. All right. So first of all, let's start with the beginning. The beginning is July 26th, 2025. This is an important moment. we're all here today, is the first time that this is happening. Kaizen 20 is the moment.
Starting point is 00:08:52 Thank you all for being here. I'm sure that you all know this. Now, it says 20 episodes. Actually, it's 19. Like, one was republished, but it's been 19 Kaysen episodes. And this started actually in 2021, in July. Like, I had to look this up. I wasn't sure exactly how long it was, but that's when this started.
Starting point is 00:09:12 So this journey, officially, when we started having these conversations, all recorded, all remote happened many years ago and it took us this long to finally do this in person. Ten years. Right. And if you count exactly, like when this unofficially started yet, it's been a decade.
Starting point is 00:09:31 It's been a decade. So we're good friends now. I'm stretching it by one. 2016 was the year we launched what is now our platform. It's an open source platform. ChangeLaw.com you all know that. It's on GitHub. Some of you contributed. You're probably in the issues or at least reading them. But it's been
Starting point is 00:09:46 a journey. It's 2025. It's not quite 10 years, but I'm rounding up. Help me out here. Give me a gimmie. Yeah. Next year, maybe we do something even greater, right? That's right. Maybe we're building up to that. Cool. Okay, so the reason why we're doing this is because it's about friends. This is the new context. It used to be ship it. Before it was unofficial, it used to happen just among like a few people. And now, all of you are here. And we appreciate you. So thank you, very much, because as you know, it's so much better with friends. So change log and friends
Starting point is 00:10:21 is the context, and including friends that want to be here, but maybe can't. Next year, hopefully, or the next time we meet, it's a good reason to make this crowd bigger, but this is how it began. This is the first moment we have met in person, and it's amazing. All right. Now, you haven't noticed here, but this is something that I pay attention to, because the person that takes you picture is the pictures is never in the pictures so we need to acknowledge the person taking pictures there he is and that is Aaron he's over there he's still big hey Aaron hey thank you very much so yeah thank you for being here and capturing this moment for all of us that's so great so we are friends I think we
Starting point is 00:11:10 can call ourselves friends when you go for a beer and we go for a hike and we share a meal so that's I think what it means to the beginning of a nice friendship that enjoy making things better. That's really what brings us together. And we do it in a public way, we share, and even when we get it wrong, that's fine. Because it's always about improving. It's not about being 100% right
Starting point is 00:11:30 or knowing it all, figuring things out. And that makes me so happy. So, you all know this. We post these as discussions on the changelogog.com GitHub repository. This specific is discussion, five, four, six. And if you want to do a bit of digging to see what went into making this Kaizen, that's where you can go for like the technical stuff, for the pull request, for the code,
Starting point is 00:11:58 all of it is there. And I think the first step that we need to do is set the record straight. And the reason why I remembered it is because of that very small thumbnail. There's three people there. There's BSD PHK. So Ph.K is Pool Henning Camp. I didn't know who he was. until recently. He is the guy that had a huge contribution on free BSD, NTP time counters, free BSD jails, MD5 Crypt, and you all know this. Can I say it? Yes. The bike shed. That's it. That's right. Bike shed. Yeah. He invented the bike shed. 1999, the guy that's so, I think that's really important. And we didn't know, a bit of history. This was an important moment. And I think I'm going to switch the screen now to this.
Starting point is 00:12:46 this, because this is important, so Andrew O, he wanted to set the record straight, no, Algreen, sorry, Algreen, May 13th, he wanted to set the record straight about Varnish, which is the technology that we use, the relationship to Fastly and the relationship to Varnish Enterprise, and it's all here. So Fastly is not running Varnish Plus, which is a Varnish software product, and they have their own fork. So that is important. It's similar, but not quite the same. What we use is nothing from Varnish Software, not Varnish Plus.
Starting point is 00:13:26 We're using the Varnish, cache, the open source project, as anyone can consume it. So we have built, everything that we've built is on open source technologies. And that is important to us because that is in our DNA. So there's some history here. That's important. so we are building on Varnish Cash Open Source. All right.
Starting point is 00:13:49 Back to where we were. Cool. So we did a thing, and this thing is important, but actually we did a few things over, I think, four years. We did quite a few things. Ten years. Ten years. Yes.
Starting point is 00:14:05 Ten years. Okay, okay, ten years. Just keep that in mind. Adam's not going to let it go. Ten. Nine. Ten years later. Yeah, that's a good one.
Starting point is 00:14:16 So they were all good. I think most of the things were good, right, that we've done. Most of the things were good, but that's not all things. Right. Shall I fix this now? No. No, it's okay. Okay.
Starting point is 00:14:26 Sorry. My bad. So what are some of the things that you don't think went as well? Let's talk about that. Out of your collective memory, maybe even someone from the audience can tell us if you think that didn't go so well. S3 cost S3 costs
Starting point is 00:14:45 Yeah we spent more money we should have on S3 We're ballooning And we didn't know it Egress Until that one Kizen episode Where I finally looked at it And then I was like
Starting point is 00:14:55 Oh We should change this We should address this The bill went from like 15 bucks a month Like maybe 180 at peak I believe Which is still small But like we're not massive We're not in operation
Starting point is 00:15:07 Yeah Yeah That's not cool But now we're on R2 And we're spending like $8 a month Yeah, they basically pay us. So now it's good. They should pay us.
Starting point is 00:15:16 Yeah. Now it's good, but that wasn't good. I deleted a few things, but I should have, as you do. Yes. Right, flying too close to the sun, it's a good thing. There was one time where Gerhardt went in to, I don't know what it is anymore. A config or one of my pieces of code, and he changed something that to me was inscrutable. Like, why, what, who? But I had so much respect for the guy and so much imposter syndrome
Starting point is 00:15:45 that I thought, surely, he knows better than I do. I remember that. Yeah, yeah. And I must be a fool. But it looks wrong, but I'm going to go ahead and roll with it because Gerhard knows what he's doing. And then I found out much later you had no idea what you were doing. No, I don't know.
Starting point is 00:15:59 So this is something really important, and it's like at the heart of what we do, right? We are figuring stuff out, and we are okay to admit it publicly, right? Like, we mess things up, but there's no way you're going to learn if you don't make mistakes. Doesn't matter how much experience you have. Doesn't matter how many things you think you know. You never know. Let's be honest. You don't really know.
Starting point is 00:16:19 You're mostly making stuff up. Some things help you, but it's all in the confidence that you will be able to figure it out, will be able to push through. Just stick with it long enough. That's all it takes. All right. So today, we did the biggest thing ever. What was that?
Starting point is 00:16:39 A live show? A live show, yes, yes. Yes. we showed up in a city we don't live in right people flew here with us that's a big thing is that what you're referring to yeah that's one of the big things so what else uh well I'm so curious what we did
Starting point is 00:16:55 what did we do tell us what did Gerhardt do yeah let's scratch that what did Gerhard do yeah yeah yeah what did you do so I did something I will show you very soon trust me it's coming but it's the biggest thing ever Alright, so the problem, the fact that it's green, don't let that mislead you, it's a bad thing. Okay, color, psychology of color is very important. So what is this? The 17.93% is the cash hit ratio on our current production CDN, which is low, right? That's really low. It means that less than 20% of the requests get served really fast, 80% plus are slow and while it doesn't really impact us I mean it's all good for us it impacts you when you load something it takes a while to load and it shouldn't be
Starting point is 00:17:51 that way right things should be instant things should be very very smooth and when something goes wrong in the back end for example if 80% of the requests they have to go back to the back end it means that they can fail so the chances of something failing are fairly high there's something wrong in the back end and that's the other thing which I keep thinking about a lot. Yeah. And it's worth noting that our data is unidirectional. I mean, we rarely
Starting point is 00:18:18 change anything once we publish. Yeah. Episode one is still episode one. Yeah. Every once in a while, you know, we might put the wrong audio in the wrong episode or we might have to edit something that... Or you ship the entire wrong audio. Yeah. Like I've done recently. Adam did recently. So we make mistakes and when you make a mistake you want to be able to quickly rectify it,
Starting point is 00:18:41 purge everything, and get back to where you were. But generally speaking, you put an MP3 up on a CDN, and then you deliver that same MP3 in perpetuity. And so this number is abysmal. That's terrible. We should never have misses. And that's what I've been saying for 10 years. Right.
Starting point is 00:18:59 How do we keep having cash misses? Okay. And the problem is that in this case, it's mostly the website. So the change log website appears slow in a lot of cases. And that's not great. So that was the problem that we were trying, or that was the thing that we're trying to improve. That's how this started. All right. Episode 26, this is what Jared was mentioning when we began. Should we build a CDN? That was the moment we were thinking, should we do this? I mean, are we really at that point? And it took a while, but the conclusion was yes. I mean, we should at least try and see how far we get. That was 18 months ago. little tech, we had this up for a while, we talked about it at Christmas, a new CDN is born, it hasn't been updated recently, but it will be, but this is the home for the open source project
Starting point is 00:19:50 that we would like others to use at some point. I think it's getting there, it's not quite there yet, but we have made many improvements to make it easier to consume. Well, friends, I'm here with a new friend of mine, Harjott Gill, co-founder and CEO of CodeRabit, where they're cutting code review time in half with their AI code review platform. So Harjot, in this new world of AI generated code, we are at the perils of code review, getting good code into our code bases, reviewed, and getting it into production. Help me understand the state of code review in this new AI era.
Starting point is 00:20:38 The success of AI in code generation has been just mind-blowing, like how fast some of the companies like Cursor and GitHub co-pilot itself have grown. The developers are picking up these tools and running with it pretty much. I mean, there's a lot more code being written. And in that world, the bottleneck shapes, the code review becomes like even more important than it was in the past. Even in the past, like companies cared about code quality, had all this full request model for code reviews and a lot of checks. But post-gen AI, now we are looking at first. of all, a lot more code being written. And interestingly, a lot of this code being written is not perfect. So the bottleneck and the importance of code review is even more so than it was
Starting point is 00:21:19 in the past. You have to really understand this code in order to ship it. You can't just wipe code and ship. You have to first understand what the AI did. That's where code rabbit comes in. It's kind of like, think of it as a second order effect where the first order effect has been Gen AI and code generation. Rapid success there now. As a second order effect, there's a massive need in the market for tools like CodeRabbit to exist and solve that bottleneck and a lot of the companies we know have been struggling to run with especially the newer AI agents. If you look at the code generation AI, the first generation of the tools were just tab completion, which you can review in real time and if you don't like it, don't accept it. If you like it, just press tab, right? But those
Starting point is 00:21:56 systems have now evolved into more agentic workflows where now you're starting with a prompt and you get changes performed on like multiple files and multiple equations in the code. And that's where the bottleneck has now become code review bottleneck. Every developer is now evolving into a code reviewer, a lot of the code being written by AI. That's where the need for Code Rabbit started and that's being seen in the market like Code Rabbit
Starting point is 00:22:18 has been non-linearly growing, I would say. It's a relatively young company, but it's been trusted by 100,000 plus developers around the world. Okay, friends, well, good. Next step is to go to coderabbit.aI. That's C-O-D-R-A-B-T-A-A-I. the most advanced AI platform for code reviews to cut code review time in half,
Starting point is 00:22:40 bugs in half, all that stuff. Instantly, you've got a 14-day free trial, too easy, no credit card required, and they are free for open source. Learn more at codrebit.aI. This is something that has been bugging me for years. We run on fly.com. And flat.comio has points of presence all over the world. But our application only runs in a, well, in Ashburn, Virginia, because it's closest to the database. Of course, it's going to be close to the database, right? Because data has gravity.
Starting point is 00:23:16 But we wanted to distribute the application for a long, long time, but it was never the right model. With a CDN, that's exactly what we want to do, right? We want to get those instances all over the world, so finally we can say that after all these years, we are holding fly.org right. And it's been working pretty well. Is that the big thing?
Starting point is 00:23:38 I think it is a big thing. Is that the big thing your reference? No, no, no, no, it's coming. I'm just waiting for that moment. I'm just waiting for that moment. This is one of the things. We are holding it right. I want to say one of the thing, too, real quick.
Starting point is 00:23:49 Leave that there. That's fine. That's a slide's good. The next one is good. Next one? Yeah, the next one's fine. Let me show some things, but you see Fly here. They're not here.
Starting point is 00:23:56 We didn't make this about sponsors. We wanted to be about you all, us doing a normal live show together that wasn't like, hey, let's charge our normal smart for tickets. or whatever it was, we just want to go somewhere, have some fun, get together, and just share this story. But I do want to recognize that Fly and Kurt and the team there
Starting point is 00:24:14 have been extremely supportive of us, not saying you should use them, but they love us, we love them. And what we're building is really on top of the best platform we believe. So Fly is amazing. Fully agree. Yeah, fully agree with that. Jared? All good? Yeah.
Starting point is 00:24:32 All right. So this happened about six hours ago or seven hours. hours ago. This was 1 a.m. last morning. Okay, something went wrong. So things will continue going wrong. You'll never really get there. It's all about the mindset of, can we do it a little bit better? And again, we are figuring stuff out. So this was yesterday last night. So this is a question for the audience. Who would like to see us improve this specific crash live? Yeah? Yeah? All right. Let's do it. Cool. So, um, What do you think happened here?
Starting point is 00:25:07 Just like, let's do like a very quick understanding of what the problem is. What do you think happened here? Can you describe the architecture of Pipely, like Pipe Dream in terms of what's working out there? I can. So it's varnish instances. I mean, Varnish is really at the heart of it. There's a couple of other components around it, but Varnish has the heart of it. Varnish makes requests to backends.
Starting point is 00:25:32 The backends in this case would be assets, for example, restore static assets. assets. We were mentioning MP3 files, PNG files, JavaScript, CSS, that kind of stuff, which rarely changes. Then there's a feeds, backhand, feeds, stores, generated feeds for users, plus members, and shows, various shows. Every show has its own feed, and then you can also create your own custom feed, and so there's something like on the order of 600 to 800 feeds, I would say. Eight of which are way more important than the others because they are publicly consumed by all the podcast indexes. And those feeds get the most requests from all the platforms, the podcasting platforms that consume the change log episodes, and they distribute them to their audiences, or to your audiences, but through that platform. Our audiences.
Starting point is 00:26:23 Our audiences, yes, our audiences, for sure. And what this is, basically, we get these instances that are distributed around the world so that, that the delivery of that content gets accelerated. The one thing which I haven't mentioned is the applications, the change log application, the website, which is an important one. That's where many users go to, for example, look at the home page, look at news, things like that.
Starting point is 00:26:46 And that is the one which is most sensitive to latency, because as I mentioned, it's only in one location, close to the database. So we need to accelerate delivery of that website to users which are around the world, including Australia, South Africa, South America, all over the world. We have a very diverse audience.
Starting point is 00:27:04 And we want those users to have just as good experience as anyone else that's maybe in the North America. So in this case, one of our ten? Yes. Ten instances of the Pipely application... Yes. Ran out of memory. Misty Bird, 4931.
Starting point is 00:27:25 Yeah, Misty Bird. That's the one. Misty Bird crashed. That's what happened. Exactly. Okay. So now to your question to the audience was... Why do we think the application crashed? Why do you think it crashed?
Starting point is 00:27:35 And anyone can ask or accept Matt, Nebiel and James. Because they already know why it crashed? It ran out of memory. Yes, of course. Thank you. I appreciate that answer. Someone's paid attention, but why did it run out of memory? Why do you think it ran out of memory?
Starting point is 00:27:54 Because there wasn't enough memory. Right, I love some trolling. Seriously now. You're going to make me say it, right? Okay, so as more content gets cached in memory, the problem is there's like a configuration which I wish it was easier to make, but you have to manually adjust how much memory you give to varnish out of the total memory available.
Starting point is 00:28:19 So there's a dance that you need to make so that you know how much is enough so that when more memory gets allocated, the thing doesn't fall over. I wish this particular thing was easier, maybe something that we can improve but for now you have to fine tune it and find if you have four gigabytes of memory total
Starting point is 00:28:37 how much should Varnish be allowed to use and if you think it's four gigabytes it's way too much so that's what we're going to do now I'm going to switch to some live coding and show what happened so actually no one let me
Starting point is 00:28:52 maybe let me try this how's the font can everybody see the font can you see what's there I'll make it a little bit bigger a little bit bigger a little bit bigger. Okay, so this is the change, and if I'm going to undo this, you'll see what it was before. So we, I thought, let me take responsibility of this, I thought that 800 megabytes is going to be enough. This was the case when the application had 2 gigabytes. So in the instance had 2 gigabytes, 800 megabytes of headroom was enough, so the application wouldn't crash
Starting point is 00:29:22 because of out-of-memory issues. And apparently, when you have 4 gigabytes, 800 megabytes is not enough. So what happened is 33%. 33% should be enough for this to work. And again, you have to specify this explicitly. I'm sure that we'll improve this at some point. This is what this looks like. So we're going to push this change into production lie right now. Here it is. Here's the change. So we'll say increase varnish memory, or limit. Let's do that. Limit, Varnish memory, to 66%, 66%, right? Varnish memory can only use 66%. Okay, I committed, I...
Starting point is 00:30:08 What's going on? Of course, I need to connect. Right, let's do that. Let's connect. You're offline. Yeah, I am offline. Let's do this. Again, this is live.
Starting point is 00:30:16 Like, this is not recorded. It hasn't happened. Let's see how this... I normally record these things, but let's see what's going to happen. Good time to go live, Gerard. There you go. Let me just read this Bill Gates' quote.
Starting point is 00:30:28 Okay. You thought 800 megabytes ought to be enough. Yes. Bill Gates, 640K ought to be enough for anybody. Right, right. Apparently not. You know, it's not an unreasonable thing that you thought. Nope.
Starting point is 00:30:40 Okay. All right. So what happened is we committed and we pushed this commit, this one commit. And to deploy something into production, all that we have to do is tag the repository. So tag and commit. So RC3 is the last one that went out. You can see when it went out. It was two days ago.
Starting point is 00:31:04 We're going to do an RC4 now. And all we have to do is this. J tag. J stands for just. So I'll do just tag because I don't want to remember the command. It's quite long. So I'm going to do the Shah.
Starting point is 00:31:18 The Shah in this case is going to be, sorry, tag. So V1.0.0 RC. 4. There you go. Nice. We'll just do head, right? Of course, it's going to be head. And the discussion, this is the change log discussion
Starting point is 00:31:31 where you can basically listen about this thing. So this is us preparing Kaysen20. Remember that GitHub discussion I talked about, that's what this is going to do. And I'm going to push it now. No, push. Get push.
Starting point is 00:31:46 Oh. What happened undefined? That shouldn't have been fine. Looks like the connection. The connection dropped. Someone's blocking port 22. No, no. The connection dropped.
Starting point is 00:31:56 Yeah. Let me just go back. Let me read a Bill Gates quote. Yes, sure. Another one. Do we not have a house in there? No, no, it's all good. Your chat, GPT.
Starting point is 00:32:03 Give me another Bill Gates quote. That's a good push again. I mean, just make this. It's going to slow. There you go. That pushed. Cool. So the tag went out.
Starting point is 00:32:15 And what we're going to see now is, actually, this one right here. It's going to go to the actions. So this is a live, this one right here. 100 RC4, and it's going to push this change into production across all instances, it's going to roll them live, we're deploying this. So why is it significant? Sorry.
Starting point is 00:32:39 Let me set the record straight. Sure. Bill Gates in 1996 said, I've said some stupid things and some wrong things, but not that. No one involved in computers would ever say that, that a certain amount of memory is enough. Right. So I guess I take it back. Is that true? I don't know.
Starting point is 00:32:58 Is that factually correct? What is your source of information? Yes. So there you go. He may or may not have said that. So we're watching it roll out. Let's see. So now we're seeing like a live rollout and we'll see.
Starting point is 00:33:11 So publish and deploy tag is seeing all the validation. Go on, go on. It's moving on. It's going through the changes. Yeah, it's say Garris. But it's going to resolve itself. It just takes a while. Done.
Starting point is 00:33:23 So there you go. How long does it normally take, Gerhard? Well, last time. it took two and a half minutes. So we are about... And you thought this is a good idea. Of course, why not? Look, we're already creating the new machine.
Starting point is 00:33:32 Look at that. That's going. All right. So this is something that I think is important to get right early on. And that thing is how long does it take you to push a change into production? So how long does it take you to make a change and see that change we're all live? And this is something that we've been working on for quite a few years on this thing. We do push to production.
Starting point is 00:33:55 we own our production we don't develop in production that would be crazy even for us but it's like all like this mentality of if I'm going to make a change how long will it take me to see that change happen and if you can shorten that time
Starting point is 00:34:09 you're in a good place 10 years ago it took 20 minutes or so it was long it was a long time it was long enough that you go do something else and now it's two and a half minutes maybe and this is a global CDN
Starting point is 00:34:22 so this is we are pushing this change so all the instances of the global CDN that we run in where do we run them let's have a look so this is it so we are Ashburn Virginia these are the instances two days ago if I'm going to refresh this you'll see the new instances come live so that was two days ago that was like the last deploy let's refresh deploying 57 now that's the commit we have new instances coming up and you can see like that we do blue green of course we need to blue blue green we need to deploy the new ones the old ones are still there two of everything, it's an important rule, and yeah, this works fairly well. It will not have any downtime,
Starting point is 00:34:58 and we can look at that in a minute. So Ashburn, Virginia, Chicago, Dallas, Texas, Santiago, Chile, San Jose, California, Heathrow, of course, London, Frankfurt, Sydney, Australia, Singapore, and Johannesburg. So these are all places where we deploy these instances, and they will accelerate all the content to our users. Does anybody out there have a deploy pipeline that runs faster than two and a half minutes? Does anyone have a deploy pipeline? Let's start there. I got a hand up. Two people, three people, four people. So I think we're winning. Five people, six people. Cool. Two and a half. Two and a half. Nice. It only feels long when you're on stage.
Starting point is 00:35:45 Yeah. Well, now. Well, this is real. This is real, like not edited. This is what it feels like. Now, I don't think two and a half minutes is long. I think we can improve it, but you need to think about all the things need to happen behind the scenes, right? The allocation of resources, the health checking. Like, how do you know what you've put out there is correct?
Starting point is 00:36:05 And you need to wait a while to make sure that the thing doesn't crash. That's why you need to wait at least 60 seconds before you can say, yep, this is good. You need to do a few health checks. Because the thing starts falling over, you don't want to leave that thing running in production. Of course. So I think we're in a good place. The one thing which I wanted to show is if I come back here, can you see that?
Starting point is 00:36:24 Is that graph looking good to you? Yep, cool. So do you see this yellow line? This was the instance that crashed. So this one, if I, there we go, it's San Jose, California. For some reason, there is a lot of requests hitting this instance, and those requests, they don't look like human requests. So I think we may have some sorts of bot situation going here. on, some sort of, I don't know, LLM trying to learn.
Starting point is 00:36:51 There's, yeah, there's like a lot of, lot of requests. And if I'm going to look at this one, the CPU utilization, this is the same instance right here, the one in San Jose, California. I mean, look just how abnormal this instance behaves. And if I'm going to remove that, you can see all the others. So all the other instances, the CPU usage is fine, but this one is spiking up to 100%, because it has a lot of requests. Now, obviously, this is in front of the application, so five-minute load average.
Starting point is 00:37:18 We can also see here, San Jose, California. Yes? Can we see the endpoints that it's serving? The endpoints that it's serving? We can, yes. So let's do like a tiny review of right now. Jared's like, can we do X? Of course we can.
Starting point is 00:37:37 Of course we can. Of course, yes. No, we can. We can definitely do this. Okay. So I'm going to switch to this view. And this is the last seven days. and I'm going to make this a little bit bigger for you to see.
Starting point is 00:37:51 And can you see that all right? Okay. So you can see that we're here, right, July 26th. So the application went into production a few days ago. So not all of it, not completely, but we started rooting a bunch of traffic to the application to see how well it would handle our users. Is that the biggest thing we ever did?
Starting point is 00:38:16 Almost. It's getting close. It's getting close. It's getting close. But this basically shows that we had many, many steps towards this moment. And now, together with you, we've shown you how we can update something that's running live that is serving. I mean, this doesn't sound like a lot of requests, like a thousand, but the granularity is 30 minutes.
Starting point is 00:38:40 And we are sending a portion of the traffic, and it's handling it pretty well. And we can see that the biggest users, or like the most requests are coming from, ORD. Chicago. Chicago, that's the one. Frankfurt next, London Heathrow, and it's not me running low tests
Starting point is 00:38:57 and Singapore. So these are like the, and then San Jose, California, right here. So we are running live traffic, not all of it, a portion of it, but we wanted to see
Starting point is 00:39:08 will this thing continue working well. Did you know that? No. I don't think so. Yeah. So technically we're not launching Pipe Dream today because we launched it on Thursday
Starting point is 00:39:17 just meeting you, Exactly. Sorry, team. We did it. We barely launched it. We launched one out of five. One out of five, yeah. Yeah, so roughly 20% of requests go through our pipe dream at this point.
Starting point is 00:39:29 So whenever you go to changelog.com, that's it. One out of five requests hit the new instance. And we were able to see how does it behave with 20% of traffic. And it's working. No one complained. Adam didn't even notice. And that's like one of the best things. right like to roll things out you do it in such a way so that I mean it is a big thing to
Starting point is 00:39:53 us and to people that understand and know what goes behind it but to everyone else did anything change because if you do your if you do this type of job correctly all people see is maybe little improvements and if they're not paying attention even those they will miss they think you're always as badass well I think we are a good team and The other thing which I want to mention is that while what you see here again, it feels very, it just happened, right? It's like a small thing. The work that went into it, it was months and months of preparation, months and months of discussion, people joining us, working with us. I want to thank James. I mean, here's the first one that has joined this. Thank you, very much. James A. Yeah, thank you, James. Thank you. Thank you. talking in the various issues, the first one that we had, basically being with us for almost two years now, discussing about problems that we thought were problems, at some point we were
Starting point is 00:40:58 questioning, like, are we maybe too strict? Are we too demanding out of this? And no, no, I mean, we want things to be better. And this is why we want things to be better. So James, thank you very much for all the conversations, keeping us like big picture. like that's all objective perspective as well that was so so helpful and then Matt right Matt Johnson thank you very much for that thank you just thank you what a matter thank you it was all about like VCL and going a bit deeper just to have like another perspective on VCL I mean he has a lot of experience in VCR and Varnish in general but also documentation also like a diligent approach to how should we do this so that it's easier for
Starting point is 00:41:42 others. And that was like so great to have that help. Now that was months and months and months off. I mean, again, all of us have full-time jobs. All of us do something else. But in our spare time, we find a bit of time to help others. And this was that. So thank you very much for that. Matt had a good idea last night. Can I pitch it to you? Of course. Yes, please. Yes. So my desire was a 20-line BCL. Yes. You gave me a thousand lines. Matt says, we can count. You guys can just pull most of it out. into an inklet. Yeah, that's cheating, but yes.
Starting point is 00:42:15 I won't look. Yeah, of course. Great. Every time I go to the repo, I'll be like, oh, this is nice. We can give you 20 lines, Jerry. We can give you less than 20 lines, yes. I think that's a very good idea. Yeah, you just look number one, like step number one.
Starting point is 00:42:27 Yeah, exactly. We'll cut in that, for sure. So I wasn't prepared to do this, but let's do it. So we have, this is just, we have like a bunch of targets here. There's one that says, how many lines? Oh, my goodness. Right. So let's just run that live and see what happens.
Starting point is 00:42:42 So how many lines I'm going to tap this? So let's see how many lines we have. Okay. 961. Like, hang on. Let's see what's in Varnish. Hang on a second. Oh, no, we don't want that.
Starting point is 00:42:52 This is not correct. We just want ours. We just want ours. Okay. Shall we change this live? All right. Let's see what's going on. Why not?
Starting point is 00:43:00 So why not? So we want only V-C... Actually, that might work. Varnish. No, we want inside VCL. So yeah, that's just like one change. So let's go, just file. Do you want to do it?
Starting point is 00:43:10 Nope. I feel like you're stepping it. All right, so let's see how many lines, how many lines, there you go. So varnish, let's do varnish. All right, let's see that. Varnish, sorry, VCL. Okay, let's see if that works. There you go, how many lines? There you go.
Starting point is 00:43:24 These are the actual lines, all the lines. So 364, that's like the main one, and these are like the includes. 24 in this one, 106 and 15. Right. How many total? Less than 1,000. I think it's less than 1,000. 500?
Starting point is 00:43:39 Yeah, is. Getting to 500, between $450 and $500. A lot of these are just like static redirects. Any savants out to do the math, yeah? Do the math. Say again? Any savants out there, do the math. Look at that. 15. This is how we do it. 106 plus 24 plus 364. Look at that. 509. 509. 509 lines. So there we have our answer. Cool. So it's still good. Still good. All right. So.
Starting point is 00:44:03 Did you want to thank Nabil? Yes, of course. The beel. How could I forget the beel? Of course. So one really important thing is that varnish cannot terminate backends, which have SSL in front. So if the back end is talking HTTPS, varnish cannot use it out of the box. Varnish Enterprise and other products can, but the open source varnish cannot. We did a bit of digging, and we realized that's how we learned about Poole-Henning Camp, Ph.K for short, and he was always against including SSL
Starting point is 00:44:40 anywhere near Varnish because he would complicate things too much. SSL is really, really complicated. So with the Beals' help, we wrote, I don't know, like he wrote 50, 60 lines of GoCode that intercepts all the requests going to those back-hands, terminates SSL for Varnish, and presents the request unencrypted. A really simple, elegant solution,
Starting point is 00:45:02 one of the almost like sidecars that sits next to Varnish and helps it terminate requests which need SSL. So I'm not sure if that. Is that any of justice, Nebill? Genius. Absolutely genius. Let's hear it for him. Nabil. Good job, Nabil.
Starting point is 00:45:17 And his SSL termination. Thank you. TLS Exterminator. TLS Exterminator. That's the one. That's a good name. Cool. Well, friends, it's all about faster builds.
Starting point is 00:45:32 Teams with faster builds, ship faster, and win over the. competition. It's just science. And I'm here with Kyle Galbraith, co-founder and CEO of Depot. Okay, so Kyle, based on the premise that most teams want faster builds, that's probably a truth. If they're using CI provider for their stock configuration or GitHub actions, are they wrong? Are they not getting the fastest builds possible? I would take it a step further and say if you're using any CI provider with just the basic things that they give you, which is, If you think about a CI provider, it is, in essence, a lowest common denominator generic VM.
Starting point is 00:46:08 And then you're left to your own devices to essentially configure that VM and configure your built pipeline. Effectively pushing down to you, the developer, the responsibility of optimizing and making those builds fast. Making them fast, making them secure, making them cost effective, like all push down to you. The problem with modern-day CI providers is there's still a set of features. features and a set of capabilities that a CI provider could give a developer that makes their builds more performant out of the box, makes their builds more cost effective out of the box, and more secure out of the box. I think a lot of folks adopt GitHub actions for its ease of implementation and being close to where their source code already lives inside of GitHub.
Starting point is 00:46:54 And they do care about build performance and they do put in the work to optimize those builds, but fundamentally CI providers today don't prioritize performance. Performance is not a top level entity inside of generic CI providers. Yes. Okay, friends, save your time, get faster builds with Depot, Docker builds, faster get-up action runners, and distributed remote caching for Basil Go, Gradle turbo repo, and more. Depot is on a mission to give you back your dev time and help you get faster build times with a one-line code change. Learn more at Depot.com. Get started with a seven-day free trial. No credit card required. Again, Depo.com. Dev. We improved it. We're here. We did it.
Starting point is 00:47:43 Heisen 20. One more thing. This is too far out. This is too far out. Actually, no. We're good. We're good. Yeah, we're good. We're good. Anything else we want to cover before we do this? There's one more thing that we're going to do. one more thing chyzen 20 any any thoughts any any comments from the I mean you're here
Starting point is 00:48:05 do you have any questions do you have any comments do you have any thoughts anything we have a question where among friends yes come to the mic can we talk about how we're using dagger can we talk about how we're using dagger all right okay I can show you
Starting point is 00:48:20 how about that yeah cool so there is one very tiny change that I have here, which is using dash-slash cloud, this is an experimental feature that exists in the Dagger CLI. The reason why we do this is because we need a remote engine, a remote Dagger engine,
Starting point is 00:48:39 so that everything that runs in the context of Dagger is quicker. Over Wi-Fi, it would be over, like, my tethered connection, be very, very slow. So how we use Dagger, in this case, let's go here. We have a couple of commands. The one, I think, that's the most interesting one, for example, for running tests. So let me show you what that looks like.
Starting point is 00:49:00 I'm just doing just test. You can see there it does like the dash-dash cloud. Let's see how fast it goes. It starts an engine in the cloud. It connects to it. So we need to set up varnish. We need to connect varnish to all the backhands, the TLS Exterminator,
Starting point is 00:49:17 and you want to do it in a way that is as close to production as possible. So it's almost like we want to run the system as it runs in production, but we want to do it locally. Okay, in this case it's a remote engine, but from the perspective of how it gets put together, it's literally just the same context of running containers.
Starting point is 00:49:35 And because everything runs in containers, we get the exact reproducibility that we get in fly. So we get the same configuration, the same Linux subsystem, the same kernel, or like a very similar kernel. And what that means, I mean, in this case, it's even using Firecracker behind the scene, so it's very close to fly.
Starting point is 00:49:54 So we are able to test the system most accurately, and we're even able to run the system most accurately as it would do in production. And that part is hard because whatever you do locally, if I was running, for example, on the Mac, everything would be different. Even if I would have a VM, things would be different. So it's like that container is the container image,
Starting point is 00:50:14 the interaction between all the components, and when we ship something in production, it's as close as possible to that image. Even down like to the Go version, so right now, for example, what's happening here. We're pulling down, I mean, some of it is cached, we're pulling down various dependencies, and you can see they're all Linux ones. So Linux, Linux, and again, a VM on a Mac, it gets you close, but it's not the same thing. You get, like, subtle
Starting point is 00:50:36 differences. I'm not sure if that answers your question. Yeah? Cool. Good use of dagger. Any other questions before we, in the back, do the big thing? I think the Beale had one, was it? Yeah. Yeah. Does Varnish only cache to RAM, or does it also use disk? So it can use disk as well, but we don't have disk configured. So the configuration is cache to RAM only. It is the fastest one. If we have disks, as you know, the problem is the host they can't move around.
Starting point is 00:51:11 So it's no longer stateless. That has certain challenges about placement. But also, disks tend to be a little bit slower. Not by much, but a little bit slower. As an optimization, it would be worth exploring that, especially with NVME disks, which is what we would get and fly. So that is something as a future improvement,
Starting point is 00:51:30 but right now everything gets served from memory, which is exactly what happens in Fastly as well, because that's what gives you like that highest performance. So follow up to that. Our crash last night was because it needed more RAM than we allocated to Varnish. And in Varnish, there's no way to say, just use all the RAM available.
Starting point is 00:51:51 So if you tell it that, it uses more than it has available because he tries to allocate more than you have. So you need to set an upper limit. It can't know how much the machine has? Well, it knows, but it doesn't do the allocation correctly. Is that a bug? I don't know whether it's a bug. It's just not working well with the system available to the host. Maybe, maybe.
Starting point is 00:52:13 It's how many years old? 20-something. There's no way that bug is a bug, right? It has to be a decision. Right. So I gained a lot of respect for memory specifically in memory problems because they're very, very hard. I spent maybe three years optimizing memory allocations on the Rabbit MQ team for Rabbit MQ in the context of Erlang. And I've learned that there are so many subtle differences between how memory gets allocated on different runtimes. In the case of Varnish, obviously, it's C, so it's as efficient as it gets. But it's very difficult to know how much you should allocate ahead of time and what you should free, timing, like something has somewhere to give. And usually what you need to do is over-allocate. Basically, you give it more than it needs so that when you get these spikes, right, there's like enough for it to spike so that it doesn't crash things over.
Starting point is 00:53:09 Memory these days is cheap. So honestly, like skimping on that is not worth it. And there's a couple of ways we can go around it, like Discs is one of them, limiting what Varnish can do, but also seeing if the Varnish memory allocation can improve. Right. Because I know what it takes, for example, what it took in Rabbit MQ to do that. It took me years, and I was like full time on that, so it took a while, including to understand what happens behind the scenes. So you need to break it down in a way that you map and observe everything when it comes to memory allocation, and then you realize what needs to be tuned to your use case. That's the other thing, because every context tends to be different.
Starting point is 00:53:55 So the allocations, they are generic, to make them configurable to you or specific to you so that they're optimized for your use case, it takes a bit of understanding of what you actually need. I'm probably reducing it down too much, but I feel like you could just say, this machine has four gigs. Just use three, and if you need more, you've got another gig, in there. Right. So... Just don't crash. Yeah. True. Yes.
Starting point is 00:54:21 Yeah. But my question, my follow-up, I guess, would be, assume that we figure that out, that little dance. Yep. What happens when our instances become overwhelmed? Barnish might not be crashing, but maybe it's slowing down, maybe machines bog can't do stuff.
Starting point is 00:54:37 Yeah. Is there an auto scale? I mean, the answer is more RAM on the fly instances or more fly instances in other regions or in the same region. That's right. Yeah. So there is a big difference between regions by the way and this is something like this is the sort of thing that you only realize once you start using it okay so this is these are like the last 24 hours I'm not going to refresh it's a little bit outdated I did it like you can see here it was 1042 a ms it was
Starting point is 00:55:01 this morning before we started recording and what you can see here is to this one the San Jose California the SSJC is the one which shows like the highest fluctuation a lot of these like once they load up on memory like Heathrow they tend to be fairly stable but then you have a few, for example, this one, Santiago, Chile. This is like 2.3 gigs. So this one is not using all the available memory. This next one in Sydney is 1.84. And you can see that the memory is decreasing because it doesn't need. I mean, this is something I'm not sure why does this, for example. I wouldn't expect this line to go down. I would expect it to stay stable. So I'm not quite sure what's happening here. But this one, the lowest one in Johannesburg, it's using 816 megabytes
Starting point is 00:55:43 of RAM. So this shows you there is a difference between different regions and how many requests they need to serve. So what I'm thinking is we should optimize for the regions that we have, and the ones which are busy, we should give them bigger instances, maybe beefier instances, but others like Johannesburg doesn't need that much. So maybe we, and the problem is you can't use different scaling strategies for different nodes and they have multiple deployments, so it complicates things a little bit. But this is a refinement. Yeah. I think we need more listeners in Johannesburg. I mean, what's up with that? Exactly.
Starting point is 00:56:14 That's what I was thinking. Yeah, yeah, yeah. All right, there was one other question out there, I thought. It was back here. Yeah. I want to just describe my experience, and go back to see what the paths look like in the tracing thing. Okay, yeah.
Starting point is 00:56:27 Yes. I like it, okay, so. Right. So. I forgot about that. Yeah, thank you. Thank you very much. So this is, we're looking at Honeycomb,
Starting point is 00:56:36 and one of the integrations that we have in Pipedream is we are sending every single request to Honeycomb, and some requests to S3. So we see what's happening with these instances. So what that means is we are able to, for example... if I come back to the boards again, let's hope the tethering works.
Starting point is 00:57:01 It will be a little bit slow. Okay, so let's go to Pipedream requests, and I think maybe Pipedream content, but let's go requests. We can see the GETs, okay, and now we can slice them and dice them: the 404s and the 500s. So maybe let's do GETs.
Starting point is 00:57:18 Okay, so let's go to the method. And the only thing that we need to do... so this is one hour ago. So these are all the requests in the last hour, and we can say, give me the URL, which is what we've been asking for. So group by URL, and what this is going to show us is the GETs, but also the number of requests.
Starting point is 00:57:38 So what you see here is that the most popular request is the podcast feed, which got 203 requests in the last hour. And this is global. So we can do one more. We can say, let me remove that, let me remove requests, and let me do data center, so we can see which data center is hit the most. So let's do that, and we can see that Frankfurt,
Starting point is 00:58:04 71 requests went to the podcast feed; Chicago, the next one, 47; then San Jose, California. So these are the most requested URLs. If we zoom out a little bit, let's look at the last seven days, so we get a bigger perspective. I did do seven days. Okay, so you can see when it went live. And you can see that, for some reason, this one, I don't know who this person is, but their uploads URL was requested 9,500 times.
Starting point is 00:58:30 Shall we go and check who this is? I don't think you'll know who this is. Avatars. Somebody has a stalker. People. Let's see. Okay. So let's go to changelog
Starting point is 00:58:38 dot... No, let's go. Where is it? Where was it? Not this one, this one. Okay, let me copy that. Copy-pasting: hard. Uploads, and I think this will be CDN. There you go. So let's see who this is. Oh, yeah. Someone, you have a stalker, or a few. Who knows? Who is that guy? Yeah, yeah, yeah. You recognize him. Who's this other guy? Like 6QD. Who's? So that's Z4, or Z4, as you say, over here. All right, so let's see who this next person is. This is the fun side quest right now. Why not? Okay. Cable. Cable. There you go. Shall we keep going? Do you want to find who this person is? Yes, that's like the third most popular one, so we might
Starting point is 00:59:29 as well. Come on, be Adam. Yeah, Adam, let's see. All right, let's try that. Uploads... Nick. Nick. So someone is loading maybe a JS Party page way too many times. That's possible. Oh yeah. But it would show up here if it was the case. True.
Starting point is 00:59:48 But then we have this like randomness thing. Randomness thing. What's that? Is dysfunctional doing something with something? Oh, on their website? Maybe. So you know what we could do? We could do user agent.
Starting point is 00:59:58 I think someone's hotlinking us, man. Let's do user agent and we'll see where these requests are coming from. It's going to be a robot. Empty string. Oh. They don't want to be known. Okay, shall we do... Let's get to the San Jose URLs.
Starting point is 01:00:13 This is what we're trying to get, right? San Jose URLs. Okay, let's do that. So, let's do... Okay, so it's not even here in the list, if you look at that. And look, this one has an empty string, so that's really interesting. We have an issue somewhere, so this needs to be...
Starting point is 01:00:27 Or, no, hang on, this was when we were doing the testing. So maybe that's one, not so much Practical AI. All right. Okay, so let's do server data center equals... SJC. Yeah. Run query. Now let's see what we get.
Starting point is 01:00:42 Most popular in San Jose. Look how you can do all this slicing and dicing. Yeah, that's really cool, isn't it? It's so awesome. Look at that. Pocket Casts. This MP3 was requested a lot, 428 times. Is this the last episode?
Starting point is 01:00:56 Yeah, that one just went out yesterday, right? There you go. So it's just a popular episode in San Jose. That's it. Well, that's better than I thought it was going to be, which is a hacker. Yeah, no, this is good. Popular in Cali, not a problem. Look, Overcast. Pocket Casts and Overcast and AntennaPod. Yeah, they're just downloading stuff. This is good. This is good traffic. We love it. Yeah, nothing wrong with that. Cool. Cool. Is this the one more thing? This is the important one. No, that was the big reveal. All right. Everybody wants to see what you got, Gerhard. All right, let's do it. Cool. So, coming back here. Is this the one more thing? This is the important one. No, that wasn't, that's something else. Cool.
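For anyone following along without the screen, the slicing and dicing above is just filter, group, and count over per-request events. A tiny local sketch of the same idea (this is not the Honeycomb API; the field names and sample events are assumptions that mirror the columns used on stage):

```python
from collections import Counter

# Pretend each Pipedream request lands here as one event dict (fields are illustrative).
events = [
    {"method": "GET", "url": "/podcast/feed", "data_center": "fra", "user_agent": "Overcast"},
    {"method": "GET", "url": "/podcast/feed", "data_center": "ord", "user_agent": "Pocket Casts"},
    {"method": "GET", "url": "/uploads/avatars/someone.png", "data_center": "sjc", "user_agent": ""},
    # ... thousands more in reality
]

def top_urls(events, data_center=None, n=10):
    """Count GET requests per URL, optionally restricted to one data center."""
    counts = Counter(
        e["url"]
        for e in events
        if e["method"] == "GET"
        and (data_center is None or e["data_center"] == data_center)
    )
    return counts.most_common(n)

print(top_urls(events))                     # the global "group by URL"
print(top_urls(events, data_center="sjc"))  # the San Jose slice
```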
Starting point is 01:01:38 All right, I flashed something. All right, I flashed something, yes. Cool. So what we're going to do now: we are going to shift all the traffic. All the traffic. To Pipedream, right? So all the production traffic is going to go to Pipedream. Shall we do it?
Starting point is 01:01:55 Get your podcasts. Stop us. That's why we're here. All right, this is, okay, this is the moment. So all that we have to do, this is how simple it is. It's always DNS. No scripting. No automation.
Starting point is 01:02:07 It is DNS. It all comes down to that. Yes. We are going to delete these one by one. These are the A records that are pointing to... Let me get my own recording of you doing this. Go for it. Okay, Kaizen 20 in Denver.
Starting point is 01:02:21 Let's see that. This is going to be anticlimactic, because you're going to delete those and then we'll be like... They're deleted. Now, if this one goes down... so let's see if the system can handle all the load, right? Like, what can happen? So after this, the only requirement is we will... walk away, okay? We do this thing and we go for lunch or something like that. That's what's next. It's lunch, I think. All right. So, boom, we have four more to go.
Starting point is 01:02:45 Okay. Three more to go. Sorry. Yeah, after this one, three more to go. It's not deleting. It's not deleting. It's slow, right? It's my, it's my tether. Shout out to DNSimple, big fans. Thank you very much. One more. One more. Cool. And the DNS is like, one by one, so more and more requests are going to go to this one, right, which is the Pipedream. The Pipedream. The Pipedream. There you go. All right. So, delete. So we're at one out of three. Now half our traffic will be going to Pipedream.
Starting point is 01:03:12 Yep. Look at that. And now 100%. Well, no, we have one more. So we're 50-50. What's this one? That was it. That's CDN.
Starting point is 01:03:20 We have to do the same thing for CDN as well. You've got to delete the CDN ones too. There you go. All right. So this is app requests now. Is this it? This is... No. Goodbye to Fastly.
Starting point is 01:03:29 This is it. This is the farewell to Fastly? This is a triumphant moment and a memoriam at the same time. Only if it works. Only if it works, yeah. We may need to add these back.
Starting point is 01:03:42 Like a dog. We still love you, Fastly. If this crashes and burns, we'll have to go back, for sure. The good news is they have no idea what we're doing. Yeah, exactly. And this is not a live, live show, so we're not streaming this. So it's okay. That's right. It's in this room. Amongst friends.
Starting point is 01:03:54 If it doesn't work out, we edit this part out. So that's okay. He always says that; we never do, okay? We ship it all. There you go. One more to go. I think this is one, and one more to go. Now, how long is it?
Starting point is 01:04:06 We're at 60 second TTL. 60 second TTL. This should be fast, yes. All right. If you can get your phone out and hit our website, please. Let us know how it goes, but that's it.
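Once the old records are gone, that 60-second TTL is what governs how quickly resolvers pick up the change. One way to watch it land is to ask a specific resolver yourself. A sketch using the third-party dnspython package against Google's 8.8.8.8 follows; the library, the resolver choice, and the cdn hostname are my assumptions, not what was used on stage:

```python
import dns.resolver  # third-party: pip install dnspython

def current_a_records(name: str, nameserver: str = "8.8.8.8") -> list[str]:
    """Ask one specific resolver which A records it currently returns for a name."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return sorted(rr.address for rr in resolver.resolve(name, "A"))

# After the old A records are deleted and the 60-second TTL runs out, both hosts
# should collapse to the single dedicated Fly IP.
for host in ("changelog.com", "cdn.changelog.com"):
    print(host, current_a_records(host))
```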
Starting point is 01:04:14 We only have these two. Let's generate some traffic. Where's the real-time dashboard? Showing the activity. Real-time dashboard, of course. Let's do that. Let's do last 15 minutes. Let's see everything crash and burn.
Starting point is 01:04:25 Oh, my gosh. No. What's the worst that could happen? Okay. Well, the worst thing that could happen is all our Fly instances run out of memory. Well, there's no peaks and valleys. Let's go back to here. Here, let's go to the memory. There we go, every one minute. Let's maybe go to the last one hour, so let's
Starting point is 01:04:40 see. We had a spike there, but that was, this is like 12:20, so I think it's still good. I think what we're going to see, so, two things we're going to see. We're going to see, let's come back here to the Pipedream requests, we're going to see a change: these requests will start going up as more and more traffic starts hitting. Look at that. Oh my goodness. Boom, that was the spike right there. More requests getting resolved here. Status still 200, that's good. So that's all good. I'm going to our website. And 404s and 500s?
Starting point is 01:05:09 So things are looking good. Let's see what else we have. Let's load a few more boards. And the opposite is obviously in here. It's Fastly. Nice. It kind of works. Still.
Starting point is 01:05:21 It works on my machine. It's so fast. Fastly service stats. So these are the Fastly service stats. The requests went up as well. I'm not sure what happened there. Why did they go up? Because I just told everybody to go to it.
Starting point is 01:05:32 Right. I don't think we had that. 30 people here, but still. Cool. So, that's the requests. All good? Plays. Playing MP3s. Nice. Let's play an MP3. Nice. Let's do the Pipedream service. So we have a few again. The cache, we can see things going up here. So this is the last seven days. We'll need to zoom in a bit, so let's go 24 hours so that we see the spikes. There you go. That was the spike. Yeah. So, what is that spike? This one is more hits. That's good. More hits. We're getting more hits. That's good.
Starting point is 01:06:05 More hits. So it will take time, because the real question is: was it all worth it? And the answer to that is, well, did we fix that cache miss ratio? Cache hit ratio, right?
Starting point is 01:06:17 Let's check it out. That's the actual answer to the question. What was it, 17%? Are we going back to that? So let's go back to that. Last 24 hours, this is the home page, right? That's the one that we're focusing on.
Starting point is 01:06:26 Sure. So we're looking at the homepage, last one day. We had 3,700 hits. Yes. And we had 33 misses. I think that's better. That's better. It is better.
Starting point is 01:06:37 It is better. It is better. We did it. No way. No way. Let's see: 3,715 hits plus 33 misses. That's a 99.1% hit rate.
Starting point is 01:06:48 That's a lot of nines. Yeah. Two, to be exact. That's two nines. That's way more nines than we had. That's two nines. There are some nines in there.
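For the record, the arithmetic behind those two nines, using the numbers straight off the dashboard:

```python
hits, misses = 3715, 33
hit_rate = hits / (hits + misses)
print(f"{hit_rate:.1%}")  # 99.1%, up from roughly 17% for the home page before the cutover
```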
Starting point is 01:06:58 It was like 17%, 17%. So this is much better. All right. So obviously this will take a while, right, for all the traffic to start shifting over. It is DNS, right? It's cached. But things, look at that, things are happening. So theoretically, 60 seconds later, those DNS records should expire. Exactly. And then it requires a new hit, which goes to the new route. So, dig changelog.com... and changelog.com right now, for me, returns a single IP address. That's it. Bam. And this is still in
Starting point is 01:07:28 Ashburn? No, this is, this is the Pipedream IP address. Okay, this is it. So this is one. And if I do changelog.com, this is the DNS that updates the quickest. That is the Google DNS. That's very fast. So Google DNS knows we're up. There's also... I use DNS Checker. Let's see if it's still a thing. It's been a while since I've used DNS Checker.
Starting point is 01:07:50 A service that checks, tries to resolve the IP address from a couple of locations, more than a couple. Let's see. DNS Checker. Let's go changelog.com. Try to see that one, and we'll do the CDN as well. changelog.com. This is, I think, the important one.
Starting point is 01:08:08 So all the DNS is San Francisco. It is. I'm going to make this a little bit bigger for everyone to see. Our IP address. We can see all the locations. So all of it is the new IPs. We don't see any of the old ones. The world knows about what we did.
Starting point is 01:08:24 Yeah, I think so. They took notice. I think it's been good. I think. Yes. Question. That is a fly.io IP address, yeah. That is a fly.io IP address.
Starting point is 01:08:39 Let me show you that. So we have just IPs. We can see what IPs the CDN is using. And you will see that it's this IP address, which is a dedicated one. That's the IP that we're using. That's it. Should they clap now?
Starting point is 01:08:56 Only if you want. Only if it was good. Thank you. Thank you. Mind-blowing. We did a thing. We did a thing. Thank you, thank you, thank you, because you were a part of this, and we did it with you and for you, and this was so good. Thank you very much. Thank you. So, a couple of special things. We've already thanked our, yeah, Pipely folks. Thanks to a couple of Denver liaisons: Dan Moore. Is Dan here? Hey Dan, how are you? Dan, thank you, Dan. And Kendall Miller.
Starting point is 01:09:28 Did Kendall leave, or is he still sticking around? He left. He was here earlier. Thank you to you two for helping us find this theater, for helping us connect with Nora again after all those years. We don't live here, and so I had no idea what I was doing or who I was talking to, so it's always awesome to have
Starting point is 01:09:46 locals and friends willing to help out and make it awesome. That's our show. One person didn't get thanked. Yes. Jason. Oh, yeah, Jason. Where is Jason? Come up here, Jason. There's Jason.
Starting point is 01:10:00 Jason, come here. Jason's our editor. He's behind the scenes, but he's very much part of this team. He doesn't get seen. He gets mentioned a lot, but he's critical, critical behind the scenes here at Changelog. Thank you, Jason. Thank you, Jason. Thank you.
Starting point is 01:10:17 All right. Anything else, Gerhard? Thank you all. I really appreciate it. Thank you. Really, really appreciate it. Thank you. There you have it, our first ever Kaizen Live, but I'm pretty sure it will not be the last.
Starting point is 01:10:34 You know an idea has legs when you're already brainstorming version 2 before version 1 is even out there, and we certainly were. Stay tuned for more. This particular episode is better on YouTube, and we have more videos from the Oriental Theater coming soon, including BMC's live beats show. Subscribe there for clips, shorts, and more goodies at YouTube.com slash changelog. And of course, join our totally cool, totally free hacker community Zulip at changelog.com slash community.
Starting point is 01:11:05 Have a great weekend. Share Changelog with a friend or three who might dig it, and let's talk again real soon. Game on.
