Coding Blocks - 3factor app – Reliable Eventing

Episode Date: September 30, 2019

We discuss the second factor of Hasura's 3factor app, Reliable Eventing, as Allen says he still _surfs_ the Internet (but really, does he?), it's never too late for pizza according to Joe, and Michael... wants to un-hear things.

Transcript
Starting point is 00:00:00 You're listening to Coding Blocks, episode 116. Subscribe to us and leave us a review on iTunes, Spotify, Stitcher, and more using your favorite podcast app. And visit us at codingblocks.net, especially if you've never been to the website because it's actually really good and has a lot of content there. And you can find show notes, example discussion, and a whole lot more. Send your feedback, questions, and rants to comments at codingblocks.net, follow us on Twitter at codingblocks, or head to www.codingblocks.net and find all our social links there at the top of the page.
Starting point is 00:00:31 With that, I'm Alan Underwood. I'm Joe Zach. And I'm Michael Outlaw. This episode is sponsored by Datadog, the monitoring platform for cloud-scale infrastructure and applications, and the O'Reilly Velocity Conference. Get expert insight on building and maintaining your cloud-native systems and Educative.io.
Starting point is 00:00:56 Level up your coding skills quickly and efficiently, whether you're just starting, preparing for an interview, or just looking to grow your skill set. All right, and today we're talking about the 3factor app, a modern, high-velocity, scalable architectural pattern for building applications. If you remember, last episode we talked about Factor 1, which is real-time GraphQL with a heavy emphasis on low-latency subscriptions. In this episode, we're talking about reliable eventing.
Starting point is 00:01:24 But not before we give thanks. So, you know, we always like to thank the people that took the time to leave us a review. So starting with iTunes, we have Not The Best Coder, Guacamole, and Fish Slider. I liked Not The Best Coder best. Yeah, I did enjoy that. And on Stitcher, we have Spotty Dog. So thank you to all who took the time to, you know, sit down and actually write us. It truly does leave a smile on our faces, and we really enjoy that, so thank you. Speaking of things we enjoy, we had a great time at Atlanta Code Camp. Thanks to everybody who came out. We got to see some people, you know, we'd talked to for, like, years and just never really
Starting point is 00:02:08 got to meet, so that was really cool. What up, Elixman? And everyone else that we met there, that was really good. We had a booth, and so we're definitely going to be doing stuff like that in the future, so keep an eye out in your city. Yeah, also gotta mention we met Beej from the Complete Developer Podcast. We're all big fans of the show, and so now I've got a water bottle to prove it. Yeah, and we even have a picture out on Twitter on that one too, right? Yep, had to get a selfie for that one. Yep, tons of hats were given away, gave away some AirPods. It was a fun event, had a really good time.
Starting point is 00:02:40 Don't forget the stickers, the buttons. Stickers, buttons, it was nuts, man. Like Outlaw was manning the booth all day. And yeah, it was a really, really good time. And I actually got to attend Joe's talk on Jamstack that he's done 25 times now. And that was the first time I'd actually heard it. It was really good. Yeah, I was sad.
Starting point is 00:03:06 I was telling Joe afterwards that it dawned on me that I didn't get to see it, and he was saying that that was going to be the last time he did it. So I was like, oh man, not only did I not see it, I'll never get to see it now, because every time we were at a place where he did it, I was always, like, working the booth. So I never did see it. Yeah, you know, the sad thing was, I totally missed the punchline. I had a little code demonstration, and it was all leading up to a point. And when I first opened up the code editor on the camera, I was like, oh, no, it's opened up to the punchline. So I was quickly scrambling to close it all down so nobody had it ruined. And then I realized I was running late.
Starting point is 00:03:38 And so I kind of skedaddled. I sped up. And I realized I ended up missing the whole point of the demo. I missed the punchline. So, you know, it went out with a whimper. Sorry, Atlanta, for the whimper. It was good, though. I enjoyed it. Oh, thank you. All right, now back to the 3factor app. We got a lot of really great feedback; a lot of people are really excited about GraphQL. I was actually surprised. I didn't think it was as popular, or that people were as interested, as it seems like they were after releasing
Starting point is 00:04:07 the episode. That's kind of cool. I mean, if you've seen GraphQL in action, it's hard not to be excited about it. I mean, especially for people who have written standard APIs over time. I still feel like most of the excitement is coming from the front-end people who get to
Starting point is 00:04:24 consume it. And the back-end people I talked to still seem a little worried about it, you know, for all the reasons we gave. Still, it's interesting to see just how much it's growing on the front end. Cool. So in this one, we're going to be talking about, as we just mentioned a second ago, reliable eventing. And this one has all kinds of interesting things that we're going to talk about, so I guess we should go ahead and dive in here. So the first one, which feels sort of wrong to most people that have done anything in databases, is: don't allow for mutable state. Get rid of it. Yeah, yeah. Get rid of in-memory state manipulation in APIs.
Starting point is 00:05:07 Well, that sounds like the dream, right? Like if you don't have to deal with state, then at that point, everything could just be, oh, man, help me out here. What's the word I'm looking for? Shared nothing architectures? No, no, no, no, no, no, no. Functional. Everything could just be functional. All your code could just be functional.
Starting point is 00:05:26 It does kind of flow that way. Yeah, there are a lot of lessons from functional programming that we're seeing kind of repeated in this architecture. Can I just say for one moment, we're talking about a 3factor app. It's only got three factors. The first one was GraphQL. The second one is Reliable Eventing. Like, these
Starting point is 00:05:41 are pretty, you know, I would have thought of these as being some pretty novel things that they're putting together here. But I find it really interesting to take these particular slices and put them together in order to make an application. So I'm already, like, you know, first with GraphQL, I was like, okay, interesting. And now you put in reliable eventing, and I'm like,
Starting point is 00:05:55 where are we going? Yeah. It's somewhere neat. It's
Starting point is 00:06:02 going to get even more interesting as we dive into this further. So the next thing they say is, right: don't do the state in memory, and persist everything in atomic events. Basically meaning you just write the bit that you care about and you move on, right? Everything has to be just a small slice of what's going on. Its own piece.
Starting point is 00:06:30 Yep. Which sounds a lot like functional programming. And there are some advantages, just like we've seen with functional programming languages. And we've talked about this too, with React and its kind of functional, one-way directional flow.
Starting point is 00:06:42 And the advantages here: basically, those messages are replayable, and they're observable. Lots of different consumers can watch that. We talked a little bit about that with GraphQL subscriptions. And it's recoverable, in that you can't rewind or update things, but you can go forward with another message. Sorry, I didn't mean to over-talk. Everything about this section, though, made me think of queuing and/or transaction logs. It wasn't even necessarily about writing the one little thing, like you said, and then moving on.
Starting point is 00:07:15 It was more like, hey, I'm going to write something to a queue, and then I'm just going to trust that it gets done, that it gets dealt with. And that way I don't have to care about it, right? Like, I don't have to block and wait. I can just move on to the next thing. And then, should I crash once it's on the queue, who cares? It's on the queue. Something else can pick it up and run with it. Yeah.
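A minimal sketch of that write-it-and-move-on idea, using Python's standard-library queue (in a real system a durable broker would stand in for it; all names here are illustrative):

```python
import queue
import threading

# The producer enqueues an event and does not block waiting for it
# to be handled. Note: queue.Queue is in-memory only; the "crash and
# something else picks it up" property needs a durable broker.
events = queue.Queue()
handled = []

def worker():
    # A separate consumer drains the queue independently of the producer.
    while True:
        event = events.get()
        if event is None:        # sentinel to shut the worker down
            break
        handled.append(event)    # stand-in for real processing
        events.task_done()

t = threading.Thread(target=worker)
t.start()

events.put({"type": "order_placed", "order_id": 42})  # fire it and move on
events.put(None)
t.join()
print(handled)  # [{'type': 'order_placed', 'order_id': 42}]
```

The producer's only responsibility is the `put`; everything after that is someone else's problem, which is exactly the point being made here.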
Starting point is 00:07:37 So what you just said goes back to what Joe just said with the whole, you don't have to care about it, because it's replayable and recoverable. Because you're just writing out the stream of events that happened to make, like, an order, right? The whole thing is you can rewind it and run it again, and in theory, you should always get the same thing back out, assuming that you didn't change the business logic of whatever's dealing with that thing. It's kind of like what Git does, your favorite thing in the world, right? Like, Git can totally replay events. If you do a rebase on something or whatever, it basically takes whatever branch you're trying to rebase against
Starting point is 00:08:17 and it just replays your commit log against those. So it's very similar to that. It's just that everything is a small slice. And do you ever do this when you're reading something or learning something new? Like, you'll kind of base it on something you already have some knowledge of. Regardless of how deep the knowledge is, you have some idea of how something else works, and as you're learning this new thing or reading this new thing, you're like, oh, well, that's kind of like this, or, I could see how that will work, right?
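That rewind-and-rerun idea can be sketched as a fold over the event log: state is never stored, it's recomputed from the events, and replaying gives the same answer every time (a bank-style example like the one that comes up later; everything here is illustrative):

```python
def replay(events):
    """Fold an append-only event log into current state.

    Replaying the same log always produces the same balance,
    as long as the business logic in this function is unchanged.
    """
    balance = 0
    for kind, amount in events:
        if kind == "deposit":
            balance += amount
        elif kind == "debit":
            balance -= amount
    return balance

log = [("deposit", 100), ("debit", 30), ("deposit", 5)]
assert replay(log) == 75
assert replay(log) == replay(log)  # deterministic: rewind and rerun freely
```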
Starting point is 00:08:47 So as I was going through this section, I was kind of thinking about things like, you know, queuing systems. Like I mentioned, Kafka might come up. Or do you remember the Amazon Simple Workflow Service? Oh, I do. Yeah, so that type of thing was coming through my mind, where it's like, oh, okay, I could see this, like, you know, going through your order thing, right? Like, okay, I'm just going to put a job out to process the payment, right? And here's the information, but I'm not going to block and wait on it. I'm going to trust that something's going to pick that up, and everything that needs to hear that or listen to that, you know, is going to get that event and handle it. And then I'm going to listen to some other queue, you know, again,
Starting point is 00:09:29 or, you know, some kind of pub/sub kind of environment, where I'm going to wait for something to say that the order has been processed, or, you know, it's been charged or whatever, so I can move on to the next step. I'm going to listen for that thing and wait there, right? And that's a whole other architecture we'll talk about here as we get towards the end of this episode. But I think that's what all of this is kind of building towards,
Starting point is 00:09:49 right? Like, it is, yeah. And this has very strong ties to, like, cloudy architectures, things we talked about last episode, like streaming architectures. I also want to mention pure functions: basically, the idea that if you've got no state, then based on whatever input you pass in, you're going to get the same output, which makes it really testable, which ties in nicely with unit testing. It's basically just, I think, an evolution of the best coding principles. We talked about this with clean architecture, too. If you apply the SOLID principles, if you apply the same things to both small code and big architectural systems, the same exact lessons in the same exact ways, you get the same exact benefits. Yep. Wasn't there a math term, though, for that type of function, where it was, like, just pure functions, when there are no side
Starting point is 00:10:36 effects? Like, same input... Maybe you're not talking about idempotent? No, but that's also involved here, though. And messages can be replayed without breaking anything. Right. Yeah. Yeah. Anyways.
Starting point is 00:10:52 So, getting into this, it says the event system should have the following properties. And we mentioned atomic; I thought it was worth calling out what atomic actually is. We have a link to the Wikipedia article here, but the entire operation must succeed and be isolated from other operations, right? So if you're writing out part of your order, right? Somebody places an order on your site. Maybe you write the order headers, one piece,
Starting point is 00:11:17 and then each one of the order details is another piece, like another isolated atomic event, right? Like, you know, line item one is going to be an atomic write, line item two, three, four, whatever, right? But basically they can't impact each other. They all happen separately, and they have to commit completely to do it. Yeah, this is like the A in ACID. Yes. Yep, when we're talking about databases, right? ACID transactions. Also, it's reliable, meaning that events should be delivered to subscribers at least once. You never have to worry about messages not getting there, but you might get duplicates. Oh, hey, you know what, I want to back up on one thing I just said. Like, that Wikipedia article actually says that atomic commits fulfill two of the letters of ACID.
Starting point is 00:12:09 Can you guess the second? Is that atomic and commit? Wait, atomic is... asynchronous, trans- Isolated? Yeah, I can't remember all of the- Well, I'll give you a hint. It's A, C, I, and D.
Starting point is 00:12:25 So, it's atomic, C, I, and D. So it's atomic, commits, isolated, and D. The other one was consistency. Consistency. It fulfills the A and the C, according to that Wikipedia article. Atomic commits in database systems fulfill two of the key properties of ACID. Yeah, apparently we need to do an episode on ACID versus BASE, NoSQL versus SQL. I think that'd be a good one.
Starting point is 00:12:50 Yeah, just to finish that, it says consistency is only achieved if each change in the atomic commit is consistent. Okay, and consistency, we may use something like, you know, if you wrote it, you can read it right back out. Two readers can read it, always get the same values. Yep. I'm sure there's a better mathematical definition there. That's kind of how I think about it.
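Since delivery is at least once, consumers are typically written to be idempotent, the math-ish term that came up a minute ago: processing the same event twice must be harmless. A rough sketch (names illustrative), deduplicating on a unique event id:

```python
processed_ids = set()  # in a real system this would be durable storage

def handle(event: dict) -> bool:
    """At-least-once delivery may hand us the same event twice.

    Dedupe on a unique event id so a redelivery is a no-op.
    Returns True if processed, False if skipped as a duplicate.
    """
    if event["id"] in processed_ids:
        return False
    # ...real side effects (charge the card, update a read model) go here...
    processed_ids.add(event["id"])
    return True

assert handle({"id": "evt-1", "kind": "order_placed"}) is True
assert handle({"id": "evt-1", "kind": "order_placed"}) is False  # duplicate ignored
```

With that in place, the broker is free to redeliver on any uncertainty, which is what makes the "you never have to worry about messages not getting there" guarantee cheap to provide.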
Starting point is 00:13:38 So this is where I thought – actually, it looks like somebody else put this in here. This ties into event sourcing, CQRS, one-way event flow in React, pure functions, transaction logs, queuing, streaming architectures, and the Lambda and Kappa architectures, which we kind of got confused on, I think, last episode, on the architecture versus the actual implementation of AWS Lambdas.
Starting point is 00:13:38 stuff is kind of related to other things that are going on and other things that are hot in kind of programming right now. So these are all kind of various topics. Each one is easily their own show topic. But it's all kind of the same principle. We've got these atomic events that are replayable, observable, and recoverable. And I was just kind of drawing the lines there to show how they're kind of connected to those other principles and those other ways of doing things.
Starting point is 00:14:01 Which, by the way, backing up again, though, to what you had said, you had mentioned that this reminded you of the queuing and all that kind of stuff. That's the reason why the GraphQL subscription model works, right? This is exactly what you said. As things get written in there, then something else can subscribe to those changes, whether it's like the order was placed or whatever, like in the simple workflow. That's where it ties into that first part that we were talking about last episode. Yeah, if you remember, we talked a little bit about last episode too. We kind of talked about three factors.
Starting point is 00:14:31 The first was GraphQL. The second was the persistence layer, which is what we're talking about now with reliable eventing being the center for persistence. And then there were the APIs that operated on the other side of that. And so the idea is that GraphQL puts data in the queues. Those APIs put data into the queues. They both read out of the queues. You know, you're done. That's it.
Starting point is 00:14:52 So that's the main kind of bus for communication there. And so now we're kind of focusing on that middle part there, which is like the reliable eventing data structure or storage. Well, I guess like one of the things that, and I guess I'll look to Alan with this question though, is like really does, is Kafka like checking all the boxes for this section? A whole lot. Or am I misusing Kafka to think of it in these regards? No, a whole lot. A whole lot of these boxes.
Starting point is 00:15:23 No, a whole lot. A whole lot of these boxes. Actually, I'd say a lot of the event buses out there, or not even event buses, but queuing systems out there, probably check the boxes. The thing that Kafka has over something like RabbitMQ or some of these other ones is that it's a persistent queue, right? So this whole replayability or recoverability, that's why Kafka specifically comes to mind. Right. That's where it can be really nice, because in theory, you could set up a Kafka topic to never expire, right? So if you're thinking about, like, a bank system, where you start out with a zero balance and then you have a series of deposits and debits and whatever, maybe that's something
Starting point is 00:16:02 you never want to expire, right? Like, you always want to be able to replay that stuff. You could do it there. Would you probably put it in a database or some sort of backup storage at some place? Yeah, maybe, but you could. I mean, that's where, like, the difference between other queuing systems, like, you know, Kafka, or, not Kafka, RabbitMQ, or, going way back in the day, to, like, MQSeries, you know, or those type of pub/sub environments. You know, maybe it's because it's been a minute since I've really done anything in depth with them, so I'm trying to think, like, oh, I don't think that those messages stay around as long. But like you said, with Kafka, you can actually treat it like a
Starting point is 00:16:45 true transaction log, right? And then, going across the tenets here of what the 3factor app is looking for in terms of reliable eventing, the replayability and the recoverability really made me think, okay, fine: other queuing systems check that observable box, but they don't check the other two the same way that Kafka does. It's one of the reasons why Kafka is so incredibly popular, because of that persistence ability. Like, by default, the topics all last seven days, right? And then they start rolling off. But that, plus the insane write speeds, the I/O speeds with Kafka also smoke most of the other queuing system brokers
Starting point is 00:17:36 out there. So yeah, it's one of the reasons why it's so incredibly popular: you can use it basically in place of any queuing system out there and also use it as reliable storage. Okay, so I think then the three of us are yes, yes, yes on Kafka, or Kafka, depending on where you're from in the U.S., hitting the checkbox here. And for Hasura, who created this, the 3factor app, they were very specific with Factor 1 being GraphQL. What other technology might have checked their boxes then for Factor 2
Starting point is 00:18:19 that wasn't Kafka since they didn't specifically call it out? So we'll get into that a little bit further or maybe a little bit deeper. Am I jumping ahead of what you want? Okay. It's not further than what I want because they don't actually call it out. It's, I mean, if we want to jump towards the very end. No, we can wait. All right.
Starting point is 00:18:37 We'll wait. We'll get back to that one, because I think that one's worth talking about a little bit deeper when we do get there. So the next thing that they have up is this whole traditional way of, like, saving data versus the 3factor way. I kind of like how they did this on every one of their factors, right? Like, this is what it looked like in your traditional way, and this is what it looks like in the 3factor way. Although sometimes the traditional was a little bit questionable, because it was like, hey, wait, that's what we're doing now. Right, right. It wasn't necessarily new school. It was somewhat newer school than what most people are doing.
Starting point is 00:19:14 So on this one, like the way that they went about talking about like API calls in the traditional way of thinking about things, you request, you submit a request. Data is loaded from the various storage areas. So like maybe you do an order, maybe you're going to pull some data from the database to mix in, apply your business logic. And then finally, you're going to commit that order and maybe all the other stuff to the database with that. In the three factor approach, the request is issued and you write each one of those pieces individually, right? So maybe the order header, the line items like we talked about earlier. You might have 20 records, but they all fired off and got committed. Yeah, and I was kind of trying to think about this. Remember we talked about the pizza?
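A rough sketch of the two write paths being contrasted here (all names illustrative): the traditional path computes the final state and commits it once, while the 3factor path appends each piece as its own atomic event.

```python
# Traditional-style: load, apply business logic, commit one final state.
def traditional_save(db, order):
    total = sum(item["cents"] for item in order["items"])
    db["orders"][order["id"]] = {"total": total}  # one merged commit

# 3factor-style: every piece is its own atomic, append-only event.
def eventful_save(log, order):
    log.append({"kind": "order_placed", "order_id": order["id"]})
    for item in order["items"]:
        log.append({"kind": "item_added", "order_id": order["id"],
                    "cents": item["cents"]})

order = {"id": 1, "items": [{"cents": 1000}, {"cents": 500}]}

db = {"orders": {}}
traditional_save(db, order)

log = []
eventful_save(log, order)
assert len(log) == 3                     # one event per piece, all fired off
assert db["orders"][1]["total"] == 1500  # versus one final state row
```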
Starting point is 00:19:59 Yeah, and I was kind of trying to think about this. Remember we talked about the pizza? Like, if you order from Domino's or whatever, it kind of tells you, you know, your pizza is in the oven, right? So one way of doing that sort of thing in a transactional system would be, you basically have a record that has a state, and as the state changes, you update it. But the problem with that is that it doesn't tell you that all those messages have happened. So, you know, you can imagine, like, a website or something could go and fetch that state and see that it's changed, now it's at this stage. But if it missed any steps in between, then it has to know, basically, that it can kind of skip to the right spot. And so there's no sort of, like,
Starting point is 00:20:29 real steps as to how it got there. But if you do have those steps along the way, then you can show things in a different way. I don't know, I think it's really cool. You know what's funny is, thinking about this, when I was reading it, I was trying to figure out: how do you put
Starting point is 00:20:45 this in perspective of somebody that's used to dealing with databases, how they'll think about this. If you've ever had a table where you're like, you know what, we need an audit trail for this table, that's basically what eventing is, right? So your pizza ordering thing is a prime example, right? Somebody places an order; that record goes into a table and says order was placed, right? And the traditional database way things would work is, you might say, hey, the pizza was cooked, and you're going to add some sort of, like, you know, pizza cook time here, right? And user Joe is the one who updated it and said it was cooked. Now, that's usually only two fields in a record.
Starting point is 00:21:25 So you kind of lose any other stuff that happened along the way there. And then most RDBMS solutions to that would be, well, let's just create an audit table. You know, Joe updated the pizza cook time here and it's going to add an entry into this audit table. Or the new way to do that would just be through temporal tables. Or temporal tables.
Starting point is 00:21:46 Or temporal tables. Right, where it happens for you. Which is basically still sort of inserting a record in that same table, but timestamping it and slicing it and all that kind of stuff, right? So then the next one is, okay, now I'm going to give it to the delivery person to take out. You know, that's another couple of fields that you're updating there, et cetera. So instead of all that, the eventing world is basically just having all those audit trail records the entire way, right? Order placed, pizza ready, handed to delivery person, delivered, et cetera. Right. So now you have this whole chain of records that tell you
Starting point is 00:22:23 the state at every single, it's not even the state. It tells you what changed, what just happened, what was the event that just occurred. Would it be like the courtroom stenographer, would you say? Very similar. Yeah, I would say so. Just constantly taking notes on what's happening? Mm-hmm. They'd have to take notes really fast.
Starting point is 00:22:41 If you only ever have the state of it, then it's kind of a pain to figure out different kind of data points about that. If we've got a record of each individual event, I can look back and say, how long did it take to cook my pizza? This time minus that time. And if you don't have something like that in temporal tables, if you only keep the latest snapshot, then you're losing information.
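That "this time minus that time" question is trivial when the whole event trail is kept; a sketch with made-up timestamps:

```python
from datetime import datetime

# An illustrative pizza-order event trail; the timestamps are invented.
events = [
    ("order_placed",     datetime(2019, 9, 30, 18, 0)),
    ("pizza_cooked",     datetime(2019, 9, 30, 18, 22)),
    ("out_for_delivery", datetime(2019, 9, 30, 18, 30)),
    ("delivered",        datetime(2019, 9, 30, 18, 55)),
]

def elapsed_minutes(trail, start_kind, end_kind):
    """Answer 'how long between these two events?' straight from the trail.

    If only the latest-state snapshot were kept, this information
    would have been lost.
    """
    times = dict(trail)
    return (times[end_kind] - times[start_kind]).total_seconds() / 60

assert elapsed_minutes(events, "order_placed", "pizza_cooked") == 22.0
assert elapsed_minutes(events, "order_placed", "delivered") == 55.0
```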
Starting point is 00:23:01 It's pretty interesting. It's a different take on what most applications have done over time. So, okay. Hmm. Then do you have reliable eventing
Starting point is 00:23:14 if you have a temporal table? Does it check the box? Maybe not the replayable part. Here's the only problem with temporal tables: that's usually the latest state of something. It's not the event that occurred, right? Well, it's not the event, right? But it is the state. Yeah, you're right there.
Starting point is 00:23:39 So you're always just writing the latest state. But it's not just the latest state, though. That's the one thing I want to clarify there, though, right? In the temporal table, you have the main table that would have what the current state of that particular record is. And then your temporal table, the history that goes along with that, that might have all the versions that that thing has. But it's the previous state. And that's where it's a little bit different, right? Previous states.
Starting point is 00:24:02 So let's talk about an order, because that's real easy to see. So let's say that you order a hundred dollars' worth of something. The record that you're going to have in that table, the temporal and the regular table, because it's really the same thing, is going to be a hundred dollars, right? Let's say that you then decide that you're going to return a $10 item. So the next record you're going to see in that temporal table is $90, right? The problem is, in eventing, that's not what you'd see. In eventing, you'd see all the items that added up to that order, right? So let's say it was 10 items that were 10 bucks each, right? I see where you're going with this. So then when you go to return that one, you just see a negative 10. So you're not going to see the state of the order.
Starting point is 00:24:46 You're going to see the transactions that happened on the order. That's the big difference: you're not maintaining, you're not keeping track of the state. You're keeping track of the deltas, more or less. I think that's the easiest way to think about it. Yep. Yep. Because then, to counter what I said a moment ago, it's not that it's replayable, because you're not able to replay the event. You're just able to see that, hey, some event happened, and it occurred and created this change to the final state at that time.
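The $100 order example can be sketched both ways; note how the snapshot view only lets you infer a net change, while the event view is itself the record of what happened (numbers from the discussion; the code is illustrative):

```python
# Temporal-table view: only successive states of the order total survive.
snapshots = [100, 90]

# Event-sourced view: the deltas themselves are the record.
deltas = [10] * 10 + [-10]  # ten $10 items added, then one returned

# From the deltas you can always recompute the state...
assert sum(deltas) == 90

# ...but from the snapshots you can only infer a net change of -10;
# you would have to calculate what the actual event was, and you still
# can't recover which item it involved.
inferred = [after - before for before, after in zip(snapshots, snapshots[1:])]
assert inferred == [-10]
```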
Starting point is 00:25:21 You'd have to calculate what the actual event was. Yeah, there you go. And there's the difference. Yep. So that is truly the big difference between the typical relational database way of keeping track of records versus this event sourcing type of way. So it's really like, you know,
Starting point is 00:25:34 databases just need to expose the transaction logs. That's really what they are. That's kind of what this is. Yeah, totally. If they just made that, like, something that you could interact with to replay the stuff in your –
Starting point is 00:25:51 Funny you mentioned that. Here we go. There are companies that are specializing in building products around just that. Yep. So the next thing on this traditional-versus-3factor comparison is the how. So in the traditional way, you avoid using async features, because it's difficult to roll those back when there's a problem. So think about that. That's a perfect example with this whole order system, right? You're going to submit an order, which is going to have an address and, you know, some other stuff on it, so you know where it needs to be delivered to.
Starting point is 00:26:29 And then along with that, you're going to submit the order details, right? If any part of that fails, you need to be able to back it all out and tell the user, hey, something failed here. So you're not going to do that in a bunch of async calls, because you don't know how to tie those back together and say, hey, I've got to kill all these things back off and let the user know there was a problem. In the 3factor app way, you're going to async everything. Everything's just going to be a separate call, and it's going to write them all to the event log, and there's no need to roll back. Because if there's a problem, all you'll do is look at it and be like, hey, there was a problem with this event. Let's add another event to fix that.
Starting point is 00:27:10 Right. Or maybe you don't even have to do that. Like, I'm thinking of the Amazon Simple Workflow scenario, where if you crashed for some reason and didn't finish it, then that item is still in the queue. It is. So I guess, here's what I mean when I say you'll add something to counteract it. We were talking about Kafka earlier. One of the interesting things that's really hard for people to grasp when we talk about Kafka is you don't delete a record in Kafka, right? Like, if you write an item was $10 on your order into Kafka, you don't go delete that out of there. I'm sure there are some ways to force it, but Kafka is a transaction log. I don't think there's a way to get rid of it.
Starting point is 00:27:53 Oh, there may not be. I mean, you could write a filter, and you could move everything from one topic into another, removing that one. And then you can delete the original topic, wait a little while for its stuff to get flushed out of whatever metadata that hangs on to it. Because it does like to hang on to stuff. And then you can put it back into a topic that's newly created with the same name. But you're moving stuff at that point. Yeah. So what I'm hearing is never use Kafka for your sensitive, personally identifiable information that isn't in an encrypted form.
Starting point is 00:28:22 Well, lock that down. For sure. Yeah, for sure. Don't make mistakes. Here's the important part. If anybody ever took an accounting course, there's always this thing where you're trying to balance the balance sheet, right?
Starting point is 00:28:35 Like, you put an entry over here, you have to put the same entry over here. If you need to counteract something that was put into a commit log, you have to put in the inverse of that. So, you know, hey, I didn't really want that to be a $10 item on there? Then you're going to add a removing-$10-item record right after that,
Starting point is 00:28:56 because you can only replay it. You can't, you can't delete, you can't update. So you cannot delete a record, nor can you update a record in Kafka. And it's the same way with these event streams. You cannot delete it.
Starting point is 00:29:11 You cannot update it if you are sticking to and truly adhering to this event sourcing thing. So I understood where they were going with the traditional, where they were saying like, hey, you're going to avoid, and that's a key word here. You're going to avoid using async because of the difficulty. So I was kind of imagining like, okay, yeah, I can see where you're going with that, I think, because you're probably going to create one transaction, you're going to do everything inside that one transaction, and then commit it when you're done. But then it got me thinking, I'm like, wait a minute, couldn't I create that transaction? Technically, okay, if we're playing devil's advocate here, couldn't I create that
Starting point is 00:29:45 transaction and then call a bunch of async functions where I pass that transaction in to those functions? You probably could. And I could, you know, roll that transaction back if I had to, if any one of those were to go awry. But there's so much additional work you'd have to do to make sure that, hey, what if that transaction never finished because it didn't get everything? Like, there's so many things you'd have to work around there, right? Like, why didn't that transaction close? Why did this thing stay open? Like, it would be a pain.
Starting point is 00:30:16 I get their point. You know, again, the key word there is that you're going to avoid it. It's not that you can't do it. But I was like, is it really that difficult? Or maybe I'm just oversimplifying it in my mind. I think that you'd run into so many edge cases where you'd be like, screw it, this is not happening. But it's effectively a distributed transaction,
Starting point is 00:30:36 which is famously hard. The two ways I've heard of dealing with it, it's basically like a locking, where you kind of like two-phase kind of locking, almost like you do like locking, where you kind of two-phase kind of locking, almost like you do in code where you kind of lock the data and say, all right, I'm going to change this stuff. Everybody back off. Okay, everybody's off.
Starting point is 00:30:52 Okay, now I'm changing it. That works okay for a small number of systems. But if you're talking about multiple systems, then there's what I've seen called the saga pattern, which is as miserable as it sounds, where you kind of create create like a like a checklist of things that need to happen and then you go through like do this little unit of work and if something fails then you need to have kind of like basically an undo kind of action that you
Starting point is 00:31:15 call on that which will either undo like in the case of like a you know roll back a database or offset with something like a kafka where you put in like another record to fix it. And it sounds awful to me. Well, this is why I asked though, because like I was trying to think back about it and I'm like, you know, I don't know if I've ever tried this, but I was like, oh, what would happen to that transaction object if multiple methods are running concurrently and be like, okay, yeah, I'm going to insert this record here and I'm going to insert that record there. And I'm going to modify this one and delete that one. You know, like a bunch of those methods are happening. Like what, what does the state of that transaction object look like? Like,
Starting point is 00:31:52 can you do that? Or maybe you can't. You can, we've actually seen some distributed transactions and the problem is they're actually way harder than what you'd think they would be. That's why I think like, maybe I'm just oversimplifying it. But here's one thing to point out, though. This whole avoid using async doesn't mean that from your web application, you're not going to make an async call to the server. You are going to do that, but you're going to make one. That's really what they're talking about.
Starting point is 00:32:19 You're not going to try and send everything off in a blast of messages. You're going to send everything in one bundled message that would they get handled on the server yeah when i think of async in these terms i'm thinking about like fire and forget right like one way to kind of send a message to just something like q is to say send message move on another way to send is like send message and confirm receipt and even there it gets complicated because you can say like did the leader receive it did the leader and at least one of its replicants get it? Did everybody get it? When do you stop? And it's a pain in the butt, man. That's actually a really good point is we should tie this back into factor one and even talk about the UI, right? So the big difference between this
Starting point is 00:33:01 traditional with the avoid using async, just what Joe said a second ago, and this async all the way is, if you think about an order application in most ordering systems, you go and you place an order. You're going to wait for the little spinning thing on the screen to say, hey, placing order, don't push any buttons, don't hit refresh, don't do whatever, right? If we're talking about the web. And then after that's done, it's going to take you to another screen and says, hey, your order has been placed. All is good. Right. In this async way, you could totally place an order and you could just get a little confirmation saying, hey, we sent your stuff. Right. Go do what you want to do. And then because the subscriptions are being handled through GraphQL, anytime that it's done behind the scenes, it's going to send a message back to your application. Your application will be like, hey, let me pop up a toast or something and let you know, hey, by the way, everything's cool with your order. It doesn't matter where you are on the site because it was able to respond to it when
Starting point is 00:33:56 it was done. See, the only thing with that, though, is like, I don't know if you have heard, but Joe mentioned this great, you know, experience if you go to pizza hut and order from them in which case it sings to you so like everything you just described is wrong and out of date and you should surf the internet over now and then yeah i don't i don't do we even call it surfing the internet anymore i'm pretty sure that's outdated you're just riding the wave now you don't surf it i think you're just on the internet yeah you are always unfortunately i am um you're still surfing i am well i mean you are from california I am. You're still surfing? I am.
Starting point is 00:34:26 Well, I mean, you are from California. That's right. Thank you, Yost. I appreciate that. So the next one that is the difference between the traditional and the three-factor is error recovery. Now, this one is really kind of cool. In the traditional way, you have to implement some custom logic to do any kind of recovery to rewind the business logic. So going back to the temporal table thing that we were talking about earlier, you know, your first record said 100 bucks. Your second record said 90. How do you revert back? You've got to apply some custom logic to diff the two and figure out what actually happened, right?
Starting point is 00:35:04 In the three factor app, it was all a commit log. You don't have to really do much of anything. You just basically replay the events, right? That's, that's kind of it. You might replay them in the reverse order so that you can sort of see what was happening, but there's really whatever business logic was there to write the event out in the first place, in theory, you can just play them again. I think it's actually Domino's, but they put the wrong pizza in the oven. They send a message to you that says, your pizza is in the oven. Then they realize the mistake.
Starting point is 00:35:38 Oh, crap. What do you do at that point? Do you like, sorry, we took your pizza out of the oven. Like, is there a custom event for that? Do you just put it in again? So you get your, you know, two of those. I'll stop singing it. But, uh, you know, it's a hard problem.
Starting point is 00:35:52 You can't stop singing it. Right. Yeah. Once the, once the cat's out of the bag, there's no putting it back in the bag. So, so the way that you would do like this error recovery is you probably play those things and you might add another event to fix the error right so okay pizza was in the oven no it wasn't messed up now we're going to add another record that says hey no the pizza's back in the oven right so so that's how you can kind of do some of this stuff and if you need to get back to a point in time you can truly
Starting point is 00:36:22 just replay the events if that's all we're talking about is for the recovery. So it's a, it's pretty interesting. Yeah. I'm trying to find something on YouTube, but hopefully we'll have a link to this amazing order experience. Maybe I'll have to order a pizza or something and record it for you. There we go.
Starting point is 00:36:40 It's a little bit late for a pizza order, isn't it? No. I live by college. It's a little bit late for a pizza order, isn't it? No. Primetime. I live by a college. It's never too late for pizza. This episode is sponsored by Datadog, a monitoring platform for cloud-scale infrastructure and applications. Datadog provides dashboarding, alerting, application performance monitoring, and log management in one tightly integrated platform so you can get visibility quickly. Visualize key metrics, set alerts to identify anomalies, and collaborate with your team to troubleshoot and fix issues fast. Try it yourself today by
Starting point is 00:37:17 starting a free 14-day trial. You can also receive a free Datadog t-shirt when you create your first dashboard. Head to www.datadog.com slash coding blocks to see how Datadog can provide real-time visibility into your application. Again, visit www.datadog.com slash coding blocks to sign up today. All right. So here we are back and we want to talk about some of the benefits of an immutable event log. Can't update, can't delete, can't change it, whatever. So the primary benefit of this is the simplicity when dealing with recovery. That was like the last thing that we left off with right before the break here, there's no custom business logic. You just replay the events. They're there in the data. You just play them.
Starting point is 00:38:12 And so due to the nature of processing the data, the individual data event, you essentially build up an audit trail, which you don't need to have additional logging for that, which I hadn't really considered that. It's like your data is also the log at that point. That's kind of nice, right? Is the fact that you could just sort of look at it and you know, you know what's happening. Yeah. It can be really difficult with these streaming systems to really inspect. Like, I guess you need some sort of specialized tool to be able to kind of like
Starting point is 00:38:38 roll through messages conveniently. Uh, at least with Kafka, it's not very easy to just go like, let me see these three events. It's like, uh... What you do is basically hook up a consumer and then you kind of pull for items. 1989 just called. Yeah, it's going to
Starting point is 00:38:57 keep doing that too. There's nothing I can do about it. That's amazing. Whatever. Yeah, it's not my phone. It's Doc calling. He's stuck. He means plutonium that's awesome bring the camcorder so here was the next i think you hurt his soul i hate that phone so much so much it was cheaper to get the phone than to not get the phone. And that's how you know they're selling your information.
Starting point is 00:39:29 Wait, what do you mean? It was cheap. I got that internet. Oh, oh. Yeah, but no one says you got to plug it in. Yeah, you have one of those bundles. Look, I'm married. When you have a partner, you got to make some concessions.
Starting point is 00:39:42 Pull out the phone. Yeah. I remember that in the vows yeah right for better or worse landline is required that's awesome all right we'll stop harassing you for a moment well only for a moment for a moment yeah so i feel so old now the other cool part of this is replicating your application is as simple as literally just taking your events and replaying the logic on top of them. So if you needed to move this over to another server, you just run it, right? Like you start over and you're good. That's pretty ridiculous, though. Yeah, I kind of take issue with this one because now it's like okay you're
Starting point is 00:40:25 just going to replay it on that other server so oh all the orders that went through the first time are going to go through a second time okay so that's fair that's a really good point yeah you may not run the payment part but but you're going to be delivering a bunch of stuff but you're not able to mutate that pass like these are immutable so how do you know oh on this other second server don't don't process the payment part so now you have that special logic when you do the replay hey we're gonna get into so that's a little bit that's a no i mean you you replay the messages not the actions yeah so like you would you would you know replay to put it in the oven message and you'd replay the got it out of the oven message. See, I actually love the fact.
Starting point is 00:41:07 You'd replay the order process, order received, order created, whatever. No, no. I love the fact that he takes issue with this because I have big issues with this too because I think the same thing. Are you going to put all kinds of feature flags in your application to be like, yo, yo, yo. When I replay this thing, don't actually do these 500 things right here that are typically done. I just want you to get us back to the proper state. Yeah.
Starting point is 00:41:31 But what I'm saying is like, when you replay those messages, why run them through the application at all? Why can't you just, you know, copy the messages from one to the other? Oh, we'll get into that shortly. What does that even mean? Why even replay them at all? Yep. Just skip to the end then.
Starting point is 00:41:45 Don't even read the book. Just go to the end then. Don't even read the book. Just go to the last page. But there's a problem. Remember, there is no end because these are not the state, right? These are bits and pieces that make up the state when you add them all together. Fair point. Fair point. Yeah.
Starting point is 00:41:59 Otherwise, if you skip to the end, you'd only have like one line item out of 10 and you don't even know who that line item belongs to. You're just like, somebody got something. Right. So now we're starting to – by the way, we're in the benefit section right now. We're in the reasons why an immutable log is awesome. It's goofy. So keep that in mind. Like we're going to be talking about the downsides of this here in a minute.
Starting point is 00:42:22 Oh, okay. Yeah. Well, remind me to come back to that one then okay so um this one i put in here was a little bit of side information this allows you to think in ddd instead of crud this was actually a decently long post and and i think to summarize, we've talked about how what we've done for, you know, God knows how long programming now. We had a tendency to think about the database first, right? Like anytime you went to write an application, like for me, I'm sure Joe, Mike, and a lot of other people, we just magically see a schema pop in our head for database tables, right? Like we know how this thing's going to lay out.
Starting point is 00:43:11 We know how it would work. Done it so many times. It's kind of easy. The problem is you start thinking about business problems in terms of creates, updates, and deletes and reads, right? And that kind of stinks because that means that you're not actually thinking about the business problem. You're thinking about the technical implementation of how you're going to write code to somehow fulfill that business problem, right? When you do things sort of in this event sourcing type of way, you kind of think about the business logic more than you do about how you're writing the data out. And that's kind of what this whole article was about. It was
Starting point is 00:43:53 an interesting take on it. And I appreciated the take because it forces you to think in the ubiquitous language instead of, oh, I'm going to go create a record for the order. And then I'm going to have to go create these order detail records. And then I'm going to have to go update that other record. No, no, no, no. What happens when somebody places an order? What, what is the outcome of that? Right? So it was kind of an interesting take on it. Yeah. So you could definitely say that three factor apps have a predisposition towards event streams. They don't mention any other kind of persistence. Like you can have them.
Starting point is 00:44:29 There's nothing precluding you from having a relational database or something in the mix here. I thought it was so interesting to see a kind of architectural kind of take on doing something that's so different from how I usually think about things. It's been very refreshing. Yep. And so the next one that came up was one of the very common patterns. So I've been saying event sourcing a lot. That's just one of the patterns. You know, they were talking about this whole section is about an immutable event log. Event sourcing is one of the ways that this is handled in the programming world of being able to replay events and rewind events and do all that kind of stuff.
Starting point is 00:45:09 So that's a call out. Uh, I don't know. Did somebody delete the, okay. Nevermind. I actually did. Yeah.
Starting point is 00:45:16 Don't delete that line. Yeah. I was deleting show notes. I see it's mutable man. Right. We needed a mutable show note page. You ever think right there. You ever question, like, there are some people on the internet, like, Martin Fowler is definitely one.
Starting point is 00:45:32 John Skeet. Like, you ever think that, like, maybe they just created everything that we know today. I don't leave my house much. Some of it we might not. Some of it, you know, whatever. But, like, they've got articles about everything that's out there and if you've you learn you read something today from one of them and you're like i don't quite get it and then 15 years later like oh yeah yeah yeah like in my mind history consists of like i don't know 200 people
Starting point is 00:46:00 the history is not very big. It is funny. Martin Fowler and Abraham Lincoln. There's a couple people in between. The only difference is Martin Fowler had a better computer and better internet access. Right. Like right on his stuff. Yeah. That is hilarious.
Starting point is 00:46:19 Yeah. It's easy to see that all of the same names show up and they did a lot of really incredible things. That's good. And I'd like to see more names. I hate to think of all the people that kind of get left out because, I don't know. I have mixed feelings about it because I know there's a lot of other people that are talking about similar ideas. But people like Martin Fowler, they really hit the nail on the head when they write an article or write a book about something they did.
Starting point is 00:46:40 It just really kind of stands out. And so even though there were a lot of people talking about whatever the various thing was at the time, sometimes like they just do such a good job of defining it. Yeah. Well, I mean, it's kind of like, you know,
Starting point is 00:46:51 a similar conversation that we had back with, um, the pragmatic programmer, right? Where those guys, uh, wrote about tracer bullets, right?
Starting point is 00:47:01 Yep. And now we refer to it as agile, right? This article that you're referring to from martin fowler is 14 years old right yeah it's not new stuff the the weird thing is is it all comes in cycles right like developers will will do something we'll find something painful and then we'll find something that made made that pain go way easier. And then all of a sudden we want to do everything that way. And then we realize, okay, that was a bad idea.
Starting point is 00:47:32 We shouldn't have done everything that way. And then we find other patterns, right? It happens throughout a developer's career that, you know, like the first time you find design patterns, right? Like you want to design pattern all the things. And then you find out, okay, well, maybe I shouldn't have thrown in like some sort of crazy pattern here because I didn't really need it. So. Oh, yeah.
Starting point is 00:47:54 So the first time I saw, I heard about Docker, I'm like, that's dumb. And then by the 10th, 12th, 14th time, you know, hey, this is pretty cool. Well, you can't. Docker all the things. You can't overdo Docker. I'm sorry. That does not fall into this category. Well, no, there's just lots of things.
Starting point is 00:48:08 Like, Microsoft was the first couple times I heard, like, that's dumb. Now I'm like, you know, it's kind of cool. I'm a slow adopter. There's some people out there that are pioneering, making the good stuff. So I need to figure out who the pioneers are today and then follow on their boot heels. On their boot heels. That's okay. Got it. Yep. All right. So you asked earlier about, does Kafka check all the boxes? And this is where,
Starting point is 00:48:34 this is where we start to get into what was interesting about their approach with three factor app is they don't necessarily talk about Kafka. They don't talk about RabbitMQ. They don't talk about MQ series or any of these other ones, right? They're basically talking about using database systems. And in these database systems, they talk about syncing data using change data captures or some sort of triggers to fire off that the applications could be aware of. Right. So that's where it was kind of interesting is like, there's these technologies available to answer these problems, but you can see a lot of the things that are, that they mentioned are actually built on top of databases.
Starting point is 00:49:16 The Hasura, I think is what they're called. Yeah. Um, like they even wrote something. I'm about to say Postgres. Yeah. They wrote a graph QL, uh, implementation for Postgres. Yeah, they wrote a GraphQL implementation for Postgres.
Starting point is 00:49:28 Yeah, so that basically as data hits Postgres, then it can emit an event out to the application server that goes back. So, yeah, that's why I said, yeah, Kafka checks all the boxes, but that's not necessarily the route they went when they were designing this three-factor app thing. Well, that's why I'm calling out. I'm questioning, like, okay, was there a reason? They were very specific with the first factor, right? Now we're in vague territory. Yes, very. Right?
Starting point is 00:50:01 And knowing that they are a database company, database company right then it's kind of like oh that's interesting like why yeah yeah so check this out so they provide a graphql wrapper around postgres right but they also offer real-time subscriptions and live queries so you can take a database query say this is what i want to subscribe to, and it'll just watch that because they've got something that's kind of inserted there somewhere around the log layer that's watching for the sort of events and updates.
Starting point is 00:50:33 Yep. So it's very similar to change data capture. If you're not familiar with what that is in a database, usually there's like SQL Server has it, I'm sure Postgres and many of these other ones do. But basically when something with some data hits a table, then it can notify something that's, that's waiting to be, to be notified about it. Right. So, yeah, I wanted to bring that up just for a second, just cause I think kind of like to explain it like a naive level, one way to do it
Starting point is 00:51:00 would just be the like query a table every 10 seconds and look for either a timestamp or a number to go up, like basically a road version type thing. But there are companies out there that have inserted themselves at a much lower level, and they actually watch those right-ahead transaction logs, which is kind of a feature of those relational databases where they kind of write stuff to logs in case they go down or whatever. In case there's a problem, they can roll that stuff back. And that's kind of how they keep track of stuff. So now there are these companies that are watching these logs and then kind of replicating those changes out to systems like Kafka or whatever. So I just think that's a really cool kind of field. And there's a couple of companies here, like we mentioned,
Starting point is 00:51:36 Hasura, Atunity is another big one. Debezium, Kafka Connect is something that's a little bit more higher level. But it's just kind of cool to see these organizations and tools that are doing stuff kind of at varying levels here, and you've got a couple things to choose from. I thought it was really cool. There's also the ability to do triggers. If you've ever done something where you set up a custom trigger on a database
Starting point is 00:51:57 so when a change is made, then your database will go ahead and call some sort of custom action or do some custom execution that will relay that change out there. I always try to stay away from triggers if I have another option just because, I don't know, it just feels kind of wrong to me. I've never understood that, by the way. People that avoid triggers in databases. The only reason I don't like them is because they're sort of hidden logic. That's the reason. That's the way that's the reason that's, that's the only reason. But the thing is they exist to
Starting point is 00:52:28 fulfill some purposes. So as long as you're trying to fulfill a purpose that makes sense, like writing before there were temporal tables, writing to an audit log, like that's a perfect example of where a trigger makes sense to me. And I just knew so many people that were like, no, no, no, don't do a trigger. And I'm like, okay. It would happen kind of in a weird spot. It was like kind of like outside the normal execution flow. So you make your update and you get a return message that says update made.
Starting point is 00:52:58 And you've got this other action that's kind of running. That's kind of invisible to you. It's kind of like running in the space between worlds. But you enforce logic there, like that was that was always the thing is you always get the argument that um you know people if somebody comes in and updates 100 records in the database how do you enforce the other logic that's supposed to happen after that a trigger right like what happens if the trigger fails yeah but every trigger fails data and the trigger fails well then then you got other issues right like if if your if your
Starting point is 00:53:30 infrastructure is not working the way it's designed to work then then it can't help you there but but the thing was like like the perfect example is this whole order order details thing right like if you go and update your order details table, and that should have updated the order total in your orders table, and you have people munging, you know, munging around in the database doing that, and you know, people are going to do that, then the best way to enforce that is through a trigger, right? Anytime you update the order details, then it should roll up and update the order total amount, right? That's if you were working in a world where people are doing everything in the database and people have their hands on the database.
Starting point is 00:54:10 Hopefully, you have an application controlling that logic, but that's not the reality everywhere, right? So that's where I think triggers matter. Well, I mean, even like you mentioned, you said something about like before temporal tables, but even with temporal tables in like a postgres environment it's done via triggers yeah so my new favorite book has a little section on trigger-based replication and what they say about is basically that it's got
Starting point is 00:54:36 greater overheads than some of the other methods we've talked about like watching those write ahead logs and it's a little bit more prone to limitations but it's's flexible. So it's basically like, it's okay. There's other ways of doing it, though. Well, I mean, if you're watching the transaction log, sure, fine. That's, you know, whatever. It's almost like – Yeah, it's a heavyweight solution. Right, yeah.
Starting point is 00:54:54 I mean, you've got to have some real knowledge in order to do that kind of stuff. It's almost like when we talked about aspects back in the day, right? Like, okay, totally, you can do some aspect-oriented stuff, and it's doing some IL weaving, right? If you're using PostSharp. If you're using PostSharp or some of the other solutions out there. And so it's performant, and it's performant because it's changing the underlying code that's being run.
Starting point is 00:55:19 That's the unique thing about PostSharp. So PostSharp is heavy-handed in that regard. Right. thing about post sharp suppose sharp is heavy-handed in that regard right most most aspect uh oriented uh implementations in c sharp or in the dot net space don't work like they're using reflection and other things which are heavier and that's why they're not as well heavier on the runtime on the runtime right they add some overhead so at any rate I digress with all that. I just have never understood the hatred for triggers. I get it a little bit, but whatever.
Starting point is 00:55:50 Yeah, let us know in the comments. Yeah. Alright, Nate the DBA, I'm watching. Watching those comments. This episode is sponsored by O'Reilly Velocity Conference. To get ahead today, your organization needs to be cloud native.
Starting point is 00:56:05 The 2019 Velocity Program in Berlin, November 4th through the 7th, will cover everything from Kubernetes and site reliability engineering to observability and performance to give you a comprehensive understanding of applications and services and stay on top of the rapidly changing cloud landscape. Learn new skills, approaches, and technologies for building and managing large-scale cloud-native systems and connect with peers to share insights and ideas. Join a community focused on sharing new strategies to manage, secure, and scale the fast and reliable systems your organization depends on. Get expert insights and essential training on critical topics like chaos engineering,
Starting point is 00:56:46 cloud native systems, emerging tech, serverless, production engineering, and Kubernetes, including an on-site CKAD prep and certification. Velocity will be co-located with the Software Architecture Conference this year, which presents an excellent opportunity to increase your software architecture expertise. Get access to all of software architecture's keynotes and sessions on Wednesday and Thursday, in addition to your Velocity Pass access for just 445 euros. Listeners to Coding Blocks can get 20% off most passes to Velocity when you use the code BLOCKS, all caps B-L-O-C-K-S, during registration. All right, so it's that time to talk about reviews because reviews are super helpful for us. We really appreciate it.
Starting point is 00:57:37 We love reading the names. We love reading those reviews, and it really helps us out a lot. So I've got to ask you, please, if you haven't done so already, go to codingblocks.net slash review. We've got a couple links there that will hopefully make it easy and painless for you. And we really super duper appreciate it for real. All right. With that, we will head into my favorite portion of the show. Survey says.
Starting point is 00:58:03 All right. Are you mocking me over there, Joe? I see you... I can do what I want. Yeah. You got to get your mouth away from the microphone when it gets louder. Yeah. All right.
Starting point is 00:58:14 So, back a few episodes ago, we asked, When you want to bring in a new technology or take a new approach when implementing something new or add to the tech stack, do you A, ask peers if it's a good idea before implementing it? The voice of many carries more weight. Or B, ask the relative team lead if you can implement it. If the boss doesn't like it, it doesn't matter. Or C, implement a proof of concept and get stakeholders to weigh in because if they don't know about it, they need to be sold on it. Or D, just do it. I can't waste precious time checking if others like my idea. And lastly, E, abandon it. It's already too much effort. All right. Memory's fuzzy on who went first, so Joe, I'll pick you.
Starting point is 00:59:10 All right. I'm going to say that when you want to bring in a new technology, take a new approach, that you are going to B. Ask the relative team lead if you can implement it. And I'm going to say that that got 37%. B at 37. All right. So am I supposed to pick what I do or what the others would do? You can play the game however you choose. That's your strategy.
Starting point is 00:59:35 Okay. So I'm going to say that most people said C, implement a proof of concept and get the stakeholders to weigh in. Because if they don't know about it, they need to be sold on it. The best way to prove this stuff is to show somebody. I'm going to hope that that was it. And we'll go with 20%. There's less than five. Okay.
Starting point is 01:00:02 So there's five options. So you're not very confident in your answer. So Alan is C at 20% and Joe is B at 37%. Is that correct? That is correct. So Joe is Ask the Relative Team Lead, 37%, and Alan is Implement a Proof of Concept at 20%. And Alan wins it. Woo-hoo!
Starting point is 01:00:28 All right. It was almost 41% of the vote. Wow, okay. Implement a proof of concept and get the stakeholders to weigh in. I like it. I'm super excited that people chose that. Mm-hmm. Wow.
Starting point is 01:00:42 Well, rock on. All I got to say about that. What was, what was number two? Was it, was it Joe's super close? Second was ask peers if it's a good idea. Interesting.
Starting point is 01:00:53 Nobody cares about the boss, man. Nobody cares about the boss. That's just a worthless job is what apparently our audience is telling us. Okay. But hold on. Number three, abandon it.
Starting point is 01:01:06 Actually, abandon it. Actually, abandon it wasn't even, it was like the last option. Nice. Like that one, that one super surprised me. I honestly expected that one would get a little bit more love than it did. So we've got proof of concept, ask peers. So the next one's going to be just do it? Yeah, just do it. Yeah, we've got some real go-getters.
Starting point is 01:01:29 All right. Actually, no, no, no. I take that back. No, no, no. Ask the relative team lead was the next one. Okay. Just do it was, you know. Next to last.
Starting point is 01:01:40 Yeah. All right. Fair enough. Yeah, yeah, yeah. That is. We got some go-getters. We got people that like to see some change. That's awesome.
Starting point is 01:01:46 Yep. Yep. I like to see that fire. I like it. All right. So kind of keeping in line with that survey, today's survey is, what's the first thing you do when picking up a new technology or stack? And your choices are: take a course, like on Educative.io maybe; or Google the pros and cons and share the ones that support your opinion; or Bing the best practices and pray there are some;
Starting point is 01:02:19 or, lastly, find the Stack Overflow answer that you most agree with and that supports your theory. I'm looking forward to these answers. Very much so. This episode is sponsored by Educative.io. Every developer knows that being a developer means constantly learning. New frameworks, languages, patterns, and practices, but there's so many resources out there. Where should you go? Meet educative.io. Educative.io is a browser-based learning environment, allowing you to jump right in
Starting point is 01:02:58 and learn as quickly as possible without needing to set up and configure your local environment. The courses are full of interactive exercises and playgrounds that are not only super visual, but more importantly, they're engaging. And the text-based courses allow you to easily skim the course back and forth like a book. No need to scrub through hours of video just to get to the parts you really want to focus on. And amazingly, all of the courses that they offer have free trials and a 30-day return policy. So there's no risk. You can try a course today, like something like a practical guide to GraphQL from the client perspective. Now, does that sound relevant to the topics we've been discussing
Starting point is 01:03:37 recently? Like especially our last episode, right? If you didn't already know GraphQL, I highly recommend it. And here's the great thing, like Joe said about the, you know, no need to set up your environment. You can do this from your iPad, for example, or whatever your tablet of choice is. You don't have to have all of the tools and everything installed on your local environment. You can code and learn it right there all from your iPad. I'll tell you, I've been... Oh, sorry. But I wanted to add this part too, because going back to that GraphQL conversation, there was this amazing little story here that I had to share when we were talking about like,
Starting point is 01:04:22 oh, how do you compare REST to GraphQL, right? And have you ever heard of the sandwich comparison? You have? Well, okay. I see Joe saying no, Alan saying yes. I don't feel so bad then because I guess we have no, no, yes then is where we're at. But they said to think of it like this. You want a sandwich with only bread, cheese, cucumbers, lettuce. So you walk into a restaurant where the only option on the menu is sandwich. And you place an order and you receive a sandwich that has bread, salami, lettuce, tomatoes, cucumbers, and cheese. And then you remove everything you don't want in order to eat the sandwich that you wanted, right? How many of us have been in that
Starting point is 01:05:11 situation, right? That's how the REST API works. However, when you visit the GraphQL Cafe, you realize you can specify what toppings you want on your sandwich and receive it just the way you want it. That type of thing is from the Practical Guide to GraphQL. So now I want to tell you about the one I've been working through lately, which is Grokking the System Design Interview. And it's got a whole bunch of really great breakdowns on, and when I say a whole bunch, I mean 15 systems that you've heard about, like TinyURL, Instagram, Dropbox, Facebook Messenger, Twitter. You can click into any one of these
Starting point is 01:05:50 and see a really thorough write-up about how that company works. Big architecture, I'm talking about. It goes through everything, like from the schemas involved to like the size of the databases and decisions they made as to like what kind of technologies and databases and queues and whatnot that they use. And you can see like a really thoughtful breakdown from high to low levels. And it's really impressive. It's got 15. It's got a really big appendix, too, full of really good information on things like consistent hashing, redundancy, CAP theorem, all sorts of stuff like that, that maybe you've heard
Starting point is 01:06:23 of, maybe you haven't. But either way, if you're interested in high-level kind of architectural stuff and really examining some of these large organizations like YouTube or WebCrawler or Facebook's News Feed, then you've got to check this course out. And like Outlaw said, there's a 30-day return policy, so you can afford to give it a shot. And if you're not feeling it, then you can go through that refund policy. So I definitely recommend checking it out if that's something you're at all interested in. Cool. And as these guys pointed out right now, you can go ahead and get started learning today by heading to educative.io slash coding blocks. That's E-D-U-C-A-T-I-V-E dot I-O slash coding blocks, and you'll get 20% off any course. All right, now there are a
Starting point is 01:07:10 couple downsides of having an immutable event log at the center of your application. Like, information isn't instantly queryable. Now, you can do some snapshots, and there's some kind of techniques that kind of help with that a little bit, but it's kind of like we'd mentioned before, where it's not really easy to just get a kind of pinpoint snapshot of how things are right now without kind of doing some processing there. And by the way, that one's huge, and that's what Outlaw was saying previously, is like replaying stuff just to find out, you know, what the state of the order is. That's sort of crazy town, right? That seems like a very basic thing. What's the total amount of sales we had in the past week?
Starting point is 01:07:53 What's the balance in my bank account? Right. You got to replay everything. Replay every transaction since the moment I opened the account. Right. And so this whole thing about snapshotting is periodically saying, Hey, what was the state of this person's account as of 12 noon today?
Starting point is 01:08:11 Right. And then you save that in another record somewhere that says, Hey, this was the baseline. And then that way, if you need to replay anything after that, you can use that as a baseline and then play all the events on top of that one.
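That snapshot-plus-replay idea Alan describes might be sketched in a few lines of Python. The event shapes, field names, and the bank-balance example here are invented for illustration, not from any particular framework:

```python
# Sketch of snapshot + replay: rebuild an account balance from a periodic
# snapshot plus only the events recorded after it, instead of replaying
# every event since the account was opened.

def apply(balance, event):
    """Fold a single event into the current state."""
    if event["type"] == "deposit":
        return balance + event["amount"]
    if event["type"] == "withdrawal":
        return balance - event["amount"]
    return balance

def current_balance(snapshot, events):
    """Start from the saved baseline, then play newer events on top of it."""
    balance = snapshot["balance"]
    for event in events:
        if event["seq"] > snapshot["seq"]:  # only events after the snapshot
            balance = apply(balance, event)
    return balance

snapshot = {"seq": 2, "balance": 150}  # the "as of 12 noon" baseline record
events = [
    {"seq": 1, "type": "deposit", "amount": 100},
    {"seq": 2, "type": "deposit", "amount": 50},
    {"seq": 3, "type": "withdrawal", "amount": 30},
]
print(current_balance(snapshot, events))  # 120
```

The point of the sketch is just that the loop skips everything at or before the snapshot's sequence number, which is exactly the baseline-plus-newer-events behavior being described.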
Starting point is 01:08:23 Right. But you still, it's not a simple query. Like think about reporting. This would kind of be a pain in the butt. Yeah. So you basically need some specialized application logic or you're bringing in a whole nother database system now to go along with your database in order to make it usable, which
Starting point is 01:08:38 is a pretty tough sell. Yep. And then here was this also, going back to the Martin Fowler library of things. This is where something like CQRS comes into play. This is command query responsibility segregation. And you can solve this in multiple ways, but it can get a little bit complex. So what CQRS essentially does is it separates the models for querying data and the commands for writing data. So you could essentially have like two separate applications and two separate data stores even, right? So your data writes might be going to a database and also
Starting point is 01:09:21 to maybe an Elasticsearch or something like that. And your reads may only be coming from Elasticsearch. Like you could break it up however you want because essentially you're doing two totally walled off things. Yeah, and that sounds really goofy when you say it like that. But when you think about it, most distributed systems have this kind of problem anyway. Like once you get to where you're dealing with a couple different systems and a couple different data stores, then you're constantly living in this world, just a matter of degrees at that point. And it's really
Starting point is 01:09:50 frustrating and it has a lot of problems, but thinking that you're truly in sync in all your data stores across the board at all times is just not true. It's a lie you may tell yourself and it may work out for the best most of the time, but it's, you know,
Starting point is 01:10:05 it's a very real possibility that the stuff is not quite in sync. There's always some sort of like margin of error on them. And that's actually the next bullet point: eventual consistency. We talked about, um, the CQRS, by the way, back in episode 48 when we were covering clean code. It's been a minute. How to write amazing functions. Wow. So, yeah, I mean, the good side,
Starting point is 01:10:30 the good thing though, is that by you referencing this Martin Fowler, at least article, the Martin Fowler article, at least we're like moving up a little bit in time, you know, because this one only goes back to 2011. So we're getting more current.
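Going back to the CQRS split Alan described a moment ago, a toy Python sketch might look like this. Everything here, the command and event shapes, the projector, the in-memory stores, is invented just to show the separation; in a real system the read model might live somewhere else entirely, like the Elasticsearch example from the conversation:

```python
# Toy CQRS: commands append events to the write side (the source of truth),
# and a projector folds those events into a separate read model that all
# queries go through. The two sides never share a model.
events = []      # write side: append-only event list
read_model = {}  # read side: denormalized view built for queries

def handle_command(command):
    """Command path: validate and append an event. No querying happens here."""
    events.append({"type": "item_added",
                   "order_id": command["order_id"],
                   "sku": command["sku"]})

def project():
    """Projector: rebuild the read model from the events."""
    read_model.clear()
    for e in events:
        read_model.setdefault(e["order_id"], []).append(e["sku"])

def query_order(order_id):
    """Query path: reads only the read model, never the event list."""
    return read_model.get(order_id, [])

handle_command({"order_id": "o1", "sku": "pizza"})
handle_command({"order_id": "o1", "sku": "soda"})
project()
print(query_order("o1"))  # ['pizza', 'soda']
```

In practice the projector runs asynchronously, which is exactly where the eventual-consistency point that comes up next enters the picture: the read model lags the write side by some window.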
Starting point is 01:10:42 That's just us rolling forward in our Martin Fowler transaction log. There we go. That's good. Yeah. So if you guys think we're talking about like the cutting edge here, GraphQL and queues, it's like, no. It's like 2011 just called and said they want their concepts back. I do think that a big part of this is why people are talking about streaming more and more now, and GraphQL and stuff. Again, it's basically the tooling's gotten to a good maturity level now where it's easy to do. It works really well.
Starting point is 01:11:06 There's a lot of good documentation out there and people are doing really cool stuff with like, we talked about Uber last episode. There are people building really cool, really relevant, neat, useful applications. Now that would have been really hard to do in 2011.
Starting point is 01:11:19 Oh yeah, totally. So here was another downside of this. And this is what I mentioned earlier is forcing event sourcing on every part of the system introduces some complexity that you may not need. Right. So, you know, that was what I was saying is us developers, we tend to, we find a new shiny toy and we want to play with that toy a lot. And so we're going to make that thing fit everywhere. And it may not, it may not make sense,
Starting point is 01:11:46 right? Like event sourcing may not be the answer for every little thing that you're trying to do. So evolving applications, which is another tough thing. We didn't actually talk about this earlier. We'll talk about it now, I guess. Changing businesses require a change to like various different schemas. So what we're talking about here is basically, if you put a bunch of, you know, order messages in a topic as they came along, and now you need to change the data type. Like say you made a mistake and you had an integer where you needed a float, or you needed to drop a field or rename a field, or split up these concepts to do multiple queues now instead of one big object. Now you've got a real problem on your hands because you've got all these things
Starting point is 01:12:29 that were processed and were, you know, being used by version one of the application. Now you are here with, uh, version two, which has a different message format. And so some of the different messages and some of the different schemes that people come up with for dealing with this, uh, support some sort of like schema evolution that fits nicely with applications, but some of them don't, and either way you're probably in for some amount of pain. And so when we're dealing with immutable messages and basically allowing for your application to grow, then you've got some serious challenges there that you have to deal with. Yeah, this one, one of the ways that people try to deal with this is called upcasting. And this is basically taking the old record format and trying to convert it on the fly to the new schema format.
Starting point is 01:13:19 But the problem is now you have completely defeated the purpose of these immutable events, right? Like the whole purpose of this event stream thing was these were immutable, you can just use them, right, and replay them. So it's no longer that. Now you're converting these things on the fly to make them meet the current business logic, or however things need to read these things. Yeah, that's the thing I like, at least, about this. We keep talking about Agile is the way of the future. Agile is the way to go, move fast, break things. Until we come to talking about
Starting point is 01:13:51 messages and queues and streaming. Try not to change stuff. And if you do, try to only add new things that aren't required. Otherwise it gets really tough. We have to keep around the old code forever if we ever need to reprocess. Which we said was one of the fundamental advantages of this. So now suddenly there's a fundamental advantage that requires quite a bit of maintenance to keep up. So this is the next one that they have. So we just talked about upcasting.
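That upcasting can be sketched in a few lines. The v1/v2 event shapes below are made up for the example, but they show both halves of the problem being discussed: the on-the-fly conversion, and the per-version branching logic that has to live somewhere and be maintained forever:

```python
# Sketch of upcasting: convert old event versions to the current schema as
# they're read back, so the rest of the app only ever sees the v2 shape.

def upcast(event):
    """Per-version branching: every old version needs a converter kept alive,
    which is exactly the maintenance burden discussed above."""
    if event.get("version", 1) == 1:
        # Hypothetical change: v1 stored price as integer cents,
        # v2 stores it as a float number of dollars.
        return {"version": 2,
                "order_id": event["order_id"],
                "price": event["price_cents"] / 100.0}
    return event  # already the current version

stream = [
    {"version": 1, "order_id": "o1", "price_cents": 1250},  # old record
    {"version": 2, "order_id": "o2", "price": 8.75},        # new record
]
current = [upcast(e) for e in stream]
print([e["price"] for e in current])  # [12.5, 8.75]
```

Add a v3 and the `if` chain grows again, which is the fragility point raised next about lazy upcasting.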
Starting point is 01:14:16 This other one's called lazy upcasting, which is evolving the event records over time. But that means you're now maintaining code that knows how to understand the old, which is what Joe just said, and the new. So you've potentially got branching logic in your application now, not multiple versions of the application, but branching logic in your application to be able to handle the V1 events, the V2 events, three, et cetera, right? That could get really ugly and that could really make for a fragile application, right? Your events are great because those things are always there and they're immutable, but now your application has to know how to decipher all that. And that can be a real problem. Yeah, we didn't have that problem with the queuing systems in the days of yore. So you
Starting point is 01:14:58 plucked the message off, it was gone and then it was done. So the message was never going to come back again. But now if someone wanted to reprocess the pizza cooking from three years ago, then we got a problem. And one of the big ways, too, that people mitigate this is by only keeping data around for a certain amount of time. Like we mentioned, Kafka keeps data around with a retention policy of seven days by default. So it's not that you're keeping necessarily everything around. Obviously, for a bank, that's another story. But for pizzas, do we really need to keep around last year's pizza history? So there's tradeoffs there and there's things you can do.
Starting point is 01:15:31 But it means more thinking about your data than you probably want to do. That just sounds like, I think I threw up a little bit in my ears listening to this whole upcasting and lazy upcasting and having to maintain multiple versions. Because then I was thinking about it and I'm like, oh crap, man, you're not even talking about having to maintain support for multiple versions on one side either. You're talking about both. The caller that might put the event on there might have to know how to deal with either. Am I thinking of that wrong? I don't know if the caller would, but definitely the responder. The reader and the writer can both be writing different versions,
Starting point is 01:16:11 and some versions can be compatible with others and some not. There's some really sophisticated tools. I don't know if we've mentioned Avro yet or not, but it's basically a format that kind of supports having different versions of readers and writers that can communicate, and there are certain rules that you have to adhere to. And the schema that you kind of communicate with actually enforces those rules. So it's really nice that you can have that.
Starting point is 01:16:33 You can break those rules. You can force things. But it does have mechanisms for it. But ultimately, it boils down to the same thing. All it's really doing is forcing the ability to do that upcasting or downcasting. So it's all gross no matter what. And it really means having to know a lot about your entire ecosystem or it means doing a lot of work to support multiple different versions of things.
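A toy illustration of the reader/writer compatibility idea Joe is describing. This is not Avro's actual API, just the core trick: the reader's schema supplies defaults for fields an older writer never produced, so old records still resolve, while genuinely required fields still fail loudly. The schema shape and field names here are invented:

```python
# Toy schema resolution: the reader schema maps each field to a default
# (None meaning "required, no default"). Old records missing a newer
# optional field still load; records missing a required field raise.

reader_schema = {
    "order_id": None,    # required: no default allowed
    "email": "unknown",  # added later, with a default, so old records still read
}

def resolve(record, schema):
    out = {}
    for field, default in schema.items():
        if field in record:
            out[field] = record[field]
        elif default is not None:
            out[field] = default  # writer predates this field: use the default
        else:
            raise ValueError(f"missing required field: {field}")
    return out

old_record = {"order_id": "o1"}            # written before 'email' existed
print(resolve(old_record, reader_schema))  # {'order_id': 'o1', 'email': 'unknown'}
```

It's the same upcasting work underneath, as Joe says; the rules just get enforced by the schema machinery instead of scattered through application code.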
Starting point is 01:16:56 It's gross either way. Because I swear we just talked about something recently, and I'm thinking it was GraphQL, where one of the main advantages was you no longer had to worry about supporting multiple versions of your API, because when you added things in, it was backwards compatible. And I'm pretty certain that that was part of our GraphQL conversation. That was GraphQL, but that was the schema for GraphQL on the server side. So yeah, you can add stuff to it and it'll work fine.
Starting point is 01:17:25 You don't have to version it because you can make that thing work. But now the stuff that's backing that, that's going and pulling data out of the event streams is going to have to know how to convert that stuff on the fly. Can I unhear this conversation? This hurts, right? With things like, I'll say,
Starting point is 01:17:41 there's different messaging and encoding formats, but things like Thrift and Google Protocol Buffers basically have these notions of like required fields. And if you said a field is required, it can never ever ever be unrequired, and you later can't go to an existing message format and add a required field. So things get pretty tough there. And you can imagine, if you got your GraphQL using that stuff as its transport mechanism, then it knows, like, hey, I've got required fields here. I can always rely on them. And in the application, if you require a field that isn't required in your transport, then one day you're going to get bit. And by the way, the survey, the latest survey about,
Starting point is 01:18:21 you know, find the thing that supports what you like best. In the three-factor app, there were none of these downsides listed, right? Like we had to go dig these up to be like, wait a second, there's some counterpoints to these event streams, right? Like there are reasons why these aren't the only way that people write any app. So, well, and honestly, they say that there are numerous benefits, and then they only list three.
Starting point is 01:19:02 That, to me, now I'm going to be honest, that requires some overhead, right? Like if you've got, you know, a billion events that are in some old schema and you're going to migrate to a new schema, updating that may take some time and it may take you down. I'll take that hit personally, if that's what we're doing, because now you keep everything in a consistent state, uh, a usable state. I would rather that than have branching logic in my code to know how to read 12 different versions of the schema. Okay, let me play devil's advocate for a moment there. Okay, is it that the three of us are coming at these downsides of the immutable log conversation assuming
Starting point is 01:20:15 a Kafka as how this reliable eventing gets implemented, versus they didn't specifically list anything like Kafka or queuing or anything like that. And we know that they're a database company. So maybe that's why, like, they don't have to worry about these types of downsides. So I don't think so, because I mean, if we're talking about Kafka specifically, you don't have to have any kind of schema on anything. You could totally have one single topic and throw whatever data you want in that thing. So you can absolutely do that. There's nothing in Kafka that forces you to adhere to any particular message format or anything like that. Right. But you, they also wouldn't advise
Starting point is 01:20:52 you do that in a production environment too, right? Just like in Elastic, for example, you could have that schema wide open too, but they typically would recommend that you would lock it down. It's a different use case. So in Kafka, like a logging, like here's a good example would be a logging topic, right? Let's say you're taking logs from your Elastic, you're taking logs from your web server, you're taking logs from all kinds of other stuff. You could totally throw those all in the same topic, just so you have one place that you can scan or look at for messages, right? So no, I don't think that's what it is. Even if you think about backing up to a database system where you're going to use
Starting point is 01:21:30 that with change data capture for whatever your event stream is, if you go add a bunch of columns to a particular database table, because, hey, this is my new stuff, right? Or if you modified an existing column and changed it to this other one, you still face those same challenges, right? Like you still have to go back and say, my application needs to understand what these changes mean. Yeah. But the difference though, is in that case, if you were to add a new column to a database, right? You could make that column be like not nullable, for example, and have a default constraint. And there are ways that where you could like, you know, there are practices out there where like, okay, if it's an existing table that already has data, you might
Starting point is 01:22:15 like add the column, let it be nullable, add the default value, then add the not null constraint to that new column with a default on it, right? You know, and then your other code, even if it's not reading that column in or writing to that column, now you have a default constraint on it to where it could at least, you know, it won't error. And the code doesn't have to know or be aware that you changed that table, right? Now, obviously, if you remove a column or maybe if you change its data type, that might cause problems.
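That add-a-column-with-a-default migration pattern is easy to demonstrate with SQLite; the table and column names below are just made up for the demo:

```python
import sqlite3

# Demonstrating the additive migration pattern: a new NOT NULL column with a
# DEFAULT means old code that doesn't know about the column keeps working.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

# Old application code inserts without knowing about any new columns.
db.execute("INSERT INTO orders (item) VALUES ('pizza')")

# Migration: adding NOT NULL here is safe only because of the DEFAULT.
db.execute("ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new'")

# The same old-style insert still succeeds after the migration...
db.execute("INSERT INTO orders (item) VALUES ('soda')")

# ...and both the pre-existing row and the new row have a usable value.
rows = db.execute("SELECT item, status FROM orders ORDER BY id").fetchall()
print(rows)  # [('pizza', 'new'), ('soda', 'new')]
```

Removing or retyping a column is the breaking direction, as the conversation notes; additions with defaults are the safe one.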
Starting point is 01:22:50 It may not even be changing the columns. It might be, Hey, when we initially started doing these events, the ID that we were keying off was first name and last name. Right. And nobody thought about the fact that John Smith was going to show up 500 times in a city.
Starting point is 01:23:03 And so, so in version two, you're like, you know what? Let's stop using the name. Let's start using the email address, right? So now your application has to know that V1 was using the first name, last name. V2 is now using the email address. So I think it goes beyond just the schema changes. It's how the data is used, written, and you expect your application to be
Starting point is 01:23:26 able to replay those events. And when you start changing either the schema or the data that was written to those things, you have a problem. I just wonder though, like, I still kind of question though, that maybe this isn't as big a deal outside of Kafka. I don't know, man. So I do want to say that, um, like Kafka is kind of eating the streaming world. Like there are a couple alternatives. Like Flink is one I keep hearing about, like Storm, uh, different Apache products. And, you know, like Amazon's got Kinesis in AWS, or, uh, Azure's got whatever their, you know, Event Hub. So there's definitely other kind of competing formats, but a lot of those support the same transports, so things like Protocol Buffers, uh, Avro, uh, Thrift, and those all kind of
Starting point is 01:24:17 have this notion of schema evolution kind of baked into them, because it's a common problem they run into. So I don't think it's just a Kafka thing, but I do think it's heavily entrenched in that kind of streaming culture. And that streaming culture is becoming more and more synonymous with Kafka. So I don't really know where that line is. And, you know, maybe I'm biased just because of what I'm working with. But I don't think it's totally exclusive. But I'll say this, though.
Starting point is 01:24:42 Even though they have this schema evolution thing, that doesn't make your applications work. Just like in a database, if you add a column to a table or you remove a column from a table, your application's not automatically going to work with that change in most cases, right? Well, the removal of a column or the changing the type of a column, I could definitely see that being a problem. Right. Or maybe if you changed, to your other point, maybe if you changed what the primary key was or how that part worked, I could see that causing problems. But you could totally get away with adding a new column that the application wouldn't have to know anything about and it would work fine. You can do that in Kafka too.
Starting point is 01:25:28 And that's what I'm saying. So like the evolution of the schema, like what he's talking about, that's there for your convenience, more or less. Like there are some tools that will take it into account and be like, hey, this doesn't match, but that's an implementation detail. Like that's not necessarily, I think Joe even mentioned earlier, like you could totally ignore it. You can override it. You can be like, I don't care about the schema version. So you can totally do the same thing. You could add columns that don't matter at all. But if you were going to make any kind of breaking
Starting point is 01:25:59 changes in your schema, whether it's a database or in, in any one of these queuing systems, you're going to have to handle that. And the last one was, you know, maybe you convert your entire set of data up to the latest version of what that event is supposed to look like. And we've run into that problem with like, you know, databases before, like you mentioned the adding a column, same with stored procedures. Like we used to have rules where it's like every time you added a stored procedure, make sure, you know, or rather every time you add an argument to a stored procedure, always give it a default value that makes sense for existing applications.
Starting point is 01:26:30 And it's just hard to evolve those schemas when changing them along the way means that you can't even get back to the correct state of what this thing was, right? That these events are supposed to add up to. So it's definitely a consideration when you start wanting to make something go to a full-on eventing system and an immutable event system, is these kind of schema migrations or changes in your application and data tiers are things that you have to put some serious consideration into. Here's another one that was kind of interesting. I hadn't even thought about it because I haven't really done any of these eventing systems, is this consideration of how granular are your events, right? So what this means is how, how many
Starting point is 01:27:33 isolated events are too much and how many aren't enough, right? So if you're writing your order thing, right? Like, are you going to have an order event? Are you going to have order detail events? Do you need one for each item? Like if they ordered five of a particular item is having one line item with a quantity of five good enough, or do you need to have five for decrementing inventory or something like, you know, actually making the decision of where you split and say, this is enough information to represent a single event right like those are things it's almost like naming like oh my god naming is already hard now we got to figure out how to break up these events in a useful meaningful way you know once you've got you know like paintings
Starting point is 01:28:17 sometimes you'll buy them on the back it'll say like one of 173 or you know something like that some sort of like number because it's not that there was some big box of all these items, but they were some unique part of a set. And so which one you buy out of that set has some sort of meaning or value. Yep. The meaning and value, the meaning and useful is the key part of this, right? So they actually said if you have too many of these events, there's not going to be enough information on any one of them to be useful in any kind of way, right? Like it's, it's too granular. And if you have too few, then you're going to take a big,
Starting point is 01:28:53 like if you think about like a JSON blob or something, you're going to take a big hit on serializing and deserializing this stuff when you need to go use it in your application. So it's, it's definitely an interesting thing that if you're not in this world, you're probably not even thinking about. I mean, at first thought, when you talk about, okay, we love our e-commerce example. Yeah. So when you think about like, okay, if you were to make an event of, hey, I'm going to add this
Starting point is 01:29:22 line item to the cart, right? And you might be tempted to think like, okay, well, that kind of event, I mean, technically, yeah, it's an event. It is an event. But, you know, is that too granular for this scenario? But then you kind of think about it and you're like, you know what, though? Hmm. You could then maybe remove the coupling. Like if you now scale out to a large e-commerce site like an Amazon or a Walmart or whatever, now it's not necessary for the app server to care. Like your session doesn't have to be specific to a given app server, right? Because any app server could be picking up that stream. That's a great point, right?
Starting point is 01:30:08 I mean – So even – because initially I was thinking like, oh, that seems kind of silly. That seems almost like you've gone plaid, man. Like that's – you went off the deep end with it, but then I'm like, oh, no, I'm totally talking myself into it. Like, yeah. It's kind of cool what it offers you. I mean, here's where like this particular article landed was, okay, and you're using your ubiquitous language and talking about orders, what are the actions that can happen on an order, right?
Starting point is 01:30:52 Added an item to an order, removed an item from an order, returned an item, whatever, right? When you start thinking about that, what are the events that make up your domain-driven design use case? And then that's probably the right granularity. And I thought that was a good answer, right? That's taking your business needs into perspective and then using that to drive the data that's going to be behind the scenes. You know, one thing I wanted to mention, I just kind of thought about is like, whoa, is that, you know, keep in mind that we're talking about a certain type of application. This is not suitable necessarily for all, but what we're really talking about is high velocity, highly scalable apps. And we mentioned like Uber, Domino's Pizza, banking.
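A concrete way to picture that domain-driven granularity, as a sketch of our own (not code from the episode or the 3factor material; the event names and fields are hypothetical): one event per business action on an order, with the current state derived by folding over the immutable log.

```python
from dataclasses import dataclass

# Hypothetical domain events: one event per business action on an order,
# not one per keystroke and not one giant "order changed" blob.
@dataclass(frozen=True)
class ItemAdded:
    sku: str
    quantity: int

@dataclass(frozen=True)
class ItemRemoved:
    sku: str
    quantity: int

def current_cart(events):
    """Fold the immutable event log into the current cart state."""
    cart = {}
    for event in events:
        if isinstance(event, ItemAdded):
            cart[event.sku] = cart.get(event.sku, 0) + event.quantity
        elif isinstance(event, ItemRemoved):
            cart[event.sku] = cart.get(event.sku, 0) - event.quantity
            if cart[event.sku] <= 0:
                del cart[event.sku]
    return cart

log = [ItemAdded("widget", 5), ItemAdded("gadget", 1), ItemRemoved("widget", 2)]
print(current_cart(log))  # {'widget': 3, 'gadget': 1}
```

Each event carries enough context to be useful on its own, but the state still only exists by replaying the log, which is exactly the granularity trade-off being described.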
Starting point is 01:31:36 One thing we didn't mention, I just realized, is a tool that we're using right now, actually, which is basically Google Docs. Right. We're in a drive. We've got three people here in one document that are all subscribed to events. And so when I kind of move from one cell to another, or if I type something, everybody sees it because you're all subscribed to the same event. So we're in this case, we're all subscribing to one set of events. We all have the copies of the data in our browser. And we also have some sort of shared copy on that. So we've got a lot of events moving all over the place. Like I just clicked my mouse like 20 different times, you know, and so I just sent off like 20 different messages. So I think that while
Starting point is 01:32:14 you might be thinking to yourself that these kinds of applications aren't relevant to you, once you start really kind of opening your eyes to the different applications that you use day to day and think about the ones that really stand out, I think you'll find that a lot of the apps that are doing some of the most interesting stuff that we're seeing on the web today are actually built in this kind of manner already. But do you think, so just following with that, I like the example of this Google Doc, but do you think they're actually writing these things out to an event stream? So for instance, like when you're mousing around, like, you know, you're moving your cursor up and down and you're clicking on things.
Starting point is 01:32:49 Totally. This is happening through some sort of sockets that are going back to a server, and it's talking to all three of our computers. Right. But do you think this is actually being written, do you think this is being written to some sort of event log somewhere?
Starting point is 01:33:06 I don't know if it's persistent, but it would make sense to me that there would be some sort of queue going on that kind of shows it. It's either that or basically you have to send my current cursor, or, you know, I don't know. The only reason I say that is I think you'd introduce so much latency into interactive apps like this, right? To where it's like, yeah, we don't want to write this thing out. You know, maybe it'll snapshot some things at certain points in time, and we know it does, 'cause there's version history when you do some things. But, like, moving cursors around when you're not actually writing any data, if you're not cutting and pasting cells or anything like that, it seems like the overhead to it would be too much. Now, I don't know.
Starting point is 01:33:49 It's just an interesting aside to what you said. Yeah. You can actually look at your network tab, and it's doing some weird stuff. I definitely see, as I kind of click around and do things, that it sends a message for each one. It's actually not doing WebSockets as far as I can tell. Oh, really? I missed the hookup somewhere, huh. Uh, interesting. All right, so, I lost my place in the notes. The last downside I had here for these event-type systems, this one was kind of interesting: fixing bugs could be a little bit more of a pain because these are supposed to be immutable events.
Starting point is 01:34:28 And if there was something wrong, then you've got to sort of be able to rewrite that history to get it back into the correct state. So that might be a bit of a pain. Like, whereas if you think about a traditional transactional database system, you just go write an update query and you're like, boom, everybody's good. Moving on. Nobody needs to know that there was a problem here. Nothing to see. Right. There's nothing to see.
Starting point is 01:34:53 Nothing to see here. But in this, you might have to figure out, hey, where do I need to insert a record in between these events, or whatever the case may be, to try and get this thing back into a good state. Because remember, these event systems are more of a calculated state, running it through some sort of algorithm. So that was kind of interesting. Hmm. So are we all watching our network tab right now? No,
Starting point is 01:35:19 um, no, cause then we'll be stuck in cobalt and we won't figure out how to read that thing again. True, true. No, I mean, that's definitely an interesting case that I hadn't thought about. And I'm curious, like, well... we seem to be picking on Kafka tonight... would it even allow you to do that? And I didn't think that it could.
Starting point is 01:35:46 I didn't think you could insert something in between. No, it would have to be at the end. Yeah, so in that case, you're just stuck. Yep. Yep. It's kind of interesting. Maybe that doesn't matter. I guess it depends on the application, right?
Starting point is 01:36:00 Like, it's impossible to know all the uses of this. But, yeah, it's definitely something to consider. I mean, to Joe's point earlier, probably what you do, if you realize that you screwed up on a thousand orders is you'd probably take the data from that original topic, write it to a new topic and be inserting those values you needed in between as you wrote those records. Right. And then you kill that old topic off. So that's how you'd fix it. Right. You'd basically,
Starting point is 01:36:33 that sounds horrible. I mean, I'm used to going to the database, writing my little update, you know, and saying, whew, hoping you don't mess it up.
Starting point is 01:36:39 But when you think about going to the database, writing a little update script as a developer, hitting it and, whew, you know, hopefully you did it right. When you start thinking about bigger and bigger scale, like higher-velocity apps, like your Uber, you don't want any developers
Starting point is 01:36:52 logging into the production database and running ad hoc queries. You need to have better processes for fixing mistakes like that. Right. Yeah, it's definitely not an easy thing. And, let's be honest, you wouldn't even want people doing that with a database. Sure, you could write an insert record in between your other records; you still wouldn't want that, because we're talking about things at scale here, probably more than
Starting point is 01:37:13 likely. I'm still trying to figure out, like, any system where you could insert this other action in between after the fact. You could do it... what does that even mean? You could do it in a database, but you'd probably have to fudge the timestamp or something on the records in between. Yeah, it's dirty. And I guess that's the whole point of these immutable systems: hey, this was the state, this is what happened. And if you want to put a fix in, probably what you should do is insert a record at the end and say, hey, this is how I fixed the state. Yeah. I mean, I'm sure that people have worked with these event systems a lot.
Starting point is 01:37:56 They've come up with clever ways because they've been in that situation where it's like, oh, we really messed that up, got to fix it. I mean, everything that comes to mind would only be you doing something at the end to just correct the state. That's the one that makes the most sense. Nothing, nothing is coming to mind where it's like, oh yeah, here's an easy button where you could inject something in and then replay it so that it then works correctly later. It's dangerous, right? That just doesn't even seem... that's like you rewriting Git history up on the server. I know you don't like that. Oh, well, now you went there, so now I'm like, okay, well, yeah, I guess it's happened. Git rebase and push, baby. It's done. Well, with Kafka in particular, like, it's got this whole
Starting point is 01:38:41 notion of offsets, which is how it keeps track of things. And then all this stuff is laid out on disk. And it doesn't have mechanisms for updating. All that stuff has to kind of happen afterwards. So if you think about what it takes to actually insert a record into the middle of something, it's like, oh, you can't just insert. It's like inserting into an array. You've got to bump everything else down. So it's not just changing one message. It's like changing everything from there on after.
Starting point is 01:39:05 And, yeah, it's just, I don't know, it's a weird concept. And all the data even for the offsets is kind of kept out separately from the consumer. So it's really hard to mess with the data. You have to really work hard even just to read it. So modifying is out of the question. You'd probably jack up so much stuff if you tried to force that hand. Like it would not be worth it. Maybe like modifying files on someone else's disk.
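To tie this stretch of the conversation together: with an immutable log, you either append a compensating event at the end, or you replay the whole stream into a new, corrected topic and retire the old one. Here's a rough sketch of both ideas in plain Python (our own illustration; a real Kafka topic would sit behind producers and consumers rather than a list, and the duplicate-detection rule here is deliberately naive):

```python
# An immutable order-event log with a bug: a duplicate add slipped in.
log = [
    {"type": "ItemAdded", "sku": "widget", "qty": 5},
    {"type": "ItemAdded", "sku": "widget", "qty": 5},  # oops: duplicate
]

# Option 1: append a compensating event. History is preserved,
# but folding the log now yields the corrected state.
compensated = log + [
    {"type": "ItemRemoved", "sku": "widget", "qty": 5, "reason": "dupe fix"}
]

def on_hand(events, sku):
    """Fold the event stream into a quantity for one SKU."""
    sign = {"ItemAdded": 1, "ItemRemoved": -1}
    return sum(sign[e["type"]] * e["qty"] for e in events if e["sku"] == sku)

print(on_hand(compensated, "widget"))  # 5

# Option 2: replay into a brand-new topic, dropping or fixing bad records
# along the way, then kill the old topic off. The original is never mutated.
def replay_without_dupes(events):
    seen, fixed = set(), []
    for e in events:
        key = (e["type"], e["sku"], e["qty"])  # naive dedupe key, toy only
        if key in seen:
            continue  # drop the duplicate during the replay
        seen.add(key)
        fixed.append(e)
    return fixed

print(on_hand(replay_without_dupes(log), "widget"))  # 5
```

Either way, the fix is expressed as new data at the end or a new stream entirely, never as an in-place edit, which is the whole point of the immutability being discussed.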
Starting point is 01:39:26 Yeah, it's not worth it. So a lot of these things in these downsides here, I got from an article on Medium, which I've grown to hate more over time. So I apologize for that. Oh, do you not see the thing, like, oh, we see that you've read three articles this month? Yeah. I'm like, man, get off me.
Starting point is 01:39:50 It's driving me crazy. And I get it. People want to get paid. But, man, don't create a site that's all about people sharing information and then try and block their information from the world. Yeah, they don't block the Googlebot, do they? Right. I hate paywalls. They drive me crazy. Paywalls I'm okay with, but Medium was never set up to be a site for a paywall, right? Like, it's
Starting point is 01:40:11 not the New York Times, you know. Wait a minute, I'm not seeing what you're seeing then. Really? I get the thing that pops up a lot where it'd be like, pardon our interruption, and you could just be like, uh, close it, and you can still read the article. Yeah, you can still read the article. But I'm talking... maybe I misread that, though, because it was just, like, wanting you to sign in. It wants you to sign in so it can track what you're looking at and all that kind of stuff, and I'm like, dude, get off me. Yeah, I just don't sign in. Yeah, I don't either, but, yeah, those things annoy me. They do let you read the articles, which is good, because it could be worse, like when you talk about a paywall. Yeah, if that happens, there will be, um, mutiny. Max.com instead of medium.com, I don't know, somebody's gonna change it. Max, max. Or min, min.com, maybe, I don't know. Oh, max.
Starting point is 01:40:58 Yeah, nice. Okay, yeah, I think they would just have to spell it out at that point to avoid confusion, otherwise Apple might... Or we could call it large.com, small.com, I don't know, whatever. Yeah. Oh, so I was thinking psychic.com. Psychic, yeah. I never thought of Medium as in, like, the middle. I always thought it was, like, medium as in a spirit medium, or maybe medium as in, like, a transport protocol. When you said, like, min or max, I'm like, what? You know what? Like, I don't think if I had stared at that word for the next 50 hours, the spiritual world would have ever entered my mind. That's okay. Because I went to a third place. Where'd you go?
Starting point is 01:41:37 We all went to a different place. Because every time I've ever seen medium.com, I just assumed. The writing medium? It was like, this is the medium by which I plan to share. Right. Okay. Yeah. I could find that more easily than I could.
Starting point is 01:41:51 The, uh, the Oracle. This is my medium. Well, I guess you guys don't watch the Hollywood Medium then.
Starting point is 01:41:57 No, I've never heard of it. Oh, it's for real, man. Tyler Henry. I thought he was joking for real. And then he doubled down. Yeah, I have no idea what
Starting point is 01:42:06 he's talking about. So much better than Theresa. Okay, anyway. Yeah. All right, I'm lost. It's something, you guys. Look at how upset he is, that's the best part. I know, man, you guys gotta watch some more TV. Wait, you said this is called the Hollywood Medium? Let me Google this. Yeah, Tyler Henry. And then there's Theresa, the Long Island Medium. You're just making all kinds of names up. This does nothing for me, man. I think it's better because we're also looking, like, for those who can't see it, if you see the video when and if it ever comes out, you'll notice that, like, it's the 1930s version of Joe that we have on screen. It's just so emotional.
Starting point is 01:42:42 You guys! You're surprised that you can even hear the voice instead of it just being like a silent film every now and then from him. Dude, what's so special about this Tyler Henry? He just looks like a goofy dude. He's just really connected to the spiritual world. He scribbles on his little notebook and then he
Starting point is 01:43:00 just asks these questions that just grab you in the soul. This absolutely bores me, almost negative interest. Whatever. See, but there's the catch. Almost. So you're saying there's a chance. I never speak ill of the ghosts and the poltergeists around me.
Starting point is 01:43:23 I like how he looks at every shoulder as he says it. The poltergeist sucked the color out of his room, actually. Yeah, that's what happened. They're here. All right. So then the last little bit that we're going to cover here is they have this reference implementation, which really was not a reference implementation. I thought it was kind of odd that they called it that when really it was just a few bullet
Starting point is 01:43:44 points of what they do. Right. And I, um, I don't want to knock on them, right? They at least wrote the articles. We didn't. Um, uh, we're so lazy. Yeah. Yeah. So they have change data capture, which we mentioned; that's for shipping events from a database to your applications. Um, you could also do triggers, which we also mentioned, and then the event sourcing, which is this whole notion that you can rewind and replay these things. So it was all stuff we'd already talked about, but I figured since it was on there, we'd re-mention it.
Starting point is 01:44:21 You know, they do have a sample application you can start up with. It's Hasura's, but they've got like a Docker Compose file. There's a couple of things; you can, like, spin up a free tier on Heroku. But yeah, it's got a Docker Compose file here that spins up. I don't know. I can't actually get the thing open. Hey, look, if there is a Docker Compose, I will try anything. Yeah.
Starting point is 01:44:41 Once. You know what? They named it docker-compose.yaml, which offends me. Wait, do you... why? I was gonna mess with this right now, but now I don't know. You're not going for the .yaml? It needs to be .yml. You need... it needs to be .yml. Maybe I'll still mess with it. But yeah, it looks like it starts you up a pretty nice little console here, so you actually can see their Hasura console and kind of get started with the basic, uh, hello world and kind of take it from there. Cool. So you would choose the option to
Starting point is 01:45:08 use the tutorial then? That's what you're saying? Uh, yeah, and I would probably get halfway through it before I realized I knew everything, and went off on my own, and then regretted it. Wait, this is the Joe that we are typically accustomed to? Yeah, I was about to say, like, wait. Realistically, I would say, like, I'm just going to work through every tutorial I can find, and then I'll learn everything about it. But realistically, I started on tutorial one and I'm like, oh, this is too slow. Even just now, I'm like, oh, let me look at their app first.
Starting point is 01:45:34 It's like, step one, blah, blah, step two, Docker Compose. Let me go download it and go to the GitHub repository, and there's not a Docker Compose at the very root of it. I'm like, ugh. I do the same thing, man. Like, if the first three paragraphs bore me, I'm like, all right, where's the end? All right, let's go to the last step before the end. And then, and then you're mad because you feel pain. You're like, oh, I missed the
Starting point is 01:45:57 one sentence in the middle of that 12-million-word page, man. Whatever. Yeah. All right. So we'll have a couple of resources that we like, including links to Hasura, that reference implementation, and of course the 3factor app. So you can check that out in our amazing show notes. All right. And with that, we will head into Alan's favorite portion of the show. It's the tip of the week. I still find it funny, by the way, that we call this tip of the week and we record every two weeks.
Starting point is 01:46:37 We're more consistent than we used to be, which was like every three or four weeks, but still not every week. Well, I have a tip every week. You just don't get to hear it every week. Ooh. Could be. That's a lie. That is a lie. Man, people that listen to this every week, that's a lot of listening. All right.
Starting point is 01:46:48 Anyways. All right. So, you know, we're giving Docker a lot of love. We've talked, you know, just minutes ago, you were giving Docker some love. So, you know, it's a thing. And so, what if you want to take control of your Docker environment? So you've pulled down all these Docker images. You ran that Docker compose file.
Starting point is 01:47:11 And you're like, hey, you know what? Like, how much space am I using for all of this? I mean, Docker is great, but really, how much space am I using? How much space is being eaten up? So with the system subcommand for Docker, you can specifically use the df command, and you can find out just how much space you are using: how many containers, how many images, how many local volumes, all of that, your build cache, and how much disk space those various types are using, right? So the command would be docker, space, system, space, df,
Starting point is 01:47:47 and you can see that. But within the system subcommand, you know, in general, there are other commands that you could execute. Namely, you might want to do a prune if you wanted to remove some of that unused data, so something like a docker, space, system, space, prune. Or just find information about it: if you wanted to do a system info, or get real-time events from it, there's a system events. So that's my command. These are legit. These are legit tips, too, because I've definitely had Docker things before and I've pruned the system and, uh,
Starting point is 01:48:25 and like recovered six gigs. Oh yeah. Yeah. I'm, I'm well into double digits, uh, looking at it on my system here. And,
Starting point is 01:48:35 uh, and even on like, you know, the, the work laptop, I'm well into double digits there too. And, and basically like what got me thinking about this is I was like,
Starting point is 01:48:43 uh, noticing like, man, I didn't think I was using that much disk space on my system. Where did it all go? And then I'm like, oh yeah, DF, there you go. Yeah. And it's actually hidden stuff, right? Like you're not going to see it. It is not in your face. Yeah. Yeah. Yeah. Exactly. Great, great tip. And that's what got me thinking about it because like you do like a Docker image LS, right, to just see what images you have pulled down. And earlier today on my work machine, I happened to be like, oh, yeah, let me go and see what images I had. And it was a long list that came back. And I'm like, whoa, that's got to be eating up something, right?
Starting point is 01:49:25 Like how much could it be eating up? Whoa, that's how much it's eating up. Okay, I might need to prune. I might be due. I've got so many more tips that I've got to give now. All right, so I guess on to mine. So the very first one I'm going to share is from, I believe it was, Martin on Twitter. So in the previous episode, I gave the Ctrl+Shift+V shortcut inside Visual Studio and the amazing clipboard history that you could do inside there.
Starting point is 01:49:52 So you didn't lose your stuff. So you weren't paranoid about doing another control cut, right? And I think they called it the ring, right? It was the clip ring. It was something really weird. Yeah. There was a specific name for it. It should just be the awesome clipboard history.
Starting point is 01:50:06 I'm going to look it up real quick. Let's see... yep, clipboard ring. Clipboard ring, yeah. Okay. So there's actually a feature in Windows 10, the later versions here, whichever one they're on, that Martin shared, that actually has this feature for
Starting point is 01:50:23 clipboard history in Windows. And I've got a link here that is really nice. You go up there, and you have to enable it in Windows first, which is interesting. But in order to do the copy and paste, you copy like normal, and then it's Windows+V to paste from that history. But here's the thing that you need to know about it,
Starting point is 01:50:44 that you have to be careful about, and I want to call this out so you just don't go enable it blindly and not think about this stuff: it will actually retain your clipboard even between reboots. So if you're one of those people that copies and pastes passwords all over the place, please make sure that you go in and clear out that history. You know,
Starting point is 01:51:03 don't, don't let that stuff stay around. That'll never happen. It won't ever happen, but I've warned you. Now my conscience is clear. Because Outlaw gave the awesome tip of the Docker system commands, I want to give you one that actually Joe is the one that shared with me probably a month or two ago because I truly had. What?
Starting point is 01:51:27 Were you going to try and take it? You didn't type it. Yeah, I should have used it a month or two ago. Yeah, you should have. I was going to say, you can't have it now, man. So this, so I do a lot of Docker Compose things. So if you're not aware of Docker Compose, it allows you to kind of spin up a bunch of different services at once and it'll put them all on the same network so they can communicate all that
Starting point is 01:51:49 kind of stuff, right? Awesome. The problem is, when these containers are running, they have a tendency to write a lot of logs; they have a lot of data that happens. And if you turn on debug logging, then it's constantly throwing that stuff out on disk, right? Well, if you docker-compose down, it doesn't kill those volumes. Those things are still hiding back in the middle of nowhere, doing nothing. You'll never get them back because you killed that stack, but it's just eating up space. So if you do a docker-compose down -v (as in Victor), it will actually kill those volumes. So if you go do your Docker commands to check the system space after that, it'll be empty. It's beautiful. Nice. So it eliminates a bunch of steps that I used to do. And then the other thing that you reminded me of
Starting point is 01:52:41 is you said you did docker images, right, and you saw your list. I had actually gone to a talk one time, and I think it was the dude who did the, uh, the DevOps in Docker talk that blew my mind, like totally just completely blew my mind. I think I was at that talk with you. Do you remember him going, oh man, I'm out of space? And he looked and he had a ton of Docker images. He's like, oh, man, I need to figure out which ones I can kill here so I can do this demo that I'm trying to show you guys. Right. Okay.
Starting point is 01:53:12 If you're running into that, then you need another drive. And I'm going to recommend... I was wondering where this was going. Yeah. I'm going to recommend one. I have a link in the show notes for this. But no lie, I bought this for my laptop, right? So I've got the Gigabyte.
Starting point is 01:53:26 I need to do a review on that thing. I need to get that out for you guys because I truly do love this laptop. But one of the primary reasons I picked this Gigabyte Aero was because not only did it ship with its own NVMe PCIe drive in it, it had a slot for a second one. Intel makes an NVMe PCI Express drive that is affordable. Like, it's not the very fastest one on the market, but it gets close to two gigabytes per second reads and a little over a gigabyte per second writes in speed, and a two-terabyte Intel 660p usually hovers around about 185 bucks. So that's a whole lot of space and a whole lot of speed for not a ton of money when we're talking about SSD prices. So you can switch all your Docker images to go to that thing,
Starting point is 01:54:27 and then you don't even have to worry about it for a year or two. So the only place that I could find it for sale was on B&H, and it's $185. For a two terabyte. For a two terabyte. Very important. Yes, thank you. So I think I had looked on Amazon.
Starting point is 01:54:43 Maybe they're all sold out. Yeah, I don't know that it's there anymore. That kind of stinks. Let me see. Two terabyte Intel 660P. No, I'm sorry. It's there. The two terabyte is $193 on Amazon.
Starting point is 01:54:55 Yep. So, I mean, seriously, fantastic little drive. Oh, actually, no, it would be the same $185 because they have an extra coupon. Oh, yeah, I see. Save $8. There you go. Yeah, so you can get it on Amazon for the same $185 price. So if you're looking for some NVMe storage for not break-the-bank kind of money, you can get the 1TB for, I think it's like $90-something.
Starting point is 01:55:21 Let's see. It's sold out on Amazon. Yeah, on B&H it was like $95. Yeah, so $95, man. That's killer storage prices for that. So at any rate, I have a link there. Joe, your turn. All right.
Starting point is 01:55:35 Oh, sorry. I was reading this amazing book that I'm going to tell you about right now. It covers a lot of material, actually. The name of the book is Designing Data-Intensive Applications. And I'm sure you've heard some of the buzzwords. I've kind of been dropping write-ahead logs and different encodings and just sort of stuff. But I've been reading and loving this book. And I don't know.
Starting point is 01:55:59 It might be in the running for my favorite tech book ever. It's definitely a back-end kind of focus, like how distributed systems work. But it's just so interesting to me that I can go go and read a chapter on how like different data storage systems work. There's always some things I've kind of wondered about and kind of thought about a little bit myself. And it's kind of cool to hear how these big systems do it. And it's got really good recommendations. Like one thing I was actually just kind of perusing through
Starting point is 01:56:17 because I skipped around a bit. I wanted to see what they had to say about basically streaming and log structured events. And when you have deletes to do, and they refer to inserting tombstones, which is something that I haven't had to do yet with Kafka, but I know that's kind of like standard practice for like inserting these tombstone records
Starting point is 01:56:32 that refer to removing messages. So then those tombstone records are automatically ignored whenever you copy to another topic. So I just thought it was kind of cool. It's like, oh, they do have a mechanism for that, and it is immutable, and that's how they kind of deal with it. And so it's nice to kind of see that and read it. So a couple other things.
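For a rough feel of what those tombstones do, here's a simplified model of log compaction. This is our own toy version, not Kafka's actual implementation: the log is append-only, a record with a None value for a key acts as the tombstone, and compaction keeps only the latest value per key while dropping tombstoned keys.

```python
def compact(log):
    """Keep the latest value per key; a None value is a tombstone that
    deletes the key, which is how removal works in an append-only log."""
    latest = {}
    for key, value in log:
        latest[key] = value  # later records win
    return {k: v for k, v in latest.items() if v is not None}

log = [
    ("user:1", "alice"),
    ("user:2", "bob"),
    ("user:1", "alice v2"),  # update: appended, never edited in place
    ("user:2", None),        # tombstone: user:2 is gone after compaction
]
print(compact(log))  # {'user:1': 'alice v2'}
```

The log itself stays immutable; the delete only takes effect when the stream is compacted or copied forward, which matches the "automatically ignored when you copy to another topic" behavior described above.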
Starting point is 01:56:53 They've got sections on columnar databases. Oops, somebody have something? Oh, no, no, no, go on. Columnar databases, streaming architectures, particularly in the data storage section, that's been my favorite so far, where they talk about different database systems. They talk about Elasticsearch, they talk about SQL Server, Postgres, Redis, and they talk about the different technologies underneath, which is basically the data structures and algorithms that govern those systems. So things like whether it's a hash index versus an SSTable versus a log-structured merge tree, a B-tree.
Starting point is 01:57:26 So it's just been really cool, and I hope to talk about it quite a bit on the show, because I think it's just chock full of info, and I feel like I get a little bit more about what makes these systems unique and different. That's awesome, and we will be covering this. Well,
Starting point is 01:57:42 in fairness, you haven't said the title of the book yet. Yeah, I did. Yeah, I stuck it in there, but it sounded like everything else. It's Designing Data-Intensive Applications. It's got a picture of, like, a flying pig on the front, like a hog that's jumping, and it looks like a generic O'Reilly book, uh, which is not a bad thing, but, you know, it looks kind of like every other O'Reilly book you've ever seen. It's got, like, an animal on the front and, you know, kind of colors, and the title even kind of sounds generic, but man, uh,
Starting point is 01:58:08 I am about a third through with it now and every section has just been kind of amazing. So I've been really happy with it and definitely planning on talking. I've actually been highlighting too. So I need to go back and kind of fill in some of the gaps like, uh, about those tombstones. Cause it's really interesting stuff that I apparently skipped over.
Starting point is 01:58:25 I think I'm really excited about the book. I think they call this flying pig a warthog. Yeah, it definitely looks more. It looks kind of like my dogs, actually. That's kind of what they look like. And they need to go out. Yeah, I have a love-hate relationship with this book. Wait, so...
Starting point is 01:58:40 Yeah, really? Yeah, so I love this book, but I hate it because I'll read something and then I'm like, oh, really? And I'll go like start Googling something else and be like, you know, so I can't ever get far into the book because like I'll read a page of it or a section of it. And I'm like, you know, get distracted because I want to go look for other things related to that topic. And this book is not short. You know, it is a very massive. Yeah. It's a big book. And I'm like, I will never in my lifetime get through this book at the rate that I'm going. You know
Starting point is 01:59:13 what I love about this, guys, is I feel like you have both now drunk my Kool-Aid, which is big data. But billions, man. I'm on it. Dude, big data is just so fun, right? Like there's so many problems to solve in big data. It's just a fun – it's a frustrating topic, but it's fun. Well, I think I spent so many years – They start off by debunking the big data terminology in the beginning. Okay. I haven't read it yet.
Starting point is 01:59:41 Well, I just spent so many years like, it's like, all right, here's my C# and here's my relational database. Like, I can solve anything with enough if statements and for loops. And then, like, kind of getting more exposure to other things, you know, tools, it's like, holy cow, like some of these tools that are specialized for the things that, you know, they're special at make really tough problems really easy. So I'll give you one little example where the book kind of helped me out. I was reading a little bit about how Elastic stores their data underneath, and they use a log-structured merge tree, or LSM tree, and one of their tricks is that they
Starting point is 02:00:09 append all changes. So anytime you change a document they write it to the end of their index and they basically keep track of where the document was earlier in the index and ignore it in future searches. So what the book was saying is like if you make an update to any record,
Starting point is 02:00:28 it's like doing a whole nother insert for every update. So earlier this week, I had an opportunity where I needed to update every single record in the index. I was like, so let me see. So if I do an update on every single record, then I should see the size of my index double, even though I'm only doing an update, I'm setting one field, one small zero to a one. I did it, and sure enough, I went from three gigs to six gigs. I did it one more time,
Starting point is 02:00:54 and I'm like, let me try it again. And then sure enough, after the update ran, it went from six to twelve. Sorry, it went from six to nine, because it only repeated the same number of records. And then I ran a command to compact it, and it went back down to three. Okay, so that's what I was going to say. So you can absolutely, like, tell it, hey, compress this back, so it basically kills off the old history. Yep, and all it does for that is
Starting point is 02:01:15 basically goes back and kind of rewrites its indexes and kind of trims that stuff out and reopens it back up for reallocation. Man, that's super useful information. That's all. Yeah, and it's just like, you know, that that was like a little, you know, three paragraph thing. It's like, oh, by the way, here's something cool that Elastic does. I'm like, holy cow. Okay. Yeah, I'm excited about this. We're going to be doing a series of podcasts on this here coming up soon after.
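[Editor's note] The append-only behavior described above can be sketched in a few lines. This is a toy model of the idea, not Elasticsearch's actual implementation: updates never modify a document in place, they append a new version to the end of the log and repoint a lookup table, so updating every record doubles the log; a compaction pass (like Elastic's force merge) rewrites the log keeping only the live versions. All class and method names here are illustrative.

```python
class AppendOnlyIndex:
    """Toy sketch of an append-only (LSM-style) document store."""

    def __init__(self):
        self.log = []     # append-only list of (doc_id, doc) entries
        self.latest = {}  # doc_id -> position of the live version

    def write(self, doc_id, doc):
        # Updates never modify in place; they append and repoint.
        self.log.append((doc_id, doc))
        self.latest[doc_id] = len(self.log) - 1

    def get(self, doc_id):
        # Older versions are still in the log but ignored by reads.
        return self.log[self.latest[doc_id]][1]

    def size(self):
        return len(self.log)

    def compact(self):
        # Rewrite the log with only live versions, like a force merge.
        live = [(d, self.log[pos][1]) for d, pos in self.latest.items()]
        self.log = []
        self.latest = {}
        for doc_id, doc in live:
            self.write(doc_id, doc)


idx = AppendOnlyIndex()
for i in range(3):
    idx.write(i, {"flag": 0})
print(idx.size())  # 3 entries after 3 inserts

for i in range(3):
    idx.write(i, {"flag": 1})  # update every doc: the log doubles
print(idx.size())  # 6 entries, mirroring the 3 GB -> 6 GB jump

idx.compact()
print(idx.size())  # back to 3, like after a force merge
```

This mirrors the three-gigs-to-six-gigs observation from the episode: the data didn't change size, the index just kept every superseded version until compaction trimmed them out.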
Starting point is 02:01:39 I think it's after the three-factor app, right? I think we got a couple in the middle talking about DevOps. Okay. And, yeah, so we'll definitely be talking about NoSQL over SQL, data storage mechanisms, streaming architectures more, like all that sort of stuff. It's all good stuff. Excellent, excellent.
Starting point is 02:01:57 All right. All right, so. Go ahead. That's about it for the episode. We talked about reliable eventing as part of the three-factor app, which, as we mentioned before, is just a modern, high-velocity, scalable architectural pattern for building certain kinds of applications that we think is really nifty. All right.
Starting point is 02:02:14 And with that, subscribe to us on iTunes, Spotify, Stitcher, and more. Be sure to leave us a review if you haven't already. We really appreciate it. I know you hear us say it all the time, but we really do appreciate it. It puts a smile on our face. We love reading those things. So you can find some helpful links at www.codingblocks.net slash review.
Starting point is 02:02:34 And while you're up there, go ahead and check out our amazing show notes, our examples, discussions, and so much more. And if you have feedback, questions, or rants, then the best place to go is the Slack group, which you can go to codingblocks.net slash Slack and hop on in there. And make sure to follow us on Twitter at CodingBlocks, or head over to codingblocks.net and find all our
Starting point is 02:02:54 social links at the top of the page. I think the transaction queue got out of order. It did. It did. The ordering is wrong. Hey, I guaranteed that you would get the message at least once. Don't say anything about order.
Starting point is 02:03:08 Don't say anything about dupes.
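[Editor's note] The closing joke is describing a real property of reliable eventing: an at-least-once delivery system guarantees every event arrives, but not that it arrives exactly once or in order. A hedged sketch of how a consumer defends itself, by deduplicating on event IDs and re-imposing order from the payload rather than trusting arrival order; all names here are illustrative, not from any particular library:

```python
import random


def flaky_deliver(events):
    """Simulate at-least-once delivery: every event arrives one or
    more times, possibly out of order."""
    batch = []
    for e in events:
        batch.extend([e] * random.randint(1, 3))  # redelivery -> dupes
    random.shuffle(batch)                          # no ordering guarantee
    return batch


def idempotent_consume(batch):
    """Process each logical event exactly once, in event-ID order."""
    seen = set()
    results = []
    for event in batch:
        if event["id"] in seen:
            continue  # drop duplicates
        seen.add(event["id"])
        results.append(event)
    # Re-impose order from the event payload, not arrival order.
    return sorted(results, key=lambda e: e["id"])


events = [{"id": i, "payload": f"event-{i}"} for i in range(5)]
delivered = flaky_deliver(events)
assert len(delivered) >= len(events)          # at least once, maybe more
assert idempotent_consume(delivered) == events
print("processed exactly once, in order")
```

The point is that the delivery guarantee and the processing guarantee live in different places: the transport promises "at least once," and the consumer's dedup-and-sort step upgrades that to effectively-once, in-order processing.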
