Coding Blocks - 3factor app – Async Serverless

Episode Date: October 14, 2019

We take an introspective look into what's wrong with Michael's life, Allen keeps taking us down random tangents, and Joe misses the chance for the perfect joke as we wrap up our deep dive into Hasura's 3factor app architecture pattern.

Transcript
Starting point is 00:00:00 You're listening to Coding Blocks, episode 117. Subscribe to us and leave us a review on iTunes, Spotify, Stitcher, and more using your favorite podcast app. And check out CodingBlocks.net where we write show notes, discussions, examples, more, and send your feedback, questions, rants, and comments to comments at CodingBlock. You messed me all up. Comments at CodingBlocks.net. Follow us on Twitter at CodingBlocks or head to www.CodingBlocks.net and find all our social links there at the top of the page. With that, I'm Alan Underwood. I'm Joe Zach.
Starting point is 00:00:37 I'm surprised that no one said it like this, so I will. I'm Outlaw Michael. I never promised you in-order messages. That's right. I really expected that one of you was going to say that. Oh, that fits really well with the show, doesn't it? Out of order messages coming your way here. Well, now you can't say it like that without giving a brief introduction of what we're going to talk about.
Starting point is 00:01:00 But first, this episode is sponsored by Datadog, the monitoring platform for cloud scale infrastructure and applications, and the O'Reilly Software Architecture Conference, the only conference that focuses exclusively on software architecture and the evolution of being a software architect, and educative.io. Level up your coding skills quickly and efficiently whether you're just starting, preparing for an interview, or just looking to grow your skill set. All right, today we're talking about the three-factor app, which is a modern high-velocity scalable architecture pattern for building applications and is the third part in the series. And the last episode we talked about factor two, which is basically how stateless apps and immutable events
Starting point is 00:01:48 help provide reliable eventing. And today, we're talking about async and serverless backends. Very nice. But before we jump into that, as we like to do, first, we want to give a big thank you to those who have taken the time to write us some reviews. So I've got iTunes this time. And so this is going to be
Starting point is 00:02:08 Code and 40K and then Buck Rivard. So thank you, you guys. Or gals. Alright, and go Chaos, by the way. 40K. Or Hammer. Oh my gosh, you guys are such nerds. Wait, how am I a nerd when you're
Starting point is 00:02:24 the one that knew what that was and I'm completely in the dark? Oh, man. Anyway, moving along. Did I miss something? Oh, it's me. So Jedi Knight Luke, thank you very much. And Molina, we really appreciate the views. Very funny.
Starting point is 00:02:40 So as much as I can pick on you guys for not recognizing 40K or maybe Star Wars, I did not get the reference about what was the show outlaw? Well, wait, wait, wait, wait, wait. Okay, so, yeah, so you're talking about Nico's review where he referred to us. Okay, hold on. Let me find it. He says, I love the chemistry between the three of you. Maybe if only second to Jeeza the hamster and Captain Slow. And we didn't get the reference.
Starting point is 00:03:13 No, no. Don't say we. I didn't. Don't say we. Even though I love the show, I didn't get the reference. Shows. Shows. Plural.
Starting point is 00:03:21 Plural. Yes. And so you want to share what those shows are because you got the reference immediately. Oh, immediately got this reference. And in fact, he has hit us up on Reddit with a similar – well, not similar, but a question about like, hey, that sounds like a reference to the Top Gear or the Grand Tour if you were to read it in Jeremy Clarkson's voice. And I'm like, absolutely. And by the way, you were the first person who's ever picked up on that, and you've been doing that for years.
Starting point is 00:03:52 So that was pretty amazing. Very nice. Well, I appreciate the reviews as always, so thank you very much. Alright, we're not skipping over this 40k thing, though. What is this 40k? Oh, Warhammer 40K. I mean, I assume 40K is a reference to Warhammer, a little miniature game. Okay. But then you said something about Star Wars, though. That was a reviewer, Jedi Knight Luke.
Starting point is 00:04:16 Thank you very much. Okay. So Jedi was a – Oh, okay. So you were mixing the two things because 40K was from an iTunes review, a different person. Yeah. Okay. All all right we're all on the same page we're at yes yes yes i think we are at yes yes yes that there should be a wikipedia page for yes yes no they really should you know there probably is yeah they probably you guys
Starting point is 00:04:38 would go on i'm gonna check all right so today we're talking about the three-factor app like we mentioned uh the first one uh the first factor was real-time GraphQL. And the second factor tied into that with the reliable eventing that was focused on immutable streams of data. And these two features put together kind of paved the way for the third factor here, which is async serverless. Woo! And what does serverless mean? Just out of curiosity. Because people hear this and I think people have the wrong idea of what serverless actually is.
Starting point is 00:05:11 Do you want the real answer or what we think it should mean? That's the thing. I know the first time I heard serverless, I was like, what is that? Huh? And then after you see it, you're like, oh, I get it. But anybody want to take a stab at it? Like a broad stroke approach to what serverless is? I really don't know.
Starting point is 00:05:33 I mean, like as a user, I've, you know, put some stuff up there and it's worked out really great. I can see what it means for my bill. But I really don't know how it works in the background, like how they get away with it. You outlaw? Yeah. Yeah, that's fair. I mean, I would say that the technical kind of tech conference kind of definition about it is going to be something along the lines of you're not worrying about infrastructure when you do serverless. So you're not going to spin up a VM or a Docker container or anything like that. So think, how could you get even smaller than a Docker container? And instead you're going to say, here's my function and you just run my function and you get charged for, you know, how many times that function gets called.
Starting point is 00:06:17 Totally. Compute on demand is really what it is, right? But the reality is it's still somebody else's server. There is a server running. And that's the whole thing. Like when people say serverless, it's like, wait, is this magic? No, no, no. It's basically you are sharing resources with a bunch of other people, and you only get charged for the compute time that you're actually doing. So if your function takes 10 seconds to run,
Starting point is 00:06:39 then you're getting charged for those 10 seconds that it's running. What was the CloudFlare thing that we talked about that was even smaller? Threading. Oh, yeah. Oh, yeah. Was that it? The thread? I think that's what they call it.
Starting point is 00:06:52 I thought it was like Edge something. I thought it was workers. It might have been workers. Cloudflare threading. I'm pretty sure it was called workers. Maybe it was workers. I don't even remember now. Yeah, Cloudfl called workers. Maybe it was workers. I don't even remember now. Yeah. Cloud, cloud,
Starting point is 00:07:05 cloud flare workers, workers. So the, you know, the whole idea is like, how could you get to an even smaller, more granular unit of work? Uh,
Starting point is 00:07:15 you know, as you, as you build out your cloud infrastructure, right. You know, we started with like, Oh, Hey,
Starting point is 00:07:21 cool. We got VMs and we can like have one machine run eight VMs at a time. And then we're like, you know, we can really do better with the resources there. And so then comes along Docker and it's like, okay, we can just containerize everything. Well, really, I mean, that even sounds misleading, right? Because containers have been around since forever. Right. And then it was like, you know, serverless comes along. It's like, hey, we can actually get this a little bit smaller and just focus on just the unit of work, you know.
Starting point is 00:07:50 So, yeah. Yeah, it's getting smaller and smaller and smaller every single time. By the way, I know we're out of the news section. Did you guys see the Dockers in Trouble articles that were flying around this past week? I did. Yeah, that was kind of sad where they're like having some money woes looking for the next round of funding. Yeah.
Starting point is 00:08:11 It'll work out. Yeah, it kind of sucks. I mean, it sounds like they kind of went all in on Swarm and Swarm's kind of getting their tails kicked from Kubernetes. Yeah, it's bumming. There's such an integral part of so many people's work lives, but there's just not a monetization.
Starting point is 00:08:30 We're not paying anyone. I'm not paying anyone money. It's a bit of a problem. I did want to mention, if anyone has a nice article on how serverless actually works at a fundamental level, I'd love to see it because it's so full of marketing jargon. Some people try to sell me stuff. I can't
Starting point is 00:08:46 really find an article on how it does actually work. Hey, I have a tip for you. The episode I did at Ignite last year with John Calloway that we actually deep-dived and talked about how Azure functions work and how they scale out, how you get
Starting point is 00:09:02 charged and all that. There's a lot of deep information. So I have a resource for you. Yeah, we'll have a link in the show notes. Yeah. So anyways, all right. So now that everybody's sort of on the same page on what async serverless is, basically you write a function and you can call that function,
Starting point is 00:09:19 usually through a web API or something, right? There might even be hooks that happen. You might have something that like async serverless indicates that there's something that happens that triggers it, right? More or less typically. And I think in what we're going to be talking about today, it's usually in a web API type sense. You can also do schedule stuff.
Starting point is 00:09:36 So you're going to schedule every hour, like the QIT runs off a serverless process that runs. It costs me like five cents a month. It's ridiculous. Yeah, it's amazing. I mean, these things are dirt cheap. And the only problem is that typically there's some sort of limit on these serverless functions. Like if you have a super long running process, it might get killed in the middle of it, right? Like I want to say Azure Functions has a cap of like five minutes.
Starting point is 00:09:58 After five minutes, it just dies. So, you know, you do have to be aware of some of that stuff, but that also means that you're probably not writing your apps in a way that are going to scale very well anyways if they're taking that long. And you don't want to assume state. Right. Right. Totally. State that can't be passed in, I should say.
Starting point is 00:10:17 Or state that can't be shared through some sort of queuing system out there. Yeah. It's a very much disconnected world. So, yeah. So I guess the, the first note we have here is the three factor app is pitching microservices that meet two properties. One. And the first one is, uh, I dim potency. It's always a tough one for me to sell is basically dealing with, uh, you know, possibly the possibility of having duplicate messages. So we're saying that you're guaranteed to have at least one delivery of a message, but
Starting point is 00:10:47 we can't promise you that you won't get it more than once. But the idempotent part of it means if you do get it more than once, it shouldn't create a new one, right? You should be able to identify that it's the same thing, right? Right. Yeah. And there's actually a couple of techniques, so we'll just hit it real fast. Like one way to do that is basically to pass like the whole amount rather than the change.
Starting point is 00:11:09 So like one example here is like maybe if you're buying a product and you're changing the inventory, you would send the whole inventory number instead of just the amount that it's changing. So that way if you get that message again, you don't just decrement again and get the wrong number. That's not a great answer though. There's other techniques because the problem there is like if messages coming out of order or something that you could possibly get messed up there if you can't rely on the state. So I've seen some other techniques too, and there's, there's probably even others, but like one is just to, um, for example, kind of check the state. Like you, you have some sort of number
Starting point is 00:11:40 or identifier or something where you say like, Hey, have you seen this ID yet? Have you seen this message? And then if not go ahead and make that, uh, that update. And, uh, no, I guess that's the only thing I can really think of. Oh, you know what though, speaking along that, and this should have actually been a tip of the week, but why not put it here in the middle of the show? So, you know, we've talked about GUIDs or UUIDs in the past, right? A lot of people don't even realize that outside of calling a GUID.new or something like that, if you're in C sharp or something like that, you can also create idempotent GUIDs
Starting point is 00:12:14 based off a hash of some value. So if you need a unique value so that it's idempotent, you could take order number plus some other field in your order that you care about and use that as the input for the hash to create your UID. So every time that thing comes back across, you'll have the exact same unique identifier. So, you know, for those that aren't aware that you can actually create UUIDs that are consistent and item potent, that can be
Starting point is 00:12:46 super helpful for you, especially in distributed computing situations. Yep. So the first property there was item potency or at least one delivery, once delivery of message. And the other was out of order messaging, which we kind of referenced earlier at the start of the show, which is the idea that it's possible for messages to be out of order. And in practice, a lot of times this doesn't really happen that often, but you need to be able to prepare to, you need to be able to deal with it.
Starting point is 00:13:11 So I swear I'm doing this on purpose. You got the giggles over there. Hey, so I want to back up for a moment, just real quick. So, uh, cause I don't think you called out the episode number for the Azure function talk and Cosmo DB talk. Uh, so that was episode 92. And then I'll have a link to that in the show notes, but then your comment about the,
Starting point is 00:13:33 the hashing in item potency made me think like, Oh, well, cause I know that you could like, you know, get an MD five hash, for example, of something that I was like,
Starting point is 00:13:43 uh, of whatever the thing is. And like, so I was going to produce the same result, right? Like that's the, that's kind of the point. But it was like, Oh, I wonder if that's a item potent, but there's actually a stack overflow, uh, question that looks to be unanswered. Uh, and, and in the question, he's like, Hey, is there a hash function that is item potent and he says i know md5 and shaw 256 are not so i don't know if that's accurate since it wasn't answered but he seems to be pretty so i know you know or uh sure of that it's not so in java i know for certain there is a uuid function that you can actually take in a string when you create this, and you will get the same UUID every time.
Starting point is 00:14:29 And I don't remember what hashing algorithm it was using under the covers. I may not have ever even really looked it up. So it may be that there's not an implementation in C Sharp world. I don't know. I know that in SQL Server, you can actually do the same type thing. You can take a hash of a value and get a consistent, um, unique identifier there.
Starting point is 00:14:51 So it's definitely a multiple stacks, but I couldn't swear to it on all of them. Yeah. I want to mention too, there's a kind of a nifty algorithm called bloom filters, which I found about fairly recently when I was reading about, um, databases.
Starting point is 00:15:08 I don't know if you're familiar with it but it's basically it's a it's a efficient algorithm for figuring out if an item is part of a set and it gives you a probabilistic answer so it either tells you no it is definitely not part of this set or maybe it is and can give you a kind of a rating there and what that's good for is um there's certain data structures that are really good at finding results but they are not so good when there is no result because it means they have to kind of scan the full set to see if it's in there but using an algorithm like boom filters it's really good at saying okay really efficiently this is definitely not in this database so we have definitely not seen this message and so what we could do is like if it's a more expensive operation to check if it's definitely there, then you can kind of use this as a shortcut to say like, okay, if it doesn't pass this check, then don't bother continuing. But does anyone else like when you hear Joe's description of this want to think of this as the dumb and dumber filter?
Starting point is 00:15:57 Because like immediately as you're describing it, I'm saying, I'm thinking to myself, so you're telling me there's a chance. Oh, gosh. I always say a blue filter. I'm thinking to myself, so you're telling me there's a chance. Oh, gosh. I always say a bloom filter. For some reason, I always thought of Photoshop. I thought it was actually a filter for some sort of crazy, I don't know, bokeh or something for visual effects. I don't know.
Starting point is 00:16:20 Maybe there is some sort of correlation there. I don't know why they call it bloom filters. Maybe it's the person's name or maybe it's literally a Bloom filter. Burton Howard Bloom. Okay. I just thought it was kind of neat and so I read that some data structures will actually do that sort of thing where if you go to check for a record, they'll say, hey, let's check the Bloom filter first and then if
Starting point is 00:16:38 the Bloom filter says that we definitely don't have it, then let's skip the search and just insert it. So anyway, I just thought it so anyway i just thought it was kind of nifty so digression so now we've got a couple uh we did this last time too um basically we kind of compare like your kind of traditional like relational database backed kind of request response app versus the three-factor app. So in the traditional app, we've got synchronous procedural code, and in the three-factor app, we've got loosely coupled event handlers.
Starting point is 00:17:11 And this episode is really focusing on that event handler part where we've got these async serverless operations basically kind of running and watching for things to respond to. And when we say synchronous procedural code, that's the standard way that probably most people have interacted with like web servers or anything. You make a call, that call is going to hit an endpoint and that endpoint is going to run a bunch of things in order, right? Like here's my order information, here's my order detail information, here's my customer information, insert it all into the database and some you know
Starting point is 00:17:45 specified order and then return something back to the user right oh yeah so yeah like a traditional like you hit the hit submit on the form it goes to the server and when that's done it returns control back to the ui and here we're saying like three factor like go ahead and throw it into the queue and then return immediately to the user and And then whenever somebody else on async serverless process updates that data again, then we'll reflect it because we're both kind of dealing with the same stream of events, but we're decoupled. So we're not waiting on anything on either side. Right. Yeah. I like the way he put it better because I was going to say like with the way you were describing that as the procedural, like I get where you're coming from.
Starting point is 00:18:22 But that kind of implies like, oh, you can't do anything async await on the server side either, which isn't the case. Like you could do things in parallel on the server side and still be in this procedural definition. The difference is, is that you're not like just looking for, you know, watching some queue and like grabbing the next event off of it and then responding back with an event. Yeah, that's fair. You're basically saying the workload on the server could be parallelized, but the UI is still waiting for a response. Right.
Starting point is 00:18:54 The UI is still blocked, right? Right. So, yeah. And this ties into the previous, not even the previous one, the one before it with the GraphQL was first. The GraphQL was first where it's actually subscribed as listeners, right? And that's why this works. Because when all those things come back, then your UI can say, hey, I got everything.
Starting point is 00:19:18 We're good. You know, do something. I don't know. Alert the user, whatever. Yeah. Now, I will say, I'll let one of you guys read this next one, but I take issue with it. Okay.
Starting point is 00:19:29 So, with that said, go ahead and say what the traditional. You say it, Alan. I don't want to say it. Go ahead and say what the traditional implementation is. I didn't make up this page, so I'll be happy to read it. So in the traditional, you typically deploy on a VM or containers versus in this new world of the three-factor app, you would do serverless platforms. Okay, what is wrong with my life then if the traditional is a container? So you feel old is what you're saying. Right? Well, I mean, because, okay, I get that there are, like, if you're working for, like, a Google or a Facebook, right, then you've probably been in a container world for, like, the last, you know, 20 years. Right.
Starting point is 00:20:16 I know that's a joke, but, you know, whatever. Right. You get the point. And maybe on, like, you know, small greenfield projects, maybe you can do those in containers. But if you're in a small, medium business that's not working on a greenfield application and you're on containers, I get that there are some, but that's, quote, traditional? Yeah. I don't think that's, quote, traditional. I take umbrage with that, too, because honestly, it's only been recently that people are like, oh, yeah, containers are production ready. Right. Thank you.
Starting point is 00:20:50 Outside of outside of Google, who's like, OK, you guys are silly because we've been doing this for a long time. Right. But but a lot of companies are like, well, I don't know if I can trust these things. Right. Like, you know, the old the old way of doing things was we're going to buy a fifty thousand dollar server let me make sure you understand how this works we're gonna buy a fifty thousand dollar server we're gonna install four vms on it only jim is allowed to touch it only jim's allowed to touch it we're gonna install four vms on it and you're allowed to have that one vm over here right like this is your vm so that's actually how things have worked traditionally in most environments that that is more traditional. And unfortunately, it sounds like from my experience, that's the more traditional way.
Starting point is 00:21:30 Yeah, and now you're old school because you're in Containerland. Well, I'm in Containerland now, yeah, so I guess. But according to this, I'm still in traditional and should move on with my life, apparently. But the funny part here is the whole idea with containers is when you think about a container, it's very much like a VM in terms of how you interact with it. And so I think that's why they're saying this is traditional because in the serverless, all you have is an API. You don't know what the OS is.
Starting point is 00:22:04 You don't know what the API type type is you just know that you have this url that you can hit and that's it right so i think that's kind of what they're trying to get at here so you know i get it maybe in a linux world unix world then you can definitely draw those parallels between containers and vms a lot easier. But in a Windows world, what are you going to do for everything that's graphical in your container, right? That's where that falls apart. Yep. Did you have any thoughts on this one, Joe? No, I just agree with you.
Starting point is 00:22:37 There's such a big gulf in people's work experiences. It's crazy. You'll talk to some people and some people are still doing jQuery and some other people are talking about state machines and Vue and it's such radical different experiences in the same server. Some people are like, Kubernetes, man, I just started working with Docker
Starting point is 00:22:54 and other people are like, you haven't been using Kubernetes with Terraform for the last three years? Yeah. Oh my gosh. You know, that actually meant I hate to tangent this, but we don't have a ton in this episode anyways, so it's only going to be like four hours long instead of the typical six. Oh, that actually meant, I hate to tangent this, but we don't have a ton in this episode anyways. So it's only going to be like four hours long instead of the typical six. Oh, that makes sense.
Starting point is 00:23:09 Right. But we, I think, I don't remember who it was this week because my brain's not working very well. But we had a conversation about how we have what we think everybody knows. Right? Like, you're so used to working in certain things, you just assume that everybody, oh, this is what everybody does, right? Like you're so used to working in certain things. You just assume that everybody, Oh, this is what everybody does. Right. And then you find out, wait, you don't, you don't do that. And wait, I don't do that. And it's easy to assume things. It's easy to assume because, because you work in Docker every day that this is really easy, right? Like, Oh dude,
Starting point is 00:23:40 don't even worry about that. Docker run it. It's good. Or if you're in visual studio, Hey, this is easier. If you're a Java person, like, Oh man, it's nothing, man. Just a great, we'll build that thing. You could, whereas somebody else comes along and it's a three day trip down that road. So it is sometimes hard to take yourself out of the mindset of this is what I do every day and think in terms of somebody else, it's either new to it or an old hat to it or whatever. Right. And I don't even know why that tangent came up other than the fact that, um,
Starting point is 00:24:11 that you were saying, yeah, I do containers all the time. So, yeah, I think everyone's got a degree of that. Even like some of the people that are working like the super fancy, you know,
Starting point is 00:24:20 high velocity stuff. And you, you talk to them and like some other things are like totally backwards, like either the deployment process or the way they do things or, uh, you know, high velocity stuff. And you, you talk to them and like some other things are like totally backwards, like either the deployment process or the way they do things or, uh, you know, they're working on a green field project that they just started three weeks ago, you know, and it gets canceled three weeks from now. So, you know, it's, it's hard to judge, but it's definitely scary to me whenever I talk to someone who's just doing something that I thought it was a way out there.
Starting point is 00:24:41 Like, Oh, this is examples. Um, like machine learning, like I've kind of like typically kind typically kind of you know turn my nose up a little bit at machine learning like nobody's doing that nobody's doing that it doesn't even work like alexa you know like uh siri all that stuff is terrible and then uh you like everyone's policy like job postings wherever for like microsoft or i don't know google and like it's all like you must have five years of machine learning experience and i'm like ah so it's good to kind of see things from outside your uh echo chamber yeah a different lens but that's why i kind of take issue with this though because like i don't know that you can call anything traditional if it's still not like widely adopted like if it's not the norm right like html okay i think we can call that traditional
Starting point is 00:25:26 right can you agree on that yeah right but then you know containers like i still feel like the adoption is still a little too new i i think that the ecosystem around containers is just getting to the point to where people feel comfortable with it, right? Like, the whole reason Kubernetes is so popular now is because they've built an entire orchestration engine around it that's, I don't want to say it's easy to monitor, but they built all the tools in, right? That hasn't been around forever. So running this stuff in production was a bit of a nightmare for some folks. So, yeah. All right.
Starting point is 00:26:03 So, okay. Well, semantics aside. Yes. You deploy on VMs or you're serverless. That's it. So, the next one is the traditional way is you manage the platform versus the platform is managed by somebody else, right? So, this whole notion of you're updating your VMs, you're having to keep security patches
Starting point is 00:26:26 and all that crap in place versus you just run an API that you don't know anything about. Yeah, that can be frustrating to you. Like I need a node version, this dot, this dot that, and it's not available. Or it's EOLing.
Starting point is 00:26:41 Right, yeah. You don't have to think about that stuff. They manage all the infrastructure for you. Yep. And related to that, so if you're managing your own infrastructure, then you have to have some operational expertise about how to kind of get that out there and deploy it. There's just extra things you have to know versus not having to know anything aside from checking it into Git and having it kind of automatically syncing up the serverless platform. Whoa, whoa, whoa. No, you don't.
Starting point is 00:27:08 There are plenty of websites out there where you don't have to know have you have you ever like seen any of the shodan reports of open oh yeah uh ports and vulnerabilities that are available like you don't have to have operational expertise oh god another tangent did you guys see the city of baltimore the they just got um uh ransomware oh there's plenty of those this one they're saying is going to cost in upwards of 18 million dollars and they're not paying the ransom so this is basically the costs are going to incur for the fallout from that so this this is that operational expertise, right? Like somebody didn't back things up. Somebody didn't lock down ports.
Starting point is 00:27:50 Somebody didn't do all that stuff. And that's the kind of stuff you don't have to deal with if somebody else is managing the service, hopefully, right? Yeah. Yeah. It was going to cost them. It sounds like it happened in May. It was going to cost them $76,000 to pay the ransom.
Starting point is 00:28:02 They're like, no way. And now they're looking at the bill for what it's going to cost to kind of restore that and fix that problem they're like yeah but but it's a no-win situation right like i mean you pay it then you're going to get another one right so well unless you like lock make that you know necessary take the necessary steps to go serverless really hard i mean oh man but yeah so that that is a big deal right like not having to be the person who owns all that stuff of i gotta make sure all the servers are patched you gotta make sure everything all the ports are closed down like this should help out with that just to you know speak a little bit closer to home though you realize that both
Starting point is 00:28:40 Florida and Georgia recently had some. Atlanta's been hit twice, right? Yeah. Yeah. I want to say it was like the Atlanta court systems, maybe. It's so crazy, man. And then there were some small counties in Florida that got hit. Sorry, I didn't mean to interrupt you. No, no, no. We're living in a hard world right now. Like, it's very difficult to secure everything properly. Definitely adversarial.
Starting point is 00:29:03 Yeah, totally. It's not fair. I don't think I expected that. I don't think I did either. Darn it. It's just not fair. I tried so hard. It's true.
Starting point is 00:29:24 The last comparison is basically autoscaling when possible versus autoscaling by default. And this, I think, is another one where a lot of people really aren't in this ecosystem either. And I think what it is is like this is Heroku, right? Or Hasura? Yeah.
Starting point is 00:29:39 So they're like a modern kind of framework that's dealing with modern kind of problems. And I think they just have a different viewpoint where they're looking at people pulling new greenfield apps on their environments. And so they're probably seeing things where mostly people are cloud. Mostly they're doing VMs or containers. And so it's not necessarily representative of the whole world. You know, I think that's a fair assumption. What you just said, most people are doing cloud.
Starting point is 00:30:03 And that's probably why the whole container thing even comes up, is because a lot of people don't even know that they're using containers behind the scenes, right? Like if you drag up a bar in your cloud environment and say, hey, auto-scale this thing out, it's creating containers for you, and you don't even know it. You don't care, right? If you're running on VMs, chances are you're not auto-scaling. Well, that might depend on how that service is running. If you're using an RDS, for example, an Amazon RDS, then maybe behind the scenes, that's implemented on containers.
Starting point is 00:30:33 But your EC2, well, if it's a Windows EC2 instance, it's likely not, right? Right. I mean, it's hard to say, but at least in traditional senses, if it's a standard VM, you're not auto scaling. But it is curious, sorry to interrupt, but it is curious to Joey's point about Hasura because their product is real time GraphQL and Postgres. So that doesn't really sound like something serverless when we talk about like, oh, you can't have state, right? Because a database is all about state. But they wrote their own event thing on top of Postgres, right? If I remember right, that was like one of their claims to fame on their site.
Starting point is 00:31:15 Instant real-time GraphQL on Postgres? Yeah, I think they have like an event triggering system for Postgres. So when data changes, it would fire an event that could then go out to your application. I believe that was correct. So, but here's the thing about the auto scale by default. And this, you know, going back to the episode that Outlaw mentioned earlier from Ignite last year.
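Just to make that "fire an event when data changes" idea concrete, here's a tiny in-memory sketch. To be clear, this is not Hasura's actual implementation (they hook into Postgres itself); the `Table` class and payload shape below are made up purely for illustration:

```python
# Toy sketch of "event triggers on data change" -- NOT Hasura's implementation.
# Hasura hooks into Postgres itself; this just shows the shape of the idea:
# subscribers register interest, and every data change fires a payload to them.

class Table:
    def __init__(self, name):
        self.name = name
        self.rows = []
        self._listeners = []  # callbacks fired after every insert

    def on_insert(self, callback):
        # roughly analogous to creating an event trigger on the table
        self._listeners.append(callback)

    def insert(self, row):
        self.rows.append(row)
        payload = {"table": self.name, "op": "INSERT", "new": row}
        for listener in self._listeners:
            listener(payload)  # e.g. POST to a webhook / serverless function

events = []
orders = Table("orders")
orders.on_insert(events.append)  # a stand-in for "call my function on insert"
orders.insert({"id": 1, "total": 9.99})
print(events)  # one INSERT event describing the new row
```

In a real setup the listener wouldn't be a Python list; it would be an HTTP call out to whatever consumer cares about the change.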
Starting point is 00:31:38 Yeah. This is one of the cool things about things like Azure Functions or AWS Lambdas and all that kind of stuff is it will auto scale up for you. And you don't even have to think about it. It's not even a slider you have to set, right? Like, if you bring in enough load that requires more memory or more compute than what's available on that one system and for what your workload is, it will auto do it for you. Like, I forget, Azure was something like it'll go up to 300 instances of your function running at a time. And so you don't even have to think about it.
Starting point is 00:32:11 It's not even that you've got to go in and make changes up in your cloud console. You straight up don't even have... you can put it out of your mind. Right. Because to compare, to contrast it, then in a typical, like, cloud instance world, you might say, okay, fine, I'm fine with you scaling up to this many instances because this is what my budget will allow for. Right. So maybe that's three, maybe that's two, maybe that's 10, maybe that's 100. But depending on what your use is and what your budget can afford, you're going to have some kind of predefined limit there. Yep. When I first started hearing about microservices, I was like,
Starting point is 00:32:47 I don't know, it sounds like a big pain in the butt. You have to manage these dependencies and manage your deployments and versioning. It just sounds like a big headache. And then I heard about serverless, and I was like, okay, now I'm interested in microservices. This seems like a nice fit. Now that I don't have to worry about infrastructure, and all I've got to worry about is making sure I have the pieces out there to run my application.
Starting point is 00:33:06 Yeah, that sounds amazing. Yeah, so I'm on board. Yeah, so every function you ever write, every method of every class now, just imagine that as being serverless. You could actually have multiple functions, and they can be bigger. I've definitely abused the serverless thing so far. I'm happy with it. This episode is sponsored by Datadog, a monitoring platform for cloud-scale
Starting point is 00:33:32 infrastructure and applications. Datadog provides dashboarding, alerting, application performance monitoring, and log management in one tightly integrated platform so you can get end-to-end visibility quickly. Visualize key metrics, set alerts to identify anomalies, and collaborate with your team to troubleshoot and fix issues fast. Try it yourself today by starting a free 14-day trial and also receive a free Datadog t-shirt
Starting point is 00:34:01 when you create your first dashboard. I like the t-shirt, by the way. Head to datadog.com slash codingblocks to see how Datadog can provide real-time visibility into your application. Again, that's datadog.com slash codingblocks to sign up today. All right, so it's that time to ask you to please consider leaving us a review at codingblocks.net slash review, where we've tried to make it really easy for you. We know that it's not fun, but we really appreciate it and it's a big deal for us and it really helps us out. So if you could go to codingblocks.net slash review, then that would help us out a lot.
Starting point is 00:34:33 Thank you so much. And thank you if you already have. We really appreciate it. All right. And with that, it's time for my favorite part of the show. Survey says. All right. You see Joe back there like, what?
Starting point is 00:34:52 Throwing me off guard. All right. So a couple episodes back, we asked, what's your favorite type of swag? And your choices were stickers, because they make my laptop go faster. That's a proven fact. Shirts. I wear them pretty often.
Starting point is 00:35:17 Or water bottles. Gotta hydrate. Coffee cups. Coating requires coffee. Hats, in case of bad hair day I wish super cute socks or bags
Starting point is 00:35:34 because they cost the most, or pens and notebooks, in case I need to write something down super quick. All right, you know, I made a note for myself to remember which one of you went first last time, so that I could make sure to alternate it, and I don't remember who. I think Joe went first last time. All right.
Starting point is 00:35:53 So that's why I brought it up. So that was done on purpose. Alan, I intended for you to go first. Man, this one I truly don't know what people are going to pick. I'm more interested in finding out what the answer is here because I don't think I'm going to get close. I'm
Starting point is 00:36:13 going to go with coffee cups because, you know, coders got to get some caffeine. So let's go 25%. Coffee cups, 25%. All right. And I'm definitely going with stickers here, because I can put it in my pocket when people give it to me. All right, stickers. But I will say... what's the percentage, though? Oh, yeah, hold on. Percentage? 80. Oh, come on, man. Wow, that's confidence right there.
Starting point is 00:36:45 I will say, though, people were super excited about the hats. At Atlanta Code Camp. They certainly were. And Orlando Code Camp. Yep. Which is why, when I tell you that Joe picked stickers at 80%...
Starting point is 00:36:59 He's pretty close, I'm sure. ...and Alan picked coffee cups at 25%, you're going to be shocked when you find out the real answer, because you're both wrong. Really? Whoa. Okay. So.
Starting point is 00:37:14 Shirts. Shirts was the number one answer. Really? Okay. 35% of the vote. Okay. Now, coffee cups was a strong contender for second place. Okay. At 19%. Okay. Well, as strong as it could be. And, uh, stickers was third place, at 13%. Okay. All right, well. Hats was sadly the last one.
Starting point is 00:37:47 It was last place. Really? Yeah. I don't know if everybody has seen these hats. Right. I've got some Twitter links I could share with you. But yeah, surprisingly, hats was the least popular option. It went shirts, coffee cups, coffee cups. I mean
Starting point is 00:38:06 coffee cups. I never would have expected that one to be in the top three, let alone second place. Shirts, coffee cups, stickers, bags, socks. I thought socks would have done better. I would have picked socks over bags or coffee cups. I wouldn't put socks
Starting point is 00:38:21 dead last, besides pens. Have you seen the goofy socks people like to wear? No, because they're covered up. No, no, no. That's the beauty of funny socks, goofy socks, is that they're hidden until they aren't. And then you're like,
Starting point is 00:38:37 oh my God, you like Coding Blocks? And pens and notebooks, I mean, come on. Really? Like, everyone gives those away. You get too many of them. I can't believe hats lost to pens and notebooks, and then water bottles. You know, I can see water bottles being up there, not second to the bottom.
Starting point is 00:38:59 Yeah. Water bottles. Yeah. All right. Well, color me surprised. Right? Yeah. That's really interesting.
Starting point is 00:39:07 And to be honest, we're probably going to use these results to drive what we're going to buy. Well, I mean, I get shirts being popular. Right. That's definitely a popular thing given away at conferences and everything. That one's harder, though, because sizing. I get that one. Right? But, yeah, I was shocked to see some of those other ones.
Starting point is 00:39:24 So, yeah. Well, yeah, coffee cups for the win. Good coffee cups. Yeah. One size fits all. Oh, we have some of those. Do we? In the Coding Blocks store.
Starting point is 00:39:33 If you go to codingblocks.net slash – Store. Swag. Slash swag. Swag. Man, that reminds me. We always forget this. And this is probably why we just have some trickles in.
Starting point is 00:39:46 If you want some swag, you know, send us your information. Go to codingblocks.net slash swag and send us your information. You can private message us, whatever, and we'll get some to you. I mean, we were talking about Nico's review from earlier, and he actually mentioned mugs. So, yeah, I guess coffee cups legitimately are a popular thing. They're a thing I wouldn't have guessed was as popular as they are. Yep. So, what we got on tap... You know, now that I say that, though, I remember I have that amazing one, um... M Dev Show? Yeah. Yeah. It's like almost a soup bowl.
Starting point is 00:40:28 It's so gigantic, man. It's like you expect John Oliver to make a joke about how big it is. But it's awesome because it's like, you know, it holds everything you need. All right. Well, so for today's survey, we will ask, which relational database is your go-to? Now, should we do these by, like, mascots? I don't think all of them have mascots. This is a problem. Do they not? Does Oracle have a mascot?
Starting point is 00:41:02 No, I don't think so. Is your go-to database the elephant in the room? Is it the dolphin? What would – oh, shoot. I lost my place now. Yeah. What would that be? Yeah, what's SQL Server's mascot?
Starting point is 00:41:20 I don't think they have. They have like – Clippy. Clippy. Clippy. They have some sort of bendy-looking fabric thing. Yeah, they do. Yeah, they do.
Starting point is 00:41:29 I don't know what it would be. At any rate, maybe I should get serious about these because Oracle is just a boring logo. Right. Yeah. All right. So, seriously, is it Postgres, which is the elephant in the room, obviously? Is it MySQL, which is Flipper the dolphin? SQL Server, Clippy, Oracle, the big red box, and No, as in NoSQL.
Starting point is 00:41:58 And then we have graph here, so graph database. Did you have a particular graph database in mind? No, same thing as NoSQL. There's so many of them. Just curious how many people have decided to dive off the deep end. All right, so no particular NoSQL implementation, no particular graph database implementation. Just, if you're on one of those, then let's just consider it: you're already on the fringe, right? So we'll lump all those in so they can, like, have a... um, they'll probably, like, win, and we'll be like, oh my gosh, we're so traditional. I'll tell you, Dgraph has a very cute logo. Does it? Look. It's a badger,
Starting point is 00:42:36 maybe? Or it's a... Yeah, it's a badger. I almost called it something unseemly. Wait, it looks like a skunk. Oh, you said it. That's not a badger, is it? It's a dis... Yeah, it's a badger. Whatever.
Starting point is 00:42:56 It's written entirely in Go. Apache 2.0 license. Yeah, I'm definitely interested in... I keep looking at Dgraph and Neo4j. Yeah. Neo4j, I think, is like the big dog, but, uh, Dgraph just, like, has the cute badger.
Starting point is 00:43:08 So man, we should totally do an episode on graph databases because people who've never seen them get blown away by what they can do when they actually see them in action. Yeah. First I need to learn how to use it. Yeah. Yeah.
Starting point is 00:43:20 That's, that's not a requisite. All right. All right. All right. Or prerequisite. This episode is brought to you by the O'Reilly Software Architecture Conference. The O'Reilly Software Architecture Conference is coming to Berlin November 4th through November 7th. The O'Reilly Software Architecture Conference is the only conference that focuses exclusively on software architecture and the evolution of that role.
Starting point is 00:43:47 This conference is in the weeds with tech and covers complex topics from microservices and serverless to domain-driven design and application architecture. The Software Architecture Conference features different styles of learning, from 50-minute sessions all the way to two-day training courses. The Software Architecture Conference also focuses on soft skills. O'Reilly knows that architects are having to communicate complex technical topics and their merit compassionately to both upper management and technical teams. The conference is here to help you navigate different communication styles, like in their two-day training, the Architectural Elevator. O'Reilly knows how siloed a software architect can feel. The Software Architecture Conference offers countless networking opportunities so that you can meet people who are working on the same tech as you and can offer personal experience and learnings that you can apply to your own work.
Starting point is 00:44:40 Many of the attendees are either aspiring software architects or doing the work of a software architect without the title. The conference offers a special networking experience called Architectural Katas, where you get to practice being software architects. Attendees break up into small groups and work together on a project that needs development. Software architecture will be co-located with the Velocity Conference this year, which presents an excellent opportunity to increase your cloud-native systems expertise. Get access to all of Velocity's keynotes and sessions on Wednesday and Thursday in addition to your Software Architecture Pass access for just €445. Listeners to Coding Blocks can get 20% off most of the passes to the software architecture
Starting point is 00:45:28 Conference when you use the code BLOCKS during registration. That's B-L-O-C-K-S. All right, so on with the benefits of serverless architecture. We kind of handled this one already. NoOps. NoSQL to NoOps. No runtime to manage. No RAM.
Starting point is 00:45:43 No disk. No versions of Java. No operating system versions or CVEs. No RAM, no disk, no CPU. When you create this stuff in, like, AWS or Azure, there aren't a lot of decisions to make, right? So I thought that was really cool. Let's see, it's free scale. Basically, it scales based on utilization, which is just hyper-efficient. You know, you used to buy servers, and you'd have a bunch of compute just wasting time. Then we got smaller, with, like, EC2 instances or whatever, and then you still have, you know, some kind of overhead there,
Starting point is 00:46:15 and we're just shrinking it down further and further. Containers made it better, so it's getting close to the smallest unit of compute, which is very exciting. And it scales up really well, so it's just a really great scaling story. And we all know how Alan likes that. Pretty soon it won't even... like, that'll be too much, and we'll just be, like, threads. Right. Yeah.
Starting point is 00:46:34 Right. They also mentioned the cost factor. I'm going to go ahead and look up how many requests you get in the free tier of AWS Lambda. Ah, see, so there were equations on Azure for this. And it's crazy how little you pay for even decent amounts of load. And I want to say they have, like, 10,000 free on Azure. Yeah, you should look at that. I'll tell you what, you look that up, because I just found AWS.
Starting point is 00:47:01 AWS, in their free tier, they will give you 1 million free requests per month. Okay. Which is... and also up to 3.2 million seconds of compute time per month. That's a whole lot of requests. If you're bootstrapping a small business, right, if you can run a static site, first of all, then you win. Second, if you can get your stuff running on serverless, then that is the cheapest way and the most resilient way. So it's not only the cheapest, but it scales up the most efficiently. Yep.
Starting point is 00:47:35 It's actually the same in Azure. You get 1 million free executions per month, and then it's 20 cents per the next million executions, right? Which is, I mean, just ridiculous. And then the way that they calculate this thing, there's actually an odd equation. I'm not going to go through the whole thing, because your brain will just shut down in the middle of it. Um, find the circumference of your head, divide by pi, carry the two. That's not far off. But it's $0.000016 per gigabyte-second of utilization. And that's the thing.
Starting point is 00:48:18 Like to define that, it's sort of weird. But you get 400,000 of those things free a month as well. So you get a lot of free executions and calls to these things before you ever pay a dime. But see, that's the thing that's so funny, though. It's like even in trying to describe it, you have to use the word like things. Like you get 400 of those things. You're like, what is it? I don't know. But you get 400,000 of them.
Starting point is 00:48:41 So, you know, enjoy your day. Now, I will say this. So they do have a nice example on their site that will tell you kind of what the cost would be. So I'm not going to go through the equation, but I'll at least give you the numbers for what ends up being the total cost. And this will give you an idea. So an example resource consumption billing calculation here. You have 3 million executions a month, and each of these runs for a second. All right? So that's 3 million seconds.
Starting point is 00:49:12 All right? The resource consumption, I think, if I remember, on Azure, it was something like everything assumes a baseline of 512 megs of RAM, and then it can go up from there, and you can get charged for that depending on how much usage is there. But at any rate, so you're going to use 512 megabytes of RAM times 3 million seconds, and that's 1.5 million gigabyte-seconds, is how they come up with this equation. Then you take out the monthly free grant, and you're left with 1.1 million gigabyte-seconds. And then you multiply it by this factor, the $0.000016, and then you come out with $17.60 a month. So for your 3 million executions and using CPU during those 3 million seconds, you're
Starting point is 00:50:02 only getting charged $17 a month and some change. That's pretty amazing compared to when you think about the VM world. Hey, Joe, I remember you were running your ColorMind site on the cheapest-tier Amazon EC2 instance back in the day, and at the dirt-low-end cost, it was still 20 bucks a month. Yep. And it totally could have been running on zero, zero bucks per month.
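If you want to sanity-check that math, here's the billing walk-through above as code. The numbers (512 MB baseline, 400,000 free gigabyte-seconds, $0.000016 per gigabyte-second, one million free executions and then 20 cents per million) are just the example rates discussed in the episode, so treat them as assumptions and check the current pricing pages before relying on them:

```python
# Back-of-the-napkin Azure-Functions-style bill, using the example rates from
# the discussion above (assumptions -- verify against the real pricing page).

GB_SECOND_RATE = 0.000016    # dollars per GB-second of resource consumption
FREE_GB_SECONDS = 400_000    # monthly free resource-consumption grant
FREE_EXECUTIONS = 1_000_000  # monthly free executions
PER_MILLION_EXECS = 0.20     # dollars per million executions after the grant

def monthly_cost(executions, seconds_each, memory_gb):
    """Return (compute_dollars, request_dollars) for one month."""
    gb_seconds = executions * seconds_each * memory_gb
    compute = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_RATE
    requests = max(executions - FREE_EXECUTIONS, 0) / 1_000_000 * PER_MILLION_EXECS
    return round(compute, 2), round(requests, 2)

# 3 million one-second executions at 512 MB (0.5 GB):
# 1.5M GB-s minus the 400k free grant = 1.1M GB-s -> $17.60 of compute,
# plus $0.40 for the 2 million executions past the free million
print(monthly_cost(3_000_000, 1, 0.5))  # (17.6, 0.4)
```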
Starting point is 00:50:28 Right. And so that just kind of gives you this whole idea that, man, you get a lot of functions, a lot of CPU processing, for less than $20 a month. It's really cool. I just did the math on it here. So let me just double-check here. So at 400,000 gigabyte-seconds, that is roughly half a Nickelback, according to Google... Google Translate. God, he got us again, man. How many 50 Cents was that? Yeah, it's two-thirds of a 50 Cent. So good to know, in case you're cooking or whatever. Got us again. Translate anything in anything. But yeah, so great for cost. Um, a couple of service
Starting point is 00:51:17 providers: we mentioned Lambda and Azure Functions. Google has Functions, which I didn't even know. Zeit, I never heard of that one. OpenFaaS is the one I usually hear about, because one of the questions I had when I first heard about it was, like, how do I run this locally? And so I heard OpenFaaS is the way to go. But really, this stuff runs outside of functions too. At least with Azure, the code that I had, I could just run it. It doesn't have to be run via some sort of crazy whatever
Starting point is 00:51:40 from just working on it locally. But I guess ultimately you want to kind of have it mimic that production environment. So OpenFaaS is an option for that. You're going to skip Kubeless and Knative? What was that? And how do you pronounce it? Kubeless and Knative.
Starting point is 00:51:57 So if you're in the Kubernetes world and you want to do these, like, serverless-type things, those are the features. Those are your options. So cool. So, all right. So, you know, we kind of wanted to sum things up a little bit. Like, we talked about the various three parts, like the GraphQL, the reliable event sourcing,
Starting point is 00:52:13 and now we just talked about kind of async and serverless backends. And so we want to kind of like high level talk about when you would use this three-factor app just to kind of button things up. And we found some really nice guidance from Microsoft basically talking about streaming architectures, and it fit really nice with this article.
Starting point is 00:52:31 And so with three-factor app. So they recommend to use architectures like this when you've got multiple subsystems that subscribe to the same events. Which I thought made well enough sense. Like if you've got, say, a sales system and an order system and all this stuff is kind of watching the same things and all need to kind of run off, this is another way of kind of saying
Starting point is 00:52:52 microservices that can operate off the same input. When you really care about low latency events. So we kind of mentioned the Uber example before, where you really want to see that car driving towards you, or if it's driving away or just parking and trying to charge you anyway. But you know what?
Starting point is 00:53:10 This is something I realized looking at this is I think that that particular article we were looking at was talking about microservices specifically. Yep. And so there is a line I want to draw here that's super important to understand because this is one of the things that came out of that episode 92 or whatever is with serverless, you don't get that super low latency. So if you are managing your own microservice architecture, you can because the whole point is you can scale to meet the demand, right? And you might have more control over the physical proximity. Yes, yes. Whereas when you're doing something like Azure Functions specifically, because I'm more familiar with those,
Starting point is 00:53:56 you could actually have a couple-second spin-up time. Because when you make that call, if that thing hasn't been run recently, it's going to reallocate it, set up the CPU availability. Assuming it's spun down completely. Correct. Yeah. So you can actually have latency just because it's having to get this thing set up to run for that first request, and then subsequent requests might be faster. Yeah. Yeah. So if you operated in, like, burst modes, where, like, yeah, you might get, uh, you know, every hour on the hour, you get 10 minutes of steady burst traffic, fine. But that first one at the top of the hour is always going to take the hit of being a little
Starting point is 00:54:40 bit, having a little bit more latency than the rest. Yeah. So it's interesting. I wanted to point that out because this low latency is specific to microservices, not necessarily to serverless architectures. And that's another thing that is also sort of just a feature of these various different clouds is you can also run serverless on existing VMs that you have. So for instance, if you're in AWS and you have an EC2 instance, I would assume that you can tell the lambdas to run on your existing EC2 to leverage the additional compute that's not being used at any given time. Because you can do that in Azure. You can basically say, I want you to spin up on your own, which can increase latency. Or you can say, hey, always be loaded in one of my Azure VMs
Starting point is 00:55:26 so that this thing's always ready to go, right? Just leverage compute I'm already paying for, more or less. So, again, just be super clear here. This does introduce latency, especially depending on if there's gaps in between processing. Yeah, great point. Great point. So you're saying there's a in between processing. Yeah, that was a great point. Great point. So you're saying there's a chance.
Starting point is 00:55:48 There's a chance. So I also mentioned just complex event processing. I put a couple examples here, like machine learning or aggregations. Or windowing, which is kind of an interesting concept where you look at events that are close together and kind of group them, because that means something when things happen close together. But what do you mean by machine learning, though?
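As a toy sketch of that windowing idea, here's a function that groups events whose timestamps land close together; the five-second gap threshold is an arbitrary number picked for illustration:

```python
# Group a sorted stream of event timestamps (in seconds) into "windows":
# events separated by more than max_gap start a new window.

def window_events(timestamps, max_gap=5):
    windows, current = [], []
    for t in timestamps:
        if current and t - current[-1] > max_gap:
            windows.append(current)   # gap too big: close the current window
            current = []
        current.append(t)
    if current:
        windows.append(current)
    return windows

# a burst of three events, a quiet stretch, then a burst of two
print(window_events([0, 2, 4, 30, 33]))  # [[0, 2, 4], [30, 33]]
```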
Starting point is 00:56:09 So real-time machine learning, I keep hearing about it. But basically, the ability to adapt to events sooner rather than waiting on a 24-hour batch process. So there might still be some batching involved. I really don't know a whole lot about it, other than to say that I keep hearing more and more about people trying to adapt to things sooner. So, like, if you're, say, an Amazon or something, you've got reviews that you're watching out for, and people are constantly bombarding you with fake reviews. In order to kind of watch that, you want to be able to adapt to changes in their behavior as fast as possible. So if they start some new tactic or something, you want to catch it in five minutes, not five hours. I guess what I'm asking then, are we talking about the training of a model or are we talking about the use of a model?
Starting point is 00:56:47 No, the use. The use of the model. I was talking about the training, but maybe only because I don't know what I'm talking about. Well, I mean, you might be talking about the training of the model if we're talking about streaming or online algorithms. Maybe that's what we're talking about. Well, the problem is for that online model, you're going to have to have state, right? That's, that's where things come in. That's where like, I would think that using the model for events that come in or data that comes in, that makes more sense using an existing model, whereas training the model, you've got, you got to have
Starting point is 00:57:19 state there somehow, which there are ways to share state with these things, but, but that seems like it would be more complicated, at least in my mind. Yeah. I mean, the, the online models are like way above my pay grade. I like, I mean,
Starting point is 00:57:32 yeah, there, there's still some learning I have to do there on how those work. So I can't, I can't tell you if that, but it would seem like it definitely, I could picture the use of the, uh, the model easily in a serverless environment.
Starting point is 00:57:49 So training of it gets complicated. So, again, I think the problem is, these things that we're looking at right now that we pulled off this page are about microservices. Right. And so if you had a microservice set up for doing a machine learning model that it's trying to train, that makes sense. Because that particular service can have the state. But if we're talking about serverless functions, that's sort of a different ballgame, right? Which page are you guys referring to, though? Do you have a link so I can be sure to save it or share it in the links?
Starting point is 00:58:22 Did we put it up there, Drew? Yeah, it's in the, um... it's in the Resources We Like section. Oh, is this the Microsoft event-driven link? Yep, Architecture Style. Okay. Yeah, I'll tell you, the stuff I was reading about was not related to this article. I was just reading about, like, real-time machine learning, and it was specifically about kind of catching bad behavior sooner, because of the way that the bad people were adapting. But I've also heard of situations, like, locally. Um, you know, like, the theme parks have those bands you can wear on your wrist and, like, pay for stuff and get on rides faster. Well, they do a lot of real-time
Starting point is 00:58:52 analytics on that stuff too. And I've heard, like, kind of long term, what they're wanting to do is basically see, like, oh, they've been riding rides for three hours straight, maybe we should send them a coupon for a drink or something, in order to kind of get them to spend money sooner, which ends up being more money over time. That's pretty cool. I guess... it's either way. Just another way to get our money. Yep. Yeah.
Starting point is 00:59:15 The last one that I got here, another example, is basically high-volume or high-velocity data. I mentioned IoT. We mentioned the Uber kind of thing. Like, I
Starting point is 00:59:34 do hear a lot about IoT devices and stuff that people are doing that I'm totally out of that loop on, but I don't know. It sounds cool. It makes sense for sensor data or something. You're watching temperatures or something. I don't know
Starting point is 00:59:48 what other sensors there are, but it sounds cool. Those are perfect examples of where serverless functions make a ton of sense. Yeah, you don't want to be waiting on your overnight batch process to see if your nuclear core melted down. Right.
Starting point is 01:00:00 And, and so it's going to scale up for you and you don't have to worry about it. Right. But when everything's, you know, all peachy and moving along at the regular pace it should, you just have standard load. So that is a great example of when this might actually come into play. Yep, and we've got a short list here of the benefits. So basically one of the big things we've talked about a few times is having those producers
Starting point is 01:00:22 and consumers completely decoupled. And so they're both interacting with that event stream, but they're doing it in isolation of each other. So nobody's waiting on the other person. It's just a nice architectural pattern. No point-to-point integrations, which means you can easily add new consumers to the system without having to change necessarily the producers or consumers that you have in effect. You're basically adding new observers to this data stream, which doesn't necessarily require anything else to change. So I have a question on this then.
Starting point is 01:00:52 Are we saying it would then be an anti-pattern if you have one serverless function calling another serverless function? You know what I mean? So let's say that you place an order and it calls a serverless function, and then that thing is going to say, okay, I did my thing. It's almost like procedurally calling services. I've seen people do this before, right? Like they call one service and that service calls another service.
Starting point is 01:01:15 And it can kind of get nasty. Yeah, you end up with an infinite loop. Yeah, or, you know, you end up having to deal with things like service discovery and various other things. Like, it can truly get ugly, as opposed to, hey, let me fire off events for my order, my order details, and all this stuff, and they all go off to different serverless functions, and then they all return back to your app at some point in time, right? Oh yeah, yeah, great point. Yeah, the one downside here they didn't mention, though, is just how grossly complicated this can get. Yeah.
Starting point is 01:01:47 Like, talk about service discovery and deployment changes, like major schema changes and how things talk to each other. Like, it can be a real headache to have to think about all these other parts of the system that are disconnected. Yep. Consumers can respond to events immediately as they arrive rather than waiting on, you know, their part. So if it's something truly asynchronous, like I mentioned, the ability to send a coupon or whatever, that's something that doesn't have to wait for them to get off the ride. It can go ahead and respond to that as soon as it wants to, which is nice. High scalability and distribution... distribution... distributedness. So it's just a nice cloud pattern, basically.
Starting point is 01:02:26 It kind of checks all those modern best practice boxes. Unlike all that traditional stuff that you do with containers. VMs. Abacuses. Subsystems have independent views of the event stream. That's kind of related to the decoupled consumers and producers, I think, unless I'm missing something. No, I would agree with that. I think so.
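For anyone who wants to see that decoupling in code, here's a toy sketch. This is not any real broker, and the class and variable names are made up for illustration. The point is just that the producer only appends to the stream, every consumer observes it independently, and wiring in a new consumer never touches the producer:

```python
# Toy event stream: producers and consumers only ever meet at the stream.

class EventStream:
    def __init__(self):
        self.events = []      # append-only log of everything published
        self.consumers = []   # independent observers of the stream

    def subscribe(self, consumer):
        self.consumers.append(consumer)

    def publish(self, event):
        self.events.append(event)
        for consumer in self.consumers:
            consumer(event)   # each consumer reacts in isolation

stream = EventStream()
seen_by_billing, seen_by_email = [], []

stream.subscribe(lambda e: seen_by_billing.append(e))
stream.publish({"type": "order_placed", "id": 1})

# Later, a brand-new consumer is added without changing the producer at all:
stream.subscribe(lambda e: seen_by_email.append(e))
stream.publish({"type": "order_placed", "id": 2})

print(seen_by_billing)  # saw both events
print(seen_by_email)    # only saw the event published after it subscribed
```

In a real system the stream would be something like a Kafka topic and the consumers would be serverless functions, but the shape of the relationship is the same.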
Starting point is 01:02:52 Yep. So, just kind of cool. So, they do have a list here of challenges. I'm going to add gross complexity. That's gross as in a lot, like 12 dozen. Not gross as in the other kind of gross. All right. So guaranteed delivery is a big deal.
Starting point is 01:03:12 We mentioned specifically that we're watching out for idempotency and we're watching out for duplicate messages. But you have to make sure that those messages get there at least once, which can be, you know, a hard thing to do in a distributed system, where a piece can go out, or you can have like a network partition or something, and something can't communicate, and things get out of sync, and that can be a problem. So that's something that you're constantly having to think about in systems like this. Yeah. And that's going to be on the application itself, having to know, hey, did this get delivered? Why haven't I heard back? Like, there's gotta be all kinds of things put in place to handle that. Well, imagine,
Starting point is 01:03:46 imagine let's go back to our e-commerce world, right? And so everything is event driven. Everything is three-factor-app friendly in this system. And one of the last events that's going to be sent is,
Starting point is 01:03:59 hey, the customer clicked the button to actually place the order. And, you know, maybe 25% of the time, that message doesn't go through. Imagine what that would mean to an Amazon. Yeah. Guaranteeing delivery is actually a little bit harder than it sounds. And on the gross complexity thing, too, I did
Starting point is 01:04:25 want to mention that, like, we kind of glossed over why it's so hard when you don't manage state. Like, everything becomes a lot harder when you have to start thinking about a world where you don't have access to the data that makes up all the bits and pieces; you have to think about everything completely differently. Just put it in a cookie. There you go. Just kidding. So, yeah. Also, I want to mention processing events in order. You can imagine, if you get things out of sync,
Starting point is 01:04:59 you have a network partition, or maybe some sort of leader goes out of commission for a little bit and then kind of comes back in and sends stuff out of order. If you've got things that need that stuff to be in order, like, for example, if you start shipping an order that's already been canceled, or you send somebody a coupon after they already bought lunch or something, things can get a little goofy. And that's a challenge in particular when you're trying to weigh processing things
Starting point is 01:05:24 in order against guaranteed delivery. Like, these things are kind of at odds with each other. You ever heard of the Byzantine generals problem? No? Or the two generals problem, where, uh, so it's a basic example of this kind of situation where you've got two generals on two different hills that are preparing to attack a city, but they need to coordinate their attack so that they're attacking at the same time. And so General A sends a message to General B
Starting point is 01:05:51 and says, attack at noon. But they need to know that that message was received, right? Otherwise, they just run down there. And how do they know if the messenger got caught or didn't make it? Tripped and died in a ravine or whatever? And so, you know, the answer is basically like, okay, well, General B will send an ACK back and say, okay, I got your message confirmed. But how does General B know that its message got back to General A?
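A toy version of the generals' dilemma, with made-up names, just to show the usual practical answer: General A resends the order until an ACK makes it back, and General B is idempotent, always acknowledging but only acting on the order once, so the retries are harmless:

```python
import random

random.seed(7)  # fixed seed so the lossy channel is reproducible

class GeneralB:
    def __init__(self):
        self.orders_received = []

    def deliver(self, message):
        # Idempotent: act only on the first copy of an order...
        if message not in self.orders_received:
            self.orders_received.append(message)
        # ...but always acknowledge, even for duplicates.
        return "ack"

def lossy_hop(delivery_rate=0.5):
    """One network hop that loses the message about half the time."""
    return random.random() < delivery_rate

def send_until_acked(general_b, message, max_attempts=50):
    """General A's side: keep resending until an ACK survives the round trip."""
    for attempt in range(1, max_attempts + 1):
        if not lossy_hop():
            continue                    # order lost en route; try again
        ack = general_b.deliver(message)
        if ack == "ack" and lossy_hop():
            return attempt              # this time the ACK made it back
    raise RuntimeError("no ACK -- General A can never be fully certain")

b = GeneralB()
attempts = send_until_acked(b, "attack at noon")
print(attempts, b.orders_received)  # however many retries, the order was applied once
```

Note the two failure modes: sometimes the order itself is lost, and sometimes the order arrives but the ACK is lost, which is exactly when General A resends an order General B already has, and exactly why the idempotent receiver matters.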
Starting point is 01:06:14 That's a really good... I mean, parts of this conversation made me feel like this was, uh, like a TCP, you know, protocol conversation, especially when we're talking about how events can come in out of order and you gotta put them back in order. And when you said you gotta send an ACK, I'm like, oh yeah, there we go, it's even more like a TCP
Starting point is 01:06:42 conversation. But to be clear here, the important part of this, that's why idempotency is so important, is because of that Byzantine-type thing where you're talking about General A and General B. General A sends the order, right? He's waiting for an ACK from General B. If he doesn't get it in some predefined amount of time, right, then he's going to send the message again, right? And so he's still going to wait for the ACK. So the fact that it's idempotent means that you're not going to submit that order twice. You know, if it got it the first time, but the ACK didn't come back, the second time it comes in, basically General B is going to just throw away the message and be like, I
Starting point is 01:07:19 already got that. I'll reply back and let you know I got it, but I'm not going to do anything with it. So the three-factor app is not UDP. It is not UDP. It is all about TCP. Yes. And there's also – yeah, we've talked about having an episode on distributed systems, and we can talk about more of this stuff kind of in general. But there's a lot of techniques for dealing with exactly that sort of thing, and they really relate to all sorts of kind of general principles in programming, like the two generals problem that we talked about,
Starting point is 01:07:45 SYN and ACK in networking. So many of these patterns, and even what we're talking about today, kind of mirror patterns in functional programming, where you've got this immutable state and, you know, these finite, kind of isolated workers.
Starting point is 01:07:57 So I don't know. It's just, it's kind of cool to see stuff reflected, like principles reflected throughout the stack. So Haskell was right. Haskell is right. Haskell was right. So processing events exactly once. We mentioned that already.
Starting point is 01:08:10 Latency is an issue there, especially if you've got... Not especially. If you have producers producing faster than you have consumers consuming, then that's a death spiral. So latency is very important, not just in the good user experience,
Starting point is 01:08:26 but in like not ending the world experience or ending your world. But also the main reason I put it here too was because in the serverless function world, you could be waiting on spin up. So it's not like you're going to guarantee sub second response time. If that function hasn't run in a day,
Starting point is 01:08:45 you might be waiting three or four seconds before you get that response. Your next one might come back in, you know, 10 milliseconds, but that first one may take some time. Yeah, we talk about Kafka a lot, but a lot of the benefit of using a robust system like Kafka for basically its intended purpose of streaming events is it's got a lot of, I don't want to say guarantees, but it's got a lot of technology and a lot of thought put into things like
Starting point is 01:09:08 processing events in order and processing events exactly once, and giving you insight into latency. Like, it's designed from the ground up with all of these things in mind. And so while you can go off and kind of write your own thing, maybe around a relational database, in order to check some of those boxes, it's an uphill battle.
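The in-order processing problem that keeps coming up in this conversation can be sketched with a small reorder buffer. This is a toy, not how Kafka or any real broker actually does it; it just holds early arrivals until the next expected sequence number shows up:

```python
# Toy reorder buffer: apply events in sequence order even when the
# network delivers them out of order.

class InOrderProcessor:
    def __init__(self):
        self.next_seq = 1
        self.buffer = {}   # seq -> event, for events that arrived early
        self.applied = []  # events actually processed, in order

    def receive(self, event):
        self.buffer[event["seq"]] = event
        # Drain every consecutive event we now have.
        while self.next_seq in self.buffer:
            self.applied.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

p = InOrderProcessor()
for seq in [2, 3, 1, 5, 4]:           # network delivers out of order
    p.receive({"seq": seq})

print([e["seq"] for e in p.applied])  # [1, 2, 3, 4, 5]
```

The tension mentioned earlier shows up immediately: if event 1 never arrives, everything after it sits in the buffer forever, which is why in-order processing and guaranteed delivery pull against each other.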
Starting point is 01:09:28 And some of these problems are really hard. Yeah, they truly are. It's not fair. Somehow I just got reminded of an old YouTube video of Erlang the movie. Do you guys remember that? Yeah, that's right. I'm going to like, I'm going to put some links in there cause we're talking about functional
Starting point is 01:09:48 language. And of course we're like, we're giving too much love to Haskell. So I guess we had to bring up Erlang. We should also, Hey Joe, you said a picture earlier of, of a test that was like named the five,
Starting point is 01:10:01 uh, the five. Oh, that wasn't me. That was fun. Oh, you said it, the five states of programs. We should totally drop that in the show notes here because that was hysterically funny. Oh, do you want me to read it? No, no. I don't want to tell it. People need to go up there and see it.
Starting point is 01:10:18 Yeah, you got to go to the notes. Yeah, you got to go to the notes. I'm going to make sure I go ahead and save that picture right now. Yeah, codingblocks.net slash episode 117. It will give you a ha-ha moment for your day. Hey, maybe a funny dog video there, too. I don't know. Oh, we should put that in there, too.
Starting point is 01:10:34 Yeah, so this has been a funny night. Yeah? All right, cool. This episode is sponsored by Educative.io. Every developer knows that being a developer means constantly learning new frameworks, languages, patterns, and practices. There's so many resources out there. Where should you go? That's where Educative.io comes in.
Starting point is 01:10:57 Educative.io is a browser-based learning environment allowing you to jump right in and learn as quickly as possible without needing to set up and configure your local development environment. Yeah, so I worked through Big O for Coding Interviews and Beyond, which is really great because it had a lot of visualizations and some ways of kind of testing my knowledge. So I could basically, you know, if I thought I knew a section well, I could skip to the end and then figure out if I knew how to answer those questions. If not, I could just scroll up a little bit to go back through some of that information. And now I'm 41% through Grokking the System Design Interview, which has been just amazing. I'm on section 13 of 31. And it's been like really in-depth kind of walkthroughs of various
Starting point is 01:11:39 architectures for things like Twitter or YouTube, things like that. And now I'm trying to decide what's next. And I am looking at one of the courses you started, Machine Learning for Software Engineers, but I've also got my eye on the GraphQL course. So I'm not sure what to do next. Yeah. I mean, I'm with you there. They just sent out, I got it, you know, because we're signed up, we get notifications about, like, hey, here's the new stuff that we have going on. Right. And so they already had concurrency courses, they called them their concurrency interview courses, for Python and Java, and now they've added C#. So there's C# Concurrency for Senior Engineering Interviews. And, you know, I'm with you though, but here's the thing.
Starting point is 01:12:20 Considering that Rust has been voted one of the most popular languages, they have Learn Rust from Scratch. And I'm really thinking that that's going to be my next one that I go after. And I can't stress enough how awesome the ability is to, like... you don't have to set up your environment to learn this code; you could just start playing with it immediately. And because it's all browser based, you don't have to worry about setting that up. You can do it from your iPad. I haven't bothered to try it on a phone, but, you know, you could be on your laptop, on your desktop, whatever device, and pick it up and continue. And, you know, I think I've said this before about how they have an interactive coding environment right there in the browser. But not only is that for when it's actually time to code, but the actual examples themselves are code that you can interact with and edit, and you can run it and be like, oh, that's interesting how that worked. What if I make this change? Then what happens? And you can do it right from the example code that they gave you, let alone when it actually comes time to write the code for whatever particular task you're trying to do.
Starting point is 01:13:25 specifically because of the numbers they went through. So they actually break things down by the database schema and how many bytes, estimate the number of items per day, and then take that by year. And it's just incredibly in-depth and just eye-opening to me to really think about things in this way. So I definitely really recommend that one. So start learning today by going to educative.io slash codingblocks. That's educative, E-D-U-C-A-T-I-V-E dot I-O slash codingblocks and get 20% off any course. All right. So all those resources that we like, we've got all the references that we've had in other episodes, like the three-factor example, three-factor app example, reference implementation.
Starting point is 01:14:09 We'll have that link to the Microsoft article we're talking about with architectural styles for event-driven architectures. And also, we'll have the Educator.io link to GraphQL from the client perspective. And on to... My favorite portion of the show. It's the tip of the week. It was also Nicholas' favorite part of the show. So yes.
Starting point is 01:14:35 I have a partner out there somewhere. Alright, so my tip of the week. It's a good tip, as you do. You know, like a gentleman. So we've talked about GitHub and Git all the time, right?
Starting point is 01:14:58 But have you ever thought about, like, hey, you know, your email address could be exposed in the Git log of any repo that you're participating in, right? So you can actually keep your email address private in your GitHub repo. Uh, now, this is where I got lazy: I didn't bother to check to see if GitLab and Bitbucket and others allow you to do this too.
Starting point is 01:15:19 But you know, I imagine they might have a similar service, but if they don't, whatever. But yeah, you can set your email address private in your GitHub repo, or whichever, it might not be the repo that you own, but whichever one you're contributing to, by setting it to your GitHub username at users.noreply.github.com. Dude, that is an amazing tip. So you can do, so the command that you're going to do, this again, this will be in the show notes, but you're going to do git config username,
Starting point is 01:15:53 you know, so let me rephrase this: git config, git space config space user.email space janedoe at users.noreply.github.com, assuming janedoe was your username. I love that. I'm doing that tomorrow. Yeah. So,
Starting point is 01:16:13 cause it just dawned on me like something, I don't know why something dawned on me and I'm like, you know, I bet nobody ever thinks about that. And you know, that's just like a secret that you might not, you know, want spread out there,
Starting point is 01:16:24 but, you know, at least not for, like, spam and whatnot. But, you know, that's a killer. Yeah. And then another one. Uh, I thought we'd already mentioned it once before, but then, like, I mentioned this show to Alan and it took him by surprise, and I was like, man, I guess we gotta talk about this on the show then. Yeah. But there's another really great podcast out there that, if you haven't listened to it, you will enjoy, called Darknet Diaries. And it's true: their subtitle is True Stories from the Dark Side of the Internet. And there's all kinds of great, you know, things out there. There was a whole series on... I think, Alan,
Starting point is 01:17:08 you said you listened to the Xbox Underground. It was awesome. And it was like how that story built from just hacking, just trying to get your own things to run on that Xbox and how that got to, I don't even remember where that one ended up, the crazy world that one ended up. The crazy world that would end up. So crazy. So dark.
Starting point is 01:17:27 So crazy. So dark. Yeah. So, at any rate, there's all kinds of... Oh. Another one that was really good, because you guys know my love of escape rooms and all that. Episode 43, PPP, was amazing. Oh, was that the one where they had the hidden DEFCON party? They went to DEFCON. Yes. And like you go to this
Starting point is 01:17:48 you go to the bar where the party is and you're like, man, this looks lame. And then you see like five people pop out of a a photo booth and you're like, wow, they were all in that same photo booth. That's crazy. And the next day everybody's talking about this massive killer party. You're like, well, where was it? They're like, you had to go through
Starting point is 01:18:04 the photo booth to get to it. Yeah. And they have like this whole, it's almost like a scavenger hunt type thing. They have multiple of them at DEF CON every year. Dude, that episode was amazing. Like so far, everything I've listened to on here was super interesting. There was one. I don't remember what episode number it was or else I would share. But just to give you an idea, one of the stories was along the lines of a company whose job it is to – they're basically like a physical security company. Can we gain physical access into your company's infrastructure?
Starting point is 01:18:45 And then once we do, then what kind of like, you know, digital access can we gain from that? Right. And they were talking about, um, you know, they, everything was going great. They had this great client, you know, relationship going on here in the U S and their customer was like, Hey, you're doing such a great job for us. Do you think you could do something for us in, uh, like our other countries? Like, you know, um, I don't know pick a random country, Brazil or whatever it was.
Starting point is 01:19:09 And they didn't speak the native language in that country. And they're like, yeah, we can try. And they ended up working their way in, all starting by looking at, I think it was Facebook specifically. They saw that several of the employees took part in some kind of nonprofit or charity or blood drive or something like that. I forget what it was. But it was literally like, what's the word for when you're doing good things for humanity or people or whatever? Oh,
Starting point is 01:19:46 no, not charity. Uh, why do words elude me? I know all the best words. At any rate. Um, somebody's like,
Starting point is 01:19:56 people are screaming in their cars right now. Um, at any rate. Uh, uh, so, so, you know,
Starting point is 01:20:03 they were doing these like great things, right? And so what they ended up doing is they ended up going to one of these events where they knew that the employees were going to be. And we're like, yeah, we're from corporate. And we just think that this is amazing that you guys are doing this. We'd love to help out and sponsor whenever we can. We're only here for a couple hours, but we'll be back in town next week. Maybe we can get together for lunch or whatever. And because of that one interaction that was off site, these employees unknowingly were like, Oh yeah, great. No, sure. Let us know. And then they walked them in the door
Starting point is 01:20:38 to the office. And from there they had all the access they wanted. And it was just, there are great stories like that that, you know, make it a really interesting show. Man, I am dying inside because I can't think of this word. Are you going for altruism? No, but it's oo-ism something or o-ism. You all let us know in the comments. Yeah, there was another one too. Like, I think what even got us started on, got me started on this show, was I thought that when we originally talked about this, somebody was talking about, like, iTunes reviews. Yeah, yeah.
Starting point is 01:21:10 Yeah, I heard about it on Software Engineering Daily. Yeah. And he, like, investigated iTunes reviews and was finding out how there were some shows... So, like, we always ask, right? You know, and, you know, I mean, it's a little humbling to ask, but we ask, right? Cause we really do appreciate it. We really do enjoy reading it. And, uh, but apparently there were some other shows. I don't know that they were necessarily specific to tech. In fact, if I remember right, they weren't; they were just more general, broad-purpose podcasts out there. And according to his research, you know, it wasn't uncommon to find paid, uh, you know, services in, like, China or India or something like that, where you could hire that service, and they would have, you know, some person might have 10,000 accounts
Starting point is 01:22:03 that they would go in and just hash out a bunch of reviews for you and for your show or whatever. It was actually subscriptions. They would have people that would log into their account and subscribe, download, download, download, download, then go back, cancel all the downloads, sign into the next account. Yep. So ridiculous.
Starting point is 01:22:22 It's just to game those rankings in order to get real followers, so you game it long enough and then you don't need to. Yeah. Pretty much. When are we going to start doing that, by the way? I'm too lazy. We're too lazy. That's it. We're developers. I like how you both were like that at the same time. Like you couldn't
Starting point is 01:22:37 possibly beat each other out any faster. Yeah. As soon as I found out it wasn't a script, it was actually people clicking, I'm like, uh-uh. Out. I would have totally programmed that. Uh, all right, so I've got three, as you know. Oh my gosh. I can't, I can't... You know, show off. No, not a show off. It's just, there were some great conversations in Slack, and if you're not a part of our Slack, you should go to codingblocks.net slash slack and join in on all the amazing conversations there. But the first one, this one is super cool. And I think you guys will like this.
Starting point is 01:23:09 So if you live in Azure and you need to do things in the Azure world, a lot of times when you start automating deployments, you use what are called ARM templates, right? Well, those are usually just blobs of JSON that, you know, aren't all that readable. As most, you know, templated type code isn't. But our friend Dave Follett, now super opinionated instead of super good Dave, he left a link in, I think, our dev chat that is an ARM viewer. It will actually visualize your deployment template. So it'll kind of give you a picture representation of what you're about to spit out into the world of Azure.
Starting point is 01:23:53 And that's super cool. And this is a, I believe it's a Visual Studio plugin. It might be a VS Code plugin that you take your ARM template, stick it in there, and it'll give you a nice picture representation of what you're going to do. Awesome stuff. Some of the drawings are a little scary. There's arrows pointing everywhere and you're like, oh my God, I just blew up the
Starting point is 01:24:14 internet. No, I recreated the internet. What did I do? Oh gosh. You were the alpha and the omega of the internet. Yeah. So that one is pretty awesome. And currently it has a perfect five-star review out of six people reviewing. So that's good because usually people complain more than anything. Out of six people reviewing. Out of six people. You know what I mean? That's a bit.
Starting point is 01:24:37 Hey, look, man, people typically don't sign in for accounts to review stuff. We need to get that link from Darknet Diaries and hand it to this guy and be like, go buy some reviews. That's right. Go buy some reviews. All right, so the next one, this is from Steven Metcalf, also Wraithlin over in Slack. This also came from our dev channel. This one was cool because I don't know
Starting point is 01:24:57 how useful it is, but it was really cool. There is a WSL plugin for JetBrains IntelliJ that basically allows you to run things against the Windows Subsystem for Linux using IntelliJ. So I think what it boils down to is sometimes there are builds in Java that just don't work well on Windows. They work better in a Linux environment.
Starting point is 01:25:21 And so I think this allows you to leverage the Windows Subsystem for Linux to use the native C and C++ type stuff to get better builds out of it with less frustration. So that was kind of cool. Didn't know it existed. As a matter of fact, we'll have a link to that,
Starting point is 01:25:39 but even above that IntelliJ or JetBrains rather has a toolbox download that allows you to get some of this stuff. So all pretty neat. You were about to say? Well, there's a similar extension for VS Code. Oh, really? Yeah, I'm trying to find it now because I get pinged about it all the time. I'm like, hey, I see you have this. You want to install it?
Starting point is 01:26:03 And I'm like, no. Maybe one day. I can't be bothered to click that install button. Here you go: it's the Visual Studio Code Remote - WSL extension. It lets you use the Windows Subsystem for Linux as your full-time development environment right from Code. Throw it up there in your tips of the week, man. You got an extra one. All right, and then the last one that I was going to do. So there was a conversation, I think it was in dev talk or dev chat, or I don't even remember where it was, but somebody had made the comment of, man, I really wish there was, like, a SQL Server Management Studio for Mac; it kind of sucks that there's not one. Well, it turns out there kind of is, but it's not what you'd think it would be called.
Starting point is 01:26:45 And the problem is it totally derails you there. I should probably just put up a webpage that says SSMS for Mac and redirect it here. Um, so it's actually called Azure Data Studio. And the thing that's interesting about this is they've created a UI that is basically cross-platform, and it'll allow you to connect to Azure Cosmos. It'll allow you to connect to a SQL Server. It'll allow you to connect to a lot of things. And it's actually not as robust as SQL Server Management Studio, but it's got a lot of features that you can use if you're on a Mac to be able to interact with that kind of stuff. So this is a great free tool that Microsoft is providing. So if you find yourself on Mac or in a Linux world, this might be able to help.
Starting point is 01:27:34 But still specific to Microsoft databases? So it's not as, like, open as a DataGrip then? It's not as open as DataGrip, but if you're connecting to a Cosmos DB, technically it has layers to operate as a Postgres or a SQL Server or whatever. So I don't know how far you can take it, to be honest. But I know that if you're connecting to Cosmos DB, you have a lot of functionality there. So, yeah. All right. Well, cool. I have a tip.
Starting point is 01:28:16 You don't have four? Don't have four. So I think I gave the Phoenix project as a previous tip some time ago, which is a great fictionalized account of kind of introducing DevOps and change management into the workplace. And it was really an entertaining read. And you should go read that or better go listen to it on Audible because that's way easier for me anyway, way more enjoyable. And I can do it while I'm walking the dogs. Well, the authors of that book have put out another book that is not fictional. It's basically
Starting point is 01:28:48 all about DevOps. In fact, it's called the DevOps Handbook. Very exciting to me is on Audible because I bought it and I started reading it and I just had a hard time with it. Now that it's on Audible, I'm going to be done with that thing in like three days. It's a lot of dog walking to do.
Starting point is 01:29:06 It's probably a really great book. I haven't gotten very far in the actual print version of it, and I just got the Audible version today, so I'm assuming it's going to be awesome, and you should read it. So you recommended something that might suck? Yeah, but, uh, I mean, I'm going to say 75% chance it's awesome. All right. I like it. I don't remember what episode number this is, but the Docker for Developers episode is where we talked about the Phoenix Project. That was a good episode. So that was like last year, last July, last June. Last April. How do you have that? Like, do you have some sort of reverse index in your head?
Starting point is 01:29:46 What is that? So sometimes, yeah. Yeah, Dab lets him drive the car on Sundays. It's not very accurate. I'm an excellent driver. Yeah, 417 toothpicks. Oh, Wapner. Got to watch Wapner.
Starting point is 01:30:04 All right. Oh, gosh. So, all right. So, in this episode, we talked about the last factor of our three-factor app. So, we were officially out of factors here. We talked about the asynchronous and serverless backend. We're done until somebody comes out with the one-factor app. When the four-factor.
Starting point is 01:30:27 No, no, no. Oh, that's right. We went from 12-factor down to three-factor. No, you can't do two-factor app. Yes, that's too many. The half-nickleback app. Let us know. All right.
Starting point is 01:30:40 So with that, subscribe to us on iTunes, Spotify, Stitcher, and more using your favorite podcast app. And as Joe said earlier, if you haven't already, we would greatly appreciate it if you left us a review. I don't care what Darknet Diaries says about it. We still like to read them. And you can find some helpful links at www.codingblocks.net slash review. While you're codingblocks.net at, check show notes hours out. Examples more and discussion.
Starting point is 01:31:10 And send, send, send, send, send questions, feedback, and rants to Slack. And I just realized I missed a great opportunity to joke. I should have said a quarter of a nickel back. That's way funnier because they're both monies. Along with 50 Cent. Make sure at CodingBox, follow Twitter, at social links at the top of the page. Okay, Max Headroom.
