Coding Blocks - Azure Functions and CosmosDB from MS Ignite
Episode Date: October 22, 2018
This is a special episode recorded at Microsoft Ignite 2018 where John Callaway from The 6 Figure Developer Podcast joins Allen Underwood to talk about Azure Functions and CosmosDB. Find out what they are and why you might want to try them out for yourself.
Transcript
Discussion (0)
You're listening to Coding Blocks, episode 92.
Subscribe to us and leave us a review on iTunes, Stitcher, and more using your favorite podcast app.
Visit us at codingblocks.net where you can find show notes, examples, discussion, and more.
And send your feedback, questions, and rants to comments at codingblocks.net
or hit us up on the Slack channel and find Joe and send them to him.
Yeah, that would be awesome.
And you can also follow us on Twitter at CodingBlocks or head over to CodingBlocks.net where you
can find all our social links there at the top of the page.
And with that, I'm...
Wait, hold on.
I'm Allen Underwood.
I'm Joe Zack.
And I'm Michael Outlaw.
All right.
So as always, let's start off this episode by saying thanks to everyone who left us a review. We greatly appreciate that. Now, before Allen and John Callaway dive into things here, I wanted to mention that you can catch me at Tampa Code Camp coming up on October 20th.
So if you're on the west coast of Florida, around Tampa or St. Pete,
then you should come out and try to kick me in the shins.
But good luck, because I'm faster than I look.
Yeah, and with that, you might be guessing like, hey, where's Allen?
He sounded a little different when he announced himself. And that's because this is going to be a special presentation of the talk that Allen gave at
Microsoft Ignite, along with John Callaway from the Six Figure Developer. And, you know, Joe and I are just
here to entertain you in the meantime. Yeah. And if you're not subscribed to their show,
you should definitely check it out.
I think they were one of my tips of the week a couple episodes back,
a lot of episodes back now.
But it's a really great show, and you should check it out.
It's the Six Figure Developer.
You're listening to Coding Blocks and the Six Figure Developer.
Subscribe to us and leave us a review on iTunes, Stitcher,
or more using your favorite podcasting app.
Visit us at codingblocks.net or 6figuredev.com.
And with that, I'm Allen Underwood from Coding Blocks.
And I'm John Callaway from The 6 Figure Developer Podcast.
All right.
And in this episode, we're doing something a little bit special.
We're coming to you from Microsoft Ignite's event right now.
And we're going to be talking about Azure Functions, Cosmos DB,
and focusing on writing
your applications and leaving the infrastructure to somebody else to deal with. So I think first,
we kind of want to give a thanks to Microsoft for having us down here at Ignite, right?
It's kind of a pretty big deal. They've got over 700 deep dive sessions going on,
100 self-paced workshops.
They have demos that you can watch, things that you can touch, keynotes from people like
Satya Nadella. They have Paul Thurrott right outside the door here recording a session
right now. And of course, you get tons of swag, which I think, John, you might have
gathered a little bit.
Yeah, I've got t-shirts stuffed in every pocket of my laptop bag ready to go home with a brand new wardrobe.
You're going to have banners flying out the side of your car on the way home.
Yeah, and don't forget 30,000 of our closest friends joined us at MS Ignite this week.
Yeah, pretty big deal, and it's pretty awesome.
It's a massive event.
So I guess let's go ahead and jump into the meat of this thing.
And the first question is, what are Azure Functions? You want to give it a shot? So Azure Functions are just
little bits of code that have a small responsibility. So it's a way of standing
up some functionality that may be utilized or used by a variety of different services or applications. And you don't necessarily have to worry about the infrastructure.
Right. And that's kind of the key part right there. You don't provision any servers. You
don't patch any servers. There's none of that. You write your code and it just runs.
And it's a really cool thing. And the part of it that a lot of people
probably want to take advantage of
is the fact that it scales on demand.
And by that, that really means
that you're not going out there
and turning any dials or anything.
They have some simple heuristics that do that
that make it happen for you.
And another cool part, at least to me,
is they support just a ton of different languages.
So they have their standard ones, which are C#, JavaScript, and I can't remember what the other standard one is off the top of my head.
I've probably got it in the notes further down.
But they've also got a ton of other ones, PHP, Python, which, I mean, what's your flavor of choice?
My flavor of choice these days is, of course, going to be .NET and C#.
Okay, cool. So I guess the big thing here is why should you care? Like why is,
why are Azure functions a thing? Something else to add to the resume, right?
I suppose that's always true. There's no doubt there. The other thing too is they,
if you've not played with the cloud, or you haven't ever
really jumped into the cloud and you feel like cost is a barrier, or you just don't know where
to start, for me these are a perfect way to jump in, because they're dirt cheap. As a matter of
fact, when you get up there, well, we'll get into the cost later, but it's just one of those things that for
pennies a month you can go in and start doing
something useful. Yeah. And they can, you know, like Alan said, they can be extremely useful and,
and a way to go ahead and deploy something that the rest of your application can utilize.
You don't have to worry about scale issues. You don't have to worry about if, if you've got
something wrong, it's going to bring down an entire infrastructure or entire application. If you need to make changes,
then you're just redeploying, making some changes, some small changes on particular functions and
just quickly redeploying. Yep. And I'm sure you've heard the term microservices. It's sort
of been a buzzword for a while.
Yep, I've got it on Square B1 on my bingo card.
Very nice.
So the thing is, people talk about microservices, and one of the things that we've mentioned in previous Coding Blocks podcasts is that's great and all,
but the problem with it typically is there are a lot of additional things that developers have to worry about,
or infrastructure people have to worry about,
because it's not like that stuff comes for free, right? You
have to figure out how to scale the thing up, how are you going to provision your
servers or your clusters or whatever. This allows you to do it without thinking about any of that.
Yeah, and on the Six Figure Developer podcast, we had Richard Rodger come on in episode 56 to talk about microservices.
So if anyone missed that one, go back and give it a listen to hear what he had to say on the topic.
Excellent.
And like I said, the other thing about this is you can worry about your application, your business logic, your line of business applications.
You write things that are meaningful to your business and you let the other stuff be
worried and fussed about from somebody else. So one of the cool things, so you said that C Sharp
is kind of your flavor, right? Yep. A huge announcement here at Ignite is that Azure
Functions 2.0 is now live. You want to tell us a little bit about that?
Yeah, Azure Functions 2.0 is now GA. It was announced during Monday's keynote, or the secondary keynote; I forget whether it was Satya or Scott Guthrie who mentioned it, but it's generally available now. That brings .NET Core support, whereas previously you
had to adhere to .NET Standard, so we're now up to the latest and greatest bits.
It's amazing.
And that, along with it, brings the fact that you can now run this on Mac.
You can run it on Linux.
You can run it in Windows.
You now can use Azure Functions in Kubernetes, which is really cool.
You can use Azure Functions on the IoT Edge. It's way faster
than version one, from what they're saying. I don't know that they had statistics out on this,
but I know they were talking about .NET Core 2.1 earlier, and it was
something like one and a half times faster, so I don't know if that completely translates to
this, but it's a pretty big deal. And the other thing that they have here that's interesting,
if you've ever played with Azure Functions,
there can be a problem sometimes bringing in dependencies
because they sort of step on each other.
With this version 2.0, they now load in their own context,
meaning there are going to be fewer conflicts to deal with.
And we'll actually have a link to this release page that they put together. It'll talk
a little bit more about this. So you want to give us a little overview of how you actually use some
Azure functions? Yeah. So depending on how you've architected your application and what you're
trying to accomplish with your functions, there are a variety of ways that you can trigger the
functions or kick off the function to do their little bit of work.
There could be HTTP triggers, so you can call directly and say, hey, it's now your turn to go do the thing.
You can also have a timer so that, you know, once an hour you can kick off a function to maybe import some data or massage some data or whatever the case may be. You can put something on a queue or like a service bus or a service bus topics, something
like that, a variety of different ways.
Yeah, I mean, they have a whole slew of them.
But basically, if you can think of ways to get information into Azure, such as blob storage,
event hubs, Cosmos DB, IoT, whatever,
they have these hooks and these bindings that are built in
so that you can kind of kick these things off however works best for your flow.
So pretty awesome stuff.
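To make the trigger idea concrete, here's a minimal sketch of an HTTP-triggered function on the v2 (.NET Core) runtime; the function name and the query-string parameter are made-up placeholders, not anything from the episode:

```csharp
// A minimal HTTP-triggered function for the v2 (.NET Core) runtime.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        // Calling the function's URL is the trigger; nothing to provision or patch.
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger fired.");
        string name = req.Query["name"];
        return string.IsNullOrEmpty(name)
            ? (IActionResult)new BadRequestObjectResult("Pass a ?name= value on the query string")
            : new OkObjectResult($"Hello, {name}");
    }
}
```

Swapping the attribute on that first parameter is essentially how you move between the other trigger types (queue, timer, blob, and so on).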
Okay, so this is where I actually had it in the notes.
I knew I had it in there.
So C Sharp and JavaScript were, at least in version one,
like the top dogs with F Sharp also.
And then they had what they called experimental support.
And in here, I'll just list them real quick.
You have Bash, Batch, PHP, PowerShell, Python, and TypeScript.
And I wouldn't be surprised if in version 2 they've got more.
Yeah, and during one of the sessions, I think maybe one of the pre-day sessions,
they had mentioned that there was support for some of these languages.
I know that Python was one in particular that they mentioned,
but it was marked as experimental.
And there were some pitfalls with trying to write an Azure function with Python.
But now that 2.0 is generally available,
I think they've rewritten their Azure function bits so that now you can utilize
everything that your Python developer knows and loves. Oh, very nice. Did they say what the
drawbacks in v1 were? Like the kind of things that would happen? I think they did.
I think I might have tuned out by that point. It's definitely information overload. So
that's not surprising. So I thought it would be useful because initially, John and I started to
put together an application in Azure. And then we found out that we weren't actually going to be
showing anything and we were only going to be talking about things. So I think what's more
useful is to talk about some of the best practices and things that will put you in a good position to be successful with Azure functions
because they really are awesome. They're exciting to talk about and they're exciting to play with.
So one of the first things that I found was you'll see all over the place they say avoid a long
running function, but they fail to say what a long running function is, right?
Like to me, I was worried about it's running longer than 30 seconds.
Is that a problem?
It seems from what I saw on the internet and various different issues posted
places, five minutes is a cutoff.
So if it runs for five minutes,
it's probably going to kill it and go off into no man's land.
Yeah, I did stumble across some sort of documentation site on the Microsoft
domain. So I'll see if I can find that again. But it did explicitly call out the five minute mark.
Okay, cool. I don't think I ever saw anything official, but it seemed to be what just kept
piling up. There's another thing that we're not going to dive into here because it's kind of a subset of Azure Functions, and it's called Durable Functions.
They allow for cross-function communication using bits built into Azure.
I want to mention it because it's worth looking into if you start developing some of these things and you find that you're going further. One of the key points for me, and this is big, that you have to understand is there's
only so much data you can send to an Azure function. By default, the message size that
gets sent in, for instance on a storage queue, is 64 kilobytes. So if it's bigger than that, then you have to find ways to work around it.
So like, like for instance, one of them is you can, instead of calling that function directly and trying to pass in like 128 kilobytes of data, instead of that, you can put that stuff
on a storage blob somewhere and then have a trigger that kicks that thing off and,
and it can go pick up that data from the storage blob and use it. So it can go get data that's bigger than 64 kilobytes, but it can't take in more than that.
Yeah, and that's probably a good architectural choice anyway.
Instead of saying, oh, here's this terabyte of data that I need to send to this function,
why don't we figure out how to get that into storage of some kind
and then tell the function that it's time for it to go ahead and do its job.
Are you saying a developer would actually try and send a terabyte of data to a function?
We tend to do some crazy things sometimes.
If you leave the door open, right, that's what's going to happen.
If it can happen, it absolutely will.
Yeah.
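Here's a minimal sketch of the workaround Allen just described: drop the oversized payload into blob storage and let a blob trigger pick it up, rather than pushing it at the function directly. The container name is a placeholder, and the function app's default AzureWebJobsStorage connection is assumed:

```csharp
// Sketch of the "put big data in a blob, trigger on the blob" pattern.
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessLargePayload
{
    [FunctionName("ProcessLargePayload")]
    public static void Run(
        [BlobTrigger("incoming/{name}")] Stream payload,
        string name,
        ILogger log)
    {
        // The payload can be far bigger than the 64 KB message cap because the
        // function reads it from storage instead of receiving it directly.
        log.LogInformation($"Processing blob '{name}' ({payload.Length} bytes)");
    }
}
```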
So another one of the best practices for Azure Functions is to write them to be stateless.
And this is where a lot of people are going to scratch their heads
who haven't dealt with microservices
and that kind of thing before.
Basically, what that means is
this thing needs to rely on as little as possible.
But if you do need to maintain state, then you try to put the state on the data, is what they said.
So it's an interesting concept, right?
Like if you get a data packet that comes into this thing, then probably what you want to do, if there's some sort of state you need to maintain, is add it into that data and then save it off somewhere. That way, the next time something needs to pick it up and run, it's got that
state available on the data itself.
Right.
Yeah.
And you want to write the function so that it's idempotent, so that it will return the
same value with the same inputs.
Yep.
I always love that word.
And I'm sure there's a lot of people that haven't taken CS and are like, idempotent?
What?
Just another thing I got to look up.
Yeah, we'll be sure to put the definitions of these funky terms on the show notes.
Yeah, definitely.
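As a rough illustration of idempotent-and-stateless, here's a sketch where the message carries its own identifier and the handler upserts rather than blindly inserts, so replaying the same message doesn't create duplicates. IOrderStore is a hypothetical data-access interface, not a real API:

```csharp
// Rough sketch of an idempotent, stateless handler: the message carries its own
// ID and state, and processing it twice leaves the same result behind.
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class OrderMessage
{
    public string OrderId { get; set; }
    public decimal Total { get; set; }
}

// Hypothetical storage abstraction; stands in for whatever you actually use.
public interface IOrderStore
{
    Task<bool> ExistsAsync(string orderId);
    Task UpsertAsync(OrderMessage order);
}

public static class ProcessOrder
{
    public static async Task RunAsync(OrderMessage order, IOrderStore store, ILogger log)
    {
        // Replayed or retried messages are detected by their ID and skipped,
        // so a failure-and-retry never creates ten duplicate orders.
        if (await store.ExistsAsync(order.OrderId))
        {
            log.LogInformation($"Order {order.OrderId} already processed, skipping.");
            return;
        }

        await store.UpsertAsync(order);
    }
}
```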
Here's another one.
And I think this is good practice regardless,
but it's even more important for things like Azure Functions or microservices: write defensive functions.
You want to fill us in on what that is?
Yeah, so you want to plan for failures at each step. You want to make sure that you can recover
from those things that happen from time to time. Like we said, if it can happen, it will. So we
want to make sure that we can recover from any particular failure or any weird side effect that
we might need to account for. Yeah. And what that means is it needs to be able to pick up where it left off, right?
Like if you have an order from a customer and you're leveraging something like Azure Functions or microservices,
you don't want it to insert that order.
If it failed 10 times, you don't want it to create 10 new orders because you're going to be in a mess, right?
So you need to make sure that every step of any kind of transaction
type thing that happens, you have some sort of fail safe so that it can pick up and continue
where it left off. So this is the whole part of don't have it redo work. Try and figure out a way
to not do that. Yeah. Know about poison queues and how to use them. Yeah, this was interesting. So we'll have a link in the show notes,
but basically what it boils down to is if things fail multiple times,
there's something built into the Azure architecture that knows that this is a problem
and it'll identify it in the queues for you or in the Azure function.
So really, really neat stuff, and it's nice to know about.
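For queue-triggered functions, the runtime retries a failing message a handful of times (five by default, if memory serves) and then moves it to a companion queue named <queue-name>-poison; a second function can watch that queue so bad messages get noticed instead of silently piling up. The queue names here are made up:

```csharp
// Two queue-triggered functions: the normal path, and a watcher on the
// companion poison queue.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderQueueFunctions
{
    [FunctionName("ProcessOrderMessage")]
    public static void ProcessOrder(
        [QueueTrigger("orders")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing: {message}");
        // Throwing here puts the message back for a retry; after the maximum
        // dequeue count the runtime moves it to "orders-poison".
    }

    [FunctionName("HandlePoisonOrder")]
    public static void HandlePoison(
        [QueueTrigger("orders-poison")] string message,
        ILogger log)
    {
        // Park or report the bad message instead of letting it pile up silently.
        log.LogError($"Poison message needs review: {message}");
    }
}
```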
Also know about how the function scaling works. And we'll get into that in a minute. So we'll do a little deep dive on that.
Oh, and so I guess actually, we're going to get into it right now. So
one of the things is, we're going to talk about using the consumption plan. So there are other
plans that you can use. I think there's an App Service plan where basically,
if you have servers set up already in Azure,
it can use their spare processing to run Azure Functions.
We're going to talk about the plan that says,
hey, just charge me when this thing's being used, right?
Don't use anything that exists already.
Yeah, so if we never actually trigger the function,
then I don't want to have to pay a
penny, right? Which is mostly true; we'll get into that in a second, but that's essentially
the gist of it. So here's what they have, and this is kind of cool: the CPU and the RAM scale up
automatically for you, up to 1.5 gigs of RAM, which is a significant amount of RAM.
And so you'll have to plan accordingly, right? Like if you're typically trying to load your
entire product database into memory, this, you're going to have to figure out ways to go around
this, right? But it is nice to know that they at least tell you what the caps are.
This is another thing.
So the function apps that share the same consumption plan, they scale independently.
So if you have two or three function apps that you've deployed, they're each going to
scale on their own.
You don't have to worry about, well, this one's using the resources and this is using
more.
They'll all do what they need to do to run effectively. So if
you've got three different functions and one of them needs to scale up, then you don't automatically
have to scale the other two. It does it for you; you don't really have any choice over
it. That's, I guess, the good and the bad of this, right? Again, you're not worrying
about the infrastructure pieces and how all that stuff orchestrates. But yeah, if you have one function
that's getting hit heavily, that one will scale up for you, and the other two would just do whatever
they need to do. So now here's one thing that is kind of interesting. And this is where I said
that, you know, if the function never fires, there's still a small cost. Okay, so here's what happens. When you write
your function, it's got code, right? So depending on how
big your code base is, the files themselves are stored in your function's storage account, so you're
going to pay for your storage. For me, for what we've done, like the code that we've put up there, it's
like a couple pennies a month. It's almost nothing. So even assuming you
never run that function, you're going to pay a couple pennies a month for the storage,
because that's where your files are, but that's it.
Right, so if you've got your own MSDN subscription
and you're paying a monthly fee anyway,
you're probably not going to eat up your monthly allotment, right?
No, not at all.
As a matter of fact, you look at it and you'd be like,
wow, that's really cheap.
So it's a nice way to get started.
Next up we got, okay, so this is a gotcha. We mentioned the different triggers that you have.
So you have the HTTP web hook. It's basically, you just call a URL and it fires off. You have
a timer that you can set. Basically it's a cron schedule is what you can put on a timer.
So, you know, one minute, 24 hours, whatever you want to do.
Those fire off pretty quickly, right?
Like as soon as you call them or as soon as that time hits, they do.
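For the timer case, the schedule is a six-field NCRONTAB expression (seconds first). A minimal sketch, with a made-up function name, that fires at the top of every hour:

```csharp
// Timer-triggered sketch. The schedule fields are second, minute, hour, day,
// month, day-of-week; "0 0 * * * *" fires at the top of every hour.
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class HourlyImport
{
    [FunctionName("HourlyImport")]
    public static void Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Import kicked off at {DateTime.UtcNow:o}");
        // Go pull the data, massage it, and store it somewhere.
    }
}
```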
If you're using blob storage as a trigger,
there can be a delay up to 10 minutes for the blob.
Now this has something to do internally
with how they have their stuff set up.
So it's not that it's never going to run.
It's just not going to happen immediately.
So if you're getting ready to plan out, you're like, oh, I want to go play with Azure functions.
Just know about that, right? It would kind of stink if you're sitting there waiting and nothing
happens. So if you're constantly hitting refresh, waiting for that trigger to have fired, you might
be disappointed. Right, for up to 10 minutes. So just be aware of it.
Now, they did say there is a way around this, and that's to
have an App Service plan with Always On enabled. Basically, what they said is when these
triggers fire, they spin up the functions, right? And the blob storage trigger, I guess, is a more
relaxed one, so it's not in the always-on mode. So if you want to force it, you can do that. Now, I
don't know what that means in terms of consumption and all that kind of stuff, if it costs more; I didn't see anything on that, but you know,
fair warning. All right, what have we got next? Okay, so this is where the interesting stuff gets
in. We were talking about scaling earlier, and basically what it does is it will automatically scale up to the number of instances of the function it needs to meet the demand.
So basically, if you've taken a look in the Azure portal and you've looked at the App Insights for something, you'll typically see response times, right?
And usually they're going to be pretty low, you know, under 200 milliseconds, whatever.
They use what they call basically their simple heuristics. We don't know what it is, but they have something that's like a simple enough algorithm to say, okay, it looks like we need to scale this up, right?
Because the request times or the response times or whatever the case may be are taking longer than expected. It'll auto-scale it for you to get those response times back down to a good threshold.
Yeah, could you imagine having to write some of that infrastructure yourself?
Oh, my God, no.
I mean, seriously.
And that's the thing.
That's why microservices, anytime people talk about them like it's the big buzzword, it's like, I mean, yeah, it's cool.
But you have to have a real need for it typically, right?
With this, it's just like writing any code.
You don't really have to think about it.
You just do it, and it'll do it for you.
Yeah, I typically write one or two functions in a class or something at least once a day. So I'm always looking for an opportunity to break that out into a reusable thing
that I could go ahead and make that a true function, function, uppercase F function, I guess.
I mean, it's really cool.
The fact that you can do it is just amazing.
Now, there are some limits here.
So there's no maximum number of concurrent requests a function can handle.
So what that means is, as long as your function is taking in whatever's supposed to happen
and spitting it out quickly enough, it's not like it's serially bound, right? It'll
take as many of those requests as possible because it'll
just parallelize them, right? So that's excellent to know. It's not like you're just stacking up a queue waiting
for it to happen. The one thing that they mentioned, and I'm not clear on this, honestly,
is they say that you have to be concerned with the number of connections being used. It said 300
is the limit. So I don't know if that means that you have your function
and it's connecting to a SQL server
or it's connecting to an Azure Key Vault storage
or something like that.
I don't know after there's 300 instances
of these functions out there if that's like your cap now.
Thoughts?
Yeah, I'd be interested to look into that as well. I'm not entirely clear on
what that is, so we'll be sure to look that one up, and if anybody out there in radio land
has any ideas, be sure to let us know. Yeah, comment on this, because I think we're planning on releasing
this on both of our sites, so pick one, you know, come over and leave us a comment. I'm sure that
John and I will be bouncing back and forth between the comments on the two sites.
So anybody has questions, we'll do our best to answer them.
So, yeah, that's interesting.
I'm not sure exactly what it means because because you don't control the scale factor, I'm guessing, again, you have to really build in the defensive part of this.
So now, this is the part that is kind of exciting to me, and
it's probably going to be super hard to explain in any kind of way that's going to make sense,
so anybody who's driving, if your brain starts melting here, I apologize. So billing is in gigabyte
seconds. So what does that mean? That is not a metric that I'm familiar with. Mega gigabytes, son.
So when I read this, I was like, okay, I don't really know how to equate this.
So let me give you the definition that they say.
Oh, you've got math down here.
Yeah, man.
Yeah, man.
And I've even got a link to the page where they have it nicely drawn out, and we'll have a link in the show notes for that.
I highly recommend if you're halfway interested in this, click it because it'll, it'll more clearly spell it out. Right.
But I at least want to plug it, put the seed in and maybe you'll get it. Maybe you won't.
So it's a combination of the memory size and the execution time of the function.
So what that means is, let's get down here, because I think I wrote it out a little bit. So first, before we get into the part of the gigabytes and the seconds and all that,
they give you, Azure gives you a monthly grant of a million requests
and 400,000 gigabyte seconds of resource consumption for free.
That sounds like a lot.
A million requests for free.
So I don't know.
Let's see who can do the calculator faster.
If you did a function every minute for an entire month,
what would that be?
So what is that?
60 times 24 times 30.
That's only about 43,000 requests, so you've still got a good ways to go before you hit
that million-request mark, right? Let's see, if you did that times 60, that's obviously going to blow
out a million. Yeah, that's about 2.5 million. So probably at every 30 seconds you're going to be at
around 100,000. Yeah, I mean, you've got a lot of room to play with things, right? Like if you had a timer
that was running and firing off this thing all the time, you'd still be in pretty good shape.
But even if you blew that out, I mean, that's still...
Yeah, the math is nothing.
And that's what I want to get into.
So when it says the gigabyte, let's break that down.
So first, what it's talking about is the memory consumption of the function itself.
So by default, the functions start with a minimum of 128 megs of RAM.
All right.
To get to gigabytes of RAM, you're going to divide that 128 by 1024.
So let's do it, I'm sure I have this.
So 128 divided by 1024 is an eighth, 0.125. Okay. So that's
going to be your multiplier. Now, if your function took one
second to complete, then that's going to be 0.125 times one, right? So your
gigabyte-seconds for that run is 0.125. All right? Then, how does this thing work?
Let me see.
Minimum 100 milliseconds, 128 megs of RAM.
All right.
So then what you have to do is take this gigabyte figure and, oh, man, what was it?
The factor.
I can't even remember now.
Now my math's all gone. Basically, the example they've got here is if you
had somebody that was doing, oh, what was it, a million executions? No, dang it, let me get back
up here. You want to talk while I try and find this again? You know what
makes for good radio? Math. Hey, we never learn our lesson.
Right.
All right.
So here's what they did, right?
So on their example, they had 3 million executions,
and they took a second just to make it easy.
Hopefully your Azure function is not taking a second.
It depends on what you're doing.
It could take longer, whatever.
So you have 3 million seconds, right?
Then they had 512 megs of RAM average used for these functions.
Then you divide that by 1024, which gives you your gigabytes, right? And that gave you 1.5 million gigabyte-seconds, right?
Because it was half times the 3 million, essentially. All right. Now you take away the
400,000 gigabyte-seconds that you get for free, and you're left with 1.1 million gigabyte-seconds. All right, that's after what you got free. Then you multiply this,
okay, this is where I messed up earlier, so there's a resource consumption price, which is,
how many zeros is that, one, two, three, four, all right, it's $0.000016 per gigabyte-second.
So you multiply that times your 1.1 million now,
and you come up with $17.60.
And then on top of that, you are going to pay for your executions, at 20 cents per million past the free million.
In this example that was an extra 40 cents.
All right? And at the end of the
day, after all is said and done, you basically end up with $18 that you paid for 3 million executions
that ran for a second each. Yeah, I could probably make, I could probably cover that.
That's not terrible, right? I mean, I can imagine all kinds of useful things that you can do with that kind of processing power and time.
And it's $18.
I mean, some hosting costs more than that nowadays still, right?
So pretty interesting.
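If you'd rather see that math than hear it, here's the same example as a quick calculation using the consumption-plan rates published around the time of recording ($0.000016 per GB-second, $0.20 per million executions, with 400,000 GB-seconds and one million executions free each month); rates change, so treat the figures as illustrative:

```csharp
// The episode's 3-million-execution example as a quick calculation.
using System;

class ConsumptionCostExample
{
    static void Main()
    {
        const double executions    = 3_000_000;       // one-second executions per month
        const double secondsPerRun = 1.0;
        const double memoryGb      = 512 / 1024.0;    // 0.5 GB average memory

        double gbSeconds     = executions * secondsPerRun * memoryGb;                   // 1,500,000 GB-s
        double resourceCost  = Math.Max(0, gbSeconds - 400_000) * 0.000016;             // 1,100,000 GB-s, $17.60
        double executionCost = Math.Max(0, executions - 1_000_000) / 1_000_000 * 0.20;  // 2M billable, $0.40

        Console.WriteLine($"Total: ${resourceCost + executionCost:F2}");                // about $18.00
    }
}
```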
Again, we know that math makes for a really super interesting radio.
But hopefully you're able to follow along with that.
But it gives you an idea that you're not going to pay through the nose for this, right?
And that's the big deal.
Now, they didn't put in there the cost of the storage.
Assuming that it's probably your standard class file and some code, again, it's probably a few pennies a month.
It's almost nothing.
And Azure Function, by definition, should be relatively small.
I would assume the code to make that thing relatively small would be small
itself.
You would hope so.
You,
you would certainly hope so.
Hey,
so it's that time again,
if you haven't already,
we've got to ask you to leave us a review.
It's a big help to the show.
It helps us doing the things that we're trying to do.
And if you get some value out of this,
then please let us know over there at, you can go to codingblocks.net slash review, and you'll find links to that.
But you can go to iTunes or Stitcher or Podchaser or whatever you're familiar and comfortable with
and leave us that review. And we would really appreciate it. It helps us out a lot. So thank you.
Yeah. And don't be afraid to share us with a friend, spread the word.
You've heard us tell you about Datadog. You know, they're a software as a service monitoring platform that provides developer
and operation teams with a unified view of their infrastructure apps and logs.
But did you know about these features? Like Watchdog? Watchdog automatically detects
performance problems in your applications without any manual setup or configuration.
By continuously examining application performance data, it identifies anomalies like a sudden spike in hit rate that could otherwise have remained
invisible. Once an anomaly is detected, Watchdog provides you with all the relevant information
you need to get to the root cause faster, such as stack traces, error messages,
and related issues from the same timeframe. Or what about trace, search, and analytics? Trace, search, and analytics allows you
to explore, graph, and correlate application performance data using high cardinality
attributes. You can search and filter request traces using key business and application
attributes such as user IDs, host names, or product SKUs so you can quickly pinpoint where
performance issues are originating and who's being affected. Tight integration with data from logs and infrastructure metrics
also lets you correlate these specific trace events to the performance of the underlying
infrastructure so you can resolve the problem quickly. And let's not forget about logging
without limits. Logging without limits lets you cost-effectively process and archive all of your logs and decide on the fly which logs to index, visualize, and retain for further analytics in Datadog.
Now, you can collect every single log produced by your applications and infrastructure without having to decide ahead of time which logs will be the most valuable for your monitoring, analytics, and troubleshooting. DataDog is offering our listeners a free 14-day trial,
no credit card required. And as an added bonus for signing up and creating a dashboard,
they will send you a DataDog t-shirt. Head to www.datadog.com slash coding blocks to sign up
today. Did I, man, I totally messed up in here. Like there's a few things that I left out and I
think we talked about them a little bit. One of the things that just made me think about this, so we're going off
the rails here for a second, is you said they should be relatively small, right? But let's
say that you have two Azure Functions sitting out there and they both utilize the same model,
the same class, right? Like, let's say that you have some sort of accounting class that you
typically use. One of the things that's interesting about Azure Functions is if you want to share that
code, there's only a couple of ways to really do it. So you've either got to set up a NuGet package
that has those classes in it, which is cool. I mean, I guess that's one way to do it, right?
It seems kind of weird to have to create a NuGet package just to share a class, but you know, whatever. The other thing you can do,
because when you have a storage account with a function, you can actually go look at the storage
account and we'll get into that in a minute. And you'll see that you'll have your functions
at a particular level, just like you would on your regular
system, right?
Like you got your C drive, you got a projects folder, and then you've got two or three functions
listed there.
You can have another folder at that same level that's accessible by all those other three
functions.
And then you can do, I think it's a #load or a #r directive.
I can't remember exactly right off the top of my head.
But then you can reference those C# files, or .csx files, which is really interesting because they have C# script, apparently,
which, I don't know, it feels kind of dirty. You got any thoughts?
Yeah, and I guess, do you want to speak about the project, the side project that we were... Sure, sure. So Allen had an idea for hooking into our podcast stats.
Coding Blocks is using one service.
Six Figure Developer is using a different service.
But they have similar stats.
So the idea was to go in, hit an endpoint, hit an API, something like that, assemble the data, and start putting it in blob storage or putting it in Cosmos DB
and then starting to write some reports against that data.
But we wanted to have the ability to put it on some kind of trigger
or some kind of time scenario that we can make sure that our statistics are updated pretty regularly
so that we know which episodes you guys like, which ones are the popular ones,
and just find out all that we can from that.
So as we were looking to write some of these functions and collaborate on some of this code,
since he had a different provider than we did, we wanted to see if we could utilize
some kind of core class library functionality that would
share our models, share some interfaces and things like that. And then
we can deploy our functions independently, our Azure functions
independently so that they can connect to the respective APIs.
Yeah. And that's where the rub comes in is,
so we're sharing the podcast model, right?
And then that way, so every episode has its own statistics.
Well, the thing that sucked is he's got his provider,
I've got my provider,
and to deploy those independently and try and use the same classes,
it was like, well, do we bake in the classes to each one of them?
That seems kind of wrong. And then there were these solutions. Like I said, you can
put it in NuGet for C sharp or you can have this shared folder type thing. And there's some drawbacks
to both, right? NuGet is, you know, that's a process that you've got to put in play.
The other one, if you're going to share the files, you can't just do that because if you make an
update to those files, your functions aren't aware that they changed.
Because basically they load up those files, and if there's any code changes to them, they're not aware of it, right?
It built it, and it's using those DLLs.
So you actually have to have a file-watcher type configuration in your host.json in order for it to know to continually watch for those files
changing. So there are a lot of weird things with it that I'm hoping that maybe, with
version 2 out, they have some new things in there that make this a little bit more seamless,
because my whole goal when we talked about this was let's get this stuff up in, we're using Visual Studio Online, right? Or which is now Azure DevOps or?
Azure DevOps.
Okay, okay.
So we've got our code up there
and the whole goal was,
hey, let's have the Azure function
deployed directly from our Git, right?
So we make a commit to Git,
go ahead and deploy that thing out.
But I couldn't figure out exactly
how you get that separate shared class
into another folder over there.
So we haven't spent that much time doing it, but maybe we'll get back to it after all this.
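For reference, here's roughly what that shared-folder pattern looks like with C# script (.csx) functions. The folder, file, and property names are made up, and the watchDirectories entry mentioned in the comment is the host.json setting that makes the runtime notice edits to the shared folder:

```csharp
// run.csx for one C# script function (the trigger binding lives in function.json,
// which isn't shown). A sibling folder at the function-app root, for example
// Shared\PodcastEpisode.csx, holds the small POCO both function apps want to
// reuse, and host.json needs something like  "watchDirectories": [ "Shared" ]
// so edits to that folder recompile the functions that reference it.
#load "..\Shared\PodcastEpisode.csx"

using Microsoft.Extensions.Logging;

public static void Run(string statsJson, ILogger log)
{
    // PodcastEpisode comes from the shared .csx file loaded above.
    var episode = new PodcastEpisode { Title = "placeholder", Downloads = 0 };
    log.LogInformation($"Mapped raw stats onto the shared model: {episode.Title}");
}
```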
Yeah, I've been doing some crazy stuff with our CICD pipeline most recently for business work.
So not this fun side gig stuff or side project stuff.
So I was
thinking that surely there's a way
to hook up some kind of CI
to CD pipeline or some kind of
automated build where
we could just have
a release script that could
deploy the independent functions
and whatever they need to be.
Whether we can
package them as the complete function package and deploy that into Azure,
or if it's something where we do have to mess around with files and making sure that
the correct versions of things are out there. Right. Yeah. It'll definitely be an interesting
thing. Maybe we do a follow-up afterwards, like if we get further down.
I mean, it'd be really cool to see this thing through because I think there's a lot of useful things that we've learned in it and that will probably save a lot of people a lot of pain and time.
So that aside, that was off the rails.
Hopefully it was useful.
The next thing I want to talk about is there's just a few tools that really make working
in Azure a whole lot easier. So for Azure Functions specifically, there are the Azure Functions Core Tools.
Those are really helpful, right? If you get those downloaded, again with version two out, they
should work on Linux, Mac, and Windows. One that surprisingly doesn't pop up in a lot of places, and I don't know if you had a
chance to look at this one, Azure Storage Explorer. Have you seen this? I think I've played around
with it or seen it a very little bit. I have not really spent enough time with it. Dude, so here's
what I'll tell you. The Storage Explorer, first off, it's cross-platform, so you can get it for Windows, you can get it for
Mac. I've actually got it loaded on my Mac, and on the Mac it's called Microsoft Azure Storage
Explorer. And here's the thing: you can log into your account, hook it up, and it gives you
a view into all kinds of things. So your Azure Function storage, blob storage,
table storage, Cosmos DB. It's like this all-in-one tool, to where if you're working
on things and you're pushing things into a queue, or they've even got Data Lake Store in preview on
Mac. So I'm looking at it right now, and I mean, you literally have access to basically all your files that you've stuffed up into Azure somewhere.
Now, that may not sound all that useful initially, but the reason it is is if you have your function doing something like pushing data into a queue or into a blob storage so that it gets picked up and processed later by something else,
when you run your function, you can go up there and look and say,
hey, did it happen?
Is there a new file there?
Is there something going on?
And so instead of having to write some code
to stop a debugger and do all that kind of stuff,
you can literally watch it live.
So if you're going to mess with Azure Functions,
I highly recommend going and getting that.
It's free download.
It's super easy to use. It's really
pretty intuitive. And then the other one, this might seem obvious,
maybe, maybe not, but Visual Studio and Visual Studio
code, like no-brainers to me.
The surprising thing to me, and I don't know, what did you use
when you were coding yours?
Visual Studio.
Okay, so you were using Visual Studio. I was on a Mac, so I was using Visual Studio for Mac, which has come a long way.
But also Visual Studio Code, they've made leaps and bounds in that.
Because when this thing first started, it was kind of a bare bones type deal.
They've got hooks for everything now.
And so it's actually a pretty decent experience if you want to do things in
visual studio code.
So highly recommend those tools if you want to jump into it.
You want to lead us into Cosmos?
Sure.
So Allen's idea was
to take all the data from the podcast apps, or from the podcasts, and utilize the functions and put all of the data into Cosmos DB.
And then we would assemble the data there and then use that as our data store so that we could start writing reports and start making sense of that information.
Yep. And the reason why Cosmos DB popped up to me, really, I mean, being completely honest,
is because it's the new shiny toy.
That's really what it boiled down to.
But it's the fact that it's sold as the multi-model database, right?
And I was like, what does that mean?
You can put your relational data in here, your document storage data.
Like, you know, sounds interesting. You ever
played with any document DBs? A little bit, yeah. Okay. They have their place, right? Right, they don't
replace relational databases, but they certainly have good utility and function.
Yeah, and unfortunately, I've come across some people using a document DB in a relational database fashion. You know, I don't know your thoughts on it.
For me, it's less of a sin
to put DocumentDB type things in a relational world
than it is to try and force relational
on a document database.
Right.
So you feel the same way?
If I understand you correctly, yes.
So basically, don't try and force MongoDB to
be your relational database, right? If you want to shove a JSON document into SQL Server, I'm not
going to love you, but I'm not going to hate you as much, right? That's basically what I'm getting at.
Because SQL Server is the solution for everything. Man, he and I are going to have words. So, yeah, getting back to just this whole thing of Cosmos DB,
it's globally distributed, meaning this thing, you put data in it,
it's available everywhere, right?
And it's a multi-model database for any scale.
Like, those are big terms.
Your thoughts?
Are you confident?
It is some great marketing speak, and for the most part, from my experience, it is absolutely true.
That's what I understand, too. It's super fast.
So here's what they say.
It offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements, SLAs, that no other database platform offers, period.
That's pretty big marketing speak.
I mean, nobody, nobody offers it.
It can distribute your data to any Azure region with a click of a button.
So when it says it's globally available,
that's if you want it to be.
I mean, if you don't have anybody in Europe using your data, then why have it there, right?
Yeah, keep it close to where it's going to be used.
Exactly.
Keep it at the edge is what they say, right?
Right, right.
At the edge.
So they have these, it's called multi-homing APIs.
And this part to me is super cool
because what it'll do is if you are in Europe and your
data is in Europe and you're using my app, it'll automatically figure it out, right? Like that's,
that's pretty cool. If you're in the U S and you've got, you've got your data over here,
then it's going to see that it's going to geo locate you and say, the closest data center is
here. Get your data from there, get it back to you. So it's as fast as it could be with no effort on your part.
Right, right.
Yeah, and it seems these days that more and more people are going global.
So it's extremely important to make sure that you're giving your users,
you're giving your customers the best experience possible.
And if that just means moving the data closer to them, then so be it.
Seems like a no-brainer.
So, yeah, no application changes.
As long as you're using their connection APIs and all that kind of stuff,
you put it in there.
If you decide that tomorrow you're bringing Europe online,
you basically say, hey, Cosmos, I want you in Europe, and it's done.
Checkbox, check.
Done.
Beautiful.
All right, so the multi-model thing,
this is probably the part that intrigued me the most, right?
So it uses, now this is going to be some stuff that I'd never heard before,
but I'm being honest here.
It uses the Atom Record Sequence, ARS, based data model.
This just means that it natively supports multiple models.
I mean, that sounds good.
Sounds great.
Right.
So what that means is, what
are the multiple database models?
We talked about document DBs.
We talked about relational.
You got anything else?
It looks like we've got graph.
We've got key-value, table, and column-family.
Yeah.
And they say others,
I don't even know what the others are,
but I mean,
that just means the next one tomorrow.
That's right.
It'll support anything. Yeah. By the way, have you ever looked at graph databases?
A very, very little bit.
So cool, man, so cool. So a number of APIs are available in multiple SDKs; they have them
in several languages because they want people to adopt it. So here's the interesting thing about the SQL part of this: it allows for SQL,
but it's schemaless JSON database storage with SQL querying capability. So I've got to admit, I read
that as shameless JSON database storage. That too. Oh, the thing here that kind of throws me a little
is your statement earlier about, you know, trying to force relational into a document DB type world,
and it sounds like that's sort of what they're saying here, except this natively supports the
ability to query it properly, except you're not going to be enforcing schema, which may or may not be a problem for you. Right.
I guess it kind of depends on what your needs are. If you're looking at the scale and all that
kind of stuff, then you've got to weigh what's the most important to you. Yeah, the last
large project I was working on, it was a... can't say. No, no good
backing out now. It
was a document DB, and
lots of queries being
written against it, SQL-
like queries, lots of
stored procedures being
written against it. So
really, really interesting
stuff. Completely flexible. Okay.
All right.
So here's a few things.
And, I mean, I've got them all pointed out here.
But the key here is the flexibility that you get.
So you've got a SQL provider.
You've got a MongoDB as a service, which is still all of these are running on top of Cosmos DB.
It's underlying data engine.
Cassandra, Gremlin, which is the interface for graph databases,
and Table, which is key-value storage. So if you've ever worked with anything like DynamoDB, or, I'm trying to think,
you know some other ones off the top of your head, key-value,
it's really simple type stuff.
They're usually single-index
type things, but key-value storage, literally, if you think of a hash table, that's kind of
what it is, right? So it supports all these things out of the box, natively, and at a scale
that basically nobody else can touch. Do we need it for our podcast thing? I don't know that we need it, right? Exactly, need is definitely
a strong word here, but the ease of implementation is another benefit, right?
Definitely. So, I mean, we also probably don't need a relational database. No, that's a good point.
We could do flat file storage, really, if you wanted to boil it down, but who wants to do that?
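To give a feel for the SQL-over-JSON part, here's roughly what a query against the SQL API looked like with the DocumentDB SDK of that era; the account endpoint, key, database, collection, and property names are all placeholders:

```csharp
// Querying the SQL API with the DocumentDB SDK (Microsoft.Azure.DocumentDB NuGet package).
using System;
using System.Linq;
using Microsoft.Azure.Documents.Client;

public class EpisodeStat
{
    public string Title { get; set; }
    public int Downloads { get; set; }
}

class QuerySketch
{
    static void Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"), "<key>");

        var collection = UriFactory.CreateDocumentCollectionUri("podcasts", "episodeStats");

        // Plain SQL over schemaless JSON documents; no schema or index setup required.
        var popular = client.CreateDocumentQuery<EpisodeStat>(
                collection,
                "SELECT c.Title, c.Downloads FROM c WHERE c.Downloads > 1000 ORDER BY c.Downloads DESC",
                new FeedOptions { EnableCrossPartitionQuery = true })
            .ToList();

        foreach (var e in popular)
            Console.WriteLine($"{e.Title}: {e.Downloads}");
    }
}
```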
so the scalability of this thing you want to tell us a little bit about that?
Yeah, so it scales at per second granularity.
That's crazy.
Right.
Per second.
It's insane.
Yeah, so the longest you're going to wait is, oh, yep, that was it.
And then it's going to auto scale out for you to meet whatever the demand was.
Yep.
And storage scales transparently and automatically for you.
Yep.
Never have to look at it.
It says for a typical one-kilobyte item, Cosmos DB guarantees end-to-end latency of reads under 10 milliseconds.
10 milliseconds.
That's guaranteed. That's one of
their SLAs. And indexed writes in under 15 milliseconds at the 99th percentile, of course,
and that's all within the same Azure region. And it says the median latencies are way lower,
under five milliseconds. Dude, that's almost faster than what you get doing
local development with things. Yeah, I was hoping for under four. Can't please everybody. The
high availability, what do we got there? We got four nines of availability SLA for single-region
database accounts and five nines of read availability on multi-region database accounts. That's pretty awesome.
Tunable consistency levels, that means it allows you to choose how important your consistent reads or writes are.
And you see this a lot with horizontally scaled type things.
You can basically say, I need it to be more acid, right?
Like, I don't want a dirty read.
Or you can say, no, I have to have it and I'm
willing to wait for it. Don't worry about performance. Wait, what? Right? It's crazy.
What do you mean don't worry about performance? That's the whole reason we're doing it.
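A small sketch of what the tunable part looks like in code with the DocumentDB SDK of that era: the account's default consistency level is set in the portal, and a client can typically only ask for a weaker level than that default (Session here). Endpoint and key are placeholders:

```csharp
// Choosing a consistency level per client with the DocumentDB SDK.
using System;
using Microsoft.Azure.Documents;         // ConsistencyLevel
using Microsoft.Azure.Documents.Client;  // DocumentClient, ConnectionPolicy

class ConsistencySketch
{
    static void Main()
    {
        var client = new DocumentClient(
            new Uri("https://<your-account>.documents.azure.com:443/"),
            "<key>",
            new ConnectionPolicy(),
            ConsistencyLevel.Session);   // trade a little consistency for latency

        Console.WriteLine("Client created with Session consistency.");
    }
}
```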
So basically, what they say is you can rapidly iterate the schema of your application without
worrying about the database schema or index management. They're saying that you do not index this stuff. Like, have you ever fought SQL
Server trying to get performance out of some tables or some joins or some queries? Oh yeah,
yeah. And it's typically just a constant go-through: all right, what are you joining on, what indexes are on the table,
you know, what do we need to do, what kind of indexes do we need to create, are they clustered,
are they non-clustered, whatever. Yeah, thank goodness for Database Engine Tuning Advisor.
And if you don't know what that is, you should Google it. It tells me that I need an index, and I
believe it, right? The only problem with that is typically what will
happen is, over time, as you build up these things more and more, what you'll find is you've
now got 20 indexes on the table and some of them probably aren't even used anymore, right? So it's
great for finding the things that need to be done, but it also can leave things behind. But yeah, I'm
not a DBA, I just play one when there's not one. Nice.
So they say right here, the database engine is fully schema agnostic.
It doesn't care.
It automatically indexes all the data it ingests without requiring any schema or indexes.
And everything it does is fast.
It seems like magic.
Sounds like it.
Yeah.
I mean, I kind of want to try it.
Who's it for?
Podcasters.
Obviously.
We need the fastest. Yeah.
Any web, mobile, gaming, IoT applications, anything that needs to handle massive amounts of data.
That's super awesome.
At global scale.
So, I mean, really, really cool stuff.
Dude, there's something else I left out on the
Azure Functions side, and I think we should talk about it now.
And this is what happens
when you throw together show notes super late at night.
So one of the reasons
why Cosmos DB
was interesting to me, and
we talked about it a little bit, is the ability to
bind. So you can tell
your function to bind automatically to a Cosmos DB database or to table storage or to whatever,
in a declarative way. So we've all written code that goes out and makes a connection to SQL Server
or some database and then says, you know, Hey, give me the connection. Here's a transaction,
wrap it, do all this. You don't have to do all that.
You can literally, in a configuration file, say, here's my connection to this particular thing.
I want you to bind to this particular table and give me the data back.
That's awesome.
Nice.
So I wanted to mention that.
That's another reason why I wanted to go this route with these two things, because it's almost like a seamless type integration.
So now let's come back to part of the reason why we even said that we wanted to go this Azure Function route, and that's not having to deal with the infrastructure.
So what about monitoring?
Yeah, monitoring stuff's good.
It's probably better if your application's up and running, if your functions are doing
things, if, if users are clicking buttons.
Yeah, I agree with that.
So one of the cool parts is you get to stuff built in, like you don't have to go write
your own telemetry type monitors or aggregators or
anything like that. You can actually hook it in. So by default, there is monitoring built into
Azure Functions and it uses the storage that comes with the account, but it's pretty limited.
It doesn't have that much information, but you can also set up Azure Application Insights,
which gives you tons of stuff.
Like if you've seen the page, it's got the graphs and it's got all kinds of things on
there that shows you the latencies, everything.
Like you don't really have to do much.
Yeah, it's almost like the people that write that stuff know what is interesting and what
is useful.
They'll go ahead and give it to you.
And that same data that you'll see
in the application insights area
is going to be the same stuff
that will probably drive the heuristics, right?
Like the latency times and all that kind of stuff
that auto scale that out.
So you get that built in.
Like basically, it's not a problem.
Now, the only thing I'll say is
we talked about how cheap Azure functions work.
They said that if you hook up Azure Application Insights to your functions
and it's monitoring a lot, you can blow out your free stuff pretty quick.
So be aware of that.
But it's probably worth exploring and just figuring out how to dial it down.
But App Insights said my function was up all month.
That's right, because it stopped reporting.
All right. And then the other thing too is
Cosmos DB for monitoring,
it's built in. If you go
into your Azure portal and you
look at the metrics tab,
you choose the database you want to look at,
and it gives you everything.
That's pretty sweet.
You ever gotten a call saying that the
application's down?
Only between the hours of 2 a.m. and 4 a.m.
That seems to be when it's popular.
Yeah, I mean, it's kind of nice knowing that stuff's built in.
And with Azure Application Insights, you can hook in notifications,
you know, have it text you, wake you up at 2.30 in the morning.
But yeah, man, that stuff's all there.
It's usually best if the thing that wakes me up at two in the morning is not actually human. I agree and subscribe to that thought.
Actually, I'm not really good at six in the morning either. So we've got a few resources
we like. We'll have some links in the show notes to that. I mean, that kind of wraps up our talk
on what we wanted to get out there.
I mean, hopefully this gives you some insight
into why you'd want to use Azure Functions,
what Cosmos DB would truly be useful for,
besides just being awesome for collecting podcast stats.
Yeah.
Yeah, and I want to hear from people out there.
What are you doing?
Are you doing this in your day-to-day business
Are you playing with this on the side? What are some fun and useful things to do with
these tools? Yeah, I totally agree. You know what's funny? When I first heard about Azure Functions,
it's probably been about a year or two ago, the use case that I heard about was literally a lady
that was doing a lot of data transformations, some ETL loading,
and she was using Azure for everything initially,
like the tools built for SQL Server doing the ETL on that,
and she said it was way cheaper using Azure Functions.
So she would push data into a function, have it do the transform,
and then push it to wherever it needed to be.
I was like, oh, that's a really cool use case.
So if you can think of a use case, just make it happen. But SQL Server is the answer to everything. Oh, man.
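As a rough illustration of that ETL-style pattern (not her actual pipeline -- the queue name, document shapes, and connection settings below are all made up), a queue-triggered function that transforms a message and writes the result out through a Cosmos DB output binding might look something like this:

```csharp
// A sketch of the queue-in, transform, Cosmos-out pattern described above.
// Queue name, document shapes, and connection settings are illustrative only.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public class RawRecord
{
    public string Id { get; set; }
    public string Payload { get; set; }
}

public class TransformedRecord
{
    [JsonProperty("id")]
    public string Id { get; set; }
    public string CleanedPayload { get; set; }
}

public static class TransformRecord
{
    [FunctionName("TransformRecord")]
    public static void Run(
        [QueueTrigger("incoming-records", Connection = "StorageConnection")] RawRecord input,
        [CosmosDB(
            databaseName: "Etl",
            collectionName: "Transformed",
            ConnectionStringSetting = "CosmosDBConnection")] out TransformedRecord output,
        ILogger log)
    {
        // Do whatever transformation the pipeline needs; trimming and upper-casing
        // are just stand-ins for the real work here.
        output = new TransformedRecord
        {
            Id = input.Id,
            CleanedPayload = input.Payload?.Trim().ToUpperInvariant()
        };

        log.LogInformation("Transformed record {Id}", input.Id);
    }
}
```

For a higher-volume pipeline you'd likely swap the out parameter for an IAsyncCollector and batch the writes, but the shape of the function stays the same: data in from one binding, transform, data out through another.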
Hey, so I can put you on the spot here. Have you found or have you heard anything at Ignite that
you thought was really awesome so far that we should share other than these things?
There has been a lot of information. And I'll admit I've been sitting in the back of some rooms
and trying to get some work done.
So I just kind of let my ears pick up the keywords,
the buzzwords that I think are most interesting.
So I've flagged a couple of the sessions that I want to go back and review.
All kinds of offerings in Azure that I haven't yet really played with
that I really wanted to pick up and do a little bit deeper dive in.
That makes sense.
Yeah, I mean, Azure is definitely a big keyword here.
No question.
I think for me, it's been some of the orchestration like the AKS,
so Azure Kubernetes Service and things like that.
There was a presentation given, I think it was Xerox, it was a document storage type thing, and they were talking about how they took an application that used to take 24 hours to spin up at a new customer. They converted it to basically using Azure Kubernetes Service.
10 minutes spin up.
So really cool stuff.
But with that, you know, hope you enjoyed this particular episode shared between Coding Blocks and Six Figure Dev. And, you know, as always, if you haven't already,
please, you know, go leave a review on iTunes or Stitcher
or wherever you get your podcasts from.
You know, definitely go check out the Six Figure Dev.
If you haven't checked them out, check us out at codingblocks.net.
You got anything else?
Yeah, I just wanted to say thanks to Alan,
thanks to Coding Block guys for allowing me to come on
and share some of the time.
Really looking forward to doing more in the future.
Yeah, definitely, man.
And by the way, he's got a test-driven development book out that we gave away previously.
So if you would like to get your hands on it and learn some test-driven development, he's got a great way for it.
We'll leave a link down in the show notes for that as well. Yeah. Practical Test-Driven Development with C# 7 is available on Amazon
and Barnes & Noble and anywhere you might get your books. Excellent. All right. With that,
we'll catch you on the next episode. All right. And with that, even though he's absent,
let's head into Alan's favorite. Well, I guess he wasn't really absent, was he? He's here in your ears, but he's
not present as we record this portion of it. So anyways, let's head into Alan's favorite portion
of the show. It's the tip of the week. Yeah, I'm going to start us off this time. I want to tell
you about codesandbox.io, which Dennis introduced me to. So you should go follow
him on Twitter because he has been blogging and doing a lot of really cool things with it.
And what it is, is an online code editor.
So when I first kind of heard about it, I immediately dismissed it because, like, I have, you know,
code editors that I really like and they're really powerful.
And so why would I ever mess with this?
But what this does is give you, like, an instant setup and sandbox for doing, like, really common types of applications, like Vue or React or Angular or a bunch of other logos I don't even recognize.
So you can go in here and just click a button and it gives you a React app set up with a little editor on the left there with a couple files.
And then on the right is a sample of what it looks like.
So it's literally an entire coding environment and feedback and working kind of setup
all in your browser.
And the editing is really nice.
But what I really like is that I can one-click share it.
So if I'm like messing with something
and I don't want to mess with the hosting,
I want to show, say, Outlaw, like,
hey, check out this cool little app idea I have,
whatever, just a little prototype.
I can click it, share it over to you without having to go say, you know, what I would do otherwise, which
would be like go to GitHub, set up something, mess around with GitHub pages in order to get
it hosted. So I don't have to like buy a website just to show somebody a quick example. So it's
kind of like a, I would say like an evolution on something like a CodePen or something like that.
And it's actually really handy. You can host a bunch of stuff up there for free.
And then, I mean, it's kind of got its hosting built in there too.
And it also works really nice with Netlify too
if you want to kind of take it another step forward.
So you should go check it out
and go make some cool stuff really quickly.
I had to add on to you saying it was like a CodePen.
I mean, like one of the ones out here
that I happened to find was Mario Kart.
And there's a working Mario Kart. You can actually see the code that they've already done, but you can play Mario Kart.
Yeah, that's awesome. And you can, um, add an npm package and stuff like that. I mean, it's like just kind of having an environment set up on the website for you. And it looks like, if I'm reading this number right, and I'm going to double check because it sounds kind of too crazy, but they have 619,000 React websites hosted on codesandbox.io. Wow. That sounds ridiculous to me, but, um, that's what it looks like. So if I'm wrong, let me know in the comments.
All right. Well, man, now it's going to make mine sound kind of boring in comparison, but I found this when we were at Atlanta Code Camp recently. One of the tracks that I was on, it was a machine learning kind of conversation, and one of the slides was from the scikit-learn site, about that Python package.
And it was a really cool cheat sheet.
We've talked about cheat sheets, but this was like an algorithm cheat sheet.
And let me put a link in here for you so you can follow along with me, Joe.
All right.
If you're trying to decide, basically the way scikit-learn has this titled is choosing the right estimator.
So basically the idea is like, hey, you want to use machine learning in your application, but which algorithm do you use, right?
And so they have this algorithm cheat sheet, and it's really cool.
It'll walk you through like a decision... a decision... no, not a decision tree. What's it called? Ah, dang it. It's like a flow chart, kind of. Yeah, yeah. Where it'll be like yes/no questions, and it's like, hey, if you have less than 50 samples, then you can't even proceed; go get more data and then come back, right? But the cool thing, though, is that you can click on these algorithms and you can see the details behind it, like what that algorithm actually means.
So, like, once you get into a particular area, like if you're trying to do classification, right, it'll say, hey, here are the good algorithms, probably the best-fit, best-choice kind of algorithms for classification-type problems that you might want to solve, right, and then the different algorithms that you could use and why you might want to use one of them. So it was really cool, a little cheat sheet there, and I thought I would share that out. That's really nice. It's fun to look at too. It looks kind of like an
amoeba or something. Yeah. Yeah. All right. So with that, we hope you enjoyed Alan and John's talk.
And be sure to subscribe to us on iTunes, Stitcher and more using your favorite podcast app.
Be sure to leave us a review. You can head to www.codingblocks.net
slash review where you can find some helpful links there. And while you're at Coding Blocks,
check out our show notes, our examples, and discussion. You can hit any one of the show notes and participate in the discussion; we encourage that as well. Yeah, and send your feedback, questions, and rants to the Slack channel, codingblocks.slack.com, and make sure to follow us on Twitter at CodingBlocks, or head over to codingblocks.net and you'll find all our social links up at the top of the page. And Alan, you have anything else to add? Nope, that's it. You guys are awesome. Take it easy.
Take it easy, lemon squeezy? I don't know, what does Alan say? Goodbye? I think that's right.
That is something that he would say for sure.