Coding Blocks - Alternatives to Administering and Running Apache Kafka
Episode Date: June 23, 2024
In the past couple of episodes, we'd gone over what Apache Kafka is, and along the way we mentioned some of the pains of managing and running Kafka clusters on your own. In this episode, we discuss some of the ways you can offload those responsibilities and focus on writing streaming applications. Along the way, […]
Transcript
You're listening to Coding Blocks, episode 237. Subscribe to us on iTunes, Spotify, and more using your favorite podcast app, and leave us a review if you can. Visit us at codingblocks.net, where you can find show notes, examples, discussion, and a whole lot more. Send your feedback, questions, and rants to comments at codingblocks.net, or hang out on Slack at codingblocks.net/slack, or, oh, there's your turn, or head over to www.codingblocks.net to find our other social links at the top of the page. With that, I'm Joe Zack. Outlaw's out this week. And I'm Alan Underwood, and with the return of the real intro. That's right. I'm sure this is short-lived. He's gonna be so, so mad. He is going to be mad, but yeah, that's what he gets for taking a vacation.
All right.
So in this episode, we talk... we've now spent two episodes talking about Kafka and what it is, or the Kafka platform, I should say, and the kinds of things it gives you.
And we touched on it in one of my tips of the last episode.
And so we want to talk about some
Kafka alternatives in this episode. But before we do that, one of us is going to butcher a proper name. I think this will be you, Jay-Z. All right, I'm filling in for a lot tonight. So on iTunes we've got abugur7. That's... I think you nailed it, man. Yeah, we've got
a twofer. So we've got Spotify, we've got... and on Audible also, again. So thank you very much, we really appreciate that. I think you pronounced those last ones perfectly. Yeah, great. Yeah. All right.
And we've also mentioned it, and we're going to keep doing it here over the next couple of months: Atlanta DevCon. It will be coming September 7th. You can go to atldevcon.com, and, you know, I should probably do it myself, see how much it costs and all that. Usually it's dirt cheap, so, you know, be aware of it. Some cool stuff coming up. Yep, definitely. And also DevFest Central Florida. I've talked about this a few times. It's coming up.
It's been announced September 28th, 2024.
The call for proposals is up.
So if you want to give a talk, this is a really cool event.
They fly in people.
Historically, they flew in people from different areas of the world.
They brought in a lot of people from Florida and just other areas around.
So if you feel like traveling to sunny Florida in September, then you should submit a talk.
It's really cool.
We'll have a link there.
This is almost October in Florida, in Orlando.
So it's probably actually a nice time to be there.
Yeah.
It's really nice.
So you can still hit the water parks and stuff like, you know, bring bathing suit.
No problem.
Swimming's great.
All that stuff's fine.
You are into a hurricane season though.
So that's fine.
It just means there'll be a breeze.
Just have a plan B.
All right.
Awesome.
Okay.
So getting into the meat of this one, we want to talk about some Kafka alternatives or alternatives
to actually running Kafka and managing Kafka yourself.
And we wanted to do an episode on this because I think it's important. Because I think we've hit on it a little bit between me, you, and Outlaw in the previous couple of episodes: managing and running platforms, especially ones like this, can be difficult, right? So knowing the alternatives out there that maybe make your life a little bit easier is probably worth having in your back pocket in case you decide to go this route. If data streaming sounds interesting to you, then maybe research some of these things before you decide to go roll your own Kafka cluster, right? Yeah, for sure. Imagine getting started with one
of these technologies and not going with one of the cloud vendors.
And you're going to spend the first week just messing around with certs and like, why isn't it deploying?
And why isn't it scaling? Where are my pods?
Yeah, no thanks.
Yeah. And I do want to say before we get into this, this isn't in our notes or anything.
But if you were going to run a Kafka cluster, Kubernetes does ease some of the pains of it, right?
Like if you use an operator, like Strimzi specifically, it's done really well, and it helps you in a lot of ways.
Things that it does for you that you would have to do manually if you were to set up your own cluster on bare metal or even on VMs, right? So it's not necessarily easy,
but doing the bare metal and the VMs is definitely a step harder, right?
I would think.
Oh, yeah, for sure.
And I think, like, if you're trying to learn Kafka,
you've got enough just right there without trying to get into it.
Even Strimzi is fantastic, but there's some serious documentation to read through there. And, you know, the settings, like, there's some abstractions there. You know, it's really good about being kind of transparent as to what it's doing, but if you didn't know Kafka already, I think it would just make it that much harder. Yeah, definitely. All right.
So the first one we want to bring up here, and I think, well, Jay-Z and I kind of like this one, is Confluent. And if you haven't heard of Confluent, they've almost, I want to say, over the years sort of turned into the status quo Kafka resource. Maybe. I mean, that's where Kafka Connect came from, which is now part of the Kafka platform. It's where most of the source and sink connectors live, right? Like, so they are very much a housing of a ton of knowledge around Kafka. And I think that makes sense, because if I remember correctly, one of the dudes who created Kafka at LinkedIn left and started Confluent, you know, basically to be a commercial version of it. Right.
Yep.
For sure.
And I think they still run it actually.
I'm not totally sure.
I forget the name of the person who did it,
but they had a good partnership and everything.
It's always weird to me when that happens.
Yeah.
I'm going to just go start my own company for this little,
you know,
open source project that I championed. I would think like whoever their boss was be like,
yeah,
I let you do this little project.
I let you make it open source.
You said it'd be great PR, be great for the firm, and now you're taking off with it.
Thanks.
Yeah, yeah.
No lawyers involved at all, I'm sure.
So here's the cool part about Confluent.
So what I have is I have a link directly to the Confluent cloud, and they have the pricing.
This has changed a lot, Jay-Z, since we did this. But here's the
cool part. And it really allowed us, me, Jay-Z, Outlaw, and several other people to get started
with Kafka without having to learn all the nuts and bolts of it. Because what they allow you to
do is you can spin up Kafka in the cloud and really all you're dealing with is pushing data
to and from topics. Yeah, I believe you could spin it up in the cloud of your choice, too. I don't remember
exactly how the signup went, but I think you're able to like, you're able to either say like,
oh, I want AWS and it would do it. Or you could even say like, I want AWS, here's my project,
here's my auth. And so you're kind of paying some of those bills and they kind of tap their stuff
in on top of it. But I think that they made it really easy to kind of tuck into your area of
control.
Yeah.
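For a rough idea of what "all you're dealing with is pushing data to and from topics" looks like in code, here's a minimal sketch of a producer pointed at a managed cluster like Confluent Cloud, using the confluent-kafka Python client. The bootstrap server, topic, and API key/secret are placeholders, not real endpoints or credentials:

```python
from confluent_kafka import Producer

# Connection details copied from the managed cluster's UI (placeholder values).
producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
})

# No brokers, disks, or ZooKeeper to manage -- just produce to a topic.
producer.produce("orders", key="customer-42", value='{"order_id": 123}')
producer.flush()
```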
So what's kind of interesting, and what's changed a little bit here, is it used to be the pricing was just sort of based on your usage, right?
So your company would plug in a credit card, and however much incoming and outgoing data and however much you stored and all that, you'd get charged for it. And if I remember right, it wasn't unreasonable. No, it was good. Yeah. I mean, for what we were doing at the time anyways, you know, it was less than a thousand bucks a month, I want to say, if I remember right. Well, they now have several
tiers. They have the basic tier, the standard tier and the enterprise basic starts at zero
bucks a month. And then basically you can go in there and sort of choose your sliders about how much,
you know, network throughput you think you're going to need, how much data you're moving and
all that. And you'll get a price per month and it's usually not terrible. Then their standard
tier, it says is, is actually ready for most production type environments. So that's kind
of cool. Now those start at $550 a month, which isn't horrendous
considering they're going to manage it and give you, they say, two nines of... or is that considered four nines? That's four nines, isn't it? I always get confused. It's 99.99% uptime, which is really good. And there's infinite storage, you have audit logs, and various other things, right?
So that's actually really good for 550 bucks a month. And then the enterprise starts at 1650
and they give you that. Plus you also get private networking and you get the maximum
auto scaling capacity and they handle all that stuff for you. So again,
this is a really good way to get started so that you can actually work on creating applications and not managing Kafka infrastructure, which they're professionals at.
You know, it's funny.
There's actually a few things in enterprise that are not as good as standard, which is pretty rare.
But I imagine it's because enterprise has a lot more options and whatnot.
But KSQL, which is a way of running SQL commands, kind of abstracting away from setting up jobs for little lightweight tasks.
That's to be determined in Enterprise, but it's available in basic and standard.
And then Apache Flink is coming soon in Enterprise,
but it's available in basic and standard.
It's kind of weird.
So definitely if you're looking at hopping in,
I would really pay attention to that chart. You can't assume that Enterprise is just adding stuff. Oh, that's really interesting. Yeah, for sure. I mean, yeah, I guess when you're talking about that type of scale, they're not trying to host all your running apps too. But yeah, I can tell you, and Jay-Z can
chime in on this too. We had a really positive experience when we're using Confluent Cloud,
right? Like it's, it's how we got up and going with starting to write applications that were
using data streams. Yeah, even the reps that we worked with and everything, you know, they would kind of check in from time to time, which is just really nice, you know. So there's definitely some larger companies out there where you will never talk to a human being, ever, right? So it's a nice experience to have, like, "Hi, I'm Katie, your Southeast rep or whatever, just checking in." Yeah, they were really good and super nice people.
And anytime, if I remember right, anytime we had like technical problems that we ran into,
they'd be like, all right, let me get one of our engineering tier people on the phone with you
so that we can figure out how to make this work. So it was, it was very much a, we want you to be
successful. So we're successful relationship.
And that was really, really nice.
Yeah.
We had one dumb problem early on with a number of schemas.
You remember that?
There's a limit in the schema registry, which is kind of one of the add-ons.
And now I'm wondering why the heck we ever hit that problem in the first place. So I don't know what was going on there. But we were pretty early users after Confluent Cloud launched.
I can't see what the limits are now,
but I see that you can delete the schemas,
which wasn't available to us at the time.
It was fun times.
Yeah, we were almost like beta users
of the platform.
This has been five, six years ago now.
So I'm sure it's matured
and I'm sure it's gotten way, way, way, way better.
Yeah.
So, you know, a vote of confidence for them and for what they were providing.
All right.
So the next one, this was my tip of the week, I believe in last episode and I only sort
of teased it, but this thing is so cool that I think we have to talk about it.
So it's warp stream.
Uh, the, the website is www.warpstream.com as in like a
water flowing stream. All right. So, in fairness, I kind of lifted what they say, because they say it better than I could here, and it's marketing material, so I don't think they really care that we duplicated it a little bit, because, you know, the more eyeballs, the better. So I'll read off a few of these, Jay-Z will read off a few of them, but,
when you hear this, it's going to be like, Oh, that sounds interesting.
So before, before I even read a bullet point,
one of the important things here is,
and this comes from a blog article I read from them, and it's actually one that micro G had given to us. It was something like... no, "Kafka is dead, long live Kafka" was the name of the actual article. And I read it and I was like, man, this is incredible. And then I realized that it was an article that had been reposted on Medium from the main company that had actually written the blog post. Like, it was truly just a duplicated article over on Medium, which I guess a lot of companies do nowadays.
So the big thing is the people that created this, they were going, man, why is it such a pain? And
why is it so expensive to run Kafka in the cloud? And we've talked about this, me, Joe Zack, Outlaw, we've talked about this on several episodes. If there's one product in any cloud that is absolutely worth the money, it's blob storage. You can store as much garbage as you want up there and you don't have to worry about the size. You don't have to worry about anything. Yeah. You can even tier the storage for things that you're never going to access but want archived or whatever. And it's relatively cheap.
Like you can probably store all your photos up there from the past 10 years and it might cost
you 10 bucks a month, right? Like it's, it's actually a pretty decent deal. So I say all that to say this, they looked at it and said,
Hey, what are the features of cloud that are really easy to use and a pleasure to use?
And let's build a platform around that using the same Kafka protocols, meaning the same PubSub, the same, or not PubSub,
the same producer consumer and all the other protocols
and API type things that are there.
And so they sort of started from scratch with that in mind.
And that's where we ended up.
So now we'll read through some of these bullet points.
So they say WarpStream is an Apache Kafka
compatible data streaming platform built directly on top of object storage.
So blob storage, no inner availability zone, bandwidth costs, no disk to manage and infinitely scalable all within your virtual private cloud.
Zero. Zero disks to manage. That's amazing. That's actually one of the biggest pains. Yeah, that is the biggest pain with Kafka. Oh man, I've oversized my disk, you know, I didn't need five-terabyte SSDs, that's costing me a lot of money. All right, let's drop it down to one terabyte. Oh, doggone it, we got way more data than we expected.
And now we're getting data that's truncated or it's failing to write.
Oh, so let's resize it to two terabytes.
It's truly a pain in the butt to size your Kafka cluster in the cloud in a way that makes financial sense but is still usable.
Yeah.
Yeah, that's fantastic.
That's enough reason right there. I will get to it, but latency is the thing I'm waiting to hear how they address. They do mention that you end up being 10 times cheaper than running Kafka, because a lot of those costs are going to be storage. And agents stream data directly to and from object storage, with no buffering on local disks and no data tiering. So that's going to be fast.
It's going to be, you know, low CPU, all that sort of stuff.
And, you know, there's going to be increased network activity,
of course, but, you know, cheaper.
Yep.
You can create new serverless virtual clusters.
So this is interesting.
This is their own terminology.
When they say virtual clusters, they're doing it in their control plane.
So I think it's a way to – because, again, they're not actually setting up a cluster of Kafka servers.
They're just using object storage, and then they have bits in the middle of it that sort of manage it.
So I guess the whole virtual cluster thing comes in like this. You might set up a Kafka cluster in your company for financial data, right? And you're doing all things through that. Then you might have another cluster that is for customer data, because you want that more secured and locked down and all that, right? That's going to be, you know, I've got three brokers, I'm replicating X amount of times, and you're doing the same thing on your customer data: I've got five brokers and whatever. They do virtual clusters because really they're not changing the underlying infrastructure. They're just changing it via software: oh, this data is going to get routed over here, and this one's going over here. So there's really nothing to maintain, it's just... it looks the same. Yeah, it's
fantastic. Yeah, that's really sharp. Also, there's just all sorts of advantages that I like: if you ever want to start splitting stuff out, or you want to start bringing stuff together, it's a great step there. Yeah. Supports different environments, teams, or projects.
Oh, we just said this.
Those are some of those advantages we were talking about.
Teams, projects without managing any dedicated infrastructure, which is nice to reduce that overhead.
And there are a couple of things that you don't have to do with WarpStream, like upscaling your cluster because it's about to run out of disk space. And that definitely is something that is a concern on Kafka.
And it's kind of a pain to figure out how to fix that. We've talked about that, like shelling in, looking at files, trying to figure out where your hot spots are and what to do about it, and so he makes the call: okay, just delete that one, we'll figure it out. Well, you know, the other part about that that's actually frustrating is, we've mentioned we run Kafka in Kubernetes.
Stepping up the disk size usually isn't a problem, right? That's not terrible. But that example I gave earlier where you started out with, you know, five terabytes and, oh man, that's really
expensive. Now let's, let's drop it down to a two terabyte drive. That's a pain because now you have
to move all the data around, you have to repartition the data. Like, it is an absolute pain in the butt. So you don't have to do that. Yep. And, like, you would think that you might not run into that very often, but it's very easy, especially when you talk about data retention rates. You can imagine a CEO or, you know, some director or whatever saying, we're paying too much for Kafka. Let's change the retention on this data
from 90 days to 30 days.
And we'll save money, right?
One third of the storage cost.
And you're like, well, yeah,
after we finish moving all the data over.
See you on Saturday.
Right, exactly.
Yep.
Also, figuring out how to restore quorum in a ZooKeeper cluster or a Raft consensus group.
I don't know how that fits in exactly.
You don't have to.
You just don't have to.
Yeah.
It's not something you have to worry about.
It's totally distributed.
Yeah.
Cool.
All right.
Rebalance.
You don't have to worry about rebalancing partitions in the cluster.
That's another one. That's really fantastic and really painful. I mean, this is hitting all the big pain points.
Right. That's great.
And like Alan mentioned,
it's protocol compliant with Apache
Kafka. So if you've got an application using Kafka today, you could just change the endpoint of this thing and it should just work. All the messages that your library is sending to Kafka can now be sent to WarpStream, and that's it, which is really fantastic. It makes it really easy to swap between this stuff. You know, this is like some of those advantages of truly open source software. Like, we've seen this previously with Mongo, where there was a Mongo API that a lot of people had started using, and then Mongo got kind of ticked and took it away, took the ball and went home.
But Kafka has not done that, so it's nice.
So if you want to take your existing application today and just try it on WarpStream,
that's a few configs away, which is really cool.
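To sketch what "a few configs away" tends to mean in practice: the producing and consuming code stays the same, and only the connection settings change. A minimal sketch, assuming the confluent-kafka Python client; the hostnames here are placeholders, not actual WarpStream endpoints:

```python
from confluent_kafka import Consumer

# The same consumer code works against Kafka or a Kafka-compatible service;
# only the bootstrap servers (and any auth settings) need to change.
BOOTSTRAP = "my-warpstream-agent.internal:9092"  # placeholder; was "my-kafka-broker:9092"

consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "reporting-app",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```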
So we've talked about this in the past too.
Like Kafka has revved itself quite a few times,
but it's still super backward compatible
because they really have kept the APIs basically the same forever. If anything they add to it, it's very much additive and not so much destructive as they move forward, right? Yeah, yeah. You're on, like, 3.6 now, or no, it's higher than that. But, like, I've been working on upgrading something, and it's very easy for me to say, oh, let's see if this library still complies with the new version we're moving to, and I look and it's like, oh yeah, as long as it's on version 0.11 or greater. Yeah, okay, it's ridiculous. So I just realized you were like, I want to know
about these latency numbers. We might as well hit it now, because I don't have it in the notes, because they don't have it on their main page with all their marketing material. Is it that bad? So, yes and no. And we've got, you know, something that we can sort of line it up with
and talk about in parallel. So in Kafka, you're talking millisecond latencies, you know, from the
time something is gotten from a producer to written
to Kafka to when a consumer can get it: milliseconds, like, seriously, five, three, two, somewhere in that ballpark. With WarpStream, they said, if you can handle latencies up to a
second, then that's what you're dealing with. Okay. So yeah, it's really not bad. So here's the thing to wrap your head around, and I can draw a parallel with how Google Cloud Storage and Pub/Sub work. So when you're talking about a second of latency, that means from the time a producer produces the write to Kafka via the API or the protocol, to the time that that's available for a consumer to read, you're at a second. All right. When you compare that to regular Kafka, like I said, the producer writes it, and it's available to the consumer within two milliseconds, three milliseconds.
It's super fast and it depends on your disk speed and all that.
Now, what's interesting, and this is where I think it won't matter much to most people if you think about it like this.
That doesn't mean you can't get 2000 items a second out of this. It just means that if you write those 2000
items, the consumer is going to be notified about those 2000 items a second later. So you can still
stream 2000 events a second or 5,000 or 6,000 events a second. It's just that from the time
that it writes to when it gets read is a second later.
And the reason I said I can draw a parallel from this from Google Cloud Storage to PubSub is a similar type thing happens. And I'm sure you can do the same thing with Azure Event Hubs
and AWS Kinesis and their notification systems is when a document or a blob is written to cloud storage, you can have it notify via in Google,
a PubSub subscription that, hey, there's a new document there. So from the time that you actually
wrote that thing to when you're going to get the notification might be a second later.
We've definitely seen thousands of messages hit a second, right? So there were a thousand objects written to cloud storage.
We got notified of it a second later,
but there were a thousand of those records that came through a second later.
Right.
So that, I guess don't get tied up with a second being, Oh man,
I'm only going to get one a second.
Yeah.
No, you're not going to get notified of it until a second later.
And if you had 10,000 things happen or 10,000 things produced a second from that point in time, you're going to get 10,000 messages that a consumer can use.
Yeah.
So the throughput is not affected.
So if you get a million messages, it doesn't mean a million seconds.
It just, you know, however long it was going to take plus one.
Yeah.
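A quick back-of-the-envelope makes the same point: added latency is a constant offset on when you see the data, it doesn't divide your throughput. The numbers below are just the illustrative figures from the conversation, not benchmarks:

```python
# Latency delays *when* you see the data; it doesn't change *how much* you get per second.
events_per_second = 5_000       # sustained throughput (illustrative)
extra_latency_seconds = 1.0     # roughly one second of added end-to-end latency
total_events = 1_000_000

time_until_all_consumed = total_events / events_per_second + extra_latency_seconds
print(time_until_all_consumed)  # ~201 seconds -- "however long it was going to take, plus one"
```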
It's, it's the latency. It's, it's truly, it's exactly what you just said.
Throughput's not affected. Latency is basically, you know, plus one second to what a regular Kafka
broker would be. That doesn't seem bad. I mean, I don't know, maybe if you, if you have some,
some medical equipment that needs, you know, 10 millisecond interval type reaction. Sure. If you're dealing with regular reporting data or most things that most businesses deal with, I don't see that being a problem.
Why didn't I think of this?
It's a good idea.
Beautiful, right? It's so beautiful. All right. So here's the next bullet point. Never again have to choose between reliability and your budget: WarpStream costs the same regardless of whether you run your workloads in a single availability zone or distributed across multiple. The reason they bring this up, and we mentioned it on the last episode:
if you run a Kafka cluster the way that you kind of should, to where you have brokers distributed across multiple availability zones, you're getting charged. If you're doing replication, you're getting charged for data that's sent across from one availability zone to another AZ. So you're eating those network costs, the ingress and the egress costs, on that. With this, because all you're doing is writing directly to cloud storage, whether it's GCS or S3 or blob storage, you're not incurring any of those costs. The cost of having things in blob storage includes the network traffic that's used to write and read from it.
That's really nice.
You mentioned that their unique cloud-native architecture was designed from the ground up to use the cheapest and most durable storage available in the cloud, which is great.
It commodifies that storage, saves that money.
Warp stream agents also use object storage
as the storage layer and network layer.
So it sidesteps issues with interzone bandwidth,
which is really nice.
So basically what it means is typically
it's going to be the cheapest way to communicate.
The cloud nickels and dimes you.
It'll charge you extra for talking outside that network. And so they keep it all on the network, which is really smart. So what's interesting is, when they talk about their WarpStream agents, I think that's similar to, like, when you have consumers and producers, you have offsets and things that are stored in a Kafka topic. That's basically what these agents and things are doing, and so instead of using Kafka topics, they're writing it to blob storage. Yeah. All that metadata, whatever any worker might need to fetch or, you know, anything that it needs to pitch, is all going to be in the same spot.
Yep. And it can be run in BYOC, so bring your own cloud, or in serverless. So it's nice. This is like we talked about with the fancy plans on Confluent.
Also, you can provide all your compute and storage. So basically, you can buy WarpStream as just a control plane, which is really flexible. And when you're in that mode, the BYOC, your data never leaves your environment. The only thing that's really happening with that control plane is it's managing probably offsets and that kind of stuff and routing to the different buckets. So when we say that you're keeping everything in yours: all your applications are running on your own compute, all your data is internal, and you're accessing it from your applications. Nothing leaves your area, which is, you know, important for a lot of companies. Yeah. Compliance. There's all sorts of reasons. GDPR. Yeah.
So serverless, too, is fully managed by WarpStream in AWS and will automatically scale.
So I am just reading this line in the notes now,
obviously I don't understand what they're talking about here.
Are we saying that the control plane always runs in AWS or did I?
No, no.
So there's two modes.
There's bring your own cloud and there's serverless.
And if you do serverless, that means they run everything for you, right?
So now in this case, my guess is when you use theirs, it's going to have to access the data
somewhere.
So like when I mentioned, if you have a streaming app in BYOC, then that's going to be running
on your compute.
If you have a streaming app that's running in serverless, that's running in their cloud
infrastructure on their compute, which means that data is definitely crossing boundaries.
Right.
But if you don't, if there's not sensitive data and that kind of thing, then they manage the entire thing for you. All you do is write your application and it just runs, and they'll scale up and down. They say it'll even scale down to zero running instances, right? If it doesn't need anything. So, yeah, those are the two modes. All right. And then, like we said, you can run it in your own cloud: AWS, GCP, or Azure.
And agents are also S3 compatible, so it can run with S3-compatible storage, such as MinIO, which we've talked about before, too.
So if you want to provide your own storage layer for whatever reason, or you're using one of the clouds that isn't natively supported, then that's the way to do it.
Yep.
And so some other things that aren't in the notes are on their page, on their homepage,
they actually show some cost estimates on, you know, running with WarpStream versus running
in the cloud versus running on-prem.
And I think, I don't want to talk about it too much here because the details are sort
of fuzzy because, you know, it's hard to break all that stuff down, but it's worth going over there and taking a look at.
Again, to me, this is a genius, brilliant use of super cheap commodity things that are available in the cloud, which is cloud storage, and then leveraging a piece of technology to just route that traffic
back and forth between that stuff. That's absolutely brilliant to me. So it's like Jay-Z said, why didn't I think of this? Because we've all had the thought: man, if we didn't have to manage the disks, if we didn't have to do this... and this basically allows you to do that.
Yeah, that's really fantastic. And we did want to mention two other alternatives that have kind of gotten some press, some paparazzi on them. The first is Red Panda. Red Panda is the slimmed-down, native, Kafka-protocol-compliant drop-in replacement for Kafka, and they also just recently announced Redpanda Connect. If you're on social media and have ever talked about Kafka or knew about Kafka, then you get ads for this company all the time. And its main kind of differentiator from Kafka, literally, is performance. So, you know, I mentioned that it's native, I mean, it's written in C++. They trimmed some fat, some of the older features. And even though it's, you know, Kafka API compliant, the actual brokers and stuff themselves just run faster.
So you're going to save a little bit of money and they really are focusing on
that latency.
So unlike WarpStream, if that second really matters to you, if you're like, oh, I really need that data as soon as possible, I'm running medical equipment or something, then this might be an alternative that you want to look into.
You can skip that JVM overhead and use their bits.
And it's pretty cool.
Hey, real quick, because I'm looking at their page and their overview as well.
So there are two things.
It's almost like an inverse of what WarpStream's trying to do. It says its powerful nature gives you up to 10 times lower latencies than Kafka, which is really fast, like crazy fast, and it can reduce your total cost by 6x without sacrificing data safety. So it's not quite as low cost as what WarpStream is,
but it's faster, right?
And so cheaper and faster, like he just said,
but that's what they're pushing out there.
Yep.
So yeah, you're going to lose some flexibility.
You're going to lose some of those features from Kafka,
but you also save money and latency.
So it's pretty cool that you have these options to weigh
And there's another one I wanted to mention: Apache Pulsar, which has been around for a while, but it's newer than Kafka. It's very similar to Kafka; everyone who says Pulsar immediately follows it with, you know, "and here's how it's different from Kafka." It's very similar, but basically they just added some flexibility around the storage tier. So they've got a compliant wrapper for interchangeability, but they also have their own protocol. So it can work with your older apps, but they recommend using their API for some of the more advanced features. But basically, they just made it easier to swap out your storage. So they do have some options there for offloading functionality, like S3 or GCS. I haven't looked at it; I don't know how deep that goes or how well it compares to WarpStream. When I first heard about it, that was not really a major selling point, you know, it was more just about the flexibility. They've got some features for multi-tenancy and replication and stuff, so it sounds like it's just another option. I'm not too familiar with all the ins and outs of it, but, you know, if you're looking at Kafka and you're looking at getting into it, it's worth looking at this and kind of thinking of it as another variant if you want to. Man, see, I'm still looking at the Red Panda stuff, so
apologies for bringing it back. I was curious, why Redpanda Connect, like the Kafka Connect? But yeah, why were you doing this, Kafka Connect? Yeah, I'm curious. But they say "so reliable it's boring," maybe that's one of them. They say that they have over 220 connectors. I don't know, this is all super interesting. The problem with information like this, Joe, is I want to try it all. Yeah, and so it's completely counterproductive, but it's great information. Yeah, for me, like a pessimistic read of it would be, we have our... it's not even pessimistic: we have customers using our stuff, and they're also using Connect, and so we figured if we ran Connect our own way, then we could charge them for that.
And then, you know, it's a win-win.
Like the customers are managing less.
They're not like there's this component that they were constantly having to bolt on to our solution.
Now we own it.
It's simpler.
And we make a little money off the top.
Oh, check this out.
They actually have a bullet point here for it.
Resource efficient.
Three times less compute resources than Kafka Connect.
A single binary that's 128 megabytes with all the connectors included.
So I believe that Red Panda is written in Go,
or at least a lot of their stuff is in Go, what they're talking about here.
Or C++, but either way, native.
Yeah, so it may just be the Kafka connectors written in Go, but basically they're saying you don't have the big bloatware of the JVM, so we can do things better and faster. So yeah. And we talked about it a little bit: it's definitely got that kind of Java ecosystem to it, where it's like, oh yeah, you could do everything. You just get it, put some JARs here and configure the paths, and then iron out any sort of differences
in this path or any shared libraries that you may need to shade or combine or reuse.
It's like, you know, it's just kind of a mess.
It's super flexible, but you're going to get your hands dirty.
Man, I want to try it.
So, all right.
Excellent. Okay. So this is usually the time of the show. Man, how is it? We've already talked for 40 minutes almost. Yeah. Kafka, right? Yeah, yeah. So we don't have Outlaw here, so we can't do the Mental Blocks. I mean, this might be the only way I could win, actually. But I don't... actually, I brought a game. Oh, did you? All right. Well, okay. Well, then before we do that, first, if you haven't already, and we say it all the time: if you're listening and you made it this far in the show and you haven't left us a review, please, you know, drop us a line, leave us a review, share a funny that other people might enjoy seeing. And as always, we appreciate those that take the time to do it, and it really is heartfelt. We have read every single one of them since the advent of the show 10 years ago. So, very much appreciated. Please, you know, do that. Yeah, thank you very much. And this time, so I came up with a game that you can play with just two people, and also everyone in your cars, on your hikes, walking your dog, hanging out at the office. It's very simple. What you need to do is guess the number that I'm thinking, and if you get it right, then you win. And if you don't... Is it between one and three? Any number. Any real, rational number.
Really?
And if you get it wrong, then I win.
All right, 42.
Dang it.
All right.
There you go.
I win.
All right.
Sweet.
You want to guess mine?
12.
How'd you know? Dang. Or connected. Don't say Wainwright. That's what it is.
All right. So if you guess those numbers and you're in your car, you're walking the dog, whatever, congratulations, you won.
If not, then you lost big time.
That's right.
Just quit.
Awesome.
All right. So the last portion here, I thought that what would be interesting is to talk about the cloud provided things that are not Kafka protocol equivalent, right?
Like, so the things that we talked about here up at the front of the show, you can write the same code and point it to these different, different Kafka endpoints and
they should all work, right? Like that's, that's how it essentially goes. These aren't going to
be like this. However, they're functionally equivalent and we can talk about one of them
pretty well, and then the other two, I think, you know, should sort of follow suit. So there's Google Cloud, they have what's called Pub/Sub. In Azure, they have Event Hubs. And in AWS, they have Kinesis. And those are all more or less functionally equivalent. They're publisher-consumer platforms that can work really fast and give you streaming platform capabilities. So I think we can talk about Pub/Sub and talk about some of the good, the bad, and the ugly with it. Pub/Sub is kind of interesting because it kind of straddles a couple of different lines. Like, we've talked about RabbitMQ, you know, a few other kind of traditional queues, and Pub/Sub definitely has more of what I consider to be publisher/subscriber-oriented features.
So you have multiple producers, multiple consumers can share the same topic.
You're going to see numbers around latency.
It's got that kind of thing, like we've talked about, slicing off bread or slicing off butter, where you can kind of configure it and say grab 1,000 at a time. And if you're publishing, wait until you have 50 or wait for five seconds, whichever comes first. And it's got very Kafka-like features, but it's also got some features that are more like a traditional queue, which is pretty interesting. So, you know, you can have workers kind of grab items off the queue, and if for some reason they can't process them, they put them back, and you can do retries and things like that. Well, actually, hold on. So there's one of the things that was, I guess, not clear when I first started using Pub/Sub: you can't actually have multiple subscribers to the same... to the same topic, or to the same subscription? So... oh, not the same subscription... yes, the same subscription. Yeah. Well, so for the topic, you can create multiple subscriptions and then you can have, you know, consumers subscribe to that subscription. But,
and here's what I mean. So for anybody that's, that's not fluent with this stuff,
just like Kafka has topics, PubSub has topics. That's where the data is actually written.
Now, the biggest difference between something like PubSub and Kafka is with Kafka, you just have other things subscribe to that topic.
That's it, right?
So where the data is written is the same place where it's consumed from.
It's very simple.
There's an abstraction in Pub/Sub that is called a subscription, and you have to set up that subscription for each application that wants to be notified of things that happen in that topic. And so, when I said that you can't have multiple consumers to the subscription: as soon as one consumer gets notified of something from that subscription and that record is acked, it's gone. So it's actually very much more like a RabbitMQ-type setup in that regard, in that the data just sort of disappears after it's been, you know, quote-unquote seen.
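To make the topic-versus-subscription distinction concrete, here's a minimal sketch using the google-cloud-pubsub Python client. The project, topic, and subscription names are placeholders, and the topic is assumed to already exist; the key point is that each consuming application gets its own subscription, and an acked message is gone from that subscription:

```python
from google.cloud import pubsub_v1

project = "my-project"  # placeholder project id
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project, "orders")
sub_path = subscriber.subscription_path(project, "orders-reporting-app")

# One subscription per consuming application; a second app would create its own.
subscriber.create_subscription(request={"name": sub_path, "topic": topic_path})

# Producing looks a lot like Kafka: write bytes to the topic.
publisher.publish(topic_path, b'{"order_id": 123}').result()

# Consuming goes through the subscription; acking removes the message from it.
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 10})
ack_ids = [m.ack_id for m in response.received_messages]
if ack_ids:
    subscriber.ack(request={"subscription": sub_path, "ack_ids": ack_ids})
```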
Right. Yeah, and you can have, yeah, multiple workers sharing a subscription. What kind of weirded me out at first
is I remember trying to, like, go and use the UI and be like, well, let me just see what the data looks like. I'm sure there's UI there that'll just, like, show you the top 10 records or something, so I can see if there's anything in there while I'm working on it. And no, there's not. I mean, there kind of is, but the way you do it is exactly like Alan explained: you have to create a subscription just to look at the stupid data in there, which is kind of a frustrating user experience. But it makes sense. I mean, the thing is what it is and you have to follow its rules. I wish Google would make it a little bit easier to do that, but it does work and, you know, it's really not intended for human eyes. No, it's not. But on the flip side, here's the biggest downside to me
for each one of these, Pub/Sub, Event Hubs, and Kinesis: you are coding to a cloud implementation.
So you might ask, well, why wouldn't I just do this?
Because I don't have to manage the infrastructure.
I don't have to run a Kafka cluster.
I don't have to do any of that.
So into that, I'd be like, yeah, you're right.
It's a whole lot easier to just get started with. However,
you are definitely doing vendor lock-in when you do this type of thing, unless, and there probably
is some sort of library out there that is an abstraction over these things. I wouldn't doubt
it at all. But if you program directly to PubSub or to EventHubs or to Kinesis, you are basically writing code specific to that
cloud provider. And maybe that's the right thing to do, you know, especially if you're in startup
mode or you've cut some sort of great deal, which by the way, most people do with cloud providers.
So, hey, if we spend, you know, X amount of dollars with you per year, then you get a 70% discount, right?
Maybe that's worth it, right?
You can come to market faster with your code because you're not trying to write abstractions.
You're not dealing with Kafka clusters and all that kind of stuff.
But just know it is vendor lock-in.
Yeah.
And, you know, there's the old school.
I mean, it's not old school.
It's the proper notion that you can abstract around this and just have multiple, you know, connectors and not have this code invade your app. And yeah, you can totally do that, but you are going to spend a lot of time re-implementing the features and the configurations and all that sort of stuff. And then ultimately, when you publish it, you're going to be, you know, going to Pub/Sub, and so you need to make sure it works with that. And so developing those abstractions gets really complicated when it comes to Pub/Sub-type stuff. It's got this, like, invasive effect on your application, where it's such a strong paradigm and such an opinionated way of interacting with
your data that it's hard not to write your application around it.
Yeah, true.
Especially when it's so core, right?
It's like what we've talked about in the past, like when you're dealing with a database,
whether it be Postgres or SQL Server or whatever,
that's kind of the heartbeat of your system.
And so that is where you think about things first.
So yeah, it's real hard not to write things around it.
Now, here is a plus for using one of these message streaming capabilities or platforms out there in the different clouds: because they know that's how messaging works throughout their system, it is super well
integrated into all their products. And by that, what I mean is like the example I mentioned earlier, if you're using Google cloud storage, right?
So GCS, it's real easy to set up a topic in PubSub to say, hey, every time an object is written or a new object is created, create me a notification in a topic.
Anytime an object is modified, put me a notification in that topic. Additionally, you can have all kinds of
things to where, you know, monitoring... if you have monitors set up for your applications or whatever, you can have it fire off messages to a topic that you have subscriptions on for alerting, right? Like, these messaging queues in Google and Azure and AWS are very well integrated into all their products out there.
Right.
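For a rough idea of what that GCS-to-Pub/Sub wiring looks like, here's a sketch using the google-cloud-storage Python client to register object-change notifications on a bucket. The bucket and topic names are placeholders, and the Pub/Sub topic is assumed to already exist:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-data-bucket")  # placeholder bucket name

# Ask GCS to publish a Pub/Sub message whenever an object is created or updated.
notification = bucket.notification(
    topic_name="gcs-object-events",  # placeholder Pub/Sub topic
    event_types=["OBJECT_FINALIZE", "OBJECT_METADATA_UPDATE"],
    payload_format="JSON_API_V1",
)
notification.create()
```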
Another example: I remember when I was working, I think I'd gone to maybe a Microsoft MVP event or something, but they were basically having people play with their serverless functions. And one of the things that's beautiful is you don't even really have to code connections or anything. You just have a type of Pub/Sub in there with a topic name or whatever... or, not Pub/Sub, I'm sorry, Event Hubs, and it'll automatically flow data through, right? So they're very well hooked up between different managed services within the cloud. So again, you're definitely doing vendor lock-in, but your ability to sort of quickly develop things, because they have hooked it up so well, increases. Like, your productivity
can go up quite a bit. Yeah. And yeah, this stuff acts as a glue between a lot of other services and stuff. It just makes it so easy to kind of check that box, and now you may be using it in a minor way, and you may never have even seen it, but it's just interacting in between two systems and you're paying for it. Yeah. Now, I don't recall ever
seeing in pub sub where where you have to set up partitions and things like that.
So like with Kafka,
you do right.
Like we mentioned that in Kafka,
you sort of have to say how many partitions you plan on having and all that
kind of stuff.
I remember seeing that similar type thing reflected in Kinesis.
Yes.
So Kinesis, like, AWS, what they did... they didn't even try and hide the fact that they were using Kafka under the covers, right? Like, they just passed through all the same stuff, whereas Google Pub/Sub definitely abstracted that stuff away. I don't even know if they're using Kafka behind the scenes; I'd be surprised if they weren't using something similar. But, you know, it's funny how that happens. And the reason that matters is, with Pub/Sub, when you're using their client SDKs or whatever, you don't have to worry about, hey, was this data partitioned over there?
Do I need to rekey it the same or whatever?
You don't care, right?
You just get data and it works. When you do it in Kinesis, you actually have to care about some of that stuff, right? Because your applications will be written differently based on how that stuff is stored behind the scenes. It's been a long time since I did anything with Kinesis, so I just gave it a quick Google, and it's very similar to Kafka, you know, as you mentioned, built on Kafka, but it's streams and shards instead of topics and partitions. Right. Just, like, dumb stuff like that.
Yeah.
Yeah.
Rename it and it'll all be fine.
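Just to show how directly the renamed concepts map, here's a rough boto3 sketch of producing a keyed record to a Kinesis stream; the region, stream name, and payload are placeholders, and the partition key plays roughly the role a Kafka message key does for partition routing:

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # placeholder region

# stream ~= topic, shard ~= partition, and the partition key decides which shard gets the record.
kinesis.put_record(
    StreamName="orders",          # placeholder stream name
    Data=b'{"order_id": 123}',
    PartitionKey="customer-42",
)
```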
So I'm trying to think,
is there anything else worth calling out?
I think, like Jay-Z said,
you end up coding very heavily around it.
And probably the best example I can give is when you write a Kafka application, you know, consumer producer,
it's actually pretty simple code, more or less. When you're doing something with like PubSub,
it can be very simple. It actually can, but it can also be very complicated. Like you can tell it,
Hey, how many things do I want to store in sort of a queue, and how long do I want it to wait before replying, and all kinds of things.
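As an illustration of the kind of client-specific knobs being described, here's a hedged sketch of the google-cloud-pubsub streaming pull API with flow control settings; the project, subscription name, and limits are placeholder values:

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "orders-reporting-app")  # placeholders

def handle(message):
    print(message.data)
    message.ack()  # explicit ack, or the message gets redelivered later

# Client-side knobs like this are what end up being very SDK-specific:
flow_control = pubsub_v1.types.FlowControl(max_messages=1000)

future = subscriber.subscribe(sub_path, callback=handle, flow_control=flow_control)
try:
    future.result(timeout=30)  # block and process for a while
except TimeoutError:
    future.cancel()
```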
And you end up doing very specific things for, for that client, for that SDK.
So again, I guess it's, it's powerful.
You can have push or pull.
They have different modes of operation, that kind of stuff.
And again, I'm sure event hubs and Kinesis all have similar things to this.
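As one example of those "similar things," here's a rough sketch of publishing to Azure Event Hubs with the azure-eventhub Python SDK; the connection string and hub name are placeholders:

```python
from azure.eventhub import EventData, EventHubProducerClient

# Placeholder connection string and hub name.
producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>",
    eventhub_name="orders",
)

# Batch up events and send them, much like producing to a topic.
batch = producer.create_batch()
batch.add(EventData(b'{"order_id": 123}'))
producer.send_batch(batch)
producer.close()
```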
Um, so again, not Kafka protocol compliant. None of these are. They are very much, hey, you know that you're buying into our cloud, right? Whether it's Google Cloud or Azure or AWS. Which, by the way, still, I think my favorite to date that I've messed with was Azure.
I don't know about you.
Yeah, I mean, I'm a Microsoft fanboy, but it's just something about the user interface and the way things work and the way they arrange things and organize it.
That just fits my brain better.
I think so, too.
Yeah. And their documentation is still, to this date, one of the best out there. But I would say that if I were going to rank these in terms of just my enjoyment of using them, it'd probably be Azure, then Google, then AWS. Although AWS by far has the most product offerings out there. So yeah, you just have to Google what each one means. Yeah. Right. Every single time.
So,
but all three of these will give you streaming platform capabilities, and super fast, low latency. You know, I don't know if AWS gives you guarantees, like one-time delivery guarantees and all that. I'm sure they all do.
So you get a ton of functionality and you don't have to manage any infrastructure.
However, one of the things that's interesting that's worth knowing about, like with PubSub
for sure, you have a topic, you get charged for data storage in that topic.
As I mentioned earlier, you can't share a subscription, right?
You have one application that can use that subscription. So if you have multiple things, like, let's say you have those notifications being written to a topic when data is written to a GCS bucket: if you have multiple applications that care about those writes, then you're going to have multiple subscriptions, and you're paying for all those message deliveries across all those subscriptions. So, you know, things can start adding up pretty quick, depending on how
many notifications are being sent and all that kind of stuff.
So it's all stuff you have to be aware of.
It's all stuff that is, it's not hidden from you.
It's in their pricing calculators, but it can sneak up on you real quick.
If you're not careful about how much you're doing, and not realizing you're actually getting charged for all these fees. So yeah, definitely get someone else's credit card for that. Yeah, don't put your personal one in there, unless you just want the points. If you... oh yeah, if you're reimbursed, that's good. There's your tip of the week. That's right, use your Costco credit card, because you get two percent back on every purchase, right? So yeah. So I think that was a pretty good
walk down various alternatives, to both Kafka-protocol swap-out-ability and similar functionality. I don't know, do you have any other thoughts on any of this? No. Um, all good stuff.
All right.
I want to try Red Panda now and I want to try WarpStream. Both of those sound super interesting to me.
Yeah, for sure.
All right.
Uh,
so we're getting onto your favorite portion of the show: Tip of the Week.
All right.
So I got one for you.
Uh,
there's an app on Android and iOS called Chord AI, and it uses artificial
intelligence, you know, to figure out the chords for a song. And the way it works is basically
they're integrated with various other apps on your phone and they'll ask your permission. So like I'm
on iPhone. And so I go in there and I can choose from my library, which opens up like Apple Music,
or I can choose YouTube and, like, find a song on YouTube and start playing it. And it actually uses the phone's mic... also microwave... the microphone to listen, and after just, you know, a fraction of a second, it will show you like a rough outline of the chords that are used. So I tried this on a couple of different songs, you know, some where I knew the chords very well, and, like, yeah, yeah, I just dropped them in there.
And then I played some weird stuff to see what happened.
And it did a pretty good job of guessing.
And, you know, even like weird kind of like atmospheric sounds and stuff.
It would kind of give you a guess.
It's like, yeah, it's basically like a D flat minor nine.
I don't know.
But it worked out pretty well.
And it was cool.
And the free tier that I was using. And it it worked out pretty well. It was cool. And the free tier was all that I was using.
And it gives you some basic chords.
But if you want to pay a little bit extra every month, like $8 a month, I think, was for the fancy version.
And there's a discount, of course, if you pay for a full year ahead of time.
It just gives you more accuracy.
It was really cool.
I really like this because I'll hang out with my niece and
she'll want me to play some Taylor Swift song I've, you know, heard before, but, like, I don't want to go Googling, and you pull up the site and it's got all these ads, or you have to watch a video, and you're like, this thing's gonna be like four chords. Like, it doesn't need to be perfect, it just needs to be close enough that I can sing along with it poorly, and, you know, she'll get a kick out of it. And so it's so fast to be able to, like, hit play, and now we're hearing the song she wants to hear, I'm getting the chords, and so next go-around, like, I got it: G, C, D, A minor, whatever. That's awesome. So Uncle Joe is her Taylor Swift swap-in, that's what I'm hearing. Yes. Yeah.
what i'm hearing yes yeah all right that's that's good yeah that's awesome do you do the dance also no she's she she doesn't dance in part she doesn't
for both of us okay fair oh that's amazing very cool all right so this actually came out of some
some garbage that i had to do recently and i call it garbage because trying to correlate things from
json and yaml and all kinds of other formats can be really tedious.
So here's, here's the setup.
I'll try and make this real short back in the day, you know, a couple of years ago,
uh, anytime that I needed to sort of explore data and find out where the relationships
were, what things were in a set, what weren't in a set, what things did,
you know, what wasn't, you know, excluded from the set, that kind of stuff. I would typically
load that stuff, like, into SQL Server using SSIS, you know, whether it be a YAML file or an Excel file or whatever, bring it in, turn them into tables so that I could do left joins, you know, intersections, unions, all that
kind of stuff. Right. Because that, that set language is very easy to me because I've done
database stuff so much in the past. Well, recently I had to do some stuff with JSON, YAML and various
other things from, from some cloud logs and data and whatnot. And we've mentioned pandas in the past from Python, right? And I forget,
it actually stands for something that's like Python and data analytics or analysis,
Python data analysis library. That's what pandas is. Pandas is, I keep saying pandas.
So it's really good. And the reason I say it's really good is because behind the scenes, you're using the pandas library and you can load in things like JSON, you can load in YAML, you can load in Excel, you can load in all kinds of stuff, and you can quickly turn it into DataFrames or Series data that you can use in pandas. So that's all cool and all, but what's really neat is if you take those DataFrames and you put them into a Jupyter notebook, you can now visualize the data that you have there. So I had to learn all this in, like, a couple of days, because I was like, man, I've heard about Jupyter. What in the world is this? And then I realized, oh, it's just really a place to dump out your Python code, right?
Like, hey, I want to look at the data in this table and just put the name of the table,
and the next blob will be the output of that.
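As a tiny illustration of what "just put the name of the table" means here (the DataFrame below is a made-up example):

```python
import pandas as pd

df = pd.DataFrame({"service": ["api", "web"], "errors": [3, 0]})

# In a Jupyter cell, the value of the last expression is rendered automatically,
# so ending a cell with the DataFrame's name displays it as a table.
df
```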
Well, one of the things that I got frustrated with real quick is I would try and dump a DataFrame, which is really just a table, and it would only show me like so many characters and give me an ellipsis after every one of them. And I'm like, oh, come on, man. Like, seriously, I need to see this data. So here's another tip for you.
So the first tip is use pandas and use Jupyter notebooks. And if you do that, you can use Anaconda, which has it all bundled into one and was pretty easy to get up and running with; I need to leave a link for that in the show notes.
But the next tip is, if you want to show the entire output of that column, there's a cheat you can do where you basically convert the table to an HTML output, and it'll show you everything in beautiful, glorious format. I will find the code for that and paste it here as well.
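We don't know exactly which snippet ended up in the show notes, but a rough sketch of that cheat looks something like this, using pandas' display options and the notebook's HTML renderer:

```python
import pandas as pd
from IPython.display import HTML, display

# Loosen the display limits so values aren't cut off with an ellipsis
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
pd.set_option("display.max_colwidth", None)

df = pd.DataFrame({"payload": ["x" * 500]})  # a long value that would normally be truncated

# Render the whole frame as HTML so every character shows up
display(HTML(df.to_html()))
```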
So that was real nice.
And then... wait, hold on. No, their name is pandas, but it's an acronym for Python something something. Not a panda to be found on the website.
Really? Yeah, so maybe it is pandas, I guess. It's just very annoying to me. Redpanda has red pandas all over the place. Very cute.
I love it.
Anyway, missed opportunity.
Come on.
I completely agree.
That's a boo for me on that.
That's really lame.
Yeah.
I don't even know what to say about it.
Oh, so a couple of things about pandas that were really helpful.
I needed regex searches, and you can do like an extract, which is really sweet because
you couldn't easily do that in SQL.
You kind of have the Python language at your disposal when you're doing some of this stuff.
So you can do some really crazy things to try and get data.
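For instance, here's a quick sketch of the kind of regex extract being described; the column name and pattern are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({"message": ["pod web-7f9c crashed", "pod api-12ab healthy"]})

# str.extract pulls regex capture groups out into new columns,
# which is much clumsier to do in plain SQL
extracted = df["message"].str.extract(r"pod (?P<app>\w+)-(?P<suffix>\w+)")
df = df.join(extracted)
```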
And then they have things like inner joins, outer joins, excepts, unions, all that.
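A rough sketch of those set-style operations in pandas, using two tiny made-up frames keyed on "id":

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})
right = pd.DataFrame({"id": [2, 3, 4], "env": ["dev", "prod", "prod"]})

inner = left.merge(right, on="id", how="inner")    # inner join
outer = left.merge(right, on="id", how="outer")    # full outer join
only_left = left[~left["id"].isin(right["id"])]    # EXCEPT-style anti-join
union_ids = pd.concat([left[["id"]], right[["id"]]]).drop_duplicates()  # union of keys
```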
So the whole reason I even brought this up is that, without having to try to load things into tables (and if you've ever tried to load an Excel file into SQL Server, oh my God, it feels like dental surgery every single time), this actually was fairly painless. So I used it a little bit, and I was really happy with just the way it worked. You could get a huge data set and just say load it, and it would load exactly like it should.
Yeah, it would show like a small percentage and kind of stream through it and all that stuff. It's really seamless, though, so you can really get going with a very small amount of code and kind of forget all those details of dealing with, like, what if it's a million rows, what if it's only 10? It's like, yeah, just give it to us, we'll figure it out.
Yeah, it really is fast and seamless.
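The episode doesn't say exactly how pandas handled the really big inputs, but for files that are too large to read comfortably in one go, one common approach is chunked reading. A small sketch, with a hypothetical file name:

```python
import pandas as pd

total_rows = 0
# chunksize streams the file in pieces instead of loading it all at once
for chunk in pd.read_csv("big_cloud_logs.csv", chunksize=100_000):
    total_rows += len(chunk)

print(f"rows processed: {total_rows}")
```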
And the only reason I'm even bringing up the Jupyter notebook is that you could totally do this all in a Python app and just run the Python app. What the Jupyter notebook gives you the ability to do is just quickly visualize the data that's in a table or a series or whatever, right? Like, super easy. So highly recommend it. If you haven't messed with it and you're ever doing a bunch of data analysis, even if you're not a Python person, it's still probably worth looking at, because it is a nice tool to have in your tool belt. And then the last thing I want to do here, and Jay-Z just reminded me of it a second ago when he brought up another AI thing, because it's taking over everything.
So I saw this cool tool the other day that is really neat.
And it came up on a video on YouTube I was watching where somebody had taken drone footage of a house.
And it was a really interesting use case.
Basically, there were people having a house built somewhere, and they didn't live in that area, right? So think a snowbird that lives up in New York: they were having a house built down in Florida, and they wanted to be able to see the progress on the house as time went on without having to fly down every couple of weeks, right? It's expensive. So they were paying this guy to basically do drone fly-arounds of their house so they could see what was going on with the outside of it.
And what's cool is there's this AI app out there called Luma Labs, at lumalabs.ai. And if you go there, you can upload video or pictures, and it will take those videos or those pictures and turn them into 3D renderings. So what's really cool about this: if you're on the site, Jay-Z, if you scroll down just a little bit, you can see that they took like three pictures of this chair in a room, and over on the right you can drag the thing around and look at it on top, on bottom, spin it around, whatever. So this thing is doing a really cool job of turning these few pictures into something that you can look at. And they were able to do the same thing with this house fly-around. They took the video, uploaded it, converted it, and now they could actually, you know, sort of look around their house, stop it at any angle they wanted, and keep checking it out. And it is super cool, and it's so cheap; it says it's just one dollar per capture.
That's totally worth it. Totally worth it. How long would it take you to draw that thing in Blender or whatever? Absolutely, you couldn't. This is so cool, man. And this is also perfect for Jay-Z, because one of the use cases they have here is game art. It says you can generate high-quality, photorealistic 3D assets. Do you see the dude on the screen about midway down the page?
No, I didn't see that one.
There's a guy walking, and you can just spin him around, flip him over, like...
Oh yeah, yeah, yeah. And it's like you said, it's a Blender-type thing. Yeah, it's ridiculous.
Yeah, so you take a few pictures of some guy from multiple angles, whatever, and you have a 3D realistic object. It's so cool.
Yeah, it's really nice. I kept looking at the one that's almost like a dollhouse or something that they had, and it just looks so cool. It looks fun as it is. And you can zoom in on them too, by the way; I don't know if you're using your scroll wheel on it. I mean, it's so good. So, so good.
Oh, I know.
This is from Dr. Strange.
Yeah, this is a Lego set from Dr. Strange.
Oh, I see it.
Yeah, there he is on the platform.
There's Spider-Man out there hanging out.
And if you look at the top of it, it's a Lego set.
Yeah, super cool.
Yeah.
So at any rate, I mean, like, seriously, super, super cool stuff.
If you ever had anything like that, it'd be a fun thing to send to somebody if you got a few pictures of them and then put it up somewhere.
Yep.
For sure.
So anyways, I think that's it. So with that, hey, if you haven't already, subscribe to us on iTunes, Spotify, and more; not Stitcher, it doesn't exist anymore. And share it with a friend.
You know, love it.
Also, again, if you haven't already and you'd like to put a smile on our faces,
leave us a review.
Codingblocks.net slash review is a good place where you can find some links for that.
Hey, and while you're up there, check out our show notes, examples, discussions, and more.
Go to codingblocks.net slash slack.
And send your feedback.
Like you already said. And we're on Twitter at CodingBlocks, or all the other social links are at the top of the page.
You can play that game, sir.
That's right. Well done.
all clearly maybe it sounded like i had a bunch of marbles in my mouth no no you started on what
was that rapper used to do real fast like twista or something like oh man there was there's other
but anyway yeah it's like it Oh man, there was other,
but anyway,
yeah, it's like,
it kept going faster.
I was like,
Oh my gosh,
he's so fast.
Let me see what I can do for a Southern boy too.
How about that?
Yeah,
pretty good.
All right,
cool.
All right.
With that,
we will be back next time.
Probably with Outlaw in tow as well.
See ya.