The Changelog: Software Development, Open Source - Faktory and the future of background jobs (Interview)
Episode Date: November 18, 2017
Mike Perham is back for his 4th appearance to talk about his new project Faktory, a new background job system that's aiming to bring the best practices developed over the last five years in Sidekiq to... every programming language. We catch up with Mike on the continued success and model of Sidekiq, the future of background jobs, his thoughts on RocksDB in Faktory vs BoltDB, Redis, or SQLite, how he plans to support Sidekiq for the next 10 years, and his thoughts on Faktory being a SaaS option in the future.
Transcript
Bandwidth for Changelog is provided by Fastly.
Learn more at fastly.com.
And we're hosted on Linode servers.
Head to linode.com slash changelog.
This episode is brought to you by Auth0.
Auth0 makes authentication easy.
We love building things that are fun, and let's face it, authentication isn't fun.
Authentication is a pain.
It can take hours to implement, and even after you have your authentication in place,
you have to keep your code secure, up to date.
It's a mess.
Auth0 makes it easy and fast to implement real-world authentication
and authorization architectures into your apps and APIs.
You can allow your users to log in however you want,
regular username and password, Facebook, Twitter,
enterprise identity providers like AD and Office 365.
Or let them log in without passwords, just like Slack or WhatsApp.
Get started. Grab the Auth0 SDK for your platform.
Add a few lines of code to your project.
This can be a mobile app, a website, or even an API.
They all need authentication.
Head to auth0.io slash the changelog.
That's the number zero in auth0, not the word.
No credit card is required.
Sign up for auth0 and get the free plan or try the enterprise plan for 21 days.
Once again, auth0.io slash the changelog.
Again, the number zero in auth0, not the word zero.
And tell them the Changelog sent you.
You're listening to the Changelog, a podcast featuring the hackers, leaders, and innovators of open source.
I'm Adam Stachowiak, editor-in-chief of the Changelog.
Mike Perham is making his fourth appearance on the Changelog today to talk about his new
project, Faktory,
a new background job system that's aiming to bring the best practices he's developed over the last five years in Sidekiq to every programming language.
We catch up with Mike on Sidekiq, the future of background jobs, his thoughts on RocksDB in Faktory versus BoltDB, Redis, or SQLite,
how he plans to support Sidekiq for the next 10 years, and his thoughts
on Faktory being a SaaS option in the future.
So Mike, this is your fourth time back on the show.
I mean, obviously we love you, man.
We love having you back.
Episode 92, episode 130, episode 159; Sidekiq, Inspeqtor, and obviously Sustaining Open Source.
But how does it feel to be back on this show?
You're like an OG around here.
A regular.
A regular.
It's like Cheers.
Everybody knows your name.
Maybe I should host the show for a little bit.
That's not a bad idea.
That's not a bad idea.
Yeah, well, thank you for having me back. And, you know, it's pretty impressive that y'all have
stayed around that long and just continued with this podcast for so long. It's
quite an achievement. I think this would be roughly episode 270-something. So we're in the 270s.
Not bad.
Not bad.
Wow.
Around for roughly eight-ish years almost.
That's incredible.
Well, we're still here and you're still here as well,
which also is quite impressive.
Years and years of putting out awesome open source
and also running a business around it.
So congratulations on you also still being here after all these years.
Thank you.
I think we both had interesting journeys.
Certainly off the beaten path.
It is off the beaten path.
Well, on that note, let's catch up a bit.
I mean, I know that we can easily send folks back to those episodes for the full story.
But, you know, you've got kind of the main topic today, Faktory.
But what's going on with Sidekiq?
That's like your claim to fame, so to speak.
You're sustaining it as open source, so to speak, and you came out with Inspeqtor.
What's going on?
What's new for you?
Yeah, so Sidekiq is definitely my meat and potatoes.
That's what's paying the bills right now.
But I've made my career for the last five years on background jobs.
And so over the last year, I've done some thinking about new directions that Sidekiq's architecture does not allow.
So where I went with that is building a new system called Faktory, which is sort of an inverse of the design of Sidekiq.
And part of that inversion allows it to be language-independent. So whereas Sidekiq is
tied to Ruby and sort of limits me and my customer base to people running Ruby,
Faktory is designed to be language-independent. And so you can use Faktory
with any language. So the idea is that I can come out with this sort of opinionated
background job framework that's useful for any business application. And it doesn't matter
what language you're authoring your business application in; you can leverage Faktory as infrastructure
to scale your app. And, you know, we'll see how that goes. It's still early days,
obviously. But this is something that I want to put my efforts into over the next
year and see what I can make happen. So, to that end, about two weeks ago,
I first announced Faktory and unveiled it.
And so I'm approaching the sort of second alpha release this week.
And then over the next few months,
it'll just be sort of the daily drudgery of building up a new large system.
And hopefully building a new separate division of my business, so to speak, around Faktory and possible commercial variants.
Have you hired anybody yet? Are you still solo? What's the scenario there?
I'm still solo today,
but I'm thinking that if Faktory is successful
and I get a commercial variant that is selling well
and it looks like it is sustainable,
then I probably would hire one or two people
to join me in the effort of maintaining this.
Because ultimately, my vision for Faktory
is an order of magnitude larger than Sidekiq.
At that scale, at that size, I think I sort of run out of steam.
And I'll need help to support the number of customers that I have.
We don't want you to burn out.
Or work too hard. One of the two.
What's that? You know, working too hard, burning out. Is that the same thing?
One can lead to the other for sure.
This move kind of reminds me of Heroku's move back in the day.
I don't remember when it was anymore, but you know,
they were originally, well, all the way originally,
they were actually like a web IDE for Rails, which was cool, but wasn't really a product that people were buying.
Then they were a hosting platform for Ruby on Rails.
And then they took funding and got big.
And I'm not sure when Salesforce acquired them, if it was before or after this move.
But the shift to polyglot from them was very much a move to broaden their customer base. And so I see your move from Sidekiq, which is Ruby-based, to
something that potentially can facilitate a service for lots of different languages,
lots of different companies. What drives the desire for you to get that much bigger
potential customer base? Is it just wanting to grow the business? Is it money? Is it your board? Tell us some of your drivers, to say, you know what,
I'm going to move outside of my meat and potatoes and try for a bigger pool.
Well, ultimately I think background jobs are something that can benefit almost every business application out there.
And a lot of the background job systems out there are language-specific, like Celery, for instance.
Or they are language-agnostic, like Beanstalkd, but they're essentially abandoned now; no one's maintaining them.
And on top of that, I think a lot of the background job systems out there
that are language-agnostic don't have the years and years of
additional really nice features that become super useful in building
business applications. Sidekiq has sort of proven that there is a market for these frameworks that
allow you to scale job processing across many machines along with the APIs that you inevitably need when you're scaling
your processing across many machines. Things like rate limiting, things like cron jobs,
all this kind of stuff is really useful generic tooling that I want to bring to everybody. I'm proud of the fact that Sidekiq really took jobs
to the next level in the Ruby world.
And so I want to do that same taking to the next level
for everybody else.
And not have everybody else reinventing the same thing
over and over, but slightly different.
If we can all standardize, I mean, with Rails, it was proven that
if we all standardize sort of on one framework that everyone can use,
then you can move so much faster and you can build so much more.
So you've got six months of work in this
as of the announcement post, which, like you said, was a couple weeks ago
as of the time of this recording. For those listening,
it was October 24th, I believe, of 2017.
You said that you kind of inverted the architecture.
I guess we could just dive right in and talk about Faktory
in terms of the way that you built it and all the fun nitty-gritty.
Tell us about this inversion from Sidekiq
and how Faktory is put together.
Yeah, so the way that Sidekiq works
is that you have a centralized data store.
In this case, it's Redis,
and it's running on some machine.
And then you have one or more Sidekiq processes
which are talking to Redis. And
that Sidekiq worker process implements all of the features that are required for job processing.
And so all those features are implemented in Ruby that is running in that Sidekiq worker process.
And so all of those worker processes, they all talk to what I call this dumb data store,
which is Redis,
in that Redis doesn't have any logic in it.
It just stores bits of data.
And all my logic that operates on that data
is all built in Ruby.
And so that's why you're locked into Ruby,
because all of the advanced Sidekiq features that I sell are all implemented in Ruby also.
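The "dumb store, smart worker" split Mike describes can be sketched roughly like this, with a plain Ruby Array standing in for a Redis list. This is an illustration of the shape only, not Sidekiq's actual code:

```ruby
require "json"

# A stand-in for a Redis list: the store holds raw strings and has no job logic.
DUMB_STORE = []

# Client side: all job semantics (class name, args, retry policy) live in Ruby.
def push_job(klass, *args)
  DUMB_STORE.unshift(JSON.generate("class" => klass, "args" => args, "retry" => true))
end

# Worker side: pop raw data and interpret it -- the store never sees this logic.
def pop_job
  raw = DUMB_STORE.pop
  raw && JSON.parse(raw)
end

push_job("HardWorker", "bob", 5)
puts pop_job["class"]  # => HardWorker
```

The store only moves opaque strings around; every feature (retries, scheduling, the web UI) has to live in the Ruby worker, which is exactly why the system is tied to Ruby.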
Now, with Faktory, what I'm doing is inverting that: instead of exposing a bunch of data structure operations like Redis does, I expose a bunch of job operations.
And so all of my logic, all of my feature logic can be embedded in that Faktory server daemon that is sort of the central hub. And now all of your Faktory worker processes can be implemented in any
language, because they don't have any of those more advanced job features
that need to be implemented in them.
All they do is pull a job, execute it,
and then tell Faktory when the job is done.
And Faktory does everything else.
And factory does everything else.
It keeps track.
It gives you the web UI
so you can sort of track all the jobs
that you have in your system
and errors that have occurred
and showing you your sets of worker processes
that you have out there
and what they're working on at any given moment.
And so that really makes the Faktory worker processes
much simpler than the Sidekiq worker process.
And so in the last two weeks
since I announced it, I've had people implement Faktory
workers in six different languages, which is
pretty amazing to see.
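The fetch/execute/acknowledge loop Mike describes is about all a worker library needs. Here is a rough, hypothetical in-process sketch of that shape; the real Faktory server speaks a wire protocol over a socket, which this deliberately ignores:

```ruby
# Hypothetical in-process stand-in for the Faktory server: the server owns the
# job operations (push/fetch/ack), so a worker needs no job logic of its own.
class ToyJobServer
  def initialize
    @queue = []      # pending jobs
    @working = {}    # jid => job, reserved until the worker acknowledges it
  end

  def push(job)
    @queue.unshift(job)
  end

  def fetch
    job = @queue.pop
    @working[job["jid"]] = job if job
    job
  end

  def ack(jid)
    @working.delete(jid)
  end
end

server = ToyJobServer.new
server.push("jid" => "abc123", "jobtype" => "SendEmail", "args" => ["bob"])

# The whole worker loop a client library needs: fetch, execute, acknowledge.
while (job = server.fetch)
  puts "running #{job["jobtype"]}"
  server.ack(job["jid"])
end
```

Because the loop is this small, porting it to a new language is a weekend project, which helps explain the six community worker libraries in two weeks.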
Yeah, we actually noticed that. Listed out there, you've got Ruby, Go,
Python 3, PHP, Rust, Elixir,
Node, and .NET. Obviously
the Ruby and the Go ones are
first-party official
worker libraries.
Do you call them worker libraries?
Yeah, client libraries,
worker libraries.
Producer consumer libraries.
And the comment we have in the note is just kind of wow.
Kind of wow how many there are already, considering this is brand new.
So that speaks a little bit to, I think, your reputation in the community and the fact that people are getting excited about this.
It also probably speaks to what you just said, that they are much simpler to do than the Sidekiq worker process.
Well, one of the reasons why I built Faktory is because over the last couple of years, I've heard dozens of people ask, is there something like Sidekiq for my language?
And so that tells you that there's a bit of demand there. I'm not sure if it's enough demand to warrant
venture capital investment and $100 million valuation, but that's not what I'm shooting for.
I'm shooting for ma and pa's artisanal software that I can build and maintain with me or a small crew over the next decade.
That's kind of the scale that I've always wanted to operate at.
Yeah.
Do you ever think about the speed at which a highly funded company could move?
Mom-and-pop shops, what they're concerned about usually is Walmart and Amazon
and these huge megacorps coming in and steamrolling them. First of all,
have you faced any fierce competition from startups? Or do you have that fear
that if Faktory is successful, somebody can throw a hundred million into the ring and squash you?
Well, I mean, if they throw a hundred million into the ring, what does that entail? What does that mean?
Well, typically that means that they need to make a 10x return if that's venture capital.
Nobody invests without a return.
Or they want one at least.
Right.
A lot of people invest without a return.
Which means they've got to make a billion dollars.
But that means that their pricing needs to be commensurate with that funding and with that profit goal.
So I can keep my prices so low that it doesn't matter if some big corp wants to come in.
As long as I'm making X thousand dollars a year, I can sustain myself forever.
It doesn't matter if Amazon comes in.
And, you know,
Amazon's got SQS; RabbitMQ is a thing.
There are all sorts of different queuing systems that are commercial.
As long as the market is big enough for another entry,
it's not a winner-take-all thing.
It goes back to what you said, too.
You've gotten, over the years, lots of people saying,
hey, Mike, is there a Sidekiq for my language?
And so that's definitely an indicator for what people might call
product-market fit, where you've got an idea, where Sidekiq
as a model has worked for Ruby, and others see it
and they're excited about it and
say, well, how do I use that for my language? And now you have, you know, waypoints to say this is a good
thing to invest in.
Yeah. And, you know, some of the pushback I got on my initial
announcement was, oh, people should just use job library X, or use Rabbit, or, if you're in the
cloud, why would you use this instead of SQS or something like that? But, you know, that's kind
of like asking if the Honda Accord exists, why would you buy a Toyota Camry or, you know, why
would you buy a Ford F-150? Well, you know, they're different things. They mean different things to different
people. Different brands mean different things. They have different capabilities and different
opinions and different use cases for different business apps. A business app that needs to scale
to billions of jobs per day might be better off using SQS, where you can sort of scale infinitely.
But your ops headache is going to be commensurate with that choice. So what I find is that
the people that use my stuff tend to be smaller shops that don't mind paying a thousand or two thousand dollars to solve a
problem immediately.
They just don't want a lot of ops headache and they don't want to roll their
own.
So they'd rather pay me to give them a pre-baked solution that has all the
opinions already baked in and solves a well-known problem.
And then they can get on with their life.
Yeah.
So what do you think it is,
the secret sauce for Sidekiq, that makes people want to have it for their language? You know, so if
Sidekiq is focused on Ruby, what is the secret sauce? Have you boiled that down? Do you know what
that is?
Well, that's tough to say. There's an aspect of performance, because Sidekiq was certainly much higher
performance than a lot of the existing Ruby solutions. It baked in a lot of features
that the previous solutions all kind of separated into separate gems. So, you know, taking the example of
Resque: one of the reasons I built Sidekiq was because Resque forced you to integrate like six different gems just to get the web UI, to get retries, to get
sort of a threaded model where you could get a higher performance than just forking per job.
So having a strong opinion about how to bake all this stuff into one sort of comprehensive singular package
is important to some people.
It certainly makes your ops a lot easier, where you can just depend on one
easy package instead of having to sort of tie together half a dozen
different packages, all with different versions,
all maintained by
different people. So I think that's sort of Sidekiq's secret sauce. And in Faktory, I try
to bake as much stuff into it also, to sort of bake as much value into it.
Because ultimately, that's what people are responding to: they want to see
the value in there, that this solves their problem
and comes with a lot of little nice bells and whistles that they can use.
So one feature, we'll talk about, I guess, a comparison.
One thing to state is, since Faktory is so new,
it doesn't have feature parity with Sidekiq.
He would've had to wait a while to release it if he wanted that done.
But one thing, talking about secret sauce, or at least the bits that you make sure are coming over, is it does have a Sidekiq-inspired
web UI, which has become something that, as a long-time Sidekiq
user, I'm just very used to being there. And so that at least
seems like something that you know is very important to many people. Is that fair to say?
For sure. Setting up the web UI
for Sidekiq for many people can be a bit difficult, especially if they have a fairly complex
web application. So baking it into Faktory,
so that it's just an HTTP port
and they can hit it with their browser
really simplifies a lot of that.
And it's also,
going back to what I was saying before,
it's a big part of the value there
is it's a nice, attractive web UI
that is baked in.
You know, you look at something like
Gearman or Beanstalkd. I don't
know that they have a UI that's built in. I think with Beanstalkd, the web UI does not come with it,
and you have to download a third-party web UI and set that up, configure that. Whereas with
Faktory, it's all built into the system. Where that's
useful is that, because the web UI is baked into the one binary, the versions always work
together. The storage and the web UI aren't two separate things that you can
possibly update separately, and then now they need to talk to each other and deal with different versions.
It's just one binary.
They're part of one whole.
And so there's no configuration.
There's no management of the things
as two separate units.
You know what I mean?
Yeah, that's interesting.
It seems like your goal is to push more features
into the server side, but then also
to simplify in terms of deployment, management, and storage. Like, you're removing dependencies, with
Redis being removed; I think we should dive into that a little bit in a minute. But have you struggled to find a balance between trying to make things simple and yet still make it feature-rich inside that single binary?
Well, that's always a struggle. Keeping things as simple as possible and considering the trade-off of complexity versus features is part and parcel
to software engineering in general. But I have definitely struggled with what I can put into
factory and what I can't. There are some things where you're going to need the worker process that is executing the jobs.
You're going to have to get some implementation from it for some features.
For instance, the worker has to acknowledge that the job is finished.
If it fetches a job, it has to then tell Faktory, hey, I'm done with this now.
There's simply no way to not do that.
There's also things like APIs like rate limiting.
This is one example where your job may want to do some sort of rate limited operation.
And if I build that into Faktory, that's fine. But the worker is going to have to call that rate-limiting API to ensure that the rate
limit is being enforced at any given moment. And so, yeah, there are some features that I can bake
into Faktory sort of transparently to the worker process, but there are others that I can't.
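A rate-limit check of the kind described, which a worker would call before a throttled operation, could look something like this sliding-window sketch. It is a hypothetical illustration of the idea, not Faktory's actual API:

```ruby
# Hypothetical server-side rate-limit check a worker might call before a
# throttled operation. The real Faktory API will differ.
class WindowRateLimiter
  def initialize(limit:, window:)
    @limit  = limit    # max operations allowed...
    @window = window   # ...per this many seconds
    @stamps = []       # timestamps of recent allowed operations
  end

  # Returns true if the caller may proceed, false if it should wait and retry.
  def allow?(now = Time.now.to_f)
    @stamps.reject! { |t| t <= now - @window }  # drop stamps outside the window
    return false if @stamps.size >= @limit
    @stamps << now
    true
  end
end

limiter = WindowRateLimiter.new(limit: 2, window: 1.0)
p limiter.allow?  # first call in the window passes
p limiter.allow?
p limiter.allow?  # third call within the window is rejected
```

The key point from the conversation survives even in this toy: the *enforcement* can live server-side, but the worker still has to cooperate by calling `allow?` before the throttled work, so the feature can never be fully transparent to the worker.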
And so that'll influence a lot of the future feature design
and sort of the features that'll be in commercial versions
versus the open source version.

This episode is brought to you by GoCD.
GoCD is an open source continuous delivery server built by ThoughtWorks. It provides continuous delivery out of the box with its built-in pipelines,
advanced traceability, and value stream visualization.
With GoCD, you can easily model, orchestrate, and visualize complex workflows from end to end.
It supports modern infrastructure with Elastic On-Demand Agents and cloud deployments,
and their plug-in ecosystem ensures GoCD will work well in your unique environment. To learn more about GoCD, visit gocd.org slash changelog. It's open source and
free to use, and there's also professional support and enterprise add-ons available from ThoughtWorks.
Once again, gocd.org slash changelog.

So one of the major moving pieces that you've removed from Faktory, or I guess never included in Faktory, which is a big part of Sidekiq, and you mentioned it previously, is Redis.
But you've got to persist that queue somewhere.
So tell us about the decisions you had to make in designing
the initial implementation of Faktory.
Obviously, you've got to store data somewhere;
you've got to track things and persist things. And so, if I wanted to invert sort
of the logic and push the logic from the worker process into the actual server itself, I would need to embed a storage engine, effectively.
And Redis is not embeddable.
Now, Redis has Lua hooks.
It has modules.
Right.
But all of those are not designed to build products on top of.
They are designed for the end user to customize Redis for their own application.
For instance, I can't package up Faktory as a Redis module and distribute it, because a module has to be compiled into Redis.
And on top of that, I'd be building on top of Redis, which, believe me, I considered for a while,
because I've got thousands of lines of code which already have all these features implemented for it.
So I could literally just port the system over if I wanted to.
But ultimately, I decided to go with an engine that could be embedded, so that I could just ship a single compiled binary.
And so I looked around for a bunch of
different things, and the best option that I sort of landed on today is RocksDB, which is an embedded key-value store that is built and maintained by Facebook.
They use Rocks to power a lot of their internal services, so that production sort of quality is there,
which you never know
if you're just using some random open source library.
I wanted something that had a good production usage already.
And Redis obviously has tons of production usage,
so it's rock solid. But Rocks has proven to be extremely fast, and its production story, at least at Facebook, is also good. So, to me, that was the best option that I had. Rocks does have some trade-offs; it has some drawbacks versus Redis.
But like I said, it's the best of what's out there right now.
It's interesting to think about the long term.
So say Faktory is successful and you continue building on it; there are long-term implications of
that particular decision.
Because a lot of us don't consider our dependencies
all that carefully anyways,
but when you're putting together,
especially at the application layer,
you're putting together usually lots of different things
to serve different purposes.
And some dependencies, even like Sidekiq,
it's pretty much the one you go to in the Ruby world now,
but in its day, for background jobs,
you could have reached for Resque and Sidekiq,
and I think Delayed Job,
although that was getting crufty at the time.
So these decisions are kind of made
without extreme consideration sometimes.
And sometimes that bites you and sometimes it doesn't,
especially if they're more,
you can just swap them in and out, right?
The more modular or pluggable they are,
you say, well, I don't really like this background job library anymore; I'm going to swap it out for Sidekiq. But with this
decision, anybody who's doing persistence, right? When you're selecting your persistence engine,
it has huge ramifications down the road. So it's a really big decision to make.
Give us some insight. Like, how did you go about, you said that it was production grade
and you liked that Facebook was behind it,
but what was your process?
What did you go about,
okay, I'm going to compare against Redis,
I want something embeddable,
but it sounds like you weren't completely sold
against Redis at the time,
even though you preferred an embedded solution.
Did you just Google around,
find all the embedded Go things,
and then compare them?
What's your style of picking a big dependency like that?
There's a couple things that I was looking for.
First was that I knew that I wanted to use Go to build it,
so I knew that I had to have something that would integrate
with my language of choice.
Number two concern was long-term support.
Is this thing going to be worked on for years to come?
Is it going to be something that I can submit a bug for, and someone will look at it in days rather than months or years, or never?
You look at something like SQLite;
that's a great example of a storage engine that is something that I would consider using
if I wanted a SQL engine for a product like Faktory. I definitely
would have gone with something like SQLite, because it has proven to be long-term
reliable, long-term supported, and sort of has this proven track record over multiple
years. When I first was looking for storage engines,
I went straight to BoltDB, which has a really good reputation in the Go community
as this nice embedded key-value store library. And Ben Johnson, the
maintainer, has a very good reputation as a good developer.
And so I looked at Bolt.
Bolt is great.
The problem I found, though, is that it's very slow for the use case that Faktory wants to use it for,
which is a lot of inserts and deletes, really fast. Bolt is more of like a B-tree-type storage system
where you don't necessarily insert a ton,
but you maybe read a lot.
So maybe for like indexes and stuff like that,
it's really good.
But with queues, you push a job into the queue
and then you pop it off really fast.
And so you're inserting and deleting with something usually within microseconds or milliseconds of each other.
And so that's where RocksDB's design really shines, because it proved to be a hundred to a thousand times faster than BoltDB.
RocksDB's design is what's called an LSM, a log-structured merge tree, I believe.
The idea is that every persistence operation writes to a log, and then
regularly the system will take that log and sort of persist it to an actual file that is a sorted tree.
But if the log just contains, like, an insert and then a delete, then it'll actually never get into the tree.
And so your really fast writes prove to be constant time instead of O(log n).
So if you know anything about algorithmic complexity,
that's a really nice advantage.
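The log-plus-compaction idea Mike describes can be shown in a toy sketch: writes append to a log in constant time, a later compaction folds the log into the sorted store, and an insert that is deleted before compaction never reaches the tree at all. A real LSM engine like RocksDB is far more involved than this illustration:

```ruby
# Toy illustration of a log-structured merge design -- not RocksDB itself.
class ToyLSM
  def initialize
    @log  = []   # append-only write log: [op, key, value]
    @tree = {}   # stands in for the sorted on-disk structure
  end

  def put(key, value)
    @log << [:put, key, value]   # O(1) append, no tree update yet
  end

  def delete(key)
    @log << [:del, key]
  end

  # Reads check the recent log first (newest entry wins), then the tree.
  def get(key)
    @log.reverse_each { |op, k, v| return(op == :put ? v : nil) if k == key }
    @tree[key]
  end

  # Compaction folds the log into the tree; cancelled pairs simply vanish.
  def compact!
    @log.each do |op, k, v|
      if op == :put
        @tree[k] = v
      else
        @tree.delete(k)
      end
    end
    @log.clear
  end
end

db = ToyLSM.new
db.put("job:1", "payload")   # fast append
db.delete("job:1")           # job popped moments later
db.compact!
p db.get("job:1")            # => nil -- the pair cancelled out in the log
```

This is why the push-then-pop-within-milliseconds pattern of a job queue suits an LSM store so well: most jobs live and die in the log and never pay the cost of the tree.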
How did you get to that research?
Did you find a white paper, blog post?
Did you just tinker with it?
What got you there to get that understanding?
Because is that on the surface, from the readme?
When I was originally building the storage subsystem for
Faktory, I designed it out with a few interfaces, and then I just built two different implementations:
one was a RocksDB implementation, one was a BoltDB implementation. And then I wrote a load test for it and ran the load test,
and the pushing and popping on queues for Bolt
was about a thousand times slower than Rocks.
And even though Rocks has some disadvantages over Bolt,
namely it's written in C++,
so it pulls in the entire C runtime
and it increases the complexity of building Faktory.
The performance advantage outweighs that disadvantage.
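That interface-plus-two-implementations approach can be sketched like this, with toy in-memory backends standing in for Bolt and Rocks; the class names and numbers here are illustrative only:

```ruby
require "benchmark"

# Put the storage behind one small duck-typed interface, write two
# implementations, and run the same load test against each.
class ArrayQueue
  def initialize
    @q = []
  end

  def push(job)
    @q.unshift(job)
  end

  def pop
    @q.pop
  end
end

class SlowQueue < ArrayQueue   # pretend backend with per-operation overhead
  def push(job)
    sleep 0.0001
    super
  end

  def pop
    sleep 0.0001
    super
  end
end

# The load test only talks to the interface, so any backend can be swapped in.
def load_test(store, jobs)
  Benchmark.realtime do
    jobs.times { |i| store.push("job-#{i}"); store.pop }
  end
end

[ArrayQueue, SlowQueue].each do |klass|
  secs = load_test(klass.new, 500)
  puts format("%-10s 500 push/pop pairs in %.3fs", klass, secs)
end
```

Because both backends satisfy the same interface, the comparison is apples to apples, which is exactly what made the Bolt-versus-Rocks result trustworthy.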
What's interesting, too, is both these projects were born around the same time, 2013.
And this reminds me, Jared, of the call we had with Oz, you know, just
kind of like the perspective you have when choosing a database or a dependency. As you asked
earlier, you know, the process you go through, where you don't just use it because
it's popular, or because somebody that you know and respect wrote it. You kind of do your own due diligence and A/B test, in this case with Mike, you know.
Right.
Yeah, it's tricky, too, because on one side you have, I don't know what you'd call it, like
haphazard selection of dependencies, where you pick the first one that looks good and go
with it.
On the other side of that is some serious analysis paralysis, which you can get
into as well, right? On the far extreme is, you know, trying 15 different choices and running
all these tests and spending months and months and still not pushing the needle forward.
But that being said, I think the best way, once you've gotten to a point where maybe you've
narrowed it down by features and by these other heuristics, support, the other things that Mike's been talking about, is you just got to see how it works for your use case.
You can use somebody else's blog post as a waypoint for your decision, but their use case may be even just slightly different than yours.
Like Mike noticed with queues, obviously, the reads and writes, the pushing and the popping, happening really fast.
Well, if he would have read a blog post about Bolt and not tried it for himself and just thrown it in and kept on building, he would have missed out on an opportunity to have a much better performing dependency.
Mike, maybe pontificate on that. If you hadn't done that research, what would the user experience of Faktory be
if you went the other route, which was a thousand times slower in your case?
Would it just be slower? Would you have released it?
Where would you be at?
I think anybody who's trying to build a professional-caliber
tool is going to have load tests of some sort, and sort of performance tests, just to
get a baseline feel for the performance of the system, so that over the next
few major releases you can detect regressions and that sort of thing.
So I think it was just natural for me as an engineer trying to build a quality product.
I just said, okay, well, I've got to build a load test here.
And then, so I built it with Bolt and ran it, and got like 50 pushes and pops a second,
and said, whoa, that's way slower than I
expected it was going to be. And so once I saw that, I thought to myself, okay, I need to
either tune Bolt or sort of determine what's wrong here. So I actually pinged Evan Phoenix,
who is also a really well-known Ruby and Go person.
And he told me about the LSM,
the log structured merge design,
and pointed me to LevelDB and projects like that.
So Evan said, oh, this is not just a flag you can tune in Bolt.
It's part of the design of Bolt.
It's why the performance is this slow.
And so that's when I said, okay, well, I'm going to abstract the storage out into interfaces and build another implementation, an LSM implementation of some sort, and looked around for LevelDB implementations. And that's when I found Rocks.
Nice. Phone-a-friend option, always useful.
But that's where... I mean, you talk to your friends in the industry who sort of know what they're talking about, certainly more than me at the time, and get a feeling for what's going wrong,
which of your assumptions are being violated here.
And my assumption was that Bolt was a good implementation for the pattern, the persistence pattern that I needed,
but that proved to be false.
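Abstracting the storage behind an interface, the way Mike describes, might look something like this in Go. The names here are hypothetical, illustrating the pattern rather than Faktory's actual code:

```go
package main

import "fmt"

// Store is a hypothetical storage abstraction: any backend that can
// push and pop jobs for a named queue can sit behind the server.
type Store interface {
	Push(queue string, job []byte) error
	Pop(queue string) ([]byte, error)
}

// memStore is a trivial in-memory implementation; a BoltStore or
// RocksStore would satisfy the same interface, so the server code
// never changes when the backend is swapped.
type memStore struct {
	queues map[string][][]byte
}

func newMemStore() *memStore {
	return &memStore{queues: make(map[string][][]byte)}
}

func (s *memStore) Push(queue string, job []byte) error {
	s.queues[queue] = append(s.queues[queue], job)
	return nil
}

func (s *memStore) Pop(queue string) ([]byte, error) {
	q := s.queues[queue]
	if len(q) == 0 {
		return nil, fmt.Errorf("queue %q is empty", queue)
	}
	job := q[0]
	s.queues[queue] = q[1:]
	return job, nil
}

func main() {
	var store Store = newMemStore() // swap in an LSM-backed Store here later
	store.Push("default", []byte(`{"jid":"1"}`))
	job, _ := store.Pop("default")
	fmt.Println(string(job))
}
```

The payoff is exactly what Mike did: once the interface exists, a second implementation can be dropped in and benchmarked without touching the rest of the server.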
So, well, I don't want to pontificate on what the right use cases are for Bolt, but suffice it to say that Rocks has proven to be, like I said, orders of magnitude faster, and so I realized that I had to go with something like Rocks. Now, I couldn't find any other LevelDB clone for Go that was really production-hardened. And that's why I went with Rocks, because I would have preferred and loved to have seen something that was native Go. Something that I could tell is running in production and is going to be supported for years to come. And I know that Facebook has several engineers working full-time on Rocks, and they're pushing new versions all the time. So that is a very strong endorsement to use it in my own stuff.
So, fun fact here: we actually had Ben Johnson on the Changelog back in 2015 talking about BoltDB. I think he actually compared and contrasted it with LevelDB at the time, so he's very well aware of these different architectures. And Adam, I don't think you were on that show. I think it was just me and Ben talking. We also had Ben on Go Time. He's somewhat of a regular around here as well.
Great developer.
And these aren't things that he would take as a personal slight against Bolt. This is just the way they're built. They're built for different optimizations, and Bolt happens to not fit Faktory's use case.
I've never met Ben.
He's a Twitter friend,
but he's always been
a perfectly pragmatic person.
And I would hope that we would have that in common.
You know, a tool that is awesome and appropriate for one use case could just completely fall over for another use case.
And that's just software for you.
So, you know, it is what it is. And like I said, I pinged Evan Phoenix initially because I know he's used Bolt before, just for him to sanity-check me. He had access to the Faktory repository from day one, just because I like to have a backup person in case I get hit by a bus, somebody who can actually open-source the code that I was working on at the time, so that it's not just lost to the sands of time. But I gave Evan access from day one, and so he had access to the Faktory code. I pointed him to the load test and I pointed him to the storage implementation, and I said, am I going crazy here? I'm getting these kind of unusually slow numbers over what I expected. And he was the one who looked at it. So Ben just didn't have access to the repo, and that's why I turned to Evan.
So you selected RocksDB. Now, both of those would have been embedded. Tell us about the ramifications of embedded with regards to, I guess, the practical use of Faktory. I always think of it as: people understand how to maintain Redis, because it's a standalone thing, so they're used to backups, they're used to redundancy, whatever the ops side of Redis really is. When you have an embedded thing, is all your data just in that binary? Where does it store stuff? Tell us about that, and how you back it up, and stuff like that.
So embedded just means that the storage subsystem is running within your process. It's not running as a separate daemon over the network. So MySQL, Postgres, Redis, all of those are network storage daemons, whereas embedded means that it's running within my process, and my process is the Faktory process. So Faktory is both a network daemon that surfaces an API that workers can call, but it also has a storage subsystem, so it places all these different data files that contain your persistent job data. And all of that data is RocksDB, effectively. RocksDB owns the data in that directory, and I just point RocksDB and say, here's the Faktory database. Please open it up and let's get started.
So the backups would be similar to just a disk copy?
Well, not quite. RocksDB does surface backup and restore APIs, so you have to call the backup API and you have to call the restore API. And Faktory exposes those APIs as a command-line tool. So Faktory has a faktory-cli command-line tool, where you can say faktory-cli backup, and faktory-cli restore. And that's it. It just does it all automatically for you based on the backups that you've taken. So I envision people taking a backup maybe every minute or every hour, and then if the database gets corrupted or the disk breaks or whatever, they can restore their latest backup and get back most of the data that would otherwise have been lost.
So, your overall take on ripping out Redis...
I keep calling it removing it, even though it was never there; your take on not including Redis.
Here you are, you're in alpha, you're out there,
you've seen people's feedback,
you've built the system around RocksDB.
Are you happy with that choice at this point?
Are you having buyer's remorse?
Are you missing Redis at all?
How do you feel?
Well, it's been a mixed bag.
Like any sort of decision,
there's good and bad parts of it.
With Rocks,
the tools that it gives me
are much lower level than Redis.
You know, I'm dealing with C++ APIs
rather than a nice set of
data structure commands that Redis exposes.
Ultimately, Redis is a great thing.
I love it.
I will never consider ripping it out of Sidekiq. People have asked me to build Sidekiq for other storage engines, and my take is: no way. Sidekiq is optimized for Redis. And Redis gives you a lot of, like you say, built-in ops knowledge in the industry. People know how to run Redis. There are Redis SaaSes out there where you can just say, hey, here's $50 a month, give me a Redis URL, and boom, now you've got something that you can run Sidekiq against.
Faktory doesn't have that, but Faktory is still brand new.
We're working on that.
Rocks doesn't give me some of the things that Redis does have, like replication. So I can't run a replica in another availability zone or another data center and have sort of an almost real-time backup.
So there are trade-offs for sure.
But ultimately, Redis didn't have that embedded mode that I had to have if I wanted to centralize the logic into a single binary. So the ease of use of Faktory is awesome, because it's just a single binary you run, but it comes at these trade-offs of losing the built-in ops lore that Redis has in the community.
This episode is brought to you by Linode, our cloud server of choice.
Everything we do here at Changelog is hosted on Linode servers.
Pick a plan, pick a distro, and pick a location, and in seconds, deploy your virtual server, drool-worthy hardware, SSD cloud storage,
40 gigabit network,
Intel E5 processors,
simple, easy control panel,
nine data centers,
three regions,
anywhere in the world they've got you covered.
Head to linode.com slash changelog and get $20 in hosting credit.
And by Toptal. Toptal is the best place to work as a freelancer, or hire the top 3% of freelance talent out there, for developers, designers, and finance experts. In this segment, I talk with Josh Chapman, a freelance finance consultant at Toptal, about the work he does and how Toptal helps him legitimize being a freelancer.
Take a listen.
Yeah, in my arena within Toptal, I specialize in everything from market research to business plan creation, deal negotiation, how to value the company, how to negotiate that. And all those skill sets that I have continued to hone on the Toptal side are ones that I actually deploy every single day in my own company. Freelancing can sometimes be seen as not legitimate or subpar work. Now,
I would argue that when you work with a company like Toptal, they put so much vetting into not only the companies that you work with, but also the talent that you work with (I'm on the talent side), that it adds a level of legitimacy that isn't seen across other platforms.
And that for me, as the talent side,
is incredibly fruitful and awesome to be a part of, right?
I enjoy the clients.
I enjoy the other talent that I get to talk to.
I enjoy the Toptal team. And that creates an overall positive experience, not only for Toptal, but for me as the talent, and for the client as the company on the other side. And that is really not the experience across other platforms in the freelance market. So if you're looking to freelance, or you're
looking to gain access to a network of top industry experts in development, design, or finance, head to toptal.com. That's T-O-P-T-A-L dot com. And tell them Adam from the Changelog sent you. For those wanting a more personal introduction, email me: adam at changelog.com.
So, Mike, you mentioned that there is no SaaS for Faktory. There's one for Redis: you can get a Redis To Go URL, or insert your favorite Redis SaaS URL here.
Is that a thing that, I mean, because I'm thinking,
what's better than having a single binary is having no binary, right?
Like, let me just get my worker process,
and I'll just point it at a Faktory thing and be off to the races.
Is that something that's on your radar?
Absolutely. 100%. That was part of it. When I sort of inverted the design, I realized, hey, well, if I'm centralizing all this stuff, there's no better description of centralization than SaaS. Because that's really what you're doing: you're paying somebody else to run this thing for you. So yeah, I've thought about it. One of the decisions that I've made in the past
is that I don't want to run a SaaS. I don't like the 24/7 ops situation. But I'm thinking about how Faktory could be used in that mode, and it can certainly be used to build a business that does that.
But you want someone else to build the business.
Exactly. So what I've thought about,
and the direction I want to go, is I want to put out a Faktory Pro product that I'm selling, the same sort of model as Sidekiq, Sidekiq Pro and Sidekiq Enterprise, that will have some additional features that aren't in the open source version. But then I can also have people who run SaaSes that can either offer Faktory for free, or they can offer Faktory Pro for an additional monthly rate, and then I take a cut of that. So I can either sell it on-premises for anybody that wants to run it internally, or I can offer Faktory Pro through these VARs, value-added resellers, that are running SaaSes.
And there's already one fellow who's expressed the desire to build a SaaS, and who's already announced that he's working in this direction.
So I'm really curious to see what happens long term here.
I guess I'm kind of curious: we asked earlier in the call about the state of Inspeqtor, right? You had success with Sidekiq, and with Inspeqtor you kind of went down the same path, with a similar open source, open core, pro model, right? With support or things added on. I'm just wondering if, because the success of the Sidekiq model has been so great, you feel like it is the right way for everything you do.
Well, I think that Inspeqtor and Inspeqtor Pro were less of a success.
I mean, I've essentially given up on the projects myself.
Yeah, I guess Inspeqtor is a very sort of... how would I describe it? It's very limited in what it does. It's funny: I released the 1.0, and it's kind of 100% of the functionality that I ever wanted in it, really. And the pro version, I guess, doesn't seem to really add that much value on top of the open source version. So I really haven't seen much uptake on the pro; I haven't seen that many sales from it. And it could also be that it's just kind of a nice-to-have, an optional piece of infrastructure. It's not core to any application development. Whereas background jobs and scaling business transactions across many machines... a lot more companies around the world see that as important to their app.
Right.
So.
Another question on that is, you know, Inspeqtor is monitoring, right? I think back in the day, when we had you on the show talking about this, you mentioned it was sort of monit, but better, if I can recall some of the things you said. There are full-on SaaS businesses around monitoring. And the question Jerod asked you was, is this the next SaaS for you? And you said that you don't really want to do that. So I see monitoring as... not that I'm saying you missed it, but this could have been potentially a SaaS. And then here you are with Faktory, that could be a SaaS.
Right.
I'm not really saying anything in particular. I'm just saying these are opportunities that could be not just open source and pro. And going back to that: is that the right model?
Yeah, absolutely. I mean, it could be that the market that I was going for, the product fit, just wasn't there with the direction that the industry was going. You look at something like Datadog; it has monitoring for all these various different daemons already.
They're a big business too.
And one of our sponsors too.
So, I mean, that's...
Yeah, no doubt.
I mean, I use them every day; I use them to monitor my own hosts. So I know and love their service. So it's quite possible that there just wasn't the product fit that Sidekiq had. And we'll see if Faktory is going to fall more on the Sidekiq side or the Inspeqtor side, but so far the uptake has been nice to see.
And like I said, I'm going to put another six months to a year into this and see what happens with Faktory; see if it grows nicely or if it just sort of plateaus early and doesn't really go anywhere.
Time will tell.
Looking at this from a macro level,
which I guess Adam and I tend to do
because we talk to a lot of people in open source
trying to use different means and methods
of sustaining their work in open source.
We look at exemplars
and I think over time we've said,
you've got your Red Hat style, you kind of have your GitLabs, you have your webpack, who's making Open Collective work, you've got Evan You, who's making Patreon work, and you have Mike Perham, who's making products, kind of a freemium, or not freemium, what do you call it? Like a pro and community edition, work.
I describe it as open core.
Yeah, open core, thank you. Just blanked on that. But then I start to ask people, is there anybody else? Like, is there another webpack for that model? Is there another Mike? And I don't know if there is. Is there somebody else who's taken on or done the same model that you've done, at a similar scale to Sidekiq, and made that exact same model work very well, in a similar fashion, that you know of?
Well, I know that there's been a lot of
Java app servers, application servers like JBoss, or, I'm probably dating myself now, but you look at WebLogic and WebSphere. They all have sort of community editions, but then they also have the big corporate enterprise version. And typically the enterprise version has additional features like, I don't know, replication and data grid caching, geo-replicated data caching, and all this sort of stuff. So it is a thing to offer a light community version, and then a more enhanced commercial version on top of that.
Yeah, so I think the Java world, the world of application servers... and when you get right down to it, that's kind of what Sidekiq is. If you squint really far away, Sidekiq is kind of a Ruby application server.
You know, it's using Rails for its major framework.
Right. But at the end of the day, you're farming work out to a cluster of machines, and that's part of what an application server does. The only part that Sidekiq doesn't do is the web side. But that makes it sort of independent of the web, or whatever the style of the day might be.
Right.
Well, let's get back to Faktory as a service, the idea. So you have one person who's potentially interested. Let's say we have listeners whose ears are perking up; perhaps they're entrepreneurial ops people with some development skills, or whoever they happen to be. And they're like, wow, I could be a value-added reseller for Mike. What's the path there? What's the process? Like, hit you up on Twitter? Email you privately?
Are you soliciting for people who might want to do this? Or are you just hoping that one person does it, and you can make them the official Faktory service?
I'm not soliciting actively, but I'm happy to entertain offers. The last thing I want is a dozen different drive-by emails from people who want me to provide 90 percent of the tech work, and then they just spin up a bunch of Docker containers on EC2 and they're done. That's not really what I want. I'm focused on the product, on the features, and on trying to determine what goes in the community version and what goes in a commercial version.
And so I would expect a SaaS to focus on, obviously, ease of spinning up an instance, but then things like reliability and data storage, backups and restores, automating all that, so that anybody who's using Faktory doesn't have to deal with that, or worry about a disk dying or something like that.
It's the last thing you want to worry about, right? Unless you have to have that concern, you want to hand that off to somebody else if you can, because then you get back to doing the valuable stuff, which is building the product.
Right. Yeah. I mean, ultimately, I'm really limited by what RocksDB offers in terms of reliability and high availability. They don't provide a clustering mode; they don't provide real-time replication. So the best I can offer right now is backups. You can call the backup API once a minute or once an hour, but ultimately that's what I'm limited by.
Let's talk about Faktory versus Sidekiq. I'm sure we have lots of people out there using Sidekiq, and they're probably wondering, A, what does this mean for Sidekiq? And then B, should I be looking to switch off Sidekiq because Faktory offers performance or security or some other thing, ease of use, that Sidekiq will never offer me? What do you say to those folks?
Okay, well, those are two great questions. The first thing I want to make clear is that Sidekiq will be supported for the foreseeable future. I'll say right now: for the next 10 years, Sidekiq will be supported, without a doubt.
It's a big claim.
Yeah. Well, I'm making plenty of money right now, where I can justify spending the next 10 years just focused on it. So it will be around. It will be supported. I'm just an email away if people have any problems with it. As Ruby and Rails change over time, Sidekiq will change with them to work as best as it can.
Where Faktory shines is that Ruby itself has started to get this reputation as sort of stagnant. People aren't seeing it as the hot new thing; they're not necessarily using it to build new applications anymore. And, you know, Sidekiq is as robust as the Ruby community is. I mean, robust in terms of the growth of my business. So I don't want to see my business stagnate over time. Part of this effort is to bring the Sidekiq conventions and opinions to all languages, and at the same time, as those conventions and opinions grow more popular in all these different languages, hopefully my business will grow also. So, the question of should I use Sidekiq or should I use Faktory? Well, first of all, Faktory only has, like, a quarter of the features that Sidekiq has today. So if you need something reliable today, and you want to use Ruby, Sidekiq is the natural choice. But over the next year or two, Faktory will continue to get more features. Presuming that the open source version gets traction, it will see a commercial version that has more features in it. And what you get with that is the polyglot design. So now, whatever language your business decides to use in the future, you can use Faktory with it. So it's a little more future-proof. As the winds of the industry change and languages come and go, hopefully Faktory will be there in all those new languages.
Yeah, whereas with Sidekiq, both the users and the business are locked into Ruby. But with Faktory, like you said, your business is now future-proof in terms of reaching the hot new interesting platforms and languages. But also, as your users' businesses change, they are not locking themselves in. They can continue to grow and adapt their applications, and not have to swap out their background job infrastructure to do that. Pretty cool.
Ultimately, I see my value not as selling you a bunch of Ruby code, and thus limiting myself. What I'm selling is the conventions and the opinions that a really nice, feature-rich background job system gives you. And ideally, that will scale to any programming language. You know, maybe not assembly language, but hopefully everything else.
Cool. Well, let's end with a little bit of a roadmap. Tell us where
you're aiming at for 1.0, perhaps, and
what's coming down the pipeline, and then we'll talk about how people
can hop in and help out. Sure. Well, as you mentioned,
the initial announcement was two weeks ago.
I'm preparing the second sort of alpha release right now.
I've gotten probably about half a dozen to a dozen different contributors that have really sort of populated the chat room and started submitting PRs.
I mean, I'm getting pretty close to 100 PRs now just in the last two weeks, which is awesome.
Wow.
I've had people contribute several major features, including job prioritization.
So Sidekiq never had job prioritization, where you could say, like, in this queue, this job has priority nine, or this job has priority one.
As in high or low priority; which one to run first.
Yeah, yeah. Because people would ask me, how do I... you know, my queue is backed up, I've got a thousand jobs, I want to push this job to the front of the queue. And my answer was always, well, you need to have a separate queue.
Which is like, put it in the same queue.
Right. Yeah.
And that's because Redis didn't have a really good data structure for doing that really efficiently. Whereas now that I control the storage subsystem, I can implement that easily. And so a fellow named Andrew Stuckey really took a shine to Faktory when it was first announced. He suggested this feature, and I said, it sounds great, although I'm not sure how to implement it. And then this guy went along and just implemented it for me. So he really is the MVP of this next release. He did a great job on implementing this thing. But yeah, so now if you have, say, the default queue and you push 1,000 jobs to it, you can say priority nine, and that job will go all the way to the front of the queue. It'll be the next one popped off, even though it was the last one pushed.
Nice.
So I think people will find that really nice for that use case of having, like, emergency jobs, where you might have jobs that are system-generated, and those are just normal jobs, but then you also have user-waiting-type jobs, and you want those jobs to fire off ASAP. That's where job prioritization really starts to shine.
That's smart.
Yeah.
So that'll be in the next release, which I'm hoping will come out in the next week. We're putting some polish into a Docker image, so that people will be able to download Faktory straight from Docker Hub and run it right on their machine in seconds, rather than having to build everything manually. We also landed a Homebrew recipe, so that people can actually install it from Homebrew on OS X directly. Sorry, it's called macOS now, I guess, right? I'm so old school, I call it OS X still.
So yeah, just in the last two weeks, the open source community response has been awesome, and it's been really invigorating. I've already landed two or three committers into the project who've submitted multiple PRs and thereby earned the right to be a committer. So yeah, we'll see how it goes. But the plan over the next couple months, over the winter, is to stabilize Faktory, evangelize its use, and try to get people starting to use it, maybe even in staging, or even in production if they're a little bit crazy.
And also try and get some trusted comrades that can maintain the worker libraries. I maintain the Ruby and Go worker libraries, but there are all those other ones that you mentioned, which all have their own independent maintainers. And I need to ensure that those maintainers stay up to date with the latest changes in Faktory, especially since it's so amorphous right now. It hasn't really solidified yet. We're still changing the protocol; we're still adding new commands every week. So over the next couple of months, that'll solidify, and we'll really get a reliable set of commands that all the workers will support.
You know, just real quick, Mike: we recently did have the RabbitMQ team on. They've been doing it for 10 years, so we had them on to talk about their 10-year anniversary of RabbitMQ. And they shared a lot of lessons that they learned throughout that time, because they've made all the mistakes and had successes along the way, and they were very open with us about those things. One of the things they said they've regretted, to a certain degree, is that there are so many client libraries in various states of quality and support and maintenance. They have a couple that are first-party, just like in your circumstance, although you're just getting started with Faktory, but they weren't always diligent about bringing those people along and keeping those libraries high quality. I can't remember the exact wording, but really, I guess, helping that ecosystem flourish, in a way where, for your particular language, there's a high-quality worker library available, which may not be developed by you or people that are employed by you, but the people who are working on it are keeping it up to date, well-tested and well-documented, is hugely valuable to Faktory as an overall ecosystem.
Yeah, for sure. And earlier I mentioned that if Faktory sees some success, I might be hiring people. I think hiring people to maintain the worker libraries, in addition to maintaining the core Faktory repo, would be a natural fit. These people have shown an interest in the background job system; they've shown that they have the willpower to learn a new system, dive in, and build something on top of it, and that goes a long way in building up a resume that recommends them. So we'll see what happens over time, but hopefully I can get half a dozen libraries for different languages that are well-maintained and reliable. Because it really doesn't take more than that: you need a JavaScript, you need a Python, you need possibly a Java or a C# library, and that's going to get you 90% of the industry. And everything can kind of fall out from there. So yeah, we'll see what happens.
Well, let's close with a call to action
or some sort of thing that people can do if they're interested.
Obviously, we'll have all the links and the show notes to the typical pages,
but if you had a specific thing that you could say to the open source community
with regards to Factory or what you're up to, what would you ask of them?
I would just suggest that they download and run the Docker image that we're going to publish for the next release, and take it out for a spin; click around the web UI and see what it can do. And we'll try and publish, like, a Rails app that can be used to do work against Faktory, because there's a couple of moving parts here, so it's not a trivial thing to just download and run. But I'll do my best to make that easy, and hopefully the listeners can take it for a spin and see what it's all about.
You mentioned that you can install via Homebrew as well, and you have to tap that first. Do you have a tap because it's still kind of in flux?
Yeah, I think it's a cask or a tap or something like that. brew tap contribsys/faktory, and then brew install faktory. That's it.
So I guess it is a tap.
We'll link up the installation
docs, which also
has your Docker and Linux
information there as well
in the show notes.
And then you also have Gitter. So you mentioned
earlier being able to chat.
So I guess if anybody wants to talk
to you in real time, they can
go to your Gitter chat room for this.
Yeah, I'm not in there all the time, but I'm trying to jump in when I can. I do find real-time chat to be kind of a time sink. But I have been jumping into it the last couple of weeks, just because I'm trying to get people ramped up and started with the system. When I initially released it, I didn't really have any easy way for people to get started with it, so the chat room, and me walking them through it, was really the only way. But now that we've got Homebrew and we've got Docker, I think it's easier for people to get started without having to get directions directly from me.
Yeah, that's good. It gives the community a place to gather around it, and that's a good place to go, at least. And even if you're not there all the time, it's a good central location to at least queue up some things.
Yeah. And there'll be other people in there all the time, too. I'm sure some seasoned Faktory pros, who have upwards of two weeks of knowledge of the system, can help answer questions.
Good one. Well, Mike, it's been a pleasure having you back. I'm so thrilled to see our paths continue to align, and to see your continued success. And don't take my questions earlier as anything negative towards the success you've had, because you're certainly a model for success when it comes to sustaining open source, sustaining your family, and building a business around it. You're very much a model for people to follow. We appreciate you sharing your time.
Thanks. I appreciate the kind words.
Alright, thanks for tuning in to the show this week.
If you enjoyed the show, share it with a friend,
tweet about it, and thank you to our sponsors: Auth0, Linode, GoCD, and Toptal. Also, thanks to Fastly, our bandwidth partner; head to fastly.com to learn more. We host everything we do on Linode cloud servers; head to linode.com slash changelog and check them out. Support this show. The Changelog is hosted by myself, Adam Stachowiak, and Jerod Santo.
Editing is by Jonathan Youngblood.
And the awesome music you've been hearing is produced by the mysterious Breakmaster Cylinder.
And you can find more shows just like this at changelog.com or by subscribing wherever you get your podcasts.
Thanks for listening.