Screaming in the Cloud - The Benefits of Mocking Clouds Locally with Waldemar Hummer
Episode Date: March 30, 2023

Waldemar Hummer, Co-Founder & CTO of LocalStack, joins Corey on Screaming in the Cloud to discuss how LocalStack changed Corey's mind on the futility of mocking clouds locally. Waldemar reveals why LocalStack appeals to both enterprise companies and digital nomads, and explains how both see improvements in their cost predictability as a result. Waldemar also discusses how LocalStack is an open-source company first and foremost, and how they're working with their community to evolve their licensing model. Corey and Waldemar chat about the rising demand for esoteric services, and Waldemar explains how accommodating that has led to an increase in adoption from the big data space.

About Waldemar
Waldemar is Co-Founder and CTO of LocalStack, where he and his team are building the world-leading platform for local cloud development, based on the hugely popular open source framework with 45k+ stars on GitHub. Prior to founding LocalStack, Waldemar held several engineering and management roles at startups as well as large international companies, including Atlassian (Sydney), IBM (New York), and Zurich Insurance. He holds a PhD in Computer Science from TU Vienna.

Links Referenced:
LocalStack website: https://localstack.cloud/
LocalStack Slack channel: https://slack.localstack.cloud
LocalStack Discourse forum: https://discuss.localstack.cloud
LocalStack GitHub repository: https://github.com/localstack/localstack
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
Until a bit over a year ago or so,
I had a loud and some would say fairly obnoxious opinion
around the futility of mocking cloud services locally. This is not to
be confused with mocking cloud services on the internet, which is what I do in lieu of having
a real personality. And then one day I stopped espousing that opinion or frankly, any opinion
at all. And I'm glad to be able to talk at long last about why that is. My guest today is Waldemar Hummer, CTO and co-founder at LocalStack.
Waldemar, it is great to talk to you.
Hey, Corey. It's so great to be on the show. Thank you so much for having me.
We're big fans of what you do at the Duckbill Group and last week in AWS.
So really glad to be here today and have this conversation.
It is not uncommon for me to have strong opinions that I espouse politely. To be clear,
I'll make fun of companies and not people as a general rule. But sometimes I find that I've
not seen the full picture and I no longer stand by an opinion I once held. And you're one of my
favorite examples of this, because over the course of about a 45 minute call with you and one of your business partners,
I went from what you're doing is a hilarious misstep and will never work to, okay, and do
you have room for another investor? And in the interest of full disclosure, the answer to that
was yes. And I became one of your angel investors.
It's not exactly common for me to do that kind of a hard pivot.
And I kind of suspect I'm not the only person who currently holds the opinion that I used to hold.
So let's talk a little bit about that.
At the very beginning, what is LocalStack?
And what is it you would say that you folks do?
So LocalStack, in a nutshell, is a cloud emulator that runs on your local machine.
It's basically like a sandbox environment where you can develop your applications locally.
We currently have a range of around 60 to 70 services that we provide,
things like Lambda functions, DynamoDB, SQS, like all the major AWS services.
And to your point, it is indeed a pretty large undertaking
to actually implement a cloud and run it locally.
But with the right approach,
it actually turns out that it is feasible and possible.
And we've demonstrated this with LocalStack.
And I'm glad that we've convinced you
to think of it that way as well.
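To make the sandbox idea concrete, here is a minimal sketch of talking to a running LocalStack container from Python with boto3. It assumes LocalStack has already been started locally (for example, with: docker run -p 4566:4566 localstack/localstack); 4566 is LocalStack's default edge port, and the bucket and queue names are illustrative.

    import boto3

    # All LocalStack services are reachable on a single local "edge" endpoint
    # (port 4566 by default); credentials can be any dummy values.
    localstack = dict(
        endpoint_url="http://localhost:4566",
        region_name="us-east-1",
        aws_access_key_id="test",
        aws_secret_access_key="test",
    )

    s3 = boto3.client("s3", **localstack)
    s3.create_bucket(Bucket="demo-bucket")  # runs entirely on your machine

    sqs = boto3.client("sqs", **localstack)
    queue = sqs.create_queue(QueueName="demo-queue")
    print(queue["QueueUrl"])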
A couple of points that you made
during that early conversation
really stuck with me.
The first is, yeah, AWS has two,
no, three, no, 400 different service offerings, but look at your customer base. How many of those services are customers using in
any real depth? And of those services, yeah, the APIs are vast and very much a sprawling
pile of nonsense, but how many of those esoteric features are those folks actually using?
That was half of the argument that won me over. The other half was, imagine that you're an
enormous company, that's an insurance company or a bank, and this year you're hiring 5,000
brand new developers fresh out of school. 2,000 to 3,000 of those developers will still be
working here in about a year; the rest wind up progressing in other directions,
not completing internships, going back to school after internships,
or leaving for a variety of other reasons. So you have that many people that you need to teach how to
use cloud in the context that we use cloud, combined with the question of how do you make sure that
one of them doesn't make a fun mistake that winds up bankrupting the entire company with a surprise
AWS bill? And those two things combined turned me from what you're doing is ridiculous to,
oh my God, you're absolutely right. And since then, I've encountered you in a number of my client environments.
You were absolutely right. This is something that resonates deeply and profoundly with larger
enterprise customers in particular, but also folks who just don't want to wind up being beholden to
every time they do a deploy to anything to test something out. Yay, I get to spend more money on AWS services.
Yeah, totally. That's spot on.
So to your first point,
so definitely we have a core set of services
that most people are using.
So things like Lambda, DynamoDB, SQS,
like the core serverless kind of APIs.
And then there's kind of a long tail
of more exotic services that we support these days.
Things like even like QLDB, the quantum ledger database,
or managed streaming
for Kafka. But
certainly the core 15-20 services
are the ones that are really most used by
the majority of people. And then we also
have, in the pro offering, some very
advanced services for different use
cases. So that's the first point.
And the second point is totally
spot on. So LocalStack really enables you
to experiment in the sandbox.
So we see it as both an experimentation and a development environment, where you don't need to think about cloud costs.
And this, I guess, will be very close to your heart, given the work that you're doing.
The costs become really predictable as well, right?
Because in the cloud, you know, having worked at different companies before doing LocalStack, we were using AWS resources, and you can end up in a situation where overnight you accumulate hundreds of thousands of dollars of AWS bill
because you've turned on a certain feature, or some connectivity into some VPC or networking
configuration that just turns out to be costly. Also, one more thing that's worth mentioning:
we want to encourage frequent testing, and a lot of the cloud billing and cost structure is focused around,
for example, hourly billing of resources, right?
And if you have a test
that just spins up resources
that run for a couple of minutes,
you still end up paying the entire hour.
And with LocalStack,
really that brings down
the cloud bills significantly
because you can really test frequently.
The cycles become much faster
and it's also, again, more efficient and more cost-effective.
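As a back-of-the-envelope illustration of that hourly-billing point, here is a tiny arithmetic sketch; all numbers are made up for the example, not from the episode.

    # Why minute-long tests get expensive on hourly-billed resources.
    runs_per_day = 100      # CI test runs per day (illustrative)
    minutes_per_run = 5     # each briefly spins up an hourly-billed resource
    hourly_rate = 0.50      # USD per resource-hour (hypothetical)

    actual_usage_hours = runs_per_day * minutes_per_run / 60
    billed_hours = runs_per_day * 1  # each short run still bills a full hour

    print(f"real usage: {actual_usage_hours:.1f}h "
          f"(${actual_usage_hours * hourly_rate:.2f}/day), "
          f"billed: {billed_hours}h (${billed_hours * hourly_rate:.2f}/day)")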
There's something useful to be said for, well, how do I make sure that I turn off resources when I'm done?
In cloud, it's a bit of a game of guess and check.
And you turn off things you think are there, and you wait a few days, and you check the bill again.
And you go and turn more things off, and the cycle repeats.
Or, alternately, wait until the end of the month and wonder in perpetuity why you're being billed 48 cents a month without ever being clear on why. Restarting the
laptop is a lot more straightforward. I also want to call out some of my own bias on this, where
I used to be a big believer in being able to build and deploy and iterate on things locally because,
well, what happens when I'm in a plane with terrible Wi-Fi? Well, in the
before times, I flew an awful lot and was writing a fair bit of cloudy nonsense, and I still never
found that to be a particular blocker on most of what I was doing. So it always felt a little bit
precious to me when people were talking about, well, what if I can't access the internet to
wind up building and deploying these things. It's now 2023. How
often does that really happen? But is that a use case that you see a lot of?
It's definitely a fair point. And probably like 95% of cloud development these days is done in a
high internet bandwidth environment, maybe some corporate network where you have really fast
internet access. But that's only a subset, I guess, of the world out there, right? So there
might be situations where, you know, you may have bad connectivity. Also, maybe you're living in a remote
region, maybe you're traveling, even, right? So there are more and more people who are just
digital nomads, quote-unquote, right? Who just like to work in remote places.
You're absolutely right. My bias is that I live in San Francisco. I have symmetric gigabit internet
at home. There's not a lot of scenarios in my day-to-day
life, except when I'm on the train or the bus traveling through the city, because thank you,
Verizon, where I have impeded connectivity. Right. Yeah, totally. And I think the other
aspect of this is the developers just like to have things locally, right? Because it gives them the
feeling of better control over the code, like being able to integrate it into the IDEs, setting breakpoints, having these quick cycles
of iterations. And again, there is more and more tooling coming up in the
cloud ecosystem, but it's still inherently remote execution that just takes a round trip
of uploading your code, deploying, and so on. And that's basically the pain point that we're addressing with LocalStack. One thing that did surprise me as well
was discovering that there was a lot more appetite for this sort of thing
in enterprise scale environments. I mean, some of the reference customers that you have on your
website include divisions of the UK government and 3M, you know, the post-it note people, as well as a number of
other very large environments. And at first, that didn't make a whole lot of sense to me,
but then it suddenly made an awful lot of sense. Because it seems, and please correct me if I'm
wrong, that in order to use something like this at scale and use it in a way that isn't more or
less getting it into a point where the administration
of it is more trouble than it's worth, you need to progress past a certain point of scale.
An individual developer on their side project is likely just going to iterate against AWS itself,
whereas a team of thousands of developers might not want to be doing that because they almost
certainly have their own workflows that make that process high friction.
Yeah, totally.
So what we see a lot, especially in larger enterprises, is dedicated teams, like developer experience teams, whose main job is to really set up a workflow and environment where developers
can be most productive.
And this can be, you know, on one side of setting up automated pipelines, provisioning
maybe AWS sandbox and test accounts.
And for some of these teams, when we introduce LocalStack, it is really a game changer,
because everything becomes much more decoupled and distributed.
You can basically configure your CI pipeline to just spin up the container, run your tests,
and tear it down again afterwards.
So there are fewer dependencies.
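As a rough sketch of that CI pattern, assuming Docker and pytest are available on the build agent (the container options, timeout, and names here are illustrative):

    # conftest.py: spin up LocalStack for the test session, tear it down after.
    import subprocess
    import time

    import boto3
    import pytest

    ENDPOINT = "http://localhost:4566"

    def s3_client():
        return boto3.client("s3", endpoint_url=ENDPOINT, region_name="us-east-1",
                            aws_access_key_id="test", aws_secret_access_key="test")

    @pytest.fixture(scope="session")
    def localstack():
        container_id = subprocess.check_output(
            ["docker", "run", "-d", "--rm", "-p", "4566:4566", "localstack/localstack"]
        ).decode().strip()
        try:
            # Poll until the emulator answers API calls.
            for _ in range(60):
                try:
                    s3_client().list_buckets()
                    break
                except Exception:
                    time.sleep(1)
            yield ENDPOINT
        finally:
            subprocess.run(["docker", "stop", container_id], check=False)

    def test_bucket_roundtrip(localstack):
        s3 = s3_client()
        s3.create_bucket(Bucket="ci-test-bucket")
        assert "ci-test-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]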
And also one aspect
to consider is the aspect of cloud approvals. A lot of companies that we work with have, you know,
very stringent processes around even getting access to the cloud. Some SRE team needs to
enable their IAM permissions and so on. With LocalStack, you can just, you know, get started from
day one, get productive, and start testing from your local machine. So I think those are
the patterns that we see a lot in especially larger enterprise environments as well, where there might be
some regulatory barriers and just process-wise steps as well. When I started playing with LocalStack
myself, one of the things that I found disturbingly irritating is that there's a lot that AWS gets largely right with its AWS command line utility.
You can stuff a whole bunch of different options into the config for different profiles,
and all the other tools that I use mostly wind up respecting that config.
The few that extend it add custom lines to it, but everything else is mostly well-behaved and ignores the things it doesn't understand. But there is no facility that lets you say, for this particular profile,
use this endpoint for AWS service calls instead of the normal ones in public regions. In fact,
to do that, you effectively have to pass specific endpoint URLs as arguments, and I believe the
syntax on that is not globally
consistent between different services. It just feels like a living nightmare. At first, I was
annoyed that you folks wound up having to ship your own command line utility to wind up interfacing
with this. Like, why don't you just add a profile? And then I tried it myself and, oh, I'm not the
only person who knows how this stuff works that has ever looked at this and had that idea.
No, it's because AWS is just unfortunate in that respect.
That is a very good point.
And you're touching upon one of the major pain points that we have, frankly, with the ecosystem.
So there are some pull requests against the AWS open source repositories for the SDKs and various other tools,
where folks, not only LocalStack, but other folks in the community,
have asked for introducing, for example,
an AWS endpoint URL environment variable.
These pull requests, unfortunately, were never merged.
So it would definitely make our lives a whole lot easier.
But so far, we basically have to maintain these
wrapper scripts, awslocal and cdklocal,
which just, you know,
point the client to the local endpoints.
It's a good workaround for now, but I would assume and hope
that the world is going to change in the upcoming years.
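Until something like that lands in the SDKs, a common workaround besides the wrapper scripts is a small factory in your own code. The AWS_ENDPOINT_URL name below mirrors the variable proposed in those pull requests; in this sketch it is read by our own helper, not by the SDK itself.

    import os

    import boto3

    def make_client(service: str, **kwargs):
        # Honor an endpoint override (e.g. http://localhost:4566 for LocalStack).
        # When the variable is unset, endpoint_url is None and boto3 falls
        # through to the real AWS endpoints.
        endpoint = os.environ.get("AWS_ENDPOINT_URL")
        return boto3.client(service, endpoint_url=endpoint, **kwargs)

    dynamodb = make_client("dynamodb")  # local or real, depending on the environment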
I really hope so, because everything else I can think of is just bad.
The idea of building a custom wrapper around the AWS command line utility
that winds up checking the profile section.
Oh, if this profile is that one, call out to this tool.
Otherwise, it just becomes a pass-through.
That has security implications that aren't necessarily terrific,
you know, in large enterprise companies that care a lot about security.
Yeah, pretending to be a binary you're not is usually the kind of thing
that makes people sad when security politely kicks their door in.
Yeah, we actually have pretty big hopes for the v3 wave of the AWS SDKs,
because there is some restructuring happening with the endpoint resolution.
And also, you can by now have special resolvers for endpoints in your profile.
But still, the case of just pointing all the SDKs, the CLI, to a custom
endpoint is just not yet resolved. And this is, frankly, quite disappointing, actually.
While we're complaining about the CLI, I'll throw in one of my recurring issues with it. And
I would love for it to adopt the Linux slash Unix paradigm of having a config.d directory that you
can reference from within the primary config file.
And then any file within that directory
in the proper syntax winds up getting adopted
into what becomes a giant composable config file
generated dynamically.
The reason being, I can have entire lists of profiles
in separate files that I could then wind up dropping
in and out on a client by client basis.
So I don't
inadvertently expose who some of my clients are in the event that that winds up being part of the
way that they have named their AWS accounts. That is one of those things I would love, but it feels
like it's not a common enough use case for there to be a whole lot of traction around it. And I
guess some people would make a fair point if they were to say that the AWS CLI
is the most widely deployed AWS open source project, even though all it does is give money
to AWS more efficiently. Yeah, great point. I think, like, having some way to customize and
mingle or mangle your configurations in an easier fashion would
be super useful. I guess it might be a slippery slope to getting into something like, I don't know,
Helm for EKS and like really having to maintain a whole templating language for these configs.
But I certainly agree with you that just at least having plug points for being able to customize
the behavior of the SDKs and CLIs would be extremely helpful and valuable.
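To sketch what that could look like today: the AWS CLI has no config.d support, so the closest approximation is a script of your own that composes the primary config file from fragments. Everything here, the directory name, the file glob, and the output path, is hypothetical and illustrative.

    # compose_aws_config.py: illustrative only; note that this overwrites
    # ~/.aws/config, so keep a backup of any hand-maintained config.
    from pathlib import Path

    config_dir = Path.home() / ".aws" / "config.d"  # hypothetical drop-in directory
    target = Path.home() / ".aws" / "config"

    # Concatenate fragments in sorted (predictable) order, e.g. one profile
    # list per client, so fragments can be dropped in and out per engagement.
    fragments = sorted(config_dir.glob("*.conf"))
    target.write_text("\n\n".join(f.read_text() for f in fragments))
    print(f"composed {len(fragments)} fragment(s) into {target}")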
This is not, unfortunately, my first outing with the idea of trying to have AWS APIs done locally.
In fact, almost a decade ago now, I did a build-out at a very large company of a, well,
I would say that the build-out was not itself very large. It was about 300 nodes that
were all running Eucalyptus, which, before it died on the vine, was imagined as a way of just
emulating AWS APIs locally, done in Java, as I recall, and exposing local resources in ways that
comported with how AWS did things. So the idea being that you could write configuration
to deploy any infrastructure you wanted in AWS,
but also treat your local data center the same way.
That idea, unfortunately, did not survive in the marketplace,
which is kind of a shame on some level.
What was it that inspired you folks to wind up building this
with an eye towards local development,
rather than running this as a private cloud in your data center instead?
Yeah, very interesting.
And I do also have some experience from my past university days with Eucalyptus and OpenStack also, you know, running some workloads in an on-prem cluster.
I think the main difference is, first of all, these systems were notoriously hard to set up and maintain, right? So, lots of
moving parts: you had your image server, your compute system, and then your messaging subsystems.
Lots of moving parts. I wanted to have everything basically much more monolithic and in a single
container, and Docker really provides a great platform for us to just create everything
in a single container, spin it up locally, and make it very lightweight and easy to use.
But really, in the first days of LocalStack, the idea actually came from
a use case of somebody on our team back then.
I was working at Atlassian in the data engineering team, and we had folks in the team who were
commuting to work on the train.
And it was literally this use case that you mentioned before about being able to work
basically offline and on your commute.
And this is kind of where the first lines of code were written.
And then kind of the idea evolved from there.
We put it into the open source
and then kind of it was growing over the years.
But it really started not
as an on-prem, heavyweight server,
but as a lightweight system
that is easily portable across different systems as well.
That is a good question.
Very often when I'm using various tools that are aimed
at development use cases, it is very clear that one particular operating system is invariably going
to be the first class citizen and everything else is a best effort. It might work, it might not.
Does LocalStack feel that way? And if so, what's the operating system that you want to be on?
I would say we definitely work best on macOS and Linux. It also works really well on Windows.
But I think given that some of our tooling in the ecosystem is also pretty much geared towards
Unix systems, I think those are the platforms it really works well with. Again, on the other hand,
Docker is really a platform that helps us a lot in being compatible across operating systems and also CPU architectures. We have multi-arch
builds now for AMD64 and ARM64. So I think in that sense, we're pretty broad in terms of the
compatibility spectrum. I do not have any insight into how the experience goes on Windows, given
that I haven't used that operating system in anger for, wow, 15 years now. But I will say that it's been top-flight on macOS, which is what I
spend most of my time depressed that I'm using, but for desktop experiences, it seems to work out
fairly well. That said, having a focus on Windows seems like it would absolutely be a hard
requirement, given that so many developer
workstations in very large enterprises tend to skew very Windows-heavy. My hat is off to people
who work with Linux and Linux-like systems in environments like that, where even line endings
become psychotically challenging. I don't envy them their problems, and I have nothing but
respect for people who can power through it.
I never had the patience.
Yeah, same here.
I mean, definitely, I think everybody has their favorite operating system.
For me, it's also been mostly Linux and Mac in the last couple of years.
But certainly, we definitely want to be broad in terms of the adoption and working with
larger enterprises.
Often, you know, we really want to fit into the existing landscape and environment
that people work in. And we solve this with platform abstractions like Docker, for example, as I
mentioned, and also Python, which some of our tooling is written in. It's
also pretty nicely supported across platforms. But I do feel just the same way as you; I
haven't been working with Windows for quite some time, especially for development purposes. What have you noticed your customer usage patterns
slash requests have been saying
about AWS service adoption?
I have to imagine that everyone cares
whether you can mock S3 effectively,
EC2, DynamoDB probably, SQS, of course,
but beyond a very small baseline level of offering, what have you seen surprising demand for as customer implementation of more esoteric services continues to climb?
Yeah, so these days it's actually pretty insane the level of coverage we already have for different services, including some very exotic ones like QLDB, as I mentioned, Kafka.
We even have managed Airflow, for example.
I mean, a lot of these services
are essentially mostly like wrappers around the API.
This is essentially also what AWS is doing, right?
So they're providing an API
that basically provisions some underlying resources,
some infrastructure.
Some of the more interesting parts,
I guess, we've seen in the data
or big data ecosystem. So, things like Athena and Glue; we've invested quite a lot of time in making
those available in LocalStack as well. So you can have your CSV files or JSON files in an S3
bucket and query them from Athena with SQL, basically. And that means that
especially for these big data jobs
that are very heavyweight on AWS,
you can iterate very quickly in LocalStack.
So this is where we're seeing a lot of adoption recently.
And then also, obviously, things like Lambda and ECS,
like all the serverless and containerized applications,
but I guess those are the more mainstream ones.
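As a sketch of that workflow, assuming a LocalStack setup with Athena enabled (it is one of the more advanced services mentioned earlier), querying a CSV file in a local S3 bucket might look roughly like this. The bucket and table names are illustrative, and a real script would poll for query completion before reading results.

    import boto3

    conf = dict(endpoint_url="http://localhost:4566", region_name="us-east-1",
                aws_access_key_id="test", aws_secret_access_key="test")

    # Put a small CSV file into a local S3 bucket.
    s3 = boto3.client("s3", **conf)
    s3.create_bucket(Bucket="demo-data")
    s3.put_object(Bucket="demo-data", Key="users/users.csv",
                  Body=b"1,Ada\n2,Grace\n")

    athena = boto3.client("athena", **conf)
    results = {"OutputLocation": "s3://demo-data/results/"}

    # Register the CSV as an external table, then query it with plain SQL.
    ddl = """CREATE EXTERNAL TABLE users (id INT, name STRING)
             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
             LOCATION 's3://demo-data/users/'"""
    athena.start_query_execution(QueryString=ddl, ResultConfiguration=results)
    query = athena.start_query_execution(QueryString="SELECT * FROM users",
                                         ResultConfiguration=results)
    print(query["QueryExecutionId"])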
I imagine you probably get your fair share of requests
for things like CloudFormation or CloudFront, where, "This is great, but can you go ahead and add a very lengthy sleep right here, just because it returns way too fast, and we don't want people to get their hopes up when they use the real thing?"
On some level, it feels like exact replication of the AWS customer experience isn't quite in line with what makes sense
from a developer productivity point of view.
Yeah, that's a great point.
And I'm sure that a lot of code out there
is probably littered with sleep statements
that are just tailored to the specific timings in AWS.
In fact, we recently opened an issue
in the AWS Terraform provider repository
to add a configuration option
to configure the timings that Terraform is using for the resource deployment. So, just as an example,
an S3 bucket creation takes 60 seconds, like, around a minute, against real AWS. Against
LocalStack, it's a second, basically, right? And the AWS Terraform provider has these relatively slow cycles
of checking whether the bucket
has already been created.
And we want to get that configurable
to actually reduce the time
it takes for local development, right?
So we have an open sort of feature request
and we're probably going to contribute
to the Terraform repository.
But definitely,
I share the sentiment
that a lot of the tooling
and ecosystem is built
and tailored and optimized towards the experience against the cloud, which often is just slow.
And that's what it is, right?
One thing that I didn't expect, though in hindsight it is blindingly obvious, is your support for a variety of different frameworks and deployment methodologies.
I found that it's relatively straightforward to get up and running with the CDK
deploying to local stack, for instance.
And in hindsight, of course, that's obvious.
When you start out down that path,
though, it's, well, you tend to think,
at least I don't tend to think, in that particular
way. It's, well, yeah, it's just
going to be a console-like experience, or I wind up
doing CloudFormation or Terraform.
But yeah, the world is advancing
relatively quickly, and it's nice to see that you are very comfortably
keeping pace with that advancement.
Yeah, true.
And I guess for us, it's really like the level of abstraction
is sort of increasing.
So once you have a solid foundation
with CloudFormation implementation,
you can leverage a lot of tools that are sitting on top of it.
CDK, serverless frameworks.
So CloudFormation is almost becoming like the assembly language of the AWS cloud, right?
And if you have very solid support for that, a lot of the tools in the ecosystem will
natively be supported on LocalStack.
And then, you know, you have things like Terraform and even the Terraform CDK, you know, some
of these derived versions of Terraform, which also are very straightforward because you
just need to
point the target endpoint to localhost. And then the rest of the deployment
loop just works out of the box, essentially. So I guess for us, it's really mostly being able to
focus on like the core emulation, making sure that we have very high parity with the real services.
We spend a lot of time and effort on what we call parity testing and snapshot testing,
where we make sure that our API responses are identical and really the same as they are in AWS.
And this really gives us very strong confidence that a lot of the tools in the ecosystem are working out of the box against LocalStack as well.
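As a toy illustration of the parity-testing idea: one client pointed at a real sandbox account and one at LocalStack, asserting that both backends return the same response shape. This is only a sketch; it assumes working AWS credentials in the environment, and the comparison of top-level keys is illustrative, far cruder than real snapshot testing.

    import boto3

    def sqs_client(endpoint_url=None):
        # endpoint_url=None means the real AWS endpoint.
        return boto3.client("sqs", region_name="us-east-1",
                            endpoint_url=endpoint_url)

    real = sqs_client()                          # real AWS sandbox account
    local = sqs_client("http://localhost:4566")  # LocalStack

    real_resp = real.create_queue(QueueName="parity-check")
    local_resp = local.create_queue(QueueName="parity-check")

    # Crude parity check: both responses should expose the same top-level keys.
    strip = {"ResponseMetadata"}
    assert set(real_resp) - strip == set(local_resp) - strip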
I would also like to point out that I am also a proud LocalStack contributor at this point, because at the start of this year, I noticed,
ah, on one of the pages, the copyright year was still saying 2022 and not 2023. So, a single-character
pull request, and oh yes, I am on the board now, because that is how you ingratiate yourself with an open-source
project. Yeah, eternal fame to you, and kudos for your contribution. But in all seriousness, we do have quite an active community of contributors.
We are an open source first project.
We were born in the open source.
Maybe just touching upon this for a second: we use GitHub for our repository.
We use a lot of automation around pull requests and service owners.
We also participate in things like Hacktoberfest, which we took part in last year to really encourage contributions from the community.
We also host regular meetups with folks in the community to really make sure that this is an active ecosystem where people can contribute and make contributions like the one that you did with documentation and all that.
But also like actual features, testing,
and, you know, contributions at different levels.
So really kudos and shout out
to the entire community out there.
Do you feel that there's an inherent tension
between being an open source product
as well as being a commercial product
that is available for sale?
I find that a lot of companies feel vaguely uncomfortable with the various trade-offs
that they make going down that particular path, but I haven't seen anyone in the community upset
with you folks, and it certainly hasn't seemed to act as a brake on your enterprise adoption either.
That is a very good point. So we're certainly following an open-source-first model: the core
of the codebase is available in the community version, and then we have pro
extensions, which are commercial, and you basically, you know, sign up for a
license. We are certainly having a lot of discussions about how to evolve this licensing model going forward,
you know, which parts to feed back into the community version of LocalStack. It's certainly an
ongoing, evolving model as well.
But certainly so far, the support from the community has been great.
And we definitely focus on getting a lot of the innovation
that we're doing back into our open source repo
and making sure that it's really not only open source,
but also open contribution, for folks to make their contributions.
We also integrate with other third-party libraries.
We were built on the shoulders of giants,
if I may say so:
other open source projects
that are doing great work with emulators.
To name just a few,
there's Moto, which is a great project
that we use and depend upon.
We have certain mocks and emulations for Kinesis,
for example, Kinesis Mock,
and a bunch of other tools
that we've been leveraging over the years,
which are really great community efforts out there.
And it's great to see such an active community that's really making this vision possible of a truly local emulated cloud that gives the best experience to developers out there.
So as of now, when people are listening to this and the episode gets released, v2
of LocalStack is coming out.
What are the big differences
between LocalStack and now
LocalStack 2,
Electric Boogaloo, or whatever it is
you're calling the release?
Right. So we're
super excited to release our
v2 version of LocalStack.
The planned release date is end of March 2023,
so hopefully we will make that timeline. We did release our first major version of LocalStack in July
2022, so it's been roughly seven months since then, and we try to have a cadence of roughly six to
nine months for the major releases. And what you can expect is: we've invested a lot of time and
effort in the last couple of months and in the last year to really make it a very rock solid experience with
enhancements in the current services, a lot of performance optimizations. We've invested a lot
in parity testing. So as I mentioned before, parity is really important for us to make sure
that we have high coverage of the different services and that they behave the same way as in AWS.
And we're also putting out an enhanced and completely polished version of our Cloud Pods
experience. Cloud Pods are a state management mechanism in LocalStack. By default,
the state in LocalStack is ephemeral, so when you restart the instance, you basically have a fresh
state. But with Cloud Pods, we enable our users to take a persistent snapshot of the state,
save it to disk or to a server, and easily share it with team members.
And we have a very polished experience with community Cloud Pods
that makes it very easy to share the state among team members and with the community.
So those are just some of the highlights of things that we're going to be putting out in V2.
And we're super excited to
have it done by, you know, end of March, so stay tuned for the v2 release.
I am looking forward to seeing how the experience shifts and evolves. I really want to thank you for taking time out
of your day to wind up basically humoring me and effectively re-covering ground that you and I
covered about a year and a half ago now.
If people want to learn more, where should they go?
Yeah, so definitely our Slack channel is a great way
to get in touch with the community,
also with the LocalStack team,
if you have any technical questions.
So you can find it on our website;
it's at slack.localstack.cloud.
We also host a Discourse forum
at discuss.localstack.cloud,
where you can just make feature requests and participate in the general conversation.
And we do host monthly community meetups.
Those are also available on our website.
If you sign up, for example, for the newsletter,
you will be notified when we have these webinars.
They take about an hour or so,
and we often have guest speakers
from different companies,
people who are using
local cloud development,
just sharing their experiences
of how the space is evolving.
And we're always super happy to accept contributions
from the community in these meetups as well.
And last but not least,
our GitHub repository is a great way
to file any issues you may have,
feature requests, and just get involved with the project itself.
And we will, of course, put links to that in the show notes.
Thank you so much for taking the time to speak with me today.
I appreciate it.
Thank you so much, Corey.
It's been a pleasure.
Thanks for having me.
Waldemar Hummer, CTO and co-founder at LocalStack.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast,
please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast,
please leave a five-star review
on your podcast platform of choice,
along with an angry comment,
presumably because your compensation structure
requires people to spend
ever-increasing amounts of money
on AWS services.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business,
and we get to the point.
Visit duckbillgroup.com to get started.