Screaming in the Cloud - Reliable Software by Default with Jeremy Edberg
Episode Date: July 10, 2025

Reliable software shouldn't be an accident, but for most developers it is. Jeremy Edberg, CEO of DBOS and the guy who scaled Reddit and Netflix, joins Corey Quinn to talk about his wild idea of saving your entire app into a database so it can never really break. They chat about Jeremy's "build for three" rule, a plan for scale without going crazy, why he set Reddit's servers to Arizona time to dodge daylight saving time, and how DBOS makes your app as tough as your data. Plus, Jeremy shares his brutally honest take on the distributed systems cargo cult, autonomous AI testing, and why making it easy for customers to leave actually keeps them around.

Public Bio: Jeremy is an angel investor and advisor for various incubators and startups, and the CEO of DBOS. He was the founding Reliability Engineer for Netflix and before that he ran ops for reddit as its first engineering hire. Jeremy also tech-edited the highly acclaimed AWS for Dummies, and he is one of the six original AWS Heroes. He is a noted speaker in serverless computing, distributed computing, availability, rapid scaling, and cloud computing, and holds a Cognitive Science degree from UC Berkeley.

Show Highlights
(02:08) - What DBOS actually does
(04:08) - "Everything as a database" philosophy and why it works
(08:26) - "95% of people will never outgrow one Postgres machine"
(10:13) - Jeremy's Arizona time zone hack at Reddit (and whether it still exists)
(11:22) - "Build for three" philosophy without over-engineering
(17:16) - Extracting data from mainframes older than the founders
(19:00) - Autonomous testing with AI trained on your app's history
(20:07) - The hardest part of dev tools
(22:00) - Corey's brutal pricing page audit methodology
(27:15) - Why making it easy to leave keeps customers around
(34:11) - Learn more about DBOS

Links
DBOS website: https://dbos.dev
DBOS documentation: https://docs.dbos.dev
DBOS GitHub: https://github.com/dbos-inc
DBOS Discord community: https://discord.gg/fMqo9kD
Jeremy Edberg on Twitter: https://x.com/jedberg?lang=en
AWS Heroes program: https://aws.amazon.com/developer/community/heroes/
Transcript
And there's a really interesting use case around autonomous testing.
So, you know, there's automated testing, which is what everyone's familiar with, right?
You write a bunch of tests.
Then there's autonomous testing, where you essentially teach an AI how to test your system,
which then in theory finds a bunch of stuff that you didn't even think of to test for.
And the greatest data set to train one of those AIs is all of your previous inputs and outputs.
And so you can feed that into an autonomous testing system
and train this perfect testing AI basically.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
My guest today presumably needs no introduction for a
large swath of folks listening and/or watching this, but we can't ever
assume that. Jeremy Edberg is a whole bunch of things: jedberg on the internet,
and he has been around forever. Today he's the CEO of something called DBOS that
I'm certain we're going to get into, but he was the founding reliability engineer for Netflix once upon a time.
He was the first engineering hire at Reddit, a small website that you might have heard
of.
He's one of the original six AWS heroes.
He's been doing this a very long time.
Jeremy, thank you for taking the time to speak with me.
Thank you for having me.

Wiz transforms cloud security and is on a mission to help every organization rapidly identify and remove critical risks in their cloud environments.
Purpose-built for the cloud, Wiz delivers full-stack visibility, accurate risk prioritization, and enhanced business agility.
In other words, nobody beats the Wiz.
Leading organizations from the Fortune 100 to high-growth companies all trust Wiz to innovate at the pace of cloud while staying secure. Screaming in the Cloud podcast listeners,
that would be you, can also get a free cloud security health scan by going to wiz.io slash scream. That's w-i-z dot i-o slash scream.

So I guess we could always start at the beginning and work forward, but that's dull. I like starting at the present. Maybe we'll go backwards and see what happens. What's a DBOS, other than the thing you're CEO of?

Yeah, so DBOS is a
company that makes an open source library called Transact, which helps people build reliable software by default.
And then we have a product that will help you run that
Transact application anywhere,
cloud, our cloud, on-prem, whatever it is.
And then we also offer a cloud for you to host
your Transact applications
if you don't wanna do it for yourself.
It seems that most people build reliable software
entirely by accident, but doing it by default
does seem like a decent way of approaching things. It is D-B-O-S, and I am pronouncing
that as "D-boss" correctly? I mean, I know you did work at Amazon for a time, so I don't
know if you're doing the whole AMI-should-be-mispronounced-as-"Ah-mi" thing or not, but
you know, D-B-O-S, D-boss, really six of one, half a dozen of the other. I haven't formed
an opinion on it yet.
There's a story there.
It used to be DBOS because the project started
as a database operating system.
Our co-founders were the creators of Postgres and Spark.
So originally that was the premise,
but what we realized was one,
that would take over a decade to build,
and two, it would be very hard to commercialize.
And so what we realized is that the most interesting part of it was
keeping state while running.
And so that's what we commercialized.
So as you look at this entire thing and look at the landscape
out there, it sounds almost like what you're building is sort of
a modern Heroku style thing, which, you know, everyone has
been trying to build except the company that bought Heroku.
But I digress.
They're sort of coming out of hibernation for a while.
But what makes yours different?
So that is actually what we were building,
but what we realized is that the most interesting part
is the library that you use to build with.
And so our premise is that everything
should be in the database.
Your application running state
should be as durable as your data.
And so that's what we do.
We essentially checkpoint your software into the database
so that you can roll back at any time,
resume from a crash, stuff like that.
And so what ends up happening is every piece
of your business process is a row in the database table.
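The checkpointing idea Jeremy describes can be sketched with nothing but the standard library: every completed step of a workflow becomes a row in a table, so a restarted process resumes where it left off instead of redoing work. This is an illustrative toy, not DBOS's actual implementation; the table layout and function names here are made up.

```python
import json
import sqlite3

# Each checkpointed step is one row, keyed by (workflow, step name).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE steps (workflow_id TEXT, step TEXT, output TEXT, "
    "PRIMARY KEY (workflow_id, step))"
)

def run_step(workflow_id, name, fn):
    # If this step was already checkpointed (e.g. before a crash),
    # return the recorded output instead of running the function again.
    row = conn.execute(
        "SELECT output FROM steps WHERE workflow_id = ? AND step = ?",
        (workflow_id, name),
    ).fetchone()
    if row:
        return json.loads(row[0])
    result = fn()
    conn.execute(
        "INSERT INTO steps VALUES (?, ?, ?)",
        (workflow_id, name, json.dumps(result)),
    )
    conn.commit()
    return result

calls = {"charge_card": 0}

def charge_card():
    calls["charge_card"] += 1
    return {"status": "charged"}

first = run_step("order-42", "charge_card", charge_card)
# Simulate a retry after a crash: the checkpoint replays; the card is not charged twice.
second = run_step("order-42", "charge_card", charge_card)
```

Because the checkpoint lives in the database, the workflow's progress is exactly as durable as the data it touches, which is the whole premise.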
This complements my everything is a database
if you hold it wrong life philosophy.
So I like this approach very much.
Yeah, and that's essentially what it is.
And so that's our main thing is that library,
getting you to build things reliably by default,
and then helping you run them.
We do have the cloud,
and we were trying to be the easiest, best place,
but it turns out that that's really hard.
And most importantly, it turns out that people prefer
the privacy and reliability of running themselves.
Well, through no fault of your own, you've picked the wrong freaking day to have this conversation with me, because yesterday I was programming in the only two languages I know, brute force mixed with enthusiasm, to build this dumb thing to stuff into a Lambda function where it will live.
And once again, I am dismayed to discover that there's no good way to deploy Lambda functions from development into production.
There are merely different flavors of bad.
Originally I used the serverless framework, which went in some strange directions.
I don't use it anymore.
The CDK turns everything into a dependency update issue every time you want to use it.
The console gets you yelled at, but people
use it like that anyway.
And the only answer that I found that works reliably for me
is I got something working with the SAM CLI many years ago,
and I've just copied and pasted that template file around
ever since with minimal changes, which is not ideal,
but also honest.
Does it address that particular side of the world,
or am I thinking about this from a different angle?
No, that absolutely is something we address.
So we really focus on developer experience.
And one of the biggest problems with serverless is the one you just said,
is that the local development story is terrible and the local testing story is terrible.
And so we focused on that.
We made it so that when you use our library,
it'll run locally exactly the same way
as it runs in the cloud.
So if you use our cloud, all you have to do is deploy.
If you use your own cloud, it's literally
the same application in production and in test.
And it works the same way.
And it's all one file.
That's another big thing.
When you use us, you can do pretty much everything
in one file.
We have crons right there, so you
don't have to use another service for that. We have queues built right in, so you don't have to use another service
for that. Because that's another big problem with serverless, right? Lambda is lambda and it's
compute. And then, oh, I need a queue. Okay, you need this other thing. Oh, I need storage. Okay,
you need this other thing. Oh, I need cron jobs. You need this other thing. And it suddenly becomes
this huge mess of config files
and tiny little programs that get deployed
all over multiple services.
And this is the problem with the CDK as well,
where suddenly you have the infrastructure deployment code
intermingled with the actual application code itself.
So then you wind up with, it's not really clear,
at least to me in many projects I see,
where the deployment stuff starts and stops
versus the application logic itself.
So we actually lean into that.
With us, like I said, you can do an entire fully reliable
program with one file because we use decorators.
So you decorate your code.
You say this function needs to be a step of this workflow.
You can do a cron decorator, a queue decorator,
and it's all in one place.
And so really, you don't even have to worry
about the infrastructure.
It's essentially derived from your code.
And we actually think this is better
for the developer experience,
because then you don't have to think
about infrastructure at all.
All you do is you write your code,
and you say step one, step two, step three, cron job, queue,
and the infrastructure is just taken care of for you.
You don't have to manage it anymore
because it's all in the database.
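The decorator style described above can be sketched in plain Python. This is a stdlib-only sketch of the pattern, not the real DBOS Transact API; the `step` and `cron` decorator names and the registry are invented for illustration.

```python
import functools

# A registry standing in for the platform: decorators declare what the
# infrastructure should do, so the runtime can derive workflow steps and
# cron schedules from the code itself, all in one file.
REGISTRY = {"steps": [], "crons": []}

def step(fn):
    # Mark a function as a workflow step.
    REGISTRY["steps"].append(fn.__name__)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # A real runtime would checkpoint inputs and outputs to the database here.
        return fn(*args, **kwargs)
    return wrapper

def cron(schedule):
    # Mark a function to run on a schedule; no external cron service needed.
    def decorate(fn):
        REGISTRY["crons"].append((schedule, fn.__name__))
        return fn
    return decorate

@step
def fetch_data():
    return [1, 2, 3]

@cron("0 * * * *")  # hourly
def send_report():
    return "report sent"
```

The point of the design is that the decorated code is the single source of truth: infrastructure is derived from what the decorators register, rather than maintained in separate config files.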
What languages do you support today?
Today we support Python and TypeScript.
The only two that really matter. Sure.
I mean, Rust matters
if you want to talk about it a lot and not do anything.
That's fun.
We'll probably have Go really soon and Java by the end of the year.
Okay. Because when you say,
oh, you can write an entire program in one file,
it sounds like, yes, I can write anything in one line of code
as long as I use enough semicolons,
like same theory applies.
Well, no, I said you can write a fully reliable
distributed system in one file.
Fair enough.
Which is an even grander claim.
It really is, and this gets at sort of
one of my personal hobby horses.
I see that there's a sort of cult of distributed systems
where relatively straightforward things
that don't necessarily benefit from being distributed,
in fact, there's some drawbacks to it,
seem to be forced into that paradigm.
Where do you land on it?
So I would say that most people, for most applications,
do not need a fully distributed system.
In 95% of the cases, your application will run with one or two,
you know, executors with a database.
Right. Maybe you want two or three for load balancing, for reliability.
But you often don't need more than that.
I think that is the biggest issue: people are, you know,
especially when microservices were popular, they're waning now, but for a while,
everybody built everything as a microservice,
you know, from scratch.
A lot of people, like, so we do everything in Postgres,
right, and people are like, well, what happens
when you outgrow your one Postgres machine?
You know what, 95% of people are never gonna outgrow
that one Postgres machine.
It turns out that they can do quite a lot with one machine.
I didn't expect to hear this coming from you
just based upon the read of your resume
I did at the beginning.
You've worked at places where, if folks launch something publicly,
historically you're gonna get 10 million users on day one.
And that is something that leads to its own,
I guess, life philosophy that the rest of us
don't really have to deal with.
What if you get so many users, you outgrow a single PostgreSQL database?
It's, well, I should be so lucky.
That sounds like a problem that I can solve with all the money that comes pouring in.
I see a lot of folks doing the early optimization stuff, which is weird, and I understand it cuts
against something I've been saying for years that people argue with in the other direction, which is even when
you're building something small, scrappy, and local, the only true time zone for
servers and databases is UTC because that is so freaking hard to unwind. Yes, I
know 99% of systems will never have to deal with this, but that 1% will keep you
up for years. Funny you mentioned that. So at Reddit, I always kept all of our servers in Arizona time.
Why Arizona time?
Oh, so you're the bastard.
Yep, yep. Because it was one hour off of where we were for four months of the year,
and the remaining eight months of the year, it was the same as where we are, which was Pacific time.
You didn't have to worry about daylight saving shifts.
No, no daylight saving shift in Arizona,
so I didn't have to deal with it at all.
And calculating what the log time was versus the wall clock
was super easy because most of the year it's the same.
And some of the year it was subtract or add one.
Right. So I almost never kept anything in UTC
because of that, because I live in the Pacific time zone.
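Jeremy's Arizona trick checks out with the standard library's zoneinfo: America/Phoenix keeps the same UTC offset year-round, matches Pacific time while daylight saving is in effect (roughly eight months), and sits exactly one hour ahead of it in winter.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

phoenix = ZoneInfo("America/Phoenix")       # Arizona: no daylight saving time
pacific = ZoneInfo("America/Los_Angeles")   # Pacific: shifts twice a year

winter = datetime(2024, 1, 15, 12, 0)
summer = datetime(2024, 7, 15, 12, 0)

# Arizona's UTC offset never changes...
same_all_year = (
    winter.replace(tzinfo=phoenix).utcoffset()
    == summer.replace(tzinfo=phoenix).utcoffset()
)

# ...and during daylight saving it equals Pacific time exactly.
matches_pacific_in_summer = (
    summer.replace(tzinfo=phoenix).utcoffset()
    == summer.replace(tzinfo=pacific).utcoffset()
)

# In winter, Phoenix is one hour ahead of Pacific.
winter_gap = (
    winter.replace(tzinfo=phoenix).utcoffset()
    - winter.replace(tzinfo=pacific).utcoffset()
)
winter_gap_hours = winter_gap.total_seconds() / 3600
```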
OK, if I were to go and talk to people at Reddit today,
since you were the first ops person there a decade,
damn near two decades ago,
are there still vestiges there
of things being kept in Arizona time
because of that decision you made early on?
I do wonder.
I know that the guys that came after me
for the next decade at least, kept it in Arizona time.
I don't know if they've still done that now,
but that would actually be a fun question to think about.
I know a few people over there.
I will look into it.
Okay.
What I do tell people is you should build
assuming that you're going to have to scale.
So I've been saying this for decades, build for three,
right?
Assume that at some point
you're gonna have three of everything.
And that's a good optimization to make early on,
but don't actually build it all.
Like you don't have to make three,
you don't have to use three availability zones
or three servers right off the bat,
but assume you will have that.
So don't use shared memory locally, stuff like that, right?
Like put things into a queue,
put things into a storage place that can be expanded.
Those are the kinds of things
you should be doing from the start.
And not to bring it back home,
but that's basically what our library
lets you do kind of by default.
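The "build for three" advice above can be made concrete: hide coordination behind a small queue interface, so that going from one machine to three later means swapping a backend, not rewriting the application. A stdlib-only sketch; the `WorkQueue` class and the backend names mentioned in it are hypothetical.

```python
import queue

class WorkQueue:
    """A minimal queue abstraction. Today it wraps an in-process queue;
    later it could wrap SQS, Redis, or a database table without changing
    any of the code that uses it."""

    def __init__(self):
        self._q = queue.Queue()

    def put(self, item):
        self._q.put(item)

    def get(self):
        return self._q.get()

# Producers and consumers communicate only through the queue, never through
# shared in-process memory, so nothing breaks when they land on different boxes.
jobs = WorkQueue()
jobs.put({"task": "resize", "image": "a.png"})
jobs.put({"task": "resize", "image": "b.png"})
first_job = jobs.get()
```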
Yeah, and that makes a lot of sense.
It's always a balancing act,
because in the early days of building something,
you can basically spend all your time building for massive scale that will never arrive.
And that's all wasted effort and slows you down in some ways and you never actually ship anything.
There are far more companies out there that have failed from people over indexing on making the
code better and not the important problems. Like, do we have a viable business? Because when you
have a good business,
you can afford a bunch of people to come in
and rewrite things and fix your horrible time zone choices
and all kinds of other things down the road.
But it's a balance with anything.
Like, I don't need to go out of my way necessarily
to plan for massive scale,
but I also should very much not be making decisions
that are incredibly short-sighted
that I'm going to have to back out stupendously.
Because in the early days, changing architectures is as simple as changing lines on a whiteboard.
Yeah, yeah, and like, I mean, I'm the head of a startup now, and we have a small team,
and there's definitely places where it's like, you know, this is not the best practice. But we know
that. And we say, this is not the best practice. This is what we're going to have to change
when we get to the scale where it matters
and then acknowledge that and then move on.
Because we know that when we get to that scale,
we'll be able to hire the extra people we need
to make those changes.
Yeah, there's a lot to be said for that.
So I have to ask now, this is sort of a rude question
to ask any startup founder,
but I'm gonna go for it anyway.
So you're building a library that makes it very simple to build
and deploy a bunch of apps.
Awesome.
Terrific.
Great.
Where's the money?
Where does it come in, to the point where, in return for this open
source library, I cut you a check.
It seems like that's like step two, draw the rest of the owl
for an awful lot of folks.
That is a totally valid question, right?
This is the standard open core model question.
So we would love to see everybody in the world
building on our free open source library
because we just think it makes the world better.
Where does the money come in?
Once you start to need more than one of them,
it's actually complicated to operate.
And so that's what we provide.
So we provide a thing called Conductor.
You connect your multiple executors to Conductor
and Conductor manages moving the workloads around,
auto scaling, resuming, all of that stuff for you.
Or you use our cloud and pay us money for that.
And it's the same thing, right?
You get Conductor as part of using our cloud
and that manages all of your auto scaling,
VM management, all of that.
So we run our own cloud in AWS on a bare metal instance
where we run our own fleet of firecracker VMs.
And for those who don't know,
firecracker is the same thing they use to run Lambda.
And so we run our own fleet.
It's far more efficient,
so we can charge a lot less than Lambda does.
And if you use our programming paradigm
of everything is a workflow, we also don't charge you when you're not
computing, like waiting for an LLM, because there's nothing
running while you're waiting for the response. And so, you know,
if you use our cloud, it's super efficient. If you run it
locally, it's actually super efficient and private, and then
we'll help you scale and manage it. That's where the money comes in.
And that's where people pay us, they pay us for support, they
pay us for conductor, they pay us for our cloud.
And I just checked, well, you mentioned this as well.
You're MIT licensed as well, which
means that if you're planning on doing a licensing rug
pull down the road, you're doing a terrible job
setting the stage for that.
So good work.
Hey, we do not believe in that.
We believe in open source.
We believe in, we truly believe that the world
would be a better place if everybody used our library. If our company goes away, so
be it, as long as people are building reliably. Obviously, we don't want our company to go
away. We'd like to continue to exist.
What kind of applications work best on this? Because very often when I talk to folks who
are building a thing to construct applications with, they have a very specific vision in mind of what type of application it is. A lot of stuff
I build tends to be ephemeral, stateless stuff that's a lambda function and no source of
truth. That sometimes gets a lot of strange brokenness when I try and integrate those
things with other stuff. Conversely, some folks are like, well, okay, we have a giant
shared database. You should never do that. It's great. We're a bank.
We kind of have to do that.
It's a question of what are you targeting as a sweet spot?
Yeah, so we have customers all over the place.
But the main use cases that we see
are these data pipeline use cases.
So I need to get data out of one place and into another reliably
in a way so we can guarantee once and only once execution
because of the way we operate.
And that is important to a lot of people.
I need to guarantee that I got the data out of this system
and then it went into this other one, but only one time.
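The once-and-only-once guarantee described here is commonly built from at-least-once delivery plus idempotent writes: redelivering an event is harmless because the sink deduplicates it. A toy sketch using a primary-key constraint; the event IDs and payloads are invented.

```python
import sqlite3

# Simulated source feed: "evt-1" arrives twice, as happens with
# at-least-once delivery after a retry.
source = [("evt-1", "signup"), ("evt-2", "purchase"), ("evt-1", "signup")]

sink = sqlite3.connect(":memory:")
sink.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, payload TEXT)")

def deliver(event_id, payload):
    # INSERT OR IGNORE makes redelivery a no-op: the primary key rejects
    # duplicates, so each event takes effect exactly once at the sink.
    sink.execute("INSERT OR IGNORE INTO events VALUES (?, ?)", (event_id, payload))
    sink.commit()

for event_id, payload in source:
    deliver(event_id, payload)

stored = sink.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```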
And that turns out to be a fundamental problem
for AI workloads because training your AI
or doing inference, you need to make sure
that your data is moving from one place to another.
And so that is a huge,
a lot of our customers are doing that.
They're using us for their agentic AI workloads,
managing their agentic AI,
or managing extracting data from their legacy systems
into more modern, often AI systems, things like that.
So we're working with an extremely large bank
that you've definitely heard of
to extract data out of their mainframe system.
Mainframes are the curse of everything.
There's no good way around it, for better or worse.
You wind up persistently living in a weird place.
Let's put it that way.
Yeah, exactly.
Yeah, we're pretty sure that this mainframe is definitely older than our co-founders
and probably older than me.
This episode is sponsored by my own company, The Duckbill Group.
Having trouble with your AWS bill?
Perhaps it's time to renegotiate a contract with them.
Maybe you're just wondering how to predict what's going on
in the wide world of AWS.
Well, that's where The Duckbill Group comes in to help.
Remember, you can't duck the Duckbill Bill,
which I am reliably informed by my business partner
is absolutely not our motto.
As far as the, what's the next step?
What's the vision for this?
Is it designed to go to effectively do the same thing
and just keep iterating in the same direction?
Are there basically orthogonal pivots almost you can make
as you continue to grow?
Where is it going vis-a-vis where it is today?
Yeah, so what's interesting is because the way we checkpoint your software, we essentially
record all of your inputs and outputs of your functions.
And what that does is it means that we have this really interesting metadata base of inputs
and outputs of your functions.
And there's a lot you can do with that. There's a really interesting security play there. There's intrusion detection that can
happen nearly instantaneously with a simple SQL query. There is a lot of operational interest there,
because you can see the ins and outs, the workflows, how big the responses are, stuff like that,
the response times. And there's a really interesting
use case around autonomous testing. So, you know, there's automated testing, which is what
everyone's familiar with, right? You write a bunch of tests. Then there's autonomous testing,
where you essentially teach an AI how to test your system, which then in theory finds a bunch of
stuff that you didn't even think of to test for.
The greatest data set to train one of those AIs is all of your previous inputs and outputs.
You can feed that into an autonomous testing system and train this perfect testing AI basically.
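The inputs-and-outputs idea can be sketched as a replay harness: recorded production traffic becomes a regression suite for the current code. The `add` function and the recorded traces here are invented for illustration.

```python
# Recorded (inputs, output) pairs, e.g. harvested from checkpointed workflows.
recorded_traces = [((2, 3), 5), ((10, -4), 6), ((0, 0), 0)]

def add(a, b):
    # The function under test; in practice this is your real business logic.
    return a + b

def replay(fn, traces):
    # Re-run every recorded input and report any divergence from the
    # historically observed output.
    return [
        (args, expected, fn(*args))
        for args, expected in traces
        if fn(*args) != expected
    ]

failures = replay(add, recorded_traces)
# An empty failures list means current behavior matches history.
```

An AI-driven autonomous tester goes further than literal replay, generating novel inputs, but this recorded corpus is the training and validation data it would start from.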
That's the next step. Then the next next step is being the operations panel for everything on the internet, right? So it's, you know, start putting in chaos testing, start putting in other
different ways of helping you operate, storage management,
whatever it is, right. That's the long-term future: you write your code, we'll operate it for you. We're
the operations experts for you. So we'll operate your stuff.

I like the approach quite a bit.
What's hard right now? What's the challenging parts? What keeps you up at night?
So honestly, the hardest part right now is getting people to know about what we're doing.
Getting the message out. Go-to-market strategy.
That is literally the hardest thing right now.
Engineering wise, I mean, I have to say this,
I'm the CEO, but I really truly believe it.
This is the greatest engineering team I've ever worked with.
Here's my favorite example.
This is gonna ring super hollow
if it turns out that so far the company is just you.
No, no, no, no, no, no.
I have this amazing engineering team.
So we have these two co-founders who are grad students.
They came out and immediately started this company.
When I joined the company, so I wasn't there from the founding, I started a little after,
they told me, this was in July, they said, we're going to be launching Python support
on September 12th.
I was like, you have a specific day in mind three months down the line? Okay, whatever.
Literally, September 11th, that night, they deployed it, and on September 12th they announced it.
And I was shocked that they were able to pinpoint it that closely as a startup.
And this is just an example of what they can do.
So I do not worry about engineering. Some people ask me, like, oh, what do you worry about?
And I tell them, I'm not worried about engineering.
I'm completely good.
That is the hard part, especially with a dev tool, right?
Dev tools are really hard.
There's no instant aha moment like a consumer application where you just try it and you're
like, oh, this is great.
There's a learning process.
You got to have something to build.
That's the biggest thing is catching people
when they want to build something and try something new.
Which I find myself perpetually in the land of living with.
So I am possibly your target market.
So let me talk through what I think about
when I go to dbos.dev and the thoughts that I have.
The first thing I always do is ignore everything
you've put on your front page.
That is invariably for any given company,
marketing has workshopped that to death,
or at least should have workshopped that to death.
Great, and the thing I go to is one of the two places
on your website where you have to be honest, arguably three.
The one that we don't care about is terms and conditions.
You're going to be very honest and direct,
and if you spoke to people like that,
you'd get punched in the mouth a lot. The second or you might be a little
bit more direct is in the careers section because all right what technology are they
really using under the hood. People disclose a lot. But the one I start with is the pricing
page and I look for two specific things on it. Well, I guess first, what I'm trying
to figure out is, is this for me? Because if I'm looking at this
and I'm trying to do a side project
and it's $5,000 a month or call for details,
then I know I will not be going to space today
and I am not your target market.
When AWS released Amazon Kendra,
I was excited about it for the 30 seconds it took me
to realize it started at 7,500 bucks a month,
which was hire-an-intern money
to organize my data instead.
So the one price you have, and this is $99 a month for your middle tier.
Fine, reasonable. That is absolutely fine to deal with. But the other two are what I look for.
First, I want to see a free trial or free tier that I can get started with today,
because I might be working on a problem and my signing authority caps out at $20; I don't want to have to do a
justification for something new. You've got that. On the other end of the
spectrum, we're a large enterprise. If there's not an enterprise offering where
the price is "contact sales," then you present as being too small-time, and
procurement teams get itchy at that. They don't know how to sign anything that doesn't have both custom terms
and two commas in the price tag.
So you wanna be able to catch the low end and the high end
and what's between those two doesn't matter quite as much.
So you've passed through that gate, good job.
Thank you.
It's funny you should say that
because we had this long debate at the company
about the pricing page and how important it is.
And I was on the side of, it's where most people start.
And some other folks.
In cloud, cost and architecture are the same thing.
And you ignore that reality at your peril.
I always start there because I wanna know,
not only what is it gonna cost,
but what are the axes that you folks think
about these things on?
And what are the upsell things?
And at what point does my
intended use case start to look like a pretty crappy fit for what you might want to do?
Yeah, and that's exactly where I was at. And it was funny, because our more engineering-minded engineers
were the ones who were like, no, no, they're gonna look straight at the docs.
And I said, well, some engineers will go straight to the docs. And that's another place where you kind of have to be honest, right?
You have to, your documents have to be correct.
You would think that, wouldn't you?
Well, okay, sure.
I guess some people do shove marketing in there, but we generally try to make our documents
as accurate as possible.
It's less about marketing and more about the fact that people write docs and don't manage to step outside
of their own use case and their own expectations; they're too close to the product. So the docs make perfect sense if you've been
building with this already. A classic example I love to use for this is Nix. The Nix documentation
assumes you've been using Nix for a long time, including their getting-started tutorial.
Great. That needs some love. But the other thing I look for when I'm trying to decide for something like this, where I am building out a thing,
is assume that at some point,
our interests are no longer going to align.
Maybe your company is going to pivot
to social networking for pets.
Regardless of what that thing is,
there might be a time where I need to deviate
from the way that you've done these things.
What does the exodus look like
to run this in my own AWS environment?
And that's something that's not as easy to see
from the Shiny web page.
You actually have to kick the tires on it.
Or in my case, I ask you: what's the exodus look like
if I decide down the road that we're not strategically aligned?
So that's the best part, right?
Like I said, we care about developer experience.
The code is yours. It can run anywhere.
The data is yours.
Generally, most people bring their own database.
We do offer databases that you can use, but.
Real databases or horseshit databases?
We offer a free RDS, the smallest RDS instance.
It's RDS, cool.
That's a real database, as opposed to, like, DNS,
one of my horseshit databases.
Yeah, yeah, real database.
You have to use real Postgres for us, yeah.
But also you can use any Postgres provider.
So a lot of people use Supabase or Neon or whatever it is.
But you can bring your own and most customers
who are more than just a tiny hobby
actually do bring their own. So the exit from our company is actually really, really easy. You just take your Transact app and
run it for yourself if you're not already doing that against the database that you probably already
own. Got it. And you can deploy it into your own environment. Does it presuppose that you have a
server of some sort to deploy this on, or a fleet of servers? Does it deploy directly to AWS Lambda?
Yeah. If you exit our cloud, you would need your own server
if you're not already self hosting.
Yeah. OK.
I have to imagine the event of an exodus.
People are not like, well, I wanted to run on only on Lambda
for budgetary purposes.
Like, that's great.
You might not be the target market for this thing.
And that's fine.
You can't run it on Lambda, because Transact does require it to be running all the time.
OK, at least one. So you have to have one. Oh, that's the AWS version of serverless.
Yeah, yeah, yeah.
It doesn't scale to zero, and they're like, "Oh, we've never said it scales to zero."
And then I look at the Wayback Machine when they first launched serverless, and it says prominently "scales to zero."
It's, "Don't try it. Pretend I didn't read what I read."
I remember these things.
So we do scale to zero on our cloud, because we eat the cost of having that one
last thing running to wake up the rest.
So we do truly scale to zero.
Which is the right answer. Good job. I'm proud of you.
But yes, if you ran it for yourself, you would have to have something running all
the time.
It's counterintuitive, but making it easy for people to leave significantly
decreases the possibility that they will.
Yes, no, agreed.
And that was our philosophy way back at Netflix.
It was when everyone else made you call to cancel,
Netflix just put a cancel button right on your profile.
Oh, every company was whining and crying
when California changed the law
so that if you can sign up online,
you need to be able to cancel online.
And the wailing and gnashing of teeth.
It's like making it difficult to cancel is not a business practice I find ethical.
What's the matter with you people?
Yeah, yeah.
You don't want inertia to be
the only reason people continue to spend money with you.
You want happy customers, not hostages.
Right, exactly.
And so that philosophy is definitely carried through.
We make it just as easy to leave as it is to arrive. Yeah, that's just the right answer.
I like what you've done. There is some terrifying stuff you have about the open source version on
that pricing page. You mentioned that it has a built-in time travel debugger, which is awesome,
but it presupposes that at one point I built something that was working. So, you know, it's better for people who are good at things.
But you also say it runs on Linux, Windows and Mac.
That's insane. Explain that to me, please.
Because the Transact app is just a Python app or a TypeScript app,
anywhere that you can run Python, you can run Transact apps.
So you can run it locally on your Mac.
That's how most of us do development.
Linux, obviously.
And yeah, you can get, we do have people who use Windows,
actually a couple of our engineers use Windows.
One of our engineers, a long-time veteran
of Microsoft, loves to use Windows.
And so he runs his stuff locally on Windows,
and it's great.
Got it.
So if it sort of runs
in the developer environment,
does this run inside of a Docker container?
Does it run using the system Python, heaven forbid?
I mean, how does it work?
You can do it however you want, right?
It's up to you.
You can put it in a Docker container, that's fine.
You can splat it straight into your system Python
if you want, that's fine too.
Not recommended.
Professional advice, folks, do not do that.
When you break your system Python,
you will not be going to space today.
Not recommended, but you can do it
if you want to shoot yourself in the foot like that.
We are not opinionated about
what your development environment looks like
other than you have to have our dependencies.
So that's the main catch, right?
So using the system Python is probably not gonna work
because you can't install the necessary dependencies.
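A minimal sketch of the recommended route: a per-project virtual environment, so the framework's dependencies never touch the system Python. (The commented-out package name is illustrative; check the DBOS docs for the actual install command.)

```shell
# Keep a Transact-style Python app out of the system Python by giving
# it its own isolated virtual environment in the project directory.
python3 -m venv .venv

# Project dependencies get installed into .venv, not system-wide, e.g.:
# ./.venv/bin/python -m pip install dbos    # package name illustrative

# Confirm the interpreter is running inside the venv, not the system one:
./.venv/bin/python -c 'import sys; print(sys.prefix != sys.base_prefix)'
```

Running the last line prints `True`, because inside a venv `sys.prefix` points at `.venv` while `sys.base_prefix` still points at the base installation.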
Another thing I look for in tools like this,
especially as I think of it from the more complicated
enterprise perspective, there needs to be something
of a golden path that guides me through things.
Otherwise you wind up with death by configuration options.
The decision fatigue, analysis paralysis
becomes a real thing.
So, by default, do a bunch of stuff for me.
Like, what are the worst examples I can think of
for making that painful for customers?
Go ahead to the console and try and set up
a CloudFront distribution.
They may have changed that somewhat recently,
but a few years ago at least, it was brutal.
There were at least 70 options,
five of which most people used, and it was terrible.
And it was confusing.
So you would need to give people a chance to deviate
from that golden path as well as having it.
So, okay, here I need to do something
a little bit different instead of using RDS for example.
I wanna use Supabase or Neon or something else.
But once I've done that, I want to go back
to the golden path aside from that deviation.
So many tools are, once you eject, you're done.
You're not getting back in the plane.
Yeah, and we totally agree with you, right?
That here's the golden path,
and the golden path is: take our sample apps
and modify them, and then you can deviate
in whatever area you want.
That's our total philosophy for our entire product, actually,
is all of our competitors, when you want to add durability,
they make you rewrite your whole thing to their style, right?
You've got to say, this is going to be the reliable part for us.
You add a decorator to the one thing that you care the most about and start there,
and then you can expand from there.
So our entire philosophy is about gradual introduction, right?
It's take your big piece of software and make one thing reliable,
now make another thing reliable,
now make another thing reliable.
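The gradual-adoption idea above can be sketched with a hypothetical `durable` decorator. This is an illustration of the pattern, not DBOS's actual API: completed steps are checkpointed in a database, so a re-run after a crash replays saved results instead of repeating side effects. (A real system would checkpoint to persistent Postgres; an in-memory SQLite table stands in here, and the function names are made up.)

```python
import functools
import json
import sqlite3

# Checkpoint store: maps "function:args" to the saved result. In-memory
# for the sketch; durability in practice requires a persistent database.
_db = sqlite3.connect(":memory:")
_db.execute("CREATE TABLE checkpoints (key TEXT PRIMARY KEY, result TEXT)")

def durable(fn):
    """Hypothetical decorator: wrap ONE function to make it reliable,
    leaving the rest of the application untouched."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = f"{fn.__name__}:{json.dumps(args)}"
        row = _db.execute(
            "SELECT result FROM checkpoints WHERE key = ?", (key,)
        ).fetchone()
        if row is not None:
            # Already completed once: replay the checkpointed result.
            return json.loads(row[0])
        result = fn(*args)  # First run: execute, then checkpoint.
        _db.execute("INSERT INTO checkpoints VALUES (?, ?)",
                    (key, json.dumps(result)))
        _db.commit()
        return result
    return wrapper

calls = []  # tracks real executions, to show the replay behavior

@durable
def charge_card(order_id):
    calls.append(order_id)  # a side effect we don't want repeated
    return {"order": order_id, "status": "charged"}

first = charge_card(42)
second = charge_card(42)  # replayed from the checkpoint, not re-executed
```

After both calls, `calls` contains a single entry: the second invocation came from the checkpoint table. That's the "make one thing reliable, then expand" shape; decorate the next function when you're ready.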
That's the challenge too,
is people wind up perpetually finding themselves
in worlds where the things they care about
are not necessarily things others care about,
and finding out where those points of commonality are
and where those divergences are is super handy.
One of the biggest problems you're gonna have
with getting something like this adopted
is that it's not adopted already.
And it's sort of paradoxical,
but the number one thing I look for
when I'm trying to do work with a deployment system is,
all right, what community support is there?
I want to ideally not be the only person
who's ever tried to hook it up to a load balancer,
for example. That's never great.
And especially if you claim to be a hyperscale cloud provider, that's a problem for
those folks. For something like this, it's a lot more fluid, but it also increases the likelihood
that I'm going to blunder into sharp edges at some level.
Yeah, yeah, that's a huge problem. And a big
thing with DevTools, right, is you have to bootstrap that community.
We are doing the typical thing that a lot of DevTools do now.
We have a Discord.
All of our employees are there.
They can answer questions.
We set up Slack channels with most of our customers,
all of that stuff.
But you're absolutely right.
The bigger the deployment, the easier it is to run.
You don't want to be the first.
And I totally get that.
And that's a bootstrapping problem. And that's a bootstrapping problem that pretty much everyone
has. We've tried to make it as easy as possible by hiding most of the complexity.
And having a wide variety of examples in the documentation that get people to...
Yes, we have a ton of examples and...
I hate the, "Oh, wow, there's five examples here,
and none of them map to anything remotely
like what I'm doing."
And like, I often tend to view that as a leading indicator
that I might not have a great time with this
just because I am already off the beaten path.
When you build something purely in Lambda, for example,
without any real stateful stuff or server side things,
that's often not well supported
by an awful lot of stuff out there.
So I know going into that, that I'm an edge case.
Some things embrace that edge case
and some don't seem to know that they exist.
And that latter category,
I don't have a great time with those.
Yeah, yeah.
So if people wanna learn more,
where's the best place for them to go find out?
So if they want to learn more about DBOS, obviously start at our website, dbos.dev,
D-B-O-S dot dev, if it's easier to remember that way.
That's the best place to start, or docs.dbos.dev.
If you're the kind of engineer that we talked about earlier who likes to dive straight into
the documentation, that's actually a great place to start.
We've got a quick start there and so on.
I personally like to learn from just reading examples.
So like you just said, we have a ton of those
and that's usually where I start.
But yeah, our GitHub, please go give us a star.
We would love to get that.
It's just dbos-inc on GitHub.
Those are the best places to start.
And we will of course put links to all of this
into the show notes because that's what we do.
Thank you so much for taking the time to speak with me today. I appreciate it.
Yeah, thanks for having me on. We'll have to do this again soon.
Absolutely. Jeremy Edberg, CEO at DBOS.
I'm cloud economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five star review on your podcast platform
of choice, along with an angry, insulting comment that is no doubt set to next time.