The Infra Pod - The bet on Postgres to be the backbone to run reliable services (Chat with Jeremy and Qian from DBOS)

Episode Date: July 28, 2025

In this episode of the Infra Pod, Tim (Essence VC) and Ian (Keycard) host DBOS, a company building reliable backend services, with guests CEO Jeremy and cofounder Qian. The discussion delves into the company's motivation, their research on making software durable and reliable by default, their choice to bet on Postgres, and how they integrate AI with traditional systems.

00:51 DBOS: Mission and Origins
01:37 Creating Reliable and Durable Software
11:13 Leveraging Postgres for Durability
27:06 Spicy Takes: The Future of Postgres and Development

Transcript
Starting point is 00:00:00 Welcome to the Infra Pod. This is Tim from Essence, and Ian, let's go. This is Ian, builder at Keycard, working on problem and solution statements around agent identity. Couldn't be more excited to be joined by the DBOS folks today, talking about durable execution. Jeremy and Qian, tell us a little about yourselves. Why did you get involved with this company, Jeremy? Yeah, so I'm the CEO of, we actually call it DBOS now, but CEO, I joined about a year ago. And the way that happened was actually kind of funny. They reached out, cold call
Starting point is 00:00:34 recruiter email. I never answer those things, but for some reason I answered this one. I met Peter and Qian and was just completely blown away by how smart they were and what they were building, and knew that I had to be a part of this and that this product had to live in the world. So that's how I got involved. I interviewed with the whole team, and here I am. So Qian, what was it that you were doing that got Jeremy to jump ship and join you? What's the secret sauce? I guess we're building something very cool
Starting point is 00:01:04 that we really aligned our vision on making software really reliable, and on making life in general much easier for developers and for DevOps. So what is DBOS, Jeremy?
Starting point is 00:01:21 DBOS starts with an open source library called Transact that allows anybody to build reliable-by-default and durable-by-default software. And we make it free because we hope everybody uses it, because we just want everybody to build durably and reliably. Then we provide commercial products to help you run those things and observe those things. So we provide a dashboard, we provide a cloud service where you can host your code, and you can do either of those things and then get all the workflow management and observability of our commercial products.
Starting point is 00:01:54 And so when you say durable and reliable, what do you actually mean? Like, what are the types of problems that a solution like DBOS is solving for? What's an example of the problem case that you're solving or removing? Yeah. So one example is that many applications are written as workflows, or think data pipelines. For example, you are processing thousands of documents. And then with AI, things are kind of more fragile, because the AI may error out, may return, like, "server not available right now,"
Starting point is 00:02:27 and sometimes their server may crash. So the problem is that you don't want to reprocess everything from the beginning when you recover. You want to continue from where it left off. So that's where you need durable execution, where we remember what happened in the past. And then when you recover, you know that I've finished, for example, 1,000 documents, so I'm going to start from 1,001 and finish the entire pipeline that way. Who's this for, right?
Starting point is 00:02:57 Is it a specific type of company, a specific type of use case, like I'm building a CRUD app, I'm building an AI app? What are the use cases you're aiming to target and solve first? Yeah, so the use cases that we actually have seen a lot of are agentic AI,
Starting point is 00:03:13 where people need reliability in an agentic pipeline, or just data pipelines in general. They need to guarantee that the data gets from this place to that place. But honestly, it's for anybody who wants reliable, durable code, which has only been available to the biggest companies previously because it was expensive, right?
Starting point is 00:03:31 So we're really just trying to bring that ability to everybody. Gotcha. And so, for those that are uninformed, in a microservice architecture, this would be your typical: I have, you know, seven different microservices. I need to perform an action. The action actually occurs across the seven different things.
Starting point is 00:03:50 They all have, like, independent state. And so how do I ensure, when I perform an action, that the action is completed, and if it's not, that we have some type of way to retry it or roll it all the way back? Is that sort of like a distributed transaction? Is that sort of the underlying problem statement of what we're trying to solve?
Starting point is 00:04:08 It's kind of like database transactions, but in a distributed setting. Yeah. Gotcha. And why is this a hard problem? Like, why hasn't this been solved before? Help us understand why build a company here, but more importantly, like, what's the key
Starting point is 00:04:23 unlock that you've discovered that solved it in a way that makes this an incredible experience? Okay, so I mean, when we talk to developers at some companies, like insurance companies and others, they all have their in-house solutions for durable execution, and it's doable. But when we talk to those developers, they said they wrote probably 30% business logic code and then 70% error handling and failure recovery code. And this kind of failure recovery and durable execution logic is extremely fragile, so we think we can leverage our PhD research and also our experience in managing databases to provide a more principled solution to the market.
Starting point is 00:05:17 Yeah. Yeah, the big unlock here was realizing that you could make your code just as durable as your database transactions by using the same semantics as your database. So using the same semantics, you could make everything more durable, and then you can roll back and so on. And that's the big unlock, and that's why we're building a company here. This is just the foundation of reliability, and so we wanted to bring reliability to everyone. Gotcha. What was the research
Starting point is 00:05:46 that you were doing at Stanford where you said, hey, we actually figured out a way to organize the code, or think about the problem, or whatever, in some way
Starting point is 00:05:54 that kind of opened this opportunity up in a way that it wasn't open before? Yeah, so during the research phase, we tried to push the limits of, like, to what degree
Starting point is 00:06:04 we can use the database to replace the operating system. And then we realized that what's really beneficial is the way we help developers write software, because if we continue with the old operating system
Starting point is 00:06:18 API, like the POSIX API, it's not a principled or organized way to structure your code. As we did the research, we realized people are more and more writing code as workflows, and then it really clicked, because
Starting point is 00:06:33 we realized that if we persist the workflow execution state in the database, then it's the right granularity of checkpoints. If you checkpoint every single statement in your code, or every single memory write in your code,
Starting point is 00:06:50 that's kind of impossible, right? But if you checkpoint at too coarse a granularity, then you can't do fine-grained recovery. So we realized that durable execution at the step level, and then composing steps into workflows, is the right approach for durability.
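To make that step-level granularity concrete, here is a minimal Python sketch in the spirit of the document-pipeline example from earlier in the conversation. It assumes the decorator style described in the DBOS Transact docs (@DBOS.workflow() and @DBOS.step()); the function names and the bare DBOS() initialization are illustrative, so treat this as a sketch rather than a copy-paste recipe.

```python
# Sketch of a durable document pipeline, assuming the DBOS Transact Python
# decorators (@DBOS.workflow / @DBOS.step). index_document is a hypothetical
# placeholder for your own logic; Postgres connection details are assumed to
# come from the app's DBOS configuration.
from dbos import DBOS

DBOS()  # initialize the in-process durable execution engine

@DBOS.step()
def index_document(doc_id: str) -> None:
    # Each completed step is checkpointed in Postgres, so after a crash the
    # workflow resumes after the last finished document instead of redoing
    # documents 1 through 1,000.
    ...  # call your parser / LLM / downstream service here

@DBOS.workflow()
def index_all_documents(doc_ids: list[str]) -> None:
    for doc_id in doc_ids:  # plain Python control flow, checkpointed per step
        index_document(doc_id)

if __name__ == "__main__":
    DBOS.launch()
    index_all_documents([f"doc-{i}" for i in range(2000)])
```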
Starting point is 00:07:10 Yeah. The reason I think I met you guys before is because you all came from Stanford research, right? And one of my co-founders for my startup was Christos Kozyrakis, a professor at Stanford, right? He's also on a DBOS paper. And so I'm just really curious, because you all came from a research background, right? And the research around DBOS, maybe tell us more about what that research was about, because it was really interesting concepts. Obviously you got Professor Stonebraker involved, one of the legends in this space. Can you tell us what that research was trying to solve?
Starting point is 00:07:45 And obviously from that research to now, there's a pretty interesting journey. So let's start from there. What was the research about? What was so groundbreaking for you guys to pursue this? Yeah. So for the research, we called it a database operating system, where we wanted to explore how to solve spaghetti code issues, where you throw code here and there,
Starting point is 00:08:09 and then you want to coordinate them, you want to schedule them, and it's really hard to manage, because you basically throw your execution state everywhere. Some is in memory, some is on disk. So it all started from the view that database techniques have evolved a lot during the past 30 years.
Starting point is 00:08:27 And databases provide great abstractions, especially transactions, where you get isolation, consistency, atomicity, and durability for people's data. And then we were thinking about how to leverage databases to achieve similar guarantees for your code. And then in the DBOS research project, we basically wrote several prototype systems, like a file system, a scheduler. And then what made people really interested was the serverless platform
Starting point is 00:09:02 we built on top of a database. And then from there, we gradually figured out that serverless is interesting, but what's more interesting is the way we can embed durable execution in your code by leveraging the database. So that's how it gradually evolved into building a company. And I wasn't able to fully read your last paper, because you actually have a bunch of papers, like record-replay, transactions, all this really interesting work that you've been evolving in the middle. And I think the last paper was the VLDB paper about DBOS three years later.
Starting point is 00:09:39 Right. So I really only read the abstract, and it kind of highlights the commercialization process, I guess. Like, how do you evolve the research prototype into commercialization? So walk us through that. What have you learned over the last three years, going from that prototype of "I want an operating system with a database behind it to be scalable and all these things" to what you've done now? What are the biggest learnings that evolved into this? Yeah, I guess one of the biggest things we've learned is that developers don't really want to change their stack. In the original research idea, we thought everyone should be moving everything to DBOS, like to a new platform.
Starting point is 00:10:22 But then we realized that people do have their own preferred stacks. They have their own software tools, their own frameworks, and they don't want to change their code, or they don't want to refactor or re-architect their code to use any new techniques. That led us to think about the best way to provide the capability of durable execution to developers without changing everything in their stack. And so when we commercialized it, we started by building DBOS around an open source library. And that's how, basically, all you need is to install DBOS and then decorate some of your functions, and then you're done. You can use DBOS as an in-process durable execution engine.
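For flavor, here is a rough sketch of what that in-process setup can look like in Python, pointed at an existing Postgres instance. The exact configuration surface is an assumption on my part (the "name" and "database_url" keys and the environment variable are illustrative), so check the current DBOS docs before relying on it.

```python
# Illustrative startup for an app using its existing Postgres as the durability
# store. The config keys ("name", "database_url") and the environment variable
# are assumptions for this sketch, not a definitive API reference.
import os
from dbos import DBOS

DBOS(config={
    "name": "orders-service",
    "database_url": os.environ["POSTGRES_URL"],  # your existing Postgres
})

@DBOS.workflow()
def hello(name: str) -> str:
    # Once decorated, this function's execution is checkpointed in Postgres.
    return f"hello, {name}"

if __name__ == "__main__":
    DBOS.launch()  # starts the in-process engine; no separate orchestration server
    print(hello("world"))
```

The point of the sketch is the shape: a library you initialize inside your own process and point at the database you already run, rather than a new platform you migrate onto.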
Starting point is 00:11:11 And then another finding is that Postgres is really popular, so when we commercialized it, we decided to build on top of Postgres. Are there any trade-offs here? Because I think the original idea in the research was, like, if we actually take a full operating system, rebuild it from scratch, and have the database as your full, you know, backing state management, the world would be crazy, right? No longer doing all these crazy things, the kernel would be so much cleaner. I feel like from a research point of view, it's such a cool idea, right?
Starting point is 00:11:44 And obviously, you know, the commercialization reality is, okay, nobody wants to change. Everybody just wants to use their code, everybody just has their Postgres. So you kind of had to retrofit it, right? And this retrofitting, I assume, is probably not an easy thing to do, because my co-founder was a researcher, and so I kind of understand there are so many trade-offs, like, hey, I want to have full control, I want to have full expressibility, I want to have the full power of what I've been able to do. And oftentimes when you retrofit, you can't. There are like
Starting point is 00:12:17 obvious trade-offs; you just can't provide full state management of everything under the sun, right? So what are the fundamental trade-offs that you basically had to take to do this retrofitting? And are there certain things you're trying to push beyond, like, hey, I can't just use plain Postgres, for example, I need to push beyond the boundaries, maybe trying to get some of that DBOS magic back into what you're doing? What are some of the things that extend the possibilities? Talk about some of the trade-offs you guys had to navigate. Yeah, Jeremy, do you want to take it? Yeah, sure. So the beauty of our system is that there aren't a lot of trade-offs.
Starting point is 00:13:03 Because it's a decorator around your code, you can start with just a small little piece and expand from there. And that's kind of what differentiates us from our competitors: they require you to rewrite your code to the way they operate, whereas we do all the hard work so that you can write your code the way you operate. I mean, there are obviously some trade-offs. Depending on how you wrote your code, it may not fit into a workflow.
Starting point is 00:13:28 That's the biggest one. You may have to rework it into a workflow with properly defined steps, although we would contend that that's just rebuilding with proper software architecture. There are certain workloads that just don't fit. You know, video rendering, for example, was not a good fit for DBOS. So in those cases, you just wouldn't use DBOS. But the trade-offs are actually pretty small. And of course, the big one: you have to have a database. Most everybody already has a Postgres database.
Starting point is 00:13:56 But for those who don't, that's one of the other trade-offs. You do have to have one of those available. You mentioned going beyond standard Postgres. I don't think we'll ever do that. One of our big selling points is that you can use any Postgres from any Postgres provider that is fully compatible. So I don't think we'd ever want to break that model. One of our promises, really, is that this will work with whatever Postgres you've got.
Starting point is 00:14:22 I mean, that's pretty interesting. So you're basically selling, hey, you've deployed Postgres, we're going to give you this durable thing to build apps on top of it. You understand Postgres, you probably have your favorite Postgres provider. And so are you selling people Postgres as a service, or are you actually selling a layer on top of Postgres as a service? Kind of neither. We definitely are not in the business of selling Postgres.
Starting point is 00:14:43 We don't want to be in that business. We have partners that are in that business: Supabase, Neon, Aurora, RDS, whatever. We don't want to be in that business. We are not really even a layer in between. We're a library, so we're a programming model, and then we enhance that with our commercial products on the side, right? And that's another big differentiator from our competitors: we are not in the line of fire.
Starting point is 00:15:05 We are not in the code execution path. We are on the side. We don't affect your reliability directly. We improve it; we don't make it worse. And so what do I get if I pay for your commercial offering? What do I get? What are you giving me?
Starting point is 00:15:17 Like, you know, I'm a mid-market company XYZ with some budget, and I've got a wild new stack. I've got to build and support some new AI stuff. What am I getting? Why do I buy your commercial offering? You're getting tracing. You're getting visualization of your workflows, and you're getting workflow management. So what does that mean? That means workflows failed over here, let's move them over to this other executor. Let's pause this workflow in the middle. Let's fork this workflow and execute a different path from this point forward, things like that.
Starting point is 00:15:52 You also have a time travel debugger, which is really awesome, so you can step back in time and replay actual things that went wrong. You know, when I was working on big distributed systems, that was one of the biggest things that would have been nice, right? It's really hard to reproduce these distributed system bugs. You don't have to. You can go back in time and reproduce them by replaying the actual thing that broke. And so what are some of the properties of a DBOS workflow?
Starting point is 00:16:18 We use some fun words, like, you know, you say reliable and durable, but under the hood, what are these things? What does it actually mean from a systems properties perspective? Yeah, so for DBOS workflows, we do require them to be deterministic, so that when we do recovery, we basically replay from the beginning of the workflow, and if we have recorded step outputs, then we just reuse the outputs of your steps instead of re-executing them.
Starting point is 00:16:45 And then within each step of the workflow, it can be anything, like any external communication or writing to other databases or data sources. But we do recommend the steps be idempotent, so that, for example, if you're talking to Stripe, you can send over the request with an idempotency ID, which could be the DBOS workflow ID. This way, because we can do automatic step retries, during retries we won't send duplicate payments or invoices to Stripe.
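Here is a small sketch of the idempotent-step pattern Qian describes: reuse the durable workflow's ID as the idempotency key, so an automatically retried step cannot double-charge. The payment endpoint is a hypothetical placeholder, and DBOS.workflow_id is assumed to be how the current workflow's ID is exposed; verify the exact accessor against the docs.

```python
# Sketch of an idempotent payment step. The endpoint URL is hypothetical; the
# Idempotency-Key header is the common pattern providers such as Stripe use to
# deduplicate retried requests. DBOS.workflow_id is assumed to return the
# current workflow's ID (check the docs for the exact accessor).
import requests
from dbos import DBOS

@DBOS.step()  # steps can be retried automatically; retry settings omitted here
def charge_customer(customer_id: str, amount_cents: int) -> dict:
    resp = requests.post(
        "https://payments.example.com/charges",  # hypothetical endpoint
        json={"customer": customer_id, "amount": amount_cents},
        headers={"Idempotency-Key": str(DBOS.workflow_id)},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

@DBOS.workflow()
def checkout(customer_id: str, amount_cents: int) -> dict:
    # If the process crashes after the charge succeeds, recovery replays the
    # workflow and reuses the recorded step output rather than charging again.
    return charge_customer(customer_id, amount_cents)
```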
Starting point is 00:17:19 So those are the basics: deterministic workflows and idempotent steps. But one caveat I'd throw out there is that the request has to be deterministic, right? So my request to an LLM is deterministic; the response doesn't have to be. That's part of what we do: we manage those LLM workflows, because you can say, I didn't like that answer,
Starting point is 00:17:41 let's fork this workflow and get another one. And that works great. Gotcha, yeah. And I was curious to dig in, as you mentioned that you're focused on AI workloads, LLM-based workflows, durable execution. Help us understand: what is it about building an LLM-native app
Starting point is 00:17:59 or LLM-native feature or workflow that requires durable execution, and why we didn't need this stuff before. I mean, we could argue that we needed it before, but there's now an increasing need for these things. Yeah, so LLMs have two properties, right? One is that they're non-deterministic, so you can get different answers from the same request.
Starting point is 00:18:19 And number two, especially nowadays, is that they're fairly unreliable. Sometimes they just don't respond, sometimes they cut off halfway through, right? Most of the big providers right now are doing load balancing, load shedding, et cetera, and so you can get these partial or non-existent responses. And so I think those are the two main things
Starting point is 00:18:37 that are driving the need. And then the other need is just around the fact that the AI is one piece of the puzzle. You still need to get data into that AI so it can do something, and then you need to do something useful with what comes out of it. And all of that is coordination with other services. AIs today don't have the ability to directly act on the rest of the world.
Starting point is 00:19:01 Now maybe that'll change in the future, but right now, that's not the case. So I'm very curious about how you guys think about adoption, because it's really interesting that you can work with existing Postgres, which sort of assumes that for me to really run DBOS for real, I'd go into my environment, wherever I've been running Postgres, or connect to my existing Postgres. You do have a quick start, like I can sign up quickly on a website and try it out.
Starting point is 00:19:31 So I'm just wondering how your users really get to play with you. Is the typical journey to go sign up on your website and try out DBOS, almost like the self-serve quick start way? Because I saw you have a DBOS Postgres, almost like a fast way to just run something. Is that the most common place to go, or do people want to just go straight to trying it in their own environments? Given that,
Starting point is 00:19:57 it's not a new primitive, but I feel like anytime you have a decorator or something a little bit new, it's not always that clear how people will really get to learn this new paradigm. Yeah, I think
Starting point is 00:20:09 we focus on selling to developers, and as developers, they really like code, right? So a very typical way to start is that they look at the library, download the library code, or pip install it. We support TypeScript and Python, so they either go through the TypeScript or Python library,
Starting point is 00:20:29 and then install it, maybe install one of our quick start apps, look at the code, and then directly try out some decorators in their own code, connect to their own database, and then get started. So a lot of our users come with an existing Postgres database, and they basically just need to figure out how to connect to the database, provide a connection string, and then that's it. It's pretty easy to get started. Awesome. So you talk about large-scale systems, right? And this sort of idea that I can just add a decorator and it suddenly just works.
Starting point is 00:21:02 I think a lot of people that have been working on this for a while will get scared. You know, like, okay, does it really work? What are all the failure patterns? So I often find it's almost like a database adoption story, where everybody's waiting for somebody else that's maybe a little bit bigger, or maybe similar size, to adopt it before they do. So what has been your way to get developers to trust you, I guess? Like, oh, is it really that magical?
Starting point is 00:21:26 Just one decorator and you can truly get everything to work? What were the things you had to do to build trust with folks? I mean, a big chunk of our trust comes from the fact that our co-founders created Postgres and Spark. So that helps a lot. The other big area of trust comes from the fact that this is based in research, in computer science, right? This is not gimmicky. This is real computer science. Correctness is paramount for us. We value correctness, we value reliability, over speed of getting things out the door, for example. So I think a lot of trust comes from that. And then just from using it and seeing it,
Starting point is 00:22:05 and seeing it in action. That's interesting. So you say you're supporting LLM workflows now, and obviously the primitives you have are pretty generalized, right? Anything can run, any request, any deterministic effect can be supported by DBOS. Are there any LLM-specific patterns, or even LLM-specific frameworks or libraries, that you guys are trying to make even easier? Because what I find is that LLM application folks are much broader than your typical backend engineers now. You know, because if you work on the backend, you probably understand what a workflow is. You probably kind of understand the typical failures and requests and retries and all
Starting point is 00:22:52 these things. But if you're just building an LLM app, you may not actually understand that yet. I think eventually you will; when you start off, you may not. So are you more geared towards folks that already understand these things, like more backend engineers? Or do you have something that's also geared towards the folks that are just getting started building these apps as well?
Starting point is 00:23:12 I think both. So specifically for the AI use cases, we did showcase that we can add DBOS to existing AI frameworks. If you look at our GitHub repo, we have a thing called Durable Swarm. Basically, we add DBOS decorators on top of the OpenAI Swarm framework. And we also wrote a blog post about how to use DBOS together with LangChain or LangGraph. So the thing is, there are a lot of AI frameworks out there, but what we found interesting is that they don't have a specifically good way to build reliable tool use. So when we say DBOS is great for LLMs, it's also that when your workflow is driven by LLM responses,
Starting point is 00:23:54 you really need it to be dynamic. You can't have a predefined DAG, right? Sometimes it's not really expressive to have a static DAG that says step 1, 2, 3. You want it to be dynamic. Maybe there's a loop, maybe there are branches, conditional branches, based on this and that. And also there are humans in the loop. So there are a lot of complicated use cases,
Starting point is 00:24:14 and the workflows could be really complicated. And that's why DBOS is pretty easy to use in this scenario. Basically, we don't require you to pre-define a DAG. The execution path is completely dynamic, and as long as it's a deterministic loop or a deterministic branch, a DBOS workflow can handle it and can recover it properly when a crash happens. Most of the LLM tools focus on how to interact with LLMs, how to handle the responses,
Starting point is 00:24:45 how to, for example, handle the prompts, stuff like that. What is lacking is really when you have to combine LLMs with the external world: say an agent may send an email, the agent may do some online shopping for you, the agent may analyze some human input, then you really want to track what's going on. And I think most of the AI frameworks do not cross that boundary. So we think with DBOS, it's really easy and really powerful to basically combine the AI world with the systems or infra world.
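To illustrate the dynamic, non-DAG shape Qian is describing, here is a Python sketch of an agent loop as a durable workflow: the loop and branches are plain deterministic Python, and only the steps (the LLM call and the tool call) touch the outside world, so recovery can replay recorded step outputs. call_llm and run_tool are hypothetical placeholders for a real model client and real tools.

```python
# Sketch of an LLM-driven agent as a durable workflow. Control flow is an
# ordinary loop with conditional branches, not a predefined DAG; on recovery,
# recorded step outputs are replayed so the same path is taken deterministically.
# call_llm and run_tool are hypothetical placeholders.
from dbos import DBOS

@DBOS.step()
def call_llm(history: list[dict]) -> dict:
    # A real implementation would call a model API and parse the response into
    # an action. Hard-coded here so the sketch is self-contained.
    return {"action": "finish", "args": {}}

@DBOS.step()
def run_tool(action: str, args: dict) -> str:
    # Placeholder for real side effects: send an email, place an order, etc.
    return f"ran {action} with {args}"

@DBOS.workflow()
def agent(task: str, max_turns: int = 10) -> list[dict]:
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):              # dynamic loop, no static DAG
        decision = call_llm(history)
        if decision["action"] == "finish":  # conditional branch driven by the LLM
            break
        result = run_tool(decision["action"], decision["args"])
        history.append({"role": "tool", "content": result})
    return history
```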
Starting point is 00:25:28 I mean, I think that's a really important point and something that we haven't yet seen. I think when you go and look at the generic examples of an agent, or generic examples of building some of these AI apps, it's like, how much of this is actually this large distributed dynamic repeatable graph? Like, we're not there from a reliability standpoint. A lot of it has come down to, like, eval frameworks. But I think there's been an over-focus on that problem, or at least the only one being focused on is evals. Like, reliable AI is a classic reliability and distributed systems problem, plus building, like, an actual model,
Starting point is 00:25:54 plus, you know, the individual step-function component, plus the overall system, plus, like, a security problem. And I'm curious, you know, when you're talking to customers, where are they in their journey of building with AI when they realize, oh shit, we actually need something like DBOS to build our agent or whatever it is that we're doing? Yeah, so the AI agent customers that we're getting are typically just starting out. They're just beginning, they're new startups, they're building what they're building.
Starting point is 00:26:23 The enterprises that we're dealing with are more along the lines of the data pipeline use case, oftentimes to extract data out of their legacy systems to put into an AI, either for training or for context or, you know, doing inference across their existing data. Whatever it is, they're trying to extract legacy data out of their legacy systems. We've heard a lot of enterprises talking about wanting to do AI agents, but not a lot of them are actually doing it. They haven't figured out how, or they know that they want it but they don't know for what, stuff like that.
Starting point is 00:26:59 Yeah, that's super interesting. So we want to dive into what I think is our favorite segment. It's called the spicy future. Spicy futures. So we need a spicy hot take from both of you. It could be anything, like I mentioned before, but give us some of your spicy hot take. Jeremy, maybe you'll start.
Starting point is 00:27:20 What's your spicy hot take that you want to give out to the world here? So my spicy hot take on the future is that developers will no longer be worrying about infrastructure and they're barely going to be worrying about code. They're going to be using serverless products that actually work, they're going to be using durable libraries that make everything magically durable. And they're going to be mostly vibe coding. And that's actually
Starting point is 00:27:42 going to make the problem worse, because they're going to need even more reliability as they are putting out code that they don't even understand. Very cool. All right, Qian, what's your spicy hot take here? I guess my take is less hot than what Jeremy said. My main point of view is that I think Postgres will take over the world. It's pretty ambitious, but I do see a lot of uptake of Postgres, especially in AI. Well, I think the reason is that Postgres is in general a really good ecosystem. The developers are super friendly and it's really extensible. So among our users, many of them just have a single Postgres database.
Starting point is 00:28:25 They use it as a durable backend to store durable execution data. They use it as their own transactional store. They also use it as an AI vector store. Sometimes they use it for analytics as well. So all they need is just a Postgres database. And recently, I've also come to believe that Postgres is more scalable than people think. When we present DBOS, many people say, oh, DBOS is not scalable because it's based on Postgres. But as we know, many large companies actually run on a single Postgres instance for a long time before they have to scale out.
Starting point is 00:29:03 And once you need to scale out, you can do sharding, you can do a lot of optimizations to make it scalable. What's more important is that people understand SQL. So instead of relying on or building a brand-new storage engine from scratch, we want something that people know how to operate and how to manage. And then there are a lot of awesome Postgres vendors out there, and there are more and more every year. So I feel like, okay, another thing is Amazon DSQL is generally
Starting point is 00:29:33 available. So I think in general there is a lot of news in Postgres these days, and I'm just a big believer in Postgres. I know I'm kind of biased because one of our co-founders is the Postgres creator, but this is my hot take. It's a good take. I'm kind of curious: do you think Postgres owns the OLTP world, or does Postgres also own the OLAP world in your view of the spicy future? I think it will own both worlds. So when I talk to some of my friends in finance or in business, they are actually storing some of their analytical data in Postgres as well. And then a kind of general direction is how to build a query layer by leveraging
Starting point is 00:30:16 the Postgres query engine to query your data lake, to query your S3 or other data. So I think in general it's a very good direction and it's widely adopted. And I think the Postgres syntax and community are very good. So yeah, my hot take is that in the future, all you need is to learn how to use Postgres. That's amazing. Well, Jeremy, I'm going to actually touch on vibe coding, because, you know, everybody's been vibe coding all day.
Starting point is 00:30:46 I'm sure you probably have. I think you mentioned everybody will just be vibe coding all day in the future, and then durable libraries and infrastructure, like, you won't even know, at some point you won't even know what the infrastructure even is, I guess. So I feel like there are so many different things that need to happen in the middle. I'm curious, besides you guys working on DBOS, let's maybe park the DBOS layer aside for a moment.
Starting point is 00:31:12 Think of today's vibe coding, the Cursors or the Windsurfs and the Lovables. Is there anything in the next phase you think is necessary to make vibe coding even more viable, for everybody to really do more? Because right now I feel like vibe coding is almost like a percentage of what people can actually use, and so much stuff needs to be done to review the code and test it, and the AI can go haywire. What is your take
Starting point is 00:31:40 on what's actually going to be the next necessary thing here to make vibe coding even more approachable? Yeah, so the biggest limitation right now is the context windows, especially if you have a large code base; you want the AI to be able to ingest and understand that whole code base.
Starting point is 00:31:56 Now, granted, a human cannot ingest an entire huge code base either. But I think that's probably number one. Number two is this idea of the infrastructure, right? So if you get even remotely complicated, you do need something that understands the infrastructure, either a library that can, you know, lay it out for you, or the AI itself. But right now, today, AI is really bad at that. If you ask it to build you a basic API at AWS, it's not going to be great at that, because that involves a whole lot of services, right?
Starting point is 00:32:32 So we need to see this consolidation of services where it all kind of goes into one place so it's easier for an AI to understand and for an AI to generate. I think that's a big part of it as well. Are you using vibe coding? Yeah, so funny story,
Starting point is 00:32:52 I asked all of our engineers, and we have an amazing engineering team, and I said, are you guys using any of the AI IDEs? And they all said no. I said, are you using AI coding assistants? And they all said yes. I said, how are you using them? We cut and paste out of Claude or ChatGPT, because then we can review the code and make sure it's decent and, you know, make sure that we're putting in good working code and that we understand it, as opposed to using something that just magically writes code for you. I thought that was interesting,
Starting point is 00:33:23 because these are all extremely competent senior engineers. I personally have vibe coded straight up: make a prompt, take that code, and run it, but not for work. I do it mostly for my wife. She has data analytics needs from being the PTA president. And so she'll say, hey, can you help me figure out, you know, how many people donated in each classroom? And instead of spending 45 minutes figuring that out, I ask ChatGPT to write me a Python program that does it, in two minutes, right? It takes me two minutes to do the whole thing.
Starting point is 00:33:55 So yes, I have vibe coded, mostly for non-work reasons. I did do it once for work, but don't tell the other people I work with. I had to do some auditing of our AWS environment, and I did use some vibe coding to generate a tool to do that for us. Yeah, yeah, it's funny. It's like, don't tell everybody, I actually used the cheat code here. Yeah.
Starting point is 00:34:17 But that's, which is funny, like with vibe coding, at least right now, it feels like there are way more people that can actually start coding. It's geared towards people that couldn't really code much at all, for them to feel empowered, more than people that have spent their whole lives coding. Yeah, I have an 8-year-old and a 10-year-old. I'm teaching them how to code with AI assistance. Yeah, and I guess you just need to teach them what Postgres is and what a decorator is, and then you're golden.
Starting point is 00:34:46 Well, funny enough, my daughter came to our office and actually started quizzing Qian on what JavaScript is and TypeScript and stuff like that, right? Yeah. I think even for experienced coders, I know a professor who is using ChatGPT or Claude to understand the Linux codebase. It's pretty good at understanding the structure of the code and will help you understand a sophisticated codebase.
Starting point is 00:35:14 I think it's very useful even for experienced coders. Yeah. I think that's kind of where we're heading, I guess, because I feel like you mentioned the durable libraries and the infrastructure layer are going to go everywhere, right? It makes no sense for developers to still have to think about any of this stuff one day. But
Starting point is 00:35:33 the world is also moving in a very interesting direction right now. Cool. So I think we want to give a shout out: where can people find DBOS? Like, if I'm interested in using DBOS, what are the social channels and the place to get started as a developer?
Starting point is 00:35:50 Yeah, the best place to start is dbos.dev. From dbos.dev, you can link to our docs, which are chock full of information. You can also link to our GitHub, where you can go and give us a GitHub star. Please do that, every star helps. But then when you're there, you can also download the code. You can look at the code, the Transact library, free and open source, tons of examples in our GitHub as well. If you're the kind of person who learns by example, the GitHub examples are a great place to start. If you're the kind of person who likes to read the docs, the docs are right there. And if you're the kind of person who likes to look at the pricing page, then that's on our website.
Starting point is 00:36:25 All right, guys. Everybody should go try it out and get yourself the durable future that you all need. Thanks to Qian and Jeremy for being on our pod. You know, super appreciate it. And yeah, it was a great conversation with you all. Thanks so much, everybody. Yeah, thanks for having us.
