The Changelog: Software Development, Open Source - Developer (un)happiness (Friends)

Episode Date: October 4, 2024

Abi Noda, co-founder and CEO at DX, joins the show to talk through data shared from the Stack Overflow 2024 Developer Survey, why devs are really unhappy, and what they're doing at DX to help orgs and... teams to understand the metrics behind their developers' happiness and productivity.

Transcript
Starting point is 00:00:00 Welcome to Changelog and Friends, our weekly talk show about happy devs. Just a little happy dev, a little happy dev over there. Big thank you to our friends and our partners at Fly.io. That is the public cloud that helps productive developers ship. Learn more at fly.io. Okay, let's get happy. Hey friends, I'm here with Dave Rosenthal, CTO of Sentry. So Dave, I know lots of developers know about Sentry, know about the platform, because hey, we use Sentry and we love Sentry.
Starting point is 00:00:52 And I know tracing is one of the next big frontiers for Sentry. Why add tracing to the platform? Why tracing and why now? When we first launched the ability to collect tracing data, we were really emphasizing the performance aspect of that, the kind of application performance monitoring aspect, you know, because you have these things that are spans that measure how long something takes. And so the natural thing is to try to graph their durations and think about their durations and, you know, warn somebody if the durations are getting too long. But what we've realized is that the performance stuff ends up
Starting point is 00:01:23 being just a bunch of gauges to look at. And it's not super actionable. Sentry is all about this notion of debuggability and actually making it easier to fix the problem, not just sort of giving you more gauges. A lot of what we're trying to do now is focus a little bit less on the sort of just the performance monitoring side of things and turn tracing into a tool that actually aids the debuggability of problems. I love it. Okay, so they mean it when they say code breaks, fix it faster with Sentry. More than 100,000 growing teams use Sentry to find problems fast, and you can too. Learn more at Sentry.io.
Starting point is 00:01:57 That's S-E-N-T-R-Y.io. And use our code CHANGELOG. Get $100 off the team plan. That's almost four months free for you to try out Sentry. Once again, Sentry.io. Well, developers are unhappy. That's the sentiment, right? That is the sentiment.
Starting point is 00:02:26 Why? Are you happy, Jared? You're a developer, right? Are you in the 80% rule or are you in the 20% rule? It depends on the minute of the particular day. Okay. Whether or not I'm happy or unhappy. It's a fleeting thing, happiness.
Starting point is 00:02:43 Am I satisfied in my work? Yes. Do I always think that? No. Am I a typical developer? Probably not anymore. We've been podcasters now for a long time. And so I don't hold a nine to five software job, which is probably the people mostly who are being interviewed or surveyed. I wasn't in that survey, so my sentiment was not in there. No, I did not take the Stack Overflow. We have Abhi Noda here with us from DX. Abhi, did you take the Stack Overflow survey?
Starting point is 00:03:13 I did not, but I've definitely been looking at the results. Interesting results. 80% is a large number. I mean, that's an overwhelming number, and it's not a small survey. Pareto's principle says 80-20. 80-20. That's the big principle.
Starting point is 00:03:28 Right. Apparently it's true in regards to developers' happiness. Or lack thereof, I guess. That would be the 20. The happy would be 20, and the unhappy would be 80. What's interesting about this, so this came out, as we said, from the 2024 Stack Overflow Survey results synthesized by ShiftMag. So shout out to Anastasia Uspensky at ShiftMag for really highlighting this particular point and pulling together a few other data points to try to figure out, she was trying to figure out why.
Starting point is 00:04:01 Why are they unhappy? And so you might think, well, it's the AI. The AI is taking away our joy. That doesn't seem like that's the case. At least that's what her conclusion is. It's not the AI. The AI is making us slightly more productive and maybe a little bit more apprehensive about the future.
Starting point is 00:04:18 But currently, I think developers who are in their seats writing code know that at least today they aren't being replaced in large swaths by AI. So if you're a good software developer today, you're not too worried about that, at least not in the present. And it's not the stuff they're working on necessarily, but it is other things. Other things like tech debt and complexity. And so that kind of comes out in all kinds of different ways, but that was her finding. Abhi, do you have, you run a survey company, right? You guys create surveys for folks?
Starting point is 00:04:59 He said it, Abhi. He called you a survey company. Dang, those are- Sorry, is that reductive? That's a jab. I'm just kidding. I don't know. I don't mean to reduce. I don't mean to reduce. I'm just throwing some jokes out there. It's friends. You got to do it, you know? No, it's fair. It's fair.
Starting point is 00:05:13 I don't mean to reduce, but you all help people do surveys. Yep. Just curious your thoughts as we kick into this topic. We help people do surveys and collect other types of data on their developers. Just to clarify. Good job, Bobby. Yeah. The survey is really interesting.
Starting point is 00:05:30 One of the first things that came to mind when I read that headline, that 80% of developers being unhappy, was something we see across organizations we work with. Something a little bit similar. We track something around that we call attrition risk. So what is the likelihood of a developer actually leaving a company in the next 12 months? And that number typically hovers around 10 to 15%. Okay. And so one of the first things that came to mind, what are the implications of 80 of developers being happy right if if only 15 of them are actually going to leave the company right and you know that amounts to a lot of unhappy employees who are not doing their best work who
Starting point is 00:06:21 are probably not clocking in the 40, 50 hours that we're hoping for, who may be, you know, phoning it in a little bit. So that was interesting to me, just reading that headline. Right. I always go bigger when I see something like a industry like software development. And I start thinking, and we don't have answers necessarily, but I start wondering like, well, how many of workers are happy? You know, just in general, like is 80% like ridiculously large. It is an absolute terms, right? It's four out of five.
Starting point is 00:06:50 That's a large percentage. But if we compared it to some other industry, you know, medical workers, teachers, plumbers, pick your industry, would they be at like maybe 75% or 85% or are they down there in the 40s and 50s? And we're way out of line. That's the question that I usually ask and I don't have the answer ever. So I kind of just twiddle my thumbs and move on. I was thinking that too, the macro versus the micro, what's the devs versus the world aspect of this? Because I would imagine that medical workers as an assumption are generally, especially since the pandemic, are higher to be unhappy for obvious reasons. A lot of pressure put on them, a lot of change. I think a lot of bureaucracy, a lot of things in that system.
Starting point is 00:07:37 Plumbers, I'm not so sure. Plumbers, if you're an indie plumber, you're probably pretty happy. Plumbers make pretty good money. Yeah. They generally call their own shots. Kind of hard to replace. Call them in a pinch. It's like, hey, listen, I got water on my floor, man.
Starting point is 00:07:51 You got to come help me out here. Right. And they jump on it. And they're like, hey, 500 bucks. Thank you very much. Right. All you did was turn the nut. Come on now.
Starting point is 00:08:00 It's a relatively stable industry. I mean, you're always going to have people with plumbing, new plumbing, plumbing problems, et cetera. So it's not as much affected by perhaps the Federal Reserve like we are. The medical industry went through a huge swell, of course, during COVID, where there was just so many needs for medical workers that their salaries went through the roof. They were in huge demand.
Starting point is 00:08:28 Of course, they worked ridiculously long and trying hours. And so that was probably not producing happiness. But the pay was really, really good. And now coming down on the other side of it, it's similar to the software world, where it's like demand is waning, jobs are harder to find, you may go unemployed for a while. And so there's probably a similar chart, if you were to chart overall demand. Teachers though, is it going to cross examine on that?
Starting point is 00:08:56 Teachers are never happy, are they? I mean, they're so under-resourced, they're struggling. I just feel like most teachers probably would love to be happier. I don't know. What's funny though, is I wonder how you can Venn diagram happiness with job happiness. Cause I meet a lot of teachers that are very happy, very joyful, very purposeful,
Starting point is 00:09:16 serving, loving people and happy in life. But then you say, are you happy with your job? And I wonder now if we zoom out to this happiness unhappiness level with devs even because some of the findings said that code was not what made developers unhappy because most of them are doing things on the side either through learning or for career development things like that i just wonder how much is it job unhappiness is it unhappiness
Starting point is 00:09:42 generally because a lot of people especially in the States, that's where my lens is, that's where I live, is generally unhappy. Like with a lot of things. So does that like spill over the cup? Trickle over. Yeah. I think it does. This particular question was specific to like, are you happy with your job? And so that is the context that we're talking about.
Starting point is 00:10:04 But of course, nobody just draws a wall up around their job. And as they walk through the door to work, all of a sudden they're like this different feeling person. These things do affect each other. It's interesting. The question was, how satisfied are you in your current professional developer role? And the options were not happy at work, how satisfied are you in your current professional developer role?
Starting point is 00:10:30 And the options were not happy at work, complacent at work, and happy at work. So actually, of that 80% who are reported to be unhappy, 47% are complacent. So they didn't say they were unhappy. They said, meh. They said, meh. So what this number is, is you take the happy people and it's 20 and then there's two categories that make up the 80 and a large part of that is not like i hate my job they're just like you know it's a job which isn't all that bad i mean a job is a job because it's work i mean it's not I know we have a culture and of course the desire to like follow your
Starting point is 00:11:07 passion and do what you love and all of these things, but that's the few and the proud usually who can actually do that. You know, it's not very many of us who can do what we love all the time. And it feels like I would do this if I wasn't getting paid for it. Like that's not the normal. And so just being kind of meh with your job is, it could be worse, right? It could be worse.
Starting point is 00:11:28 What I think is really interesting is the why, right? So why are developers satisfied or unsatisfied in their jobs? And I think the first images that pop into our minds might be pay or their managers or layoffs or AI. But if I'm reading this correctly, the top contributors to satisfaction are actually the developer experience or technical data, right? The tooling, the complexity of the systems and the code base. Am I reading that correctly?
Starting point is 00:12:06 Is that how you guys read it? That's how I read it as well. Yes. Technical debt and complexity are the two driving factors to this unhappiness, which effectively is developer experience. I mean, it's your work. And how did we get there? I think it's just like two decades of move fast and break things, isn't it?
Starting point is 00:12:24 I mean, isn't that just kind of how we've gotten here? That's my best guess. Maybe. Yeah, two decades of move fast, break things, hire a lot of people, churn a lot of people. Churn. Reorg many times. And now everything's a mess. Right.
Starting point is 00:12:41 So in this tracking that you do with regard to attrition, 15% to 25%, is that what you said? Yeah. In the next 12 months, likely to move somewhere else. Do you also get qualitative information about why? Why are they moving on? Is it similar things? Yeah. Yeah.
Starting point is 00:12:59 So we're focused on measuring the developer experience. A lot of the things listed here, difficulty of understanding code or developer environments, CICD, strategy on the team. A lot of these things are aspects of the developer experience we measure for lots of different companies. Then we correlate the two. We correlate these different aspects or facets of developer experience against who's at risk of leaving and who's actually left. And our data actually aligns quite a bit with what I'm looking at here with the Stack Overflow report. Yeah, the difficulty of doing work as a developer seems to be the preeminent cause of regrettable attrition for companies. Not pay, not liking your manager, not stock compensation. It often is just the
Starting point is 00:13:54 difficulty of actually doing work. Right. Which can manifest in technical issues, but also bureaucratic issues. Makes it harder to be productive. Right. You're feeling like you're not getting anything done or you're constantly working Jira tickets and you come in in the morning and you got 20 open tickets and you work eight hours and you sweat and you bleed and you leave and you got 22 open tickets. And you're like, I'm never going to,
Starting point is 00:14:20 I'm never going to get myself up from here into a place of progress you just feel like maintenance maintenance is all it is and I can see how that would be demoralizing especially over time yeah when it's not getting better I mean it's demoralizing for developers it's also demoralizing for leaders I talk to who you know who, are getting this type of data at their companies and quarter after quarter, despite making efforts to, to make improvements around this stuff, the data keeps, keeps coming back that things are slow. People are frustrated. It's hard to get work done. So I guess the question is, how do you solve that problem?
Starting point is 00:15:07 Is it the organization's problem? Is it the leadership's problem? Is it the product's problem? Is it the market's problem? Because I think a lot of that complexity comes from the fact that solving software problems are hard generally. Being blocked is very common. Having to help others level up or answer questions is very common and that's going to be pretty much a thing i guess potentially if ai starts to solve some of this for
Starting point is 00:15:31 us that gets to be reduced some this blockage so to speak this spending time uh looking for answers spending time answering answers or repeating answers for people. This, the blockage that comes from the lack of awareness of where to go next to be productive. At GitHub, I think I told this story to you before, Adam. Tell it again. At GitHub, we had a lot of these problems. Developer tooling, getting releases out, the builds, developer environments. People were leaving and they were telling leaders that they were leaving because it was hard to get things done at GitHub. This is back in 2020, if I'm getting my years
Starting point is 00:16:12 right. And what we ended up doing, we froze features for a quarter, All of GitHub engineering, no features. Whole quarter spent fixing these problems. It was dramatic, right? And things got a lot better as a result. And another example is actually Atlassian. Their CTO is very public about how they're focusing on developer productivity, developer experience. And at Atlassian, not only do they have a pretty substantial portion of their engineering organization that is devoted to this type of stuff, but they give all product engineering teams at Atlassian 10% of their time to be spent fixing things,
Starting point is 00:17:01 as they call it, fixing things that suck, right? That get in the way of productivity so but but to answer your question adam i like what do we do about it and what's preventing us from doing things about it i i think it actually boils down to the fact that you see a survey like this from stack overflow right uh people are unhappy you. It's because of technical debt and the developer experience. But to actually do something about it as a business, you have to be able to calculate that the cost of doing something about it is outweighed by the return on investment you're going to get after you do something about something about it and i think that's
Starting point is 00:17:46 a really hard problem right now and no one knows how to actually quantify this set of things the developer experience right it is something that you can take to the cfo or ceo and say hey like we we're slow because of x y and z and if we fix and Z, we'll be this much faster and it'll be worth it. That's the hard problem. So no one can make the case for doing something about a lot of this stuff because you can't talk about it in terms of dollars. Yeah. I think this speaks to our technical debt metaphor and some of the argumentation we've had on this show with friends about, is that a good metaphor or not? Because you can't really quantify it like you can actual debt.
Starting point is 00:18:34 You know, you can take your debt and your debt service principle and interest. You can take your interest rate and you can put that on a chart and you can extrapolate it and say, look, if we don't pay this debt down now, maybe not if you're the United States government, but if you're like an actual business, you can say, we don't pay this debt down, we're going to go bankrupt in 90 days. And that convinces leadership to be like, okay, it's worth it. But when it comes to technical debt, we lack that quantitative ability to extrapolate forward and say, we're going this slow right now. If we don't dramatically change things, start paying this down, we're going to grind to a halt in 90 days. Well, what has been the happiness level of past surveys? Can we just use Stack Overflow surveys as an example? Because that's what we're lensing off of anyways, if that's a correct adjective or verb. Has the unhappiness changed dramatically
Starting point is 00:19:21 from 40, 50 to now 80%%. Has it always been 80%? Is that maybe a good baseline? Like mostly people are unhappy. That's a good thing because the reason why I asked that question is because innovation comes from angst. Unhappiness is a version of angst, right? And so you can only innovate and change if you have angst as opposed to some degree. I mean, it's one place. Like greed also drives innovation, right?
Starting point is 00:19:46 Like I want to make money. Of course. I need to invent something to make more money. But the greed may be causing the angst of the developers, Jared. So, I mean, you know, they're in the same bucket, basically. Yeah, okay. The angst is there.
Starting point is 00:19:56 Therefore, developers push, organizations push, products change, innovation happens. You know, the new Amazon occurs. Because? I don't have past Stack Overflow survey data. Do you, Avi? Do you, Avi? I don't. No.
Starting point is 00:20:12 Dang. Yeah. Someone in the audience is like, I've got it, but I can't talk to you. The way they ask that question, I wonder if that's how they've asked it before. I'd be curious if you could. Can you restate that question?
Starting point is 00:20:24 Because I think that's a good point, is the and the only answers is multiple choice. This is not an open ended question of why this is a scoped response. And so the 80% is extrapolated from that scoped response. Can you restate the question and the options? Yeah. So how satisfied are you in your current professional developer role? Not happy at work, complacent at work, happy at work as, as a, someone who spends a lot of time on survey design.
Starting point is 00:20:55 I do see a few potential issues with the, I mean, so they're asking about satisfaction, but then the responses are about happiness, which in satisfaction or happiness are really different constructs. That middle option is complacent. So it's not happy, happy or complacent. But complacency does.
Starting point is 00:21:17 It's not really the perfect middle between not happy and happy. I think it captures the essence of in between not happy and happy, but it's not necessarily the perfect middle. So it's an interesting way that they've asked the question because is it measuring job satisfaction or happiness? Happiness, I think, is really hard to actually measure. So I think that's why they worded it around satisfaction. You know, happiness, there's a lot of literature about how to actually measure happiness. There's entire fields where they've spent years trying to figure out how to measure happiness and what happiness is. And usually happiness is the sum of moments of feeling happy, right? took the day and divided it up into however many minutes like in each minute or how many times
Starting point is 00:22:06 throughout the day did you feel have a feeling of happiness as opposed to non-happiness and that's kind of how you get to how happy you are right rather than a point in time reflection of happiness which is pretty difficult so anyways i'm nerding out here a little bit on the the survey design well i like that because you have experience that we don't have in trying to like craft those so that they are optimal because you only have so much time and, and opportunities to like pull somebody for their thoughts. And if you pull them out incorrectly on accident, then you're kind of wasting everybody's time. Let me rant for a split second here about Stack Overflow and URLs. Okay, so first of all, I appreciate you all doing the survey. No real hate here. But I was trying to
Starting point is 00:22:50 answer Adam's question, which was, do we have past year's results? So I went to this year's results, survey.stackoverflow.co slash 2024 slash professional developers, found the link to that survey question. And then I went to the URL bar and I changed the year from 2024 to 2023. 404 page not found. I mean, come on people. Respect the URL structure. This is like what address bar hacking is all about. Like, come on, help us get to things in a way that makes sense.
Starting point is 00:23:23 I just appreciate good URLs and that's not a good one. Anyways, that was my mini rant, because I was going to have answers for you, Adam. I was going to have last year's answer to this question. I wanted you to have answers so bad. I don't have it, man. Maybe ChatGPT or something else might have a hallucinated version of an answer.
Starting point is 00:23:43 The reason why I think you nerding out on that and camping out on the semantics of the question and the response is because it certainly, it corners the person. It forces the person in response time. At the same time, there are probably lots of questions in the survey. So they could be experiencing cognitive overload while at that particular question, while also being slightly unhappy for their day. They may have measured their happiness moments in that day and be like, you know what, I'm unhappy. Not saying it's skewed, but it's important to scrutinize the question and the offered options as a response because that is what the sentiment is drawn from. And so if it's skewed or not so much poorly worded, I would, I would prefer you to say that I'd be versus me. Cause you're the professional
Starting point is 00:24:33 at crafting these in quotes surveys, just kidding, poorly designed, kidding, you know, because that's really important, right? The way you ask a question, the options you offer is where the sentiment comes from. And if it is ambiguous or not super clear, it's clear why the answer is potentially skewed. And so to understand how the efficacy of the answer set based on the question, I think that's what's worth scrutinizing. All right, real time follow up. I used their user interface to find last year's results. And as far as I can tell, they did not ask this question in 2023.
Starting point is 00:25:13 Maybe it was just one year they didn't, but we did not have last year's answer to this particular question. That's not why at 404, okay? They still changed their URL structure, but had it stayed the same, it still would have 404'd because they didn't ask that ask that question so unfortunately we can't really go back and say you know which way is it trending or is this an anomaly or anything like that
Starting point is 00:25:34 okay i want to talk about uh this idea of how can people talk about these problems in terms of dollars right i would love to hear this yes yeah so so that's something we're working on at dx it's funny that you know adam posed the question earlier what why aren't people fixing this because that's the same question that we ask and our customers that we work with ask too when, when they surface all this data. So many years ago, I read this book called How to Measure Anything. Have you guys read that book? No.
Starting point is 00:26:11 Okay. The topic is, as the title implies, how to measure anything and, in particular, how to measure things that are seemingly unmeasurable. So when we talk about what is the dollar ROI or interest rate or cost of technical debt and poor developer experience that just a few minutes ago, we were essentially calling that unmeasurable. Right. In this book, they talk about how anything, if you want to measure something in terms of dollars or cost, that you can really do that with anything. So as long as you take something intangible, like technical debt or developer experience, and then you correlate it to something objective or something monetary. So an example of this would be the DORA metrics and the DORA report, which I know you guys have followed. So what they essentially did is...
Starting point is 00:27:10 Give us a primer for those who aren't caught up on that. What is DORA? So DORA is the DevOps Research and Assessment. But since, I want to say, 2013, maybe they've been publishing an annual report on the state of devops and and they came over like right now we're talking about tech debt and developer experience but eight years ago people were talking about devops and hey like what is the roi of investing in devops so it's the same problem history repeats itself and what they did was they said, here are some ways we can measure DevOps. So it was like metrics like MTTR and lead time, deployment frequency.
Starting point is 00:27:54 So they said, here's DevOps. And what they did is correlate it to companies' profitability, stock returns and increases, you know, EMPS scores. And by doing that, they were able to quote unquote prove and show the dollar ROI. Hey, companies, when they invest in DevOps and get X percentage better, their stock prices tend to be X percentage higher, right? They tend to be X percentage more profitable. And so it wasn't perfect. That seems a little bit brittle to me. A little bit brittle. Let me tell you what we're doing
Starting point is 00:28:30 with developer experience. So we have this construct of what is developer experience. So we have our version of what Stack Overflow has here, where we have, it's called the developer experience index. So it's 14 of the top drivers of developer experience.
Starting point is 00:28:47 So we say, okay, that's how we measure developer experience. Then what we've been doing is correlating that measure to different outcomes. And one of them is actually self-reported time waste reported by developers. So how much time do you... It's a series of different questions we ask about how much time do you lose each week? How much time is wasted each week due to inefficiencies? And when we correlate the two, and we found that like a one point increase in the developer experience index score, which is the average of these 14 different areas of developer experience. A one point increase in that score. So one point increase in developer experience translates to almost a 1% decrease in time wasted.
Starting point is 00:29:38 And so again, this isn't perfect. You could call this brittle as well. But I think it's a little bit better because you're directly asking the people. Yeah. And it's more direct to dollars. It's not like stock price, which is a little bit of a leap, right? A lot of things, so many things affect the stock price. So, you know, using this approach, we can, folks can say, hey, if we improve developer experience by X points, that translates to X percentage reduction in waste, which translates to X amount of dollars, right? So that's how we're approaching it right now. I like that approach.
Starting point is 00:30:13 I think that time waste is reported by the actual people wasting the time. And so it's probably relatively reliable. Of course, there's always trolls and thoughtless respondents, but you can't get around that problem. Or estimators. Yeah. Yeah, I'm totally just being inefficient because of all these other things. It's not me.
Starting point is 00:30:35 It's you. There's that whole, I mean, you can't really, maybe you just account for that in your numbers. But yeah, if you are saying technical debt, complexity, bureaucracy, whatever it is, all these factors ultimately for the business are costing money, slowing things down. Wasting time is really a decent measure for that, like how much time is actually being wasted. And so if you can track that against this DXIX, what's this thing called, the DX Index? DXI, yeah, DXI, what's this thing called? The DX Index? DXI, yeah. DXI, Developer Experience Index.
Starting point is 00:31:08 At the same time, I don't know, it seems like a pretty decent approach. Is that bearing fruit? It is. Yeah, I mean, we can't think of any other way to do it. I think the feedback we get is this is great. Like if we can make this an industry standard, then my CEO is going to buy it. But there's still an education and
Starting point is 00:31:27 marketing gap there where folks, what I just explained to you, it's hard to get that across in like a five minute executive summary, right? Right. I think you should have a, do you do an annual or semi-annual survey to the public like Stack Overflow does? No, we should. We already have the data because we're surveying hundreds of thousands of people. Right. Yeah. The nice thing about this particular measure or this combination of measures is that if it could be somewhat generalized and made public,
Starting point is 00:32:04 it's now a tool and a resource for people who don't have those quantitative metrics inside their company to say, look, this stuff really matters. Look what it did for Walmart and these important companies. It's moving their bottom line. It's making them more productive. And they've proved it out over N years. And so if that's public information that I can take to my leadership and use that, then convince them that, hey, let's call a feature freeze or whatever it is that I'm trying to get done. Right. Yeah. Hey, friends.
Starting point is 00:32:45 You know we're big fans of Fly.io. And I'm here with Kurt Mackey, co-founder and CEO of Fly. Kurt, we've had some conversations and I've heard you say that public clouds suck. What is your personal lens into public clouds sucking? And how does Fly not suck? All right. So public clouds suck. I actually think most ways of hosting stuff on the internet sucks. And I have a lot of theories about why this is, but it almost doesn't matter. The reality is
Starting point is 00:33:13 if like I've built a new app for like generating sandwich recipes, because my family's just into specific types of sandwiches that use Braunschweiger as a component, for example. And then I want to put that somewhere. You go to AWS and it's harder than just going and getting like a dedicated server from Hetzner. It's like it's actually like more complicated to figure out how to deploy my dumb sandwich app on top of AWS, because it's not built for me as a developer to be productive with. It's built for other people. It's built for platform teams to kind of build the infrastructure of their dreams and hopefully create a new UX that's useful for the developers that they work with. And again, I feel like every time I talk about this, it's like I'm just too impatient. I don't particularly want to go figure so many things out purely to put my sandwich app in front of people.
Starting point is 00:33:56 And I don't particularly want to have to go talk to a platform team once my sandwich app becomes a huge startup and IPOs and I have to like do a deploy. I kind of feel like all that stuff should just work for me without me having to go ask permission or talk to anyone else. And so this is a lot of, it's informed a lot of how we've built Fly. Like we're still a public cloud. We still have a lot of very similar low level primitives as the bigger guys. But in general, they're designed to be used directly by developers. They're not built for a platform team to kind of cobble together. They're designed to be useful quickly for developers.
Starting point is 00:34:30 One of the ways we've thought about this is if you can turn a very difficult problem into a two-hour problem, people will build much more interesting types of apps. And so this is why we've done things like made it easy to run an app multi-region. Most companies don't run multi-region apps on public clouds because it's functionally impossible to do without a huge amount of upfront effort. It's why we've made things like the virtual machine primitives behind just a simple API. Most people don't do like code sandboxing or their own virtualization because it's just not really easy. There's no path to that on top of the clouds. So in general, I feel like, and it's not really fair of me to say public clouds suck because they were built for a different time.
Starting point is 00:35:09 If you build one of these things starting in 2007, the world's very different than it is right now. And so a lot of what I'm saying, I think, is that public clouds are kind of old, and there's a new version of public clouds that we should all be building on top of that are definitely gonna make me as a developer much happier than I was five or six years ago when I was kind of stuck in this quagmire. So AWS was built for a different era, a different cloud era. And Fly? A public cloud, yes, but a public cloud built for developers who ship. That's the difference. And we here at Changelog are developers who ship, so you should trust us. Try out Fly, fly.io. Over 3 million apps, that includes us, have launched on Fly.
Starting point is 00:35:50 They leverage the global anycast load balancing, the zero config private networking, hardware isolation, instant WireGuard VPN connections with push button deployments, scaling to thousands of instances. This is the cloud you want. Check it out. Fly.io. Again, fly.io. And I'm also here with Kyle Carberry, co-founder and CTO over at Coder.com. And they pair well with Fly.io. Coder is an open source cloud development environment, a CDE. You can host this in your cloud or on-premise. So Kyle, walk me through the process.
Starting point is 00:36:28 A CDE lets developers put their development environment in the cloud. Walk me through the process. They get an invite from their platform team to join their coder instance. They got to sign in, set up their keys, set up their code editor. How's it work? Step one for them, we try to make it remarkably easy for the dev. We never gate any features ever for the developer. They'll click that link that their platform team sends out.
Starting point is 00:36:52 They'll sign in with OIDC or Google, and they'll really just press one button to create a development environment. Now that might provision like a Kubernetes pod or an AWS VM. You know, we'll show the user what's provisioned, but they don't really have to care. From that point, you'll see a couple buttons appear to open the editors that you're used to, like VS Code Desktop or, you know, VS Code through the web. Or you can install our CLI. Through our CLI, you really just log into Coder and we take care of everything for you. When you
Starting point is 00:37:19 SSH into a workspace, you don't have to worry about keys. It really just kind of like beautifully, magically works in the background for you and connects you to your workspace. We actually connect peer-to-peer as well. You know, if the coder server goes down for a second because of an upgrade, you don't have to worry about disconnects. And we always get you the lowest latency possible. One of our core values is we'll never be slower than SSH, period, full stop.
Starting point is 00:37:39 And so we connect you peer-to-peer directly to the workspace. So it feels just as native as it possibly could. Very cool. Thank you, Kyle. Well, friends, it might be time to consider a cloud development environment, a CDE. And open source is awesome. And Coder is fully open source. You can go to Coder.com right now, install Coder open source, start a premium trial,
Starting point is 00:38:00 or get a demo. For me, my first step, I installed it on my Proxmox box and played with it. It was so cool. I loved it. Again, Coder.com. That's C-O-D-E-R.com. My idea for you, Abi, is a growth hack. Let's hear it. When you do this, and it would make sense if I were you, this is probably how I would at least consider it: it's got a core audience. If you have similar data, I would release whatever you're doing, or whatever data you can, around the time of this survey's announcement, and to some degree Venn diagram it. Number one, you associate the brand for DX
Starting point is 00:39:01 with a very beloved, mostly beloved brand: Stack Overflow. Some love, some hate. I thought you were talking about WordPress. Oh yeah. Yeah, exactly. Not yet.
Starting point is 00:39:11 Not yet. And then you can draw correlations between the questions, and the data they siphon from those questions, and the question and/or data set that you have that correlates, and Venn diagram across the two. One, to keep them honest, and not so much that they're not honest, but to keep this survey,
Starting point is 00:39:34 which, all surveys have an optimization opportunity. Yeah, they're all flawed. Right. There's no perfect survey. And I think you almost better the entire community, because you give not one data set, but two data sets. So how true is this? Your findings and cross-examination and Venn diagram may say, well, this is actually pretty close to true, because we have corollary data and we can corroborate these findings. Second, you get to feature things like DXI, and you get an opportunity for, now way more people know about DX and now find the benefit and/or interest in your beliefs,
Starting point is 00:40:15 which is this DXI index being such a core thing to you all. So, that's the idea. How do you like it? I like it a lot. I'm writing it down. I like the cross-examination piece. I do too. Yeah. And then I would say the second thing, and maybe this gives a foundation to a foundation, which is: in order to have an organization support or adopt this DXI, this Developer Experience Index, what do they need to have in place to get to that point?
Starting point is 00:40:47 Like, what does a mature, data-driven organization look like that has the ability to actually adopt this index and have that for themselves? Right. Yeah, I love it. That's a question. That was a question. Oh, I actually like the question. But that was also a question. Yeah, I mean, what's the foundation to get to this point, to have the 14 metrics? Isn't it 14 metrics? Yeah, 14 different metrics. So we haven't open sourced it. Right now it's proprietary, man. Yeah, I mean, that's actually one of the biggest strategic questions I've been wrestling with for over a year now. Do we open it up or do we keep it proprietary? Well, when you have the opportunity to become the index, I think, I mean, obviously, you know way more about your business than I do and how important it is internally as a proprietary thing.
Starting point is 00:41:39 But I can see huge upside in the open sourcing of it. Yeah, agreed. I think we can open source it while putting like a copyright on it, so you're not technically supposed to use it for commercial purposes. Like, Walmart can't actually deploy it themselves. Right, right, right.
Starting point is 00:42:02 Yeah, I mean, we can get into the licensing discussions. And we're happy to. We do it all the time. Yeah, yeah, yeah. And depending on what it is, open source might not even be the right word, right? Like maybe it's
Starting point is 00:42:12 Creative Commons. Maybe it's... Right. But you could still hold trademark and copyright against it. Like DXI could be
Starting point is 00:42:19 a trademarked thing, but also how you go about doing it, and how others can go about doing it, you can just let that stuff loose. You don't let go of the copyright, but you just let other people use it. So, and you can't call it DXI. It is a product, right? You trademark the DXI, right? That's what you're
Starting point is 00:42:33 saying, right? Makes sense. And they can be DXI compatible or, you know, whatever. But that's in the weeds. You were talking about the 14. Yeah, I mean, to deploy it, you know, we have the survey items, the measurements for those 14, and they deploy our platform, right? That's why it's proprietary, because they can't do it without our product. From you, yeah, DX. But in terms of who, you know, we work with the Pinterests, the Dropboxes, the Netflixes, you know, the bleeding edge tech companies. And we also work with, I mean, this isn't to, you know, diminish these organizations, but companies like Pfizer, P&G, Tesco, Bank of New York, BNY. So I think what we've seen is the DXI works in all kinds of environments, not just the bleeding edge tech companies, but more legacy, traditional enterprises as well. Sounds pretty cool. So you have found, then, if you have a DXI, which a lot of these companies do via deploying your guys' proprietary platform, and you're tracking time waste, you found that there is an inverse correlation between the two that is measurable and repeatable and reliable.
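The inverse correlation being described here can be sketched with a few lines of Python. All numbers below are invented purely for illustration; a real analysis would use survey-derived DXI scores and self-reported wasted hours per team, and DX's actual methodology is not public in the transcript:

```python
# Toy check of the DXI-vs-time-waste relationship. Every number here is
# made up for illustration; a real analysis would use survey data.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

dxi_scores = [62, 68, 71, 75, 80, 84]           # hypothetical DXI per team
hours_wasted = [9.5, 8.0, 7.5, 6.0, 5.5, 4.0]   # hypothetical hours/week wasted

r = pearson(dxi_scores, hours_wasted)
print(round(r, 2))  # strongly negative: better DX, less wasted time
```

A strongly negative coefficient is what an "inverse correlation" between developer experience and time waste would look like in the data.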
Starting point is 00:43:58 Yeah. That's pretty powerful. I mean, we all know it's true, but actually proving that it's true is a whole different thing. Yeah. I mean, so we did a meta-analysis, which we've published, and I'll give you guys a link to that. So we actually have developerexperienceindex.com. I have that vanity URL. Nice. Very nice. So long.
Starting point is 00:44:18 It is a little long. Gosh. But anyways, so, you know, our value was, I think it was like point-something-five. That's a really strong relationship between developer experience and time waste. So then on an individual company basis, we can look at their relationship, right? We just run that correlation for an individual company. And we always see a moderate to strong relationship at any given organization as well. And you find that it follows Pareto's principle as well in
Starting point is 00:44:51 terms of effort, like 20% of your effort gets 80% of your results? Or as you continue to improve your DX, is it trailing off or not? That's a great question. Like, do we see a different relationship at the edges, right? Yeah, we haven't studied that, but I will add that to my notes as well. That's really interesting. Yeah, that would be worth knowing. Yeah, I mean, I think it's logical that that would be the case. In almost any effort, at a certain point you're squeezing the radish, you know. But like, what's the sweet spot for companies, where they can put in this much effort into their developer experience and get that much out? Yeah. I think that would be
Starting point is 00:45:30 worth knowing. Absolutely. Can we dig into these 14 drivers? It is out there. Can we talk about them at least? Yeah. Did you go to developerexperienceindex.com? No, I just Googled Get DX and then DXI, and it landed me on this page that, okay, you can tell me if this is accurate. The number one, yeah, that's the white paper, the one number you need to increase ROI per engineer. Yep. And about two scrolls down you dig into figure two, which talks about the drivers and the outcomes. Yeah. And so I'll do the work for you, if you don't mind. The drivers are deep work, dev environment, batch size, local iteration, production debugging, ease of release, incident response, build and test, code review, documentation, code maintainability, change confidence, cross-team collaboration, and planning. Those are the drivers, those are the 14 dimensions. And those correlate to, what is it, five outcomes: speed, ease of delivery, quality, engagement, and efficiency. Dude, that's a good map. That's a really good map to maturity
Starting point is 00:46:42 in an organization, a dev organization. Like, all those things on the driver's side are really good. Like, what is my maturity level, and what is my, I don't know how you would describe it, I'm trying to think on the fly here, but how good am I, how good are we at these drivers? And then the correlations are obviously awesome, like the outcomes: the speed, the ease of delivery, quality, engagement, efficiency. Yeah. But that's a good map. I like that. It's taken years to arrive at those 14.
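For reference, the taxonomy just read out can be captured as plain data. The driver and outcome names below come straight from the conversation; how individual drivers weight into each outcome is not specified there, so no mapping is encoded:

```python
# The 14 DXI drivers and 5 outcomes as listed in the conversation.
# The driver-to-outcome weighting is proprietary, so none is encoded here.
DRIVERS = [
    "deep work", "dev environment", "batch size", "local iteration",
    "production debugging", "ease of release", "incident response",
    "build and test", "code review", "documentation",
    "code maintainability", "change confidence",
    "cross-team collaboration", "planning",
]
OUTCOMES = ["speed", "ease of delivery", "quality", "engagement", "efficiency"]

print(len(DRIVERS), len(OUTCOMES))  # 14 drivers, 5 outcomes
```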
Starting point is 00:47:10 You could almost map software-as-a-service on top of that sucker. I mean, there's like an offering for each of these drivers. I mean, there's a whole industry around... Totally. Oh, yeah. Production debugging, incident response, code review. Yeah. Et cetera, et cetera. I just find that interesting.
Starting point is 00:47:24 Which one, which SaaS service correlates to change confidence most? That's the one that stands out to me a lot. Like change is hard anyways, and the confidence in change. You could be a senior engineer and feel good about it. You could also be a junior engineer and feel good about it. But like, what gives you the confidence? How do you measure that with a service, a tool, or a SaaS? Yeah, I think, you know, change confidence,
Starting point is 00:47:49 it's about, it's a lot about how easy it is to actually test a change, like get feedback on a change. So I think everything from cloud dev environments, right? All these things kind of interrelate. Yeah. Like cloud dev environments that allow you to quickly spin up a staging environment for just your change, to easily manually test stuff. Obviously things like test coverage. Now, you know, AI is coming into the picture there, helping you write tests or even manually QA your work. So,
Starting point is 00:48:17 yeah, change confidence is about, like, when you make a code change, are you kind of YOLOing it when you ship it and just hoping it works, or do you actually feel confident when you make a change? It also just has to do with code quality, right? Like if you make a change in one area of the code, is it a house of cards, with cascading effects and everything? So a lot of things go into it, but it's ultimately about, you know, do the developers feel like they can actually make changes, or are they just monkey, you know,
Starting point is 00:48:36 duct-taping things and hoping that it works when they deploy it? Another newish feature of a lot of cloud hosts is preview branches. That's another way you can get change confidence. Netlify, Vercel, et cetera. They're providing a place where you can have your
Starting point is 00:49:03 development branch and it can be constantly publishing to a preview page on a subdomain, on a website. And so now you can both look at it yourself in production-ish and then also send it to your QA team or to your boss or whoever, your customer. I think that definitely helps with change confidence, because previews are nice. But yeah, there's so many tools that overlap in these things as well. Documentation, right? So, like, just if
Starting point is 00:49:30 you're modifying code, like, can you even understand how that code works, so you're confident in changing that little bit of code? Right. So yeah, a lot of things, all these factors interrelate, right? Even something like batch size, which is about incremental delivery. Like if you're working on huge changes, huge PRs, there's so much more risk, right? You just have a lot more surface area. So if you're delivering incrementally, your confidence is going to be higher
Starting point is 00:49:57 for each unit of change. Is this your next big thing, the DXI? It's been in the works for a while. DXI is one of the big things. The other is the Core 4. That one rhymes, so you know it's true: DX Core 4. Yeah, so if you go to that research tab on our website, there is the Developer Experience Index, and above it, the DX Core 4 is something else we've been developing. And that is speed, effectiveness, quality, and impact, right?
Starting point is 00:50:33 So that's the outcome of this. But the real problem we've been trying to solve is, you know, I think last year I came on here, Adam, and we talked about the DevEx framework I'd published with Nicole Forsgren and others. And so ever since we published that, people have been coming to us and saying, hey, Nicole created DORA, Nicole and Margaret created the SPACE framework, and then you, Nicole, and Margaret created the DevEx framework. We have three frameworks now for telling us what we're supposed to be
Starting point is 00:51:05 measuring in our organizations. So, to sum it all up, what should we actually be measuring? And the funny thing is, I was just talking... One more, add one more. You got the Core 4.
Starting point is 00:51:18 This is the one framework to rule them all. Because it doesn't replace the other three, it encapsulates all of them. This combines them all into one framework. Because everyone would ask us that question, and the way we would always answer that question was, well, it depends. I was just talking to a CEO at a big tech company who said, I was talking to Nicole, and I asked her, to sum it all up, what should we measure?
Starting point is 00:51:45 And she told me, it sort of depends. And I get that it's situationally dependent, but it would be really valuable to have something out of the box and standardized that we could benchmark across other companies and really have a way forward. So, and funny enough, I've had that same experience. I was talking to a CTO, actually at Capital One, who asked me, hey, I've been following your research for two years. So just tell me, what should we actually measure? And I said, uh, it depends. We can, you know, we can do a consulting engagement on that to like figure it out. But, you know, having a starting point that is, you know, out of the box is really valuable. So that's what the DX Core 4 is. I love that response.
Starting point is 00:52:31 And as a person who has to deliver that response frequently, my next response is always, I have to ask you 20 questions to answer that one question. Yeah. So you need to give me more time. If you want my true answer, the only way I can know what to respond with is to ask you several more questions. Yeah. So you need to give me more time. If you want my true answer, the only way I can know what to respond with is to ask you several more questions. Yeah.
Starting point is 00:52:49 And those questions may lead to even more questions. And so if you trust me, you've been following my data, give me a little bit of your time. And I will answer those questions by asking tons of questions.
Starting point is 00:53:02 Problem is no one has time. Well, they all want the silver bullet, right? Yeah. Give me a yes or no answer. Yeah, they want to go to the doctor and get the diagnosis, not go to the doctor and then have 16 follow-up appointments before you get the diagnosis.
Starting point is 00:53:19 And I get that. I mean, if you were a time waster, then that's different. But like, you can only answer a question from the CEO of a top tech company well if you understand more about the specifics of their business. What are their particular drivers? Not anymore, baby. Now he says Core 4.
Starting point is 00:53:37 Core four. Yeah. I think the core four provides a pretty good answer. I mean, we want people to customize. This isn't, hey, do this and do nothing else. Oh, I like this, actually. Core 2, Core 3, maybe. But the DX Core 4 does,
Starting point is 00:53:54 and now we've started rolling it out to people, and it's landed well. I actually asked the CEO, I showed him the Core 4 right after hearing about that experience that they had talking with Nicole. I said, hey, we've been working on something for this. And I asked him, to you as CEO, does this seem right? Does this seem correct?
Starting point is 00:54:16 Does this seem like the right way to be thinking about and measuring developer productivity? And he said, yes. Wow. I was going to give you an idea, but that might actually be the answer. Because rather than say, it depends, what if you said, we have a survey that takes you five minutes to answer? Instead of saying it depends, it spits out your customized Core 4. Right. This is your on-ramp, like your specialized, personalized on-ramp. Because I'm sure you can take that consulting session and to some degree distill it down into something a CEO who has very limited time
Starting point is 00:54:49 can answer in five to 10 minutes, right? Hey, I don't have an answer for you in this moment, but we have a very fast 10 minute or less. It really could be 15 if you want it to be, but most people it's 10. And if you answer these questions, I'll know exactly how to help you. That's like when, you know,
Starting point is 00:55:03 some of those personal, you know, Wealthfront and Betterment, some of those robo investment advisor platforms, right? You go through like a three-minute wizard and then they tell you what your investment portfolio should be. Should be. So, I like that. More ideas for you, Abi. Two you're taking away from here.
Starting point is 00:55:22 Yeah. Yeah. Lots of good ideas. Write that one down, Abi. Write it down. I'm just kidding. Write it down. Can we dig into these a little bit?
Starting point is 00:55:29 Sure. So the Core 4: speed, effectiveness, quality, impact. You say those are outcomes? Not necessarily. Those are the dimensions. Those are the categories. Okay. Think of them as your stocks, bonds, cash, right? Right. To use the stock portfolio analogy.
Starting point is 00:55:44 You need that balance. Because if you only measure speed, everything else goes to crap, right? Like you're not doing it correctly. There's your move fast and break things right there. We're moving very fast, but we are breaking things. Not breaking things, yeah. So a balanced portfolio, this is a nice metaphor.
Starting point is 00:56:02 For each of these, you have a key metric. This is something that you're going to track. And then you have secondary metrics, so there's some balance there as well. But for speed, the key metric is diffs per engineer. And I don't know if I might take issue with that one. Tell me more. Yeah.
Starting point is 00:56:18 Probably the last time I was on this show with Adam, I was probably dissing that metric. Opinions change, man. Opinions change. I'm a politician, right? You know, I flip-flop on the issues. Yeah, it just depends on who you're talking to. Yeah, but no, it's been a journey, I mean, just for me personally on this topic, because, you know, the whole reason I actually got into spending six, seven years on this whole problem space was because I
Starting point is 00:56:46 felt like metrics like diffs per engineer were reductive and not correct and not helpful. But one of the things that the Core 4 optimizes for is, so we work with a lot of technical leaders, engineering leaders. And as we were talking about earlier, one of their big challenges is rationalizing investments in developer productivity in a way that the CEO and CFO are going to agree to. And to do that, you need a shared definition of productivity that your CEO and CFO agree with. And to achieve that, I found that you do need some type of output measure, right? Like, we're not at a point in human evolution yet where most CEOs and CFOs are down with this idea that a developer experience index is the one metric that matters for understanding, you know, the maturity or effectiveness of the organization.
Starting point is 00:57:48 A lot of CFOs and CEOs still think, I mean, there are Fortune 500 companies still measuring lines of code and benchmarking that. Right. So we're still at a point in the state of the art around software engineering where output measures need to be a part of that conversation. It needs to be part of the way you're framing developer experience and developer productivity if you want the people you're pitching this to to actually fund it and believe it and buy in. So there's a marketability optimization here. That's one of the reasons PRs per engineer is in here. But the other reason is
Starting point is 00:58:26 we have come around to talking with a lot of companies, like Uber, Microsoft, top tech companies, where they use this metric as an input. It's not the sole metric. They're not performance-evaluating engineers based on this metric. But in aggregate, they are looking at this metric as an input to understanding how developer productivity is trending and compares to other organizations. And it's not useless, right? It is a useful indicator in aggregate. And that's why in the framework, in the Core 4, there's an asterisk and it says not to measure this at the individual level. So this is only to be looked at at an aggregate team, group, organization level, and benchmarked that way. And we've found it more useful than not.
Starting point is 00:59:12 So it says diffs per engineer, though. Diffs per engineer, then asterisk: not at the individual level. Right. Well, so, yeah, the metric is normalized. So you're looking at the aggregate divided by the population. But in terms of visualizing or reporting this, you're not looking at a list of people, right? You're looking at teams and organizations.
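The normalization described here, an aggregate PR count divided by headcount and reported per team rather than per person, could be sketched like so. The teams, names, and counts are invented for illustration, and this is only one plausible reading of the metric, not DX's actual implementation:

```python
# Hypothetical merged-PR records: (author, team). All data is made up.
from collections import defaultdict

merged_prs = [
    ("ann", "payments"), ("ann", "payments"), ("bo", "payments"),
    ("cy", "search"), ("cy", "search"), ("cy", "search"), ("di", "search"),
]
team_members = {"payments": {"ann", "bo"}, "search": {"cy", "di"}}

def diffs_per_engineer(prs, members):
    """Aggregate PR counts per team, then normalize by headcount.
    Reported only at the team level, never per individual."""
    counts = defaultdict(int)
    for _, team in prs:
        counts[team] += 1
    return {team: counts[team] / len(people) for team, people in members.items()}

print(diffs_per_engineer(merged_prs, team_members))
# payments: 3 PRs / 2 engineers = 1.5; search: 4 PRs / 2 engineers = 2.0
```

Note that individual authors appear only in the raw records; the reported numbers are team-level averages, which is the asterisk in the framework.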
Starting point is 00:59:36 Yeah. Right. I do see the contradiction there, though. Yeah. Well, certainly at an individual level, at face value, it seems contradictory, but it does make sense. Yeah. Maybe you could reword that to say like averaged across whatever. I'll write that down.
Starting point is 00:59:52 Oh, man. So many good notes for you here. Yeah. At an individual level, certainly it's a bad measure. Well, the problem is it becomes a bad measure, right? That's Goodhart's Law. Like once everybody knows that that's what's being measured, well, we all know how to play the game.
Starting point is 01:00:07 Yeah. Same thing with, I mean, lines of code moved to a slightly larger batch, you know? Yeah. And so I can criticize that one. I can also criticize lines of code. I can criticize features or tickets.
Starting point is 01:00:18 They can all be criticized, but then you're at a certain point, you're like, well, what can we actually do then? If everything sucks, we're going to have to pick one and go with it. And I guess if the industry is somewhat standardizing around that, then it's a decent compromise. And I think there's more we can do. Right.
Starting point is 01:00:33 I was just talking to a company, actually working with a company, a Silicon Valley tech company, and all the other Core 4 metrics were quite a bit below, like, P50 relative to industry peers, but diffs per engineer was higher. And this was bad for them, because they're trying to show their executives that they're behind peers so they can get funding to make improvements, right? Sure. So we were just trying to dive into the data, like, why is your diffs per engineer inflated, even though clearly, like, empirically and with the other Core 4 data points, you're not a high performing organization? And we couldn't really figure out an answer. I mean, there was a lot of speculation, like, you know, there are a higher number of config changes, like small PRs that aren't real changes. But every company has that, right? Like, that should be kind of that,
Starting point is 01:01:25 fuzziness should already be kind of accounted for in our benchmarks. And so that led to this idea, like, you know, could there be a weighted metric? So you're actually,
Starting point is 01:01:37 because not all diffs, PRs, are created equal, like we talked about, right? Some are one-minute changes. Some are one-line changes that are actually eight hours of work. Some are, you know, 800-line changes that took two minutes. So, you know, if we could apply some kind of weighting, like bucketing all these diffs and PRs, almost the same way we do estimation, like t-shirt sizes or something
Starting point is 01:02:02 like that. You know, I was thinking, could we use GenAI, like an LLM, to basically automatically try to categorize, based on the title, the description of the task, and the code changes, like, you know, was this a big change or was this actually a small change? Then you could get kind of a weighted number. That would be an improvement, yeah, to the signal you're getting out of an output measure like this. Yeah, even with a confidence score alongside, that would be really interesting.
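As a rough sketch of that weighting idea, here is a trivial heuristic stand-in for the LLM classifier being floated. The size thresholds, t-shirt buckets, and weights are all invented for illustration; a real version might prompt an LLM with the PR title, description, and diff rather than just counting lines:

```python
# Weighted diff counting: bucket each PR into a t-shirt size and weight it,
# instead of counting every merged diff as 1. Thresholds and weights are
# made-up placeholders, not anything from DX's actual framework.
SIZE_WEIGHTS = {"S": 0.5, "M": 1.0, "L": 2.0}

def classify(pr):
    """Toy classifier based only on lines changed. An LLM-based version
    would judge the title, description, and code change instead."""
    lines = pr["lines_changed"]
    if lines < 10:
        return "S"
    if lines < 200:
        return "M"
    return "L"

def weighted_diff_count(prs):
    return sum(SIZE_WEIGHTS[classify(pr)] for pr in prs)

prs = [
    {"title": "bump config value", "lines_changed": 2},
    {"title": "add retry logic", "lines_changed": 120},
    {"title": "refactor billing module", "lines_changed": 800},
]
print(weighted_diff_count(prs))  # 0.5 + 1.0 + 2.0 = 3.5
```

Under this scheme the config-bump PRs mentioned above would contribute less to the metric, which is exactly the inflation the weighting is meant to correct.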
Starting point is 01:02:38 There's still challenges with that, because the amount of change does not always correlate to the amount of effort. You can work an entire week on finding a bug, and then you found it, and it's a one-character change, and you're so exhausted by then that your commit message just says, I fixed it, or something, you know. And so the LLM just doesn't have much to work on. Boom. You know, I guess if you can just say, well, it's fuzzy, it's not going to be 100 percent, it's better than merely measuring. So did you come to
Starting point is 01:03:16 higher and i mean like i told them look like if i had a little bit more time here i would take like a random sample of your 200 prs and then like random sample of other companies and like try to do what an LLM would. I would like look at the titles and descriptions and like try to figure out like, are your PRs generally smaller, you know, lower effort or size tasks than other companies that, I mean, that probably has to be the reason. I can't. It's an interesting problem, though. Yeah. My guess is you dig into that and you find there's some sort of scheduled pull request that's just like changing something that should be in the database, but it's not.
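The weighting idea discussed above could be sketched roughly like this. This is a hypothetical illustration, not DX's actual model: the t-shirt-size thresholds and weights are invented, and the LLM categorization pass Abi mentions is left out, using lines changed as a stand-in signal.

```python
# Hypothetical sketch of a weighted diffs-per-engineer metric.
# Bucket thresholds and weights are invented for illustration only.

SIZE_WEIGHTS = {"xs": 0.25, "s": 0.5, "m": 1.0, "l": 2.0, "xl": 3.0}

def tshirt_size(lines_changed: int) -> str:
    """Bucket a diff into a t-shirt size by lines changed."""
    if lines_changed <= 5:
        return "xs"
    if lines_changed <= 50:
        return "s"
    if lines_changed <= 200:
        return "m"
    if lines_changed <= 800:
        return "l"
    return "xl"

def weighted_diffs(diff_sizes: list[int]) -> float:
    """Sum of per-diff weights instead of a raw diff count."""
    return sum(SIZE_WEIGHTS[tshirt_size(n)] for n in diff_sizes)

# A flood of tiny config changes no longer counts the same as real work:
config_heavy = [1, 2, 3, 1, 2]   # five tiny diffs
substantial = [400, 120]         # two larger changes
print(weighted_diffs(config_heavy))  # 1.25
print(weighted_diffs(substantial))   # 3.0
```

As the conversation itself points out, size is a weak proxy for effort (a week of debugging can end in a one-character fix), which is exactly where a pass over titles and descriptions might add signal that raw line counts cannot.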
Starting point is 01:03:56 And so there's just... Well, here's the thing. Here's another twist. Okay. Okay. So the twist is we measure this two ways. We actually measure diffs per engineer self-reported, meaning we just ask developers, on average,
Starting point is 01:04:10 how many PRs do you merge that you were the author of? Right. And we look at their actual GitHub data. And for this company, both numbers were like within a point... like, they were the same number, which is remarkable in and of itself. Yeah.
Starting point is 01:04:24 I mean, that sounds fishy right there. Like, what are the odds that they're that close? Well, not exactly, but within like 0.2, 0.3. Yeah. I mean, why wouldn't it be, right? Well, I thought, okay, it all depends on how exact it is. You know, if you have a vote and it's 99 to 1, you're like, okay. But if you have a vote and it's 100 to 0, now you're like, there was some collusion here. Like, something happened. Well, maybe people looked, right? Maybe people looked at their own GitHub activity
Starting point is 01:04:48 and answered the question. Which is fair, because maybe they don't know, and so they're going to look at it. Yeah, yeah. I'm just saying, if it was exactly the same, then I'd be like, there's something wrong with our system here. Yeah, it wasn't exactly.
Starting point is 01:04:58 So it was very close. And so that does exclude bot-authored pull requests, for example. Both measures, both the GitHub one, the objective one, and the self-reported, explicitly exclude bot-authored. I think you got someone in that org who just doesn't go to work, and they just have a bot that uses their own SSH key and just does, you know, every day, merges and stuff. And then you ask that person how much they merged, and they went and looked at their bot
Starting point is 01:05:26 and they just guessed the right answer. Yeah, looking at outliers would be interesting, though the self-reported accounts for that, because there's an upper bound. Like, the top option is actually, I think, like 10 or more per week. You can't put in "I merge 100 per week." You can't be a self-reported 10x dev.
Starting point is 01:05:43 Yeah, yeah, exactly. Right. Oh, what an interesting problem dev. Yeah, yeah, exactly. Right. Oh, what an interesting problem though. Yeah, it was fascinating. What's up, friends? I'm here in the breaks with Dennis Pilarinos, founder and CEO of Unblocked. Check him out at getunblocked.com.
Starting point is 01:06:05 It's for all the hows, whys, and WTFs. So Dennis, you know we speak to developers. Who is Unblocked best for? Who needs to use it? Yeah, Unblocked helps large teams with old code bases understand why something has been done in the past. It helps them understand what happens if they make changes to it.
Starting point is 01:06:25 Basically, all the questions that you would typically ask a co-worker, you no longer have to interrupt them. You don't have to wait for their response. If you're geographically distributed, you don't have to wait for that response. You don't have to dig through documentation. You don't have to try to find the answer in Confluence and Jira. What we basically do is give you the answer by you just asking a question. The way that we got to the problem was a consequence of our kind of lived experience. We were actually going to call the company Bother, which is like, you don't bother me, I don't bother you, right? Instead of
Starting point is 01:06:58 being tapped on the shoulder or interruptive Slack messages, we could just use Bother and get the answers that we wanted. We didn't go with that name because it has a little bit of a negative connotation. But helping developers get unblocked, by answering questions or surfacing data and discussions within the context of their IDE, relative to the code that they're looking at, is something that thousands of developers love so far. I think our listeners are really familiar with AI tooling, very familiar with code generation, LLMs. How is Unblocked different from what else is out there? A lot of code generation tools help you write the code to solve a problem.
Starting point is 01:07:35 We sit upstream of that. Our goal is to help provide the context that you need. If you think about where you spend your time when you're trying to solve a new problem: understanding how that system works, why it was built that way, what are the ramifications of changing something? That's the problem that Unblocked tries to solve for you. We take the data and discussions from the source code and all those systems to provide that answer for you, so that you can get that context and then you can go and write that code. We have this great example of a company who hires,
Starting point is 01:08:05 you know, very competent developers, and it took one developer five days to write 12 lines of code. And his feedback to us was, it's not that it takes you five days to write 12 lines of code; it took him five days to get the information that he needed to write those 12 lines of code. And then it takes probably about 30 minutes to write those 12 lines of code and rip off that PR. Okay, the next step to get unblocked for you and your team is to go to getunblocked.com. You and your team can now find the answers they need
Starting point is 01:08:37 to get their jobs done and not have to bother anyone else on the team, take a meeting, or waste any time whatsoever. Again, getunblocked.com. That's G-E-T-U-N-B-L-O-C-K-E-D.com. Get unblocked. I was thinking, though, as you guys were talking about this, that measuring speed... it depends, right? Because not every team can be measured, speed-wise, on the exact same metrics, which I think is why you have this key metric and then secondary metrics. Yeah. Because to round it out, you have the secondary metrics to sort of back
Starting point is 01:09:15 up and correlate to what the key metric speaks of. And the collection process is via systems, so collected data from a Git repo or other intelligence platforms, and then self-reported. Yeah, that's a good suggestion. We didn't look at the secondary metrics. We got really trapped in why is this diffs per engineer inflated?
Starting point is 01:09:37 I almost wonder if the key metric is what swaps out. Because on one team, the diffs per engineer may actually be the primary driver of the data you're trying to collect. On a whole different team, lead time or deployment frequency is actually the better key metric, and the others are the supporting ones. I don't know enough about your business how to do that. Yeah. This makes me want to go look at the perceived rate of delivery.
Starting point is 01:10:01 Perceived speed is one of those secondary metrics. The analogy here would be, for aerobic athletes, right, heart rate versus perceived rate of exertion, right? Those are the kind of two. And there's a lot of flaws in heart rate, because, I mean, just the altitude, right? You could be training at different altitudes and the heart rate's different, even though you're kind of doing the same load. Or you just wake up, you didn't get as much sleep, so your heart rate is fluctuating more. So yeah, we really like that perceived rate of delivery. We literally just ask people to rate the speed at which their team delivers. Like, one through ten, just rate it? Or it's
Starting point is 01:10:38 like a five-point scale. It's not... the options go from, like, extremely fast to slow. It's the actual speed rating words, not like a one-through-five thing. Yeah, very much inspired by perceived rate of exertion, which is on a ten, right? There's ten options. I think so, yeah. Perceived rate of exertion, or perceived rate of pain. Have you ever seen that for medical? Brian Regan. I think they use that in healthcare. Yeah. Yeah. There's a great Brian Regan stand-up where he talks about them asking him that question when he goes into the ER, and him thinking through what number should he say in order to get help as fast as possible.
Starting point is 01:11:19 He's like, never pick seven, you know, like you're always an eight. And here's the full-length, unedited clip of Brian Regan on this awesome bit. I was going to edit it, but I was thinking, gosh, I would just be editing this man's comedy, and I just can't do that. So if you don't want to hear the whole thing, skip to the next chapter. There you go. Nurse finally comes in. How are you doing tonight? I'm in agony. Do you have a painkiller or something?
Starting point is 01:11:51 This is killing me. So she goes, how would you describe your pain? It's killing me. She goes, how would you rate it on a scale of 1 to 10, with 10 being the worst? Well, you know, saying a low number isn't going to help you. Oh, I'm a 2. Maybe the high 1s. You could get me a baby aspirin and cut it in half. Maybe a Flintstone vitamin and I'll be out of your hair. You can go tend to all the threes and fours
Starting point is 01:12:25 and such if anyone's saying such ridiculous numbers. I couldn't bring myself to say ten though because I had heard the worst pain a human can endure
Starting point is 01:12:38 is getting the femur bone cracked in half. I don't know if that's true, but I thought, if it is, they have exclusive rights to ten. And now I'm thinking, what was I worried about? Was there like a femur ward at the hospital? They would have heard about me and hobbled into my room. Who the hell had the audacity to say he was at a level 10?
Starting point is 01:13:11 You know nothing about 10. Give me a sledgehammer. Let me show you what 10 is all about, Mr. Tommy A. No! No! No!
Starting point is 01:13:27 How can I possibly say ten? I can't. So I thought, I'll say nine. And then I thought, no, childbirth. I better not try to compete with that. And then I'm thinking, it'd almost be hell giving childbirth when your femur bone's cracked. No. So I said, I guess I'm an eight.
Starting point is 01:13:52 She goes, okay, I'll be back. I'm like, oh, I blew it, man. I ain't getting nothing with eight. But she surprised me. She comes in, she goes, the doctor told me to give you morphine immediately. And I'm like, morphine? That's what they gave the guy in Saving Private Ryan right before he died. Okay, I'm a four. I'm a zero. I'm a negative 11. Morphine. So they gave me morphine. Wow. All I know is about 15 minutes later, just for the hell of it, I was like, I'm an eight again. Guess who's an eight? And they finally checked me out. I'm walking
Starting point is 01:14:37 out in the hall going, say eight, say eight, say eight, say eight. Happy eight day. And so it's very much Goodhart's Law in a much funnier context. Yeah. Do you think, Abi, that your North Star with DX as an organization, what you're trying to do, is to define a path to happy developers? What do you think you're actually trying to accomplish? I mean, I know what you're doing as a result of giving survey results and this data, this, you know,
Starting point is 01:15:08 this formulaic and proprietary way to ask questions of an organization, how to disseminate this information and analyze it, that you're trying to help organizations be optimized. But do you think the true optimization factor is the path to happy developers? Happy and productive, right? I mean, the Stack Overflow survey once again confirms this, right? Because people are unhappy because they're unproductive is another way to characterize the findings, right? People are
Starting point is 01:15:38 frustrated because it's hard to get work done because of their tools, systems, whatever. Therefore, they're unhappy and they're unproductive, right? There's a lot of time being wasted here. So no, I would say our North Star is helping every organization, every tech organization, become the best version of themselves, right? I mean, that has different meaning to different people. But, you know, I'm the CEO of a company and we have engineers. And so the way I think about it is, with the people we have,
Starting point is 01:16:12 are we doing the best we absolutely could be? Are we as good as we could be? And, you know, we run the DXI and all these, the core four, and I'm looking at that, like, how can we get better? To another company, that probably just translates to, we spend a crap ton of money on engineers and we want to make sure we're maximizing that investment. Right. Or it might mean, look, our competitors seem to be creeping ahead of us. How do we go faster without just hiring more people? So lots of ways to tell the story in a one-liner, but yeah, it's about just being good at building software. And as a result of that, people are also happier,
Starting point is 01:16:52 because all research repeatedly shows that happy developers are productive and productive developers are happy, as the Stack Overflow survey also shows. I go back to the beginning of the conversation, which was 80% are unhappy. And what we failed to ask was, why are the other 20% happy? Yeah. Because I feel like if your North Star is productivity, that comes as a result, generally, in my opinion, and I don't know this qualitatively, of being happy; you have productivity when you're happy.
Starting point is 01:17:27 And you can't create, slash, make happy developers unless you understand what makes them happy. So why is the 20% that's not unhappy, happy? What is going on? Why are they happy? Yeah. I mean, whenever we look at this type of data, we're slicing and dicing. There's a couple things I could share. We do see some differences cross-culturally. So for example, we tend to see higher sentiment around this type of stuff, at least self-reported, from populations in India. For example, we tend to see, you know, higher satisfaction with
Starting point is 01:18:07 more junior developers, right? People who just don't have a frame of reference yet on what is good, right? They're still new to the profession. So if we just looked at this data, it could be that the 20% happy are coming from, you know, certain countries or certain levels of tenure and seniority. That could explain quite a bit of that 20%. I mean, and some of them are probably legitimately in good situations, with good developer experience and greenfield projects with no technical debt, where they're really happy. Yeah, I'm in full control. I have autonomy.
Starting point is 01:18:46 Yeah. No one yells at me. I'm getting paid. I'm not getting laid off. Yeah. I'm not too old. I'm getting fired at 25 because I'm too old to code. That's the joke now.
Starting point is 01:18:57 It's like, you've aged out. I'm 25. No, come on. Well, there's another interesting data point on their survey, another interesting question, which is about coding outside of work. And if you want an indication of somebody doing something because it makes them happy, it's something that they would do outside of work. And so for the same exact work of developing, while there's 80% unhappy at work, 68% of respondents said that they write code outside of work as a hobby. That's almost 70 out of 100 people.
Starting point is 01:19:32 That's a large number. And 40%, and there's some overlap there, these aren't mutually exclusive, code outside of work for professional development or self-paced learning from online courses. So these are people investing in themselves, caring about getting better at what they do. And that's kind of amazing. So we have this dichotomy of people who love to write software, generally speaking, and yet are unhappy writing software inside of their organization. And obviously you can look at your DXI and follow the 14, but the closer you can make your engineering teams feel like they're doing their hobby... You know, think about how a hobby works.
Starting point is 01:20:11 It is self-directed, first of all. So autonomy is huge. Most likely, unless they have a bunch of kids running around, there's deep work involved. You can lose yourself in it. You can go into the... I was going to say the garage, but if we're coding, well, maybe the garage, wherever it is that you write software, and just pound away at it for four hours without any interruptions and really lose yourself in it. A lot of these 14 metrics actually are manifest in hobbies. Obviously a business is a business, and so you can't just be like, everybody do what they want. It worked for a little while for GitHub, till they got to about a hundred, I think, a hundred
Starting point is 01:20:49 engineers. I was there for the ride, not at GitHub, but here podcasting and paying attention and using it as a product, and going to conferences where Zach Holman was traveling around and talking about their engineering-led development, and everybody pretty much just works on what they want to. That worked for GitHub for a long time. Long meaning in years, not in employee count. Up to 100 is not a large engineering team.
Starting point is 01:21:14 They're way larger now. But at a certain point, that thing falls apart, because there's work that needs to be done that nobody would just naturally pick unless it was assigned to them and they're paid to do it. And so eventually that happens. But if you can make your engineering team feel, at least approximately, like they would be doing this as a hobby, then I think you're going to have a lot of happy programmers. That's how I've heard this described. You know, how can you kind of get the
Starting point is 01:21:42 same feeling of joy and flow that you do when you're working on a side project, right? How do you get that same experience while working in your job? How can you recreate that? And if we could do that, we would unlock a lot more productivity. We would get a lot more out of engineers working at our companies. Yes, I think that's a good way to think about it. And a lot more happiness, too. Everybody wins there; there's no losers. These drivers, these 14 drivers, have you ever done a survey where you've asked developers to rank order the drivers? We do that. Yeah, that's for every organization we work with. So first, we capture the data on it.
Starting point is 01:22:25 And then, based on how they stack up on each of the 14, we say, hey, here's how you answered these 14. Now, out of these 14, what are the top one to three that would most benefit your productivity? Do you find that to be pretty subjective? Or are there certain ones that always float up to the top? There are definitely certain ones that tend to float more toward the top.
Starting point is 01:22:52 Such as? Documentation. Really? For sure, yeah. What's interesting is, the Stack Overflow survey kind of... technical debt is not one thing, right? Technical debt is actually... all 14 of these things, well, minus maybe two or three of them, are actually types of technical debt.
Starting point is 01:23:10 So documentation is actually a form of technical debt. Complex code is a form of technical debt. Slow CI/CD is a form of technical debt. So all the technical factors do tend to float toward the top, actually. But some of the cultural factors, cross-team collaboration, delays due to different teams
Starting point is 01:23:32 having to coordinate with one another is also, I'd say, a pretty common theme. Makes sense. Yeah. Is that across engineering teams or product teams? Or is that like systems versus, or ops versus devs? Yeah, good question.
Starting point is 01:23:55 These are the people... my response just now was deriving from how engineers report the friction. So from the perspective of developers, yeah, waiting on other teams, which could be cross-functional, or it could just be other engineering teams that have, you know, different services or whatnot. So that tends to be a big area of friction. Queues. It's always about queues, right? Yeah. CI/CD is a queue. Being delayed or whatever is a queue. Yeah. I can't work on this until you work on that.
Starting point is 01:24:20 I can't work on that until you work on this. We can't deploy that because of this. It's all queues. Wait on code review. That's't work on that until you work on this. We can't deploy that because of this. It's all queues. Wait on code review. That's right. Then there's deep work. That's just meetings, right? Not just meetings. Less meetings, please.
Starting point is 01:24:33 No, not just meetings. What else? It can be people actually asking you questions. It can be code reviews. It could be, hey, can you fix this quick thing? A customer asks for this thing, can you take care of it? It could be support.
Starting point is 01:24:48 It could be incidents. So it's much more than meetings. Yeah, that's something we see a lot. Companies just say, oh yeah, we just need a no-meetings Wednesday and then this problem's solved, right? And that's rarely the case. I was actually looking at DX,
Starting point is 01:25:03 what our top-ranked areas were. What matters most to your teams? Well, what do our developers say are the biggest areas that should actually be improved? And for us, it's actually that code maintainability. So, you know, how easy it is to actually understand and modify code. It's also clear project direction. So the projects they work on having clear goals and direction.
Starting point is 01:25:35 And it's that batch size, which we've since renamed to incremental delivery. But, you know, are you working on kind of small, continuous changes as opposed to large ships? Those were the top three for us. Well, those first two are driven by leadership, aren't they, Abi? I was thinking you were showing us your cards here.
Starting point is 01:25:53 I was like, dang. Yeah, yeah. Well, I'll say this. Our DXI score is three points above P90. So you're sitting pretty. Yeah, we're sitting really pretty. But yeah, even then, there's always room for improvement, right?
Starting point is 01:26:09 And actually, I just, I'm looking at the data now and actually our clear direction. He's smiling, y'all. He's not upset. He's smiling. Yeah, those top three I just mentioned actually are not above P90.
Starting point is 01:26:22 Those three specifically. Okay. So there's some room for improvement here. And same with code review turnaround, which is actually not above P90. Here's the better question, really. And I know you're poking fun, Jared, but...
Starting point is 01:26:34 I was. Yeah, and that's totally cool. And he likes it. You can't change what you don't measure, right? So now that you have this index and now that you have, you know, this awareness, even as a leader, you couldn't change it before if you didn't know it.
Starting point is 01:26:48 But now you have awareness. Your team has awareness. Your team that is answering these questions feels heard. If you're going and making change, and you say, hey, because of these results, or because of these findings we're getting from our DXI score, we're improving these things in these ways, then the morale changes. The ability to speak to leadership and influence change changes. You know, all those things really come into play now. This gives me a lot of reassurance as a leader, actually, because I wasn't sure before we ran this last... we call them snapshots, right? This last kind of benchmarking survey and data collection exercise. I really wasn't sure. I was
Starting point is 01:27:32 very pleasantly surprised by how good things are right now. I mean, that's what I would expect out of myself as a leader, right? Yeah. But I wasn't sure. Am I just thinking we're good? Is it actually terrible working here? Or are we actually as efficient... are we actually at that high level of efficiency that I would expect out of the way we do things here? And we are pretty efficient. So it's good to see. Well, we all want to think that we're doing well in that which we set out to do well. But the worst place to be is to not be doing well and not know it.
Starting point is 01:28:09 Right? So at this point, of course, you are reassured, because overall you're doing quite well. But even if you weren't, at least then you would know: okay, I thought I was doing well, but I obviously have some things to fix. Now, if we picked one of those three... let's not do commit change size, or whatever that one is. Let's go to the other two and say, okay, these have room for improvement. So pick one of those two and, just spitballing, what could you, Abi Noda, as a leader, do today, tomorrow,
Starting point is 01:28:35 in order to meaningfully move that at your next snapshot? Do you have any ideas? Yeah. I mean, the project direction, you know, that's on me. And yeah, I mean, you were asking earlier about Pareto's principle, but some of these are trade-offs, right? Because yeah, we can improve that, but that would involve a little more process, which would cost time and money. And given that it's already actually very good, it's more like something we want to keep in mind and be aware of, so we can just lean in a little bit more there. The code maintainability, that's already something we're really focused on. So that was validating, you know, that's something we need to continue focusing on. And how do you focus on that?
Starting point is 01:29:23 Like, what are your actual tactics? Good question. You know, having clear patterns. Like, I mean, just really pretty strict code review. And not just code review, but just making sure... I don't know if you've seen Addy Osmani's post, like, code is a love letter to the next developer. It rings a bell. Yeah. So that's in our onboarding doc for engineering. It's like, look, the only thing that matters here when you write code is how easy it is for the other people on the team to understand that code. And we really try to make decisions on how we build things around that principle. So it's good to see that then reflected in the data, right? People are saying that it is easy for them to understand and modify other
Starting point is 01:30:12 people's code here. So that's one of the ways. But yeah, it's a lot of hands-on driving that principle forward, right? I've vetoed a lot of technical decisions introducing new technology based on that principle: that every new technology we add, every new pattern we add, is something else someone has to learn, and is going to slow them down. I'm not sure where you scored on speed, but I assume it's pretty well, considering these were the only three that weren't great. Have you ever considered compromising a little bit of speed?
Starting point is 01:30:42 Like, there's your trade-off: let's slow down a little bit. Because a lot of times, just time to breathe and refactor and maintain actually improves code maintainability. If you have maybe your Snow Leopard moment, for instance. I'm not saying do a feature freeze or anything, but, like, small bits. So get this, I'm looking at the data. So when I look at our core four,
Starting point is 01:31:01 we are above industry P50 for speed, effectiveness, and impact, but not quality. When I look at some of the secondary metrics, so perceived speed and perceived quality, we are also above P90 on all of them except quality. So yes, I think we do sacrifice a little bit of quality for the sake of speed. And I mean, the data shows that very clearly. Yeah. Now, whether that's a problem or not is an interesting question. Yeah, exactly. As a startup, I'd much rather be P90-plus on speed right now, because we have quality issues, but they don't affect customers that much. That's the secret about our
Starting point is 01:31:51 quality problem. Like, we do have quality problems, but I've actually set the principle on the team: look, we're not building payroll software here. When we have a glitch in a report, it's not hugely disrupting or impacting our customers' businesses. And if we can quickly resolve it, then we're good. So that's a principle we have here. We have really fast recovery, but yeah, we do not have great quality, objectively speaking. Yeah, not abnormal for a company in your situation, I don't think. Do you have happy developers? It's a good question. So we don't measure... for the reasons I kind of shared, some concerns around measuring happy, we don't measure happiness. Do you have productive developers? Well, so yeah, we do. Based on... I mean, our DXI score
Starting point is 01:32:37 is through the charts, as I was saying. But we do, I mentioned earlier, we do measure attrition risk. This is interesting. I haven't actually looked at this. Oh, this is a real-time demo. Okay. I see. Okay. I mean, this I can't share out loud, but okay. So for risk of attrition, we look at it very similar to a blood test that you might get. So when you get a blood test, right, they tell you, here's the healthy range, right? If your whatever, blood pressure, cholesterol, is within this many milligrams to this, you're normal. So with attrition risk, the normal range, and I may be off here, but I think seven to 10% is the healthy range. So if 7% to 10% of your organization has signals of being at risk of attrition, that's normal.
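The blood-test analogy maps onto a simple range check. A minimal sketch, taking the 7-10% healthy band from Abi's hedged recollection above; the band and function are illustrative, not DX's actual reporting logic:

```python
# Sketch: flag an org's attrition-risk share against a "healthy range",
# the way a lab report flags a blood value. The 7-10% band is the range
# recalled in the conversation; treat it as illustrative only.

HEALTHY_LOW, HEALTHY_HIGH = 0.07, 0.10

def attrition_flag(at_risk: int, headcount: int) -> str:
    """Return a lab-report-style flag for the share of devs at risk."""
    share = at_risk / headcount
    if share < HEALTHY_LOW:
        return f"{share:.0%} - below normal range"
    if share <= HEALTHY_HIGH:
        return f"{share:.0%} - normal"
    return f"{share:.0%} - above normal range"

print(attrition_flag(2, 20))  # 10% - normal (high end, like DX's team)
print(attrition_flag(4, 20))  # 20% - above normal range
```

On a team of around 20 people the percentages are lumpy, since one person is already 5 points, which is worth keeping in mind when reading flags like these on small teams.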
Starting point is 01:33:32 And I'll just say we're at the high end of the normal range, but I'm looking at the data. Our reporting will tell you where that kind of risk of attrition. And so I'm looking at it now and it's, I, I, I'm aware of what is going on here. It's good. He knows the inside story. Like he, yeah. Well, how large is your team? Uh, I can't, it's around 20 people. Well, I mean, just roughly, I'm not trying to drill down. My point is like smaller teams, like these generalized aggregated numbers, you can see like, oh, well, there's a skew
Starting point is 01:34:05 there because of this one-person situation or whatever it is. Yeah. No, this is meaningful, though. I was actually worried about some flight risk in this area, on this team, and this data is telling me about it. So you're feeling better. I'm feeling better about our product, because today we've gotten to go through all the data
Starting point is 01:34:24 and it's been really, you know, to be honest with you, when I originally got the data, I was so busy I didn't fully go through it. But you guys asking these probing questions, I'm having fun. Yeah, me too. Thanks for going through that with us. So, yeah, 10% risk of attrition.
Starting point is 01:34:41 We'll plot it with you every snapshot. We'll give you an overall score, the ChangeLog score, from your business. Yeah. This is great. I do enjoy this. What I find fascinating is you mentioned the year 2020, and you were at GitHub. And I don't know if you know what year it is, but I do. Okay, it's 2024, just in case you weren't aware.
Starting point is 01:35:03 This is four years later, and you're this far with DX. Congratulations. Thank you. We've asked you hard questions. You've shared some insider information that is in this report, snapshots that probably only you and some others get to see. So kudos to you for being forthcoming on that. We've talked through DXI. We've talked through the Core 4. We've asked you a lot of hard questions.
Starting point is 01:35:22 And you're only a few years into this, and you're this steeped in, I would say, trajectory and maturity. You've got a strong team operating at speed, not the highest quality, but you understand where the lack of quality is and why it's okay to have that. And you have a pretty good foundation and some personal assurance, it seems, on how to take action when you need to. That's a great place to be in. Yeah. Hopefully, you know, I mean,
Starting point is 01:35:50 the way I look at this as a startup is, can we try to not end up like GitHub? Right. That's the goal. Oh, well, same old,
Starting point is 01:35:58 same old. And velocity specifically. Right. I mean, they were at a point where they weren't shipping, and people were leaving because it was so hard to ship. So, honestly, quite a few companies
Starting point is 01:36:13 we work with are kind of in that position. These big tech companies that went through that hypergrowth and churn and tons of reorgs are now confronting, okay, we can't just keep hiring people, but we need to be shipping faster. Where do we get that? What are the levers we can pull? And so, you know, it's kind of like health, right? If you can get ahead of this stuff and not have four decades of poor diet and exercise that hits you in your 50s. If we can kind of stay ahead of it,
Starting point is 01:36:49 I would hope that we're able to scale the business while staying P90 on velocity. That's the goal here right now. Would you be like GitHub insofar as they took a $7.5 billion payout? I would take that. I would take a few billion dollars. I was going to say, you'd be okay. Yeah, well positioned, I would say. I don't know who would acquire you, though. Like, who cares about what you care about to the point where you're an
Starting point is 01:37:15 acquisition target? I mean, anyone in the developer business, I would say. So yeah, GitHub, Microsoft, Amazon, Google, right? I mean, anyone in the cloud game cares a lot about benchmarking, assessing maturity. Right. Yeah. I mean, even Salesforce a little bit. Could you IPO?
Starting point is 01:37:33 We could. I don't... What do you want to do? I don't know. I'll do whatever fate has in store for us. I mean, we've pretty much bootstrapped the whole thing.
Starting point is 01:37:44 So we kind of control our own destiny. We don't have to have an IPO. We don't have to sell for eight billion dollars. We just want to keep going. I'm just drawn to this problem. And I think I shared this the last time I was on the show, Adam: this all started seven years ago, when I first became the CTO at a startup. And the CEO asked me, hey, Abi, all the other departments here are reporting metrics each month. Can you start reporting some metrics on the productivity of your engineering team? That was seven years ago. And so I joke with people, I'm still just trying to answer that question, because I couldn't answer it seven, eight years ago.
Starting point is 01:38:23 I asked other CTOs, what are you reporting? And I got 20 different answers. It was a hard problem. Yeah, for sure. There's a Silicon Valley episode about that, close to the end of the final season. Kind of funny. And you're... I think this was the other show we did together, that deep dive. I think that you're the only owner of the business, that you're the solo founder. I'm the majority... I have a co-founder, but yeah,
Starting point is 01:38:50 I'm the majority owner. Yeah. I thought it was singular owner. My knowledge in the time since the last time I had this conversation is diffing on me. But no, it might've been my previous business,
Starting point is 01:39:03 Pull Panda. I was the singular owner of that. But no, this one, I've had a business partner since the beginning. And you've taken venture capital? You said bootstrapped. Venture capital? We just took a little bit of angel investment at the very beginning from, you know, friends and family. Yeah.
Starting point is 01:39:22 But we never spent that, you know? So when I hear bootstrapped, that means that today you're cash flow positive, or at least reinvesting. Like, if not break even, you're potentially in the negative because you're reinvesting, not because you're losing money. Yeah. Now we're extremely cash flow positive. And I mean, to the point where my biggest concern each year is how to spend some of that so we don't pay the corporate, you know, 20% tax rate, because that just goes out. So yeah, I try to reinvest it. It's better to reinvest it than lose 20%. Do you do much advertising? I was going to say, we can invest some of that for you. I mean, we do. There's lots of opportunity, let's just say. Lots of opportunity.
Starting point is 01:40:09 Let's take that offline, Adam. I'm only kidding with you. Our listeners don't want to hear about how we get new sponsors and new partners. But they kind of do. No, they don't want to hear that. They'll eventually hear it. We won't let them, though. No.
Starting point is 01:40:22 This is fun, though, digging into this. I think that we want developers to be more happy, obviously. I think the question to me is how? What makes developers more happy? I think productivity is obviously one key metric, and maybe some secondary metrics could be what? I don't know.
Starting point is 01:40:37 Just happiness in life, potentially, other things that influence happiness. Pay, perks. Pay, perks. Free beer and a ping pong table, definitely, for a certain demographic. RTO? No. Right. That's my new tagline, Jared. Not "rug pull, not cool." It's "RTO? No." It's a good one. I was just going to say, I do think that freedom to live your life in a way that suits you and still work is a huge driver for a lot of people, more than money, probably right up there with productivity and enjoyment of my
Starting point is 01:41:12 work, like, do I also get to live my life in a way that suits me? But that's maybe just me projecting, because it's always been my primary driver, even more so than money: freedom. And I've very much enjoyed it for many years, so I'm appreciative that I have it. So maybe I overemphasized that. But I'm sure there's a survey out there that answers that somewhere. The Stack Overflow one, or next year's DX. What are you guys going to call this thing, your public thing?
Starting point is 01:41:38 Probably State of... State of DX. DX or Developer Productivity. We kind of use those terms interchangeably. Careful now, because there is a State Of organization run by a friend of ours who does State of JS, State of HTML, State of CSS. And they have this whole platform called State Of. Yeah.
Starting point is 01:41:59 We could just call it Developer Experience Index. You could also team up with them and have them help you run it or something. And I'm sure they'd be happy to, because they did create... although you have all your own software, so maybe it would be a square peg, round hole. But I know they have opened it up, and Google runs the State of HTML survey with them.
Starting point is 01:42:17 So if you wanted to really use State Of, I think there's definitely opportunity there. Anyways, in the weeds again. Let's call it a show. What do you think, Adam? Yeah, I'm down. Abi, thanks, man. It's been fun. Thanks for joining us, man.
Starting point is 01:42:30 It's been cool. Yeah, thanks for the invite. All right, bye, friends. Bye, friends. So, RTO? No. Are you an RTO person? Are you being forced back to the office? Say no.
Starting point is 01:42:45 I'm just kidding. Maybe you can't say no. It's a hard thing, because now we're in this world where we were once given this: hey, work anywhere. Hey, be remote. Hey, do whatever. Freedom. And the kind of jobs that we do in tech generally are jobs we can do remotely, pretty much from anywhere. We can be nomadic and tour the world and have fun and enjoy our life, or optimize for where we want to be in our life. And that makes us happy. But RTO is a thing. I say RTO? No. If I had to RTO and I couldn't say no, man, I'd be pretty sad. And if that's you, I feel for you. There's a place you can hang, though. It's called ChangeLog Community. We have a full-blown, fully open, no-imposters
Starting point is 01:43:32 Zulip instance where everyone is welcome. We're replacing Slack with Zulip. You can go to changelog.com slash community and sign up. Everyone is welcome. Come in there, hang out with us, and call it your home. Hang your hat,
Starting point is 01:43:46 and you are welcome. And I want to see you there. I also want to see you at All Things Open. Speaking of RTO: ATO. That's a good one. Allthingsopen.org. We love this conference. We go every single year.
Starting point is 01:44:00 We will be at booth 66, right by the ballroom. You will see us there, podcasting with everyone we possibly can. Come by and say hi. Hang out with us. High fives, handshakes, and, as you know, the occasional hug if necessary. And we can give you potentially a free ticket. Come hang out in Zulip. Or we can give you at least a 20% discount.
Starting point is 01:44:24 That's available to everyone. Use the code mediaChangelog20. Details are in the show notes. Camel case: media, then Changelog, then add 20 at the end. No spaces. There you go. The link is in the show notes. Follow that. That's the best thing, and I want to see you there. Okay, Plus Plus subscribers, we've got a bonus for you on this episode. If you are not a Plus Plus subscriber, go to changelog.com slash plus plus. It's better. OMG, it is better. I can't tell you why, you just have to find out for yourself. Go to changelog.com slash plus plus. Drop the ads, get closer to that cool ChangeLog metal, get bonus content like today. Free stickers mailed directly to you.
Starting point is 01:45:10 And the warm and fuzzies. Who doesn't want warm and fuzzies? I know I do. I like those things. Okay, thanks to our sponsors: Sentry, Fly, Coder, Unblocked. Wow, a lineup of awesome sponsors.
Starting point is 01:45:23 Sentry.io, Fly.io, Coder.com, and GetUnblocked.com. They love us. Go give them some love, and that supports us. And I appreciate that. Okay, BMC, thanks for those beats. You are awesome. That is it.
Starting point is 01:45:40 Friends is over. We're back again on Monday.
