a16z Podcast: Feedback Loops -- Company Culture, Change, and DevOps

Episode Date: March 28, 2018

with Nicole Forsgren (@nicolefv), Jez Humble (@jezhumble) and Sonal Chokshi (@smc90) From the old claim that "IT doesn't matter" and the question of whether tech truly drives organizational performance, we've been consumed with figuring out how to measure -- and predict -- the output and outcomes, the performance and productivity of software. It's not useful to talk about what happens in one isolated team or successful company; we need to be able to make it happen at any company -- of any size, industry vertical, or architecture/tech stack. But can we break the false dichotomy of performance vs. speed; is it possible to have it all?  This episode of the a16z Podcast boldly goes where no man has gone before -- trying to answer those elusive questions -- by drawing on one of the largest, large-scale studies of software and organizational performance out there, as presented in the new book, Accelerate: The Science of Lean Software and DevOps -- Building and Scaling High Performing Technology Organizations by Nicole Forsgren, Jez Humble, and Gene Kim. Forsgren (co-founder and CEO at DevOps Research and Assessment - DORA; PhD in Management Information Systems; formerly at IBM) and Humble (co-founder and CTO at DORA; formerly at 18F; and co-author of The DevOps Handbook, Lean Enterprise, and Continuous Delivery) share the latest findings about what drives performance in companies of all kinds. But what is DevOps, really? And beyond the definitions and history, where does DevOps fit into the broader history and landscape of other tech movements (such as lean manufacturing, agile development, lean startups, microservices)? Finally, what kinds of companies are truly receptive to change, beyond so-called organizational "maturity" scores? And for pete's sake, can we figure out how to measure software productivity already?? All this and more in this episode!

Transcript
Starting point is 00:00:00 Hi, everyone. Welcome to the a16z Podcast. I'm Sonal. So one of our favorite themes to talk about on this podcast is how software changes organizations and the nature of the firm. And today's episode takes a different angle on the topic by drawing on the research of one of the largest large-scale studies of software and organizational performance out there. From the authors of the new book, Accelerate: The Science of Lean Software and DevOps, Building and Scaling High Performing Technology Organizations by Nicole Forsgren, Jez Humble, and Gene Kim. Joining us to have this conversation, we have Nicole, who did her Ph.D. research trying to answer the elusive eternal questions around how to measure this, especially given past debates around does IT matter. She's now the CEO of DORA, DevOps Research and Assessment, which also puts out, with Puppet, the annual State of DevOps Report. And then we have Jez Humble, who is CTO at DORA, and is also the co-author of the books The DevOps Handbook, Lean Enterprise, and Continuous Delivery. They share the latest findings about high-performing companies of all kinds, even those who may not think they're tech companies, and answer whether there is an ideal organizational type, in size, architecture, culture, or people, that lends itself to success here. But we begin this episode by briefly talking about the history of DevOps and where it fits in the broader landscape of related software movements. So I started as a software engineer at IBM. I did hardware and software performance. And then I took a bit of a detour into academia because I wanted to understand how to really measure and look at performance in ways that would be generalizable to several teams, in predictive ways. And so I was looking at and investigating how to develop and deliver software
Starting point is 00:01:36 in ways that were impactful to individuals, teams, and organizations. And then I pivoted back into industry because I realized this movement had gained so much momentum and so much traction, and industry was just desperate to really understand what types of things are really driving performance outcomes. And what do you mean by this movement? This movement that now we call DevOps. So the ability to leverage software to deliver value to customers, to organizations, to stakeholders. And I think from historical points of view, the best way to think about DevOps, it's a bunch of people who had to solve this problem of how do we build large distributed systems that were secure and scalable and be able to change them really rapidly
Starting point is 00:02:24 and evolve them. And no one had had that problem before, certainly at the scale of companies like Amazon and Google. And that really is where the DevOps movement came from, trying to solve that problem. And you can make an analogy to what Agile was about since the kind of software crisis of the 1960s and people trying to build these defense systems at large scale, the invention of software engineering as a field,
Starting point is 00:02:47 Margaret Hamilton, her work at MIT on the Apollo program. What happened in the decades after that was everything became kind of encased in concrete in these very complex processes: this is how you develop software. And Agile was kind of a reaction to that, saying we can develop software much more quickly with much smaller teams
Starting point is 00:03:04 in a much more lightweight way. So we didn't call it DevOps back then, but it's also more agile. Can you guys break down the taxonomy for a moment? Because when I think of DevOps, I think of it in the context of the containerization of code and virtualization.
Starting point is 00:03:18 I think of it in the context of microservices and being able to do modular teams around different things. There's an organizational element. There's a software element. There's an infrastructure component. Like, paint the big picture for me of those building blocks and how they all kind of fit together.
Starting point is 00:03:30 Well, I can give you a very personal story, which was my first job after college was in 2000, in London, working at a startup where I was one of two technical people in the startup. And I would deploy to production by FTP and code from my laptop directly into production. And if I wanted to roll back, I'd say, hey, Johnny, can you FTP your copy of this file to production? And that was our rollback process. And then I went to work in consultancy where we were on these huge teams. And deploying to production, there was a whole team with a Gantt chart, which puts together the plan to deploy to production
Starting point is 00:03:58 and I'm like, this is crazy. Unfortunately, I was working with a bunch of other people who also thought it was crazy and we came up with these ideas around deployment automation and scripting and stuff like that and suddenly we saw the same ideas had popped up everywhere, basically. I mean, it's realizing that if you're working
Starting point is 00:04:13 in a large complex organization, Agile's going to hit a brick wall because unlike the things we were building in the 60s, product development means that things are changing and evolving all the time. So it's not good enough to get to production the first time. You've got to be able to keep getting there on and on. And that really is where DevOps comes in. It's like, well, Agile, we've got a way to
Starting point is 00:04:30 build products, but how do we keep deploying to production and running the systems in production in a stable, reliable way, particularly in a distributed context? So if I phrase it another way, sometimes there's a joke that says day one is short and day two is long. What does that mean? Right. So day one is when we like create all these... That's, by the way, sad that you have to explain the joke to me. No, it's... No, which is great, though, because so day one is when we create all of these systems, and day two is when we deploy to production. We have to deploy. We have to deploy and maintain forever and ever and ever and ever.
Starting point is 00:05:00 So day two is an infinite day. Right, exactly. Yeah. And successful products. Hopefully. We hope that day two is really, really long. And we're fond of saying Agile doesn't scale. And sometimes I'll say this and people shoot laser beams out of their eyes.
Starting point is 00:05:14 But when we think about it, Agile was meant for development. Just like Jez said, it speeds up development. But then you have to hand it over, especially to infrastructure and IT operations. What happens when we get there? So DevOps was sort of born out of this movement, and it was originally called Agile System Administration. And so then DevOps sort of came out of development and operations. And it's not just dev and ops, but if we think about it, that's sort of the bookends of this entire process.
Starting point is 00:05:42 Well, it's actually like day one and day two combined into one phrase. The way I think about this is I remember the stories of like Microsoft in the early days and the waterfall cascading model of development. Leslie Lamport once wrote a piece for me about why software should be developed like houses, because you need a blueprint. And I'm not a software developer, but it felt like a very kind of old way of looking at the world of code. I hate that metaphor. Tell me why. If the thing you're building has well understood characteristics, it makes sense. So if you're building a truss bridge, for example, there's well-known, understood models of building truss bridges. You plug the parameters into the model
Starting point is 00:06:16 and then you get a truss bridge and it stays up. Have you been to the Sagrada Família in Barcelona? Oh, I love, I love Gaudi. Okay. So if you go into the crypt of the Sagrada Família, you'll see his workshop, and there's a picture, in fact, a model that he built of the Sagrada Família, but upside down, with weights simulating the stresses. And so he would build all these prototypes and small prototypes, because he was fundamentally designing a new way of building. All Gaudi's designs were hyperbolic curves and parabolic curves, and no one had used that before. Things that had never been pressure tested.
Starting point is 00:06:46 Right. Literally in that case. Exactly. He didn't want them to fall down. So he built all these prototypes and did all this stuff. He built his blueprint as he went, by building and trying it out, which is a very rapid prototyping kind of model. Absolutely. So in the situation where the thing you're building has known characteristics and it's been done before, yeah, sure, we can take a very phased approach to it. And, you know, for designing these kind of protocols that have to work in a distributed context and you can actually do formal proofs of them, again, that makes sense. But when we're building products and services where particularly we don't know what customers actually want and what users actually want, it doesn't make sense to do that, because you'll build something that no one wants.
Starting point is 00:07:23 You can't predict. And we're particularly bad at that, by the way. Even companies like Microsoft, where they are very good at understanding what their customer base looks like. They have a very mature product line. Ronny Kohavi has done studies there, and only about one third of the well-designed features
Starting point is 00:07:41 deliver value. That's actually a really important point. The mere question of does this work is something that people really clearly don't pause to ask. But I do have a question for you guys, to push back, which is, is this a little bit of a cult? Oh, my God, it's like so developer-centric, let's be agile, let's do it fast, our way, you know, two pizzas, that's the ideal size of a software team. And, you know, I'm not trying to mock it.
Starting point is 00:08:02 I'm just saying that isn't there an element of actual practical realities, like technical debt and accruing a mess underneath all your code, and a system that you may be on for two or three years before you go off to the next startup, but okay, someone else has to clean up your mess? Tell me about how this fits into that big picture. This is what enables all of that. Oh, right? So it's not actually just creating the problem, because that's how I'm kind of hearing it. No, absolutely. So you still need development. You still need test. You still need QA. You still need operations. You still need to deal with technical debt. You still need to deal with re-architecting really difficult, large monolithic code bases. What this enables you to do is to find the problems, address them quickly, move forward. I think that the problem that a lot of people have is that we're so used to couching these things as tradeoffs and as dichotomies,
Starting point is 00:08:51 the idea that if you're going to move fast, you're going to break things. The one thing which I always say is, if you take one thing away from DevOps, it's this. High-performing companies don't make those trade-offs. They're not going fast and breaking things. They're going fast and making more stable, more high-quality systems. And this is one of the key results in the book and in our research, this fact that high performers do better at everything
Starting point is 00:09:11 because the capabilities that enable high-performance in one field, if done right, enable it in other fields. So if you're using version-control for software, you should also be using version control for your production infrastructure. If there's a problem in production, we can reproduce the state of the production environment in a disaster recovery scenario, again in a predictable way that's repeatable. I think it's important to point out that this is something that happened in manufacturing as well. Give it to me.
Starting point is 00:09:35 I love when people talk about software as drawn from hardware analogies; it's my favorite type of metaphor. Okay, so I've got to, so Toyota didn't win by making shitty cars faster. They won by making higher quality cars faster and having shorter time to market. The lean manufacturing method, which, by the way, also spawned lean startup thinking and everything else connected to it. And DevOps pulls very strongly from lean methodologies. So you guys are probably the only people to have actually done a large-scale study of organizations adopting DevOps. What is your research and what did you find? Sure.
Starting point is 00:10:05 So the research really is the largest investigation of DevOps practices around the world. We have over 23,000 data points, all industries. Give me like a sampling. What are the range of industries? So I've got entertainment. I've got finance, I have health care and pharma, I have technology, government, education. You basically have every vertical. And then when I tell you around the world: so we're primarily in North America, we're in EMEA, we have India, we have a small sample in Africa.
Starting point is 00:10:35 Right. Just to quickly break down the survey methodology questions that people have: in the ethnographic world, the way we would approach it is that you can never trust what people say they do. You have to watch what they do. However, it is absolutely true, and especially in a more scalable sense, that there are really smart surveys that give you a shit ton of useful data. Yes, and part two of the book covers this in almost excruciating detail. We like knowing methodology, so it's nice to share that. And it's interesting, because Jez talked about in his overview of agile how it changes so quickly and we don't have a really good definition. What that does is it makes it difficult to measure, right?
Starting point is 00:11:09 And so what we do is we've defined core constructs, core capabilities, so that we can then measure them. We go back to core ideas around things like automation, process, measurement, lean principles. And then I'll get that pilot set of data and I'll run preliminary statistics to test for discriminant validity, convergent validity, composite reliability, to make sure that it's not testing what it's not supposed to test, it is testing what it is supposed to test, and everyone is reading it consistently the same way that I think it's testing. I even run checks to make sure that I'm not inadvertently inserting bias or collecting bias just because I'm getting all of my data from surveys. Sounds pretty
Starting point is 00:11:50 damn robust. So tell me then, what were the big findings? That's a huge question, but give me the hit list. Well, okay, so let's start with one thing that Jez already talked about. Speed and stability go together. This is where he was talking about there not being necessarily a false dichotomy, and that's one of your findings, that you can actually accomplish both. Yeah, and it's worth talking about how we measured those things as well. So we measure speed or tempo, as we call it in the book, or sometimes people call it throughput as well, which is a nice full-circle manufacturing idea, like the semiconductor circuit throughput.
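(An aside for the statistically curious: the construct checks Nicole describes are standard psychometrics. As a rough, hypothetical sketch, here is how one common internal-consistency statistic, Cronbach's alpha, can be computed over made-up Likert-scale responses; the actual research uses fuller composite-reliability and validity tests, and the data below is invented.)

```python
# Hypothetical sketch: Cronbach's alpha, a basic internal-consistency check
# for a survey construct. The responses are invented; the study itself uses
# fuller composite-reliability and validity statistics, not just alpha.

def cronbach_alpha(items):
    """items: one list per survey item, each holding the same respondents'
    answers in the same order."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                       # number of items in the construct
    n = len(items[0])                    # number of respondents
    item_var_sum = sum(variance(it) for it in items)
    # total score per respondent, summed across items
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three made-up 5-point Likert items intended to measure one construct
responses = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [5, 4, 2, 4, 5, 3],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # 0.85; values above roughly 0.7 are conventionally acceptable
```

If items that are supposed to measure the same thing don't hang together (low alpha), the instrument, not the respondents, is usually the problem; catching that is exactly what the pilot run is for.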
Starting point is 00:12:20 Yeah, absolutely. I love hardware analogies for software, I told you it. A lot of it comes from lean. So lead time, obviously one of the classic lean manufacturing measures we use. How long does it take? We look at the lead time from checking into version control to release into production. Right. So that part of the value stream, because that's more focused on the DevOps end of things.
Starting point is 00:12:35 Right. And it's highly predictable. The other one is release frequency, so how often do you do it? And then we've got two stability metrics. And one of them is time to restore. So in the event that you have some kind of outage, or some degradation in performance in production, how long does it take you to restore service?
Starting point is 00:12:50 For a long time, we focused on not letting things break. And I think one of the changes, paradigm shifts we've seen in the industry, particularly in DevOps, is moving away from that. We accept that failure is inevitable because we're building complex systems. So not how do we prevent failure, but when failure inevitably occurs,
Starting point is 00:13:06 how quickly can we detect and fix it? MTBF, right? Mean time between failures. If you only go down once a year but you're down for three days, and it's on Black Friday, that's bad. But if you're down with a very, very small blast radius
Starting point is 00:13:19 and you can come back almost immediately and your customers almost don't notice, that's fine. The other piece around stability is change fail rate: when you push a change into production, what percentage of the time do you have to fix it because something went wrong? By the way, what does that tell you
Starting point is 00:13:32 if you have a change fail? So in the lean kind of discipline, this is called percent complete and accurate, and it's a measure of the quality of your process. So in a high quality process, when I do something for Nicole, Nicole can use it rather than sending it back to me and saying, hey, there's a problem with this.
Starting point is 00:13:47 And in this particular case, what percentage of the time when I deployed something to production, is there a problem because I didn't test it adequately? My testing environment wasn't in production-like enough. Those are the measures for finding this. But the big finding is that you can have speed and stability together through DevOps.
Starting point is 00:14:03 Is that what I'm hearing? High performers, get it all. Low performers kind of suck at all of it. Medium performers hang out in the middle. I'm not seeing trade-offs. Four years in a row. So anyone who's thinking, oh, I can be more stable if I slow down.
Starting point is 00:14:16 I don't see it. It actually breaks a very commonly held kind of urban legend around how people believe these things operate. So tell me, are there any other sort of findings like that? Because that's very counterintuitive. Okay, so this one's kind of fun. One is that this ability to develop and deliver software with speed and stability drives organizational performance.
Starting point is 00:14:34 Now, here's the thing. I was about to say, that's a very obvious thing to say. So it seems obvious, right? Developing and delivering software with speed and stability drives things like profitability, productivity, market share. Okay, except if we go back to Harvard Business Review, 2003, there's a paper titled "IT Doesn't Matter." We have decades of research.
Starting point is 00:14:55 I want to say at least 30 or 40 years of research showing that technology does not drive organizational performance. It doesn't drive ROI. And we are now starting to find other studies and other research that backs this up. Erik Brynjolfsson out of MIT. James Bessen out of Boston University, 2017. Did you say James Bessen?
Starting point is 00:15:17 Yeah. Oh, I used to edit him too. Yeah, it's fantastic. Here's why it's different. Because before, right, in like the 80s and the 90s, we did this thing. We're like, you'd buy the tech and you'd plug it in and you'd walk away. It was an on-prem sales model where you, like, deliver and leave, as opposed to, like, software as a service and the other ways things work now. And people would complain if you tried to upgrade it too often.
Starting point is 00:15:38 And the key is that everyone else can also buy the thing and plug it in and walk away, so how is that driving value or differentiation for a company? If I just buy a laptop to help me do something faster, everyone else can buy a laptop to do the same thing faster. That doesn't help me deliver value to my customers or to the market. It's a point of parity, not a point of distinction. Right. And you're saying that point of distinction comes from how you tie together that technology, process, and culture through DevOps. Right, and that it can provide
Starting point is 00:16:11 a competitive advantage to your business. If you're buying something that everyone else also has access to, then it's no longer a differentiator. But if you have an in-house capability and those people are finding ways to drive your business, I mean, this is the classic Amazon model.
Starting point is 00:16:23 They're running hundreds of experiments in production at any one time to improve the product. And that's not something that anyone else can copy. That's why Amazon keeps winning. So what people are doing is copying the capability instead.
Starting point is 00:16:34 And that's what we're talking about. How do you build that capability? The most fascinating thing to me about all this is honestly not the technology per se, but the organizational change part of it and the organizations themselves. So of all the people you studied, is there an ideal organizational makeup
Starting point is 00:16:48 that is suited for DevOps? Or is it one of these magical formulas that has this ability to turn a big company into a startup and a small company into, because that's actually the real question. From what I've seen, there might be two ideals. The nice, happy answer is the ideal organization is the one that wants to change.
Starting point is 00:17:05 That's, I mean, given this huge n = 23,000 data set, is it not tied to a particular profile of a size of company? They're both shaking their heads, just for the listeners. I see high performers among large companies. I see high performers in small companies. I see low performers in highly regulated companies. I see low performers in unregulated companies. So tell me the answer you're not supposed to say.
Starting point is 00:17:29 So that answer is, it tends to be companies that are like, oh shit, and there are two profiles. Either one, they're like way behind, and oh shit, and they have some kind of funds. Or they are this lovely, wonderful bastion, like they're these really innovative, high-performing companies. But they still realize there are a handful, like two or three companies, ahead of them. And they don't want to be number two. They are going to be number one. So those are sort of the ideal. I mean, just like, anthropomorphize it a little bit.
Starting point is 00:18:02 It's like the 35 to 40-year-old who suddenly discovers you might be pre-diabetic, so you better do something about it now before it's too late. But it's not too late, because you're not so old where you're about to reach sort of the end of a possibility to change, that runway. And then there's this person who's sort of kind of already, like, in the game,
Starting point is 00:18:20 running in the race, and they might be two or three, but they want to be, like, number one. And I think, to extend your metaphor, the companies that do well are the companies that never got diabetic in the first place, because they always just ate healthily. They were already glucose monitoring; they had continuous glucose monitors on,
Starting point is 00:18:35 which is like DevOps, actually. They were always athletes. Right. You know, diets are terrible because at some point you have to stop the diet. And it has a sudden start and stop, as opposed to a way of life, is what you're saying. Right, exactly. So if you just always eat healthily and never eat too much, or very rarely eat too much, and do a bit of exercise every day, you never get to the stage.
Starting point is 00:18:53 Oh, my God, now I can only eat tofu. So, like, my loving professor-ness, nurturing Nicole, also has one more profile that, like, I love, and I worry about them, like a mother hen. And it's the companies that I talk to, and they come to me, and they're struggling. And I haven't decided if they want to change. But they're like, so we need to do this transformation, and we're going to do the transformation. And it's either because they want to or because they've been told that they need to. And then they will insert this thing where they say, but I'm not a technology company.
Starting point is 00:19:27 I'm like, but we just had this 20-minute conversation about how you're leveraging technology to drive value to customers, or to drive this massive process that you do. And then they say, but I'm not a technology company. I could almost see why they had that in their head because they were a natural resources company. But there was another one where they were a finance company. I mean, an extension of software eats the world is really every company is a technology company.
Starting point is 00:19:54 It's fascinating to me that that third type exists, but it is a sign of this legacy world moving into software. And I worry about them. Also, at least for me personally, you know, I lived through this, like, massive extinction of several firms, and I don't want it to happen again. And I worry about so many companies that keep insisting they're not technology companies, and I'm like, oh honey child, you're a tech company. You know, one of the gaps in our data is actually China, and I think
Starting point is 00:20:18 China is a really interesting example, because they didn't go through the whole, you know, IT-doesn't-matter phase. They're jumping straight from no technology to Alibaba and Tencent. Right. I think US companies should be scared, because Tencent and Alibaba are already moving into other developing markets, and they're going to be incredibly competitive because it's just built into their DNA. So the other fascinating thing to me is that you essentially were able to measure performance of software and clearly productivity. Are there any more insights on the productivity side?
Starting point is 00:20:48 Yes, yes, I want to go. This is his favorite rant. He's, like, jumping around and waving his hands. So tell us. The reason the manufacturing metaphor breaks down is because in manufacturing you have inventory. Yes. We do not have inventory in the same way in software. In a factory, like, the first thing your lean consultant is going to do,
Starting point is 00:21:06 walking into the factory, is point at the piles of things everywhere. But I think if you walk into an office where there's developers, where's the inventory? By the way, that's what makes talking about this to executives so difficult. They can't see the process. Well, it's a hard question to answer, because is the inventory the code that's being written? And people actually have done that and said, well, listen, lines of code are an accounting measure and we're going to capture that as, you know, capital. That's insane. It seems like an invitation to
Starting point is 00:21:31 write crappy, unnecessarily long code. That's exactly what happens. It's like the olden days of getting paid for a book by how long it is, and it's like actually really boring when you could actually write it in, like, one third of the length. Right. I'm thinking of Charles Dickens.
Starting point is 00:21:43 In general, you know, you prefer people to write short programs because they're easier to maintain and so forth. But lines of code have all these drawbacks. We can't use them as a measure of productivity. So if you can't measure lines of code, what can you measure? Because I really want an answer. Like, how do you measure productivity? So velocity is the other classic example. In Agile, there's this concept of velocity, which is the number
Starting point is 00:21:58 of story points a team manages to complete in an iteration. So before the start of an iteration in many Agile, particularly Scrum-based, processes, you've got all this work to do. Like, we need to build these five features. How long will this feature take? And the developers fight over it. And they're like, oh, it's five points. And then this one's going to take three points. This one's going to take two points. And so you have a list of all these features. And you don't get through all of them. But at the end of the iteration, the customer signs off: well, I'm accepting this one. This one's fine. This one's a hot mess. Go back and do it again.
Starting point is 00:22:33 and whatever. The number of points you complete in the iteration is the velocity. So it's like the speed at which you're able to deliver those features. So a lot of people treat it like that, but actually that's not really what it's about. It's a relative measure of effort, and it's for capacity planning purposes. So basically, for the next iteration, you'll only commit to completing the same velocity that you finished last time. So it's relative and it's team dependent. And so what a lot of people do is start comparing velocities across teams. Then what happens is, a lot of the work requires you to collaborate between teams,
Starting point is 00:23:00 but hey, if I'm going to help you with your story, that means I'm not going to get my story points and you're going to get your story points. Right, people can game it as well. You should never use story points as a productivity measure. So lines of code doesn't work. Velocity doesn't work. What works? So this is why we like two things in particular. One, that it's a global measure. And secondly, that it's not just one thing; it mixes two things together which might normally be in tension. And so this is why we went for our measure of performance.
Starting point is 00:23:17 One thing that it's a global measure And secondly that it's not just one thing It mixes two things together which might normally be intention And so this is why we went for our measure of performance So measuring lead time and release frequency and then time to restore and change fail rate. Lead time is really interesting because lead time is on the way to production, right?
Starting point is 00:23:41 So all the teams have to collaborate. It's not something where, you know, I can go really fast in my velocity, but nothing ever gets delivered to the customer. That doesn't count in lead time. So it's a global measure. It takes care of that problem of the incentive alignment around the competitive dynamic. Also, it's an outcome. It's not an output. There's a guy called Jeff Patton.
Starting point is 00:23:59 He's a really smart thinker in the kind of lean, agile space. He says, minimize output, maximize outcomes, which I think is simple but brilliant. It's so simple because it just shifts the words to impact. And even we don't get all the way there because we're not yet measuring, did the features deliver the expected value to the organization of the customers. Well, we do get there because we focus on speed and stability, which then deliver the outcome to the organization, profitability, productivity, market share. But the second half of this, which I am also hearing, is did it meet your expectations? Did it perform to the level that you wanted it to? Did it match what you asked for?
Starting point is 00:24:40 Or even if it wasn't something you specified that you desired or needed, that seems like a slightly open question? So we did actually measure that. We looked at non-profit organizations. And these were exactly the questions we measured. We asked people, did the software meet, I can't remember what the exact questions were. Effectiveness, efficiency, customer satisfaction, delivery mission goals. How fascinating that you do it in nonprofits
Starting point is 00:25:00 because that is a larger move in a non-profit measurement space to try to measure impact. But we captured it everywhere because even profit-seeking firms still have these goals. In fact, as we know from research, companies that don't have a mission
Starting point is 00:25:13 other than making money do less well than the ones that do. But I think, again, what the data shows is that companies that do well in the performance measures we talked about outperform their low-performing peers by a factor of two. Our hypothesis is what we're doing
Starting point is 00:25:27 when we create these high-performing organizations in terms of speed and stability is we're creating feedback loops. What it allows us to do is build a thin slice, a prototype of a feature, get feedback through some UX mechanism, whether that's showing people the prototype and getting their feedback,
Starting point is 00:25:43 whether it's running A-B tests or multivariate tests in production. It's what creates these feedback loops that allow you to shift direction very fast. I mean, that is the heart of Lean Startup. It's the heart of anything you're putting out into the world is you have to kind of bring it full circle, It is a secret of success to Amazon, as you cited earlier.
Starting point is 00:26:00 I would distill it to just that. I think I heard Jeff Bezos say the best line. It was at the Internet Association dinner in D.C. last year where he'll come and ask him about an innovation. He's like, to him an innovation is something that people actually use. And that's what I love about the feedback loop thing, is it actually reinforces that mindset of that's what innovation is. Right.
Starting point is 00:26:16 So to sum up, the way you can frame this is DevOps is that technological capability that underpins your ability to practice, lean startup, and all these very rapid iterative processes. So I have a couple of questions then. So one is, you know, going back to this original taxonomy question, and you guys describe that there isn't necessarily an ideal organizational type. Which, by the way, should be encouraging. I agree.
Starting point is 00:26:38 I think it's super encouraging and more importantly democratizing that anybody can become a hit player. We were doing this in the federal government. I love that. But one of my questions is when we had Adrian Cockcroft on this podcast a couple years ago talking about microservices. And the thing that I thought was so liberating about what he was describing the Netflix story was, was that it was a way for teams to essentially become little mini product management units and essentially self-organize because the infrastructure, by being broken down into these micro pieces versus, say, a monolithic kind of uniform architecture,
Starting point is 00:27:14 I would think that being a organization that's containerized its code in that way that has this microservices architecture would be more suited to DevOps. or is that a wrong belief? I'm just trying to understand, again, that taxonomy thing of how these pieces all fit together. So we actually studied this. There's a whole section on architecture in the book
Starting point is 00:27:33 where we looked at exactly this question. Architecture has been studied for a long time and people talk about architectural characteristics. There's the Atam, the architectural trade-off model that Carnegie Mellon developed. There's some additional things we have to care about. Testability and deployability. Can my team test its stuff
Starting point is 00:27:50 without having to rely on this very complex integrated environment? Can my team deploy its code? to production without these very complex orchestrated deployments. Basically, can we do things without dependencies? That is one of the biggest predictors in our cohort of IT performance is the ability of teams to get stuff done on their own without dependencies on other teams, whether that's testing or whether it's deploying or whether it's planning.
Starting point is 00:28:12 Even just communicating. Yeah. Can you get things done without having to do like mass communication and checking in permissions? The question I love, love, love asking on this podcast is we always revisit the 1937 Coast paper about the theory of the first. firm and this idea that transaction costs are more efficient. And this is like the ultimate model for reducing friction and those transaction costs, communication coordination costs,
Starting point is 00:28:34 all of it. That's what, like, all the technical and process stuff is about that. I mean, Don Rynesson once came to one of my talks on continuous delivery. At the end, he said, so continuous delivery, that's just about reducing transaction costs, right? And I'm like, huh, an economist's view of DevOps. You're right. You reduced my entire body of work to one sentence. But it's so much Conway's law, right? It's what remind me what Conway's law So organizations which design systems are constrained to produce designs, which are copies of the communication structures of these organizations. Oh, right.
Starting point is 00:29:02 It's that idea, basically, that your software code looks like the shape of the organization itself. Right. And how we communicate. Right. Right. So which, you know, Jez just summarized, if you have to be communicating and coordinating with all of these other different groups. Command and control looks like waterfall.
Starting point is 00:29:17 A more decentralized model looks like independent teams. Right. So the data shows that. One thing that I would say is a lot of people jump on the microservices, containerization bandwagon. There's one thing that is very important to bear in mind. Implementing those technologies does not give you those outcomes we talked about. We actually looked at people doing mainframe stuff.
Starting point is 00:29:34 You can achieve these results with mainframes. Equally, you can use Kubernetes and Docker and microservices and not achieve these outcomes. We see no statistical correlation with performance, whether you're on a mainframe or a greenfield or a brownfield system. If you're building something brand new, or if you're working on existing build. And one thing I wanted to bring up that we didn't before
Starting point is 00:29:56 is I said, you know, day one is short, day two is long, and I talked about things that live on the Internet and live on the web. Yeah. This is still a really, really smart approach for package software. And I know people who are working in and running package software companies that use this methodology because it allows them to still work in small, fast approaches. And all they do is they push to a small package pre-production. database, and then when it's time to push that code onto some media, they do that.
Starting point is 00:30:28 Okay. So what I love hearing about this is that it's actually not necessarily tied again to the architecture or the type of company you are. There's this opportunity for everybody. But there is this mindset of like an organization that is ready. It's like a readiness level for a company. Oh, I hear that all the time. I don't know if I'd say there's any such thing as readiness, right? Like there's always an opportunity to get better. There's always an opportunity to transform. The other thing that really, like, drives me crazy and makes my head explode, is this whole maturity model thing. Okay, are you ready to start transforming? Well, like, you can just not transform and then maybe fail, right? Maturity models, they're really popular in
Starting point is 00:31:07 industry right now, but I really can't stress enough that they're not really an appropriate way to think about a technology transformation. I was thinking of readiness in the context of like NASA technology readiness levels or TRLs, which is something we used to think about a lot for very early stage things, but you're describing maturity of an organization, and it sounds like there's some kind of a framework for assessing the maturity of an organization, and you're saying that doesn't work. But first of all, what is that framework, and why doesn't it work? Well, so many people think that they want a snapshot of their DevOps or their technology transformation and spit back a number, right? And then you will have one number to compare yourself
Starting point is 00:31:43 against everything. The challenge, though, is that a maturity model usually is leveraged. to help you think about arriving somewhere. And then here's the problem. Once you've arrived, what happens? Oh, we're done. You're done. And then the resources are gone. And by resources, I don't just mean money.
Starting point is 00:32:02 I mean time. I mean attention. We see year over year, over year, the best most innovative companies continue to push. So what happens when you've arrived? I'm using my finger quotes. You stop pushing. You stop pushing.
Starting point is 00:32:15 What happens when executives or leaders or whomever decide that you no longer need resources of any type. I have to push back again, though. Doesn't this help because it is helpful to give executives in particular, particularly those that are not tech native coming from the seeds of the engineering organization, some kind of metric to put your head around, where are we, where are we at? So you can use a capability model. You can think about the capabilities that are necessary to drive your ability to develop
Starting point is 00:32:43 and deliver software with speed and stability. Right. Another limitation is that they're often kind of a lockstep or a linear formula, right? No, right. It's like a stepwise A, B, C, D, E, 1, 2, 3, 4, and in fact, the very nature of anything iterative is it's very nonlinear and circular. Feedback loops are circles. Right. And maturity models just don't allow that. Now, another thing that's really, really nice is that capability models allow us to think about capabilities in terms of these outcomes. Capabilities drive impact. maturity models are just this thing where you have this level one, level two, level three, level four. It's a bit performative.
Starting point is 00:33:21 And then finally, maturity models just sort of take this snapshot of the world and describe it. How fast is technology and business changing? If we create a maturity model now, let's wait, let's say, four years. That maturity model is old and dead and dusty and gone. Do new technologies change the way you think about this? because I've been thinking a lot about how product management for certain types of technologies changes with the technology itself and that machine learning and deep learning might be a different beast. And I'm just wondering if you guys have any thoughts on that.
Starting point is 00:33:53 Yeah, I mean, me and Dave Farley wrote the continuous delivery book back in 2010. And since then, you know, there's Docker and Kubernetes and large-scale adoption of the cloud and all these things that you had no idea would happen. People sometimes ask me, you know, isn't it time you wrote a new edition of the book? I mean, yeah, we could probably rewrite it. Does it change any of the fundamental principles? No. do these new tools allow you to achieve those principles in new ways? Yes. So I think
Starting point is 00:34:18 this is how I always come back to any problem is go back to first principles. And the first principles, I mean, they will change over the course of centuries. I mean, we've got modern management versus kind of scientific management, but they don't change over the course of like a couple of years. The principles are still the same. Technologies give you new ways to do them and that's what's interesting about them. Equally, things can go backwards. A great example of this is one of the capabilities we talk about in the book is working off a shared trunk or master inversion control, not going on these long-lived feature branches. And the reason for that is actually because of feedback loops. You know, if developers love going off into a corner,
Starting point is 00:34:55 putting headphones on their head and just coding something for like days, and then they try and integrate it into trunk, you know, and that's a total nightmare. And not just for them, more critically, for everyone else, who then has to merge their code into whatever they're working on. So that's hugely painful. Git is one of these examples of a tool that makes it very easy. And people are like, oh, I can use feature branches. So I think, again, it's nonlinear in the way that you describe. Gives you new ways to do things.
Starting point is 00:35:17 Are they good and bad? It depends. But the thing that strikes me about what you guys have been talking about as a theme in this podcast that seems to lend itself well to the world of machine learning and deep learning where that technology might be different is it sort of lends itself to a probabilistic way of thinking and that things are not necessarily always complete and that there is not a beginning and an end and that you can actually live very comfortably in an environment where things are by nature complex and that complexity is not necessarily something to avoid.
Starting point is 00:35:45 So in that sense, I do think there might be something kind of neat about ML and deep learning and AI for that matter because it is very much lending itself to that sort of mindset. Yeah, and in our research, we talk about working in small batches. There's a great video by Brett Victor called Inventing on Principle where he talks about how important it is to the creative process to be able to see what you're doing. And he has this great demo of this game he's building where he can change the code and the game changes its behavior instantly when you're doing things like that.
Starting point is 00:36:13 You don't get to see that. No, and the whole thing with machine learning is how can we get the shortest possible feedback from changing the input parameters to seeing the effect so that the machine can learn. And at the moment you have very long feedback loops, the ML becomes much, much harder because you don't know which of the input changes
Starting point is 00:36:29 cause the change in output that the machine is supposed to be learning from. So the same thing is true of organizational change and process. And product development as well, by the way, which is working in small batches so that you can actually reason about cause and effect. You know, I changed this thing. It had this effect.
Starting point is 00:36:44 Again, that requires short feedback loops. That requires small batches. That's one of the key capabilities we talk about in the book, and that's what DevOps enables. So we've been this hallway-style conversation around all these themes of DevOps, measuring it, why it matters, and what it means for organizations. But practically speaking,
Starting point is 00:36:59 if a company, and you guys are basically arguing at any company, not necessarily a quote company that thinks it's a tech company and necessarily a company that has like this amazing modern infrastructure, structure stack. It could be a company that's still working off mainframes. What should people actually do to get started and how do they know where they are? So what you need to do is take a look at your capabilities, understand what's holding you back, right? Try to figure out what your constraints are. But the thing that I love about much of this is you can start somewhere and culture is such a core, important piece we've seen across so many industries, culture is truly transformative.
Starting point is 00:37:36 And in fact, we measure it in our work and we can show that culture has a predictive effect on organizational outcomes and on technology capabilities. We use a model from a guy called Ron Westram, who was a social scientist studying safety outcomes, in fact, in safety critical industries like healthcare and aviation. He creates a typology where he organizes organizations based on whether they're pathological, bureaucratic or generative. That's actually a great topology. I wanted to apply that to people I date. I know, right. I wanted to play it to people. Too real. There's a book in there, definitely. I like how I'm trying to anthropomorphize all these organizational things into people.
Starting point is 00:38:13 But anyway, go on. Instead of the five love languages, you can have the three relationship types. So pathological organizations are characterized by low cooperation between different departments and up and down the organizational hierarchy. How do we deal with people who bring us bad news? Do we ignore them or do we shoot people who bring us bad news? How do we deal with responsibilities? Are they defined tightly so that when something goes wrong, we know whose fault it is,
Starting point is 00:38:35 so we can punish them? Or do we share risks because we know we're all in it together You all have skin in the game, you're all accountable, right? Exactly. How do we do with bridging between different departments? And crucially, how do we do with failure? As we discussed earlier, in any complex system, including organizational systems, failure is inevitable. So failure should be treated as a learning opportunity, not whose fault was it, but why did that person not have the information they needed, the tools they needed?
Starting point is 00:38:59 How can we make sure that when someone does something, it doesn't lead to catastrophic outcomes, but instead it leads to contain small blast radiuses. Right, not an outage on Black Friday. Right, exactly. And then also, how do we deal with novelty? Is novelty crushed or is it implemented or does it lead to problems? One of the pieces of research that confirms what we were talking about was some research that was done by Google. They were trying to find what makes the greatest Google team. Is it four Stanford graduates and node developer and fire all the managers? Is a data scientist and a no JS programmer and a manager? Right, one product manager paired with one system engineer with one. And what they found was the number one ingredient. was psychological safety. Does the team feel safe to take risks?
Starting point is 00:39:43 And this ties together, failure and novelty. If people don't feel that when things go wrong, they're going to be supported, they're not going to take risks. And then you're not going to get any novelty because novelty, by definition, involves taking risks. So we see that one of the biggest things you can do is create teams where it's safe to go wrong and make mistakes
Starting point is 00:40:02 and where people will treat that as a learning experience. This is a principle that applies, again, And not just in product development, you know, the lean startup fail early, fail often, but also in the way we deal with problems at an operational level as well. And how we interact with our team when these things happen. So just to kind of summarize that, you have pathological, this is a power-oriented thing where, you know, the people are scared, the messenger is going to be shot. Then you have this bureaucratic kind of rule-oriented world where the messengers aren't hurt.
Starting point is 00:40:30 And then you have this sort of generative, and again, I really wish I could apply this to people, but we're talking about organizations here for culture, which, is more performance oriented. And I just want to add one thing about this, you know, working the federal government, you would imagine that to be a very bureaucratic organization. I would actually. And actually what was surprising to me was that, yes, there's lots of rules. The rules aren't necessarily bad. That's how we can operate at scale, is by having rules. But what I found was there was a lot of people who are mission oriented. And I think that's a nice alternative way to think about generative organizations. It's to think about mission orientation. The rules are
Starting point is 00:41:01 there, but if it's important to the mission, we'll break the rules. And we measure this at the team level, right? Because you can be in the government and there were pockets that were very generative. You can be in a startup. And you can see startups that act very bureaucratic, or very pathological. Right. And the culture of the CEO. Where it's not charismatic, inspirational vision, but to the expense of actually being heard and the messenger is shot, et cetera. And we have several companies around the world now that are measuring their culture on a quarterly cadence and basis because we show in the book how to measure it. Westrom's type of was the table itself, and so we turned that into a scientific, psychometric way to measure it.
Starting point is 00:41:42 Now, this makes sense why I'm putting these anthropomorphic analogies, because in this sense, organizations are like people. They're made of people. Teams are organic entities. And I love that you said that the unit of analysis is a team because it means you can actually do something and you can start there, and then you can see if it actually spread or doesn't spread, bridges, doesn't bridge, et cetera. And what I also love about this framework is it also moves away from this cult of failure mindset
Starting point is 00:42:05 that I think people tend to have where it's like failing for the sake of failing and you actually want to avoid failure and the whole point of failing is to actually learn something and then be better and take risks so you can implement these new things
Starting point is 00:42:17 so you can achieve your mission so what's your final I mean there's a lot of really great things here but like what's your final sort of parting takeaway for listeners or people who might want to get started or think about how they are doing so I think you know we're in a world
Starting point is 00:42:31 where technology matters anyone can do this stuff but you have to get the technology part of it right. That means investing in your engineering capabilities, in your process, in your culture, in your architecture. We've dealt with a lot of things here that people think are intangible, and we're here to tell you they're not intangible. You can measure them. They will impact the performance of your organization. So take a scientific approach to improving your organization, and you will read the dividends. When you guys talk about,
Starting point is 00:42:56 you know, anyone can do this, the teams can do this, but what role in the organization is usually most empowered to be the owner of where to get started? Is it like the BP of engineering? Is it the CTO, the CIO? I was going to say, don't minimize the role of and the importance of leadership. DevOps sort of started as a grassroots movement, but right now we're saying roles like VP and CTO being really impactful in part because they can set the vision for an organization, but also in part because they have resources that they can dedicate to this. We see a lot of CEOs and CIOs in our business. We have like a whole briefing center. We hear what's top of mind for them all the time. Everyone thinks they're trying.
Starting point is 00:43:35 transformational. So, like, what actually makes a visionary type of leader who has that, not just the purse strings and the decision-making power, but the actual characteristics that are right for this? Right. And that's such a great question. And so we actually dug into that in our research. And we find that there are five characteristics that end up being predictive of driving change and really amplifying all of the other capabilities that we found. And these five characteristics are vision, intellectual stimulation, inspirational communication, supportive leading. leadership and personal recognition. And so what we end up recommending to organizations is absolutely invest in the technology, also invest in leadership in your people, because that can really help drive your transformation home. Well, Nicole, Jez, thank you for joining the A6 and Z podcast. The book, Just Out, is Accelerate, Building and Scaling High Performing Technology Organizations. Thank you so much, you guys. Thanks for having us.
Starting point is 00:44:30 Thank you.
