a16z Podcast - Innovation Through Software Development and IT
Episode Date: March 6, 2020

One of the recurring themes we talk about a lot on the a16z Podcast is how software changes organizations, and vice versa. More broadly, it's really about how companies of all kinds innovate with the org structures and tools that they have. But we've come a long way from the question of "does IT matter" to answering the question of what org structures, processes, architectures, and roles DO matter when it comes to companies -- of all sizes -- innovating through software and more. So in this episode (a re-run of a popular episode from a couple years ago), two of the authors of the book Accelerate: The Science of Lean Software and DevOps (by Nicole Forsgren, Jez Humble, and Gene Kim) join Sonal Chokshi to share best practices and large-scale findings about high-performing companies (including those who may not even think they're tech companies). Nicole was co-founder and CEO of DORA, which was acquired by Google in December 2018; she will soon be joining GitHub as VP of Research & Strategy. Jez was CTO at DORA; is currently in Developer Relations at Google Cloud; and is the co-author of the books The DevOps Handbook, Lean Enterprise, and Continuous Delivery.
Transcript
Hi everyone. Welcome to the a16z Podcast. I'm Sonal. So one of the recurring themes we talk a lot about on this podcast is how software changes organizations and vice versa. More broadly, it's really about how companies, of all kinds, innovate with the org structures and tools that they have. And today's episode, a rerun of a very popular episode from a couple years ago, draws on actual research and data from one of the largest large-scale studies of software and organizational performance out there.
Joining me in this conversation are two of the authors of the book Accelerate,
the Science of Lean Software and DevOps by Nicole Forsgren, Jez Humble, and Gene Kim.
We have the first two authors, so Nicole, who did her PhD research trying to answer the
elusive eternal questions around how to measure software performance in orgs, especially given
past debates around does IT matter?
She was a co-founder and CEO of Dora, which put out the annual state of DevOps report.
Dora was acquired by Google Cloud a little over a year ago, and she will soon be joining GitHub as VP of research and strategy. And then we also have
Jez Humble, who is CTO at Dora, is currently in developer relations at Google Cloud, and is also
the co-author of the books The DevOps Handbook, Lean Enterprise, and Continuous Delivery. In the conversation
that follows, Nicole and Jez share their findings about high-performing companies, even those that
may not think they're tech companies, and answer my questions about whether there's an ideal org type
for this kind of innovation, whether it's the size of the organization, the software architecture
they use, their culture or people, and where the role of software and IT lives within that. But first,
we begin by talking briefly about the history of DevOps and where that fits in the broader
landscape of related software movements. So I started as a software engineer at IBM. I did
hardware and software performance. And then I took a bit of a detour into academia because I wanted
to understand how to really measure and look at performance that would be generalizable to
several teams in predictable ways and in predictive ways. And so I was looking at and investigating
how to develop and deliver software in ways that were impactful to individuals, teams,
and organizations. And then I pivoted back into industry because I realized this movement
had gained so much momentum and so much traction. And industry was just desperate to really
understand what types of things are really driving performance outcomes.
And what do you mean by this movement?
This movement that now we call DevOps.
So the ability to leverage software to deliver value to customers, to organizations, to stakeholders.
And I think from a historical point of view, the best way to think about DevOps,
it's a bunch of people who had to solve this problem of how do we build large distributed systems
that were secure and scalable
and be able to change them really rapidly and evolve them.
And no one had had that problem before,
certainly at the scale of companies like Amazon and Google.
And that really is where the DevOps movement came from,
trying to solve that problem.
And you can make an analogy to what Agile was about
since the kind of software crisis of the 1960s
and people trying to build these defense systems at large scale,
the invention of software engineering as a field,
Margaret Hamilton, her work at MIT on the Apollo program,
what happened in the decades after that
was everything became
kind of encased in concrete
in these very complex processes
this is how you develop software
and agile was kind of a reaction to that
saying we can develop software
much more quickly
with much smaller teams
in a much more lightweight way
so we didn't call it DevOps back then
but it's also more agile
can you guys break down the taxonomy for a moment
because when I think of DevOps I think of it
in the context of the containerization of code
and virtualization
I think of it in the context of microservices
and being able to do modular teams around different things.
There's an organizational element.
There's a software element.
There's an infrastructure component.
Like, paint the big picture for me of those building blocks
and how they all kind of fit together.
Well, I can give you a very personal story,
which was my first job after college was in 2000, in London,
working at a startup where I was one of two technical people in the startup.
And I would deploy to production by FTPing code from my laptop directly into production.
And if I wanted to rollback, I'd say,
Hey, Johnny, can you FTP your copy of this file to production?
And that was our rollback process.
And then I went to work in consultancy
where we were on these huge teams
and deploying to production,
there was a whole team with a Gantt chart
which puts together the plan to deploy to production
and I'm like, this is crazy.
Unfortunately, I was working with a bunch of other people
who also thought it was crazy.
And we came up with these ideas
around deployment automation and scripting and stuff like that.
And suddenly we saw the same ideas
that popped up everywhere, basically.
I mean, it's realizing that if you're working
in a large complex organization,
Agile's going to hit a brick wall.
Because unlike the things we were building in the 60s,
product development means that things are changing and evolving all the time
so it's not good enough to get to production the first time
you've got to be able to keep getting there on and on
and that really is where DevOps comes in.
It's like, well, agile, we've got a way to build and evolve products
but how do we keep deploying to production
and running the systems in production
in a stable, reliable way, particularly in a distributed context.
So if I phrase it another way,
sometimes there's a joke that says day one is short and day two is long.
What does that mean?
Right, so day one is when we create all these...
It's, by the way, sad that you have to explain the joke to me.
No, it's...
No, which is great, though, because so day one is when we create all of these systems,
and day two is when we deploy to production.
We have to deploy and maintain forever and ever and ever and ever.
So day two is an infinite day.
Right, exactly.
For a successful product.
Hopefully.
We hope that day two is really, really long, and we're fond of saying Agile doesn't scale.
And sometimes I'll say this and people shoot laser beams out of their eyes.
But when we think about it, Agile was meant for development.
Just like Jez said, it's speeding up development, but then you have to hand it over, especially to infrastructure and IT operations.
What happens when we get there? So DevOps was sort of born out of this movement, and it was
originally called Agile System Administration. And so then DevOps sort of came out of
development and operations. And it's not just dev and ops, but if we think about it, that's
sort of the bookends of this entire process. Well, it's actually like day one and day two
combined into one phrase. The way I think about this is I remember the stories of like Microsoft
in the early days and a waterfall, cascading model of development.
Leslie Lamport wrote a piece for me about why software should be developed like houses
because you need a blueprint.
And I'm not a software developer, but it felt like a very kind of old way of looking at the world of code.
I hate that metaphor.
Tell me why.
If the thing you're building has well-understood characteristics, it makes sense.
So if you're building a truss bridge, for example, there are well-known, understood models of building
truss bridges; you plug the parameters into the model, and then you get a truss bridge,
it stays up. Have you been to the Sagrada Familia in Barcelona? Oh, I love, I love Gaudi.
Okay. So if you go into the crypt of the Sagrada Familia, you'll see his workshop and there's a
picture, in fact, a model that he built of the Sagrada Familia, but upside down with the weight
simulating the stresses. And so he would build all these prototypes and small prototypes
because he was fundamentally designing a new way of building. All Gaudi's designs were hyperbolic
curves and parabolic curves, and no one had used that before.
Things that had never been pressure tested. Right. Literally in that case.
Exactly. He didn't want them to fall down. So he built all these prototypes and did all this stuff.
He built his blueprint as he went by building and trying it out, which is a very rapid prototyping kind of model.
Absolutely. So in the situation where the thing you're building has known characteristics and it's been done before, yeah, sure, we can take a very phased approach to it.
And, you know, for designing these kind of protocols that have to work in a distributed context and you can actually do formal proofs of them, again, that makes sense.
But when we're building products and services where we particularly don't know what customers
actually want and what users actually want, it doesn't make sense to do that because you'll build
something that no one wants. You can't predict. And we're particularly bad at that, by the way.
Even companies like Microsoft where they are very good at understanding what their customer base
looks like. They have a very mature product line. Ronny Kohavi has done studies there and only about
one third of the well-designed features deliver value. That's actually a really important point.
The mere question of "does this work?" is something that people clearly don't pause to ask.
But I do have a question for you guys, to push back, which is, is this a little bit of a cult?
Oh my God, it's like so developer-centric, let's be agile, let's do it fast, our way, you know, two pizzas.
That's an ideal size of a software team.
And, you know, I'm not trying to mock it.
I'm just saying that isn't there an element of actual practical realities, like technical debt and accruing a mess underneath all your code, and a system that you may be there for two or three years and you can go off to the next startup, but okay, someone else has to clean up your mess. Tell me about how this fits into that big
picture. This is what enables all of that. Oh, right? So it's not actually just creating the
problem because that's how I'm kind of hearing it. No, absolutely. So you still need development.
You still need test. You still need QA. You still need operations. You still need to deal with
technical debt. You still need to deal with re-architecting really difficult, large monolithic
code bases. What this enables you to do is to find the problems, address them quickly, move
forward. I think that the problem that a lot of people have is that we're so used to couching
these things as trade-offs and as dichotomies, the idea that if you're going to move fast,
you're going to break things. The one thing which I always say is, if you take one thing away
from DevOps, is this, high-performing companies don't make those trade-offs. They're not going
fast and breaking things. They're going fast and making more stable, more high-quality systems.
And this is one of the key results in the book, in our research, is this fact that high-performers
do better at everything, because the capabilities that enable high performance in one field, if done right, enable it in other fields. So if you're using version
control for software, you should also be using version control for your production infrastructure.
If there's a problem in production, we can reproduce the state of the production environment
in a disaster recovery scenario, again, in a predictable way that's repeatable. I think it's
important to point out that this is something that happened in manufacturing as well.
Give it to me. I love when people talk about software as drawn from hardware analogies as my
favorite type of metaphor. Okay, so, Toyota didn't win by making shitty cars faster. They won by making higher-quality cars faster and having shorter time to market.
The lean manufacturing method, which by the way also spawned lean startup thinking and everything
else connected to it. And DevOps pulls very strongly from lean methodologies. So you guys are
probably the only people to have actually done a large-scale study of organizations adopting DevOps.
What is your research and what did you find? Sure. So the research really is the largest
investigation of DevOps practices around the world. We have over 23,000 data points. All
industries. Give me like a sampling. Like, what are the range of industries? So I've got
entertainment, I've got finance, I have health care and pharma, I have technology, government,
education. You basically have every vertical. And then when I say around the world: we're primarily in North America, we're in EMEA, we have India, we have a small sample in Africa.
Right. Just to quickly break down, like the survey methodology questions that people have
in the ethnographic world, the way we would approach it, is that you can never trust what people say they do,
you have to watch what they do. However, it is absolutely true, and especially in a more scalable sense, that there are really smart surveys that give you a shit ton of useful data.
Yes, and part two of the book covers this in almost excruciating detail.
We like knowing methodology, so I'd like to share that.
Well, and it's interesting because Jez talked about in his overview of Agile and how it changes so quickly and we don't have a really good definition.
What that does is it makes it difficult to measure, right?
And so what we do is we've defined core constructs, core capabilities so that we can then measure them.
We go back to core ideas around things like automation, process, measurement,
lean principles, and then I'll get that pilot set of data and I'll run preliminary statistics
to test for discriminant validity, convergent validity, and composite reliability, to make sure that
it's not testing what it's not supposed to test, and it is testing what it is supposed to test.
Everyone is reading it consistently the same way that I think it's testing. I even run checks
to make sure that I'm not inadvertently inserting bias or collecting bias just because I'm getting
all of my data from surveys. Sounds pretty damn robust.
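As a concrete illustration of the kind of reliability check Nicole describes, here is a minimal sketch of Cronbach's alpha, one common composite-reliability statistic for survey scales. The data is entirely invented, and the actual DORA analysis runs several different validity and reliability tests, not just this one:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list per survey item, each holding the same respondents'
    answers in the same order."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]           # each respondent's scale score
    item_var = sum(pvariance(v) for v in items)            # sum of per-item variances
    total_var = pvariance(totals)                          # variance of the scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy data: three items intended to measure the same construct,
# answered on a 1-5 scale by six respondents. Because the items
# move together, internal consistency comes out high.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 2, 4, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.88 for this toy data
```

An alpha around 0.7 or above is a common (if debated) rule of thumb for treating the items as one reliable construct; items that drag the number down are candidates for rewording or removal.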
So tell me then what were the big findings.
That's a huge question, but give me the hit list.
Well, okay, so let's start with one thing that Jez already talked about.
Speed and stability go together.
This is where he was talking about that being a false dichotomy,
and that's one of your findings, that you can actually accomplish both.
Yeah, and it's worth talking about how we measured those things as well.
So we measure speed or tempo, as we call it in the book,
or sometimes people call it throughput as well.
Which is a nice full-circle manufacturing idea, like the semiconductor circuit throughput.
Yeah, absolutely.
I love hardware analogies for software, I told you.
A lot of it comes from lean.
So lead time, obviously one of the classic
Lean manufacturing measures we use.
How long does it take?
We look at the lead time from checking into version control
to release into production.
So that part of the value stream
because that's more focused on the DevOps end of things.
And it's highly predictable.
The other one is release frequency,
so how often do you do it?
And then we've got two stability metrics.
And one of them is time to restore.
So in the event that you have some kind of outage
or some degradation in performance in production,
how long does it take you to restore service?
For a long time, we focused on not letting things break.
And I think one of the changes, paradigm shifts we've seen in the industry,
particularly in DevOps, is moving away from that.
We accept that failure is inevitable because we're building complex systems.
So not how do we prevent failure, but when failure inevitably occurs,
how quickly can we detect and fix it?
MTBF, right?
Mean time between failures.
If you only go down once a year, but you're down for three days and it's on Black Friday, that's bad.
But if you're down with a very, very small blast radius,
and you can come back almost immediately,
and your customers almost don't notice, that's fine.
The other piece around stability is change fail, right?
When you push a change into production,
what percentage of the time do you have to fix it
because something went wrong?
By the way, what does that tell you if you have a change fail?
So in the lean kind of discipline, this is called percent complete and accurate,
and it's a measure of a quality of your process.
So in a high-quality process, when I do something for Nicole,
Nicole can use it rather than sending it back to me and say,
hey, there's a problem with this.
And in this particular case, what percentage of the time,
when I deploy something to production, is there a problem
because I didn't test adequately, or my testing environment wasn't production-like enough?
My testing environment wasn't in production-like enough.
Those are the measures for finding this,
but the big finding is that you can have speed and stability together through DevOps.
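The four measures just described (lead time, release frequency, time to restore, change fail rate) can be sketched in code. This is a hypothetical illustration with an invented deployment log, not how DORA actually gathers its data, which comes from surveys:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment log over a four-day window: when each change was
# checked in, when it reached production, and whether it had to be fixed.
deploys = [
    {"commit": datetime(2020, 3, 1, 9),  "deploy": datetime(2020, 3, 1, 11), "failed": False},
    {"commit": datetime(2020, 3, 2, 14), "deploy": datetime(2020, 3, 2, 15), "failed": True},
    {"commit": datetime(2020, 3, 3, 10), "deploy": datetime(2020, 3, 3, 12), "failed": False},
    {"commit": datetime(2020, 3, 4, 9),  "deploy": datetime(2020, 3, 4, 10), "failed": False},
]
# Outage records: (start of degradation, service restored).
outages = [(datetime(2020, 3, 2, 15), datetime(2020, 3, 2, 15, 40))]

# Tempo: lead time (version control -> production) and release frequency.
lead_time = median(d["deploy"] - d["commit"] for d in deploys)
deploys_per_day = len(deploys) / 4

# Stability: time to restore service, and change fail rate.
time_to_restore = median(restored - start for start, restored in outages)
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(lead_time, deploys_per_day, time_to_restore, change_fail_rate)
```

In this toy log the team ships daily with a median lead time of an hour and a half, restores service in forty minutes, and one change in four needs a fix, which is exactly the speed-plus-stability profile the research measures.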
Is that what I'm hearing?
High performers get it all.
Low performers kind of suck at all of it.
Medium performers hang out in the middle.
I'm not seeing trade-offs.
Four years in a row, so anyone who's thinking,
oh, I can be more stable if I slow down, I don't see it.
It actually breaks a very commonly held kind of urban legend
around how people believe these things operate.
So tell me, are there any other sort of findings like that?
Because that's very counterintuitive.
Okay, so this one's kind of fun.
One is that this ability to develop and deliver software with speed and stability
drives organizational performance.
Now, here's the thing.
I was about to say, that's a very obvious thing to say.
So it seems obvious, right?
Developing and delivering software with speed and stability drives things like
profitability, productivity, market share.
Okay, except if we go back to Harvard Business Review 2003, there's a paper
titled IT doesn't matter. We have decades of research. I want to say at least 30 or 40 years
of research showing the technology does not drive organizational performance. It doesn't drive
ROI. And we are now starting to find other studies and other research that backs this up.
Erik Brynjolfsson out of MIT; James Bessen out of Boston University, 2017.
Did you say James Bessen? Yeah. Oh, I used to edit him too.
Yeah, it's fantastic.
Here's why it's different.
Because before, right, in like the 80s and the 90s, we did this thing.
We're like, you'd buy the tech and you'd plug it in and you'd walk away.
It was an on-prem sales model where you, like, deliver and leave, as opposed to software as a service and other ways of doing things.
And people would complain if you try to upgrade it too often.
And the key is that everyone else can also buy the thing and plug it in and walk away.
How is that driving value or differentiation for a company?
If I just buy a laptop to help me do something faster,
everyone else can buy a laptop to do the same thing faster.
That doesn't help me deliver value to my customers or to the market.
It's a point of parity, not a point of distinction.
Right.
And you're saying that point of distinction comes from how you tie together
that technology, process, and culture through DevOps.
Right.
And that it can provide a competitive advantage to your business.
If you're buying something that everyone else also has access to,
then it's no longer a differentiator.
Right.
But if you have an in-house capability and those people are finding ways to drive your business,
I mean, this is the classic Amazon model.
They're running hundreds of experiments in production at any one time to improve the product.
And that's not something that anyone else can copy.
That's why Amazon keeps winning.
So what people are doing is copying the capability instead.
And that's what we're talking about.
How do you build that capability?
The most fascinating thing to me about all this is honestly not the technology per se,
but the organizational change part of it and the organizations themselves.
So of all the people you studied, is there an organizational makeup that is ideal for DevOps? Or is it one of these magical formulas that has
this ability to turn a big company into a startup and a small company into, because that's actually
the real question. From what I've seen, there might be two ideals. The nice, happy answer is
the ideal organization is the one that wants to change. That's, I mean, given this huge n-equals
23,000 data set, is it not tied to a particular profile of a size of company? They're both shaking
their head just for the listeners. I see high performers among large companies.
I see high performers in small companies.
I see low performers in small companies.
I see low performers in highly regulated companies.
I see low performers in not regulated companies.
So tell me the answer you're not supposed to say.
So that answer is it tends to be companies that are like, oh shit, and there are two profiles.
Either one, they're like way behind and oh shit and they have some kind of funds.
Or they are this lovely, wonderful bastion, like, they're these really innovative high-performing companies, but they still realize there are a handful of, like, two or three companies ahead of them. And they don't want to be number two. Yeah. They are going to be number one. So those are, sure, the ideal. I mean, just to anthropomorphize
it a little bit. It's like the 35-to-40-year-old who suddenly discovers they might be pre-diabetic,
so you better do something about it now before it's too late. But it's not too late because you're not
so old where you're about to reach sort of the end of a possibility to change that runway. And then
there's this person who's sort of kind of already like in the game running in the race and they
might be two or three but they want to be like number one. And I think to extend your metaphor,
the companies that do well are the companies that never got diabetic in the first place because
they always just ate healthily. They were already glucose monitoring. They had continuous
glucose monitors on which is like DevOps actually. They were always athletes. Right. You know,
diets are terrible because at some point you have to stop the diet. It has a sudden start and stop
as opposed to way of life is what you're saying. Right. Exactly. So if you just always eat
healthily and never eat too much
or very rarely eat too much and
do a bit of exercise every day. You never
get to the stage like, oh my God, now I can only
eat tofu. So
like my loving, professor-ness, nurturing-Nicole side
also has one more profile that,
like, I love, and I worry
about them like a mother hen,
and it's the companies that I talk to
and they come to me and they're struggling
and I haven't decided
if they want to change but they're like
so we need to do this transformation
and we're going to do the transformation
and it's either because they want to
or they've been told that they need to
and then they will insert this thing where they say
but I'm not a technology company.
I'm like, but we just had this
20 minute conversation
about how you're leveraging technology
to drive value to customers
or to drive this massive process
that you do.
And then they say, but I'm not a technology company.
I could almost see why they had that in their head
because they were a natural resources company
but there was another one where they were a finance company.
I mean, an extension of software eats the world
is really every company is a technology company.
It's fascinating to me that that third type exists,
but it is a sign of this legacy world moving into software.
And I worry about them.
Also, at least for me personally, you know,
I lived through this like mass extinction of several firms
and I don't want it to happen again.
And I worry about so many companies
that keep insisting they're not technology companies.
And I'm like, oh, honey child.
You're a tech company.
You know, one of the gaps in our data is actually China,
and I think China is a really interesting example
because they didn't go through the whole, you know,
IT doesn't matter phase.
They're jumping straight from no technology to Alibaba and Tencent, right?
I think U.S. companies should be scared
because at the moment, Tencent and Alibaba are already moving into other developing markets,
and they're going to be incredibly competitive
because it's just built into their DNA.
So the other fascinating thing to me is that you essentially were able to measure
performance of software, and clearly productivity.
Are there any more insights on the productivity side?
Yes, yes, I want to go.
This is his favorite rant.
He's like jumping around and like waving his hand.
So tell us.
The reason the manufacturing metaphor breaks down
is because in manufacturing you have inventory.
Yes.
We do not have inventory in the same way in software.
In a factory, like, the first thing your lean consultant
is going to do, walking into the factory,
is point out the piles of things everywhere.
But if you walk into an office
where there are developers, where's the inventory?
By the way, that's what makes talking about this to executives so difficult.
They can't see the process.
Well, it's a hard question to answer because is the inventory the code that's being written?
And people actually have done that and said, well, listen, lines of code are an accounting measure
and we're going to capture that as, you know, capital.
That's insane.
It seems like an invitation to write crappy unnecessarily long code.
That's exactly what happens.
It's like the olden days of getting paid for a book by how long it is.
And it's actually really boring when you could have written it in, like, one third of the length.
Right.
I'm thinking of Charles Dickens.
In general, you know, you prefer people to write short programs because they're easier to maintain and so forth.
But lines of code have all these drawbacks. We can't use them as a measure of productivity.
So if you can't measure lines of code, what can you measure?
Because I really want an answer. Like, how do you measure productivity?
So velocity is the other classic example.
Agile, there's this concept of velocity, which is the number of story points a team manages to complete in an iteration.
So before the start of an iteration in many Agile, particularly Scrum-based, processes,
you've got all this work to do.
We need to build these five features.
How long will this feature take?
And the developers fight over it.
And they're like, oh, it's five points.
And then this one's going to take three points.
This one's going to take two points.
And so you have a list of all these features.
And you don't get through all of them.
But at the end of the iteration, the customer signs off,
I'm accepting this one.
This one's fine.
This one's a hot mess.
Go back and do it again, whatever.
The number of points you complete in the iteration is the velocity.
So it's like the speed at which you're able to deliver those features.
So a lot of people treat it like that.
But actually, that's not really what it's about.
It's a relative measure of effort, and it's for capacity planning purposes.
So you basically, for the next iteration, we'll only commit to completing the same velocity that we finished last time.
And so it's relative and it's team dependent.
And so what a lot of people do is say they start comparing velocities across teams.
Then what happens is a lot of work you need to collaborate between teams.
But hey, if I'm going to help you with your story, that means I'm not going to get my story points.
So it's like bad incentive structure.
Right.
People can game it as well.
You should never use story points as a productivity measure.
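To make the team-dependence concrete, here is a toy sketch with entirely invented numbers: two teams estimate the same three features on their own point scales, so comparing their velocities tells you nothing about relative output.

```python
# Two hypothetical teams estimate the *same* three features.
# Story points are relative effort on each team's own scale.
team_a = {"login": 2, "search": 3, "export": 5}
team_b = {"login": 5, "search": 8, "export": 13}   # identical work, different scale

velocity_a = sum(team_a.values())
velocity_b = sum(team_b.values())

# 26 vs. 10 looks like Team B delivered 2.6x as much, but the output is
# identical; velocity is only meaningful for a team's own capacity planning.
print(velocity_a, velocity_b)
```

The moment velocities are compared across teams, the cheapest way to "improve" is to inflate estimates, which is exactly the gaming problem described above.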
So lines of code doesn't.
work velocity doesn't work what works so this is why we like two things in particular one thing
that it's a global measure and secondly that it's not just one thing it mixes two things together
which might normally be intention and so this is why we went for our measure of performance so
measuring lead time release frequency and then time to restore and change fail rate lead time is
really interesting because lead time is on the way to production right so all the
teams have to collaborate. It's not something where, you know, I can go really fast in my
velocity, but nothing ever gets delivered to the customer. That doesn't count in lead time.
So it's a global measure. It takes care of that problem of the incentive alignment around the
competitive dynamic. Also, it's an outcome. It's not an output. There's a guy called Jeff Patton.
He's a really smart thinker in the kind of lean agile space. He says, minimize output, maximize
outcomes, which I think is simple but brilliant. It's so simple because it just shifts the words to
impact. And even we don't get all the way there because we're not yet measuring, did the features
deliver the expected value to the organization or the customers? Well, we do get there because we
focus on speed and stability, which then deliver the outcome to the organization, profitability,
productivity, market share. But the second half of this, which I am also hearing, is did it meet
your expectations? Did it perform to the level that you wanted it to? Did it match what you
asked for, or, even if it wasn't something you specified, that you desired or needed? That seems
like a slightly open question. So we did actually measure that. We looked at non-profit
organizations and these were exactly the questions we measured. We asked people, did the software
meet, I can't remember what the exact questions were. Effectiveness, efficiency, customer
satisfaction, delivering mission goals. How fascinating that you do that at nonprofits, because that
is a larger move in a non-profit measurement space to try to measure impact. But we captured
it everywhere because even profit-seeking firms still have
these goals. In fact, as we know from research, companies that don't have a mission other than making
money do less well than the ones that do. But I think, again, what the data shows is that
companies that do well on the performance measures we talked about outperform their low performing
peers by a factor of two. A hypothesis is what we're doing when we create these high performing
organizations in terms of speed and stability is we're creating feedback loops. What it allows us to do
is build a thin slice, a prototype of a feature, get feedback through some UX
mechanism, whether that's showing people the prototype and getting their feedback, whether it's
running A-B tests or multivariate tests in production. It's what creates these feedback loops that
allow you to shift direction very fast. I mean, that is the heart of lean startup. It's the heart
of anything you're putting out into the world, is you have to kind of bring it full circle.
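The A/B-test mechanism mentioned above is easy to sketch. Here's a minimal, hypothetical Python illustration of the core of it: deterministically bucketing users into experiment arms and then comparing outcomes. The function names and parameters are illustrative, not from any particular experimentation framework.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment arm.

    Hashing (experiment, user_id) gives a stable assignment, so the
    same user sees the same variant on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rate(outcomes: list[int]) -> float:
    """Close the loop: compare this across arms to pick a direction."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0
```

The point is less the hashing trick than the loop it enables: ship a thin slice, bucket users, measure, and shift direction based on what the data says.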
It is a secret of Amazon's success, as you cited earlier. I would distill it to just that.
I think I heard Jeff Bezos say the best line. It was at the Internet Association dinner in
D.C. last year, where they asked him about innovation. To him, an innovation
is something that people actually use.
And that's what I love about the feedback loop thing,
is it actually reinforces that mindset of that's what innovation is.
Right.
So to sum up, the way you can frame this is: DevOps is the technological capability
that underpins your ability to practice lean startup
and all these very rapid, iterative processes.
So I have a couple of questions then.
So one is, you know, going back to this original taxonomy question,
and you guys describe that there isn't necessarily an ideal organizational type.
Which, by the way, should be encouraging.
I agree.
I think it's super encouraging and, more importantly, democratizing,
that anybody can become a high performer.
We were doing this in the federal government.
I love that.
But one of my questions is: when we had Adrian Cockcroft on this podcast
a couple of years ago talking about microservices,
the thing that I thought was so liberating
about the Netflix story he was describing
was that it was a way for teams
to essentially become little mini product management units
and self-organize,
because the infrastructure was broken down
into these micro pieces
versus, say, a monolithic,
uniform architecture.
I would think that an
organization that has containerized
its code in that way,
that has this microservices architecture,
would be more suited to DevOps.
Or is that a wrong belief?
I'm just trying to understand, again,
that taxonomy thing
of how these pieces all fit together.
So we actually studied this.
There's a whole section on architecture
in the book where we looked at exactly this question.
Architecture has been studied for a long time.
When people talk about architectural characteristics,
there's ATAM, the Architecture Tradeoff Analysis Method
that Carnegie Mellon developed.
There's some additional things we have to care about.
Testability and deployability.
Can my team test its stuff
without having to rely on this very complex integrated environment?
Can my team deploy its code to production
without these very complex orchestrated deployments?
Basically, can we do things without dependencies?
That is one of the biggest predictors
in our cohort of IT performance
is the ability of teams to get stuff done
on their own without dependencies on other teams,
whether that's testing or whether it's deploying
or whether it's planning.
Even just communicating.
Yeah.
Can you get things done without having to do like mass communication
and checking and permissions?
The question I love, love, love asking on this podcast:
we always revisit the 1937 Coase paper
about the theory of the firm,
and this idea that firms exist because coordinating inside the firm
can be more efficient than paying market transaction costs.
And this is like the ultimate model for reducing friction
and those transaction costs, communication, coordination costs,
all of it.
That's what all the technical and process stuff is about. I mean, Don Reinertsen once came to one of my talks on
continuous delivery. At the end, he said, so continuous delivery, that's just about reducing
transaction costs, right? And I'm like...
Huh, an economist's view of DevOps. I love it. You're right. You reduced my entire body
of work to one sentence. That's so much Conway's law, right? Wait, remind me what Conway's
law is. So: organizations which design systems are constrained to produce designs which are
copies of the communication structures of these organizations. Oh, right. It's that idea, basically,
that your software code looks like the shape of the organization itself. Right. And how we
communicate. Right. So, as Jez just summarized, if you have to be communicating
and coordinating with all of these other different groups... Command and control looks like
waterfall. A more decentralized model looks like independent teams. Right. So the data shows that.
One thing that I would say is a lot of people jump on the microservices, containerization
bandwagon. There's one thing that is very important to bear in mind. Implementing those
technologies does not give you those outcomes we talked about. We actually looked at people doing
mainframe stuff. You can achieve these results with mainframes.
Equally, you can use the, you know, Kubernetes and, you know, Docker and microservices
and not achieve these outcomes.
We see no statistical correlation with performance, whether you're on a mainframe or a
greenfield or a brownfield system.
If you're building something brand new or if you're working on an existing system.
And one thing I wanted to bring up that we didn't before is I said, you know, day one is short,
day two is long and I talked about things that live on the Internet and live on the web.
Yeah.
This is still a really, really smart approach for packaged software.
And I know people who are working in and running packaged software companies that use this methodology
because it allows them to still work in small, fast approaches.
And all they do is push to a small, packaged pre-production codebase.
And then when it's time to push that code onto some media, they do that.
Okay.
So what I love hearing about this is that it's actually not necessarily tied again to the architecture
or the type of company you are.
There's this opportunity for everybody,
but there is this mindset of, like, an organization that is ready.
It's like a readiness level for a company.
Oh, I hear that all the time.
I don't know if I'd say there's any such thing as readiness, right?
Like there's always an opportunity to get better.
There's always an opportunity to transform.
The other thing that really like drives me crazy and makes my head explode is this whole
maturity model thing.
Okay, are you ready to start transforming?
Well, like you can just not transform and then maybe fail, right?
Right.
Maturity models, they're really popular in industry right now,
but I really can't stress enough that they're not really an appropriate way
to think about a technology transformation.
I was thinking of readiness in the context of NASA technology readiness levels
or TRLs, which is something we used to think about a lot for very early stage things.
But you're describing maturity of an organization,
and it sounds like there's some kind of a framework for assessing the maturity of an organization,
and you're saying that doesn't work.
But first of all, what is that framework and why doesn't it work?
Well, so many people think that they want a
snapshot of their DevOps or their technology transformation that spits back a number, right? And then
you will have one number to compare yourself against everything. The challenge, though, is that a
maturity model usually is leveraged to help you think about arriving somewhere. And then
here's the problem. Once you've arrived, what happens? Oh, we're done. You're done. And then the
resources are gone. And by resources, I don't just mean money. I mean time. I mean
attention. We see year over year over year, the best, most innovative companies continue to push. So what happens when you've arrived? I'm using my finger quotes. You stop pushing. What happens when executives or leaders or whomever decide that you no longer need resources of any type? I have to push back again, though. Doesn't this help? Because it is helpful to give executives, particularly those who are not tech-native and didn't come up through the engineering organization, some kind of metric to
put your head around, where are we, where are we at?
So you can use a capability model.
You can think about the capabilities that are necessary to drive your ability to develop
and deliver software with speed and stability.
Another limitation is that they're often kind of a lockstep or a linear formula, right?
No, right.
It's like a stepwise A, B, C, D, E, 1, 2, 3, 4, and in fact, the very nature of anything iterative
is it's very nonlinear and circular.
Feedback loops are circles.
Right.
And maturity models just don't allow that.
Now, another thing that's really, really nice is that capability models allow us to think about capabilities in terms of these outcomes.
Capabilities drive impact.
Maturity models are just this thing where you have this level one, level two, level three, level four.
It's a bit performative.
And then finally, maturity models just sort of take this snapshot of the world and describe it.
How fast is technology and business changing?
If we create a maturity model now, let's wait, let's say, four years.
That maturity model is old and dead and dusty and gone.
Do new technologies change the way you think about this?
Because I've been thinking a lot about how product management
for certain types of technologies changes with the technology itself
and that machine learning and deep learning might be a different beast.
And I'm just wondering if you guys have any thoughts on that.
Yeah, I mean, me and Dave Farley wrote the continuous delivery book back in 2010,
and since then, you know, there's Docker and Kubernetes and large-scale adoption of the cloud
and all these things that you had no idea would happen.
People sometimes ask me, you know,
isn't it time you wrote a new edition of the book?
I mean, yeah, we could probably rewrite it.
Does it change any of the fundamental principles?
No.
Do these new tools allow you to achieve those principles in new ways?
Yes.
So I think, you know, this is how I always come back to any problem,
is go back to first principles.
And the first principles, I mean,
they will change over the course of centuries.
I mean, we've got modern management versus kind of scientific management,
but they don't change over the course of,
a couple of years. The principles are still the same. Technologies give you new ways to do them,
and that's what's interesting about them. Equally, things can go backwards. A great example of
this is one of the capabilities we talk about in the book: working off a shared trunk or
master in version control, not going on these long-lived feature branches. And the reason for that
is actually because of feedback loops. You know, developers love going off into a corner,
putting headphones on and just coding something for, like, days, and then they try to integrate it
into trunk, and that's a total nightmare. And not just for them; more critically, for everyone else
who then has to merge their code into whatever they're working on. So that's hugely painful.
Git is one of these examples of a tool that makes it very easy, and people are like, oh, I can use feature
branches. So I think, again, it's nonlinear in the way that you describe. It gives you new ways to do
things. Are they good or bad? It depends. But the thing that strikes me about what you guys
have been talking about as a theme in this podcast that seems to lend itself well to the world of
machine learning and deep learning where that technology might be different is it sort of lends itself
to a probabilistic way of thinking
and that things are not necessarily always complete
and that there is not a beginning and an end
and that you can actually live very comfortably
in an environment where things are by nature complex
and that complexity is not necessarily something to avoid.
So in that sense, I do think there might be something
kind of neat about ML and deep learning and AI for that matter
because it is very much lending itself to that sort of mindset.
Yeah, in our research, we talk about working in small batches.
There's a great video by Bret Victor
called Inventing on Principle, where he talks about how important it is to the creative process
to be able to see what you're doing. And he has this great demo of this game he's building
where he can change the code and the game changes its behavior instantly. When you're doing
things like AI... You don't get to see that. No, and the whole thing with machine learning is
how can we get the shortest possible feedback from changing the input parameters to seeing the
effect so that the machine can learn. And at the moment you have very long feedback loops,
the ML becomes much, much harder, because you don't know which of the input changes
caused the change in output that the machine is supposed to be learning from.
So the same thing is true of organizational change and process
and product development as well, by the way,
which is working in small batches
so that you can actually reason about cause and effects.
You know, I changed this thing, it had this effect.
Again, that requires short feedback loops,
that requires small batches.
That's one of the key capabilities we talk about in the book,
and that's what DevOps enables.
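The cause-and-effect argument for small batches, and the earlier point about long-lived feature branches, can be made concrete with a toy simulation. This is entirely illustrative; the model and numbers are made up, not from the DORA research. It just shows that the longer a change sits unintegrated, the more of the codebase moves underneath it.

```python
import random

def merge_conflict_chance(files_touched: int, days_on_branch: int,
                          team_edits_per_day: int = 5, repo_files: int = 500,
                          trials: int = 10_000, seed: int = 42) -> float:
    """Estimate the probability that a branch conflicts with trunk.

    Toy model: each day, teammates edit `team_edits_per_day` random files
    on trunk; a conflict occurs if any of them overlaps the branch's files.
    """
    rng = random.Random(seed)
    mine = set(range(files_touched))  # files my branch touches
    conflicts = 0
    for _ in range(trials):
        trunk_edits = {rng.randrange(repo_files)
                       for _ in range(days_on_branch * team_edits_per_day)}
        if mine & trunk_edits:
            conflicts += 1
    return conflicts / trials

# Integrating daily keeps the window small; a two-week branch is far riskier.
short = merge_conflict_chance(files_touched=10, days_on_branch=1)
long = merge_conflict_chance(files_touched=10, days_on_branch=10)
```

The shape of the result, not the exact numbers, is the point: big batches make it hard to reason about what caused what, exactly as described above.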
So we've done this hallway-style conversation
around all these themes of DevOps, measuring it,
why it matters, and what it means for organizations.
But practically speaking, if a company, and you guys are basically arguing any company,
not necessarily a quote-unquote tech company, and not necessarily a company
that has this amazing modern infrastructure stack, it could be a company that's still working off mainframes,
wants to do this, how do they actually get started, and how do they know where they are?
So what you need to do is take a look at your capabilities, understand what's holding you back, right?
Try to figure out what your constraints are.
But the thing that I love about much of this is you can
start somewhere, and culture is such a core, important piece. We've seen across so many
industries, culture is truly transformative. And in fact, we measure it in our work, and we can show
that culture has a predictive effect on organizational outcomes and on technology capabilities.
We use a model from a guy called Ron Westrum, who was a social scientist studying safety outcomes,
in fact, in safety-critical industries like healthcare and aviation. And he creates a typology
where he organizes organizations based on whether they're pathological, bureaucratic, or generative.
That's actually a great typology.
I wanted to apply that to people I date.
I know, right.
I wanted to apply it to people.
Too real.
There's a book in there, definitely.
I like how I'm trying to anthropomorphize all these organizational things into people.
But anyway, go on.
Instead of the five love languages, you can have the three relationship types.
So pathological organizations are characterized by low cooperation between different departments
and up and down the organizational hierarchy.
How do we deal with people who bring us bad news?
Do we ignore them, or do we shoot people who bring us bad news?
How do we deal with responsibilities?
Are they defined tightly so that when something goes wrong,
we know whose fault it is so we can punish them,
or do we share risks because we know we're all in it together and it's the team.
You all have skin in the game, you're all accountable, right?
Exactly. And how do we deal with bridging between different departments?
And crucially, how do we deal with failure?
As we discussed earlier, in any complex system,
including organizational systems, failure is inevitable.
So failure should be treated as a learning opportunity,
not whose fault was it, but why did that person not have the information they needed, the tools they needed?
How can we make sure that when someone does something, it doesn't lead to catastrophic outcomes,
but instead to a contained, small blast radius?
Right, not an outage on Black Friday.
Right, exactly.
And then also, how do we deal with novelty?
Is novelty crushed, or is it implemented, or does it lead to problems?
One of the pieces of research that confirms what we were talking about was some research that was done by Google.
They were trying to find what makes the greatest Google team.
Is it four Stanford graduates and a Node developer, and fire all the managers?
Is it a data scientist and a Node.js programmer and a manager?
Right. One product manager paired with one systems engineer with one...
And what they found was the number one ingredient was psychological safety.
Does the team feel safe to take risks? And this ties together failure and novelty.
If people don't feel that when things go wrong, they're going to be supported, they're not going to take risks.
And then you're not going to get any novelty because novelty, by definition, involves taking risks.
So we see that one of the biggest things you can do is create teams where it's safe to go wrong and make mistakes
and where people will treat that as a learning experience.
This is a principle that applies, again, not just in product development, you know, the lean startup fail early, fail often,
but also in the way we deal with problems at an operational level as well.
And how we interact with our team when these things happen.
So just to kind of summarize that, you have pathological, this is a power-oriented thing where, you know,
the people are scared, the messenger is going to be shot.
Then you have this bureaucratic kind of rule-oriented world
where the messengers aren't hurt.
And then you have this sort of generative,
and again, I really wish I could apply this to people,
but we're talking about organizations here for culture,
which is more performance-oriented.
And I just want to add one thing about this.
You know, working in the federal government,
you would imagine that to be a very bureaucratic organization.
I would, actually.
And actually, what was surprising to me was that, yes, there's lots of rules.
The rules aren't necessarily bad.
That's how we can operate at
scale: by having rules. But what I found was there were a lot of people who are mission
oriented. And I think that's a nice alternative way to think about generative organizations,
is to think about mission orientation. The rules are there, but if it's important to the mission,
we'll break the rules. And we measure this at the team level, right? Because you can be in the
government, and there were pockets that were very generative. You can be in a startup. And you can see
startups that act very bureaucratic, or pathological. Or very pathological. Right. And
the cult of the CEO,
where there's charismatic, inspirational vision,
but at the expense of people actually being heard,
and the messenger is shot, et cetera.
And we have several companies around the world now
that are measuring their culture
on a quarterly cadence and basis
because we show in the book how to measure it.
Westrum's typology was originally just a table,
and so we turned that into a scientific, psychometric way to measure it.
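A rough sketch of what that kind of psychometric measurement could look like in code, as a Likert-scale survey averaged to a team-level score. The item wordings below paraphrase Westrum's themes; the actual validated survey items are published in the Accelerate book, and real psychometrics involves validity and reliability testing that this sketch omits.

```python
from statistics import mean

# Likert scale: 1 = strongly disagree ... 7 = strongly agree.
# Item wordings paraphrase Westrum's themes and are illustrative only.
ITEMS = [
    "On my team, information is actively sought.",
    "On my team, messengers are not punished for delivering bad news.",
    "On my team, responsibilities are shared.",
    "On my team, cross-functional collaboration is encouraged and rewarded.",
    "On my team, failure causes inquiry rather than blame.",
    "On my team, new ideas are welcomed.",
]

def westrum_score(responses: dict[str, list[int]]) -> float:
    """Average per-item means into one team-level score on the 1-7 scale.

    Higher scores lean generative; lower scores lean pathological.
    """
    return round(mean(mean(responses[item]) for item in ITEMS), 2)
```

Measured at the team level on a regular cadence, the trend over quarters matters more than any single number.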
Now, this makes sense why I'm putting these anthropomorphic analogies
because in this sense, organizations are like people.
They're made of people.
Teams are organic and terrestrial.
And I love that you said that the unit of analysis is a team because it means you can actually do something and you can start there and then you can like see if it actually spread or doesn't spread, bridges, doesn't bridge, et cetera.
And what I also love about this framework is it also moves away from this cult of failure mindset that I think people tend to have where it's like failing for the sake of failing.
And you actually want to avoid failure.
And the whole point of failing is to actually learn something and then be better and take risks so you can implement these new things.
And very smart risks.
So what's your final?
I mean, there's a lot of really great things here,
but what's your final sort of parting takeaway for listeners
or people who might want to get started
or think about how they are doing?
So I think, you know, we're in a world where technology matters.
Anyone can do this stuff,
but you have to get the technology part of it right.
That means investing in your engineering capabilities,
in your process, in your culture, in your architecture.
We've dealt with a lot of things here that people think are intangible,
and we're here to tell you they're not intangible.
You can measure them,
and they will impact the performance of your organization.
So take a scientific approach to improving your organization,
and you will reap the dividends.
When you guys talk about anyone can do this,
the teams can do this,
but what role in the organization is usually most empowered
to be the owner of where to get started?
Is it like the VP of Engineering?
Is it the CTO, the CIO?
I was going to say, don't minimize the role
and the importance of leadership.
DevOps sort of started as a grassroots movement,
but right now we're seeing roles like
VP and CTO being really impactful, in part because they can set the vision for an organization,
but also in part because they have resources that they can dedicate to this.
We see a lot of CEOs and CTOs and CIOs in our business. We have like a whole briefing center.
We hear what's top of mind for them all the time. Everyone thinks they're transformational.
So like what actually makes a visionary type of leader who has that not just the purse strings
and the decision making power, but the actual characteristics that are right for this?
Right. And that's such a great question. And so we actually dug into that in our
research. And we find that there are five characteristics that end up being predictive of driving
change and really amplifying all of the other capabilities that we found. And these five
characteristics are vision, intellectual stimulation, inspirational communication, supportive
leadership, and personal recognition. And so what we end up recommending to organizations is
absolutely invest in the technology, also invest in leadership in your people, because that can
really help drive your transformation home.
Well, Nicole, Jez, thank you for joining the A6 and Z podcast.
The book, just out, is Accelerate: Building and Scaling High Performing Technology Organizations.
Thank you so much, you guys.
Thanks for having us.
Thank you.