Software at Scale 53 - Testing Culture with Mike Bland

Episode Date: December 28, 2022

Mike Bland is a software instigator - he helped drive adoption of automated testing at Google, and the Quality Culture Initiative at Apple. Mike's blog was instrumental in my decision to pick a job in developer productivity/platform engineering. We talk about the Rainbow of Death - the idea of driving cultural change in large engineering organizations - one of the key challenges of platform engineering teams. And we deep dive into the value of automated testing and the common pushbacks against it.

Highlights (GPT-3 generated):
[0:00 - 0:29] Welcome
[0:29 - 0:38] Explanation of the Rainbow of Death
[0:38 - 0:52] Story of the Testing Grouplet at Google
[0:52 - 5:52] Benefits of Writing Blogs and Engineering Culture Change
[5:52 - 6:48] Impact of Mike's Blog
[6:48 - 7:45] Automated Testing at Scale
[7:45 - 8:10] "I'm a Snowflake" Mentality
[8:10 - 8:59] Instigator Theory and the Crossing the Chasm Model
[8:59 - 9:55] Discussion of Dependency Injection and Functional Decomposition
[9:55 - 16:19] Discussion of Testing and Testable Code
[16:19 - 24:30] Impact of Organizational and Cultural Change on Writing Tests
[24:30 - 26:04] Instigator Theory
[26:04 - 32:47] Strategies for Leaders to Foster and Support Testing
[32:47 - 38:50] Role of Leadership in Promoting Testing
[38:50 - 43:29] Philosophical Implications of Testing Practices

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.softwareatscale.dev

Transcript
Starting point is 00:00:00 Welcome to Software at Scale, a podcast where we discuss the technical stories behind large software applications. I'm your host, Utsav Shah, and thank you for listening. Hey, welcome to another episode of the Software at Scale podcast. Joining me today is Mike Bland, a software instigator who previously worked at Apple, Cvent, and Google. Welcome. Oh, hey, thanks for having me. So, Mike, before we go into anything else, I want to ask you, what is the Rainbow of Death? Yeah, well, it's an expression of my bizarre worldview and sense of humor, but it actually does have a motive. It took me two years to understand it. It goes with Geoffrey Moore's Crossing the Chasm model, and it's a diagram that I borrowed from an ex-colleague, Albert Wong,
Starting point is 00:01:07 that sort of helps visualize what the innovators and early adopters, who I like to call instigators, over on the left side of Moore's chasm, have to do to connect with and ultimately empower the early majority on the other side, delivering what Moore might call the total product. And it's called the Rainbow of Death because when I saw Albert's initial use of this, he didn't call it that. He called it his framework for helping. He did a presentation on his work with the U.S. Digital Service and Customs and Immigration Services, and sort of used this diagram to visualize the different activities that they performed, and what was more the hands-on stuff that he and his colleagues had to do versus the things that they started doing to empower the actual client, the actual users, right? And so when I saw this model, it instantly snapped into that chasm in my brain and gave me a hook to tell the story of the Testing Grouplet at Google, which, you know, shout out to your past guest, Bharat Mediratta.
Starting point is 00:02:36 You know, he was the one who originally started up and led that group with Nick Lesiecki. I was fortunate to become part of that group. You know, they were both mentors to me. And then, you know, I eventually took leadership and carried it forward. But yeah, I was able to take that model, put it into a presentation, you know, kind of put it in context between the instigators and the early majority, and then sort of unfold the five-year story of the Testing Grouplet. That's one thing where I have to kind of differ with Bharat, where he's like, oh, yeah, we started doing this on GWS, and a year later, other people started doing it. He kind of didn't go into the details that, oh, yeah, he also led this Testing Grouplet thing, and it actually took us five years
Starting point is 00:03:20 to really get all the pieces in place so that these ideas could spread, make the right thing easy, and hit that tipping point, right? So that's what the Rainbow of Death is: it's a device that helps tell that particular story. Actually, kind of going into Apple a little bit, I have to admit early on, one of the hard lessons I learned is that the Rainbow of Death is a great storytelling device, but it's a horrible answer key. I kind of went into Cvent and Apple and other places so excited about the Rainbow of Death and how I could use it to sort of collapse five years of fumbling around in the wilderness and muddling through and figuring out what worked. I was like, look, check it out. Here's the answer key. We just have to do similar things that hit similar notes. But it turned out to be more like a blueprint for a house, where, you know, you can see how it all fits together and how it could fit together, but you still have to do things in a certain order and they take time. And when you're able to kind of go back and look at
Starting point is 00:04:46 the finished product, you can look at the diagrams and you can make sense of the structure and how things fit together and how they were put together. But I think one of the things I would say about the Rainbow of Death when I would tell people about it, and this was the meaning behind the death part, right? Like the rainbow, it looks like a rainbow in my mind, right? Like it's a little bridge, it's different colors. And I just thought, rainbow, what doesn't go with a rainbow? Death, right? But what I realized is it also sort of represents the fact that some of our old ideas of how to do things or solve certain problems have to die in order for us to be successful in making that connection. And then it turned on me,
Starting point is 00:05:33 and it turned out that my idea had to die, the idea that this was something I could come in with from the beginning and just shortcut and accelerate the process. Um, anyways, I'm just going off on like 15 different tangents off that first question. I hope that's okay. No, I think that makes sense, right? And for listeners who are not so familiar with Mike's work, this is the idea of introducing automated testing at a place like Google, where everything was really nice and the company was doing well, but developers didn't add tests, right? They're like, oh, it's too hard to add tests, it's not useful. And it really describes the story of that change, from no one adds tests to, oh, everyone always adds tests, it's just part of our culture.
Starting point is 00:06:26 And what fascinates me about this story is engineering culture change, right? You see it happen so rarely, where people end up doing things differently compared to what they used to. So I'm always fascinated by this story, which Mike shares really well on his blog. I have to say, you have some of the best documentation of this stuff on any blog anywhere. Yeah, I appreciate that. And now I'm realizing, oh yeah, I forgot to talk about all that other stuff, the story itself. Right. Yeah. That's where it came from. Yeah. So one of the things that maybe listeners don't
Starting point is 00:07:02 know is that Mike's blog from way back, I don't even know if you know this, Mike, got me really interested in build tools and systems like Blaze and Bazel, and it was part of convincing me to go for my first job, which was building this whole ecosystem of developer tools at Dropbox, where they were trying to do something similar to what Google did. And I was so fascinated by the history and also the infrastructure challenges of managing developer tools at scale
Starting point is 00:07:32 and pretty much defined my career from that point, because then you're just good at one thing, which is developer tools, which nobody else wants to touch, and then you keep on expanding on that. Well, I had no idea. I'm very honored and flattered and humbled. I think that's exactly why people should write engineering blogs. I mean, you don't even know who you end up impacting. And now I kind of wonder, are there any people even listening to this blog slash podcast thing? Maybe it just convinces one person to do something slightly different.
Starting point is 00:08:08 Yeah. Yeah. And I've definitely, you know, I can't say I'd heard that before, but there's some value in kind of sharing things out in the open because you never know who you might help. I mean, and to be honest, that person might be yourself, right? Like that's one of the reasons I like to write things down, so I don't forget. But I figure if I make them as open as I can, not only am I helping myself, but it at least opens the possibility that, you know, someone else may get value from it too. So really, even if it's
Starting point is 00:08:52 just one person, right, it's still, you know, a really great feeling to know that you helped someone. Yep. I think it's also, it's like a great tool to clarify your own thinking, right? You go into a blog post thinking, this is what I think my opinions are on the subject, and then you keep refining that, and you end up learning so much about all these muddled thoughts in your head. Um, at least that's what I find with technical blogs. But maybe the thing that I'm still looking for... you know, it's been five or six years since my career started, and I still haven't seen the kind of cultural change that you've seen at Google when it comes to automated testing. And one part of that is something that you published on your blog today, which is this idea of I'm a snowflake, right?
Starting point is 00:09:44 Like I have such a unique setup that testing doesn't make sense for me. Maybe can you talk a little bit about that idea? Like, yeah, how often have you seen this? Quite often. I mean, it's, and by the way, thank you for noticing I posted that today. This is a, you know, thought that I've had for a while. And finally, yeah, I was just I put it out there on the blog, right, to start working it out exactly like you said. The idea is that, pardon me, at a molecular level, most software projects are more similar than they are different, right? Because we're all, you know, once we get the idea of what it is we want to code up, you know, whether we have requirements or just some idea that we're trying to prototype,
Starting point is 00:10:33 there's, you know, we start writing these lines of code, and they're all statements or expressions that we start ordering according to sequence, selection, and iteration, and then we'll package clusters of those bits of code up into, you know, functions and maybe classes, and then we'll bundle those into modules and packages, and eventually there'll be an API or something. And it's true that, you know, each of these applications may do something different, or, you know, they do something different from most other applications, even if, you know, they're not unique within their field. But, you know, programmers,
Starting point is 00:11:12 and really not just programmers, I mean, groups of humans in general, like to sort of create an identity that sets them apart from other individuals, or in this case, other groups, right? So, you know, one of my favorite things, and by the way, I have to admit that I'm trying to tread very carefully around talking about Apple, even though I don't work there anymore, just because, you know, there are still a lot of great people there trying to do great work, and I hopefully won't say anything that will be disclosure sensitive, that will imperil what they're doing. But one of the great things that we were able to start building up over time was all these examples from people in different groups applying the test pyramid concept and writing smaller unit tests and things of that nature. And so, you know,
Starting point is 00:12:09 when you have people in the software engineering org working on, like, the operating system and the first-party applications, people in AI/ML, people in, like, the services group, like the backend for Apple Music and all those other services and things like that, iCloud, if you will, and our internal IT department, right, like all these different groups, we started attracting people into what was our version of the Testing Grouplet, which is the Quality Culture Initiative. And so the more of those examples we got, the more ridiculous it became for any project to say, oh, well, that won't work for us.
Starting point is 00:12:48 We're different. We're special. Right. It's like, OK, so you're going to point the finger at like OS code and microservices and AIML and IT stuff and say, well, we are so different from all of those who are like really similar to each other. It works for them, but not for us, right? And the thing is, yeah, at a molecular level, they are more similar than they are different from a certain point of view. But the chances are your project isn't going to be that unique either, looked at through that particular lens. So that mindset that, you know, I'm a snowflake, my project's a snowflake or whatever. It's so unique that, you know,
Starting point is 00:13:34 these standard things don't apply. That is sort of the biggest obstacle to get over. Oh, and also worth pointing out, speaking of clarifying thoughts, this was one that came to me today when I was writing the blog. It's like, okay, well, you're using probably one or more programming languages that are mainstream. You're probably using one or more text editors or IDEs that are mainstream. You're almost certainly using source control, and that is almost certainly going to be Git today. So it's sort of very selective to say, we're so different from all those other projects. We use a lot of the same languages and tools, but we can't do this other thing. Right. So, yeah, just getting past that level of resistance is really the biggest challenge to automated testing adoption, because I was fond of telling people, you know, even at Apple, like,
Starting point is 00:14:26 we are not bringing in new technical concepts or techniques or anything like that. All this stuff has been figured out for decades. It's just a matter of trying to tailor the message, you know, those of us who identified as instigators within the Quality Culture Initiative, how do we put together a message that resonates in this culture and build up the evidence within this organization? So that is where we get into the other part of my blog post today. I talked about instigator theory, which is also related to the crossing the chasm model and the rainbow of death and all that sort of thing. And, you know, it's sort of like, I'm just thinking about this particular podcast. If you wanted to go deep and talk about the merits of dependency injection or functional decomposition or, you know,
Starting point is 00:15:25 designing to patterns and, you know, the ramifications for testability, and get into that technical conversation, we can. But as you were sort of hinting at, you know, the greater need, or rather the thing that's missing in most environments, is someone just taking a step forward and advocating, you know, that we try something that's already there, that somebody already figured out. And to me, that speaks more to the need for, you know, leadership and salesmanship. Even though the ideas are decades old, we still have to work hard to bring them to a new audience when we join an organization
Starting point is 00:16:11 where people just haven't, you know, they might be in that snowflake mindset, and they just haven't had exposure to it yet. Yeah, I want to go a little deeper into one of those things you described, which is a lot of the code that we write. And I'm thinking about my day job, right? It basically involves, if you take the Google metaphor, going from proto to proto, right? You are pulling data from some external system, changing its format a little bit, and storing it somewhere else. So when I think about that, I find it hard to justify to myself: does this code, which is a really simple data transformation, really need testing? And I'm trying to break out of my snowflake mindset there.
Starting point is 00:17:03 But like, what would your first reaction be? Yeah. My first reaction is to sort of understand what that transformation is. Right. Because there's a reason you're not just piping it straight through, you know, there's some sort of calculation that presumably produces business value of some kind.
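For concreteness, here is a minimal sketch of the kind of transformation and test being discussed; the record shapes, field names, and conversions are hypothetical, purely illustrative, and not from any system mentioned in the episode.

import { deepStrictEqual } from "node:assert";

// Hypothetical shape of what an upstream system returns.
interface ExternalUser {
  full_name: string;
  signup_epoch_ms: number; // milliseconds since the Unix epoch
  is_disabled: boolean;
}

// Hypothetical shape of what we store downstream.
interface InternalUser {
  name: string;
  signupDate: string; // ISO 8601 date, e.g. "2020-01-01"
  active: boolean;
}

// The "simple" transformation: even here there are assumptions worth pinning down,
// like the unit conversion and the inverted boolean.
function toInternal(user: ExternalUser): InternalUser {
  return {
    name: user.full_name.trim(),
    signupDate: new Date(user.signup_epoch_ms).toISOString().slice(0, 10),
    active: !user.is_disabled, // an easy place to accidentally invert a condition
  };
}

// A tiny test that would catch an inverted flag or a milliseconds-versus-seconds mix-up.
deepStrictEqual(
  toInternal({ full_name: " Ada ", signup_epoch_ms: 1_577_836_800_000, is_disabled: false }),
  { name: "Ada", signupDate: "2020-01-01", active: true }
);
console.log("toInternal: ok");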
Starting point is 00:17:24 Right. So I'd want to understand what that is. And, you know, speaking of protobufs, it could be that, if there really is some sort of rote transformation, you know, you could create your own tool or code generation or DSL or something to sort of abstract that away. Because, you know, protobufs themselves, pardon me, at least as they were implemented at Google back in the day, like, there was a protocol compiler, right? You have the data definition language and you say, I need this in Go and Java and C++, and it'll just spit it out. So, you know, the people who created protobufs figured out, like, oh, I shouldn't have to write these proto definitions every time and test each individual one. I'll just, you know,
Starting point is 00:18:18 code gen it, right? So that would be one possible alternative to testing in this situation: if the transformations are so rote that you just have to keep doing them, then maybe you just write another tool to do it, and then you can test that tool and be done with it. But if there is something, you know, subtle or bespoke about transforming this particular data format to this other one, then I would, you know, I would have to say, like, I imagine if you got something wrong and nobody noticed, like, why are you doing it? And if somebody does notice, it's probably worth double-checking with a test that, you know, you didn't do something that was at least off by one, or inverted some condition, or forgot an entire condition, or an entire conditional that should have been part of your logical flow. Like, that's kind of where I come down on that one: really trying to understand what you're trying to do and why, and then looking at the options. There's also the question of when, by the way. Something that I talked about, this is a talk I
Starting point is 00:19:38 put together after Google, before Apple, the talk on my blog that I call "Automated Testing: Why Bother?" That was the first place I started breaking down this idea of different phases of development, where your first phase is exploration, right? Like, you know, maybe you've got some requirements, maybe you don't,
Starting point is 00:19:59 or maybe they're incomplete and you're just trying to get a feel for the problem space, right? Like you're just trying things you want to for the problem space, right? Like you're just trying things. You want to understand. You want to get some visibility. And you're just prototyping things to get some early rapid experience and rapid feedback. And at that point, I'm like, yeah, maybe don't worry so much about testing then because it's early stages.
Starting point is 00:20:22 You're trying to figure out what direction you're trying to go in, right? You don't know the shape yet that you're trying to solve for, the shape of the puzzle. But then there's two other phases. One I call settlement, where maybe you're still exploring in one area of the problem space, but you've got a pretty good foundation in another part, right? There's some part that you have figured out, and you need that to keep working. So you're settling. So you're essentially laying some infrastructure. You need to make sure that infrastructure keeps working like you expected as you're still off exploring somewhere else, right? And then there's a phase that I call vision, which is about, you know, you feel like you understand, oh, I know exactly what I need to do now, right? Like, this is where
Starting point is 00:21:11 we need to go. So two points about that. One, a test really helps check whether or not you're in touch with reality, or whether you're making any unconscious assumptions. Which, by the way, is how I like to talk about code: you know, we write tests to validate that our code meets expectations. Requirements are sort of the tip of the iceberg; then there's a whole bunch of assumptions we might have that aren't written down, right? So even if you're in that vision phase and you know exactly where you're going, writing tests validates your vision and helps you sort of find some of those blind spots. And two, when other people come in to contribute, you don't have to personally review all their code. You don't have to be afraid of what they're going to do because you've got, you know, the tests. Someone once described tests almost as like an extension of the compiler, right? For a particular project. So, you know, you've already mapped out this vision and things
Starting point is 00:22:19 are great. If other people are going to come in and be a part of that, it would be nice to have a compiler extension to make sure that they don't violate not only the requirements, but the assumptions implicit in the vision. Yeah, I guess I'll pause. No, no, I think that makes a ton of sense. And, you know, when I described the problem to you, it just kind of fit in my head that, yeah, I have to get out of that mindset, and at the same time try not to apply it during that exploratory phase, right? Like, when you're just beginning on something, it's very different. Right, and once you've sort of developed some muscle memory, you can at least make it easier on yourself later, right? Like, you know, even though you're prototyping, you're still decomposing into smaller functions and maybe
Starting point is 00:23:33 individual classes. Uh, not that they're necessarily going to, you know, make it to production, but it still gets you a little bit closer to the point where, if you do figure things out, oh, I need to settle on this, then you don't have to go rewrite the whole thing or totally try to change it without tests, right? You don't have to do the whole set of Michael Feathers Working Effectively with Legacy Code techniques, which are fantastic, by the way, if you're in that situation. But, you know, once you develop some of this sensibility as to what makes for testable code, you can still write testable code, or code that's close to testable, even if you're not writing tests yet, which then, you know, puts you a little further ahead than if you were just, you know,
Starting point is 00:24:25 writing thousand-line functions right from the start. Just the way you write code changes over time if you have a slightly different mindset about how testable it should be. Right. Now that makes sense to me. I want to now go back and expand a little bit on what I think is the biggest problem, right, which is organizational slash cultural change, right? Like, when you have a large group of people who are used to a certain way of doing things, and in this case I'm thinking about a large engineering group, and in almost all cases, let's say 5% to 10% of people
Starting point is 00:25:06 are actually thinking about unit tests or see value in them. And you have a sympathetic management or set of leaders who are like, you know, yes, we all agree that unit testing is a good idea. I've very rarely seen actual cultural change in such groups, right? There might be a few holdouts, a few people who always have unit tests in their own code or in their own PRs. And you have maybe even respected engineers who push for testing. But what do you think makes the difference? Because we know, we all know, and we've all tried things like introducing code coverage and like trying to get more reporting. And like, I've failed to have any
Starting point is 00:25:52 hope or like, I don't have any hope in all of those measures. I'm sure they all help piecemeal. And it doesn't seem like anyone has like a clear answer to this. What is your approach? Well, you know, we're just waiting for someone to come up with the perfect logical argument or the perfect set of data to prove the value or the perfect tool that's going to set us all free. By the way, I guess I'm taking it sort of as a personal mission to see if I can't restore at least a glimmer of that hope for you in this conversation. So I think what you find is all those things you mentioned are necessary but not sufficient, right? Like we need advocates.
Starting point is 00:26:38 We need things like coverage. We need people to speak to the value of these practices and things like that. But this gets into what I was articulating today in the second part of my blog post, what I call the instigator theory, which is sometimes it's easier to change the rest of the organization or even the rest of the world than it is your own team. And the reason being, like I said, and you know, today, I just put those two ideas in the same blog post. Going back to your point about clarifying your thinking, I was like, oh, I need to get both of these out. I'll just put them both in the same post. But then after I wrote it, I was like, of course, they're related.
Starting point is 00:27:22 Because if you're in a team that's like, no, you know, we're unique and that stuff won't work here, uh, and you're the individual that's like, maybe we should try this, they could just shut you down, right? Completely. Unless you happen to be the tech lead, the manager, you know, somebody in a position to really influence their performance reviews, which might get them to sit up straight and at least listen a little better. But most folks aren't in that situation. So my recommendation is not to bang your head against the wall with what, again, what Geoffrey Moore from The Chasm calls the late majority or the laggards, right? Where the late majority, these are the people that are like, well, can you prove it's
Starting point is 00:28:12 going to work for us? How do we know? How can we be sure? Can you give us some case studies and some data and blah, blah, blah? They're waiting, right? And that's the trick to the chasm: once things catch on with the early majority, then the late majority sees that it's safe to follow, and the laggards are the ones saying, from my cold, dead hands, or whatever. And they make a lot of noise in the beginning, right? Like when you're just starting off and there haven't yet been any major developments or results, they suck all the air out of the room. But then as you start putting points on the board, you know, they kind of lose their power over time, right? So if you're stuck on a team where you're surrounded by late-majority and laggard people, and nobody's really receptive, what you can do is go find other teams. And just sort of put your feelers out. I mean, this is sort of like what we were talking about earlier about putting out blog posts, putting out podcast episodes.
Starting point is 00:29:30 Sometimes you'll connect with someone, or you'll be just the thing they were looking for, right? And if you can make those connections with other instigators, then together, other instigators and possibly early-majority people, you can work on what Moore calls the total product, right? Like, work on, what are the problems that need to be solved from a technical perspective? What tools do you need? But also work on the marketing, right? Like, work on the community, you know, create this environment. It's not going to be any one element. It's going to be a whole collection of having the technical knowledge at hand, maybe teaching a lot, with people from all over the organization just working with these ideas, like, you know, the test pyramid, and having everybody just sort of have a conversation about that and its implications, and how these sort of general principles and practices actually align with specific problems or goals that specific teams or organizations have, right? Because then, if you are able to craft that message with others and work those ideas out and polish them and create that total product, then you can kind of work on your team from the outside in instead of the inside out, right?
Starting point is 00:31:12 Like eventually, you know, your material, your product, your case studies, your evidence of impact and all this stuff will be, you know, so great that either maybe your team finally understands what you've been saying the whole time and they feel like they're finally ready to embrace it. Or if nothing else, you've built a wide professional network and maybe you'll be privy to opportunities to join other teams that understand and embrace these sort of practices. So that's how I look at the problem, right? It's not like, oh, I'm on this team and I gotta keep charging up this hill until I conquer it. Right. It's really thinking a little bit more laterally and thinking, you know, about, um, not just how am I going to work within this team and work my way up the hierarchy till I get to the grand prize at the top and the CEO can go tell everybody to start writing tests. But, you know, realizing that, you know, a more effective use of your time and resources is to connect with the people who already get it, even if they're not on your own team, and building up that product and that feedback loop so that eventually you can maybe reach, reach through the, the extent of the early majority, which will eventually hit the late majority. And then, then you're there,
Starting point is 00:32:33 but it's hard. It's not easy. And it does take time, regardless of how much experience you have and whatever point you're starting from. It's just going to take time. Yeah. Now, let's go back up even more, maybe. And let's say I'm an organization leader, like I'm a head of engineering. My first question is that if I see that the team that I've built or that I've hired,
Starting point is 00:33:05 or I've just been hired into this group, and I find out that there's not as much testing as I thought there would be, the first question is, how do I enable that as a leader? Is it much easier? You can imagine in many cases it's actually harder, because you're far away from the code base. Right? Like, that may be my first question. Yeah. Yeah. So, let me make sure I understand it correctly. You're a leader, you joined a new organization, you sort of understand the value, but you're not seeing it from the people that report up to you.
Starting point is 00:33:47 That's the question. Yeah, exactly. Yep. So what I would recommend is, one, you're going to have to take your time and try to put the feelers out and try to see if there are people who get it and find a way to make their work visible. This was another huge thing. I talked also in this post today about the more experience I get with different organizations and teams and things. Yes, there are differences. They're very obvious on the surface and they're important. But to me, what's more important is the commonality and the humanity and, you know, the human nature behind groups of people trying to accomplish something together. But, you know, along those lines, one of the big differences between Google and Apple is Google's internal culture is everything's open by default, or at least it was when I was there.
Starting point is 00:34:51 And of course, Apple is very well known for its secrecy, even internally, right? And it's not an insurmountable obstacle, but it just means you have to reconfigure your approach. At Google, that openness is sort of like the air you breathe. You hardly notice it unless you think about it. We didn't quite have that same degree of openness, the same vehicles and avenues for sharing, within Apple, right? No way in hell we were going to get away with putting flyers in all the bathrooms in Apple Park, as much as we wanted to. But, you know, what we ended up leaning into instead was, you know, we eventually got Slack, of course. We had a Slack community. We were using that. We had a lot of trainings.
Starting point is 00:35:58 We started putting together something we called the Roadshow to start conversations, and building up what we called our Ambassador Program, you know, where we were just really trying to cultivate leaders who understood the messages and the practices and, you know, could be effective advocates and things like that. And sort of the essence of what we were trying to bring forward in the Apple culture was the need to make quality work and its impact visible. And we started having more and more success with that, which sadly I can't really go into too much detail about. But that's the thing, right? I've said it's sort of like the Matrix, right? Where Morpheus says, you know, no one can be told what the Matrix is, you have to see it for yourself. And if you're not seeing this work, this quality work being done, then I would say as the leader, try to do whatever legwork you have to do to try to find it, try to surface it, and then make examples of that. If you do find somebody three levels down from you that is writing tests and getting good results, have that person give a presentation to your
Starting point is 00:37:29 staff and start making that visible and elevating that. It's definitely not up to you to go down and tell somebody how to refactor their code to do dependency injection. But if you, as the leader, say, you know what? This looks really good. I can see what this person is doing. And all of us can see the results, right? I think that sends a pretty strong message. And I will say, you know, without compromising too much information, you know, there was a particular large group that we were working with at Apple shortly before I left, where we had a sympathetic leader who did a fantastic job of, you know, creating that kind of space, right? Like he wasn't coming in and telling everybody, you know, just start writing
Starting point is 00:38:19 more tests or, or anything like that. He was like, Hey, what's this QCI thing? Oh, well, yeah, you know, try it out and show me the results. Oh, those are good results. Hey, you know, let's celebrate this. You know, it was just that kind of, you know, I'm trying to think of an appropriate verb, cultivation, right, of, you know, sort of establishing priorities by making things visible and sort of pointing things out. Yep. No, I think that makes sense to me. And perhaps a meta question here, right?
Starting point is 00:38:57 Like, it always seems like you need instigators, that you need people who are talking about testing. And you were doing this in the 2000s, and now you're doing this in 2022, right? Why is this not the default? I'm sure you've thought about this philosophically. Why is it that you have to go and evangelize testing 20 years later? Oh, man, how cosmic do you want me to go on this? Like, I know this is a technical podcast. Yeah, I think as cosmic as you'd like.
Starting point is 00:39:32 Well, it just so happens, I just finished reading the Bhagavad Gita for the first time. Okay. And one of the things, I'm not a spiritual or religious person. But, you know, I am a seeker of wisdom and insight. And as I'm reading this document, there are obviously certain aspects of the imagery and the metaphor that with the modern sensibility, it's like, this guy, you know, this warrior goes in the middle of the battlefield with his charioteer and, you know, says, hey, my family's on both sides. I don't want to fight this war. And, you know, the charioteer is, you know, Krishna, the supreme being, and he eventually
Starting point is 00:40:18 convinces him, no, you should, you know, like at that level, it's like, whoa, hold on. But of course, it's all a metaphor for the deeper struggles that we all face, right? And one of the things that really jumped out at me was the notion of, you know, kind of doing your job, doing it as well as you can, but not being attached to the outcomes. And, you know, that sort of explains why I've been banging my head against this wall for 20 years, right? 20 plus, oh my god. But the other thing that really jumped out at me is, you know, I've heard this before, right? Like, I've also been reading the Daily Stoic, right? And Stoics are largely about, you know, only being attached to your own reason, you know, your own sense of reason and experience, right? Because you can't be attached to external things, because they could go away, right? And of course, other religious traditions,
Starting point is 00:41:30 like Buddhism, are all about attachment and suffering and all these kinds of things. So when you talk about how we've known how to write these tests, and we've been trying to make this a practice for a couple of decades: some of these scriptures go back thousands of years, and we still have war. We still have evil. We still have, you know, people trying to figure themselves out and what to do with their lives. So I think it's just part of the human condition that, you know, if you're really in the middle of an activity, say like programming, and particularly if you're in an organization that has imposed certain requirements and deadlines on you, and they have leverage over your economic fate, right?
Starting point is 00:42:23 Because of performance evaluations and things like that, that's a lot to try to process. And if you haven't sort of had exposure to, you know, good coding and testing practices and, and on top of everything you're, you're focused on and you're dealing with and you're trying to juggle and somebody comes along and says, hey, here's something you've never seen or tried before, but I really think it should help. If you don't approach that in the right way, you're going to get pushback, right? People are going to be like, oh, I don't have time for this. My code is too hard to test.
Starting point is 00:42:59 Our project is unique. We have a deadline. We don't have the resources, right? Like, I don't know if we will ever get to the point where it's so universally accepted that there wouldn't be a need for somebody exercising leadership to drive adoption. I don't know. I know I said I was trying to restore your hope; I don't know, maybe I just shattered it completely. No, I buy that argument. It makes a ton of sense to me that there are some things, the way I'm thinking about it is, it's just the incentives that may always be somewhat off, that you kind of need to be, like, backfilled in. You can imagine that even with TypeScript, or like with typed code, it's so easy to not
Starting point is 00:43:54 have appropriate types even though it's pretty easy to introduce the right amount of typing and not cast just because it's that much easier and there's that deadline. And this is just an extension of that. Yeah. And I would say, you know, timing matters, right? Because if somebody is, you know, pulling 70-hour weeks to meet a deadline, that's definitely not the time to jump in and say, let's start refactoring and adding tests. But by the time they've made that deadline and hopefully had a couple weeks off at the very least, if the organization is humane enough to give that person some space to sort of reflect and recover from that experience,
Starting point is 00:44:45 that might be an ideal time to say, well, what would we do differently? What new things do we think we could introduce into the process that maybe could prevent a similar crunch in the future, right? Yeah, it's a complex system, man. You know, that's the funny thing: as software developers, we pride ourselves on being able to manage all this complexity in our heads and in our code, but, you know, we're often more set in our ways than I think we realize. Um, because it's like, well, you know, if you just learned this one extra thing, it could reduce the complexity of a lot of what you're working with and give you more capacity to do more things in the future. But if you don't
Starting point is 00:45:46 sort of market that message in the right way at the right time, then, you know, people are just going to shut you out with all the usual excuses, you know, my code's too hard and I don't have time and everything. Yeah, like, an organization where people are working 70-hour weeks to ship deadline after deadline after deadline is not a place where you're going to see that organic self-reflection, as you mentioned, to see these things kind of change. And then you're just always going to take the shortest path to deliver the most visible thing. And maybe that leads me to another interesting meta question,
Starting point is 00:46:27 at least in my mind, is you often hear about systems being over-designed and premature abstraction and keep things as simple as possible on one side. On the other side, I cannot believe that all design is premature and bad design. Like, sometimes you have to think how your system works, and it will help you, like, that extra piece of thing that's, like, slightly more generic, because you know that there's this use case that's going to come in three months down the line. I think there's also this aspect of being proud of what you're working on. You're doing something for the sake of doing it, and you're not that attached to the outcome.
Starting point is 00:47:15 I wonder if you've thought about how testing plays into that. When somebody works with tested code, they also feel like they're not working on throwaway, poor quality legacy stuff. Right. Yeah. Yeah, absolutely. And speaking of legacy, you know, Michael Feathers defines legacy code as code without tests, right? Especially, you know, I guess code without tests that you got to go change. But yeah, I think, you know, going to either extreme and sort of shutting your brain off is dangerous.
Starting point is 00:47:51 You know, either saying, oh, I don't need to do it at all right now, I don't even need to think about, you know, the abstractions I'm using, you know, let me just code, code, code, and we'll fix it in post, basically. But at the same time, yeah, like I said, during the exploration phase, that might not be the time to start locking things down. You know, through, uh, the process of writing software, yeah, you come in and, you know, maybe there is no clear map for where to go and what problems to solve, and maybe it's okay to do a little quick-and-dirty prototyping. But pretty quickly, you'll get to that point where it's like, oh, crap, my demo is now becoming part of the shipping product, right? And so if you've already developed some of those skills, some of that muscle memory, I like to call it, where even from the beginning, you're at least starting to recognize, oh, I should split that function up.
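As a rough sketch of that kind of early decomposition (the names and shapes here are hypothetical, purely illustrative): the decision logic sits in a small pure function, and the original routine shrinks to coordinating collaborators that are passed in, which is what later makes tests cheap to add without a rewrite.

// A prototype often starts as one function that fetches, decides, and writes, all in one place.
// Splitting the "decide" part out, even before any tests exist, keeps the door open for them.

interface Order {
  total: number;
  expedited: boolean;
}

// Pure decision logic: trivially testable later, with no mocks or setup needed.
function shippingCost(order: Order): number {
  const base = order.total >= 100 ? 0 : 7.5;
  return order.expedited ? base + 15 : base;
}

// Hypothetical collaborators, passed in rather than reached for directly,
// so the coordinating piece can eventually be tested with in-memory fakes.
interface OrderSource {
  nextOrder(): Promise<Order | undefined>;
}
interface CostSink {
  record(cost: number): Promise<void>;
}

// The original routine shrinks to coordination between the smaller pieces.
async function processOrders(source: OrderSource, sink: CostSink): Promise<void> {
  for (let order = await source.nextOrder(); order; order = await source.nextOrder()) {
    await sink.record(shippingCost(order));
  }
}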
Starting point is 00:49:08 Oh, you know, these two loops really belong in two different functions or whatever. Oh, well, I've got this class that's actually doing three things. Maybe I should split those three things out into separate classes, and then my original one can just, you know, coordinate between them, right? Like, even if you're not writing tests yet, you know, you're still mitigating the technical debt that you'll eventually have to pay down. Um, which, again, I'm surprised we got this far and haven't even brought up technical debt, right? Like, obviously it's a term that everybody hears, and they instantly have their own idea of what they understand it to be. I think, you're probably familiar with Martin Fowler's technical debt quadrant,
Starting point is 00:49:57 where, you know, there's different degrees of technical debt. There's intentional and unintentional, right? And so I guess the point is, you know, you want to hit that right balance, that right debt ratio, in the process, where maybe you don't quite have all the tests that you want to have in place if this experiment works out. But you can set yourself up so that if it works out, you're in a good place to start putting some tests in and to extend the product further from there. Yeah, I think it's all about just gaining enough experience to constantly evaluate what you're doing and make the right tradeoffs to set yourself up for success, versus putting a stake in the ground from the very beginning and saying either no tests and no design patterns and nothing at all in the beginning, but then again, at the same time, you don't want to just, you know, try to lock everything down and say, I'm not going to write a single line of code without tests and a design document. You know, there's a sensible middle ground that I think yields more productive outcomes. Yeah, and that's why I think it's so hard for our jobs to be replaced by ChatGPT or whatever, because it's like every day you're choosing slightly differently based on the current state of things: the current state of your organization, the current state of your engineering team, the seniority of your engineering team, like how much they ascribe to the same values as you do.
Starting point is 00:51:43 It's just so hard. And every organization, every team within an organization is different, likely. And this reminds me, this is why, even on my blog, as well as in this particular podcast episode, the things I'm intending to focus on are not the specific technical details about how to write testable code. I've taught many, many classes about those details. I'm very happy to do so and very happy to discuss it. But at the level at which we're talking about, where it's like, okay, if I'm an individual, I embrace the principles, I'm, you know, trained, I have the skills, but the wall I hit is my own
Starting point is 00:52:33 team. Like, that is the harder problem. That's, you know, just somehow the niche that I've sort of landed in when it comes to driving adoption. It's not the actual technique or the tools or anything. It's how do you just get people to the point where they are willing to, you know, learn and embrace it and try it. And to me, the name of the game is leadership. You know, it's not about being the singular hero that, you know, just comes through and saves the day with his brilliance. It's about... like, the 10x engineer thing, right? Which, you know, I understand some people are more productive than others, maybe. Because, you know, I'm certainly aware there's a lot of people that get into the industry because it pays well, even if they don't have a particular interest or inclination towards writing software or managing systems. You know, I like to presume people as, you know, being capable of, what am I trying to say?
Starting point is 00:53:50 I like to presume that people are capable of comprehending and incorporating these ideas around coding and testing, right? Like, the hard part isn't, you know, trying to find the 10x programmer, or whoever the most brilliant person is, who's going to say, oh, all you have to do is X, blah, blah, blah, and everybody goes, oh my God, finally, somebody solved it, now I get it, right? Like, no, that's not going to happen. Never has, never will. But what you can do, and what I've... and I have to give credit to my experience at Apple. I think what happened at Google was wonderful and fantastic. And people are still telling me today, hey, it's still going on. Like, what you guys did is still happening, right? I know they're still publishing Testing on the Toilet.
Starting point is 00:54:44 It started in 2006 and it's still going in 2022. And I've spent pretty much the rest of my career trying to recreate that. And I think, you know, with Apple, I finally started, you know, after being in the wilderness and trying in different places and having limited degrees of success and learning various lessons along the way. But finally, at Apple, I feel like things clicked, in terms of how people can collaborate and build something culturally that's going to last. I left Apple just about a month ago. Not because I particularly wanted to, but I had moved to California to work for Apple, and then in 2020, my girlfriend and I lost our home in the wildfires. And we took a risk moving back to Northern Virginia, knowing that one day there would be a return-to-office order, and I most likely would not be able to continue working at Apple. But the timing kind of worked out, because what we built with the Quality Culture Initiative
Starting point is 00:56:12 was this community with a whole bunch of strong and accomplished leaders that could carry it forward. And, pardon me, you know, one of the things, and I don't think it's unique to Apple, is where you have someone who does exhibit leadership and creates an environment, an organization, and then the fear is, oh, what if that person leaves, right? One of my goals has always been to work myself out of a job. And I feel like the timing worked out that way with Apple, where I was there long enough to create this community, this group of leaders, this environment. And together we crafted a really strong message, and we started developing more and more evidence of impact and results, so that it's almost a good thing that I left, right? Because I didn't want it to become all about me. I didn't want people to just only come to me with their questions or their
Starting point is 00:57:25 requests or things like that. And fortunately, by the time I left, there was a really good core group that, yeah, were perfectly capable of taking it forward. I feel like I almost would have held it back if I was still there. Right. And so that's sort of my goal now: not to come in and be the one person that tells everybody how it is and, you know, makes sure every line of code for that company is perfectly tested. I kind of see my role as just creating the right space for those instigators to connect and, you know, create something, create solutions that fit the culture and outlive any individual who may come or go, right? Yeah. First of all, I'm sorry to hear about your home. That's just deeply, deeply unfortunate,
Starting point is 00:58:21 like, with the wildfires. Yeah, it sucked. But I mean, I have to say, I call it, we won the wildfire lottery. I mean, if you had to have your house burn down, do it our way, because we ended up with good insurance, and we hired private adjusters that worked with the insurance company and, you know,
Starting point is 00:58:41 got us a pretty good chunk and we happened to land pretty softly at a place in Palo Alto for about a year. But, but yeah. But, you know, it's funny, speaking of not being attached to things, for whatever reason, you know, maybe, maybe it was the fact that I was just, you know, focused on work, focused on building this community, and really sort of developing. For me, it's all about teamwork. Like, that's the one thing, that's my creature comfort in all of this, right? Like, it's not about being the hero, but being, you know, a contributing member of a strong team, I love that feeling.
Starting point is 00:59:29 And that's what I had. And, you know, maybe that's what got me through not being too, pardon me, not being too affected by the fire. I think that makes sense, right? Pretty much, I've started thinking, what would happen or would I really enjoy it in the hypothetical world where I had a huge bank account and I didn't need to work.
Starting point is 00:59:56 I still feel like the whole idea of working with a group of people I like working with is kind of fun. And it's not only about the economic incentives at play. Speaking of the economic incentives, yeah, it's like, you know, I'm not a person that needs, I like living comfortably, right?
Starting point is 01:00:22 Like there's a certain baseline salary I expect, obviously, but that's not what gets me out of bed and doing what I do day to day. What I'm getting out of this is not the material reward so much as it is that sense of camaraderie, teamwork, shared achievement. Programming, technology, this whole career is a shadow of what I would really rather be doing, which is playing music in a band. But unfortunately, I guess I had more coding talent than musical talent, so it just worked out backwards. But I still sort of approach my job and my career with that same collaborative mindset, that same set of principles about, we're here to create something together, and what's the best way we can do that? Yeah, I think about the music thing as well. Like, I made some electronic music in the past and I spent way too long
Starting point is 01:01:28 and it just doesn't seem like I have enough talent for that. But since we're close to time, I want to end with like a quick question. So if you had one piece of advice for that engineer who's thinking about, you know, introducing or trying to change the culture of their organization or their team by just a little bit, right? We've spoken about going cross-team
Starting point is 01:01:53 and finding people to work with who are like-minded. If you had to tie it up in a bullet point or three, what would your advice be for that engineer so that they don't lose hope? Yeah. Are you asking for a friend? Asking for hopefully a lot of friends who are listening in. Um, the, uh, the first place I would start is don't rush it, you know, just give it a
Starting point is 01:02:24 minute. Because going back to what I was saying about the rainbow of death and I thought it was an answer key, even if it was, it still meant we had to take one step at a time. Going back to that blueprint metaphor, like, okay, let's say you've got the architectural plans for the house,
Starting point is 01:02:52 but you still have to go pick a plot of land, clear it out, put in the infrastructure, build the framing, work the infrastructure through it, right? Like, you can't have the interior decorators and painters come in the same day as the guys there with the bulldozer clearing the field. So I think it's important to keep the perspective that, you know, it's going to take time. You're not going to be a hero in 90 days or something. But if you've got your values and your principles aligned in the right place, and you just live those values step by step, day by day, you'll ultimately end up where you want to be. And I also think, in doing so, um, you know, yeah, put those feelers out, because as you're developing your skills, you know, by applying your values and that focus, um, it helps you recognize, you know, those of like mind. And eventually, I really believe this, like, eventually, you know, people of like mind in an environment will gravitate towards one another. And, um, so maybe one more bullet point.
Starting point is 01:04:45 Um, crap. There's so many, I have to pick one. I think, um, brevity has not been my strong suit. Um,
Starting point is 01:05:04 yeah, maybe go ahead. I was just thinking, I was just reminded of that not being attached to the outcome, right? I think, yeah, um, doing the right thing, connecting with the right people. Like, I think you will know... the thing is, it's knowing that you're doing what's right, whether or not your team or even your manager necessarily recognizes it. Hopefully they will over time. If you don't do what you need to do, they're never going to. But just having that, I'm not sure if faith is the word I want to use, but I don't know if there's a better one, that, you know, if you're really striving to perform to the highest level for yourself and to connect with others who are doing the same, to me, that's the most important part of the work. It's not whether
Starting point is 01:06:01 you get recognized or promoted or... Now, granted, that also does potentially open you to abuse. I'm not saying that recognition and compensation and promotions are not important. They can be. I'm just saying, again, like you need to find the right balance and not focus so much on those outcomes that you lose sight of the importance of just doing the work and connecting with others that feel the same. Yeah, no, I think, I think that makes sense to me. Well, Mike, thank you so much for being on the show. I'd love to have you on again, because I think there's just so much content to discuss. Well, you want more? Okay. I mean,
Starting point is 01:06:46 I don't think we remember anything.
