Embedded - 462: Spontaneously High Performing

Episode Date: October 19, 2023

Marian Petre spoke to us about her research on how to make software developers better at developing software. Marian is an Emeritus Professor of the School of Computing & Communications at the Open University in the United Kingdom. She also has a Wikipedia page.

The short version of "How Expert Programmers Think About Errors" is on the NeverWorkInTheory.org page, along with other talks about academic studies on software development topics. The longer version is a keynote from Strange Loop 2022: "Expert Software Developers' Approach to Error". This concept, as well as many others, is summarized in Software Design Decoded: 66 Ways Experts Think by Marian Petre and André van der Hoek (MIT Press, 2016). The book's website provides an annotated bibliography. Marian has also co-written Software Designers in Action: A Human-Centric Look at Design Work.

She is currently conducting inquiries into:

Code dreams: This research studies whether software developers dream about coding and, if so, the nature of those dreams. Following on from work on software developers' mental imagery and cognitive processes during programming, this project investigates developers' experience of coding in their dreams (whatever form that takes), and whether the content of such dreams provides insight into the developers' design and problem solving.

Invisible work that adds value to software development: The notion of 'invisible work' (activity that adds value in software development but is often overlooked or undervalued by management and promotion processes) arose repeatedly in discussions at Strange Loop 2022. Developers asked for evidence they could use to fuel conversations, and potentially promote change, in their organisations. This research aims to capture the main categories of 'invisible work' identified by developers (e.g., reducing technical debt; improving efficiency; addressing security; development of tools and resources; design discussions; …), and to gather concrete examples of the value that work adds to software.

Transcript
Starting point is 00:00:00 Welcome to Embedded. I'm Elecia White, alongside Christopher White. Our guest this week is Marian Petre. We're going to talk about how you go from being an expert to being more of an expert, or maybe how to go from being a novice to being an expert. That might be more useful to more people. Hi, Marian. Welcome. Hi, thank you for inviting me. Could you tell us about yourself as if we met at, I don't know, a Strange Loop conference? Okay. Hi, my name is Marian. I'm from the Open University.
Starting point is 00:00:40 I pick the brains of experts to try to figure out what makes them expert. That's a good elevator speech. Do the brains get to stay in the experts? Yes. Sadly, the brain picking is actually quite indirect. All right. I like how she prefaced that with "sadly." We're going to do lightning round, where we ask you short questions and we want short answers. And if we're behaving ourselves, we won't ask like why and how and are you sure, at least until we're done with lightning round. Are you ready? I'm ready. What do you think is the best first programming language to learn?
Starting point is 00:01:23 Any of them. One that doesn't put you off. There are languages that are designed for people to walk in without having to get crazy. So one is Scratch by Mark Guesdahl and others. But I think our entry-level courses are in Python. I learned basic, and then I learned machine language. The answer is, as long as your first language isn't your last language, I like that. it doesn't necessarily do permanent damage. Which is the best programming language overall?
Starting point is 00:02:02 Oh, come on. I know, that's such a terrible question. I decline to answer that one. I'll give you an alternative, however, which is that one of the single, empirically, one of the single best indicators of programming performance is how many programming languages the developer has touched. And so the key is not one language, but many languages and to learn lessons from all of them. And finally, I'm glad I learned AUC. I have touched many, many programming languages and the lessons I've learned is I don't like programming. No, no.
Starting point is 00:02:39 Next question. Let's see. Do you like to complete one project or start a dozen? Yes. I think the answer is both in balance. I try to be sort of focused active on two projects at a time so that I can play between them and then keep a backlog of all the other stuff that's in my head so I don't lose it. Are there any journals or magazines or YouTube channels or conferences that normal everyday software developers should follow to understand best practices? I don't have a good answer to that. The two industry conferences I've been to recently that have been an absolute joy from a developer's perspective have been Strange Loop, which sadly has just had its last episode,
Starting point is 00:03:28 and Joy of Coding in the Netherlands. I would recommend both of them. What is your favorite fictional robot? That's hard. Robbie the Robot from Forbidden Planet was a sort of meme from childhood. We used to call him Blobby Robbie. More recently, Mead, a sort of robot spaceship cross was entertaining, partly because there's an implied ethos in Mead. But, yeah, I think Robbie is kind of the iconic robot image that everybody has. What was Mead from? It's a film.
Starting point is 00:04:19 Oh, okay. I'll have to go look for that. I haven't seen that one. A robot film we haven't seen. Yeah. Cool. Do you have a tip everyone should know? I have one from my very wise mother.
Starting point is 00:04:33 Never make a statement when you can ask a question. It's a piece of advice that has stood me in good stead over 30 years. Well, more than 30 years. I'm kind of surprised that wasn't in the form of a question. I know, I know. There's a certain irony to that. It's the Jeopardy rule of life. I saw your Strange Loop talk on Greg Wilson's Never Work in Theory site, and this was the small version, although I have seen the larger one now. And it was about how experts think about errors.
Starting point is 00:05:11 Could you tell us a little bit about that? I'm not quite sure what you want to know about it. So that talk is one small slice out of decades of research on what makes experts expert. Greg Wilson and Mike Hoy's image for those talks was a 10-minute talk that would deliver something actionable from research to developers. And for me, the attitude to error thing was a really nice nugget to hand across that boundary. It's also incredibly rich. So the whole notion is that experts have a very different approach to error when it arises than, say, people in software factories. So instead of,
Starting point is 00:06:03 oh my God, there's a bug, swat it, get rid of it. They pause and they look at the error and they say, what's that about? Is that as trivial as it seems? Or is it part of an ecosystem, a collection of other things? Is there something else going on that we haven't thought about? And very often really important insights about the software come from paying attention to errors. And in a way that fixes the error, not fixes the blame. So it's a very, very open attitude. And that kind of embracing error as opportunity is a really, really useful part of that expert mindset. I like that a lot. And figuring out how to help people become experts
Starting point is 00:06:48 is something I've been thinking a lot about lately. How do you take people who are excited and willing to do more and to take classes and help them get over the hurdle of not even beginner to novice or novice to junior, but junior to engineer and engineer to senior. How do you help them become experts? Well, I'll relate things that I've seen in terms of the way the high performing teams bring people on side. Actually, first, I'll tell a story about one of the designers that we studied. So I was doing work with my colleague, Andre Vanderhoek, at University of California, Irvine. And as part of that, we recorded, or he and one of his PhD students at the time, recorded pairs of designers working together on
Starting point is 00:07:45 a design task. And in all of the companies they went into for these recordings, they asked for people who were really their best designers so that we could get sample material for people, for researchers to look at to try to understand what was in those dialogues. And in one of the cases, one of the designers was incredibly young. He wasn't the sort of person that you'd expect them to have delivered to us as their, you know, really high-performing designer. And so they stopped afterward and spoke to him and said, how did you get to be here? And his whole story was a story of asking questions. Every time there was something to do, he would pick the problem he didn't know how to solve.
Starting point is 00:08:31 He would find something he hadn't done before. He would navigate the design space, the problem space in a different way because he wanted to be surprised. So every project he worked on, he focused on whatever the design component was that was most crucial. He was trying to sort of sort out the shape of the solution before he started engaging with coding. He made lots of mistakes and he learned from the mistakes. Uh, he sought out open source projects that were in areas he wasn't familiar with. And so what he did was he, he gave himself a huge range of experience, um, in order to stretch himself and in order to give him a body of material. Now, not just to engage with, um, and build his knowledge base, although that was certainly part of it, but also
Starting point is 00:09:25 to reflect on so that he could look across the whole of that and begin to understand what works, what doesn't, and why. And I think that's a big part of it, is that business of looking for diverse experience, reflecting on it, thinking hard about what makes something better, what makes one solution better than another solution or what the trade-offs are. In terms of you helping people, I've always said that it is kind of a meme that the education is in the dialogues. There are points of engagement that are really, really important where somebody is coming to terms with something, it just needs to talk about it to somebody else, needs to get it out of their head, needs to compare their experience to somebody else's experience. And so creating an environment in which it is
Starting point is 00:10:15 perfectly reasonable to explore, it is valued to learn and experiment and make mistakes and then figure out how to fix them. And it's reasonable to have conversations about that as a rich environment for developing expertise. I want to go back to what you were saying about this particular person who explored different ways of looking at things, explored different, kind of walked their way through not being closed into one thing, kind of exploring different things. And what you said about experts and how you found that experts tend to see bugs or errors as a problem, not as a problem, as an opportunity, it's kind of a paradox because that sounds like, forgive me, in Zen Buddhism, there's a thing
Starting point is 00:11:10 called beginner's mind. And it's a thing people talk about. It sounds like maintaining beginner's mind, which is somewhat paradoxical. You know, if you say, oh, if you're a good expert, you can get into the beginner's mindset. But it sounds like that's sort of what you're talking about, being able to approach things without a lot of judgment to start with and see, okay, where does this lead me? What is this telling me that maybe my years of experience are locking me into a solution that maybe I'm missing something? That's a beautiful recap. One of the things that's really interesting that experts do, particularly when they're in a sticky problem and they're struggling to handle all the different constraints, is they relax constraints. Either they simplify the problem by taking away some of the design features they expect to put in eventually, and they focus on really the heart of the problem. Or they just say things. I literally
Starting point is 00:12:05 have sat in a design session where as part of the discussion, the expert in the room said, let's pretend there's no gravity. How does that change what we would do? And it's clear that we haven't figured out how to eliminate gravity. But by reducing that constraint, they really broadened the potential solution space and they got insight into the thing that was hanging them up. And so that whole sense of the inquiring mind, that whole business of continually re-exploring the nature of the problem and the understanding of what's needed is part of that designer mindset that distinguishes these high performers. How do you decide who's a high performer?
Starting point is 00:12:53 Well, the algorithm I used for finding experts when I went into organizations to study them was I would ask everybody who the expert was and go to that person. That person always pointed to someone else. No one admits, none of these people admits to being an expert because they're all too aware of their limitations. But the reality is that there's very often one person or a couple of people who are the people who sit quietly in the room, and then when they open their mouths to ask a question, that question changes the discussion. They're very often people with very deep knowledge, and knowledge that is, I keep talking about garbage can memories, experts with garbage can memories that are indexed, so they can go back in time and understand what they did on previous similar projects,
Starting point is 00:13:50 what the constraints were on those projects, how they made the decisions, and then they can reapply them. But they're also the people who are able to see opportunities to see past methods, to see past imposed constraints, to see past current technological obstacles, to find alternatives. One of the things that I found fascinating after graduating and in college was the emergence of the design patterns. Zeitgeist? Gestalt? I don't know. Whatever. Some word I don't really know. I was going to go through a bunch of German words.
Starting point is 00:14:32 Yeah. Elevator. I don't know. And it felt like that made people more expert because they got a wider variety of problems and relatively canned solutions and understanding of where things fit. Do people just need to, you know, read a couple books to become an expert? No, no, no, no, no. I mean, expertise is something that takes time to acquire. It takes time, it takes experience, it takes reflection. The point is that people can step onto the path toward expertise by adopting an appropriate mindset and then over time build up that knowledge base and build up that experience base and build
Starting point is 00:15:18 up that body of reflection. So the nice thing about patterns, as you say, is that it was encapsulating known solutions to familiar problems in ways that could be reapplied. And it abstracted them. But ideally, if patterns are well expressed, it also gives examples of how it works, where to look for how this is applied. And that's really, really powerful. As long as that doesn't become the end instead of the means. So patterns as a tool are incredibly powerful. And they do allow people to walk through things they might not have thought about themselves, and to consider alternatives that they might not have generated. One of the things that I try to convince people to do is to stop coding and to start thinking. I try to convince people to stop coding. Oh, no.
Starting point is 00:16:09 Yeah, but your version is never code again. My version is think first. And I go through times where I'm like, okay, write out pseudocode. Okay, write out pictures. But it's all really just don't type until you've thought about it. And you brought it up with your expert. How do you... There's so much coding without design that happens, I think, is what you're saying, right? How do we convince people to stop typing? Well, part of it is cultural. So there are huge differences in software development cultures that actually have a real impact on how people behave.
Starting point is 00:16:49 So in going to places like Strange Loop and Joy of Coding, I met all these developers who are reflective practitioners who are clearly out there trying to learn things, trying to think about things, who are open to conversations. Okay. open to conversations. Going into companies isn't necessarily the same thing because in a lot of companies, they are driven by KPIs, key performance indicators. They're driven by how many pull requests you submit. The metrics drive things. That's actually a culture that is really problematic for developing generalist expertise, for developing problem-solving expertise and the design mindset. Because the design mindset isn't about those quick fixes. It's about understanding that investment in the underlying issues pays off in terms of speed of delivery and quality of delivery overall. And so it may look like a much bigger upfront investment. But when I talk about high-performing teams, I'm talking about teams that deliver basically on time,
Starting point is 00:18:06 under budget, works first time without lab-ground spells. And they do that repeatedly. And part of that has to do with understanding and owning that development process in a way that isn't driven by management indicators, is actually driven by the engineering needs. So I'm very, very sympathetic to the position you're in. I mean, you saying think first, yes, absolutely. And it's very interesting to watch what these high performers do. They certainly think first. They certainly sketch things. They also sit in a corner with a pad of paper and their legs crossed and wave a pen in the air without writing anything down a lot of the time. But they think hard before they start committing
Starting point is 00:18:56 things to code. That doesn't mean they don't ever sketch. They don't ever make annotations and code. But what these people do is they design solutions, and then they begin to implement those solutions in a pseudocode that is an amalgam of lots of different notations and lots of different ways of presenting things. And when they've got the shape of things worked out, then they move on to code because that part's easy. That part's pretty straightforward. I mean, sometimes they'll just type code because it's faster because they know what they're... I suppose we should be distinguishing between normal solutions and radical solutions as the literature would have it. So there are certain things that are just a very familiar problem. This is another edition of what we already know how to do.
Starting point is 00:19:48 We will just do it. We will use a good solution for a known problem. I'm not going to make a diagram for string copy. I know how to write that code. That's right. And you just go to that. But for things that are new, they think first, as you say, and the ways they articulate. I mean, I did some studies about representation for ideas capture. And I did things like I wandered around after people, I pulled the envelopes out of their bins that they'd been sketching things on. I took pictures of their whiteboards. I kept track of the things that they were writing and drawing when they were shaping the solution in their dialogues and in their own minds.
Starting point is 00:20:31 And those were incredibly diverse representations. It had lots of different things in it. There were little bits of code, but there were also diagrams. There were also bits of analysis. There were also descriptions of analysis. There were also descriptions of things. The code that they wrote might have been in more than one language because they were borrowing from known elements here or there. And that's pretty typical. Do you think this has changed over time? As you describe this, I'm thinking back to my
Starting point is 00:21:01 early experiences as a software developer in the late 90s, early 2000s, where I think I was surrounded by people like this. This is how we did things. And there were discussions, and we spent a lot of time. I remember when I got assignments to do things, I would spend a month writing documents and stuff on how I was going to approach it before I wrote any code, and I was encouraged. I don't feel like that, at least in my recent experiences, that that's the way things are working most of the time. You were hired into a company and surrounded by fantastic experts.
Starting point is 00:21:36 I know, I know, I know. But I went to different companies after that. And it all went downhill, didn't it? No, no. Well, yeah, I don't know. I mean, over the time that I've been studying developers, the scale and nature of the software they've been building has changed. Yeah.
Starting point is 00:21:55 And so a lot of what's going on is people are no longer just building a piece of Greenfield software. They're building something into a product line or indeed designing a product line, or they're borrowing from all sorts of existing software and libraries, they're compiling things, they're composing things. So in some ways, parts of what people are doing has changed in proportion, if not in kind. But in terms of the problem solving, there's a real need to go back and think about the concepts, to think about the nature of the design, to focus in some places I've studied where they've shifted into some variation of agile practices, they don't always make the design documentation, the functionality documentation as visible and as prominent as it would have been in traditional teams. So they're using a lot of the same methods because I don't actually think there's a disjunction between traditional software development and agile development. I think agile just highlighted certain effective practices so that people could adopt them in a coherent way. But there are some interesting questions about
Starting point is 00:23:26 where some of those diagrams, sketches, early notes go in that process. And they don't necessarily show up on the wall or on the whiteboard. And so it may be that all of this is still happening, but it's not as publicly visible within public, I mean, within the team. It's not visible to the whole team in the way that it might have been. I also think that the dispersal of developers has had an impact. Geographically? Yes, yes, physical dispersion. So one of the places I studied for a long time, they had, each developer had a cubicle.
Starting point is 00:24:10 They each had an office, but the top part of the top half of the walls were glass, and they would use the glass walls as whiteboards. And so even though they were developing individually, they were working on their component individually, when they sketched something on the whiteboard, people could look across and see what they were doing. And I saw a number of dialogues that happened simply because somebody came running out down the corridor, knocked on the door and said, I just saw this, hang on a minute. Or indeed, just stood in his or her own office and drew an alternative. And they had a kind of dialogue through the windows. And I think it's harder now, or it's less spontaneous now, if people are working in separate offices, and they haven't found an alternative, a replacement for that kind of
Starting point is 00:25:01 impromptu interaction to do that kind of explicit sharing. So open offices versus cubicles versus closed door offices. Is there research that says one is better than the other? There's only one right answer here, by the way. I don't have an answer to that. Fair. I do believe that that research exists. I'm sure the research exists.
Starting point is 00:25:31 I don't have it in front of me, but somebody I know cites it religiously at management every time they try to institute open offices. Then I always wonder about the quality of the studies and all of that. How do you make a good study that looks at software development? Do you give fake problems? Do you integrate into a team for five years? How do you study these things? Yeah. Okay, there isn't a single, here's how you do a good study answer.
Starting point is 00:26:01 There are lots of ways to do studies. So a lot of the work that I've done, for example, has been me going to teams and sitting with them and watching what they do, or alternatively interviewing developers, and then they show me what they do, or they send me examples of their materials. We also do experiments where we have much more focused questions,
Starting point is 00:26:22 and we ask developers to do designated tasks and then compare across different places. The key to all of it is that you have there's an awful lot. The way that you design the study depends on the question that you want to answer. So it's there isn't a single this this the way that you marry research. So, okay. So I do a lot of work with PhD students, teaching them the craft skills of research. And one of the fundamentals we have is about research design. And it's, we call it the one, two, three of research design. Number one, what question, what's your question? And why does it matter? What does an answer look like? That's number one. What's the question? Number two, what evidence would satisfy you in answering that question?
Starting point is 00:27:12 And then number three is what technique would deliver that evidence? So how to design a good study is figure out what you want to know and what knowing would look like. So if I want to understand where innovation comes from in development teams, I will probably start by looking at an innovative team and go talk to all of those developers and ask them questions about how they perceive innovation and how they see it arising and whether the things I've identified are for things like differently organized companies, differently organized teams, different domains, because what I'm trying to get is a representative sample. But working at that intensity limits the number of places that you can go to study. And so I'm not going to be talking about statistically significant results if I observe five teams in five different companies. And so the real answer is that we use a collection of different methods over time that look at the question from different angles and using different evidence. And then we reflect across all of that evidence to try to understand the underlying phenomenon. Once we've understood
Starting point is 00:28:46 it well enough, we can articulate what we think is going on, and then we can go test whether that's true, whether we can find anything that contradicts that theory of how things work. But it isn't simple. So there's a lot of ethnographically informed work where people are sitting in companies watching things as they happen naturally. There are what we call field studies where there might be some intervention where, for example, we might ask people to do a particular, well, the example that I gave where we were looking at pairs of developers solving a particular design brief at the whiteboard. I mean, we went and filmed them in their companies, but it was our design brief. And so we could look at all those pairs. We could look across all the pairs to see how their behavior was similar and how it differed. But arguably, I think we had about 10 videos at the end. that's a very small number to be representative of the whole of software development or the whole of the range of design styles that are out in industry.
Starting point is 00:29:54 And so it's not necessarily simple. There are a lot of people doing survey work or doing tasks on things like Mechanical Turk. For that, you need a very, very specific question. You need a pretty good idea of what it is that you're asking about, or you end up with a lot of examples of very weak evidence, and so on and so on. So there are lots of different ways to do it, and it actually requires thinking over time. But it also depends what kind of answer you need. Sometimes you just want a, you know, finger in the air kind of answer. Is there any reason to think there's a difference between these two methods? Let's have a quick look at them. Or does this work at all? A demonstration, a really simple demonstration
Starting point is 00:30:46 of concept might be a very, very limited kind of study. So it comes down to that match between what you want to know and then how you choose to find it out. These seem like psychology design studies, as opposed to computer science, where you're looking at data. Okay, so fundamentally... Sorry, that came out really badly, didn't it? Where you're looking at data like that wasn't data, where you're looking at numeric data. But the thing that you're asking about, if I'm talking about the nature of expertise, I am talking about human behavior. I mean, one of the reasons that computing is such an interesting domain
Starting point is 00:31:26 in which to do research is because software is limited basically by our imaginations. Whatever we can imagine, we can probably build over time. But the key is the human imagination, is the human ability to affect the designs that are in their minds. And so for me, anything that we do, the software that we use is going to be written by people, or maybe a collaboration between people and machines let's just go with people um it's going to be read by people and importantly puppet and and then implement and then and then uh operated by machines yeah um and the ultimately it's going to operate in a human socio-technical world.
Starting point is 00:32:26 And so there's an awful lot of, I mean, there are lots of systems that are very much oriented to technology, but even the ones that seem like the human part of it is irrelevant, it turns out that it isn't. So for example, one of my informants works in embedded software in the automotive industry. And there are examples there where the software worked absolutely to spec, but what it didn't take into account, it worked very well with the car. What it forgot was that there was a human driver in the car. And so, for example, there were automations to parts of braking systems that caused fatalities simply because once that automation was invoked, it surprised the driver. And the driver was
Starting point is 00:33:23 unprepared to what happened with the vehicle. The vehicle is now behaving in a different way. And so I don't think that the separation between looking at how people think about software and how software operates on the machine should be absolute. There's actually a relationship, a very, very important relationship between them. I mean, it may be that different people want to focus on different parts of that arc, but they're intimately related, whether we like it or not. You've talked about experts and about high-performing teams. They aren't the same, are they? No. How are they the same and how are they different?
Starting point is 00:34:07 So typically, high-performing teams have an expert as part of them or someone with expertise. But it doesn't mean that everybody on the team is an expert. And one of the things that's really interesting about how these teams operate is that they're embedding, first of all, they're embedding this designerly mindset in what they do. And they're reinforcing it with the development culture that they have and the dialogues that they have across the team. And so what they're doing all the time is helping everybody to be the best they can be. So, for example, experts don't make fewer errors than non-experts. They make just as many, if not more. But they have much better safety nets for catching errors.
Starting point is 00:35:02 And one of the things that high-performing teams do is include in their culture practices that mean that there's a really good safety net for error, because code isn't owned by one person, code is owned by the team, and people look across and help each other and do things together. And so the likelihood that they will find errors that arise is very, very high, that they'll find them early is much higher, and therefore that they'll be able to address things while it's in production, not when it's out of the world. And so a lot of the business about high-performing teams is having enough expertise and having that really well-founded culture that embodies this designerly mindset, that embodies reflection, that embodies a learning culture, that includes
Starting point is 00:35:57 the kinds of checks and balances and the kinds of practices that help them catch errors, but also help them think beyond individual strengths. And so one of the things that characterizes high-performing teams is very often, it feels like it is greater than the sum of its parts. Christopher recently mentioned something where you heard that the goal or one of the primary duties of a senior software engineer should be to create more senior software engineers. Yeah, I like that. I don't know how many companies I have been in that that's actually part of the company.
Starting point is 00:36:39 And maybe it's because I've done a lot of startups. Big companies have that culture more than small startups. I would say that's probably true. I mean, startups, they have a job. They don't have a lot of money. They've got to get from here to there. But even big companies, I don't hear that being a goal anymore. They do just as many layoffs as anyone else.
Starting point is 00:37:03 Well, I mean, yeah. I'm not sure a layoff is related to... That's usually a higher corporate directive than the development culture of your junior people. It's a measure of loyalty. Oh, sure. Okay, question. I have an actual question I'm getting to. Is it my responsibility to be curious and go to conferences and read books and think about things?
Starting point is 00:37:28 Or is it my employer's responsibility to give me the time and tools to do that? Again, my answer would be yes. Yeah. I mean, the key is individuals have to want to improve. They have to want to embrace things. But also companies need to be canny about what creates an effective culture and about how they invest in their people
Starting point is 00:37:56 so that their people can deliver the best products possible. And it's been very interesting working with different people. For example, there's one developer I spoke to who actually wanted to talk to me because he had been working with a company for a long time that had a terrific company culture, one where they were very interested in everybody getting better, everybody developing and learning and having opportunities. And then something changed in the company, and it went stale. One vice president. They lost that ethos. Yeah. And so he left because he wasn't happy anymore. And he went to a new company. And his question to me was,
Starting point is 00:38:38 okay, I'm now in charge of 20 developers. How do I not make that mistake? And that's a really interesting one. I mean, it has, there has to be a commitment all the way through to understanding. So let me back off a little bit. One of the things that I was asked at Strangelook 2022 by a number of the people I talked to was to investigate what we ended up referring to as invisible work, the kinds of tasks that developers do that are really important and that add value to the software but are not recognized by management, by promotion, and so on. And so I spent the last year interviewing people and trying to characterize invisible work and trying to get evidence about the financial gains of addressing invisible work,
Starting point is 00:39:38 about making space for things like refactoring code, building tools, learning new methods, having postmortems on projects to reflect on what worked and what didn't and so on. And I think that's part of it is understanding. One of the things that I heard at Strangeloop this year was people referring to almost just two different groups, developers and the MBAs. And when there's a disjunction between the engineering and the management, then you get this drift into error by proxy or error by KPI and all sorts of things that don't value the investment activities that pays off actually surprisingly soon in terms of product improvements and hence potentially in terms of the bottom line for the company. So there's a dialogue that has to go on. And I think that every engineering manager who can straddle, who understands the engineering, but who can speak management speak, who stands up for invisible work, who makes evident to management who is less well-versed in the engineering what the benefits are of having a learning culture,
Starting point is 00:41:09 of having reflection, of playing with alternatives, and so on. They need to do that because that information has to pass both ways so that everybody understands where the value lies. And that comes up over and over and over again. Very often, so one of my colleagues, Iram Rauf, had did some work with freelance developers to try to understand their attitude to secure coding. And the biggest determinant that she found was whether anybody was willing to pay for them to do secure coding. So true. As a freelancer, I can confirm your findings. I start out with, it should be secure.
Starting point is 00:41:58 And they're like, but we need it a little faster. And I'm like, okay, but it should still be secure. But we need it to be cheaper. I'm like, no, at this point, I can't help you. Right. And so the whole point is to try, it behooves us to try to articulate both the cost of not doing that and the benefits of doing that in terms of the kinds of outcomes that those clients can hear and understand. Okay. I have a question that I've been sitting on for the past 20 minutes that's been developing in my mind. You aren't just studying this stuff for fun. Presumably.
Starting point is 00:42:32 I assume. I mean, it is fun, but you're not just doing it out of the goodness of your heart. You develop findings about the way people work and things that work, things that don't, the properties of high-performing teams and high-performing people. What I see in software development a lot, and I've seen a lot of companies, and that's not necessarily a good thing for me, but is that some things trickle out of formal study of software development, and they disseminate through people over time, and they become folklore. And so in many companies, you end up with this culture of, well, this is the way we do things. And it has this piece from Agile, and this piece from something else, and this piece that I read in a book somewhere. And it's this mishmash of folklore and they develop a culture out of that. And usually it kind of sort of works, but it is not what you're talking about in terms of a well-formed, well
Starting point is 00:43:35 considered way of working. How do you, not you personally, but how does academia, how do these studies bridge the gap between, okay, we have this information. How does that get... Actionable. Not just actionable. How do you convince people to take a look at this and make changes? How are companies, how do you get into companies and say, hey, look at this is what we found. This kind of office works, this doesn't. This kind of design works or design method works, this doesn't. I just don't see a lot of times that people are paying attention to this one. And it's an accurate observation. So it's an irony to me that it has taken me as long as it has to get into communities like Strange Loop in order to have that conversation, because those are the places where there are lots of people with their ears open. And because they are, I'm sure that there are, you know, numerous high performers there. They're the ones who have the position in their companies
Starting point is 00:44:55 to take the information back and make change. And it's interesting because the reason I got invited to Strange Loop was because of the It Will Never Work in Theory live sessions. And Greg Wilson, for as long as I've known him, which is decades, has been trying to get research results communicated to industry in ways that industry can hear. And the It Will Never Work in Theory blog was one of the efforts that he made to do that, which was fantastic. But then he kind of felt like people weren't reading the blog. So then he went to this 10-minute format, give me something that's actionable that you found in your research in 10 minutes.
Starting point is 00:45:46 And that means make it clear, make it pithy, leave out most of the evidence, don't do the academic thing. And that, in terms of my research, that's probably had more traction than anything else that happened in my career, that 10-minute video. That's what got me into Strange Loop. That's what got me into Joy of Coding. That's what got me into Strange Loop. That's what got me into Joy of Coding. That's what got me onto your podcast. And I think there is a real gap. It is a really hard thing to do. It isn't helped because... So I want to make very clear, as an academic, I see myself as a mirror. All I'm trying to do is to understand and reflect the nature of expertise that already exists. I'm not here to tell anybody how to behave based on what I think. I don't think
Starting point is 00:46:34 I'm right. I think you guys are right. And all I'm trying to do is to distill the wisdom that has arisen out of all of this observation over time. However, there are academics who think they know better. Yes, I've met some. And it's not helpful because what happens then is there's an assumption. lot of good in them where the initiative failed because it came with too much evangelism or too much policing. So one of the things that I see in high-performing teams is they're always paying attention to new developments, things that are coming, new ideas. It is very rare that they find something in some prototype that an academic has developed and then want to buy that prototype and use it in their company.
Starting point is 00:47:32 What they'll do instead is they'll say, oh, that's a cool idea. Let's take that idea and reapply that in our work. And unfortunately, there are a lot of academics who don't understand that that's a perfectly legitimate form of adoption. So I think that part of the problem is that the language of academia is very different from the language of industry. The talk that I would give about how experts respond to error in academia would have to be very different from the one that I gave at Strange Loop, because the one at Strange Loop focused on concepts and examples, whereas the one in academia would have to concentrate on the evidence base and on the relationship to existing literature. And so the forms of what we really need is more people who broker that dialogue, who can make the translation step from research to industry or from industry to research.
Starting point is 00:48:35 The other part that's actually quite difficult is that the timeframes are very different. Industry wants stuff now very quickly, and academia works at a relatively glacial pace on the flip side industry can be very set in its ways and conservative and stubborn about making changes especially when money is involved because i think like taking the open office example that alicia loves to cite, I think there's tons of research out there that says open offices are terrible for everyone. Everybody I've
Starting point is 00:49:10 ever worked with at an open office hates it and says how they'd love to get rid of it, but it's cheap. And so that's the really tough thing is, okay, yes, here are the best practices. You'll do better. You'll save money. But over here is somebody with a real estate balance sheet and they can't make that jump from timeframe from, you know, if I do this over five years, I'll end up better off. But you need somebody to explain, look, you don't buy plastic screws because they don't last long. You wouldn't even consider it. So don't treat your developers like they're cogs and then they won't leave and you won't spend 30% of your time interviewing new people wondering why you can't hire anyone. That's right. So if you can find a relevant piece of evidence where relevant is
Starting point is 00:50:07 determined in terms of their value system, that's how you make a change in practice. So yeah, you want the statistics that says our turnover rate has increased by this much since we went to open plan. And look, we lost three of the people who were core to our business. And if we can reframe our observations about what works and what doesn't in terms of the values of these different stakeholders, that's what we have to do. We need to be able to speak everybody's language. So back to the experts talk, which I do understand is only part of your research and you have other books that I probably should be mentioning and all of that, but... No, that's the one.
Starting point is 00:50:53 There's the 10-minute version on never work in theory. There's the 45-minute to an hour-long one that is on Strangeloop, and I'll link both in the show notes. Thank you. One of the things from the shorter version that really, really hit me as something actionable that I could start doing is pair programming. And the reason I never, I mean, I've done pair programming in the past. I've done it with people. May I pause you? Yeah.
Starting point is 00:51:27 Do you mean pair programming or pair debugging? Ah, right. That was actually part of it. Yes. Pair debugging is what you're recommending. And I've done both. I've done pair programming and pair debugging with one person who was remote and basically my contemporary we had a lot of the same skills but not a lot of the same knowledge
Starting point is 00:51:52 and we became really good friends and had a really fun time doing things together but pair debugging especially when the skill sets are different, so that the expert has to explain what's happening and therefore has to articulate it and therefore has to think about it. And the more junior person is hearing this thought pattern and looking at the code and probably feeling like they're not contributing anything by gaining experience, both in design and development as well as implementation. Why doesn't everybody do this? Why haven't I been doing this? I love pair debugging. It's fun. I know. And yet, even you and I, who are in the same building, often working on the same project, don't always manage to do it. So, because of the agile movement, there's a lot of research on pair programming, particularly with student programmers.
Starting point is 00:52:52 And there are real advantages with students to pair programming in terms of just the kinds of dialogues that you've articulated. What I see much more often in the teams that I study is I see very little pair programming, but I see routine pair debugging. And I saw that even well before Agile was articulated. And because what they're doing in there are key things that happen with pair debugging. So you've already explained it, that you get the dialogue between somebody who sees further and somebody who's just handling a bug at the moment. But it's a really good way to make sure that, for example, more members of the team are familiar with code base, to get people to look across each
Starting point is 00:53:45 other's shoulders, to get new perspectives on things, to pick things up that might have been missed if there's only one person going over and over and over it. To start dialogues about other things. Yeah. And it's a very, very powerful mechanism. And as I say, I see it spontaneously in almost all of the high-performing teams. In fact, there is one company that I studied where they were using pair debugging as an onboarding process. So what they did was they provided selected pull requests to the new person. The pull requests were distributed across the code base. And it meant not only did they trawl through the different parts of the code base, but they also then sat down with somebody else on the team who was the expert in that part of the code base,
Starting point is 00:54:38 or the most knowledgeable about that part of the code base. And so they met the team as well as meeting the code. And in the course of that, it built their confidence because they were doing useful work while they were also becoming part of this bigger picture. And I thought that was a brilliant way of, a strategic way of using pair debugging. Some of the features of pair debugging come up with rubber duck debugging when you're a secondary person or maybe primary is a stuffed animal. But you don't get that knowledge transfer. Yes. So what rubber ducking gives you is it gives you that externalization. and very often just saying something out loud that you're thinking about
Starting point is 00:55:26 basically causes people to articulate assumptions, to stumble over things that their mind just drifts across and so on. So there's real value in rubber ducking. But as you say, what you don't get is the exchange. And you don't get the laughter necessarily. I think that the key is how important is laughter, amusement, jokes, and I can make it feel like it's an actionable story, I feel like it's better code, in part because it's easier to read. But that's the study I want, is does laughter make for better code? Okay. Can I make writing noises now so I can write that down?
Starting point is 00:56:43 Giggling and programming programming together at last i mean it's interesting to me i always um wanted to bug the coffee machine um at the places i was studying because it was interesting how many insights happened at the place where people got coffee together they'd bump into each other they'd have a few words they'd exchange a joke there or something and they could just ask a question um And that happened over and over. And in some places have embedded that. So for example, I was at Mozilla in Toronto, and their kitchen is amazing. There's so much work that happens as people pass through the kitchen and listen to other people's conversations, chime in here, chime in there,
Starting point is 00:57:25 exchange information. It's all very brisk, but it's incredibly powerful. And I think part of that is, I had a student named Martha Hawes, whose doctoral research was on student teams were also the teams that had better outcomes at the end because they learned in that time they built trust they had awareness of each other um and they found it easier to ask questions that's the part i'm absolutely terrible about i remember it was my second or third job i I think, oh, HP Lab. So like my second, third job. And I didn't know enough to be useful. And I spent my day at my desk trying to understand what was going on. And I skipped lunch because I was trying so hard to understand. And my boss came by and said, you're doing it wrong. You need to go to lunch. You need to understand these people more than the technology. And I was so shocked because that went against everything I believed in. I just had to learn all of the information. And there was John saying, no, no, no, no. Stop reading the manual and go have lunch with these people.
Starting point is 00:59:06 I like John. It was weird. And it's still not something I'm good at. But it's part of it. It's part of what you're trying to understand as you're trying to figure out how the software will serve them. Well, they served the wasps my tuna sandwich when I finally did show up. What? served the wasps my tuna sandwich when i finally did show up what you have several books um and we are almost out of time so can you give me a speed run down of your books the only one i think i'd like to give the rundown of is software design decoded um which is basically 30 years of empirical research distilled into 66 paragraphs with
Starting point is 00:59:49 illustrations. Everything in the book, it's a collection of insights. Everything in the book is grounded in empirical studies. And there's a website associated with the book that has a lot of the bibliography for evidence that we built on's a, a bit of a Marmite book. Um, the people who understand what it is, love it. And the people who actually want a how to book hate it. Um, but it is, it does capture a lot of the elements of this design mindset. Um, reflection, and culture that builds into expertise. And it was actually, it's an interesting one because it really did take 30 years to get to the point where I could co-author a book like that. I liked the part about sketching best, you know, about externalizing thoughts and sketching the problems and the solutions. We've talked some about that. But you also had one about experts design elegant abstractions. Yes.
Starting point is 01:00:56 And the paragraph starts, while all developers create abstractions, experts design them. A good abstraction makes evident what is important, what it does, and how it does it. How do you design a study for that? Okay. Well, I certainly haven't designed a study to watch somebody create abstractions. That's the kind of emergent observation that happens over time over lots of studies, where you collect the examples as they arrive, and then over time make sense of them. That insight aligns with another insight, which is about focusing on the essence. And I know my colleague, André van der Hoek, when he teaches his class on software design,
Starting point is 01:01:42 one of the insights from the book that he really stresses with the students is to focus on the essence, because it's really easy for people to sit down and immediately code the bits that they know how to code, which is almost never the part that's the hard part, or that's the crux of the problem, or is the defining element of the solution. And so all of this business about abstractions is starts a little earlier it's about learning to ask the right questions it's about directing attention to the heart of the problem what's the essence what's the core challenge or issue are there analogies in other domains what can we learn from them what are the dissonances among the alternatives i can think of to address this thing. And I have to credit Mary Shaw there, who often talks about insight through attending to dissonance and its relationship
Starting point is 01:02:36 to innovation. So, if we have lots of examples or use cases, what do they have in common or how do they differ? What's most important about them? What's the thing we can't do without? And very often in the course of stripping back to that essence, experts are identifying the thing that they have to express. And that's the first step. Expressing that as a good abstraction is something that they learn over time. I don't know that I would know how to teach somebody to make good
Starting point is 01:03:11 abstractions. But the whole notion of trying to strip away detail, trying to find the essential bits, trying to ask the right questions is, I think, a mindset and a set of practices that people can learn. It does seem a little hard to learn from this book. I mean, your expert sounds like a wizard or a perfect person. And some of these things, I think I do them. Maybe I'm an expert. Maybe I just think I'm, what is it, Cronin Duggar? Dunning-Kruger.
Starting point is 01:03:49 Thanks. So the response we've had from the people who use the book is that they use it, first of all, they read it and they recognize some of it. And sometimes they recognize things and say, oh, I used to do that, and I forgot about that. Yeah, I had to start doing that again. And so they use it as a kind of a way of refreshing themselves. Now, when they come across something they don't recognize, they say, what's that about? I need to understand that. So I don't, in terms of starting from scratch, I think that almost everybody that we know who uses the book will focus on one thing at a time. So for example, there's one group that was, while they were doing their stand-up meetings, they'd pick out one insight per stand-up meeting and talk about it
Starting point is 01:04:39 a little bit and kind of hold it in mind. Because it was just a way to kind of do a refresh on ways of thinking, things that are useful, things, practices we might have forgotten, insights we might have forgotten. In terms of people starting out and trying to build expertise, again, the book is a means for reflection. You don't have to try to embed 66 things. I mean, someone spoke to me about the book being like 66 mantras, and it was like, oh, that's too many mantras. That's just too many. And the point is, you don't have to do everything at once. You do one thing. And when that makes sense to you and becomes much more part of your practice, you can move to something else.
Starting point is 01:05:28 Andre and I are currently trying to do the exposition of the mindset as kind of a path to learning it, learning to acquire that mindset. So we're trying to do the how-to book that's the longer version of this. But we've been at it for 10 years. It's going to take a while. But the sketching stuff is interesting to me because I started in this realm because my driving interest was the relationship between language and thought. And computing gave me an incredible arena in which to investigate that relationship, partly because there were artificial languages that we designed ourselves. But I've ended up over the years not just looking at programming languages and pseudocode, but also at what people sketch.
Starting point is 01:06:19 There's a lot of research about the importance of sketching in design, not just in software design, but across design domains. And there's a lot of research about the value of multiple modalities and creativity and swapping between the visual and the textual and so on. And it's just very interesting to me. Sketching is one of the insights, one of the ways that we have into the kinds of mental imagery that people are using, the kinds of ways that people are thinking internally about problems, trying to figure out what it is that these innovators, these designers are doing in their minds that allows them to encompass incredibly complex problems in many cases, and to keep all the balls in the air, and to find these lean, elegant routes to a solution. One of the things that was very interesting, I did a study at one point where I asked people to work on a problem they currently had, and I just sat and
Starting point is 01:07:32 watched them. And as they, in most cases, literally sat on a chair with an empty pad of paper and a pen or pencil in their hand and waved the pencil around in the air and never wrote anything on the piece of paper, I would intervene with questions. I would interrupt them to ask them, I don't know, if they smelled something or what color it was or what kind of thing. It was very interesting that very often, as I was watching people, I could see them thinking, I could see them sketching in the air. And very often they would drop the pad, say, excuse me a minute, run down the corridor to talk to a couple of colleagues, and then sketch out a solution to one of the big obstacles in that design space. And then very often that sketch became an icon in their design, and they would return to the sketch and they would interrogate the sketch and they would redraw the sketch and
Starting point is 01:08:40 they would challenge the sketch on a regular basis. This whole business about externalizing thought, we talked about the rubber ducking. That is about externalizing thought verbally, but sketching externalizes thought usually visually or in mixed media. And again, it's a way to make things explicit and to allow other members of the team to interrogate the ideas and to have really helpful critical dialogues about what's going on. There's a lot of sketching. Marianne, it's been really good to chat with you. Do you have any thoughts you'd like to leave us with? Probably the main one is that if anyone
Starting point is 01:09:29 has a topic they want researched, I would invite them to get in touch with me. And if there's a response to any of this, I'd love to hear it. That's kind of funny, because there's a whole section in the outline that I did not get to, with our Patreon listeners asking questions like: do coding standards actually increase readability? Which trends are headed towards unsustainable futures? Do requirements and specifications really make projects more likely to ship on time? So you've already got a bunch of those, but I'm sure there'll be more. Well, I'd be very interested in a conversation about that stuff, because there are some patterns here.
Starting point is 01:10:14 One pattern that happens a lot in terms of tools, and I'm using the word tool very, very broadly (in terms of notations, in terms of modeling, in terms of development tools), is that most tools are built by someone to solve a problem that person has, right? And some of the best tools in software development evolved that way. But what happens that ossifies things, that makes them stale or less effective than they could be, is this notion that they have to be adopted wholesale. If a tool has good ideas in it, but it works in a way that doesn't fit well into the development culture that exists in my team, why would I want to change what's working in my team to adopt the tool? So instead, I'll just adopt the idea of the tool. And the failure to recognize that that's a really valid form of adoption is problematic to me
Starting point is 01:11:27 and there are lots of examples, in terms of things like modeling languages, things like specification routes, and so on, where what ultimately happens with a lot of the tools that people create is that, once the big adoption hump passes, people will select the parts of that tool that work for them and continue to use those, and throw the rest away. And that's worth really paying attention to. It doesn't mean that the tool was a failure, because very often what becomes embedded practice is still something that was influenced by that tool, but the tool itself may not be the one that they're using. And that's been true: UML is a great example of that. Another example is formal methods. And all of these things are things that carry overheads.
Starting point is 01:12:33 And what people negotiate within their own practice is that tradeoff between the costs of using it, or the costs of adopting it, and the value it provides. And what they're looking for is the sweet spot where they're getting a sufficient return on investment. They will keep the parts that give them a return, and they will potentially abandon the parts that are not helping them, unless there's some kind of management structure that demands it. And the people who develop the tools often are not happy about that. But if you look at, you know, why are there so many different variations of agile? We could have a whole conversation about what agile is that would carry on for an hour. And the answer is people are selecting parts of the ethos
Starting point is 01:13:19 and parts of that set of practices that work for them. And understanding what works in what context, that's an interesting study. Cafeteria agile. She talks about tools, UML and whatnot, and I'm sitting here thinking about Wokwi and Godbolt and VS Code.
Starting point is 01:13:43 Those are the edgy. Our guest has been Marianne Petri, Professor Emeritus of the School of Computing and Communications at the Open University in the United Kingdom. She's also the co-author of Software Design Decoded, 66 Ways Experts Think. Thanks, Marianne. This was a really interesting conversation.
Starting point is 01:14:05 Well, thanks, both of you. It's been fun. Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for their many suggestions on research that Marianne should do. And thank you for listening. You can always contact us at show@embedded.fm
Starting point is 01:14:22 or at the contact link on Embedded FM. We will forward things to Marianne, of course. And now a quote to leave you with, from Software Design Decoded: 66 Ways Experts Think. Experts solve simpler problems first. Experts do not try to think about everything at once. When faced with a complex problem, experts often solve a simpler problem first, one that addresses the same core issues in a more straightforward manner. In doing so, they can generate candidate solutions that are
Starting point is 01:14:51 incomplete, but provide insight for solving the more complex problems that they actually have.
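As a closing illustration of that quote, here is a minimal Python sketch of solving a simpler problem first. The stream-merging task and all names are hypothetical, chosen only to show the shape of the strategy, not taken from the book.

    # A minimal sketch of "solve simpler problems first". The task of
    # merging many sorted streams and all names are hypothetical.
    import heapq

    def merge_two(a, b):
        """The simpler problem: merge two sorted lists into one."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]

    # The two-list solution is incomplete for the real problem (many
    # streams), but it surfaces the core issue -- always take the smallest
    # available head -- which points straight at a heap over stream heads.
    def merge_many(streams):
        """The harder problem, reusing the insight from the simpler one."""
        return list(heapq.merge(*streams))

    print(merge_two([1, 4], [2, 3]))             # -> [1, 2, 3, 4]
    print(merge_many([[1, 4], [2, 3], [0, 5]]))  # -> [0, 1, 2, 3, 4, 5]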
