Embedded - 488: Two Slices of Complimentary Bread
Episode Date: October 31, 2024

Adrienne Braganza Tacke spoke with us about her book Looks Good To Me: Constructive Code Reviews. It is about how to make code reviews more useful, effective, and congenial. Adrienne's book is available now as an ebook at manning.com or a paper copy later in the year (Amazon link). Check out the example Team Working Agreement from Appendix A. Adrienne's personal website is adrienne.io.

Transcript
Welcome to Embedded.
I'm Elecia White, alongside Christopher White.
Our guest this week is Adrienne Braganza Tacke,
author of Looks Good to Me and Coding for Kids,
the Python version.
We're going to be talking about code reviews.
No, don't leave yet.
I know, I know,
we all have some damage here, but let's just see what Adrienne has to say.
Hi, Adrienne. Welcome.
Hi, you two. Thank you for having me on the show.
Could you tell us about yourself as if we met on a panel discussing technical writing? Sure. I am a software developer turned accidental developer advocate,
and I've been in software development for about a decade now,
anywhere from C Sharp to JavaScript.
Now I'm taking that knowledge and I am trying to decipher developer topics like code reviews
and really make them more approachable, more fun, and make them come back into the spotlight again.
Because even though we think we know these things, we really don't.
But the more important thing is that I really, really like pastries and vintage shopping.
And I love Age of Empires II.
That's my favorite computer game of all time.
All right.
We do this thing we call Lightning Round,
where we ask you short questions and we want short answers.
And if we're behaving ourselves, we won't jump in with our own answers
or ask why and how.
Are you ready?
I am ready.
Is it easier to read code or write code?
Oh, write code, definitely.
What are you going to be for Halloween?
Oh, my.
I am going to be Adrienne with a job because I currently don't have a job.
I was part of the Cisco layoff. So Adrienne with a job.
Let's just say that. What is the best thing to see on vacation in Portugal?
Oh, the best thing to see is the Ponte Luís I bridge from a restaurant called Muro do Bacalhão.
What is your favorite Portuguese pastry?
Oh, I mean,
Pastel de Nata, hands down.
There's a reason it's famous.
Okay, I need to know what that is.
It's like a little egg tart in a flaky pastry shell,
and they're really small.
They're like little mini egg pies.
Yeah, they're really good.
You can eat like 12 of them.
Are these like quail eggs?
Oh no, they're sweet. They're made out of custard, like pastry cream inside.
Is that your favorite pastry of all time?
That's a tough question. It's probably one of them because it's one of those I could eat
a lot of and not get sick of it.
So that's kind of my
bar there.
Actually, my
favorite... No, I'll go with that.
That's a pastry. I'm now veering into
desserts, which people get upset because they're
like, that's not a pastry. So yes, I will say
that is one of my favorite pastries of all
time.
Favorite campaign in Age of Empires 2.
Oh, I can't decide.
I'm sorry.
I can't.
All of it. Complete one project or start a dozen?
So, idealized Adrienne says complete one project.
Real Adrienne says start a dozen.
If you could teach a college course, what would you want to teach?
Oh, pastry.
Like how to be a pastry chef.
Specifically, how to create viennoiserie.
I may pronounce this incorrectly.
It's any pastry that deals with bread or dough.
So that's what I would like to do.
Can we just have this show about pastries?
We totally can.
You're going to come back and do a pastry show with us.
I've decided.
Down.
I'm down.
Do you have a tip everyone should know?
A tip everybody should know is that our library system is actually pretty awesome.
And there are a lot of books, even new books, that you can likely find there.
So check out your libraries because there have been so many books that I have found at the library system.
And your library will offer you electronic books.
And movies and audio books.
They have all sorts of stuff.
And local community activities.
And I hear that our local library is going to have
an art display in November.
That's awesome. It's all going to be octopuses
made out of paper.
Even cooler. I'm going to go
to your library system now.
She's plugging her own
art installation.
Yeah.
Okay.
So your book is Looks Good
to Me. The title is Looks Good to Me.
That's the title.
That is the title.
Yes, okay.
That is the title.
I just don't want there to be confusion.
But that phrase means, I waved my magical wand over it and I didn't look at it and I don't care.
And please go back to your own job and quit bothering me so I can do my own things.
Why didn't you name it I Don't Care, Go Away?
I think the reason...
Do we have a relationship with code reviews here? Maybe it's biasing us? Okay, sorry. Go ahead.
I think you're the right folks to talk to because this is exactly why I wrote this book. No, that exact feeling that you've just described is exactly why I wanted to write about code
reviews. We are all quite familiar with this now-infamous acronym, LGTM, or even the emoji, if you use the thumbs up. And it doesn't look good to me. The process does not look good to me.
Everybody knows that, oh, I don't want to say everybody knows, most developers have this type
of relationship with that phrase. And so I wanted to dive deeper into why that is, why it is such a
key part of the software development process. So hated, so dreaded.
Why doesn't it work?
It makes sense logically,
but then what causes it to not work as we intend it to?
And that's really the catalyst for why this book was written.
Okay, why doesn't it work?
Do you have two days to talk about that?
350 pages ish.
Yes. If you want to really dive deep into it, definitely check out my book. That's exactly what I write about, but also how to fix it. But I think the, one of the largest reasons that
it doesn't work is that we don't really outline it as a team in terms of what actually goes on in
the process and what we want the process to do. So I found that a lot of teams, including my own
and several teams that I've been a part of, it's either this is the process and everyone just kind
of follows it, especially if this is their first time being acquainted with a code
review process. So they're like, oh, this is the code review process. And they take that with them
and think that is how all code review processes should be. Other times they just don't have one
at all. And so when they do get to a team that has one, they're like, what is this bottleneck?
This is not
normal software development. I think people take their very first experience of what a code review
is supposed to be like and bring that forward with them into their careers. And that can be a good
or a bad thing. But as we've seen, and as I've experienced, most of the time, it's a bad thing
because it's not enforced the way that they think it should be.
There's a bias in the system. They're using the wrong tools, or they're not using the tools that they should be, and they aren't enforcing the process the way they want it to be.
There's a lot of people problems, a lot of human bottlenecks that are in the process as well. So
out of all of those things, I think they just contribute to a process that is not what we
intended to be. One of the things I liked best about your book was Appendix A. I admit I haven't
read it all. I mean, I've read all of Appendix A. I haven't read all of your book because I got lost in Appendix A and just wanted that more than anything else.
Could you talk about the team working agreement?
Absolutely. I think that this document, this team working agreement is what we call it. We didn't start it. We meaning the
software development industry. We kind of borrowed it from the project management world. And it's
this document that's supposed to get everybody on the same page. It's supposed to make implicit expectations explicit. And it is such a great thing to use for a team's code review process because there are
so many implicit expectations that we have around the code review process.
And like I mentioned, everybody has different experiences with the code review process.
So someone may think, oh, I can just merge straight
to production or I can approve my own pull requests if they are using pull requests,
while someone else may have come from a more strict process and go, what are you thinking?
That's something you never do. And when you combine all of these different experiences and
different levels of strictness in the code
review process, that's kind of where the friction comes about with teams because people are not
living up to the expectations of their colleagues or their managers when it comes to the code review
process. So this document, I talk about it in the book, and it's tedious,
but the first thing you have to do is to actually sit down with your team and talk about these
different things. You have to come to an agreement on what those different pieces are about your code
review. Some of the things I suggest adding into the team working agreement,
which have usually been the cause of a lot of debates or tension are things like,
what is your actual process, the steps and the workflow? Because you'd be surprised if you asked
your team, hey, can you tell me what actually happens in our code review process from beginning
to end? And a lot of folks have different workflows. They'll have different steps.
Yes. One person's like, I ask people for reviews, nothing happens. I merge my code. Someone else is
like, you do these five things in order. And if you don't do them, then you can't pass anything.
And that's why my code never gets merged. Exactly.
Nobody knows the rules. What are we supposed to do with code reviews?
Exactly. So that's number one is getting essentially everybody onto the same page.
Another thing is what are your actual responsibilities in terms of author and
reviewer for the reviewer, for example, what are you supposed to focus on?
A lot of folks get into really long debates with their authors because they are leaving feedback
on things that the author feels like shouldn't be feedback, or they leave comments on things
that they feel don't need to be addressed. So that's another really good thing to discuss is what is the responsibility of the reviewer?
What should I look for?
And alongside of that, what are topics or issues that are blocking versus non-blocking?
So if you have differences on what you think is severe enough to stop a pull request from
going through, those need to be sorted out because
that's, again, another source of debate, of tension. And also a reason why code reviews
take so long is because you just get into these really long debates about who's correct or why
are you stopping my code review? Or it's the other way around. Oh, this is something so silly,
just merge it through. So these are all the gaps that we don't
talk about, the things that are implicit that we think we know and think everybody else on our team
knows. But if we take the time to actually talk it out, to work it out, and to at least start with
some sort of baseline and put them in this team working agreement, that means you've all agreed to it.
That means you've all co-created this document. And that means you all can enforce this in your
process. So now instead of, I think we should block this PR because I think we should do it,
it's I'm blocking this PR because our team working agreement says we should. And now it becomes a more objective thing versus a subjective thing.
So I can go on about it.
I love this thing too.
But the whole point is to make a lot of these implicit expectations and guidelines explicit
so that your team can use them and actually progress the process forward rather than get
stuck in these debates.
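As a concrete sketch of the kind of document Adrienne is describing, a code-review section of a team working agreement might look something like the following. These entries are invented for illustration; the real example is in the book's Appendix A.

```text
Team Working Agreement — Code Review (hypothetical excerpt)

Workflow:
  1. Author opens a PR with a filled-out description template.
  2. At least one reviewer approves before merge; authors never approve
     their own PRs.
  3. CI must pass before review begins; formatting is enforced by tooling,
     not by review comments.

Reviewer responsibilities:
  - Focus on correctness, clarity, and fit with the design, not on style
    the formatter already handles.
  - Respond to review requests within one business day.

Blocking vs. non-blocking:
  - Blocking: correctness bugs, security issues, missing tests for new
    behavior.
  - Non-blocking: naming preferences, optional refactors. Note them,
    but don't hold the merge.
```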
I do like the idea of blocking versus non-blocking comments. And where blocking is,
no, I don't think you should go on. And non-blocking is, I'm making this comment,
I think you should think about it, maybe reconsider what you're doing, but if you opt to continue.
Or follow up in a later PR or whatever.
Right. Go ahead and merge this. We can talk about it later.
Personally, I've always used a priority system for my comments, whether they were bugs or ought-tos or maybes or nits or kudos, which are the good things, which I do think you should
point out. Do you use a priority system or is it just blocking or not blocking?
A combination. So as you were talking about it, that brought to mind another thing I talk about
and have used on my teams, and that's what I call comment signals. Other folks call them categorizations; there's a person, Paul Slaughter, who has a well-known set of these called Conventional Comments. These signals tell an author: what should I look at as an author versus what should be safely ignored, or what should not necessarily mean I have to look at this right away.
And this is something I bring up because for me, whenever I receive my pull requests and I see
comments, I automatically assume that all of them mean work.
Like every single one means there's something I have to change. So that's kind of...
And each one is a little tiny stab, either with the...
Yeah. Yes. Yes. So part of decreasing the delay in a code review is for an author to know,
well, are all of those comments actually all work,
like I'm assuming? Or maybe there's actually only two things that require feedback or addressing,
rather. And the other few are just things, like you said, to look at later on or a nitpick.
And I think that's really important for reviewers to do so that if they
are leaving feedback, authors can easily categorize the feedback that they receive.
And then they can mentally say, okay, here are the ones that I need to address right now. Here
are the things I need to look at versus sifting through all of the comments that may not be labeled or categorized and then having
to determine what needs to be worked on versus what does not need to be worked on. And so, yeah,
the prioritizations that you were talking about, my team used different ones too. We had nitpicks,
even though I absolutely hate nitpicks. But we had one senior whose feedback often only fit into a nitpick, so we kept that comment signal.
But the other ones we used that worked for our team were needs change, which it's very clear we needed to work on something.
Small change that could be updated easily in the PR.
Needs rework. So this one is we're seeing something
and there's actually a lot more discussion that needs to be had. Typically we talked offline and
said, you know, there's a lot more things that need to change here. And then we also had a level
up, which we added later on, which is akin to, this is something that you could take a look at later on.
It doesn't block it right now,
but something to look at for the future to improve this code.
And then something like praise, which is similar to kudos.
If you saw something really, really awesome or a really elegant solution,
we would use that to tell our colleagues, hey, awesome job. This is really,
really cool. Or I didn't know that. I learned something from you today. Thank you for sharing
it. So those are the different ways we categorize our comments in our code reviews.
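To make those signals concrete, labeled comments on a pull request might read like this. The label names are the ones the team above used; the comment text is invented for illustration.

```text
[needs change] This loop reads past the end of the buffer when len == 0.
               Please add a guard before indexing.
[needs rework] This touches the retry logic in a way we should talk
               through offline before it merges.
[level up]     Non-blocking: this parser could use the shared tokenizer
               in a later PR.
[nitpick]      Non-blocking: "tmp2" could have a more descriptive name.
[praise]       Nice use of a lookup table here. Much clearer than the
               old switch statement.
```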
A lot of people, when we give criticism, are taught to use the ham sandwich method.
I learned it was the turkey sandwich method.
Whatever. It's compliment, criticism, compliment.
And when you're trying to tell somebody that their
presentation needs work, you try to start with
a, I really like the material.
You should consider not using the Darth Vader voice,
but I also really like your shoes.
We never do that for code reviews.
That's because it's supposed to be efficient and cold and bloodless
through electronics.
Yes.
Not yet.
Although many things in code reviews should be done by robots.
I don't want to talk about formatting.
I'm,
that's over.
Yeah.
Um,
is there a way to convince my team that they do need to highlight the good
stuff as well as the bad?
I mean, the compliment or criticism, whatever, the sandwich, just call it the sandwich.
It is a good tactic to use. Not everybody agrees with it because
some folks are just like, just tell me, tell me what it is because I don't need the buffer of the two slices of complimentary bread to tell me and to prepare me for the criticism.
But other folks do. Other folks like it that way. One thing that I think would make comments better
is to just write them in a more objective way. And I think a lot of folks have trouble writing objective comments.
I think that's where a lot of the feelings of being attacked or critiqued, the feelings of
it being bloodless, soulless, that it's just this terrible thing that we have to
see everything that we did wrong. That's where it comes from because the feedback that we get is just
written poorly. It's terrible communication. And with that just comes a higher chance for
miscommunication, for misinterpretation. And if you don't understand each other, author and reviewer,
then again, it delays the code review because now you have to continue talking back and
forth to get to the same page. And so I do dedicate an entire chapter to writing comments
because that's how important it is. And there's one tactic, I think, a way to help you formulate an objective way to ask somebody to do something or to give some sort of critique. And so the first part is the request, or the critique, if you will: what is it you're asking the author to do? What
is the piece of feedback,
the rationale behind it? So why are you saying this? A lot of people forget to add that part.
Sometimes they just give the request or the critique and then you're left wondering as an author, well, this sounds subjective or okay, well, tell me more, tell me why. So the rationale
is really, really important because now you're giving a reason as
to why you're saying that first part. And the important part about the rationale is that these
should be objective sources to support what you're trying to say. So is that the team working
agreement that you're citing? Is it a blog post that you've read that's related to what you're
trying to say or just supports what you're trying to ask
for? Is it a coding convention? Is it something objective? Because if you just say, I think it
should be done this way, that's when people have a problem with it because, well, why is your way
better? Why should I listen to you? What about that is objective enough for me to accept that type of request or critique?
And then the final R there is the result.
So some sort of measurable end state that the author can compare their changes to or
try to get them onto the same page of what the reviewer is asking them for.
So: request, rationale, and result. Having that be a way for reviewers to compose their comments, I think, helps reviewers write in a more objective way. If you just say, oh, you should change this, and don't have any rationale behind it, it's more likely for the author to take that as
something subjective rather than objective. And that's another thing I say is that reviewers need
to be objective. But I think this would work for critique as well. Having an objective reason for
what you're saying usually tends to have the author accept that feedback much better.
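Put together, a comment following that request, rationale, result shape might look like this. This is a made-up example, not one from the book; the function and constant names are hypothetical.

```text
Request:   Could we validate the packet length before calling
           parse_header()?
Rationale: Our team working agreement requires bounds checks on all
           externally supplied data, and this buffer comes straight
           off the wire.
Result:    Inputs shorter than HEADER_SIZE are rejected with an error
           code instead of being read past the end of the buffer.
```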
So you're saying if somebody posted a PR, a lengthy PR, and then got a response back from
a very senior reviewer that simply said no, that this would not be an appropriate comment.
Christopher's still mad about that.
It didn't happen to me personally. I had things almost as bad, but that happened to someone I knew.
I mean, using
our pattern here,
you could say,
with very few words,
you could say, please break up,
no too long,
make smaller PR.
So, using the three there,
I could totally understand.
But yes, even a big old no is missing some context.
Do you understand?
So, yeah.
I have a question, kind of, not to get all Marshall McLuhan,
but when I first started doing development,
I was at Cisco actually.
And back then the tools, the development tools we had were, you know, email and a very simple bug tracking system.
ClearCase? No, this was before ClearCase.
This was CVS.
Well, I mean, if you have to choose between those two.
CVS for source control and
we had our own bug tracking
system, which they might still be using. I don't know.
Probably not. Anyway, the bug tracking
system did not have code reviews as part of it. You couldn't put code into it. It was just, here's the
title. It is the most beautiful thing about GitHub. I'm not done yet.
Is how the PRs work. I'm done. Sorry. It's important.
And so when you wanted to have a code review, you pulled a diff yourself in CVS, you pasted
it into an email, you sent it to your team with code review please
and the ticket number,
and then you put a little description in. And people would email you back,
reply all, and everyone would have a conversation over that. It doesn't sound like it, but that
feels different to me than the way modern GitHub process with JIRA and code reviews works.
It felt more personal. It felt more, I'm having a conversation with my team
than I'm having a conversation with a robot
that my team is participating in.
And I'm not saying either one is bad or good.
My point is,
how much do the tools influence
how people review code
and how they,
I guess the attitude they put to it and maybe how they feel about the comments.
Because I felt like I was sending an email to my team and they were responding back to me and we'd have a discussion.
Whereas now when I put a PR up, I feel like I'm throwing a stone tablet over the wall for people to peck at.
I know it's not a big distinction, but I've been thinking about that lately. And I think it matters kind of what medium the code
reviews are done under. No, you bring up a really great point. And that's the tools do matter. And
what works for your team is going to differ team to team. So yes, pull requests, merge requests,
change requests, this notion of using a tool to put
those code changes up and have comments left on it. That's certainly the more well-known and
popular way to do it. But like you've mentioned, that's not the only way to do it. And what you've
described, this email review is still being done. It's just not something that I discuss in my book and it's not
something that should be ignored either because that works well for other teams who may like to
talk to each other, who feel that they can talk to each other. And if it's critique or constructive feedback and the medium of talking via email works
for that team, then why should you move over to pull requests or a tool that uses something like
that because everybody else is doing it? That's a really important thing to remember is that what works for your team is the best thing to use. So it doesn't
have to be a tool that uses pull requests. There's another form of that called patch changes,
which I think is very similar where everybody discusses a patch change and looks at the code
and discusses it via email as well. And again, if it works for your
team, the process that works for your team and that actually achieves the goals that you intended
to, whatever those may be in your reviewing mechanism, I believe that's the best one to use.
And the other process that I also think about came up as part of my research into what code reviews used to be like: Michael Fagan at IBM, who's been credited with the first kind of
formal inspection process that we do. And for the folks who complain about our PR process today,
they should go through that.
And I was reading about it.
I'm like, oh my gosh, there's a moderator.
There's a full meeting set up.
You know, code is printed out on paper.
Like, oh my gosh.
I've done that.
I've had those meetings.
Yeah.
And so in my opinion,
there's a time and place for that as well.
And I would love for teams
who have only done the PR
process to actually see what that would be, do a Fagan inspection and take a look, sit down,
have a really formal meeting about it and discuss just the code changes you were about to make, rather than do it in the pull request, and see, is this actually a better way to review code? Because now you're face-to-face, it's more synchronous.
You get all the questions
or hopefully all the questions answered
and you do it in this type of way.
Or do you like the asynchronicity?
It's gonna depend on the teams.
Depending on the team, there are different attributes
of different tools and different ways
that will work for them.
And so I'm really happy that you did bring that up because it is a very valid way to do code review.
And I will always argue that if there's multiple eyes looking at code, no matter how you do it, that is way better than not having any type of review at all. What about the reviews, I mean, more Fagan-like, where everybody gets in a room and nobody's
read the code ahead of time except for the author?
Have you done those?
Can we just call those terrible?
I mean, I have not personally done that.
What comes to mind is maybe you could do something Amazonian-like where it's not fully
the same, but before every meeting, they do give folks around five minutes at the beginning of each
meeting to read through meeting notes or read through a document about what they're about to
discuss in the meeting. And so I think that's still valid because if you give the
time to the team to all look at it at that same moment at that time, then you're at least laying
some sort of foundation for the rest of the talk. But I would think that having a more asynchronous
form, depending on how large your team is, would be a better way to look at
that code. How long does it take you to do a code review? I mean, do you want more? I can ask a more
detailed question, but. Well, that depends. If it's an ideal pull request that I have received, then it usually takes me around
half an hour, half an hour to 45 minutes to really take a look at it, dig deep into the
pull request description, see the code and do what I feel is a proper review.
If it's not an ideal PR, I send it back if it's too large because I'm like, no, I'm not going to review this.
It's impossible for me to properly review this.
And sometimes there are pull requests that just cannot be broken down despite it being large.
And so then it takes the whole team to look at it because one person, again, is not enough.
And if it's such a large change, we all need to be apprised of it.
But in an ideal state, around 30 to 45 minutes.
Let me add a few parameters and maybe the answer is the same.
Sure.
Given it might take you N hours, whatever N is, to write some medium, small amount of code,
say 250 lines of code. How long would it take you to review code of the same complexity?
So you write this piece of code, I don't know, let's call it the SPI driver, and you're going to review the I2C driver. Those are not equal complexity.
Just go with me here.
But let's say writing a ball of code takes you N hours.
How long does it take you to review it?
And remember in lightning round,
you did say it's easier to write code than it is to read it.
That's true. You knew that was going to
come back and bite you, right? All of those questions
are going to come back.
Mostly the pastry ones.
I mean, that
depends, right? Am I
reviewing my own code?
Then, yeah, it's going to take even shorter
because I'm very well acquainted.
The question you're asking is
what level of effort, fractional level of effort, should be applied to reviewing versus what's applied to writing?
I guess fraction, yes, but fraction doesn't have to be less than one.
Sure, right, right. But it can be team-wide or whatever. The obvious answer there is yes, it's going to take a bit more effort to review code. And that's
also why I said writing code is easier because when you review code, assuming we're still in a
PR process here, it depends on how well was the context and nuance captured in the description?
How well is the code actually written? Are there code review
comments? Is this something I'm familiar with from previous issues or not? Is this a part of
the code base I'm familiar with or not? There are so many parameters that definitely influence how
well and how long it's going to take me to review that code. And so the more unknowns there are,
and the more unfamiliar parts there are, it's going to take longer. So I would argue that
just the same amount of code being asked to be reviewed of me would likely take longer.
Okay. That's really an important thing to say that I don't think enough people hear.
If it takes you an hour to write a blob of code and a coworker gives you,
coworker who is the same as you, you just don't share a history,
gives you a blob of code of that same size and complexity, it's going to take you an hour to review it.
And don't try to do it in 10 minutes.
It's not going to work.
I think that's not necessarily a thing that developers need to hear,
but that needs to be part of it.
Managers need to hear it.
Code reviews are not free.
When I was teaching, I gave students like 10 minutes to review each other's assignments and discuss them.
And I thought this was plenty of time.
The assignments were not exactly the same.
Everybody implemented their things differently.
But, you know, they were all doing pretty much the same thing.
And one of the students asked me to show them what I meant by review each other's
assignments. It wasn't like I was going to grade them, come on. And the student, Carrie, who asked
me this was entirely correct. It took me 40 minutes and the rest of the class to review her assignment.
And some of that was because I was talking and answering questions and generally showing off what knowledge I have.
But 10 minutes wasn't long enough to look at strange code, even if I knew what the code was supposed to do.
And how do we take away this mindset of code reviews are supposed to take 10 to 20 minutes?
I think, I mean, I do have that guideline and I
will say that as a caveat, it is just a guideline that that is something to strive towards. But
the main goal there is just to have pull requests be smaller. So on one hand, there are code changes
that can be so small, so atomic that it would fit into that guideline of 10 to 20 minutes.
Oh, yeah.
But yes, what you're saying, I completely agree with. Maybe folks take that guideline and say, oh, I need to only spend this much amount of time reviewing.
And that's not something that I
am advocating for. And because what I advocate for is for people to give a thorough review,
more than just some arbitrary time and what that means. And depending on the code,
that could mean a lot longer than 20 minutes. That could mean a lot longer than an hour.
And that could mean a lot more folks than just yourself looking at that code.
So how do we tell people that this is the case?
Hopefully they listen to this podcast and understand that, hey,
reviewing code is a lot more work and it's a lot more involved than writing code.
I mean, we have things, we have robots that are writing code for us, right?
So even we're not doing that anymore.
We could conjure up code in a second.
So the reviewing is even more important now than just the writing.
And so I think if we start to tell folks that reviewing is still important, number
one, that code review is still important, number two, and that different
people take varying amounts of time to reach their own understanding of the code, we normalize that and say, don't try to rush through the review.
Or one other thing that might cause this is, for example, incentivizing wrong metrics for the team
to quickly approve your pull requests or how many pull requests are you closing a day. If we have
those sorts of metrics. Write myself a new minivan.
If we have those kinds of metrics, those definitely play a part in why developers
might rush through reviews. And again, as I think we all know, any metric that's used can be abused.
Yeah. But I think if we just try to normalize
and tell developers that thorough review
means really taking some time to go through it
and understand it and that that's okay,
that's number one.
But number two, also telling tech leads, managers
to encourage that and to value a proper review rather than just going through it quickly
to give their team the environment to do that. That's another part that needs to be there.
Do you think there's a threshold at which, I mean, I can imagine a section of code that has many lines.
Speaking of metrics, lines of code is a terrible metric, but the complexity of one piece of code
of a certain length
versus another piece of code of a certain length
can be quite different.
One might have an algorithm, one might have
complicated mathematical things happening in it,
one might be a string parser.
Do you think
there's a line at which we shouldn't
be doing a code review, this should be a design
review?
Yes, and I think that those should come much earlier in the process. I mean, if we
think about what I think a normal workflow would be, usually you plan and you design what this feature or this fix might be, and you start to write that code.
And then for most people, they don't have any other type of check in place or review in place.
The next check or even final check or only check that might come might be the code review process.
But yes, if it's something that is integral to the program,
if it is something that's a bit more complex, I would argue that, especially if it's something
that a lot of implementations might be available, it is worth discussing, certainly, in something
like a design review to actually get to an agreement before the code review.
Because then if you're only discussing and reviewing for the design at the code review
process, it's too late. And if you decide, oh, there was a better way to implement it,
or, oh, there was an edge case that was missed. Well, the code's already written. And now what
do you do? You've wasted all that time.
If you do decide to go with the other implementation or the developer who wrote it
is definitely going to feel upset because they've spent all that time writing it. Now they have to
start over. All of those things should be caught much earlier and something like design should be discussed much earlier,
way earlier than the code review. Do you ever see a PR and realize, wait,
I don't want to do a PR here with my GitHub system. What we need is actually to take a step back and
either have the author go through the code or do the design review we should have done.
Where do you pull the brake lever and say, we're doing the design review, the PR is not what we should be doing right now?
In an idealized world, we could pull it and do it over, but it's never an idealized situation. It depends on the urgency of the code that is needed. So if this was something that we had planned and we promised would be delivered at some sort of deadline and we have reached that part and we
are already there. It's a balancing act. That's another thing that developers have to do is,
okay, there's so much we can do to be proactive and to make sure things are as good as they can
be before they are out. But there's also the business value that we have to
deliver and the needs of the customers that need to be out there. You can't spend all the time of
your career crafting this one perfect solution and never deploy it out because that's not likely
what the business needs nor what you were hired for.
But it depends on that type of urgency and what type of feature that is.
If it's something that is super new, for example. That happened to me one time: we promised a new feature that was going to really, really help a couple of customers.
And in the code review process, we did find that we missed a couple edge cases.
So we really took the time to say, okay, if we spent one more sprint on this, this is going to
be better in the long run. And this is also not going to cause this potential edge case to happen,
especially if we got a new client onboarded and they had to use this feature. Sometimes it's able to be
delayed in that way. We make the change and we do that to address the edge case and it's good to go.
Sometimes it's a little bit more clear and you're like, nope, this customer has been asking for this
for like three months and they have been waiting for it. We promised it. The CEO's name
is on here. Like we need to deploy no matter what. And the value of having it out, even if it is
a little bit buggy, outweighs the time delay that it would take to fix it and make it perfect.
If it did require more than just a simple fix, like a day's
delay or something like that, if it would take a whole sprint or two to fix, we would choose to
deploy right away, because the benefit outweighed the time it would take to make it
better, perfect, idealized developer code. So these are the choices that are going to differ team to team.
Some will really be stringent about it and say,
no, this is going to cause more work for us in the future.
But this is where teams need to align and say,
where is that line for us?
What matters for us? And what are we okay with? And if
we do go through and deploy something out that might need work later on, do we have mechanisms
in place to make sure that doesn't get lost and that it doesn't go to the backlog and it doesn't
become technical debt? Those are the other things that are, I guess, outside the
scope of just the code review itself that a team should be thinking about and should have processes
in place to address so that these types of things don't occur, meaning technical debt,
or buggy code just left out there in the wild.
Something I feel is related: your book started with
a story about a guy writing a feature quickly and then going off on vacation. His co-workers
can't figure out the code or fix the bugs for the big important demo, and then the story ends.
As I said, I'm not finished with your book. Do we ever get back to the story? Ooh, see, these are the types of things that now I wish that it wasn't about to be published
because no, we do not get back to the story.
But I think the story was there to kind of lead you into the book to say, if you don't
want to be in this situation, continue reading the rest of my book so that this
never happens to you. It's more of that type of, come join me on this journey in this book.
But that would have been actually a really cool thing to do is to end and finish that story at
the end. Maybe I can tell my publisher to add that part and then we'll get this book in 2026.
I found it hard because it sounded like it was a short feature,
like the guy going on vacation whipped it out in less than a couple of days.
And so I wasn't sure how code reviews would have helped.
I can see that point.
Yes, it was something completely new, a larger feature, and something that none of the team was familiar with. I think the point I wanted to
make there was just, this has happened before to me. It happens to everybody. Yeah, yeah. There's just the rockstar engineer who decided to spit something out over the weekend or over a couple of days in isolation and just had it ready to go, or merged it straight in, even worse. And while having a code review may not have caught everything, it certainly, in my experience and in the experience I'm talking about, would have at least given us a way to say, hey, there's this new chunk of code that is going to cause us some headaches and
overtime, and at least give us a chance to take a look at it. So this comes from one of the main
principles that I really believe in, and that's the more eyes that are on code, the better.
And that story was ideally setting the stage for that in this book.
Was getting the book edited like having your code reviewed?
Absolutely. It was very close. I talk to people who are unfamiliar with code reviews or software development.
I talk about the editing process as a proxy because it's pretty much the same thing.
And it was like code reviews because I spend a lot of time writing these chapters, writing these examples, adding anecdotes.
It is still something that I have conjured up myself. And so when it goes to
reviewers and it goes to my editor and you see a bunch of lines here to make it more succinct,
you see some comments there that says this doesn't flow. You see feedback that says, this is unclear. Something is missing here.
It absolutely is like the code review process because there are pieces of feedback that made my book a lot better.
And I know I talk about this in the book where I say, don't let code reviews affect you and who you are as a developer.
When I write this book, I would say it's a little different,
but it's still hard to see some of the pieces of feedback.
Now, granted, there were other pieces of feedback that were not as nice,
just like in code reviews.
Some folks were like, really?
Do we really need to add this section here?
Or, this is superfluous, without telling me why. So I had that same mindset of, I'm open to your feedback,
but if you just give me this like harsh piece of critique without telling me why,
I'm less likely to listen to it. There were other folks who had great feedback; they gave me a critique and explained it.
One example was, I was reading this chapter, and I was reading this particular section,
and this to me felt like it would be better in chapter three. And that would fill the gap that
was in chapter three that I was expecting to read after reading
the intro of chapter three.
So it was a very in-depth comment and explanation of why they were giving me this feedback.
If I had just gotten something that says, I think this would be better in chapter three.
Okay.
Okay, bud.
Sure.
That's how I would accept that
type of feedback without any rationale, without any explanation. It sounds a bit more subjective
and it's the same in code reviews. If you don't give an explanation or at least come from an
objective background when you tell your feedback, then it's less likely to be accepted by the author. So
absolutely, editing was very much like code reviews.
What made you write a book? Write this book?
I think I, again, got lucky. I get lucky a lot. Manning reached out to me. I didn't even set out to write anything. It's not like I
pitched this. Somebody just reached out to me and said, hey, we have seen some of your conference
talks and some of your work. And we wanted to know if you were to write a book with Manning,
is there any topic you have that you think you could write a few
hundred pages about? And I'm like, yes, code reviews. And so the opportunity came up and
now I'm actually very thankful and very happy that it did because I hope this book becomes the code review book. And so I've spent a lot of time on it.
It's a labor of love, and I really just want it to improve code reviews everywhere.
So, yeah.
One of the things that I liked about the book was the areas that felt a little bit like a workbook. The team working agreement in
the appendix where it asks
you to list the prioritized goals
for the review process because not
everybody is in it
just to find bugs. Sometimes it's more about
the mentorship or the
learning transfer. But also
the list of responsibilities
for the author
and for the reviewers. It was just nice to have
check boxes I could check off. And I wanted to be able to do more on that side.
Basically, I want to turn your book into a workbook,
which is not a useful question at all.
What do you think of that? There we go. No, you're on the right track. As I would receive
feedback from my reviewers, I had three major review cycles. So one was after the first four
chapters were written, one was at the halfway point, and one was at the end of the final
manuscript. I received similar feedback where they said they liked the interaction,
they liked the posing of questions, and they liked the parts that they could take back to
their team and either start a discussion with or fill out with their team. And so before
that feedback, there weren't many opportunities to fill things out yourself or do things in a
more interactive way. After I got that feedback, that's why I have the PR template examples there.
That's why I have the emergency playbook starter.
All of these things I didn't even think of.
I was in the mindset of just, here's what I need to share with folks.
But I never thought about actually giving them something to take and go forth after
being inspired by the book, let's say.
So that was one part, which certainly improved the
book a lot. I'm also now thinking about how to actually turn this into a workshop because I also
present the best parts of this book in a conference talk. And a lot of developers come to me and say, do you think this is something
that you could talk to my team about?
Or how would you orchestrate starting this conversation
and getting us to that foundational point
with our code review?
And one of the parts that I think
would make a great workshop
would actually be the formation
of the team working agreement and part of chapter three, where you talk about and discuss your code review process.
So one part is actually building it if you don't have any at all.
The other one that I think would apply to most teams are you have a process in place, but there are lots of gaps or people don't know all the steps
or people are not happy with it. And so in chapter three, I talk about how to go through it, how to
find those gaps or find the weaknesses in your process, and then start to address those things
and change your workflow. So those are two ideas I've had for workshops. I have not pitched them yet.
They're very much in the high level stages, but you're on the right track and the appetite is
there. So you saying that is certainly making me feel like I need to think about that more to start making some workshops out of those particular portions.
One of the lists was the list of reviewer responsibilities.
And before we started the show, you asked if things were different in the embedded world. And some, but not as much as you would think, really.
But your list included compiling and testing the
code. Oh.
Christopher's like, oh, I
know where she's going now.
That part is not always
easy, especially if you're
doing a bug fix on, say, a
smartwatch that only happens
at midnight or whatever nonsense.
I was actually a little
surprised to see compiling and
testing as part of the reviewer responsibilities. I totally expected to see them as part of the
author responsibilities. Do you always do that? I really don't unless it's requested or I see
something I don't understand or believe is a bug and I want to test it myself so that I can make good review comments.
Not always, but certainly when the process warrants it. So this came about because there was a different team I worked on. It was a smaller team and we didn't have a lot of automations
in place. So I talk about how the idealized state is, yes, the code review process is one part of it,
but it's also a complementary part to a larger continuous integration pipeline.
And a lot of the checks that we would have would be run in an automated way in the later parts of the pipeline. But because a lot of folks use
GitHub or GitLab, sometimes those checks do start happening closer to the code review process. And
sometimes you can actually run pre-build checks, or you can run some checks at the same time as when the pull request
is opened. But to go back to my small team, we didn't have any of that. We didn't have
any automations in place. We didn't have any of the things that make our lives easier as reviewers.
And so there were a lot more responsibilities for us to make sure that what we were reviewing would pass and be okay to merge into the rest of the code base.
So part of being on that team was for us to actually pull down those versions and test them ourselves in our environments and make sure that everything was okay. That was instead of having, say,
a staging environment, or even some folks have an automated way to get that code, run it,
compile it, build it, deploy it to a test environment for it to be checked and do a report
and make sure those things are working or even have some automated
tests run against that and just get a nice report to say, hey, yeah, it's good. Everything's good.
We needed to do those parts ourselves. We needed to pull down that code, compile it ourselves,
and do those manual checks ourselves. And that just came out
of necessity. And I added that to the reviewers list because there are a lot of teams that I've
spoken to that are small teams that are not at the idealized state at all. Talking about
using a code review process or even going through the PR process through GitHub was still something
brand new to them. That was still something that was shiny and just being integrated into their
process. It was a start, a stepping stone to making their
process better. So to add something like that, to pull the code down yourself and test it
yourself, was, to me, not something too weird to add to the reviewer responsibilities. And hopefully
I have described in that section that it's not always necessary, but the more important thing
is that our responsibility as reviewers is to make sure we've done a thorough review.
And if a thorough review means pulling down the code, compiling it and running it ourselves, then that might be something that just has to be done on that specific team.
And this is part of the team agreement is whether or not this is something you do, whether or not this is something you do for all bugs or all bugs that are not typos.
You get to choose and you get to agree on what the goals are and what the process is.
I'm trying to put in a code review process for a new team that I'm working with.
They're just familiar with design reviews.
So they want the process, but it does have some downsides.
It takes more time, and I bill hourly, so time is expensive to them, and I don't blame them for wanting to maximize my output and maybe undervalue the review.
But the way that I'm starting is to break up your team working agreement
and then get them to agree on each part individually.
And once we're all done, I'll present it to them as finished.
By the way, if you're listening, this is totally not manipulation.
But the first thing I was going to do
was go through the prioritized goals for the review process.
You list some possible goals: finding bugs,
codebase stability and maintainability, knowledge transfer and sharing, mentoring,
and record-keeping and chronicling. And I did an informal poll on the embedded Patreon Slack,
and finding bugs was the second most important.
Codebase stability and maintainability, I think, was the most important.
And record-keeping and chronicling was the least.
Is that how it goes for most teams?
Yes.
That's part of why I added it as a distinct goal.
And your poll is very, very close to my own experience as well.
Those are almost the obvious goals to have, and why you're doing a code review in the first place.
And so that's great.
That makes me happy to hear that those are the top two goals.
Record keeping and chronicling, I can understand why
it's lower on the list. Some folks have noted that they have other systems in place to have
that type of chronological record. So for example, maybe their ticket systems or their project management systems or some other external tool
where all the additional detail and context and all the things that we would want to know
and what I'm advocating you add into the pull request, they keep that somewhere else.
And that's possibly why they abstract that away from the code review process itself and think that it's not
a goal. Another reason is a lot of folks who may not do reviews this way, maybe they do reviews
in say pair programming or mob programming, they feel like they get that much faster review cycle there. And the goal for
the record keeping is not necessary because they've already shared that knowledge across
the members. They've shared it either with a pair or the whole team or the mob and that it is unnecessary. That's where I say it's actually quite necessary
because even though that's cited as an argument to say,
well, we don't even need code reviews at all.
We already do way better, way faster,
way more in-depth reviews during pairing or mob programming
so why would we even need to do a formal process at all?
I agree that you may not need as thorough of a process as if you were not doing pair programming
or mob programming, but the record keeping part is actually the main reason why I think you should
still do it. And I argue that if you do engage in pair programming or
mob programming, that you can just do a more shortened version of a code review. It can be now
just a, I don't want to say rubber stamp because now people are going to misinterpret and be like,
oh, Adrian said you could just rubber stamp a pull request. But what I mean is the process itself
may not have to be as thorough and as long
as if you were not doing pair or mob programming.
So having a type of record,
especially if you don't keep any of that context
or decision-making anywhere else,
I would argue that a code review
is still absolutely necessary
and is probably the only place where you would keep all of that decision-making and context that was discussed in pair or mob programming.
I've done pair programming and with the right pair, I adore it.
Mob programming sounds interesting, but I think there are enough voices in my head that I don't really want that.
And I have to think all of the times that I write code and then I go away and then the next morning I clean up and fix it.
And you couldn't do that as a mob.
I think you'd need more time.
So I guess I'm curious about mob programming, but that is an entirely
separate show and we are nearly out of time.
Yes, there's so much more we could talk about that.
Pastries. Yes, let's do that. Let's do that as
a second episode, third episode.
We'll all buy a set of pastries and then review them.
See, that's an award-winning show right there.
Well, and you know, instead of Robot or Not, we'll have Pastry or Not. Is a donut a pastry?
Oh, geez. Yes. See, this is where, when you first asked me what's your favorite pastry, my mind
immediately went to something that people consider a dessert.
And I'm like, well, pastries can.
So my favorite type is it's like this.
I don't even know what you call it.
I had it in one restaurant.
I don't remember the name.
And it's like a very soft, warm, vanilla cake-like thing with some vanilla ice cream on top.
Sounds almost like bread pudding, but with ice cream.
Sounds like cake.
It is not as mushy as bread pudding.
It's not just as straightforward as cake.
The hot component with the cold component is very important.
That's a big part of why I like it.
But I don't know.
I need to find that place again.
And the consistency of it was very hard to describe because it's like right there, right
in the middle.
Well, let us know when you remember where you found it.
I'm always in for a good dessert, but I don't think that's a pastry.
Yes, I have gone through this conversation. That's why I'm like, well, no. Most people, yeah, it is.
Is birthday cake ice cream a cake?
Is a hot dog a sandwich? It is if you slice the bun at the bottom.
Well, there you go. There we go. Adrienne, do you have any thoughts you'd like to leave us with?
Yes. Just the fact that you are listening to a show about code reviews is a really, really good start.
More than that, I think if everybody cared, not just in code reviews, not just in software development, but in everything, if we all just cared a little bit more about the things that we do, I think we would all benefit from it.
So take the time to care just a little bit more than usual.
Our guest has been Adrienne Braganza Tacke, author of Looks Good to Me: Constructive Code Reviews.
It is available now as an ebook.
You can find it at manning.com,
or you can get a paper copy from Amazon in a few months.
We do have a few to give away.
And as it is a Memfault supported show,
email us and tell us what you like about Memfault
and do that by November
15th.
And we will send three randomly chosen people a code for a book.
A digital book.
Thanks, Adrienne.
Lovely.
Thanks.
Thank you to Christopher for producing and co-hosting.
Thank you to our Patreon listener Slack group for their questions. I didn't call you by name, but I did try to use your questions this week.
Thank you to Memfault for sponsoring the show. And thank you for listening.
You can always contact us at show at embedded.fm, or at the contact link on embedded.fm.
And now a quote to leave you with, from President Gloria Macapagal Arroyo.
The power of one, if fearless and focused, is formidable, but the power of many working together is better.