Two's Complement - Getting CRUFTy
Episode Date: January 12, 2025. Ben unveils his latest acronym-based software discussion framework while Matt patiently waits for the punchline. Our hosts explore alternatives to technical debt, debate the value of naming things, and Matt questions his ability to remember five letters for more than fourteen minutes. Ben has written a blog post going into more detail since the recording.
Transcript
Why are we doing this in the first place?
Hey, Ben.
Hey, Matt.
Well, how on earth are you, my friend?
I'm good.
And we were just chatting.
And actually, I'm going to stop you right there.
And I think you're clipping.
I'm just looking at your stream.
Do you mind turning the gain down on your microphone a tiny bit?
I'll do my best.
Hold on.
Yeah, the thing on the back.
Now, a lazy editor.
I don't know if I just turned it up or down.
I think you turned it up.
Yeah, you turned it away.
This?
No.
No?
That seems worse.
That's better. That seems good. That seems good. Okay. Hey, right. All right. Now,
likely as not, that will actually make it into the edit because I'm too lazy to take it out.
Anyway, hi. We were just talking, as we were planning, in the sort of 35 seconds before I just said, oh, I'll just hit record. And you said you had an idea for a topic. And then you gave me the option of talking about it ahead of time, which would be sensible, or going in cold. And I'm like, hey, let's just do it. And then I hit record. So tell me about your idea for a topic.
So this is a follow-up to a prior episode, the episode on technical debt. Okay. Yeah.
And so in that episode,
if I remember it at least partially,
I had some partially formed ideas
about alternative ways of thinking about technical debt.
And I think I was very clear in that podcast,
like this is new material.
These are like new ideas that I'm trying to smoosh around, form into something reasonable.
Right.
And not to recall the pain of our iterative versus incremental podcast, but I have a new iteration on those ideas.
And I actually...
Is it an incremental improvement, would you say?
It's an iterative improvement.
Okay. All right. So you're doing it. Don't do it. You made me do it. Those are the worst names. And actually, this ties into the other thing. Oh, those are the worst names, and names are important. Names are very important. They give things power sometimes.
If you're a regular listener of this podcast, you heard us talk about patterns thinking, and how patterns thinking is an important part of learning new skills: learning to think in patterns and giving those patterns names.
If the patterns don't have good names, no one will remember them.
Right.
And it sort of defeats the purpose.
Well, not the entire purpose, but a lot of the value of the patterns. I mean, in terms of the Gang of Four patterns book, part of the thing that gave it power is the fact that now we have a new language we can use. I say a singleton, and you know what I mean. Or I say a flyweight, and you know what I mean.
Exactly. And it carries a lot of baggage with it. And without the name being both attached to the concept and being memorable and sort of concise, it doesn't have as much sticking power, or wouldn't be as important.
Right. Exactly right. And that's the problem with iterative versus incremental,
because they both start with I and no one can remember what they mean.
But there's another name that we have that we use a lot in software and it doesn't really have
a single meaning and that is technical
debt. And we talked about that. Right. Although, you know, I think you made a reasonable argument that there is a single meaning for it, and then everyone has used it wrong ever since.
Yes. But then de facto and de jure are two different things. And de facto, when we say
technical debt, we just mean bad programming things that we'll go back and fix later, sort of.
Sort of, exactly. So yeah. So the original definition that Ward came up with is no longer
the definition that people use. They sort of make up their own. And therefore, it's lost a lot of
value as a thing that we can use as a pattern to talk about, unlike the singleton or the flyweight
or the decorator or any of those gang of four patterns, where it's like, people, I think,
generally agree on what those mean. I literally yesterday asked a whole room full of software engineers, what does technical debt
mean to you? And I got 10 different answers, right? They were all sort of related.
Only six people in the room.
Yeah, exactly. Yeah, actually, that's a hundred percent correct. And so, you know, it doesn't have as much value. And so part of that original podcast was, okay, maybe we can find a way to make this a little bit more useful. And I've given this a lot of thought, and I now know that I've come up with an improved version of this, because it has a catchy acronym.
Oh, okay. To convey my lack of enthusiasm.
Go ahead.
What is your new acronym?
That was the best response in this entire podcast.
Ben has a catchy acronym.
Oh, okay, fine.
I mean, you've come up with some good ones.
I'm not going to dispute there are some good ones. You know, I know about, like, your FIRE acronym, but unfortunately, I can probably only remember what one or two of the letters stand for. Now I've just put myself on the spot. It's probably Fast, something, Repeatable...
Yeah, what does... Remind me. FIRE: Fast, Informative, Reliable, and Exhaustive.
Okay, see, I've got one out of four. We got the most important one, which is, okay, Fast. Anyway, all right, so tell me what your acronym is.
All right.
Here we are.
Everyone have one ready?
The handy acronym is CRUFT.
Oh, man.
That's a good one.
Isn't it, though?
Isn't it, though?
Okay.
And so CRUFT stands for complexity, which we've definitely talked about a lot.
Yes.
Risk.
Yes.
We've talked about a lot.
Uses or use cases, depending on exactly how you want to put it.
I sense a weakness in the Force.
The useful aspects of your software, which are arguably the most important part, right?
It's the thing that hopefully, if you're doing it right, is making you some money, but not necessarily.
Feedback, which is an essential part of any software development process.
Okay, yeah.
And team, specifically team size.
And that relates to bus factor.
And I can talk about all those kinds of things.
Okay.
And so my hypothesis here is that when you're talking about technical debt, this is what you're probably doing. And I think this is actually true of most senior software engineers. They just, unlike me, don't spend the time to build systems out of the way that they think, because they're not trying to teach it to other people. It's like, Ben, why did you do that? I don't know, it just seemed like a good idea. No, I need a better answer than that. I don't know, I can't really explain it to you. So because I'm in this situation where I'm forced to explain my thinking to other people, I think that there are lots of situations in which senior engineers are making decisions along these dimensions and not even realizing that that's what they're doing. They just sort of do it out of their experience, right?
right? And so I kind of think of this, if you wanted to share this thinking, as a five-dimensional portfolio optimization.
You've got these five dimensions.
On brand for day job.
Exactly.
Exactly right.
And it was not lost on me.
Somebody, when I was explaining this, mentioned to me, it's like, okay, so your problem with technical debt is it's a financial metaphor for financial people.
And then you replaced it with another financial metaphor.
It's like, okay.
A good observation.
Portfolio optimization is not strictly financial, and also hopefully not very metaphoric, right? Like, there's no interest rate on technical debt, you know, despite how many times people want to draw that. That's not a real thing; that's a pretend thing. But I think that these dimensions are real, in the sense that they are quantifiable, and I think I can explain how you could quantify every one of them.
Okay. And U I'd be interested in, because I was saying that's the weakest-defined thing. But yeah, I get that complexity can be somewhat defined. There's any number of measures, algorithmic complexity, you know, but also just lines of code is not a bad proxy for complexity, or number of systems or libraries or dependencies or something like that. So yeah, I get that.
Risk, yeah. How do we define risk? How can you define it numerically, or quantitatively?
So if you really wanted to stand on the shoulders of giants, there are many people who have tried to quantify software risk in the past. And I think a lot of those techniques are very effective if you want to apply them rigorously. If you want a very rough estimation of this, just look at the incident issues in your backlog, or the error incidents in your backlog, or whatever it might be. And maybe you have a different category for things that are speculative risks, right?
Right. That's what I'm used to doing. Yeah. It's sort of the idea of, you know, you write down the risk of what would happen if... and you come up with a few of the things that are really important, and you go, well, how do I offset those? And the more of those you can think of: well, what if this happens? What if that? Have we accounted for this? What if the business changes direction? Whatever. Those kinds of things could be seen as risks. But yeah, I get it. Yeah, GitHub issues is not a bad proxy for it, at least in some mechanism, in that any outstanding known issue that you have is definitionally a risk.
It's like, hey, if two people try to do this at the same time, then we get an exception thrown, and then somebody has to deal with it at customer services. That is a risk, and we're riding that risk. Yeah, got it. Okay. So we're at C, R. This is an easy acronym to remember, at least. And very apt.
A good one. So the risks are like unwanted behavior in your software; the uses are wanted behavior, if you want a really simple way to think about it, right?
Oh, that is interesting. Yeah, okay. So uses here, in this instance, is functionality that people are happy with, or happy enough with, that's providing value to someone somewhere. As you say, maybe it's making the company money, maybe it's the raison d'être for your open source project, whatever it is. And that is one of those things that, in the optimization problem, should have a weight with the opposite sign to complexity and risk, because obviously you want to optimize for the highest uses and the lowest risk, presumably, amongst all these things.
So I would actually argue that really what you want there is a high correlation between the uses that you have, yeah, and the uses that you think will make you money. So more uses are not strictly better.
Okay, yeah, yeah.
Even... and this is where, applying this practically, I think you're going to wind up with constraints where it's really not possible, or it's very unlikely, that you're going to be able to increase the uses without increasing complexity or risk.
Right, and that's required complexity versus unnecessary complexity we've talked about before, to some extent.
It's necessary.
If we want to get something done, then it takes a bit of...
Yeah.
Okay.
Yeah.
But I would argue that there's probably a sort of equilibrium with uses.
It's not that you're trying to maximize it or minimize it.
You're trying to hit the right amount for the things that you
understand will be valuable. Yeah. That makes sense. Right.
Now, practically, on most software projects there's a never-ending list of things that people think might be valuable. And so you're never really going to say, all right, we've added all the functionality to our software that we ever need to add, we're done. That never happens. Right. Yeah. But I think the important thing to understand
about that is it is entirely possible that there are uses in your system that you want to remove.
And I think that this model gets very interesting when you start thinking about the trade-offs
between them, right? So you're like, I'm going to reduce risk by removing uses, right? Here's a use case that we don't want to support anymore because it has a lot of risk associated with it. There are cases that we don't handle, edge cases that we don't handle. We could handle those edge cases, and that would cost more complexity. But another way to handle it is to just remove the use case.
Yeah, absolutely. And we don't do that often enough, I think, is probably the point that you're getting at here, without it being there in front of you in numbers. One way to change our objective function, the thing we do want to maximize, which is some kind of relation, as you say, a ratio of these things, is to say, well, let's just get rid of the thing that doesn't really work, that people don't really use. Or when they do, they always hit this edge case. Let's just say we don't do that anymore. It's fine. Yeah. Okay. All right. You've sold me on the U now. No, I am sold. I like it. And it's not the inverse of either C or R.
And in fairness as well, while there are sort of correlations between these things, they're not all totally orthogonal axes, are they? Like, complexity and risk sort of go hand in hand. I mean, some of the risk is the fact that something is complex.
Yep. Yeah. But in the optimization, you're going to wind up with situations where there are relationships between the dimensions. Yes. Right. You increase one and you decrease or increase another one, and in some cases there's no way around that.
Right, right. You know, you buy Apple shares and you've got more tech sector risk, and that's okay. Sometimes they're maybe different or whatever. There's some something-something portfolio optimization problem here.
So that's actually interesting, because
maybe we should pause here on the U. But one of the things that we do in our day jobs is that we have ways of taking the many things that we have and applying a risk model to them, which obviously is different from the risk we're talking about here, which is essentially the objective function. It's the thing you want to minimize, or at least take into account when you're trying to maximize something else. In the case of what we do in our day job, it is maximizing the amount of money you make while reducing the risk that that requires, and that's kind of some weighted sum of all of those things. And so, yeah, here, that's what you're talking about with the ratio: okay, we increase the uses, but that also increases the risk and the complexity. Maybe those net out, and it's not actually better.
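The weighted-sum framing being discussed could be sketched roughly like this. Everything here is an illustrative assumption: the dimension weights, the proxy units, and the example numbers are invented for the sketch, not anything prescribed in the episode.

```python
# Hypothetical sketch of scoring a change along the five CRUFT dimensions.
# Signs encode direction: complexity and risk count against a change;
# uses (assumed aligned with value), feedback, and team coverage count for it.
WEIGHTS = {
    "complexity": -1.0,  # e.g. added lines of code (scaled)
    "risk": -2.0,        # e.g. new unhandled cases or open incident issues
    "uses": +1.5,        # e.g. new wanted behavior (passing tests)
    "feedback": +1.0,    # e.g. new metrics, alerts, benchmarks
    "team": +0.5,        # e.g. extra people who now understand the code
}

def cruft_score(deltas: dict) -> float:
    """Weighted sum of per-dimension changes; higher is better."""
    return sum(WEIGHTS[dim] * deltas.get(dim, 0.0) for dim in WEIGHTS)

# A change that adds a use case but also complexity and risk can "net out":
change = {"complexity": 3.0, "risk": 1.0, "uses": 2.0, "feedback": 1.0}
print(cruft_score(change))  # (3 * -1) + (1 * -2) + (2 * 1.5) + (1 * 1) = -1.0
```

A negative score here would mean the added complexity and risk outweigh the new uses and feedback, which is exactly the "maybe those net out" case from the conversation.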
Right, yeah, exactly right. Okay. All right. Sorry. And then, oh, did you have something else to say on that?
Or... no, no, we're ready to go to the F, which, yeah, I am about to try and remember. I'm definitely a normal human who can remember things for more than 14 minutes. No, tell me what the F was.
Feedback. Feedback. Okay, all right. So how do you know if your uses are valuable? I think that's the most important form of feedback. It's like, did the software that we build
actually make any money?
And in trading, that's easier to do
than in a lot of different contexts,
which, you know, you put the strategy into production,
you see if it makes any money.
But there's lots and lots and lots of other situations
in which you need to create feedback loops.
Are my users happy with the way that the thing's going?
Are, yeah, is, you know.
Is my software using the amount of memory that I expected it to use?
Right.
You know,
if I made a change to my software,
did I break anything?
That's an obvious one.
That's on brand for us.
That's a feedback loop.
Yeah.
That's what,
have we increased or decreased the performance?
Let's just get both sides.
Exactly.
Exactly.
Can I measure it and see?
Yeah.
Okay.
Yes.
Yes.
And so the way in which you... oh, I guess one step back to U for just a second. Yes, sir. I made the claim that all of these are quantifiable, and I would say easily quantifiable. If complexity is roughly the number of lines of code, which I think is actually a pretty good estimation of complexity, then a pretty good estimation for uses is the number of passing tests. Those are the things that you know your software can do.
That is true.
Okay, but I'll push back on that, because one of the things that you were saying is that uses are things that people want, that are used, that are generating business value. Because I can sit down and write a thousand tests for a piece of code that nobody actually needs or wants.
No, no. I'm specifically saying that uses are not necessarily valuable.
Oh, I see. Yeah, yeah. So that actually backs that up specifically. Okay, got it.
Yeah, yeah, yeah. They're behavior of your software. Are they valuable? I don't know. If they're not valuable, then we should remove those uses, because that probably removes complexity and risk. Yeah. Right. But yeah, the uses are just the things your software can do.
Whether they're valuable or not,
maybe the feedback will tell you, right?
I see.
Yeah, yeah, yeah.
Okay.
I don't know.
So slightly weaker coupling
than something you can directly measure,
but it's part of that whole.
So yeah, so feedback can be user satisfaction.
It can be performance.
It can be test coverage.
It can be test times, it can be CI build times. It's just anything you can measure objectively about your code.
What about things like log incidents? I mean, you kind of alluded to this with the R thing here, but the number of warnings or info messages that you're logging out. It all falls under feedback, but it may well be that that feedback is, in some ways, again very co-linear with risk. If you say, well, one of my risk metrics is how often we get an exception thrown that we track in our exception tracker or whatever, then that's part of the feedback, and it also contributes to the risk. Do you see those as being non-orthogonal in that way?
Yeah, absolutely.
So one of the things you can do is mitigate risk by adding feedback, right? So if you have better observability in your system, right, better application metrics, better alerting, better things like that, then you can have more risky things that don't result in as much risk, because you'll be able to respond to them quickly.
Like we're going to put this strategy into production.
If it starts tanking,
we'll know about it right away and we can turn it off.
Or if an exception starts happening, we can handle it right away.
Whereas if your feedback is lower because your observability is lower,
then that risk is effectively higher because you can't see what's going on.
Right.
Yeah.
Yeah.
Yeah.
That's interesting.
Okay. So, all right.
And then we've reached Cruft.
The T, yes.
What is the T?
The T in Cruft is-
Teams, teams.
I remembered it.
I'm a good learner.
Team or teams, yeah.
So the bigger the team, the better.
I see, yeah.
So you're trying to maximize the number.
Well, actually, yes.
So here's the thing.
The answer is sort of like, sort of yes, but not really yes.
So obviously we know that big teams are bad, but why are they bad?
Right?
If you have a team of 50 software engineers working in a code base, that's going to, you're
going to have to do a lot of things to make that work.
Right?
Yeah.
Yeah.
Yeah.
So the thing that we're actually trying to maximize here... and I chose T because it fits the acronym.
I was going to say, I mean, once you've got to CRUF, you're like, what now?
So I'm trying to think about it like, okay, we're talking about complexity, we can measure that with lines of code. We're talking about team. Team is a little bit of an abstract concept, but I think it has a very clear quantifiable thing, which is bus factor. Okay. You are trying to maximize bus factor.
Let's stop talking about buses running people over. You know, you and I know that the way we used to do this at an old job we were in together was the crypto factor, where some arbitrary coin that somebody had bought on a whim because it had a funny name goes massive, and they never have to work again.
And so they retire, and that's slightly less, um, injurious.
Injurious. Wow.
Yes.
But anyway, the point is that if somebody leaves for whatever reason, and you have a massive hole in your ability to maintain and continue to develop your software, or support it, then you have a problem.
Yes.
And yeah, we can call that bus risk.
We can call it crypto risk.
I like crypto risk.
Crypto risk is good.
Although then now that brings in a whole other thing.
So I get it.
It's sort of politically charged something.
Lotto risk?
Lottery risk?
Lottery risk.
Yeah.
Yeah.
I mean, what's the difference between crypto and lottery?
I don't know.
Again, wrong podcast.
Wrong podcast for that.
Yeah, yeah, yeah.
But yeah, so I think that that number you do want to maximize. And it is generally, I would guess (I'm not a modeler of mathematical things), a multiplicative factor on the other things, right? So, how many people do we have that can understand the complexity that we have, manage the risks that we have, add new functionality and use cases to our system, interpret the feedback and know how to respond to it? That is the number of people who are in that sort of lotto risk set.
That's interesting. Yeah, yeah. You could even go as far as to say, let's go through all of the issues that we have outstanding right now. How many of them could I assign to more than one person and they would get it done? And you could do it for everyone, and kind of build a picture of the entire team: which team members can solve which bugs or issues or risks that you have, and go, where are my blind spots here? Oh, Trevor... if Trevor ever quits, we're doomed, because nobody knows what Trevor's doing, you know, or whatever. And that, then... okay.
And so I think there are ways you could build to quantify it. It might be a bit trickier than the other ones, it might involve a little bit more gymnastics, but I do think it could be done. I mean, you could even have labels in GitHub or whatever that say, hey, could be done by blah-blah, rather than assigning issues to people specifically, and then go, hang on a second, which issues do we have that only have one person on them? Right. Yeah.
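That label idea reduces to a small coverage check: map each open issue to the set of people who could plausibly take it, then flag the ones covered by at most one person. The issue titles and team members below are invented for illustration; in practice the mapping might come from issue-tracker labels:

```python
# Hypothetical blind-spot check over an issue -> capable-people mapping.
# Issue names and people are made up for the sketch.

def blind_spots(issue_owners: dict) -> list:
    """Issues that at most one person could handle (coverage <= 1)."""
    return [issue for issue, people in issue_owners.items() if len(people) <= 1]

issues = {
    "fix flaky deploy": {"trevor"},           # only Trevor understands this
    "parse new message type": {"ben", "matt"},
    "rotate certs": set(),                     # nobody can pick this up
}
print(blind_spots(issues))  # ['fix flaky deploy', 'rotate certs']
```

Shrinking this list is one concrete way to raise the T dimension without simply growing the team.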
Yeah, or whatever. Yeah, yeah. I'm trying to solution this right now, but no, I like it. So, cruft. Cruft. Cruft. And I think if you try to relate
this back to what a lot of sort of colloquial uses of technical debt are. So like going back
to the original thing here, you got a room full of six people and you got 12 opinions about what
technical debt is, right? Like those opinions are probably not
wrong. They're almost certainly not wrong. They're probably born from direct experience and sort of
hard fought lessons. It's just that they mean different things, but we're using the same word
to refer to them. Right. Yeah. So a real basic example of technical debt, as many people use it
is essentially abandonware, right? Like, you've got the haunted graveyard project. Yeah, we've mentioned this. Yeah. In that situation, the T factor has gone to zero. Right. And it's all risk, all the time. You can't manage the risks anymore, you can't manage the complexity anymore, you can't add new use cases to it anymore, because T is now zero. Another situation is when things get too complex, right? The complexity factor goes way up. It becomes very hard for people to understand, and now you can't add new use cases to it anymore. Right. Because by doing so, you're probably taking on a risk. I can go in and
I can change the software, but I don't really understand how it works. If I break it, that's
going to be really bad. Yeah. Right. So all of those dimensions come into play in that scenario. And there are other things,
like, I need to get something done quickly. How do I get something done quickly? I implement the happy path for a use case, and I don't worry about the risks, right? I say, we're going to catalog these risks, and hopefully we're going to address them at some point. But this functionality needs to exist by Friday, and I'm going to implement only the minimum amount of code that I need to make that work, and I'm going to take on a bunch of risk in order to accomplish that.
What's that, the TODO: check password, return true? Yes, exactly. Yeah, exactly. It's fine for the investor meeting. It's less fine when they say, how quickly can we get to production?
Right, right. So this is my next iteration of this thinking, because I do think it's a shame that the phrase technical debt is so blurred in its meaning, right?
But it does fit a lot of cases, and it's very easy to talk to non-programmers with. I think, you know, we litigated this the last time around. I mean, everyone has some sort of gut understanding of what I mean when I say, hey, I'm going to do this quickly, but it's going to cost us in the longer run. And you're like, oh, I get it. You're borrowing from some mythical thing, and that puts you in debt. And later on, you have to pay it back with some kind of interest, which, again, we've said doesn't really exist in the same sense. But it does bring some of the thought processes through. Not thought processes; it does have the right kind of smell to it, right, which is why it's so attractive.
Cruft, obviously, is a great acronym, and it is an acronym. That's its main strength, really, the acronym. But also, it is a word that we use to mean exactly that, and so we call it cruft.
Yes. But I mean, the thing is, to pick holes, it is still an incredibly broad set of things that you've just defined there. Which is fine, because presumably you're covering all things. In fact, CRUFT in this instance is an all-encompassing aspect of the entire project, right? It covers all of the parts of it, the good and the bad, because you've got the U in there, that's usage, and you've got the feedback, which is usually good, and the team, which is hopefully a good thing. And so it isn't, in its own right, a negative, in the sense of, if I say, oh my gosh, there's a lot of cruft here... Obviously, colloquially, we know what we mean when we say cruft. We mean it's the belly-button lint of the code. There's the goo around the edges that you have to pick out.
So yeah, I guess it's a good acronym to think about how to run a project, and to think about when you're making a decision about whether I should add a new feature, or whether I should
test this in a particular way, or whether or not we should just send Trevor off on his own odyssey for three months and then see whatever he comes up with. You can measure it using the cruft-o-meter and sort of say, are we okay with this? Are we still within the bands of cruftiness that we're okay with? Maybe that's how you could accept this: we accept that all projects are crufty, because they're programs, right? And they're written by humans, and they're an engineering solution to a very high-dimensional optimization problem of, well, we need to get this out by Thursday. Oh, I don't really understand how this thing works. All right, I'm going to use a library I already know. All these kinds of things are being balanced in our heads all the time.
Yeah, I suppose I'm trying to rationalize
what cruft really means.
I know, you know, it's a great acronym.
Right, right.
Well, so here's how I'm intending,
here's how I use it myself.
And here's how I intend for other people to use it if they choose to use it, which is
if you're talking to a non-technical person, use technical debt as a metaphor.
That's great.
Like that's what Ward originally intended.
That's a great way to describe to somebody who's not a programmer why you're doing things
other than adding new functionality that's going to potentially make you money. It's like, why are you guys refactoring? What's
refactoring? Why are you spending your time? Use technical debt. It's great. That's what it's for.
If you have programmers talking to other programmers about the trade-offs that they
are making in code, you can do much better than saying, ah, that thing's got a lot of debt.
You can be more specific because you're a programmer and you should. So for example, if you are reviewing a PR and you see some code in a PR and you're like,
I don't think this complexity is worth the use.
Like you added in this super complicated arg parser thing instead of just slicing off the
first two string parameters and you added like 10 lines of code.
And it's like, I get that that's better, but I don't think it's worth the complexity.
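That PR example, with both sides invented here for illustration, might look something like this. The two versions do the same job on the happy path; the review question is whether the flexibility of the second is worth its extra lines:

```python
import argparse

# Version A: just slice off the two positional string parameters.
# Minimal complexity, but no validation and no usage message.
def parse_simple(argv: list) -> tuple:
    src, dest = argv[1:3]
    return src, dest

# Version B: a full argument parser. It validates arity and prints help,
# at the cost of several more lines, which is the complexity-versus-use
# trade-off being debated in the review.
def parse_fancy(argv: list) -> tuple:
    parser = argparse.ArgumentParser()
    parser.add_argument("src")
    parser.add_argument("dest")
    args = parser.parse_args(argv[1:])
    return args.src, args.dest

print(parse_simple(["prog", "in.txt", "out.txt"]))  # ('in.txt', 'out.txt')
print(parse_fancy(["prog", "in.txt", "out.txt"]))   # ('in.txt', 'out.txt')
```

Neither version is "debt"; the point of the framing is that the reviewer can name which dimension the extra code buys and which it costs.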
Right.
And then you can have a discussion about whether that's true as opposed to like,
this code says it's full of debt.
I hate it.
Right.
Right.
That, that makes a lot more sense to me.
Yeah.
Essentially, you're handing someone a cheat sheet of things to talk about when either justifying a decision or considering trade-offs. And, you know, in a code review situation, these are the words that you should be using. This seems risky, or are we okay with the risk that this thing won't work? Is it tracked in an issue somewhere? If it does go wrong, do we have the feedback that will tell us how to come and find it quickly? Who else knows about this? Does anyone else understand this piece of code? Could it be less complicated than this? Is there something off the shelf we could already use? And I've presumably missed one of the letters in this example.
Yeah: who even asked for this? Is this something we wanted to do in the first place? Yeah, I think we got them all now.
Yes. Yes. And going back to the start of this conversation,
I'm just suggesting that this is a better pattern. If you want to think in patterns,
this is a better pattern for programmers to talk to other programmers about aspects of their code
that they either want or don't want. Right. Yeah. As opposed to just debt, which doesn't have a concrete enough definition or isn't as concrete
as programmers are capable of talking about.
We are capable of talking about lines of code complexity.
We are capable of talking about potential risks like, oh, if we get this message type,
we don't handle it.
What will happen?
Oh, the system will restart and it'll go into the dead letter queue.
Okay.
That's an acceptable risk.
Great.
Right.
We are capable of talking about these trade-offs in more specific terms that aren't a metaphor, a financial metaphor, and we should. And if you don't like CRUFT and you can't remember what the words are, that's fine. You should come up with these terms for your own team, right? For the group of people who are going to be in this code together, who are going to be wearing those risks, who are going to be getting those pages in the middle of the night: come up with your own terminology for this, right? You don't have to use the same metaphor that you use to explain to the CFO why you need another person on your team, but you do need something better when you're talking to another programmer about code. It's just lazy to use debt. Yeah. I know we use it colloquially. Yeah.
No, I'm with it. I'm with it. I don't know how to weave this in, but it must be.
"Oh, no," he says. Ben, I can see the look on his face.
It's like, what's Matt going to say?
But just for our international audiences, I believe we have... you should know that Crufts is a dog show in the UK.
It's the equivalent of the... what is the big dog show in the US? I'm trying to think what it's called now. The Ken... no. Oh yeah, like the Kennel Club something? No, I feel there's something else. Now, I should have... I was going to try and Google it, but in this particular position my keyboard is so close to the microphone that all you'd hear is the clacketing. But yeah, so whenever we say cruft, that's exactly what I think of. In fact, you know, at a... two, three jobs ago, we had the, um, the C++ library of things that C++ really should come out of the gate with, you know, command line parsing and printing strings with commas between them and stuff like that. It was called C Cruft, as in crufty bits of C and C++ that were just, like, we just needed to do this because it's pasting the language together. So cruft is well
taken, well understood, but I do think of the dog show whenever anyone talks about it. So if you can come up with an S on the end of your acronym, if you need another thing, then that would make my day. Crufts. But then it would lose the programming meaning, so maybe not.
No, that's awesome. Consider me a convert to cruft.
Well, I appreciate you trying to poke holes in this, because the temptation with all of these kinds of things is, you know, like, you're building castles in the sky and they're not really applicable.
And I try to think about things where it's like, is someone actually going to respond in a PR using these terms, the way that they've defined them here, and result in a better
conversation and a better solution?
And if that's not actually happening, then you're just wrapping yourself around an axle
being like, oh, and then I'm going to define this like this,
and I'm going to do this like this.
And it's just systems that no one cares about.
No, I think it's valuable.
I mean, these are not, no disrespect to your insight here,
these are not novel concepts, right?
You've just found a really good way
of putting five key concepts together
that make sense and have a catchy acronym, which
is a great way of making it memorable and giving you a conference talk to prepare for,
which I'm expecting anytime soon.
And so where is this going to be presented?
You've already presented it internally.
So yeah, no, I should present this somewhere.
But yes, the very fact that it is not novel is a good sign, because again, this was all about me taking the things that I intuit. And I'm not special; there are lots of other senior software engineers who have, you know, built things for a long time, internalized this stuff, and they sort of intuit things in this way. It's just taking those and putting a name on them. So, right, like, we're not doing anything new or interesting here.
We're doing things that we've done for years.
I mean, especially some of the risk management stuff.
It's like, years. And beyond software,
years and years and years.
We're just giving it a name so that when we talk about it,
we can say, yeah, I think this is too complex
or I think this is too much risk
instead of like a very long-winded explanation
of all the things that no one is ever actually going to read
in a PR because they're just like, TL;DR. Yeah. Yeah. Yeah. Well, cool. I mean,
that seems like an obvious place to close this thing. And I'm going to go away and think about
this more. I'm doing a lot of Compiler Explorer work at the moment, which has somewhere in the
region of 800 open issues, about 40 open PRs. And yeah, so this gives me a new tool in my arsenal
to start thinking about things there.
I mean, there is a project, my friend,
that has a lot of you in it, for sure,
but an awful lot of people have added a ton of things
that are useful for them,
and then they've disappeared off the face of the earth afterwards,
which is completely reasonable.
It's an open source project.
You know, you add your thing.
But we don't necessarily have the tests to cover those things, which means that they may break and we don't know. And that is a big R. And it makes it hard for us to change things. And so, you know, this is what I'm going to be looking at over the next few days: trying to work out how to bring this beast back under control so that we can make it more, um... or less crufty? More crufty? I don't know. Better. Better.
Fantastic. Well, I guess I will see you the next time we do this.
Yeah, see you next time.
You've been listening to Two's Complement, a programming podcast by Ben Rady and Matt Godbolt.
Find the show transcripts and notes at www.twoscomplement.org.
Contact us on Mastodon. We are @twoscomplement@hachyderm.io. Our music is by Inverse Phase, inversephase.com.
Forgive my lack of enthusiasm.