The Changelog: Software Development, Open Source - Laws for hackers to live by (Interview)
Episode Date: July 16, 2020
Dave Kerr joins Jerod to discuss the various laws, theories, principles, and patterns that we developers find useful in our work and life. We unpack Hanlon's Razor, Gall's Law, Murphy's Law, Kernighan's Law, and too many others to list here.
Transcript
So of these laws, the one that I actually verbally say out loud the most, I believe,
is YAGNI.
You ain't gonna need it.
I say YAGNI to myself or to others, even sometimes outside of the world of software.
I'll just say the acronym to a friend and they'll be like, what?
I'm like, never mind.
And we build things that we don't need all the time.
And again, it kind of goes back to the idea of the cleverness, or the "I know what I'm going to need later" mythos. It just lets us down so often. And so many times, we're just not going to need that thing that we're building.
It's funny you said YAGNI. When you were saying there's one that I say a lot, I was thinking, it's got to be YAGNI, because it's the same for me.
You saw it coming.
And it must be the same for any other engineer
who's ever had to work on a feature
where they're just thinking,
is anyone really going to use this?
Bandwidth for Changelog is provided by Fastly.
Learn more at fastly.com.
We move fast and fix things here at Changelog
because of Rollbar.
Check them out at rollbar.com.
And we're hosted on Linode cloud servers. Head to linode.com/changelog.
This episode is brought to you by DigitalOcean: Droplets, managed Kubernetes, managed databases, Spaces object storage, Volumes block storage, advanced networking like virtual private clouds and cloud firewalls, and developer tooling like the robust API and CLI to make sure you can interact with your infrastructure the way you want to.
DigitalOcean is designed for developers and built for businesses.
Join over 150,000 businesses that develop, manage, and scale their applications with DigitalOcean.
Head to do.co/changelog to get started with a $100 credit.
Again, that's do.co/changelog.
All right, welcome back, everyone. This is the ChangeLog, a podcast featuring the hackers,
the leaders, and the innovators in the world of software. I'm Adam Stacoviak, Editor-in-Chief here at Changelog.
On today's show, Jerod sat down to talk with Dave Kerr about laws that hackers live by:
Hanlon's Razor, Gall's Law, Murphy's Law,
Kernighan's Law, and too many others to mention, so let's just get into it.
So Dave, I found your repo on GitHub and it immediately caught my interest, because it's one of those lists. And there's all these awesome lists. This is a list not of links to other places, but this is a list of hacker laws. You describe it as laws, theories, principles, and
patterns that developers will find useful.
And I thought, this is very cool. Let's talk through some of these laws.
But first of all, tell us why you created this repo and where the idea came from.
Thanks. Yeah. I mean, I was kind of inspired by the awesome lists as well.
To be honest, I use them all the time, especially when I'm exploring a new technology. And to a certain extent, the idea for HackerLaws as a repo came from that, partly through my work as a consultant.
So I'm an IT consultant. So I work with lots of different engineering teams and I work with lots
of different organizations. And I would occasionally find myself kind of saying things like,
you know, this is kind of an example of Conway's law here, where what's happening is that the systems that are being built are reflecting the organization structure rather than actually adhering to, say, a sensibly designed architecture. Which sounds like the sort of smart-ass thing a consultant would say.
Yeah, the more I thought about it, the more I would jot down certain things, things I would hear, like the 80-20 rule, which I'd always read when I first started learning about programming: you spend 20% of your time writing the first 80% of your code or your project.
And then you spend the last 80% of your time doing the last 20%.
And then realizing that this you know, this has
a name, this is called the Pareto principle. And it has a whole bunch of real world examples,
which it's based on. So I started jotting these down just on like an empty markdown file to start
with. And then once I had a few pulled together, I put it on the GitHub repo. And every time I came
up with an idea, I added it as an issue to kind of
remind myself to come back to it. And then a couple of my colleagues made suggestions or said, hey,
what about this? What about that? And then it kind of grew from there. And then I guess I was,
just through sheer luck, I tend to try and publicize when I've added a new law on Reddit
or Hacker News. A couple of times, those posts have generated lots of discussion.
So that's kind of brought a lot of traffic over to the repo,
which then brought more ideas for laws and spirited discussion.
So it kind of just grew from there, but it's fairly organic.
Yeah, today there's 15,000 stars.
You got 55 contributors.
It looks like, I counted, 13 languages or so it's been translated into.
So this is a very, somewhat typical success story on GitHub, you know. You put a thing out there, you work on it over time, and over time here come the contributors, there's interesting conversation, of course, because these laws are often referenced or thought about by hackers and developers. Whenever you see a list of them, you're like, oh, this is awesome.
Here they all are.
And what I found interesting as I went through this list is that a lot of my interpretations or my memory of the particular things is slightly off of what they actually are.
Or can't be described by me in a way that shows that I've internalized it.
You know, sometimes you just memorize a phrase and you just kind of broad brush apply it.
I actually wrote a post recently about why so many developers get DRY wrong, because we did a show with the Pragmatic Programmers last year where they were rejiggering their
book for the 20th anniversary.
And one of the things they said is they had to rewrite the dry section because so many
people misunderstood what they meant by don't repeat yourself.
And that was a case where a lot of us can memorize the acronym and can just misunderstand
what the actual point of what they were trying to say was.
And in that case, it's a distinct point, but it makes a big difference what they meant by that.
I think that's completely it.
And it's also sometimes the kind of intersection
or overlap of these things.
Like I can't remember where it was
that I saw a whole bunch of engineering principles
kind of like printed out on the wall.
And one of them was something like
KISS is greater than DRY.
DRY is great, but still keep it simple.
So sometimes it's okay to repeat yourself, like if you're writing unit tests or whatever,
and it does make the code more readable.
And I think that's something that's kind of interesting about the laws, although some
of them are called laws.
One thing that I've tried to do is make it clear that I don't necessarily
advocate that any of them are correct or not,
but a lot of them only have limited applicability in the real world.
And some of them are just kind of humorous or,
or sort of slightly out there and a bit about organizations.
Right.
And just to put a point on what the DRY misunderstanding is, for those who haven't heard this: DRY, don't repeat yourself, is that every piece of knowledge should have a single, I'm reading it now, unambiguous, authoritative representation within a system. And the slight misunderstanding of that is, don't repeat yourself, and so I just wrote some code and I don't want to repeat that code.
Now, if that code is the writing down of knowledge, then a lot of cases that applies.
But we often take it to mean just don't repeat, don't type the same thing twice.
But it's not really about that.
It's about having a single place for each piece of knowledge in the system.
And that distinction does make a big difference
because we tend to prematurely dry up our code
in a place where it doesn't actually make sense.
You're not actually repeating knowledge, you're just repeating procedures.
So slight distinction, but big difference in practice.
And I think dry is a really good example of that
because I see it even in code editors nowadays
when you've got things like static analysis tools that will flag repeated lines. And unit testing, I think, is a really good example, where to me, a well-written unit test, I can look at it in isolation and understand how it's setting up its expectations, what it's executing, and what it's kind of asserting, right?
But if you were to make all of that unit test kind of scaffolding DRY,
you'd end up with a whole bunch of helper functions
and stuff like this.
And sometimes that's useful,
and sometimes it does make it more readable.
But actually, the kind of authoritative source of truth
is probably the function that's under test itself.
And the unit tests are really there
as scaffolding or an assertion framework.
So it's a really interesting one, DRY.
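To put a concrete, made-up example on that distinction: in the sketch below, the pricing rule is the single piece of knowledge with one authoritative home, while the tests repeat their own setup on purpose so each one reads in isolation. This is just an illustration in Python; the discount rule and function names are invented, not anything from Dave's repo or the episode.

```python
# A minimal sketch of the DRY distinction discussed above. The pricing rule is
# a made-up example: the *knowledge* (orders of 100 or more get 10 off) has one
# authoritative home, while the test scaffolding repeats itself on purpose so
# each test can be read in isolation.

def discount(total: float) -> float:
    """Single authoritative representation of the pricing rule."""
    return total - 10 if total >= 100 else total


def test_small_orders_are_not_discounted():
    # Deliberately repeats its own setup instead of hiding it behind shared helpers.
    assert discount(50.0) == 50.0


def test_large_orders_get_ten_off():
    assert discount(200.0) == 190.0
```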
Yeah, so we thought for this conversation, so DRY is one of them, of course. There's, I wouldn't say hundreds of laws and principles, but I think there's a couple, a few dozen.
We obviously don't have time to talk through them all, and I find some more interesting
than others. I'm sure Dave, you find some more interesting than the others. We thought we would just kind of ping pong back and
forth, talk through some of these laws and principles,
the ones that maybe come to mind often for us and that we think are generally applicable and
interesting for folks. Of course, everyone will have their own take on which ones are good,
better, best. But if I had to ask you, Dave, what's the one that you think about the most
or that you apply the most in your day-to-day work of consulting or programming, what would you respond with?
I think in the world of programming, probably one of the ones that I tend to think about a lot nowadays is Kernighan's law. So Kernighan's law basically says that debugging code is twice as hard as writing it.
So therefore, by definition, if you write your code as cleverly as possible, you are not smart enough to debug it.
Right.
And I don't know if this is just because I'm getting older, or if it's because I work on a lot of open source projects, so I have to do a lot of context switching. But increasingly, I really kind of felt that message, which is that actually, think about your future self when you're coming back to this, or the contributor who wants to jump into your project, who's maybe new to the language or the platform or whatever. And it's not about being smart or clever. Not unless you're doing something ultra specialized
like chipset optimizations or something.
Much of the time, it's about creating a sensible abstraction
of the system that you're working with.
And nothing makes you feel less smart or clever
than that really cool trick that you put in there
or all these abstractions when you're trying to unpick it
a year and a half later and work out what's going on. So that one, when I saw that there was a name for this, kind of made me laugh, and I thought, yeah, that's funny. Debugging is twice as hard. And I have looked at my own code through the debugger and just gone, what? What's happening here? How can this be happening?
Yeah, absolutely. So this law, quote-unquote, comes with an assertion, which is that debugging is twice
as hard.
And maybe it's an understatement.
Maybe it's 3x.
Maybe it's four times as hard.
But I think we definitely spend more time debugging than writing when it comes time
to do that.
And it kind of goes back to read versus write.
You know, you write it once, you read it many times.
Sometimes you rewrite a little bit.
But we spend more of our time reading the code
than we do writing the code.
Just like we spend oftentimes more time debugging the code
than we do writing that initial implementation.
I love the way that he uses the word clever there
because it really does make you feel clever
when you come up with a solution to a problem
that requires a sidestep or a special use of the language
that you know, and maybe not everybody knows,
and maybe you'll forget it later,
and you won't even know that trick later,
or you just learned some sort of esoteric aspect
of your favorite programming language.
To use that to solve your problem feels really good.
And a lot of times, that's the stuff that we programmers love.
Like, ooh, I came up with this clever solution.
But it turns out that the actual smarter solution,
not as clever, but the smarter way of doing it
because of this knowledge of I'm going to be reading this later
or I'm going to be debugging this later is,
is there a more straightforward way of accomplishing this?
Can I remove the cleverness?
And that requires a humility to say,
yeah, I'm smart enough to do this clever trick,
but actually I'm smart enough to know
that I should not do this clever trick if I can avoid it.
And if you can avoid it, your code is much more useful.
Or at least leave a couple of comments in there.
If you're going to do something cool like,
yeah, I'm not going to multiply this by two,
I'm going to bit shift instead.
Exactly.
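For readers, a tiny illustration of the clever-versus-plain trade-off being joked about here; the function names are invented and this is not anyone's production code, just a sketch of the idea.

```python
# A tiny illustration of the "clever trick" being joked about above. Both
# functions do the same thing; the first needs a comment to explain itself,
# the second reads exactly like its intent. Names here are made up.

def double_clever(n: int) -> int:
    return n << 1  # bit shift instead of multiplying by two


def double_plain(n: int) -> int:
    return n * 2


assert double_clever(21) == double_plain(21) == 42
```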
That's what it might be. Although interestingly enough, this law was one of the ones which generated the most heated debates, I think, on Reddit when I published this. Because a lot of people said, and I think quite fairly, that, well, to their minds, clever code is code which is simple, which is elegant. The clever code is the code where someone has avoided unnecessary abstractions or whatever. So I think that's a fair counterpoint.
Yeah, maybe tricky might be the way to think about it. Like when you're pulling your tricks out, your rabbit out of the hat,
that kind of code is the code that can become problematic. Yeah, absolutely. And then in terms of, I suppose, the world of consulting,
there's lots of laws that are to do with organizations, and I'm sure we'll talk about
them. But one that I think really sticks, and I do talk about this with clients, with engineers
regularly, is actually Goodhart's law. It's a statistical law, but in its kind of simplest form,
it essentially says when a measure becomes a target,
it ceases to be a good measure.
And the reason I find this one really important is
when you're doing consultancy work,
you're often maybe involved in changing things,
changing how organizations work or building new things.
So of course, people want to measure, are we doing things well? Is what we're doing,
you know, making people more productive or less productive? And that's great. And that's natural.
And that's good. We want to measure and make sure that the changes we're making
overall having a positive impact. But in our desire to do that, we can sometimes kind of go too far
and actually cause problems.
And I hear this a lot from people
when they say things like,
how do you measure engineer productivity?
Right.
And then my kind of answer is,
well, basically you can't.
You can try and use metrics
like lines of code per day, or you can try and use metrics like average time
to close a pull request or whatever.
But the problem is, as soon as anyone knows you're measuring that,
they're going to also know that to a certain extent,
you're using this as some kind of target.
And the smartest people there are just going to game the system.
So if you start measuring how many
bugs are attributed to an individual developer, then developers will stop working on complex code.
Or if you're going to start saying that productivity is equal to the number of lines
of code changed, you're just trivializing the fact that you can spend two days debugging
the system and make a two or three line change, which has an enormous impact.
And you find this all the time in organizations with things like KPIs, where if you put them
in place, you've got to be very, very careful because if they do become targets, it's then
easy for people to try and gain these targets or feel threatened by those targets.
Are these being used to rank me or monitor me?
And actually, sometimes I use this to a certain extent
in defense of engineers and say,
well, it's very difficult to measure the productivity of a craft.
And this comes down to something which is in many ways
still a common misconception about software engineering.
Software engineering is not like an assembly line where you can measure the productivity of certain systems and their efficiency.
It is much more like a craft.
It is an intellectual activity.
And those kind of activities are very hard to measure.
And you wouldn't necessarily say, how can we measure the productivity of every ideation meeting we have?
You wouldn't even consider that because you understand that this is more abstract intellectual
exercise.
I had a similar situation back when I did contract development with people asking for
estimates around building out of an application.
And I would always tell them that the closest thing, these are non-technical people who
are trying to get a business started
or try to build an aspect of their business.
And I would say the closest thing you have to understanding
the software development process is like building a house.
But that metaphor fails in so many ways
that if you think that it's like building a house
where we can lay out the design of the house
and we can lay out how many stories
and how tall the walls are
and all the details of the house
and then you can go out and get a materials list
and then you can go out and get subcontractors
and you can pretty closely come up with a budget
for the design of a house,
especially if it's a cookie cutter house,
but even a custom home.
Here's the plan.
Here's what these parts cost.
You can take off a room, save this much money, et cetera.
That's the closest most people get to understanding custom software.
And it falls apart almost immediately because we don't have the design of the house in custom software.
All we know is we don't have that at all.
And so estimating what's our price for this project at the outset is a fool's errand. It's actually,
I believe, impossible. And so I'd have to explain that to people. And that's a tough pill to swallow
when you're trying to say, can I afford to build this software? But it's just the facts of how it
works. Yeah, for sure. It is a tough pill. It's very difficult to say to a client, you know,
I can't tell you how much this is going to cost because I can't tell you how complex it is.
Because I won't know until I know more about the domain.
And then when I know more about the domain,
I'll be able to say, well, there are certain parts
that are more complex than expected
and you can choose to have them, you know,
with the associated cost or take them away.
That's why the word architect is a really strange word.
Yeah.
Because most architects don't really do architecture. Architecture is about designing something where you know the end states.
Yep.
I always think of software architecture or systems architecture or
enterprise architecture.
It's a bit more like SimCity.
Like,
you know,
you set up industrial zones where you know,
you're going to have to have like access to lots of electricity or you'll
set up super highways close to the airport.
You're kind of like planning for growth.
You've got a certain idea of where you want to put things
to keep things in a certain order
and how you're going to move around resources.
But you're also kind of planning that things will grow organically as well
within certain areas and just trying to do your level best
to kind of gauge it right.
This episode is brought to you by our partners at Algolia. Make every search lightning fast and deliver the results your customers want every single time.
Algolia's search as a service and full suite of APIs allow teams like ours and teams like yours
to easily develop super fast search and discovery experiences.
And best of all, and this you will love, Algolia obsesses over developer experience.
Their mission is to give development teams the building blocks necessary to create a fast, relevant search experience. That's A-L-G-O-L-I-A.com.
So of these laws, the one that I actually verbally say out loud the most, I believe, is YAGNI. You ain't gonna need it. I say YAGNI to myself or to others, even sometimes outside of the world of software. I'll just say the acronym to a friend and they'll be like, what? I'm like, never mind.
It's you ain't going to need it.
And I think that it's so true in so many contexts.
It's so easy to get into it, to get into the mix and start planning.
And we just talked about how you don't really know, you know, some of the, the system is
emergent, right?
Like kind of like city planning or like SimCity.
And we build things that we don't need all the time.
And again, it kind of goes back to the idea of the cleverness
or the I know what I'm going to need later mythos.
It just lets us down so often.
And so many times, we're just not going to need that thing
that we're building.
Yeah.
It's funny you said Yagni.
When you were saying there's one that i say a lot i was thinking
it's got to be yakni because it's the same for me it's all coming and it must be the same for
any other engineer who's ever had to work on a feature where they're just thinking
is anyone really going to use this yeah or who's looked at their own code and thought
why have i spent all this time
kind of like abstracting this away
so that I can use a different kind of file system
when I know I'm never going to use a different kind.
I'm never going to use like a different storage mechanism
for this whole thing here
that I built an abstraction there for.
I didn't need that.
Oh gosh.
Let me throw my friend Nick Nisi under the bus
who is a good friend and a good engineer
and a JS Party panelist.
We're working on some software around JS Parties game show.
We have a Jeopardy-style game show called JS Danger.
And we built a web app so you can actually have a game board.
And in that web app, you have the contestants
and they have their faces.
So these are the three people who are the contestants. And we put their avatars in there. And I built the first version of things, I was
like building out the JSON structure of how we're going to load this data as we can reuse this game
board. And I just go out and I figure, well, we'll just load a URL and make an image source, you know,
so I just go out to their Twitter profiles, and I right click and I can't remember if I download
the file, or I just grabbed the URL and throw that into the JSON blob. Then I pass it off to Nick to continue working on this. And he decides that instead of
just a string, which holds a URL to an image, he's going to have like a handler function,
which does something else. And that way we can just like put their avatar or their Twitter name
in whoever it is, and it will go determine whatever their actual current photo is
and all this kind of stuff.
And then, I mean, total YAGNI by the way,
we're going to use this game board once a month,
once every few months, and we know the contestants beforehand
and it takes about 30 seconds to go grab those URLs.
But a dynamic lookup was nice,
even though YAGNI,
until something changed in Twitter's API and the CORS rules or something.
Anyways, he couldn't deterministically
figure out what the URLs were anymore.
And so then he had to write a proxy server
in order to resolve the actual URLs
of the avatar images
and get a token and all this kind of stuff.
And so, sorry, Nick, for throwing you under the bus there.
We've all done it. He was just over-engineering a thing that was totally YAGNI, and he had fun doing it. There should be like an extension to the law, which is like, YAGNI, but I want to code it.
Yeah, sometimes I realize that that's what's going on with me,
like my dot files on GitHub,
I get a new computer like once every five years.
I don't know why I go to the effort
of trying to automate all the sets of it.
And it is totally YAGNI,
but it's also kind of like,
I just kind of want to do that.
Right.
And yeah, sometimes I find myself doing that thinking,
am I writing this because it's actually useful
or I just think it's cool to have that handler function that can do this. But look, guys, we can also do XYZ.
Right. Yeah, we can, but no one needs us to, right? Which, I mean, if it's for pure joy and it's on your own dime or your own time, I'm totally cool with it. I feel like the lazy part of me
is probably the one that says YAGNI the most, because I have these two battling things.
I have the desire to build cool stuff and to think ahead and be smart,
but I also have this desire not to do extra work.
And so that, what I'll call lazy programmer,
is the one that usually says Yagni,
because the one that gets going is like,
oh yeah, and then I'm going to do this and that.
And then I start thinking, do I want to actually build out all these things?
No, no, I don't.
And so I usually say YAGNI.
And again, I think this is where it's interesting
seeing how the laws play off each other a bit
because we've already spoken about the Pareto principle,
but that applies here as well,
which is that it kind of is Yagni,
which is that 20% of the features
are going to be used 80% of the time.
Absolutely.
The vast majority of the features you're developing
for an application or a solution or whatever,
it's like a hockey curve or whatever.
So the small number of core features,
the 20% are going to be used extensively.
And then there's going to be a whole bunch of stuff
that's not used at all.
Or maybe just not used to the extent
that justifies the effort involved in building it.
And then there are those exciting moments when you're kind of whiteboarding
or starting to put something together in the editor and you're thinking,
yeah, this is cool.
And I could extract this into some kind of interface or plugin mechanism.
And it's those moments when you kind of have to stop and think,
yeah, but am I actually going to need to do this?
And if I do do this, is this going to be one of those projects
where I registered a domain name and kind of never got any further.
Right. But one argument towards Yagni, even if you aren't going to need it again. So I did a
show with Saul Pwanson recently, and he wrote VisiData, which is a very complex tool for
visualizing data inside of the terminal. And he said, if this was just for me, I don't need all of this.
But I want to have something nice.
And so what I do is I open source it.
And now it's worth all my effort.
It's worth all the stuff, even if there's things that I'm not going to need again,
because all these hundreds and thousands of people can benefit.
So one thing that my Yagni brain often misses out on
is the opportunity of providing an abstraction that other people are
going to use. I don't think in reusable libraries that I can open source as independent little
things very often. Often I'll look back at code and be like, holy cow, this is a library right
here. This could be an open source project. But some people think that way. They think, well,
maybe I'm not going to need this function again, but other people might need it. And so I'm going to actually take the time to build an abstraction, put some documentation
together and release it as open source. And now, even though YAGNI for me, somebody out there is benefiting. I think that's a nice counterpoint to that principle.
Yeah, yeah, absolutely. And that's
that kind of open source mindset of, you know, if I share it, then it could also grow on its own as well. It could get better.
Yeah.
I suppose a counterpoint again to that as well, counterpoints to counterpoint would be,
you can design it for open source and you can design it for other people to contribute to it
with a plugin mechanism or whatever. That kind of reminds me of another law, which I think is
really, really important in software design, which is Gall's law, which basically says that a complex system that works is invariably found to have evolved from a simple system that worked. Systems are not created as complex systems. They start off as simple systems, and they evolve over time.
The example that often gets used is the internet,
which started off as a way for academic institutions to share data,
and then it's become what it is today.
But you could look at things like Kubernetes as an example.
It probably started off life, of course it started off life, much more simple than it is now, right? All of these extra features and abstractions for storage systems and different container interfaces and so on, they kind of got added on over time as needed. But initially they weren't there as abstractions, they evolved.
But who knows, because then if you just kind of let different people contribute in
different ways, you also run the risk that you lose coherency and different people have different
ideas about how things should be pluggable or extendable and you end up with a project that's
no longer internally consistent. So I guess you also need to kind of make sure that when it is
evolving, at least it's evolving with a set of principles
or patterns or whatever
so that it still makes sense to people.
Yeah, so if you do set out
to build a highly complex system,
I guess what is the takeaway there?
It's that you start,
like you have to break it down
into a series of not complex systems, right?
Like you need to somehow get to a point
where your starting place is not complex.
Because if you design a highly complex system,
according to Gall's law, that's going to fail.
But if you know that the domain in which you're tackling is highly complex and there's no way of actually getting around that,
you need to break it down.
And you need to be able to build some sort of a simplistic
either representation of that system to start from
or subsystems, which can be simpler, in order to build a more complex system that can evolve from them.
Yeah, and I think that's exactly it. If someone had to create, for example, Kubernetes right now from scratch, and they were basically given the APIs and said, this is a specification of what we want and how we want it to work, yeah,
but we're not going to tell you anything about the internals.
And we're not going to tell you anything about what's been happening
in software development for the last 30 years.
There would be an enormous challenge for them because, as you said,
it's made up from smaller systems that have then been proven to work.
Like under the hood, there's etcd, which is its distributed state management system.
But distributed state management is really, really, really, really complex.
It has all sorts of challenges.
And there's some of the laws in the repo about that as well.
But they didn't invent that from scratch.
They used an existing proven system.
I think etcd is based on the Raft protocol, but I'm not sure.
But anyway, they took an already proven mechanism for kind of consensus-based representation of state in a distributed system and then plugged that in.
And then they took existing systems like volumes, file systems in Linux, whatever it might be,
and kind of composed it together from that.
It wasn't like every part of the system was created from scratch.
That actually plays into what I'll call an interpretation of Hanlon's Razor.
So Hanlon's Razor is: never attribute to malice
that which is adequately explained by stupidity.
I've also heard that as incompetence versus stupidity, if we're gonna start to mince words there.
Yeah.
Which I think is a great thing to fall back on. I think it's a gracious way, writ large, to approach people and life, is to think that probably this was not ill will, but probably this was incompetence or stupidity, whatever the situation happens to be.
Now, it's not always that, it could be ill will. But if you start with that assumption,
that people are generally not against you, but happen to be incompetent or make mistakes or
stupid, then you go from there and it's a much better way to live one with another. But I think
the interpretation of that or a slight change of that, which applies to this complexity situation,
is that a lot of times we attribute stupidity
to the programmer that came before us.
And in a malicious way, right?
Like this person either didn't know what they were doing,
or they left this mess on purpose, or whatever we have.
There's always the previous programmer, the scapegoat, the one that you're blaming whatever situation is on.
I think the way that you can slightly change that
in a positive way is that don't attribute to stupidity
that which can be explained by lack of information,
lack of context.
Because the person that made that decision
which no longer makes sense or is confounding,
a lot of times they weren't stupid, they weren't malicious,
they just didn't understand the system yet.
Complex systems evolve over time
and they evolve as more information comes into the game.
And so a lot of those decisions actually were the best decision at the time.
It just didn't scale.
Yeah.
And also a lot of decisions
just have to get made,
sometimes quickly.
Yeah.
And sometimes without as much time
as you would like to take the decisions.
And I think part of this is,
I suppose, an emotional maturity thing.
I think you learn it slightly as well
when you've been around long enough
to have been
sitting around at a table and someone just absolutely shredding something to bits and
saying, what was this person thinking? It's like, I was looking at this thing, total amateur hour, like, what were they doing? And you're sitting there thinking, you know, how long before I have to tell them this was me?
And I thought it was the right thing at the time. And I get it. I understand that it wasn't that smart.
But at the time, I didn't know what I knew now
or I didn't know it was used in this way.
So I think that is a good one.
I think it's just also an important one as well.
I think technology can sometimes be a little bit of a harsh word for this,
that we just need to be kind and inclusive towards each other.
You can see amazing things in the world of open source, in terms of, say, for example, the time people spend contributing to projects for no other reason than they just think that they're cool, they love them and want to support them, and give them their time. And that's wonderful.
And you can also see people just kind of rip stuff apart to try and show how clever they are. And we all grow and we all learn, and we tend to learn and grow the best from people who are inclusive. And when we make mistakes, look at those and say, hey, you know, instead of tearing it into pieces to show how clever you are, say, maybe we can look through this together and I've got a few suggestions, and kind of like guide that person through that.
Yeah.
It's easy to forget that there's a human on the other side of that text area
because it's all text-based communications
and because all of the cruft
and all of the stuff of life
that you're bringing to your laptop today
and I'm halfway around the world
and I got all my own stuff that I'm bringing
and we're just typing into a thing
and hitting submit or send and we see an avatar, maybe it's a picture of your face, but maybe mine's just a weird green blob representation of me when I was 18 years old. And one of the reasons why, honestly, we get way less blowback on things that we say
that are stupid on our podcast or whatever,
misrepresentations or whatever it happens to be
because there's an empathy with voice that lacks without it.
And people just give podcasters that benefit of the doubt
because they can tell this is just a person talking.
I can hear their voice, there's inflection.
You can hear doubt even if you're saying the words, whereas if you just type the same exact words out, that's removed. And a lot of the malice in the way that we treat each other online, I think, is because we're just so abstracted away from the human on the other side of that text area. And if we were just more aware of that, and thinking about that,
and thinking, how is this going to affect this person's day? You know, like, what I say about
their open source project or whatever it happens to be? Well, I think that we would all be a little
bit better off. Yeah, absolutely. But it's hard. It's hard to remember that in the moment.
It is hard. But I think that's a really important thing. And as work becomes more distributed,
and teams become more distributed,
those kind of things are more likely to happen.
I mean, I found, again, through consultancy work,
having to work with different teams and so on,
one thing I'll often suggest on engineering teams that I'm working on,
particularly if we've got a mixture of people, maybe contractors,
different organizations who have kind have all been thrown together,
is when you do your pull requests,
look over the code, take your notes,
but then go and sit down next to the person
and talk it over together.
And that was something I learned
after really just seeing the occasional incident
where someone would write something
and either maybe they were trying to be funny
and the sarcasm they were using didn't come across in text or they were just having a bad day and
didn't do exactly what you say which is think you know there's another person on the end of this
who's maybe also having a bad day and actually instead asking someone to say take the notes
and they can be you know they can be constructive criticism we should have put things but then go
and sit with them and have a chat about it and make it more
of a two-way dialogue, and then you get
more empathic conversations
happening, and you'll probably both
get a lot more out of it as well.
I'm Jerod Santo, Go Time's producer and a loyal listener of the show.
This is the podcast for diverse discussions from around the Go community.
Go Time's panel hosts special guests like Kelsey Hightower.
And sometimes you can leverage a cloud provider and make margins on top.
That's just good business.
But when we're at the helm making the decision, we're like, yo, forget good business. I'm about to deploy Kafka to process 25 messages
a year. It's nerd pride, right? Picks the brains of the Go team at Google. You don't get a good
design by just grabbing features from other languages and gluing them together. Instead,
we try to build a coherent model for the language where all the pieces worked in concert.
Shares their expertise from years in the industry.
Don't expect to get it right from the start. You'll almost definitely get it wrong. You'll
almost definitely have to go back and change some things. So yeah, I think it goes back to
what Peter said at the start, which is just make your code, write your code in a way that is easy
to change. And then just don't be afraid to change it.
And has an absolute riot along the way.
Yeah, you know that little small voice in your head
that tells you not to say things?
What is that?
How do you get one?
You want one of those?
Is it like an in-app purchase?
It is go time.
Please select a recent episode,
give it a listen and subscribe today.
We'd love to have you with us.
All right, Dave, hit us with another law. Okay, so I'm going to butcher the pronunciation, Hofstadter's law.
So apologies to anyone who can pronounce the word properly.
It always takes longer than you expect, even when you take into account Hofstadter's law.
I think this one is great.
It always makes me smile.
Yeah.
It makes my colleagues smile when I say it to them and say, basically, the law is that it's always going to take longer than you expect,
even though you know it's going to take longer than you expect.
You just can't avoid it.
Somehow, still, it's going to take longer.
And I love it because it applies to software development,
but it pretty much applies to everything else in life as well.
I don't know if that's because we're naturally optimistic creatures
or something like this, but things just do take longer.
There's just always that little bit of complexity that you start to unravel
and you look at it and think,
Oh,
this is going to be something that's going to,
I'm going to,
I'm going to lose an hour,
you know,
working out what's going on here.
Right.
And then four hours later,
you're like,
Oh,
I've actually moved backwards.
And then the next day after that,
you're thinking I've invested so much time now.
I've just got to at least get this fixed somehow.
So that one always makes me smile.
It's sunk cost fallacy,
but yeah,
absolutely.
My boss,
when I was doing my early days consulting,
he would ask for estimates,
you know,
because you got to come up with something.
And his rule of thumb as a manager of developers was take the developer's estimate and then just
triple it. And then you might be close. Like, and I always thought that was ridiculous as a young
man. I was like, seriously, triple it? He's like, yeah. So if they say six hours, you know,
triple that. And there's your estimate. And it turns out you still undershoot sometimes when
you triple that thing. And if you don't, then you just get pleasantly surprised.
It's funny you say that. I mean, I do that all the time. Like, you know, people will say, how long do you think it's going to take to do XYZ? And perhaps one of the more junior people in the room will say, we can probably do this in two weeks. And then internally I'm thinking, okay, so this probably means it's about two months then. Because I've been there, I know what it's like. And even my estimate of two months is probably way off, right?
Sometimes you see that shocked look on people's faces, basically, perhaps more business-minded people, and they're like, really? And like, yeah, and I'm really sorry to say this, and I know it's a tough one to explain,
but it is just going to take longer than we expect.
Even though it sounds simple,
there's going to be stuff that bites us.
So either we just accept that and plan for it,
or we go for optimism,
but probably end up late.
Yeah, I think if somebody says,
this is going to take two weeks,
I think at that point you have basically unbound risk,
because it means they have absolutely no idea. If they say two hours,
they may be off. Even by an order of magnitude, I guess it would be 20 hours.
That'd be quite a bit. But they go to a day. But if they say two weeks,
I can't think two weeks down the road on a software project.
I'm not sure anybody else can accurately on a recurring basis. Maybe you're right
here or there. When I
was still doing consulting and doing development hours, basically my smallest unit of time was a
half a day. I would say this is a half a day, this is a day, and the longest unit of time was three
days. Anything bigger than three days, sorry, you have to actually re-scope this and break it down
into smaller pieces because I cannot estimate more than this much time in any
sort of accuracy. Yeah. And that's just being brutally honest with yourself about, you know,
the complexities of software development. I think that's, that's exactly right. And that's why
in Agile, you know, there's this whole idea of breaking down large stories into smaller stories
and, you know, until you really break it down to the task level
where you're saying,
how many chunks of my day is this going to take me?
It's kind of just a big question mark.
And of course, that means you need tons of details
to break it down to that level of granularity,
which is why you can then get this kind of conflict sometimes
with people saying,
how long will it take to build a system that does X?
And you're like, well, you know, 18 months.
What is X?
Or it could be 18 days
if you just need something quick and dirty
that kicks off a Lambda function
and writes into a Google Sheets document.
But like, what is X?
And they kind of look at you as if to say,
why am I getting this kind of attitude? It's like, it's just so hard to know, and neither of us actually understands what X means, even if we spend two days writing a 70-page document trying to define what X is.
Right. We still haven't defined it. Well, you get a few days down the road and X has changed, because you have more information, and so now it's a moving target.
One of my favorite things, slash least favorite things, I roll my eyes or giggle depending on how I feel that day, moments, is when somebody announces a new product or service on Hacker News. Invariably, one comment, at least one, will say, I could build this over the weekend. Like, invariably. What's the big deal here? I could build this in a weekend. And I just have to think, you do not understand. You have not been writing software very long, have you? Because if you're just looking at, yeah, you could build a shoddy subgroup of the main functionality that only fits the part, you know, the happy path and your particular use case, in a weekend. And that's probably what this thing started as.
A lot of products start off as a weekend hack
or just a proof of concept,
and I got it working, it's the 80-20 rule, sort of.
You spend 80% of the time on the last 20% of the work,
or you're 90% done, you only have 90% to go,
kind of a thing.
But we tend to definitely overestimate our skills
and underestimate
the complexity of these systems. Which leads us to Tesler's law, the law of conservation of complexity.
This law states that there's a certain amount of complexity in the system which cannot be reduced.
So we talk about, break it down and make it simple, and the cold hard fact is sometimes there's just no further down it can go.
The complexity is inherent in the thing that you're trying to solve.
This is one you said has resonated with you quite a bit.
Yeah, this one, when I read it, it really did strike me as quite profound
because I always loved this idea a bit like in mathematics
that you can sort of reduce and simplify and make things more elegant and eliminate complexity.
But what I like about Tesler's law is it does kind of just state that the cold hard truth is that there comes a point where you ain't going to make that system any more simple.
So the shoddy, you can build it over a weekend, two days project.
Even that, if you look at it as a system overall,
there's a ton of complexity there that maybe the coder hasn't put.
But the complexity still exists for the systems administrator who has to kind of like wake up and then find a way to restart the system
late at night, or for the end user who has to kind of have a workaround
or whatever else it might be
So of course you can eliminate unnecessary complexity, if you've just done something in a way which is needlessly complex, but there will just always be certain things that you can't get rid of, like time zones.
Yeah, time zones are always going to be hard. That hits close to home for you and I, doesn't it? We had some scheduling problems because of time zones and the complexity of that system, right? The show that nearly never happened.
And in fact, one of my earliest software development projects, I was working on chipsets and device drivers, and we had to do some stuff around time zones.
And it was extremely painful.
Oh, man.
But yeah, going back to the conservation of complexity,
there are some things you just can't avoid, like time zones or financial transactions in software
or transactions that you really have to be absolutely certain
have happened.
It's pretty much impossible to always be certain 100% of the time
that you hope to have done, say, for example, a funds transfer or whatever. So whether you deal with that complexity by having an end-of-day batch process that does some checks and balances, or a dead letter queue for failed messages, or, you know, something like a modern system where you've got retry topics or whatever
else it might be. There's just no getting away from the fact that if you're trying to send a
message from A to B, it's an inherently complicated thing to do in the world of computing. And you
can't just magically wave a wand and have a solution that makes that complexity go away.
Absolutely. I was thinking about time zones again, and the funny joke people made around recent advancements towards Mars is that the problem with going to Mars is we have to add a new time zone. And the complexity that comes in, the thing about time zones, is they're geopolitical. I mean, they wrap around cities, they change because of politics.
They're really complex.
And the reason I bring that up is because sometimes the complexity comes not even necessarily in the domain,
but the fact that your software exists over a time span.
And it has to apply in the current time span, not time zone.
But the world changes, right?
The complexity might be that the ground is swept out
from underneath your software over time,
so you may have handled the complexity that was in front of you,
but you didn't actually account for the complexity
that was coming your way.
That's incredibly hard to do.
Yeah, and I mean, it's kind of getting you coming and going here, isn't it?
Because if you look at things like all the panic that there was about Y2K,
well, the panic came from classic YAGNI, which was people probably were rightly saying, we don't need to use more than two digits to represent the year. If anyone's still running this software 30 years from now, the world's already in a lot of trouble anyway. So don't worry, like, yeah, don't worry about, you know, dealing with the millennium.
Was it Bill Gates that somebody said, like, who's ever going to need more than, like, 48 kilobytes of memory? I can't remember what the amount was or who said it. But it was that exact kind of naivety, of who's going to need four digits to represent the year.
And it's like, this is the challenge, isn't it? We just don't know.
Like, yeah, do we just use a time zone library, or are we flexible about time zones? And like you said, it can be a geopolitical issue. Time zones can change, daylight savings, even, you know, time itself can be changed. And with daylight savings and stuff like this, and trying to engineer that into systems with the flexibility to incorporate that change
could be hugely time-consuming.
And then you've got to do that balance between,
do we need it?
How badly is it going to bite us?
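As a concrete flavor of that time zone complexity, here is a small sketch using Python 3.9+ and the standard-library zoneinfo module, not anything from the episode: adding 24 real hours across a daylight-saving boundary lands on a different wall-clock hour. The zone and dates are purely illustrative.

```python
# A small sketch of the time zone complexity being discussed (Python 3.9+,
# standard-library zoneinfo). The zone and dates are illustrative only:
# 24 real hours across the 2020 US spring-forward boundary shifts the wall clock.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")
before = datetime(2020, 3, 7, 12, 0, tzinfo=eastern)  # noon EST, day before DST starts

# Convert to UTC, add 24 elapsed hours, convert back to local time.
after = (before.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(eastern)

print(before)  # 2020-03-07 12:00:00-05:00
print(after)   # 2020-03-08 13:00:00-04:00 -- same elapsed time, different wall clock
```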
Well, that plays nicely in one of my other favorite laws,
which is Murphy's Law,
which doesn't apply strictly to software systems,
but certainly applies to software systems,
which is that anything that can go wrong will go wrong.
So I am a card-carrying pessimist.
I like to think that I'm a realist,
which makes me maybe an optimistic pessimist,
but I tend to think about what's going to go wrong.
And I think that makes for a pretty good software developer,
even though your code can sometimes get more complicated
than it needs to be because
you're accounting for things.
But I've just lived long enough to know that,
you know what, Murphy knew what he was talking about.
And if something can go
wrong, it's probably gonna, and you better be ready
for it. Oh yeah,
absolutely. And it could be
when you're looking at a pull request and
thinking, this certain section of code,
when I think about it,
I'm not convinced that's thread safe,
but it's never really going to happen
that we get a context switch at this time or whatever.
Right.
But no, it will happen.
And when you're up late at night
trying to fix this issue,
that's definitely one that you'll be remembering.
Yeah.
And I think kind of having that healthy,
skeptical view to things
breaking is really important, and I suppose plays nicely into a law that I added recently, which is, I suppose, a bit more kind of academic, but I realized is super important, which is the fallacies of distributed computing. I'm gonna have to look through my notes to get some of the best examples, but essentially, when we're programming, we often kind of just assume that we can do, like, a remote procedure call, and we know that under the hood there's some stuff going over the wire. But the fallacies of distributed computing are that the network is reliable, that latency is zero, the network is secure,
Topology doesn't change.
There is one administrator.
And this is a bit like Murphy's Law
because basically Murphy's Law is like,
well, we know there's these fallacies
and we're doing anything that's distributed.
Things can go wrong.
Things can change.
People can change servers around
or remove something from the rack.
It gives them a new IP address.
We can get weird latency issues.
And if the software is running for long enough,
we'll definitely find those issues one way or another.
Or sometimes you'll never find them.
You'll hit them, but you'll never actually be able to understand
because of the infrequency of it.
So here's a small example.
We have a bot in our changelog Slack
that posts when we publish a new episode
and it's integrated into our system.
And we publish a new episode maybe five, six times a week,
but it just posts in there,
hey, new episode of Brain Science,
here's a link to the Slack community. And about once every four or five months, it posts it twice, like boom, boom. Now, in the scheme of things, this is a good problem to have, right? Because, you know, it's not a big deal. Everybody on our Slack channel is like, yeah, funny. Jerod can't code.
I'm probably never going to actually get to the bottom of that because it happens
so infrequently and it's so small
stakes.
How would I even go about debugging such a thing?
I don't care enough to do so.
Of course, I could probably find out what's going
on there.
Computers
are hard, especially in terms
of distributed computing. Networked
computers are extremely complex.
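For readers who want the shape of one common fix for that kind of double-post, here is a hedged sketch, not the actual Changelog bot: treat the publish event as something that may be delivered more than once and deduplicate on a key before posting. The episode_id key and post_to_slack() function are invented for illustration.

```python
# A hedged sketch of one common mitigation for the double-post problem just
# described: assume the "new episode" event can be delivered more than once,
# and deduplicate on a key before posting. episode_id and post_to_slack() are
# invented for illustration; this is not the actual Changelog bot.
import time

_recently_posted: dict[str, float] = {}  # episode_id -> time of last post
DEDUP_WINDOW_SECONDS = 3600

def post_to_slack(message: str) -> None:
    print(f"posting: {message}")  # stand-in for a real Slack API call

def announce_episode(episode_id: str, message: str) -> None:
    now = time.time()
    last = _recently_posted.get(episode_id)
    if last is not None and now - last < DEDUP_WINDOW_SECONDS:
        return  # duplicate delivery of the same event; drop it silently
    _recently_posted[episode_id] = now
    post_to_slack(message)
```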
Yeah, they are such
a pain. Life would just be so much
easier if people
did not network things together.
We'd just play solitaire by ourselves.
Yeah, why did
anyone ever come up with networking
computers? Let's just build bigger mainframes
and run everything on one big mainframe. That would definitely make life easier.
Well, we're getting close to the end of our time here. Any big ones that you had on your list we haven't talked about? Of course, we're not going to be comprehensive. We'll link up all of these laws we've talked about, and we'll also, of course, link Dave's hacker-laws repo, so you can go read them for yourself.
But maybe one or two more real quick, and then we'll call it a day.
Maybe one that's not in the repo yet. I'm still considering this one, because just like any larger project,
when you get a number of contributors, people come up with ideas, and it's difficult to know where to draw the line between,
is this something that you can reasonably say is a kind of roughly well-known
principle or just something funny,
something someone's come up with which might become a principle someday.
But I did hear about something.
And it was when you talked about Murphy's law that it tweaked my memory. It's called Schrodinger's backup, which I thought was great.
And Schrodinger's backup basically says the state of any backup is not known until you restore it.
Oh, I like that one. Because if you've ever done any kind of disaster recovery type stuff, that is exactly the case. Until you've tested that backup, until you've actually restored it, you don't really know. And you can dry run things and you can test things out, but there's always that uncertainty there.
That's great. I do like that one.
It ties nicely into Murphy's Law as well.
Absolutely.
It reminds me of how I think about backups
which I say sometimes is that nobody actually wants backups.
Everybody wants restores.
The backup is just a liability actually.
It could be a data breach scenario.
It could be wildly wrong. It could be outdated.
It could overwrite things that are valid.
There's all sorts of things that can go wrong with backups.
And if we could just skip backups
altogether, we would.
But what we really want is the restore.
And so backup is kind of a means to an end.
Restore is what we're after.
So make sure you can restore that backup or it's
completely worthless. That's Schrodinger's
backup right there.
And if you search it on the internet,
you'll probably find a Reddit thread
with some horrifying stories
of people who've had terrible experiences.
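A minimal sketch of what taking Schrodinger's backup seriously can look like in practice, under made-up assumptions (a gzipped tar archive containing an app.db SQLite file); the point is simply that the check is an actual restore, not a status flag from the backup job.

```python
# A minimal sketch of acting on Schrodinger's backup: actually restore the
# archive into a throwaway directory and check what came out, rather than
# trusting that the backup job reported success. The archive layout (a .tar.gz
# containing app.db, a SQLite file) is a made-up assumption for illustration.
import sqlite3
import tarfile
import tempfile
from pathlib import Path

def verify_backup(archive: Path) -> bool:
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)                 # the actual restore
        restored_db = Path(scratch) / "app.db"      # assumed layout
        if not restored_db.exists():
            return False
        conn = sqlite3.connect(str(restored_db))
        try:
            # integrity_check returns a single row ('ok',) when the database is healthy.
            return conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
        finally:
            conn.close()
```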
Another one which I think is going to come in soon
is Box's law, which comes from statistics,
which basically says that all models are wrong,
but some are useful.
And there was a bit of discussion
about whether this is valid for software development or not.
The discussion kind of came to the conclusion that it is.
It's actually very similar to Joel Spolsky's law
of leaky abstractions, where he says that
all non-trivial abstractions, to a certain extent, are leaky.
And I think these two are kind of essentially saying the same thing,
which is that when we're doing any kind of software development, we're modeling a system of some kind, you know, which creates some kind of abstraction that represents something like a network or a train timetable or whatever it might be. And of course, it's only an abstraction. There are going to be, you know, mistakes with it, simplifications that have to be, because to reproduce it in its entirety would be too time-consuming and complex.
But it doesn't mean that it's not useful.
And I guess to a certain extent,
that's where some of the whole idea of the craft
of software engineering comes in.
It's like, how do you draw the abstraction?
Where do you draw the line?
Where do you stop?
Where do you say we need more detail?
It's a process that I guess we're kind of always learning
and hopefully growing on that one as well from our experiences.
Yeah, it seems like we're still in the phase of like, is it an art? Is it a science?
We can't yet call it a science because there's not like hard, fast.
There's these rules and there's idioms and there's practices, best practices.
But it's not like civil engineering where we can just plug in all the numbers and do the math and say with like 99.9% certainty, yes, this bridge can hold that weight.
You know, that's a science.
And we aren't there yet because it's so emergent and so figuring things out as we go.
But I feel like we're on our path to that, hopefully, maybe someday.
But so many parts of it are changing.
Even today, a colleague was saying,
we've got our systems running 20% CPU utilization,
20% RAM.
Could we halve the number of systems?
And he was saying, in theory, you could.
You'd have to look at potential congestion at certain times, like peak loads and things like this.
Right.
But at some point, as you start to kind of constrain resources,
you're just going to see weird stuff happen.
Other things that you did not expect to be a problem
will suddenly be a problem.
Like suddenly you'll start getting disk issues
or you'll get some kind of network issues or something
because these systems by their nature are so complex.
There's so much going on that, you know, we have the abstractions like CPU, network, disk, RAM, whatever.
But the physical processes that underlie all of this and the hardware that underlies it is highly complex.
And complex systems, I mean, this is, I suppose, chaos theory.
But complex systems are systems which have wildly unpredictable results,
even with quite similar inputs.
You know, you run the software on day one,
it runs as you expect,
run it on day two,
and you get something wildly unpredictable.
And that was because of the time zone
or whatever else.
Yeah, exactly.
Well, we just touched the surface
of these different laws and principles.
I will submit to the listeners out there to check out these laws.
If you haven't heard of the ones we discussed, there are many others.
And I think even just having maybe not intimate knowledge of all these things, but maybe call it practical or working knowledge will make you a more well-rounded developer or software person.
Whatever your role happens to be.
These are things that others who've come before you
have found to be generally true,
or maybe specifically false in specific instances,
but useful nonetheless.
Dave, this was a lot of fun.
I really appreciate you joining the show
and talking to me about these hacker laws.
Thanks, Jared.
Really enjoyed it.
It's been great having a conversation.
I'd also use the opportunity to thank the translators for the project, a number of people who have just been tirelessly working at translating laws as they come in, which I'm just blown away by. I think that's fantastic to see.
And also to shout out, a colleague of mine has started a podcast called The Venture, which is all about
venture builders in Asia.
It's quite cool.
They've got some interesting people talking on that.
So that might be one to check out if you're interested in building new ventures.
Absolutely.
Hook me up with a link to that and we'll put it in the show notes.
Links to all the laws discussed, all the things, you know, we put them in the notes right there
for easy clicking.
So that's our show.
Thanks so much for listening.
We'll talk to you next time.
Thanks, Jared.
All right.
Share your thoughts on this episode at changelog.com slash 403.
This is episode 403.
You can also open your show notes and click discuss on ChangeLog News.
That'll take you right to the comments.
And of course, huge thanks to our partners, Linode, Fastly, and Rollbar.
They get it.
Also, thanks to Breakmaster Cylinder for making all those awesome beats.
And if you haven't heard, we have a master feed of all of our podcasts.
You get all our podcasts in one single feed.
It's the easiest way to listen to everything we ship.
Head to changelog.com slash master or search for Changelog Master in your favorite podcast app.
You'll find us.
Thanks again for tuning in.
We'll see you next week.