Future of Coding - Out of the Tar Pit by Ben Moseley & Peter Marks
Episode Date: April 1, 2023

Out of the Tar Pit is in the grand pantheon of great papers, beloved the world over, with just so much influence. The resurgence of Functional Programming over the past decade owes its very existence to the Tar Pit's snarling takedown of mutable state, championed by Hickey & The Cloj-Co. Many a budding computational philosophizer — both of yours truly counted among them — have been led onward to the late great Bro86 by this paper's borrow of his essence and accident. But is the paper actually good? Like, really — is it that good? Does it hold up to the blinding light of hindsight that 2023 offers? Is this episode actually an April Fools joke, or is it a serious episode that Ivan just delayed by a few weeks because of life circumstances and his own incoherent sense of humour? I can't tell. Apologies in advance. Next time, we're going back to our usual format to discuss Intercal.

Links

Before anything else, we need to link to Simple Made Easy. If you don't know, now you know! It's a talk by Rich Hickey (creator of Clojure) that, as best as I can tell, widely popularized discussion of simplicity and complexity in programming, using Hickey's own definitions that built upon the Tar Pit paper. Ignited by this talk, with flames fanned by a few others, as functional programming flared in popularity through the 2010s, the words "simple", "easy", "complex", and "reason about" became absolutely raging memes.

We also frequently reference Fred Brooks and his No Silver Bullet. Our previous episode has you covered.

The two great languages of the early internet era: Perl & Tcl.

For more on Ivan's "BLTC paradise-engineering wombat chocolate", see our episode on Augmenting Human Intellect, if you dare.

For more on Jimmy's "Satoshi", see Satoshi Nakamoto, of course. And for Anonymous, go on.

Enemy of the State — This film slaps.

"Some people prefer not to commingle the functional, lambda-calculus part of a language with the parts that do side effects. It seems they believe in the separation of Church and state." — Guy Steele

"my tempo"

FoC Challenge: Brooks claimed 4 evils lay at the heart of programming — Complexity, Conformity, Changeability, and Invisibility. Could you design a programming that had a different set of four evils at the heart of it? (Bonus: one of which could encompass the others and become the ur-evil.)

The paper introduces something called Functional Relational Programming, abbreviated FRP. Note well, and do not be confused, that there is a much more important and common term that also abbreviates to FRP: Family Resource Program. Slightly less common, but yet more important and relevant to our interests as computer scientists, is the Fluorescence Recovery Protein in cyanobacteria. Less abundant, but again more relevant, is Fantasy Role-Playing, a technology with which we've all surely developed a high degree of expertise. For fans of international standards, see ISO 639-3 — the Franco-Provençal language, represented by language code frp. As we approach the finality of this paragraph, I'll crucially point out that "FRP", when spoken aloud all at once as though it were a word, sounds quite like the word frp, which isn't actually a word — you've fallen right into my trap. Least importantly of all, and also most obscurely, and with only minor interest or relevance to listeners of the podcast and readers of this paragraph, we have the Functional Reactive Programming paradigm, originally coined by Conor Oberst and then coopted by rapscallions who waste time down by the pier playing marbles.

FoC Challenge: Can you come up with a programming where informal reasoning doesn't help? Where you are lost, you are without hope, and you need to get some kind of help other than reasoning to get through it?

Linear B
LinearB
Intercal
Esolangs

FoC Challenge: Can you come up with a kind of testing where using a particular set of inputs does tell you something about the system/component when it is given a different set of inputs?

It was not Epimenides who said "You can't dip your little toesies into the same stream" two times — presumably because he only said it once.

Zig has a nicely explicit approach to memory allocation.

FoC Challenge: A programming where more things are explicit — building on the example of Zig's explicit allocators.

Non-ergonomic, Non-von Neumann, Nonagon Infinity

One of Ivan's favourite musical acts of the 00s is the ever-shapeshifting Animal Collective — of course 🙄. If you've never heard of them, the best album to start with is probably the avant-pop Feels, though their near-breakthrough was the loop-centric Merriweather Post Pavilion, and Ivan's personal favourite is, as of this writing, the tender psychedelic folk of Prospect Hummer.

Jimmy's Philosophy Corner

To learn more about possible worlds ("not all possibilities are possible"), take a look at the SEP articles on Possible Worlds, Modal Logic, and Varieties of Modality, and the book Naming and Necessity by Saul Kripke. For more on abstract objects ("do programs exist? do numbers exist?"), see the SEP articles on Platonism in Metaphysics and Nominalism in Metaphysics, and the paper titled A Theory of Properties by Peter van Inwagen.

Music featured in this episode: Jimmy's Philosophy Corner got a new stinger. No link, sorry. Why does this feel like a changelog?

Get in touch, ask us questions, send us old family recipes:
Ivan: Mastodon • Email
Jimmy: Mastodon • Twitter
Or just DM us in the FoC Slack.

https://futureofcoding.org/episodes/063
Support us on Patreon: https://www.patreon.com/futureofcoding
See omnystudio.com/listener for privacy information.
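Since the FRP paragraph above never shows what Functional Relational Programming actually looks like, here is a tiny, hypothetical Python sketch of the general idea (all relation names and data are invented for illustration, not taken from the paper): essential state lives in a few base relations, and everything else is a derived relation computed on demand by pure functions, so no mutable state accumulates.

```python
# A hedged sketch of the FRP idea: base relations hold the essential
# state; derived relations are pure functions over them. The names
# (employees, salaries, etc.) are made up for illustration.

# Base relations: plain immutable sets of tuples (essential state).
employees = frozenset({("alice", "eng"), ("bob", "eng"), ("carol", "ops")})
salaries = frozenset({("alice", 120), ("bob", 100), ("carol", 90)})

def select(rel, pred):
    """Relational selection: keep only the tuples matching a predicate."""
    return frozenset(t for t in rel if pred(t))

def join(r1, r2):
    """Natural join on the first column (the shared key)."""
    return frozenset((k, a, b) for (k, a) in r1 for (k2, b) in r2 if k == k2)

def eng_salaries():
    """A derived relation: recomputed fresh from base relations, never stored."""
    engineers = select(employees, lambda t: t[1] == "eng")
    return frozenset((name, pay) for (name, _, pay) in join(engineers, salaries))

print(sorted(eng_salaries()))  # [('alice', 120), ('bob', 100)]
```

Because the derived relation is a pure function of the base relations, there is nothing to keep in sync by hand — which is roughly the pitch the paper makes.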
Transcript
No, you can't segue off of something I'm cutting out of the show. That's not fair.
I know. I wasn't segueing. I was just... this is just a fact.
Okay, you can't keep it. See, now I'll just say a bunch of cancelable takes, like: type systems are the thought police that force us into a conformant monoculture and suffocate diversity. The last good programming language was Perl.
I'd actually agree with that. That's not bad. And I would also say not just Perl, but Perl and Tcl. The two of them, kind of the also-rans of the early internet era. Both of them very good programming languages that are now just punchlines to jokes. And yes, I say T-C-L. I don't say "tickle."
So you don't say tickle?
No, I don't say tickle. I never did any Perl.
You never did any Perl?
I've written it, like, for little test programs or whatever, right? But usually, at the time, I wrote PHP and then I translated it to Perl, which actually is like a one-to-one translation for little 20-line scripts.
It's like going from Ruby to Python, right?
They're the same language, basically.
As far as anybody's concerned, they're the same language.
So you can just straight up copy-paste from one to the other and it works.
I don't even know why we bother having Ruby and Python.
We should just have Ruby.
Yeah, do you have anything that you wanted to open with?
I've got one. So, we got... I'm now speaking to the audience, which I love doing. Though usually I do it in the edit, not in the actual recording. So this is awkward, because Jimmy's looking at me and I'm talking to nobody, and so there's a bit of a hallucination going on here.
We got a ton of feedback on the last episode, way more than normal from so many different
sources like Mastodon and on the Slack.
And I got email and it was great.
That is very cool.
And I love that.
And I think more feedback coming in is a good thing.
So if you're listening to this and you have things that you think about what we're about to say, don't hesitate to tell us the things that you think.
Yeah, we have thick skins. We can take it.
Yeah, I am all for it. Right? Like, you know, the point of this show is to explore these concepts, not just between the two of us, but in general. To get people talking and exploring these papers and
thinking about these topics. And yeah, I'm going to have takes that you don't like. And Ivan's
going to have takes that everyone enjoys and loves and thinks are perfect. Yeah, totally.
But yeah, I think it's important that we're, you know, starting a conversation rather than just
like concluding one. That's always been my goal with these things.
And I hope, you know,
what I never want to do with these papers personally
is act as if we've settled the issue.
I don't think we're here to give answers.
We're here to explore and question things.
I think that's important for all of these papers
to keep in mind that there's no end to this dialogue.
Yeah. And there was one particular bit of feedback that I received that I actually wanted to respond to on the show, because I've been dancing around this, and I don't know if this will make it into the edit or not. I'll think about that. But it's from somebody on the Slack who goes by Personal Dynamic Media. They posted a very long,
very thoughtful response
to a whole bunch of different aspects
of the Brooks episode.
And I really enjoyed reading through that.
And I have it on my list
to go and actually write some,
you know, fine-grained replies
to specific points.
But there's one thing
that I wanted to actually say on the show,
just because I think that
now that the show is in a new form,
you know,
Jimmy's here and we're doing papers and this is kind of settling into a thing.
There's a point of confusion maybe about it that I just want to clear up.
Man, I sound like I'm becoming the manager of some large software project in the mid-80s when I use that kind of language.
That weirds me out.
So I'll just read the quote from the feedback from Personal Dynamic Media.
In general, it's more helpful to ask,
what was it about this person's experience and environment
that led them to view things this way?
Are those things relevant to me and to now?
If so, how?
If not, what is different? ...than to say "this paragraph..." or "he's wrong here."
So the feedback is: rather than being kind of dismissive and flippant, and maybe even
disrespectful or just like discarding things, it's better to do the step of kind of wondering
why it is the way it is.
Like why, you know, did the author writing from the perspective they were writing from
at the time they were writing from, why did they write that thing?
Was it for an audience that maybe existed at the time that the work was done that no
longer exists?
Or was it in reply to some other thing that we don't see because it wasn't explicitly
called out?
Because maybe it was more of a subtle kind of retort.
You know, we refer to that as subtweeting these days.
And so what I wanted to say in reply to this is the sort of the coy remark that when we're
saying this paragraph or he's wrong here, it's helpful to wonder why are we saying that and consider that it is a very
deliberate conscious choice. It's something that we have some intentionality behind what we're
doing. And I wanted to mention that on the show because this episode is going to be,
it might be more of that than other episodes have been. It might be the most of that that we will have for a while. I don't know,
it depends what kind of, you know, dark web papers we end up surfacing over the coming year.
But this is an episode where there's, I think, going to be a lot of
contention between your present hosts and the authors of the paper in question. And so,
yeah, if there's a little bit of dismissiveness this time around, know that it is being done for a deliberate purpose. It is not just bludgeoning opinions we don't appreciate, or ignoring perspectives that are valid but maybe a little bit alien to us. We are very consciously thinking about why things are and how they got to be that way, and choosing to respond in a particular way for a particular reason.
So, yeah. I guess, since the opening segment of this show was about how we love to be dismissive of things that are, you know, truly terrible and unforgivable, today we bring you a truly terrible and unforgivable paper called Out of the Tar Pit, by Ben Moseley and Peter Marks.
Okay. So, just, you know, I don't think many of those references made it directly into the episode. They've been kind of sprinkled throughout. But, like...
Uh-huh.
Ivan does not like this paper.
I do not.
And when we mentioned... when we were talking about it, and we talked about doing Brooks, I said, you know,
we should definitely do No Silver Bullet,
but my requirement was we had to do Out of the Tar Pit right afterwards.
And Ivan did not like that idea at all.
I'm fine with it.
I just thought this would be a great one to save for, like,
an April 1st episode, or some kind of, you know, Halloween special or something like that. This is the peak of the mountain of crap in terms of papers, and so if we're gonna summit that mountain, I'm gonna want to do it as a special occasion, because there's not many papers that I've read that I have as much disdain for as this particular one.
And I think, you know, this might be surprising to a lot of our listeners. I mean, I don't think up until now it's obvious why you would dislike this paper. But it's also a fairly beloved paper, right? This is... okay, so Out of the Tar Pit, I think, kind of got popular by a mention from Rich Hickey in one of his talks.
I can't remember which one.
But it kind of, it's a very influential paper in the functional programming world, but especially the Clojure community.
Out of the Tar Pit is kind of seen as this like, this quintessential text.
It is a direct answer to No Silver Bullet. So that's why I said that we should do this. And it's trying to say: here is my proposal for what will... I guess it doesn't actually explicitly say "give us 10x productivity" or any of those things, but it does kind of claim that they found the silver bullet.
Now, to be clear, since we talked in the last episode, you know, we had this whole framing device, and "win in 10 years," blah. This is in 2006, so this is definitively outside of Brooks' timeline.
Yeah, it's 20 years after Silver Bullet, which was '86.
Yes, yes. So this is definitively outside of Brooks' timeline,
so it was not a direct rebuttal at the time.
Brooks would not have ever responded to this.
And in fact, one of the things I was actually interested in,
usually I try to, like, if I don't know the author,
even if I do, I try to look up, like,
the background of the author.
I can't find these people.
Yeah, me neither.
I tried to find personal websites.
I tried to find social profiles, anything like that.
I got nothing.
The moseley.name domain did not resolve for me. I even, like, tried archive.org things, and... like, I just... yeah, I have no idea. Like, I found some Ben Moseley who's a software engineer...
Yeah.
...or, like, a researcher, but this paper's not listed, and the timeline seems a little off, and the area of specialization doesn't match up, because this Ben Moseley is, like, an astrophysics kind of... like, I think, like, a JPL- and NASA-affiliated developer.
Yeah, yeah, something like that.
And what's interesting is you don't see the academic credentials,
you don't see any .edu email address here, right? So, like, in the paper we get Ben Moseley, and it's ben at moseley.name, and then Peter Marks, whose email address is public at indigomail.net. I almost wonder if these are pseudonyms.
Yeah, I wonder if this is, like, BLTC trying to do some more, like, paradise engineering. Like: dose the public, rumpting cocktail, smart drugs, neuro-scanning, tar pit sapiosexual elevation, magic super spirituality. Pull people out of the software complexity fictionalism, turn them into mindless automatons that don't feel pain and are able to abolish the plight of wombats with chocolate.
Or, or maybe this is, you know, Ben Moseley, also known as Satoshi, right? Like, this is the original anonymous paper that was supposed to change the world, and then, you know, turns into the Bitcoin.
Did Anonymous ever publish a paper? We should do that on the show, if Anonymous ever actually published a paper. That would be good.
I'm sure there's lots of cringey Anonymous manifestos.
There you go.
Next episode, we read software engineering fanfic.
Please write your best fanfic.
I asked for it previously, and we never got some.
Oh, we'll get some eventually, I'm sure.
If there's a natural step from Brooks to Tar Pit, then there is a natural step from Tar Pit to cringey fanfic.
Like that's the trajectory we're on.
That's the vector.
That's the direction that we're trending.
Hey, it's me, the editor.
On the next episode,
we will be discussing the programming language Intercal, and the fanfic that birthed it.
So I'll just give, since Ivan has kind of given you
his wrong view on this paper, that it's garbage.
I loved this paper.
Past tense.
I used a past tense.
I loved this paper.
For me, every paper that I read,
with the exception of Programming is Theory Building,
as I reread these things,
I can take a more critical eye to them because like the main point
they're making is already in my head right i've already internalized it i've already thought about
it and so i can uh really judge it more accurately for me out of the tar pit was one of those papers
that i discovered right at the same time that I discovered Clojure and functional programming.
And so this really felt like a big change, a big revelation for me.
And I do think that there's a lot that's really good in this paper.
But my experience has also been colored by working on systems that try to realize the ideal that Out of the Tar Pit here sets, which we'll get into what that is. But I've actually worked on systems
where the whole team is influenced by that
and really trying to achieve it.
And I've found it doesn't quite hold up
the way you want it to.
Yeah, and as for my,
like what is the legitimate source
of my beefy reaction to it?
Also, I think for this episode,
I'm going to dispense with my usual avoidance
of animal-related metaphors
and just go with things like,
we're killing a sacred cow this time.
That's what we are doing.
Yeah, so all the meat murder is going to come flowing out of this particular vegetarian today.
Um, yeah. Like, why? Where does my big issue with this paper come from? We'll figure that out as we go through it. I don't want to pre-set anything up, because I think it's more fun to pick up the trail of candy as you go along, and eventually, at the end, discover: hey, I've been picking up rusty needles and jamming them into my ribs.
I don't feel good anymore.
I need to lie down for a while.
Uh, so, uh... with that. All right.
So I want to do some... even though... yes. So I want to do meta, but meta about the paper itself.
Yeah. Yeah, sure. Go for it.
Okay. So the paper is... we'll get into the content, but the paper is kind of broken up into two sections, and we're probably going to focus on the first half.
Yes.
Of these two sections.
The first half is kind of the big picture, pie in the sky, sorts of like, what are we
proposing?
And the second half is like nitty gritty details that are interesting in and of themselves,
but don't make for great podcast material.
Yeah.
They're really implementation details.
They look at even an example system and all of that.
So there are some things in those later sections that we'll probably touch on, but they're
diving into some detail that we just probably can't cover.
And also, it's a long paper.
Yeah, I mean, this thing is... it's not the longest paper we've covered.
Yeah, that would be, uh, um, Augmenting Human Intellect.
Yes, there we go. I'm just trying to... drawing, dynamic, dead Doug... I was on the D's.
My dog is asking to go out. Okay, cool. My nose is asking to be blown. Okay.
So she wasn't asking to go out, so I won't have to go back.
But I have pocket doors. It's like two sets of pocket doors for this bathroom that kind of spans two rooms here.
And so I, you know, pull them so that they're not fully closed
because if they're fully closed, she's just like, what's in there?
There must be something in there that you don't want me to have,
so I'm going to scratch at the door.
So I keep them slightly cracked, with about, like, 1.4 dog-widths.
Right, standard dog-width.
Yeah, right. But she decides every time that that is not enough dog-width for her to walk through. It needs to be two, or else she'll just scratch the door and ask me to open it further, even though she could very easily just walk straight through the doors.
Nope.
This is the thing I've run into a lot: like, people and animals, as if there's any difference, with, like, a sense of the amount of space needed to accommodate their body being a wild mismatch for the actual amount of space their body occupies. Like, I have that. Like, I'm very tall, but I've always felt like I am a small person trapped in a big person's body, for whatever reason.
All right. So, yeah, let's get back into this paper.
Sorry, no, let me rephrase that: let's dive into the tar pit.
Dar, dar, dar. All right. All right, let's dive into the tar pit.
You want to read this abstract?
I think it's probably a pretty good starting point.
Complexity is the single major difficulty
in the successful development of large-scale software systems.
Following Brooks, we distinguish accidental from essential difficulty,
but disagree with his premise that
most complexity remaining in contemporary systems is essential. We identify common causes of
complexity and discuss general approaches which can be taken to eliminate them where they are
accidental in nature. To make things more concrete, we then give a crappy outline for a potential complexity-minimizing approach based on functional programming and Codd's relational model of data.
Okay, so now in the edit, you have to have radio voice for all of the things except for "crappy"?
Yes, yeah.
I've actually had to do that in the past.
There have been some times where one of us has been reading a quote
and injects a little thing in the middle and I pull it out.
Uh-huh, yeah, I just like that it's a singular word here. Yes. Yeah. Okay, okay.
And there's a span of text in the very first paragraph of the introduction that I think also
serves as a really good summary of what the paper is about. And that is,
the biggest problem in the development and maintenance of large-scale software systems.
And so it's interesting, they, large-scale software systems.
So they've got that framing, and I think we'll need to do a little bit of that,
where when we're reviewing this, we have to remind ourselves,
they're talking about large-scale software systems.
They're not talking about hobbyist projects or whatever, video games or whatever.
There's a focus on the same kind of framing that Brooks had, big industrial-scale software.
The biggest problem in the development and maintenance
of large scale software systems is complexity.
Large systems are hard to understand.
We believe that the major contributor
to this complexity in many systems
is the handling of state
and the burden that this adds
when trying to analyze and reason about the system.
And I've got to put a coin... or, I've got to... like, you know what I'm going to do? I'm going to hit the thumb piano every time one of us says "reason about." Because "reason about," after this paper, and after the Clojure community adopted this paper... every single thing, every new JavaScript framework, every new CSS library, every little project that somebody made was justified as better than what came before on the basis of it being easy to reason about. And that saying, "easy to reason about," was so prevalent in, like, the mid part of the last decade that every time I saw it, I thought to myself: all right, coin in the swear jar. And I would have been, you know, able to retire with a cushy pension at this point off of that swear jar. Yeah, so, reasoning about things: this is where it all started. And so I'm going to be, I guess, plunking the thumb piano, because that's the nearest thing I have to me.
And it's not just a thumb piano, actually.
Today's swear jar will be a thumb piano with a little bit of pipe cleaner stuck between the tines.
So it's kind of a muffled thumb piano.
So that's what we're working with today. So when things are easy to reason about, I pluck the muffled thumb piano.
Okay.
I think this is a great thing.
I don't think I'm going to say that much other than in quotes, but we'll see.
All right.
All right.
We'll see.
Okay.
So we have this.
Yes, I love this quote.
I do think this is really getting at the crux of the paper.
We have these large scale systems.
They're hard to reason about because of state. And we get, kind of immediately, that the common solution here is object-oriented programming, and this is going to be the alternative. And, I mean, this is really the paper. Like, I know that there's the relational stuff, and, like, we'll get to the relational stuff, but I think, of the arguments here, it is really that functional programming reduces state,
state is the cause of complexity,
and the end.
If we just do that, we have solved the problem,
and we no longer have a bunch of accidental complexity
and just have essential complexity.
I'll put a bit of a twist on that.
I don't even think that the functional programming versus object-oriented programming matters all that much
to the main argument of the paper. I think the main argument of the paper is complexity is the
problem. And there are a bunch of things that can be done to reduce it. And there are a bunch of
things that can be done to make it worse. And some of the things that reduce it don't reduce it very well.
For instance, there's some parts of functional programming
and some parts of logic programming
that don't reduce complexity very well.
And there's some parts of object-oriented programming
that reduce it a little bit, but mostly make it worse.
There's some parts of testing
that can be used to get a grasp on the complexity,
but they fail because of state and other things.
And there's things you can do to control state.
We're going to explore all of these different things
you can do to wrestle with complexity.
But I think if I had to say what I,
my sense of what it's,
the thrust of the argument is,
is complexity bad,
state bad,
don't do those things.
Don't do complexity, don't do state.
Or at least mutable state.
Yeah, we'll get to what state means here.
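The "state bad" argument the hosts are summarizing here can be made concrete with a tiny, hypothetical Python sketch (invented for illustration, not from the paper): a function that leans on hidden mutable state gives different answers to identical calls, while its pure counterpart depends only on its inputs.

```python
# A hedged illustration of why mutable state makes systems harder to
# test and reason about: the stateful version's result depends on call
# history, the pure version's result depends only on its arguments.

# Stateful: a running total hidden inside the object.
class StatefulCounter:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n
        return self.total

# Pure: the "state" is an explicit input and an explicit output.
def add_pure(total, n):
    return total + n

c = StatefulCounter()
print(c.add(5), c.add(5))              # 5 10 -- same input, different results
print(add_pure(0, 5), add_pure(0, 5))  # 5 5  -- same input, same result
```

The pure version is trivially testable in isolation; the stateful one can only be tested by also specifying its history, which is exactly the burden the paper complains about.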
And I actually think, it's interesting
because I actually think this paper has
a little bit of a different take on mutable state
than the community at large
has actually taken out of it.
Right, this is actually a little bit different of a take than the popular understanding of mutable state. So we can get to that, though.
Yeah. And so we start off with this introduction, and the quote that was read,
and then kind of like object-oriented programming is really the start of this introduction and the
rest is an overview of what the paper is going to be like, right? In section two, we do this. In section three, we do that. In section four... And I'm going to be honest, I hate that format.
Yeah, it's a waste of space, but I can see that some people like it, and so I'm happy for those
people to be accommodated by that pattern. I'm not.
All right. Hot takes begin. They begin in the most unlikely of places.
Yeah, see, that's the worst part of this paper, right? Is that, like... yeah, I don't think it helps. Like, I get it, right? Because now I can just skip the sections I'm not interested in, if I already know, if I already agree with your premise. Right? But, like, I've seen the dependency charts. I don't know if you've seen these in
books. No. Okay, so I think this is a much more interesting way of doing it, where they'll have
chapter headings, and then they have a dependency graph of which chapters depend on which. Yeah,
oh, I have seen this. Yeah, yeah, yeah. And I can get that because now it's like, oh, I,
I can just skip in the flow to the sections and I know which sections are not needed.
And like it kind of adds a non-linearity to a linear text, right?
Whereas this kind of does that
and it's kind of choose your own adventure
because you could just go to this section,
but they're so boring
and I just skip them every time.
I just feel like they're filler.
I don't feel like they add much to a paper at all. I'd rather
just have a table of contents rather than an explanation of every chapter because if your
headings are good enough, I could just figure that out. Yeah, I agree with that. And I think
in general, I'm going to be advocating for "make stuff visible." And this, you know, spending a little bit of space saying "here's the upcoming structure" is a way of visualizing what you are about to go through. And Jimmy is already arguing in favor of making things invisible, so clearly he's the enemy. He's the one who's going to be defending the Enemy of the State.
Oh, yeah. Yes. And jokes about the separation of Alonzo Church and state.
Yeah.
So what you're saying is every paper should come with a tempo marker at the top?
Yeah.
Yep.
Okay.
Okay.
And it should just say my tempo.
It should say there should be the sound of snapping fingers.
And you're supposed to read this paper at my tempo.
So we get out of, I do think this paper,
we're probably going to do a linear read through it.
There's certain papers where we don't.
But now we go into the second section,
and it's definitively giving us that
not only do they disagree with Brooks' contention
that everything is essential,
they also disagree with the source of complexity.
So it says that Brooks gave them four things,
which are complexity, conformity, changeability, and invisibility,
and they think the only major source of...
Hold on.
That make building software hard.
I was going to say the only major source of complexity...
Is complexity.
Is complexity. Which did not sound good. So: the only major source of, like, things that make software hard is complexity, and all the rest of them can just be classified as forms of complexity.
I've already got a take on this paragraph, if you're up for it.
Oh, yeah, yeah. All right, cool.
So, the first thing: I've got this paragraph highlighted, and another one coming up soon, and then nothing for a while, so I might spend a minute on this one. So, this paragraph begins: "In his classic paper No Silver Bullet, Brooks..." He's identified in the bibliography as Bro86, so I'm going to start calling him Bro86, because that's pretty good.
Bro86 identified four properties
of software systems
which make building software hard.
Complexity, conformity, changeability,
and invisibility.
Pink highlight.
Of these, we believe that complexity
is the only significant one.
The others can be classified
as forms of complexity
or seen as problematic
solely because
of the complexity in the system. And I'm going to argue throughout this episode that they got this
wrong and from this initial mistake fall out all other mistakes. And I'm going to reread this as
they should have written it. Complexity, conformity, changeability, and invisibility. Of these, we
believe that invisibility is the only significant one.
The others can either be classified as forms of invisibility or be seen as problematic solely because of the invisibility in the system.
And so as we go, I am going to be giving us the interpretation of Out of the Tar Pit had it been about invisibility instead of complexity. And you will see that many of their arguments work just as well or better when you view
them through the lens of invisibility as they do when you view them through the lens of
complexity.
Okay.
So now this is where Ivan announces he's writing a counter-paper called Into the Mosh Pit.
That cozy mosh pit.
Yeah, so I am so glad that you had a different take, because I agree with you that this is the problem of the paper, right? This is the framing, and I think this is the novelty of the paper as well: to say all these other things are not important, it is about complexity, and complexity is to be identified with state. This is the Simple Made Easy move, right? They don't quite put it in those terms, but you can see Rich giving us the Simple Made Easy. He's saying simplicity is about taking things apart and complexity is about intertwining things, and so if we can make things simple, we remove this intertwining. And they say state is what intertwines everything. Which, I think Rich has a little bit more complex of a view...
Yeah, or nuanced. He has a more sophisticated view.
Yeah, a more sophisticated view than just state. I think he looks at a lot of different things that do this intertwining.
Yeah, right.
But I definitely have to disagree with your take as well.
Uh-huh. Already, before I even get to make my case?
Yeah, before you even make your case.
That's fair. Yeah. I mean, maybe I'll end up believing you, but my gut, right?
My pre-philosophical statement here
before I've done all the philosophy
that you're going to give us here about invisibility
is that all four of these actually really do matter
and Brooks got some good criteria.
Now, there might be more,
but at the very least,
all four of these really do matter for these systems.
And I can think in my career of different systems
that had different problems of each of these.
And that they might have been very visible,
but the amount of conformity we had to adhere to
caused the system to be hard to work with.
Or they might have been very simple, but invisible. Any of these things... I can think of systems where any of these dimensions could have caused them to be hard to work on, or did cause them to be hard to work on. And I think when we did No Silver Bullet, you especially emphasized changeability as something that meant a lot to you, or that you had a particular emphasis on.
And I'm not going to be so, you know, flippant as to say, no, you're wrong, only invisibility. I agree: all of these things, and others, are valuable ways of examining the software that we build, at the personal scale all the way up to the industrial scale, and the systems that we build that are not software. These are all valuable lenses through which we can view a problem and see that problem kaleidoscopically spiral into a fractal of beautiful panes. And I think that my interest in invisibility here is just because, as I read through Tar Pit again this time (and we've both read this paper a number of times), I started noticing how many of the arguments in this paper worked just as well for invisibility as they did for complexity. And so that's kind of where I'm coming from. I'm not truly, earnestly, sincerely saying that none of those other things matter, just that this paper does a good job arguing for invisibility being a problem. Or at least, it does as good a job of that as it does arguing complexity is the root of all evil. So I'm just...
Low bar.
Yeah. I mean, I think that's
probably a valid way of putting this, right? They tell us that all these other ones could be seen as complexity, or are only problematic because of complexity. But I feel like you could take them and do that inverse: all of these things could be seen as invisibility. I think that's the other unique one here. Changeability, I don't think you could define everything in terms of changeability, right? Conformity, you couldn't define everything in terms of conformity. But complexity and invisibility, that's a potential yes.
Yeah, you can lump those other things up under the umbrella of complexity or invisibility.
Yeah.
And that makes them interesting. And that, to me, is what suggests, like you said, that there are probably other similar categories. Instead of these four evils at the heart of all software, you could pick four different things and probably make just as solid an argument as Brooks had. And then pick one and say that one is the ur-evil, the one that the others can be expressed in terms of. That's probably relatively doable as well.
There's a challenge for anybody.
I mean, actually... so, last episode I introduced my color scheme. I'm going to review my color scheme again, because I'm going to use it as shorthand as we go, and one of the colors suddenly has importance here. I'm using yellow to mean, hey, this is interesting and we should talk about it. Green is good, pink is bad, blue is something entertaining or funny, and purple is, this is relevant to FoC. And so I think this idea of, you know, could you design a programming that had a different set of four evils at the heart
of it, any one of which could be the one that would encompass the others,
be the ur-evil?
Could you design another kind of programming
that had a different ur-evil at the heart of it?
There's a challenge for the future of coding community.
What other kinds of evils
can you put at the heart of your programming
other than complexity, conformity,
changeability, and invisibility?
So I like this frame,
and I think it's interesting to keep in mind this, like, can we really justify invisibility over complexity? And so let's start with their argument for why is complexity called out as the main source here, right?
So I'll just read a quote: "The status of complexity as the major cause of all other problems comes simply from the fact that being able to understand a system is a prerequisite for avoiding all of them, and of course it is this which complexity destroys."
Now, it seems like, on the surface, invisibility might work just as well here,
right? In order to understand a system, you have to be able to see it.
If it's opaque, if it's invisible,
you can't see, you can't understand.
But I don't think that's true.
Now this is a simple example,
but one simple example I'll give is the standard visual programming
node and wire interfaces
that just turn into this big ball of spaghetti.
In and of themselves,
those things are visible. There's nothing hidden. And if there is, just take all of those things
that are hidden and let's put them out more into some nodes and wires, right? Until we have all
the parts exposed. And if it's this big web of spaghetti mess, we don't have an easy way to
understand this. And the only way we're going to get to understand it
is not by changing its visibility or not
but by reducing the complexity.
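Jimmy's spaghetti point can be sketched in code: two functions with identical behavior, nothing hidden in either one, but the first intertwines its steps through shared loop variables. A toy illustration of ours, not code from the paper or the episode:

```python
# Two fully "visible" programs computing the same thing: the sum of squares
# of the even numbers in a list. Nothing is hidden in either one, yet the
# first is harder to follow because its steps are intertwined through
# shared loop variables. (A toy example, not code from the paper.)

def tangled(xs):
    total = 0
    keep = False
    last = None
    for x in xs:
        keep = x % 2 == 0  # flag threaded through the loop
        last = x * x       # intermediate value threaded through the loop
        if keep:
            total += last
    return total

def simple(xs):
    # Same behavior, no interleaved flags: each step stands alone.
    return sum(x * x for x in xs if x % 2 == 0)

assert tangled([1, 2, 3, 4]) == simple([1, 2, 3, 4]) == 20
```

Both versions are equally "visible"; only the second is easy to understand, which is the complexity-first reading of the example.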
Yeah, so you're going to force me to draw out
some of my points that I realized
from later on in the paper.
But for instance, when Moseley and Marks get into
talking about functional programming and logic programming, they talk a lot about the benefits,
but they don't talk so much about what you have to trade off to get those benefits. And one of the
things that they don't talk about that I think really does matter is ergonomics. It's not
sufficient to say, here's the thing that is better in this one respect. So in your example with node-wire programming, it's like, oh, it's more visible than it would be, you know, more visible all laid out on one giant canvas where you can zoom out and see your whole system all at once, than it would be split up into a bunch of different textual code files where you can't see the whole system at once, or if you can see it, you can't truly see it. And I would say that that's the failure of that counterargument: it is no more ergonomic, if not less ergonomic, to just have a giant spaghetti ball of mud. And so I'm not going to argue that the spaghetti ball of mud is better, because of that failure of ergonomics. But I think that just making some giant visual thing that is superficially similar to what people mean when they say visual programming is not what I'm advocating for. I'm advocating for something different than that.
Yeah, I totally agree with you, and that's why I threw it out there. But I don't think you can make the same... I don't have to make the same qualifiers for complexity. If I said I took this complex system and I made it simpler, and now it's harder to understand, that almost feels like a contradiction in terms, right? Simplicity just is the property of being easier to understand. And so that's maybe why there's this privilege here. And we don't have to decide this now, but I just want to keep that in mind as we continue on with this paper, to see: does the proposal actually keep that property?
And I'm nervous about terms like complexity and simplicity as used in this paper
and as used in the discourse around this paper, because they tend to be used to kind of tautologically claim that something is better. You can say, oh, this thing is simpler than that thing, therefore it's better, when the definition of simple inherently includes a value of it being better: it's better because it's simple, and it's simple because it's better. There's a little bit of circular reasoning there that I don't like. And the thing that especially irks me about it is that there's no one dimension along which we're going to be evaluating these things. You can't say something is simpler, period. It may be simpler in one dimension while being equally as complex in some other dimension, and the dimension along which you've made it simpler may be irrelevant to the problem at hand. And so I think this use of simple and complex as just broad values, broad assessments of a particular part of software... I find them kind of lacking, because they're not directed at very granular aspects.
And I like that when they talk about state in this paper,
I find that they're much more specific.
And they use state as an example of a thing
that can produce complexity.
And so in that sense, they're saying,
here's a particular way that complexity can be manifested: it's this mutable state. And they get very granular about the ways that, like, in functional programming, here's some places where, you know, monads bring a little bit of state back in, and they handle it elegantly. And they just kind of drop that and move on. But when they get into logic programming, it's like, you know, pure logic programming lacks some expressivity, and so we bring in a little bit of state to try and accomplish that, or to improve performance, and they're very critical of that. And so I think this focus on very specific dimensions along which things can be simple or complex is really important. And I would say the same is true of visibility and invisibility. Something like a node-wire visual programming language is no more visible than text programming, just period, as a flat statement. There's no difference in visibility between those two things. If you want to actually assess differences in the ways in which they are visible, or the ways in which they help you visualize the behavior of the system, you need to be much more specific and focused than that.
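Speaking of being specific about state: the handling the hosts credit to monads can be loosely approximated in plain Python as explicit state-passing, where state flows visibly through arguments and return values instead of living in a mutable variable. A rough sketch with invented names, not anything from the paper:

```python
# Explicit state-passing: each function takes the current state and returns
# a new one, instead of mutating a shared variable. (A loose, plain-Python
# analogy to the state-handling the hosts credit to monads; the bank-account
# names are invented for illustration.)

def deposit(balance, amount):
    # Pure: returns the new balance rather than mutating anything.
    return balance + amount

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# The state flows visibly through the calls; there is no hidden variable
# whose mutation history you must reconstruct to understand the result.
balance = deposit(0, 100)
balance = withdraw(balance, 30)
assert balance == 70
```

A real State monad also threads the state automatically; this sketch only shows the "no hidden mutation" half of the idea.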
Yeah, yes. I think we've gotten some good frames here. And what I want to make sure we do is that we don't get lost in contemporary discourse. I don't think we have, but I just think this is a danger. This is something I did, right, is kind of inject Rich Hickey-isms onto this paper, or inject, you know, popular-culture things. Instead, take the paper on its own terms.
Yeah.
And look at what it says, and try to see if it's providing its own internally consistent view. I think that's going to be the more interesting thing here to look at. And then we can also look at our own experience, and does this comport with that, and all of those things, right? But let's maybe try to inhabit the world of the paper for a little while before we get to...
Gross. Ew. But sure. Do you not like reading dystopian novels? I mean, that's all this should be for you, right? You can inhabit a world you don't like. That's fine.
So, speaking of dystopian: immediately after this part we were just reading, it lists a bunch of quotes and references to Turing Award winners, just to try and justify complexity as being the ur-evil.
It's like Dijkstra says,
we have to keep it crisp, disentangled, and simple
if we refuse to be crushed
by the complexities of our own making.
And it follows that up with,
and The Economist devoted a whole article
to software complexity,
noting that by some estimates,
software problems cost the American
economy $59 billion annually. And I just feel like these are pretty weak references. Like,
this is not a rigorous justification for their view. It just feels like they did a keyword search
for "software complexity" and grabbed some things that talk about complexity, without doing the thing I was just saying I want to see people do, which is be very specific about what complexity means, in a very particular way.
And they'll get there.
Yeah, but these quotes do nothing for it. I think they're all a waste of space.
I agree. So we got Corbató, we got Backus, we got Hoare, we got Dijkstra. This is kind of trying to, you know, trade on the prestige of these people, and say: look at these great Turing Award winners, how could you possibly disagree with them?
It's an argument from authority.
It is. It's kind of an argument from authority. And there's some nice pithy quotes, right?
But yeah, I don't think they amount to much in making the argument.
And I actually think what's most interesting about this paper
is that they give us a different way of viewing complexity
than maybe the simple-minded version that these quotes
might be taken as
alluding to. But what we get immediately after these quotes is that there is an unfortunate truth, and that's that simplicity is hard. This is not something that we're going to be able to easily conjure up. There's not going to be some one answer where we can just go run an automated refactoring on our programs, and then all of a sudden they're no longer complex. The work of making programs easy to understand is going to be a very difficult task that we have to continuously work at and continuously do. And I think, you know, maybe there's not an explicit statement here, but I think it's kind of understood that even with this FRP system that they're going to propose, which stands for functional relational programming, it's not going to be this easy answer where immediately you get it all right and you never have any problems. It is going to be an iterative process as we get more and more towards the ideal state of no complexity.
So now we get one, maybe, definition we're offered here. I'm just going to put this out there as a provisional definition of complexity: it's whatever it is that makes it hard to understand our systems. We're not identifying it precisely, we're not giving, like, here's how you pinpoint it. But we know there's complexity there when it's hard to understand our systems.
And so the question now is,
how do we understand our systems?
Because only if we know what approaches we take
to understand our systems,
will we understand how to fix this lack of understanding.
And so really,
they identify two approaches, testing and informal reasoning. I think that's a little simplistic, to be honest. I don't know. Maybe informal reasoning captures everything you want
to do. But yeah, these are the two points that they focus on here. Yeah. And they're good for
reasons that I'll get to in a little bit.
They're careful to say that these aren't the only two ways; these are just two widely used approaches.
That's a good point.
And I just want to double emphasize that.
They say these aren't the only two ways.
These are just two widely used approaches.
We're just going to look at these two.
There's others out there, but we're going to care about these two.
I just want to double emphasize that.
Double emphasize. Testing and informal
reasoning. Testing being, you know, automated
testing, unit tests, or like
driving from the outside.
I think their definition doesn't include unit tests.
They do
talk about that later on. They talk about
the trade-off between writing little tiny unit
tests and driving from the outside.
Yeah, but they say,
this is an attempt to understand a system from the outside
as a black box.
Right, and system could mean
the whole thing, or it could
mean some subsystem, right? Like, system is
a vague word in this context. I think
intentionally so. Okay, yeah.
I just want to, you know,
I know there's unit testing advocates
that might read some of this description
and think, yes, but that's what they missed, right?
They missed that if you test all of the parts,
not as a black box,
not just look at the input output
of the whole system, right?
Because this really does seem like,
yes, they mention smaller parts,
but they really seem to have the frame
of the big integration test,
the manual QA test.
They don't seem to have in mind
a sort of holistic unit testing approach
that maybe modern people might advocate for.
And I read it with this view in mind, and I don't think they ever say anything that suggests to me that they're only talking about the total software product that is produced at the industrial scale and shipped as a monolithic entity. I think they are thinking about subsystems within that, right?
Like they talk in a little bit about concurrency
and about separate systems
that need to communicate asynchronously,
that sort of thing, right?
Like they talk about that systems
are comprised of smaller systems.
And so when we're talking about testing,
we're talking about,
I think we're talking about a system at any scale.
Yes, unit testing people are going to
get up our butts about that,
but they'll have lots of company.
They do specifically mention that automated tests
are more common for individual component testing.
Yes.
So you could see that.
I just want to put that out there
because I could imagine someone saying unit testing is informal reasoning.
And I said "informal reasoning"... you said the R word!
I had a reason to say it, though.
Uh-huh. Uh-huh. So, testing and informal reasoning. So informal reasoning is an attempt to understand the system
by examining it from the inside.
The hope is that by using the extra information available,
a more accurate understanding can be gained.
So this is why I'm saying unit testing almost feels
a little bit more like that from the inside, right?
And using that information as a design tool,
as an understanding tool,
that's what I think a lot of the TDD advocates
and things like that would say that they're doing.
It's not formal reasoning in the mathematically pure sense,
but it's a little bit better
than the loosey-goosey informal reasoning.
It's kind of that halfway between those two.
Yeah, a little bit better than the formal mathematical reasoning too
because that stuff's terrible.
And they are like very, one of the things I like about those papers,
they're very clear of their opinions here.
Like they don't beat around the bush and like,
oh, this could be good.
Like they try to give it something, but they'll just say, of the two, informal reasoning is
the most important by far.
Yeah.
And I have that highlighted in purple because to me, that's a great future of coding challenge.
Can you come up with a programming where informal reasoning doesn't help and you really need
something?
You need like an AI or some kind of like machine based you know property based
testing or something like that or some other way where informal reasoning will not help you in this
programming can you come up with a kind of programming where you know you are lost you
are without hope and you need to get some kind of help other than reasoning to get through it
i think that'd be a fun challenge yeah i don't know
exactly what that would be like that's why it's a fun challenge because you'd have to go what what
does this even mean it's like uh okay you know uh coming up with a new kind of programming as an art
practice. Like, maybe Linear B is the example.
Linear B? Yeah, so the ancient fragment of text
that we've never been able to translate.
Whoa, I don't know what this is.
This is cool.
Yeah, yeah.
So we have Linear A and Linear B.
If you look up images of them,
they're these like, I think it was a Minoan language
that we've been able to translate
Linear A, but have not been able to translate Linear B. We do not know what it says.
LinearB.io developer workflow automation.
Hey, engineering leader, developer workflow automation is the secret to improving your DORA
metrics.
If... when I look up Linear B, it's a Wikipedia article with exactly the things I'm talking about.
You can identify inefficiency with pipeline metrics.
You can hit your goals
with workflow automation and deliver on promises
with a project delivery tracker.
If we still had sponsors, this would be
the weirdest way of you including
a sponsor ad in this, which is not...
Linear B automatically identifies
and benchmarks your engineering metrics to help teams focus
in five key areas.
Code quality.
Okay, Linear B on Wikipedia.
I'm sure you...
For the JavaScript engine, see Linear B script engine.
ECMAScript engines.
I can't tell if he's really reading something
or just making up everything that he's saying.
No, it's the disambiguation
right at the top of the Linear B article.
Oh, apparently it's Mycenaean.
My wife just texts me from the other room
because she knows more about this than I do.
Yeah, it does say Mycenaean right at the top.
Syllabic script used for writing in Mycenaean Greek.
Yes.
The earliest attested form of Greek.
They had somebody at gunpoint, and it's like,
what's the earliest form of Greek?
Mycenaean.
Yeah.
So we haven't been able to translate this fragment, right?
And we've tried and tried and tried
and have not been able to do it.
And now people are trying to apply these language models
and blah, blah, blah, to try to decode it. So you could imagine something... that's just the first thing that comes to my mind, where countless people have tried to informally reason through this thing, even with statistical methods, and they've never been able to figure out exactly what it is.
So we need the Linear B of programming.
Yes, that will be the, you know,
the can you make it harder to reason about jam.
Or intercal might also be a fun example of this,
which we definitely need to cover at some point.
Yeah, definitely.
That would be a good one.
Yeah, yeah. It'd be a good one. Yeah.
It's kind of like the original esoteric language.
It has some cool features where it's like
not trying to only
be a joke, but also is
constantly a joke.
It's both useful and not.
I really like the mix of it.
That's definitely on the list.
Yeah, that would actually be a good one after this. Because, you know, this paper is a joke.
Oh my gosh, Ivan.
All right. So they want to say that informal reasoning matters more, because
the key problem with testing is that you can never test the whole thing.
And that, in their mind, informal reasoning can capture more than what you can test.
And I personally agree with this.
I know I'm not an anti-test person,
but I also think that I've worked in systems,
I worked in a system that had a quote unquote, 100% test coverage,
which yes, I know, that's not what everyone says it should be, blah, blah, blah, we're not gonna,
but we had lots of tests. And even when we lowered it from 100% test coverage (like, 70% is the usual metric), we kept it at a good 90%. It was the worst system for making changes
that I've ever worked on. And it was the worst system for understanding because the tests were written in this awful way.
I'm not saying that good tests are not useful,
but testing in and of itself
and having all these assertions
was not helpful for understanding this thing.
And the only way I could go to understand this
was looking through the code, reasoning about it,
ripping out all the tests
and replacing them with something informed
by that informal reasoning.
For me, I think you make a larger argument
that the good tests are informed by informal reasoning.
That's why they would be subservient.
I think they're useful tools,
but I definitely think that you can't understand a system
without that sort of informal reasoning.
Again, a TDD person is probably going to say, no, yes, I can.
Look, I wrote all these tests, and that's how I understood the system.
But then to combine those into the whole, they use informal reasoning.
Yeah, you can't do testing without doing some reasoning.
Like, that just seems like an impossibility. Or at least, if you wanted to say, oh yes I can, then you're just going to be quibbling about the definition of these terms, I think.
Exactly, right. And, I mean, this point isn't... I don't think it's grand, right? Because, yeah, it makes sense that you have to reason about your system to understand it.
Yeah.
But it's an important stepping stone on the path to talking about how different aspects of complexity negatively impact our ability to reason.
I think that's what they're setting up.
Yeah.
Yep.
And they include another Dijkstra quote from his Turing Award speech.
Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with.
Which I think is an argument in favor of visibility is the most important thing, and that's all I'll say. We can move on.
There's another quote from O'Keefe, who also stressed the importance of understanding your problem and that elegance is not optional, who said, Our response to mistakes should be to look for ways that we can avoid making them,
not to blame the nature of things.
And, like, what does that even mean?
Like, what is not to blame the nature of things?
What does that have to do with what we're talking about here?
Do you have any idea, Jimmy?
Because I read that, and I was like, I don't see how that relates to what we're talking about, unless you just want to, like, you know, get stoned and sprawl out and go, yeah, the nature of things makes it hard to reason about. I don't see the direct connection.
So here's my assumption, because I did not read the text that this is quoting from. My assumption is it's about the attitude of, like, you come across this bug, right, and it is clearly a mistake that somebody made, they used an API wrong, et cetera. And they're just like, oh well, you know, that part of the system is just finicky, and you can't help but make bugs there
because it's just finicky
and no one's ever going to not make bugs there.
I've worked with people who have that attitude
and I think this quote would be,
yes, that might be finicky or whatever,
but blaming it is not the answer.
Code defensively.
Make it so that the way you write the code
means that even if that component's buggy,
your code won't be.
That would be my assumption for the context here.
The benefit of informal reasoning is that
when you emphasize informal reasoning
and you focus on the informal reasoning
as an important part of your programming practice,
when you encounter code that is complex and difficult to understand,
you should blame not the nature of things, not the code itself,
but you should blame your own inability to understand it.
Because the informal reasoning that you do is your responsibility,
and if you don't understand it, it's your own fault.
Yeah, and so you blame yourself for not having thought about those things sufficiently
and now go think about them
and make sure that, yes, even if that part of the system's buggy,
my code doesn't have to be.
Or that part of the system's slow, my code doesn't have to be.
And I think this is a really important habit
for making good software
because we've all worked
with people who have, you know, we've worked with dependencies that are dependencies we'd
rather not take.
Yeah.
They're not dependable.
Yeah.
Yeah.
Yeah.
They're down all the time.
They're slow.
They're buggy.
They're whatever.
But there are ways to make sure that even if those things happen, you don't deal with
it.
And I've seen all these sorts of things. I think I mentioned this before, where there was some software that didn't take this into account at all, about the world around it, and if you were, in my time zone, from like 11 to midnight, you couldn't submit any forms, because of how they looked at the day, or whatever, right? And so you can think about things where it's like, yeah, there are practices that we embed into our software: the network isn't reliable, we always do retries, you know, those sorts of things. I think that's what this quote might be getting at. But, you know, without reading the source, who knows.
Because it's too ambiguously situated. It's not tied to anything. It does not seem to support any part of the argument.
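The "the network isn't reliable, we always do retries" practice mentioned here can be made concrete as a small helper. A generic sketch with invented names and parameters, not code from the paper or the episode:

```python
import time

# Bounded retries with exponential backoff around an unreliable call.
# (A generic sketch of the "always do retries" practice; the names and
# parameters are invented for illustration.)

def with_retries(call, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return call()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

# Simulate a dependency that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient network failure")
    return "ok"

assert with_retries(flaky) == "ok"
assert calls["n"] == 3
```

Real systems would also cap total elapsed time and add jitter to the backoff, but the habit is the same: write your code so the flaky dependency's failures don't become your failures.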
I had one more right after, like the immediate next sentence after not to blame the nature of
things is, the key problem with testing is that a test of any kind that uses one particular set
of inputs tells you nothing at all about the behavior of the system or component when it is
given a different set of inputs. I have that highlighted in purple because that feels like a good future of coding challenge.
Can you come up with a kind of a test, a kind of testing where using one particular set of inputs
does tell you something about the system or component when it is given a different set
of inputs? I know of some existing things that do that already, but that feels to me like an area
that is worth exploring some more,
if for no other reason than to further prove
that this essay is...
I have this highlighted in blue
because my highlighting color scheme is random.
Uh-huh.
Uh-huh.
And it just happened to be blue.
Yeah, and what did that lead you to want to say about it?
Okay, so I think you're being a little pedantic here.
Okay.
Gonna be honest.
And I knew this is, I highlighted it.
Part of my highlights are speculative highlights
of what Ivan will highlight.
So blue is blue's clues.
This is when I'm gonna talk back to the podcast about this
or talk back to the essay about this particular thing.
All right.
So I think that, yes,
of course you could come up
with some form of testing
where the inputs tell you about other inputs
or whatever, right?
For example, symbolic execution, right?
That's one way where you put in this symbol
and now it gives you some constraints
on what the ranges are, blah, blah, blah, blah.
There's ways of doing it.
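Symbolic execution is one answer; property-based testing is another, where a single test definition is checked against many generated inputs, so one "test" does say something about a whole family of inputs. A minimal hand-rolled sketch using only Python's standard library (the run-length functions are invented for the example; real tools like QuickCheck or Hypothesis go much further):

```python
import random

# A hand-rolled property-based test: one test definition is run against many
# generated inputs, so a single "test" does tell you something about a whole
# family of inputs. (A standard-library sketch in the spirit of QuickCheck or
# Hypothesis; the run-length coding functions are invented for the example.)

def run_length_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip(trials=200, seed=0):
    # The property: decoding an encoding returns the original string,
    # checked over many randomly generated inputs.
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 20)))
        assert run_length_decode(run_length_encode(s)) == s
    return trials

assert check_roundtrip() == 200
```

The other half of what the real tools provide, shrinking a failing input down to a minimal counterexample, is omitted here.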
The point is most testing tells you nothing about this.
And this is a practically minded industrial scale.
What techniques are people really using, right?
And so most testing tells you nothing about those inputs.
And I've seen this in practice. I worked at a software company... or, a company that made software. They weren't a software company. Anyways, I worked at a company that spent lots of money on software and had no idea how to make it. And I was part of this hundred-million-dollar technological renovation, innovation system, and we were building a $20 million part of it.
Anyways, doesn't matter.
Just like these numbers are absurd.
Industrial scale.
You were programming at industrial scale.
Yeah, but that's a negative thing.
Anyways, so we had this whole integration testing system, where somebody from some third-party company had made a big list of all the tests we were supposed to run. And I went to my manager after finally seeing this list of all the tests we were supposed to run, and I said, these tests mean nothing. Like, I could fake every one of these tests, and we would pass, and yet the system wouldn't work. Nothing about this actually tells us whether the system works or not. These are bad tests. And of course, someone overheard me, and instead of hearing my point about how you could fake the tests, they said I was trying to fake tests. Which, like, no, I was making a point that these are bad tests.
Right.
And so I think this is reality, right? We have this system of inputs, we have these tests, and they don't tell us anything about our system in the cases we might care about outside of these tests. So yes, you're right, we could probably go build a system, and maybe the paper is making too much of a definitive statement, but there's a purpose for that.
Yeah, once again, inside the frame,
I agree with this claim.
Outside the frame, I'm covered in flame.
So we got a Dijkstra quote after there. We have all these arguments from authority with these big figures, right? So we got: "Testing is hopelessly inadequate. It can be used very effectively to show the presence of bugs but never to show their absence." We agree with Dijkstra. Rely on testing at your peril.
And I should say, I gotta start relying on Preview.app at my peril, because I had that highlighted with a note next to it, and that's gone now. But I remember what the note was.

Mine's highlighted green, which means nothing.

No, no, I have random colors.

Yeah, but then I just swap between them, to separate the highlights.
No.
But what's the difference between it being highlighted and it being not highlighted?
So that whenever I'm scrolling through, I know which bits are the important bits to talk about.
Okay, yeah.
So it does mean that it's important to talk about.
That's what...
There, there we go.
That's what I care about.
Oh, I highlight all my paper. I start by highlighting everything manually, right? I just highlight all of it a nice pale pink, and then I highlight on top.

So they say "rely on testing at your peril." That, to me, feels a little bit like saying rely on seatbelts at your peril.
Like, just because something has the possibility of failing you doesn't mean that you can't rely on it to do better than failing.
And they say in the immediate next sentence, that's not to say that testing has no use. "The bottom line is that all ways of attempting to understand a system have their limitations, and this includes both informal reasoning, which is limited in scope, imprecise, and hence prone to error, as well as formal reasoning, which is dependent upon the accuracy of a specification." I just feel a little bit like this paper is at its best (and I'm going to call this out again when we run into it) when it makes these kinds of punchy quotes, like "rely on testing at your peril." That's the stuff that I love, because it's the authors having a bit of fun, right? They're trying to make a punchy proclamation about something, and I love that. I also disagree with it, or at least I fear, a little bit, somebody reading that and having their ways changed by it. I think nobody who is relying on testing should be swayed not to do it by this argument here. I don't know. I like it, and it bugs me a little bit.
And this next section, where it says "the bottom line is that all ways of attempting to understand a system have their limitations," I'm going to bring up the first of my takeaways, the first of my summary questions from this paper, which is to ask: what are the limitations of simplicity? Because there's a sentence right here that says, "It is precisely because of the limitations of all these approaches that simplicity is vital. When considered next to testing and reasoning, simplicity is more important than either." They talk a lot about the limitations of things you can do to help you understand, and they say simplicity is the most powerful aid to understanding. To defeat complexity, which is the difficulty of understanding, you turn to simplicity. But they never, to me, clearly articulate: what are the limits of simplicity? What's the flavor of simplicity? What's the color of simplicity? What's the texture of simplicity? When is simplicity not the right approach? When is it impossible? When is it impractical? When is it, you know, something to be avoided? Because those are all, I think, things that exist. There are cases where simplicity is not the thing to turn to. And I would have loved to see them explore that a little more, because it would have helped me feel more like the arguments in favor of simplicity are supported, by saying: and here are the counterarguments to it.
See, I actually think this paper does itself a disservice in its organization. I think section two should have been moved around; I don't know where it goes, maybe it just gets bumped down. But section six of this paper, I actually think, is the best one of the rest of the paper. The main interesting point that it makes is in section six, and had that been moved up to the front, it would make all of the conversations we keep having about complexity way more crisp and interesting.

Do you want to jump down and do that one?

Yeah, I think it's actually important, because I keep wanting to make the point that I know is coming in section six.

And we're right at the end of section three. We can come back and pick up from section four, which gets into causes of complexity; it's much more granular, and that kind of thing. We can come back to that, because there's a bunch of stuff in there. But sure, let's skip ahead to section six, which I have... I've written a summary of every section, and most of them are like: summary of section two, "Brooks, but only complexity truly matters." Summary of section three, "Two common ways to understand systems are testing (examining from outside) and informal reasoning (examining from inside). Simplicity is more important than both."
Summary of section six is: oops, up Brooks.

See, that is actually one of the reasons I found our conversation about Brooks so interesting: the stark contrast you actually see between Brooks's ideas of accident and essence, and what my ideas of accident and essence were going into Brooks, which were based on this paper.

Same, right? I read Out of the Tar Pit first, and I had put in my mind the notions of accident and essence as if that's what Brooks meant. And here we find a stark difference. And they do specifically say that they're disagreeing with Brooks.

Yeah, good on them.
Yes. Okay, so I'll just read the first two paragraphs here.

Yeah.

"Brooks defined difficulties of essence as those inherent in the nature of software, and classified the rest as accidents. We shall basically use the terms in the same sense, but prefer to start by considering the complexity of the problem itself, before software has even entered the picture. Hence we define the following two types of complexity: essential complexity, which is inherent in, and the essence of, the problem as seen by the users; and accidental complexity, which is all the rest."

So I think this is such a different frame. I mean, I think it deserves more... this is why I think this should have been up front. This should have been the highlight here, because this is a major disagreement with Brooks. And when they say from the beginning that they think most of what we're talking about is accidental, not essential, they mean accidental to the user. So what they're saying is: all of the software that we build, the metaphors we use, like functional programming, the terminology we use, the language we use, everything about it, is not what the user thinks about. And because it's not what the user thinks about, it's accidental. It's a historical fact of how we've built software. It is not a necessary part of how we could solve that problem. And I think that's fascinating.
And this is the part
that I think has been lost
in the culture from this paper,
that even if it's totally unattainable,
it is such an interesting vision
for what programming should be like.
It really looks at end-user programming.
It really casts a very different model of programming
than I think what we even get in this paper.
This insight, I think, is fantastic and interesting.
I don't care about the names that they're using.
The fact that they're using essence and accident,
the fact that they're talking about complexity,
I just blur out those words. The programmer and the user, assuming those are separate people, like Jimmy said... and end-user programming has an interesting set of perturbations that it makes upon this breakdown, this cleaving of the space in twain.
But I like
that as a split. It's very
galaxy brain to me.
It's very much like we're taking a huge
zoomed out meta view of things, but when you
sometimes you want to do that, and I think
this is a nice one. Yeah, and we get how radical this is.
It says, for example, according to the terminology we shall use in this paper, bits, bytes, transistors,
electricity, and computers themselves are not in any way essential because they have
nothing to do with the user's problem.
I think that that's wonderful.
This is such,
I think this is actually the most radical take we've read.
If it continued on in this vein,
I think it would be a much more interesting paper.
And it kind of does. And it kind of says, like, oh yes, we've been big galaxy brain here, and now we're going to bring it back to reality, where we can't quite do this. But I love this, and I think it is an aspiration worth shooting for. And I think a lot of people have lost this part of this paper, and I think that's to our own peril. Like, what can we do to make it so we're thinking just in the user's problem? Now, of course, what's going to come up in people's minds is ChatGPT.
I want to make
a rule of maybe not mentioning it
too much because it's a meme at this point
of mentioning it everywhere.
It's going to date this podcast.
Oh, it does?
The fact that we don't have Robot Overlords yet
is also going to date this podcast.
I have nothing against that being a potential here.
Maybe you can think in those terms.
But I think there are other ways to do this as well.
Some might be about domain-specific languages; that's kind of getting at some of this idea. Some of it might be, I think, personally, widening the end. I think the end-user programming thing can be "let's go make it easier for end users, let's go make it so that they're thinking in those terms," but it can also be about changing what is in the vocabulary of users and how you think about things. I think that's one of the most interesting things about programmers: when we encounter the world, we encounter these techno-social systems differently than the non-programmer does, right? And maybe we shouldn't have to, whatever, but there is something to that, right? And we can start thinking in these terms. So I think all of these things have such an interesting frame, and I love this part.
And the reason I wanted to jump to this section: if we take complexity to mean accidental stuff, right? Things that are not part of what the user's thinking about for this problem, all those bits. All of the stuff in the prior sections makes more sense. When we say, should we rely on informal reasoning or testing or any of these methods, and they say, oh no, if we pay attention to simplicity above all, it will pay dividends; you can start seeing what they might mean. If we take simplicity to be fewer and fewer accidental things, more and more focus on what the problem itself has to say, you can justify that quote there.

"We see essential complexity as the complexity with which the team will have to be concerned, even in the ideal world. If there is any possible way that the team could produce a system that the users will consider correct without having to be concerned with a given type of complexity, then that complexity is not essential."
And I have this in blue, but it's also in purple. And the reason it's blue is because we could just dismantle capitalism.
We could kill the users, right?
Like they can't call it incorrect if they're dead.
If there's any possible way the team could produce a system
that the users will consider correct.
There's a lot of possible ways.
I mean, they do say that they're going to put some caveats on the ideal world here. Like, this problem could already be solved for the users, and if so, then you don't have to build anything, but we're gonna, you know, put a limit on that, right? Even in our ideal world. Now, I'm gonna pigeonhole on this, because it's so much more fun to speculate on what they mean by "possible" here. So do they mean physically possible, or do they mean metaphysically possible?

It's time for Jimmy's philosophy corner.

So, I don't care that this is totally off topic, but they mentioned the real world, and that not all possible ways are practical. But also, in the real world, not all possibilities are possible.

Sorry: in the real world, not all possibilities are possible?

Yes, yes. They're "accidental necessity," to introduce terms that I brought up last time. Yeah, if you remember from last time: accidental necessity. This is Jimmy Slowly Teaching You Philosophy, the podcast. Emphasis on slowly.
Okay. So, for example: Superman flying around. We can conceive of that. We can think about it. Unlike, maybe, let's say, a square circle.

Like a squircle.

A circle that is also completely a square.

So, like, red and green at the same time. You can't see red and green at the same time.

Yeah, yeah. So, like a square circle: it has four sides and it has no sides; its area is pi r squared and... you know, that sort of contradiction in terms. Superman is not a contradiction in terms. Some guy flying around, leaping buildings, is not a contradiction. We can imagine it. And yet it's physically impossible. It breaks the laws of physics as we know them. Maybe you could try to come up with something, but you can think of some sci-fi scenario that is physically impossible. Maybe certain time travel things, or whatever; even ignoring paradoxes, they might just not be possible physically. So the question that I have is just: do they mean physically possible, or do they mean, even ignoring the laws of physics, what's possible? And maybe that makes no difference whatsoever, but the reading is definitely: any conceivable way. If it's possible at all, then it's not essential. And that actually follows more closely the philosophical definition of these ideas. If it's possible to get rid of it, then it can't be essential.
So I will self-indulge just a little longer. In modern parlance in philosophy, they don't use essence and accident much at all. You might find some Thomists that do, but generally it's "necessary" and "contingent" that are kind of the same sorts of things. So some things are necessary, and some things are contingent: they could have been otherwise, and it's just a contingent fact that they happen to be this way. So when we talk about bits and bytes and computers: we could have solved the same problems without any of those, and it's a contingent fact that we happen to solve it that way. But the other way of talking about it is not "contingent"; it's to use the word "possible": necessary and possible. Now, of course, all things that are necessary are possible, but we're, you know, really talking about "merely possible," right? It's possible or necessary. And so the way you define those: things that are necessary are true in all possible worlds; no matter which possible world you're in, it is still true in that world. And things that are possible are true in some possible worlds. And this is actually part of modal logic, which ends up feeding into programming. If you look at how you reason about distributed systems in a formal way, you use modal logic. If you look at how you reason about temporal properties of programs, that's temporal logic, which is a form of modal logic. So these actually all tie back into programming and have real applications. So here, this actually is a really good tie-in with essence and accident, because with necessary and contingent, we're looking at what's true in all possible worlds, no matter what world they were in. They couldn't get rid of this problem, so it must be essential. It must be necessary. And then, of course, here they restrict themselves to a certain subset of the worlds: where their users actually care about a problem and understand their problem. Anyways, I had to make all that tie in, because this is stuff that really does have real applications in programming, not just in philosophy, although that's where it starts.
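Those definitions, necessary means true in all possible worlds, possible means true in some, are mechanical enough to write down. A toy sketch in Python, with worlds and propositions invented purely for illustration:

```python
# Each "possible world" is modeled as the set of propositions true in it.
worlds = [
    {"users_have_problem", "software_exists", "bits"},
    {"users_have_problem", "software_exists"},  # the problem solved without bits
    {"users_have_problem"},                     # the problem, no software at all
]

def necessary(prop, worlds):
    """Necessary: true in all possible worlds."""
    return all(prop in w for w in worlds)

def possible(prop, worlds):
    """Possible: true in at least one possible world."""
    return any(prop in w for w in worlds)

# Everything necessary is possible, but not everything possible is necessary.
assert necessary("users_have_problem", worlds)
assert possible("bits", worlds)
assert not necessary("bits", worlds)  # contingent, i.e. accidental
```

This also mirrors the paper's move: "bits" show up in some worlds where the users' problem gets solved, but not all of them, so they're contingent (accidental), not necessary (essential).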
I like that. The user probably just wants to get their email. They just can't get through the login page; they're trying to log in and get their email.
But email's not essential.
They just want to talk to their grandma.
Oh, yeah, I suppose.
Yeah, they should talk to their grandma more.
Do they really need a grandma, though?
And I also like, in addition to that contrast, that you've made it such that my dismantling capitalism and killing the users is, like, reasonable and straightforward compared to upending the laws of physics as we know them.

Yes.

I like that I'm being practical by saying let's dismantle capitalism. It's a good day when that's being practical.

There's one other bit from this that I wanted to ask you about, and it's good that we're in the philosophy corner, because it's going back to the very first sentence of this section: "Brooks defined difficulties of essence as those inherent in the nature of software, and classified the rest as accidents." And I want to know what "inherent" means. Like, for instance, we got hung up last time on whether or not programming is inherently not spatial. And I think that I don't understand the meaning of "inherent" in as pure a sense as you do. And so I'd love to hear your definition of inherent.
I'm glad we had philosophy corner first,
so now I can rely on it.
Okay, so when people talk today about essence,
they usually define it in terms of these possible worlds.
And I haven't technically defined what a possible world is,
but I think everyone kind of has a good understanding.
It's like a way the world could be. And we're talking about, like, the whole universe, everything that exists, not just, like, Earth.

Right.

Okay. So when they talk about essence, they say the essence of a creature, of an object, or whatever, is the properties it has in all worlds in which it exists. I am, maybe, essentially human, because in all worlds in which I exist, I am human. I couldn't have been a frog.

Because then you wouldn't have been you.

Exactly right. So "inherent" would mean it is part of the essence of that thing; in no world could it lack that property. That would be one way of understanding this sort of language here, which I think the accident-and-essence framing really is trying to give you, right?
So it's saying, like: in all worlds, it has this property. For programming to be inherently spatial, there must be no world in which it is not spatial.

Yes. It must be spatial in all worlds.

Yes. I'm gonna very slowly convince you that that's true, as you very slowly teach us philosophy.

Well, if we go back to that discussion, there is a distinction that I will just go ahead and make: programming and programs.

Oh, god damn it.

Because that was actually what Brooks said: that programs are not spatial.

That's what I meant.

Well, but it matters, and I just wanted to make sure.

Okay.

Because programming might be inherently spatial, because it's an activity done by human beings, or creatures, you know, some intelligent creature or other, right? And so the activity might be spatial, but the program, the artifact itself, is not inherently spatial. That's why I wanted to make that distinction.

Do programs exist? Are you just wanting to turn this into a philosophy podcast?

No, I'm just trying to figure out, like, how can something that exists in our world not be spatial?
Okay, the question to answer this, which I'm not going to go into, because I already talked a little bit about this at the end of an episode that I totally did not think you were going to leave in the edit, about Platonism. The answer depends on your belief on: do numbers exist?

Oh, right. Okay, yeah. Okay, I'm right. Yeah, I got you.

If you think that numbers exist, it's not guaranteed that you think programs exist in the same way, but most likely, whatever way you think numbers do or do not exist, you'll probably conclude the same about programs.

Cool.

Right. So that would be my answer. If you think numbers don't exist, they're just thoughts in our head, or you think numbers do exist, they're thoughts in our head, right? You could say the exact same thing; that would be the way in which programs exist. Now, if you say programs exist as thoughts in our head, then you have this problem of program identity that I was pointing to before: are we thinking of the same program? If you think they exist as bits on a computer, you have the same problem. This is the question of universals, which has been brought up since the Greeks.
And so I may not be able to convince you that in all worlds programming is spatial, but maybe I have a hope of convincing you that in this world programming is spatial. That would be a much...

Or programs are spatial.

Yes, there is a difference. The artifact, right? Yes. I think you'd have a hard time, but I do agree that that's a much more modest claim.

Yeah. So, for example, it might be that people, or minds, you know, our consciousness, isn't necessarily material. In some worlds there might be immaterial minds, but maybe in this world they are all material, right? Or all made of carbon, or whatever, right? That's the kind of claim you could make that's much more modest than saying it is inherently, or always, or necessarily so, or things like that.
Or that all programs are sequential. Which they aren't, but I could assert that all programs are sequential, and sequence implies difference in position of some sort, and that is a space.

All swans are white, right?

Yeah. Not the ones that I spray paint. I'm not being nice to animals this episode. So yes, the ones that I spray paint are orange.

Yeah. Like, that might be a true statement, but is it necessary?
Yeah. And there's bits in here I will get rid of. I had some more philosophy hour things that I'll just ignore, like intensional identity versus extensional identity and all that stuff.

We'll get back to that someday, I'm sure. Yeah, yeah, yeah.

So, when we left off at the end of section three, it was talking about the different ways that we can approach understanding: you know, informal reasoning and testing. Now in section four, we're going to look at what are the causes of complexity. And, jumping ahead to section six, we learned that there's actually a specific framing of complexity here: essential complexity is the stuff that the user cares about; accidental complexity is the stuff the user does not care about. So there are several different causes of complexity, and the first one is complexity caused by state. And they begin this section... wow, I lost another highlight here. Damn. Weird.

"Anyone who has ever telephoned a support desk for a software system and been told to try it again, or reload the document, or restart the program, or reboot your computer, or reinstall the program, or even reinstall the operating system and then the program, has direct experience of the problems that state causes for writing reliable, understandable software."

And then a little bit later on, they have another little throwback to this, where they say that the fact that hidden, internal, mutable state is something you struggle with as you program, and that it will lead you to have issues, is what leads to this situation "with the hypothetical support desk caller discussed at the beginning of this section. The proposed remedies are all attempts to force the system back into a good internal state." And I really dislike this example, because they say: okay,
the fact that you have this mutable state is going to cause you to get into these situations where your state is in an unexpected configuration, and there are going to be bugs that result from that, and the solution is to blow away your state and start from a clean state again. And they're talking about this especially in the context of testing, right? When you write your tests, you start from an initial clean, good state, then perform the test, and at the end of the test, before the next one, you revert back to a clean initial state. This support desk caller thing, like blowing away your state and getting back to a clean state, is the kind of thing you do when your state is hidden and you don't have a good way of knowingly manipulating the state back to what you want it to be, and instead you have to just wipe it all away. It's a very blunt instrument to use to move from a bad state to a good state. Whereas if the state were more visible, then you would be able to navigate that state space more deftly, and be able to go from whatever state you happen to be in to the state you want to be in. And it's this hiding of the state. They bring this up throughout this section: every time they're talking about state, they're talking about it being hidden; that's something they put in parentheticals over and over and over again, as they talk about the mutability of the thing being the problem. And I would say that they don't address the fact that the hiding of it is what makes it especially pernicious.
And that struck me.
So let me give an example that's not a computing example. It's a real-world example of the kind of error state can cause that is not, I don't think you could say, necessarily hidden. And yet, without state, you wouldn't have had this issue.
You're in your car.
You're pulling out of a spot
that you parallel parked in.
And the car in front of you is a little too close
for you to just be able to turn out.
So what do you do? You put your car in reverse.
You back up a little bit.
And then you need to turn the wheel
and go forward.
It's a busy road. You see your opening.
You slam on the gas
and you haven't moved it back into drive.
You're in reverse, and you slam into the car behind you.
Yeah.
There's a reverse light on. Your reverse handle is wherever it needs to be. It is all visible state. There are easy ways to transition out of it, and yet you end up with this error, and it's because of the state of the vehicle at that moment.

I would argue it's because of the ergonomics with which you manipulate that state. Sure, it's visible, and sure, there's a handle, but that doesn't mean that it's ergonomic.

Okay. Yes, yes. So if you had said that all the problems are with un-ergonomic software, then you might be getting somewhere.
But you said visibility.

Yes. I'm specifically talking about the fact that the state being hidden is bad, and the answer isn't just to show the state; it's to show it in a way that is sufficiently ergonomic.

But if we eliminated it altogether, we wouldn't need to come up with a way of showing it.

So I'm not...

And nobody has successfully eliminated it altogether, which is why we have monads, and why we have impure logic programming, and all these other...

But this is the point: if we got rid of it, then we would not have these sorts of problems. So I'm not suggesting this is the actual answer.

Okay, yeah.

I'm just giving an example. If, instead of putting your car into reverse and then pressing the gas, you had to pull up on the gas pedal with your foot, right? Like, you put your foot underneath it and pull up. Then you couldn't cause this problem.

Yeah. Because the car would never be changing states.
Or the ways in which it was changing states would be abstracted from you. You're no longer required to maintain awareness of the state. The state is hidden. It may still be there.

Yeah. I think of modes and mutable state as almost synonyms. They're not quite, but the sorts of things we talk about with mode errors are very, very similar to the problems we get in programming with mutable state. And so I think this is a very good way to relate these two and see the problem.
The pedal is still in a mode; it's just, like, a temporary or transient mode. It's not in a mode that will persist when you're not operating it.

Yeah. And some people say it's only a mode if it's not the locus of your attention. And this is the locus of your attention: you're, you know, having to actively pull up with your foot or push down with your foot, as opposed to what state the gearbox is in, right? That makes a difference on these things. That's not the locus of your attention; it's the road and your foot.

It can become modal if you, like, accidentally drop a brick behind the pedal or something like that, and then you're driving backwards uncontrollably. But you get my point, right? Like, I don't see this as a bad example, because if that computer did not have that sort of mutable state, even if it had the ability to easily see the transitions, now you're imposing on the user the need to understand that state, and understand the transitions in and out of that state, and that actually might not be good for the user. That actually might cause them to make more errors in other situations.

And that's what I mean by ergonomics. In the same way that I'm not pretending that Tar Pit is advocating we abolish all state, full stop (it's advocating that we avoid state of certain kinds as much as possible), I'm also going to argue that we avoid hiding state: if you can't eliminate it, make it visible in a way that is nice to work with. And if you can't make it nice to work with, and you can't make it go away, then you're in trouble.
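The reverse-gear story maps directly onto hidden, persistent mutable state versus a momentary control. A sketch of our own (not code from the paper) contrasting the two designs in Python:

```python
class GearboxCar:
    """Persistent mode: the gear is mutable state that outlives the
    action that set it, so you can forget which mode you're in."""
    def __init__(self):
        self.gear = "drive"

    def shift(self, gear):
        self.gear = gear

    def press_gas(self):
        return "moves backward" if self.gear == "reverse" else "moves forward"

class MomentaryCar:
    """No persistent mode: direction is supplied with each action,
    like a pedal you push down or pull up. Nothing to forget."""
    def press_gas(self, direction):
        return f"moves {direction}"

car = GearboxCar()
car.shift("reverse")   # back out of the parking spot...
# ...see the opening in traffic, slam the gas, forget the mode:
assert car.press_gas() == "moves backward"  # the crash in the story

car2 = MomentaryCar()
assert car2.press_gas("forward") == "moves forward"  # intent stated each time
```

The gearbox version crashes because the mode persists beyond the action that set it; the momentary version has no persistent mode to forget, which is the "pull up on the gas pedal" design.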
and that it's it's sort of like addition to what Tarpitt is advocating for,
I would say that something they didn't advocate for,
but that all the pieces are here to put that together.
That's something that I get out of this as value.
And that's another one of my summary questions,
along with what are the limits of simplicity that this paper didn't talk about,
is: are there any good takeaways from this paper that remain, that people haven't taken away yet? And that's the one that I've got: there's a whole bunch in here about how bad hidden, unergonomic state is. I don't know if that's the right negation: unergonomic, inergonomic, non-ergonomic. I think, though, you know, I don't want to belabor the point, but if we get to their distinction between accidental state and essential state, which we'll get to at some point: the user in this call system, if restarting the computer fixes it, that was an example of accidental state, right? The computer did not need to be in that state,
and it could have been re-derived,
it could have been whatever, right?
And the fact that it persisted, that it continued even though it shouldn't have, shows that there was some sort of accidental state there.
And so I think they would say: yes, if you had to introduce that accidental state, they might very well be for making it visible and making it changeable, etc.
Because they're for all these other properties of making it derivable and not persistent and etc.
But I think they're giving this example.
Again, I just think the ordering of this paper could have helped.
Because once you get to the end and you get accident and essence and how they define it and then you get that applied to state,
it all makes all of this come back and make sense.
Yeah.
And I've got one more from this section,
impact of state on informal reasoning.
There's nothing special about this sentence in particular.
This is just an example of things
from throughout the paper.
The mental processes which are used to do this informal reasoning often revolve around a case-by-case mental simulation of behavior.
If this variable is in this state, then this will happen, which is correct. Otherwise, that will happen, which is also correct.
As the number of states, and hence the number of possible scenarios that must be considered grows,
the effectiveness of this mental approach buckles almost as quickly as testing.
And this whole paragraph hinges on a word in the first sentence.
The mental processes which are used to do this informal reasoning often revolve around a case-by-case mental simulation of behavior.
Often, not always.
And I mentioned earlier, and I said I wanted to double emphasize.
Double emphasize.
Said it two times.
That when they were talking about testing and informal reasoning,
these were but two examples of a larger possible set that they are pulling from.
This paper is loaded with arguments that are made using examples that are pulled from a larger set,
where they say: sometimes it's like this, or often it's like this, or here's two of the whatever. And all of these ways of making their argument make the argument stateful, in that the argument is now dependent on whether or not you kind of agree with their example, or whether you take the right thing out of their example, or whether you're willing to grant that, okay, this example is indicative of the larger range. They give us a couple of inputs to the argument, but they don't give us inputs that tell us things about all of the possible inputs to their argument. They just give us, you know, a couple to work with.
And what this means is that the whole paper,
the whole argumentation of the paper is stateful.
And it is that stateful argumentation where I think this paper fails. And since it is that stateful argumentation that makes this paper fail, that in itself makes it clear that state is bad. It is this failing to do stateful argumentation that makes state clearly bad, and therefore proves the point that this paper is trying to make, and that is why it is actually a good paper and not a bad paper. I rest my case. But what if they put all of those inputs into a stream, and then they reduce over the stream? Yeah, but would it be the same stream if they walked into it twice? Yeah, Epimenides dipping his buttocks into the stream: is it the same stream as he dipped his buttocks into earlier? Okay. Yeah. So I'll just say, I have run into this. This is the part of this paper, and I think it's better stated later, that rings true.
I do think this is the part that convinces a lot of people.
I know it convinced me, and it's one of the things that I loved about this paper,
was yes, it's not a buttoned-up perfect argument,
but I find myself in systems that are full of mutable state doing exactly that. And it's so hard for me to reason that way. I suck at it. It's so hard for me to think that way because I can't visualize things, and oftentimes it relies on holding in my head a bunch of different scenarios and switching between them, right? And paper can help with this, right? Like, you can sit down and write out: okay, here's this set, here's this state, here's these things, and try to go through it. And this is what my, you know, intense whiteboarding thinking sessions often are: trying to think of all these scenarios. But I did find in my career, when I moved to software that didn't work that way, right? Like Clojure. I had to think in that way less often. Now, does that mean my productivity was 10x
and all my problems were all solved?
No, but there's a certain way of thinking
that I never had to do again.
And I found over and over again,
I would make fewer mistakes
because I was better at the other kind of thinking.
And so I think this is going to be person-relative, right?
I've met people who are so good at thinking in this way.
And like, as you do it longer... I mean, if you compare me to an absolute beginner in programming, they're often not as good at thinking that way as I am, just because I've done it longer, even though I don't have a natural tendency towards it, right?
And I can kind of
almost shortcut a bunch of steps that they have to do, because they have to think about things that
I can just throw out out of hand, or I'm like, oh, this is similar to that problem, whatever, right?
But I do think, yes, it's not, I think this is something I see over and over again in these
papers that I actually love, is the most convincing part is the worst argument.
And I actually love that, right? We can point out the flaws of the argument as an argument,
and yet the intuition is there. And it's just their framing that makes it implausible.
If they just said, here's an intuition pump, or here's something to get you thinking about why we might care about
this: here's an example, here's a thought experiment, it becomes much better, right? And so, yeah, I hate this kind of reasoning. I find it very annoying to do, and the more and more I manufacture my systems so I never have to do it, the more and more I enjoy them. The kind of reasoning being, like, having to explicitly think through, case by case, as you're reading this paper: okay, but in this example they've set up this specific caveat, so I have to ignore all of these other scenarios and just consider the scenario they're talking about in order to understand the point they're making. That kind of, you know, case-based, situational reasoning. Yeah, I do think that this is actually an important critique of, like, writing,
because I do think sometimes people think their argument is stronger than it is
because they've set up a certain state and assumptions in a paper
that they don't realize their readers aren't going to consistently hold.
And I'm half-joking, but I also did find that.
Like reading this paper, a lot of the arguments rest on whether or not you kind of get the right takeaway from the examples that they give.
And when you go back to the way they set up the argument, if you disagree with the selection of their examples, it kind of invalidates the premise on which all of their argumentation rests. And you get to the end of it and it's like, well, I don't actually agree with your
conclusion because you picked a bad example initially and something about your conclusion
depends on that specific example you picked, not the broader field from which that example
was drawn.
Yeah.
So in section four, we kind of skipped over, but we had impact of state on testing,
impact of state on informal reasoning, complexity caused by control.
Yeah.
So the two big ways that complexity is caused are by state and by control.
And state is state.
We don't need to explain it.
Control is very specifically the order of things happening
Control, I had to go and look this up. Yeah. And I was like, this is probably an aspect of the history of programming and its origin in mathematics, or something like that, that I'm not as comfortable with. But branching is not what is meant by control; when they talk about branching, it's a very specific, like, complementary example. When they say control, they very specifically are referring to the order in which things happen. Branching is part of that, but it's also about, like: this statement executes first, then that statement, then this other statement. And in most languages, the order you write the statements is the order they execute. That's why they talk about Prolog and some other systems where you can write the statements in any order and they are all independent, and you know that they are going to be executed independently when you write them. So you write them to not have interdependencies, or you explicitly call out those interdependencies.
And that is what is meant by control in this context,
not like branching and if statements
and that sort of thing,
which are sometimes referred to as control constructs.
And it's broader than that.
And you can think about this kind of more practically.
If you have a program
and you wanted to say something like this,
create an order unless the order
has these invalid conditions.
In your code, you'd have to make sure
that you did the invalid check
before you ran this statement.
You as the programmer need to think about: in which order is my code going to run? It might be, right, that I call a function that takes all those properties and I get the return value when I do it, or I set a flag up here, and all my checks that would ever touch that flag need to run before I create the order.
In a system where control didn't matter, you could just make that statement,
make an order unless there's anything invalid.
And then you could make statements about what makes something invalid anywhere
else.
And then you could run the program, and it would continue to run until all the possible, you know, things have happened.
All the effects have happened. And then,
and only then when it knew for a fact that it wasn't
invalid, would it make the order, right? And you would never have to think about that ordering.
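To make that concrete, here's a toy Python sketch. Everything in it is invented for illustration (the field names, the rule-registry idea): the first function shows the imperative version, where the programmer must sequence the validity check before creating the order; the second shows a control-free style, where validity rules can be declared in any order and the system checks them all before acting.

```python
# Imperative style: the check MUST run before the creation, and the
# programmer is responsible for keeping that ordering correct forever.
def create_order_imperative(order, inventory):
    if order["qty"] <= 0:                                # must come first
        return {"status": "rejected", **order}
    if order["qty"] > inventory.get(order["item"], 0):   # must also come first
        return {"status": "rejected", **order}
    return {"status": "created", **order}

# Control-free style: "make an order unless anything is invalid".
# Rules can be registered anywhere, in any order; no sequencing to think about.
VALIDITY_RULES = []

def rule(fn):
    VALIDITY_RULES.append(fn)
    return fn

@rule
def positive_qty(order, inventory):
    return order["qty"] > 0

@rule
def in_stock(order, inventory):
    return order["qty"] <= inventory.get(order["item"], 0)

def create_order_declarative(order, inventory):
    if all(check(order, inventory) for check in VALIDITY_RULES):
        return {"status": "created", **order}
    return {"status": "rejected", **order}
```

The two behave the same; the difference is where the burden of ordering lives, in the programmer's head versus in the runtime.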
And there's some interesting questions about like, I'll read here.
Like basic control, such as branching, but as opposed to sequencing, concurrency is normally specified explicitly in most languages. So they're saying branching is explicit, concurrency is explicit, sequencing is implicit. What is it that makes something explicit or implicit within a programming language, or what have you? How fuzzy are those boundaries, and how do you move those boundaries around? So, like, you know, sequencing is arguably implicit, because you write your statements on each line, and you kind of execute them in order. But within each line, it's a little bit unpredictable, the order that's going to happen. Like, you have to know order of operations. You have to know: okay, does "and" bind more tightly than "or"? Which one of these is going to happen first? That kind of thing. There's a lot of implicit ordering within the code that you write, unless you're forcing it to be explicit by using parentheses or what have you.
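A tiny Python example of that implicit within-a-line ordering: `and` binds more tightly than `or`, so two expressions that read similarly left to right parse differently unless you add parentheses.

```python
a, b, c = True, False, False

# Implicit ordering: Python parses this as `a or (b and c)`,
# because `and` binds more tightly than `or`.
implicit = a or b and c

# Explicit ordering: parentheses force the other grouping.
explicit = (a or b) and c
```

Here `implicit` evaluates to `True` while `explicit` evaluates to `False`, purely because of precedence the reader has to carry in their head.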
And then there's some argument as to whether concurrency is explicit or implicit,
like, you could use mutexes or something like that to guarantee that something is going to happen before some other thing, or CSP or something like that.
I just find it interesting to think about how implicit or explicit these things are, because to me that is just another instance of visibility versus invisibility, where making something explicit is in a sense making it more visible, and making it implicit is in a sense making it less visible. And it makes me think that, you know, concurrency is tricky, especially tricky to test, because the way that you try to test it is usually by forcing it to not be concurrent, or at least not be non-deterministic: forcing it to execute in a predetermined order.
And it's the non-determinism that makes it tricky.
But to me, non-determinism is just like another kind of invisibility.
And so that's just an area of this paper that gave me a lot to think about,
about that visibility-invisibility frame
and what we could do in our future of coding explorations.
Yeah, I think a lot of programming style hinges on explicit versus implicit
and what things we consider when and where.
And I found that a lot of disagreements are really about that question.
Just practical disagreements we have on code style are like: I like these things to be explicit, and I like these things to be implicit, right? You can see that with, like, composition versus inheritance. And, you know, should you pass an argument to a function or rely on some global variable, or blah, blah, blah, right? There's all these things that are all about that.
So I do think that's an interesting variable to play with. And I think, you know, Zig, for example: I think the most interesting choice Zig made is that all allocations are explicit, even including passing which allocator you will use. Whereas every other language I can really think of makes allocation implicit; even C's malloc is this implicit thing that just exists everywhere, right? I can just malloc anywhere. Whereas Zig said: no, this whole thing about allocation is going to be totally explicit. You have to pass an allocator if you want to allocate. And that is such an interesting choice for a language.
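A loose sketch of the idea in Python (Zig's real allocator API is different; this is just the shape of the design): functions that need to allocate are handed an allocator explicitly, so the caller can tell from the signature alone whether a function can allocate at all. The `CountingAllocator` class and both functions are invented for illustration.

```python
class CountingAllocator:
    """Toy allocator that tracks how many buffers it has handed out."""
    def __init__(self):
        self.allocations = 0

    def alloc(self, size):
        self.allocations += 1
        return bytearray(size)

def read_message(allocator, payload):
    # Can allocate only because an allocator was passed in explicitly.
    buf = allocator.alloc(len(payload))
    buf[:] = payload
    return buf

def checksum(buf):
    # No allocator parameter: by the convention sketched here, this
    # function cannot allocate, and a reader can see that at a glance.
    return sum(buf) % 256
```

The visibility payoff is in the signatures: `read_message` advertises that it allocates, `checksum` advertises that it doesn't.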
And I wonder if we can do more things like that, right? More things where we play with the explicitness of these things. That's actually a great reference, because we are just looking right now at section 4.2, which is complexity caused by control.
Section 4.3 is complexity caused by code volume.
I have nothing there, but we can come back to it if you do.
But the following section, section 4.4, other causes of complexity, it lists some other
examples like duplicated code, dead code, unnecessary abstraction, missed abstraction,
poor modularity, poor documentation.
And it says that these other
causes come down to the following three interrelated principles. Principle number one,
complexity breeds complexity. Principle number two, simplicity is hard. And principle number
three, power corrupts. And I'm just going to talk about power corrupts for a second. We can jump
around. And the authors say, what we mean by this, power corrupts,
is that in the absence of language enforced guarantees, i.e. restrictions on the power of
the language, mistakes and abuses will happen. This is the reason that garbage collection is
good. The power of manual memory management is removed. And I wanted to tie to that because Zig requiring you to specify which allocator you're using,
that's interesting because this thought that I had about garbage collection is that
the reason that manual memory management is bad, in my read, is that it's usually not visualized.
And you can solve that problem by taking manual memory management away, or you could solve it by better visualizing memory. And I think this thing that Zig does, of requiring you to use a specific allocator and saying you need to know which allocator you're using, not just some global, invisible, background malloc that always exists and is hiding behind the scenes, but foregrounding that and saying you need to have a more direct, immediate relationship with your allocator, is one way of making it more visible. And GC is the blunt instrument that I kind of referred to earlier when talking about, you know, do you just obliterate state, or do you make state more visible and more ergonomic to navigate through? This is another example. Do you just obliterate memory management, or do you make memory management more visible and hopefully ergonomic to wield? It's less about dealing with power by saying, I refuse power, and instead dealing with power by making that power less likely to slip out of your grasp and cause wounds and amputations and that sort of thing.
Yeah, I think considering the constraints we put on users and how much power we give them, how much we take away, it has huge trade-offs.
And I don't think there's one easy
answer here. No. It's a little too easy to think that the least amount of power is the most useful thing, or that we can even have this sort of easy notion of powerful or not, right? Like, yes, you might think of manual memory management as more powerful than garbage collected languages, but I don't know, that feels a little weird to me, right? Yeah, there's a value judgment about what power means. Yeah. Like, there's certain things I'm able to do easily in garbage collected languages, that I can get away with, and that cause me problems that just wouldn't happen in a manually memory-managed language.
For example, I can just sit in a tight loop
and allocate all over the place
and never realize I'm allocating. And I could leak that memory and, you know, not realize I'm doing it, because, like, I didn't even know of the concept of allocation. I think of early on in my programming career: I had no idea that allocation was a thing, right? I'm like, oh, I'm making objects. But if you would use the words, oh, you're allocating memory and the garbage collector is going to have to deal with it, that's why your program's so slow, I'd be like, wait, what? The garbage collector? Like, I wouldn't know what these things were, right? And so, yes, if you tried, you could probably give some technical definition or something. But if we're talking practically here, it is, I think, two sides of the same coin, thinking of one thing as restricting power,
but it also can provide other powers, right? And yes, that's why I think the Zig trade-off is so
interesting because it is a power and a restriction at the same time. You're given the ability to do
manual memory management so that you can do some of these unsafe things you might not be able to
do in a garbage collected language,
but also you're restricting all functions
that aren't passed an allocator to not allocate.
That's the fascinating trade-off here.
And that is the trade-off: restricting what you're able to do makes it easy to see what you're not doing.
Whereas if you're able to do it, you know, willy-nilly, then you can't see whether it's happening or not. Yeah. If you use a library in Zig that doesn't get passed an allocator, it doesn't allocate. Yeah, right. In any other language I can think of, how do you get those guarantees? It's not an obvious answer. Right? So, yeah.
I think, okay, so my opinion on a lot of this,
like we've gone, we've summarized some of this,
and some of this has, of course, been floating out here.
We've gotten pretty far in this paper,
but we're not, I mean,
if we actually wanted to comprehensively cover this paper, we would have to go for a few more hours.
Right?
And we already said we're only going to really cover the first half, but there's still a ton more in here, right? I just, I guess I don't know how you want to... I think if we try to just continue down this list, I have almost nothing highlighted from here on. Not to say nothing, but it's mostly like, I've got: the one advantage that all these impure non-von-Neumann-derived languages can claim. And I find "non-von Neumann" funny and hard to say. Non-von Neumann. He's John von Neumann, but these non-von Neumanns, yeah, you know, watch out for them. That's literally the kind of stuff that I have highlighted from here on out.
I think it's already made most of its interesting claims
and what follows is it pivots from talking about the problem
to talking about existing solutions
and why they don't work
and then talking about proposing a new solution.
I think we should summarize what
that solution is. We don't need to dive into all the details of it, but I think we need to put that flavor out there to do the paper justice, and then concluding thoughts. If we want to revisit this paper later, which I know your answer is probably no, but if we want to revisit this paper, I think that would be a good section to cover. But I think that we've covered a lot of what this paper is about, which is: state causes lots of problems, and the elimination of state is crucial for reducing complexity. And now we're getting into how you practically eliminate state
in a way that you can build a system that has all the properties
that you want of a real practical system,
and yet none of the downsides of these big, complex things.
And I had a bunch of little, there's a bunch of things I had highlighted,
like hot takes, like they don't like monads,
and they say that all OOP relies on state,
which isn't true and things like that.
Yeah, their takes on OOP are trash.
Like I have no problem saying that.
Like their interpretation of what OOP is, is very narrow.
And it's a great example of that sort of stateful argumentation that I was complaining about, where it's like: they pick OOP as the way to talk about imperative programming, and then don't really talk about the imperative aspects of OOP. They talk about the OOP-ness of it, and then at the end go, and that's why imperative programming is bad. Which, you know, doesn't need to be the case. You can have OO without being imperative.
Yeah, and you can have OO without state.
There's a great paper, On Understanding Data Abstraction, Revisited, by William Cook, that maybe we should cover at some point. I don't know. It might not be totally Future of Coding relevant, but it's a really good paper. It kind of offers a different definition of object-oriented programming, and it eliminates state from it completely.
It is an immutable system.
It's fascinating.
Such cool stuff.
So yeah, I think that they clearly don't like object-oriented programming, and their answer is functional relational programming.
And I'll give you just like the little elevator pitch of this idea.
So we know that state is bad
and the elimination of state is good.
What we need to do is we need to now limit ourselves
as much as possible and focus on the essential state.
Now remember, essential is what matters to the users.
So we need to take what matters to the users,
what we can't derive from anything else, right?
So, for example, if a user clicks buttons, we might store those button clicks, but the count would be a derivable feature of those clicks, or the frequency, or whatever, right?
Any other properties of those button clicks.
That's accidental state that we might need to store only for practical purposes,
but the essential state of they clicked this, we need to store that.
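The click example can be sketched in a few lines of Python. This is my illustration, not the paper's notation: the raw click events are the essential state we must store; the count is derived on demand rather than kept as extra mutable state.

```python
from collections import Counter

# Essential state: the facts the user actually produced.
clicks = []

def record_click(button):
    clicks.append(button)

def click_count(button):
    # Derived (accidental if stored): recomputable from the
    # essential state at any time, so we don't store it at all.
    return Counter(clicks)[button]
```

If performance ever demanded a stored count, that stored count would be accidental state: wipeable and rederivable from `clicks`.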
And we're going to combine this like functional programming,
minimal state with relational programming,
which we usually think of like SQL,
but they really mean like the relational algebra.
And that's going to be our model for data. So all we're going to have is functions and relations between data items. Also, you know, go back to our discussion on Brooks, where we talked about algorithms dropping out, etc., right? This is the model of what the essence of programming is, as much as we can practically get at it right now. They admit that, like, they don't really meet their ideal world. Of course, they're talking about functions and they're talking about relations, and these are things that users aren't thinking about. But we've accepted some constraints: we're building a software system, we have to work with software practically. And they even say, like, this FRP system does not exist. There's no off-the-shelf thing you can use. They didn't even build it. Yeah,
we didn't build it. There might not ever be an off the shelf. Maybe you always have to build this
yourself. Maybe you can find some ways of plugging into it. But this is the idea. You have a purely
functional little bit of the program, you have your relational data, these things hook up, and that's the whole system. There's no imperative shell around this. There is, though, they have this little diagram where, like, the core of the system is the essential state, and it refers to nothing else. Yeah, yeah. Then there's the essential logic, which refers to the essential state and nothing else. And then there's the accidental state and control, which is a little bit imperative in their framing of it. Yeah. That refers to both the essential logic and the essential state. And it's sort of like the accidental state and control is the stuff that you'd write, imperatively or however, to get performance hints into the runtime that's running this system, so that it doesn't do something pathologically dumb when you just give it state and pure logic. Yes, you know, you're absolutely right. There is, eventually, if you need it, yes, and only if you need it, can you add some accidental stuff. And the main reason you're going to need it is performance.
And that in an ideal world where performance was no issue,
you would need none of this.
Yes, exactly.
And I think there's also kind of guidance
about when you add this accidental stuff,
it should have certain properties.
Like, if it doesn't need to persist, and it could be rederived, or it can persist in a lossy way, you know, a way that you could wipe it out and then rederive it later: do that, right? Try to make it as pure as possible, so it's a data-defined thing or something like that. And it has to be totally separate, right? Yeah, it's isolated from the essential logic and state; they can't refer to it. Yes. Yeah, there's a one-way-ness to this, right? So I might make a little data-defined language for my control flow, and then it might call some of my pure functions that do things, but my pure functions can't know that they're being called. They can't reach in. And if they have to have any of that, it's got to be supplied to them and, you know, all of that.
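The one-way layering they're describing can be sketched like this. The names and the cache are mine, not the paper's: essential state refers to nothing, essential logic refers only to essential state, and the accidental layer refers to both, never the other way around.

```python
# Essential state: bare relations (here, sets of tuples; refers to nothing).
orders = {("o1", "book", 2), ("o2", "pen", 1)}

# Essential logic: pure functions over the relations (refers only to state).
def order_totals(orders, prices):
    return {oid: prices[item] * qty for oid, item, qty in orders}

# Accidental state and control: a cache added purely for performance.
# It calls the pure logic, but the logic never knows the cache exists,
# and it could be wiped and rederived at any time.
# (Toy simplification: the cache key ignores `prices`, so this assumes
# prices are fixed for the cache's lifetime.)
_cache = {}

def cached_totals(orders, prices):
    key = frozenset(orders)
    if key not in _cache:
        _cache[key] = order_totals(orders, prices)
    return _cache[key]
```

Deleting `_cache` changes nothing about what the system computes, only how fast; that's the test for whether something landed in the right layer.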
I have worked on so many systems that were developed this way.
And I will say, I think the reality, I know this is like jumping ahead to conclusions of what I think of the system, but I'm going to.
The reality of these systems is that they are totally fine as long as you ignore conformity, the one element of Brooks that kind of got thrown out of this whole discussion.
When you have to make this system talk in a complicated way to the outside world
and that outside world has a bunch of requirements on how it wants to be talked to
oh my gosh, is this a pain. Your system ends up becoming 90% translation between what the outside world wants to say and what you want to say.
Now, there's also performance issues.
And I've worked on systems where I found I took something that took 30 seconds and turned
it into 0.1 milliseconds just by not using the fancier things that had been built up, a Datalog database with blah, blah, blah. Right. I've done that. I replaced it with the same number of lines of code, just didn't do the Datalog query. Instead I did some, you know, purely functional but normal logic, right? Not a query onto the relational model. Like, not all data fits in this relational model. If you have to communicate with the outside world, taking structured data, turning it into relations, and then turning it back into structured data is not fun.
One of the criticisms that I read of this part of the paper, when I was doing some background reading, was that one of the things that seems to be lacking in this design, that the authors were aware of and just kind of brushed under the rug, is that it lacks a theory of updates: those updates make that essential state mutable, and how ought you to grapple with that within this framework that is otherwise about being very immutable at the core? And that's something that I think Hickey did a better job of in his, you know, absorbing and reworking the thoughts from this paper, is that he's always come forward with a theory of process, or a theory of change, or a theory of how systems, you know, need to be not just state as a pure, immutable, abstract thing, but a succession of states, and how you manage that
succession over time is of key importance. And so I hear a little bit of that maybe in what you're
saying as well. Like when you conform to the outside world, one of the things that the outside
world expects is that things are changing and that you're going to get new data coming in and you're going to have
to incorporate that. And another area where I think maybe this proposal fails a little bit is the other kind of theory of change, which is, like, how you grow and evolve these systems over time. I don't see in here a lot of thinking about, once again, the ergonomics of what it's like to work within this kind of a structure, and how one of the nice things about imperative mutable systems is that they're really easy to evolve and grow. So easy, in fact, that it's easy to end up with a ball of mud,
and it's easy to go off the rails
and it's easy to move from a correct state into an incorrect state.
They don't do enough to help you avoid bad kinds of changes
because they're all about being easy to change.
If you want to go add some new functionality that needs some data that's over there, you just go grab the data. You don't have to, like, do the thing that they talk about in functional programming earlier in this paper, where they say, well, one way that functional programming can handle this is by making a big wad of state, passing it into every function, having that function change whatever part of the giant ball of state it needs to, and then returning that giant ball of state.
And they reject that as a good idea.
Although I disagree.
I think it's a great idea.
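For the record, here's a toy Python sketch of that wad-of-state pattern (my names, my example): every function takes the whole world and returns a new whole world, never mutating in place. Real systems would reach for persistent data structures to make the copying cheap.

```python
def add_user(world, name):
    # Return a fresh world with one list extended; the input is untouched.
    return {**world, "users": world["users"] + [name]}

def log(world, message):
    return {**world, "log": world["log"] + [message]}

def handle_signup(world, name):
    # Explicit threading: each step's output is the next step's input,
    # so the order of effects is visible in the data flow itself.
    world = add_user(world, name)
    world = log(world, f"signed up: {name}")
    return world
```

The cost is that every function signature carries the whole world; the benefit is that any old world value is still there, unchanged, to inspect or roll back to.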
But I totally agree with you,
and I just take it one step further
that they have identified this interesting essence
of the problem,
but the problem itself is what changes.
And most software systems that I've worked in that are overly complex to the point where no
one can understand them, part of the reason is the problem they're trying to solve has changed.
And so what was essential before is no longer, not because it's no longer essential to the original problem, but the problem
itself has changed. And so it's not essential to the software system, even if it's essential to
the problem. Because now that software system solves a new problem, or it doesn't solve that
old problem anymore at all. It doesn't need to. That problem is obsolete. Or it has to solve this
problem in addition to those old problems. And when do you divide up? When do you decide that two problems
are distinctly different, right? And how do you make sure that they don't interconnect in a weird
way? These are all the things that I think actually make these systems hard. And the more you boil
down to the essence of the problem, the harder it is to shift to other problems.
I think a really easy example
that I kept thinking of is
I know many, many
kinds of systems where
performance is essential.
Where if you are building
a software program with a framework
that was designed assuming
an ideal world where performance
does not factor, and then you can
sprinkle on a little bit of performance at the end. They specifically argue somewhere in here that it's easier to make a simple system fast than it is to make a fast system simple. I disagree with that; I don't think that's true.
I agree. But if you have this framework that leads you to build a simple system that attacks a single problem, and that problem is what is essential, and you've built that thing around it, it's very, very easy to find yourself in a position where the system you've built has kind of crystallized around that problem, like Jimmy's saying. And suddenly the issue is not, oh, the problem has changed, but: your system is too slow, and now it needs to go fast, and performance has moved from being inessential to being essential.
Yeah. Now what do you do? The entire framework upon which you've built this solution is now part of the problem. Or, and I've seen this happen specifically with systems built around this approach, you solve the problem in a very slow way, and that causes other upstream problems that now need to be solved only because your system is not fast enough. And being just 10x, 100x faster would be totally achievable had you not taken this approach. You wouldn't need to solve those upstream problems, because those upstream problems only happen because you can't do something at scale here.
So I do want to say, I think there are lots of critiques here, and I think there have to be lots of critiques here,
because this is offering a silver bullet, right?
And explicitly so.
It doesn't technically say it's going to give 10x.
I don't think they ever make that claim in here,
but they do think it's an answer to Brooks's problem,
and so that kind of implies that it's going to, you know,
solve these problems. They at least think it's going to reduce... you know, why is it hard to make
software? Well, it won't be now, if you follow this. And so I think it's good for people to make
ambitious claims, and they do try to hedge a bunch on these sorts of things and not make too bold
of claims. I think pointing out the flaws is not to say
that this paper is bad or wrong, or that you shouldn't read it, or that you shouldn't learn from it, which it
is, and which you should, and which you can't, so don't.
I think this has been the most... it's not my favorite paper, but it's probably been the most influential in my career.
And it brought me to a place where it taught me so much
by just taking this approach.
There's a certain asceticism to it that is really beneficial.
If you come away as a zealot about this paper
and think that all systems that don't follow it are wrong,
maybe that's not the best thing,
but maybe you needed that to develop, right? But what I will say is, if you take this frame
and you apply it and you see how far it goes, you end up on the other side learning a lot
about software. And that's what I think is so wonderful about this paper.
And just to follow up with what you said earlier, they do actually say
the silver bullet.
I'll read the very last couple sentences from the paper.
So what is the way out of the tar pit?
What is the silver bullet?
It may not be FRP, but we believe there can be no doubt that it is simplicity.
Oh, yeah.
I have that highlighted in green.
I should have remembered.
Because of course, simplicity is exactly the kind of silver bullet
Brooks was looking for.
That's exactly what he was after.
And that's what's so...
that's why I love...
That's what's so galling about this paper.
Well, that's what was so good about reading this paper right afterwards.
That's why I made this requirement, right?
It's because, having read them both, and then having read Out of the Tar Pit more and remembered more of it, and
then going back and reading Brooks, I saw how far apart they were. These couldn't be more far apart,
even though they're supposed to be about the same topic, right? And I think there's a lot in here that
we didn't get into, like essential state and essential logic and external state and control, all of these little details here that
are really interesting to think about. And when you find yourself, practically, in your program,
deriving state, realizing that you might want to treat it differently is a really good little
trick and technique for making your program better. And so, yeah, I think I would be more for people putting
out bold visions that are wrong than people making lukewarm takes that are right. Now, I don't know
which bucket I fall into with my whole "it's all visibility / invisibility". Is that a bold take that
is wrong, or is it a lukewarm take that is right?
I don't like either of these options. They both suck.
Okay, that one has a higher chance of being wrong. Yeah, yeah. Like, I'm
definitely with you. In all things in my life, when it comes to assessing the work of
other people, one of my top criteria is: how much risk are you taking? Especially when it comes to,
you know, art and music and film and things like that. One of the things I value most is somebody
who took a swing and a miss, right? Like, I'd love to see, you know, some established top 40 act
come back with an experimental noise music album and have it be critically panned and a commercial
disaster, and that sort of thing, because to me that is a sign that they thought, here's an opportunity
for us to do something different. And I really cherish that. I cherish the mistakes that people
make on their path exploring the space, rather than just finding something that works and entrenching themselves in
it. And I think that might be what bugs me about all of this parroting of, you know, the ease of
reasoning about things that has come up over the past decade, and the emphasis on a certain
interpretation of what Rich Hickey said, kind of based on this paper, where it's all about, you know, oh, this
particular aesthetic choice we made is justified because it minimizes complexity. There's
a certain amount of cargo culting there that really bothers me, in the same way that
it really bothered me with all the copycat acts that
came out after, you know, Animal Collective, one of my favorite bands from the 2000s. They
came out with a style of popular music that was very different from what everyone else was doing,
and each new album they put out was markedly different in its genre and instrumentation and
songwriting style and all of that. And with each new album they brought out, there was a wave of copycat acts that sounded exactly like that new album, and that continued,
album after album after album, for about a decade. That sort of thing I don't like. I don't
like the thoughtless repetition or the regurgitation of ideas. And I appreciate this paper in the little nooks and
crannies of ideas that it almost hits upon, that are almost valuable, like the way it almost could
be used to support "visibility is an important criterion of understandability", but it just
walks right past that. So, arguably, I see the good in the Darth Vader of this paper.
Oh, yeah.
In my notes.
Conclusion.
Tarpit is a great paper.
It's so full of bad arguments that you end up generating a ton of insight by disagreeing with it.
Every time you read it, you'll find new ways that it's wrong, which pushes you to better
understand programming.
It's a golden goose.
Emphasis on goose.