Advent of Computing - Episode 121 - Arguments Against Programming
Episode Date: December 4, 2023

Most accounts of the early history of programming languages share something in common. They all have a sentence or two explaining how there was great resistance to these new languages, but eventually all programmers were won over. Progress was made, despite the forces of counterrevolutionaries. What you won't find in most histories are the actual arguments these counterrevolutionaries made. This episode we are looking at those arguments. I've tracked down a handful of papers that argue against digital progress. Are these truly cursed articles, or is there something to be learned from arguments against programming?

Selected Sources:

https://dl.acm.org/doi/pdf/10.1145/1455270.1455272 - Why Not Try A Plugboard?

https://dl.acm.org/doi/pdf/10.1145/367390.367404 - Comments from a FORTRAN User

https://dl.acm.org/doi/pdf/10.1145/320932.320939 - Methods of Simulating a Differential Analyzer on a Digital Computer
Transcript
Hey, it's your boy Sean, coming in a little bit late to the party.
Um, yeah, I have to apologize for being so behind the ball on this one, but I do have
some excuses.
Life's been kind of busy lately.
I've been traveling for work and bustling over the holiday.
I had initially laid out this brilliant plan, but forces tend to conspire against me whenever I have good ideas.
I knew I'd be short on time, so I planned to work up a paper review-style episode. I run these from
time to time. They're usually easy, fun, and pretty straightforward to produce. I find a pile of
related papers I've been wanting to talk about, string together a story, and just dive right in.
But you know what they say about best laid plans. Progress was slow, I was busier than I thought I would be, and I had less time
every day. Best of all, I got sick. My voice gave out the morning I was planning to record this
episode. So, things are a little bit late, and you have my sincerest apologies. I've mostly recovered now.
We'll see how my voice holds up.
Now, personally, I think I was cursed.
I like that explanation because it's better than
I'm bad at estimating things and doing time management when I get busy.
I also think a curse would fit the topic at hand very well.
Oftentimes I'll say I hate computers and that I personally hate the future. This is usually after
I've spent the day fighting a particularly devious bug or new programs and updates ruin something I'm
working on. I don't really mean that I hate the pace of technological progress or I want to go
back to the pre-digital era and work a plow. Rather, I'm just kind of mad. If I cool off for
a few minutes, I can usually articulate my actual problem. But what would happen if I took a more
aggressive step than cooling off, than walking away from a keyboard and maybe crying in a corner?
What if, instead of taking a beat and then trying to fix my problems,
I decided to complain on the grand stage?
I decided to curse the entire world.
I could go from hating computers in silence to spreading my disdain publicly.
The result, at least in my head, would be some truly cursed and truly confusing arguments.
Perhaps even some wild papers.
This leads us down a strange avenue.
It turns out that such papers do exist.
Folks stuck in the digital world have been complaining since time immemorial,
but how much has been written down?
How many have been pushed to actual action?
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 121,
Arguments Against Programming.
This topic was actually spurred on by one particular paper that I ran into.
That article, Why Not Try a Plugboard, really messed with my head.
So, let me break this down as best I can.
Whenever you read about early programming languages,
you run into a sentence or two that mentions all this resistance that there was to higher-level languages. We're talking very early days here,
like Fortran early, A-series compiler early. You'll run into these weird stanzas that claim existing programmers didn't want to use compilers, and that programming languages eventually won over these curmudgeons.
They eventually won the argument. From the modern view, with our temporal superpowers,
this is just an obvious outcome. Programming languages have revolutionized the world. I don't
think I could imagine the 21st century where programmers toiled away in machine code.
Productivity, complexity of code, cost,
really everything has been improved thanks to high-level languages like Fortran. Well,
maybe not Fortran itself, but you get the point. We know who won the argument, so
the argument itself seems trivial. But what about the other side? What about the arguments
against high-level languages?
That part of the story seems to really get short shrift, and personally, I think that should change.
I think this might be an interesting thing to investigate precisely because it seems so trivial.
At the time, this would have represented a seismic shift in the industry. Everything was on the cusp of changing forever.
The entire digital landscape was about to be reshaped.
Who was standing against that change, and why?
Is this a simple case of industry ghouls resisting progress,
or are there cogent arguments against programming languages?
Really, are there even cogent arguments against the adoption of
digital programmable computers? This is a gap in the story that I've known about and
been bugged about for a while. What finally pushed me over the edge was this particular
plugboard paper. This is, perhaps, the ultimate argument against programming in general.
The paper offers that instead of using stored
programmed computers, read that as actual computers with, you know, code and everything,
why not just use plugboards? Instead of programming, just configure a machine using patch cables.
Believe me, we will be getting into the details of this one.
I want to see just how cogent the arguments against programming are.
Do these regressive types have a leg to stand on? Can we learn anything from their arguments?
Are there fundamental flaws in programming that we just overlook today, or are these
counter-revolutionaries deservedly swept under the rug? Alright, we're starting off with why not try a plug board? Because
this is just such a wild paper. At least, it looks that way on the surface. I want to try
and dig a little deeper and find out just what's going on here. It's going to be really easy to
just kind of look at these papers today and just dunk on them because, you know, of course they're
wrong. Fortran's the best.
Compilers are great, but I don't think that's productive. So in general, I'm going to be trying to give the authors the benefit of the doubt. I want to see what's good in their arguments,
talk about what's bad, but not dwell on like, oh, of course they're wrong. So we're going to start
with the basics here on why not try a plug board, which brings us to the plug
board. Now, this is a weird technology from the very earliest mists of the digital era. These are
big grids of holes that are each machined to accommodate a patch cable. These plugs and their
receptacles are actually of some size.
Each plug on a patch cable is about a quarter inch across.
As such, the boards can get pretty large.
Plug boards are also held in heavy metal frames so that once a board is patched up, it can be removed from the machine and stored for later use.
This technology was first introduced in the earliest years of the 20th century, before IBM was IBM, before CTR was even formed.
Herman Hollerith was selling punch card tabulators with integrated plugboards.
These tabulators used plugboards as a way to control their configuration.
Internal signals were routed out through one set of holes, while another set of holes was left waiting for incoming pulses. The plugboard allowed operators to control how those
outputs made their way to inputs. By switching around cables, an operator could, in very short
order, reroute the flow of data through a tabulator. Now, the key word here is flow. You could, perhaps, look at a
plugboard as a type of program, but that's stretching the edges of a very technical definition.
Plugboards were most commonly used in punch card tabulators. In that capacity, they really did
only manage how data flowed through the machines.
To be fair, this flow was very complex.
As punch cards were read into the tabulator, a series of pulses were generated.
Those electrical impulses, in serial of course, flowed into a series of distributors and outputs.
Cables were then plugged from those outputs into wherever those numbers needed to go.
In the most trivial case, it could be wired up to a bus that controlled punching a new card.
You could work a plug board such that it simply copied over data. You could just as easily wire
up a comparison, work up a board that would only copy over numbers greater than 5, say.
Tabulators also contained forms of memory,
usually in the form of multi-digit registers. You could configure a board to sum up fields on a card,
then write that field to a new card once all data was crunched. That all sounds pretty close to a computer. It's trivial to write a program that copies data. You can work out a
program that makes simple comparisons, even one that runs sums, believe it or not. But believe
me here, there's actually a vast swath of differences between plugboard machines and
programmable ones. Let us first consider encoding, perhaps the most boring part of the equation, but one of my personal favorites.
Punch card encoding was meant to be flexible. That meant that in some cases, you'd be working
with very standardized numerical data. In other cases, however, you might be going hole by hole.
As a card is read in, signals are produced for each individual punch. That encoding isn't
necessarily binary or
anything that a computer could love. It's just kind of its own thing.
Time also means something vastly different in the plugboard world. I think we're all pretty
familiar with the idea of timing in a computer, at least somewhat. Somewhere deep down, a digital
computer executes a program one step at a time, one clock tick
after another.
It runs line 1, then line 2, then line 3, and so on.
This is all synchronized by some beating heart deep inside the silicon of the machine.
This is called synchronous operation, and it's just how computers want to work.
There are ways around this,
especially with more advanced machines, but any machine that was built back in the punch card era
ran synchronously, one step after another, one operation at a time. If you told one of these
computers to multiply two numbers, it would run that instruction until it was done, and only then would it carry out the
next instruction. There are some secret, deeper levels to this whole digital timing, but for the
most part, we're dealing with one simple step after another. In punch card world, well, things
are very different. Timing is based around the read-write cycle of a punch card. The actual
data flowing into a tabulator's board is serial. You get a series of pulses for each column of a
card. A location with a hole would give you an actual positive line value, while a lack of a
hole would give you a baseline voltage. That scan comes one at a time, in a serial fashion. This means
that part of a tabulation cycle is dedicated to just the first position on a row, then the second
position, and so on. Each of these time slices is unique in doing something special. Once a row is
swept, the tabulator will move on to other phases of its operation.
These phases vary by the type of tabulator in question.
Common phases could include punching to an output card, moving data to storage, or any
other big operation the machine was configured for.
The differences here are profound.
It's not just that a tabulator operated on serial data, but that everything
was staged and timed in its own way. Each of these time slices is special, whereas on a computer,
each time slice is just another tick of the clock. But not everything was tied to timing
in punch card land. Not every operation was part of the row-by-row sweep.
In practice, smaller operations could be carried out totally independently from the larger
cycle.
This was one of the main benefits to the tabulator.
To explain this, I think it's time to turn to, why not try a plug board?
This paper is published in the very early 1950s.
The date puts it before the advent of Fortran.
This was the era where stored program computers were just becoming practical.
In other words, the era when programming left the lab.
It's also a bit of a transitionary period in more ways than one.
Early machines had to be programmed in machine code.
Assembly languages or other simple mnemonic codes were starting to come into fashion as
ways to help programmers work with these finicky machines.
But another type of language was just entering the world.
Very primitive attempts at high-level languages, or compiled languages, were hitting
the scene in this era. It was so early, in fact, that there wasn't a common name for these systems.
I think the most often used one was automatic coding, or automated code, but there wasn't some
grand agreement on terms. In the early 50s, we had a smattering of these proto-languages,
but there was no leader of the pack. Development was moving quickly, and programmers were looking
for ways to make programming better. Speed, ease, and cost were the primary factors here.
Languages were being developed to either speed up programming, make it easier to code,
or save corporate funds. Sometimes it was
a combination of all these factors. And this is where Rex Rice, the mad lad himself, drops
Why Not Try a Plugboard? It's regressive, it's brash, and it's more than a little foolhardy.
The basic premise of the paper is that instead of diving
headfirst into the rapidly developing field of programming, users should instead look back to the
halcyon days of punch card tabulators. Don't write code or store a program in memory,
but wire up a machine by hand. Now, I know, that sounds wild. That's exactly why I love this paper.
So, what are the specifics here? What is Rice proposing, exactly? The paper lays out how a
plugboard can do basically anything a stored program computer can. In Rice's realm, the only
program you need is configured on a plugboard.
Data for that program is fed in through punch cards or tape or whatever.
His dream system has memory, but no code is stored there.
Memory is for data only.
Code has to live on a plugboard.
Right away, this is pretty concerning.
Dare I say it, counter-revolutionary.
Rice proposes a full separation of code and data. To the layperson, or even the novice practitioner
of the digital arts, this may seem innocuous, just kind of a quirky system. But let me assure you,
this is one of the classic blunders, second only to going up
against a system administrator when hardware access is on the line. One of the greatest
tricks computers can pull is treating data and code interchangeably. To a competent machine,
there is no difference between the two. Code can be manipulated as data, and data can be read as if it were code.
This enables you to write a program that produces another program, or code that changes code.
We call these types of programs compilers. The sleight of hand, the generic approach to
information, is exactly what enabled the rise of programming languages in the first place.
This is a core feature of modern computing that was set in stone during this period.
So, the separation of code and data isn't just a little quirk of a weird platform.
It's a very fundamental and very impactful choice.
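To make that code-as-data idea concrete, here's a minimal sketch in C: a program whose output is another program, which is the same trick a compiler pulls, just at toy scale. The filename and the generated snippet are made up for illustration.

    #include <stdio.h>

    /* Code treated as data: this program's output is itself a program.
       A compiler is this same sleight of hand, scaled up enormously. */
    int main(void) {
        FILE *out = fopen("generated.c", "w");
        if (out == NULL) {
            return 1;
        }

        int answer = 6 * 7;  /* work done now, baked into the generated code */
        fprintf(out, "#include <stdio.h>\n");
        fprintf(out, "int main(void) { printf(\"%d\\n\"); return 0; }\n", answer);

        fclose(out);
        return 0;
    }

The text this program writes out is just data to it, and code to whatever compiles it next. That generic treatment of information is the whole point.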
Of course, Rex didn't have much of a choice here.
His code, if you call it that, is all physical.
It's wired into a physical board full of holes. The only way you could treat that as data, as a living thing, would be if your computer had little robotic arms. Which, personally,
I think that would be great. I'd love to see that, but that's not practical.
This is one of the reasons that I'm saying a plugged-up program is regressive and limiting. That's a huge downside on its own,
but there is a strange upside that we gain from this board approach. On a computer,
every operation happens one at a time, one after another. A computer can only do one thing at a time.
On a plug-based system, operations can actually run in parallel.
Breathe that in before you take a pinch of salt with it. In theory, this is a huge benefit.
This would make a plugboard computer much more powerful than a contemporary stored program machine.
It would make it a lot faster.
But what exactly do we mean by operations?
And what do we mean by parallel?
Operations here are pretty close to what we'd call operations on a normal programmable computer.
We're talking comparing two numbers, incrementing a number, storing a
number in a register, basic math operations, that sort of thing. The difference is that a
conventional machine, a normal programmable digital electronic computer, often uses a single
centralized circuit for running these operations. Math is the easy example here. A computer will have what's called an
arithmetic logic unit that does all mathematics. When an operation asks for some division,
the computer shuffles around data, runs the division inside the ALU, then shuffles the
results back out. On a plugboard machine, you can have multiple math circuits.
Each runs independently from one another.
They're kind of just spinning away waiting for inputs and waiting to give you outputs.
You can have multiple channels for shuffling around data.
Rice specifically describes having 10 separate data channels.
This means that up to 10 digits could be moved or operated on at once.
In general, steps could be done in parallel. You could easily wire a plugboard machine to take the first number off a card, split it out to two channels, get the sine of the number on one
channel, and then calculate the cosine on another, storing the two results in memory. There would,
of course, be timing and bus constraints to keep in mind, but you end up with more operational
flexibility than a programmed computer. You're running two math operations at once, instead of
running one after another. Okay, on its own, that's kind of cool. That's a neat trick that contemporary computers couldn't do with code.
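For contrast, here's what that same job looks like as stored-program code, in a minimal C sketch with a hard-coded value standing in for the number read off a card. The point is just the ordering: one operation finishes before the next begins.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double x = 1.25;    /* stand-in for the first number off a card */

        /* A stored-program machine runs these strictly in sequence: */
        double s = sin(x);  /* the math circuitry is busy with the sine first... */
        double c = cos(x);  /* ...and only afterward gets around to the cosine   */

        printf("sin = %f, cos = %f\n", s, c);
        return 0;
    }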
But that's just about the only gain you actually get here.
The rest of the paper is focused on how, well, you know,
plugboards can do everything a programmed computer can do.
I'm not going to run down the list since that would be a little mind-numbing at best,
but I do want to highlight one of these features that kind of killed me to read. So,
y'all know what a pointer is? If you listen to my comp sci ramblings enough, then I'm sure you've
worked it out by now. A pointer is just a variable that holds a memory address. You then use it to reference, or point,
to that location in memory. This is a very fundamental technique used on all kinds of
low-level code. One pretty cool example is looping over a table of data. Let's say you have a table
where each entry is a byte and the table is stored at location 100 in memory.
Most computers provide so-called pointer registers, a special register that the computer
and the user agree to use as a pointer. To kick things off, you would set that register to 100.
From there, you can tell the computer to do whatever you want to that location in memory,
to the first entry in the table.
Say, calculate the sine of whatever that register is pointing to.
Then, to go to the next entry, all you have to do is increment that pointer register and
start from the top.
This lets you make these really efficient forms of loops.
Better still, since the pointer is just a register, you can reuse the code. If you
have to move your table to, say, address 534, you can just change the register's initial value.
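Here's a rough sketch of that pointer-walking pattern in C, since C pointers map onto the idea pretty directly. The table contents and the sine operation are just placeholders; the point is the set-the-pointer, use-it, increment-it loop.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* A small table of values, standing in for the table "at location 100". */
        double table[5] = {0.0, 0.5, 1.0, 1.5, 2.0};

        double *ptr = table;  /* set the "pointer register" to the start of the table */

        for (int i = 0; i < 5; i++) {
            /* operate on whatever the pointer currently points to... */
            printf("sin(%.1f) = %f\n", *ptr, sin(*ptr));
            ptr++;  /* ...then bump the pointer to the next entry and loop */
        }
        return 0;
    }

Moving the table somewhere else only changes the line that initializes the pointer, which is exactly the reuse argument above.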
Pointers give you a lot of flexibility, which is an inherent good thing. Flexibility is always good
when you're working with a computer. I had always just assumed that this feature was only seen in
big, serious, very real stored program computers. You know, since it's a programming trick.
But, dear listener, I was wrong. And the thing is, I don't really feel bad about making that faulty assumption. I was wrong because the exception is almost comical.
To quote,
Access to the balance of the main high-speed storage on a given program step
is controlled by the number in any one of the storage address control registers.
As shown in the block diagram,
the address registers are contained in a separate
little computer, yet may be entered directly from the main computer channel. End quote.
So, here's the thing. Rex's magical machine has internal registers, just like many old-school
plugboard machines did. On the first reading, I just assumed that he meant you could use one of
those registers as a pointer. That's weird, what with the whole separate data stuff, but not too
weird. I get why that would be useful and how you'd even use that. But on closer inspection,
I guess that's not entirely the case. Rice provides a nice diagram of his machine, which has separate sections for
address registers. This also brings up some concerning questions about what he considers
a computer. The box for the address registers, which he says is a separate little computer,
looks identical to a lot of the other parts of the diagram, but I digress. That's a separate concern.
The rest of the paper follows this same pattern. Rex explains a simple feature of a programmed
computer, says that, oh, a plug board can be just as good, and then drops some saucy implementation
details. The sauce, in this case, is pretty weak and pretty concerning.
The idea of a plugboard machine is interesting because it does solve at least one of the issues
that early programmable computers faced, but it's only a very partial solution. While you may be
able to do a few things faster, you lose a wild amount of power and flexibility.
Rex Rice's solution actually becomes a dead end.
That's a nice warm-up, so I think it's time to tackle one of the bigger questions.
Every time you read about the history of Fortran, you'll inevitably reach this sentence that says something along the lines of
John Backus, the creator of Fortran, knew programmers would resist this new language.
The detail often added is that programmers at the time were concerned about the performance
of a high-level language. But you'd be hard-pressed to find a citation anywhere near
those sentences. So what exactly is this all about? The abstract argument here is
simple. At the time, most programmers were working in machine code or maybe an assembly language.
Either way, a programmer was communicating with the machine one instruction at a time. They knew
exactly what their code did at the lowest level, and exactly how the computer would respond to an
instruction. That allowed for very tight code, so to speak. Code could be optimized to work just the
right way with the computer it was running on. A savvy programmer could iron out any kinks or
inefficiencies until a nearly perfect program remained. This was all because the programmer was down on the same level
as the machine, with full control over their code. At least, that's the generic line of thinking here.
Fortran was one of the first attempts to get programmers away from this level, to raise them
above the computer. There were other languages and other early compilers, but Fortran was the big one.
A line of code in Fortran might be translated into hundreds of lines of machine code.
The same is true for any high-level language, be that Fortran, Lisp, or C. This technically
takes away some of the programmer's control, some of your agency. Supposedly, this is a bad thing, or it was viewed as a bad thing.
But where are the complaints? Where's the paper trail? Believe it or not, I actually had a hard
time finding contemporary complaints. The internet and even print journals are flooded with critiques
of Fortran from much more recent years. But if you go back to the 1950s,
you don't get a lot of critiques of the language. So I guess the best place to start would be with
the self-critique found in John Backus' own history of FORTRAN from, of course, the ACM's
History of Programming Languages Symposium. This presents a very subtle and, I think,
interesting argument. One of the huge achievements of Fortran was its optimization.
The compiler was designed, first and foremost, to create fast machine code. Once Fortran code
was transitioned down to ones and zeros, it was able to run toe-to-toe with handwritten binary.
This optimization came from a place of profound fear. To quote Backus,
It was our belief that if Fortran, during its first months, were to translate any reasonable
scientific program into an object program only half as fast as its hand-coded counterpart,
then acceptance of our system
would be in serious danger. On the surface, that all makes sense. Backus and company were afraid
that if Fortran compiled to slow code, no one would adopt the language, despite its many benefits.
So then we should be looking at older languages to try and find more arguments against code.
But, as Backus explains, Fortran was actually in a very unique position as far as optimization goes.
For this to make sense, we have to go back a little further.
We need to go back to the earliest days of programming languages.
The first compilers, and thus the first languages, were made to help programmers write less code. That should be pretty common knowledge to anyone who even deigns to listen to my
mumbling on the subject. Less code means less time working and fewer bugs. These early languages let
you type in a single line of code that would be translated into a pile of machine instructions.
But, you may ask, what exactly were those instructions doing?
In short, they were doing math.
Back in the day, that was really the only purpose of a computer.
Early machines were, in large part, and as much as I hate to say it, kind of just glorified
calculators. It makes good sense that early programming languages would be automating
mathematics. Grace Hopper's first compiler, in fact, was developed specifically for doing math.
As she explained in a 1980 oral history, these early languages grew out of libraries of commonly used
code. Most of that code was for repetitive mathematical tasks. Let me get more specific
here, because this is really the core of the argument that I think is really interesting.
According to Backus, these early languages automated away the repetitive code used for floating point
mathematics. Languages like A0, Grace Hopper's first pass at a compiler, were designed to make
working with decimal numbers easier. This was necessary because computers of the era were pure
integer machines. They only operated on whole numbers.
UNIVAC, Grace Hopper's weapon of choice in the period,
could only perform math on whole numbers with no decimal points.
In general, decimal points just were not accounted for.
This gets us to something that I just haven't considered up to this point.
Today, programming languages fill a very wide role,
but in general, a language provides abstraction on top of a computer. It makes programming easier
and better for us poor fleshy folk. Languages come in a pile of different types, each approaching
the problem of abstraction and ease of use in different ways. But that's all post-Fortran.
Prior to Fortran, most languages were simply tools for turning mathematical expressions into code.
We actually see this reach its zenith just prior to Fortran's creation. There was a whole spate
of languages that could turn algebraic expressions into code, even handling decimal numbers. But with Fortran,
something had changed. And that change was the IBM 704. The platform Fortran was developed for,
and the first commercial machine to natively support floating point numbers. This, in certain
cases, made early automatic programming systems obsolete. At least,
it made them unwieldy. Backus explains in The History of Fortran that programmers of the era
just had to put up with poorly performing languages. Something like A0 would output
working code. It would handle all the floating point mumbo-jumbo for you, but that code
would be slow. To quote,
Experience with slow, automatic programming systems, plus their own experience with the
problem of organizing loops and address modification, had convinced programmers that efficient programming
was something that could not be automated. Another reason that automatic programming was not taken seriously by
the computing community came from the energetic public relations efforts of some visionaries to
spread the word that their automatic programming system had almost human abilities to understand
the language and needs of the user, whereas closer inspection of these same systems would often reveal a complex, exception-ridden
performer of clerical tasks, which was both difficult to use and inefficient. Existing
systems had issues, that was clear. These would only get worse with the IBM 704. That might sound
paradoxical, but try and put yourself in the shoes of a programmer in 1954.
With a computer that supports floating point, earlier automatic programming systems lose
basically their big reason to exist. Now they're just translating mathematical equations into code,
but each math operation is basically just one machine instruction.
You could just write that by hand in machine code without having to learn a new language.
Programmers were already trained to do that.
With the 704, this option would have looked pretty attractive.
A floating point operation now is just one instruction.
Thus, Fortran had to be fast. It had to use floating point circuitry very efficiently. It had to produce reasonable machine code. It had to blow people away. It had
to knock your socks off. This puts us in a weird position where some of Fortran's critics were
actually inside IBM. Backus and his colleagues were trying to get ahead of possible
criticisms that would come from the outside. So, let's step forward a few years. What arguments
do appear from outside Big Blue? Well, this is where things get tricky. It's actually hard to
find contemporary complaints about Fortran. The reason for this is the usual reason why I'm missing sources.
Not every little detail in conversation is recorded at the moment. Imagine you're working
as a desk jockey in 1957. That's when Fortran reached general availability. It's become company
policy to use this new language, possibly after some conversation with some men in very clean blue suits. So there you
are, trying to adapt to this formula translator. You quickly learn to hate the thing. It's not as
fast as you would like, the language itself is hard to understand, and you'd rather just write
assembly. Now, I ask you, how would you vent that frustration? My first go-to as a desk jockey and industry ghoul myself would be complaining.
I'd complain to my co-workers.
I'd complain to my manager.
I'd even complain to the CEO if he got too close to my desk.
I'd whine to my family, friends, and even a dog down the street.
But ultimately, I'd just learn Fortran and move on
with my life. Don't get me wrong, I'd still complain, but it would be a very localized phenomenon.
It's unlikely I'd take any major action. At worst, I might try to form some solidarity
between coworkers and demand the company stop with this whole formula translator garbage.
Notably,
none of that would leave a paper trail. At most, this would result in some internal memos.
That's not really the kind of stuff that routinely gets preserved. More than likely,
it's the kind of thing that would be balled up and thrown back at my desk.
That's not going to get published in a journal or a newspaper. It takes a truly legendary gripe to make it into print, and luckily, some complaints about Fortran have
reached that level. That said, there aren't a lot of these complaints in print, and it also
introduces a selection bias that we need to be aware of. For this section, I've been pursuing journal articles,
which are usually a good source for the latest and greatest tech of a given era.
In this, we can find some critiques of Fortran. It's important for us to consider the nature of
these complaints and complainers. To write an article, or even a communication or letter,
you have to be pretty angry. More than that,
you have to think that your viewpoint is important enough to be published.
You also need some route to publish. A low-level desk jockey over at Widget Co. isn't going to be
getting into any journals with their gripes about Fortran, as valid and very powerful as those complaints may be. To illustrate this point, behold Comments
on Fortran by Dr. Dr. John Blatt. And that is right, we're dealing with a man with two PhDs.
He is quite literally a nuclear physicist who wrote textbooks on nuclear physics. This is about as close to the publishing levers of power
as you can get. This article, published in Communications of ACM, represents the cogent
arguments against Fortran in the period. It also represents one of Backus' greatest fears.
You see, Dr. Dr. Blatt was trying to write a serious scientific program in Fortran.
Sometime in 1960, Blatt was trying to work out something or other having to do with
nuclear binding energies of some kind of isotope of... hydrogen, I think?
He assures us that it's a very big, serious problem.
But Blatt didn't have much time for actually running the numbers.
A colleague recommended that Blatt look into this new Fortran thing as a way to speed up
the process.
To quote,
Indeed, a quick reading of the Fortran manual and some comments from Fortran users indicated
that Fortran would save both time and effort and would be a generally satisfactory
scheme to use for this problem. Actual experience with Fortran coding on this problem, however,
has converted Paul into Saul. If a similar problem should come up again, the author would be very
reluctant indeed to use Fortran. End quote. This, dear listener, is the good stuff. Blatt tries Fortran, and he hates it. His opinions
on the language are so strong that he writes to the ACM about it. How's that for a paper trail?
Blatt presents a very reasonable stance here. The main complaint is that Fortran as a language is only really useful for beginners.
It's not a language that so-called advanced coders like Blatt should be using.
But he's not arguing for a return to the halcyon days of binary code.
Rather, Blatt lays out what's wrong with Fortran and what a serious language needs to have in the future.
Blatt is using a critique of Fortran as a jumping-off point to argue for better languages.
At least, that is how it appears on the surface.
First of all, some caveats.
This isn't a normal kind of peer-reviewed journal article.
It's a communication, which means it's something like a letter to the
editor. As such, it's a little bit unpolished. It's a little raw. Blatt's arguments are sometimes
contradictory and not always 100% coherent. I'm not going to really get into nitpicking that,
since that's not fun and this is very clearly a pretty raw paper. Rather, I want to look at the
interesting parts of some of his arguments. That should give us, I think, a better picture of
contemporary problems with Fortran. The first complaint is that Fortran's documentation sucks.
So right off the bat, Blatt is a man after my own heart. Complaining about poor documentation is a very time-honored tradition amongst programmers.
So this one is a simple and reasonable complaint, right?
Well, not really. Not exactly.
I like to think of this era of computing as Bizarro World,
because, despite seeming pretty modern, things are still in a
weird state of flux. Blatt says the documentation here is, well, fine for a novice programmer.
An acolyte can't go wrong. But for a professional, it's missing a lot of detail.
Fortran's manual explains the language logically. As in, it shows you a function or
operation, then says what it does. It explains constraints of the language, how to do certain
things in the language, you know, very basic manual stuff. What it doesn't do is go deeper
down in the stack. It doesn't explain what the computer is doing during a certain
Fortran operation. It doesn't show the machine code that an operation produces. That, well,
that's not a very standard complaint. That's a very period-specific thing. It's also something
I don't think I've ever seen in a manual. The whole point of a language manual is to show
how to use the language, not the inner workings of the compiler. This shows how existing programmers,
real, big deal programmers, were still trying to cling on to older ways. As for a longer
explanation as to why Blatt wanted machine code translations in manuals,
well, that bleeds into another argument. You see, Fortran took a long time to compile.
Once again, this follows the same novice pattern. For a neophyte, slow compilation time is fine,
But for a real programmer, what Blatt calls a type B user, for some reason I don't understand, compilation needs to be instantaneous. It should take no time
to turn source code into machine code. There is an interesting argument here besides, oh,
Fortran is just too slow to compile. Blatt argues that a novice programmer, a Type A user as he calls them, will straight up use a compiler in a different way.
A novice will have smaller projects, more simple code, and probably only compile that code a few times.
As in, they won't make as many changes to their code as a type B user, so a slow compiler won't affect
them as much, whereas an experienced programmer will, more often than not, be working on larger
programs. They will take forever to compile, and they'll need many more adjustments and
compilations. So, cumulatively over time, a type B user is going to be spending a larger percentage of their time waiting for the compiler to run.
Here's where Blatt drops a bomb.
And by a bomb, I mean the kind of detail that isn't normally preserved.
You see, the Fortran compiler was notoriously slow.
This was especially true early on.
It could produce fast code, but it took a
while to do that translation. Apparently, Blatt watched some real, hardened Type B programmers
working around the speed issue. To quote,
During the author's stay in New York, he noticed the frequency with which type B users were making hand-punched corrections on the object program merely to avoid recompiling. This procedure is the
reductio ad absurdum of the whole philosophy of compiling routines. Instead of having simplified
things for the user, the compiling routine actually forces the user to learn the basic
machine language, study in detail the object program produced by the compiler, and correct the object program by hand. End quote. I would have loved to have an actual account from these hand-punching maniacs,
but I'll have to settle for second-hand news. This is, to put it bluntly, wild.
To do this trick, a programmer needs to know how Fortran likes to write machine code.
That's a level of understanding that, frankly, kind of scares me. I've encountered this kind
of stuff very rarely in modern programming. I know folk who understand how C calls functions, for instance,
and can do tricks with that, but I think very few people could do this kind of
by-hand binary patching today. But here's where we hit one of those weird quirks to Blatt's
argument, and something that I think is revealing. For programmers to hand-punch code compiled by Fortran, they need to know how
each operation is translated into machine code. They probably did this by trial and error, or
just by being more competent at machine code than I am. This process would have been much easier if
they had access to one of those wild manuals that Blatt wants. So yeah, Blatt is
arguing against using machine code, but at the same time, there's this really regressive streak
to his arguments. We don't have a lot of sources like Blatt's communication, but I think this is
enough to see the edges of something larger. Folk had issues with Fortran.
Specifically, experienced programmers had concerns.
Those concerns, at least the ones that Blatt outlines, had some merit.
But when you get down to it, I think a large part of those concerns
were that programmers didn't want to give up their control over the machine.
There were good reasons to stay down
at the lower level, but eventually those reasons would start to go away. Programmers would adopt
Fortran. Blatt, personally, would even become a convert. In 1968, he published a book called
Introduction to Fortran 4 Programming. That was followed by another text on Fortran in 69. Those arguments
didn't last forever. They evolved as the field changed. Maybe it was just a case of first
impressions wearing off, or of Fortran itself getting better. That's cool and all, but I want
to get a little more unhinged. We've seen an argument against storing programs in memory,
and an argument against using Fortran.
But what about an argument against using digital computers at all?
Once again, we're on a tour of wild paper, so I bring you the next wild paper.
Here's the opener.
Quote,
Analog computation is, for many problems, more convenient than digital computation,
but lacks the precision obtainable from a digital computer solution.
End quote.
That's, uh, that's weird and concerning.
The paper it comes from makes this sentiment all the more, well, interesting.
It's a little number titled A Compiler with an Analog-Oriented Input Language, which went to print in 1959. Now,
quick skim summary because this isn't the actual paper we need to dwell on. The 59 piece explains
this weird compiler that allows users to program a digital machine as though it were
analog. It's mind-twisting, it's weird, and it's scary. But there's a limit to the madness.
The paper assures us that the only purpose of this analog-to-digital compiler is to create
a digital program from an analog description. The final result is still a normal machine code program that takes
advantage of, you know, running on a digital stored program computer. Mind-bending, but not
truly evil. You see, this paper has a wicked chain of citations in it. This is where my mind truly
started to reel. The citation that I want to focus on, the weird trail
this sent me down, is for a paper titled Methods of Simulating a Differential Analyzer on a Digital
Computer, and it's from 1957. This has some very interesting implications. First is that there was, apparently,
this whole thread of programmers that wanted to go back to the old analog days. It must have been
a pretty large group too, since that 1957 paper also has a large section of citations.
Nerds were stuck in their analog ways, I guess. This, to me, hints at a spooky,
hidden history of counter-revolutionaries, but I digress. There is a central argument,
a base assumption shared by these papers. In short, they claim that programming digital machines
is too difficult and time-consuming. Analog computers, however, are much easier to use.
But there is a trade-off. While analog machines are obviously easier to set up, they aren't as
accurate or reliable as digital computers. The best solution, according to these authors,
is to create a language that allows users to program a digital machine like earlier analog calculators. Thus,
you get the best of both worlds. But, and here's the big caveat to the argument,
this is only true when solving differential equations. At least, this handful of papers
claims that to be the case. If this doesn't strike you as strange, then let me lay it out for
you. We should probably start on the math part and work our way up to, you know, spooky languages.
Now, differential equations are just a type of calculus equation. They include derivatives,
expressions of change in one variable over change in another. This is most
often used to express equations that change over time. In other words, we're talking physics.
Differential equations are the bread and butter of all types of physics. Since these are equations,
math nerds often need to solve them. It kind of just comes with the territory. To solve a derivative,
you have to use an integral. This is just another type of operation used in calculus.
Derivatives are often described as the slope on a curve, while an integral is the area under that
same curve. Integrals, specifically, are notoriously hard to calculate.
They're a continuous type of operation, meaning that they don't work by discrete steps.
They're just taking the full area under a curve.
There's no little chunks about it.
Maybe you can see where this is going.
Digital computers are discrete machines by their very nature. They break everything down into little steps that are operated on by the beat of a silicon heart. Historically, that has meant that digital
computers haven't been good at integration. To calculate the area under a curve, a program has
to break that up into slices, then calculate the area of each slice. Since the curve is continuous,
the only way to get a perfect answer is actually to
take an infinite number of tiny slices. But maybe it doesn't come as a shock, computers don't really
do infinity. That's not a very digital concept. Early computers didn't do numbers anywhere close
to infinity even, so integrals had to be approximated,
and in many cases, pretty roughly. By comparison, analog machines can run integrals really until
the cows come home. An analog computer isn't discrete, it's continuous. Under the analog
regime, you never break things down into little slices or discrete steps or anything like that.
You just have a continuous series of values. Numbers aren't represented as discrete steps,
but as continuous curves. On an analog computer, a number might be encoded as the rotation of a
disk or the voltage of a wire. That system just works better when it comes to calculus,
when it comes to continuous functions.
This sets up a bit of an issue.
Once digital computers hit the scene, they can do integrals,
but accuracy and speed isn't very good.
The only way to get over that hurdle is to resort to some pretty sophisticated mathematical methods. In smaller words, you gotta write some slick code to do slick math.
On an old analog machine, each operation was packed into a discrete unit. So to run an integral,
a user simply needed to wire up some inputs to the integrator and then feed the outputs of that integrator somewhere else.
On a digital computer, by contrast, you just have to use all kinds of arcane invocations.
You can't just wire up an integral, you have to dive into a totally new field of mathematics.
So, there is something to this central argument. Yes, early digital computers
sucked at integration, they were hard to set up for integrals, which meant that they weren't good
at differential equations. It would take a lot of work to overcome that hump, take a lot of work to
get everything set up and working. Or, as the 57 paper puts it, quote,
One of the problems facing large computer installations today is the increasingly large amount of time that passes between the statement of a problem and the point where the first production run occurs. Many methods have been tried for simplifying the presentation.
Compilers, formula translators, and interpretive routines have been written and tested on a wide
range of problems, and some of them are quite effective. End quote. Those many methods by 1957
would have included the first class of programming languages. This would have included Fortran.
So, in other words, we have a critique of both programming languages and digital computers.
Truly a monument to behold. The solution to this problem, or the proposed solution,
is a program called DEPI, the Differential Equation Pseudocode Interpreter.
DEPI is a language, at least technically speaking, but it's a weird kind of language.
To start with, it's an interpreted language. This puts it into a different category than
what we've been discussing. Fortran is a compiled language. That's the whole
turn-it-into-machine-code thing that it has going on. An interpreted language, by contrast,
never spits out any machine code. Instead, it reads in source code and then tells the computer
what to do about it. This approach can be slow because the interpreter is sitting there running
through everything.
You get this overhead just kind of tacked onto everything you do. The upside is that compile times aren't an issue since you don't really compile anything, and interpreters are usually
easier to develop than compilers. That already puts DEPI off on a weird foot. This is helped along by the fact that DEPI is only meant to work for differential equations.
Now, technically, I think it could do more.
I think the language, if we call it that, is generic enough to handle other types of mathematics.
It's just that DEPI was developed and designed for solving differentials first and foremost.
This is all accomplished by, well, some pretty sick and twisted code. This is also why I don't really like calling DEPI a language, but I think that's the closest fit. DEPI simulates a number
of independent circuits. That's in line with how analog computers worked back in the period.
You have a pile of these circuits for doing things like integration, addition, flow control,
that sort of stuff. The code tells DEPI how those circuits are wired together. In other words,
DEPI is very close to simulating a plug board. Now, I will admit, that almost sounds like a joke,
but it's not that far from the truth. This is a connection, so to speak, that I've never thought
of before. Electronic analog computers are constructed in a very similar fashion to
digital-based punch card tabulators. You have independent circuits that are wired together as the user
sees fit. Operations can occur in parallel depending on how those circuits are connected.
The key difference is the rift between digital and analog operations. Now, I know, I know this
is an obvious observation that I should have never missed, but hey, in this context, it's just kind of funny to run into
another wired-up argument. Anyway, DEPI is able to read in a wiring diagram. You essentially write
up which outputs connect to which inputs, which DEPI dutifully simulates. Then you pass in some
starting parameters and just let the thing run. The purported point of DEPI is to make it easier to program a digital computer.
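To make that a bit more concrete, here's a toy sketch in C of the general idea: a hypothetical "wiring" where an integrator block's input is patched to the negative of its own output, which is how you'd set up dy/dt = -y on a differential analyzer. This is my own illustration of the concept, not DEPI's notation, its interpreter, or anything taken from the paper.

    #include <stdio.h>

    int main(void) {
        /* One simulated block: an integrator with initial condition y(0) = 1. */
        double y  = 1.0;
        double dt = 0.001;  /* step size standing in for continuous time */

        for (int step = 0; step <= 5000; step++) {
            if (step % 1000 == 0) {
                printf("t = %.1f  y = %.4f\n", step * dt, y);
            }
            double wire = -y;  /* the "patch cable": output fed back, negated */
            y += wire * dt;    /* the integrator accumulates its input        */
        }
        /* y tracks e^(-t), so by t = 5 it has decayed to roughly 0.007 */
        return 0;
    }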
The paper claims that, quote,
programming for DEPI is done directly from a flow diagram.
And this introduces something also very interesting to consider.
Flow charts, or flow diagrams as this paper calls them for some reason, are something
unique to old-school programming. You will still see flowcharts around today, but they really had
their heyday back before widespread adoption of compilers. These were the first programming tools,
back when programmers were pit against hot vacuum tubes and clacking punch card readers.
A flowchart, in essence, is a way to describe an algorithm.
It shows which steps you need to take, how control of the program moves from step to step,
and at which point decisions are made.
It was common practice back in the day to work up a flowchart before creating a program,
and then use that flowchart to direct your code.
This could mean consulting the chart or even just copying the chart directly into machine code.
It all depended on how detailed your flowchart was. And to be 100% clear, you can still use a
flowchart today, some people do, but we have a lot of different tools on hand to aid in programming.
Back in the early days, a flowchart was about as sophisticated as you could get.
So when DEPI is saying that you could code directly from a flowchart, well, that sounds
like a good deal.
That makes it sound like DEPI would have been really easy to use.
Context, however, complicates this. The configuration
of electronic analog computers can actually be fully expressed as a flowchart. This is because,
when you get down to it, that's all an electronic analog system is. It's just wires connecting little boxes. So a flowchart here is just a drawing of the machine.
To a digital human, such as myself,
it's easy to get tricked here into thinking that DEPI is more sophisticated than other languages.
The opposite is actually true.
DEPI is implementing something so regressive and simple
that it's no more complex than a flowchart.
I think that's kind of funny, personally. It's a nice twist.
When you get down to it, DEPI is weird.
Even in context, it can seem silly.
But, well, there's a common theme here that I think we're starting to see.
Alright, I want to close with the truth that ties all these papers together.
These arguments aren't so much against programming, at least, not really.
They all represent stopgaps between older, crunchier technology and something new and flashy.
Plugboard programming is a way to get folk to move from punchcard machines to computers.
The critique of Fortran is actually a cry out for a better language,
one that can get machine code lovers into high-level programming.
DEPI is really just a means to pull analog nerds into the digital world. There are a pile of these transitionary papers and transitionary
technologies. You can find them in any era of computing. I think if we look at these types of
papers in this light, as arguments for transitions, not arguments against new technology, we can better
understand this time period. Why would someone want to program using a plugboard? Well, they
aren't used to programming in machine code, and good programming tools don't really exist yet.
Why would someone take machine code over Fortran? Simply because Fortran was never the be-all,
end-all of programming. It did some things right,
but it had room for improvement. And why would someone want to simulate an analog machine
on a digital computer? Because they knew how to use analog systems, but wanted the benefits
that digital brought to the table. In all cases, these seemingly regressive papers are actually
arguments for adoption of a new technology. So, what's the moral? What's the useful takeaway here?
For me, it's that we should be open to arguments against new technology. If someone is fighting
adoption of some new tool or arguing for some weird half-measure,
maybe listen for a few minutes.
I've found in my professional career that when someone complains, it's usually not
because they're trying to rip apart your new technology or trying to tear down anything.
It's because they actually care.
If you listen to enough complaints, you can usually find the underlying problem.
You can figure out why people are complaining and how you can do better. That's a route to
progress that's more constructive than just dismissing all critiques.
Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another
piece of computing's past. If you like the show, there are a few ways you can support it.
If you know someone else who'd be interested in the history of computing,
then please take a minute to share the show with them.
You can also rate and review the show on Apple Podcasts and Spotify.
If you want to be a super fan, you can support the show directly
through Advent of Computing merch or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show, and bonus content. You can find links to everything on
my website, adventofcomputing.com. If you have any comments or suggestions for a future episode,
then go ahead and shoot me a tweet. I'm at adventofcomp on Twitter, or you can just email me.
That's adventofcomputing at gmail.com, and I promise I will eventually respond. I'm a
little slow sometimes. And as always, have a great rest of your day.