Advent of Computing - Episode 83 - SEAC
Episode Date: May 29, 2022

The Standards Eastern Automatic Computer was built by the National Bureau of Standards in 1948. It started crunching numbers in 1950 and stayed in constant operation until... 1964! This early machine, festooned with vacuum tubes, lived well past the first transistorized computers. So what exactly is SEAC doing so far into the semiconductor future?

Selected Sources:
https://archive.org/details/circularofbureau551unse/page/n7/mode/2up - Circular 551
https://sci-hub.se/10.1109/85.238389 - EDVAC Draft Report
https://sci-hub.se/10.1145/1457720.1457763 - Imaging with SEAC
Transcript
You know me, I'm one of those boring people that cautions against the idea of hard-set
periods of history.
I will admit it can be nice and very simple to break everything up into big chunks and
just characterize each era.
The vacuum tube period of computing started in 1945 and ended in 1956 with the first
transistorized computer. This period was characterized by big, power-hungry, and simplistic machines.
The transistor period itself ended in 1971 with the first microprocessor.
This era was characterized by commercialization, miniaturization, and widespread adoption.
However nice this view is, it runs into issues really quickly.
I think that computer history and tech history in general are really good at refuting this idea.
Just because a new technology appears doesn't mean it's instantly adopted. It might be nice
if that were the case, but the world just doesn't work that way. The transistor is a great example here.
The transistor itself was invented in 1947.
Right there we have a technology that, given a little bit of development, could be strictly
better than the vacuum tube.
The first large-scale transistor-based computer, the TX-0, was built at MIT in 1956. It took just shy of a decade for the
transistor to actually show up in a computer. Even then, it takes time for the venerable vacuum tube
to die out. The last major vacuum tube computers are built in the early 60s, say 61 through 63 if we want to be generous. That makes up a nice 13 plus year
transitionary period. Now, I think I'd call that a pretty fuzzy boundary. There are even specific
examples of vacuum tube computers that defy expectations and really tend to ruin the idea of nice little square boxes that you can drop things into.
I'd like to present SEAC for your consideration.
Built at the tail end of the 1940s, SEAC did use vacuum tubes, but it also used semiconductors.
It also survived well into the transistor world, only being decommissioned in 1964.
If you ask me, that makes the idea of some hard boundary line seem a little laughable.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 83, SEAC.
We're traveling back to the early days of digital computers to try and answer some outstanding questions that I've had building up for a little bit.
SEAC, or the Standards Eastern Automatic Computer, was built in the final years of the 1940s.
It was dedicated in 1950.
It wasn't decommissioned until 1964. So this was one of the first handful of machines ever built,
and it was in continuous operation for a total of 14 years. For some fun context, BASIC, the programming language that many of us cut our
teeth on, was created in 1964. This might sound like a wild outlier, but here's the thing. It's
not. Many of these really early digital computers had shockingly long lifespans. ILLIAC was a vacuum tube computer built at the University
of Illinois in 1952. It wasn't decommissioned until 1963. EDSAC, another tube machine built
at Cambridge, had a nine-year lifespan. EDVAC, yet another tube-based beast, was in continuous use
for 11 years. It's easy to think that the era of vacuum tubes had some
cutoff as soon as a better technology arrived, but as we can see here, that's just not the case.
Some of these machines stuck around quite a while. IBM's first transistorized machine,
the 7090, hit the scene in 1959. There's this overlap where transistors are in use. They're even in use
commercially, but vacuum tubes are still crunching numbers in many labs. That's something that,
to me, is kind of wild and something that I need to do some digging to better understand. I mean,
what was SEAC even being used for over its 14 years of operation?
What was SEAC even being used for over its 14 years of operation?
There's also the matter of SEAC's design.
There's this set of machines in this period, ILLIAC, EDVAC, EDSAC, ORDVAC, and SEAC, plus a few more, that were all based off a design described in a leaked document called the EDVAC Report.
The provenance around this document is interesting in itself, and that's something I'll get to
here.
What I'm more interested in, and what I think is a better question, is what does "based off
of" actually mean?
These computers weren't all compatible.
They couldn't all share software.
So what did they actually pull from the EDVAC report?
Was it just some loose ideas, or was the design a copy?
Then there's the specifics of SEAC's origins.
It was built by and for the National Bureau of Standards.
Now, while the NBS might not seem that spooky, it's still a three-letter federal agency.
It's just that NBS is a little
more obscure than CIA or FBI. The fact is that the US government put a lot of money into the
development of computers, especially early on. However, many of the earliest applications were
necessitated by wartime needs. ENIAC, we must remember, was conceived of as a weapon of war.
SEAC and NBS offer an interesting break from that early tradition, but that leads to a fun question
of why the NBS wanted a computer to begin with. Were they just trying to standardize more stuff more quickly? Now, as you can see,
I have a lot of questions here, so let's try to untangle this big mess. I think it makes the most
sense to tackle these questions in reverse order. So, first off, let's try to work out what made the
National Bureau of Standards build a computer to begin with. I think that should transition us
nicely into a discussion of SEAC itself and then onto
what the thing was actually used for.
Sound fair?
In more recent years, the NBS has been renamed NIST, the National Institute of Standards
and Technology.
Under both names, this agency has been one of those seemingly boring but actually pretty important kind of outfits.
Their origins stem, somewhat indirectly, from a mandate in the U.S. Constitution.
Part of the Constitution empowers Congress to establish uniform standards for currency.
This was initially meant for hard currency, like physical silver and gold coins.
So there are very physical and literal standards at play here.
As just a strange aside, this mandate also existed in the Articles of Confederation.
That's the short-lived document that preceded the Constitution.
I guess establishing standards has always been important in the good ol' US of A.
Anyway, this mandate was handled on a semi-official basis for over a century. Different agencies
tended to do their own thing when it came to standards. The NBS proper was only founded in
1901 as a centralized agency to handle all standards, at least all standards that could
be administered by the federal government. That task has ranged from defining constant values
used in physics and engineering to providing standardized samples of materials. That last
one has some fun implications. You can go to the NIST website today and buy standardized
anything. This even means they have standardized peanut butter for sale. To overly generalize
things, the NBS was basically the nerd branch of the federal government in this period.
They even had a lab established in DC. So if the feds needed some kind of physical sciences handled, NBS was the
agency best equipped for the job. The next player in our story is the U.S. Census Bureau. They,
you know, conduct the census. This is another one of those constitutional mandates. Every 10 years
in America, there's an accounting of all residents.
That's an understandably big undertaking, so there is a fittingly big organization to handle
the process. There's also been an interesting history of the U.S. census driving innovation.
In the 1880s, Herman Hollerith developed punch cards specifically for use in the upcoming census.
Counting up and tabulating data is a slow, repetitive, and boring process.
It just screams for automation.
As the 1950 census loomed, the Bureau was looking for a new upgrade to their process.
They wanted to take a computer out for a spin.
Obviously, this presented a bit of a problem.
There just weren't many computers at this point in time.
The search for a system started in 1948.
The Census Bureau wasn't really a scientific outfit.
They're bean counters, but they're not really physical scientists.
So they called on the NBS to manage the hardware side of things. Now, the Bureau,
as it would happen, was also looking for a computer. So hey, maybe they could get a bulk
rate by bundling things. Just to put this in perspective, in 1948 there were only a scant
handful of computers in America. There was ENIAC, the Harvard Mark I and II, IBM's SSEC, and a few
machines at Bell Labs that are quasi-computers. Worse still, these machines weren't necessarily
well-suited for data processing. They were mainly souped-up calculators. Powerful, to be sure,
but not the kinds of machines that were predisposed to crunching census data.
That said, there was one promising option.
In 1946, the Eckert-Mauchly Computer Corporation, or EMCC, was founded by, who else, John Mauchly and J. Presper Eckert.
The duo had originally designed and overseen construction of ENIAC.
EMCC was the first commercial entity to offer computers.
At least, in theory.
EMCC was still working on things behind the scenes.
They had yet to build or ship anything substantial.
The current big project over at the company was called UNIVAC.
Later, it would come to be known as the UNIVAC I, but at the time, it was just called UNIVAC.
It was a stored program machine that could be used for the census and would make a wonderful research platform for the NBS.
But there was no extant UNIVAC that EMCC could sell.
It only existed on paper.
And even then, there were issues.
The design phase had been drawn out.
Schedules were constantly pushed back.
It turned out that building a commercial computer was a larger task than initially thought.
There's also the somewhat unexpected matter of government interference. While the Census Bureau and the NBS were looking
into UNIVAC, the FBI was actively picking apart EMCC. You see, the fledgling company's business
model relied on big government contracts. At this point,
the feds were kind of the only group with enough money to buy a computer. So EMCC had to court
federal contracts to stay afloat. Some of those contracts were for the military, and those
contracts required a high level of clearance. Now, security clearance can be a tricky thing.
It covers a pretty wide range
of secrecy and impact. I've personally had low-level clearance to work in a certain lab
at one point myself. That was on the very much less impactful side of things.
One of EMCC's computers, BINAC, was slated to be used by Northrop Corp, a big defense contractor. So a number of employees
at EMCC had to have pretty high-level clearance to work on this project. The tricky part is that
the higher the clearance, the higher the level of government scrutiny. Sometime during 1948,
military intelligence started asking some tough questions about one John Mauchly.
The investigation was eventually referred to the FBI, and a number of EMCC employees were also put under scrutiny.
In Bit by Bit, author Stan Augarten explains that these investigations had a number of consequences.
Northrop was instructed to withhold certain key information
from EMCC. Clearance was becoming a delicate issue, and contracts were jeopardized. So,
why did these investigations start in the first place? Well, it was feared that Mauchly and a
select few of his employees were communists. It seems that a lot of these rumors stemmed from the fact that Mauchly was
tangentially involved in union activity, and for the era, the FBI thought that was suspicious.
The point here is that EMCC had somewhat unstable footing. FBI persecution may not have delayed
UNIVAC's development that much, but it certainly didn't
make progress any faster. This investigation lasted years and was just one of many factors
that kept EMCC from finding its legs and really from finding good funding. It was clear to NBS
that UNIVAC would be a good fit for their lab and the census, but it wouldn't be a timely fit.
The Bureau also investigated machines being constructed at universities, and
that also led nowhere. Estimates showed that a UNIVAC I computer might be deliverable sometime
in early 1950. So in the meantime, the NBS had to put together a plan. Samuel Alexander, the engineer in charge
of computer research, explained the next twist as such. Quote, during the summer of 1948, NBS explored
the possibility of designing and constructing an interim machine that would have sufficient power
for general use and yet be simple enough to be constructed in a short time.
The feasibility of this program was based on the preceding two years of active experience
in component development and the availability of the best technology that had been reported
by other computer groups, particularly those in the universities. End quote.
This is where the SEAC project takes root.
NBS isn't able to find a computer in any sort of timely manner, so they decide, hey, why not build
our own? At first, the plan was to simply make an interim machine, a stopgap, until UNIVACs started
rolling off the assembly line. One thing that's interesting in the official NBS narrative
is that the census connection drops off pretty quick. It seems to me that once NBS realized
they'd be doing their own thing, the Census Bureau also went off and did their own thing.
So that's the why part answered, but what about the how? How was NBS planning to create a computer faster than EMCC could?
How would it be possible to outpace the UNIVAC project, which, we must remember, did have a head start?
This was made feasible thanks to two big factors.
First off, UNIVAC was intended as a pretty large-scale general-purpose computer that could be sold to a wide range of customers.
It needed to roll out of the factory as a functioning unit.
SEAC was intended as a testbed, a machine that the NBS could use to test out new ideas, to go fast and break things, so to speak.
The design implications here are, I think, pretty interesting.
In documents, the initial part of SEAC is described as a, quote, nucleus.
This was the first part that was designed, and the first part that was built.
This nucleus was intended as the bare minimum for a functioning computer.
From there, SEAC could be expanded, new experimental
components could be plugged in, and the computer could grow along with the Bureau's needs.
So the interim goal was just to get the nucleus up and running. Expansion could come later, and
with proper planning, expansion could be done with little to no downtime.
We have to think in terms of a computer being a very limited and very precious resource here.
Downtime was more costly then than it is now.
The second factor was that the NBS chose to work off an existing design.
And, at least at this point, there was only one workable design.
EDVAC.
This is, once again, where we hit a strange twist.
This is something that, at least in my head, I like to call the saga of the three Johns.
It's a little melodramatic, I will admit.
It starts with the two Johns we've already addressed on this episode.
John Mauchly and John Presper
Eckert. The third is John von Neumann, who didn't work on ENIAC itself, but collaborated on some
side research with the larger ENIAC team. From the beginning, it was clear that ENIAC wouldn't
be the be-all end-all of computing. It was designed as a quick way to build a machine that could enter service as soon as possible.
Even before ENIAC was operational, there were plans for a grander machine named EDVAC.
The design of EDVAC was a large-scale collaborative effort,
and it leaned heavily on this idea of a truly general-purpose machine.
Johnny number three, that is, John von Neumann, should be enough of a tip-off that this was a
theory-heavy kind of project. Timeline-wise, designs were being drafted in the last phase
of ENIAC's development, so we're getting a more reasonably paced attempt at figuring out what a computer should actually be.
Now, here's the tension in the saga.
Von Neumann had joined this effort relatively late.
By the time he arrived, Eckert, Mauchly, and the rest of the team had hammered out a lot of EDVAC's details.
It's safe to say that EDVAC was a collaborative effort, the result of an entire team putting
in work.
The third John was just that, part of a larger group effort.
One of his contributions was a document that came to be known as the First Draft of a Report
on the EDVAC.
This was essentially a summary of EDVAC's design up to that point. It has an intense level
of detail, from abstract theory all the way to an instruction set and proposed circuit diagrams.
It's everything you need to build a computer. It was also a draft. The thing is unfinished, and
crucially, so too is its list of authors.
This was meant to be passed around the lab, not the public, so fittingly, von Neumann is the only author.
He wrote the report, and it's just like a report that you'd hand off to your co-workers.
If it ever got out, you'd want to change that byline, maybe redact some stuff.
But you wrote it, so for now it only has your name on it.
The only problem here is that the draft report did, in fact, get passed around to the public.
This von Neumann draft was leaked to scientists outside the lab, and soon copies spread around
the academic world. The computer's design, thanks to the single name on the byline,
eventually became known as the von Neumann architecture. In 1948, there were few other
examples of computers to work off of. The EDVAC report was probably the most detailed option out there,
so the general outline of the report was used to develop SEAC. Perhaps the biggest result is that
the SEAC computer becomes one of the first stored program computers. That is, you can actually write
a program, load it into memory, and run it. Further, this follows the EDVAC design, or the three-Johns approach.
Code and data reside in a single unified store.
In more modern parlance, everything shares a single address space.
That's the most basic characterization of SEAC, but what about the details?
What specifics did it borrow from the EDVAC report?
Well, to discuss that, we need to get a little bit deeper into the digital tangle.
In 1955, the NBS publishes this wonderful book called
The Circular of the Bureau of Standards, number 551.
Catchy name, I know.
This gives a rundown of the development of SEAC, its internals, and the first three or so years of its operation.
So what I want to do is hit the high-level points of SEAC's design document, and then see how those line up with the EDVAC report.
To start with, we have what I think is a pretty low-hanging fruit.
Data representation.
Now, I know what you're thinking.
Sean, data representation sounds boring.
It sounds abstract.
Well, dear listener, I think it's only boring because this aspect of computing has been set in stone for so many decades.
Our story is back at the dawn of the digital, so even basic questions like how do you store a number become surprisingly open-ended.
Pre-digital machines had a ton of weird ways to store numbers.
This ranged from analog rotation on a shaft and drums to discrete values on gears.
Some really far-field machines used marbles and tubes to represent values.
Stuff gets weird as we go back.
Moving into the digital timeline, we still find some variance.
Both the Harvard Mark I and ENIAC were decimal-based
machines. They stored numbers and operated on numbers stored in base 10. Another camp, and
the one that would win the day, preferred to use binary to represent numbers. That camp included
early machines at Bell, Konrad Zuse's computers in
Germany, and the still theoretical EDVAC. SEAC decided to go along with binary. However, SEAC
didn't copy all of the specifics of EDVAC's report here. The pen and paper EDVAC calls for somewhere between 27 and 30-bit numbers. This number has a range to it
because this is a draft design. This isn't a final set-in-stone decision. Von Neumann explains
that there are some trade-offs to number size. EDVAC was designed to perform mathematical
operations one bit at a time, a serial machine if you will. So the larger your
numbers, the slower arithmetic operations become. At the same time, the more bits you use, the more
precise calculations can be. As the report explains, 27 bits allows for precision up to 8 decimal
digits. 30 is used in some places as a nice round number and because
it adds the space to allow for negative numbers. SEAC settled on 45-bit numbers, which, on this
specific machine, worked out to 13 decimal digits. So, we're in the same rough realm as EDVAC, but still not exactly the same.
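As a quick sanity check on those figures, an n-bit integer holds about n times log10(2), or roughly 0.3, decimal digits per bit. Here's a tiny sketch of that arithmetic in Python, my own illustration rather than anything from the NBS circular:

```python
import math

# Decimal digits representable by an n-bit integer: about n * log10(2).
for bits in (27, 30, 44, 45):
    print(bits, "bits ~", round(bits * math.log10(2), 1), "decimal digits")

# 27 bits ~ 8.1  -> the EDVAC report's 8 decimal digits
# 44 bits ~ 13.2 -> SEAC's quoted 13 digits, if we assume one of its
#                   45 bits acts as a sign
```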
The adoption of binary is a big deal because it leads to some neat simplifications. There's a reason that binary
wins out. Binary numbers can be represented with a single wire, and operations can be performed on
these numbers using very simple logic gates. That opens up a world of possibilities.
And just like that, we're on to another seemingly mundane topic, logic design. This is another one
of those cases where homogeneity has, I think, robbed the topic of some of its luster. In the modern era, or really any era past the 1950s, all computers use chained logic gates.
Early systems use all vacuum tube logic.
Those get replaced one-to-one with transistors.
Then that same circuit gets etched onto silicon.
Right?
Well, no.
That's kind of the theme of this episode, is simple progression is fun and tidy, but it's wrong.
Weird stuff has been going on at this low level for actually a really long time.
The EDVAC report talks about implementing logic using these abstract things called E-elements.
This is that level of abstraction and flair that I can only assume von Neumann added to the project personally. E-elements are composed of two vacuum tubes.
One tube inverts an input, and the other tube functions as a logical AND gate. This is distinct from a NAND gate, which is an AND that's had its output inverted. In E-elements, the inversion of one input is passed as an input to an AND gate. Hence, it's sometimes
called a NOT AND. Now, if that's all gibberish, don't worry. What matters here is you have to be able to do an AND operation as well
as an inversion. Now, EDVAC was slated to use vacuum tubes to implement this. You can call this tube-tube logic, since you'd have a cascade of tubes feeding their outputs into yet more tubes. Vacuum
tubes can be used to implement AND gates as well as NOT gates.
That is, the inversion needed for an E element can be handled by a vacuum tube,
and so can the AND operation.
This is one of the reasons that tubes end up being used so well during these early days of computing.
They just do it all.
There are alternatives, but you can't totally
get away from the tube lifestyle. SEAC used a combination of diodes and vacuum tubes. Diode-tube logic, if you will. A diode is a really simple electrical component. It allows current
to flow only in one direction. There are some caveats here, but let's
just live in an idealized world for right now. I should also note that these were germanium diodes,
so solid-state semiconductors are in play. An AND gate can be constructed using two diodes and a
resistor. A logical OR is just made by switching the polarity of those diodes,
literally just picking them up and flipping them. This has a certain advantage because vacuum tubes,
despite their flexibility, are kind of trash. They only operate at certain temperatures,
so each tube has its own little resistive heating element inside. They also require somewhat high voltages.
So tubes end up using a lot of power and generating a lot of heat.
That leads to all kinds of problems when designing a large-scale machine.
So why not drop the tubes entirely and replace them with diodes?
Well, you actually can't do that.
But you can get close. The issue is that
diodes can't invert a value. You can't implement a NOT gate using diodes alone. In 1948, the
available solution was the venerable tube. It also turns out that you need to have a NOT gate in order to perform certain functions.
Binary addition, for instance, uses NOT.
The team at NBS made a compromise in order to minimize the use of tubes.
Most logic was handled with diode circuits.
Tubes were only used for two things.
Amplifying signals that got too low and inverting values.
This meant that SEAC could use far fewer tubes than its contemporaries.
ENIAC, for instance, used 18,000 vacuum tubes.
SEAC only needed about 700.
That's a ridiculous difference.
It's also important to note that the idea of diode logic wasn't invented at NBS,
they were just an early adopter of this technique. I'd say they used it to great effect.
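To make that division of labor concrete, here's a minimal sketch in Python, treating the diode gates and the tube inverter as plain boolean functions and wiring them into a one-bit adder. This is my own illustration of the technique, not NBS's actual circuit design:

```python
# Diode circuits can implement AND and OR; only the tube can invert.
def diode_and(a, b): return a & b   # two diodes and a resistor
def diode_or(a, b):  return a | b   # same parts with the polarity flipped
def tube_not(a):     return 1 - a   # the one job left to a vacuum tube

def half_adder(a, b):
    # sum = (a OR b) AND NOT (a AND b), an XOR built from AND, OR, and NOT.
    # Without the tube's NOT, this addition couldn't be built.
    carry = diode_and(a, b)
    total = diode_and(diode_or(a, b), tube_not(carry))
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```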
There's another related part of SEAC's design that I just have to slip in here.
SEAC's logic was built up using a thing called a tube package. These were packaged circuits that each contained a number of AND gates,
an OR gate, a single vacuum tube, and some support components. Each package was its own little self-contained module. They even had a little metal case, plugs, and a little handle for ease of use
and moving around. SEAC's logic, the actual part of the computer that did mathematical operations and controlled
operation in general, was composed of these tube packs.
Modules were plugged into slots in large cabinets.
Each pack was identical, basically a removable logic element.
The real smarts of SEAC came down to how these modules were wired together.
Two packs sitting next to each other were identical and totally interchangeable,
but they could be wired up to do radically different jobs. This modular design is fantastic
for reliability and repairability. If a module stops working, you can just pull it out and replace it. There's even a handle
for pulling the broken part out of the chassis. It's just smart. There's no two ways about it.
Another theoretical advantage is that these modules aren't very complicated. A tube pack
is made from standard parts. The only custom part is a circuit board. So it could have been possible
to contract out module manufacturing to some third party. I don't know if NBS took this route, but
it's an interesting idea to think about. This stuff could have been mass-produced with relative ease.
There was one other type of package used in SEAC, the delay line package. Like with tube packs,
these were replaceable modules that plugged into a larger chassis. Now, I have to clear up something
here. I got a little confused when I was first reading about SEAC's architecture. I kept looking
for registers, somewhere that the processor could use for temporary storage during operations.
I now have to warn against doing that since, well, SEAC doesn't really have registers.
These internal delay line packs were used for lower level operations.
You don't, say, store a value on one of these delay packs. Instead, these delay packs, and some assorted other chunks of
spare space, are used for setting up internal operations. Most of SEAC's instructions
were memory-to-memory, so I guess it's time we actually talk about the instruction set.
This is another place where the EDVAC draft comes into play. Both SEAC and EDVAC follow a series of instructions stored in memory.
These instructions are encoded, meaning that you can't just write out,
Hey SEAC, add these two numbers.
You have a very specific series of bits and bytes that correspond to a specific operation.
In other words, it's a program.
More specifically, SEAC instructions consisted of either three or four addresses followed by an
operation number. There are registers tucked away really deep in the machine, but they're only used
for intermediate values. When you perform multiplication, you just specify the addresses
for the two inputs and one output of this operation. SEAC takes care of the rest out of
sight. The EDVAC draft calls for a very similar architecture. Something to note here is that
modern computers usually use either a load store or a register memory architecture here. That's kind of obtuse language,
I know, so let me explain. Load store machines require you to load data from memory into registers before you operate on that data. Register memory machines allow you to operate directly on locations
in memory, but this is usually limited such that
only one operand can be an address. In other words, a register memory machine might let you
add the value of a register to an address in memory, but not an address to an address.
These ultra-old machines, though, they don't really fall into either of these camps.
The camps haven't been built yet.
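To make the memory-to-memory idea concrete, here's a toy interpreter in Python. The instruction layout and opcode numbers are invented for illustration, the real SEAC encoding differs, but the shape, a few addresses plus an operation number, is the point:

```python
# A toy memory-to-memory machine, loosely in the SEAC style.
# Each instruction names two source addresses, a destination, and an op.
ADD, MUL = 1, 2                    # made-up opcode numbers

def execute(program, memory):
    for src1, src2, dst, op in program:
        if op == ADD:
            memory[dst] = memory[src1] + memory[src2]
        elif op == MUL:
            memory[dst] = memory[src1] * memory[src2]

memory = [0] * 32
memory[10], memory[11] = 7, 5      # operands live in memory, not registers

# "Add the values at addresses 10 and 11, store at 12; square that into 13."
execute([(10, 11, 12, ADD), (12, 12, 13, MUL)], memory)
print(memory[12], memory[13])      # 12 144
```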
Now, there's also a bit of an elephant in the room that I need to address. We need to get around to the headlining
feature of the so-called von Neumann architecture. A crucial piece of the EDVAC report was how it
structured memory, or rather, how liberally it structured memory.
The report called for a single unified memory space where code and data lived side by side.
The idea is that you didn't have special memory for code
that was distinct from the rest of your memory.
The computer just had memory.
That's it.
SEAC also followed this prescription.
There are a number of advantages to following this type of regime,
but we have to be careful when talking about this in the historical context.
Modern computers, almost to a T, use a von Neumann-style architecture.
This makes it really easy to apply what we think of as
advantageous designs to earlier
machines that look similar.
But we gotta be able to separate out our modern expectations from what someone in 1950 wanted
to do.
One often-cited benefit of the von Neumann architecture is that it lets you treat a program
like generic data.
Put another way, it makes it possible to write programs that output programs.
It makes compilers possible. Now, this is a bit of a weird thing for the time period.
Assemblers were just starting to be a thing. These tools let programmers write out instructions
using simple mnemonics instead of hammering in the actual
binary data for an instruction. Instead of having to remember that addition is instruction 01 or so,
you could just write add and let the assembler do the translation.
The first research into compilers started in 1950 at EMCC, so the ability to treat data and code interchangeably becomes more powerful
almost immediately. But in 1948, this ability was useful, but it wasn't the most useful thing
in the world. This is just before the big compiler wave. That's a somewhat simplistic benefit, but there's something
more contentious that I want to touch on. The von Neumann architecture allows for self-modifying
code. That is, you can change part of your program as it runs. Modern day programmers in our
fancy suits with our degrees consider this to be a very bad thing. I've seen
it called dangerous. And that's true. Self-modifying code can be difficult to write, hard to maintain,
and hard for other programmers to understand. It can go rogue, so to speak, since the executing
program will be changing. It might change in an unexpected or undesirable way.
Thus, self-modification is sometimes seen as an inherent limitation of the von Neumann architecture.
To many earlier practitioners, self-modifying code was a crucial benefit of this architecture.
Hey, all I can say is, get spooked, nerds. Self-modifying code has always
been cool. Now, on an actual practical level, self-modifying code allowed programmers to get
the most out of these early machines. Memory was severely limited, so you had to pull some tricks
if you wanted to actually get anything done. You had to be efficient.
SEAC's programming manual has a whole section on this practice.
To quote,
The ability of the SEAC to operate on and modify its instructions is one of its essential features.
Much of the machine's flexibility and versatility springs from this facility.
All except the most trivial of routines
involve variable instructions. End quote. Variable instructions is one of the terms that this document uses
to talk about self-modifying code. Now, this isn't just a fun byproduct of how SEAC handles memory.
Self-modification is what makes SEAC work. This can be a little
mind-twisting if you don't write this very specific type of code, but let me try to drop
in a simple example. Let's say you have a list of numbers in memory and you want to add one to each
item in that list. SEAC doesn't have any type of fancy addressing or index registers, so you can't
look and reference by some nice pointer. You could write out 100 add instructions, each operating on
one of these sequential locations. Alternatively, you could just modify that add instruction.
You set up a loop.
Each time you loop, you add one to some location x in memory.
Then you modify that add operation so the next time it executes, it actually adds one to some location x plus one in memory and so on.
Conceptually, that's more complicated, but it takes up a lot less precious memory space.
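Here's that trick as a sketch in Python, reusing the toy instruction format from earlier. The encoding is still invented, but the mechanism is the one the manual describes: because code and data share one memory, the loop can rewrite the address field of its own add instruction each pass:

```python
# Self-modifying loop: bump the operand addresses of an ADD each time around.
memory = list(range(100, 110)) + [0] * 6   # ten list items, then spare space
ONE = 15
memory[ONE] = 1                  # the constant 1, stored like any other datum

# instruction: [src1, src2, dst]; the implied operation is ADD
instr = [0, ONE, 0]              # "memory[0] = memory[0] + memory[15]"

for _ in range(10):              # touch each of the ten list items once
    s1, s2, d = instr
    memory[d] = memory[s1] + memory[s2]
    instr[0] += 1                # modify the instruction itself, so the
    instr[2] += 1                # next pass targets the next address

print(memory[:10])               # [101, 102, ..., 110]
```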
Self-modification also makes a computer with a limited instruction set much more viable.
SEAC didn't have anything like distinct addressing modes. You couldn't really implement something
like a pointer. You couldn't tell SEAC to grab data from an address by some offset value.
But you could modify your code at runtime to accomplish the same thing. One result here is
that programming on SEAC, as well as some other early computers, had a different feel to it.
Self-modifying code is something that's currently avoided, advised against, or
just not taught. With compiled languages, it's not even possible in any kind of reasonable way.
So programs written for SEAC use a style and approach that most programmers aren't familiar
with. It's like an arcane tradition that's been forgotten. A secret skill that's been hidden from mere acolytes while practitioners died out.
Some still write self-modifying code, but it's a really niche thing.
On SEAC, it was central to the process of programming.
Now, I think it's fair to say that SEAC followed the EDVAC report pretty closely, with only minor deviations.
As a result, SEAC was built at a dizzying pace.
Once again, from the programming manual,
construction of the SEAC, and less than six months for testing. The machine performed its first integrated computational operation on April 7th, 1950. End quote. To be fair,
this was just the nucleus of the machine. SEAC was up and running in early 1950, but
this was its most basic incarnation. Once running, SEAC was run hard, to put it mildly.
For the next few years, the computer operated nearly 24 hours a day, 7 days a week.
Projects from multiple agencies were scheduled on the machine in shifts,
so at any given time, SEAC was being put to very official use.
In 1953, the NBS estimated that 30% of SEAC's time was spent on maintenance and upgrades, with the machine being offline for only
20% of those first three years. Now, they didn't call this uptime exactly. The NBS documents say
that SEAC was, quote, engaged in problem solving. It's a slightly
different way to word things, but I think it speaks to what SEAC and its contemporary machines
were meant for, solving problems. These early machines are tools, first and foremost.
This wonderful circular, 551, just keeps giving us good information. So, let's move a
little deeper. To start with, I want to touch on the topic of scheduling. This is something that
the NBS spends a lot of time discussing in the SEAC circular. I think it's worth at least glancing
at since this is an element of computing that may be unfamiliar today.
Quote, the flexibility in scheduling mathematical work is of special interest. During a typical
long day of nearly 24 hours available for computing, some 2-6 runs of 2-8 hours each
are made. In addition, as many as 12 mathematical code checking runs are made,
some as short as 5-10 minutes. Another way of expressing this feature is to state that the
original experimental model of SEAC has demonstrated that it can divide its time
efficiently among at least 15 mathematical projects from totally unrelated fields of science, i.e., each project can get two or more sizable runs plus code checking during the week. End quote.
This is a matter of management that falls totally outside SEAC's technical capabilities.
There are time slots available every day for actual programs to run and separate slots
for testing.
Any problem has to be scheduled in advance, and you have to stick to that schedule.
You don't get to run for six hours if you only schedule four.
The Circular goes on to explain that NBS kept a staff of 30-40 people just for handling problems on SEAC, and that's just the internal team.
This ranged from actual programmers and engineers that maintained SEAC to mathematicians that helped
design programs and adapt math methods for the machine. I hope this makes it clear that SEAC
was treated as a precious and very limited resource. It had to be used carefully. It had to be coveted. By 1953, there were a handful
of computers in the entire world, and one happened to be at the NBS lab in Washington, D.C.
The circular goes so far as to provide tables and graphs to show how efficiently SEAC time was used.
In the first quarter of 1953, for instance, there was a relatively low efficiency
of 71%. This was due to a large downtime that quarter. Some quarters saw the machine hit over
80% efficiency, running for thousands of continuous hours. So what problems were actually
being solved during those thousand-plus-hour campaigns?
What was SEAC actually being used for?
The first novel calculation done on ENIAC was, as far as I'm concerned, one of the
more impactful calculations of all time.
It was used to test a number of designs for the hydrogen bomb.
The calculation was so complicated that it couldn't have been accomplished by humans, at least
not in a reasonable time scale. These calculations showed that the then-proposed design for a
thermonuclear weapon wouldn't work. Thanks to that, the hydrogen bomb project was able to move down a
more productive path. The result of these calculations quite literally changed the world.
It pushed the entire planet closer to destruction.
We could argue that it was one of the many steps that helped lead to the Cold War.
One of the first big novel calculations run on SEAC was the tabulation of Loran tables.
Not quite as high profile and maybe not as dangerous, but this was still an impactful start.
Loran, short for Long Range Navigation,
was a radio-based navigation system developed by the US during the Second World War.
You can think of it like ground-based GPS.
Loran relied on signals sent out by paired base and repeater stations.
The base station would send out a radio pulse.
Once the repeater station heard that pulse, it would respond with its own pulse. The repeated pulse was offset
by an amount of time proportional to the distance from the base station to the repeater station.
In practice, this was set up to be around one millisecond. The overall Loran system consisted of a bunch of these pairs plopped down at convenient
and strategic locations. Ships were outfitted with receiver units. To determine your current
location, your receiver would listen for signals from these base-repeater pairs. The theoretical
offset between these pairs was already known, it's just a millisecond. You then get another two offsets,
one determined by the distance from the base station to the receiver and another from the
repeater to the receiver. That ends up making a big triangle of offsets, each proportional to
distance. With a little math, you can calculate your distance from the Loran station. SEAC was
used to calculate these offsets, which were, in turn, used to generate new
nautical charts. These charts were pretty psychedelic, actually. Because of how the signal timing works out across space, lines of constant Loran offset twist around and form hyperbolas. So you get maps with these interesting curvy lines. It's neat looking stuff.
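The math behind those curves is pleasantly direct: the set of points where the difference in distance to two fixed stations is constant is, by definition, a hyperbola. A small sketch, with station positions and numbers made up purely for illustration:

```python
import math

# One Loran pair, positions in kilometers (made-up values).
base = (0.0, 0.0)
repeater = (100.0, 0.0)
C = 300.0   # speed of light in km per millisecond

def offset_ms(receiver):
    """Time difference a receiver would observe between the two pulses."""
    # Ignoring the fixed repeater delay, the offset is the path difference.
    return (math.dist(receiver, base) - math.dist(receiver, repeater)) / C

# Every position sharing an offset lies on one hyperbola, which is exactly
# one of the curvy lines printed on a Loran chart.
for ship in [(30.0, 40.0), (30.0, 80.0), (50.0, 10.0)]:
    print(ship, round(offset_ms(ship), 4))
```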
easier and more precise. The NBS circular also mentions that SEAC was used for calculating
missile trajectories, but it doesn't elaborate much. My guess is that this is a reference to
firing tables, a standby for these old machines. That's kind of boring, so I want to draw our attention
to a much cooler project. SEAC was also used to calculate the wave function of helium and lithium.
Now, I'm going to be taking us into just a little bit of physics, but I swear I'm going to keep it
light. Here I'm working off a 1952 paper titled A Numeric Solution of the Helium Wave
Equation with the SEAC. This paper describes the exact methods used in these calculations.
It's often said that computers changed how we handled math and what kind of math was possible.
Well, this paper is a very concrete example of this phenomenon.
In quantum mechanics, a wavefunction is an equation that gives the probability density for some state.
The simplest way to think about this is as a generalized form of probability.
Instead of saying you have a 1 in 6 probability of rolling a 1 on a die,
probability density gives you a continuous range of probabilities over possible states. The wavefunction for an atom like hydrogen will show, roughly speaking,
where electrons are hanging out. There's a lot more to it, but this is just the most
simplified explanation. You wind up with these graphs that show colored lobes representing the probability density of an electron occupying that space.
Wave functions tend to be a pain to deal with, at least as far as math goes.
These are three-dimensional equations, which means that a wave function is a function of x, y, and z.
The equation also has a dependency on energy, so really these are four-variable functions.
The hydrogen wave function has been solved analytically, meaning that there is a clean
solution without any approximations. But hydrogen is the simplest element possible.
Going any further down the periodic table leads to more dirty territory.
From helium onward, the equations don't have analytical solutions. Things get too tangled
up to solve that way. The way around this is to generate a numeric solution. Instead of solving
for a nice equation, you plug in values, run the numbers, and then you
get a set of solutions for some given values. A numeric solution doesn't give you a nice curve
with every possible solution. You basically get a table of solutions at different points on your
curve. It works, but there are error bars that you have to deal with, and it usually takes a lot of time.
Numeric solutions are often approximations, so while it may be an answer, it's not the best possible answer.
The 52 paper explains that a mesh approximation was used to solve the helium wave function.
Essentially, the researchers made a 3D mesh of points that they wanted to calculate the wave function for. Then you just crunch
numbers and solve for each of those points. You might start with 1, 1, 1, then you move to 1, 1,
2, and so on. By running the numbers at each point, you get a 3D approximation of the actual wave function.
What's really neat about the mesh approach is that you get to choose how fine that mesh is.
You could only have a handful of points to solve, or you could have thousands of points.
The more points in the mesh, the more accurate your results are.
By the same token, the more points, the longer it takes for results to be calculated.
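Here's the flavor of the method in a few lines of Python. This is not the actual helium calculation, which involved much heavier machinery; it's a stand-in function evaluated over a 3D mesh, just to show how both the cost and the accuracy scale with mesh fineness:

```python
import math

def f(x, y, z):
    # Stand-in for whatever quantity gets computed at each mesh point.
    return math.exp(-(x * x + y * y + z * z))

def mesh_sum(n, size=4.0):
    """Evaluate f on an n x n x n mesh and integrate by crude summation."""
    h = size / n                   # spacing shrinks as the mesh gets finer
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += f(i * h, j * h, k * h) * h ** 3
    return total

# Coarse meshes run fast (good for a short checkout slot); fine meshes take
# n-cubed longer but creep toward the true value, which is about 0.696 here.
for n in (4, 8, 16, 32):
    print(n, "points per side ->", round(mesh_sum(n), 4))
```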
Here's where SEAC comes into play.
Like we discussed earlier, the machine was scheduled for pretty tight runs.
There were slots available for testing and debugging programs,
and then longer slots for actually running the software.
The debug sessions might have been as short as 10 minutes, while problem solving slots could last up to 8 hours,
so software had to be designed that could be quickly tested before it was actually executed.
The mesh method works really well here, since it can be scaled very easily. Tests were done using a very coarse mesh with only a
small number of points calculated. Those results could then be checked against previous hand-done
calculations. Once everything lined up, the mesh was changed and the program was left to run.
It's also interesting to note that the mesh approximation could have been easily split up to run over
multiple eight-hour blocks. It's just sitting there crunching away values for a list of points.
Just give it one list to do during the first block, then once you get scheduled again,
you can pass in another list on and on until you have a nice wave function.
So we see here a mathematical method that's particularly well-suited to how
SEAC was used. It's also well-suited to these new computers, since it's all just repetitive
calculations. A human can't really sit down and crunch wavefunction solutions for millions of
different points. That would take too long. That might take more than a human's career. It would be way too
prone to error, and frankly, it would be boring. But with a machine like SEAC, oh, that can't
complain. The job can be farmed out to some diodes and vacuum tubes. This very much changes what kind
of math can be done. Computers allow us to move away from elegant solutions.
If something can be calculated, well, we can just make a machine do it.
These three applications that I've covered so far, Loran charts, missile tables, and wave functions,
were all from the early days of SEAC. But the machine kept on crunching for over a decade.
So what happened later in its lifespan?
I think one of the best ways to answer that question is to jump forward to 1956 and look at the development of image processing at NBS. So far, we've been talking about SEAC
as a limited resource, one of only a few machines in the world. For a time, the US government had
one computer, and its name was SEAC. But that was quick to change. New computers started coming
online, eventually mass-produced machines enter the picture, and scarcity really drops off. I mean,
even a few research machines are able to start filling a digital gap.
The strict scheduling regime really only existed when there was a run on SEAC,
so once more machines start appearing, SEAC can open up its schedule a little bit.
Funnily enough, it sounds like SEAC came into its own as a tool once other machines existed.
Competition actually kind of helped. First and foremost,
SEAC was built as a research platform. The machine had been continuously expanded since its dedication.
My precious circular 551 even refers to SEAC as a, quote, proving ground for the evolution of
advancement in computer components, design techniques, and maintenance procedures.
For instance, the computer started life with mercury delay lines. Soon, experimental cathode
ray-based memory was added to the mix. The machine had multiple types of tape I.O. over the years.
Its instruction set was even expanded on special occasions. That's not really something that happens to
many modern machines. SEAC was always a machine in flux, but early on it didn't have a lot of
time to flow around. During the first three years of operation, around 20% of SEAC's time was
dedicated to a mix of maintenance, expansion, and actual research on SEAC itself. The staff at NBS was
working during a slim window of time. Part of that sounds like necessity. Computers were a precious
thing back in the day. But that window wasn't always so slim. Once there was competition,
there was suddenly more idle time over at the Bureau. So SEAC could spend more
cycles on out-there stuff. Call it blue-sky research. In 1956, Russell Kirsch started in
on one of those blue-sky projects. There are later accounts of this work, but I'm pulling
directly from a 57 paper on the subject. You see, Kirsch was designing a way to get images into a
computer. Now,
we get to this project in
a bit of a roundabout manner.
During this part of the 50s,
there was growing interest in character
recognition software.
As Kirsch and his co-authors
explain in the paper, experiments
in processing pictorial information
with a digital computer.
Quote, in almost all digital data processing machine applications, the input data made
available to the machine are the result of some prior processing. This processing is done manually
in many applications. Thus, such inputs as punch cards, magnetic tape, and punched paper tape often are the
result of a manual processing operation in which a human being is required to inspect
visually an array of printed characters and to describe these data in a form capable of
being processed by the machine."
I think it's plain to see that there is some room for innovation here.
Manual data entry has the same issues as human-bound calculations.
It's slow, repetitive, and prone to error.
It would be much better to lessen the human factor by developing a program that can turn the written word into something digital.
The start of this line of inquiry led to a fun ancillary issue.
How do you even get an image into a computer in the first place? How can you rig up a computer
to even look at a piece of paper? So while Kirsch and his colleagues at the Bureau started in on a
project to develop character recognition technology, it quickly turned into
a project concerned with digital imaging. At this point, no one had really tried to digitize an
image. There was no process out there for loading an image into a computer. I think the closest
similar technology could have been early facsimile machines, but that's probably a tenuous connection at best.
Kirsch was out on his own, so it should be no surprise that a strange device took shape.
The scanner built for SEAC was a weird little machine. The core of the operation was a
photomultiplier tube, that's a special type of vacuum tube that will output a voltage dependent on the amount of
light it detects. That was wired to a circuit with a voltage cutoff, so essentially you have a
combined circuit here that can tell you if it's light or dark, and you can change the light to
dark threshold. That only gets you a single spot, not a whole picture. So, to handle an entire scan, you have to move the photomultiplier over the entire image.
You have to very physically scan it over the surface.
The SEAC solution was a little... interesting.
You see, they didn't go with something like a flatbed scanner.
Instead, the paper to be scanned was
wrapped around a drum. The photomultiplier was put on a worm screw so it could be moved
up and down in even intervals. By rotating the drum and then moving the photomultiplier,
the entire surface of the image could be covered. A strobe light and a simple mask completed the
package so that the image could be scanned
one small square at a time.
The scanner was hooked into SEAC like any other serial device.
This is part of what made SEAC a good platform for research.
The computer had this thing called the I.O.
Selector.
It basically did what it said on the can.
It let SEAC switch between different input and output devices.
All those devices had to send and receive serial data,
as in they could only send or receive one bit at a time.
That covered basically everything in the NBS lab.
Tape readers, magnetic wire drives, printers,
and this rudimentary image scanner.
The scanner was just taking in data one spot at a time, which made up a totally valid string of ones and zeros. That information was then stored as a
bitmapped image. The ones represented a white spot and the zeros represented a black spot.
Hence, bit mapped. These were what we'd now call pixels, but Kirsch doesn't use that
language at this point, so we get spots. The scan was also pretty low resolution. Each image was
reduced to 176 by 176 spots. That works out to 704 words of SEAC's RAM.
That's comparatively a lot of space, so low resolution was the only practical resolution here.
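That storage figure checks out if we assume 44 usable bits per 45-bit word. A quick sketch of the arithmetic, plus the thresholding that stands in for the scanner circuit; the packing scheme here is my guess, not a documented SEAC format:

```python
WIDTH = HEIGHT = 176
BITS_PER_WORD = 44                 # assuming a 45-bit word minus a sign bit

total_spots = WIDTH * HEIGHT       # 30,976 spots, one bit each
print(total_spots, total_spots // BITS_PER_WORD)   # 30976, exactly 704 words

# The scanner circuit in software terms: anything brighter than the
# cutoff is a 1 (white spot), everything else is a 0 (black spot).
def to_spots(brightness_row, cutoff):
    return [1 if b > cutoff else 0 for b in brightness_row]

print(to_spots([10, 200, 90, 255], cutoff=128))    # [0, 1, 0, 1]
```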
That was only the start of the struggle.
The 57 paper explains the next hurdle in a bit of a fun way.
To quote,
As soon as the first picture was fed into SEAC, an uncomfortable fact became apparent. SEAC could store pictorial information in its memory, but the machine user
could not see the picture in SEAC memory except by the very time-consuming procedure of printing
the contents of the computer's memory on a typewriter, and attempting to interpret the numerical information.
End quote.
This was just one of those projects with many little side projects popping up.
Luckily, as Kirsch and co. explain,
SEAC was well prepared to deal with this kind of problem.
The computer had an interesting output device that could display binary data on an attached CRT.
Now, Kirsch doesn't explain what this device was initially used for.
My best guess is it was a debugging item.
Some other computers in the era had a similar setup,
a tube that displayed chunks of memory so you could see where you had loaded something wrong.
What's convenient is that these tubes would show a dot for a 1 and
nothing for a 0. So just set up your memory in the right place and presto, you can now view a
digitized image from a computer. And what was the first image scanned into SEAC? According to Kirsch,
it was a photo of his recently born son. It's a pretty sweet use for some pretty neat new tech.
Now, this was all just the first step in the overall project.
Kirsch and his colleagues were working towards character recognition,
so the next big step was developing some way to make sense of image data.
This, I think, is the really neat part about this early phase of research.
In 1957, SEAC wasn't quite able to interpret text. It wasn't a big enough platform. But it
still made a good platform for working towards that larger goal. The team at the Bureau would
end up developing a pile of tricks and algorithms for image manipulation.
This was, quite frankly, groundbreaking stuff. No one had loaded images into a computer before,
and certainly no one had modified a picture on a computer. One neat trick was grayscale.
Initially, scans were black and white, which, while passable, doesn't look that good. There's something about the combination of chunky spots and high contrast that makes
pictures underwhelming.
The scanner's circuit was actually adjustable.
The cutoff on the photomultiplier could be modified.
In practice, that meant that the white level could be set.
By running two or more scans at different levels, you could get multiple images,
one showing just the very brightest part of the picture,
another showing the slightly less bright sections, and on down the line.
Those could be mixed to give something approximating grayscale.
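In software terms, each pass contributes one threshold's worth of information, and stacking the binary scans recovers a stepped gray level. A minimal sketch of that mixing, my own reconstruction of the idea rather than code from the paper:

```python
# Combine several one-bit scans, taken at different cutoffs, into grayscale.
def scan(brightness, cutoff):
    # One pass of the scanner at a given white-level setting.
    return [[1 if px > cutoff else 0 for px in row] for row in brightness]

image = [[0, 60, 130, 250],
         [40, 90, 180, 220]]         # made-up brightness values

scans = [scan(image, c) for c in (50, 100, 150, 200)]

# A pixel's gray level is simply how many scans saw it as white.
gray = [[sum(s[r][c] for s in scans) for c in range(len(image[0]))]
        for r in range(len(image))]
print(gray)   # values 0 through 4: a five-step grayscale from four scans
```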
The 57 paper goes over a handful of other simple image manipulation programs.
I'm not going to go into great detail, but I figured I should at least touch on them.
One program detected blobs in an image.
And yes, that is the term they used.
The program would find a blob and report its size.
Another program could center blobs in the image.
Yet another program removed everything but the outlines
of blobs in an image. The idea here was to develop a suite of small programs that could be used for
more complex image manipulation. Eventually, this could become part of some larger text
recognition package. Kirsch and his colleagues were setting the foundation for image processing.
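For flavor, here's what blob finding can look like at its simplest: a flood fill over the bitmap that counts connected white spots and reports their sizes. The 1957 programs did this sort of thing in SEAC machine code; this Python version is just a modern sketch of the idea:

```python
# Find "blobs" (connected groups of 1-spots) in a bitmap, reporting sizes.
def find_blobs(bitmap):
    rows, cols = len(bitmap), len(bitmap[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0    # flood fill one blob
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    return sizes

bitmap = [[1, 1, 0, 0],
          [0, 1, 0, 1],
          [0, 0, 0, 1]]
print(find_blobs(bitmap))   # [3, 2] -- two blobs and their spot counts
```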
I think this makes it clear why some computers like SEAC were in use so long after their
construction.
Even these vacuum tube beasts could be used to do important research.
There is so much space for innovation in computing that you didn't even need the biggest and
newest machine.
You just needed determination and some free cycles.
Alright, that brings us to the end of this episode and our dive into SEAC. So,
I want to take a second and see where we stand on our initial questions.
Why was the Bureau of Standards building and operating a computer? Well, I think we can
answer that one pretty easily. In 1948, there wasn't really a set place for computers to exist.
I mean, there just weren't even that many computers. A pattern hadn't emerged and
definitely hadn't been set. There weren't established institutions. The feds wanted
to get in on this new computing thing, and the NBS was already established as the scientific
part of the executive branch. This also ties into the other part of the question.
What was SEAC actually used for? Initially, it was a workhorse for the US government.
Many different researchers and departments had time slots on the machine.
It was a scarce resource, but it was one that was shared and utilized highly effectively.
And rounding things out, we have the matter of EDVAC and SEAC's design.
The NBS didn't pull everything from EDVAC, but they did follow the majority of the draft
report when designing SEAC.
However, I think it's clear to see that SEAC wasn't a straight ripoff of EDVAC. Instead,
the Bureau was using work already done on EDVAC as a way to accelerate their own computing project.
That turned out to be a pretty sound choice. It helped SEAC get into operation as fast as possible,
and ultimately, thanks to some smart design decisions, it helped to keep SEAC running for over a decade. Thanks for listening to Advent
of Computing. I'll be back in two weeks' time with another piece of computing's past. And hey,
if you like the show, there are a few ways you can support it. If you know someone else who'd
like the show, then why not take a minute to share it with them? You can also rate and review on Apple Podcasts.
And if you want to be a super fan,
you can support the show directly through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
You can find links to everything on my website,
adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.