Advent of Computing - Episode 58 - Mercury Memories
Episode Date: June 13, 2021
This episode we take a look at the earliest days of computing, and one of the earliest forms of computer memory. Mercury delay lines, originally developed in the early 40s for use in radar, are perhaps one of the strangest technologies I've ever encountered. Made primarily from liquid mercury and quartz crystals, these devices store digital data as a recirculating acoustic wave. They can only be sequentially accessed. Operations are temperature dependent. And, well, they can also be dangerous to human health. So how did mercury find its way into some of the first computers? Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
What's the most important part of a computer?
The snap answer is probably the processor.
That's what gets all the work done, after all.
Without a processor, you really just have a complicated pile of wires and silicon, right?
You could also argue that it all comes down to software.
As a software developer myself, I cycle in and out of that position.
Without software, a computer is really just an inert machine waiting around to do something.
One answer that you probably won't go to immediately is memory.
Computer memory can just seem like an obvious component to the larger picture.
It's there, you interact with it, sort of, but you probably don't ever think about what it's made out of or how it really works.
The simple fact is that without memory, some kind of memory,
we wouldn't get very far in the digital realm.
After all, programming is mostly about moving around data to different parts of memory.
Processors spend most of their time pushing data and numbers
from one place to another. In other words, it's the whole ensemble cast that really makes computers
work so well, or rather, makes computers work at all. Each component has their own role to play,
and their own idiosyncrasies. Software often takes center stage just because it's what we're
looking at when we use a computer. Processors have their own design complications, and memory has
something else entirely going on. Over the generations, computer processor design has
varied greatly, but the general building blocks have stayed the same. That also goes for software.
Logic is at the core of everything for these two broad swaths of computing. But memory? That has an entirely
different story. One that doesn't follow the expected logic whatsoever.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 58, Mercury Memories.
This time, we're taking a turn for hardware and going back deep into computing's early days.
We're going to be talking about memory, how it was first developed, and a little bit of the wild early days of the technology. Before we get too far into the episode, I have
a little bit of show news to take care of at the top. So, two quick things. First, there's a new
bonus episode up on the Patreon page. It's been up for a few weeks, I'm just bad at announcing things.
The bonus episode is a primer on Community Memory. It was a project that started in Berkeley as really one of the first attempts to democratize
access to computing and networking to the general public. It's really interesting and has some cool
intersections with the Utopian episodes that I've been slowly developing. So if you want more
content, go ahead and check that out. It's just a dollar a month to get access to all the bonus episodes,
which I think there's four now. Anyway, the other announcement is Advent of Computing is about to
hit a major milestone. By the time you're listening to this, we will have more than likely already passed 50,000 downloads, which is beyond my wildest
expectations for the show. So to celebrate, I decided to do something a little bit self-indulgent.
My plan right now is to put together a Q&A episode for the main feed. That's going to show up as one
of my bonus episodes on the podcast feed for everyone, so if you don't like Q&As,
it won't obstruct the normal flow of content. Right now, I'm shooting for the middle of July
for producing that episode. So if you want to ask some questions about the show, about specific
episodes or topics I've covered, or even ask questions about your enigmatic host, then go ahead and get in touch. You can email me at adventofcomputing at gmail.com,
or you can just shoot a tweet to me.
I'm going to pin a tweet to my timeline about the Q&A announcement.
And I'll be bringing this up again on all the shows leading up to the middle of July.
So you have some time to get some questions in.
Anyway, back to the matter at hand.
When I first started the draft outline for this episode,
I planned to talk about magnetic core memory,
which is a really interesting type of memory device in its own right.
My overall vision was to hit on the early days of delay line memory,
explain the issues with that technology,
and then dive into the ferrite-based solution
that starts popping up in the 1950s.
The only trouble with that is that delay line memory is a lot more interesting than I initially
thought.
And I really mean that.
Early delay line memory operated in some really out-there ways.
Even just one type of delay memory is enough to fill a whole
hour. So today we're going to be focusing on just mercury delay lines. That should sound weird and
believe me, we're going to get into just how weird this technology is. Besides just being flat-out interesting, the other big reason I want to
talk about computer memory is how deeply important it is to computing in general.
I mean, it should go without saying, but memory is crucial to how computers, well, work as computers.
You don't have memory, you don't really have a very useful machine. Counterexamples exist in the very early days of the field, but those tend to really prove my point.
Without memory, you don't get stored program computers.
You don't get high-speed data processing.
You don't even really get programming in any recognizable form.
My point is, memory matters.
What you do with a computer is, in large part,
determined by how memory functions and how you access it and just deal with storage. This is one of those topics where the devil really is in the details. That all being said, computer memory
has a surprisingly twisty and complicated history leading up to modern day.
It's a lot more variable than even the history of computer processors,
and it's a lot more variable than I expected going into this.
In general, once we hit the mid to late 1930s and get into Atanasoff, Stibitz, and
a host of other early pioneers, the logic design that would build computers in the future
becomes pretty set in stone. Everything is built up from logic gates at some level.
You just add a bunch of OR gates together, add in a NOT and an AND, and you have yourself a
binary adding circuit. You throw more logic gates into that, and you start building up a full-blown
digital computer. Almost 80 years later, and computers still follow this core pattern.
But memory? That has gone through a lot more permutations than you might expect.
The most recent and most successful approach is random access memory. That is, a fancy chip that
lets you read and write any location in
memory anytime you want. In fact, this type of memory is so successful that RAM and memory have
become synonyms. What other kind of memory could there be? Well, prepare to get a little bit
uncomfortable. Today, we're going to delve into the almost unnatural depths of sequential access
memory, specifically delay line memory. We'll be going back to a time when computer memory
operated on an entirely different paradigm, where some memory devices were even dangerous to human
life. RAM would eventually come in to save the day, but that makes this early sequential
period all the more fascinating. Before we dive into delay lines and before we really get very
far into the history of computing, I think it would serve us well to look at a really,
really early example of memory. I'm talking pre-computer stuff here. My main reason for going back so far is to provide
us with some context for how hard it is to actually deal with storing data, and why delay
lines were developed in the first place. Also, this would be a nice place to drop in some quick
definitions to keep everything on the right track. If you want to be pedantic about things, then we could just say that
a punch card, well, that counts as random access memory. And we're done. But that's not really what
I want to talk about, and not really the point of computer memory. In general, when I'm talking
about memory, I mean short-term storage intended for immediate use. It's a scratch pad used by a computer or other data processing
machine to keep data. In our modern conception of a machine, it's where programs are loaded for
execution. The other little detail that I want to bring up and address is the difference between
registers and memory. In general, processor registers are different than computer memory, but for our conversation
today, that won't always matter so much.
We're going to be spanning a number of decades and a number of different types of computing
systems, so the distinction between these two types of short-term storage won't always
be that clear.
Most of our conversation will be about the underlying technology, so we don't need
to be very worried about the differences between register storage and memory storage, at least
not for today. With all that out of the way, I have a bit of a project proposal for you.
I'm planning to make a really fancy particle meter. I already have a whole mess of radioactive samples in my closet that I need
to test, and I already have an electric alpha particle detector, but I need a way to count
everything up accurately. I'll give you a pile of vacuum tubes, and I want you to make me a circuit
that counts every time my detector goes off. Any ideas? No? Well, I can wait for a little bit. So, the easiest go-to for solving
this problem would probably be a series of binary counters using flip-flop circuits.
Each time a particle hits the detector, it produces a pulse of electricity. So, you feed
that into the first flip-flop. The circuit then switches on, going
from a value of 0 to 1. The next pulse that comes in from the detector flips the flop, turning it
from a 1 to a 0 and sending out an overflow signal. That carry pulse is fed into the next flip-flop,
changing it from a 0 to a 1. The chain would continue until you have enough binary digits to satisfy
your accuracy requirements, which, in my case, are very high. Gonna need a lot of flip-flops.
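As a quick illustration, here's that flip-flop chain as a little Python sketch. The names and structure are mine, a toy model rather than any real tube circuit.

```python
# Toy model of a binary ripple counter: each stage toggles on an
# incoming pulse and passes a carry along when it rolls over 1 -> 0.

class FlipFlop:
    def __init__(self):
        self.state = 0

    def pulse(self):
        """Toggle this stage; return True if it overflowed (1 -> 0)."""
        self.state ^= 1
        return self.state == 0

def count_pulses(num_stages, num_pulses):
    stages = [FlipFlop() for _ in range(num_stages)]
    for _ in range(num_pulses):       # one call per detected particle
        carry = True
        for stage in stages:
            if not carry:
                break
            carry = stage.pulse()     # ripple the carry up the chain
    # Read the chain out as a binary number, first stage = low bit.
    return sum(stage.state << i for i, stage in enumerate(stages))

print(count_pulses(num_stages=8, num_pulses=201))  # -> 201
```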
Now, that's how we'd probably try to tackle this problem today. That's with the benefit of decades
of working with binary numbering systems. In other words, this approach is influenced by the success
of digital circuitry. When we take a look at a historical approach, we find something a little
bit different taking place, but there's similar bones to the solution. What I've laid out, an
alpha particle detector and a big pile of vacuum tubes, is the situation that Charles Wynn-Williams was dealing with in the mid-1920s. Wynn-Williams was a physicist. During this period,
he was still earning his PhD while investigating the properties of nuclear materials. Crucially,
he was an experimentalist. That means that instead of jotting down notes and deriving equations,
Wynn-Williams was more commonly found in the lab fussing over instruments.
Most people outside the sciences probably don't give it a lot of thought,
but instrumentation is really the heart of experimental science. A lot of phenomena,
especially in a field like nuclear physics, just can't be observed with human senses. Radiation is a great example.
An alpha particle decaying off a sample of uranium is too small and too fast for us to see.
So to study alpha particles, we need specialized instruments. In some cases, you can just head down
to the nearest stockroom, grab whatever you need, and be done with it. But often, if you're on the
leading edge of a new field, or if you're just doing something particularly specific and nitpicky,
then you have to find a way to make your own devices. Wynn-Williams would find himself in the latter camp. During the 1920s, Wynn-Williams was developing increasingly sensitive alpha particle detectors. The trick
was to use the new wonder technology that was the thermionic valve, better known as the vacuum tube.
Contemporary alpha particle detectors only output a low voltage, which made it difficult to get an
accurate measure of this type of radiation. Wynn-Williams' approach involved using vacuum tubes to greatly amplify the current of his detector. Through diligent work, he was able to detect alpha particles better than anyone else in the business.
The reason this was possible is because vacuum tubes can be configured to act as analog amplifiers.
By driving a higher voltage over the input lead of a tube and then
tying the grid lead into the detector, it was possible to boost the incoming signal from the
alpha particle detector and really just get a more manageable output level. However, this led to a new
and exciting issue to tackle. How can you count all these alpha particles? Now, this doesn't
sound like a huge problem to us today. We have computers, we have analog-to-digital converters, and we have microcontrollers that cost literal cents apiece. Back during my college days,
I had some friends who were into experimental physics, and they were working in labs that were
literally just a pile of wires running between detectors and microcontrollers. We have wonderful
tools to deal with this type of work in the 21st century. We have all kinds of options for data
acquisition. We have computers. But in the early 20th century, we didn't have any of that. The existing option
was called a mechanical register. These were clockwork mechanisms. Turning a shaft would
advance a dial and keep track of a cumulative sum. You could drive the shaft with a motor,
thus allowing for electric signals to advance the register. The issue was, of course, these were
mechanical registers. These devices had to adhere to mechanical constraints. And one of the largest
constraints was, surprisingly enough, inertia. It takes time for a register to spin to the next
value. And once it gets spinning, it takes time for it to stop.
This means that there's a set limit to the speed of operation. If you detect too many alpha
particles too quickly, then your count becomes inaccurate. There were some existing solutions
to this register inertia issue. One wild fix was actually to use a spring to store rotational energy, something like
a purely mechanical data buffer. But the issue here was that most solutions were still mainly
analog and mechanical. So at a point, you always hit physical limitations. Finding no satisfying fix, Wynn-Williams set off in a radically new direction.
His solution would be one of the first examples of digital memory.
But of course, we have some caveats here.
Wynn-Williams published some earlier works on his vacuum tube device,
but here I'm drawing mostly on a paper that he wrote in 1932.
The other benefit is the paper has a fantastically sci-fi name.
It's called A Thyratron Scale of Two Automatic Counter. Now, thyratrons are, roughly speaking, a type of vacuum tube that can operate at high current. Technically, a thyratron is full of some inert gas, so it shouldn't really be called a vacuum tube, but there's not really an easier shorthand for it. I'll try to do my best to keep it clear. Another key difference is that a thyratron will arc, so you can actually tell when it's on because, you know, it starts to glow and crackle. Anyway, Wynn-Williams' scale of two
counter is one of the earliest forms of electronic digital memory and also really, really close to
being a binary memory store. The general idea behind his counter is simple. It slows down the
incoming signal from the detector. How it accomplishes
that is by storing intermediate data in a series of flip-flops. So at least in this part,
Wynn-Williams' solution is close to our earlier example. The input of the scale of two counter
feeds a flip-flop circuit made up of two thyratrons. The output of this flip-flop is then fed into another. That's fed into another circuit, and so on. The 1932 paper recommends the use of a series
of three flip-flops specifically. The whole point of this arrangement is that it divides up the
incoming electrical impulses. The first flip-flop will cycle through a flip and a flop once every two
incoming particles. The second circuit will cycle every four particles. The third only every eight
particles detected. On its own, this sounds like a simple binary counting circuit, but Wynn-Williams
pulls a pretty tricky play on us. Remember, we're dealing with
a time before binary math becomes mainstream. The output of this last stage counter is fed
into an electromechanical register. In this three-stage arrangement, this means that the
register would only tick up every eighth particle detected.
For Wynn-Williams, that was enough leeway that the mechanical part of the device could keep up.
To actually read the scale of two counter, you'd read off the register dial and multiply its value
by eight. The final part of the counting was reading off the glowing thyratron tubes as, well, a binary number. Adding the two values together, you get the final, precise alpha particle count. Can you spot the weird part here
yet? The counter is part binary and part decimal. The lower order of this value, in this case from 0 to 7, is stored as a binary number
in a register composed of binary flip-flops. The higher order, anything above 7, is stored
as a decimal value in a mechanical counter. That's weird.
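Here's the readout as a quick sketch, with a three-stage counter for concreteness. The function is my own illustration of the procedure, not anything from the 1932 paper.

```python
# Reading a three-stage Wynn-Williams counter: the mechanical dial
# counts groups of eight, while the glowing thyratron flip-flops
# hold the low three bits as a "scale of two" value.

def read_counter(dial_value, tube_states):
    """tube_states: [first, second, third] stage, 1 if lit."""
    low_bits = sum(bit << i for i, bit in enumerate(tube_states))
    return dial_value * 8 + low_bits

# Dial reads 25, first and third stages are glowing:
# 25 * 8 + 5 = 205 particles counted.
print(read_counter(25, [1, 0, 1]))
```

The other fun part is the Wynn-Williams paper doesn't mention binary by name.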
I've read it multiple times to be sure of that.
He talks about a, quote, scale of two notation, which is just binary but not called binary.
He even has little tables showing how to count it.
But in general, that's just in servicing the reading of
this lower part of the counter. It's almost an afterthought. One reason for this omission
may just have been that binary wasn't really in the mainstream consciousness yet, at least as far
as electronics go. The latter half of the 30s would see the first binary mathematical circuits built,
but up to that point, binary was mainly used in theoretical math, specifically number theory.
Wynn-Williams may just not have been exposed to much binary as an experimentalist. It's also possible that Wynn-Williams was familiar with binary but didn't think his readers would understand it without a more approachable name. Anyway, the other reason that he didn't dive deep into the binary side of
the counter is that it just wasn't all that practical on its own. From the paper, quote,
There is no theoretical limit to the number of units which could be employed. In practice,
however, whenever possible, it is more economical to use a mechanical meter than a thyratron unit. Only as many thyratron
units, therefore, should be employed as are necessary to ensure that the mechanical meter
can follow alternations of the arc in the final unit comfortably. End quote. So, Wynn-Williams was
thinking about using the binary part of his counter in a larger capacity, but only to serve
the old-school mechanical register at the other end. In the specific implementation that Wynn-Williams was using, it took two thyratron tubes to store one bit of data.
At two tubes a bit, this kind of storage wasn't exactly cheap or simple. Keep in mind that early
vacuum tubes were usually made by hand. Each tube is a beautiful piece of craftsmanship,
and they're not that cheap. In terms of raw data storage and cost per bit,
this isn't efficient at all. But hey, these are early days. As we hurtle towards the digital
computer, the Wynn-Williams counter comes along with us. And this brings us to some familiar faces.
During the 1930s, John Mauchly, while working as a professor at Ursinus College,
turns this half-digital, half-analog scale of two counter into a fully digital ring counter.
Now, this comes with a bit of chronological confusion. In 1945, Mauchly and another researcher, J. Presper Eckert, completed ENIAC. That's one of the handful of machines that claim the title of the first electronic digital computer.
Eckert claims in a number of papers that Mauchly invented the ring counter.
Or at least that he built the first fully digital version.
That's not entirely accurate.
Other sources show that Mauchly was inspired to build ring counters after reading about them in existing articles. So Mauchly isn't the original
inventor of this idea, but he is one of the key conduits that brings this earlier technology
into the realm of computing.
Sometime in the late 30s, Mauchly started experimenting with digital logic circuits. As a scientist himself, Mauchly was looking for a way to automate away tedious mathematical
work.
This eventually led to him tinkering with ring counters.
The key difference between the Wynn-Williams counter and a ring counter came
down to the tail end of the device. Ring counters didn't feed into some mechanical register.
Instead, the end of the circuit was just left hanging as a carry or overflow wire.
The general configuration of tubes also saw a change. The simple flip-flop configuration
used by Wynn-Williams was well-suited to binary
numbering systems. That should have just been left as is. I've talked about this a lot before,
but binary number systems are the natural representation of digital circuits. You have
on and off. That maps directly into a binary 1 and 0. Any other numbering system shouldn't
really be used with this kind of technology unless you have a really good reason for it.
But hey, that's me talking with generations of retrospect. A flip-flop also fits this number
system naturally. It has two states, on and off, one and zero. Chaining together flip-flops increases
the number of possible states by a power of two. It all works out really well. The math is right
there. But I guess that's my future binary-loving brain talking. And in general, it's not really
convenient for human use. By the time Mauchly was tinkering with ring counters,
the more common designs counted on a scale of 10,
as in the 10 fingers on our hands.
In other words, the device had been reshaped to work on decimal numbers.
This is a little brain-twisting for me because, well, digital is binary.
They're just synonymous in my head and most people's heads.
I've spent some time poring over circuits from this time period, and they still just seem really strange.
Basically, instead of using a normal flip-flop with two states,
a decimal ring counter used a circuit that had five possible states.
A normal two-state flip-flop was put on the input side of the circuit, so combined you get a system
that counts up to 10. Each incoming digital pulse ticks the counter up, and once you hit 10,
the output carry line gets a pulse and the counter rolls over to
zero. The upside is that you can now just read the value off in base 10 already. It's a nice number
that humans can easily understand. The downside is you get a much more complicated circuit.
I've seen examples from the 40s that have anywhere between 8 and 20 tubes plus a network of resistors and capacitors to coordinate and carry signals around.
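To show the counting logic, if not the tube count, here's a rough model of a decade counter in Python. It collapses all those resistors and capacitors into a few lines, so treat it as the idea rather than the circuit.

```python
# Rough model of a decade counter: a two-state flip-flop feeding a
# five-state ring gives ten states total. Every tenth pulse sends a
# carry out and rolls the counter back to zero.

class DecadeCounter:
    def __init__(self):
        self.binary = 0   # two-state input stage
        self.ring = 0     # five-state ring, values 0 through 4

    def pulse(self):
        """Advance one count; return True on rollover (the carry)."""
        self.binary ^= 1
        if self.binary == 0:                 # input stage overflowed...
            self.ring = (self.ring + 1) % 5  # ...so step the ring
            return self.ring == 0            # carry every ten pulses
        return False

    def value(self):
        return self.ring * 2 + self.binary   # read out as a decimal digit

counter = DecadeCounter()
carries = sum(counter.pulse() for _ in range(23))
print(carries, counter.value())  # 2 carries, counter reads 3
```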
All in all, this is a wonderful example of why binary is now so tightly wrapped up in digital circuits.
It's a lot simpler.
Anyway, that rant aside, when all is said and done, these ring counters were working memory devices that were capable of holding a single decimal digit.
That's not much, but it's a really tantalizing start.
Mauchly would tinker away with counters and digital circuits into the 1940s.
In 1941, the traditional story of ENIAC's creation plays out. Mauchly meets Eckert at the Moore School. The two strike up a long-lasting partnership, and as World War II
starts, the team joins minds to create one of the world's first digital electronic computers.
One of the goals of ENIAC was, simply put, to get a fast computational device up and running as quickly as possible. The computer's development was part of a really fast-paced phase in the U.S. war effort, so to get from design to machine quickly, corners were cut. Expedience was chosen over good design in a lot of cases. And so Mauchly's ring counters
became an easy choice for short-term data storage. The technology was already well-developed,
it was a known quantity, and someone on the team knew how to make it. So it just got shoved in as
part of the racks that became ENIAC. The point is, ENIAC does some weird things. Decimal counters,
called decade counters in the contemporary literature, are part of this weirdness.
Strictly speaking, ENIAC didn't have what you'd call memory. It wasn't a stored program computer.
It was programmed by wiring components together with patch cables. It didn't really have a place to store data on the fly,
but it did have racks of dials that could be used to click in constant values.
The closest thing to read-write memory was its bank of 20 accumulators.
We'd probably call these processor registers today,
but with ENIAC, we're decidedly in a gray area.
Each of these accumulators was composed of 10 ring counters.
The carry outputs of the counters were cascaded into the next counter, effectively building up a larger counting circuit.
The accumulators could each store 10 entire decimal digits.
It's a weird way to quantify things, but in total we're looking at 200 digits of storage.
I don't like how that sounds, but it is accurate.
This worked, kinda.
There were some major issues with ENIAC's implementation and just with the general idea of
using decade counters as computer memory. This was a really wasteful technology. ENIAC's counters
used 36 vacuum tubes per digit. In other words, 7,200 tubes were devoted just to this tiny amount of storage. And just as a reminder,
these are true thermionic vacuum tubes, not the arcing thyratrons used by Wynn-Williams.
The principle of operation for vacuum tubes relies, and I mean that, it relies on heating
an element to a point where electron emissions occur. That takes a lot of power, and it generates
a lot of radiant heat. Heating and power were already well-known issues with vacuum tube
computers, even in these really early days. It's something that should shake out in initial tests.
ENIAC was housed in a custom-built room with specialized ventilation to accommodate this,
but there wasn't an easy solution around for power consumption.
You just had to draw really heavily on the community's power grid.
Just for fun, I went ahead and worked out exactly how much power these accumulators were using.
This is a rough estimate, but it should drive home the point that vacuum tubes for memory
posed a major issue. A running ENIAC consumed about 150 kilowatts of power. That's to drive
18,000 total vacuum tubes of varying types and a bunch of other ancillary components.
For now, let's just say that this is all devoted to vacuum
tubes and all the tubes are the same. It's not entirely accurate, but it'll do for back of the
notepad calculations. 40% of those tubes are dedicated to ring counters, so we get an energy budget of 60 kilowatts for 200 digits of memory. We can break that down a little further. It works out to about 300 watts of power per digit, or roughly 3 kilowatts per 10-digit accumulator.
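Here's that back-of-the-notepad math spelled out, with the same simplifying assumptions baked in.

```python
# Rough ENIAC memory power budget, assuming all 150 kW goes to the
# tubes and every tube draws the same power. Both are simplifications.

total_power_kw = 150
total_tubes = 18_000
counter_tubes = 7_200      # 36 tubes per digit times 200 digits
digits = 200

counter_share = counter_tubes / total_tubes        # 0.4
memory_power_kw = total_power_kw * counter_share   # 60 kW
watts_per_digit = memory_power_kw * 1000 / digits  # 300 W

print(f"{memory_power_kw:.0f} kW for memory, {watts_per_digit:.0f} W per digit")
```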
Now, we are dealing with the first attempt at a digital computer,
but that should be a cause for concern.
If power consumption ended up scaling with memory size,
well, you'd get to a point where a computer just wouldn't be practical to operate.
You'd need an entire power plant just to crunch numbers,
and that's not sustainable in any form or fashion.
So what's an aspiring computer scientist to do?
There were options, and they turned out to be a lot more radical than you may first expect.
The one technology that would win out, at least for a short period,
was composed primarily of liquid mercury.
Even before ENIAC was functioning, Eckert, Mauchly, and their co-conspirators were plotting a better computer.
The system was called EDVAC, the Electronic Discrete Variable Automatic Computer. Planned to be built after wartime, the design of EDVAC was unhindered by the time constraints imposed on
ENIAC. One of the big changes was memory. The first draft of the report on EDVAC, the earliest paper we have describing the machine,
states over and over how crucial it is to have a large amount of fast memory storage. A big reason,
besides convenience, was that EDVAC was designed as a stored program computer.
Code and the data that it operated on were going to live in the same memory space.
In other words, to run a program, you had to store it in memory.
Just as an aside, today we call this the von Neumann architecture due to a little quirk
of history that happened around this time.
The first draft report on EDVAC was circulated before it was completed.
John von Neumann happened to write this draft, but he was only a small part of the larger team that made the actual EDVAC design.
The draft wasn't meant for public consumption yet, it was incomplete, and one of the places
that was left lacking was the byline on the cover sheet. It only had von Neumann's name on it
because, well, he wrote the draft. It wasn't done. It wasn't ready to share. But it got out,
people read the paper, assumed it was a Johnny von Neumann original, and the name stuck to the
design. This should more accurately be called something like the EDVAC architecture, but von Neumann is still the common name today.
Anyway, this architecture has some major advantages over earlier computers.
Broadly speaking, anytime you can generalize an approach, you should.
Treating code and data the same, putting them both in the same memory storage locations, is one of these smart
generalizations. It makes computer design simpler, and it can allow for some fun, if dangerous,
programming tricks. The downside is that systems designed around the von Neumann architecture
are more hungry for memory than other systems. You have to have enough space for your code plus your data,
and it all has to be in the same device or set of devices. This might be a bit more of my modern
preconceptions creeping in, but you probably want some amount of random access to get the most out
of a von Neumann-style computer. Code tends to branch around a lot. An instruction can
literally just say, hey, go execute another instruction at some location in memory. On a
machine like ENIAC, this was handled with patch cables, literally moving data signals around the
computer. But for a stored program computer to execute a jump, you need the next instruction from some discrete location in memory.
If you have to read out into the middle of a ring counter to get the instructions, then you're going to have performance problems.
The bottom line is, for more capable computers, you need more capable memory.
But this next jump is where things start to get messy.
There were a handful of solutions floating around in the mid-1940s. Some were more feasible and more reasonable than others. Eckert and Mauchly ended up going with a mercury delay line, a technology with some murky origins.
Well, it doesn't have a lot of primary sources, which to me makes it murky. For this
story, I'm drawing on an article titled Mercury Delay Line Memory Using a Pulse Rate of Several
Megacycles. It was co-authored by a number of researchers, among them J. Presper Eckert. It
was published in 1949. As I've already mentioned, I've run into some factual issues in Eckert's writing before,
so I have concerns leaning too heavily on him as a secondary source. The article explains that
early delay lines were already in use in analog televisions as early as 1940. Shortly after that,
the first acoustic delay lines were developed at Bell Labs by William
Shockley. That's the same William Shockley that would be part of the team that developed the first
transistor later on. During the early 40s, Shockley was working to improve radar systems for the US
war effort. Specifically, he was trying to tackle the problem of clutter. This phenomenon occurs when a radar system picks up stationary objects like trees and buildings.
These objects reflect radar, so they show up as stationary blips on the screen.
Usually, a radar operator isn't trying to track incoming trees, so clutter ends up being a bit of an issue.
The method that Shockley arrived at was delay line memory.
The general idea was to split the incoming radar data into two separate signals. One would be
delayed for a cycle, that's the amount of time it took for a radar dish to make a new sweep and get
some data. Then the delayed signal would be compared to the next incoming dataset. Any blip that was present in the delayed
signal and the live signal wasn't moving, so it could be discarded as clutter. It's rudimentary,
but it would clean up radar screens pretty well. The only issue is, how do you reliably delay an
electronic signal without degrading it? This gets into some fun physics territory.
An electric signal will propagate down a conductor really quickly. We're talking close to the speed
of light. Some conductors propagate more slowly, but we're still in the range of fractions of the
speed of light. To complicate matters, as energy flows down a long cable, it will dissipate.
So, after a certain point, you lose your signal.
Now, that's enough delay that maybe you could cause some weird issues if you have a really long transmission cable,
but it's not enough to delay a signal for an entire radar sweep without losing it.
Shockley needed a way to delay a signal for a few hundred milliseconds,
and to do so in a precise and, importantly, predictable manner. After tinkering, how much
tinkering we just don't know, Shockley settled on a solution. To slow down the signal, he would turn
it into an acoustic wave. This is where the fun really starts. You see, the speed of sound is much slower than the
speed of light. We're in the ballpark of hundreds or thousands of meters per second depending on
the medium. That's slow enough that you could, in theory at least, construct a fancy sound conductor
that could delay a signal just long enough for a radar dish to make another sweep. And sound waves travel slow enough
that the device wouldn't need to be all that large. Now, Eckert doesn't exactly explain how
Shockley implemented these first acoustic delay lines. The research was done inside Bell Labs
during World War II. If there are records, they're tucked away somewhere that's not quickly
accessible to me.
I've tried searching through already FOIA-released documents for references, but drew a blank.
There may be some notes in a collection of Shockley's papers at the Stanford Library,
and there may also be some FOIA requests that could be made,
but those routes would both take a while to get results.
Maybe I'll put in some requests eventually,
but for right now, let's just leave this as a little bit of a gap in the timeline.
What we do know is that by 1943,
Shockley had developed an acoustic delay line system that could be used in radar decluttering.
That year, one J. Presper Eckert would take the reins and progress the project further.
From Eckert's description, Shockley's initial delay lines were crude, noisy, and required a lot of fine-tuning to work.
So the path forward was clear. Develop the technology into a more reliable form.
This is where we get the actual implementation details, and, well, this is where the acoustic delay line starts to get wild.
Here's the general rundown of Eckert's formulation.
The bulk of the device was composed of a long, slender metal tube.
Each end of the tube is capped and sealed with a quartz crystal.
Each of those crystals is connected up to a wire.
Inside the tube is mercury. Yes, that somewhat unsafe liquid metal. Send an electric impulse
into the crystal on one end of the tube, and a short time later, that same impulse would come
out the other end. Delayed in time, as if by magic. If this sounds totally out there, like some alien technology,
then that's good. That puts us all at the same starting point. So let's try to break past this
apparent magic and understand what this thing is. The three components I mention each serve their
own purpose. Let's start with the quartz crystals. These function
as transducers through the piezoelectric effect. Quartz, when grown and cut and mounted in just
the right way, can be made to generate a small amount of electricity when subjected to pressure
waves. This also goes the other way. You can send a jolt of electricity into one of these crystals and it will jiggle around a little, creating a pressure wave.
In the delay line, these crystals are being used to turn an electric signal into an acoustic wave on one end.
Then on the other end, it turns an acoustic wave into an electric signal.
Then we get to the star player, the mercury. This is the medium that
the all-important acoustic waves travel through. The choice of a liquid here is key. In a solid,
acoustic waves tend to bounce around and lose focus. Any facet of the solid or any change in
the solid's density can cause scattering or weaken the signal. You also have
surface effects that you have to take into account. But in a liquid, specifically in a highly purified
liquid, there really aren't any surfaces to bounce off of. By encasing the liquid inside a metal tube,
the third secret ingredient, you can eliminate surface effects. You don't get a rippling surface if you just don't have
an exposed surface. The other benefit is that you can refine and eliminate any impurities in
the liquid with relative ease. That means there isn't anything in the tube for sound waves to
interact with besides just the medium and eventually a quartz crystal. You should be
able to tell that control is really the name of
the game here. Mercury ends up working really well in the setup because it can be easily refined to
high purity by distillation. Eckert also writes that mercury, quote, can readily be matched to
transducers such as quartz crystals. Now, I'm not entirely sure what he means by that. My best guess is that
it has to do with mercury and quartz having some similar harmonics or acoustic couplings.
I'm not a material scientist, so I'm not entirely sure of the fine points here. Anyway,
the result of Eckert's work is that by 1943, he has a pretty reliable mercury delay line. Signals don't degrade
too much as they go through the line, and timing can be precisely tuned in. However, the choice of
mercury does lead to an interesting little wrinkle. You see, mercury's density is dependent on its
temperature. The hotter it gets, the more it expands, and the more it expands, the lower its density. This matters because the speed of sound in a medium is related to
that medium's density. Following the dimensional analysis train, we find that a delay line's delay
is dependent on the temperature of its mercury medium. So to function, a delay line has to be
kept at a constant temperature. Not the end of
the world, but this is a quirk that we should keep note of. The development of delay line technology
would have to be tabled when Project PX, that's the effort that led to ENIAC, was started in June
of 1943. One of the considerations was, of course, memory. The team ended up going with ring counters to save time,
but the idea of using mercury delay lines was apparently floated. No pun intended. Eckert
didn't get back to the technology until the end of World War II. I guess this is as good a time
as any to stop and ask, what does a delay line actually have to do with memory? How does a delay equate to storage?
To try and explain this, let's think about how a program uses a computer's memory today.
Let's say you have some variable x.
When you define the variable in your language of choice, a chunk of memory will be allocated
to hold its data.
When you set x to some value, you're changing what's stored in
that specific location in memory. Then, well, x just kind of sits around and waits until you need
to access it again. When you get back to x, you expect it to have the same value you assigned it,
but you don't really know how long it will be until you need to go visit X again.
A ring counter plus some extra hardware for figuring out addressing will do this just fine.
Once you set a value, the counter just holds that value as long as you want.
As long as there's power, it'll just be sitting in that state.
In the case of a delay line, the value you put in is only held for a set amount of time,
then it's delivered to the other end, whether you're ready or not.
That may sound somewhat useless, but it's actually half of the functionality you need
for full computer memory.
To complete the picture, you just need a way to control when that value gets read out.
The earliest solution to this problem came sometime around 1945, just as ENIAC was entering service. This brings us back to EDVAC. While the computer
wouldn't be operational until 1951, there was progress being made through the latter half of
the 40s. During this period, Eckert and Mauchly cracked the final piece of the puzzle to get delay
lines functioning as computer memory. The first progress report on EDVAC, written in 1945,
describes its memory system like this, quote,
The memory device uses a delay line as a serial storage device and regenerates the signal pattern
so that virtually unlimited storage times may be obtained.
It is essential that signal patterns stored are digital in nature.
End quote.
There are really two pieces that complete this mad machine.
The first is pretty obvious given the context.
You have to restrict the delay line to only storing digital pulses. You don't get
analog, you don't really get decimal numbers, you get binary signals. That's important because,
you know, this is being hooked up to a digital binary computer, but there's something more
subtle at play. The other trick is the signal regenerator component. This is a circuit that
turns the delay line into something like a ring, or a mercury-filled ouroboros if you like.
The output of the delay line is fed into this regeneration circuit. It gets amplified and then
put through this thing called a pulse reshaper. That part of the circuit takes a binary signal, boosts it,
and outputs a new, clean binary pulse. The idea here is that if the signal was degraded while
traveling through the delay line, it may need to be turned back into a crisp square wave.
Then, after the regeneration circuit does its job, the pulses are directed to the input side of the mercury delay line.
The overall regenerator circuit also handles keeping time synced up with the computer and exposes input and output buses.
By giving the signal a little bit of help and wiring things up in a circle, you get some functioning memory.
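Here's a toy model of that recirculating loop in Python. It abstracts the analog details into discrete time slots, and all the names are mine, not anything out of the EDVAC reports.

```python
from collections import deque

# Toy recirculating delay line: bits march through the "mercury" one
# slot per tick, and the regenerator feeds the far end back into the
# input, reshaping every pulse into a clean 0 or 1 along the way.

class DelayLineMemory:
    def __init__(self, num_slots):
        self.line = deque([0] * num_slots)

    def tick(self, write_bit=None):
        """Advance one pulse time. Passing write_bit intercepts the
        recirculating pulse, which is how a write happens."""
        bit = self.line.popleft()        # pulse exits the far end
        regenerated = 1 if bit else 0    # the pulse reshaper
        if write_bit is not None:
            regenerated = write_bit      # overwrite on the way around
        self.line.append(regenerated)    # feed back into the line
        return bit                       # what the computer can read

memory = DelayLineMemory(num_slots=8)
for bit in [1, 0, 1, 1, 0, 0, 0, 1]:      # write a pattern, one per tick
    memory.tick(write_bit=bit)
print([memory.tick() for _ in range(8)])  # [1, 0, 1, 1, 0, 0, 0, 1]
```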
Once you've set a value, the circuit holds it as long as it has power.
Then you can read off the value whenever you want, or even change it. The caveat being, this type of memory uses a
serial device, so there are wait times for accessing certain bits. So, are you a little
bit uncomfortable now? I know I am. Mercury delay line memory is, frankly, weird. I hope I've
explained the general principle of operation well enough that we have a grasp on the technology.
To complete the picture, we're going to see how it gets implemented, used, and where it starts to
break down. Yet another fun quirk of the history here is that despite being developed for EDVAC,
I'm pretty sure that the first practical use of mercury delay line memory happens outside EDVAC.
The key reason is because the project suffered from some pretty hefty delays.
One big issue was that Eckert and Mauchly, two of the key designers on the team,
left to form their own computer company in 1946.
So while we have some early documentation as to the design of EDVAC's memory, we don't have a
working EDVAC for a number of years. The first working machine to use mercury delay line memory,
as near as I can tell, was EDSAC, the Electronic Delay Storage Automatic Calculator. What's
interesting, at least what I find interesting in a funny sort of way, is that EDSAC was based off
the leaked EDVAC report. The project started in 1947, and just two years later, the machine sprang to life in a lab at Cambridge University.
For EDSAC's design, and really the design of all computers made after 1945, memory was key. Delay storage is even right there in EDSAC's name.
This is even further emphasized by a 1948 article about the computer written by two of its designers. Well, I may have been a little unspecific. The article is just called, quote, An Ultrasonic Memory Unit, and it's only about EDSAC's memory. So yeah, memory is
starting to really matter in a big way in this period. This also means that EDSAC's team is the
first to really get down and dirty with the new technology. It was up to the researchers at
Cambridge to figure out how mercury could hold data in any practical sense.
EDSAC had two different types of delay line memory. It had short tubes, and it had long tubes.
The amount of data a delay line can store is based off its length, as well as how fast you
send pulses down the line. In practice, there are upper limits to pulse speed before waves start to break down,
and a computer usually has to run everything at the same speed for convenience's sake.
So, all things being equal, we can just say that length equals storage.
EDSAC's short tubes only stored a single 18-bit word, but there was less of an access delay. These were used as the machine's registers. The long tubes, on the other end of the spectrum, were 5-foot-long slender columns
of mercury and steel. Each tube could hold 576 binary pulses at a time. After discounting some
spacing pulses, this equates to about 16 words per tube.
I know, we're gonna be in weird units for this part.
EDSAC was an 18-bit machine, so we run into this case where we can't directly compare
this to modern storage numbers.
Each of these delay lines could hold something roughly equivalent to 36 modern 8-bit bytes.
Think of it as roughly 36 characters of data stored in 5 feet of mercury.
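We can sanity-check those numbers with a quick calculation. The speed of sound in warm mercury is around 1,450 meters per second, and EDSAC's pulse rate was in the neighborhood of 500 kilohertz; both figures here are my ballpark estimates, not numbers pulled from the EDSAC papers.

```python
# Ballpark check on a long tank: how many pulses fit in five feet of
# mercury? Both physical figures below are rough estimates.

tube_length_m = 5 * 0.3048      # five feet, in meters
speed_of_sound = 1_450          # m/s in warm mercury, approximate
pulse_rate_hz = 500_000         # pulses per second, approximate

delay_s = tube_length_m / speed_of_sound      # ~1.05 ms end to end
pulses_in_flight = delay_s * pulse_rate_hz    # ~525 pulses

print(f"{delay_s * 1000:.2f} ms delay, ~{pulses_in_flight:.0f} pulses in flight")
```

That lands in the same neighborhood as the 576 pulses quoted above, which is about as close as a back-of-the-notepad estimate gets.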
That might sound like pretty bad data density.
I think it works out to about a quarter inch of mercury per bit.
But compare that to the storage density of vacuum tube-based memory,
where you need one or two tubes per bit, and we can start to see that there are gains here.
But of course, one tube on its own isn't very useful. EDSAC initially used 32 delay lines,
grouped into banks of 16 tubes for convenience. A few years into operation,
this was doubled to 64 total delay tubes. That comes out to a maximum of 1,024 18-bit words of
memory. Which, really, that's nothing to sneeze at. That's a lot of storage. Each word was totally
addressable, so a programmer could read from or write to any place in memory.
From that perspective, we're looking at fully random access memory.
Or at least it seems that way.
Under the hood, it gets more complicated.
In latter years, computer scientists would come up with some tricks to get the most out of delay lines,
but EDSAC was the technology's first full-scale outing. So we have the simplest addressing scheme possible. The lowest address was just the first packet of pulses in the first delay tube. The first tube, wherever it was positioned in the larger bank of mercury, held the first 16 addresses. To get to an address
beyond that, you need to go read from another tube somewhere else in the rack. That means that each
address in memory has a physical location, that's where the tube is actually stored in the rack,
but it also has a temporal location, that is, where the pulse is stored inside the tube,
so how long you have to wait for that pulse to come out. That final piece is where we break from fully random access memory.
You might get lucky and the address you want is about to pop out of the correct delay line,
or you might have to wait around for it. We're talking a difference of a few hundred microseconds. To an operator, to a human, that amount of time isn't recognizable. But it does mean that memory access
takes an unpredictable amount of time. So we're close to RAM, but there's still this key difference.
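A small sketch makes that two-part address concrete. The word time here is illustrative, 18 bits plus spacing at a few hundred kilohertz, not a documented EDSAC figure.

```python
# An address names a tube (physical location) and a slot within that
# tube (temporal location). The wait depends on where the word happens
# to be in its trip through the mercury right now.

WORDS_PER_TUBE = 16

def access_wait(address, current_slot, word_time_us=36):
    """word_time_us is an illustrative figure, not a documented one."""
    tube = address // WORDS_PER_TUBE        # which tank to listen to
    slot = address % WORDS_PER_TUBE         # where in the cycle it sits
    wait_slots = (slot - current_slot) % WORDS_PER_TUBE
    return tube, wait_slots * word_time_us  # (tube, wait in microseconds)

# Address 37 lives in tube 2, slot 5. If slot 12 is emerging right now,
# we wait 9 word times, a few hundred microseconds, for it to come around.
print(access_wait(37, current_slot=12))  # (2, 324)
```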
This may make it sound like mercury delay lines were, all things considered, a pretty tame technology once implemented.
Maybe a programmer could even forget about the underlying weirdness.
But don't be fooled.
Just because mercury could be made addressable like later computer memory and operate almost like random access memory doesn't mean we're dealing with a stable or even a simple solution.
The 1948 Ultrasonic Memory Paper makes this really clear.
While delay line technology existed, it still took a whole lot of work to get it running for computers.
The paper goes into excruciating detail about how the delay lines had to be engineered and constructed to be reliable and predictable.
Just for a fun taste of the text, it has this to say about the acoustic medium.
It was at first thought that the use of commercial mercury from which the bulk of oxidizable impurities has been removed by a chemical method, would give satisfactory results.
Experience has shown, however, that the velocity of ultrasonic waves
in different samples of mercury prepared in this way can differ by as much as 0.1%.
The effect is presumably due to remaining impurities. End quote. So much for just grabbing off-the-shelf mercury. Impurities made that too unpredictable.
And by too unpredictable, I mean a variance of 0.1%.
This is a precision device.
Instead, mercury had to be specifically prepared for delay lines to minimize any variance. The paper describes how tubes had to be filled in just the right way,
in stages with multiple fluids, just to make sure that the quartz transducers were in good
contact with the mercury. If you were careless,
bubbles or thin films could form, preventing good acoustic coupling between the medium and
transducers. Just as another fun example, let's look at the materials used to construct these
delay lines. You see, mercury is a particularly interesting kind of metal. It has a tendency to form amalgams with other certain metals,
and still other metals, it just corrodes.
So you have to be careful about what comes into contact with it.
The main body of EDSAC's delay lines was made using steel,
which doesn't react with mercury under normal conditions.
The end fittings were made from brass, which does in fact react with mercury.
So to keep things from degrading, a series of gaskets and rings were used to ensure mercury-tight seals.
Even with precision engineering, mercury delay lines still failed.
David Wheeler, one of the many students turned computer scientists who worked
on EDSAC, recalled, quote, at the most, the EDSAC had probably 512 words, and most of it, or
certainly some of its life, it had far fewer than that. I think it also had a variable available
number every day, which doesn't usually appear in documents, but it was typical of the things that happened. He continues,
Some of the mercury delay lines developed troubles, and basically, the engineers,
instead of repairing them immediately, would rearrange them. So today, the store is 448 words,
tomorrow it's 512. End quote. Failures shouldn't be all that surprising. As I always harp on, mechanical
devices aren't super reliable. And sure, a delay line doesn't have any gears or shafts and it only
carries digital pulses, but it still relies on mechanical changes in a medium. To top it all off,
mercury delay lines were delicate and complex devices. A complicating factor was the temperature sensitivity
of mercury. See, I told you we'd get back to this. For the delay lines to function, the mercury had
to be kept at a constant 40 degrees Celsius. That's 104 degrees Fahrenheit. This was maintained
using heaters controlled via thermostats. There was some wiggle room, but the tubes had to stay right around 40 degrees Celsius.
Too far off, and the speed of the acoustic waves would change.
Thus, timing would fail to line up, and the memory wouldn't work.
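To put a rough number on that, here's a little drift calculation. The temperature coefficient is an illustrative figure of my choosing, not a value from the EDSAC papers, but it shows why a few degrees matter.

```python
# How fast does timing drift with temperature? Assume the speed of
# sound in mercury shifts by roughly 0.3 m/s per degree Celsius; that
# coefficient is illustrative, not a documented value.

speed_at_40c = 1_450    # m/s, approximate
coeff = 0.3             # m/s change per degree C, illustrative
pulses_in_tube = 576

for delta_t in [1, 2, 5]:
    fractional_change = (coeff * delta_t) / speed_at_40c
    slip = fractional_change * pulses_in_tube   # drift, in pulse slots
    print(f"{delta_t} degree(s) off -> timing slips ~{slip:.2f} pulse slots")
```

A fraction of a pulse slot per degree adds up fast; drift far enough and bits start landing in their neighbors' time windows.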
On its own, that's just an annoying engineering complication.
But combined with reliability issues and the properties of mercury,
we run into something a little bit worse than that.
Early computers had to be run pretty hard. These were research machines, and really valuable research machines.
When EDSAC came online, there were maybe a dozen computers in the world. Downtime on EDSAC could very literally set back the field of
computer science. So the computer had to be serviced hot. Now, I don't mean that as a euphemism.
If a component broke down, then it had to be replaced before the machine could cool down.
I haven't seen exact numbers, but it stands to reason that it took time for EDSAC's memory tanks to reach operating temperature. The memory heaters had to warm up EDSAC's tubes by about 20 degrees above room temperature before they could come online.
If a vacuum tube burnt out, it could just be easily replaced.
Vacuum tube sockets aren't hard to deal with.
Sure, the glass bulb could get pretty hot, but just grab a towel
and you're good. But delay lines offered their own special form of difficulty. Remember, these are
five-foot-long tubes of mercury. I know I'm doing a lot of dubious math this episode, but bear with
me one more time. From the dimensions of EDSAC's delay lines and the density of mercury, these
tubes would have weighed approximately 25 pounds. And when operating, they're in the neighborhood of
100 degrees Fahrenheit, so maybe not scorching, but they're warm to the touch. As Wheeler alluded
to, you couldn't just pull out a tube and replace it with an extra. I suspect there weren't really extra delay lines lying around.
Instead, a technician had to rearrange EDSAC's delay lines
to make sure at least some continuous memory remained functional.
So, you know, all you have to do is quickly wrestle around some hot 25-pound tubes
before the computer cools down too much.
It might not have been an insurmountable task, but it doesn't sound comfortable or simple.
Of course, the elephant in the room that I haven't been addressing is mercury's effects
on humans. Elemental mercury isn't normally that dangerous if you handle it carefully.
Compounds of mercury, like mercury salts, can be absorbed and cause mercury poisoning,
which is deadly.
And delay lines being sealed means that, in theory, no computer operator should ever have
been exposed to mercury.
That being said, hot mercury poses a very specific risk.
You see, mercury likes to produce fumes.
The boiling point of mercury is pretty high.
It's in the realm of hundreds of degrees.
But small amounts of mercury will vaporize even at room temperature.
As it warms up, that becomes larger amounts.
If a gasket failed on one of these delay lines, or a brass end cap came into contact with
mercury and corroded, vapors from hot mercury could start to leak into the room.
And while elemental mercury may be reasonably safe for humans, its vapor is deadly.
I haven't read of any poisonings specifically around early computers, but
it's one of those things that must have always been a clear and present danger.
Alright, that does it for our dive into the strange early days of mercury-based computer
memory. As we've seen, the development of memory wasn't a
straightforward affair. It was messy and it was difficult. Mercury delay lines are just one part
of this story. Even as researchers at Cambridge were grinding quartz transducers, other labs
were developing alternatives. As exciting as mercury memory systems were, they only existed for a brief
moment in time. At most, and I've tried to count this precisely but it gets a little fuzzy, I think there were around 100 computers built using mercury delay memory. This was over the
space of 5 to 6 years. These were some of the earliest vacuum tube computers, but not all of this first generation of systems survived off mercury.
This is going to be one of those topics that I need to come back to probably multiple times.
Even with all the complications and confusion surrounding delay line memory, this episode
has only covered a small part of the
story. Ferrite core memory, magnetic drums, specialized phosphor tubes, well, they were all
contenders in the same space around the same time. The point of this episode is that memory was a
messy and diverse business, and mercury delay lines are just scratching at the surface of the larger story.
The best way I can think to end this episode is with a story that I couldn't really fit in
anywhere else. This comes from an ACM Turing lecture delivered by Maurice Wilkes. He was the primary designer behind EDSAC. Even in the early phases of EDSAC, there were concerns about mercury delay memory.
As Wilkes recalled,
Turing's contribution to this discussion was to advocate the use of gin,
which he said contained alcohol and water
in just the right proportions to give a zero temperature coefficient of propagation velocity
at room temperature. End quote. Would gin have worked in delay lines? Would that have been a
viable option? Probably not, but I think this goes to show that even as the first mercury-based memory systems were being constructed,
the field was still very much in a state of flux.
Like I keep saying, there's no direct road from the first computer memory to the chips we use today.
Thanks for listening to Advent of Computing.
I'll be back in two weeks with another episode covering the
story of the computer. And hey, if you like the show, there are now a few ways you can support it.
If you know someone else who would like the show, then why not take a minute to share it with them?
You can also rate and review on Apple Podcasts. And if you want to be a super fan, then you can
support the show directly through Advent of Computing merch or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode,
then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.