Advent of Computing - Episode 31 - Road to Transistors: Part I
Episode Date: May 31, 2020

The transistor changed the world. It made small, complex, and cheap computing possible. But it wasn't the first attempt to crack the case. There is a long and strange lineage of similar devices leading up to the transistor. In this episode we take a look at two of those devices. First the vacuum tube, one of the first components that made computing possible. Then the cryotron, the first device purpose built for computers.

You can find the full audio of Atanasoff's talk here: https://www.youtube.com/watch?v=Yxrcp1QSPvw

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:
1880: Thomas Edison Rediscovers Thermionic Emission
1904: Ambrose Fleming Invents the Vacuum Tube
1906: Lee de Forest Patents the Audion Triode Tube
1937: George Stibitz Creates First Binary Adding Circuit from Spare Relays
1938: John Atanasoff Visits a 'Honkey-Tonk'
1941: ABC, First Vacuum Tube Calculator, is Completed
1953: Cryotron Invented by Dudley Allen Buck
Transcript
What exactly is a computer?
Now, as it turns out,
that's a little bit more of a tricky question to answer
than it appears on the surface.
Computers have changed considerably over time,
so it's hard to pin down a definition
that encompasses every phase of their development.
Instead of trying to answer that question all at once,
I think it's a lot easier and a lot more sane
to try to break it up into smaller
chunks. And I think the best place to start would be by looking at a computer's most basic building
blocks. In essence, what's a computer made out of? In the modern era, we're familiar with
microprocessors and silicon, machines built using mind-numbingly complex integrated circuits.
But despite their complexity, computers nowadays
are based off a very simple component. That's the transistor. A modern chip can have billions
of microscopic transistors all working in concert. Sure, when put together, it's a complex and
powerful device. But on its own, a transistor's function is incredibly basic. The whole really becomes greater than the sum of its parts.
First striking it big in the 1950s, a transistor is just a really fancy switch.
Each transistor has three legs, usually called the collector, base, and emitter.
Current will only flow in one direction, from collector to emitter, and only if there's
also a charge on the base leg.
It's a very specific function, so niche-sounding that it's easy to overlook on a bill of components.
Transistors are this interesting combination. It's a very simple device, but it can be used
to create very complex outcomes. But here's the thing. The transistor is only the most recent
in a long line of electronic switches.
And the road up to the transistor has some very interesting avenues.
The transistor has become the core building block of the computer.
But its predecessors once filled that same role.
From the relay to the vacuum tube and even the mysterious cryotron.
There's a long lineage that leads us up to more modern devices.
So, let's take a look at what built up the computer to what we can recognize today.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 31,
Road to Transistors, part one. Today, we'll be going through the lineage that leads up to the
creation of the transistor. I've been wanting to do an episode on this technology for quite some
time, but it's taken me a little bit to figure out the best way to present it. The transistor
is both exceptionally interesting and exceptionally
important. It would be easy to fill up an hour or more just talking about its development and
getting deep into the physics and engineering behind its creation. And while the how of the
transistor is important, I think it's the why questions that are a little bit more valuable to
us. Why does the transistor actually matter in the grand scale of the story of computing?
Why invent it in the first place?
Why use transistors in computing?
Why do they fit?
And why hasn't something better come along yet?
And to answer any of those questions properly,
we're going to need some context.
Well, actually, we're going to need a lot of context.
Now, I tried really hard to fit this all into one episode, believe me, but I got to the
point where there's just too much to cover.
So instead of piling everything into a single, long episode, the discussion will be broken
up into two.
Today, we're going to go through some of the early roots of computing and get down to the
brass tacks of what made early systems work, starting with some of the first logic gates and leading up to the creation and adoption of the vacuum tube.
Then we'll take a look at the cryotron, a stunning technology that could have very well become as
big as the transistor, but it would ultimately fade into obscurity. This will give us a solid
foundation for the next episode and the context needed to understand why the transistor has become so important.
The first technology on our tour today is the venerable vacuum tube.
If you haven't seen one before, then look no further than this podcast's logo.
The vacuum tube's been part of my logo since pretty early on.
This device is most commonly called a vacuum tube,
but in some regions it's also called a valve. However, its proper scientific name is the thermionic
emission tube. Whatever you call it, today it's totally obsolete,
but for a good 50 or so years it was a really hot commodity. The first vacuum tube was invented in 1904
by John Ambrose Fleming, an electrical engineer working in England. But this first example wasn't all that similar to
its descendants. Fleming had a pretty busy life around the turn of the century. He worked as a
university professor and did some engineering consulting on the side. This included some time
consulting for the Edison Electric Light
Company and later the Marconi Wireless Telegraph Company. It was while working with radio systems
at Marconi that Fleming struck upon his big idea. At the time, there were some major issues with
radio receivers. Early radios, it turns out, were just plain unreliable devices. To send a radio signal, you need to first turn a sound wave into a radio wave.
The most basic way to do this is a scheme called amplitude modulation, or AM; it's
where we get the name for AM radio.
In this scheme, you have some carrier frequency, say 1500 kHz.
So you have to generate a wave at that frequency and then change the
amplitude over time, or modulate it based on the audio you want to send. On the other end,
a receiver needs to listen for the wave at your carrier frequency and then demodulate its amplitude
to turn the wave from radio back into sound. The device that does all the heavy lifting of turning modulated waves into
sound is called a detector, and at the turn of the 20th century, every option just kinda sucked.
In general, a detector is able to pull radio waves or radio signals from the air,
turn them into electrical impulses, and rectify them into a usable audio signal.
On its own, a rectifier can't do much, but with a few other components such as an antenna,
you can build up a circuit to pull some sensible audio from fluctuating AM signals.
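To make that modulate-then-demodulate loop a little more concrete, here's a rough numerical sketch in Python. The sample rate, carrier, and tone frequencies are all made up for the example (and scaled way down from real broadcast frequencies), and the detector here is just a clip plus a moving average, not any particular historical circuit.

```python
# A rough numerical sketch of AM and envelope detection, not tied to any
# particular historical receiver. Frequencies are scaled down so the toy
# example runs quickly; a real broadcast carrier would sit around 1500 kHz.
import numpy as np

fs = 100_000          # sample rate in Hz (arbitrary for the sketch)
t = np.arange(0, 0.05, 1 / fs)

carrier_hz = 10_000   # stand-in for the station's carrier frequency
audio_hz = 440        # the "sound" we want to send: a plain 440 Hz tone

audio = np.sin(2 * np.pi * audio_hz * t)

# Modulate: vary the carrier's amplitude with the audio signal.
am_signal = (1 + 0.5 * audio) * np.sin(2 * np.pi * carrier_hz * t)

# Detect/rectify: a diode only passes one polarity...
rectified = np.clip(am_signal, 0, None)

# ...then a simple low-pass (a moving average) recovers the envelope,
# which is roughly the audio the rest of the receiver circuit works with.
window = int(fs / carrier_hz)
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
```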
So what were your options for handling signals in 1901?
Well, it turns out they were pretty bleak.
The most simple of these was the crystal detector, composed of a large chunk of some type of crystal and a small, movable wire.
By getting the wiretip touching just the right part of the crystal, it was possible to detect and rectify a radio signal.
The point where the wire and the crystal touch forms something like a very primitive and delicate semiconductor diode.
Sure, it's cool to think about having to scry over a crystal with a tiny wire to pull
sound out of the air, but this doesn't make for a very practical solution long term.
The other option was a coherer.
This is another really weird device. A coherer is basically a glass tube
that's packed with conductive metal filings and has leads placed at either end. When exposed to
a radio signal, the metal filings cling together, or cohere, and thus allow a current to pass
through the tube. Once again, it's pretty neat and it's a pretty fanciful device that
isn't all that practical. The metal filings tend to want to stick together, so another component
called a decoherer was often added to the circuits. This worked to dislodge the filings by, well,
usually knocking or shaking around the coherer. The other key issue is that the coherer switch was pretty slow.
You need to wait for the metal filings to align and then break apart
before you're ready to receive the next wave of the signal.
So while it may work, it can't handle quickly changing audio signals,
which limited this to relatively slow telegraphs.
Fleming was personally struggling with the state of
detectors while designing transatlantic radio systems at Marconi. He needed something sensitive,
something robust, and something able to handle higher frequency signals. Crystal detectors and
coherers, no matter how cool they sound, just didn't cut it. He'd start looking for alternative solutions sometime around 1903,
and as it turns out, he already had a promising lead in mind from his days working at Edison
Electric Light. Crystal detectors and coherers both worked via pretty strange mechanisms.
So too did the solution that Fleming arrived at. His creation was based off a well-known but
little-understood phenomenon called thermionic
emission. Essentially, thermionic emission is where a heated conductor will emit electrons.
Thomas Edison didn't discover it originally, but he carried out a good deal of experimentation
into the phenomenon, so it's sometimes referred to as the Edison effect. He first described the phenomenon in 1880 while developing the light bulb.
A light bulb is, after all, just a glass bulb with a heated metal coil inside it.
Thomas noticed that if another wire was placed in the bulb,
it would output a current without being in contact with the hot coil.
But after further investigation, he couldn't find a way to really use the device.
So Edison patented it and then moved on to other work. While Edison thought thermionic emission
was useless, Fleming didn't agree. So he worked up a few tubes and started running experiments.
These early tubes are basically just modified light bulbs. A long heated filament runs as a
loop called the cathode around
the bulb. Positioned in the middle of these early tubes is a coil called the anode. When powered,
the loop heats up and thermionic emission begins. The other key component was to pump all the air
out of the glass tube to form a vacuum, just so you don't have anything interfering with the
thermionic emission.
Fleming realized that, far from being useless, there were actually two very important but subtle properties of these tubes. The first is that the wire connected to the coil seemed to
react to radio waves. The second is that current would only flow from the heated loop into the
coil and not the other way around. Together, this made the
new tube a wonderful radio detector and rectifier. It takes a little bit of time and power to heat up,
but once started, it's pretty reliable. Sure, you can break the bulb since it's glass, but these
thermionic tubes are much less susceptible to mechanical problems since there are no moving parts.
In the short term, this solved Marconi's transatlantic radio issues. But more importantly, Fleming's tube had some far more general uses.
See, it only worked as a radio detector in concert with an antenna and a few other components.
Outside the realm of radio, the new tube could be used as a diode.
In fact,
it was the first example of a practical diode ever. The next big breakthrough was the triode,
invented just two years later. Its creator, Lee de Forest, was an American engineer with a
particular interest in radio. De Forest was trying to improve Fleming's radio detector. His eventual design, which he called the audion tube, placed a conductive grid between the anode and the cathode.
De Forest was able to use his audion to produce a louder audio signal from incoming radio waves,
but at first it didn't seem to be of all that much use.
It wouldn't be until years later, in 1912, that some researchers would realize an audion could actually be used as an analog amplifier. How this works is kind of interesting.
Varying the voltage on the grid can reduce or block the current passed from the cathode to the
anode. So by hooking up a higher constant voltage to the cathode and then driving the grid with your weak signal, you can get a boosted output on the anode.
Once again, we have a very simple function that has some big impacts.
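To make the grid-controls-the-output idea a bit more concrete, here's a deliberately idealized sketch of what an amplifier stage does, in Python. The gain and supply numbers are invented for illustration and aren't taken from any real audion circuit.

```python
# A very idealized sketch of an amplifier stage: a small swing on the grid
# becomes a larger swing at the output, limited by what the supply provides.
# Gain and supply values here are arbitrary, not from any real tube circuit.
def amplify(v_grid, gain=20.0, supply_volts=120.0):
    """Scale the weak input signal and clip it to the supply rails."""
    v_out = gain * v_grid
    return max(-supply_volts, min(supply_volts, v_out))

print(amplify(0.5))   # 10.0  -> a weak half-volt signal becomes ten volts
print(amplify(10.0))  # 120.0 -> push too hard and the stage just saturates
```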
Once its amplification properties were discovered, the triode became an overnight success.
Slap one into a radio and you can now pick up a weaker signal and boost it up to tolerable
levels. Flip some
of the wiring around, and you can now make a better radio transmitter. Throw one inline on
a cable, and transatlantic telephone cables become possible. So by the end of the 1910s,
the vacuum tube has made its grand appearance. Well, at least in a small niche. It was a massively
important invention, but it would only be really
used for analog audio devices. At least, that was the case in the tube's early years. As it turns
out, the same properties that make the thermionic tube so good as an analog amplifier can just as
easily be used with digital signals. So, with that bit of background, let's switch over to a
totally different topic,
or at least a seemingly different one. While the vacuum tube continued to develop and mature as a
product, people were just starting to realize that something like a digital computer may in fact be
possible. And as with a lot of cutting-edge technology, some of the first steps towards
this future would come out of Bell Laboratories.
In 1937, George Stibitz found himself sitting at a kitchen table with a pile of relays.
After earning his PhD in mathematics from Cornell University, he had started working
at Bell Telephone Labs as a researcher. And while he may have been educated in mathematics,
he worked more in the realm of engineering while at Bell.
Stibitz's big task at the time was trying to find a way to make Bell's telephone switching system more reliable.
Bell systems used relays to automatically route calls.
These are electromechanical switches.
By sending power to an internal electromagnet, a contact gets pulled down and closed, thus completing a circuit.
Over time, relays tend to wear out.
The contact that gets slammed shut can spark, and that can cause it to degrade.
Eventually, the relay will either intermittently work or completely fail.
The hope was that Stibitz could find a way to get these relays to last a little longer, and maybe fail a little
less catastrophically. But the plan would go off the rails pretty quickly, and in a really beautiful
way. George had never seen a relay before. Remember, up until this point, he was a mathematician,
not really an engineer. But when he eventually got his hands on some relays, he was instantly fascinated.
Not by their mechanical aspect, but more because they reminded him of something.
Stibitz realized that what everyone else saw as just a fancy switch could actually be used
as a binary logic gate.
Prior to digital computers really existing, binary had been relegated to the field of
mathematics. The
first writings on binary mathematics actually date back to the 1500s. But for centuries,
it just existed as pure math. There wasn't much of an application for it. Binary number systems
have some interesting properties, sure. But by the time Stibitz was exposed to it, it was basically
a cool math novelty. So what exactly is binary and
what's a logic gate? Put simply, binary is a number system that only has two digits, usually
represented as 1 and 0, but it can also be interpreted as true and false or on and off.
Since it's just another way to represent numbers, you can translate binary to decimal and back.
A logic gate or logic operation is a type of operation that takes binary inputs and gives binary outputs.
An example is the logical AND operation, where you pass two inputs and it will output a 1 if and only if both inputs are 1.
Otherwise, it outputs a 0.
There are a whole set of logical operations that all have different characteristics and properties. By chaining these together,
connecting the outputs of some gates to the inputs of others, you can create operations that are more and more complex.
With a little work, you can build up a full set of mathematical operators using this method. So,
you can do something like add two binary numbers by chaining
smaller logic operations. By the beginning of 1937, this was all well known to mathematicians
like Stibitz, but it was all pen and paper type of math. But by the end of 1937, things would change.
Once he made the realization that a relay could be used as a logical AND operator,
Stibitz was hooked.
That evening, he left Bell's offices with a handful of pilfered relays to play with.
Sitting at his kitchen table with a stack of relays, some metal salvaged from a tobacco tin,
a few light bulbs, and a battery, he set to work.
By the end of the evening, he had built a one-bit adder.
It's a circuit that took
two one-bit binary numbers, each represented by an on or an off, and then output the sum on two
lights, each representing one bit of the result. This was pretty limited. It was janky, and it could
only add two binary numbers. But it was the first big step. Once you have a circuit that can do one-bit addition,
you can start chaining them together and add larger numbers. Other mathematical operations
would also be translated into relay circuits pretty quickly.
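Here's a small sketch, in Python rather than relays, of the same kind of one-bit adder: basic logic gates chained together to produce a sum bit and a carry bit. The gate and function names are the standard textbook ones, not a reconstruction of Stibitz's actual wiring.

```python
# A code sketch of the idea Stibitz wired up in relays: a one-bit adder
# built only from basic logic operations, with two "lights" as the output.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # Exclusive OR, itself built by chaining the gates above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two one-bit numbers; returns (sum bit, carry bit) -- the two lights."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
```

Feed the carry bit from one stage into the next and you get exactly the chaining the episode describes: a string of these little circuits adds bigger and bigger numbers.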
Over the next few years, Stibitz's idle tinkering would turn into a fully backed and
funded research team within Bell. This led to the creation of the complex number computer, which was finished
at the end of 1939. Now, the name here is a little misleading. This wasn't a computer per se.
Instead, it was a calculator that could handle complex numbers, as in imaginary numbers. It was
faster and more accurate than a human, and it showed that there was a lot of potential in this technology.
In the early half of the 1940s, a number of fully-fledged relay computers would start to pop up, but the underlying design of these machines proved to be a problem. This brings us back around
to the venerable vacuum tube. While Stibitz and others were agonizing over the limiting factors
of relays, there was another line of thinking in the field.
The timeline for who first figured out that you could use vacuum tubes to carry out calculations actually becomes surprisingly contentious. The person with the best claim to this title
would be John Atanasoff. His work would pave the way for future fully electronic digital computers.
Atanasoff's interest in automating calculations
goes back to the late 1920s. During this period, he earned a PhD in theoretical physics,
probably one of the most math-intensive fields outside of mathematics itself. He realized that
while working on his thesis, most of his time was spent crunching numbers and carrying out
repetitive mathematical tasks,
and that really frustrated Atanasoff. He was aided in his work by a mechanical calculator,
but it was a slow and cumbersome tool. This frustration would stick around in the back
of his mind for years to come. In 1930, after graduation, Atanasoff became a physics professor
at the Iowa State College. Outside
of his teaching duties, he started experimental research into vacuum tubes and their application
to radio detection. His work made him very familiar with vacuum tubes in general, but he
found that his work was slowed at every turn by the inability to perform quick calculations. At Iowa
State College, Atanasoff had access to an IBM
tabulator, basically a large, more complex mechanical calculator, but it still wasn't
enough power for him. The device was just too slow, and it took a great deal of work to set up.
Atanasoff would even try to modify the tabulator to better suit his needs, but increasingly,
he was coming to
the conclusion that the current technology just wasn't up to his demands. It wouldn't be until
1937 that his frustrations would turn into some solid action. That winter, he sat down and began
in earnest to find a radically new solution, a new way to quickly perform calculations that he needed.
Specifically,
he needed a way to solve complex systems of linear equations. And if IBM's early punch card tabulators weren't up to the task and mechanical computation was out of the question,
his designs would have to be a totally new type of machine. I think it's interesting to note that,
as with Stibitz around the same time, Atanasoff is working just outside his field of expertise.
With Stibitz, his background in mathematics gave him a new type of insight into relay logic.
With Atanasoff, his background in theoretical physics provided similar insight of his own.
But as late 1937 turned into 1938, his work wasn't getting very far.
After yet another late night trying to
essentially reinvent mathematics, Atanasoff only managed to hit yet another wall. Realizing that
fuming at his desk was doing no one any good, he decided that a drive may help clear his head.
Here's his account of the night in question. After months of work and study, I went to the office again one evening,
but it looked as if nothing would happen. I was extremely distraught.
And then I did something that I did in those days, but I've had to stop lately. I got in my automobile and started to drive. I drove hard so I would have to get my attention to driving and I wouldn't have
to worry about my problems. And whenever and once in a while I would commence to think about my
efforts to build a computer and I didn't want to do that so I'd drive harder so I wouldn't.
Here I drove towards the east. I was driving a Ford V8 with a south wind heater.
I don't suppose you know what a south wind heater is.
Pretty warm, but the night was very cold.
It was the middle of the winter in 1937 and 38.
When I finally came to earth,
I was crossing the Mississippi River.
189 miles from my desk.
You couldn't get a drink in Iowa in those days, but I was crossing it in Illinois.
I looked ahead and there was a light. Of course, it was it.
And I stopped and got out and went in and hung up my coat I remember that coat and sat out at the desk and got a drink and then I noticed that my mind was very clear and sharp
and I knew what I wanted to think about and I went right to work on it and worked for three hours
and then got in my car and drove slowly back to Ames.
As you should be familiar with by now, the connection between alcohol and computing goes way back.
Anyway, that late night session gave Atanasoff the jolt he needed.
He'd drive home, but this time with a plan.
Firstly, his new computer would have to be fully electrical, not mechanical or electromechanical.
He knew that the bottleneck for existing devices was the mechanical aspect.
So by removing that barrier, computations could be made much faster.
The machine would also have to work with binary numbers.
And finally, its calculations would have to be based off logic operations.
All those restrictions pointed to a single core device that would make the design a reality.
One Atanasoff knew well, the vacuum tube. With a few extra components, Atanasoff was able to turn
a normally analog vacuum tube into a digital logic gate. From there, the rest fell into place.
Over the next few years, his new computer would start to take shape. Once again, this wasn't a
computer as we think of it today. It wasn't fully programmable. Instead, it was built to solve
specific mathematical equations. Atanasoff teamed up with Clifford Berry, a grad student studying
electrical engineering, and the two set to work planning and building the device. In early 1941, the computer, christened the
Atanasoff-Berry Computer, or ABC, was complete. Composed of around 300 vacuum tubes, ABC was
able to solve multiple linear equations at once. In practice, that meant it freed up a lot more time for Atanasoff and his
physics students to work on actual physics instead of awkwardly crunching numbers. And while this was
a pretty niche machine, it would end up casting the mold for more fully realized computers that
followed. It would take some work, but by 1945, machines like the Harvard Mark I, Colossus, and ENIAC hit the scene.
Colossus and ENIAC in particular were driven by banks of vacuum tubes rigged up in a way very similar to ABC.
Just because vacuum tubes were popular doesn't mean they were all that practical.
There were major issues with the technology that became more apparent as computing advanced.
I think the best way to explain this is to look at the design of vacuum tubes in use in the 1930s and 40s.
By this time, tubes were pretty well standardized and easy to mass produce, a far cry from the early experimental devices.
One of these newer tubes had five main components, although some could have many more elements. At the center of the tube was a heating element.
This was usually just a resistive wire that emitted heat as current flowed through it.
Around that was positioned a cylindrical cathode.
This arrangement helped to evenly heat the cathode and reduced a little bit of noise.
Outside the cathode was a metal switching grid, shaped like a cylinder,
so it fully surrounded the cathode. Then came an anode, also a cylinder, located beyond the grid.
Finally, everything was encased and vacuum sealed in a glass bulb.
The first problem starts at the very core of the vacuum tube, and I mean that literally.
Each tube is built around a tiny heating element. To achieve thermionic emission, a tube has to heat up its cathode to around 700 degrees Celsius. So, one tube on its own is pretty toasty already. But a computer can't run off one vacuum tube. Machines like ABC used 300 or more tubes. Machines like ENIAC used more in the neighborhood of 17,000,
and each of those acted as its own little heater. Here's something interesting to ponder.
Common lead solder melts at under 200 degrees Celsius, give or take depending on the composition.
The heat inside a vacuum tube is insulated to a degree by the vacuum,
but some of the heat still leaks out, usually around the contacts. So at a certain point,
circuits near a large enough concentration of vacuum tubes could literally start to melt.
This may be a bit of an extreme situation, but you should be able to see where this is going.
The usual solution of the time was to design the
computer to fit in open racks that would allow for airflow. Then the room housing the computer
was outfitted with forced air cooling. While air conditioning could abate one problem,
heat was far from the only issue at play. Another big issue was power consumption.
And as someone who's worked personally with modern digital electronics,
everything about the power needs of vacuum tubes is really weird to me. One of those oddities
is that tube logic circuits used in early computers ran at really high voltages. ABC
used logic levels of plus 120 and minus 120 volts to drive its vacuum tubes. And I've seen some diagrams that go all the way up to plus 300 volt outputs.
For some perspective, a USB cable provides 5 volts out.
It's a whole nother realm of power consumption.
And we do well to remember that vacuum tubes are still amplifiers,
so the voltage level fed in on the cathode side gets
boosted when it comes out the anode. Therefore, you can't run the output of one vacuum tube
straight into another. That could burn the second tube out. So instead, the output has to be stepped
down through some means, often a power resistor.
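As a rough illustration of that step-down idea, here's the arithmetic for a simple two-resistor divider. The voltages and resistor values are invented for the example; they aren't taken from the ABC's or any other machine's actual circuits.

```python
# An illustrative (made-up) step-down calculation with a simple resistive
# divider -- not a real tube coupling circuit, just the general idea of
# knocking a tube's high-voltage output back down to a lower logic level.
def divider_out(v_in, r_top, r_bottom):
    """Output voltage of a two-resistor divider."""
    return v_in * r_bottom / (r_top + r_bottom)

# Say a tube's output swings up to +300 V but the next stage wants +120 V:
print(divider_out(300, r_top=150_000, r_bottom=100_000))  # -> 120.0
```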
When you start adding everything up, all the tubes, cooling, support circuits, you run into a pretty hefty power bill.
Going off ENIAC for another example, it required 160 kilowatts of power just to turn on.
But that's not all. Vacuum tubes are also relatively large, especially when compared to other components.
Most tubes were about 4 inches tall and 1.4 inches round, give or take.
In practice, each tube also needed a little bit of space around it to allow air to circulate and cool it, and a mounting socket.
Once you get up to thousands of tubes needed for a functional computer,
you end up with a massive machine.
Another issue is that, just like a lightbulb, vacuum tubes will burn out
over time. So, you need to factor in replacing tubes regularly. While these issues were workable
in the short term, after a while, you start to run into engineering constraints. If you want to make
a more complex vacuum tube computer, you're going to physically need more space, and a lot of it.
You'll also need a lot more power. Power per vacuum tube plus all the ancillary equipment
needed for each tube. And you'll need a way to handle more and more heat transfer. Vacuum tubes
just plain weren't meant for computing. It's a borrowed technology, and because of that,
you get a lot of baggage.
These problems were at the forefront as the 1950s began. Computing was starting to take off in a big way that no one had expected. But for these new machines to become more powerful, more practical,
and more robust, another breakthrough would be needed. During this time period, two promising solutions emerged.
And since the timelines actually overlap quite a bit, I want to go ahead and start with what I
think is the coolest of the two. Dudley Allen Buck was just one of many researchers, engineers,
and scientists working to crack the case of the computer. At first, he didn't really seem like a
likely contender, but his creation would soon rocket into the spotlight. Buck's life took an interesting path. As soon
as he finished his undergrad studies, he enlisted in the U.S. Navy. In 1948, he was assigned to Naval
Communications Supplemental Activities, or CSA, the organization that would later become the
National Security Agency.
While there, he put his electrical engineering degree to work researching cryptography.
Now, not much is actually known about what Buck did during this period of his life,
partly because of the secretive nature surrounding the CSA and later the NSA.
It's likely he worked to crack encrypted messages sent from the USSR or devise stronger encryption means for the US.
Whatever the task, he would come out of his time in the Navy with some kind of computer expertise.
In 1950, Buck enrolled in MIT's physics graduate program.
He would still consult with the NSA and similar agencies from time to time on the side.
For a young engineer in this era, MIT must have been like a playground.
The college has a long-standing reputation for being on the cutting edge of computing,
and especially so in the 1950s.
One of the big projects at the beginning of the 50s was Whirlwind,
a new computer commissioned by the U.S. Navy.
This was an interesting real-time system,
meaning that it needed to be able to provide quick turnaround and responses to inputs. Most other computers ran tasks as batches. A
program would be loaded and run, then eventually, maybe days later, you'd get your results.
For the era, Whirlwind was a unique take on number crunching. Buck would soon find himself
working on the input-output side of things at
Project Whirlwind. He wasn't working on the core computer, but instead the ancillary circuitry
meant to feed data in and read data out of the computer. While on the project, Buck came face
to face with how much of a problem this cutting-edge computer actually was. All in all,
Whirlwind wasn't using that many vacuum tubes, only about 5,000 bulbs.
But the machine was meant for very heavy load, and it drove those tubes pretty hard.
Burnt out tubes were common, and just a handful of failures could bring the whole
machine crashing down until the offending tube could be replaced.
The team took pains to try to work around this problem. They went as far as turning
off heaters to tubes that were currently not in use. While measures like this did help in general, it also added
complexity to Whirlwind. Vacuum tubes just weren't meant for use in computers. And increasingly
powerful computers like Whirlwind were pushing this technology to and eventually beyond its
breaking point. Only a few years after joining MIT, in 1953, Buck hit on his big idea.
He had been trying to find a way around the shortcomings of the vacuum tube by investigating
materials with more exotic properties. And while Buck had come up with some promising leads,
none had really panned out very far. But circumstances would align to change that.
Around this time, the MIT physics department
had built a new liquid helium condenser. Besides just being a pretty neat gadget in general,
liquid helium is a fantastic coolant. Having access to this machine, one of the first
large-scale condensers anywhere, allowed scientists at MIT to conduct experiments
in ultra-cold conditions, verging on absolute zero.
Thinking, hey, there may be something to all this cold stuff after all,
Buck shifted his attention towards this newly open avenue of study,
and this adjustment would lead to his creation of the cryotron.
Now, a cryotron is essentially a switch made using superconductive materials.
Superconductivity is a really strange
property that some metals exhibit. Under normal circumstances, all conductors, no matter how good,
have a certain level of resistance. No metal is a perfect conduit for current, at least at room
temperature. Once cooled down to near absolute zero, certain metals lose all resistance and become perfect conductors.
These are called superconductors. Different metals have different temperatures needed,
but once below that point, the material becomes superconductive. And there's some really weird
and interesting properties of superconductors, to say the least. The phenomenon was first
discovered in 1911, but for decades it only existed as somewhat of a novelty.
That is, until Buck got his hands on it.
One of the many weird and fantastic properties of superconductors is how they interact with magnetic fields.
There are actually a few interesting possibilities, but one specific property mattered for Buck.
When exposed to an outside magnetic field, a superconductor will switch back to
being a normal conductor. So even if the metal is cold enough to be superconductive, it can
be switched back to a normal, resistive metal. Buck's Cryotron was based off this principle.
The actual device is deceptively simple. It's a single, thin metal wire wrapped in a coil
of a second type of metal.
Both metals are superconductors of some sort, and in normal circumstances, this wouldn't do much.
But once brought down to a critical temperature, it acts as a switch.
And it can be a pretty fast switch at that.
With a little bit of coercion, a cryotron becomes the coldest possible AND gate.
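Here's a toy model of that switching behavior in Python. The threshold and resistance numbers are completely made up; the real values depend on the metals, the geometry, and the temperature. The useful part is the control relationship Buck exploited: current in the control coil quenches superconductivity in the gate wire.

```python
# A toy model of a cryotron as a switch, with made-up threshold numbers.
# The point is the control relationship: enough current in the control coil
# generates a field that kicks the gate wire out of its superconducting state.
CONTROL_THRESHOLD_AMPS = 0.1  # hypothetical critical current for the coil

def gate_resistance(control_current_amps):
    """Resistance seen on the gate wire, in ohms (0 means superconducting)."""
    if abs(control_current_amps) >= CONTROL_THRESHOLD_AMPS:
        return 0.01   # the coil's field quenches the gate: back to normal metal
    return 0.0        # below threshold the gate stays superconducting

# Treated as logic: drive the coil to steer current away, leave it off to pass it.
print(gate_resistance(0.0))    # 0.0  -> gate conducts with no loss at all
print(gate_resistance(0.25))   # 0.01 -> gate now resists, current goes elsewhere
```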
This was the first practical use of superconductivity full stop, which in itself is really impressive. But what's more interesting is that Buck intended the Cryotron as a direct replacement
for vacuum tubes.
Instead of an older technology adapted for use with computers, this was a new device
that was being designed specifically for computing.
Its applications weren't an afterthought.
The drive behind the Cryotron was to improve computing.
This was brought into stark focus with Buck's 1955 paper,
The Cryotron, A Superconductive Computer Component.
I mean, the title sort of gives it all away.
This paper doesn't just describe the cryotron itself,
but it outlines how it can be used as a computing element, and how it solves all the pressing problems with existing technology. First of all, each cryotron is really small. That may sound like
a given, but it's really impressive to see. A single cryotron is smaller than one of the contacts
on a vacuum tube. And with a little work,
Buck was already showing that the device could be made even smaller. So how about power? That's
another point in Buck's favor. He reported that a cryotron uses on the order of 10 to the minus 4
watts, or 0.0001 watts. If one were so inclined to rebuild ENIAC with cryotrons, it would only consume 1.75 watts.
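As a quick back-of-the-envelope check on that figure, using ENIAC's commonly cited count of roughly 17,468 tubes (the exact number varies a bit by source):

```python
# Sanity check on the all-cryotron ENIAC figure quoted in Buck's paper.
tubes = 17_468
watts_per_cryotron = 1e-4
print(tubes * watts_per_cryotron)  # about 1.75 watts
```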
That's less power than a single light bulb. As far as switch speed, that's where we get into
some difficult territory. Initially, Buck's cryotrons, all built by hand in his lab,
were switching far slower than a vacuum tube. With some doing, he figured out that he could
improve performance by reducing their size. By 55, Buck could make a cryotron about as fast as a
vacuum tube, and believed that with better techniques, the speed barrier could be broken. The final
section of Buck's paper is my personal favorite. Over about 10 pages, he lays out designs for
common computing circuits, all implemented
using cryotrons.
He has a 1-bit adder.
Flip flops, registers, it's all there.
What I find particularly interesting about these circuits is the strange flexibility
of the cryotron.
It's a wire wrapped in a coil, but with this basic design, you have a little bit of
room.
A few of these circuits make use of cryotrons that share a common core wire, thus building
more complex logic gates than normally possible.
Overall this sounds like a tube killer.
Cryotrons are small, use less power, switch at about the same speed, and can do everything
a vacuum tube can but better.
But there is a downside to this technology, one that can't be
avoided. You only get this cool new switch at extremely cold temperatures. The fact that you
need to submerge everything in liquid helium causes some issues. A cryotron computer would
actually burn through helium. Despite operating at such a low temperature, there is still some heat from
each cryotron. Over time, that evaporates the helium bath, so it would need to be replenished.
Buck posits that the actual burn rate would be relatively low, and that it could be alleviated
by building a helium recycler. But the problem remains. To make matters worse, helium is actually a pretty limited resource here on Earth.
So, if a computer was built using cryotron logic,
then you'd literally burn through helium, a scarce resource, as fuel.
Which, I mean, that's wild to think about today.
While it wasn't possible to get around the use of helium,
Buck would continue to improve the cryotron on every front.
Buck and his colleagues continued to make smaller and faster cryotrons.
The culmination of this was designing a method for printing microscopic circuits.
Not all that different from early integrated circuits, except they are superconductive instead of semiconductive.
These tiny circuits were blasted out using an electron beam,
and then superconductive materials were deposited.
Buck took to calling these circuits thin-film cryotrons.
Before integrated circuits were really practical,
Buck and his crew were turning out a nearly identical technology.
The future was looking really promising for the cool invention.
Outside of Buck's work, research at GE, IBM, RCA, and a handful of other labs was being conducted
into cryotrons. A cryonic computer, one more powerful than anything ever built, was becoming
a distinct possibility. A whole new path was opening up, and Buck was at the center of all of this.
But a combination of tragedy and circumstance would conspire against the burgeoning technology.
On May 21st, 1959, Dudley Allen Buck, only 32 at the time, died very suddenly.
In a matter of a few days, he took ill, was rushed to the hospital with respiratory distress, and quickly deteriorated.
The official cause of death was listed as viral pneumonia.
But there has been speculation into the case ever since.
Buck's death is a massive tragedy, no matter how you look at it.
He was a genius scientist that lived a far too short life.
But beyond being tragic, it seems just strange. And as with many strange
events, theories tend to sprout out. One school of thought states that his death was the result
of a tragic lab accident. The last entry in Buck's lab journal references work that involved some
pretty dangerous chemicals, boron trichloride and hydrogen chloride gas. If a mistake was made or
equipment quietly failed,
it's possible he inhaled some of these gases, fatally damaging his lungs. The argument that
I've seen for this theory is that Buck wasn't a trained chemist, so he may have been unaware of
certain safety procedures that could have kept him safe. But I don't know if I fully buy this one.
He may not have had a degree in chemistry, but Buck had been working
with dangerous and exotic materials for quite some time. And there were other researchers that were
working with Buck in the same lab leading up to his death, and they survived. The explanation
doesn't seem very complete to me. The other theory I've seen reported is completely different.
According to some, Buck was assassinated by agents of the Soviet Union,
in part due to his work on the Cryotron. I mentioned that Buck had an on-and-off relationship
with the NSA and other American intelligence agencies. With the onset of the space race and
deepening of the Cold War, Buck's work would become an essential part of
the NSA and CIA's plans.
Computer companies weren't the only people interested in the use of the Cryotron.
In fact, the US government would put a good deal of weight and funding behind its development.
One manifestation of this line of work was the ambitious Project Lightning.
Here's how one declassified NSA report describes the undertaking. Quote, eventually we foresee that
natural limitations on speed and size will be encountered, and then the inevitable advances
of our opponents will corner us, so the duel will become one of pure wits. But while we can,
we must maintain our superior weapons. Project Lightning was set up to preserve our advantage in speed of computation.
End quote. Just as an aside, the Freedom of Information Act is a very wonderful thing in
this country. That's how a lot of these types of documents are uncovered. So the speed and size
limits are, of course, some of the biggest issues with vacuum tube computers. Project Lightning was
an attempt to surpass those limits by creating a
cryotron-powered computer. Specifically, the NSA wanted to leverage the thin-film cryotron that
Buck was developing in the latter half of the 1950s. A fast and small computer would go a long
way towards giving the U.S. the upper hand in the Cold War. And this wasn't just some idle planning and daydreaming.
Project Lightning had some serious force behind it. President Eisenhower personally signed off
on the secret project, and the NSA was given broad discretion and funding to see the work
to completion. Ultimately, the finish line wouldn't be reached, but smaller-scale Cryotron
computers were manufactured along the way. There was a lot
of optimism that even if Project Lightning fell short, the Cryotron could be incorporated into
other undertakings. The other reason the US government was interested in the Cryotron was
for rockets, missiles, and satellites. The first satellite, Sputnik, was launched by the USSR in
1957. The same year, the USSR would also launch the
world's first intercontinental ballistic missile. Needless to say, America took note. In short order,
funding for rocketry and space technology was flowing freely into research institutions and
government agencies. But one of the big problems would come down to in-flight control systems.
If you want to make a satellite or missile flexible, you need to have a computer in it.
Something like a spy satellite could get a lot of use out of an onboard computer for controlling its systems,
gathering information and coordinating transmissions.
Meanwhile, a missile could be made a lot more accurate and dangerous by adding a computerized targeting system.
Once again, the Cryotron was a
surprisingly good fit for these applications. A computer composed of just a few thin-film
Cryotron circuits was looking feasible. They would be fast, small, and most importantly,
resistant to shock. A vacuum tube can be broken if hit, or if you shake it hard enough, it'll also stop working. A thin chip
of superconductive metal was a lot more resilient. Plans for futuristic weapons such as these would
start to hinge on Buck's continued progress. Dudley Buck was personally involved in at least
some of this work, either consulting, contracting, or otherwise passing along information and designs.
There were others working on the Cryotron, both researchers in Buck's own lab and outside firms.
But he was at the center of the field.
No one knew the technology on the same level as Buck.
We also know that Buck was on the USSR's radar, so to speak.
In April of 1959, a delegation of researchers from the Soviet Union toured Buck's lab,
even meeting with Buck himself. There were also known Soviet spies embedded in the NSA during this period. The
final piece to this story that left me wondering is this. Louis Ridenour, a close associate of Buck,
died under mysterious circumstances on the same day, just hours apart. Ridenour worked at both Lockheed and the NSA
and was personally involved with Project Lightning
as well as a slate of ICBM and satellite projects.
On May 21st of 1959,
he was found dead from a brain hemorrhage.
So, did Soviet agents identify these two scientists
as major dangers to national security?
And did that lead to their assassination?
And was it all over the Cryotron? The problem is, we'll probably never know for sure.
But it makes me wonder what other plans there may have been for the Cryotron behind closed doors.
Okay, it's time to close this episode out.
We've covered a lot of ground, but there's still more to cover next time.
From their earliest roots, computers were built on borrowed technology.
While the vacuum tube was a massively important invention, it wasn't made for computing.
It could be rigged up to do the job, but there were underlying flaws with this arrangement.
Atanasoff and others took a giant leap to go from an analog device
to birthing the electronic digital computer.
But there was still room to improve.
Thermionic emission can be used to make a switch,
but it's a relatively hot and unreliable method.
And just as computers were hitting a vacuum tube ceiling,
Dudley Allen Buck and the Cryotron hit the scene.
What stands out to me more than anything, more than the genius of this device or the
really cool physics behind it, is that the Cryotron was purpose-built for computing.
Instead of adapting an existing technology to the problem, Buck took a totally new and
unexpected approach.
There were some downsides to the Cryotron, namely the fact that it had to be
kept in a bath of liquid helium to run, but it showed some real promise. Buck's work, especially
in miniaturizing and integrating circuits, was far ahead of the competition. But ultimately,
the cryotron would be overshadowed by another technology. Buck's early death, no matter the
actual cause, may have contributed to that in
a way. Left in the lab, who knows what he might have turned out, and how far he would have taken the
cryotron. However, there was another breakthrough in the making. The transistor was about to take
center stage after a long and strange period of development. If you want to learn more about what
we've covered today, I have a few recommendations.
The clip for the ABC portion of the episode was from a wonderful talk given by Atanasoff in 1980.
I'll provide a link in the episode description. The other recommendation I can make is to check out The Cryotron Files, written by Iain Dey and Douglas Buck. It's been an invaluable source for
me, and it's probably the most complete biography
of Dudley Allen Buck. Also, it's just a great read. I highly recommend it.
Thanks for listening to Advent of Computing. I'll be back in two weeks' time with the conclusion
to the story of the transistor. And hey, if you like the show, there's now a few ways you can
support it. If you know someone else who would like the show, then why not take a minute to
share it with them? You can also rate and review me on Apple Podcasts.
And if you want to be a super fan, then you can support the show through Advent of Computing merch or signing up as a patron on Patreon.
Patrons get early access to episodes, polls for direction of the show, and other assorted perks.
You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.