Advent of Computing - Episode 32 - Road to Transistors, Part II
Episode Date: June 14, 2020

In this episode we finish up our look at the birth of the transistor. But to do that we have to go back to 1880, the crystal radio detector, and examine the development of semiconductor devices. Once ...created the transistor would change not just how computers worked, but change how they could be used. That change didn't happen overnight, and it would take even longer for the transistor to move from theory to reality.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:
1939: Russell Ohl Discovers P-N Junction
1947: Point Contact Transistor Invented at Bell Labs
1954: TRADIC, First Transistorized Computer, Built
Transcript
Originally, one thought that if there were a half dozen large computers in this country,
hidden away in research laboratories, this would take care of all requirements we had
throughout the country. End quote. That came from a talk given by Howard Aiken in 1952.
It's hard to overstate how important Aiken was to the development of the computer.
He designed and built one of the first digital
systems, the Harvard Mark I. His work was central to the early development of the field,
but even Aiken was surprised by where the field would go. Looking at computers from the first
half of the 20th century, it's hard to believe that they're related to our modern machines at
all. They were monstrously large, requiring specialized infrastructure to
house them. These machines were one-of-a-kind pieces of art almost, built by hand in research
labs. Just putting together a computer, maintaining it, running it, and cooling the thing, that was an
expensive project in itself. But outside their physical characteristics, early computers were also thought of in a totally different light.
By 1952, there were already a few commercialized computers, notably the UNIVAC.
But those kinds of machines were just starting to appear.
In general, computers were unique.
They even had names that were treated as proper nouns.
You didn't have an ENIAC, there was just the ENIAC,
or the Harvard Mark I. And not just anyone could use one of these computers. They were enshrined
deep in research labs or government installations. In most cases, you couldn't even personally
access or touch the systems. Instead, you'd have to pass along your code or job to a team of technicians
who would actually use the computer. No one, not even Aiken, foresaw how much this would change.
In the 1950s, computers like Univac or some early IBM systems would hit the market.
But still, the market for these machines was highly limited. By the end of the 1950s,
a massive shift happened. Computers got
a lot smaller, they got a lot more powerful, and they got a lot cheaper. These new machines would
start to leave their shrines and spread out much more than before. Now, there were a lot of factors
that led to this shift, but the majority of this change is thanks to a single invention.
That's the transistor.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 32,
Road to Transistors, part 2. Now, as the name suggests, this is the second part of a two-part series of episodes.
Last time, I covered the earliest roots of computing, the development of the vacuum tube,
and eventually the cryotron. That episode outlines the struggles that eventually led to computers and the successes and failures of their early development, particularly focusing on the
underlying technology at play. While it's not strictly required listening,
it is encouraged, and hey, you might even like part one.
Now, just to jog your memory, here's a brief roundup of last time.
Vacuum tubes started being used for computing as far back as the late 1930s. They remained a popular option for quite a while, but there were some major issues with the technology.
Tubes burned pretty
hot. They could burn out, and they used a whole lot of power. They were originally meant for use
in radios and other audio applications, not in digital computers. So, there was only so far that
the vacuum tube could go. Starting in 1953, another contender called the Cryotron hit the scene.
This was a superconductive switch developed by Dudley Allen Buck specifically for use with computers.
Over the next few years, Buck and his team went from simple wire-wrapped models to early integrated superconductive circuits.
Cryotrons were superior to the vacuum tube in every way, sans one.
A Cryotron only worked if it was
submerged in a bath of liquid helium. This brings us around to the third option,
the transistor. Ultimately, computers would become synonymous with this technology.
But the path to that ends up being a pretty strange one. As with the vacuum tube, transistors
are a borrowed technology. The first transistors were
developed for use in audio and radio applications. And just like the cryotron, transistors work
thanks to the exotic properties of some very particular materials. So today, let's see how
the transistor came to be synonymous with computing. That includes a few false starts
and some strangely fanciful science.
And along the way, let's see why the transistor was made in the first place and why it would rise
to prominence. One of the reasons I ended up breaking the story into two parts comes down
to chronology, or rather the fact that the full story of the transistor runs kind of parallel to
the early development of computing.
So last time, we left off in 1959, but that's not where we're going to pick up the story today.
To start this off, we actually have to go back to the end of the 19th century and early
semiconductor research.
Believe it or not, we're actually going all the way back to the first radio, and the crystal
detector.
Sometimes called the cat whisker detector, these are really strange
devices. The core component was a large chunk of some type of crystal, often germanium or galena,
that was mounted on an electrical contact. The other half was the eponymous cat whisker,
a thin metal wire that was brought into contact with the crystal. By carefully adjusting where
the wire was in contact with the crystal, it was possible to get the device to only allow current from the
wire into the crystal but block any current that flows in the other direction. In other words,
it forms a diode, kind of. Wire this up with an antenna, coil, some capacitors, and a headset,
and you have yourself a radio. Once set up,
the crystal detector is able to pull radio waves out of the air and turn them into intelligible
speech by demodulating them. The major limitation is that the device only works for certain sites
on the surface of the crystal. So, once tuned, moving the wire will effectively break the diode.
This all works thanks to some very special
properties of the crystals involved. As it turns out, galena and germanium are both actually
semiconductors. There are quite a few different types of semiconductor materials, with the most
well-known and most used today probably being silicon. These types of materials are, like the name suggests,
not quite conductive. They can conduct current under certain circumstances, but they aren't
really metals. They can also block current under other circumstances, but they aren't really
insulators. This all changes when heat or energy is applied. As a semiconductor warms up, or more
energy is added to the material,
it will become more conductive.
Now, this is in direct contrast to metals,
where applying heat will actually lower the conductivity.
So, we have a material with unique properties.
And we've seen before that when scientists
find something that acts in a unique way,
that tends to open up some interesting possibilities.
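That temperature behavior can be made concrete with a minimal Python sketch of the standard textbook model: a semiconductor's intrinsic carrier density grows roughly as T^1.5 times exp(-Eg / 2kT), so its conductivity climbs as it warms, while a metal's resistivity rises roughly linearly with temperature. The band gap value and scale factors here are illustrative assumptions, not measurements from the era.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K
E_GAP = 1.12    # silicon band gap in eV (textbook room-temperature value)

def semiconductor_conductivity(t_kelvin, scale=1.0):
    """Toy model: intrinsic carrier density goes as T^1.5 * exp(-Eg / 2kT),
    so conductivity climbs steeply as the crystal warms up."""
    return scale * t_kelvin**1.5 * math.exp(-E_GAP / (2 * K_B * t_kelvin))

def metal_conductivity(t_kelvin, scale=1.0):
    """Toy model: metal resistivity grows roughly linearly with temperature,
    so conductivity falls as the metal warms up."""
    return scale / t_kelvin

cold, hot = 250.0, 350.0
# A semiconductor gets MORE conductive when heated...
print(semiconductor_conductivity(hot) > semiconductor_conductivity(cold))  # True
# ...while a metal gets LESS conductive when heated.
print(metal_conductivity(hot) < metal_conductivity(cold))                  # True
```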
But where things get especially interesting is when you have the junction between the
semiconductor and another material. It turns out that those junction points are where the
magic actually happens. The point where the thin whisker of the detector comes into contact
with the surface of the crystal forms a metal semiconductor junction. Up to a certain voltage, current can only pass
through that junction one way, thus making a diode. Of course, when the whisker detector
was originally created, this wasn't a very well known or well understood phenomenon.
Now, under controlled circumstances, this works fine. But all in all, the crystal detector is a very crude diode. Due to impurities
in the crystal used, only certain sites on a crystal can actually form a diode. So trial and
error had to be used to get the device working. Even the diode wasn't all that stable, and with
use it would degrade and eventually burn out. So, any aspiring radio operator would have to spend
a lot of time divining around the surface of a crystal with a very small springy wire.
Now, I like high fantasy, and this is definitely a fantastical type of technology,
but that doesn't make it an ideal situation to find yourself in. In 1904, the first vacuum tubes
were developed, with practical devices
following soon thereafter. As we saw last episode, vacuum tubes also function as a diode,
but instead of using semiconductor junctions, they work via thermionic emission. In every aspect,
the vacuum tube was just a better radio detector, and crystal detectors quickly became obsolete.
But while the vacuum tube
continued to develop, so too would semiconductor technology. As is the case with a lot of progress,
the next big step for the diode came from a handful of disgruntled researchers.
Bell Labs had started out in the realm of wired communications, mainly telephone. But over time, they became more concerned with telecom
in general. Radio was a short jump from the telephone for them. And in the 1930s, the next
big jump was to shortwave and ultra-shortwave radio. This change was, in part, brought about
due to Bell's increasing involvement with the US government. As the military was increasing in size,
scope, and scale
prior to World War II, they needed better ways to communicate. Higher frequency transmissions,
moving from normal radio up to microwave frequencies, are able to travel further.
That increased range would be a key asset to the US military. But getting there would prove to be
non-trivial. Russell Ohl and George Southworth were two of the researchers behind that jump.
And with the state of technology in the 1930s, things were looking pretty bleak for their project.
The primary problem came down to the design of vacuum tubes.
At frequencies used for AM radio in the range of megahertz, vacuum tubes are great rectifiers. But at higher frequencies,
around, say, the gigahertz range, these old tubes just don't work. Vacuum tubes can't handle current
switching back and forth that quickly. And over various experiments and design tweaks,
Ohl and Southworth just couldn't get past this limitation. It was fast becoming apparent that vacuum tubes
were a dead end. The team would need a radically new solution. While looking for possible alternatives,
Southworth struck upon a pretty wild idea. During World War I, he had served in the U.S.
Signal Corps, working with radios. When experimenting with how to make a better detector,
Southworth found himself thinking back to the crystal radio sets that he had used while serving with the US
military. Current technology was failing him, so he decided why not give this older device a shot.
But that was easier said than done. By 1930, no one had made crystal detectors for years.
So Southworth hopped on a train and headed to a nearby secondhand shop, and he decided
to try and search for an obsolete radio.
By the end of the day, he was already headed back to Bell Labs with a handful of crystal
sets in hand.
It took about an hour or so of tinkering for Southworth to get one of the crystals working
as a diode. From there,
he bombarded the detector with microwaves and, to his shock, it was actually able to pick up
the transmissions just fine. He now had a working microwave radio sitting on his workbench,
one that was built using 1880s technology. Russell Ohl also had background with crystal radios,
so when Southworth showed him his demo, he jumped at the chance to investigate this new technology.
From here on, the mysteries
of the semiconductor would slowly start to unfurl. The first step was to find a way to
improve the cat whisker detector. After much trial and error, Ohl worked out that silicon
was actually the best material to use as the crystal half of the device.
It was just more sensitive than other options.
But even finding a better material wouldn't make these early diodes particularly practical.
Carefully tuning each diode before use was a big issue. The prevailing theory, and Ohl's best guess, was that their lack of
reliability was due to impurities in the semiconductor crystals that they were using.
But that raised another issue. How do you make exceedingly pure silicon crystals?
By this point, the crew at Bell was pretty off the beaten path, and their experiments would have
to get pretty far off the rails to work.
The primary issue with purifying silicon is that it has a really, really high melting point.
It's about 2500 degrees Fahrenheit. The normal course of action is to heat up a material to
the point that it melts, but any contaminants remain solid, and then decant off the molten material leaving behind any
contaminating particles.
But you can't really do that with silicon, it melts at way too high of a temperature.
Most of the impurities that Ohl was concerned with melt far below the melting point of this
semiconductor.
Even discounting that, at this high temperature range, the crucibles used to hold the molten
silicon can actually leach contaminants into the liquid as they also start to melt.
Eventually, Ohl and his crew figured out a workable, if extreme, method.
This involved using an electric furnace filled with helium gas.
Helium is non-reactive, it's a noble gas, so this eliminated a lot of possible contamination from the atmosphere.
Then, the silicon was cast using quartz crucibles,
which have a much higher melting point than silicon.
Once everything was cooled, the molds were broken,
revealing some, theoretically, clean cylinders of silicon.
But even after going to all these lengths,
the resultant silicon rods exhibited some really strange properties.
It turned out that some of these rods actually acted as a diode all on their own,
and unlike the cat whisker, there was no need for a metal point contact. It turned out that,
despite going to this effort, Ohl wasn't able to make reliably
pure silicon. There were still contaminants that couldn't be removed. Instead of being removed,
those contaminants actually migrated around in the molten semiconductor, and eventually
separated into different parts of the rod. After extensive experimentation, Ohl was able to show that these discontinuities,
caused by regions with different types of contaminants, formed their own junctions.
It acted in the same way as the junction between a metal wire and a semiconductor.
And that was the key to opening up a lot more possibilities. Instead of having a hand-tuned
diode like a crystal detector, Ohl showed that it was possible, in theory, to lock in that junction.
Soon, Bell was able to control this process of contamination.
It would be known as doping.
And they'd be able to make reliable and practical semiconductor diodes.
The benefits of Ohl's new method?
Well, they were staggering.
These new diodes had all the advantages of the cat whisker detector.
They could handle higher frequency oscillations wonderfully.
But with the junction baked into the silicon, it had none of the issues.
Solid state diodes, as these were known, aren't susceptible to shock.
You can't really break one unless you physically crush it.
Silicon is also cheap,
and with a little work, it's relatively easy to mass-produce doped silicon. Overall,
semiconductor diodes are a strict improvement over existing technology. In the realm of radio,
this device became a contender to replace the vacuum tube. But semiconductors were just getting started.
By World War II, reliable, solid-state diodes were already being used in military applications.
Both radio and radar equipment was made substantially better and more reliable
thanks to these little silicon wonders.
So, at this point, were semiconductors ready to make their appearance in the world of computing? Well, maybe. This gets a little bit tricky to think about. By the end of World War
II, we already have a few computers, a mix of relay and vacuum tube logic devices. These devices
were used as the core building blocks of these machines because they functioned as switches.
So, with some wiring,
it's possible to turn them into logic gates. Once you have enough logic gates, you can build a
computer. That's a little simplified, but it's mostly accurate. So, can you build logic gates
using diodes? Well, kind of. You see, a diode isn't a switch, not by any stretch of the imagination. That being said,
you can make some logic gates using diodes. The logical AND and OR operators can be constructed
using only diodes, but that's not a large enough set. The key issue is the NOT operator,
which you can't make only using diodes.
It turns out, for very complex logic, you need to be able to flip a 1 to a 0 and vice versa,
and diodes can't do that on their own.
The other key issue is that diodes have a little bit of inbuilt resistance.
The voltage out of a diode is lower than the voltage in,
so if you chain enough of these together, your signal drops to nothing. Diodes would find their way into early computers, often
used in conjunction with vacuum tubes, but semiconductor technology wasn't quite ready
for the main stage, at least not yet.
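The limits described above can be sketched with a toy model of diode-resistor logic. This is an illustrative simulation, not any circuit from the period: each diode is reduced to a fixed forward voltage drop (0.7 volts is an assumed, typical silicon figure). That's enough to show why AND and OR work, why chained gates slowly lose the signal, and why NOT is out of reach.

```python
V_F = 0.7     # assumed forward voltage drop per diode, in volts
V_HIGH = 5.0  # assumed supply rail for a logic "1"

def diode_or(*inputs):
    """Diode-resistor OR: each input feeds the output through its own diode,
    so the output follows the highest input, minus one diode drop."""
    return max(max(inputs) - V_F, 0.0)

def diode_and(*inputs):
    """Diode-resistor AND: a pull-up resistor holds the output high unless a
    diode to a low input drags it down, so the output tracks the lowest input
    plus one diode drop, capped at the supply rail."""
    return min(min(inputs) + V_F, V_HIGH)

# A single gate works fine: OR(5 V, 0 V) still reads as a solid logic high.
print(diode_or(V_HIGH, 0.0))   # about 4.3 volts

# But chain gates together and the signal erodes by 0.7 V per stage...
signal = V_HIGH
for stage in range(7):
    signal = diode_or(signal, 0.0)
print(signal)                  # roughly 0.1 volts: the "1" has all but vanished

# ...and since both gates are monotone (higher inputs never lower the output),
# no wiring of diodes alone can ever flip a 1 into a 0. NOT needs an active
# switch, like a tube or a transistor.
```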
One of the major driving forces behind the next development was a man named William Shockley. So far,
we've talked about engineers, mathematicians, and at least one theoretical physicist in this series.
Shockley was a little bit different. He was a quantum physicist. It may be a bit of a fine
distinction to make, but I think it makes a world of difference in how he approached semiconductors.
Shockley started
working at Bell almost immediately after grad school in 1936. The project that most consumed
his early period was the same as many of his counterparts, finding a replacement for the
venerable vacuum tube. But while Ohl and company were working to create a better diode, Shockley
was more interested in the diode's bigger cousin, the triode. The driving factor here was that the vacuum tube triode was still
the only viable audio amplifier out there. And while it worked, it was less than ideal.
The jump from vacuum tube diode to triode had been relatively quick and easy, but the jump from semiconductor diode to the
final transistor ended up being an almost impossible task. Shockley's first step was
to try and figure out exactly what all his diodes were actually doing. Remember that at this point
in history, semiconductors were still not very well understood. Even the experts in the field
didn't know exactly what was going on with these chunks
of silicon. Luckily, Shockley was well equipped for the task. Quantum physics sounds really
impressive and complicated, and don't get me wrong here, quantum is devilishly complicated.
But to simplify things, quantum physics is concerned with how energy moves around at the
very small scale.
This makes it able to describe things like how light travels and interacts with materials,
or how electrons and charge interact with atoms.
The second example is what matters most for Shockley's semiconductor work.
Using a slate of experiments and a whole lot of mathematics,
Shockley started to work up an entire model of Ohl's inexplicable
silicon rods. His first breakthrough was that the different sections of silicon,
the regions with different impurities, seemed to have a locked-in charge. Some regions acted
more positively charged, while others acted more negatively charged. He started calling these P and N type
semiconductors. Shockley also confirmed Ohl's suspicion that the internal P-N junction,
where the two different regions of silicon touched, was where the magic was happening.
Armed with this experimental data, Shockley built out a working theory as to how these
interactions on that junction actually
worked. And from there, he wound up at his big idea. According to his math, you should be able
to change what the P-N junction does. Specifically, his work showed that by pumping up an electric
field near that junction, you could change how current passed across it. If Shockley's math was right, then you should be
able to essentially turn off that junction, or turn it on, and you'd be able to create a
semiconductor switch. To Shockley and his crew at Bell Labs, that was proof positive that a
semiconductor amplifier was possible. But more importantly for us, that meant that a semiconductor logic gate was also on the horizon.
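The junction behavior at the heart of all this is captured by what's now called the Shockley ideal diode equation, published in his 1949 junction theory work: current across a P-N junction grows exponentially under forward bias and stays at a tiny leakage level in reverse. Here's a quick numerical sketch, with an illustrative (assumed) saturation current:

```python
import math

I_S = 1e-12    # assumed saturation (leakage) current in amps, illustrative
V_T = 0.02585  # thermal voltage kT/q at room temperature, in volts

def junction_current(v_bias):
    """Shockley's ideal diode equation: I = I_s * (exp(V / V_T) - 1).
    Forward bias gives exponentially growing current; reverse bias
    saturates at just -I_s of leakage."""
    return I_S * (math.exp(v_bias / V_T) - 1.0)

forward = junction_current(0.6)    # forward bias: milliamps flow
reverse = junction_current(-0.6)   # reverse bias: essentially just -I_S
print(forward / abs(reverse))      # forward current dwarfs reverse leakage
```

That enormous asymmetry is the rectification the cat whisker stumbled onto by accident, now pinned down in a formula.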
During World War II, he briefly left Bell to work with Columbia University developing anti-submarine systems.
But by 1945, Shockley was back in the lab.
And it's around this time that things take another big turn for the strange.
Shockley's next big step was trying to make a functional
device from his theoretical framework. He was trying to implement the field effect transistor
that worked on paper. For the next few years, he would chase after this device, but he was never
able to reproduce his theory in real life. Shockley knew his math was right. He undoubtedly
checked and double-checked it
hundreds of times. But for some unknown reason, all his experiments failed to produce results.
At least, he wasn't able to produce any positive results. Shockley put it this way in an interview
decades later. Quote, What we tried to do is start out to make a transistor. It didn't work.
These field effect things didn't work.
We abandoned the effort to make it work.
But we studied the physics of why it wasn't working, which we didn't understand.
And that led to this.
So we did research on the related science. It was called respect for the scientific aspect of practical problems.
Now, I really like the approach that Shockley took here.
Sure, it's frustrating when all your work and planning ends up failing.
All his meticulous math and theoretical developments weren't able to make a working transistor.
But that doesn't mean your time was wasted.
Instead of dropping the project or thinking that the transistor was an impossibility,
Shockley wanted to find out why he failed.
So he called in some help.
That help would come from John Bardeen and Walter Brattain.
Both were co-workers at Bell Labs from different disciplines.
Bardeen was another quantum physicist, like Shockley,
able to help on the theoretical
side of things.
Brattain was different.
He was a material scientist and a noted experimentalist.
The team would be able to tackle the issue from the theoretical and practical side, something
that Shockley couldn't do alone.
Bardeen had the gut feeling that the field effect transistor was failing because, well, the field
just wasn't getting deep enough into the silicon to have an effect. He was able to work out that
the problem came from the surface effect of the semiconductor. When current starts flowing through
the silicon, some of those electrons were clumping at the surface and blocking the external field
that Shockley was applying. So no matter how strong a
field was applied, the semiconductor just wouldn't notice anything. With the problem identified,
at least, it would be pretty simple to solve it and have a working transistor, right? Well,
not so fast. The team actually had no idea how to get around the surface effect that they were observing.
Shockley's previous theoretical work was already pretty far off the beaten path.
These new problems that Bardeen and Brattain observed were even further afield.
The two options that they had to proceed were to 1. Establish a theoretical framework to explain these complex surface effects and then devise a theoretical solution,
or two, just experiment with stuff until it works.
They went with the much more fun and slightly less rigorous second option,
diverging pretty far from Shockley's field effect transistor idea.
The team of two shifted into a full-on trial and error mode for the next few
years, using all the clues gathered from Shockley to guide their search. The list of different
permutations and experiments that Bardeen and Brattain tried is pretty extensive, so I won't
bore you by rattling off test after test. Just know that it didn't go very smoothly. It would take a while before they homed
in on a solution. By the end of 1947, however, this research was starting to bear fruit,
and the first transistor started to work. Now, this was a very, very rough prototype.
Maybe prototype is implying that it's a little further on than it actually was.
The device was set up something like this.
At the base was a chunk of doped germanium mounted to a metal plate.
Pressing down onto the semiconductor via a spring was the point of a small plastic wedge.
Mounted to the wedge's surface was a thin sheet of gold foil,
with a tiny gap between the foil at the very tip of the wedge.
A mess of wires and solder completed the device, and, inexplicably, this device worked.
Bardeen and Brattain called the device a point-contact transistor, and they successfully demonstrated that it could act as a switch and an amplifier. Even better, it could handle high-frequency
signals with utter ease. Everything was falling in place. Now, what I find interesting is that
this first working transistor is remarkably close to the much earlier cat whisker detector.
Both rely on a metal semiconductor junction, and at the time of their invention, both worked in somewhat enigmatic ways. If anything,
the point contact transistor is really just a greatly refined and updated version of the crystal
detector. But no matter how cool the model was, there were still some huge problems with it.
The early point contact transistor only worked in the lab. It was a fragile and very fickle device.
To make matters worse, how or why this one configuration worked wasn't readily apparent.
It had been made via experimentation and iterating designs, so unlike Shockley's
field effect transistor, there wasn't a load of theory behind it. At least not yet.
It would take some doing, but in the coming years the team inside Bell was able to turn
this prototype into a somewhat reliable device.
Soon, sample transistors were flowing around and eventually out of the lab.
To those within Bell and outside the company, this invention's magnitude was immediately
apparent.
It was better than a vacuum tube in every regard,
and unlike the later Cryotron, there were no major downsides or restrictions to its operation.
A revolution was at hand, and Bardeen and Brattain were at the very center of it.
When it came time for Bell to patent their new wonder, the two researchers were listed as its
inventors. While the point-contact
transistor was the first to be built, it didn't end up being the best option. It was a huge step
forward, don't get me wrong, but it was still a little fragile. Shockley would continue on to
develop his ideas from the failed FET experiments, combined with some of Bardeen and Brattain's work,
into a new creation. After another year of work,
Shockley was finally able to turn all his theoretical underpinnings into a functioning
device. In 1948, he constructed the first bipolar junction transistor. The key difference from
its point contact counterpart is that this new device is made entirely from semiconductors. No tiny metal wires needed. The bipolar junction
is formed from a sandwich of N, P, and N-type silicon. The result is a much more robust
device, and in testing, this new transistor proved to switch faster and operate under
higher loads than Bardeen and Brattain's device.
By the time the decade was out, Shockley's new bipolar junction transistor
would emerge as the clearly better device. By 1951, the first commercial transistors would
start to ship. These early release transistors would all be point contact types. But in the
following years, bipolar junction transistors would start to flow into the market. At least
for audio, this made the vacuum tube totally obsolete.
It was now possible to build radios using only semiconductor components. A diode could do the
heavy lifting of detecting and rectifying the signal, and a transistor could handle amplification.
Even better, these new semiconductor audio devices were minuscule in comparison to their
vacuum tube counterparts,
and they cost much less to manufacture. There was no real reason to ever manufacture another vacuum tube. The new transistor wasn't expressly developed as a digital device. First and foremost,
it was intended as an analog audio amplifier. However, it wasn't long before researchers began investigating its
digital applications. Perhaps unsurprisingly, some of the early adopters of this fledgling
technology came from within Bell Labs itself. Almost as soon as sample transistors were produced,
computer nerds within Bell had their work cut out for them. The key issue at hand was that no one really knew how to use these
newfangled transistors. They were all stuck in the world of vacuum tubes, and while they knew
that the transistors were leagues better, no one had any practical experience with the new technology.
The fact is that transistors, while filling the same role as triode tubes, aren't a direct replacement for vacuum tubes.
They operate at different voltages, they consume power in a different way, and overall, they're
just a little bit different from what computer scientists of the time were used to. The other
issue was that, at least in the early years, projects would be plagued by supply shortages.
The transistor was brand new cutting-edge tech,
so unsurprisingly, there was a very limited supply. I wanted to make a point in saying this because,
well, it surprised me. I had assumed that once transistors hit the scene, everyone just ripped out
their vacuum tubes, smashed them, and plugged in new transistors. But that's not actually the case.
The fact is that in these early semiconductor days,
engineers didn't quite know what to do with transistors. They knew the transistor was a
world-changing technology, but no one had any experience with it yet. One of the many engineers
trying to make use of the new technology was Gene Felker. He had joined Bell Labs in 1945,
initially working in their military systems laboratory.
In 1948, his world would change forever when he came face-to-face with these early semiconductors.
Point-contact transistors, now packed up in small metal tubes, were being passed between
electrical engineers within Bell. Some had already found their way into simple adding circuits and a few other binary circuits,
but nothing outside the realm of simple demonstrations existed. Felker would start
out as just another engineer tinkering with the transistor, but his tinkering would lead to much
more down the line. Over the course of 1948, Felker would build out his own simple transistorized circuits, starting with a
clock oscillator. He started to get a feel for the new device, and this led him to an important
realization. Not only were transistors fast, but when used correctly, they could be remarkably
stable. Felker was able to generate a pulse in the range of megahertz using these new semiconductor
oscillators, each transistor
switching from on to off around 1 million times a second. Once set up and running, the circuit was
able to keep on running at the same frequency with little to no problems. These things weren't just
fast, they were reliably fast. Soon, Felker was building logic gates, flip-flops, and small memory cells using only semiconductors.
And thanks to the transistor, these new circuits were fast and reliable.
The path forward for Felker was already pretty clear.
Digital computers had been around for a few years by 1948.
It's not entirely accurate to say that there was an established pattern of operation for these machines,
but a mold was starting to form.
Following the growing pattern of digital computers built around logic gates would be a relatively safe path,
but there was still the matter of finding an application for a transistorized computer.
It was going to take a lot of time and resources to build a new transistor machine.
Remember, at this point in time, there were only a handful of computers in the world. Felker was dead set on seeing this
technology to its logical end. But to get there, he would need to work up a good pitch for it.
Luckily, it wouldn't be long before Felker found the proper niche. In 1951, he worked up his most
impressive project yet, a fully transistorized multiplier.
This machine could multiply two 16-bit numbers together, and it could do it in a matter of
milliseconds, much faster than anything yet devised. Armed with this enticing demo,
he was able to secure a contract from the US Air Force to try to build a new computer.
Now, this may seem a little bit out of
the blue, but there's good reason that Felker approached the Air Force. During World War II,
Felker had worked with radar systems, and in that time period, that meant analog systems.
And while radar was an impressive and important tool in the military's arsenal,
these early systems were only a stepping stone. They were relatively
simple and inflexible devices, usually built around a cluster of delicate vacuum tubes.
It worked, but it could be made a whole lot better. Radar is just one example. There's a
plethora of these in-flight systems that could stand to be upgraded. The holy grail would be,
of course, getting a computer onto the
airplane. You'd have supreme flexibility and supreme control. But as things stood, that was
completely impossible. A vacuum tube computer is too heavy to get airborne. A machine like ENIAC
was on the scale of 30 tons. Power consumption and heat were also a big issue. A plane simply couldn't
provide enough power in flight and definitely couldn't handle the cooling requirements of an
early computer. The transistor stood in the exact right place to solve every one of these problems.
It was small, lightweight, and it used much less power. It also didn't hurt that it ran a lot cooler.
Felker knew he had a viable solution, and the Air Force agreed. So in 1951,
Felker was able to secure funding and set to work. The contract was technically for Felker to conduct a feasibility study on the possibility of in-flight computers for the Air Force.
But that really just meant that Felker got a lot of funding
to build the first fully transistorized computer.
The project itself was broken down into two phases.
Part one, prove you can make a semiconductor computer at all.
And part two, prove that you can make it fly.
After three years of development,
Felker and his team at Bell Labs
would complete phase one of TRADIC,
the Transistor Digital Computer. It's not a very convincing acronym. But despite the name,
TRADIC wasn't a purely transistor-based machine. Instead, it used a combination of point-contact
transistors and diodes. The stated reason I've seen for the use of some diode logic is that the transistor
was still an unproven technology, at least when it came to computing. Bell Labs simply had more
experience with diode logic circuits, so they wanted to lean on that proven expertise whenever
possible. But you can't totally make a computer out of diodes, so the transistor had to pick up
some of the slack.
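That split between diodes and transistors is worth a quick sketch. Diode-resistor networks can implement AND and OR, but a diode has no gain and no way to invert a signal, so those gates alone can never compute NOT; that's the slack the transistor picks up. Here's a toy truth-table model in Python, my own illustration rather than anything from the episode, with idealized diodes:

```python
# Idealized diode-resistor logic, modeled as truth tables.
# In a diode OR gate, any high input pulls the output high;
# in a diode AND gate, any low input pulls the output low.

def diode_or(*inputs):
    # Output follows the highest input level.
    return int(any(inputs))

def diode_and(*inputs):
    # Output follows the lowest input level.
    return int(all(inputs))

# AND and OR alone are not functionally complete: any circuit built
# from them computes a monotone function, so no combination of them
# can ever produce NOT. Inversion (plus signal regeneration) is
# exactly what an amplifying element like the transistor provides.
def transistor_not(x):
    return int(not x)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", diode_and(a, b), "OR:", diode_or(a, b))
```

So a diode-heavy design like TRADIC's could route most of its logic through cheap, well-understood diode gates, and spend its scarce transistors only where inversion and amplification were unavoidable.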
Another reason, and bear in mind that this is more speculative, is that there just weren't enough transistors floating around. TRADIC Phase 1 was finished in 1954, after commercial
transistors started to appear, but the computer was planned and designed in 51, when options for
transistors were much less plentiful. I'd wager that the designs for
TRADIC were built around the transistor shortages of 51, and couldn't be adapted as the
market changed while the project wore on. All told, the prototype contained only 643 transistors,
but over 10,000 diodes. So it's maybe not totally accurate to say that TRADIC was the first
fully transistor-based computer, but the transistor was a core component of its operation.
Perhaps the better term for the early stage of TRADIC is the first solid-state computer.
Anyway, what did all those diodes and transistors get you in terms of power? What was the benefit?
The best way to describe it is a short passage that's buried in one of Felker's papers on the matter.
Quote,
This machine multiplies or divides in less than 300 microseconds,
and adds or subtracts in 16 microseconds.
It runs on less than 100 watts of power.
End quote.
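Those figures can be sanity-checked with some quick arithmetic. The UNIVAC I numbers below, roughly 525 microseconds per addition, about 3,900 per division, and a 125 kilowatt power draw, are commonly cited figures I'm supplying for comparison; they aren't from the episode, so treat them as approximate:

```python
# Rough speed and power comparison: TRADIC Phase 1 vs UNIVAC I.
# TRADIC figures come from Felker's quote above; the UNIVAC I timings
# and power draw are commonly cited approximations.

tradic_add_us, tradic_div_us, tradic_watts = 16, 300, 100
univac_add_us, univac_div_us, univac_watts = 525, 3_900, 125_000

print(f"divide speedup: ~{univac_div_us / tradic_div_us:.0f}x")  # ~13x
print(f"add speedup:    ~{univac_add_us / tradic_add_us:.0f}x")  # ~33x
print(f"power ratio:    ~{univac_watts / tradic_watts:.0f}x")    # ~1250x
```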
What do all those numbers mean in context?
I think the best comparison point is going to be UNIVAC.
Now, UNIVAC was the first commercially available computer.
It was built using exclusively vacuum tubes, and it hit the market in 1951.
So it gives us a somewhat reasonable baseline.
Division on TRADIC came out to about 13 times faster than
the same operation on UNIVAC. For addition and subtraction, TRADIC is over 30 times faster.
Bear in mind that UNIVAC needed 125 kilowatts of power just to turn on. That's over 1,200 times as
much as TRADIC. This is what semiconductors give you. TRADIC wasn't just a
better computer, it was a whole new type of machine. Even in this rough state, it was showing
that solid state gave you massive benefits, and it removed much of the downside that earlier
computers had. As a proof of concept, Felker knocked it out of the park. But phase one,
well, that was just a proof of
concept. A demo reel for the transistor, if you will. Phase 2, the flyable TRADIC, was
where the new technology got another chance to show off. This chapter of Felker's project
began sometime in 1954. By that point, there were other transistorized computers popping
up, but I'm going to stick with TRADIC because I think it shows off just how much of a game-changer the transistor was.
The plan was to rebuild a computer using the experience gained from Phase 1.
This new system would have to fit into an airplane.
The target was the B-52 Stratofortress,
but for testing purposes, Bell had access to a C-131B cargo plane. TRADIC
also had to reliably operate while in flight, not just in the sterile and static conditions of a lab.
That's a tall order for a number of reasons, biggest among them being, well, how big a computer
is. An airplane, even a large bomber like a B-52, has a limited amount of free space.
Any onboard computer system would have to be designed with that in mind.
In practice, TRADIC would be replacing older analog radar and navigational systems.
It would have a little bit of wiggle room, but it would still have to be relatively small.
The other factor in play was that, unlike Bell Labs, airplanes tend to move around a little bit.
Any flyable computer needed to be able to handle being jostled and jolted.
And finally, TRADIC would need to be modified to survive without environmental controls,
so it would have to be hardened against humidity and temperature fluctuations.
For a vacuum tube system, this wouldn't be possible. Overall, the adaptation
from lab bench to aircraft wasn't that big of a challenge for the team at Bell. Flyable TRADIC
would be a total overhaul, but it retained a lot of the characteristics of the prototype.
This newer, smaller model ended up using a good deal more transistors, going from around 600 up to nearly 3,000 point-contact transistors.
The amount of diode logic was also scaled back as the transistor took center stage.
The shift was partly due to Bell Labs gaining more experience using the new semiconductor device,
and partly thanks to the lessons learned in phase 1. The earlier prototype had shown the
transistors were a lot more reliable than anyone
would have guessed. As Felker explained, quote, the most valuable output of this machine and indeed
one of the reasons for its existence is the reliability figures that are emerging for
transistors and diodes. The TRADIC computer is the first large-scale application of transistors,
and for that reason, we are watching it very closely to see if we can get any clues, end quote.
The reliability numbers that emerged were leagues better than the vacuum tube.
And the flyable TRADIC would show similar reliability numbers.
These results made it easier for Bell to move to more transistor-based logic and less diode circuits.
Transistors weren't just cool in a lab setting.
They were proving themselves as a dependable workhorse.
The other big change for the flyable TRADIC was modifying its input-output circuitry to communicate with radar and guidance systems. But once that was out of the way, the machine was ready to fly. This new model
worked well in flight. It really showed that the transistor could go anywhere, and it could do much
more than earlier technology. And while it would never be in service for real military operations,
its descendants would. And the technology pioneered at Bell
would soon take the world by storm.
All right, it's time to wrap up our series
on the birth of the transistor.
It took a long time to fully realize
the potential of semiconductor technology.
But once researchers at Bell got on the case,
it was only a matter of time before the world changed forever. Inventing the semiconductor
diode and eventually the transistor meant that vacuum tubes were just no longer needed. Unlike
the Cryotron, the transistor had no downsides, and it had no strict requirements for operation
besides a power source. The new device was better than any competition in every way possible.
And thanks to the unique properties of solid-state devices,
computers were able to fit into new niches never before imagined.
In the coming years, one of those niches would be businesses and the market at large.
I want to close on this final interesting aspect. One of the best-selling
vacuum tube computers was probably the IBM 650. The first models of this beast were built in 1954,
and it would stay in production through the 1950s. All told, IBM built around 2,000 of these computers.
But by 1959, there was a new product at Big Blue, the transistor-based 1401. During
its lifetime, IBM produced over 12,000 of these computers. Transistors made it possible to make
more reliable and more powerful computers that cost a lot less. This new generation also had
a lot fewer requirements, especially when compared to vacuum tube systems. You didn't need
as much cooling, or as much power, or even a custom-built room for the mainframe. And I think
that's the biggest legacy of the little transistor. It lowered the barrier to entry for computing.
Previously, computers were kept to the realm of research, government, and big businesses.
With the transistor, computers
started to spread more easily. Sure, you still couldn't buy a PC. Not yet. But smaller companies,
labs, and smaller schools could now afford to become part of the growing future.
Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another episode,
and next time I'm going to do something a little lighter than the last two physics-heavy episodes. And hey, if you like the
show, there are now a few ways you can support it. If you know someone else who would like the show,
then why not take a minute to share it with them? You can also rate and review on Apple Podcasts.
And if you want to be a super fan, then you can support the show through Advent of Computing merch or signing up as a patron on Patreon. Patrons get early access to episodes,
polls for the direction of the show, and other assorted perks. You can find links to everything
on my website, adventofcomputing.com. If you have any comments or suggestions for a future episode,
then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.