Advent of Computing - Episode 147 - Molecular Electronic Computer
Episode Date: December 9, 2024

In 1961 Texas Instruments unveiled the Molecular Electronic Computer, aka the Mol-E-Com. It was a machine that fit in the palm of your hand, but had all the power of a much larger computer. This was in an age of hefty machines, which made the achievement all the more marvelous. How was this even possible? It was all thanks to the wonders of molecular electronics, and a boatload of funding from the US Air Force.

Selected Sources:
https://web.archive.org/web/20160304071831/http://corphist.computerhistory.org/corphist/documents/doc-496d289787271.pdf - Invention of the Integrated Circuit, Kilby
https://archive.org/details/DTIC_AD0411614/page/n15/mode/2up - Investigation of Silicon Functional Blocks, TI
https://apps.dtic.mil/sti/tr/pdf/AD0273850.pdf - Silicon Semiconductor Networks, TI
Transcript
In 1961, Datamation ran a brief story about a new and amazing machine.
To quote,
Mollecom, a molecularized digital computer reportedly one-tenth the size and weight of
a transistorized machine with similar capabilities, is under development at Westinghouse Electric
Corp's Air Arm Division in Baltimore.
The device is aimed at application in the special purpose
field. About 18 months remain prior to installation and, although no commercial application has
been planned for the present, Westinghouse officials state there is nothing to keep us
out of the commercial field." That sounds exciting, if not a little tenuous.
There's this super-modern molecular computer being developed over in Baltimore.
By 1961, we had already seen the powers of miniaturization.
This new approach could be revolutionary.
So what happens next?
The answer is, not a whole lot.
At least, not publicly.
News stories about the Mol-E-Com last into 1962 and then vanish.
Did this little wonder fizzle?
Or did it find its way into some secret project that's still obscure to this day?
This is one of those mysteries that I simply can't ignore.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 147, Molecular Electronic Computer.
So, time for a story.
When I was down at VCF West this year,
I picked up three volumes of translations
of old Soviet computer articles.
The first paper in the very first volume
is a survey of the state of computing outside the USSR
as of 1962.
Now that might not sound like the most exciting thing
in the world, but to me, that's a really, really interesting topic.
It's a source that tells us how researchers inside the USSR
view developments in other parts of the world.
Pretty niche, but it's one of those fine instruments
that can be used in just the right discussion.
Eventually.
The paper is mostly what I expected, some details about the state of the computing market,
how fast the US is growing especially, and a broad overview of some computer applications.
Its sources are actually just a bunch of trade magazines, so somehow those were getting past the Iron Curtain.
That's something I really want to investigate later:
were Datamation and Computers and Automation
just being sent to some random address in Moscow?
But that's a totally separate topic.
Anyway, I ran into something I had never heard of
in that paper: the Mol-E-Com.
It's described as a molecular electronic computer that weighed less than a pound in the earliest
days of the 1960s.
That, dear listener, sounds straight up fake.
It was just written in such a way that it triggered all my alarm bells. There weren't computers in the 1960s that could fit in the palm of your hand, right?
That just doesn't line up with what I know about computer history.
There's also the fact that I was reading this in a translation of a Soviet journal.
Maybe this was a hoax. Maybe this was bad intelligence, or maybe this was a CIA op.
I just had to dig a little bit.
And as it turns out, the Mol-E-Com is very real.
But the deeper I looked, the more strange the story became.
Today, we're going to cover the story and try to figure out just what the Mol-E-Com was,
why it was developed, and what happened to it. The
second part is especially interesting to me. If there was some wildly futuristic
microcomputer in 1961, then where did it go? Why wasn't it in newspapers for years
to come? Why does it disappear from the news cycle within the very same year? Why
is the 60s characterized by big machines if we have
this machine at the start of the decade that's so small? It makes my mind reel,
which I think makes the Mol-E-Com worth looking into. It's also important to note
that in the larger historical context, the Mol-E-Com would upset some narratives.
I've always been told, and it's common
knowledge if you go looking around, that the first computer to use integrated circuits
was the Apollo Guidance Computer. That machine became operational in 1966. That's always
struck me as a really neat story. The Apollo program was so far ahead of the curve that it required a totally new kind of computer.
It was blazing a trail all the way to space.
That's some good stuff.
But if the Mol-E-Com is real and if it is miniaturized using silicon,
well, that upsets the whole story, doesn't it?
Today, we're in for one of those context episodes.
We're going to be working our way up to the machine,
because we need to really set the stage for how something like the Mol-E-Com came to be.
For all those who get a little angry when the title card doesn't show up in the first few minutes,
sit back and just enjoy this one.
Now, to delay a little more, before we get started, I have
two big announcements I need to make. February this year is going to be a very busy month
for Advent of Computing. First off, I'm going to be presenting at the Intelligent Speech
2025 conference. That's on February 8th of this year. You can get tickets at IntelligentSpeechOnline.com.
Intelligent Speech is this great online conference where history podcasters get together with
their fans to share stories.
Each year is themed.
The last Intelligent Speech I spoke at was about backup plans, which was a lot of fun,
and this year is on deception.
If you want to get in on that, there are still tickets available at IntelligentSpeechOnline.com for February 8th and it's all over the internet,
so no travel needed. The second announcement is that I'm going back to VCF SoCal. This
is year number two of that conference. I'm also really excited about this one, but it's
the very next weekend! It's happening February 15th and 16th at the Hotel Fera event center in Orange, California.
That's pretty close to LA. I'm going to be speaking and I might be moderating or
being part of a panel again this year. We'll see. The details are still getting
hammered out but if you haven't been to a VCF event before, and especially if you're in the SoCal area,
I highly recommend stopping by.
I'm gonna be there all weekend,
so I look forward to seeing at least some of you,
either online or in person in February.
All right, with that said, let's get to the topic at hand.
There are four main ways to make a computer better.
You can make a much smaller machine, you can make one that's cheaper, you can make one
that's faster, or you can make one that uses less power.
If you can hit any of those points then you likely have a winner on your hands.
History is full of machines that just push one of those edges a little further.
But if you can hit multiple points, well, then you're really onto something.
It's also a neat little law to apply.
Why does the transistor replace the vacuum tube?
Well, a single transistor is obviously much smaller than a vacuum tube, but it does about
the same job.
If you look at the specs, a transistor also uses way less power, since it doesn't need
a heating element.
Switching speed for transistors is also faster.
As for cheaper, well, that would come in due time.
In the early 1950s, a new theory started to be passed around the electronics research
community.
G.W.A. Dummer, an engineer out of England, described it like this, quote:
With the advent of the transistor and the work in semiconductors generally, it seems now possible to envision
electronics equipment in a solid block with no connecting wires.
The block may consist of layers of insulating, conductive, rectifying, and amplifying materials, the electric functions being connected directly by cutting out areas
of the various layers." That should sound wild, especially given the time period. Dummer
sees these new transistor things and just figures, yeah, we can basically reapply
that same logic.
We can make other crystals that do other fabulous things.
Why not?
With the pace of progress in this period, anything would have seemed possible.
So why not imagine just a big hunk of crystal that can do any computing or calculating you
could ever want?
This core idea, of using the properties of materials to create a new class of electronics,
gained a lot of popularity in this period. The thread we're following today
has to do with three main players that got taken in by this new idea of crystalline electronics.
Those are Westinghouse,
Texas Instruments, and the US Air Force.
In the mid-50s, Westinghouse started investigating what they called
molecular electronics.
Gene Strull, at the time a researcher at Westinghouse, described the early program like this, quote,
It came from a concept started by Von Hippel at MIT
called molecular engineering.
And what he felt was,
instead of doing a lot of things by trial and error,
you could use spins, charges and fields
and come up with things.
He was thinking of magnetic materials, special alloys,
and electronics, called molecular electronics,
would be a version of that.
Once again, this can sound a bit out there,
but it's all rooted in reality.
We're talking about using quantum physics
as applied to material sciences,
trying, instead of just guessing and checking our way
to a new type of electronics,
to build up a new theory based on first principles.
That's some pretty heavy-duty physics, but
it's a very reasonable approach. It's a very scientific approach. During these early days,
molecular electronics were looking at a number of different materials. Eventually, things
focused around semiconductors. We can take a semiconductor diode as an example of this science.
In that device, current only flows in one way.
In the past, this was only possible, reliably, using vacuum tubes.
With the creation of the first semiconductor diodes, we made the jump from these complicated
glass ampoules to tiny crystalline structures with little metal wires.
A semiconductor diode works because of the properties of a microscopic junction.
The properties of materials on each side of that junction, and sometimes certain
materials that form inside that junction itself, cause the diode to function and
they define the diode's specific properties. That's very, very different from the process behind vacuum tube rectifiers.
In the 50s, this was a totally new type of technology,
totally removed from anything else in the field.
This molecular approach offered some huge advantages as well.
For one, a molecular device can,
in theory, be wildly miniaturized. A transistor is smaller than a vacuum tube, a semiconductor
diode is, well, also smaller than a vacuum tube. Tubes really used to do a lot of stuff.
A piezoelectric speaker is smaller than a cone speaker. This lets you cram these devices into more places.
For two, these devices use less power.
You might see where this is going, right?
The idea is that you can get all the benefits of the transistor just applied to other types
of devices.
But how do you accomplish this?
How do you actually get molecular electronics?
Well, therein lies the challenge. In the late 1950s, Westinghouse starts pitching this idea
to anyone who will listen and, you know, anyone who has a lot of money to throw around. This
effort ends up focusing on the US military, because they always seem
to have some extra cash that they're willing to part with. But, as always, the
different service branches really wanted to go their own ways. This one young dude
named Jack Kilby, who worked at Texas Instruments, had convinced the Army to
go with these things called micro-modules. They were just fancy packages for discrete electronic components, nothing really that
groundbreaking. The Navy was interested in these things called thin film electronics,
but the Air Force was still a little undecided. Westinghouse started their campaign in 1957, and they'd eventually secure Air Force funding
in 1959.
The USAF poured millions of dollars into this new, well, science?
This new field?
In the documents, it's just called the Molecular Electronics Program, but the concept itself
is so wide-reaching that it seems larger than just a program.
Whatever you want to call it, Westinghouse had got there first. They secured the bag, so to speak.
That also means that, going forward, this program is kind of viewed as a Westinghouse
production, even when that's not entirely accurate.
So the question becomes what did this lab do with all those defense dollars?
The new program was created as part of a much larger research structure inside the US Air
Force, and just to be clear, this wasn't a case of Westinghouse strong-arming the Air
Force.
They bought the pitch because it aligned with their goals. Here's how the Air
Force themselves describe the new program. Quote, the Electronic Technology Laboratory at the
Aeronautical Systems Division, formerly the Wright Air Development Division, has a dual-purpose
basic research program in progress to spearhead realization of the concept of molecular
electronics. The first goal is to understand and classify solid-state
phenomena. The second is to mathematically relate phenomena to circuit functions.
This basic research program is directed by the advanced techniques branch of
the electronics technology lab in such a manner that contractual efforts are designed to complement
in the house research tasks."
End quote.
As usual, it's a labyrinthine structure.
Those in-house research tasks, however,
were primarily related to missiles, planes, and satellites.
Up to this point, guided missiles were somewhat crude at best. If
some new circuit technology emerged, something that was small and didn't use
all that much power, then missiles could be made far more accurate and
sophisticated. Molecular electronics also offered the possibility of
improving planes in wild and nearly inconceivable new ways. The Air Force stood to benefit
greatly if this project bore fruit. Westinghouse would also stand to benefit
greatly. If they were able to create viable molecular electronic systems,
then they'd stand to get huge orders of parts from the Air Force. But time would not be kind to poor Westinghouse.
Remember that young Jack Kilby fellow? Well in 1958 he creates the first integrated circuit.
Well, kinda. Caveats apply. If we're talking about ICs in general, then they actually come out of Dudley Allen Buck's lab in 1957. Those
very first ICs were actually cryotrons. They were superconductive circuits. But that's
usually overshadowed by the semiconductor IC, since semiconductor technology is the
branch of the family that we still use today. The first semiconductor IC is created at Texas Instruments in 1958 by Jack Kilby.
That's the most technically correct line.
There are still some caveats even in just that sentence.
Kilby's first IC used tiny conductive wires to connect discrete components that were each
embedded in a germanium wafer. Silicon ICs come a few years later, as did so-called monolithic ICs,
which included the wiring and connections inside the wafer itself.
I think that covers all my bases. The point is, in 1958 there's a new technology on the block that hits our four
big points. ICs can, in theory, lead to cheaper, faster, smaller and less power-hungry circuits.
What's more, these new integrated circuits sounded an awful lot like molecular electronics.
At least, it seems that way in hindsight. At the time, there was still
some debate. This goes back to the very early days of the molecular electronics
program, when it was still at Westinghouse and even before. For some
reason, it was believed that molecular electronics should not contain
resistors. From what I've read, the explanation came down to waste heat, since resistors dissipate
power as heat.
Kilby's ICs contained resistive elements, so there was initially some resistance from
the Air Force, but that would be overcome in time.
Execs at TI would take this new integrated circuit idea to the Air Force and try to get
in on some
of that molecular electronics money.
The actual pitch that TI gave was called the solid circuit.
Once again, this is just another name for the IC, but there are some key differences.
Remember, these are very early days for the technology.
Things moved very fast in the waning days of the 1950s.
Kilby's first ICs are technically called hybrid ICs, because they're a hybrid of integrated
components and conventional wires.
By the summer of 1959, Robert Noyce had found a way around the hybrid approach, by integrating
the very connections between components onto the semiconductor wafer.
In less than a year, the technology had, once again, shifted.
That's not to mention all the changes in how components were actually formed on the wafer.
This means that tracking exactly what's going on at any one time is a little
tricky. When someone says they were using ICs in 1959, do they mean hybrids or monolithics?
Are they using silicon or germanium? Mesa or diffusion transistors? This gets especially
strange if we're talking about research and not production models.
The main sources I'm going to be using for the rest of the episode are unclassified progress reports
that TI submitted to the Air Force in 1961 and 62.
I found that those, plus a little extra side reading,
give enough specifics to put things together properly.
Just a quick note about these sources, because they are pretty interesting.
Reports are basically the equivalent of research papers, but for federal contractors.
As a contractor completes work or hits some deadline, they have to issue reports back
to whoever's paying them.
That means that any federally funded project has some kind of paper trail. We've lucked out with the Molecular Electronics Program because the project was unclassified.
That means it was never kept secret from the public.
We don't need to issue Freedom of Information Act requests or wait for statutes of limitation to run out.
We can just cruise publicly accessible files, which is pretty neat.
The only complication is that reports are numbered.
When a report cites another, they tend to use contract and report numbers.
The contract that led directly to the Mol-E-Com is called AF600-42210, for instance.
It references a handful of other contracts which, luckily, are unclassified.
But the trick is following those numbers. I found that text search is my friend here,
but not always. The OCR on documents sometimes messes up numbers.
Anyway, the bottom line is, you get a fun spiderweb of reference numbers, but with a little bit of work and finagling, you can get to the bottom of things.
So, how do we work up towards the Mol-E-Com?
In late 1958, TI shows some of the very first solid circuits to the Air Force.
By 1959, the first contract is issued, and right away things change fast. I mean, just to start with, the program
was begun by Westinghouse, and now there's suddenly this interloper from Texas of all
places. But there's also the changes on the technology front. Kilby had initially used
germanium, which works just fine as a semiconductor, at least for the most part.
It turns out that silicon is a better material as a substrate, as the crystal that's used
for the wafer in an IC.
Silicon is much more heat resistant than germanium.
It's also far easier to source.
I mean, the stuff comes from sand, while germanium is a relatively rare element that has to be mined.
Silicon is also less leaky, so to speak.
Germanium transistors tend to leak a little current even when they're off.
This would mean that binary logic using germanium would be a little bit skewed positive.
The issue is that silicon is a little harder to work with.
Germanium turns out to be a much more forgiving material.
For silicon to work to form transistors and other components, it has to be highly pure.
Germanium, on the other hand, doesn't. The first germanium transistor was created in 1947,
while the first silicon transistor didn't carry current until 1954.
That's just one example of how long it takes to update a process.
Multiple labs were trying to make that same jump for ICs.
Ultimately, Noyce over at Fairchild would be the first to succeed in July of 1959.
TI was also moving fast, but they were a little late to the punch.
There's kind of a story of companies getting swooped.
From one of my prized contract reports, quote,
In June 1959, this contract, AF33616-6600, was begun, investigating the use of single crystal semiconductor material for
complete circuit functions.
The objective of the program required, and this is a list, all circuits shall be formed
from semiconductor material.
The base material must be silicon.
Smallness and reliability are primary goals. Circuit functions would be oriented
primarily towards airborne electronic equipment applications. Military environment conditions
should be considered for operating requirements." This contract was being handled by Kilby personally. Okay, a few fun things about this contract.
It's actually issued literally a month before Noyce's first monolithic silicon IC.
That's just kind of funny, right?
TI gets this contract, and then Noyce is just like, oh hey, I already did this for you, buddy, don't worry about it.
I can't help but laugh at tight timelines like that.
Now the second thing to note is the language here.
Single silicon crystals with embedded circuits.
That's what an integrated circuit is. It's a single crystalline structure that can perform little tricks for us.
That's also exactly what molecular electronics promised back before any of this was real.
So the old sources aren't just mid-century sci-fi, they were presaging ICs, they just
weren't using modern language.
The more important point here, however, is the applications. This contract, and indeed the series of contracts that end up
producing the Mol-E-Com, are all military-oriented. The US Air Force isn't funding
molecular electronics research out of the goodness of their heart. They wanted
a weapon out of this. Silicon chips could, in theory, survive more extreme conditions. They
could also, in theory, use less power and be smaller. That's handy for airborne electronic
equipment. Read: bombs, missiles, planes, satellites, etc.
So yes, TI got beaten to the punch as far as a functioning silicon IC, but that isn't the biggest deal
in the world, it turns out.
Part of the mandate of molecular electronics was to quantify and investigate exactly how
new components work.
You have to work from first principles, or as the report calls it, an investigation of
semiconductor phenomena.
Despite Noyce's creation, there was still a whole lot of work here to be done.
I hope you're getting the shape of things.
We're in one of those transitionary periods.
Nothing is yet set in stone, changes are coming very fast, and even the language in the literature
is unsettled.
That means there's actually a lot of space
for some weird things to happen.
This takes us to another feature of these contract reports. The goal here is two-fold.
On the one hand, the Air Force wants TI to conduct some science for them, so we get all
kinds of data and information and rigor and experiments, all that good first-principles spins-and-charges stuff. On the other hand, the Air Force wants TI to make
them chips that they can put in missiles and rockets and planes and any other
weapons they can think of. So each report also has details on manufacturing, on
demonstration units, and how production can be scaled up to meet Air Force
demand.
Basically, they always end with some kind of product to show off.
For this first report, Semiconductor Single Crystal Circuit Development, the end products are working semiconductor networks.
I'm going to be throwing out a lot of lingo here, so prepare.
A functional electronic block or FEB is just an element of an integrated
circuit. Think an integrated transistor or a resistor, all stuff we've already seen.
By connecting up those FEBs, you get what's called a semiconductor network. That's just
another form of an IC. That's boring, right? This just sounds like
what Kilby was already doing, and what Noyce was also already doing. The process TI used
is a little unique. First, FEBs would be created on the wafer through a few different processes.
Then an insulating layer would be deposited on top of the wafer. That layer was put on through a stencil,
so there were gaps where the underlying silicon was exposed.
Then a layer of aluminum was deposited
on top of the insulator, also using a stencil.
This electronically connected the FEBs,
creating a network or a circuit.
Careful lingo here is key because remember, this is
molecular electronics. We aren't in the normal discrete world, we're not making circuits,
we're not using wires and components. We're growing crystals for the Air Force to blow
things up. In some of his later writings, Kilby made it sound like everything was fine
once the contracts were signed.
Like the hardliners in the Air Force were okay with ICs now, they gave up on all their
hangups around molecular electronics.
But the language in these reports paints, I think, a different picture.
It really sounds like Kilby et al. are purposefully casting their work in terms of molecular electronics in
order to keep the cash flow happy.
So TI might be kind of tiptoeing their way around things, yet still, they make great
progress.
TI ends up with functioning silicon ICs.
These are fully monolithic, meaning that during this contract, they were able to catch up
to Noyce.
In 1960, this technology would be unveiled to the public. TI called it the Series 51 family of chips.
While exciting, these are still very, very primitive. The largest downside is density.
Series 51 was a collection of compatible chips, meaning they used the same logic levels and had similar clock speed restrictions. They were made to be used together, but each
chip in the family was slightly different. At best, you could expect a handful of transistors
on a single chip. The most complex was a triple NAND-NOR chip, the SN5161, that contained a whopping, for the time, nine transistors.
Now, that is a big leap forward. Let's take the triple NAND as the best case
here. Each NAND gate was implemented using three transistors, three capacitors,
and three resistors. Without integration, you'd have to build
that circuit up using discrete components. Each of those components has its own little
housing. A single transistor may be small, but it still occupies some space. Add in the
space for wires and the through holes on your circuit board, and size is no longer negligible. This new IC on the other hand is tiny in
comparison. Let's just take the pin out since just counting pins is a very
simple way to visualize this. A 5161 was packed into a housing with 14 pins. That
means you'd have to have 14 holes on your circuit board to access all its functions.
The discrete transistors of the circuit alone
would require 27 through holes.
So you could rig up a circuit board with just 14 holes, or you could work up some custom thing with
27-plus holes, plus all the other support components.
What we're seeing is a reduction in size and complexity.
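Just to make that hole counting concrete, here's a quick back-of-the-envelope sketch. The transistor and lead counts come straight from the numbers above; this is illustrative arithmetic, not a TI spec sheet.

```python
# Back-of-the-envelope through-hole comparison for a triple NAND circuit,
# using the counts given above. Illustrative only, not a TI spec.

GATES = 3                  # the SN5161 packs three NAND gates
TRANSISTORS_PER_GATE = 3   # three transistors per gate
LEADS_PER_TRANSISTOR = 3   # emitter, base, collector

# Discrete build: every transistor lead needs its own through hole,
# and that's before counting the resistors and capacitors.
discrete_holes = GATES * TRANSISTORS_PER_GATE * LEADS_PER_TRANSISTOR
print(discrete_holes)      # 27

# Integrated build: the whole circuit rides on one 14-pin package.
ic_holes = 14
print(discrete_holes - ic_holes)  # 13 fewer holes, transistors alone
```

And that 27 still ignores the nine resistors and nine capacitors a discrete build would also need holes for, so the real gap is even wider.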
Power savings were also a huge factor.
The 5161 dissipated 2 milliwatts in waste heat.
Discrete transistors vary quite a bit, but the power dissipation is usually in fractions
of a watt.
Yet again, these little chips are a big deal.
Let's get back to density. The Series 51 was impressive at the time, but compared to later
chip density, it's really nothing. It's small potatoes. The first commercial CPU, the Intel 4004, packed over 2000 transistors onto a single chip.
Nine transistors per chip is a cool number, but once we get to the complexity of say,
a computer, well, the bits need more.
This is where we get into the machine itself.
In 1961, TI issues another one of those contract reports.
This one is titled,
Silicon Semiconductor Networks Manufacturing Methods.
It's a continuation of the first report.
In it, TI explains how they're starting manufacturing
of silicon ICs, what their assembly line looks like,
and how they're scaling things up.
But that's just the first 80 pages.
The report ends with a shockingly detailed description of what they call the ASD Semiconductor
Network Computer.
This is the machine that eventually gets called the Mol-E-Com, but it was officially the ASD
computer, for Aeronautical Systems Division. The goal was to demonstrate that Series 51 chip
technology could be used to create a tiny but functional computer. But I just said the density
was an issue, right? Well, TI found a bit of a trick. They found a way around the density problem. The final ASD computer was truly tiny, just 6.3 cubic
inches in volume. Later we get these glamour shots of the computer being
held in the palm of a man's hand. That's truly staggering for the time. It's still
kind of exciting to see today. So what's up with this machine and what exactly was TI's trick?
I'm not going to build any more suspense. TI got around the low density of their chips by simply
stacking silicon wafers. The ASD computer is composed of these tiny modules that each contain
between 8 and 16 stacked wafers of silicon. ICs are actually pretty thin, they're just slivers of semiconductor,
so stacking them up saves a lot of space. Even with full casing and everything, the
stack just kind of looks like a suspiciously tall microchip. Stacks were each housed in these little metal cases, the pins grouped into a grid on the
bottom of the housing.
But it was more than just piling up chips and routing around wires.
Stacks could have internal connections.
So a module wasn't just 8 or 16 triple NAND chips, it could implement more complex logic
internally.
From there, the computer was constructed in a pretty familiar fashion.
Modules slotted into a backplate, which connected them all up.
In total, 47 modules were used to construct the machine.
That design is very similar to, say, the PDP-1.
Generic modules are wired up externally.
That means it's easy to switch out modules if one breaks, and that all the actual logic
that makes the computer, well, the computer, is just dictated by simple wires, by the interconnects.
It's a slick design that was used quite a bit in the period.
The backplane had connectors for peripherals.
These amounted to a paper tape reader,
a small numeric display, and a keypad.
So yes, technically when the machine was all hooked up,
it was a little bigger than 6.3 cubic inches,
but the brains of the machine that included memory
were very small.
The ASD computer also made very aggressive use of the
Series 51 line. Its logic was mainly composed of NAND-NOR chips, the ones we've already talked about.
Memory used another class of Series 51 chips, the beloved flip-flop. Now, we've talked about this
kind of footwear on the show before,
but it's been a while, so I think we can have a refresher. We've earned it.
A flip-flop is a circuit that stores a state, either on or off. That value is sent to an output
pin. They also have input pins that are used to either set or reset the state. Crucially,
a flip-flop can stay in one state as long as it has power.
For this reason, they're sometimes called latches. Put another way, a flip-flop is a one-bit memory
element, and if you're careful, you can even walk around in them in most weather. If you put these
together in a sophisticated circuit, you can make random access memory.
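The set-reset behavior described above can be sketched as a tiny behavioral model in Python. This is just an illustration of the one-bit-memory idea, not a gate-level model of an actual Series 51 flip-flop.

```python
# A behavioral model of a set-reset flip-flop as a one-bit memory element
# (an illustration of the idea, not a gate-level Series 51 circuit).

class FlipFlop:
    def __init__(self):
        self.state = 0  # the value presented on the output pin

    def set(self):      # drive the stored bit to 1
        self.state = 1

    def reset(self):    # drive the stored bit to 0
        self.state = 0

    @property
    def output(self):
        # The state persists between operations, as long as "power" is on.
        return self.state

bit = FlipFlop()
bit.set()
print(bit.output)  # -> 1
bit.reset()
print(bit.output)  # -> 0
```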
But on the other hand, you can put them together in a less sophisticated circuit and form sequential
memory.
That's the route TI went.
You can wire up flip-flops such that at the beat of a clock signal, data will propagate
down a chain of flops.
That's called a shift register because you're shifting
data down the register. You can turn that into recirculating memory by essentially looping the chain of flip-flops back onto itself. Memory read and write operations are simply timed so that the right bit is passing by some fixed read location at the correct moment.
It's a very primitive way to store data.
Very early computers like EDVAC used a similar style of memory, at least conceptually.
EDVAC used mercury delay lines, which circulated bits in a tube full of hot mercury, but the
concept is the same.
You're propagating a signal down a line,
picking it up, reading it, then putting it back down the same line.
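That loop-and-tap scheme is easy to model. Here's a minimal behavioral sketch in Python, with the loop length and addressing purely illustrative rather than taken from the ASD computer's documentation.

```python
# A behavioral sketch of recirculating shift-register memory (loop length
# and addressing are illustrative, not taken from TI's documentation).
from collections import deque

class RecirculatingMemory:
    def __init__(self, length):
        # The loop of flip-flops; each slot holds one bit.
        self.loop = deque([0] * length)

    def tick(self):
        # One clock beat: every bit shifts one position toward the tap
        # at index 0, and the bit leaving the tap re-enters at the end.
        self.loop.rotate(-1)

    def read(self, position):
        # Wait until the wanted bit is passing the fixed tap point...
        for _ in range(position):
            self.tick()
        value = self.loop[0]
        # ...then keep clocking until the loop realigns.
        for _ in range(len(self.loop) - position):
            self.tick()
        return value

    def write(self, position, value):
        for _ in range(position):
            self.tick()
        self.loop[0] = value  # drop the new bit in as it passes the tap
        for _ in range(len(self.loop) - position):
            self.tick()

mem = RecirculatingMemory(10)
mem.write(3, 1)
print(mem.read(3))  # -> 1
print(mem.read(4))  # -> 0
```

The key point the model captures is that there's no addressing hardware at all, just a clock and a single tap. That's why this style of memory is so cheap to build and so slow to use: on average you wait half a loop for any given bit to come around.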
At this point in the dawn of the 1960s,
recirculating memory was a little outdated at best.
Period machines used magnetic core memory, which was fully random access.
There had been entire intervening generations of
storage between core and delay lines at this point. On its face, that's a little weird: this little computer is using kind of an old style of memory storage. But things get stranger.
The ASD computer wasn't a von Neumann architecture machine. It was Harvard architecture.
That means that instead of storing code and data in one big chunk of memory, it stored code and data separately.
That, once again, is an older design style, at least by and large. There are some very niche
applications of the Harvard architecture, but most contemporary machines were using the von Neumann architecture.
So you may ask, why is this older form of memory showing up in such a hyper-modern computer?
The ASD computer was on the very cutting edge of silicon technology.
There was literally nowhere else this could exist.
There are a few possible reasons at play here.
In an oral history, Cragon,
one of the developers of this machine,
says that he just kind of went with what he knew
and what some other engineers that he hired knew.
So we could leave it at that, right?
They're just going with what they're comfortable with.
But there are some benefits to this approach that, I think, make it a pretty smart choice.
The big one is simplicity.
The report describes the machine like this, quote, the ASD computer was designed and built
to demonstrate the use of semiconductor networks in a digital
application."
It's meant as a demo of new technology.
The computer was only designed to show off how far these new silicon chips could go.
Recirculating memory was primitive, but it was simple and very easy to implement.
The same goes for the Harvard architecture.
It's just a little bit less flexible, but leads to some easier design down the line. There's also
the fact that recirculating memory can be easily implemented purely in silicon.
This actually links up to something we recently talked about in a weird way.
Maybe you remember the specific part of the Konrad Zuse series that I'm talking about. If not, then let
me jog your memory. The Z3 computer used relays to implement both logic and memory. It was totally
homogeneous in that regard. I said that was a strange approach, since historically memory tends
to be implemented with different technology than logic. That's changed today, but in that historical period it really held true. Well, the ASD
computer is another exception that proves that rule. Magnetic core was the
technology of the day, which was being paired with discrete transistor logic.
That worked pretty well, but it had a huge flaw. It didn't show off just how cool
Series 51 silicon networks could be. In that context, it makes a whole lot of sense that the
ASD computer used silicon memory, and further that it used a very primitive form of that memory.
This approach would also line up with TI's larger goal. This is part of an Air Force funded project.
Smallness and reliability are primary goals after all.
Silicon based memory is more than a neat demo.
It would have some advantages over magnetic core.
It would be more heat, shock and impact resistant.
And it would also be smaller.
At least, I think. Maybe. In theory.
Okay, so this led me down a rabbit hole to a lot of pretty hand-wavy math.
So check this out: the ASD computer had 16 words of memory for data and 16 words for code.
Now, I know that's not a lot of memory, but it's a demonstration unit. Its words were 10 bits wide, so that comes out to 320 bits of storage.
Once again, not really that good.
Now, the documents don't give us a module-by-module breakdown of the system.
Rather, they give us a flip-by-flop breakdown.
That's not super useful because each module has between 8 and 16 wafers, so there's variance there.
I'm just going to assume the worst case and say that only low density modules are used for memory.
That leads to some bad math and it implies that the computer is actually like 90% memory, which
isn't the case, but this is just a ballpark thing for density calculation.
When I run that, it gives us a density of 8,300 bits per cubic inch.
How does that compare to the rest of the field? Well, magnetic core only ever reached 18 bits
per cubic inch. The units are a little strange and I'm making a few
assumptions here but I hope the idea comes across. The ASD computer didn't
have much memory but it demonstrated that silicon could in theory make much
more dense memory systems. 320 bits isn't practically useful and recirculating
memory has a bunch of known flaws, but this
does show off what the new technology could be capable of.
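For the curious, the word-count arithmetic is easy to check. The memory volume used below is a made-up placeholder purely to illustrate the bits-per-cubic-inch units; it is not a figure from TI's reports, and the episode's 8,300 number comes from the host's own module estimates.

```python
# The episode's word-count arithmetic. The memory volume is a hypothetical
# placeholder to illustrate the density units, NOT a figure from TI reports.
data_words = 16
code_words = 16
word_width = 10  # bits per word

total_bits = (data_words + code_words) * word_width
print(total_bits)  # -> 320

memory_volume_in3 = 0.04  # hypothetical volume for the memory modules alone
density = total_bits / memory_volume_in3
print(round(density))  # -> 8000, same order of magnitude as the estimate
```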
In fact, we've seen where this leads.
Once again, with the power of hindsight, we know that IC memory becomes the wave of the
future.
By the 70s, silicon memory is the standard. That paves the way for
modern computing. And we're just getting a glimpse of that future inside this tiny machine.
Now, as always, I tend to say that memory is the true heart of a computer. The memory system of
this machine is primitive, and that primitive nature is reflected in the rest of the machine.
The ASD computer's arithmetic circuits are all serial, meaning that it operated
on numbers one bit at a time. This is in contrast to contemporary computers that used parallel
math circuits that operated on all bits of a number at once.
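To make the serial-versus-parallel distinction concrete, here's a sketch of a bit-serial adder in Python: a single full-adder stage reused once per bit, least-significant bit first. The 10-bit width matches the ASD computer's word size, but the logic itself is generic, not TI's actual circuit.

```python
# A behavioral sketch of a bit-serial adder (not TI's actual circuit).
# One full-adder stage is reused once per clock cycle, with the two
# operands fed in one bit at a time, least-significant bit first.

def serial_add(a, b, width=10):
    carry = 0
    result = 0
    for i in range(width):          # one clock cycle per bit position
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        # The single full adder: sum bit and carry-out.
        s = bit_a ^ bit_b ^ carry
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
    return result                   # final carry is dropped: 10-bit wraparound

print(serial_add(300, 200))   # -> 500
print(serial_add(1000, 100))  # -> 76 (1100 wraps mod 1024)
```

A parallel adder would compute all ten sum bits at once using ten full-adder circuits; the serial version trades nine of those circuits for ten clock cycles. That's exactly the size-for-speed bargain described below.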
Why go for more primitive logic here? Well, for demonstration purposes, of
course. Our man Cragon even says in that same oral history that they chose
serial math circuits because they thought it would be smaller. Really,
that's the name of the game. Let's take an adding circuit as an example. For a
serial adder, you just need a 1-bit adding circuit. To add
two 10-bit numbers, you would just feed that data through one bit at a time. You use the
same one circuit 10 times in a row. Thus, you in theory need fewer chips. The downside
here is speed. If you have a parallel adding circuit, then it only takes one operation
to add those 10-bit numbers. But the simplicity here, at least in theory, can't be beat. The
ASD computer is optimized for size, so simplicity always wins in its design. That's a theme
that I find really funny in this machine. It's so futuristic. The computer fits on the palm
of your hand. It's created with sci-fi technology. It's literally made from
crystals with tiny designs etched and printed on their surfaces. What's more,
this is the first computer with that distinction. This is the first computer made with magic rocks. Despite
that, it's a darn primitive machine. TI isn't really cheating. They're just, well,
optimizing for size. By using an older design approach, they're able to create
this wild demonstration unit. There's also a little set dressing going on here. I
don't know if this was intentional, but the ASD computer itself looks futuristic.
The actual computer is this slim square with the modules all exposed.
They're even set in this grid of solid milled copper.
Now that was there for heat dissipation, but in photos, it makes it look almost gleaming.
Each module has a set of recessed dots colored in red, orange, black, and blue.
It looks for all the world like a sci-fi prop, like it would show up in an episode of Star
Trek.
TI even made an identical machine, at least in terms of architecture, using discrete components.
This was done so there would be a one-to-one comparison point.
In press and in the reports, you get these glossy photos of the conventional computer next to the ASD computer.
You can literally see the size difference and it's staggering.
I think it's also a little telling that the conventional model doesn't have the same sci-fi styling as the ASD computer.
It's just a big box with exposed circuit boards.
The final piece here is the whole name thing.
You've probably noticed I haven't been calling it Mol-E-Com.
That's because in all the reports I've read, it's called the ASD computer, or the ASD Semiconductor Network Computer.
But hey, ASD-SNC doesn't really have a nice ring to it. Doesn't have any ring at all.
But in news coverage, it's always called Mol-E-Com or the molecular
electronic computer. What gives?
I think this is just a case of news coverage choosing a better name, and also
just kind of messing up. If you remember from the very beginning, that quote I gave was talking about Westinghouse's new molecular computer.
But Westinghouse didn't make the Mol-E-Com, so there's already a bit of confusion in the press.
Now, officially, TI reports say that the ASD computer is a molecular electronic computer.
That's how they always describe it.
It's a Mol-E-Com.
We even get this fun photo of the machine all set up with its keypad and a big cardboard
sign that says, Experimental Molecular Electronic Computer.
In 1961, TI publishes a booklet about the ASD computer. It's titled A
Molecular Electronic Computer by TI. It uses those three magic words not to name the machine, but to describe it.
It's those three words that seem to stick in the press. It gets shortened in headlines to Mol-E-Com.
At least I'm pretty sure that's what's going on.
I haven't found any TI documents that call it Mol-E-Com.
In all official sources, it's labeled ASD.
So that's the computer.
It's a wildly futuristic demo machine that uses some older design principles to show
off just how far silicon can go.
At least, how far silicon could go in 1961. Where do things go from there?
There is a mention of continued evolution of the ASD computer in reports,
but I can't find anything about other experimental computers at TI until the latter part of the 1960s.
Where then did Mol-E-Com lead? Well, in a word, it led to Minuteman.
In the late 50s, the Air Force had started experimenting with land-based intercontinental
ballistic missiles.
One aspect of that larger project was missile guidance.
They needed some way to guide missiles to their targets.
They needed a robust and flight-capable computer.
In the 50s, that was, practically speaking,
not possible. The Molecular Electronics Program was funded in part to help find a solution to
that problem. With Mol-E-Com, Texas Instruments was able to show off that yes, you can make a
silicon-based computer, and yes, TI will manufacture the needed chips in bulk.
North American Aviation would design and build the actual guidance computers used
in Minuteman missiles, but its chips were all manufactured by TI. In 1962, Texas
Instruments secured a contract for 22 different types of custom ICs for that
guidance computer. This would have been
using the same kinds of processes as series 51 chips, the very same chips that
the Air Force had paid TI to develop. This was one of the ways the Air Force
was reaping the benefit of the molecular electronics program. Now, was there
anything else? Well, there is a small coda that I can tack on here.
One of the folks who worked on the Mol-E-Com project was named Harvey Cragon.
I mentioned his oral history a couple of times in this section.
This wasn't his first exposure to computers, but it was his first machine that he would
ever design and build.
In the coming years, he'd help with some even more obscure machines at TI.
This leads up to 1966, when Cragon headed up the development of the Advanced Scientific Computer.
This was one of the first supercomputers to use vector processing.
And yes, it was full of ICs made by TI.
We don't see a lot of direct impacts from Mol-E-Com. That
said, it did prove to the Air Force that ICs were the wave of the future. It showed that
a computer could be built out of silicon. And maybe it gave a handful of TI engineers
some much needed experience in an emerging field.
Alright, that does it for this episode.
I think this is just about as exhaustive as we can get with Mol-E-Com without hitting actual
archives.
There are a few interim reports that I wasn't able to track down that may be responsive to a Freedom of Information Act request, or may be in a cabinet somewhere, but in general, I think what's
already public is enough to tell the full story. Any more details you find would be
very, very specific, and probably not all that revelatory.
The Mol-E-Com itself is pretty neat to me because it's this little blip that subverts conventional
knowledge.
It shows up before the Apollo program revs up their first silicon computer.
It shows up in an era of discrete components, of hefty machines.
But the Mol-E-Com doesn't really stick around to change the world.
It's just this little blip right at the beginning of the story of the integrated circuit.
The little machine is a bit of a side story, but it gives us a good excuse to look at a
very interesting aspect of the development of the IC.
And hey, it's all these little side stories that help us flesh out the bigger picture.
Thanks so much for listening to Advent of Computing. I'll be back in two weeks with another piece of computing's past.
If you like the show, there are a few ways you can help support the podcast.
You can rate and review the show on Apple Podcasts and on Spotify.
Reviews do, somehow, help the show spread around, so it's worth doing if you have the
time.
Also, please take a minute to share the show with your friends.
Word of mouth really does help grow the show.
If you want to support the show more directly, you can either buy
Advent of Computing merch on my merch store or donate to my Patreon.
Patrons get early access to episodes, polls for the direction of the show
and bonus content.
I actually probably need to put up a new
poll for the next bonus episode pretty soon, so keep your eyes out for that. If
you want to get in touch with me, I'm off Twitter now, but I'm trying BlueSky
for a little bit to see how that goes. My account over there is at adventofcomp.bsky.social.
I think you can just find it under Advent of Comp. And as always, have a great rest of your day.