Advent of Computing - Episode 155 - LINC
Episode Date: April 13, 2025
In the early 1960s a neat little machine came out of MIT. Well, kind of MIT. The machine was called LINC. It was small, flexible, and designed to live in laboratories. Some have called it the first personal computer. But is that true? Does it have some secret that will unseat my beloved LGP-30? And how does DEC fit into the picture?
Transcript
Early computers and modern computers share very little in common.
For one, we don't use modern machines anything like how early machines were operated.
Computers these days are interactive.
You hit some buttons, you get instant feedback.
That is not very traditional to say the least.
Machines were initially built to do batch work.
You pile up some data and code,
then set them spinning and leave them alone until they have an answer for you.
Early machines stored data in fundamentally different ways. You didn't have random access
memory, but weird sequential-ish data stores, spinning drums, delay lines, rings of tubes, even mechanical devices.
Long-term storage was more likely to be on something like paper tape than a removable
disc. Random access just wasn't really a thing for computers. It wasn't in the cards, so to speak.
The main thing that old and new machines share is their core theory of operation. A computer will always
work along very specific lines, even if it's used in weird ways and made up of strange parts.
These facts aren't exactly secret. I figure that leads to a certain, well, let's call it a phenomenon.
If you know these differences, then it makes certain computers stand out in higher relief.
You don't expect old machines to have handy storage.
So when one does, that makes it easy to take note of.
You don't expect old machines to be interactive.
So when you run into an example, well, that's memorable.
So what would you say to another memorable example?
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 155, LINC.
Today, I'm getting over a head cold. So if my voice sounds a little weird, or if
there's a sudden change in audio quality, it's because my voice gave out and I might
be recording this over the course of a few days. But rest assured, I am better. I just
sound a little gross still.
Now, today we're going to be looking at a DEC machine I know nothing about.
Or at least, kind of a DEC machine.
It gets complicated, but I assure you, it's a very interesting story.
The genesis of this episode is pretty straightforward.
I've been wanting to cover more DEC stuff, that is, the Digital Equipment Corporation,
but I don't really know the foundations.
So I'm going to build up. And if I'm building up my foundational knowledge, I might as well
make a concerted effort at it.
A while back we discussed the PDP-1, DEC's first computer. That also covered the saga
of how DEC itself formed. The short version is that a group of MIT researchers
who worked on the TX2 computer
spun off to start their own company.
At first, they sold digital lab modules,
which could be combined to form complex logic circuits.
When that part of the business became successful,
they were able to turn around and use those modules
to construct a full-blown computer,
the PDP-1. That all goes down in 1959. In 1962, there's a new computer on the block,
the LINC. Like the earlier PDP-1, the LINC is a mini-computer. That's a step down from a mainframe.
It's meant to be operated by a single person. As such, it's cheaper, smaller,
and less powerful than a mainframe.
That makes it appear, perhaps, much more flexible.
But it's not a DEC computer, at least not at first.
The LINC has also been called the first personal computer.
And that's precisely because
it's so small, flexible, and it can be used by a single person. Hence, pretty
personal. Now, that's where alarm bells go off. We all know that, repeat after me,
the LGP-30 was the first personal computer. Of course, I'm being a little jokey here, but I seriously do want to know what the argument
for the LINC is.
This episode will be working up to that question.
Why is the LINC considered by some to be the first personal computer?
What did it bring to the table that other machines did not?
There's also another piece of the story that I want to know more about.
You see, I've heard that the LINC had a cool tape drive.
As a lover of data, that sounds interesting.
Apparently it could handle seek operations and had a file table.
There's no big goal there, I just want to learn a little bit more about that.
That seems neat.
So let's get going.
What exactly was the LINC? Where did it come from? And how personal could it have been?
Plus, what's the whole thing with DEC? How is this even connected to the venerable corporation? Well, let's find out.
Nils Aall Barricelli is probably one of the most interesting people I've ever read about.
He's the guy that invents the genetic algorithm, partly by accident, and doesn't realize its
utility because its behavior upsets him and doesn't work towards the end goals of a very
close-minded study he was working on.
I cover the full story in episode 115, which I highly recommend if you want the details.
The root of the tale is that Nils didn't think biology was a science. His argument was that
since there was uncontrollable variation in experiments, you couldn't actually use the
scientific method for biology. You can't control every aspect of how a bacteria
grows in a petri dish. Thus, it follows that it's impossible for biology to be a true science,
and by extension, for biologists to be scientists. Nils, as a mathematician, took it upon himself
to figure out how to simulate biological experiments in a computer.
By doing so, he and he alone could turn biology into an actual honest-to-goodness
scientific field instead of whatever else it was. Nils also carried on one-sided feuds with
dead mathematicians and was a student of Enrico Fermi. He was a wild and crazy guy.
Nils' analysis was a little overzealous, perhaps, but he was hitting on a real problem.
Certain sciences have issues with reproducibility. That doesn't mean they're not a science,
it just means they have a different set of struggles to overcome than something like theoretical physics.
In the book Computing in the Middle Ages, author Severo Ornstein calls these fields, quote-unquote, wet sciences.
Now, I ask you, does this mean they aren't really science?
The answer is no, it doesn't.
Once again, different challenges.
But that also means they require a different set of tools than other sciences.
The oncoming digital revolution could offer a whole new set of these tools for wet sciences.
But those tools were slow to leak out. Ornstein, writing about the state of digital technology in the late 1950s, explains it like this,
quote,
Processing and analysis of data limited the progress of research.
There was no way that data could be processed as an experiment proceeded in order to influence
the course of the experiment.
Instead, researchers had to wait for the results,
which, in the case of experiments with animals, invariably meant starting over with a different
animal with no way to know whether the electrodes were in the exact same place as before, etc."
Here, Ornstein is talking specifically about neurophysiology, the study of the physiology
of the brain.
In this period, some researchers were starting to process data on computers, but the batch-processing
nature of machines wasn't exactly practical.
There was, however, quite a lot of promise here.
Computers could let them do analyses they had never dreamed of before.
It's just that the exact right tools didn't quite exist yet.
If Nils were in the picture, why, the solution would be clear.
You just need to model everything on a computer.
Drop the whole idea of physical experimentation because that's not reproducible.
If you can't ensure that each brain you test is identical down to subatomic particles,
well then you can't use the scientific method, so any result's meaningless.
You're not a scientist, you're a simple rube in a lab with a monkey.
Luckily for us, Nils was probably off in Princeton at this point, or maybe gallivanting around Europe.
The story doesn't take place in New Jersey, but rather in a classic setting for us, MIT.
More specifically, in Lincoln Lab.
We've been here many times before.
Lincoln Lab was a military-funded research outfit on the MIT campus. It specialized,
after a fashion, in real-time computers, big interactive machines that had cutting-edge
human-computer interfaces. You may remember such hits as Whirlwind and TX-0. By 1957, TX-2, the follow-up to TX-0, was well underway.
It's in that lab that some researchers were getting sick of the military mandate.
One of the issues with funding is that it will invariably bias your research.
Lincoln Lab was funded by the Department of Defense, so
all research had to do with some kind of military project. That was good for
getting money in the door, but it limited what the lab could do. Historically, that
mandate was, well, it was stretched close to the breaking point at many occasions,
but there were still some hard and fast limits to what could be done. Wes Clark
and Charlie Molnar were some of the researchers that would break that limit.
Clark had been with the lab since 1952, working on Whirlwind and then helping to
design TX-0 and TX-2. The two researchers were approached by a group of biophysicists
that wanted some kind of digital instrument for measuring neuron responses.
It had to be real-time, it had to be small, and it had to interface with external devices.
Read that last bit as wires plugged into an animal's brain.
The machine that came out of this was called ARC, the Average Response Computer.
It was so named because it calculated average responses to neural stimulation.
It could run a handful of real-time calculations.
There are three technical aspects of this computer that
bear mentioning. First, it was small. As Clark describes, it was small enough to be wheeled
around MIT's campus. For the time, that was a big achievement.
ARC was built using the same methods as TX2. That is, it was constructed from standardized system modules.
Both of these computers use these discrete packages
of transistor modules that can be plugged
into standard sockets.
That meant the actual machine was composed of sockets
and some fancy wiring,
and then just this pile of generic pluggable modules.
The final point is that ARC wasn't programmable.
At least, it didn't execute code.
All its functions were hardwired.
It was able to carry out useful tasks, but if you needed another function, you had to
call up Clark and Molnar to rewire the computer.
That's perhaps not the best user experience. It would also limit
what ARC could practically do. This first brush with biology was enough to get Clark thinking.
From Clark, writing in a paper called The LINC Was Early and Small, quote,
but wouldn't it have been better to program these operating modes rather than
wire them into control circuits? I had since taken just that approach in designing an extremely
simple stored program computer of very limited capability. The 256 10-bit word L1 for use
in a special project. There was no doubt in my mind that the far greater flexibility of the stored program approach
was of enormous value in computers small enough to be considered
instruments for laboratory use."
There is a citation about that
special project and the L1 computer, but I can't find any record of the paper itself.
It must have been a sweet little
machine. What I find interesting is how similar this line of thinking is to other contemporary
projects. We can call back to my usual, period appropriate touchstone, Fortran. That was developed
to make computers more accessible to researchers. John Backus identified that more researchers could benefit from the power of a computer
if only they knew how to program.
Yada yada yada, we get Fortran.
It's designed in large part as a research tool.
My man Nils identifies a problem with biological research.
Yada yada yada, we get genetic algorithms.
Okay, maybe I'm stretching the Nils thing a little thin here, but the man lives rent-free in my head,
so you will excuse me. The simple fact is that computers, even very primitive ones,
can bring a lot of power to bear. Special purpose machines are useful.
General purpose machines supremely so. But getting a general purpose computer into the
hands of researchers in this period is not an easy task. This is doubly so for biological
sciences. These researchers wanted real-time computers. Very few computers
operated in any kind of real-time mode in this period. Machines were mostly
batch oriented. You'd send in a job, send in some data, and then leave while
digital clerics pushed paper and hammered out incantations somewhere
physically near the machine. If you wanted to take readings, do
rapid analysis, and get instant results, you needed a different kind of machine. You needed
something like TX-2, a literal bleeding edge computer developed in a Department of Defense
research lab. Clark, it turns out, was excited for that challenge. The idea would fester and
form for a few years. To quote from Ornstein, that spring, 1961, Wes disappeared from the lab for an
extended period. When he returned, he brought with him notebooks containing a preliminary design for a new small computer."
This is the point where I made a character assessment.
I like Wes Clark.
This is just a classic component of any hacker tale.
The intrepid genius goes missing from his lab,
absorbed in a problem.
His colleagues can't find him. Mail is piling up at his doorstep.
Deadlines are dropped. Perhaps someone considers filing a missing persons report. Then, suddenly,
the hero returns, eyes sunken and gaunt. They hand over the designs for a cutting-edge new
computer. It's really little wonder this story comes out of MIT.
At this point, the machine is called Alpha LINC. That's stylized as the Greek character
alpha, and then L-I-N-C, the first four letters of Lincoln. We can see how this is
still very much tied up with the lab itself.
The project isn't military, but it's still going on inside a military lab.
It's fun to imagine the story ends here, right?
The wild-eyed genius returns, his perfect designs are just implemented, the computer
builds itself, right?
The world is stunned.
That's a simple and fun bedtime story for a growing
hacker, but it's not the truth. The simple fact of the matter is that any big undertaking
has to be a team effort, this new computer being a prime example.
The group that grew up around Alpha LINC was mainly composed of folk who were working on TX2. Severo Ornstein and Mary Allen Wilkes
come on as programmers. Ornstein was a TX2 guy who also worked on Project SAGE.
Wilkes wasn't a TX veteran, but she had experience programming on IBM's big iron.
The two were tasked with programming Clark's proposed machine.
Now that may sound strange at first.
Alpha LINC only existed on paper, so what kind of programming was even possible?
Well, first of all, this was meant as a bit of an exercise.
By trying to write software for the proposed machine, Ornstein and Wilkes would encounter
all of its rough edges. Clark
would then use that feedback to refine the design.
The other thing to note is that programming in this era was a very manual affair. Computer
time was at a premium. It was common to have colleagues double or even triple check your
code before it would even see a computer. I mean, heck, not
everyone even used an assembler or a compiler. There are a lot of actual
manuals that say to assemble code you follow these steps. You go through this
table and you write out the binary on a piece of paper. So when Clark asked these
two to program a computer that didn't physically exist,
that wouldn't have seemed all that strange.
Over the summer of 1961, designs were revised, shifted, and finalized. Once things settled down,
Wilkes took the spec and wrote a simulator for the new computer. This was a program that ran on TX2 and could execute
code written for Alpha LINC. At that point, they could actually program this theoretical
machine. Using the simulator, Wilkes was able to develop other tools, like an assembler,
well before the physical LINC ever existed.
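To make that idea concrete, here's a minimal sketch in Python of what simulating a paper machine involves. To be clear, this is not Wilkes's simulator and these are not real Alpha LINC opcodes; it just shows the general shape of the thing: a host program that fetches, decodes, and executes instructions for a target machine that only exists as a spec.

# A toy 12-bit accumulator machine, interpreted by a host program.
# Opcodes and layout here are hypothetical, purely for illustration.
MASK12 = 0o7777                # a 12-bit word

def run(memory, pc=0):
    acc = 0                    # single accumulator, like the LINC
    while True:
        word = memory[pc] & MASK12
        opcode, operand = word >> 9, word & 0o777   # made-up 3-bit opcode field
        pc = (pc + 1) & MASK12
        if opcode == 0:        # HLT: stop the simulation
            return acc
        elif opcode == 1:      # LDA: load accumulator from address
            acc = memory[operand] & MASK12
        elif opcode == 2:      # ADD: add a memory word into the accumulator
            acc = (acc + memory[operand]) & MASK12
        elif opcode == 3:      # STA: store the accumulator to an address
            memory[operand] = acc

# Example: add the words at octal addresses 100 and 101, store the sum at 102.
mem = [0] * 4096
mem[0o100], mem[0o101] = 5, 7
mem[0:4] = [0o1000 | 0o100, 0o2000 | 0o101, 0o3000 | 0o102, 0]
print(run(mem), mem[0o102])    # -> 12 12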
Two more crucial figures join up in this period.
One is Charlie Molnar, who was another old hand in the lab.
He was an electrical engineer and proved crucial to the physical implementation of the machine.
The other was Tom Stockebrand, who designed a magnetic tape drive for TX-2 and would do
the same for LINC.
Hardware implementation proved to be a real challenge, in part because of self-imposed
requirements.
Clark wanted to keep the machine's cost below $25,000.
That was a very tall order. The LGP-30, my current touchstone machine, sold for $47,000, and that was a
very, very small computer. The PDP-1, another small system, sold for $120,000.
The $25k price point was, well, let's say aggressive. As a result, Clark threw out multiple designs that would
simply run over budget. This is where an interesting and crucial decision is made. To save cost,
LINC would be built from DEC System Modules. Thus, a fascinating aspect of the story begins.
DEC had been founded in 1957 when a handful of TX-2 engineers defected.
Clark had actually worked side by side with some of these engineers for years.
This would include Ken Olsen, who had directed development of TX-2. One of the big advancements of TX2 was its
modular design. It was built up of those interchangeable and standardized computing modules.
When DEC was founded, their first products were those modules, now packaged up smartly and mass
produced. Clark had used TX2 modules to build ARC, his earlier machine. When it came time to build
LINC, he went for DEC modules. They were the same as his old beloved TX2 modules,
just packed up as a commercial product. And here we are in a strange position.
You see, the LINC isn't exactly a DEC machine, but it's designed and built using DEC modules. It's created in the same lab that spawned DEC.
There's a very deep connection starting to form, one that will come up again and again.
The prototype Alpha LINC was up and running by the end of 1961, and, well, all prototypes
need a good demo.
The first public showing of Alpha LINC was in front of the National Academy of Sciences,
with the actual demonstration being in a hotel in Washington, D.C.
Again, this leads to a classic tale from Clark. Quote, Charlie and I spent the night before, all night, trying to fix some unexpected problem
in the arithmetic element, crawling around the floor of the suite at the conference hotel
with test equipment and soldering irons and discovering only by the dawn's early light
that just outside our room there is a huge broadcast antenna tower that had been flooding
everything with electromagnetic noise."
After a sleepless night and a little more coercion, the demo went off without a hitch.
The machine sparked to life, crunched numbers, blinked lights, and read data.
The works.
As I said, it's a classic tale of last-minute fixes to unexpected problems.
But there's something that's easy to miss in the setup. Clark & Company had traveled
from MIT to Washington with this computer. They had moved the machine, physically. Ornstein
describes this period by saying that the LINC hit the road.
It was a traveling computer show.
For this period, for 1961, that was uncommon to say the least.
Contemporary machines were still big stationary things with custom power and cooling requirements.
That's one of the reasons that the LGP-30 is so striking.
Same with the PDP-1. Those are both smaller machines that sip power, at least compared to
what IBM was churning out. Just the fact that LINC could be moved around and run in a hotel
conference room, that would have been impressive to see. The demo would pay off
and in due course attract investment. In 1962, the National Institutes of Health had expressed
interest in LINC, and the prototype would see its first actual use. The machine was
once again moved and then hooked up to a cat named Jasper in a lab run by Dr. Robert Livingston.
At the time, he was director of the Institute of Neurological Disease and Blindness
and the Institute of Mental Health. Now, this wasn't just a matter of plugging a cat into a
cat-shaped hole in a computer. Jasper had already been outfitted with electrode implants. These were wires
and detectors bonded into the cat's brain. Those electrodes could be used to take readings
of neural activity. Specifically, those readings were analog voltages. For a computer to read
that kind of data, a conversion is needed. Specifically, an analog to digital conversion. This is accomplished by a circuit called, in a surprising twist, an analog to digital
converter, or an A to D. These circuits convert different voltage levels to digital values.
You put a gross analog voltage in and you get a nice binary number out.
From there, you can do all the normal tricks. In the case of this
first outing, readings were averaged and then displayed as a line, live on a CRT display.
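For a sense of what that conversion and averaging looks like in practice, here's a small illustrative sketch in Python. The bit width and voltage range are made up for the example; they are not the specs of the LINC's actual converters.

# Map a continuous voltage onto a discrete binary code, then average readings.
def to_digital(voltage, bits=8, v_min=-1.0, v_max=1.0):
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    code = int((voltage - v_min) / step)      # which "bucket" the voltage lands in
    return max(0, min(levels - 1, code))      # clamp to the valid range

# Average a short series of readings, as in the first demo with Jasper the cat.
readings = [to_digital(v) for v in (0.12, 0.15, 0.11, 0.18)]
average = sum(readings) / len(readings)
print(readings, average)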
From there, things were on the upswing. At least, in theory.
Funding was secured from the NIH, and Clark's lab started refining the prototype and preparing for small-scale production.
So from here, it's just a hop, skip, and a jump to a finished machine, right? Well,
things don't go that way. From Ornstein again, quote,
Lincoln management, accustomed to the more generous overhead allowances of its military
funders, was unwilling to deal with a new set of sponsors."
Again, Clark and his crew were pushing up against the limit of the lab's mandate.
So he left.
Clark picked up shop, took the LINC designs, and left MIT.
With him came most of the crew that had been working on the machine.
After all, they could secure funding.
The NIH was willing to open a research program just for LINC.
The new group would set up shop in Cambridge, Massachusetts.
In fact, they were a few blocks from the MIT campus.
There would be a number of other twists
and turns, but from here, the computer begins to be finalized. So, what exactly was on offer here?
What exactly was the link? Now, this can become a bit of a tricky question because the link would shift and change over
the years.
In fact, the name even changes.
Once Clark leaves MIT, the name is changed from Linc, as in L-i-n-c from Lincoln, to LINC in all caps.
It's no longer a Lincoln Lab machine; it now stands for the Laboratory Instrument Computer.
I'm going to be focusing on one version of LINC, the Evaluation LINC. 1963 marked the
beginning of the so-called LINC Evaluation Program. This was, frankly, a stroke of genius.
A number of early versions of LINC, 20 in total, were sent out to labs around
the country. The scientists in those labs would be trained on how to use and maintain their machine.
In exchange, the LINC team would get valuable real-world feedback on the new computer,
and it proved that the whole idea of a laboratory computer was viable. I'm going to be focusing on this specific version of the LINC because it forms the core
of all other iterations of the machine.
To start with, we're late enough that we can just say: LINC is a computer.
That's pretty nice, huh?
There aren't any caveats or gotchas here.
It's a normal stored-program computer with
honest-to-goodness random access memory and a Turing complete instruction set. In other
words, it's just a normal computer. That is so refreshing. Its memory is composed of
1024 words of magnetic core. But you should be asking,
what's the word size for link? What is the size of data it treats as its native format?
Well, it's 12 bits.
LINC is a 12-bit machine.
So using modern conversions, it had something like 1.5 kilobytes of memory.
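The arithmetic behind that figure, as a quick sketch in Python:

# 1,024 words of 12 bits each, converted to 8-bit bytes for a modern comparison.
words, bits_per_word = 1024, 12
total_bits = words * bits_per_word      # 12,288 bits
print(total_bits / 8)                   # 1536 bytes, roughly 1.5 kilobytes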
So why 12-bit? Well, as with many older machines, why not?
We're still well before the great standardization of the 8-bit byte.
As such, we get strange word sizes.
What's also a little odd is that LINC isn't a hexadecimal machine.
Now, this is one of those weird human interface things.
Most modern computers are considered hexadecimal machines, meaning that we often represent
their binary data as a hexadecimal number.
That's base 16 for us decimal folk.
That works really well for machines that work
in multiples of 4 bits. It takes 4 bits to represent a single hexadecimal number, a 0
through an F. So if your machine has 16-bit registers, you can display their values as
4 hexadecimal numbers. That way, everything is round and even, and you're
using all the bits properly and everything's displayed, everything just fits right.
The LGP-30, for instance, is a hex machine. It stores data in 32-bit chunks. The same
goes for the x86 family. On those machines, you deal with 16-bit or 32-bit data sizes. You can use Hex to display
a 16-bit number, but it's awkward. You end up with three digits. That's not an even number.
So as far as computer folk are concerned, that's not good. I really don't have a better way of explaining that
than it just feels wrong.
LINC is an octal machine, meaning when its values aren't displayed in raw binary, they're
displayed as octal, base 8. You only need 3 bits to encode octal values. So you can show a 12-bit register as four octal digits. That's even,
and it's very neatly subdividable, which makes it very good. That feels better than three digits.
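Here's a quick Python sketch of why octal feels right for a 12-bit word while hex feels off. The value is arbitrary; the point is how evenly the digits divide.

# A 12-bit word splits evenly into four octal digits (3 bits each),
# but awkwardly into three hex digits (4 bits each).
value = 0b101001110011
print(format(value, '012b'))    # the raw 12 bits
print(format(value, '04o'))     # 5163 -> four octal digits
print(format(value, '03x'))     # a73  -> three hex digits, which "feels wrong"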
But the reason for the octal-hex divide isn't as simple as computer feel. In the case of LINC, I think it's historical.
Whirlwind, MIT's first computer, was also an octal machine. Its documentation used binary
and octal to display values. TX-0, a transistorized copy of Whirlwind, used octal; the same is
true for TX-2. The staff that designed and
built the LINC, for the most part, had worked on these earlier machines. Octal was kind
of the MIT way.
I think this is one of those interplay things. Clark is thinking in octal already, so 12
bits makes sense. At the same time, 12 bits is better for octal representation,
so it's natural to make the LINC an octal machine. But at the end of the day, this is one of those human
things. 1001 is still 1001 no matter how it's displayed, it's just MIT's machines tended to
display things in octal instead of hexadecimal.
The other reason to go with 12 bits is to save cost.
At the time, most machines were pretty large-bitted.
TX2 used a 36-bit word, for instance.
A 1-kiloword memory space on TX2 would actually be three times larger than a 1-kiloword memory space on LINC.
That would mean three times as many parts, three times the complexity, and maybe three times the
cost. LINC had to be a small and cheap machine, so fewer bits was the way to go. The relatively
small word size also saved on circuits. Clark had explored a number of ways to build LINC, but would settle on parallel circuitry.
That means that the computer operated on all bits of a number at once. Its addition circuit, for instance, added each bit at the same time, instead of going bit by bit. This meant that LINC's circuits were
scaled by its word size. If it were a 24-bit machine, then it would need to be
roughly twice as large. Again, 12 is a nice small number that checks all these
boxes for us. This however gets a little complicated when we look beyond memory.
Here is where I will be using a bit of an anachronism.
The LINC manual calls a location in memory a register, but it also calls the little chunks
of storage inside the processor a register.
I'm going to use the more modern term address to talk about memory
and register to talk about registers, because you don't lose anything in
translation. That means when I say register, I actually mean the tiny chunk
of storage inside the computer that's used for immediate mathematical work.
LINC only has one register. This is pretty common on older and smaller machines.
The one register is the accumulator.
It's where all math and data manipulation is carried out.
At least, mostly.
The accumulator is 12 bits.
Plus one extra bit.
The accumulator proper is only 12 bits, which lines up with the size of a word in memory.
That's important because you, in theory, will be moving numbers from memory into the accumulator
and vice versa. The plus-one thing, however, is a little confusing. The LINC programmer's manual calls it the link register. Yeah.
Great.
What does that mean?
Well it's spelled L-I-N-K.
Once again, great.
That's not confusing at all.
Call me old fashioned, but I just don't like that.
Maybe don't name anything in your computer something that's a homophone of the name of
the computer.
Little bit annoying. To me that implies that this is some crucial feature that the device is built around, some secret powerful trick.
But that's simply not the case. It's an overflow flag.
When you do a math operation, if there's an overflow, the link register is set to 1.
You can also do some tricky things with it in shift operations, but effectively it's
a flag register.
That's something that's very common on a lot of computers.
It's just displayed in a confusing way here.
Now, I've been jokingly saying that the accumulator is 12 plus 1 bits
because the link register can, in some cases, be treated as if it were the
accumulator's 13th bit. The architectural diagrams even show a line connecting the
link register and the accumulator, with arrows on both ends and everything.
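As a rough sketch of that 12-plus-1 idea, here's some Python modeling a 12-bit add where the carry out of the top bit lands in the link bit. The real LINC's shift and overflow behavior has more nuance; this only shows the flag mechanism.

# Adding in a 12-bit accumulator: any carry out of bit 11 sets the link bit.
MASK12 = 0o7777

def add(acc, value, link=0):
    total = acc + value
    link = 1 if total > MASK12 else link      # carry out of the word sets the link
    return total & MASK12, link

acc, link = add(0o7777, 0o0001)               # 4095 + 1 overflows the 12-bit word
print(format(acc, '04o'), link)               # -> 0000 1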
The other register that's hidden is the instruction pointer, which LINC calls the instruction
location.
This, however, isn't directly accessible to the programmer, it's just the pointer the
computer uses to keep track of what's running.
Again, that's really common design.
We kind of just have a normal computer.
The remarkable bits are all in its scale, which again, it's a small fella.
But there are a few interesting parts of the machine besides its size that set it apart.
The first is its so-called relay register. Now, I
don't really want to call this a
register because it's kind of its own thing. The LINC has a set of six relays
that can be used to send power to external devices. This can be used to do
exciting and mischievous things, like flipping a light on and off or perhaps
operating the shutter of a camera.
This is one of the ways that LINC was designed for fancy laboratory work.
With a single instruction, you can set or read the status of the six relays.
So technically you can use it as a register, but it would be slow and limited for that.
It's really more a way to connect the LINC to external devices
and control them.
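As a sketch of the concept, here's a bit of Python treating the six relays as a 6-bit value that gets written in one operation. The bit assignments are hypothetical, not the LINC's actual instruction encoding; the point is that one small word drives six external switches.

# Six relays modeled as a 6-bit value, set in a single operation.
def set_relays(pattern):
    return pattern & 0b111111            # only six relay lines exist

relays = set_relays(0b000101)            # close relays 0 and 2: say, a lamp and a camera shutter
print(format(relays, '06b'))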
Going deeper into the machine's architecture, we reach the so-called miscellaneous inputs.
This is how the link could handle things like analog data, such as the electrodes hooked
into poor little Jasper the cat's brain. LINC has a total of 16 of these input channels,
broken into two groups.
One set of inputs can be sampled at 200 hertz,
that's 200 times a second,
while the other can be sampled at 200 kilohertz,
200,000 times a second.
The sampling frequency matters for a number of reasons.
The crucial one being that we're dealing with analog to digital conversion here.
Each time you take a reading, that's an instantaneous value.
It's not just that you're turning a continuous voltage into some discrete number.
You're also slicing it up in the time axis.
A smooth wave becomes a jagged series of steps. Any analog
to digital conversion will be an approximation. The higher the sampling rate, the closer that
approximation is, the lower your error is. If your signal doesn't change very rapidly,
then sometimes you can get away with a low sampling rate. That also saves you a lot of computational and data complexity.
But in some cases, you have to have a higher rate to account for more rapidly fluctuating
signals.
However, it's not always economical to fork out for better electronics and more storage.
So Link strikes a balance.
It gives you both options.
These inputs are so crucial to LINC that the computer has a special instruction for reading
them. It's called Sample, or SAM. It takes a sample from a specified input line and throws
it into the accumulator for later use. There are even tricks around how timing is maintained
for specific sample rates.
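To show why those two rates matter, here's a small Python sketch sampling the same waveform at 200 Hz and at 200 kHz. The test signal is made up; the point is how many points each rate gives you to reconstruct the wave.

# The same 50 Hz sine wave sampled at two rates over 10 milliseconds.
import math

def sample(rate_hz, duration_s=0.01, signal_hz=50):
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * signal_hz * i / rate_hz) for i in range(n)]

slow = sample(200)        # 200 Hz  -> 2 points over 10 ms: a very jagged picture
fast = sample(200_000)    # 200 kHz -> 2000 points: a smooth approximation
print(len(slow), len(fast))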
Again, this is how the machine was tailored for use in a laboratory setting, not in a
general purpose computing environment.
These features are pretty neat, but they seem rather impersonal, right?
I mean, so far we have that it was small, cheap, could roll around, and did laboratory
things like sampling brain waves from poor cats. That's cool, but that's not exactly the makings
of a PC. So what about the interface? First of all, LINC had an honest-to-goodness electric
keyboard. This is something that's been a bit of an
annoyance for me in the past. Older machines would often have these cobbled together teletype
setups. Or they would have a keyboard somewhere in the installation and then be pretty cagey
on how it was hooked up exactly. LINC has, front and center, its own detached keyboard that's connected to the computer
with a cable.
It's a physically separate input device that you type on by pressing buttons with your
fingers.
It's just a normal keyboard.
The computer also has the accompanying commands to get keystrokes from the thing and then
process those keystrokes.
That makes for a very reasonable
and very recognizable input method. In fact, I'd call it one of my favorite input methods.
For output, LINC had either one or two CRT displays. I say either because the docs describe
two tubes and the instruction set supports plotting to two separate devices.
But not all photos of LINCs show the dual-screen setup.
Just ruminate on that, if you will.
In 1963, you could sit down to a computer with two screens.
Two of them.
And it's not even a supercomputer.
As far as details go, these are reasonably capable
displays. Each is 512 by 512 pixels, and you're able to plot an image point by point.
LINC doesn't provide an instruction to type characters to the screen, but the Programmer's
Manual does give a program listing describing how to roll your own type function. What's wild
about this, to me, is that it uses bitmapping. Bitmapping is a technique used to store and
display images, in this case monochrome images. You pack the data as binary bits such that
a 1 means a pixel that's on, and a 0 means a pixel that's off. Then you write a small program
that can display that image on screen. It just goes through bit by bit and plots each bit.
Oftentimes, the code is literally just translating your bitmap into a set of display instructions.
This lets you be much more efficient with how you store images.
The manual recommends using this technique for plotting letters, but this also means
that the LINC would have been capable of displaying full-on images.
You could even program in different fonts, at least in theory.
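Here's a small Python sketch of the bitmapping idea: a character stored as rows of bits, walked through bit by bit and turned into plot points. The 5x7 glyph is made up, and the LINC manual's own routine does this in LINC assembly, but the principle is the same.

# A character stored as rows of bits; a 1 bit means "pixel on".
GLYPH_A = [
    0b01110,
    0b10001,
    0b10001,
    0b11111,
    0b10001,
    0b10001,
    0b10001,
]

def plot_glyph(glyph, origin_x=0, origin_y=0):
    points = []
    for row, bits in enumerate(glyph):
        for col in range(5):
            if bits & (1 << (4 - col)):               # test each bit in the row
                points.append((origin_x + col, origin_y + row))
    return points                                     # each point would become a plot instruction

for x, y in plot_glyph(GLYPH_A):
    print(x, y)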
It's still a pretty low resolution display and there is the matter of refresh rate, but
the bones are there.
You could have a very rich and very interactive environment on this computer.
That, coupled with its size and cost, makes it look like a pretty personal device.
This is essentially the root of the argument.
That the LINC was the first personal computer because it packed together all of these features.
And really, that's reasonable.
These features combined essentially amount to a personal computer.
But superlatives are involved, so we have to ask, is this out of the ordinary?
Is this something new for the period?
We have a small, cheap interactive machine.
Well, that gets a little sketchy. The descendants of Whirlwind certainly fit the bill. At least,
mostly. Whirlwind and the TX machines were interactive, but they were all one-of-one designs.
That means that their cost was in a realm of its own. The PDP-1 itself, a part of this larger family, also checks all the boxes, but it is at a
much higher price than the LINC. Then we have contraptions like my beloved
LGP-30. That machine is small, cheap-ish, and can be used interactively. The main
difference, I'd argue, is that the LGP-30 uses an older electric typewriter
instead of a nice CRT display as its main interface. So, was the LINC
unprecedented? Was it a first? I have to say no.
In the early 60s, there are extant machines
that have the same package of features
that make for a personal computer.
I mean, the LINC even comes from a tradition
of those types of interactive machines.
If anything, its advancements were all related
to bringing down cost and specialization for
lab work.
Does that mean it's not an important machine?
Of course not.
It just means that the LINC isn't really the first personal computer.
At least I wouldn't make a case for that.
There's one final feature of the LINC that we haven't discussed.
That's its tape drive, the LINCtape.
I think this can be addressed separately because the technology at play here is unique in its
own right.
The LINC had two tape drives, which worked as semi-random-access media.
Tapes were broken down into blocks which were numbered. The tape drive
could then seek to a specific block, reeling the tape forward and backwards until it found
its target.
This is less of a cool personal feature and more a neat technical feat. With block level
access it was possible to implement a file system on magnetic tape. This was helped by the fact
that LINC tapes were physically small. These aren't big reel-to-reel tapes. These were
fist-sized spools. Clark and his team had worked with tapes before on TX-2, but those
were much, much larger reels. As he explains, that leads to an issue with inertia. Now, this is something
I would never consider on my own, but it makes a lot of sense. Every object will resist a change
to its velocity. How hard it resists that change is called inertia, and it's impacted by the object's mass and its speed. TX2 had 14 inch reels that spun pretty fast.
That meant they had a lot of inertia, so it was very hard to get those tapes to switch the
direction of their spin. But LINC's tapes were tiny and lightweight. They could spool up and
down and switch direction with ease. That made
the whole block accessing thing much easier, since you could spin them back and forth very
quickly.
This was all buttoned up with some smart formatting. As I said, a LINC tape was broken up into
numbered blocks. Blocks could be variable in size, but what mattered is that they were each marked by a special
block header that included the block's number. When the tape went seeking, it was looking
for that sequential ID number.
Inside each block, data was stored over six tracks. Now, I know, again with the strange numbers. But check this out.
This was a redundant stream of 3-bit data.
In other words, one octal digit.
The stream was duplicated to protect data on the tape.
If half the tape was damaged, then the other half could be used as normal.
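Here's a rough Python sketch of the general scheme: numbered block headers you can seek to, plus a duplicate copy of each block's data so a damaged half can be ignored. This is an illustration of the idea, not the actual LINC tape track layout.

# Block-addressed tape: seek by block number, read with a redundant fallback.
def make_block(number, data):
    return {"number": number, "copy_a": list(data), "copy_b": list(data)}

def seek(tape, target):
    for block in tape:                         # the real drive reels back and forth;
        if block["number"] == target:          # here we just scan for the header number
            return block
    return None

def read_block(block):
    # Prefer copy A; fall back to copy B if A was damaged (marked None here).
    return block["copy_a"] if block["copy_a"] is not None else block["copy_b"]

tape = [make_block(n, [n, n + 1, n + 2]) for n in range(8)]
tape[3]["copy_a"] = None                       # simulate damage to half of block 3
print(read_block(seek(tape, 3)))               # -> [3, 4, 5], recovered from copy B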
Overall, this makes for a nifty system.
In practice, it sounds very similar to how a floppy disk is used. That's actually the
usual comparison made here, that LINC tape was an early predecessor to the floppy disk.
It's a seekable magnetic medium that's easily rewritable and can be used to store little
chunks of data. A floppy is even broken up into numbered tracks.
At least if you break things down to a linear view.
This is also where we start to move into the whole DEC part of the discussion.
The design of the LINC tape is used as the basis for another format, the DECtape.
How did that happen? Well, it comes
down to tax dollars and a strange blending around the edges of the LINC project. First
off, there was never a company that produced LINC. When Clark and his group left MIT, they
didn't go start a corporation. The group just bounced from Lincoln Lab and restructured
themselves somewhere else. They were still working off of tax dollars. It was still a
federally funded research program. This eventually includes funding from the NIH as well as NASA.
This is still publicly funded research. They would eventually hook up with another school, Washington University
in St. Louis, to continue their computer work. This has a few immediate consequences.
The first is that LINC is a public-domain computer. As such, no one really owned LINC,
except the American taxpayer in some nebulous sense. It also meant that LINC machines
weren't really sold in a classical way. To explain, we need to go back to the whole LINC
evaluation program. One key aspect of that program was that each participating lab was expected to be
capable of maintaining and servicing their own machine. The LINC team would
make site visits and travel around with patches and updates, but in general, labs needed to be
capable of self-sufficiency. Staff from each lab that was part of the evaluation program was subjected
to a two-week crash course on computing in general, and the LINC in particular. At the end of
those two weeks, the victim would fly home with their very own LINC. But it wasn't just
a set-and-forget kind of thing. These early LINCs were distributed as kits. The core of
the machine, its chassis and back planes, were all assembled. I think the peripherals were also assembled, but I'm not entirely sure.
At the least, labs would have to plug in all of the DEC System Modules. This, again, made sure that
the lab knew how to service their own machine. It would also save on manufacturing costs.
The evaluation program was very successful. From that came more funding and more demand
for these neat little computers. It had been proven that research labs not only could get
a lot of use out of them, but they wanted to use them. The LINC team, now at Washington University
in St. Louis, would continue to manufacture some kits, but they weren't a commercial
enterprise. The team was focused on research, not manufacturing.
In this period, late '63 to early '64, DEC started producing LINCs. These would come first as
kits and later as fully assembled computers. I think it's clear to see how funny this situation is, right? The PDP-1 was, physically speaking, an almost identical machine.
It used a chassis and backplanes to house a pile of DEC System Modules.
I'd imagine some of the same manufacturing techniques used on the PDP-1 were used to
create LINCs.
Now, these didn't exactly sell like hotcakes. I've seen
reports that 21 DEC-built LINCs were sold. Not a lot, but still more than the
old singleton computers. By way of comparison, there would be 53 PDP-1s
produced during that computer's lifespan. So you may ask, why was DEC manufacturing LINCs?
Well, my sources come up a little short when it comes to these why questions.
The implication seems to be that DEC was already working very closely with the LINC team, so
it was a natural next step to scale up production.
DEC was supplying modules and even ensuring that
modules shipped on time to the correct labs and were functioning within certain
parameters. Plus, the folk that designed LINC had worked with the folk who founded
DEC. LINC and the PDP-1 evolved from the same lab after all. You know, the whole
Lincoln Lab thing. When it comes to DEC, my go-to source is usually Computer Engineering: A DEC View of Hardware
Systems Design, by Gordon Bell et al. What it has to say on LINC, however, is a little
confused and thin. We get half a page about LINC that lays out the barest outline of the story. It also
drops a factoid that I have no way of confirming. The text claims that LINC's design was influenced
by the CDC 160, a vaguely contemporary computer. Now, I want to address this because, well,
this book is the big source on DEC related stuff.
Someone will point out my possible omission.
I can feel it in my bones.
I haven't come across any source on LINC that mentions the CDC 160 or CDC or Cray at all.
That said, the authors of Computer Engineering had access to one source I can't find.
It's a 1964 paper by Clark and Molnar titled The LINC: A Description of the Laboratory Instrument
Computer. The paper was published in a biology journal that doesn't appear to have been archived anywhere. This sucks because, well, this was the first public article about
LINC. It kind of hurts to put this episode together without that paper, but sometimes you get these
citations that don't really go anywhere. And besides, I've had access to a lot of other similar
and contemporary sources. So I just want to say I've done due diligence here,
but I can't find any connection to the CDC 160.
From here, the LINC gets folded into DEC more and more as the years go on.
It becomes the foundation for a family of 12-bit machines at the company.
The first is the PDP-5, followed by the better known PDP-8,
and then the PDP-12. All those machines are at least in part derived from or inspired by the LINC.
They even produce this weird blended machine called the LINC-8 that has a PDP-8 and a LINC
in the same chassis. So by the middle of the 60s, LINC has become a core
machine for DEC. The weird thing is, LINC is only barely a DEC machine itself. That's
the big funny conclusion to the whole story, right? DEC is founded when some researchers
who worked on TX2 leave Lincoln Lab to start their own company. LINC is created when some
researchers who worked on TX2 leave Lincoln Lab to continue a research project.
The two threads diverge, but they quickly come back together.
Alright, that does it for the LINC. This is another one of those groundwork episodes.
Like I said, I've been wanting to talk more about DEC, but feel I have to build up the
basics so the later stuff makes sense to me.
When I hear people talk about DEC history, sometimes I kind of feel like I'm catching
a conversation about some long-running book series I've never read.
I'm working on it though.
LINC isn't just part of some larger story, it's also a fascinating machine on its own.
The work done around Whirlwind at Lincoln Lab had a huge, maybe outsized impact.
Part of that is because of the machines it inspired.
TX-0, TX-2, PDP-1 and LINC, not to mention all the federal machines that wound up in shady
government installations.
LINC stands out because it was meant as a scientific computer.
Here I don't mean scientific as in computer science, but as in a computer for existing
scientists to use.
The interactive features of Whirlwind were reshaped and reimagined for a laboratory setting.
We get this radically streamlined computer with a set of very cool and useful features.
It's not easy to shape a computer for a specific niche and keep the thing generally useful,
but that's exactly what LINC managed to do.
I think I'm gonna slow roll covering DEC, so don't expect me to come right back to the saga,
but I can't imagine it'll be too long before I return. But hey, until then, thanks so much
for listening to Advent of Computing. I'll be back in two weeks with another piece of Computing's
past. If you like the show, you can support me directly through Patreon.
Patrons get early access to episodes, polls for the direction of the show, and bonus episodes.
You can find links to everything on my website, adventofcomputing.com.
And as always, have a great rest of your day.