Advent of Computing - Episode 132 - The PDP-1
Episode Date: May 26, 2024. In 1959 the world bore witness to a new type of computer: the PDP-1. It was the first interactive computer to really make a dent in the market. Some say it was the first minicomputer: a totally new class of machine. But where did this computer come from, and what made it so different from the rest of the digital pack? Selected sources: https://americanhistory.si.edu/comphist/olsen.html - Smithsonian interview with Ken Olsen https://archive.computerhistory.org/resources/access/text/2019/03/102785079-05-01-acc.pdf - Computing in the Middle Ages https://archive.org/details/bitsavers_decBooksBeng_37322315 - Computer Engineering, Bell et al.
Transcript
There are many recurring characters on the show.
Some computers and people just appear everywhere.
There's kind of a reason I joke about needing a sound drop every time I mention IBM.
Almost every thread, if you tug long enough, will cross with IBM somewhere.
So why is that?
The easy answer is that IBM has historically been a very big force in the industry.
I know I've said that IBM in its prime was not just a company, it was an institution.
While valid, I don't like that answer.
It lacks the subtlety and grays that make history so interesting to me.
It also doesn't answer the more generalized question of why some computers and characters
reoccur so often.
The better answer is that computer history is very difficult to study and discuss.
Now, let me be extra clear here. I don't say that to make myself sound grandiose, grand though I may
think myself to be. Look at it this way. You can't focus on just one
thing when discussing computer history. You can't just discuss the history of a programming language,
for instance, in isolation. You have to have context for that history to make sense. And when
it comes to computer history, that context is very technical. To discuss a language, you need to
discuss the state of hardware at the time. Are we in the era where a language is tied explicitly to
one specific computer? Are we in the era of portability, or are we in some transitionary
period? And those periods don't necessarily have hard and fast dates. How does that context impact
the language? Where are we as far as linguistic theory? Do we have more linguists, mathematicians,
psychologists, or computer scientists in the field at this time? How do their views impact the
language? And what other languages were out there? You quickly spiral into this really tangled web of connections
that you have to address to build up the proper context.
For any part of the story to make sense,
you need to take a holistic approach,
which inevitably means you end up needing to do things like
learn a language besides the one you're discussing
or learn all kinds of technical details
that seem initially unrelated to the topic. For most stories, this web is pretty wide,
so we end up running into reoccurring characters. IBM shows up whenever we talk about computers of
a certain vintage, or software that ran on those computers. Lisp and Fortran show up almost any
time we talk about early programming languages.
And when we talk about hackers, when we talk about Unix,
the boogeyman is almost always a machine made by DEC.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 132, the PDP-1.
Today, we're tackling a legend in the truest sense.
The PDP-1 is a computer that's loomed large in computing since its creation in 1959.
It's the machine that Spacewar!, perhaps the first video game, was programmed on.
It was home to some of the earliest hackers, at least some of the earliest hackers of the MIT
lineage. It gets a little more complicated than that. But it was also one of the first
mini-computers, a totally new class of machines for a new era of computing.
But the PDP-1 is so much more than all of that.
I've had run-ins with this machine in the past, but I've never fully covered it.
It shows up all over the place, not just in the aforementioned settings. It was a pretty
darn popular machine that spread computing to new places, and it made new kinds of programming
possible. I've also personally had a run-in with this computer.
Back before I started the podcast, I had the pleasure of playing Spacewar! on a fully restored
PDP-1 at the Computer History Museum. My opponent was none other than Steve Russell himself, the
co-author of the very game. So this is definitely a computer I know of, I just haven't really delved into the machine.
Today, we're going to rectify that.
The PDP-1 is a hugely important computer, not just technically, but culturally.
It was one of the first mass-produced machines that was designed for interactivity.
That, paired with its relatively low cost, led to an especially compelling machine.
To unwind this machine's history and to make sense of its impact, we need to turn back the clock to the very earliest days of computing.
The PDP-1, as revolutionary as it was, didn't come out of nowhere.
It actually falls on a very specific lineage of machines.
Today, we'll trace back that lineage to see where the
revolutionary features of the PDP-1 came from. Along the way, we'll see just what was so
compelling about those features and how interactivity could change computing.
If you know anything about the PDP-1's history, you probably know it was built by some of the same folk that created the first transistorized computer.
The rough story goes that two engineers at MIT who had worked on TX0, the first transistor machine,
leave the university and found Digital Equipment Corporation.
A few years later, they poach a few engineers who had worked on TX-2,
a successor to TX-0. That new crop of engineers design the PDP-1. Thus, the PDP-1 is sometimes
called a commercial version of the TX-2. That, of course, is a very reductionist view of things,
but hey, it gives us a nice outline to follow.
The broad strokes here are correct.
DEC was founded by some of the creators of TXO, or TX0, however you want to call it.
The PDP-1 was designed by some of the same people that worked on TX2.
But does that mean the PDP-1 is just a ripoff of an earlier MIT design?
Over this episode, we're going to be grappling with that question,
so I want you to keep that in mind as we work up to the PDP-1 itself.
Now, this common story also skips an important part of the lineage.
TX0 actually has a predecessor. That computer was called Whirlwind,
and it was the first real-time computer. Now, this is another one of those spots where I have to point out some weird shifting lingo. Real-time here is being used to refer to real-time processing,
as in, Whirlwind was designed to be interactive. You could actually press buttons,
and the computer was designed to immediately respond. In, you know, real time. That's how
computers work today, but back in this period, in the 1950s, that was very novel. I have a full
episode of Whirlwind in the catalog. The short story here is that it was meant
as a flight simulation computer, and things kind of got off the rails. When it was built in 1951,
real-time operations were not the norm. The order of the day was batch processing. You'd send off
a program to be executed, then you'd wait around to get results eventually. There was even
this whole system of clerics and operators in place to handle the affair. You didn't press a
button and get feedback. So Whirlwind, in short, represented a pretty different and pretty
revolutionary approach to computing. It's also important to note that Whirlwind was built at MIT's Servomechanisms Lab. In 1951, the legendary
Lincoln Lab was founded on campus. At that point, the Whirlwind project was transferred over to the
new lab. Lincoln Lab is notable for a few reasons. Reason one is it was funded by the Department of
Defense. So while the lab is dedicated to advanced technological research,
it's all under the larger umbrella of DoD spook projects.
Project SAGE was, in large part, developed at Lincoln Lab, for instance.
This means that the lab developed all kinds of advances in networking,
memory, and programming, all
en route to a large missile defense system meant to prosecute mutually assured destruction.
The policy and ethics side of all of this is a totally separate discussion for a separate
day.
Just know that there is a connection between these real-time systems developed at Lincoln
Lab and at MIT and larger
missile defense systems and DoD projects. Anyway, let's get back to the topic at hand.
One of the neat features of Whirlwind was its interface. For output, it sported an oscilloscope
that could be driven almost like a primitive CRT monitor. That meant it could do very rudimentary graphics.
Whirlwind had a number of input options, one of which was a FlexoWriter. This was a primitive
terminal, meaning you could actually type data directly into the computer and get instant
feedback. Well, kind of instant. We're still talking about a very slow machine. Whirlwind, like many early computers,
served as a testbed for new ideas. And one of those new ideas was magnetic core memory. This
was a new type of memory that had been simultaneously developed by a number of folk.
In the early 50s, the idea was being experimented with at Lincoln Lab. In 1953, a full stack of core was built and installed in
Whirlwind. This is where we reach the first big stumbling block of the story, and that stumbling
block is named TX-1. There are a few stories surrounding this possible computer. The first
is that in the mid-1950s, TX1 was conceived of as a machine to test magnetic
core memory. Whirlwind, after all, wasn't built for core. It had core memory added in later.
I'll be the first to admit, that may sound kind of dumb. Why would you need a second testbed
machine if you already have a testbed? Simply put, a computer's design is very closely
tied to its memory. Whirlwind wasn't designed for core, it was actually designed to use this
weird electrostatic tube memory. And while it could be adapted to handle core memory,
that wasn't part of its deeper design. It didn't have all these tweaks to take
full advantage of the new technology.
According to the book Computing in the Middle Ages by Severo Ornstein, TX-1 was initially planned as a vacuum tube machine that would have been built around magnetic core. This would have meant a more
efficient use of memory and an all-around better testbed to see just what you could do with this new style of
storage media. But, according to the story, plans changed. In 1953, Philco introduced the
surface barrier transistor. This new and faster type of transistor quickly made its way to MIT.
By the latter half of the decade, there was talk of a new transistor-based computer.
This is where Ken Olsen comes into the picture.
He completed his master's thesis around this time and started working at Lincoln Lab.
His thesis had been focused on, what else, but magnetic core memory.
And some of his first work at the lab included creating test equipment for memory.
Then, in 1955 or so, he was given a wild proposal. As Olsen recounted in an oral history interview with the Smithsonian,
When I was given the opportunity to work on a transistor computer, the idea was kind of new.
It was exciting. Olsen continues, The rules were, I could hire nobody and have no space. I studied the rules
carefully and found all the loopholes. I somehow was able, one way or another, to get three or four
people to work with me. We discovered that the hallway was not a space, so we moved my office
into the hall and put walls around it. We then traded that space for a space in the basement, which was
less desirable, but bigger. With that, we were able to do our work. End quote.
It would be a bit of a shoestring affair. Olsen was able to trick or coerce MIT into pouring a
new cement floor in the basement, getting new lighting installed, and they even managed to paint the dreary walls.
Over the next two years, a computer called TX0 was built in that very basement.
But that brings us back to the mystery.
According to Computing in the Middle Ages, TX1 was already on the books.
But once this new transistor project came along, it was dropped.
Hence the weird naming convention.
If TX1 was already planned but dropped,
it wouldn't make a lot of sense to call this new project,
the first transistorized computer, TX2.
So they decremented the value and went to TX0, or TXO.
But I've become certain that that's wrong. This isn't the right view of history. The
story's either a misremembering, or maybe it was made up. You see, the TX here stands for
transistorized experiment. That doesn't really fit for a vacuum tube machine.
That said, this story has been spread around online, so if you see it, you gotta watch out.
Just go forward with knowledge. From what I can tell based off interviews with Olsen,
TX1 was the name of an earlier design for a transistor computer. According to Olsen,
that design was vetoed because it was too complicated. The design they went with was
more simplified, so they called it TX0. That's one stumbling block, but we're pretty well set
up to keep stumbling. The history of TX0 is a little obscured. The conventional story is that
TX0 was designed as a transistorized version of Whirlwind. But that's also not entirely true.
TX0 seems very similar to Whirlwind. It's another real-time computer with an oscilloscope and
magnetic core memory and a keyboard and all that. But design-wise, there is some drift.
TX0 isn't just a re-implementation of the earlier machine. Wesley Clark, the co-designer of TXO,
explained in an interview that the machine was meant to be simple. It was designed to be as
simple and as small as possible. That was kind of the exciting new thing you could do with transistors besides speed. Whirlwind wasn't exactly a simple machine. There's this wide
rift between that approach of making a small, simple machine and the re-implementation story.
This isn't the first time we'll see direct lines drawn between computers, so
I think this bears examination.
TX0 isn't actually a direct successor of Whirlwind at all. Rather, there's a computer in the middle called the Memory Test Computer, or MTC. Prior to the transistor project,
Olsen had developed the MTC as a simple machine to test out magnetic core memory.
Maybe this is where some of the fabled TX1 confusion comes from. MTC was a vacuum tube machine. It was small, and it used magnetic core.
So when it came time to build TX0, much of the experience gained with the MTC transferred over
to the new project. MTC had some features of Whirlwind, but it wasn't a direct
recreation. It couldn't have been, it had a smaller budget. That's, I don't know, I think that's like
the bottom line here. What I'm getting to is that TX0 is a successor of Whirlwind, but it's not a
direct successor. That said, I do want to point out some similarities.
First of all was modular design.
Whirlwind was built out of discrete chunks or modules.
Many of those modules were interchangeable.
The idea was, as Olsen himself explained, to allow for easier serviceability and testing.
If a module broke down, it could be swapped out for a spare. Modules could also be built in bulk. Whirlwind may need, say, 16 of a certain module to run, so you keep 20 or
so around to reduce downtime and benefit from economies of scale. The actual brains of the
computer, then, all came down to how those modules were connected. In this way, it was
the sockets and cables that connected modules that actually determined what the computer
would do. TX0 would follow this tradition. Its transistor circuits were mounted on separate
boards that would plug into the larger machine. The overall computer uses only a few types of these module boards. The other feature
to watch out for is microcode. Whirlwind used a primitive form of microcode called the control
store. This was a circuit that allowed certain pathways to be selected. Think of it like a very,
very sophisticated switch that you could easily adjust. In this way, new instructions could be added just
by making new paths in that switch. TX0 used a similar approach, so once again, there's a lineage
here, but the later computer isn't just a copy of the earlier one. TX0 became operational in 1956.
As far as demonstration units go, TX0 was a huge success.
It saw actual active use at MIT, which far exceeded its initial goals of testing out transistors.
That was enough to prove that these transistors could actually work in practice. But version 0 was really just a stepping stone. Once it was clear the transistors were viable,
Olsen, Clark, and their crew started drafting plans for a bigger, better machine.
That machine was called TX2, since, you know, TX1 was already taken.
This is also where we reach a schism.
To quote directly from Olsen again,
I was building the hardware.
Somebody else was designing the logic, and they couldn't settle down.
So after a year or two of that, I got impatient and left.
It was 1957.
There were a number of reasons for leaving.
One was we had published what we had done,
demonstrated that you could make computers very efficiently,
much better than anything done with vacuum tubes by far.
The commercial world just smiled at us and said we were just academic. There's a lot there, so let me unpack it a little.
First is the timing.
TX2 wouldn't be completed until 1958, so Olsen bailed while the project was ongoing.
But crucially, he was working on the hardware.
Like other Lincoln Lab machines, TX2 was modular.
But Whirlwind had these huge vacuum tube modules that couldn't really be swapped out.
TXO was still using experimental circuits and early transistors, so the modules were pretty rough and large.
TX2 was now using proven technology.
The new modules were smaller, more dense, and more reliable.
These modules lead us down an interesting path.
That path was called Digital Equipment Corporation.
Ultimately, Ken Olsen wanted to make and sell computers.
He joined up with Harlan Anderson, another Lincoln Lab engineer, and founded DEC.
But their path to get to that point, to making and selling
computers, is a pretty interesting one. TX0 had shown Olsen just how promising transistors could
be, but these were early days for the technology. At least, that's one viewpoint. Things get a
little muddy when you look at the details. The IBM 7070 would be released to market in 1958, and that was a fully transistorized machine.
RCA and Olivetti had also been flirting with transistors around this same time period.
But Ken says that the business world didn't really believe in transistors.
So what are we to believe here?
I think there are two factors at play.
The first is timing. The first transistor machines start selling in 1958. In 1957,
Olsen is trying to scrounge up venture capital. That means that investors didn't yet know that
IBM was interested in transistorized computers. If Olsen had waited a year, then
I'm sure investors would jump at the chance to join the bandwagon. The other factor was economic.
1957 was the start of a recession. There was less money to go around in general, which would shrink
the pool of possible investors. A new computer company was a very risky gamble, and a risk that came at a bad time.
Olsen and Anderson would eventually secure funding from American Research and Development Corporation, but there was a stipulation.
This gave the new DEC some cash flow in exchange for a 70% share of the company.
That meant that American Research could call the shots.
That also meant
that to start off, there was a big hurdle. DEC couldn't make a computer. At least, not yet.
That was still viewed as too risky. In order to secure funding and get a working business plan
off the ground, DEC made a compromise. They would roll out their computer plan in two phases.
Phase one would be the production and sales of digital modules.
If that went well, they could transition into making computers.
So, we can look at this another way.
Olsen had bet his future on this whole digital module thing. So what exactly is that?
This is where we get to the brass tacks of digital lineage. These modules were almost
identical to the digital transistor modules Olsen had built for TX2, at least circuit-wise.
The plan was to basically take some of the modules designed for TX2,
rework them a little bit, repackage them, and sell them on the open market. In practice,
the main difference actually came down to packaging. DEC initially offered two types of packaging,
system design modules and laboratory modules. The system design modules were the closest to what would be thrown
into TX2 the next year. These were transistor circuits in metal trays with big plug connectors
on the back. They could fit into equipment racks, plug into big ports, and be used to build up
semi-permanent systems. Here I'm talking everything from simple logic up to maybe an actual computer.
Laboratory modules were slightly different. These modules were packed in enclosed metal boxes with
discrete ports on the front for banana plugs. The idea was to throw these on a lab bench and
wire them up for testing out circuits, working up temporary logic, or any number of lab bench tasks.
The front panels of these modules are actually pretty neat in themselves.
They show a circuit diagram, and the banana jacks are actually in the corresponding spots that you're connecting to on the circuit.
You can think of system design modules as building blocks for permanent digital systems,
and these
lab bench modules as building blocks for more temporary stuff. If you've ever been in an
electronic engineering lab, then this probably just makes sense to you. If not, I'll give you a bit
more of an explanation. One of the common tasks in an electronics lab is to mock out circuits,
One of the common tasks in an electronics lab is to mock out circuits, test circuits, or test various equipment.
In order to do that, you need easy ways to generate voltages, signals, set capacitances,
add loads to circuits, the list goes on.
Basically, you wind up needing all kinds of tools for creating quick, temporary circuits.
If you're in the analog realm, those tools are very readily available.
You have power supplies and signal generators that can do a lot of the heavy lifting. You can
add in capacitors, resistors, inductors, and other components using these neat tools called
decade boxes. These are discrete modules that have variable, well, values. One might be a variable resistor, another a variable
capacitor. You can change the values using knobs and switches. They're really, really handy, and
like I was saying, if you're in an EE lab, you're going to see piles of these boxes either on
shelves or on benches. Physically speaking, decade boxes are just big metal boxes with dials and jacks for banana plugs.
You could call them modules.
In practice, you end up plugging decade boxes, power supplies, and oscilloscopes or multimeters or whatever into circuits you're working on.
You wire up those modules to either test out an idea, test out a circuit, or maybe just work through a
problem more physically than on paper. DEC's digital lab modules fill a very similar role to
DECade boxes, just in the digital domain. Instead of wiring in a variable capacitor, you could grab
an AND gate, or a logical inverter, and throw that into a digital circuit. These modules
would allow an engineer to apply all the older analog workflows to newer digital technology.
So there was an immediate market, and it would be cheaper to get going than a full computer.
Thus, the module approach was a lot less risky. The initial assessment would prove correct here.
DEC modules sold well enough that the company's backers
were ready to sign off on the next phase of the operation.
You see, the whole module sideshow actually had two goals.
The first was, of course, profitability.
To show that DEC had a viable business plan
and could actually produce digital equipment,
you know, as a corporation.
The second goal was to create the building blocks for a computer.
This should come as no huge surprise.
The idea for digital modules came out of early machines at Lincoln Lab.
DEC's existing modules could, in practice, be used to build up
an honest-to-goodness computer. In 1959, the company would start down that path, designing
a machine using their in-house modules. This is where I get to introduce the next player to the
game, Ben Gurley. He was another defector from Lincoln Lab. In fact, DEC would pull him away from MIT
specifically to design their first computer. Gurley had designed and built the CRT display
and light pens used on TX0 and TX2. He was also heavily involved in the larger design of TX2.
I think it's likely that he was the quote, somebody else designing the logic
that Olsen referenced in an interview with the Smithsonian. Suffice to say, Gurley was a very
good addition to the team. Gurley himself would become something of a legend in the field.
He would design this new computer, the PDP-1, in just three and a half months. That's a wild feat in any era. The initial business plan called
for the PDP-1 to be constructed from existing DEC modules. But when that catalog was lacking,
Gurley designed the missing modules himself from scratch. The PDP-1 was, in no small part, his machine.
The PDP-1 was also the only commercial computer that Gurley ever designed.
In a wild series of events, Gurley would be tragically murdered in 1963, just a few short years after the launch of the PDP-1.
I think the fact that his tenure in the field was so short further added to his legend.
The events surrounding Gurley's murder are strange and, once again, they've been obscured and mythologized.
He was shot while eating dinner with his family in his home.
His murderer, Alan Blumenthal, was another DEC employee.
This would actually happen about a year after Gurley left DEC to work at another company. According to some sources, Blumenthal had expected Gurley
to take him along when he left DEC. When that didn't happen, Blumenthal apparently had some
kind of mental break. It's not entirely clear because all of the sources that discuss this
are, well, they're pretty obscure or secondhand. Some contemporary reports call Blumenthal a
sniper, and I think that's most likely due to the historical context. This is partly, I think,
what obscures the story and what makes the mythologizing so easy. The murder happened
around the same time as the Kennedy assassination. In fact, the magazine Computers and Automation
would run a letter from the editor discussing the killings of Gurley, Kennedy, and Medgar Evers as
all related assassinations. I think just the title from that piece gives us an idea of how this was viewed.
Quote,
Computers and computer people against assassination.
The combination of technical feats and this early tragic and really bizarre death
I think combine to make a larger legend around Gurley.
All this is to say, we keep entering this weird territory around the PDP-1. We've already hit on
bizarre misconceptions and legends in this episode, and that will continue.
As such, we have to tread very carefully when we read sources about this machine. So,
with that aside, how did the PDP-1 start out? Well, as I've said before, it was all part of DEC's larger business plan.
Phase 1 was modules, which would lead to Phase 2.
To quote directly from that plan,
The initial goal during Phase 2 will be the production of the first general-purpose computer by Digital Computer Corporation.
It is anticipated that this will be soon followed by additional
production based on orders. A modest expansion of personnel will be made when phase two is entered.
End quote. Also just something to note here that I didn't initially notice, in the business plan,
it isn't called Digital Equipment Corporation, but Digital Computer Corporation. As the company
actually formed, we'll see that they tried to
distance themselves from that computer word, but we'll get into that later. This plan goes on to
explain that the computer will be built from system modules, the very same that were designed
in phase one. Once again, efficiency as always, and risk mitigation. Really, a lot of this business plan is about mitigating
the risk of starting a new computer company. If you just had this information, it may lead to
some bad assumptions. First is that there wasn't much of a plan here. Just let's build a computer
with transistors. In fact, some sources claim there wasn't a design goal behind the PDP-1.
This even shows up in Gordon Bell's otherwise wonderful book Computer Engineering.
But this just isn't the case. In fact, the PDP-1 had a whole host of design goals.
To pull directly from this Olsen interview I keep using, quote,
the goal of the PDP-1 was to introduce a new type of computer to the world.
In the tradition that was developed at MIT where the computer was very simple, very fast,
and relatively inexpensive. In this case, the price was $110,000 with only 4,000 words of memory.
Because it was simple, easy to use, interactive with the cathode ray and light pen. End quote. There is some anachronism there, sure, but there are concrete goals and quantitative goals. You can find similar statements in more contemporary sources, but I just like this quote because it puts everything together.
There's not really much left to the imagination. The PDP-1 was going to be cheap, simple,
interactive, a single person could use it, and it would be based off earlier work done at MIT.
It would be powerful enough for real-time use, but not
necessarily a huge, blundering mainframe. It's also worth noting that there are details here,
like the price and memory space, that line up with the business plan. This leads nicely into
the whole TX2 myth. That is, the idea that the PDP-1 was really just a commercial version of MIT's TX-2.
I think we've hit the point in the story where we can really start to see how that argument is formed.
The PDP-1 was planned to be an interactive machine in the mold of all those MIT computers that the DEC team had already built.
It was also composed of DEC lab modules, themselves adapted from the designs
of the TX2 modules. I mean, Olsen even designed and built the modules for the TX2. But we already
have the evidence we need to disprove this. Gurley had to create new types of modules for the PDP-1.
That alone means it's not a rebadged MIT machine.
There's also the timeline to consider. TX-2 was complete in 1958, a year before the PDP-1 was finalized. On the one hand, that means that MIT's designs were waiting to be used. But this was a period of very rapid change and development in the field.
By 1959, TX2 was already somewhat obsolete. There were already new technologies, new ideas,
and new approaches out there. Copying an old machine would just be unrealistic.
As we move forward into the more technical aspects of the PDP-1, I want us to
keep our eye on this TX-2 comparison, because I think, personally, this is the big myth and also
a really interesting comparison to make. Now, there's the additional matter of the machine's
name. PDP-1. That's short for Programmed Data Processor 1. Now, the numeric part, that's easy.
But Programmed Data Processor? What's the deal with that? Well, it's another one of those weird
stories that may or may not be true. According to Olsen, the name had to do with government
contracting. Here I'm going to be paraphrasing from the Smithsonian interview, since the explanation there is a little rambling.
So, initially the computer didn't have a name.
Then at some point in development, DEC received a letter from some government organization asking about hardware for reading seismographs.
This could have led to a very lucrative federal contract.
But there's a catch. At the time, the feds wouldn't fund any more computer purchases.
Olsen's solution was to call the new machine something other than a computer. So he figured,
you know, it's a programmable machine that processes data, so why not programmed or programmable data processor?
I think that makes sense, and it's a fun story.
It's a little way to trick the feds, get some extra funding, make a nice contract, and still sell a computer.
I've also seen stories that the PDP name was an attempt to make DEC's investors happy. This was all still during a recession, and even after phase one, DEC still looked risky. So they said they weren't making
a computer, but a programmed data processor. I think DEC's own name lends that story some credence.
They replaced the "computer" in the name with "equipment." Once again,
that would make sense if the company is trying to distance themselves from this appearance of
selling computers. The commonality here is, of course, making their computer not sound like a
computer. And this leads me to the third explanation I've seen, and personally the one that I like the most.
That's that the PDP name was meant to denote this wasn't a normal computer.
In fact, it wasn't like any other computer.
Now, I don't know the historicity of any of these stories, but I do know they all have a very certain ring to them. Simply put, the PDP-1
was unique for its time. It was the first computer on the market that was designed for interactivity.
It was the first machine to bring MIT's ideas to the public. Interactivity was a bit of a weird
pitch at this point. Most big, serious computers were completely non-interactive.
They were lumbering behemoths that sat in special rooms with special cooling and power hookups.
Digital clerics fed in jobs while lowly programmers had to wait for their outputs to be returned.
In that capacity, it makes a lot of sense to call this something other than a computer.
And that also lines up with the government contractor story.
I mean, you're not really lying if this is a new category of machine, right?
Now, that's not to say that interactivity was unheard of.
I mean, look at MIT.
We've already touched on three interactive computers. There
were also add-ons that could make machines more interactive. There were tricks that could be
pulled to make a lumbering beast more agile. But those tricks were usually relegated to research
and maybe some well-funded companies that were working on the bleeding edge. There were actually
some commercial machines that were meant for interactivity that could be bought prior to the PDP-1's release.
The best known is perhaps the LGP-30, a personal favorite of mine.
This was, and say it with me now, a small, cheap, and simple machine that could be operated by a single person.
Did DEC know about the LGP-30 prior to their first computer? Well, if they did,
I don't see any record of that. The LGP-30 wasn't a hugely popular machine. It definitely didn't
have enough sway to change how the world viewed computing. So in 1959, it would have been very fair for DEC to say, oh, we aren't making a
computer. We're making something different. To understand that difference, I want us to start
at the outside and work our way in. The first obvious point is the one I keep bringing up,
interactivity. This feature wasn't just an add-on or a side thought. The PDP-1 was built from the
ground up to be interactive. One result is that a PDP-1 installation just plain looks different
than anything else. You can actually see this firsthand at the Computer History Museum in
Mountain View. That museum has a restored and functioning PDP-1 on display.
There's a whole room dedicated to just that one machine. You can walk in and, during special
events, even use the thing. This means we're dealing with a real working computer lab. At least,
a cleaned up one with a little barrier so you don't jump on the delicate computer.
Now, right next to the PDP-1 lab is another restored machine, an IBM 1401. These machines are contemporary. They were both released in 1959, but they couldn't be more different.
The 1401 is a classic mainframe. It operates in batch mode. It's supported by a cadre of
separate special-purpose machines, printers, card punches, unit record equipment, tape drives,
and so on. It requires special power, special cooling, and special data conduits to function.
The 1401 lab actually has a false floor just to handle all these wires.
When you enter the room, you walk up a ramp. The entire lab had to be custom built to fit this
machine. That's the reality of a mainframe. These are big computers that almost warp the world
around them, both literally and financially.
The entire room is full of support for this one machine.
And even with all that, you still have to run programs one batch after another.
By contrast, the PDP-1 lab is almost Spartan.
You have the PDP-1 itself and a desk. That's about it. On the desk is the only
external support you need, a CRT display and a teletype terminal. That's all you need for a
working PDP-1 system. You don't need a false floor, you don't need a pile of other equipment,
you don't need forced air cooling, you just need a PDP-1
and two peripherals. I think that makes for a very clear distinction. Now, technically speaking,
the integrated terminal was just a teletype. It was always in online mode, meaning it was wired directly to the PDP-1.
If you type a key, that information was passed directly to the computer, and the computer could
also send a character to the teletype to be printed on paper. The special word here is
direct. Let's just look at the 1401 for comparison's sake. On that machine, there was no direct way to type
into the computer. You would have to punch your data onto cards on a separate machine,
and then have the 1401's card reader read those cards and place that into memory.
There was an optional printer for output, but that's not really real-time feedback, is it?
You can't do something like an interactive shell on a 1401.
The same was true for most contemporary machines.
In practice, many PDP-1 installations would add on a separate Friden FlexoWriter.
The FlexoWriter here is set up as this all-purpose input-output device.
Call it a Friden all-in-one.
I've talked about Flexos before, but I kind of just love the things.
They can be used totally offline, like a typewriter,
or you can punch your keystrokes onto paper tape.
You can also read in punched paper tape and print that data to normal paper.
The name of the game here really is flexibility.
Once again, this would have been something like an add-on, but even with the FlexoWriter,
we're looking at a very small installation. Once again, this isn't like other computers.
This continues as we peer deeper, so let us turn our attention to the physical machine itself. The PDP-1 is relatively
small, all things considered. It weighs only 1,600 pounds and is all housed in a cabinet that's 8
feet long by 6 feet tall. The 1401, by way of comparison, weighs something like 4 tons.
Controls for most mainframes were spread out over a series of different units.
But the PDP-1 didn't roll that way.
Remember, it's just one machine.
All its controls were on the front of the computer.
This is still an old-school control panel.
We're talking about blinking lights and switches.
But it's all self-contained.
Better still, the peripheral desks can move around. So,
check this out. You can set up your display and teletype on either side of the PDP-1's control
panel. That means everything you can control or read is within arm's reach. In other words,
it's physically possible to operate a PDP-1 alone.
That may sound like a small detail, maybe even trivial, but let me assure you, this
is wild to see in a computer from 1959.
Let's go back to the 1401 next door.
That computer has controls spread out over the entire machine room.
Outputs can be spit out on the printer or the punch card reader or both.
Those outputs are generated on separate machines that are physically separated from each other.
Inputs have to be on either punch cards or magnetic tape.
But those inputs are processed by separate machines, physically removed from one another. In the case of punch cards,
you actually have to go sit on a totally independent key punch to make them. This means
operating the 1401, or any contemporary mainframe for that matter, is a more physical affair.
If you did it alone, you'd have to be running around the machine room almost constantly,
cards in one hand, tape in another, printed paper in a third.
That's one of the reasons that teams of trained operators handled mainframes,
and programmers were kept out of the machine room. The very act of using a mainframe is different from the act of programming a mainframe.
So a neat little control panel flanked by I/O
devices was a huge departure from the norm. This also changes the culture around computing.
You don't need operators if you can just sit down and program the machine. That's one of the reasons
that hackers flock to PDP-1s. You know the story of someone staying up into the wee hours of the night
working on a computer, only leaving the machine room at dawn? That becomes possible on these
newer, smaller machines, like the PDP-1. You can't do that on a mainframe where your operations are
mediated by operators. The other device of note here is the high-speed paper tape punch.
This is the main I/O device of the PDP-1, so much so that the two are pretty closely
associated.
The combo punch and reader is integrated into the front of the PDP, right above the control
panel.
So once again, we're within arm's reach.
We have this little world for the hacker and programmer to live in.
What makes this tape drive special is that it used fan-fold paper tape.
This is a long feed of paper tape that's been folded up like an accordion.
Once again, we have a bit of a mystery here.
Fan-fold tape was relatively unique.
There were actual prototypes of the PDP-1
final model ends up shipping with this weird form of paper tape. Why? Well, I'm not super sure,
but my gut feeling is it was a matter of cost.
Here I'm turning to Gordon Bell and his book Computer Engineering.
He was an employee of DEC in this period.
He explains that there were cuts made during the prototype process in order to reduce the cost of the PDP-1.
Apparently, some early prototypes included a CRT in the control panel, but that
was scrapped in favor of a separate unit to save on base unit costs. While he doesn't explicitly
mention the magnetic tape drive, I think it would be reasonable for us to assume that that was also
cut for the same reason. But then, why move to fan-fold paper tape? I honestly don't have a good idea.
It's a weird choice, and I haven't found much reference to it in the sources.
My guess would be maybe physical portability and ease of use. One of the upsides of fan-fold tape
is it's easy to store without damaging. You also don't need anything
like a spindle or reel to hold the tape. This makes it pretty nice for storing small programs.
The hypothesis also works well with a bit of business around the PDP-1. DEC would, in time,
sell and ship software alongside their new machine. But that's getting ahead of ourselves. We still need to talk about
the specifics of the machine itself. From the inside, that is. I could take up the rest of the
show going over the intricacies of the PDP-1's design, talking about each register and bus in
detail, but I think that would be a wasted effort.
You can find the full designs online pretty easily.
Other folk have spilled plenty of ink about it.
Rather, I want to look at a few details and how those measure up in context.
And ultimately, I want to come back to one of those central myths.
That is, of course, is the PDP-1 just a TX-2 in disguise,
or is it something else entirely? Let's start with something fundamental. Word length. Or,
put another way, the natural size of data that the PDP-1 operated on. Or, put yet another way,
bittedness. I know, it's one of those all-pervasive qualifiers for a computer.
Word size is one of those core choices that impacts the rest of the machine. It dictates
what kind of math it can quickly compute, what kind of data you can pass around,
how buses are shaped, how its lights are laid out, and even what kind of memory you can use. So it's fair to call it a big deal.
The PDP-1 used an 18-bit word. You could also just say it's an 18-bit machine. That means that
its buses move around 18 bits of data at a time, its memory holds 18 bits at each address, and its
math circuits are designed to operate on 18-bit numbers.
So the question becomes, what's so unique about this word size? A modern gut reaction might be
that 18's weird because it's not a power of 2, you know, 8, 16, 32, 64. But we are in the pre-IBM360
era, so it's very common to see word lengths that aren't powers of two,
so that's not very weird. The bigger flag for this era is that this is actually a very small
word size. Now, to be totally fair, word sizes were all over the place in this time period.
The IBM 1401, itself a business machine, was technically a variable word length computer,
but its basic word size was 6-bit. That's, however, not the best comparison since the 1401
was meant to work with character data and not really do huge mathematical work. The better
comparison here is to a scientific machine. Now, laugh all you want, I am falling into some of IBM's old sales and product line jargon.
You will have to forgive me, but I think this is actually useful here.
Back in this period, IBM had two main product lines, scientific computers and business computers.
The 1401 was a business machine, which is really basically a glorified
punch-card accounting system that could be programmed. On the scientific side were machines
like the 700 and eventually the 7000 series. These were meant to crunch numbers and do real,
like, hardcore mathematical work. Let's take the IBM 709 as a baseline here, since it was released just a year before the PDP-1,
and I think its specifics are pretty indicative of the larger pack of scientific mainframes.
The 709 used a 36-bit word size. That's twice the size of the PDP-1's tiny 18-bit words.
Why the larger words? Well, it comes down to simple math.
A 36-bit word allows for pretty good mathematical precision. You can represent a 10-digit decimal number with a few bits to spare. If you get crafty, you can even represent larger numbers
within 36 bits of storage. Since the 709 is designed with these
wider words in mind, that means it can very quickly add, subtract, or even multiply large
numbers. Taking the sum of two of these 36-bit numbers would be a single operation. This becomes
hugely important if we're talking about things like simulation models.
When simulating physics or chemistry, precision is crucial. You need numbers with a lot of decimal
places. The ability of the 709 to natively support highly precise numbers meant that it was able to
burn through simulation software. At least, this represents an older approach to performance for this kind of software.
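As a quick sanity check on that precision claim, here is the back-of-the-envelope arithmetic (not a figure pulled from any of the sources):

    2^36 - 1 = 68,719,476,735   (just under 6.9 x 10^10, so any 10-digit decimal number fits with a couple of bits to spare)
    2^18 - 1 = 262,143          (only about five decimal digits)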
Compare that to the PDP-1's dinky 18-bit words.
If you needed to multiply two 36-bit numbers, that would be a handful of operations.
First, you would actually have to use four locations in memory, since each number would need two words of storage.
You'd have to multiply the first 18 bits of each number,
then the second 18 bits, and you'd have to account for carry, overflow, and underflow.
In short, it would be slow and require extra software when compared to the same operation on an IBM 709,
or any machine with a larger word size, for that matter.
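To make that bookkeeping concrete, here is a rough sketch in modern C rather than PDP-1 assembly. The two-word layout and names are invented for illustration, and it shows only addition; a full multiply would stack four partial products on top of the same carry handling.

    #include <stdint.h>

    #define WORD_MASK 0x3FFFFu              /* the low 18 bits */

    /* A 36-bit value stored as two 18-bit halves, the way a PDP-1 program
       would have to spread it across two memory locations. */
    typedef struct {
        uint32_t hi;                        /* upper 18 bits */
        uint32_t lo;                        /* lower 18 bits */
    } word36;

    word36 add36(word36 a, word36 b) {
        word36 r;
        uint32_t lo_sum = a.lo + b.lo;      /* may spill past 18 bits */
        uint32_t carry  = lo_sum >> 18;     /* 1 if the low half overflowed */
        r.lo = lo_sum & WORD_MASK;
        r.hi = (a.hi + b.hi + carry) & WORD_MASK;   /* propagate the carry; bits
                                                       past 36 are simply dropped */
        return r;
    }

On a machine with 36-bit words, that whole dance is a single add instruction.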
Then, why would you go for the smaller registers?
Well, tradition, of course! Earlier MIT machines, namely Whirlwind and TX0, also used small word
sizes. They were 16 and 18-bit machines, respectively. And to note here just for
completeness, TX2 was actually a 36-bit computer,
so that's another difference between the PDP-1 and TX2. Anyway, smaller word sizes make for more
simple machines, which means cheaper machines. They're cheaper and quicker to produce.
That becomes a huge consideration, especially when you're trying to sell a computer.
But of course,
this comes at the cost of performance when dealing with larger calculations.
To get around that, the PDP-1 operated at a relatively high clock rate. Now, once again,
this is one of those confusing quantifiers. All computers have some clock rate that they operate
at. It's the cadence at which bursts of electrons travel down wires.
A faster clock rate doesn't automatically translate to a faster computer.
It just means that certain operations take a shorter amount of time.
The PDP-1 operated with a clock rate of about 187 kilohertz.
That was limited by the speed of memory access. In theory, I think the transistors
could have been driven in the megahertz range, but magnetic core memory was relatively slow
compared to transistors. The IBM 709 doesn't have a published clock speed, but I can make
some extrapolations. The reason there's not published clock speed just comes back to how
older computers were talked about. We don't have the same kind of standardization that we have
today. A basic instruction on the 709 took 24 microseconds to compute, which we can translate
to roughly 42 kilohertz. Memory access on the machine took just 12 microseconds for a best-case speed of about 83 kilohertz.
Either way, that's much slower than the PDP-1's clock cycle.
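For the curious, the conversion is just the reciprocal of the time per operation, using the figures above:

    1 / (24 microseconds) = roughly 41,700 instructions per second, call it 42 kilohertz
    1 / (12 microseconds) = roughly 83,300 memory cycles per second, call it 83 kilohertz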
So, in theory, the small word size may not matter.
You may need to shuffle around data to handle larger numbers, but you're running fast enough
that it could be fine.
This is another example of how the PDP-1 just wasn't like other computers.
It's a very different approach to performance and a different approach to computing, fundamentally.
However, this isn't to say the PDP-1 was a speed demon.
A stock PDP-1 did not support multiplication or division in hardware.
That meant that to handle anything more complex than addition, you'd have to write a good amount
of code. That hoses any theoretical gains you get from a faster clock rate. Remember,
fast clock rate doesn't actually mean fast computer, at least not always.
This actually leads to one of the ways where the PDP-1 was like contemporary mainframes.
Note that I said a stock PDP-1 didn't do hardware multiplication.
Well, you could actually fix that.
Later models had an optional expansion, called the Type 10 expansion or sometimes Option
10, that added
hardware multiply and divide circuits. That expansion, of course, came at a fee.
Today, that may sound weird. At least, I hope it sounds weird and wild. With the state of
subscription services, maybe this will come back in vogue, but I really hope not. Mainframes, the
only real computers on the market at the time, were designed explicitly for this kind of pay-for
expandability. Their sprawling size should be a tip-off to that. The IBM 1401, for instance,
had an index register add-on. You could literally pay IBM to give you more processor registers to
work with. DEC would sell a whole host of expansion options for their first machine.
The hardware math circuits were one option, but you could also get a high-speed I/O controller,
a mysterious sequence breaker, or even drivers for magnetic tape and punch cards.
In fact, the PDP-1 had been designed specifically for expandability, much like contemporary mainframes. Early in prototyping,
the PDP-1's chassis had been enlarged to add extra space. So a stock machine actually had
these void spaces inside its cabinets, just waiting to be filled. Of course, this was all
helped by the modularity of the PDP-1. The overall architecture made expansion much easier than it
would have been with a computer that wasn't modular. With this, I think we can start to see
how the PDP-1 was bringing new concepts to market, but those new ideas were shaped by its context.
To continue in that vein, we need to discuss interrupts.
The basic idea of an interrupt is pretty simple, and it's something that's been around in
some capacity for most of the history of computing.
The idea is that there are certain special events that need to break the normal sequence
of program execution.
These started out as error cases.
Basically, the computer detects a problem and automatically drops whatever it's doing and freaks out.
Over time, this progresses to actual, useful sequence breaks.
The computer runs into an error, stops what it's doing,
and jumps over to some predefined code that will, hopefully,
help resolve or explain the error to an operator.
It gets cool when we get to hardware interrupts. This feature starts to be developed in the 1950s,
around the middle of the 50s. The idea is you have a pin on a computer that if you send a signal to,
will trigger an interrupt. It will make the machine throw up its hands and stop what it's
doing, and then jump to some special code. Once it's done running that special code,
often called an interrupt service routine or an interrupt handler, then it goes back to whatever
it was working on before it was interrupted. This is one of the primitive building blocks
of multitasking. That can all sound a little abstract, so let me give a more
concrete example of how this works. And I'll do that by way of a question.
How does a keyboard actually communicate with a computer?
Let's look at this first without interrupts. You have a keyboard. When a key is pressed,
it sends out a signal over some sort of
bus. Without interrupts, the only way to handle live keyboard inputs is using some kind of waiting
loop. You check the values on that bus. If anything is there, you copy it into some buffer in memory,
then you clear the value on that bus, then you do it again, and again, and again, and again, and again.
This is a very computationally expensive routine since a user doesn't constantly hammer out a perfectly formed string of characters.
You're going to have lots of cycles of that loop where nothing happens,
where the user is just sitting there going,
ah, what comes after the A? Oh, do I need a semicolon? That also means that
you're going to have all these loops where the computer is doing nothing of value. All it's
doing is turning electricity into heat. If you need to get keyboard inputs while a program is
running, well, forget about it. A lot of processing power is eaten up just waiting and checking for key presses
using these kinds of loops, so you have to try and squeeze in calculations during iterations of that
loop. But you can't run too many operations, because if you miss too many cycles, you can
miss a key press. The waiting loop, I think we can see, is a flawed approach. Now let's add hardware interrupts and
make a slight modification to the keyboard. Now when you press a key, a value is put onto some
bus and the keyboard triggers a hardware interrupt for you. When that happens, the computer stops,
reads the value off the bus, and puts that into some buffer in memory. It clears the bus, clears the interrupt,
and then gets back to work. Notice there are no loops. There is nothing having to do with
synchronization, not really even much of a slowdown. The computer just gets interrupted maybe
a few times a second at most, does its hardware thing, then moves on as if nothing ever happened.
It should be clear that this is a much better approach. Hardware interrupts mean that a computer can actually be interactive with little degradation in performance. Sure, you could hammer
at the keyboard fast enough to cause issues, but a normal person wouldn't do that during normal
operations. The average typing speed these days is just over 40 words per minute, which works out to only a few keypresses per second,
so in most cases, interrupts will perform better than the older loop method.
Perhaps it's no wonder that interrupts show up at MIT and at DEC.
The TX-2, the very machine that Olsen and company were working on before DEC,
sported a surprisingly
sophisticated interrupt system. TX-2 implemented priority-based interrupts. In this scheme,
there are multiple different interrupt lines, and there's a hierarchy between those interrupts.
These different lines were used for things like tape drives, I/O errors, timers, and external communications, even with other computers.
Each interrupt fell on a hierarchy, so more pressing interrupts were processed before
less important ones. This system also allowed TX-2 to do things like delaying execution of
chunks of code until certain events occurred. The flexibility here for programmers must have been a dream to work with.
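Here's a rough sketch in C of the general shape of priority dispatch: several interrupt lines, each with its own handler, and the most urgent pending one always serviced first. The line names and numbering are invented for illustration; this is not TX-2's actual scheme.

#include <stdio.h>

enum { LINE_IO_ERROR = 0, LINE_TIMER = 1, LINE_TAPE = 2, LINE_COUNT = 3 };

volatile unsigned pending = 0;   /* one bit per interrupt line */

static void service(int line) {
    static const char *names[LINE_COUNT] = { "I/O error", "timer", "tape" };
    printf("servicing %s interrupt\n", names[line]);
}

/* Dispatch: always take the lowest-numbered (most urgent) pending line first. */
static void dispatch(void) {
    for (int line = 0; line < LINE_COUNT; line++) {
        if (pending & (1u << line)) {
            pending &= ~(1u << line);
            service(line);
        }
    }
}

int main(void) {
    pending |= (1u << LINE_TAPE);      /* the tape finished a block...   */
    pending |= (1u << LINE_IO_ERROR);  /* ...and an I/O error came in    */
    dispatch();                        /* the error gets serviced first  */
    return 0;
}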
The PDP-1 also had hardware interrupts, but with a caveat. There was an add-on available, the mysterious
sequence break module, that could handle priority-based interrupts and multiple hardware
interrupt channels. But stock, the PDP-1 only had one hardware interrupt. This was, perhaps, not the
best setup, but it did make the base model cheap and simple. In practice, this meant that the PDP-1
only had one interrupt routine, and that routine had to figure out which device had actually triggered the interrupt. But, you know, that's enough to work with.
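Here's a small sketch of what that "one interrupt, many devices" situation can look like in C. The device flags and handler names are invented for illustration; they are not the PDP-1's actual status bits.

#include <stdio.h>

volatile int keyboard_flag = 0;   /* set by "hardware" when a key is waiting    */
volatile int tape_flag     = 0;   /* set by "hardware" when tape data is ready  */

static void handle_keyboard(void) { puts("keyboard serviced"); }
static void handle_tape(void)     { puts("tape serviced"); }

/* The single service routine asks each device, in a fixed order, whether it
   was the one that pulled the line. The order doubles as a crude priority. */
static void sequence_break(void) {
    if (keyboard_flag)  { keyboard_flag = 0; handle_keyboard(); }
    else if (tape_flag) { tape_flag = 0;     handle_tape();     }
    /* then return to whatever the main program was doing */
}

int main(void) {
    tape_flag = 1;        /* pretend the tape reader pulled the line */
    sequence_break();
    keyboard_flag = 1;    /* pretend a key was pressed */
    sequence_break();
    return 0;
}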
The most visible use for this interrupt was, of course, the keyboard.
That should come as no surprise.
Even just one interrupt line meant that the PDP-1
could be much more interactive than other large machines.
Yet again, it's not like other computers.
It's using its specific design
to make it more interactive. This also let the PDP-1 do a cool data shuffling trick.
It could use the interrupt line to asynchronously handle data transfer to and from input and output
devices. I want to explain this just a little bit because I love these kinds of neat
little tricks. Remember how I explained keyboard interrupts? That same setup could be used to read
data into the PDP-1's memory. Just hook up some kind of drive, maybe even the paper tape drive
from a Flexowriter, and set it to fire an interrupt when a new chunk of data was ready.
Then the PDP-1 could read in data while also running other programs. In that way,
the same feature that helped make the computer more interactive made it more capable and more flexible.
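To give a feel for that trick, here's a minimal sketch in C of interrupt-driven data transfer: each time the "tape reader" has a new word ready it fires an interrupt, the service routine copies the word into memory, and the main program keeps computing in between. The tape contents, block size, and timing are all invented for illustration.

#include <stdio.h>

#define BLOCK 8

int tape[BLOCK] = { 3, 1, 4, 1, 5, 9, 2, 6 };   /* pretend paper tape  */
int buffer[BLOCK];                               /* destination in core */
int words_read = 0;

/* Service routine: grab one word from the device, store it, and return. */
static void tape_isr(void) {
    buffer[words_read] = tape[words_read];
    words_read++;
}

int main(void) {
    long other_work = 0;
    while (words_read < BLOCK) {
        other_work++;              /* the main program's own computation    */
        if (other_work % 3 == 0)   /* pretend the reader is ready again...  */
            tape_isr();            /* ...and fires the interrupt            */
    }
    printf("read %d words while doing %ld units of other work\n",
           words_read, other_work);
    return 0;
}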
Let's just take quick stock of all these features. The PDP-1 had a small word size, a fast clock rate, and a single interrupt. All of those made it stand out among commercial machines.
Features like modular design and expansions were more in line with existing practices,
but overall, this was a very unique and small computer. So you must ask, yet again: was this just a commercial version of TX-2? By this point, I hope we can all agree that no, the PDP-1 was its own machine. That said,
it was definitely a descendant of the TX-2 and really all MIT computers that came before it.
Here's the last little neat wrinkle to the story, and I think it
really is the last twist I have in me. In the slow march of progress, we expect every new machine to
be bigger and better than its predecessor, or rather, smaller and faster. That should mean
more RAM, faster processing, more features, that sort of thing. Call it strict improvements over time.
That's not exactly the case with the PDP-1. DEC's first computer is much less capable than TX-2.
The TX-2 had 320 kilobytes of magnetic core. The PDP-1 had four. TX-2 had tiered interrupts with priorities and multiple lines of input.
The PDP-1 had one. Word size was smaller. There were fewer instructions. There were even fewer
internal registers. The PDP-1 only had two registers for programmers to work with, which in any era is not a whole lot.
So looking just at the numbers, the PDP-1 is a downgrade. Maybe call it a regression.
But it's not necessarily the numbers here that are important. Everything else around the PDP-1
was a vast refinement over MIT's earlier machines. For one, the PDP-1 was actually manufactured at scale.
A total of 53 of these machines were sold, which, you know, that's not a huge number,
but that's a lot better than the one TX-2 that was built. This is the whole commercialization
aspect of the machine, and I think it's why folk like to call it a commercialized version of TX-2. It took the technology that was used in TX-2 and all the other machines over at MIT, and refined and reformed that
technology into a new kind of machine, and that machine was something that could be mass-produced
and sold. Part of that commercialization was technical, like the industrial design and
expandability, but part of it was less so.
Here I'm talking about things like support and software. The classic bundle around any computer,
really. DEC would provide software along with the PDP-1. In fact, there's this great story about
some PDP-1s shipping with the game Spacewar! preloaded in magnetic memory. In practice, however, this was
more often a matter of providing tools for new users. Almost directly after release, DEC shipped
out assemblers for the machine, which allowed for easier programming. Debugging software was also in
the mix pretty early. DEC actually spent considerable time and human effort figuring out what kind of software they
needed to provide and sell. That's just one aspect of the whole project, but it's something that was
never even factored into the life of TX-2. The PDP-1 was a machine meant for use by researchers
and in labs, but it wasn't a research machine, if you catch my meaning. It was a pre-made machine that was ready to use.
That gave DEC a whole new world of things to worry about, software being just one aspect.
So maybe it's no wonder that the PDP-1 is, physically and architecturally,
so different than the machines it grew out of at MIT.
It's clear to see from all this how the PDP-1 was revolutionary when compared
to other companies' computers. It was cheaper, smaller, and thus more accessible than any other
machine. Its interactivity made it a totally new type of computer. But the technology itself wasn't
some bolt from the blue. This revolution came on the back of years of research
and evolution done at MIT. Alright, that does it for this episode. Once again, this is one of those
times where I have to say I'm just skimming the surface. The PDP-1's history is so much more complex than just its beginnings.
It becomes the home of many early hackers.
It's where Spacewar! is written.
Some of the first text editors are developed on the PDP-1.
It's used in early timesharing experiments.
The list really goes on.
I've consciously avoided falling into discussions of the actual use of the machine because the
PDP-1 just comes
up so often on the show as a setting for other stories. I'm sure it will continue to do so in
the future. There was one surprise that I ran into, though, during this episode. There seems
to be an almost complete lack of scholarship around TX-2, which seems kind of strange to me.
You can dig up a whole lot of primary sources,
but there are very few secondary sources around. That's especially weird because TX-2 itself was an
important machine. There's definitely some meat there that I think I want to come back to when
I have some more time. Anyway, as far as the PDP-1 goes, I hope I've been able to outline what a fascinating
machine it truly was. When it went to market, there wasn't anything like it. DEC was able to
transmute ideas used in MIT's computers into something more simple, more practical, and more
accessible. The result is a machine that's firmly set in a lineage of computers, but not in the most expected way.
We've reached the end, but I do have just some quick announcements before I sign off.
You may have noticed this episode's late. Everything's fine with me, I just got
really busy a few weeks ago when I was starting production. Which leads to issue two. I'm about to go on a big long vacation. This has been planned
for a little bit, and I had scheduled things a little more tightly so I'd be on my normal release
schedule, but pushing this episode back has pushed back next episode. I'm actually going to be gone
for two whole weeks, so I'm pretty sure that next episode
will also not come out for three more weeks.
After that, I'm going to commit to getting back on track more for myself than anyone.
I like having a very set schedule.
Anyway, so if you don't see next episode two weeks from now, that's because it's going
to be just a little bit late.
With that said, thanks again for listening to Advent of Computing. I'll be back, you know when,
with another piece of computing's past. If you like the show, there are a few ways you can support
it. If you know anyone else who'd be interested in the history of computing, please take a minute
to share the podcast with them. You can also rate and review the show on Apple Podcasts.
If you want to be a super fan, you can support the show directly through Advent of Computing merch or signing up as a patron on Patreon. Patrons get early access to episodes,
polls for the direction of the show, and bonus content. I'm actually polishing up the script
for a bonus episode right now, which will be coming out next week. So if you want to get that, you can head
over to my website, adventofcomputing.com, and get a link out to Patreon. If you have any comments or
suggestions for a future episode, you can go ahead and hit me up on Twitter. I'm at Advent of Comp,
and I at least read everything I get mentioned in. And as always, have a great rest of your day.