Advent of Computing - Episode 124 - The Full 360
Episode Date: January 15, 2024

The release of the IBM System/360 represents a major milestone in the history of computing. In 1964 IBM announced the 360 as the first family of compatible computers. Users could choose a system that... was just the right size for their needs, mix and match peripherals, and have no fear of future upgrades. If you started on a low-end 360 you could move up to a top of the line model and keep all your software! Something like this had never been done before. Such a watershed moment resulted in interesting cascading effects. In this episode we will look at the 360 itself. In the coming weeks we will be examining how it shaped and dominated the market, how it led to a federal antitrust suit, and how a mysterious series of clone computers survived in uncertain times.

Selected Sources:

https://spectrum.ieee.org/building-the-system360-mainframe-nearly-destroyed-ibm

https://archive.computerhistory.org/resources/access/text/2012/11/102658255-05-01-acc.pdf - Fred Brooks Oral History

https://archive.computerhistory.org/resources/access/text/2017/11/102655529-05-01-acc.pdf - 14K Days
Transcript
No one ever got fired for buying IBM, at least so it's been told.
This is one of those catchphrases that's so old and it's been so widely circulated
that it's hard to even pinpoint an origin.
For nearly the entire history of computing, that catchphrase has been true.
It's still true today, at least in certain circles.
No one ever got fired for buying IBM,
because IBM represented the gold standard in computing.
They reached that height through decades of work,
very careful risks, and debatably anti-competitive practice.
It's not just that IBM put out good computers,
it's that IBM consistently put out
good computers. Say whatever you want about their leasing system, their stance on compatible
machines, or their somewhat concerning activities during the Second World War. The bottom line is
that since their very founding, IBM consistently produced solid products.
This is one of the reasons that IBM shows up so often here on Advent of Computing.
They don't just have one big hit, one revolutionary computer, or even a handful of machines.
If that was the case, then IBM just wouldn't loom so large.
They'd be an RCA, a Datapoint, or maybe even a Xerox.
IBM has been a consistent and huge presence in the computing world since, well, since there was a computing world.
Heck, even before that.
An office worker in 1921 could even be assured that
they wouldn't get fired for just buying IBM. These decades of dominance make IBM's history
dense with lore and stories. There are, quite literally, songs written about the power and
prestige of Big Blue. I mean, just look at that name, Big Blue. IBM has that moniker
because not only is their logo large and blue, but you could spot IBM salesmen because they wore
fancy blue suits and blue ties. They're a titan of industry, to be sure. Over the years here on
Advent of Computing, I've spent a lot of time talking
about IBM's earliest days and their later ventures into the home market. I've shied away from their
middle years, from the era of their true power. Well, I think it's time to rectify that omission, Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 124,
The Full 360. Before we get into things, I think it's about time I give a quick state
of the podcast update. This is the first big episode of 2024, at least the first episode that I'm producing in 2024,
so it's an appropriate time.
You may not have noticed it, I've been kind of playing it close to the chest,
but the last half of 2023 was really rough for me on a personal level.
I really don't like talking about myself on the show, so I won't
go into a lot of detail here. There was a loss in the family that really impacted me deeply,
so the last six months or so have been a bit of a slog for me. Work on pretty much all of my side
projects has ground to a halt, and I haven't been very easy to get a hold of. With the year ending,
I've been starting to feel a bit more of my old vigor and verve return. And I swear,
I do read your emails. I will start responding to them again, eventually. Advent of Computing
itself has given me a real sense of normalcy through this rough part of my life. A big part of that is the
fact that working on the show gives me a pretty safe way to challenge myself. I think in order
to grow, us humans need some regular challenges. For me, Advent of Computing provides a nice place
to do that. And my next big challenge is to actually get back to my side projects and reply
to all my emails.
Over the last year, I've had the chance to cover some topics that I think have really
pushed my comfort zone. Right now, I'm specifically thinking about the prologue series and the
arguments against programming episode that I ran recently. Both of those topics really
forced me to think long and hard about, you know, what actually is a program
and the theory around programming and how I personally relate to software. I feel a similar
way about that viral Dark Ages episode from Spook Month last year. That also fills a similar role
in my heart. That episode required some really, really hard-fought research. I usually talk about my work whenever
I hang out with my real-life friends, and by the time I published that episode, I think they were
all really sick of my weird, somewhat incoherent ramblings about newspaper clippings and forum
posts. With the new year here, I've decided it's time to take up another one of these challenges
for the podcast.
I keep this big list of topics that I want to cover. There are a few items that have been stuck on that list since I started the show years ago. I tend to think of those as my dragons,
big beasts that I must slay eventually. One of those is the IBM System 360. It's one of IBM's most influential computers.
If it wasn't for the IBM PC, then I think the 360 would probably be the most recognizable IBM
machine. It's an important story, but a huge and very technical one that I've been shying away from.
The short story is that I'm finally tackling the 360.
That, on its own, is exciting enough. The long story is that I've been drawn into this via a
pretty weird route. A few months ago, I worked up an episode on the RCA CosMac microprocessor.
It was, honestly, really fun to research. And if I had a little more energy,
which I might soon, I still kind of want to go back and learn to program a CosMac. It's
just a really cool computer. During my research, I ran into something that I didn't think was real,
and I still don't really think is 100% real. That's the RCA Spectra 70 series of computers. These are mainframes
that are compatible with the IBM System 360. Right off the bat, that probably sounds boring.
But for me, that really fires off alarm bells. It doesn't jibe at all with my notion of Big Blue. So let me explain
some things here. The IBM PC is considered a very exceptional IBM machine. The biggest reason
is that the PC is a, more or less, open platform. You can debate on if IBM meant it to be an open
platform, but it ended up being one. It's often said that the only proprietary
parts of the PC are the IBM logo and a few kilobytes of code. That code, the BIOS, was
pretty quickly reverse-engineered by smaller companies. Those IBM logos, well, you don't
actually need them for a PC to function. By 1983, there were 100% compatible clones of the PC on the market.
In the coming years, a truly staggering number of clones entered the market.
That led to the current PC monoculture that we live in.
IBM tried to stop clones using the real proprietary parts of the PC.
If a company wasn't careful, they would get sued
for using IBM code in their machine. Compaq was the first company to successfully clone a PC,
and a large part of their work was ensuring that they couldn't be sued by IBM. Compaq went to all
these extremes to make sure that their clone of the BIOS contained no IBM code. They even made a paper
trail to prove it in court. A big part of the story is always that the PC was a unique product
for IBM, and these were unique circumstances. The clone market was somehow special, something that
had never happened before, especially not to IBM. But what if I told you
that's not true? Not at all. The RCA Spectra 70 is the first evidence of this, and there's actually
more. During this same time frame, the middle 60s to the late 70s, there were at least two other manufacturers selling IBM clones. One was
English Electric Systems, which was actually making clones of the RCA Spectra 70. RCA themselves would
even buy and rebadge some of those clones as genuine Spectra 70 machines. Then we have Amdahl
Corporation. This is a particularly interesting case because
it was founded by Gene Amdahl, one of the main designers of the IBM System 360. He literally
quit IBM to make clones of their top-of-the-line computer. This type of market shouldn't be possible for one big reason.
IBM mainframes were totally proprietary.
IBM was very well known for creating proprietary hardware and guarding their systems with an iron fist.
I mean, there are accounts of IBM sending cease and desist letters to people that are trying to perform maintenance on their own tabulator machines. Making a clone of one of IBM's mainframes is kind of beyond the pale.
So how can these clones even exist? How did IBM let this stand? I can't even find evidence of
lawsuits filed against clone manufacturers. So what's going on? This is a
really, really weird one. The answer, maybe, is antitrust. But that opens a whole other can of
worms. During this same period, IBM was embroiled in a series of antitrust lawsuits. The biggest of these was brought by the
United States Department of Justice. IBM had a long history of somewhat dastardly deeds and
concerning dealings, which eventually caught up with them. At least, kinda. The whole trial period
is massive and wild. The federal suit was eventually dropped, but along the way,
IBM made major changes to their business. One was unbundling hardware and software,
a choice that was pretty clearly made to get ahead of the lawsuit.
So, we're facing a huge mess here. We're facing Advent of Computing's very first trilogy, and a pretty daunting one at
that. So here's the plan. In this episode, we'll be looking at the IBM System 360 on its own.
We'll examine how it was developed, what made it so revolutionary,
why it was developed, and microcode. Ah, microcode, another one of my dragons.
The next episode will hit the paint hard on the antitrust trial. That's gonna be a lot. It's one of the largest trials in American history. Instead of doing a blow for blow, I'm going to be looking at the allegations, how IBM worked around them, and if the DOJ actually had a case to begin with.
I think that will give us some context that we'll need for episode 3, where we'll be looking at the IBM mainframe clones that sent me down this rabbit hole in the first place.
It's going to be big, it's going to be messy, and it's going
to be just kind of bizarre. My overall goal here is to just understand how these clones ever existed,
how they survived, and see how that understanding changes our view on the IBM PC's story.
So, let's begin the grand journey.
I've already used a big $5 word this episode, and really, I'd be leaving money on the table if I didn't reuse it.
The modern computing hardware landscape is what's called a monoculture.
Almost all computers in use, with few exceptions, are built off the same basic design.
They all use compatible processors.
They all have RAM and peripherals wired up in the same way.
They all run the same software.
The binding force here is what's known as the x86 architecture.
It's itself derived from some of the Intel chips stowed away inside the original IBM PC. There are pros and cons that
come with this sort of digital monoculture. Nearly all computers can run the same software.
For instance, I can write and test code on my laptop, then that same code can be used on some
far-off server somewhere else. Or that code could just as easily run the same way on someone else's
computer. There are ways and means to get code to run on different types of machines,
but it's just more simple if everyone is using the same type of computer.
The monoculture, in this case, provides a level of compatibility. It also allows for upgrade paths.
If I want more RAM for my computer,
well, I just go down to the shop and pick up a few extra sticks of RAM. Because the PC architecture
is so widespread, it's easy to get parts, and there's a drive for manufacturers to make and
sell those parts. That means that I can get a PC in any size or shape I want, really, as long as it's still a PC.
But there's a flip side to all of this.
A rigid, single-platform world stifles creativity.
When the whole world is full of Intel-based PCs, it's really hard to break into the market.
That's exactly what we've seen over the last few decades.
As PC clones took root and the monoculture formed, incompatible systems were pushed out of the market.
The world became PC-centric.
Really, to the point that Apple computers turned into PCs for well over a decade.
The monoculture can be almost impossible to fight.
This is something that gets worse the longer the monoculture exists.
A contributing factor here is legacy support.
A PC built in 2024 will still run the same software that was written for an original IBM PC in 1981.
It will be faster, there may be some quirks, but that old code still runs.
Over time, people, programmers especially, get more and more invested in the platform.
What we get is this never-ending spiral of the monoculture growing and strengthening itself.
That spiral itself also forms a story for us. If we follow this exercise for the PC,
if we trace the spiral back, we run into a very well-told story of lucky breaks and weird,
inexplicable timing. The PC arrived into a very diverse home computing market,
and it collapsed it into the current monoculture. That collapse occurred
almost by accident. Now, the PC wasn't the first computer to do this. We can trace out that same
spiral path for the IBM System 360. In this case, we wind back to a diverse mainframe market, at
least, a market that was diverse even among IBM's own machines.
It's a market that IBM intentionally tried to turn into a monoculture. There are some lucky
breaks and there's still some good timing, but unlike the PC, the 360 was designed to dominate
the market. It was designed to take a market full of disparate, incompatible computers and replace it with one system. So what exactly was the 360 built to
dominate? Well, this is where things start to get a little bit funny. In the mainframe era,
IBM was just the biggest game in town. Big Blue quickly established market dominance
because they were one of the first players in the market, and they survived long enough that
they could just keep going generation after generation. IBM had started way before digital
computers were even a glimmer in Alan Turing's eye. The company initially dealt with punch cards, which put
them squarely in the realm of very boring but necessary office technology. When computers came
along, IBM was able to transition into that new market. IBM would first hit the market in 1952
with the IBM 701, a machine designed specifically for scientific purposes. It was sold to physicists
and engineers, chemists, and maybe even some forward-thinking physicians. It was meant to
sit in a lab or a university and crunch numbers for research. The 701 was successful enough that
it led to follow-ups and expansions. Eventually, this basic design was shifted and
evolved into the 7000 series, these big beefy mainframes that would land humans on the moon
one day. But crucially, that evolution didn't really form a lineage. The naming convention here
kind of hides the fact that these were incompatible machines. We have 700 series
computers and later 7000 series computers. In most cases, they were all different computers.
Some were roughly compatible, but in general, they weren't. A 701, for instance, was totally incompatible with a 709. They just all share
that 7 moniker because they're machines built for science. While all these 7 machines were running,
IBM was still selling punch card equipment. In general, punch card machines filled a slightly
separate niche than computers, at least at first. Tabulators and their relatives were being used for
things like payroll, accounting, censuses, and other data-heavy tasks. In most cases,
a tabulator wasn't even that useful for scientific research. There were certain use cases, but a
tabulator just isn't the same type of general-purpose scientific tool that a computer is.
IBM had been working on business-oriented computers for a number of years,
but their first big hit in the market was the 1401.
This computer was released in 1959 and spawned another series of sometimes similar machines.
And while this made IBM money, the 1401 in particular sold in wild
quantities, it led to an interesting issue. By the early 1960s, IBM had to support software
on a pile of incompatible computers. They also had to support that pile of incompatible computers
out in the wild. The problem was coming to a head in the very early 60s,
or maybe late 50s, depending on where you like to quantify things.
The 7000 series had issues with its memory.
Fred Brooks was an IBMer who ended up managing the 360 project.
He had this to say about the 7000 series
in an oral history interview with the
Computer History Museum. And as a note, the "he" that Brooks mentions in this quote is a reference
to Bob Evans, another IBMer. To quote, we were fundamentally address size limited. We couldn't
attach more memory to the 7090. We couldn't attach more memory to the 7080, and the applications were
hungry for more memory. So the problem was that he proposed to wait for a new semi-integrated
circuit technology that was going to be three years down the road, and the problem is how do
you hold the market in the meantime? End quote. IBM's scientific line of computers just couldn't handle enough memory.
The limit they were running into was deep in the hardware.
You couldn't just throw more RAM at the problem, since the computer wouldn't know what to do with it.
The 7090, despite being one of IBM's top-of-the-line machines, could only handle 32 kilowords of memory,
which, conversions aside, isn't very much.
And while new technology could address the issue, they'd still have to make a big shift.
They'd, as Brooks said, have to hold the market in the meantime, and even after that,
IBM would need to make a radical shift in how they designed architectures in order to overcome these memory limitations.
James Cortada, writing for IEEE, explains a similar issue on the 1400s side. The 1401 became
a huge success, and a true workhorse. That success turned out to be somewhat detrimental.
To quote, with the 1401 so dominating the computer business, any problems with it were
serious. One of them was that the 1401 was too small. Users found these machines so useful
that they kept piling more work on them, reaching the system's capacity. They then had three options.
Move to a bigger IBM system, such as an IBM 7000, install a competitor's system,
or acquire more 1401s. End quote. Switching from a 1401 to some 7000 series machine
would have been a really hard pill to swallow. Remember, these are totally incompatible computers.
Your team would have to learn all new software, a new way to program,
and would have to get new peripherals and tools. The only thing you'd be keeping would be maybe
the same IBM representative. And dudes in blue suits are cool and all, but at that point,
you might save money by just looking at other competition. You might even end up with a better
computer. In this period,
the overall market was sometimes called IBM and the Seven Dwarves. It was a bit of an industry
joke, but roughly accurate. IBM was the only truly huge manufacturer. They were called Big Blue for a
reason. The rest of the market was composed of RCA, General Electric, Burroughs, Univac, NCR, CDC, and Honeywell.
Those manufacturers each held some market share, but nothing approaching IBM.
The other seven made up maybe 30% of the market.
That said, they were still competitors.
CDC, for instance, offered computers with more memory than IBM could bring to the table.
The same was true for a few others of the dwarves.
IBM didn't corner the market on technology, they just had the largest market share.
If IBM didn't keep up technologically, then their market share could slip.
There was a lurking hazard here.
Which brings us up to 1960 proper. IBM as an
organization realized it was facing a problem. The current lineup of incompatible, underpowered machines was dangerous. It left Big Blue in a very precarious position.
It was also eating up corporate resources. IBM had to maintain software for at least six incompatible architectures spread over scientific and business-oriented machines.
That is a massive duplication of effort.
It was time for a new computer. It was time for a new approach.
But that was also a very dangerous proposition.
Tracy Kidder explains this type of trap very beautifully in the book The Soul of a New Machine, which isn't super related to IBM, but
it goes over a similar issue, so I'm just gonna steal and warp his words.
It all comes down to how you deal with your existing clients. Let's say you're in IBM's
position. Customers have issues with your current
range of computers. You might think the solution is to just unveil a new machine, a bigger and
better computer, one built for the 1960s. This is actually a very bad idea. It's a very dangerous
idea because it gives your existing customers a very dangerous choice. At least,
dangerous for you. Under the current status quo, this new machine wouldn't be compatible with
anything before. No one was really doing that to begin with. All baggage would be dropped,
architectures would be reworked, and a shining new machine would roll out of the factory.
That means that to transition to this new computer,
a customer would have to leave everything they knew behind. They would need new software,
new hardware, new tools, new everything. So why not play the field? Why not go look at one of
the seven dwarves? Maybe CDC has a better deal, or, you know, I heard Univac has a faster machine
that's about to come out.
The way out of this trap is to somehow keep the pressure on your customers,
to provide an incentive strong enough to keep them in-house, or make the upgrades as painless as possible. This can be done in a number of ways, but the solution will be dependent on the
specific trap that you've laid for yourself. What I can tell you is that IBM's first shot at escaping this trap was, well, let's just
say luckily it never came to market.
We get to this weird chunk of IBM's history that becomes a little bit incomprehensible.
The main player to know is the aforementioned Fred Brooks.
He had been one of the primary architects of the
7030, aka Stretch. That's one of IBM's big, powerful number crunchers. He would end up
managing the System 360 project. But between those two big computers is the era of the 8000.
This was IBM's first shot at getting over the issues with their existing product line.
It's notable for being a very safe approach, a very conservative approach.
Brooks' stretch was probably one of IBM's most powerful and most advanced machines in this period.
So part of its technology would be leveraged to make a new range of computers.
In early 1960, Brooks moved up to IBM's Poughkeepsie office, where engineers were already working on a machine they called the 70AB.
Now, that's all the context you really need, because we aren't going to get deep into this.
Everything turned into a somewhat incomprehensible maze of machines. The book 14K Days, A History of the Poughkeepsie
Laboratory has this very illustrative passage that I think shows off the maze quality of this era.
To quote, Poughkeepsie engineers reviewed the early work they had done on the 70AB and began
to consider expanding the 70AB into a family of computers to be called the
8000 series. The proposed 8106, based on the earlier 70AB design, would replace the 7070.
Also planned was a scientific feature called the 8108, a smaller commercial machine to be called
the 8103, and to fill the gap in the scientific area a machine called the 8104, end quote.
The plan, in other words, was to just make a bunch of bigger computers.
These would offer direct upgrade paths for IBM customers.
A 1401 shop could move up to an 8103, a 7070 lab could upgrade to an 8106, and so on.
The incompatibility, though, would remain.
These 8000 series machines would all just kind of be their own things.
The root of the problem here wouldn't be addressed.
IBM would simply be treating the symptom.
They'd be making a new generation of machines with the same core issues.
This was, like I was saying,
a very conservative approach to the legacy trap. It may well have worked, but only for a time.
It wouldn't solve any of the long-term issues IBM was facing. Put another way, the plan for
the 8000 series was very reactive. It wasn't proactive. As Brooks tells it, the 8000 series got pretty far into
planning. In January of 1961, those plans were presented to IBM's executives in a day-long
mega-meeting. To use a bit of an extended quote, because Brooks puts the story better than I ever
could, quote, and the whole program was very well received, except for one fellow sitting in the back who just got glummer-looking as the day went on.
And that was Vin Learson.
And that's not who you want to get glummer as the day went on, because he was executive vice president of the company.
Well, that night, he fired my boss, Max Ferrer.
He shipped him out to Colorado, to Boulder, out to Siberia to work on
tapes. He brought in Bob Evans from Endicott, from the other division, because this was our division's
plan. He told Bob to look into things. If it's right, make it happen. If it's wrong, turn it around.
Bob spent three weeks looking into it, took me out to dinner at a fish place in Poughkeepsie,
and told me he had decided it was wrong and was going to turn it around.
And it was his plan. His plan to do. We were losing market. We were obsolete.
IBM needed a shock to the system. That shock happened to take place over a nice plate of fresh fish. The 8000 series would have been a very old school IBM approach. Very 1950s IBM.
But this was a new decade.
Competition was getting better, and Big Blue needed to do something drastic.
The 8000 series just wasn't it.
It's often said that going in a new direction, taking a fresh and radically different approach,
can lead to great things.
But you can't just decide to drop a project and pivot at random.
It turns out that a dartboard isn't a good way to pick a bold new direction.
To properly shake things up, you need to tread carefully during these transitions.
Over the course of 1961, a series of committees, workgroups, and boards steered the IBM ship.
The 8000 series project was dropped, and new avenues were investigated.
Crucially, these committees did their homework.
Market surveys were made, existing IBM customers were interviewed. A more holistic image was being drawn.
IBM was trying to come to grips with the full picture of what would be needed in a new line
of products.
The interim name for this new set of machines was NPL, or New Product Line.
There would be innumerable people involved in the design and development of NPL, but
I'm going to collapse things down to two figures.
One is Fred Brooks, who we've already met.
He was the overall manager of the project, which put him in charge of the day-to-day
operations and planning.
The second is Gene Amdahl, the primary architect of the new product line.
He was another old-timer at IBM.
Amdahl had helped to design some of the 700 series,
as well as the 7030 Stretch. He had worked with Brooks before, so would have been at home with
the NPL New Order. Keep his name in mind for now, we'll be getting back to Amdahl,
but I need to take us on a quick diversion. Just know that much of this section is coming
from interviews
with Amdahl himself, as well as the sources I've mentioned earlier. We need to actually address
NPL, or as it would be sold, the System 360. We aren't dealing with any one computer, but
rather a family of machines. Now, I want to linger on this for a minute. Usually, the condensed history of the 360
just says, oh, it was the first family of compatible mainframes, or something along those
lines. That's really easy for us to understand today because we're used to the drab gray of
our digital monoculture. Of course, it makes perfect sense. We see compatible machines
every day. You can buy a Macintosh in any size of Macintosh. I want to linger here exactly because
it does seem like an obvious way to sell computers. I want to make sure we're all on the same page.
So forgive me if all of this is too obvious. The first key aspect is that all System 360 computers would be the same architecture.
This means that they all spoke the same machine language, and they had the same internal interfaces
and resources.
To a programmer, each 360 looked, felt, and acted identically.
This is what ensured compatibility between computers in
the family. What made each member of the family unique only came down to capability. Here, I'm
talking about raw processing speed and memory capacity. Also, I think the I/O speeds were
slightly different between certain models. The bottom line is you could get a 360 in whatever size best suited your needs. While the
family would expand over time, the plan was always to bring a whole range to the market at the same
time. This was, quite frankly, unprecedented. On day one, you could pick and choose a 360 that was just the right size for your needs. This just hadn't happened before.
The closest we get are small lineages of computers. The Univac 1103, for example,
was followed up by the 1103A, an extended and expanded machine, but that's about as close as
we get prior to the 360. Never before had a company offered a series of compatible machines in
different sizes meant as actual options to choose from. The huge advantage was that a customer could
find a 360 that was perfect for them. Then, if the requirements changed, which they probably would,
they could swap out the 360 for a better fit. You could keep all your
existing software during the transition. That was a huge deal because, bottom line, the software is
expensive. At this point, we're not up to the full-on free market software thing. That doesn't
come into play for a few more years. Rather, we're talking about custom in-house software.
That kind of software is expensive both in monetary costs and just the cost of human effort.
It was one of the huge barriers that would keep customers from switching to a new computer.
The 360 family was also designed to use all the same peripherals.
Now, this is another one of those things that can sound underwhelming at first.
Just imagine the power of a family of compatible machines that can all use the same laser printer.
That doesn't really sound like a cool factor.
But consider this for a moment.
In the mainframe world, storage is considered a peripheral.
That includes everything from punch cards to paper tape and even hard drives.
Terminals are also peripherals, so too are modems. Generic hardware controllers, the
kind of thing that bridge between a computer and industrial machinery, are also peripherals.
So this interface compatibility, the peripheral compatibility, was also a huge deal.
Overall, each model of 360 acted the same. It could use the same software and use the same peripherals. So the only difference between a system was its performance.
That's why the 360s were considered a family. They all worked and acted the same. They were all,
in that sense, related machines. And like I keep hammering home,
this would help keep customers inside the IBM ecosystem. There was a 360 for every job and
every need, and it was painless to upgrade within the line. All that being said, there was something
that made certain 360s special. Certain models of 360s were backwards compatible with older IBM mainframes. This
included the 1401, most of the 700s, and the 7000s. This is, pretty obviously, a slam-dunk
feature, a step towards that planned monoculture. There were 360s that could run existing 1401 code just faster.
There were also 360s that could run stretch software, but with more storage.
This provided an immediate upgrade path for existing IBM customers.
If you were sick of your 1401, someone in a blue suit could come sell you a 360 that could replace it perfectly.
There is a weird bit of history here. It's not entirely clear if
backwards compatibility was initially part of the 360 plan. From what I've read, it seems that it
came in late into the process. The earliest planning document for the 360 is this thing
called the SPREAD report. It was created by an internal IBM committee called
SPREAD. It stands for something, but that's unimportant. This report goes over the current
market, issues with IBM's product lines, and concludes with what the company should do about it. The conclusion is the 360 family. The report explicitly states that IBM needs to make a family of compatible machines.
It doesn't, however, say anything about backwards compatibility.
In fact, it kind of argues against that very feature.
In a section on possible problems with the new product line, the spread report lists this one.
Quote,
Pressure IBM to perpetuate compatibility indefinitely. End quote. Now, this is in relation to issues with a family of compatible machines.
So this isn't explicitly against backwards compatibility,
but I think this
speaks to a certain sentiment. The spread report isn't interested in compatibility over time.
So then, why can some 360s run older code? Well, that might be a long story.
There's this whole history of conflict within IBM in this period. I've been avoiding
covering that because, well, it's very messy. I only understand the vaguest shape of the fights.
This is one of those places where I do need to get into just the edge of the mess to tell this
history. In this period, IBM was divided into a number of different labs and offices.
The two that matter here are Poughkeepsie and Endicott. Most of the 360 story takes place in
Poughkeepsie, where Fred Brooks worked. That's where the 700 and 7000 mainframes were developed,
where the plans for the 8000 fizzled, and where the new product line was first designed.
Endicott, on the other hand, was home of the 1400 series. During the early days of the 60s,
while NPL was just coming together, there was much fighting and posturing over the actual new products to be designed. Would the 8000 series be resurrected? Would IBM keep on producing 1400 series machines? Where would the
new product line be produced? It seems that every engineer and manager had an opinion on the matter,
and some were willing to scheme and plot against the larger tides at Big Blue.
IBM's upper management devised an interesting strategy to defuse the situation. If someone hated a project,
they would be put in charge of it. That way, their career would be tied to that project's success.
If they, say, crashed the new product line, well, then there would be repercussions for that person.
This pitted all the bureaucratic forces of Big Blue against rogue elements in the company.
Really, it's a pretty cunning strategy.
During this period, it's common to read about folks from Endicott getting traded up to Poughkeepsie and vice versa.
This was also done to bring in fresh eyes and shake up projects.
Bob Evans, the very one who killed the 8000 project, was pulled from Endicott to Poughkeepsie for just
such a purpose. Another example is John Hanstra. He was the head of the spread committee. He was
also an Endicott man and president of the division that created the 1401. He was attached to that
older system and its success.
So, in keeping with the shake-up policy,
IBM put him in charge of destroying all older product lines.
This is where we enter into the interesting territory.
You see, the 360 was going to take a while to complete.
In the interim, IBM kept up a bold face,
producing a number of new computers to tide over customers and retain market share. We get a few years of what the book 14K Days calls quote-unquote
crash projects. In some cases, these were projects that were already in flight that had their
potential futures cut short. Once the 360 hit the market, these computers would be obsolete.
Gene Amdahl, in an oral history interview, brought up this story.
Quote,
John Hanstra, I understand, was made chairman of that committee in order that he would feel locked into executing the decision of the committee.
So it was really a political maneuver, with the basis for it being essentially technology considerations as well as market considerations. And he didn't like the outcome, but he signed off on it and
he tried to make an end run later when he wanted to do an accelerator on the 1401 successor. I
think it was the 1410 or something like that. It might not have been there yet, but it was. Anyway,
he was going to do an accelerator.
And that turned out to be the basis for putting in emulation features in the 360.
End quote.
An accelerator would basically be an expansion to a computer that improved performance.
It would have been a way to keep that older system relevant.
In this case, it would have been a way for Hanstra to keep his beloved 1400 series alive.
If Amdahl is to be believed and he is corroborated,
then backwards compatibility was added to the 360 family as a way to combat this tendency.
It would fit very well with the political posturing of IBM in this period.
Or, it could have just been that backwards compatibility was good for business.
It's just one of those features that makes really good sense, but this leads to the question,
how did the 360 family handle backwards compatibility? And how did the family handle compatibility in general? Well, that comes down to a fascinating and kind of scary technology called microcode.
This is where we reach the high technical component of the show. Microcode is, perhaps,
one of the more daunting hidden layers of modern technology. The short explanation is that
microcode is a type of programming that exists even closer
to the metal than machine code.
The long story, well, has got to take a bit to tell.
Fundamentally, a processor is an electrical circuit.
It just so happens to be a very complex circuit, but a circuit nonetheless.
When you issue an instruction, the processor has to work out how
to flow around electrons to service that request. For simple instructions, this is, you know,
pretty simple. If you ask to get the inverse of a bit, that doesn't take a lot of work.
But as you build out more complex instructions and add more and more instructions,
things get difficult. Complexity quickly explodes.
Digital circuits are cool, but they're very unwieldy to work with. An inversion, for instance,
is easy. That's a single logic gate, just a few wires and a few transistors. Addition is a little
more complex, but still not that bad. It just takes some more transistors and a couple more wires, but you can do it.
Multiplication, well, that's a bit of a challenge, but it's still not impossible.
The explosion happens when you try to chain this all together.
Connecting up your little math circuits and adding logic to decide which circuits to use
when, well, that's a lot more work. It gets worse if you need
to change one of those circuits. What happens if you need to expand your multiplication circuit to
handle 32-bit numbers instead of 16? Not only does that change that specific circuit, it may also
change how it's connected up to the rest of your physical logic. That's not to mention the very
physical space aspect. Expanding your multiplication circuit will take up more space, which could force
you to redesign other circuits on your chip. The classic solution to any hardware issue is
my favorite domain. It's software. This is something we ran into on the show a number of
times before. It gets annoying
to control everything with custom circuits. It's easier to drop the custom logic in favor of
a little tiny computer and a few lines of code. Microcode is very much in line with this tradition.
The basic idea is that you work up a very simplified processor, one that understands
basic instructions. Those instructions are microcode. You can then program that machine
to understand more complex instructions. Those complex instructions are machine code.
Put another way, microcode allows a manufacturer to define a computer using software.
This is an interpretive process.
The microprogram, which is a real word, I swear, tells the hardware how to read in instructions,
and how to execute those instructions.
Let me work up an example of how this would function.
Let's go with an ADD instruction, since that seems to be the canonical example whenever you read about microcode.
To be more specific, let's say that a programmer has told a computer it wants to add the value of a register to a location in memory.
That's a pretty common thing to do, and it's also a fairly complex operation.
The first step of the microprogram is to get data into the right places.
The math circuit inside a computer, the arithmetic logic unit, or ALU, has to be loaded up in order to perform any sort of operation.
The micro program starts by connecting the register to one of the ALU's inputs. Then it
fetches the second value from memory, pulls it onto a bus, and then stores that in the ALU's other input
register. Next, the micro program instructs the ALU to perform an addition. Once that completes,
the output of the ALU is then transferred to the proper location in memory, probably going
through a few operations to pull it from a register to a bus and then assert the bus into RAM.
All told, we might be looking at six, maybe even a dozen or more micro-instructions,
especially if we're dealing with carry flags and whatnot.
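To make that walkthrough a little more concrete, here's a minimal sketch in Python of how one ADD instruction might expand into a handful of micro-steps. To be clear, this is not IBM's actual microcode; the class, the field names, and the exact step ordering are all invented purely for illustration.

```python
# A toy model of a microcoded ADD: "add register REG to the word at memory
# address ADDR." Every name here is made up; the point is only to show one
# machine instruction fanning out into several smaller micro-operations.

class TinyCPU:
    def __init__(self):
        self.registers = [0] * 16           # architectural registers
        self.memory = [0] * 1024            # word-addressed main memory
        self.alu_in_a = self.alu_in_b = 0   # hidden ALU input latches
        self.alu_out = 0
        self.bus = 0
        self.carry = False

def micro_add_reg_to_mem(cpu, reg, addr):
    """One ADD instruction expands into roughly seven micro-steps."""
    cpu.alu_in_a = cpu.registers[reg]       # 1. gate the register onto ALU input A
    cpu.bus = cpu.memory[addr]              # 2. fetch the second operand onto the bus
    cpu.alu_in_b = cpu.bus                  # 3. latch the bus into ALU input B
    total = cpu.alu_in_a + cpu.alu_in_b
    cpu.alu_out = total & 0xFFFFFFFF        # 4. the ALU performs a 32-bit add
    cpu.carry = total > 0xFFFFFFFF          # 5. record the carry out
    cpu.bus = cpu.alu_out                   # 6. put the result back on the bus
    cpu.memory[addr] = cpu.bus              # 7. assert the bus into RAM

cpu = TinyCPU()
cpu.registers[1] = 40
cpu.memory[100] = 2
micro_add_reg_to_mem(cpu, 1, 100)
print(cpu.memory[100])  # 42
```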
On the surface, that can sound kind of complicated.
But it turns out that microcode actually greatly simplifies computer design.
The example I just went through is only one type of microcode. In practice, there are more arcane and spooky tricks that make some
microcode even more low-level and more performant. It gets way deeper than what I've discussed here.
But the overall effect is that the actual computer, the physical hardware, can be simplified,
radically so. The transistors and wires and resistors can be streamlined. All the complexities
are defined in your microprogram. In that bargain, you get all the benefits of software.
I know this is something we've addressed a few times on the show, but I'm going to list that
out again. Software is just more flexible than hardware. It's easier to prototype, it's easier to develop,
and it's also easier to debug. All this makes working with code faster than working with raw
silicon. Another benefit is that, depending on your implementation, you can actually replace
microcode in the field. Let's say you've shipped out your computer and
maybe a few days after release, some poor user finds a bug deep inside that new machine.
The add instruction has some issue if the carry flag is set or something of that nature. Normally,
that would hose you. That would mean the entire system was ruined, and new chips would have to be lithographed.
That would take a lot of time.
It would also take a lot of money.
But if you're using microcode, then you only need to switch out the microprogram.
That could be something as simple as going to the user's installation, ripping out a ROM chip, and putting a new one in.
In that case, what seems like a hardware bug
could be fixed very quickly. Those are just the normal benefits of microcode. The name of the
game is, as with all slick software, flexibility. Microcode really shines when it's used to pull
off cool tricks. The System 360 is, perhaps, the coolest trick microcode ever pulled.
This takes us back to IBM's offices. The 360 line made aggressive use of microcode.
This was done for two reasons. The first was to ensure compatibility, to tie the family together.
Microcode was used to define the overall 360 architecture.
This makes the family of computers rather interesting little machines. The underlying
architecture of each model in the family is actually different. The physical computers
aren't the same. When the 360 is initially announced, it's described as a line of seven machines, three of which actually ship.
Those are the model 30, 40, and 50.
And each of those computers is slightly different.
Now, if you just look at a spec sheet, the differences are purely performance.
The lowly 30 is slow and has a paltry 64k memory space, while a big 50, that's a lot tougher. It can munch through
its 512 kilobytes of memory at a pretty good clip. But under the hood, and I mean way under the hood,
the 30 and the 50 are fundamentally different machines. The processor on these machines is
totally hidden from the programmer. Ken Shirriff has a wonderful breakdown of the Model 50's microarchitecture over at righto.com, and
I'm pulling heavily from that article.
These microcoded machines hide all kinds of things in their deeper levels.
I'm not just talking about math circuits, I'm talking about whole secret registers
and pathways for flowing data around.
It's on this level that we see the real
difference in the 360 family. The 360's architecture presents 32-bit registers, which
can be used for 32-bit mathematical operations. Those registers, however, aren't always physical.
They are defined by the microcode. In the Model 50, registers are actually stored in
a special-purpose region of memory within the processor's circuits. Its math circuits are 32
bits wide, so it can perform on two full registers at the same time. It's a true 32-bit machine.
The Model 30, on the other hand, is an 8-bit machine. Its arithmetic circuits operate
on 8-bit numbers. That means that to add 32-bit numbers, the Model 30 has to carry out four
operations. The microcode handles working up all those operations, but this means that a Model 30
will always perform worse than a Model 50. However, the Model 30 is built from a
much simpler computer. That makes it cheaper. That's just one trick that microcode pulls here.
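To show the difference in spirit, here's a rough Python sketch of the Model 30 style of trick: a 32-bit addition built out of nothing but 8-bit additions, with a carry passed between them. The function and its structure are mine, not anything lifted from the real Model 30 microcode.

```python
# Four trips through an 8-bit ALU to perform one 32-bit add, low byte first.
# Purely illustrative; the real Model 30 data paths were far more involved.

def add32_with_8bit_alu(a, b):
    result = 0
    carry = 0
    for byte in range(4):                       # four 8-bit passes
        a_byte = (a >> (8 * byte)) & 0xFF
        b_byte = (b >> (8 * byte)) & 0xFF
        partial = a_byte + b_byte + carry       # one 8-bit ALU operation
        carry = partial >> 8                    # carry into the next byte
        result |= (partial & 0xFF) << (8 * byte)
    return result & 0xFFFFFFFF

print(hex(add32_with_8bit_alu(0x0000FFFF, 0x00000001)))  # 0x10000
```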
The second trick, and the most flashy one, led to a new term. Emulation. And I'm not even joking
here. Backwards compatibility on the 360 was implemented using microcode and supporting software.
It worked so well and was such a new technique at the time
that engineers at IBM coined the term emulation just to describe it.
The setup here is actually pretty simple.
The nerds had already developed one microprogram that implemented the
360's architecture, but there was no reason to stop there. You can write as much microcode as you want,
there's nothing stopping you. So why not put together a few microprograms and set up the
computer so it can switch between those programs? That sounds like a wild technique, right? It would mean a chameleon of a machine,
a computer that can change architecture on a whim. Well, apparently, it was pretty easy to do.
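Here's a toy Python sketch of that chameleon idea: one fetch-decode-execute loop that dispatches through whichever "microprogram" table happens to be loaded. The opcodes and the two tiny instruction sets are entirely invented, and the real 360 microcode engines were far lower-level than this, but it shows how the same underlying machinery can present two different architectures.

```python
# A single hardware-ish loop; swapping the microprogram table swaps the
# instruction set the "machine" understands. All opcodes are made up.

def run(program, microprogram, memory):
    pc = 0
    acc = 0
    while pc < len(program):
        opcode, operand = program[pc]
        acc, pc = microprogram[opcode](acc, operand, memory, pc)
    return acc

# "Native" microprogram for an invented architecture A...
arch_a = {
    "LOAD": lambda acc, op, mem, pc: (mem[op], pc + 1),
    "ADD":  lambda acc, op, mem, pc: (acc + mem[op], pc + 1),
    "HALT": lambda acc, op, mem, pc: (acc, 10**9),   # jump past the end to stop
}
# ...and a second microprogram that presents a different, "legacy" opcode set
# on the very same loop -- the emulation trick in miniature.
arch_b = {"L": arch_a["LOAD"], "A": arch_a["ADD"], "H": arch_a["HALT"]}

mem = [0, 5, 7]
print(run([("LOAD", 1), ("ADD", 2), ("HALT", 0)], arch_a, mem))  # 12
print(run([("L", 1), ("A", 2), ("H", 0)], arch_b, mem))          # 12, same loop, different "architecture"
```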
Brooks describes part of the process this way, making reference to the
ongoing fight with Hanstra over the future of the 1400 series.
Quote: And so, in January of 64, we had the final shootout with Hanstra, where we got the go-ahead to do the 360, and that was a little technical
tour de force by three bright engineers who, overnight, came up with the 1401 emulator on it,
on the Model 30. End quote. This is from another oral history. After that quip,
there's this one-word response from the interviewer: Overnight? To continue the story, quote,
Bill Hanf was kind of the Model 30's representative on the architectural team,
a real smart lad, so he was resident there, and knew the microcoding backwards and forwards... we had a machine that was four times the 1401 and could switch back and forth between being a 1401 and a 360.
End quote.
This is a feat of speed and programming prowess that wouldn't have been possible using silicon alone.
Even if you were able to design a new circuit overnight, you couldn't have it up and running by morning.
But with microcode, that was possible. By employing microcode so aggressively, IBM was able to,
in short order, create machines that could be 360s or 1401s or 700 or 7000 models.
They had found a way out of the compatibility trap. Now, in practice, the emulation was a little clunky.
At least at first.
Old 1401 users weren't able to just swap out their machine for a 360.
For one, the emulation modes on the 360 machine ran faster than older IBM hardware.
Like Brooks mentioned, the 1401 emulation on the Model 30 ran four times
faster than a real 1401. And while that's usually a good thing, it does come with some problems.
Let's say you have a program that relies on tight timing. Maybe it's a serial communications program.
Your code sends out a pulse, waits three microseconds, then sends out another pulse.
One way to do this was via a waiting loop, or delay loop.
The practice has a few names, I'm sure.
This relies on the fact that a computer operates at a constant speed.
So you can calculate how many operations your machine can run in 3 microseconds.
Work up that number, set a loop that just spins that number of times,
and boom, you now have some simple code that can delay the program for 3 microseconds.
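As a toy illustration of why faster hardware breaks this, here's a short Python sketch. The speeds are completely made up; the point is just that an iteration count calibrated for one machine burns the wrong amount of time when the same code runs four times faster.

```python
# A delay loop calibrated on the "real" machine no longer delays the right
# amount under a faster emulator. All numbers here are invented.

REAL_MACHINE_OPS_PER_US = 1.0   # pretend the old machine runs one loop pass per microsecond
EMULATED_OPS_PER_US = 4.0       # the emulated version runs the same code 4x faster

def delay_iterations(target_us, ops_per_us):
    """How many times the loop must spin to burn the target amount of time."""
    return int(target_us * ops_per_us)

iterations = delay_iterations(3, REAL_MACHINE_OPS_PER_US)   # calibrated for 3 microseconds
actual_us = iterations / EMULATED_OPS_PER_US                # what you actually get under emulation
print(f"{iterations} loop passes now burn only {actual_us:.2f} microseconds")
```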
But if your code relied on this type of timing, then it wouldn't work under emulation.
The increased performance would mess things up for you.
There's also the matter of practicality to
consider. The 360's emulation modes weren't meant to be used long term. Brooks explains that they
were more of a way to wean customers off of older machines. The idea was that a client could get a
new 360 and immediately migrate to it. They would stop using their old IBM hardware
and live on the new mainframe. At first, they might be running software under emulation.
That would keep them stable enough and give them time to rewrite their code for the 360 proper.
Existing customers were just not expected to stick in emulation for very long.
This thinking was actually pretty in line
with the spread report, despite that report saying nothing about emulation. Think back to the issue I
quoted from the report, that compatibility may put pressure on IBM to maintain compatibility
perpetually. It may force IBM to stick with one architecture, or at least provide some kind of support for that architecture for years to come. In practice, however, the problem seems to have become a reality.
Most 360 models had some kind of emulation capabilities. This would eventually go so far
that you could run specific processes in emulation mode. So in some cases, you might have a program
compiled for a 7030 running side by side with native 360 programs. Years later, the successor
to the 360, the System 370, would also support emulation. Some 370s can even run 1401 programs, so perhaps the spread committee's fears were realized.
IBM had worked in secret for, depending on how you count it, maybe five years. That led up to
the announcement of the 360 in April of 1964. To call it a huge event is an understatement. IBM ended up chartering
a train just to bring reporters to the main press conference in Poughkeepsie. It was a day-long
event, and that's not to mention events and unveilings happening outside IBM's walls.
The first units would ship a year later, in 1965. The System 360 became a massive success.
The numbers are hard to quantify here, so I'm pulling from Cortada's IEEE article
yet again.
By 1966, over 7,000 machines had shipped, and IBM had plenty of backorders.
For perspective, we can look at the 1401.
That computer was manufactured between 59 and 71.
In that time, there were an estimated 10,000 machines in the wild. That's over the course
of 12 years. Put another way, the 360 was really moving units. Cortada also points out that IBM was
effectively expanding the entire computer market.
To quote directly,
[the 360] increased overall demand for computing so massively that it raised all boats. The industry's annual growth in the second half of the 60s was in double digits year over year. End quote.
Now, the numbers here are kind of weird. We're talking about the value of machines in the wild.
So expensive machines end up counting for more. But still, it's plain to see that IBM was having a massive effect on the digital landscape.
Going with the numbers that Cortada gives us, IBM commanded 72% of the market by inventory value.
That's huge.
It's hard for me to even come up with a comparison for that.
IBM was in a dominant position before the
360, and the 360 allowed them to dominate the market for years to come. That's not to say that
the glory years of the 360 were all smooth sailing. Next time, we'll be looking at the
impacts of this market domination. We'll be continuing the epic by examining US vs. IBM, a case that remains one of
the largest in US history. And I mean largest by almost any measure. What did IBM do to inspire
the wrath of the Department of Justice? How does it tie into the story of the 360 and its clones?
All will be revealed next time. Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with the next segment of the first trilogy. And hey, if you
like the show, there are a few ways you can support it. If you know someone else who's
interested in the history of computing, then please take a minute to share the show with them.
You can also rate and review the show on Apple Podcasts. If you want to be a super fan, then you can support the show through Advent of Computing
merch or signing up as a patron on Patreon. Patrons get early access to episodes, polls
for the direction of the show, and bonus content. You can find links to everything on my website,
adventofcomputing.com. If you have any comments or suggestions for a future episode,
go ahead and shoot me a tweet. I'm @AdventOfComp on Twitter.
And as always, have a great rest of your day.