Advent of Computing - Episode 56 - TMS9900, an Alternate Future
Episode Date: May 16, 2021
The TI TMS9900 is a fascinating microprocessor. It was the first 16-bit microprocessor on the market, it has a unique architecture that makes it well suited to multitasking, and it was on IBM's shortlist to power the PC. Today we are looking at this strange chip, and the TI minicomputers that predated its design. Along the way we will construct a theoretical TI-powered PC, and see how home computing could have changed if IBM took a slightly different path. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
I spend a lot of time on this podcast talking about the homogeneity of modern computers.
We all use really similar computer systems.
Mostly, this just means something like an IBM PC, but, you know, mixed and matched with
some newer components where possible.
It also means a level of conformity from one end of the computer market all the way up
to the other.
Cheap laptops, desktop PCs, and high-powered servers all follow the same unified design.
Usually when I get deep into this aspect of computing, it's so I can complain a little
bit.
The overall market in the 21st century is somewhat stale, at least in my opinion. That being said, widespread standards
have really led to a much better landscape for computing. Sure, it's a little boring for computer
nerds like me, but it makes life a whole lot easier. And really, I think a phase of hard and
fast standardization was needed for computers to become so ubiquitous.
Without some standardized platform, it would be really hard to build up the vast library of
software that we benefit from today. And it would be very difficult to make commodity hardware so
cheap. What I'm getting at is that some standard, some amount of homogeneity may have been inevitable in the computer market.
Due to prevailing forces, that template happened to be the IBM PC.
That machine defined what a personal computer would be, right down to the chips for decades.
The dirty little secret about the PC, and one that I really like to hammer home, is
that it was a bit of a rush job.
Even a year before launch, no one really knew what the finished PC would look like.
Intel's 8088 turned out to be the processor of choice, and thus the die was cast.
All computers thereafter were built in its image.
However, there were other options.
One in particular, a strange chip from Texas Instruments, would have set a very different course for home computing.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 56, TMS-9900,
an alternate future. Today, I'm taking us down a counterfactual road to look at something that
may have been. The core of this conversation is the aforementioned TMS-9900 microprocessor,
a chip produced by Texas Instruments that could have powered the IBM PC.
During Project Chess, the codename for the project that led to the PC, this processor was one of the
handful of options under investigation. Ultimately, it lost out to Intel's 8088. But we aren't just
looking at a loser here, it's a lot more complicated than that, as all good stories are.
One of the important things to keep in mind is that the 8088 wasn't chosen because it
was some perfect processor.
In the modern day, we still use CPUs based off the x86 family of chips, but that's not
because it's the best chip design ever made.
Simply put, Intel was in the right place at the right time.
The entire x86 family has some serious flaws and some serious baggage from much older designs.
Now, this isn't to say that the TMS9900 is some beautiful,
shining example of a visionary processor that just happened to be passed up.
It also has its issues.
However, this often-forgotten chip from TI is just plain interesting. At least, I find it fascinating. For one, it's older than the 8088 and 8086. The TMS9900 was completed in 1976.
That's two years prior to Intel's star lineup.
In fact, this makes it the first 16-bit microprocessor on the market.
That's cool on its own, but looking deeper into the silicon,
we get to the big reason I want to cover this chip.
Recently, I was reading through an old electronics hobbyist book on microprocessors.
You know,
the kind of stuff I do on a Friday night. Anyway, the book, which, if I recall, is just called 16-bit microprocessors, gives short descriptions of all the options on the market as of 1980.
You get a blurb about the chip, details on how it works and how it's programmed,
and information about how to get hardware and software for the processor. Skimming through, I eventually made it to the chapter on the TMS-9900.
This was my first time reading about the chip in any kind of depth beyond just basic information.
I knew it existed, that IBM passed it over, and it eventually powered some TI home computers.
When I got to the actual details on the chip's architecture,
well, I just about had to put the book down.
It's a strange design.
TI definitely made some unorthodox choices.
But what blew me away was that the TMS9900 is uniquely qualified for multitasking and timesharing. This low-profile chip from 1976,
well, it was packing features usually relegated to much bigger computers. I just had to learn more.
This is going to be one of those episodes where I'm compiling a lot of shreds of sources.
The simple fact is there isn't a huge paper trail on the TMS-9900. That said, I think this is a worthwhile road to go down. If circumstances
were different, this TI processor may have powered the PC. It may have become the base
for all modern computing. So without further ado, let's get into the details. We're
going to look at how the TMS-9900 came to be and where its weird design came from. Along the way,
we're going to talk about timesharing, multitasking, and how hardware can help.
And we're going to look back at this strange idea of a micro mainframe. By the end, I'm hoping we
can draw a coherent picture of what computing
would look like if IBM had made a slightly different choice. As is custom, we're going to
start well before the beginning of our story. You probably know Texas Instruments as the calculator
company, but that's not entirely a fair summation of their work. TI has done a whole lot since their founding in 1951.
What's important today is just the semiconductor side of their operation.
That's where TI has a strong heritage and still has a bit of an impact on the industry today.
During the early 50s, TI was one of the first companies to license transistor technology from AT&T.
This was still pre-Silicon, so TI was manufacturing germanium-based transistors, but they were
based off AT&T's patents. In other words, TI was in the semiconductor game as soon as
it was possible to be in the semiconductor game. They weren't just a manufacturer,
though. In the coming years,
TI would rack up patents for the first silicon transistor and a slate of other technologies.
1958 saw Jack Kilby, an employee of TI, create the very first integrated circuit.
So yeah, they're a little bit more than just a calculator company. Another early success for TI was the creation of their
5400 and 7400 series logic chips. These are still manufactured today, I've even had the dubious
honor of overvoltaging and breaking a few myself. These were early transistor-transistor logic chips.
Each chip contains a handful of tiny tiny
logic gates etched into silicon. TI wasn't the first company to manufacture
TTL chips, but they were the first to dominate this specific section of the
semiconductor market. TTL chips are deceptively simple devices. They aren't even very
large chips. Each is less than an inch long. You can easily
hold dozens in one hand. But despite appearances, they are really important little chips. Logic
gates are handy for any application that needs, well, a little bit of logic. Sometimes you just
need a way to perform a few AND operations on some inputs. If you do, you don't have to look further than TI.
The 7400 series has been providing cheap and plentiful logic gates since 1966.
The 5400 series' more expensive heat-resistant chips are even older.
So since the middle of the 1960s, TI has been the best place to get generic silicon logic.
It's cool to throw some logic at dumb circuits, but TTL chips have a much better, more ambitious application.
Logic gates are the building blocks of modern computers.
Binary logic has been the core of computing ever since John Atanasoff cracked the code back in 1937.
By chaining together logic operations, you can make a circuit that can perform arithmetic
or do simple decisions. Add enough logic gates, and pretty soon, you have yourself a computer.
It's a little more complicated, but suffice to say, logic gates are the raw material that
make computers possible.
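If you want to see that idea in miniature, here's a rough sketch in C, not anything from TI's data sheets, of a one-bit full adder built from nothing but logic operations. Chain sixteen of these together and you're most of the way to the adder inside a 16-bit ALU.

    #include <stdio.h>

    /* A one-bit full adder expressed purely as logic gates.
     * sum       = a XOR b XOR carry_in
     * carry_out = (a AND b) OR (carry_in AND (a XOR b)) */
    static void full_adder(int a, int b, int cin, int *sum, int *cout)
    {
        int half = a ^ b;                    /* XOR gate */
        *sum  = half ^ cin;                  /* XOR gate */
        *cout = (a & b) | (half & cin);      /* two AND gates feeding an OR gate */
    }

    int main(void)
    {
        int sum, carry;
        full_adder(1, 1, 0, &sum, &carry);   /* 1 + 1 = binary 10 */
        printf("sum=%d carry=%d\n", sum, carry);
        return 0;
    }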
When you see it mentioned that some older computer was built using transistor-transistor logic,
well, if it's after 1966, then there's a really good chance that it used TI's TTL chips.
It's been hard for me to pin down an exact date, but sometime in the early 60s,
TI was also getting into the computer game. At least in a limited sense. Which is why getting a date has been a little bit slippery. As near as I can tell,
this started sometime around 1961 with a device called the, quote, Molecular Electronic Computer.
It's a great name for a neat little computer. This Molecular Computer was really
just a silicon-based computer made using tight packings of integrated circuits. The Molecular
part is because it was small. There are these wonderful brochures that show an entire computer
fitting in one hand. You kind of have to stretch your fingers out, but it does fit in your hand.
Thanks to the fact that everything was built using silicon, they also used relatively little power,
and they were resistant to shock and heat, at least up to a point. During the 60s, TI sold
these systems to the military, primarily the Air Force, for use as onboard guidance computers in
missiles. Texas Instruments would
also build the onboard computers used in NASA's Mariner missions in the latter part of the 60s.
Now, I can't find precise details, but the Mariner computers were probably similar to
TI's molecular electronic computer system. So come 1970, we have a pretty interesting position here. TI has an amazing level of vertical
integration. They own plants to manufacture integrated circuits, the labs to design them,
and they consume their own ICs all in-house. They also have lucrative contracts with the
US military and are making their own computer systems. That's not even counting their consumer electronics division.
This is basically what early Intel really wanted to be.
But there was still something missing.
TI was building and selling computers, sure, but only in a really confined way.
Their existing computer systems were really special purpose.
If you wanted a missile guidance computer, then sure, TI could set you up.
But they couldn't do anything about digitizing an office.
All things considered, this wasn't a really bad place for TI to be.
They weren't making general purpose computers or really competing in the actual computer market,
but they manufactured and sold parts that were used in computers.
IBM, the biggest computer manufacturer of all, was even a big client of theirs.
So the missing feather in TI's cap, some series of generic computers, wasn't really an issue.
That is, until the market started to shift. You see, vertical integration wasn't just a thing in Texas.
IBM was also in on the vertical integration game.
In the latter half of the 60s, Big Blue was working to integrate even further.
Specifically, IBM was working towards in-house semiconductor production.
The company wanted to make their own chips, and one big implication
there is that Texas Instruments would be cut out of their supply chain. According to the book
Engineering the World by Caleb Pirtle, IBM's drive towards in-house silicon was what pushed
TI over the edge. Quote, For TI, the commercial computer market was tempting. As early as 1968, a team had been formed to study the potential of the mini-computer business.
A huge market was developing on the horizon, but TI was confronted with a serious issue.
IBM ruled the computer business,
and TI didn't want to compete head-to-head with the company's largest and most valued semiconductor consumer.
End quote.
But with IBM-produced chips on the same horizon, TI no longer had an excuse to stay out of the market.
This effort was spearheaded by Mark Shepard, then-CEO of TI.
The plan was to produce a line of general-purpose computers, but TI took a cautious
approach. They didn't want to compete with IBM. They probably couldn't compete with IBM. So Shepard
proposed that they go a slightly different route. IBM in the late 60s mainly dealt in mainframes.
This is the more old-school approach to computers. An office
would have one hulking, powerful, and reliable machine that would be linked up to multiple
terminals. These types of computers are often packed with redundant systems. They're built for
100% uptime, and they're extremely expensive. To own an IBM mainframe also put you into a long-term arrangement with Big Blue.
Mainframes and most IBM hardware was leased and not owned by users. So IBM still had some control
over their hardware, even if that just meant a regularly scheduled maintenance crew would show up.
Shepard pushed TI to create and sell a line of minicomputers, not mainframes.
Now, the distinction here can get a little tricky because, well, we don't really use the term
minicomputer anymore. In general, this refers to a type of computer that was smaller than a
mainframe but still not a personally owned and operated system. With that broad definition, there's a lot of variation.
But generally speaking, a minicomputer is smaller, less powerful, and cheaper than a mainframe.
This meant that minis could fit into a lot more places that their larger counterparts couldn't.
In 1970, TI unveiled their first minicomputer, the TI-960. Taking advantage of the smaller and
cheaper format, the 960 was aimed at industrial control applications. That is, a factory could
buy one of these new computers direct from TI, hook it up to their machinery, and with a little
software, it was possible to digitize their entire operation. It's the type of application
where mainframes could be used, and in some cases were, but minicomputers were just a more
accessible choice for more places. The 960 was just the start of a broader strategy laid out
by Shepard. His grand plan was something that he called distributed computing. Once again, we run into a bit of
outdated language. In the modern day, distributed computing is usually used to describe large-scale
number-crunching jobs carried out by a large network of computers. How Shepard is using it
is similar, but instead of supercomputing, he's more talking about coordinated computing.
The idea was to start with the 960 and then work out from there to offer a family of computers that were all compatible.
By working together, these systems would mesh and build the office of the future, so to speak.
Making good on these plans, in the coming years, the TI-980 and 990 were released.
These were fully-fledged, general-purpose computers.
The entire series is a little enigmatic, at least you have to do some digging to get much detail.
The 990 has the most readily available documentation, so we're going to be diving into that system.
Just keep in mind that there's a broader field of these roughly compatible systems.
So if I mince some details, I think we can thank the weird sourcing for that.
My first impression of the 990, and really the entire series,
is that these computers are certainly out of the ordinary.
But these obscure systems aren't just oddities.
Under the hood, there's some really interesting stuff at play.
The big story that we can read from the chips is that of vertical integration.
And this is the big reveal here.
The 900 series of minicomputers stored their registers in memory.
Admittedly, that's probably not as exciting a revelation to anyone but me.
So let me explain why this is weird and why TI would make their computers like this.
Hopefully, on the other side of this, you'll start to get excited.
So on a normal computer, you have three relatively distinct pools of storage to throw data into. You have some kind of long-term storage. That could be
hard drives, tape, even punch cards count. This type of storage is what we interact with the most
on a day-to-day basis, and it's also the furthest removed from the actual computer and
processor. Moving closer to the action, we have short-term storage, most often just called random
access memory or memory. That's a computer's working space. It's where anything you see on
screen, any variables and programs, and any running code is kept. Memory is faster than long-term storage, but only retains
its data while the computer is running. A computer's memory space also tends to be much smaller than
long-term storage, so you can't really load every file you could ever want into memory. You have to
be smart about how you use it. The final broad category are registers. This is where data that the processor
is actively operating on is stored. Registers aren't big enough to hold much. Each one usually
only holds a single number. A computer generally has multiple registers. 16 seems to be a normal
amount. Importantly, registers are tied directly into the processor. This makes them
much faster to access. To be a little more nitpicky, registers are usually connected
very closely to the processor's arithmetic logic unit, abbreviated to the ALU. That's the circuit
that handles logic and mathematical operations. Data usually flows
something like this. If you have two numbers in a file that you want to add, you first load those
numbers from your file on storage into memory. Then the numbers have to be loaded into registers.
Once that's done, the processor's ALU can get to work, grab those numbers, and operate on your data.
To close everything out, you'd usually just take the chain in reverse,
eventually dropping your results back into a file on some long-term storage device.
That's a really broad and simplistic explanation, but I think it's workable enough.
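If it helps to see that chain written down, here's a little C sketch of the same flow. The file names are made up for illustration, and in a real program the compiler decides when values actually sit in registers, but the shape is the same: storage to memory, memory to registers, through the ALU, and back out.

    #include <stdio.h>

    int main(void)
    {
        /* Long-term storage: a file on disk (the name is hypothetical). */
        FILE *in = fopen("numbers.txt", "r");
        if (!in) return 1;

        /* Memory: a small working buffer the numbers get read into. */
        long buffer[2];
        if (fscanf(in, "%ld %ld", &buffer[0], &buffer[1]) != 2) { fclose(in); return 1; }
        fclose(in);

        /* Registers: the compiler will usually keep these operands in CPU
         * registers while the ALU does the actual addition. */
        long a = buffer[0];
        long b = buffer[1];
        long result = a + b;

        /* Then back down the chain: result to memory, memory to storage. */
        FILE *out = fopen("result.txt", "w");
        if (!out) return 1;
        fprintf(out, "%ld\n", result);
        fclose(out);
        return 0;
    }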
What's important to keep in mind is that those three big data buckets,
long-term storage, memory, and registers, are separate entities.
They are interconnected, but totally different tools for totally different jobs.
Long-term storage needs to be stable and, preferably, very voluminous.
Memory needs to be relatively fast, able to handle a lot of read-write cycles,
and stable for relatively short periods of time. Registers just need to be as fast as possible.
They only store data a few cycles, and they have to be wired right into the processor's guts.
I keep saying usually because the TI-990 and its relatives are really an exception to this design.
So let's try this again.
The 900 series stored its registers inside memory.
Do you feel it now?
If you're anything like me, then this quirk should be equal parts confusing and fascinating. I'm sure somewhere
out there in the wide, wild world of computers, there is another system like TI's machines,
but I have yet to find it. So why did TI put registers in memory, and how did they make that
work? Most importantly, what do you get from this weird design choice? The first two
questions I can answer pretty easily. I already hinted at it, but TI was able to pull this trick
thanks to vertical integration. That plus some careful design choices, but I'll get to that.
Early 990 minicomputers were built using, you may have guessed it already, transistor-transistor
logic. This is what you get when you own the semiconductor factory and the computer factory.
A 990 is basically just a pile of TI-made circuit boards populated with hundreds of TI-made chips.
The processor, memory, interface cards, and even the glue that made everything work together
was just a bunch of TTL chips made by TI.
So on a simple parts level, the machine's registers were basically the same as its memory.
Reading from memory, all things being equal, could work out to a similar speed as reading
from a register.
The bit of smart design that makes
this all work is called the TILINE. It's an integrated bus that ties, as in T-I-E-S,
everything together. You get it? Tie, as in T-I? Jokes aside, this is a pretty cool system. The TILINE interfaces expansion modules with
the core processor circuitry. It's used for memory but also other hardware expansion.
Looking through the block diagrams, basically a map of how data travels through a computer,
it becomes clear that the TILINE is connected almost directly into the processor's ALU.
In other words, memory is brought closer to the processor's internals, as close as possible.
That makes memory access even faster than usual.
In practice, the 990 does actually have a few normal registers.
It has some status flags, a program counter to point to
the currently running instruction, and the ever-important workspace register. That last one
is where we get to the real magic part. You see, the workspace register specifies where in RAM to
store the rest of your register set. Now, let's let that sink in for a second.
Not only does the TI-990 store basically all of its registers in RAM, there's three exceptions,
but it can also change where those registers are stored in RAM. The 990's reference manual describes this feature as, quote,
As an improvement over the undesirable consequences of a multi-register architecture,
the 990 computer uses a block of memory words called a workspace for instruction operand manipulation. The workspace occupies 16 contiguous memory words
in any part of memory that is not reserved for other use.
The individual workspace registers may contain data or addresses
and are used as operand registers, accumulators, address registers, or index registers.
End quote.
The reference to a multi-register architecture
just means the conventional description that I laid out earlier.
We see here that TI isn't just explaining workspaces as some quirk of the system,
but rather a flagship feature.
You can use them as any type of registers.
They just happen to be in memory,
and there happen to be benefits. So the question becomes, what are those benefits?
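To make the workspace idea concrete, here's a small C model of it. This is my own simplification, not TI's code, but the arithmetic is the real thing: on the 990, "register 5" is just the 16-bit word sitting at the workspace pointer plus ten.

    #include <stdint.h>
    #include <stdio.h>

    /* A toy model: RAM as a byte array, registers as words living inside it. */
    static uint8_t ram[65536];

    /* Workspace register n lives at address wp + 2*n. The 990 family is
     * big-endian, so the high byte comes first. */
    static uint16_t read_reg(uint16_t wp, int n)
    {
        unsigned addr = wp + 2u * (unsigned)n;
        return (uint16_t)((ram[addr] << 8) | ram[addr + 1]);
    }

    static void write_reg(uint16_t wp, int n, uint16_t value)
    {
        unsigned addr = wp + 2u * (unsigned)n;
        ram[addr]     = (uint8_t)(value >> 8);
        ram[addr + 1] = (uint8_t)(value & 0xFF);
    }

    int main(void)
    {
        uint16_t wp = 0x1000;         /* registers now live at 0x1000 through 0x101F */
        write_reg(wp, 5, 0xBEEF);     /* "register 5" is really the word at 0x100A */
        printf("R5 = 0x%04X\n", read_reg(wp, 5));
        return 0;
    }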
For workspaces to make sense, we have to look at another technology. We have to look at time sharing or multitasking or multi-programming, whatever you want to call it. It goes by a lot
of names. We need to rehash basically how a computer can appear to run more than one program
at once. This comes up a lot, so if you want a full primer, you can just grab any number of my
episodes. In short, a computer creates the illusion of multitasking by really quickly switching from one program to
another. The computer will run a program for a few nanoseconds, pause it, then jump to another
program before pausing again to prepare for another jump. Computers are way faster than
human reaction time, so it appears, at least to our limited existence, that a computer can run more than one program simultaneously.
The trick all comes down to switching between one program and another.
In the biz, it's called a context switch or context swap.
This is where the computer's current state, called the context, is first saved down to somewhere in memory.
This context is usually just the value of all the computer's registers. Then, to finish the switch, a stored context is copied
from memory back into registers. From there, the computer can start executing the new task.
For multitasking to appear seamless, you end up having to do a lot of context switches.
So the speed of each context switch becomes really important. In most cases, a context switch is
actually pretty computationally expensive. Let's say you're writing a multitasking manager for the
8086 processor. That chip has four general purpose registers,
two pointer registers, four segment registers, and a stack pointer. That's a total of 11 registers
to make up a full process context. So at minimum, you need 11 operations to save the context,
and another 11 to restore another, older context.
So, a switch would take at least 22 processor operations.
With any normal computer, the speed of a context swap tends to be dependent on the number of registers you have.
Basically, how big is the context that you need to swap?
Well, I say normal computers.
As we've seen, TI wasn't making normal computers. The 990
and its relatives can perform a context switch with only one instruction. All registers are
already stored in memory. The entire context just lives in RAM already. To switch from one context to another, all you have to do is switch where
the workspace pointer is pointing. This makes the 990 uniquely designed to support multitasking.
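Here's a rough sketch in C of what that difference looks like. The structures are my own stand-ins, not real hardware definitions, but the contrast holds: a conventional chip has to copy its whole register file out and back in, while a 990-style machine just loads a new workspace pointer, which is what the hardware's BLWP and RTWP instructions boil down to.

    #include <stdint.h>
    #include <string.h>

    /* Conventional chip: the context is the register file itself, so a switch
     * means copying it out to memory and copying another saved context back in,
     * roughly two moves per register. */
    struct conventional_context {
        uint16_t regs[11];            /* e.g. the 8086's eleven programmer-visible registers */
    };

    static void conventional_switch(struct conventional_context *cpu,
                                    struct conventional_context *save_area,
                                    const struct conventional_context *next)
    {
        memcpy(save_area, cpu, sizeof *cpu);   /* save the old context, word by word */
        memcpy(cpu, next, sizeof *cpu);        /* load the new context, word by word */
    }

    /* 990 / TMS9900 style: the sixteen registers already live in RAM, so the
     * context is just three real registers. Switching is a matter of pointing
     * the workspace pointer somewhere else. */
    struct tms_context {
        uint16_t wp;                  /* workspace pointer: where the registers live */
        uint16_t pc;                  /* program counter */
        uint16_t st;                  /* status register */
    };

    static void tms_switch(struct tms_context *cpu, const struct tms_context *next)
    {
        *cpu = *next;                 /* no register file to move -- it never left memory */
    }

    int main(void)
    {
        struct conventional_context cpu86 = {{0}}, saved = {{0}}, other = {{0}};
        conventional_switch(&cpu86, &saved, &other);

        struct tms_context cpu990 = {0, 0, 0};
        struct tms_context task_a = {0x1000, 0x2000, 0};
        tms_switch(&cpu990, &task_a);     /* task A's registers are now "live" */
        return cpu990.wp == 0x1000 ? 0 : 1;
    }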
There are, of course, some caveats. Normally, systems built for timesharing come equipped with
specialized memory handling hardware. We're talking stuff for memory
protection and process isolation. The 990 didn't really have that side covered, so we're just
looking at a design made for context swapping, not necessarily the full package seen on larger
time sharing mainframes. The last fun consequence of this design takes us back to the TILINE. The 990 managed hardware
via memory mapping. Expansion cards and devices plugged into the TILINE interface were mapped
into some region of memory. So, for example, let's say you have a graphics display connected up over
a TILINE card. Then it would be accessed by simply writing to some pre-arranged
place in memory. That sounds a little weird at first, but it's actually a common way to handle
I/O. In fact, x86 computers do this exact operation for most devices. What's cool here is that,
in theory, a write to an I/O device should be the same speed as a write to a register, since, you know,
the whole register set is in memory already. It's just something cool that I think is weird about these systems. I like it.
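For anyone who hasn't bumped into memory-mapped I/O before, here's a tiny simulated sketch in C. The address and the "display" are invented for illustration; on real hardware you'd point a volatile pointer at whatever address the board's manual gives you, but the idea is identical: talking to the device is just an ordinary memory write.

    #include <stdint.h>
    #include <stdio.h>

    /* Simulated address space: a plain array stands in for RAM plus a device
     * that the bus has mapped into it. All numbers here are made up. */
    static uint16_t address_space[32768];
    #define DISPLAY_BASE 0x7C00u        /* pretend a display card decodes this range */

    static void write_display_cell(unsigned cell, uint16_t value)
    {
        /* To the program this is just a store. The hardware watching the bus
         * decides that a store here means "put a character on screen". */
        address_space[DISPLAY_BASE + cell] = value;
    }

    int main(void)
    {
        write_display_cell(0, 'A');
        printf("cell 0 now holds %c\n", (char)address_space[DISPLAY_BASE]);
        return 0;
    }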
Now, this brings us up to where TI sat in the early 70s, at least
as far as big computers were concerned. 990s, 980s, and 960s were sold in enough numbers to
keep development going, but they never really got close to competing with IBM. For us to go
further forward in the story, we actually need to get even smaller than minicomputers,
and we have to look at some of the smallest computers being developed down in Texas.
I've spent a lot
of time on this podcast talking about the history of Intel's microprocessors, but Intel was never
the only show in town. In 1971, TI announced the TMS-1802NC. That's a pretty long name for a very
small 4-bit microcontroller. Like the 4004, it was built
for use in four-function calculators. And just like Intel's chips, we're looking at a pretty
limited processor. That being said, TI was in the microprocessor game from the start,
just like the semiconductor game. In 1974, the TMS-1000 hit the market. That was a new 4-bit microcontroller. It's a single package
that integrated a processor, RAM, ROM, and I/O circuits. In a very literal sense, this was an
entire computer on a single wafer of silicon. But still, we're dealing with 4-bit stuff here.
It's not powerful, and it's not really that useful,
but still handy in some specific applications and still a big achievement.
But TI wasn't really competing in Intel's space yet. Once again, the yet is pulling a lot of
weight here. Walden Rhines, an engineer who worked at TI around this era, wrote an excellent outline of the situation for IEEE Spectrum.
In 1972, he started working at TI's Metal Oxide Semiconductor, or MOS, division.
That put him right in the middle of the action.
As he recalls,
TI engineers didn't really pay much attention to either Intel's 4-bit 4004 or 8-bit 8008 microprocessors.
TI did take note of Intel's 8080 and subsequent 8080A 8-bit microprocessors, which showed much more promise than the 4004. The MOS division was given the job of catching up to Intel in both microprocessors
and dynamic RAM, end quote. Really, not many were paying a lot of attention to Intel's early chips.
But once the 8080 became a home run, and I think especially after the Altair 8800 starts moving
8080 chips towards consumers, well, then people start to take
Intel a lot more seriously. The drive towards competitive DRAM is interesting, but what we
really care about is the microprocessor part here. Rhines describes this process as turbulent.
TI planned to first create an 8080 clone as a stopgap, just a way to move silicon while bigger projects were underway.
That larger project was the TMS-9900, the world's first 16-bit microprocessor.
It would take years of hard work to complete.
According to Rhines, the development of the TMS-9900 followed a strict, quote,
one company, one computer architecture approach.
The new chip was designed to be 100% compatible with existing TI-990 minicomputer hardware.
Now, I don't just mean the strange chain of compatibility that Intel dealt in during this same period.
The TMS-9900 and TI-990 were binary compatible. So the architecture was the same, the overall design was the same,
and the instruction sets were the same. There was space to improve on the 990's design,
but features could only be added, not changed or removed. So why would a company go this route,
especially when the TI-990 series of minicomputers hadn't exactly rocked the world?
Well, there are a lot of factors. Part of this decision had to come from the larger,
long-term goal of distributed computing. The TMS-9900 could represent the smallest system in a wide class of computers,
spanning everything from embedded systems up to mini-computers. Taking this hardware stance would
also simplify logistics a whole lot. Software from one model computer produced by TI could run on any
other machine they manufactured. According to Rhines, this last point may have been crucial.
At TI during this time period,
there was a focus on designing systems that were oriented towards application software.
I haven't really seen steps taken in this architecture
to make programming particularly easy or streamlined.
But looking at the whole family, that changes things.
Compatibility meant that there would be a larger market for software, since a program could be
installed on any of TI's computers. So while we may not be looking at a software-centric design
like Intel's iAPX chip, we are looking at a very similar spirit. The final big why takes us back to earlier mini-computers.
By creating a compatible microprocessor,
TI's 990 could be radically upgraded.
Integrated circuits tend to be faster than TTL chips.
They also tend to use less power.
So the TMS9900 could be used to create a new supercharged 990
computer. It would be 100% compatible, which meant this upgrade could be a drop-in replacement.
No new software, just a brand new processor. This also ties into the whole micro mainframe thing
that I've seen pop up a few times before. Back when I was looking at the 8086,
we talked about Intel's failed iAPX 432 processor. The project's goal was to create a radically new
kind of microprocessor, one with the capabilities of a mainframe, but all restricted to a single
wafer of silicon. IBM was doing a similar thing in the early 70s with their PALM microprocessors,
essentially chips that could be used to emulate much larger computers. Notably, the IBM 5100,
which was designed as a personal computer, used a PALM processor to act like an IBM System 370
mainframe. There's this tradition in the 70s of just trying to fit larger computers
onto a single chip, and with the TMS-9900, Texas Instruments is moving in a similar direction.
However, as we've seen before and we'll see time and time again, this is a really dangerous game
to play. It's unclear if an ultra-integrated TMS9900 was planned from the beginning.
I very much doubt it was.
The architecture used by the TI 900 series of minicomputers took advantage of their hardware
implementation.
Storing registers in RAM was fine when everything was just TTL chips on the same board.
But adding a microprocessor into the
mix actually changes things quite a bit. It can probably go without saying, but microprocessors
aren't just smaller computers, at least not exactly. The new technology changed how computers
could be built. It changed what types of systems you could design and what kind of designs you
could get away with. It also changed
what kind of restrictions you were working under. Importantly for our story, a microprocessor could
have registers etched onto the same chip as the rest of the processor. A register could now live
micrometers away from the processor's ALU. It would be connected with tiny metal oxide traces,
much shorter than any wire or any circuit
trace you could ever lay. All things being equal, accessing a register on a processor's actual die
is wildly faster than accessing a location in memory, even if the memory is DRAM or a fancy
IC memory chip. There's just no contest. Of course, the TI-990 and its compatible systems
were designed without this technology. It just wasn't really a factor when the project started.
So we get to a sticky situation where the TMS-9900 is compatible with the 990. It had to store
registers in memory, but that prevented it from taking advantage of being
a microprocessor. The chip was hamstrung from its conception simply because it was copying an older
design. Another huge issue here is that the TMS9900 didn't implement the TILINE interface,
at least not on the chip. Now, that's a really big omission. The TILINE added an easy way
to support expansion hardware, but it also allowed the TI-990 to access up to a full megabyte of
memory. Without the TILINE, the TMS-9900 could only address 64 kilobytes of RAM. The other weird part is that a full TILINE actually supported
multi-processor systems, so a TI-990 with a TILINE could run more than one processor,
making it a time-sharing and multitasking beast. The TMS-9900 just didn't have the
interfacing hardware built in, so it couldn't run at that level.
Now, we're left with, all things considered, a pretty awkward microprocessor. It has built-in
support for context switching, but its implementation bottlenecks every other
operation on the processor. It's essentially a scaled-down minicomputer that's had one of its most powerful
features, the TILINE, stripped out. It suffers from all the issues of a microprocessor, namely
siloed and unexpandable operation, and all the idiosyncrasies of an older series of minicomputers.
So far, this has all been somewhat straightforward, just another jaunt down the path of bad corporate decisions and the dangers of backwards compatibility.
Well, let's change that and introduce a mystery and some conflicting sources.
This wouldn't really be an episode of Advent of Computing without me trying to unwind a weird mystery, now would it?
So stick with me.
A meandering stream is about to start.
The TMS-9900 may, and I have to emphasize, may be able to run Unix.
Now, this has gotten me into a surprisingly bewildering series of forum and Usenet posts
that conflict with
manuals, but I think this is worth investigating. I keep saying that TI was making computers that
were uniquely qualified for multitasking, so I want to look at what that actually means in practice.
TI shipped its own operating systems for use with 900-series computers.
One was DX10, the Disk Executive Operating System.
This was later replaced with DNOS, the Distributed Network Operating System.
Both of these were multitasking, multi-user systems.
TI was shipping timesharing as a standard option for the 990.
This, of course, took advantage of the weird memory register design.
There's not that much excitement there.
DX10 and DNOS both look like pretty run-of-the-mill timesharing systems.
They even came with options for Fortran and COBOL compilers, among other common tools.
The quirk here is that they take advantage of hardware-backed
context swapping. As near as I can tell, one of these operating systems was supported on most of
the 990 range of systems. Now, I say most because the sources get a little bit weird once we're at
this level of granularity, and the actual manuals don't really explicitly list supported systems,
so let's just say the overall range was multitasking-ready
with some type of supported operating system.
The prestige model in this family was undoubtedly the 990/12.
This was basically the largest TTL-based system you could buy from Texas Instruments.
It came packed with up to 2 megabytes of memory and plenty of room to grow. On the other side
of the spectrum, we get the 990/4, a smaller system built around the TMS9900. This model could only handle 56K of memory and didn't have the all-important
TILINE interface. But there were a few models in between. The most peculiar one,
and I think the most interesting, was called the 990/5. Great names, right?
Anyway, the 990/5 is powered by the TMS9900 processor.
However, this model has a TILINE interface.
In this case, the TILINE is built up as a separate pile of chips on the main processor board.
In general, the 990/5 only had access to 64K of onboard memory.
But here's the catch.
If I understand the TILINE properly,
then this could have been upped. The TILINE is accessed as a memory-mapped device. That is to say, to read and write to the interface, you just need to manipulate a specific chunk of memory,
and then send out some signals. What's important is that the TILINE and the TMS9900,
well, they actually have different sized address buses. The microprocessor has a hard-set 16-bit
wide bus, hence why it can only address 64K at a time. But the TILINE's bus is 20 bits wide, which comes out to a max of 1 megabyte of addressable memory.
That's the same size as the 8086's address bus, if you're keeping track at home.
Now, only part of the TILINE's memory space is shared by the TMS9900 at any one time.
That range is about a kilobyte wide if we're talking about the 990/5. I think it varies
with the TTL models. It would be pretty janky, but using that small window, it should be possible
to break out of the 64K RAM limit. You'd just need a device on the TILINE that could shuttle
around 1 kilobyte chunks of data between the microprocessor
and some external pile of RAM. I can't find if anyone ever made such a memory device, but
if I'm reading the reference manuals correctly, then it should be possible to break the 64K RAM
limit. Basically, what I'm getting at is that the TMS-9900 wasn't necessarily trapped in its tiny memory footprint.
Some smart design and supporting hardware could have solved that one glaring flaw.
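Here's a hedged sketch of what that kind of banked access could look like from the software side. Everything here is invented for illustration, the window size, the bank-select register, all of it, but the arithmetic is the point: a 20-bit bus reaches 2 to the 20th, or one megabyte, while a 16-bit bus only sees 64K, so you pick which slice of the big space shows up in the small one.

    #include <stdint.h>

    /* Pure simulation of bank-switched memory access -- all numbers invented.
     * A 20-bit bus addresses 2^20 = 1,048,576 bytes (1 MB); a 16-bit CPU only
     * sees 2^16 = 65,536 bytes (64K), so a small window of the big space gets
     * mapped into the CPU's view, one bank at a time. */
    #define BIG_MEMORY_SIZE (1u << 20)     /* what a TILINE-style bus could reach */
    #define WINDOW_SIZE     1024u          /* the slice visible to the CPU at once */

    static uint8_t  big_memory[BIG_MEMORY_SIZE];
    static uint32_t bank_base;             /* stand-in for a bank-select register */

    /* Point the window at one 1K-aligned chunk of the large memory. */
    static void select_bank(uint32_t bank)
    {
        bank_base = (bank * WINDOW_SIZE) % BIG_MEMORY_SIZE;
    }

    /* A CPU access at "window offset" really lands somewhere in the megabyte. */
    static uint8_t window_read(uint16_t offset)
    {
        return big_memory[bank_base + (offset % WINDOW_SIZE)];
    }

    static void window_write(uint16_t offset, uint8_t value)
    {
        big_memory[bank_base + (offset % WINDOW_SIZE)] = value;
    }

    int main(void)
    {
        select_bank(700);                  /* a bank far past the 64K line */
        window_write(0, 42);
        return window_read(0) == 42 ? 0 : 1;
    }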
My memory conspiracies aside, I can tell you that the 990/5 was probably running multitasking software in 1979.
That much is concrete.
As for the Unix part of the equation,
we need to jump forward in time,
actually by quite a bit.
In 2016, Dave Pitts and Paul Rusendahl
announced on Usenet that they had successfully ported Unix
to the TI-990.
Honestly, this is one of my favorite things
about the retrocomputing community.
about the retrocomputing community.
If there's a way, UNIX or Linux will get ported to any old system out there.
Of course, there are about a billion caveats, but it is UNIX.
We're talking UNIX V6, which is a pretty old version that's had source publicly available
since the 70s.
And it's not ported to all TI-990s. Just the 10, the 10A, and the 12.
But don't fear. There is one other compatible system on the list. A little machine called
the Powertran Cortex. This was a computer design that was published in Electronics Today in 1983.
The only detail that matters is that it was built using a TMS-9995. It's basically an overclocked 9900. Crucially,
both chips have the 64k RAM limit in effect. So bringing this all together, what do we get?
Historically, these TI computers weren't running Unix.
But there are modern ports of Unix made for these systems.
And there's a very real chance that those ports could work on a 9900-based computer.
Now, I want you to stash that and my memory conspiracy in the back of your pocket for a second.
And let's move on to
the final leg of this wild ride. The TMS-9900 could have been the brains behind the IBM PC.
Welcome to full off-the-rails territory. Once again, we're stepping into the realm of speculation here, so be warned. My plan is to go over why
the 9900 didn't end up inside the PC, and then go fully off the rails and look at what kind of
computer IBM could have made using TI chips. This is going to be a blend of counterfactual
conjecture and educated guess based off history. So overall, let's just have some fun
because I think there's some interesting territory here. The design process for the IBM PC was a bit
of a breakneck affair. I covered the overall history back in episode 51 if you want to get
deeper into detail. In general, the main design goals were to create a relatively cheap personal computer system using third-party chips and software.
Crucially, this needed to be done quickly.
The project started in 1980, and the PC was on sale before the end of 1981.
So, timelines were pretty short.
To start with, IBM sent out a team to try and line up third-party vendors.
The central decision that would shape this new computer was, of course, which microprocessor to use.
The ideal chip would be 16-bit, be capable of addressing as much RAM as possible, have some existing software, and a second source.
That last part, the second source, is probably the least obvious. Essentially,
IBM wanted to ensure that there were at least two suppliers that could produce whichever chip they went with. It was a hedge against a supply chain issue further down the line.
With those criteria in mind, the PC team set out to find a processor. Rhines, the TI engineer I quoted earlier,
was one of the unlucky few on the receiving end of this search. As he recalls,
I was told that I would need to give a presentation on the TMS-9900 to a group from IBM that was
working on some very secret project that required a 16-bit microprocessor. The group came from a rather
unusual location for IBM, Boca Raton, Florida. I spent a lot of time preparing, gave what I
thought was a well-polished presentation, and diligently followed up. But the IBM team displayed
limited enthusiasm. We wouldn't know until 1981 just what we had lost. End quote.
So why the lack of interest in a possible contender? Well, it comes down to two big issues.
First was that the TMS-9900 had problems, plain and simple. The 64k RAM limit was a big one,
but TI also didn't have a second-source manufacturer.
Those were some major strikes against the chip itself.
However, I think it's just as likely that the PC team had already made up their minds.
Ultimately, IBM chose to work with Intel.
A big reason given was that engineers working on Project Chess were already familiar with Intel's processor
offerings. Specifically, some of the team had recently worked with the Intel 8085 microprocessor
and its support chips. Rhines doesn't throw out an exact date for the IBM visit, and the details
around the early days of the PC development are, surprisingly, pretty scant. So we may never know the truth. That said, I think
it's a reasonable guess that the crew of IBMers were just doing their due diligence. Intel may
have been the choice from very early on. The other piece here that we have to keep in our heads is
that the final chip inside the PC, Intel's 8088, also had issues. The overall design, usually called the x86
architecture, is surprisingly antiquated. Intel had developed the 8086 and 8088 in a relatively
short period of time as stopgap measures. Intel was banking on a larger project, the iAPX 432, as their next generation of flagship
chips. So the design of the first x86 chip wasn't all that stellar. Specifically, the 8086 was meant
as an upgrade path to Intel's last generation of processors. So we're looking at a 16-bit chip
that was built as an extension to an earlier 8-bit chip. To make
things more confusing, the 8088 used in the IBM PC was tweaked to use an 8-bit data bus. The result
is, frankly, a bit of a messy processor. It works, but it's definitely not a perfect chip.
IBM wasn't looking for a perfect processor. They were trying to find a way to make the PC
quickly and cheaply. A huge driving force was getting the PC to market as soon as humanly
possible. IBM could have waited around for a better chip, but that would have cost time that
they didn't want to waste. Over the ensuing years, with wide standardization of the PC's architecture,
Intel would get a lot of breathing room.
The 8086 and 8088 had space to grow.
Intel evolved those stopgap chips into better designs.
Over time, some of the issues in x86 processors were addressed, but it was a slow process that took an initial jumpstart from IBM. The simple fact was that if the PC was released with another processor,
the 8086 would have probably faded into obscurity.
So, let's veer off the path of reality and start to take a look at that other timeline,
where IBM for some reason did choose the TMS9900.
This is the timeline where the 8086 isn't a big deal. It
just became a stopgap then disappeared as Intel intended. What would the new personal computer
look like? Well, we have some precedent to go off. There were personal computers built using the TMS-9900,
namely the TI-99/4 and the TI-99/4A. Texas Instruments had been trying to build a personal computer since at least 1976. Overall, there wasn't much to show for their
efforts. According to Rhines, the project was scattered. At least three teams were all working
towards different computer systems with overlapping goals.
Some hit dead ends, others merged, new teams formed, and overall it sounded like a bit of a management disaster.
However, eventually one system came together.
In 1979, the TI-99/4 was released.
And it flopped.
The system was targeted at the home market specifically, so it was one of those wedge-shaped computers like an Apple II or a Commodore 64. It plugged into a TV, could play
games, and ran BASIC from boot. The differentiating factor was, of course, that this was a 16-bit
computer in a crowded field of 8-bit home micros. It was also a lot more expensive,
coming in at $1,400. That's over $5,000 once you adjust it for inflation. Outwardly,
it didn't look that different from a much cheaper Apple II, and a lot of customers couldn't see the
difference. In 1981, the TI-994A was released as an improved model, but it's broadly
the same as its predecessor. It's the most common of the two, so most sources just talk about the
latter machine. Anyway, the 99/4A wasn't a very impressive computer. Which is really a shame,
I think, since it had the chance to do a lot. The machine came
with 16 kilobytes of RAM plus a smaller chunk of so-called scratchpad RAM that was reserved for the
CPU. That was composed of some faster chips that were used for workspace registers. So we can see
that TI is trying to overcome some of the limitations of the TMS9900.
But 16 kilobytes of RAM isn't much to work with.
Color video and sound were also handled using TI's own chips.
Once again, we see the wonders of vertical integration.
The 99/4A could produce three-voice sound and had 16-color graphics,
so not that bad, but once again, nothing really
stands out from the competition. The simple fact is that the TMS9900 can do a lot more,
but looking at this home computer, you would never see that. In general, the 99/4A is really
close to its 8-bit competition. We're left with a really awkward computer. It's a 16-bit machine,
but it's dressed up to act like a much smaller system. The other just frankly dumb thing about
these computers is their keyboards. The 99/4 used a rubber membrane keyboard, so you get the typing
experience of pressing on a remote control over and over again. The improved 99/4A shipped with a
much better mechanical system. I've actually been working on converting one of these boards to work
over USB, so I've gotten pretty familiar with it. There were a number of keyboard revisions,
but the most common ones use an interesting leaf spring mechanism. Basically, two spring contacts are
kept separated by a slider. When you press down on a key, the slider moves out of the way,
and it allows the contacts to touch. It's simple, pretty robust, and in my experience,
if you maintain it well, it can feel pretty good to type on. The issue is the layout. It's expected that older keyboards are a little strange, but this one
really takes the cake. There's no tab key, no escape, and no backspace or delete keys. Thus,
the 99/4 and 99/4A are rendered useless for any business applications. That's the overall lay of the land and what a real TMS-9900-based
computer looked like. In general, it's a little underwhelming. I think a lot of that comes down
to how the systems were targeted. The real power of the TMS-9900 wasn't leveraged for the right
niche. The chip could already power a mini-computer. To me, it's
clear that its place was a lot closer to an office environment. So it's time for that off-the-rails
part. What would IBM's office-focused PC have looked like with this office-ready chip?
The ground rules for this thought experiment are simple. In 1980, a crew from IBM went to Texas
and came back very interested in a new chip. A much different computer then forms in Boca Raton.
Let's call this the TI PC for argument's sake and to keep stuff separate in our heads.
The only other change I want to make is that in our alternate timeline,
TI does have a second source for everything, just so supply chain issues don't matter.
I really don't like business logistics anyway. Buying into the TMS9900 would have also locked
in the rest of the chips in this theoretical PC. IBM wanted to source as many parts as possible
from the same vendor, so in this scenario, that vendor would be Texas Instruments.
An immediate consideration would be if IBM would go with the TMS-9980 processor instead of the
9900. I haven't really mentioned this, but the 9980 is essentially the 9900 but with an 8-bit
data bus. It's like how Intel offered the 8086 with a 16-bit data bus and the 8088 with its
8-bit data bus. In the real world, IBM used the 8088 to save money on peripheral chips and to
cut some time. There were just more existing 8-bit support chips
from Intel. But the TI PC is going to stick with a full 16-bit processor. IBM would already be
scrapping all their institutional knowledge about Intel's 8-bit offerings, so they wouldn't save
that much with this compromise. RAM and ROM are really easy picks. TI had all those as in-house offerings,
so the TI PC can be loaded up with more chips from Texas. Interrupts and programmable timers
are also easy. Just pull out the TI catalog and place an order. What gets more interesting,
and where the TI PC will deviate heavily, is when we get to direct memory access.
One of the items on IBM's wish list was DMA.
That's the ability to transfer data directly from storage devices into memory
without going through the processor.
This is accomplished using what's known as a DMA controller.
It's a circuit that operates separately from the processor
to handle bulk
data transfer and operate drives. The IBM PC used an Intel chip to handle all DMA operations.
The TI PC takes a slightly different route. Its backbone has to be a modified TILINE interface.
The TILINE can handle DMA transfer from disks into its own memory space,
and it's already produced by TI, so there isn't much upfront work on IBM's part.
Crucially, we've already seen that the TILINE can do a lot more than just moving data from
disks to memory. By incorporating the TILINE into the TI PC, we also have our expansion handled.
The interface maps anything you plug in directly to memory.
That's already how the real PC worked, so it's a one-to-one replacement in that regard.
Adding to the pros, the TI PC will also be able to keep using ISA expansion cards.
This is important because the PC's
expansion cards were simply rehashed from an earlier IBM project, the Datamaster.
The expansion interface is close enough to how the TILINE works that ISA cards for the TI PC would
look really similar to what we've come to know in the real world. But a huge gain from the TILINE would be real
multiprocessing. The TILINE can negotiate operations between multiple processors.
That means that this TI PC can support fully functional coprocessor cards.
Expandability was a huge goal for IBM. They wanted the PC to be both an entry-level and high-grade professional system.
The ability to throw in more processing power fits really well with that goal,
and it's something that Intel-based PCs couldn't do.
The 64K RAM limit would end up being the biggest issue here.
Internally, IBM wanted to create a system
that broke that limit. I think this is where the TILINE could also save the day. Like I brought
up earlier, the TILINE could already address up to 1 MB of RAM, and I think it should be
possible to modify the interface slightly to allow for bank switching. Basically, just make a large part of the TMS-9900's accessible RAM
exist on the TILINE's bus. Then you add in the functionality to change which part of memory the
TILINE exposes to the microprocessor. Software-wise, I think the TI PC is actually a lot
closer to IBM's initial design goals. Texas Instruments already had their
own in-house version of BASIC for the 9900. They also had their own in-house operating systems for
the platform. IBM would never have to reach out to another software vendor. TI would be a one-stop
shop for everything, just like execs wanted. The TI PC would come with either
DNOS or DX10 stock. One huge implication here is that Microsoft would remain a niche BASIC
software company somewhere in the Pacific Northwest. The other huge outcome is that the TI PC
would be a multitasking powerhouse.
By default, this machine would have hardware support and software support for multitasking and multi-user environments.
From day one, home users could switch seamlessly between programs.
Now, for most consumers, that wouldn't be an earth-shattering feature.
But for offices, the TI PC could easily be built out into something
on par with a mini-computer. Multitasking plus wild expandability, well, that would be a killer
combo. And hey, Unix could definitely fit into the picture pretty easily. I've been singing the
praises of this sci-fi machine, but there would be some issues.
Of course, memory is the big pitfall here. Bank switching is workable, but it would make memory
more complicated to deal with. In-memory registers could also cause issues. To be viable, the TI PC
would probably have to have some extra-fast scratchpad RAM like the TI-99/4A.
But even then, between bank switching and slow register performance, there may be some speed
issues for the system. As fun and frivolous as it is, I think this thought experiment does bring up
some interesting points about the real-world IBM PC. As a vendor, I think TI would have been much better
suited to IBM than Intel. Getting hardware and software from the same source would have streamlined
the PC's development process considerably. The TILINE interface would also have made for a
much more expandable system. It would have let the PC grow into a real multi-processing monster.
And just like with Intel, if TI became IBM's vendor of choice,
I'd wager that the TMS-9900 would have outgrown a lot of its initial shortcomings.
All right, that does it for our exploration of what could have been.
The TMS-9900 processor seems pretty strange to modern sensibilities.
Perhaps we're so acclimated to one way of doing things that alternatives look odd.
That said, its design comes with a number of benefits.
And a number of issues. Taken in the context of its progenitors,
the TI 900 series of minicomputers, we get an interesting case study of how vertical integration
can shape technology. It's a story we've seen before, namely with IBM, and I think it's comforting
to see it play out in another company. Anyway, this has run pretty long, so I
won't leave you with a rambling outro. I just want to close on this note. Standards only seem set in
stone from our current viewpoint. Just because something becomes a standard doesn't mean it was
the best option. So I think there's always value in re-evaluating our held standards.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another story of computing's past.
And hey, if you like the show, there's a number of ways you can support it.
If you know someone else who's interested in computing and history,
then why not take a minute to share the show with them?
You can also rate and review on Apple Podcasts. If you want to be a super fan, you can support the show through Advent of Computing merch
or signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction
of the show, and bonus content. You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then shoot
me a tweet. I'm at adventofcomp on Twitter. And as always, have a great rest of your day.