Advent of Computing - Episode 50 - 8086: The Unexpected Future
Episode Date: February 22, 2021
The Intel 8086 may be the most important processor ever made. Its descendants are central to modern computing, while retaining an absurd level of backwards compatibility. For such an important chip, it had an unexpected beginning. The 8086 was meant as a stopgap measure while Intel worked on bigger and better projects. This episode we are looking at how Intel was trying to modernize, how the 8086 fit into that larger plan, and its pre-IBM life. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
How much has technology changed since 1978?
Easy answer is a lot.
But let's zero in on one corner of this question.
Excluding everything else, let's look at how much home computing has changed since 78.
Now, there's some obvious low-hanging fruit here.
The internet is one great example.
In the last 40 plus years, we've gone from
basically no in-home networking to dial-up to broadband to Wi-Fi and fiber. I don't think
any of us can really imagine using computers without some type of network connection.
Graphics have also improved at an amazing pace. In 78, you got some text, maybe low res and low color graphics if you had a little
extra money. Today computers can push millions of pixels to giant displays, and really that's
just considered the norm. Even something as mundane as storage has seen radical shifts.
In the 70s and early 80s, consumers couldn't really afford hard drives.
You'd more commonly store data on floppy disks or maybe cassette tapes.
Then, cheap hard drives just became normal.
Even more recently, we've seen a shift towards cheap and reliable solid-state storage.
The bottom line is that home computing has changed radically.
Well, at least in most areas, it's changed radically.
Generally speaking, modern personal computers all use x86-based processors. That's a class
of compatible chips named after their progenitor, the Intel 8086. Of course, there have been
advances. x86 processors have gotten a lot faster, a lot more powerful, and a lot more flexible over time.
But their overall design traces back to the 8086, a chip first released in 1978.
Programs written for the 8086, these are binaries that are over 40 years old,
can still run on a computer built today with no modification.
Even with our fancy graphics, super fast internet, and solid state drives,
we're still tied directly back to a much, much older legacy.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 50, 8086, the unexpected future.
Now, I'd like to start off by saying, wow, I never imagined I'd be reaching such a big milestone, but here we are at episode 50.
If you've followed the show since the beginning, then you know just how far Advent of Computing has come.
Also, if you've listened to the whole archive, then let me be the first to apologize.
I swear I've gotten a lot better at this whole podcasting thing.
It's been nearly two years since I started the show. In that time, I've produced 50 full episodes, a pile of bonus episodes, even recorded a few interviews.
I've appeared on quite a few other shows myself, and written more words than I can ever hope to count.
I hope you don't mind, but this episode's going to start off with some more show news.
I haven't really shared this anywhere yet, but as of last month, Advent of
Computing is now fully patron-supported. That is, all my hosting fees plus some extra funds that I
can ferret away for getting a hold of texts and scans are covered by listeners. That's also
something that I never thought possible, and I'm beyond grateful to my patrons. If you aren't signed
up, then now's actually a pretty good time.
I've been putting together bonus episodes over on Patreon.
The episodes tend to be a little more lighthearted and strange.
Right now, I'm working up one on allegations that Vannevar Bush was the leader of Majestic 12,
a secretive government conspiracy to hide the existence of aliens from the world.
So, if you want to get in
on that, plus two other bonus episodes, I'll have links somewhere around the show notes.
With that out of the way, let's talk about today's episode and the Intel 8086 processor.
This is the next installment in my Intel series. So far, I've covered the first microprocessor,
the 4004, the follow-up to that, the 8008, and then the 8080,
Intel's first massive success in the market. At least in the microprocessor market. This has been
leading up to the 8086, which I'd argue is the most influential microprocessor of all time,
and it just happens to be my favorite microprocessor. It powered the IBM PC and
since has powered nearly every PC with very few exceptions. The 8086 was so
popular that it's spawned a huge extended family of processors.
Collectively these are called x86 processors. Chips in this family from AMD
Ryzens to Intel i9s and Xeons are all binary compatible
with each other. That means that software will run exactly the same on any of these chips.
They're also all compatible with the Intel 8086. A lot of the technology's changed,
but you can still run software written in the late 70s on one of
these more modern processors with very few problems. So understanding the 8086 and where
it came from is a good foundation for understanding current computers. For me, there's also a really
personal element here. I've been into computers in general, and programming specifically, for most of my life. It goes all the way back to learning
Lisp on my dad's old 286 clone PC. But I didn't really get into programming until high school,
and that's when I learned x86 assembly language. That's basically a step above machine code. In
assembly language, you work really close to the hardware itself. You manually deal with moving around numbers,
accessing memory, and controlling devices. It's difficult to use, but I think it's really
rewarding. To write a program in assembly, you have to know exactly how a computer works. You
have to be intimately familiar with the chip and the hardware. For me, learning assembly made me
a much better programmer. Stripping away any modern
niceties forced me to think a lot harder about what I wrote. It also helped me understand and
appreciate the 8086 processor and the IBM PC in a way that I didn't expect. Like I mentioned,
most modern processors trace their lineage back to the 8086, and most modern computers still follow
IBM's original design. Sure, there's a lot of differences, but fundamentally, a PC in 2021
works the same as the IBM PC from 1981. When I started developing the idea for Advent of Computing,
I knew where I wanted to start. The 8086 and the PC platform are something that I know really well,
and the whole 40 years of backwards compatibility is something I've always found funny and
fascinating. So I scripted out an episode on Intel's processors leading up to the 8086.
I even got as far as recording, but I ended up scrapping the episode because, well, it was pretty bad.
But the idea has stuck around with me, which is why I've been slowly putting together
my series on Intel.
The whole story is really fascinating and I think important, and I finally can do it
a little better justice.
With timelines lining up actually pretty well, I felt it was appropriate for episode 50 to round out my Intel series, and to come back to the vision that I started
the podcast with.
So welcome to episode 50 of Advent of Computing. This is going to be the start of a two-part
saga. Today, we're going to dive into the Intel 8086 specifically. How did Intel create the most important chip in
history? Perhaps more importantly, why did they create the 8086? And how do designs from
1971 live on inside every modern computer? Because this goes back a lot further than
the 8086 itself. Next episode, we'll dive into the IBM PC and how x86 processors took over the
world from there. Now, after all that preamble, you might be a little disappointed, but our
story doesn't start with the 8086. We have to build up to that. Instead, we're starting
off in 1975 with a totally different processor. Intel was in a strange position that year. The previous year, 1974,
Federico Faggin and Masatoshi Shima left Intel to found a competing company called Zilog.
Between them, the duo had a hand in all of Intel's previous processors. Crucially,
they had been the primary contributors behind the 8080, Intel's current flagship
chip.
In 1976, Zilog released the Z80 processor.
This new chip was fully binary compatible with the 8080, but with added functionality
and better performance.
Needless to say, this left Intel in a really bad position.
Their market share in the processor space was slipping away to Zilog.
To make matters worse, their best offering, the 8080, was totally obsolete compared to
the competition.
Now that all being said, Intel wasn't in danger of going out of business.
Far from it.
Processors were just one part of the business.
The company made all manner
of RAM, ROM, and other kinds of chips. That gave Intel some breathing room to tackle their
CPU problem. However, it may have given Intel a little bit too much space to be creative.
1976 came along and a new processor entered development. The initial name was the Intel
8800, which, at least in my opinion,
may have been a pretty bad name to go with. At the time, Intel named chips by shifting around
eights and zeros, so the 8800 fit this convention nicely. The fun part, or really the unfun part for me, is that Intel provided the processor for the Altair 8800.
So searching for the Intel 8800 just brings up information about the Altair computer.
There's no actual relation between the two.
By the time the chip made it into production, the name was changed to the iAPX 432. Still obtuse, but a little more unique. So just keep in mind
that moving forward, this chip goes by two very different names. Anyway, the 8800 project was
slated to be unlike anything done at Intel previously. In fact, the 8800 would be unlike
any microprocessor ever built.
Years after the fact, Stan Mazor described the project like this: "The 8800 CPU architecture started with a blank slate, was unbounded, and had no compatibility constraints with earlier processors."
The central idea was to redefine what a microprocessor could be.
No compromises, no backwards compatibility, and no looking back.
Up to this point, every Intel processor had been built off existing work.
The 8008 used chunks of the 4004's math circuits, even though they weren't really that related.
The 8080 was loosely compatible with the 8008. Breaking from that tradition would open the field for new technology and radically new ideas about processor
design. The other problem, at least as far as the 8800 project was concerned, was how previous
microprocessors functioned. They were extremely basic. They worked like a computer, but lacked the
capability and flexibility of larger systems. On large computers such as mainframes and
minicomputers, programmers had access to a host of much more advanced features.
One great example is my old standby of complicated software. Timesharing.
Dividing resources amongst multiple users is a really complicated task.
As computers evolved, mainframes started adding hardware specifically to help with those kinds of programs.
This ranged from dedicated memory management circuits to built-in parallelism.
Multiple processors could be housed in a single, massive computer,
thus allowing for fully isolated programs to run. In 1976, there was just no way to do this
with microprocessor-based systems. Well, you probably could, but it would take an unimaginable
amount of work. Microprocessors, the smaller side of things existed as their own
isolated islands of silicon. You had a chip that could run simple programs, but it didn't really
scale out well. If your code was slow, you couldn't wire up more processors and break up the task.
You just had to live with a slow program or keep rewriting it for the rest of your life.
A lack of sophisticated memory hardware also hampered what you could actually do with a
microprocessor. That, and the fact that these chips just couldn't deal with a whole lot of memory.
The 8080 could only address up to 64 kilobytes of RAM, so you better not write a program too big or try to load too much data.
The growing concern inside Intel was that if things continued, microprocessors would remain
siloed. Backwards compatibility, growing catalogs of software, and stingy clients could trap Intel
in an unending cycle of small incremental updates. Gordon Moore put it this way,
quote, At the time, we thought we had one more chance to establish a new architecture before the massive
software committed us unerringly to an evolutionary approach, end quote. The 8800 was poised to break
out of that cycle before it got too ingrained. Everything about existing microprocessors was
thrown out the window, and the 8800 team started with a totally blank canvas. Pulling from
contemporary mainframes, a design started to form. Initially, the 8800 started to be called the
micro mainframe, and with some good reason. This new chip took more cues from larger computers
than its integrated counterparts. Sounds ambitious, right? So what are the details
that make the 8800, or rather the iAPX 432, so different? Leaning on some kind of dumb
buzzwords here, we could say that the 8800 was built from the hardware up for
scalability first. It sounds kind of pretentious, but it is accurate. A single processor was designed
such that it could connect up to other identical 8800 processors. There's a deep level of support
for this, from hardware all the way up to instructions. The idea was that if you
needed more power, well, you just scale out. You add another 8800 processor.
Instead of being restricted to a single chip, a manufacturer could scale out as
much as they wanted to, or as much as they could afford, really. In theory, this
meant you could create a really formidable computer by just throwing
more and more processors into a box.
Add together enough 8800 processors, add in just enough memory, and theoretically it's possible to
compete with big iron machines. I'm sure the idea of an Intel-based mainframe was a tantalizing
possibility, but the design wasn't just to compete in the large realm. You could just as easily build
a computer using a single 8800. This kind of flexibility was unheard of, and this was just
one revolutionary aspect of Intel's new design. If done right, this would be a total revolution.
To get deeper into the 8800, we're going to need to get more technical.
There's a whole lot to processor design, especially in a chip this complicated,
so I'm just going to cover what I think is most relevant to our story.
So we need to cover object-oriented programming and stack-versus-register machines.
If your eyes are glazing over, then don't worry, just stick around for a minute and I'm going to try to clear things up.
A processor's native language is machine code.
It's the string of raw numbers that the computer can actually understand without outside help.
Going up a level, we get to assembly language.
My favorite programming language.
This varies greatly from machine to machine, but usually it just gives mnemonics to machine code instructions. Assembly makes it so that instead of bashing in a bunch of binary,
you can use slightly friendlier instructions like add or move. It's not user-friendly,
but it is a lot better than having to remember a bunch of numbers. Humans don't really do well with numbers.
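Just to make that concrete, here's a tiny sketch in 8086 assembly, NASM syntax, of the kind of mnemonic-to-number mapping I'm talking about. The bytes in the comments are the chip's documented encodings for these two instructions; the program itself is just a throwaway example.

    org 100h        ; a minimal DOS .COM style program, assemble with nasm -f bin
    mov al, 5       ; "put 5 in register AL" -- assembles to the two bytes B0 05
    add al, 3       ; "add 3 to AL" -- assembles to the two bytes 04 03
    ret             ; hand control back to the operating system

So instead of memorizing that B0 means load-the-AL-register, you get to write mov.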
The point here is assembly language corresponds directly to machine code. You have a whole lot
of control. Moving up a little higher you get more high level languages like C, Fortran, C++.
There's a lot of different options. But once you're using a higher level language,
you have to compile that code, which isn't one-to-one with machine code. The compiler
has to pull a lot of weight. So that means you aren't in direct control of the processor.
And added on to that, the processor doesn't really provide any special treatment for something written in, say, assembly versus C. An 8080 can't tell what language a program was originally written in.
It's just a bunch of numbers.
It just shows up as machine code.
A programmer using assembly language can tailor their code really well to the underlying computer.
You do have full control when you're using assembly.
So with a little care, you can make your code really efficient, and you can also pull off some
interesting tricks. But, and here's the big problem I deal with whenever I drop down into assembly
land, you don't get any help from the computer. The big sticking point comes down to data structures
and variables. Programming is fundamentally all about data, and variables work as a place to store and
work with data.
Going a little bit above just variables themselves, we get data structures, which give programmers
a way to organize variables.
A simple example would be a graphics program.
To plot graphics on a screen, you need some way to talk about pixels and locations.
You need some convenient place to store an x and y value.
In a language like C, you just define a structure.
Call it point.
Say it describes two integers called x and y.
Then you can create as many of these new points as you want.
You can pass them around like they're any other number and never have to think about how the
computer actually deals with that data. But a processor has no idea what a data structure is.
To a computer, each one of your point structures is just some numbers in memory somewhere.
There isn't anything special or sacred about it.
The machine code that your compiler produces has to deal with turning that random location in memory into nice x and y values that you can actually work with.
The same goes for any kind of variable, even something as simple as a floating point number
or a string of characters.
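To give a rough feel for that packing and unpacking, here's a little hand-written sketch in 8086 assembly, NASM syntax. The point label and the pixel values are made up for the example; as far as the processor is concerned, it's just four bytes sitting at some address.

    org 100h
    mov ax, [point]     ; grab the first 16-bit word -- we've decided to call it x
    mov bx, [point+2]   ; grab the word two bytes later -- we've decided that's y
    add ax, 10          ; nudge the point 10 pixels to the right
    mov [point], ax     ; write x back; the chip just sees math on a number in memory
    ret
    point: dw 40, 25    ; four bytes of memory; the x and y meaning exists only in our heads

A C compiler generates roughly this same kind of code every time you touch a struct, you just never have to look at it.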
When you write enough assembly, you run into this all the time. I end up spending a lot of lines
packing and unpacking chunks of memory into useful data. As a practitioner of assembly language,
I have full control over that process. So I do some tricks. I know the hardware well enough that
I can get by and I can make it pretty fast.
But you lose that control when you use a higher level language.
You have to trust that your compiler is doing those tricks right.
This can lead to all kinds of problems, both in theory and in practice.
Since the underlying computer doesn't know how to treat fancy data structures,
it doesn't bother protecting
them from dangerous operations. You can store a string, then try to add a number to it and
wind up with garbage. A person, a fancy programmer like myself, knows that you can't add letters and
numbers. But to a computer, it's just a memory location and another. It just does the operation and doesn't really care what it means.
That little snafu can become a much bigger problem once you're dealing with multiple
concurrent processes.
As in, a timesharing system.
Mainframes of the era solved this problem with special memory management hardware.
It isolated and protected chunks of running memory,
and that kept running programs from interfering with each other.
But that just protects you from someone else's code,
not your own dangerous mistakes.
And not every language or compiler is smart enough
to save you from making mistakes.
Performance was another issue, at least theoretically.
This is kind of an old computer
myth that I haven't been able to find hard numbers on. Historically, programmers have believed that
compiled languages are slower than writing assembly by hand, but that gets a lot more
complicated. It's not a direct result of using a compiler or not. As programming languages became more sophisticated, data structures also blossomed. In the 70s the big new thing
in software was object-oriented programming. Objects are basically a
really fancy and generic data structure. You can define an object once, then later
on create variables based off that initial template. You can also do things like object inheritance, but for our purposes, it's
enough to look at objects as the ultimate in data structures.
They're more complicated, larger, and more powerful.
They made programming a lot easier.
They give developers a much improved way to work with data and just a better way
to manage information in general.
But it also compounded the problems that I brought up earlier. Microprocessors of the age didn't know the difference between a pile of bits and a string, or a pile of bits and an object.
So the compiler had to produce code that turned piles of bits into objects and vice versa.
Every time you access a variable,
some code needs to run. Compound that enough and the time should add up. It's like one of those made-for-TV infomercials. Tired of bashing bits? Objects keep getting lost? Have you ever thought
there has to be a better way? Engineers inside Intel did think there was a better way. The
solution that the 8800 team envisioned was to kind of flip the status quo around. They designed the
8800 with hardware support for a host of data types. That included everything from floating
point numbers up to objects. Now, I cannot overstate how radical of a shift this was. A programmer could
just tell the 8800, oh yeah, treat this chunk of memory as a floating point number. And this other
section, that's an object that's formatted like this, keep it safe. The processor handled turning
raw data into meaningful structures. And it even dealt with memory protection natively.
The 8800 team was basically flipping the problem on its head.
Don't try to solve it in software, just make the hardware better.
One of the huge benefits, at least one of the huge theoretical benefits,
was that the 8800 would be uniquely specialized for high-level languages and also multitasking. The plan was
a programmer for the platform wouldn't ever need to write a line of assembly language.
By supporting high-level concepts in silicon, a compiler had to do a lot less work.
Low-level support for complex data structures meant that a program in C, or for the era more likely Ada, wasn't
that far removed from the hardware.
One of the final pieces that made the 8800 totally different was that it wasn't a register
machine.
Now, this is the other fun technical bit that we need to get into, and we're going to be
talking about this more later on in the episode, so it's a good place to linger.
In a register machine, you have the processor that does all the number crunching and logic.
Memory, which stores data and gives you a space to work.
And then, rounding out the trifecta, are registers.
These are really small chunks of storage that are in the processor itself that are used for immediate operations.
It varies from system to system a little bit, but in general you need to load a number into
a register to operate on it.
There are exceptions, but you usually don't work directly on memory.
The reason for this comes down to how processors are designed.
Registers are wired more directly into math and logic circuits, so they're faster to access than memory. And in many cases, you can't interface your math and logic circuits directly to
memory. It has to go to a register. There are some more details, but I don't want to get too
bogged down in that. Computers are, by and large, register-based machines. Everything from the lowly 4004 all the way up to big IBM mainframes uses registers.
It's just a simple and effective way to make a computer.
And at least in my opinion, register machines make logical sense.
However, my view might be a little biased, so, you know, don't take that as gospel. The other option for processor design
is what's called a stack machine, and that's what the 8800 was designed as. Stack machines are,
by and large, much less common than register-based computers. Now, don't get totally fooled by the
name. A stack machine can still have registers. You can't fully get away
from needing some intermediary between memory and processor. The key difference is stack machines
don't really expose or worry about their registers, at least when it comes to programming.
All you really have is the processor and memory. The stack refers to how memory is handled.
A stack machine maintains a pointer to the top of some pile of data in memory. You can push a new number onto the stack or pop off the number from
the top. From there, instructions are built around traversing and manipulating that stack of numbers.
Contrast that to a register machine, where instructions are usually based around moving
numbers around registers and manipulating data inside registers.
Stack machines are supposed to be easier to implement for the chip designer.
The overall model is just simpler than a register machine.
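To give a loose feel for the difference, here's the same trivial addition written in the register style that the 8086 uses, NASM syntax, with a sketch of how a stack machine would phrase it in the comments. The stack-machine mnemonics are purely illustrative, they aren't any real instruction set.

    org 100h
    mov ax, [a]         ; register style: load a into a named register
    add ax, [b]         ; add b to it -- you say exactly which register is involved
    mov [sum], ax       ; store the result back out to memory
    ret
                        ; a stack machine would say something more like:
                        ;   push a     (put a on top of the stack)
                        ;   push b     (put b on top of a)
                        ;   add        (pop both, push their sum)
                        ;   pop sum    (store whatever's on top back to memory)
    a:   dw 2
    b:   dw 3
    sum: dw 0

Same math, but in the stack version you never name a register at all.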
All of that together, stack-based design plus hardware-supported variables made the 8800
unlike anything else on the market in 1976.
Intel was planning this chip as a design for the future, something unfettered by earlier
processors.
I think it's plain to see that, if executed properly, the 8800 could have changed the
world.
It was a totally new take on what a microprocessor could be and how you would use a microprocessor.
On paper, it solved dozens of problems.
But the 8800, and specifically the goals and management of the project, had a major flaw.
Putting things lightly, the 8800 was ambitious.
Intel wanted nothing less than a revolution. That was the plan from the beginning.
So the 8800 team was taking their sweet time.
This new chip was supposed to be truly next-generation technology.
It was something that was supposed to last Intel long into the future, so they figured
it was fine if the project ran a little long. The investment
seemed worth it at the time. I think that's perfectly reasonable. Put in a little extra
time in the present for a big payoff in the future. But it's never that simple. There are
always outside factors. Zilog was one of those very big outside factors, but there were a lot of others.
Motorola is another example. They just entered the market with their 6800 processor.
More manufacturers were jumping on the bandwagon. Microprocessors were going to be a huge deal in
the coming years. That was easy to see. No one wanted to miss out on the opportunity.
Embarking on a huge project to create a revolutionary
processor was a fine idea, but Intel also had to do something in the short term to stay relevant.
They weren't in danger, not yet, that's an important part here. But if they bet everything
on the 8800, the situation could easily break bad. This was the state of Intel in 1975 or so, when Stephen Morse joined
the company. Morse was an electrical engineer with a bit of a flair for small computers.
Prior to Intel, he had worked at Bell Labs, IBM's Research Center, and most recently, GE.
While at General Electric, Morse was involved with a project to build a single-board computer,
an entire system that could fit on a single printed circuit board. But he wasn't on the
hardware side of the team, Morse was instead a programmer. So by the time he entered Intel's
offices, he had a firm understanding of how to program microprocessors, a talent that wasn't yet
common. Almost immediately, Morse would be swept up in the 8800 hype, but he wouldn't join
the project. Instead, the 8800 would set the background for something new. Quoting from an
interview in PC World with Morse, quote, I had just completed an evaluation of the 8800 processor
design and written a report on it. My report was critical and showed that the processor would be very slow. End quote.
This brings up another issue with the 8800, as both a project and a product.
All goals were focused around creating a more advanced chip.
Performance wasn't a concern, since,
you know, you can just scale out an 8800-based computer. That freed up the team to make huge advancements in their design, but at a big cost. A single 8800 processor wouldn't be very fast.
In Morse's approximation, that was a bigger issue than the rest of the team
thought. And he was right. For any other processor, speed was a core concern. Sure, you could dismiss
Morse outright, just say that he was in the old mindset, he wasn't seeing the future of processor
technology. That, or he was just taking a realistic approach. Either way,
Intel management saw this as an opportunity to hedge their bets. In 1976, a new processor project
started up, a stopgap between the 8080 and the eventual 8800. This new chip was named the 8086,
and Morse was core to its development.
Of course, the team was more than just Steve Morse,
but he was responsible for the instruction set and general design of the chip.
Just him.
This was a pretty big divergence from earlier processor projects, and in more ways than one.
The 8080 is a good example here. That chip was
designed by Federico Faggin and Masatoshi Shima. Both were engineers with experience in integrated
circuit design, and both had experience designing microprocessors specifically. They worked together,
both working on the general architectural design and the silicon level implementation.
While designing the overall 8080 architecture, the duo took actual implementation details into
account. They knew how their choices would affect the final silicon on the chip, and that helped
guide their design. So why did Intel task Morse with designing the 8086? I think there's a few
reasonable answers here. It would be easy to just assume that, you know, Intel didn't really
care about the 8086. The 8800 was the big project, so the 8086 was just some afterthought.
But I don't think that's the most convincing answer. I'd argue that putting
Morse in charge of the 8086's design represents a shift in Intel's priorities. It's the same shift
that we saw in the 8800 project. Also, I apologize for all the eights and zeros, but I can't avoid it
sometimes. Microprocessors were becoming a more fully realized technology.
To use some more buzzwords, the ecosystem around microprocessors was maturing. When the 8080 was
designed, there were only a handful of microcomputers on the market. People just weren't
designing computers around microprocessors yet. And that also meant that there wasn't much software written for microprocessors.
Then, in 1974, the Altair 8800 was released, right after the 8080 hit the market. Similar
systems followed. With hobbyists and some businesses adopting microprocessors, software
started to flow. So programming microprocessors became more viable as more practitioners were
exposed to the technology. This meant it was a really good idea for Intel to pay special
attention to how their new chip would be programmed. The 8800 is the poster child for this approach.
The chip was designed from the ground up to support high-level programming languages.
Ideologically, the 8086 took some notes from this bigger project.
The stopgap chip wasn't specifically designed for fancy object-oriented languages.
But putting a programmer at the head of the project would go a long way towards making an architecture developers could enjoy.
In Morse's own words,
"Now, for the first time, we were going to look at processor features from a software perspective. The question was not, what features do we have space for, but what features do we want in order to make the software more efficient?" The other important factor here is that Morse wasn't totally alone on the project.
He dealt with the majority of the 8086's architectural design,
so the instruction set and how it handled and stored data.
He was given a little backup from Bruce Ravenel, another Intel programmer.
On the hardware side of things were Jim McKevitt and John Bayliss.
I can't find much about the rest of the team behind the 86, but there is some specialization
going on here.
Intel was taking a route that deviated from the established norm, but they weren't throwing
programmers at silicon-level design.
That would be reckless.
Creating a stopgap was one of Intel's larger goals, but that's pretty nebulous on its own. There were
other targets used to guide the project. With the 8080 on the market for a number of years,
Intel had received good feedback from customers. One of the main complaints came down to memory.
The 8080 could only address 64 kilobytes of memory. That's not really a lot to work with. And while restrictions do breed
creativity, this specific memory restriction also severely limited software complexity.
We can do a little back of the napkin calculations here. Instructions for the 8080 were variable
length. A single instruction ranged from one to three bytes of data. Let's just average that to 2 bytes per instruction.
That means once assembled, each line of assembly language comes out to 2 bytes of machine code
stored somewhere in memory.
Assuming you have no data, that means a program for the 8080 could only have 32,000 lines
of assembly language.
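To spell out that napkin math: 64 kilobytes is 65,536 bytes, and 65,536 divided by 2 bytes per instruction comes out to 32,768 instructions, which is where that roughly 32,000 figure comes from.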
That may sound like a lot, but once you add variables and any
other data you may want to work with, that number drops significantly. There's a hard limit on what
you can do with an 8080. What's worse, that limit's pretty low. So the 8086 had to have more memory to be worth the trouble.
The goal was set at 128 kilobytes.
As a minimum, at least.
The other big project restriction came down to compatibility.
Intel wanted the 8086 to be compatible, at least on some level, with the 8080.
Now this is where we get into something that's wonderfully weird about the 8086.
Remember that the 8800 was all about breaking the mold, no compromises, and no compatibility.
But the 8086, that was a stopgap, something to keep customers working with Intel while
the next generation of chips were completed.
And thus, the next link in a chain is formed.
Morse opted to go for a somewhat loose level of compatibility.
8080 assembly language code could be converted automatically to 8086 assembly language with
the help of a program. So the 8086 had to be similar enough to the 8080 to allow for simple conversion.
Every instruction on the older 8080 had a counterpart on the new chip.
They were often given a different name, but they served the same function.
Registers and how memory was dealt with were also broadly similar.
In general, we can look at the 8086's architecture as an improved
superset of the 8080's architecture.
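To give a flavor of what that translation looked like, here's a hedged sketch in NASM-style 8086 assembly. My understanding is that the conversion tooling mapped the 8080's registers onto 8086 counterparts, the accumulator A onto AL, the HL memory pointer onto BX, and so on; the listing below is my own illustration of that idea, not actual output from Intel's converter.

    org 100h
    mov bx, buffer      ; stand-in for the 8080's HL memory pointer
    mov al, 42          ; was 8080 "MVI A, 42" -- the accumulator A maps onto AL
    add al, ch          ; was 8080 "ADD B" -- the 8080's B register maps onto CH
    mov [bx], al        ; was 8080 "MOV M, A" -- memory at HL becomes memory at BX
    ret
    buffer: db 0

One old line in, one new line out, which is exactly the kind of mechanical conversion we're talking about.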
And here's the fun part. The 8080 was also roughly compatible with the much earlier 8008
chip. This was a similar type of compatibility. The 8080 was an extended and souped-up 8008. Older source code could be
converted to run on the new chip. But it goes deeper. The 8008 was really just a single-chip
implementation of an older Datapoint terminal. The point here is that the 8086 has a hard and fast lineage. Design elements from this random 1969 terminal found their way into Morse's project, maybe
even without him realizing it.
This is the exact kind of super legacy support that Intel was trying to escape via the 8800.
Historic baggage added hidden restrictions to the 8086's design. That's probably enough of a preamble.
We've talked about the context and the restrictions around the development of the 8086.
So what did Morse actually create?
What did these forces lead to?
What did it look like?
You can probably guess at some of the details already.
The 8086 was designed as a register machine, just like
every Intel processor before it. There wasn't any way to break that paradigm and keep compatibility.
But that doesn't mean it was exactly the same tired and limited design. Previous Intel chips had been 8-bit. That is, their registers were 8 bits wide and their circuits were built to operate on 8-bit numbers.
This fancy new chip was 16-bit. That's double the width.
So why does the number of bits matter here? Well, that turns into a surprisingly moving target.
Simply put, it makes the 8086 a more capable and flexible processor.
Despite being a concrete number, the bitness of a processor isn't always cut and dry.
The best way I can think to describe it is that the number of bits tells you what kind
of data the processor was built to handle.
For the 8086, being 16-bit means that each register, the tiny chunks of immediately operable data,
are 16 bits wide.
Each register can hold a value from 0 all the way up to 65,535, better known as FFFF
in hexadecimal.
Contrast that to the 8080, whose 8-bit registers can only count up to 255, and you should see
there's a big difference here.
Math operations on the 8086 also run using 16-bit numbers.
An add, multiply, divide, or subtract each accept 16-bit numbers as operands.
That part of the 16-bit life of the 8086 should be pretty clear, but there's also
a little complication here. You can't do everything you want in 16-bit land. Sometimes,
either for convenience, practicality, or compatibility reasons, you need to deal with
8-bit wide numbers. To accomplish this, Morse does something new,
at least for Intel. His new 16-bit registers can be accessed as 8-bit registers, and instructions
will treat these smaller registers accordingly. This is a strange but important quirk, so I think
it's worth mentioning. The 8086 has four main registers, AX, BX, CX, and DX. There are others,
notably SI and DI, which are used as pointers, but let's stick to those first four for now.
Each of the main registers is subdivided into two 8-bit registers. So for AX, you get the full 16 bits in all its glory. Or you can use AL to access
the lower 8 bits, or AH to get the higher 8. In this way, you can still operate on 8-bit data
using your fancy new 16-bit chip.
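Here's what that looks like in practice, a minimal sketch in NASM syntax with arbitrary values:

    org 100h
    mov ax, 0x1234      ; fill the full 16-bit AX register
    mov al, 0x99        ; overwrite just the low byte -- AX is now 0x1299
    mov ah, 0           ; clear the high byte -- AX is now 0x0099
    add al, ah          ; 8-bit math on the halves works like any other register
    ret

One physical register, addressable as one 16-bit value or two 8-bit ones.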
The other note about these registers is the naming convention. The 8008 followed the same standard. It had registers called A, B, C, and D. So yeah, there's a little bit more of the family tree showing up. The other
important reason that the 8086 was 16-bit comes down to how it connects to other chips. That means
I.O. and memory. In electronics parlance, this is called a bus, because, you
know, it takes signals from a stop to a destination. Funny, right? It's like a bus. Anyway, on
the 8086, external buses, that is, the buses used to link up to other chips and components,
are at least 16 bits wide. And this isn't some software
thing. Buses are very physical. The data bus on the 8086 is composed of 16 physical pins.
Those are the little legs that come off the chip itself. This means that the 8086 can read 16-bit
wide numbers from the outside world directly into registers. The key word is at least.
The memory bus is a special case. It's 20 bits wide. That allows the 8086 to access up to one
megabyte, a truly massive amount of memory. Morse pulled some trickery to get this to function,
but he takes a note from earlier designs.
A memory chip just sees itself as a big list of numbers. Each item in that list has a unique address. These addresses start at zero and go all the way up to however large the chip is,
so in this case, one megabyte at the most. To access a value in memory, you tell the memory chip which address you want,
then if you want to read or write to that address. So addresses are important. Well,
this would normally be where we hit a problem. The 8086 sports 16-bit wide registers.
It can talk about 16-bit numbers pretty easily, but its external memory bus is actually 20
bits wide.
That's too much for one register, so how did Morse do it?
The trick is called memory segmentation, and while it has a similar name to techniques
used on mainframes, it functions in its own weird way. Essentially, the 8086 sees memory as chunks of
64 kilobytes of data. Those are called segments. To access memory, you specify a segment and then
an offset within that segment. The final piece of the puzzle is a set of segment registers. These
are special registers that Morse added just for specifying
which segment you want to access. By using two registers, a programmer can then access the full
one megabyte of RAM. To make things easier, Morse also built the 8086 to default to using certain
registers for certain segments during operations, but that's a different conversation.
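To put rough numbers on it: the 8086 builds the 20-bit physical address by shifting the segment value left four bits, in other words multiplying it by 16, and then adding the offset. Here's a minimal sketch in NASM syntax; the segment and offset values here are arbitrary.

    org 100h
    mov ax, 0x1234      ; an arbitrary example segment value
    mov ds, ax          ; you can't move an immediate straight into DS, so it goes through AX
    mov bx, 0x0010      ; a 16-bit offset within that segment
    mov al, [bx]        ; reads physical address 0x1234 * 16 + 0x0010 = 0x12350
    ret

Two 16-bit registers, one 20-bit address.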
There's one last technical detail that I want to touch on before we continue.
You see, the 8086 did kind of support one special data type.
When designing the chip, Morse added functionality for string manipulation.
Now, this is a weird case for a number of reasons. When I say string manipulation,
I don't really mean what you'd expect if you're a programmer. The 8086 doesn't have some native
way to combine strings of characters or compare strings or even define strings itself. But it does
have some helpful instructions for loading or storing string data.
One example is the set of load instructions.
Those let you load a chunk of memory into a register and then increment the pointer you are using.
So it gets you to the next memory location.
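Here's a small sketch of that in NASM syntax, using the 8086's LODSB instruction; the label and data are just made up for the example.

    org 100h
    mov si, message     ; SI points at the start of our chunk of string data
    cld                 ; clear the direction flag so SI counts upward
    lodsb               ; load the byte at [SI] into AL, then bump SI forward by one
    lodsb               ; AL now holds the second byte, and SI points at the third
    ret
    message: db "HI", 0

Nothing here knows it's text, it's just a convenient load-and-advance in one instruction.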
There are a handful of these string instructions that just make handling this other type of data a little bit
easier, but it's not a full hardware implementation of data types like the 8800 supported. It may
seem like a small inclusion, but the 8086's string instructions actually speak to how Intel was
viewing this newly emerging market. One of the key reasons that Morse included string instructions was
simple. The Z80 had very similar functionality. Intel was trying to compete, so it took notes
from its competition. Putting all of this together, we get a weird chip. The 8086 was
far removed from Intel's grand ambitions of some revolutionary processor for the future.
Sure, it was meant as a stopgap product, but it went against everything that Intel wanted out of a new chip.
It was essentially the anti-8800.
The overall project was also completed at a breakneck pace.
Eighteen months after design work started, everything was finalized and Intel was ready for production.
In June of 1978, the 8086 was officially released.
And this brings us to a somewhat listless period for the chip.
The chronology here becomes pretty important, and I'm going to have to
introduce one more chip. IBM released their personal computer, that's THE PC, in August 1981.
The PC was built around this other chip, the Intel 8088. This is a little complication that
I don't really like because it gets nitpicky, but it is important to discuss for things to make sense later on.
So we're back into the realm of a bunch of eights and zeros.
Anyway, the 8088 is basically the same chip as the 8086.
People may get mad at me for saying that, but bring it on.
For nearly all purposes, the chips are identical.
They run the same code.
They have the same register layout.
To a programmer or a user, there is no, I repeat, no discernible difference besides the name.
The 8088 represents the start of the whole class of 8086-like processors, usually just
called x86 processors.
The only difference is the external data bus size.
The 8088 uses an 8-bit wide data bus instead of the 8086's 16-bit bus.
This means that the 88 can be used with parts designed for 8-bit processors.
The difference here only matters on a hardware design level.
We're probably going to talk about this more next episode when we get deeper into the IBM PC.
But for now, let's not worry about this too much.
They're very close chips. The 8088 came out in
1979, and there's a smattering of PC-like computers that use either one. So just keep that in mind
somewhere. So moving on, we have this weird period of about three years between the 86's release and the IBM PC rocketing the chip
into the mainstream. And this is what I mean by a listless period. There isn't any one huge project
that uses the 8086 during these years. There isn't a breakout hit. The chip exists, it's widely known about and widely publicized,
and it slowly finds uses. There just isn't some explosion, it's more like a slow boil for a while.
But that's not to say that there weren't uses for the 8086. In kind of a strange twist,
the first user actually came from outside Intel. Before release, Xerox PARC got a hold of a set of prototype 8086 chips
for an internal project called the NoteTaker.
This is something that's come up on the show before,
but it intersects really nicely with this episode.
The NoteTaker was one of Alan Kay's many visionary attempts
at creating a portable personal computer.
The best word I can use to describe this particular computer is breathtaking.
Kay had previously contributed to the Xerox Alto, that's one of the first computers designed
to host a graphical user interface.
The NoteTaker was an attempt to scale down that system into a portable form factor.
This ambitious computer ran a
fully graphical interface. It was operated by a mouse and keyboard, and even supported
networking. Everything was housed in a case about the size of a modern desktop PC, and
that included a tiny CRT display. Running the show was, of course, the 8086 microprocessor.
This should be a slam dunk for Intel, right?
Even before launch, the chip was powering the computer of the future.
Well, the issue is, things didn't work out.
Kay was never able to get the NoteTaker very far off the ground.
A series of prototypes were completed, and the computer worked exactly as intended,
but Xerox management just didn't really see the potential. The project was scrapped in either late 78 or early 79, and it never saw the light of day. One side effect was that Kay and his team
were most likely the first people outside of Intel to use the 8086. It turns out that
Kay wasn't the biggest fan of the new processor. In a later essay, Kay described the 8086 as,
quote, a 16-bit chip with many unfortunate remnants of the 8008 and 8080, but with barely
enough capacity to do the job. We would need three of them to make up for
the Alto, one for the interpreter, one for bitmapped graphics, and one for I.O. networking, etc.
End quote. Now, that's not the most glowing review. The NoteTaker existed somewhere beyond the cutting
edge of computer technology, so I don't think any microprocessor
in that year would fully fit the bill. But Kay does bring up a really good point,
and one that will haunt Intel. The 8086 came with a lot of baggage from earlier chips.
So the NoteTaker was basically a dead end for the 8086, but there were a few more early routes carved out.
One early system to use the new chip was the SDK-86, a system development kit released by
Intel themselves. Once again, this brings us into territory I've touched on before,
but not in great detail. One of the ways Intel presented new chips was via development systems.
These were simple computers that showed off a processor and its support chips.
The practice started off way back with the release of the 4004.
The SDK-86 was the model built for, what else, the 8086 processor.
The goal with these SDK computers was to introduce new hardware to companies, manufacturers,
students, researchers, really just anyone interested in Intel's latest and greatest.
The computer came with reference material, software, and a detailed explanation of the
entire system. It was a way to become familiar with the new processor and, hopefully, would
drive third parties to start using the
8086 in their own hardware.
The computer itself was extremely barebones.
It's a single board computer, meaning everything is mounted onto one large printed circuit.
You have the 8086, interrupt and I.O. chips, RAM and ROM, plus a keyboard and a small display
for loading data manually into the system.
There's also a large part of the circuit board that's left open with headers for soldering in
your own components. The SDK-86 was a good way to get up to speed with the 8086, but it's not a
useful computer. Really, it's closer to an engineering sample, it's not something for general purpose use.
Another outlet for these chips was S100 and Multibus computer boards.
These are almost two sides of a similar coin.
Multibus was a standard for modular industrial computers,
whereas S100 was a standard for modular hobbyist computers.
The more relevant side, by far, is the S100 systems
here. In short, S100 was a bus standard based around the original Altair 8800's design.
S100 computers were composed of a large backplane that had just a bunch of 100-pin buses wired
together. The active components were held on cards that slotted
into this backplane. It's kind of a ham-fisted approach to expandability that I find actually
pretty charming. The pinout of the 100-pin bus was standardized, so anyone could build a new
card by following a simple spec. The beauty of the S100 design is that the processor, the heart of the computer,
is housed on one of those cards. So sure, you can add more memory or I.O. options by building some
new cards. Or if you're more ambitious, you can create a new processor card. This meant that S100
computers were able to adapt to new technology a lot quicker than other early computer systems.
And with backing from a growing legion of hobbyists and early adopters, S100 computers
stayed up to date, at least for a while. There were a number of 8086 processor boards built
around the S100 standard. As far as I can tell, the earliest was developed by Tim Paterson at
Seattle Computer Products.
Shortly after the 8086 was released, Intel started hosting seminars on the new processor.
Patterson attended one of these traveling shows in the summer of 78.
Taken in by the technology, Paterson started designing an S100 processor card for the 86.
In May of 1979, Seattle Computer Products debuted the new card, complete with a port of Microsoft BASIC. Now, you may recognize those names. At least Tim Paterson should sound familiar.
In 1980, Paterson wrote QDOS, later renamed 86-DOS. This was the first operating system,
the first disk operating system, that is, that was made for
8086-based computers, and it was designed on one of these S100 bus computer cards.
Pretty shortly after release, it was licensed and then bought up by Microsoft, where it was
transformed into what we know today as MS-DOS for the IBM PC. So once again, all roads either lead to a dead end or
IBM when it comes to the 8086. And that's really what makes this period between 78 and 81 so
awkward. The 8086 was getting used. Seattle Computer Products was just one company making
S100 boards with the new processor.
There were also a handful of home computers built around the 8086 prior to the PC.
But when the chip hit the market, there just wasn't an immediate home for it.
At least, not quite yet.
Alright, this brings us to the end of today's episode.
But the story isn't over.
Really, we're just at a halfway point.
The 8086 started off as a stopgap, and its early years make that abundantly clear.
Intel's 8800 project was meant to be the future of the company, the future of microprocessors,
while the 8086 was just supposed to keep customers in the Intel fold for a few more years while a bigger and better
chip was completed. From the outside, it looks like everything was going to plan, right? The 8086
was there, it kept up interest and kept Intel relevant, but it wasn't exploding.
One of the more interesting sources I dug up while preparing this episode is a book
called 16-Bit Microprocessors, published in 1981.
It's a description of the current state of the art in silicon, describing all the
16-bit microprocessors on the market at the time.
It explains hardware that used those chips and
where to source parts and software. There's even an entire chapter dedicated to the 8086.
The text introduces the chip like this, quote, it is and will be a very popular microprocessor,
simply because of the tremendous amount of support, both hardware and software, that Intel and its second sources will give it. End quote. Notable in that excerpt,
and the whole chapter, is an utter lack of IBM. Support for the 8086 will be good. Soon.
But not yet. I can't get an exact publication date for this book. It has to
have been only a matter of months before the IBM PC was announced. After August 12th, 1981,
the 8086 would be on a path to unprecedented success. It wouldn't just overshadow the
competition. It would even overshadow Intel's plans for the future. But in the days leading up
to the PC, there were no warning signs. The 8086 was completed. It was just waiting for the right
home. Thanks for listening to Advent of Computing. I'll be back in two weeks time with the conclusion
of our saga. That episode's going to cover Project Chess and the IBM PC.
And finally, how the 8086 gets a home
and becomes popular.
And hey, if you like the show,
there are now a few ways you can support it.
If you know someone else
who would be interested in the show,
then why not take a minute
to share it with them?
You can also rate and review
on Apple Podcasts or really
anywhere you listen to podcasts.
And if you want to be a super fan, you can support the show directly through Advent of Computing merch or
signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction
of the show, and bonus content. You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead
and shoot me a tweet. I'm @adventofcomp on Twitter. And as always, have a great rest of your
day.