Advent of Computing - Episode 16 - 4004: The First Microprocessor
Episode Date: November 4, 2019

Intel is one of the dominant forces in the computer industry today, and it may be best known for its line of microprocessors. These chips have powered computers going back to the early days of microcomputers. How did Intel become so entrenched in the field? Well, it all started with the 4004 CPU, the first "one-chip" computer. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode: 1971: Intel 4004 Released
Transcript
How can it be that computers nowadays are just so darn small?
If you stop and think about it for a minute, then it really is some kind of miracle that we can cram an entire computer into a package that's smaller than a postage stamp.
It's such a common sight today that it's really easy to gloss over.
But there was a time, somewhere at the beginning of the 1970s, when that change in size was so dramatic,
that it would fundamentally change how computers were looked at and used.
Now, oftentimes, we use the terms microprocessor and CPU interchangeably. But getting that micro
part added on was a huge leap forward, and it didn't happen in small steps either. It happened in the
space of only a few years. Between 1969 and 1971, computers would change. In the 60s, you'd be lucky
if you could get a small computer that weighed in the neighborhood of 300 or so pounds. But by the
middle of the 70s, you could buy a computer that weighed less than 20.
Even better, these newer systems were powered by a wafer-thin slice of silicon that you could fit on the tip of your finger.
And one of the large driving forces behind this shift was a new company called Intel,
one that's still in the computer game today.
So how did Intel manage to pack a whole computer into such a small space?
To explain that, we're going to need to look at the story of the first microprocessor,
the 4-bit 4004 chip.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 16, 4004, the
first microprocessor.
Now for me, this is going to be a little bit of a throwback episode.
When I was first working on developing the podcast, I actually ended up making a good
deal of content that was never released.
Part of that was an episode 0, so to speak, that I decided would never see the light
of day. For me, it was a dry run just to see if I could get the process down to start off with.
That episode was on the lineage of the x86 instruction set, and well, it's not really that
good. The content itself was super dense, and I just hadn't had any practice under my belt yet. So I decided it would be in my best interest to leave it on the shelf, but maybe come back to it later. Part of the reason for that
is because I genuinely love the x86 processor family and I know that sounds pretty dweeby but
hear me out for a minute. I learned assembly language on an 8086.
That was the first time that I ever got down to the nitty gritty of what a computer really was.
It helped me understand computers better, and it made me a better programmer.
And I had fun the whole time.
And even more than that, I know that I'm not alone and that a lot of people have had a similar experience.
So I knew that when I came back to doing an episode about the x86, that I wanted to do it the proper justice.
This episode is going to be the start of a journey that will tell the story leading up to the 8086 processor.
That's the base for modern day CPUs used in desktop PCs.
The tale is going to span quite a few episodes so I'm not going to be doing it back to
back to back, but instead I'm going to throw them in here and there in the feed.
And today we're going to start at the root of the x86 lineage, or it might be better
to say the dirt underneath the roots. We'll be looking back at the story of the 4004, the first mass-market-producible microprocessor,
and one of Intel's first real success stories.
And even though this chip may be so far removed that we no longer see its influence in modern computers,
it will act as the springboard that will launch Intel down the path to dominance.
So what was the 4004?
And how did a scrappy startup create one of the most influential inventions in human history?
There are two key problems that have been around since the beginning of computing.
Those are size and cost.
No matter what the current state of technology,
computers have always been too big and just
too expensive.
Or at least, enough people think so that progress keeps chugging along.
And there was no better time for this mantra than the 1960s.
The first microchip had only been created in 1958 and that really opened the floodgates
for miniaturization of computers.
Now, the word microchip gets thrown around a lot, but what does it actually mean?
Well, I think the more evocative term to use here is integrated circuit,
that's often abbreviated down to just IC.
Prior to ICs, any circuit would have to be made up of discrete components,
those are your resistors, capacitors,
or transistors, that are mounted onto a board and connected with either wires, printed traces,
or solder.
While that's fine for most applications, you run into problems for complicated designs.
Space constraints are always an issue with these kinds of circuits, since you have to
make room for all the separate components.
A single resistor may be small, but when you have hundreds of them, it really
starts to add up. It also takes a lot of work to make a complicated circuit, even if you
have fancy printed circuit boards and all your parts prepared ahead of time. You can't
really turn these out at large scale very quickly. This would all change in 1958, when Jack Kilby, a researcher at Texas Instruments, invented
a method for creating an entire circuit on a semiconductor wafer.
That's the integrated part of integrated circuits.
The entire circuit lives on a shared wafer of silicon.
Instead of using discrete components, each component is either etched or printed directly onto that wafer. The advantage for ICs is that they can be made much, much
smaller than traditional circuits. And depending on the tooling and the machinery, ICs can be made
in massive quantities and a lot quicker than discrete circuits. There's a lot more advantages to ICs,
but those are two of the big ones to keep in mind. The shift that ICs enabled can't be overstated.
To give you an idea of it, Kilby would win a Nobel Prize in physics for the innovation.
That's not something you really see much in the field of computing.
In the coming decade, this new technology would
fundamentally change what could be done with electronics, and eventually computers. Not only
could components be packed more densely, but they became a lot cheaper since you can make them for
pennies on the dollar. Bottom lines wouldn't be the only thing affected by this. It would create kind of a domino effect.
Engineers could now make more complex circuits that could be made into more viable products
and could be sold a lot more cheaply to customers.
And while better radios and fancier home appliances are pretty neat, we're not here for that.
We're here for the computers.
And ICs would end up having a profound
effect in that field. Sometime around 1965, the first computers would start showing up
packed with ICs. Mainframe manufacturers would embrace the new technology pretty quickly,
and so would the US government and military. In addition to shrinking down the overall size of a computer, IC-izing systems
also made them more resilient in hazardous environments. In fact, the guidance computers
used in the Apollo lunar missions were some of the earliest computer systems designed around ICs.
The properties of these new chips made it possible for the Apollo guidance computers
to be relatively small and reliable
while in space. Without this technology, it's likely that humans flat out couldn't have
made it to the moon. But like I said, there were far more terrestrial uses for ICs, and IBM was
one of the early adopters of this technology, first using them in their System 360 mainframes.
But nearly all computer manufacturers made the switch to ICs before the decade was out. ICs let manufacturers make more
complex computers that were smaller and cheaper, so demand could go up. The whole process started
opening up the computer industry to economies of scale, so to speak. And a byproduct
of that was creating an entire new market just for manufacturing integrated circuits.
The new companies that sprung up to trade in semiconductors are often called semicos.
This is where we get the Fairchilds, Texas Instruments, National Semiconductors,
and eventually how we arrive at the topic of
interest, Intel.
Intel was actually a little bit late to the semico market.
Both Fairchild and Texas Instruments had already been in business prior to the advent of the
IC, and both companies claimed to have at least somewhat invented the technology.
In contrast, Intel wouldn't start up until 1968, that's 10 years after Kilby's first integrated circuit.
And that made the company kind of a second-wave semico.
But despite a late start, Intel would bring a fresh take to the industry.
One of the problems with early semicos was how ordering a chip worked. Everything had to be custom. A manufacturer like IBM would go to a company with the plans for a new single
purpose chip. The semico would design the chip in silicon, manufacture it, and sell it back
under contract to the client. And that worked, broadly speaking, but it was missing
a larger opportunity. By working on a custom contract basis, semicos could only ever make a
limited run of a chip. Let's say, for instance, IBM needs an IC to control a master readout on
their new mainframe, and they plan to make about a thousand of that system. Well, the semico they contracted with for the chip could only ever make and sell about
a thousand of that one chip.
When you factor in the cost of design, creating custom tooling for etching, and production,
well, really the margins just start to dry up.
And this is what Intel would set out to fix. Gordon Moore and Robert Noyce
would found Intel in 1968, but they'd been integral to the semiconductor industry much
longer than that. Both founders had been members of Fairchild Semiconductor. Noyce himself was
credited as one of the co-inventors of the integrated circuit. Beyond being veterans in the field,
both men were intimately familiar with the industry and its faults.
And so when the two spun off from Fairchild to found Intel,
they had one goal in mind, making general-purpose integrated circuits.
The business plan was relatively simple.
Instead of courting contracts and making customized ICs, Intel would
have a catalog full of general purpose components that could be used in a lot of different applications.
This would ultimately save the company money, since they wouldn't have to retool and redesign
for every order. That meant that each chip could be sold with a much better profit margin.
But to make that work, Intel would need to somehow
come up with flexible chips and make sure that they were all designs that clients would want to
use. The first IC that Intel would settle on was set to be a standardized memory chip.
Now, many electronic systems, computers included, need somewhere to store temporary information.
That's the role of computer memory, or RAM. And it's really a good place for standardization,
since there isn't much variance in what systems need to store. But the long-term plans would end up having to be put on hold for the short-term reality. While Intel was founded with a pedigree, it was still a startup, and despite outside investment,
it was strapped for cash.
Thus, Intel would end up taking their first contract for a custom made and very single
purpose integrated circuit.
Even if Intel had its sights on an idealistic future, it still had to keep the lights on
somehow.
So in 1969, Intel took up a contract from Busycom, a Japanese calculator manufacturer,
and they were set to produce a series of ICs for an upcoming desktop calculator.
Intel wasn't a calculator company, but what are you going to do?
Sometimes you just have to find a way to get cash
to flow in. And this may sound like one of those things a struggling company has to do in order
to make ends meet. And well, it kind of was, at least at the start. But in a few short years,
Intel would be able to turn this custom contract into the foundation of their new business.
Ted Hoff was one of the workers at Intel who worked directly with Busycom. Quoting him when interviewed about the process,
and just a note, when he says ETI in this quote, he's referring to Busycom.
To quote, in April of 1969, Intel agreed to make calculator chips based on ETI specifications.
In June of 1969, three engineers came from Japan to spend the summer transferring their design.
I was assigned to act as liaison but had no design responsibility.
Again, my curiosity took over and I'd studied the design that was to be transferred.
I quickly became concerned, because the circuit seemed quite complex and would severely tax Intel's limited chip design resources.
I expressed my concerns to Bob Noyce, then Intel's president, who urged me to pursue any ideas for simplifying the design.
End quote.
Pretty soon, Hoff's curiosity would give way to a compulsion as he started thinking more and more about how to improve Busycom's designs.
Partly, it was a matter of cost.
The initial plans called for seven entire custom chips, each with a very high integrated
component count. That's seven sets of circuits that needed to be implemented in silicon, and seven sets of dies and tooling to manufacture the
chips. With that amount of complexity, Intel wouldn't be able to turn out much of a profit
on the venture, if at all. But part of it seemed to have been a genuine curiosity on Hoff's part.
And so it was while Hoff was working to refine Busycom's designs, possibly sitting at a computer terminal in his office, that he realized he was looking at things upside down.
The IC designs were complicated because they weren't general purpose enough.
Each part of the seven chips was specialized to one task.
It would make things a lot easier if he could just have one component that he could tell to do a job.
If he could have a programmable IC.
In other words, it would make things easier if he just had a tiny computer.
That way, you could just program it to be a calculator, or maybe a traffic light controller, or even a miniature mainframe. And thus, what was intended to be a simple contract
to keep Intel afloat would go totally and insanely off the rails, but in a good way.
Hoff would spend July and August of 69 working on his miniaturized computer design.
During that time, he would work closely with Masatoshi Shima, the engineer from Busycom responsible for the original circuit design.
The two were able to hammer out a lot of the details, but it still wasn't what we'd consider a microprocessor.
The duo's design so far included multiple chips for the core logic of the computer. It was sometime during these first two or so
months of work that Hoff realized just exactly what he was heading towards. To quote Hoff again,
While my original goal had nothing to do with trying to make a one-chip computer, the architecture I was developing seemed to indicate that most of the control and arithmetic could be done with a single chip.
And so that's really the core of a microprocessor, packing all the logic into a single chip.
The idea of a so-called computer on a chip had been floating around in the collective consciousness of the industry for quite a while, but it had yet to be realized.
And so far, Intel was getting the closest to that goal.
But a single worker, even aided by another engineer, can't get that much done.
And Hoff and Shima had been working at a deficit.
The Busycom redesign had really kind of been a side project, done in stolen time.
It was just starting to gain steam as an official endeavor at Intel,
and it was decided that some help could speed things along.
So in September, Hoff would add on to his team.
The new hire, Stan Mazor, would bring a fresh perspective to the project.
Mazor, like a lot of Intel employees, was a recent hire from Fairchild Semiconductor.
While there, he had been designing computers.
So he was a perfect fit for the newly official project.
By the time Mazor was brought on, most of the high-level design was already worked out, but the tiny computer still had to somehow work as a calculator.
To quote Mazor,
When I was working for Ted, the assignments that he had me look at specifically were how to write programs to scan the keyboard,
write software programs that are stored in ROM to run the display, and to run the printer.
In each of these areas, he had already had some ideas
as to how we would do it, and some sample coding. I'd done quite a bit of programming at the time,
so I was writing sample, what we call snippets, pieces of programming code to demonstrate the
feasibility. End quote. And so over the latter part of 1969, the finalized design would begin
to take form. Instead of a redesign for a desktop calculator, the team would present a single chip
computer that could be programmed to act like a desktop calculator. Intel presented the new
redesigned plans to Busycom, who accepted the overhaul to the original design. The final calculator circuit
was now powered by a miniaturized computer, which was supplemented by a calculator program stored on
ROM, a small amount of RAM, and a simple I/O chip that connected up the keyboard and the outputs.
In other words, Busycom had come in with a plan for a custom calculator, and Intel's counter-offer
was a tiny computer.
All in all, it's not that bad of a deal.
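To make Mazor's description a bit more concrete, here's a minimal sketch, in modern C, of the kind of keyboard-scanning routine a calculator ROM like this would hold. To be clear, this is not the actual MCS-4 code, which was hand-written 4-bit assembly talking to the I/O chip directly; the function names and the stubbed-out port read below are hypothetical stand-ins so the example can run on its own.

```c
/* Illustrative sketch only: a modern C rendering of the sort of
 * keyboard-scanning loop Mazor describes writing for ROM.
 * The port-access function is a hypothetical stand-in, stubbed
 * with fixed test data so this compiles and runs anywhere. */
#include <stdio.h>

#define ROWS 4
#define COLS 4

/* Hypothetical stand-in for selecting one column of the key matrix
 * and reading back the row lines through the I/O chip. Here it just
 * consults a fixed test pattern with a single key held down. */
static unsigned read_rows_for_column(int col) {
    static const unsigned pressed[COLS] = {0x0, 0x2, 0x0, 0x0};
    return pressed[col];
}

/* Scan every column and return the number of the first pressed key,
 * or -1 if nothing is down. A calculator ROM would run this in a
 * loop forever, debouncing keys and feeding them to the arithmetic
 * routines. */
static int scan_keyboard(void) {
    for (int col = 0; col < COLS; col++) {
        unsigned rows = read_rows_for_column(col);
        for (int row = 0; row < ROWS; row++) {
            if (rows & (1u << row))
                return col * ROWS + row;
        }
    }
    return -1;
}

int main(void) {
    int key = scan_keyboard();
    if (key >= 0)
        printf("key %d pressed\n", key);
    else
        printf("no key pressed\n");
    return 0;
}
```

The shape is the thing to notice: select a column, read back the rows, and hand any key press off to the rest of the program. Swap out the program in ROM and the same hardware can scan a different device, which is exactly the flexibility Hoff was after.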
The project would eventually get the fancy codename Microcomputer Set 4, or MCS-4.
The 4 being for the one computer chip plus the three supporting chips.
But Intel wasn't out of the woods yet.
This was all just design work.
The actual implementation of these plans into silicon had yet to be done.
This led to a problem though.
Hoff, Mazor, and Shima were experienced with computers, true, but not at the silicon level.
Their contribution had been working up a specification for how a single chip computer could be built,
but not the final details on how it would be.
The final stretch would need some outside help, but that part of the project would be
put on hold.
For some reason, and I haven't been able to figure out exactly why,
Intel just dropped the Busycom project for about six months.
It seems like as soon as Busycom signed off on the modified chip designs,
Intel just took a break.
Shima went back to Japan, and Hoff and Mazor went on to other internal projects.
It seems really strange to me, but it may just have to do with the climate inside Intel.
A lot of employees weren't happy that they had to do a custom contract.
They saw it as a distraction from the company's larger goals, even if the final designs for Busycom ended up looking a lot more general purpose than they originally were.
So it may have just been office
politics. Or it could have been that it plain took Intel six months to find someone to do the
final implementation for the new Busycom chips. Either way, come April of 1970, the project would
start back up for its final sprint. Intel would end up hiring another ex-Fairchild employee to do the job,
Federico Faggin. And honestly, the six-month gap may have been caused by Intel waiting to
snap Faggin up just for this job. He was uniquely qualified for the task. While at Fairchild,
Faggin had developed a method for manufacturing ICs called silicon gate technology.
Now, how this exactly works is way over my head.
From my reading, it seems like it has something to do with how the actual logic gates are
implemented at the silicon level on the computer chip.
What's important, though, is that Faggin's new method was faster and more reliable than existing
technology.
That, and it allowed for the chip to be packed more tightly and into a smaller space.
This made Faggin the one engineer in the field who was capable of creating Intel's new
ambitious chipset.
Mazor would help Faggin get familiar with the design of the chip, but after that, Faggin was more or less
on his own. And as fate would have it, Shima would come back into the story. He arrived at Intel to
check up on the MCS-4 project just a few days after Faggin was hired. And needless to say,
Shima was not very pleased with the state of the project. I couldn't find the exact details on the schedule,
but whatever it was, Intel was far behind. So Shima did the one logical thing and essentially
became an employee at Intel for the next six months. During that time, he'd work directly
with Faggin to finish the implementation of the chip. Between April and October of 1970,
the two would feverishly turn the on-paper designs into silicon. The entire project entailed not just
the processor, but also the three support chips: the ROM, RAM, and I/O. But the support chips
aren't that important to the story. The CPU is really what's cool here.
The chip, which would eventually be called the 4004, was the heart of the operation.
That one chip had all the logical components of a computer baked into it,
which was an unqualified first.
However, there were some compromises.
Mainly in terms of power: it was by no means a powerful or capable computer.
It was 4-bit, meaning that it could only operate on 4-digit binary numbers,
and it could only use up to 640 bytes of RAM. It's not really that useful, but it is a start,
and it is a fully realized computer on a single wafer of silicon.
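To give a feel for what 4-bit really means in practice, here's a small illustrative sketch in plain C, not actual 4004 code. A 4-bit ALU can only add one nibble, a value from 0 to 15, at a time, so anything bigger has to be stored as a string of nibbles and added digit by digit while propagating a carry; that's roughly how a calculator program on the 4004 would juggle multi-digit numbers. The nibble-array layout here is just an assumption for the example.

```c
/* Illustrative sketch only: multi-nibble addition, the kind of
 * digit-at-a-time arithmetic a 4-bit processor forces on you.
 * Plain C for clarity; this is not 4004 assembly. */
#include <stdio.h>

#define NIBBLES 4  /* a 16-bit value, held as four 4-bit digits */

/* Add two multi-nibble numbers, least significant nibble first,
 * carrying into the next digit just like an adding machine. */
static void add_nibbles(const unsigned char *a, const unsigned char *b,
                        unsigned char *sum) {
    unsigned carry = 0;
    for (int i = 0; i < NIBBLES; i++) {
        unsigned s = a[i] + b[i] + carry; /* at most 15 + 15 + 1 = 31 */
        sum[i] = s & 0xF;                 /* keep the low 4 bits        */
        carry  = s >> 4;                  /* carry into the next nibble */
    }
}

int main(void) {
    /* 0x02F3 + 0x011F, stored low nibble first */
    unsigned char a[NIBBLES] = {0x3, 0xF, 0x2, 0x0};
    unsigned char b[NIBBLES] = {0xF, 0x1, 0x1, 0x0};
    unsigned char sum[NIBBLES];

    add_nibbles(a, b, sum);
    printf("result: 0x%X%X%X%X\n", sum[3], sum[2], sum[1], sum[0]);
    return 0;
}
```

Every operation wider than four bits gets built up this way, which is a big part of why the 4004 was fine for a calculator but a poor fit for general computing.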
The first chips would start flowing to Busycom in early 1971, that's almost two years after the
whole ordeal began. The MCS-4 would find its way into Busycom's 141PF desktop calculator.
However, there were still some loose strings attached to the whole project.
You see, as per contract, Busycom owned the rights to the MCS-4 chipset, which meant that
Intel could only sell its new superchip to Busycom. And like I said before, the whole
arrangement was counter to Intel's founding principles. So much so that the internal trio
of Hoff, Mazor, and Faggin started to raise a stink about it. They wanted Intel to buy back the rights
to sell the new chipset, and they weren't alone in the company in wanting this. They had made the
Silicon Holy Grail for the time, a computer on a single wafer of semiconductor. They weren't about
to let it live its life as a calculator.
And as luck would have it, they got their wish.
Intel had changed a lot in the two years since the project started.
In the intervening time, they had started to sell their memory chips,
and sales were really good.
The company was in a much better place financially.
So in May of 71, after much internal discussion,
Intel bought back the rights to sell the MCS-4, and so the 4004 CPU would start to hit shelves,
and it would be an entire computer selling for just $60.
But here's where the 4004 kind of fails to live up to the hype.
True, it was a huge first step, but it didn't really penetrate the market as much as one would hope.
It was a full computer, but it really wasn't powerful enough to be used in many interesting applications.
All in all, around a million 4004 chips would be produced.
But since they were so underpowered,
they weren't used in full-fledged computer systems.
Instead, the 4004 was the heart of a lot of smaller and less complicated electronics.
It would find its way into pinball machines,
cash registers, gas pumps, and applications like that.
It didn't exactly set the world on fire, but it did start things turning.
Alright, I think this is a good place to table our discussion of Intel's road to domination.
The 4004 stuck around in production all the way up to 1981, serving as a controller for
relatively simple electronics.
There would be an extended successor to the chip, the 4040, but the direct lineage stops
there.
And architecturally, Intel's later chips are totally unrelated to the humble 4-bit
machine.
But that's not to say that the 4004 doesn't have an important legacy. It showed that
a computer could be reduced to a single chip. Computers didn't have to be large or expensive
anymore. And designing the 4004 gave Intel the institutional knowledge to become the dominant
force behind microprocessors in the coming decades. I want to end this story with a little reminder.
In reading up for this episode, I ran into a lot of sources that claim this engineer or
that programmer as the quote, one true creator of the microprocessor. The matter of credit is
a contentious topic even today, so far removed from the fact. The four people that I talked about today, Faggin, Hoff, Mazor,
and Shima, were just some of the players with the most screen time. All told, it would be hard to
work out even how many people had a hand in the creation and production of the chip. And the same
is true of most innovations. Very rarely is the future brought about by someone working alone in a vacuum,
and I think that's important to keep in mind when we're looking at these stories of innovation.
Anyway, with all that aside, in 1972, Intel would make a new hire. Masatoshi Shima would be brought
on to the team. In the ensuing years, Intel would pump out one microprocessor after another, but they'd be building off the knowledge they acquired during the 4004 project.
Thanks for listening to Advent of Computing.
And thanks for indulging me a little bit in my fanboying over one of my favorite chip manufacturers.
I plan to continue the series on Intel CPUs at a later date.
I don't want to bog down the feed
with just months of me talking about different revisions to chips. I'll be back in two weeks
time with a new episode, something a little more audio visual than me chattering on about Intel.
Until then, if you like the show, then please take a second to share it with a friend.
You can also rate and review on Apple Podcasts.
If you have any comments or suggestions for a future episode, go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.