Advent of Computing - Episode 109 - What's Up With Microcontrollers?

Episode Date: June 4, 2023

What really is the deal with microcontrollers? Are they just little computers... or are they something totally different? This episode we are looking at the development of the microcontroller through... the history of the TMS1000.              

Transcript
Starting point is 00:00:00 One day back in high school, I was killing time waiting for class to start. To this end, I was chatting with a friend's older brother. We didn't really know each other that well, you know, how it is with siblings of friends, but we did know that each of us were into programming. I can't remember the finer details about my side of the conversation. I'm sure I was talking his ear off about assembly language or the glory of the x86 architecture or something along those lines. He'd been working on some hardware projects, something outside my experience
Starting point is 00:00:31 that I was eager to learn more about. After a little back and forth, he reached into his backpack and pulled out this little cardboard box. I remember it having a big black and red Texas shaped logo printed across its lid. He gingerly opened the box and produced a tiny circuit board and explained this is how he was messing around with machines and electronics. The board itself really only had one chip, which was ringed with all these pins and plugs and headers. He told me that it was this thing called a microcontroller, and Texas Instruments
Starting point is 00:01:05 makes them. Yes, the calculator TI, that Texas Instruments. Best of all, the chips only cost a few cents. I mean, the board that goes around the chip is a little pricey and there's software to consider, but the microcontroller, the actual chip itself, well that's cheap as dirt. It seemed pretty cool to young Sean, but I still didn't really get what a microcontroller was. I remember being told it was like a little computer, at least somewhat like a little computer. It would be a few years before I got my hands on a microcontroller of my own.
Starting point is 00:01:44 By that time, Arduinos had really hit the US market in force. These are kind of the de facto hobbyist microcontroller board these days. But I've always kept that memory of my first encounter. I think it's illustrative of a point. Microcontrollers are a little strange. How would you explain one to a total noob? Well, they're kinda like little computers, but there are also some big differences.
Starting point is 00:02:24 Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 109, What's Up with Microcontrollers? This episode, we're going to be answering one very simple question. What really is up with the microcontroller? Perhaps this is a good spot for some background. On the show, I've sometimes made references to computer-controlled machines. You can make a device a lot more useful by simply slapping a small computer on it. The switch from something that's purely mechanical or electrical to something that can be programmed,
Starting point is 00:03:00 well, that's a big leap to be sure. To make this leap, all you really need is a small computer that is some way to connect with the outside world. Usually, this is some little machine with pins that can be wired up to devices. Then you can write a program that communicates with the outside world over all of those little pointy pins. You could use a full-on computer to do this. In fact, this was the norm in the early days. Some of the first CNC machines, those big digital mills, were controlled by mini-computers. Some factories were automated by straight-up mainframes. While workable, I think it's clear to see that there are downsides to this approach. You don't necessarily need a big, powerful machine to handle a single mill.
Starting point is 00:03:45 In the case of a factory, one big computer might not be enough to manage everything. It would be nice if you could digitize each machine using a smaller computer. That would allow for flexibility on the smaller side of things, and even something like more simplified centralized control on the bigger side. Eventually, small computers would become a reality. We get the first commercial microprocessors in 1971. These are initially called computers on a chip, but that's just not entirely true. The 4004, the first in the market, was only a processor. To make use of the chip to make an actual computer, you needed a ROM, RAM, and some type of interface device.
Starting point is 00:04:36 You end up using around three chips to support that one chip computer. While this is an improvement, it could be better. The modern solution to this issue is called a microcontroller. These are true computers on a chip. You get one chip that has a processor, RAM, ROM, interface drivers, and sometimes other features crammed into one package. They might even have a timer. Now, this streamlines everything really, really nicely. You can just throw a single chip into some machine, add a few wires, write a program, and bam, you're done. You now have a computer-controlled widget. Modern devices are absolutely chock-a-block with microcontrollers.
Starting point is 00:05:19 Everything from toys to light bulbs to cars have these tiny computers just kind of shoved into them. The best part comes from the economics of the matter. A microcontroller costs a fraction of the price of an actual computer. It's also a more simple device, so they're often easier to program. You're getting exactly as much computer as you need for an application. They really are ingenious little things. So, where did they come from? The market today is basically, well, it's everything.
Starting point is 00:05:54 Everything can use a microcontroller. Any smart device will have something like a microcontroller or maybe a system on a chip crammed inside. Anything electronic has some little blob of silicon that's its brains. But was this the initial use case? Were microcontrollers always meant to be used that way, or is it some more niche technology that just happened to be generally useful? And here's the big one for me. How are microcontrollers actually different from computers? I can get a microcontroller for, in some cases, if you're buying wholesale, which really gets it done,
Starting point is 00:06:33 you can get a microcontroller for less than a cent. So why do we bother with bigger computers? What's the technical difference that prevents me from, say, spending a dollar on a few thousand microcontrollers and making a big desktop out of that? Why do I have to have a big, complex machine with expensive chips in it? This episode, we're going to be trying to answer these questions. So, without further preamble, let's get into it. Our story this episode is primarily concerned with a chip called the TMS-1000. We're going to be taking a pretty circuitous route to its creation, but I think this will give us a good foundation.
Starting point is 00:07:14 Now, the TMS-1000 is usually cited as the first microcontroller. Thus, it's a good chip for us to focus on. It was released by Texas Instruments in 1974. Just to give some quick context, the Intel 8080 is released the same year. So we're at a point where microprocessors are starting to really hit their stride. We're getting the classics right around this same time. Now there are a few interesting things to note about the TMS-1000 that set it apart from its contemporaries.
Starting point is 00:07:48 Chips like the 8080 and the soon-to-be 6502 were big and very capable processors. We're starting to get these actually usable 8-bit processors on the market. The TMS-1000, on the other hand, was a teeny tiny 4-bit computer. Bittedness can really have a few different meanings here. It's essentially a shorthand for the complexity of a computer. The easiest way to explain it is as the native data size that a computer wants to deal with. This is the basic size number that a processor likes to, well, you know, process.
Starting point is 00:08:35 For the TMS-1000, this means it has 4-bit registers for storing numbers, and it does 4-bit arithmetic on those numbers. That, all things considered, is pretty small. That means that the largest number a TMS-1000 could deal with, unaided, is 16. By way of comparison, an 8-bit computer can handle numbers up to 256. There are, of course, a lot of caveats here. You can use some smart code to work with bigger numbers, but fundamentally, the bit size gives you an idea of what kind of data the chip wants to work with on the very lowest level. So, why would anyone want such a limited chip, especially when there are better options?
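To put that "smart code" caveat into something concrete, here is a rough sketch in modern C, not TMS-1000 assembly and not any real chip's routine, of how a 4-bit machine works with numbers bigger than a nibble: add one 4-bit chunk at a time and carry into the next.

```c
#include <stdint.h>
#include <stdio.h>

/* Multi-precision addition on a hypothetical 4-bit machine: each operand is
 * split into nibbles, and the carry out of one nibble feeds into the next.
 * A real routine would chain as many nibbles as the numbers need. */
int main(void) {
    uint8_t a = 200, b = 43;            /* neither fits in 4 bits          */

    uint8_t lo = (a & 0xF) + (b & 0xF); /* add the low nibbles             */
    uint8_t carry = lo >> 4;            /* did that overflow 4 bits?       */
    lo &= 0xF;

    uint8_t hi = ((a >> 4) + (b >> 4) + carry) & 0xF; /* high nibbles + carry */

    printf("%d + %d = %d\n", a, b, (hi << 4) | lo);   /* 200 + 43 = 243   */
    return 0;
}
```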
Starting point is 00:09:18 That gets us to the second factor that makes the TMS-1000 unique, or at least unique for its time. This was the first commercially produced microcontroller. A base model TMS-1000 was a single chip, but in that package was a real punch. It had the 4-bit processor of everyone's dreams, plus 1024 bytes of ROM, 32 bytes of RAM, and an I/O controller, all wired up to a smattering of pins. You really get the whole package. Sure, you could get a big, fancy 8-bit computer with a motherboard and circuits and everything, or you could get a single chip shipped direct to your home from Texas. I think it should be clear at this point that we're dealing with something more than a little strange. The TMS-1000 is way less powerful than an 8080. In fact, there's
Starting point is 00:10:19 not really a comparison to be made. The 8080 could handle 64 times as much memory as the humble TMS-1000. It could do big math. Well, bigger math than adding two numbers that total to less than 16, but you get the idea. You could straight up write longer programs for Intel's chip as well, since there's more space, there's more memory. You'd assume that the Texas machine would just be a blip, right? Just some little device that shows up for maybe a year, maybe a fiscal quarter, and then disappears. But not so fast. The TMS-1000 actually did hold its own. You want proof? Well, I have this lovely book called An Introduction to Microcomputers by Adam Osborne.
Starting point is 00:11:15 It was sent to me by a listener to whom I am forever grateful. The text is from 1976. It's one of those trade-style books that's printed on wafer-thin paper. As far as I'm concerned, this is a fantastic primary source for getting a boots-on-the-ground opinion. The first chapter of this book is devoted to the TMS-1000. So, what's Osborne's opinion? Simply put, the 1000 isn't only a useful chip, it provides a great bang for your buck. It is a true value in the microcomputer scene. How on earth can this be the case? I mean, we're talking about a 4-bit machine, it's not even expandable. You get a kilobyte of ROM and nothing else. You can't add anything greater than 16. How is this a remotely useful device in 1976?
Starting point is 00:12:08 In fact, why is it even a product? To unravel this seeming impossibility, we need to start at the beginning. We need to talk about the calculator wars. This is one of those events that may seem mundane today, but during the 60s and 70s, it was a huge deal. One of the old technology battlegrounds was the fight over the humble calculator. For this to have the proper impact, we should really be considering the context. Prior to the microcomputer, there were very few options for running quick calculations. Some businesses could
Starting point is 00:12:45 afford mainframes, but those weren't really meant for quick use. You'd have to log into some timeshare terminal or submit a batch job. In some cases, this might fit into your workflow. Let's imagine you're working as an engineer at some big three-letter group. You've worked out which equations you need to run for calculating the stress on beams for a bridge. You're just down to actually crunching the numbers at this point. It might be a simple matter of just a few additions and a division. If your company has a mainframe, and if you have a terminal at your desk, then you're good to go. Load up a program, hammer in some numbers, and you get the result back. You'd never have to leave your desk. But this is making a whole
Starting point is 00:13:31 lot of assumptions. This would only be possible in one very specific scenario. Not everyone had a terminal within hand's reach. So what are your other options? One could be a mini-computer. These were a newer generation of machines that could be operated by a single person. Something in between the old mainframes and the future micros. These minis were cheaper and more simple, which made them more accessible for smaller businesses or institutions. But you wind up in the same scenario. Our poor engineer has to just hope for a nearby
Starting point is 00:14:06 terminal that's not in use. That's actually it for computerized options. While fully-fledged computers will very easily and happily run the numbers, their size and cost made them less accessible. Maybe a company would have terminals set up for a handful of their more important users. But what about other pencil pushers? What about clerical workers that also need to run numbers? You hit a point where you can't set up or afford enough terminals. Maybe there isn't even enough space in the rafters to run cables. So what are your actual alternatives? If we want to go full analog, then let's go full analog. We could use an abacus, or we could use the venerable slide rule. These are, quite frankly, kind of awful devices. I've tried to learn how to use one very concertedly on multiple occasions,
Starting point is 00:15:08 and it's a lot of work. A slide rule is simply a set of sliding bars and a sight piece. By manipulating the bars and sight, sliding it all around, you can perform calculations. However, this is one of those skill-based things. To use a slide rule effectively and quickly requires training and practice, something that your intrepid host just... I don't got that. It's a little bit harder than typing in 18 divided by 3. Slide rules are definitely an option, but it's not the kind you want to be stuck with. They also don't solve all your problems. One key issue was dealing with running sums, long totals of many numbers, some positive, some negative. A slide rule is more for running
Starting point is 00:15:58 division, multiplication, or even square roots, than running long sums. Moving up the ladder, we get to mechanical calculators. Believe it or not, mechanical calculators were pretty widespread and sophisticated machines. The first iterations of these devices appear in the 1700s, but they really hit their stride in the late 19th and early 20th centuries. These machines used gears and clockwork mechanisms to run calculations. There were, of course, limits to what you could do. More complex operations like multiplication or division were, well, a little counterintuitive. These machines were primarily for running addition and subtraction, for keeping a cumulative sum,
Starting point is 00:16:46 so you didn't necessarily have a multiply or divide button. Instead, you'd have to build up complex operations from addition and subtraction. It's doable and definitely helpful, but it's not a total solution. Electronic calculators followed the creation of the computer, and this is where we start to enter the calculator wars proper. Electronic calculators offer very distinct advantages over the previous forms of the art. For one, they can be a lot more flexible. Division can be a tricky thing to work out in gears, but you can pretty easily gin up
Starting point is 00:17:23 a digital division circuit. Same goes for multiplication, square roots, exponents, and even trig functions if you're feeling really fancy. That said, this isn't a very instant solution. You see, there were issues with making an electronic calculator. I know, wild. This is underlined by the Sumlock ANITA, the first fully electronic calculator. This was a vacuum tube machine that was actually designed by a computer engineer. The lead designer on the project, Norbert Kitz, had previously worked
Starting point is 00:17:59 at the National Physical Laboratory during the development of the Pilot ACE computer. That was one of the computers among the pack of the first programmable machines. So suffice to say, the ANITA came with a pedigree. The first ANITAs came out in 1961, at least the first electronic ones. They were tube-based beasts, so we're looking at pretty big and pretty power-hungry machines. We're talking about these large slabs of metal and glass that sat on top of a desk. This primitive appearance matched their primitive inner workings. These machines weren't binary, instead they were decimal.
Starting point is 00:18:41 The ANITA Mark VII, the confusingly named first electronic version of this calculator, used these things called ring counter circuits to perform actual mathematics. Think of something like an electronic dial with set positions. A pulse on the right pin would twist the dial to the next position, which corresponded to some value. Once you hit the highest value, the dial rolls over back to zero. As far as functionality, these early calculators only supported four operations. Addition, subtraction, multiplication, and division. These become the basic functionality that combatants in the coming war will chase after. It's really the minimal set needed for a useful calculator. Interface-wise, the Mark VII looks kind of as expected. You have four
Starting point is 00:19:34 functions mapped into four buttons. But number entry, well, that's kind of a different story. The Mark VII had a numeric grid for entering data. This was actually a holdover from older mechanical calculators. You got this grid of numbers where each column corresponds to a digit, and each row represents the value of that digit. So to enter a number, you have to glide across the grid. It works, but it's not a very minimal interface. You gotta move your fingers quite a bit. Operating the Mark VII was a little complicated.
Starting point is 00:20:12 There were actually separate inputs for division and multiplication. These were each separate columns to the left and right of the main number grid. But in theory, operation was somewhat simple. The idea is that you're loading up two registers, two little chunks of memory that store numbers. You say what operation you want carried out on those registers, and then the calculator just spits out the results. That value is usually loaded into one of the registers, so you can keep operating on it if needed. Numbers are displayed on a set of 13 Nixie tubes. This gives the whole calculator a very delightfully retrofuturistic aesthetic. It looks cool, and it also gives us an idea of how the Mark VII works. This will be important to explain because, well, it'll start a long series of strange parallels.
Starting point is 00:21:06 Each Nixie tube, each digit of the display, is actually the output of a ring counting circuit. These circuits were able to count from 0 to 9 and then loop back to 0 again, giving a carry signal when the value rolled over. You can actually perform all four basic arithmetic functions using ring counters. Addition and subtraction are easy enough. Multiplication is just a repeated addition, and division is just a fancy form of repeated subtraction. It all comes down to just wiring everything up right and adding a few extra vacuum tubes.
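As a loose software model of that arrangement, nothing like the real tube circuitry, here's a decimal ring counter sketched in C: each digit counts 0 through 9, rolls over with a carry, and multiplication falls out of plain repeated addition.

```c
#include <stdio.h>

#define DIGITS 4

/* Pulse one decimal digit 'pulses' times; when it rolls over from 9 to 0 it
 * sends a single carry pulse to the next digit, like a ring counter would. */
static void pulse(int digit[DIGITS], int place, int pulses) {
    for (int i = 0; i < pulses; i++) {
        if (++digit[place] == 10) {
            digit[place] = 0;
            if (place + 1 < DIGITS)
                pulse(digit, place + 1, 1);   /* the carry */
        }
    }
}

/* Add a whole number by pulsing each decimal place the right number of times. */
static void add(int digit[DIGITS], int value) {
    for (int place = 0; value > 0; place++, value /= 10)
        pulse(digit, place, value % 10);
}

int main(void) {
    int digit[DIGITS] = {0};
    for (int i = 0; i < 12; i++)   /* 12 x 34, done as 12 repeated additions */
        add(digit, 34);
    printf("%d%d%d%d\n", digit[3], digit[2], digit[1], digit[0]);  /* 0408 */
    return 0;
}
```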
Starting point is 00:21:42 It's nothing earth-shattering, right? Well, here's the connection. This is the exact same way that ENIAC, one of the first electronic digital computers, operated. ENIAC had these big banks of ring counter circuits that could be rewired in order to carry out different operations. In a very real sense, these early electronic calculators are scaled down and specialized iterations of earlier computer designs. The ANITA Mark VII is kinda just a tiny ENIAC. That's pretty cool.
Starting point is 00:22:16 On its own, like I said, it is a neat little connection. Seventeen years after ENIAC, we get tiny versions of the same machine on the desks of clerical workers. We can actually work with this comparison to get to some interesting conclusions. First off, these ring counter machines don't have memory, at least not in a traditional sense. The ring counters themselves are a form of memory, but it's very specialized. It's not random access like we're used to. You can't just blit out a value into ring counters. Instead, you have to send that value as a series of pulses.
Starting point is 00:22:56 That leads to some kind of weird engineering. In the case of ENIAC, you get these thick cables used to send around serial pulses from one ring to the next. For the Mark VII, that means the number grid. Once you click a value into the grid, each number is read off and sent to the ring counter as a series of pulses. That kind of explains the weird interface here. Partly it's a holdover from earlier mechanical calculators. Partly it's a matter of convenience. We've established this connection to early computers. So the question becomes, can the
Starting point is 00:23:32 Mark VII be programmed? In a word, no. That's one of the major limitations here. The humble calculator is essentially a specialized computer. It does one thing, or rather, it does four things. Add, subtract, multiply, and divide. Add in input and output, and there you go. That's all the thing can actually do. If you could rewire the ring counters, then you'd actually have a programmable computer. But the Mark VII was only ever meant to do those four functions. Here's the point I'm trying to get us to.
Starting point is 00:24:12 As calculators become electronic, they borrow DNA from earlier computers. This isn't by accident or chance. A calculator can be viewed as a special case of a computer, just with limited functionality. So what happens if we pull on that thread a little bit? After the Mark VII, we see more and more sophisticated calculators. This is where the war starts. The drive to make better, cheaper, and smaller calculators. During this period, we see a shift away from ring counters. Silicon enters into the picture in the late 60s, with the integrated circuit soon to follow. Calculators start to get closer and closer to computers. This culminates in 1968 with two
Starting point is 00:25:54 companies named Busicom and Intel. Now, this is a story that I think some long-time listeners are probably sick of me telling. Busicom, a Japanese calculator company, contracted Intel to make a custom calculator chipset. After some back and forth, engineers at Intel figured the best solution would be to make a programmable machine. They also figured they had the right tech and know-how to make a single-chip packing for the device. This leads to the 4004, the first microprocessor on the market. The reasons for the 4004's development are, I think, understandable. The contemporary pack of calculators were using multiple integrated circuits, that's basically just a lot of chips. This was the next step in the process of downsizing calculators, but it was merely a step. It was clear that calculators would continue to shrink.
Starting point is 00:26:30 The solution devised at Intel was to create a tiny 4-bit computer. At first, that might seem a bit overblown. Why make a whole computer? Why make a whole processor? The big draw is, of course, programmability. Creating a one-chip calculator that can run four functions, well, that's one thing. That works for a while, but what if you get a contract to make a chip that can also run square roots?
Now you need to scrap your four-function chip and create a whole new layout. You can reuse a lot of the original chip's design, but you also have to add quite a bit. Not to mention the whole testing and manufacturing process. That's going to take a lot of time and money to go from a 4 to 5 function chip. If you happen to have a tiny computer lying around, well, this upgrade is trivial. The 4004 came with three support chips, ROM, RAM, and an IO controller. The ROM held a program that the 4004 would execute as soon as it was flipped on. The 4004 itself wasn't etched to be a calculator. It was designed as a tiny processor. It was built to run a program. That program, the code stored in ROM, told the processor what to do. That means
Starting point is 00:27:21 that you could just write up some routines to make the 4004 think it was a four-function calculator. You could add in a fifth function by just modifying the code and burning it onto that ROM chip. No silicon would actually need to change. No manufacturing required, just some software. That's still work, but it can be done a lot more easily and a lot more quickly than redesigning a chip. The 004 also ends up being a really canny business decision. Intel answered Busicom's contract with a general-purpose chip. By 1971, that chip was delivered.
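Here's a rough sketch of that "just change the code" flexibility, written in modern C rather than real 4004 firmware, with made-up names: the silicon stays the same, and which operations exist is decided entirely by the routine burned into ROM.

```c
#include <stdio.h>
#include <math.h>

/* The "calculator" is just firmware: a dispatch routine stored in ROM.
 * Adding a fifth function means adding one case and burning a new ROM,
 * not redesigning a chip. (Hypothetical sketch, not actual 4004 code.) */
static double calculate(char op, double a, double b) {
    switch (op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return a / b;
        case 's': return sqrt(a);   /* the new fifth function: software only */
        default:  return 0.0;
    }
}

int main(void) {
    printf("%.2f\n", calculate('*', 6.0, 7.0));   /* 42.00 */
    printf("%.2f\n", calculate('s', 2.0, 0.0));   /* 1.41  */
    return 0;
}
```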
Starting point is 00:27:58 After the Busicom deal was over, and the calculator ended up being something of a flop, Intel was free to sell the chip on the open market. Later Intel processors aren't so much related to the 4004, but they at least benefited from the experience and good press that Intel garnered in this era. That said, the 004 was just one solution to the calculator problem. It was one possible weapon in the greater war. As we've already seen, this kind of follows a trend. Calculators like the ANITA Mark VII reduced down ENIAC's operating principles. By the time the 4004 was developed, stored program computers, that is, computers that actually ran software from memory, were the norm.
Starting point is 00:28:50 The 004 is lagging behind by about 23 years, but we can forgive that. This chip is very much just a recreation of larger-scale computers. It's smaller, less capable, and more specialized, but it follows the same principles of operation. You have a circuit that's the processor, a store for code and data, and some way to handle inputs and outputs. All things are in nice separate containers, too. So what about alternatives to this solution? I did say Intel hit on a solution, not the solution. There were, of course, single-purpose integrated circuits designed for calculators. These were, as far as we're concerned, kind of an evolutionary dead end. The first of this pack
Starting point is 00:29:33 was the Mostek MK6010, which was, funnily enough, also commissioned by Busicom. They're a pretty busy company in this era. The 6010 was, when you get down to it, a chip designed just to be a four-function calculator. It was small, relatively cheap, and used relatively little power, but come on, it's a calculator. It also suffered from the same limitations we discussed. It only did calculator stuff. It was a very static type of device. That said, there were benefits to the MK6010. It was one chip. I know that might sound dumb, but this is a factor we have to consider. Intel's processor was flexible since it could execute code, but it required three other chips to be useful.
Starting point is 00:30:26 We could say it was a computer on a chip, but in name only. In practice, you needed some other chips. That meant that any calculator using the 4004 had to have a more complex circuit board, it had to have more components, and would have higher power requirements. By contrast, Mostek's chip was one chip. You needed only to wire it up to a display and buttons. Sure, there are some tricks and some supports involved, but less so than with a 4004. An MK6010 calculator had one integrated circuit, plus some diodes and some wires and some buttons. That's a lot smaller, and it used a lot less power than an Intel-based calculator. Each of these approaches solved the same problem, and each approach came with some trade-offs.
Starting point is 00:31:20 Now, I've only given you the two extremes of the spectrum, the limited calculator and the limitless computer. Well, limitless within four bits. There did exist a middle ground, a solution that kind of split the difference, something that had the flexibility of a computer and the convenience of a custom chip. This brings us, finally, to Texas Instruments. TI is a bit of a weird company. At least, that's the case when it comes to computer history. They were never a huge player in the microcomputer scene, but they did produce a really capable machine in the TI-99/4A. They didn't really corner the mini-computer market, but they did produce some reasonable minis.
Starting point is 00:32:07 TI never really captured the market for a full-on, complete computer. They were always at home manufacturing components. Chances are that most computers in the era of integrated circuits had TI chips somewhere inside them. And these chips came with a pedigree. The integrated circuit itself was invented at TI. So we're looking at a company that was on the forefront of a very specific type of market. TI was also an old hand at the whole calculator thing. In 1965, the first handheld calculator entered development at TI. This made use of integrated circuits.
Starting point is 00:32:47 But it wasn't a single-chip affair. The first pass at a pocket device called the Caltech project used three custom chips. These were built in-house at TI. All told, the project would take about two years to reach completion. Caltech, confusing name aside, really shows why integrated circuits were such a huge breakthrough. This new pocket calculator was tiny, cheap, and it used very little power to operate. This is exactly why companies were racing to create a calculator that used a single chip. From accounts I've read, it almost sounds like TI was taken off guard when Mostek announced their own single chip calculator. The MK6010 was the first real calculator on a chip.
Starting point is 00:33:35 Shortly after that chip was unveiled, TI announced that their own chip wasn't far behind. It would take a few months, but eventually the Texas company would announce the TMS-1802 in late 1971. This is probably a pretty good place to pause and look at how tight this timeline is. The first Intel 4004s are sold as components to Busicom calculators in July of 1971. The MK6010 came to market a little earlier, in January of the same year. Then we have TI following the pack with the 1802 in September. In the space of one year, we see three new approaches to highly integrated calculator chips. So, what's this third approach? What exactly is the TMS-1802? Simply put, it's a true computer on a chip. This one chip included everything you need for a functioning machine. A processor,
Starting point is 00:34:36 RAM, ROM, and an input-output driver. TI is essentially doing with one chip what Intel took four to achieve. This really puts TI in the lead of the calculator war, as far as I'm concerned. The 1802 had all the programmability of Intel's offering, plus all the benefits of Mostek's single-chip solution. Externally, the 1802 actually looks and acts a lot like Mostek's chip. Pins on the TI chip are allocated for data and signaling, so in practice, you end up treating this new chip just like a special circuit.
Starting point is 00:35:18 It's geared to work with a specific type of membrane keyboard and drive a specific type of digital display. This is in contrast to something like a full-on computer, digital display. This is in contrast to something like a full-on computer, where you usually have generic interfaces whose usefulness is only dictated by software. Look at a serial port as a contrast here. Depending on which program is running, a serial port could be used for communications with a terminal, printer, or even data transfer to another computer. By contrast, the 1802 only works with special button matrices and special little displays. If it looks and acts like a less capable chip, then what's the big deal? Well, it all comes down to implementation.
Starting point is 00:35:58 It's all about what's under the lid. As I've alluded to, the 1802 contains everything you need for a functional computer. So why does it act like a more lame device? Simply put, it's hard-coded for one task. The key to the puzzle here is the ROM, the chip's read-only memory. There isn't superb documentation about the 1802. This is partly because it was, after all, a pretty niche product. From press releases, it sounds like the chip was using what's known as a mask ROM.
Starting point is 00:36:32 A mask ROM is just about the most primitive version of read-only silicon memory possible. This is a chip that has data literally etched onto it in a factory. A physical mask is made that functions as a photoresist. That mask is then used to etch a pattern onto a ROM chip, which encodes the desired data. In this way, a program is loaded onto the chip during its manufacturing process. This is a neat method, but there's a big downside. The 1802 couldn't be reprogrammed. Once the mask ROM is, well, masked, that's it. The silicon has literally had data etched into it. There's no going back.
Starting point is 00:37:16 So a TMS 1802 will only ever run this one calculator program. Over at Intel, the 4004 offered a much higher degree of flexibility. To change the program, you only had to switch out the ROM chip, which was a separate package. If you were using sockets, then this was as easy as pulling and replacing a chip. A manufacturer could buy into the Intel platform and use the 4004 in any number of applications by simply writing up new code. So why would Texas Instruments want to hobble their new product? It may look like a bad idea from the outside, but this approach offers some really interesting advantages. The thing is, most of these advantages aren't immediately apparent to a consumer.
Starting point is 00:38:09 The tooling used in the 1802 gave TI a platform for custom chips. The four-function calculator on a chip was just one application that the 1802 could be used for. By swapping out the mask used to etch the ROM, TI could alter the 1802's programming. In this way, the chip could be geared to do basically anything. This fit pretty well into the general workflow of the time. Big semiconductor manufacturers like Intel and TI did quite a bit of custom work. Recall that the 4004 was designed as part of a contract with Busicom. In this mode of production, a client goes to a semiconductor company with some kind of requirements they need filled. The client might want a four-function calculator, they might want to control traffic lights, or they might want a chip to run a terminal.
Starting point is 00:38:56 The 1802 could, with changes to the ROM mask, service any request, at least within reason. In theory, this meant the TI was now set. They had a platform that was easy to adapt. There are, of course, other benefits to this approach. The 1802 was a really cheap device. A single chip cost $150, which steeply discounts for bulk orders. A batch of 100 chips dropped the price down to $90 per chip. By contrast, a new 4004 cost $60. I'm not sure if that's bulk or single unit. That may make the 4004 sound cheaper, but that's not the case. You also had to purchase an Intel 4001, 002, and 003 to go with the processor.
Starting point is 00:39:47 I can't find pricing on those chips, but I can't imagine they went for anything less than $10. That's not to mention the power requirements. The 1802 simply used less power than a full 4004 setup. Intel's chips were relegated to big machines that required multiple power levels, while 1802s could sip on batteries. That alone was enough to give TI an edge in the calculator wars. That all said, I want to circle back a little. I want to get back to the outlandish mask rom stuff. This is kind of a tangent, but I think it's worth examining. The whole idea of a maskrom might sound a little weird, at least at first. It's one of those technologies that sounds
Starting point is 00:40:33 either horribly antiquated or like some type of ultra-niche industrial thing, something you only see in catalogs, and go, huh, well that's neat, good thing I'll never see that. Well, dear listener, that's not actually the case. There is actually a scenario where we commonly encounter this type of factory-burned data, and that's microcode. Now, not all microcode uses a mask ROM, but it's the same idea. It's a program that's burned deeply into a chip at the factory. You may be asking yourself, Sean, what's microcode? Well, suffice to say, it's one of those technologies that's a close cousin to magic. Essentially, microcode is a tiny, ultra-low-level program that defines how certain processors work.
Starting point is 00:41:26 A microcoded processor will have this thing called an execution engine. It's something on the silicon that reads and interprets microcode. That microcode is used to define its instruction set. It's a way to define a processor in pseudosoftware, if that makes any sense. That all sounds a little mumbo-jumbo. Just look at it this way. Microcode is a special language that's used to define how a processor functions. Most modern processors rely on microcode to function at all.
Starting point is 00:42:02 The prime example here is the venerable 8086. On the lowest level, the 86 only has a few really basic instructions. A small program in microcode tells the processor how to string together those basic instructions to form more familiar instructions. So when you run a divide instruction on an 8086, it's microcode that tells the processor how to actually do that. This is a pretty sophisticated approach to processor design. It may seem overcomplicated. That said, it gives you some slick tricks.
Starting point is 00:42:38 Microcode provides an abstraction layer over the hardware. It lets you hide how the deeper levels of the computer function. This means that it doesn't actually matter how the computer works. It could be made out of marbles and rubber bands. As long as it runs microcode, you can make it look like anything you want. You can essentially define any type of interface. There are also some instructions that would be hard to implement on raw silicon. Take the 8086's move instruction as an example. That's a really complex instruction that lets you move data around the chip. You can move from register to register, load a constant into a register, load a
Starting point is 00:43:20 register with a value from a memory address, set the value of a memory address, and on and on and on. There are a lot of different cases here. Implementing that in silicon, well, that can be a bit of a pain. By contrast, it's easier to write a program to handle every possible case of move. So what's the connection here? The way I see it, the 1802 is using something really similar to primitive microcode. It's not exactly the same, but it strikes me as having a similar flavor. You have a chip that has a full-on computer that just comes pre-programmed to do a certain task. That's as true of the 1802 as it is of the 8086. It just so happens that TI chose to program their chip to be a calculator instead of a processor. This takes us to the Grand Slam.
Starting point is 00:44:20 The 1802 would see moderate success in pocket calculators for a number of years. But, come on, it's still a limited chip. It has all the horsepower to be a much cooler device. The implementation just hamstrings it. How can more power be unleashed? Well, what if you made the thing programmable? Welcome to the weird part. I mean, everything so far has been a little weird, but this is where we should leave all expectations at the door. In 1974, TI introduces the TMS-1000 series. The naming convention here is awful, and I am so sorry about all the numbers. The TMS-1802 isn't actually part of the 1000 series. With this new release, the 1802 is renamed the 0102. It's a bit of a strange move because, well, the 1000 series is very similar to TI's earlier one-chip computer, but
whatever. Let's just move on and not think about the numbers for a little bit. We've already talked specs. The TMS-1000 series of chips used a 4-bit processor backed up with pretty small amounts of ROM and RAM and a really simple I/O interface. You can actually make out these distinct components in a photo of the chip's die. There's a big region for ROM in the lower right-hand corner. RAM is off to the left, and the processor and I/O circuits take up the top half of the die. It's kind of neat to see the separation at the silicon level. I just like it. The whole 4-bit thing here is already strange. It means that the TMS-1000 is pretty close to
Starting point is 00:45:57 the Intel 4004 chipset, at least in terms of very rough capability. That said, TI's cool new chip is very different from more familiar computers. The 1000 is a Harvard architecture chip. For some of you, that may ring some bells. For others, let me explain. Most computers adhere to the von Neumann architecture. That name is a bit of a misattribution, but that's neither here nor there. In a von Neumann architecture computer, both code and data are stored in the same memory space.
Starting point is 00:46:34 So a program and its data can both be treated in the same way. One chunk of memory might have a program, another chunk might have a variable, but ultimately the computer doesn't really care. It's all just numbers to the machine. This architecture comes with a lot of power. It's great for big machines because it lets you do things like write programs that can manipulate other programs. Take, for instance, a compiler. That kind of program works really well with the von Neumann architecture. In the Harvard architecture, data and code are kept separate. Usually this is described as data and code living in different memory spaces. That's just a fancy way of saying that your program lives in one bucket and your data in another, and never the twain shall meet.
Starting point is 00:47:23 This is a very fundamental difference. In a Harvard architecture machine, data and code are just treated differently. Why would you choose one design over the other? Well, if the von Neumann architecture can be characterized as giving programmers flexibility, then the Harvard architecture gives that same kind of flexibility to hardware folk. We're talking researchers in the early era of computing, and by the 70s, we're talking manufacturers. We're talking folk like TI. This is something that's taken me a while to come to terms with. As an ardent software ghoul, I've never liked the idea of the Harvard approach. In my head, you want to have as
Starting point is 00:48:05 much programming flexibility as possible. Code should be free as in freedom, and hopefully free as in beer also. But look at it this way. In a von Neumann computer, all memory has to be treated the same, at least at some point. There are some early stored program computers that use combinations of different types of memory, but the final interface was unified. For a Harvard machine, you don't have that restriction. This means you can have radically different types of storage for code and data. The actual pathways that connect the code stores and the data stores to the processor, those can also be different. Heck, you don't even have to use electronic storage for code. The Harvard Mark I, the progenitor of this
Starting point is 00:48:51 architecture, used a reel of paper tape to store programs. Jumps and loops were done by manipulating the actual tape. So the sky's the limit here. So how was the Harvard architecture used in the TMS-1000? Simply put, it allows for separate technologies to be used for RAM and ROM. The RAM inside the chip was pretty standard stuff, just silicon gates. ROM, however, was... Drumroll, please. Mask ROM! That's right!
Starting point is 00:49:24 The thing is using the same technology that we saw in TI's calculator on a chip. This is exactly why I was saying I don't get why they changed the 1802 naming convention. They're kind of the same chip. Now, I can already hear the complaints. But Sean, you promised a programmable microcontroller. Well, I can keep that promise. The TMS-1000 series was programmable, just in a roundabout way. This is, honestly, kind of a neat way to handle things given the restrictions. TI offered a service to clients that was hosted on some of TI's own mainframes. The service was a TMS-1000 simulator.
Starting point is 00:50:10 Third-party programmers that were contracted with TI could hop on a mainframe and work up their code on a software-defined 1000. This approach isn't exactly groundbreaking, at least the components aren't. Processor simulation was already a mainstay in the microprocessor field. That was how most early microprocessors were programmed. One of TI's clients would work up the code they wanted on their chip. Then TI would burn that code onto some 1000s. At the end of the process, a client would get these almost custom chips. They were using a standardized platform, but with custom-etched ROMs. In my opinion, that's pretty slick. Customers got chips tailored to their specific needs, and TI didn't have to design those chips from scratch.
Starting point is 00:51:00 Costs could be kept down, as could turnaround time. That said, you know, this is still a pretty inconvenient process. Me as a hobbyist, well, I couldn't go out and buy up a single TMS-1000 with a dinky little program on it. This may also come as a bit of a shock. Microcontrollers, at least in the modern day, are kind of known as a hobbyist's favorite tool. You can get them for pennies on the dollar, program them at home, and just glue them onto anything that's conductive. So why did TI go the mask route?
Starting point is 00:51:36 Well, welcome to the Wild Speculation Corner, where I try to string together cogent guesses at questions. Now, the way I see it is there are two or three main benefits to the mask ROM in the TMS-1000. First is complexity. The only real option for silicon ROM besides mask ROMs was the EPROM. These were chips that could be electronically reprogrammed. Essentially, you could store non-volatile data using a special programming circuit. If you wanted to change code, the EPROM could be erased using UV light of all things and then reprogrammed. The method of this programming is kind of the key here. EPROM chips have exposed data and address buses. That is to say, they have
Starting point is 00:52:27 external pins for specifying an address and data to be stored at that address. In the case of Intel 1702, the first EPROM on the market, that comes out to 8 pins for the address bus and 8 for data. We're looking at 16 pins just for memory access. To write a byte to ROM, you first select the address on the address bus, then you send a high voltage signal into the data bus. The operation, once again, requires 16 pins. That's not to mention a few pins for signaling writes or reads, handling power, and a few other support tasks. This specific 1702 EPROM comes in a 24-pin packing, just to give you an idea of size. The TMS-1000 came in a number of different packagings, the smallest with 28 pins. Given the relatively small ROM space, you wouldn't need all that many pins for programming.
Starting point is 00:53:27 Maybe, say, 14. 4 for the data bus, which is already in the pinout, and then 10 for addressing. That's assuming a 1024-byte ROM in 4-bit bytes. So we end up with 14 pins that are now dedicated just to writing this theoretical EPROM. That leaves us with 14 free pins. At least two get taken up for power and ground, and at least one for an oscillator. That gives us 11 pins left for outputs and inputs. That can be tenable, but it would severely limit what the TMS-1000 could do. It would limit the number of devices it could connect to. This isn't even accounting for the increased complexity of using an EPROM.
Starting point is 00:54:11 More of these fancy programmable ROMs will take up more space on the silicon die. It'll just need more supporting circuitry on the chip than, say, a mask ROM. So while an EPROM would be a nice addition, it would have added new issues. There's also just the matter of cost. The TMS-1000 was being developed as a pretty cheap device. It was the successor to chips made for cheap pocket calculators. The Intel 1702, just an EPROM alone, and really the du jour choice of EPROM in this era,
Starting point is 00:54:47 cost $13 per chip. That might sound cheap, but check this out. In 1974, you could buy TMS-1000s in bulk for $2 a chip. $2! I know that clipped. I'm keeping it in there because that's a screaming deal. There is some wiggle room here. The 1702 price is for a single-unit buy in 1976. The 1000 is quoted in bulk for 1974. But the difference here should be very clear. A simple EPROM was more expensive than an entire TMS-1000. Alright, so with all the pieces in place, I think we can answer my big question. Microcontrollers are cheap and handy devices. We've already seen how they were able to hold their own in the calculator wars. So, why not apply the same technology to a fully-fledged computer? The easy answer is, you can't. You can't just use a microcontroller to replace a whole computer,
Starting point is 00:55:54 at least not for a general-purpose computer. The price point is enticing, but it's kind of a trap. The TMS-1000 is so cheap because it's meant for a very small niche, for very small-scale applications. Its whole design revolves around simplicity and cost-cutting. Let's go by way of example. Perhaps the most common operation on a computer is loading a program. That's a basic task that happens on computers since time immemorial. Or, well, since the 1940s at least. That process is really simple. You instruct the computer to load a program from some type of secondary storage. The machine then reads data into memory, be it from disk, tape, or card. Then the computer jumps to where the program was loaded
Starting point is 00:56:46 in memory and begins execution. You might see the missing link here. The TMS-1000 can only execute code that's stored in ROM. That ROM can't be changed once the chip rolls out of the factory. In other words, there's no way to load a new program. The 1000 is etched to do one thing over and over until it loses power or gets destroyed. You don't get to change code on the fly, you just don't. There are some ways around this, but that takes us into the territory of pure hackery, and at that point, just buy a computer, dude. So yeah, the TMS-1000 is totally a computer, it's just not a general purpose computer. It's designed to fill one role, whatever that role may be. You aren't going to see a microcontroller like the 1000 taking the
Starting point is 00:57:39 place of a whole desktop, but it definitely has a home in more simple applications. What were some of these applications? What were some of these simple roles? The canonical example is the Speak & Spell. This was a toy from the late 70s that sounded out words that were typed into it. The Speak & Spell could also be used to play simple spelling games. This machine used a TMC-0271 with two support chips, one for external data storage and one for voice synthesis. The 0271 is a member of the 1000 family, just with more ROM and RAM and some extra external pins. I know, TI has kind of awful naming conventions.
While this is a neat little use of the chip, it's not very satisfying to me. I want something big and industrial to really show off the distinction between microprocessor and microcontroller. But that's actually hard as nails to track down. The 1000 series is most prominently used in calculators and toys. I mean, that's kind of its lineage. It came about from devices geared towards single-chip calculators. I keep running into sources that say the 1000 was used in just about everything, but there's a distinct lack of specifics. And really, I think that's probably emblematic of how a microcontroller should work. It's not flashy. It's not powerful. These are little chips that get thrown into devices to add just a little bit of smarts to replace a complex circuit with a tiny blob of code.
Starting point is 00:59:21 So hey, next time you see a circuit board from the late 70s or early 80s, pause for a second. You might see a weird little chip from Texas tucked somewhere inside. Alright, that brings us to the end of this episode. We've seen how Texas Instruments developed the first proto-microcontrollers during the turbulent years of the calculator wars. We've also looked at the greater context around the first microcontrollers proper. I think that's crucial because, when you get down to it, the microcontroller is one possible solution to a larger problem.
Starting point is 01:00:00 That problem, of course, is how to make more accessible calculators. So this was kind of a niche thing that got out of control. Intel went on a route that would lead to commercial microprocessors. Mostek went with a totally purpose-built chipset that kinda sucked. And TI went somewhere in between the two extremes. That middle route led to small and flexible microcontrollers. This is also one of those episodes where the history connects directly to the current day. Modern microcontrollers are pretty similar to the TMS-1000, just vastly improved. Chips like ATmegas, which are used
Starting point is 01:00:41 in many Arduino boards, are still Harvard architecture machines. At least, mostly. It's a little more complicated than that. The point is, the basics that we've outlined today still apply to microcontrollers in the modern day. Newer chips are also a lot easier to program. I guess what I'm saying is that if the TMS-1000 sounds at all interesting to you, I'd recommend getting a microcontroller board. I have the most experience with Arduino boards, but I hear the Raspberry Pi Foundation is putting out a pretty good microcontroller these days. They're plain useful devices, and if I may be so bold, they're a lot of fun to work with.
Starting point is 01:01:23 And ultimately, that's the legacy of the TMS-1000. The same technology that made it a good platform for industry has also led to a good platform for hobbyists. It just took a few years to get that tech into our hands. Thanks for listening to Advent of Computing. I'll be back in two weeks with another piece of computing's past. If you like the show, there are a few ways you can help support it. If you know someone else who'd be interested in the history of computing, then please take a minute to share the show with them. You can also rate and review on Apple Podcasts and I think Spotify now, too.
Starting point is 01:02:01 If you want to be a super fan, you can support the show directly through Advent of Computing merch or signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction of the show, and bonus content. I think it's about time I start planning the next bonus episode, so keep your eyes out. You can find links to everything on my website, adventofcomputing.com. If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.
