Advent of Computing - Episode 64 - Gemini's Tiny Machine

Episode Date: September 5, 2021

Today we are talking about computers in space! 1964 saw the launch of Gemini 1, the first spacecraft to carry an onboard computer. The aptly named Gemini Guidance Computer was responsible for guidance, navigation, and safe reentry. Built by IBM, it weighed in at a tiny 59 pounds. For 1960s technology there just isn't any comparison to make; it was an amazingly small machine. What secrets does it hold? Did IBM crack some secret code to build such a tiny computer?

https://www.ibiblio.org/apollo/Gemini.html - Overview of the Gemini Guidance Computer
https://history.nasa.gov/computers/ch1-1.html - Official NASA History
https://www.ibiblio.org/apollo/Documents/GeminiProgrammingManual.pdf - How the thing was programmed

Transcript
Starting point is 00:00:00 What's cooler than a spaceship? Now, that's pretty obviously a trick question because, let's be real, nothing is cooler than a spaceship. But seriously, space exploration is fascinating and really captivating to think about. On a surface level, it's just plain neat. The idea that humans have been able to send robots to other planets? That's pretty cool. Not to mention all the crewed missions. Humans have walked on the moon.
Starting point is 00:00:30 I know that's just a fact at this point. It's a historical event. But sit and think about that for a second. It's wild. The fine details of space exploration are just as wild. Everything from the engineering to telecom and ground support. Rocket science is very literally only part of the equation here. Now, this is another fun thing to puzzle over. What do you think was the first computer in space?
Starting point is 00:00:57 Space travel is, after all, a very complex affair. Even the earliest crewed flights required a lot of calculations and a lot of accuracy to work. You need the kind of real-time mathematics and reactions that computers are good at. Well, as it turns out, computers were a slightly later addition to spacecraft. The first spaceships, the first occupied capsules in orbit, well, they didn't have their own computers. They blasted off, orbited, and returned, all using calculations radioed in from ground control. But that only worked for the most simple of flights. As more ambitious plans were put on the table, technology had to adapt. In April of 1964, Gemini 1 launched from Cape Canaveral,
Starting point is 00:01:46 and it carried on board the first space-bound computer. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 64, Gemini's Tiny Machine. Today, we're going to be talking about spaceships, or at least the smart bits that make them run. We're going to be taking a look specifically at the Gemini Guidance Computer, sometimes called the OBC or Onboard Computer, aka the first computer in outer space. I think the best way for me to introduce this topic is to explain why I'm covering it. Now, you might already know this. I think I've talked about it on the show a little bit before, but I keep a massive spreadsheet of every topic I want to cover and every topic that I have
Starting point is 00:02:37 covered. It has a listing for all episodes that are all lined up in neat order with dates and times. Each episode has a checklist of which stage my research, scripting, and production is at. Plus, they all have this column that I just call genre. That last column says what type of show I'm planning to work up. It's basically just a note to myself so I can quickly look at the sheet and go, oh yeah, that's what this line item is about. That column ranges in values from event episodes, which cover some happening that occurs within fixed dates, computer episodes, which, well, they cover just one specific computer, lineage episodes, which go over the development and spread of something,
Starting point is 00:03:20 and quote-unquote exploration episodes. Those are the ones that tend to kind of go off the rails that I really like to produce. Anyway, the Gemini computer has been on the sheet for a while, and that one was marked as an event episode because I wanted to address how that system played a role in early space exploration. I knew there were some cool aspects of the machine. I knew it was designed by IBM in 1962-1963. It weighed only 59 pounds, and it was built to operate in space. I figured the episode would mostly be about avionics, some technical stuff about the system, little Cold War set dressing, but mostly about the event.
Starting point is 00:04:00 Hence, this would be an event episode. But that was subject to change. Friday night for me is D&D night. Some friends and I meet up online and play. We do technically have a set meeting time, but you know how things go. Inevitably, someone is late. I tend to show up early just to hang out while everything gets set up. Anyway, most of us were online talking,
Starting point is 00:04:24 and I mentioned how I was planning to produce a podcast about the Gemini computer. I ran down the facts I had on my sheet. IBM, 1962, 1963, small, space. Then one of my friends asked me something that I hadn't considered. He wanted to know if the design of Gemini's guidance computer had an impact on later systems. It had to be hugely miniaturized, but did any of those techniques carry over to other computers? Once I heard that, I basically had to change the column for the episode from event to lineage, but with a big added question mark. Just for some perspective here, the IBM PC, made by the same company as the Gemini Guidance Computer, was released in 1981. It weighed 28 pounds when fully configured. It's a little bit of ballparking here, but Gemini's computer from
Starting point is 00:05:20 1963 is about twice the size of a PC. We're looking at a ridiculously compact computer, especially for the time period. IBM must have done something wild. They must have made some new technology to make the machine so small, right? So what did they do? And did those changes just fall by the wayside, or did IBM rehash them in later computers? As I started digging, the story really opened up. There's a lot more going on here than I initially imagined, and a lot of it isn't what I expected at all.
Starting point is 00:06:03 So this episode, the question of lineage is going to be key. We'll be examining how IBM built a tiny computer, one small enough to almost be portable, at least on a rocket. Today we're going to talk about space travel, and I want to look at why a computer was needed in space in the first place. What did it do up in orbit? And along the way, we'll try to figure out what technology made this all possible. Was this a spark of an oncoming shift in technology, or something completely different? To kick things off, we're going to need to cover a little bit of space history. While not strictly necessary, I think this is important to keep everything here in context.
Starting point is 00:06:45 This episode will be mainly covering the Gemini program, that's the second American human spaceflight program. It ran from 1961 to 1966. It's smack in the middle of the Mercury and Apollo programs. Mercury was, of course, NASA's first human spaceflight program, with Apollo being the program that landed humans on the moon. Gemini was a bridge between the two programs. Mercury was all about just trying to get Americans into space. Apollo was the final push in the golden era of the space race, getting an American flag planted on the moon by American boots. Gemini's main goals were to plan, prepare, and test new methods and technologies that would be needed in Apollo. Because of that connection, we're going to need to address some aspects of Mercury and Apollo in order to talk about Gemini. It just comes with the territory, I figure.
Starting point is 00:07:36 So, let's start off by looking into Mercury and how computers factored in to the very earliest US spaceflights. As this is Advent of Computing, we do have a fine eye for math, so let's start off by taking a look at a typical Mercury mission and figure out where calculations are needed. At least, you know, the crucial calculations. Mercury was perhaps the wildest phase of NASA's operational history. That may be putting things lightly. Like I touched on, the whole goal was to get Americans into space.
Starting point is 00:08:12 Just a few orbits was enough, followed by a safe return and recovery. That's the simplest possible task. In April 1961, the USSR had accomplished just that feat with their Vostok 1 mission. Yuri Gagarin was launched into space atop a rocket, completed a single orbit, re-entered the atmosphere, and landed safely back on terra firma. The US space program followed hot on the heels of the Soviet program, launching Mercury-Redstone 3 in early May 1961. Astronaut Alan Shepard became the second human to reach space. Now, if you're familiar with military history, then the mission name might be a slight tip-off to what we're working with here.
Starting point is 00:08:55 This wasn't just Mercury III, it was Mercury-Redstone III. The stack that launched Shepard into space consisted of a Mercury capsule and a modified Redstone ballistic missile. Yeah, at this point, there wasn't really such a thing as a quote-unquote space launch vehicle. So NASA was using modified missiles. I said we're dealing with a wild part of NASA's history, right? Essentially, the nuclear warheads that were usually installed on Redstone rockets were swapped out for spaceships. Then it was bombs
Starting point is 00:09:31 away, but it was just going straight into space. Early Mercury missions were all suborbital, meaning that while they did reach space, they didn't have the proper velocity and trajectory to enter into an orbit. Redstones were eventually switched out for Atlas missiles, an early intercontinental ballistic missile which was powerful enough to achieve full orbital flight. There are a lot of technical differences between Atlas and Redstone missiles, but the tooling here remained largely the same. I mean, bottom line, these are still modified weapons with space capsules on their tips. The Mercury capsule itself was also primitive, at least as
Starting point is 00:10:12 far as spacecraft go. The capsule was single occupancy, just enough space for one astronaut. There were relatively few controls, all things considered. The capsule had jets for adjusting pitch, roll, and yaw. This really just changes how the thing was oriented. There was also an external retro rocket pack for initiating re-entry. Firing the retro rocket slowed the capsule, dropping it out of orbit. So all you could really do once in orbit was come back down. Control systems inside a Mercury capsule were minimal. The astronaut had a joystick for controlling the craft's orientation and a button for firing the retro rockets. The rest of the controls were for things like communications, life support,
Starting point is 00:10:56 and mission status. Rockets are cool and all, but where's the math? The heavy lifting really comes down to launch, tracking, and re-entry. The launch side of things was completely handled by ground control. Redstone and Atlas missiles were both guided via radio communications from a ground control station. Astronauts weren't in control of that aspect of the mission at all. The calculations for proper thrust and steering were already done at this point. Any fine adjustments were made on the fly by computer systems on the ground. Once in orbit, things got a little bit more complicated. Ground control for Mercury missions was based in Cape Canaveral, Florida, but it was really a global affair. Radar stations located around the world tracked Mercury capsules,
Starting point is 00:11:46 relaying radio telemetry back to a communications center at Goddard Space Flight Center near Washington, D.C. That's also where computers make their first big entrance into our story. The largest computational task, at least during active missions, came down to tracking and predicting the capsule's location. In general, NASA already knew roughly where the Mercury capsule would be, at least in theory. Real-world factors don't like to play nice with theory, so it was crucial to track where the spaceship actually was and then use that data to predict where it was headed. There was also the ever-present risk of some type of disaster, since space travel is not safe. NASA had to monitor everything and be prepared for emergency.
Starting point is 00:12:34 A complicating factor was how the Mercury capsules landed. There were some initial plans to mount an airfoil on capsules, allowing them to glide to the surface, kind of like the space shuttle would eventually do. In theory, that meant you could even have a Mercury capsule land back at Cape Canaveral itself. That would make the trip a full circle, and it would be really easy for ground control. But that ended up being kind of impractical. Instead, after re-entering the atmosphere, capsules deployed a parachute to slow down, then splashed into the ocean. This made it imperative for NASA crews to get to the landing zone as quickly as possible,
Starting point is 00:13:13 since, you know, it's the open ocean. Capsules could float, and some later capsules especially would have buoyancy aids, but it would be unwise to test how long that would last with a live subject. All this meant a lot of math. Luckily, NASA had a lot of information to work with. Two main channels of data were fed into the computer center at Goddard. Radar that told the location of the craft, and telemetry sent directly from the capsule. From that, a capsule's current location and velocity were well known, as well as the spacecraft's overall status. That meant everything from fuel level and cabin pressure down to the astronaut's heart rate were back in a computer
Starting point is 00:13:55 somewhere in a NASA office. All that was fed into a pair of IBM's 7090 mainframes at Goddard. One functioned as a primary and the other as a live backup. This pair of computers projected the capsule's next few orbits, the retro-rocket firing parameters for eventual re-entry, and the splashdown location. When it was time to come back down to Earth, ground control just needed to check their computer readout. In most flights, the firing parameters, as in when to fire thrusters and for how long, were relayed over radio to the astronauts. But Mercury capsules could be fully controlled from the ground over radio. This was used for test flights and, in theory, could be
Starting point is 00:14:37 used if an astronaut became incapacitated. Ground control computers served another big purpose that I've been hinting at a little bit. That's safety. Ground control systems were monitoring for issues during the entire mission. That started well before liftoff and continued until splashdown. If a problem cropped up, it had to be noticed immediately and dealt with. You can't mess around in space. Part of these safety routines was used for calculating abort scenarios. If an error occurred on the capsule and the mission had to be ended early, the 7090s already had contingency plans calculated. Retro-rocket firing parameters could be radioed over immediately with alternate landing sites already on screen. The overall point here is that all the smarts that made Mercury run and kept it safe
Starting point is 00:15:26 were located on the ground. Everything was remotely controlled and remotely monitored. There was a bit of a delay and a lack of autonomy here, but for Mercury missions that was fine. Each Mercury mission consisted of going up, orbiting, and heading back down and then landing somewhere in the ocean. The most crucial calculations needed were the firing times for retro rockets, that is, how long to blast in order to fall from orbit safely, and corollary to that, how to properly orient the spacecraft. A capsule had to re-enter the atmosphere at a very specific angle. Too steep and friction and compression from the atmosphere would cause the capsule to burn up. Too shallow and the capsule would essentially bounce off and enter some erratic orbit.
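To put a rough number on that retrofire arithmetic, here's a back-of-the-envelope sketch built on the vis-viva equation. To be clear, this is my simplification, not NASA's actual procedure: it assumes an instantaneous retrograde burn, a spherical airless Earth, and made-up altitudes that are merely in the right neighborhood for a Mercury flight.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def deorbit_burn(orbit_alt_m, target_perigee_m):
    """Delta-v for an impulsive retrograde burn that drops the perigee of a
    circular orbit down into the atmosphere, via the vis-viva equation."""
    r = R_EARTH + orbit_alt_m
    rp = R_EARTH + target_perigee_m
    v_circular = math.sqrt(MU_EARTH / r)          # speed on the circular orbit
    a = (r + rp) / 2                              # semi-major axis of the transfer ellipse
    v_after = math.sqrt(MU_EARTH * (2 / r - 1 / a))  # speed at apogee of that ellipse
    return v_circular - v_after                   # m/s that must be shed

# A Mercury-like 160 km circular orbit, aiming the perigee down at 60 km:
print(f"retrograde delta-v: {deorbit_burn(160e3, 60e3):.1f} m/s")
```

For these made-up numbers it works out to a burn of a few tens of meters per second. Divide that delta-v by the retro pack's acceleration and you get a firing time. Getting the inputs right, the actual position and velocity, is exactly why all that radar tracking mattered.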
Starting point is 00:16:19 You really just got one shot at coming home. While crucial, re-entry wasn't a super dynamic process. Once everything was lined up and thrusters were fired, you were on your way. Orientation and speed needed to be monitored and adjusted, but you weren't going to run into anything up there. Generally speaking, space is big and empty. There was only ever one Mercury capsule in orbit at a time. An astronaut didn't need to dodge anything in orbit, they didn't need to pass through some door in the upper atmosphere,
Starting point is 00:16:53 they just needed to hit the right parameters at roughly the right location. A radio delay was fine. The lack of ability to run your own calculations in a pinch was also fine. Mercury was just the simplest possible case for human spaceflight. The program ended in 1963 after six successful crewed launches. By that point, plans for Apollo were taking shape. But to get there, NASA would need to gain experience with more complicated missions. Thus, we enter the Gemini program. Technically, Gemini started in 1961, well within the time bounds of Mercury. Everything overlaps a little bit here. There were three main goals for the Gemini program. First, just run longer space flights.
Starting point is 00:17:38 The longest Mercury mission was Mercury Atlas 9, which lasted 34 hours. That was a start, but NASA needed to do better. A round trip to the moon was estimated around 8 days. To add in some margins, NASA wanted to test a 14-day mission. Not all Gemini flights would last that long, but they would all be a good deal longer than any other US space flights. The second big target was to run a few spacewalks. This doesn't really have a lot of bearing on our story, it's just cool. The goal was to test out spacesuits and see what issues astronauts would face working outside the capsule. The final goal, and the one that will have the most impact on this episode, was to test orbital rendezvous and docking. That is, to have two spacecraft approach each other, connect, and then disconnect without issue. Specifically, NASA planned for a few different
Starting point is 00:18:34 types of orbital rendezvous. First, they wanted to get two Gemini capsules in orbit and have them safely approach each other without colliding. The point of this exercise would be to demonstrate maneuverability in orbit and also the ability to find another moving target. Second was actual docking. For this, a separate docking target would be launched, called Agena, or the Agena Target Vehicle. A Gemini capsule would then launch, reach orbit, find the Agena, rendezvous, and eventually dock. Agena modules were basically booster rockets with a docking ring. So once you're connected up, they could be used to boost a Gemini capsule to a higher orbit or just rocket around in space.
Starting point is 00:19:24 So we can already see that the complexity of these missions is increasing quite a bit. It's a lot more than go up, orbit, come down. So what kind of math was involved here? Well, we have all the fun of orbital and re-entry calculations that were used in Project Mercury, plus calculations for rendezvous. And to top it all off, everything has to be doubled, since some missions will have two Gemini capsules in orbit, and some will have a Gemini capsule and an Agena target. Another fun fact is that orbital mechanics can get wickedly complicated. I'm not just talking in abstract here, I know this from personal experience. Back when I was an undergrad, I wrote a lot of software for simulating gravitational interactions, and let me tell you, it takes some pretty nasty math to get everything right.
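To give a flavor of what that kind of software does, here's a minimal sketch of a numerical gravity simulation. This is not my old undergrad code, and it's certainly not NASA's; the masses, positions, and step size are all stand-ins. It just shows the basic idea: no formula gives you the answer, so you march time forward in small steps.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(masses, pos, vel, dt):
    """Advance an n-body gravitational system by one small time step.
    pos and vel are (n, 3) arrays of positions and velocities."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]  # vector from body i toward body j
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    vel = vel + acc * dt  # update velocities first (semi-implicit Euler)
    pos = pos + vel * dt  # then positions
    return pos, vel

# Earth plus two capsule-sized bodies in low orbit. No exact solution here,
# just thousands of tiny steps, each one a little bit wrong:
masses = np.array([5.972e24, 3.2e3, 3.2e3])  # kg, capsule masses assumed
pos = np.array([[0, 0, 0], [6.53e6, 0, 0], [0, 6.53e6, 0]], dtype=float)
vel = np.array([[0, 0, 0], [0, 7.8e3, 0], [-7.8e3, 0, 0]], dtype=float)
for _ in range(1000):
    pos, vel = step(masses, pos, vel, dt=1.0)
```

Every call to step accumulates a little error, and tightening that error is exactly where the computing power goes.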
Starting point is 00:20:15 The big hurdle here is something called the n-body problem. The equation for gravitational interaction between two bodies is well known. It's easy to calculate, and it's easy to solve for needed parameters. So, for instance, let's say you have two Gemini capsules in deep space. You know their masses, their locations, and relative velocities. It's really simple, almost trivial, to figure out how to make those two capsules approach each other. You just need to pick a rendezvous point, find the change in velocity needed, and then figure out how to fire your thrusters. At every step, you have a nice equation you can solve. You have what's known as an
Starting point is 00:20:56 analytical solution. That's just a fancy way to say you can make an exact solution without needing to plug in numbers or make approximations. Once you're up to three or more bodies, you just kind of get wrecked. That's the best way I can think to describe it. Especially when you take into consideration the larger gravitational fields going on in orbit, you have Earth and the Moon just to start with, so you don't have a neat field of just two objects interacting. For a rendezvous mission around Earth, you definitely have at least three bodies. You have two Gemini capsules, the Earth itself, and then the moon, other nearby objects. It gets really sticky really quick. Gravitational attractions between capsules may be small,
Starting point is 00:21:47 but it's still a factor that matters. When you're in this more complicated domain, you can no longer solve for orbits analytically. You have to use numeric approximations. That makes the math a lot more nasty. The only saving grace here is that computers are good at numeric approximations, but depending on the precision needed, it can take a whole lot of computing power. But even just outside the in-body issues, the maneuvers needed for rendezvous are just more complicated than anything that had ever been done during Mercury. First off, you need to be able to find the target. That requires something like radar or radio range finding, since from far enough away, a capsule will just look like a star on the horizon. You also have to match orbits. In practice,
Starting point is 00:22:39 that means you need to be able to match speed, since stable orbits require specific velocities to be maintained. Add into that the ability to catch up to another capsule, basically to boost your speed for long enough to approach the target. Once you get close enough, you need to have fine controls to line everything up. We're getting a pretty big checklist of things that need to be calculated. From this list, it should be pretty obvious that the old Mercury capsules just couldn't cut it anymore. The logical solution was to just make everything bigger. For this section, I'm pulling a lot of information from Computers in Spaceflight: The NASA Experience. It's direct from NASA's history office, so it's as good of a source as
Starting point is 00:23:23 we're going to get. It also gives a good summary of the changes made in the transition to Gemini. Quote, At first glance, the Mercury and Gemini spacecraft are quite similar. They share the bell shape and other characteristics, partially because Gemini was designed as an enlarged Mercury and because the prime contractor was the same for both craft. The obvious difference is the presence of a second crew member and an orbital maneuvering system attached to the rear
Starting point is 00:23:51 of the main cabin. End quote. So, very literally, Gemini capsules were just bigger beasts. The so-called orbital maneuvering system was basically just a jetpack strapped to the back of the capsule. It had extra fuel and rockets for dealing with more complicated maneuvers once in orbit. Larger capsules also meant the crew could be expanded to two astronauts. As the official history put it, this helped mitigate the complexity of Gemini missions, and it opened up more possibilities for experiments and just more work done in space. But importantly, it meant that there could be some specialization on a mission,
Starting point is 00:24:37 and that there could be more complicated controls inside the capsule. To throw this all into another list, we have more complicated missions, bigger spaceships, more rockets, and more eyes and hands. This would set the stage for the introduction of some new technology, the first space-bound computer. Now, it's not entirely clear when NASA decided to arm Gemini capsules with computers. It must have been pretty early in the project, sometime during the first few months of 1962 perhaps. There's a bit of a maze of contracting and subcontracting that was used to construct the actual spacecraft, but we do know that on April 19th of 1962, IBM was awarded a contract to build Gemini's onboard guidance systems.
Starting point is 00:25:24 IBM was awarded a contract to build Gemini's onboard guidance systems. This also gets us to an issue with sources. NASA's official history is something of a double-edged sword. It compiles a lot of primary documents and a whole lot of interviews into a coherent and detail-oriented story. But a lot of those primary sources aren't readily accessible. This is especially true of interviews. I straight up haven't been able to find any transcripts to consult. So we end up having to look at some events
Starting point is 00:25:52 through the lens of what NASA's history office thought was important. Some of the interviews are held in the National Archives, but they're on physical tape and haven't been digitized yet. Plus, add in the subcontracting arrangements, and that makes tracking down memos and reports pretty difficult. The Gemini capsules were developed by McDonnell, who then subcontracted IBM to build the onboard computer. The result is that we don't have a lot of insight into the actual development cycle of the Gemini Guidance Computer. We do have plenty of information about the final machine, and we do have a little bit of stuff about its
Starting point is 00:26:30 development, but we don't have a lot of answers to why questions. With that aside, let's get back to the machine. In addition to all the requirements we've hit on and all the tasks that this computer needed to handle, IBM was under some strict engineering requirements. On space flights, size and weight are at a premium. You couldn't just put a mainframe in a space capsule. This would also have to be a really rugged computer. In general, I'd say a spaceship is probably up there for the worst places a computer could be. There are new and exciting types of radiation in space that we just aren't exposed to on the surface. There isn't any gravity, so if you have a mechanical component, that has to be built to account for that. Plus, takeoff and reentry are very violent events with a lot of shaking and jolting.
Starting point is 00:27:23 A delicate machine wouldn't survive in space. All things considered, what IBM was able to hammer out is nothing short of breathtaking. The project started in 1962, with the first computers delivered by 1963. It weighed in at a diminutive 59 pounds. It had a small size to match, only 15 by 12 by 18 inches. Now, the guidance computer wasn't a perfect little rectangular box. Instead, it was slanted and formed to fit in an extra storage bay inside the capsule. If it had been scrunched down, then I bet it'd be closer to a foot cube. This is a truly tiny machine, especially for the era. In 1963, there just weren't small computers. There just isn't a comparison to any
Starting point is 00:28:17 contemporary machine, at least not any normal system that I've been able to find. IBM was able to shrink this guidance computer down so much because it was highly specialized. It did a handful of tasks, and it did them well. That's something that we'll get into in more detail, but keep that point in mind for now. I think it has a lot of bearing on our main question about lineage this episode. So what kind of futuristic technology can we find in this tiny computer? What code did IBM crack to make such a small machine? Well, I think that's the most wild part. The Gemini Guidance Computer, at least in terms of the parts it's made out of,
Starting point is 00:29:00 is a very traditional machine. It's a transistor-based computer, which I think by 1963 is the only real choice. Specifically, though, this computer used discrete transistor logic. There are no integrated circuits. Why is that so impressive? I think it's worth examining. Plus, it's just kind of cool. An integrated circuit is basically a miniaturized chip constructed on a tiny silicon wafer. They're the little chips that make up today's digital technology, but they don't necessarily have to be digital, to be fair. Each component that's built on the wafer, transistors, capacitors, or what have you, is tiny. In the early days, this meant a transistor could be a few microns wide, but today we're on the scale of a few nanometers wide. The two key upsides to integrated circuits are
Starting point is 00:29:52 size and power consumption. Depending on the technology used, an integrated circuit can basically just sip on power. It should also come as no surprise that ICs can radically reduce the size of a device, such as a computer. With each component being miniaturized so much, you can pack a lot more transistors into a much smaller space. The first integrated circuits were built in 1959, but they wouldn't really see much use in computers until the end of the 1960s. In the early 60s, ICs just weren't a viable choice.
Starting point is 00:30:29 The technology was on the cusp of usefulness, but it wasn't proven for computing. So the Gemini computer only used discrete components. That meant there's no chip on the board that packs more than one transistor. Everything was made up of good old bulky transistors, resistors, and capacitors. Discrete components work exactly the same as their integrated counterparts, but they take up more space and usually require more power. Another just interesting side effect is that circuit boards for discrete-based machines can get a little more complicated. Integrated circuits are tiny circuit elements, so in addition to just being smaller, they can simplify your general circuit design. You have a whole circuit on a wafer
Starting point is 00:31:17 instead of on your PCB. The IBM team working on Gemini's computer didn't get any of those nice advantages. They were just a few years too early. That said, IBM did take some measures to make their machine as compact as possible. And, sadly, this gets into one of those frustrating sourcing issues again. There are a few preserved Gemini guidance computers floating around in museums and archives. There are a lot of pictures of those fully assembled machines, but I have yet to be able to find any photos of the disassembled computer. And I get it, these are museum pieces. Prior to that, they were government machinery that
Starting point is 00:31:59 was part of a somewhat secretive space program. It's not really the kind of thing that I imagine curators are eager to rip apart, but this means that we don't get a good look at the machine's innards. So this next part is slightly speculative. Anyway, the parts of the guidance computer circuit that we can actually see are dominated by these big white rectangles. On the reverse side, you can see a neat double row of solder blobs. As near as I can guess, these are packs of transistors, something like a handful of discrete components sealed up in a module. Now, deeper into speculation here, but I'd wager IBM was doing this, one, just to cram everything closer together, two, to make it easier to replace components, and three, maybe
Starting point is 00:32:53 to give some extra radiation protection to semiconductor components. We can see a very roughly similar thing going on with Apollo's guidance computer, which used cram-packed and sealed modules to protect some circuits. What I find interesting about the transistor packing is that IBM is basically trying to get some of the advantages of an integrated circuit without actually being able to use integrated circuits. I think that's a big strike against the Gemini guidance computer as some missing link in the development of small systems. Once ICs started showing up in computers a few years later, there isn't any reason to tape a bunch of transistors together. At least,
Starting point is 00:33:38 in this way, Gemini's computer is more of a product of its time than some evolutionary step. It's an expression of how far discrete transistors could be pushed. But that's all really specific processor and logic implementation stuff. As I will always argue, the real core of a computer is its memory. That's what programmers spend the most time working with, so it really defines the feel of a system. What does Gemini's memory look like? Well, I'd say it follows the same theme as its discrete transistor logic. It was cutting-edge-ish, but soon to be upstaged by better technology. The Gemini computer had 4 kilobytes of magnetic core memory. Well, it's very roughly 4 kilobytes. I'll get back to that part in a bit.
Starting point is 00:34:34 First, let's look at the physical implementation here. Starting in 1955, magnetic core memory was the real mainstay of computing memory. It would be in power for about 10 years, give or take. Call it the second decade of computing if you want to be grandiose about it. In the first decade of the discipline, we saw mercury delay lines and a handful of other weird technologies used for memory, but those were all soon replaced with magnetic core. This newer type of memory uses a grid of wires, and at each intersection is a ferrite washer. Basically, each washer can hold one bit of data. A bank of memory is constructed by building a stack
Starting point is 00:35:18 of these grids. In practice, you can make core memory using really, really small washers and really thin wire. Over the technology's lifespan, we start to see finer and finer weaves, leading to more and more dense memory. Once again, 1962-63 was the tail end of this technology, or at least its latter years. The newcomer on the block, integrated circuits, were approaching a replacement. Researchers at Fairchild Semiconductor were producing early integrated circuit-based RAM chips as early as 1963. By 1965, IBM joined the party and started manufacturing its own static RAM chips. By the time Gemini capsules are flying, a Challenger is almost prepared to replace magnetic core memory.
Starting point is 00:36:08 So, at least in physical implementation, once again, the Gemini computer wasn't packing any super-secret technology. Now, the actual layout of memory is where things take a turn for the weird. The Gemini Guidance Computer was a 39-bit machine. Not 8-bit, not 16-bit, not even 32, but 39. And I gotta say, until now, I have never even heard of a computer with that wacky bittedness. Like, not even in passing, not even someone being like, oh, here's a neat fact about a 39-bit computer. In practice, this meant IBM's little beast used a 39-bit word as its core unit of data. That's weird enough, but it goes deeper. These words were broken up into three bytes. Some Gemini documentation called these syllables instead of bytes, and each were
Starting point is 00:37:16 defined as 13 bits long. If you don't spend a lot of time thinking about register and word sizes, I don't blame you. It's a bad thing to do with your life. But let me try to explain just how out of this world those numbers are. Let's use the Intel 8086 as an example, since it's probably what most programmers are somewhat familiar with, at least architecturally. The processor is 16-bit, which means it thinks in terms of 16-bit words and 16-bit long numbers. Registers, the processor's immediate working space, are all 16 bits wide. Each word is composed of two bytes, which are each eight bits wide. You should notice a rhythm here. Every number is a power of two. In the deeper parts of computers, that's just what's
Starting point is 00:38:15 expected. It's the norm. Seeing bittedness, or really any number associated with a computer that's not a power of two, is rare. Seeing odd numbers even more so. There's no reason you can't do it. We can see IBM breaking convention right here, but it's just strange. That power of two rhyming scheme comes from binary, which every computer with very few exceptions uses to represent math. So seeing an electronic digital computer that does use binary, uses ones and zeros, but it has an odd number of bits in each byte, that doesn't sit well with me.
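To see how that layout works in practice, here's a sketch of packing three 13-bit syllables into one 39-bit word. Python's arbitrary-size integers make this trivial. Note that the syllable ordering and the helper names here are my own choices for illustration, not a documented Gemini convention.

```python
SYLLABLE_BITS = 13
WORD_BITS = 3 * SYLLABLE_BITS  # 39 bits: three "syllables" per word

def pack_word(syl0, syl1, syl2):
    """Pack three 13-bit syllables into one 39-bit word."""
    for s in (syl0, syl1, syl2):
        assert 0 <= s < (1 << SYLLABLE_BITS)
    return (syl2 << 26) | (syl1 << 13) | syl0

def unpack_word(word):
    """Split a 39-bit word back into its three 13-bit syllables."""
    mask = (1 << SYLLABLE_BITS) - 1
    return (word & mask, (word >> 13) & mask, (word >> 26) & mask)

# Contrast with the 8086 world: 16-bit words, two 8-bit bytes,
# every size a power of two. Here, nothing is.
print(unpack_word(pack_word(0b1010101010101, 42, 8191)))
```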
Starting point is 00:39:08 number of bits in each byte, that doesn't sit well with me. Once again, this is a point where we aren't looking at some super-secret formula that IBM cracked during Project Gemini. This is a strange computer that was built just before a major technological shift. The immediate elephant in the room, at least when it comes to memory, is why? Why would IBM go with such a strange word-and-byte system? That's one of those questions that's really hard to answer, but I do have a guess. I think it came down to accuracy. Having more bits per digit meant you could perform more accurate operations than, say, an 8-bit computer. I'd assume that somewhere in some memo in IBM's archives, which may or may not still exist,
Starting point is 00:39:55 there'll be an explanation of why 39 was chosen for a specific accuracy goal. But unless someone breaks into IBM or McDonnell, I don't see us finding a satisfying answer. That's the short rundown of the computer itself. So, what all did the guidance computer actually do during a flight? In practice, the onboard computer was part of a much larger system of redundant machines. Ground control played a huge role in the Mercury program, and it kept up its importance during Gemini. In general, everything the onboard computer calculated was also being crunched on the ground at the same time. If the onboard computer had an error, then the correct numbers could be relayed from the primary ground control machine.
Starting point is 00:40:49 If the primary ground control machine ran into an issue, then the correct numbers could be sent up from the backup ground machine. That puts us in kind of an awkward position. The onboard computer can do everything needed for a successful Gemini mission. For some tasks, it was the primary system, while other calculations that the guidance computer ran were used as backup for ground data. Basically, everything is doubled and tripled down for safety here. The other awkward factor is that the Gemini onboard computer wasn't really flashy, at least not in the cockpit. We're dealing with a highly specialized machine that has to operate in really unusual and really confined conditions. Inside the Gemini capsule, astronauts controlled the onboard computer via the Manual Data Insertion Unit, or the MDIU. That was the only direct access to the machine they got. This is pretty close to the most bare-bones computer interface possible.
Starting point is 00:41:49 The MDIU consisted of a numeric keypad, a seven-digit display for showing memory addresses and their values, and a few extra buttons for entering, reading, and clearing data. That was the most direct window into the computer. From the MDIU, an astronaut could read outputs from a program and, if something went wrong, maybe try to diagnose system errors. While not technically part of the MDIU, there was also a program selector dial and a stop-execute-reset switch. Now, I really like the idea of a program select dial here. The Gemini system was, technically speaking, a general-purpose computer.
Starting point is 00:42:30 You could run any program on it just fine. But in practice, there were only a handful of programs the computer would ever need to run. As near as I can tell, the program selector switch just told the computer where to jump to start a specific routine. switch just told the computer where to jump to start a specific routine. Then the MDIU proper could be used to input any needed parameters and get any direct outputs. But the onboard computer wasn't just some isolated calculator. It was integrated deeply into the Gemini capsule itself. So while the MDIU was the only part of the control panel dedicated wholly and fully to the computer, there were other inputs and outputs scattered throughout the capsule, both inside and outside. NASA furnished full block diagrams that show how everything in the capsule was connected up,
Starting point is 00:43:16 and it really is a web of wires. On the input side, the onboard computer was receiving data from a reference timer, inertial sensors, ground uplink, radar, and an onboard horizon sensor. The ref timer was used in a lot of calculations, so it's just always in the background to provide some consistent tick. From inertial data, the computer calculated the craft's velocity, which was pushed out and displayed on a dedicated set of dials. The ground uplink was really just a way for ground control to send data up to the capsule. Radar and the horizon sensor were the key to conducting safe rendezvous. And just safe navigation in general. Radar is the most simple one here.
Starting point is 00:44:02 It was used to identify rendezvous targets and then figure out their distance. That made it possible to find a tiny capsule or an Agena in orbit. The horizon sensor is a little more complicated. It basically kept track of where Earth's horizon was relative to the capsule. Readings were used to determine the orientation of the capsule with regard to Earth. Capsule orientation mattered for docking, but also for re-entry. On Mercury, re-entry was mainly done manually, with a few aids inside the capsule. On Gemini, the computer handled re-entry automatically. Just flick the program select switch to the re-entry program, slam execute, and buckle up. When running the re-entry program or a few other navigational programs,
Starting point is 00:44:50 the onboard computer took control of thrusters. In this mode, the Gemini capsule was in full autopilot. Alright, so with all that, where do we stand so far on the main question of this episode? Did the Gemini Guidance computers somehow influence later machines? As it stands, I have to say no, at least from the hardware standpoint. The bottom line is that IBM produced a very specialized machine using very safe technology that was approaching the tail end of its lifespan. Within these confines, IBM was able to squeeze something amazing out. But there isn't any breakthrough technology tucked inside. At least
Starting point is 00:45:33 nothing that would shape later development of more mainstream computers. Everything is very specific to the technology it was using and the niche it was operating inside. But that's just the hardware. Now, Gemini's computer hardware was already pretty weird. Once we get into software, we ascend to an entirely different realm of high strangeness. Well, strangeness and some actually really interesting programming methods. This is where we can see, in my opinion, IBM's real secret sauce. This is where it's important for us to consider the actual scale of this computer's production run. In total, there were 12 Gemini missions.
Starting point is 00:46:21 Each capsule was used for only one mission. So, on the conservative side of the estimation, there were 12 guidance computers ever manufactured. Throw in at least one for testing and the total comes to around 13. Let's just ballpark it to somewhere around just over a dozen computers. Despite the low number of units, the Gemini guidance computer was probably one of the most heavily tested machines up to that point, which really it should be. This was a big contract for IBM. NASA had strict reliability requirements. Any failure could not only lead to massive disaster, but to a very public disaster. The computer and any software on it had to work. It had to be able
Starting point is 00:47:07 to run error-free, and in the event of a problem, it had to be able to recover and cope. The overall parameters of this project left IBM in a bit of an impossible position. How do you test a space-bound computer without putting it in space? How do you test for, say, a catastrophic rocket failure? I mean, NASA wasn't going to send up a capsule full of computer scientists to run some stress tests. NASA definitely wouldn't have let IBM break an orbiting spacecraft to see how the computer would respond. What's a subcontractor to do in this position? The answer is simulation. Essentially, IBM built up a series of software and hardware tools just for testing and development of the Gemini computer.
Starting point is 00:47:54 On the purely software side, we see the Guidance Computer Simulator. This lets programmers run guidance computer programs on more terrestrial mainframes. By using the simulator, code could be tested before it was even loaded into a real guidance computer. In fact, you could write a whole program without ever having to touch a real Gemini machine. Plus, since everything was defined in software, it made debugging a lot easier. The simulated computer's memory was fully accessible to programmers, just waiting to be inspected. This technique of simulating new hardware isn't unique to Gemini. It ends up being used in the computer industry at large in the coming years.
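We don't have the Guidance Computer Simulator's actual code, but the core trick of any instruction-level simulator is a small fetch-decode-execute loop running on the host machine. Here's a minimal sketch for a made-up three-instruction machine; it is emphatically not Gemini's instruction set, just the shape of the technique.

```python
# A minimal instruction-level simulator, in the spirit of running guidance
# programs on a ground mainframe. The three-instruction ISA is invented
# purely for illustration.
def simulate(program, memory, max_steps=1000):
    acc, pc = 0, 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        if op == "LOAD":    # acc = memory[arg]
            acc = memory[arg]
        elif op == "ADD":   # acc += memory[arg]
            acc += memory[arg]
        elif op == "STORE": # memory[arg] = acc
            memory[arg] = acc
        pc += 1
    return memory  # fully inspectable: the debugging win of simulation

mem = [5, 7, 0]
simulate([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
print(mem[2])  # -> 12
```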
Starting point is 00:48:37 When you get down to it, simulation is a really powerful way to approach software and hardware development. Intel would use simulators while they were developing new microprocessors in the 70 software and hardware development. Intel would use simulators while they were developing new microprocessors in the 70s and 80s. This allowed researchers at Intel to try out new processor ideas without having to cast anything in silicon. Perhaps more famously, Microsoft was founded on the use of a simulator. In the mid-70s, Paul Allen wrote an 8080 simulator. That was then used by he and Gates to develop their own version of BASIC, and the rest is history. This is an especially savvy approach for the Gemini project. It meant that IBM could much more rapidly develop new programs
Starting point is 00:49:20 for the machine. It also helped deal with possible scarcity issues. Recall that there are just over a dozen of these computers ever built, but perhaps a few hundred IBM employees on the project. Simulation meant that programmers didn't have to wait in line for a Gemini guidance computer to be free, at least not in most cases. Leaving the purely virtual realm, we enter into test hardware. Internally, this was called the Aerospace Ground Equipment, or AGE. The onboard computer was tricked out with some beefy ports for connecting up to a Gemini capsule's systems. As near as I can tell, the AGE took advantage of those connections. The computer would think that it was plugged into a spaceship,
Starting point is 00:50:07 while it was actually just sitting on a table at some IBM office. Once fully simulated tests were done, a program could be loaded into a real Gemini computer for another phase of testing. The AGE was also used for diagnostics, and this is another interesting design choice on the part of IBM. The Gemini computer could actually tell if it was connected to an AGE. All you had to do was poll a specific I.O. device. From what I've read, it sounds like this was used primarily for sending diagnostic
Starting point is 00:50:38 commands. You'd load up a program that polled the AGE for instructions and then tried to execute them. That way you could test for any issues in the computer. Then we get to integration testing. This part bleeds over from IBM into the larger project. According to NASA's official history, this type of testing was, quote, carried out in the Configuration Control Test System Laboratory, which contained a Gemini computer and crew
Starting point is 00:51:05 interfaces. The mission verification simulation ensured that the guidance system worked with the operational mission program. Further tests of the software were done at McDonnell Douglas and at the PAD. NASA and IBM emphasized program verification because there was no backup computer or backup software. End quote. The CCTS laboratory was where the full Gemini capsule systems were tested and analyzed. The guidance computer did have to work as part of a much larger system after all. So, at least in this lab, IBM was able to connect everything up and see how their computer played along with the full package. As someone who's worked in the software industry, this sounds at least vaguely familiar. What IBM was doing,
Starting point is 00:51:52 in some conjunction with NASA and McDonnell, was developing a method for developing reliable software. It's a bit meta there. Anyway, testing plays a huge role in that, but so do the tools and practices used for testing. The specifics are a little different, but the structure of modern practices is here. Nowadays, most good-sized software development teams will have a member dedicated to this type of work. The name for it varies, and at the gigs I've worked it's been called QA or quality assurance testing, but the idea is the same. You have dedicated personnel that use specialized tools and follow established procedures to test hardware and software. What we're seeing is a deeper specialization in the field of software development. Once again, from the official NASA history,
Starting point is 00:52:43 quote, although the official NASA history. which did not, and what actually occurred in the development of software projects. The SAGE air defense system, the IBM 360 operating system, and NASA's requirements for both spacecraft software and ground-based software were instances of major software projects that directly contributed to the evolution of software engineering. End quote. This wasn't the start of the professionalization of software development. That said, we are definitely seeing a huge leap towards a more professional approach to software. Part of that is instituting ironclad testing. Another part was the drive towards standardization. Now, I think one of the underlying currents of computer history is this grand and sometimes hidden quest to standardize everything. Perhaps my favorite example of this is IAL, better known as ALGOL.
Starting point is 00:53:54 This was a language developed by an international consortium of computer scientists and researchers starting in 1958. The goal was to create a universal programming language, a language that every programmer in the world could speak. Creating a standard like this would have huge advantages. Any programmer could just plop down in front of any computer and set to work with little to no extra training. Software could run on vastly different machines with little to no modification. International collaboration would become second nature. It's a real programmers-of-the-world-unite kind of feel. I can't help but think that these ideas were buzzing around the office when IBM was planning for Gemini. Over the course of the
Starting point is 00:54:39 project, they developed a system that fulfilled a goal somewhat parallel to Algol's grand plan, just on an IBM-wide scale instead of a worldwide one. Basically, IBM built out a set of standards and techniques for programming on Gemini. This allowed any IBM programmer to produce code of consistent quality. There were a lot of moving pieces to this operation, the crown jewel being a little thing called MathFlow. And this, dear listener, is truly where we've reached the bottom of the rabbit hole. MathFlow is not a programming language. There is no compiler for MathFlow, no interpreter, it doesn't even really have syntax in any conventional sense.
Starting point is 00:55:29 MathFlow is a standardized system for drawing flowcharts. Sources on the system are a little ambiguous. The overall software packages that flew on Gemini missions were called MathFlow versions 1 through 7. on Gemini missions were called MathFlow versions 1 through 7, but the actual flowcharts drawn in the process were also referred to as MathFlows. So just bear in mind there's a little confusion here. MathFlow, as in the flowcharts, was how IBM was able to keep up consistent code quality during Project Gemini. These charts show each step a program will take and each equation that needs to be calculated on those steps.
Starting point is 00:56:11 This includes all the fun math equations needed, but also things like conditionals and input-output statements. What I find so fascinating about MathFlow is that it tackles the problem of standardization in a really unexpected way. Before writing a single line of code, an IBM programmer had to sit down and draft the flow. This was all done by hand, it's a pen and paper kind of affair. Then, a second programmer would go over their work, double-checking the flowchart to make sure it followed all the proper rules and made sense.
Starting point is 00:56:47 Once this step was done, the math flowchart represented the overall algorithm in a graphical form. It was totally platform agnostic. And at least within Project Gemini, it was universally understood. I'm not saying it aged well, but you can still easily understand a sheet of math flow. You just follow the arrows and read the equations. The next step was to convert from math flow into Gemini assembly language. This is another place where we see the impact of enforced standards. The conversion process from flowchart to code was 100% unambiguous.
Starting point is 00:57:27 The Gemini Programming Manual, the official document compiled by IBM, goes over the process. It lays out in excruciating detail the steps to produce a functioning assembly language program from the flowchart. We're talking everything from the simple instruction listings down to how to properly sequence math operations. There are even flowcharts in the manual itself detailing the development and testing process, so you get flowcharts explaining how to apply your flowcharts. Here we also get something like a style guide. Besides just explaining how charts and code line up and how memory and instructions work, the Gemini Programming Guide gives specific instructions for how to structure your code. Some of this has to do with the physical limitations of the
Starting point is 00:58:18 computer. For instance, the guide has very specific instructions on how often you can access memory. Making too many sequential operations on memory can use a lot of power. So, accordingly, the manual provides an equation to determine how many operations you can do before you need to let the system idle. Other rules include such hits as define your constants all in one group. The point here is that if you follow IBM's guidelines and stick to the process, you will always produce the same final program. This meant consistency and it meant reliability, but it also allowed for more people to work in collaboration
Starting point is 00:58:59 on the project. Everyone's code looked the same, and was written to the same standards. Now, believe it or not, this approach to programming wasn't totally new. We've even run into it on the podcast before. Programmers on Univac followed a really similar procedure, but for different reasons. A Univac programmer started out by writing up a pseudocode for the machine. This was double-checked by a co-worker; after validation, it could be converted to machine code and eventually tested on Univac itself. But this wasn't done for reliability, it was just done because computer time was so scarce. Univac couldn't afford to let a programmer sit in front of a computer for hours hammering away on a first draft.
Starting point is 00:59:49 By contrast, the IBM approach to Gemini is a lot more modern. It's a systematic way to ensure the best code possible every time. It speaks really deeply to the increased professionalization of programming. It might not be a totally new set of tools, but IBM is bringing something important to the table. In the end, I think this is the most impactful part of the Gemini Guidance Computer. It gives us a fascinating snapshot in time of how programming was evolving. Alright, that does it for this episode. And I gotta say, the Gemini Guidance Computer is probably the most strange machine I can think of. So where do we stand on our starting question?
Starting point is 01:00:37 Was this bizarre space-bound machine some missing link? Was there any secret technology in it that influenced later computers? If we look just at the computer, then the answer is a flat no. IBM's little machine may have been smaller than any conventional computer of the day, but it was just that, a very conventional computer. Well, at least its parts were conventional. At every turn, IBM went with safe choices. It used normal discrete transistors and relatively pedestrian magnetic core memory. There were some neat packings used for everything, but under the hood, it was a pretty normal-looking device. What I think was more impactful were the methods used to create and program Gemini's computer.
Starting point is 01:01:26 As we've covered, IBM took a very rigorous approach to programming for Gemini. This definitely isn't the first instance of companies thinking long and hard about how code was written, but it does give us a glimpse at an early step in this process. Now, I couldn't have made this episode without the Virtual AGC Project. The name says Apollo Guidance Computer, but they also have a wealth of information about the Gemini Guidance Computer. That includes a full emulator. This is just the kind of topic where there's so much information spread out over so many sources that it's hard to digest. Having a summary of the computer
Starting point is 01:02:05 architecture all in one place, well, that made it possible for me to dive deeper without drowning. I'll have a link to their Gemini page in the show notes. Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another piece of computing's past. And hey, if you like the show, there are now a few ways you can support it. If you know someone else who would be interested in the history of computing, then why not take a minute to share the show with them? You can also rate and review on Apple Podcasts. And if you want to be a super fan, then you can support the show directly through Advent of Computing merch or signing up as a patron on Patreon.
Starting point is 01:02:39 Patrons get early access to episodes, polls for the direction of the show, and bonus content. You can find links to everything on my website, adventofcomputing.com. If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet. I'm at adventofcomp on Twitter. And as always, have a great rest of your day.
