Advent of Computing - Episode 102 - Application of Ada

Episode Date: February 20, 2023

This episode picks up where we left off last time. We are looking at Ada and its applications. How does Ada handle tasking? What's the deal with objects? And, most importantly, what are some neat uses of the language?

Selected Sources:

https://dl.acm.org/doi/pdf/10.1145/956653.956654 - Rationale for the Design of Ada

https://trs.jpl.nasa.gov/bitstream/handle/2014/45345/08-2590_A1b.pdf - Cassini's AACS computer and software

http://www.bitsavers.org/components/intel/iAPX_432/171821-001_Introduction_to_the_iAPX_432_Architecture_Aug81.pdf - Behold the iAPX 432

Transcript
Starting point is 00:00:00 In the fall of 2017, an event called the Grand Finale occurred. This was the intentional destruction of the Cassini space probe. It had actually been planned out for years at that point. The probe would set itself on a very dangerous orbit around Saturn. This orbit was unstable. It would bring it closer and closer to the planet's atmosphere with each orbit around the planet. Data would be beamed back to Earth right up until the end. That end would come on September 15th of 2017.
Starting point is 00:00:33 Cassini slipped into the clouds of Saturn and disappeared. Why would NASA want to destroy such an irreplaceable piece of equipment? Well, in a way, Cassini was suffering from its own success. Mission data had suggested signs of life on two of Saturn's moons, Titan and Enceladus. At least, there was enough data to warrant caution. As the Cassini mission came to an end, there was very little fuel left aboard the probe, so the fear was that once the probe was no longer controllable, it could impact with one of these moons, which would cause biological and radiological contamination. After evaluating options, it was decided that an impact with Saturn would be the best choice.
Starting point is 00:01:17 That would be the end of an amazing mission and a fascinating machine. Cassini's design is wild in a number of ways. Its onboard computer was powered by a radioisotope thermoelectric generator. That's a literal radioactive battery. Hitched onto the probe was a whole other probe, the Huygens lander, which Cassini dropped off on the moon Titan. The whole thing was controlled by an admittedly archaic 16-bit microprocessor. Amongst the list of cool features, there's one that you might not expect. The entire software stack for Cassini was
Starting point is 00:02:00 object-oriented, and it was written in a little language called Ada. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 102, Application of Ada. We're picking up where we left off last time. All the way back in episode 101, we discussed the unique origins of the Ada programming language. That included the initial specification, the bid process, and the development of the language itself. We also started to look at the actual language, but ran into a bit of a time crunch. Ada is expansive and complicated, so it's very fittingly spilling over into this episode. And as we get going here, I just want to make a quick comment on pronunciation. I've actually received a lot of messages and comments about how I was
Starting point is 00:03:01 saying Ada last episode. The great part is, I've had people saying that it's pronounced Ada, and others saying it's pronounced Ada. After doing a quick tally, it seems that most messages are saying it's pronounced Ada, so I'm trying to stick with that. Alternatively, I could call it a day, I guess, but I don't think that would make anyone happy. Anyway, I have a whole slate of topics to wrap up today. First off, we're going to finish covering the language itself. There were two big features that I omitted from last episode, tasking and object orientation.
Starting point is 00:03:40 Ada came out of the gate with the ability to write concurrent code. Now, that's a really complicated subject. As such, I decided that I shouldn't try to speedrun it at the end of an episode, so we're picking that up today. The object piece here is a little weird. Ada doesn't initially have explicitly stated support for object-oriented programming, kind of. It's strange, and we're going to get into it. Suffice to say, in the 90s, there were revisions of Ada that championed object-oriented programming. But there are features that show up in earlier
Starting point is 00:04:20 versions of Ada that look a lot like object-oriented programming. So we're going to need to tease that out, which is going to take a little bit of time. We're also going to talk about some of the interesting applications of Ada. Specifically, I want to get into some of the hardware that was designed to work with the language. This takes us to that fun interstice between hardware and software that I really like to research. To do so, we'll be discussing the Intel IAPX432 and the MIL-STD1750A. Both are mouthfuls, and both are processors that were designed, at least in one way or another, to work with Ada. How were they tailored for a specific language? What kind of programs did these processors actually execute?
Starting point is 00:05:06 Were they actually tailored very closely for Ada? That's what I'd like to find out. Now, before we get started, I have to throw in a plug for the big background project I'm working on. I forgot to do it the last few episodes, so I need to make up for lost time. That is, of course, Notes on Computing History. This is my attempt to start up a community-driven journal on the history of computing. And I'm trying to make it as inclusive of a space as possible.
Starting point is 00:05:38 So if you have any interest in writing about computer history, then please get in touch. All the details are up on history.computer, or you can just reach out to me directly through the show. The big important thing to keep in mind is I'm not looking for people with academic backgrounds, or even people with experience writing about anything. I want anyone who's interested to have a place to publish. If that sounds like you, which if you're listening it probably is, then please do get in touch. Now with that out of the way, we have a lot to cover here and I swear we are not running into a part three, so we better start. Let me clear
Starting point is 00:06:19 something up right off the bat. I've been using the term tasking because that's what's used in the rationale for the design of the Ada programming language. That's a major source that I'm working from. When I initially saw that phrasing, I thought it was a little strange, but not that left field. This seems to just be the Ada nomenclature, or at least I've seen it used in other early Ada documents. What's interesting here is that Steelman, the Department of Defense spec that ADA was developed to fill, doesn't call it tasking. It uses the term parallel processing. Either way, we're looking at multitasking, running more than one operation at once. Steelman is also a good place for us to start
Starting point is 00:07:06 here. What does the spec have to say about parallel processing? Well, to avoid a pretty long list, I'm going to grab the juicy bits. To quote, 9a, parallel processing. It shall be possible to define parallel processes. Processes, i.e. activation instances of such a definition, may be initiated at any point within the scope of the definition. Each process activation must have a name. It shall not be possible to exit the scope of a process name unless the process is terminated or uninitiated. 9h. Passing data. It shall be possible to pass data between processes that do not share variables. It shall be possible to delay such data transfers until both the sending and receiving process have requested the transfer. And finally, a little out of order, but
Starting point is 00:08:02 you'll see the reasoning. 9b. Parallel process implementation. The parallel processing facility shall be designed to minimize execution time and space. Processes shall have consistent semantics, whether implemented on multi-computers, multi-processors, or within interleaved execution on a single processor. End quote. or within interleaved execution on a single processor. End quote.
Starting point is 00:08:30 These are, in my view, the most important requirements to address, so let's take them in turn. The first point, 9a, if you're following along in Steel Man at Home, is the most basic. It just says we need parallel processing. We need to track things somehow so each process gets a name. The last sentence, that's where things start to get into technical details. Scope can be a pretty tricky thing to discuss. In the simplest terms, scope is just a way to talk about where something is visible within a program. It most commonly refers to variables,
Starting point is 00:09:05 so you'll hear us keyboard aficionados talk about variable scoping quite a bit. More generically, you may hear the word symbol used. That's just a generic way to refer to anything in a program with a name. So a symbol might be a variable, a function, a subroutine, or anything else that you can give a name to. You don't always want every symbol to be accessible from everywhere in your code. The most common example here is a loop. Let's say you have a loop that counts up to 10, and you're using some variable named i as your iterator. For a number of reasons,
Starting point is 00:09:45 some variable named i as your iterator for a number of reasons. i is the traditional variable name here. now every time the loop is executed i is incremented so it goes from 0 to 1 to 2 all the way up to 10. in a reasonable language you will only be able to use i while you're inside that loop. that's the only place you can see the variable. We might say that it has that local scope, and we use the term local scope because, well, it's smaller than global scope, as in fully accessible everywhere. Now, the local i used in that loop stays local. Once the loop's over and you're back into the main part of your program, the i variable just doesn't exist. It's gone. This means that, among other things, you're safe to make a new loop with a new variable also named i. Thanks to how scoping
Starting point is 00:10:38 works in this example, you don't have to worry about a possible variable with a similar name floating around. Having reasonable rules for scoping just makes everything easier and safer. If everything was global, as in there was no restriction on scope, then things can get dangerous. This is especially true in a multitasking environment. Imagine, if you will, two processes that each have loops. As is convention, each of those loops increments a variable named i. If there wasn't any type of scope restriction, then, well, you couldn't be sure what would happen. Each loop would be constantly messing with the other loop's iterator. Your wires would be all crossed, so your results would be indeterminate. You might make it to 10, or
Starting point is 00:11:33 one of the loops might interfere. They might both make it to 10. Maybe nothing happens. You don't really know. That tends to ruin a program that is not a way you want to be. How can you get around this? Well, you can make it so every process has to use uniquely named variables. You might have process underscore one underscore iterator and xxxx underscore process underscore iterator underscore final underscore one. Or, you know, you just make each process have an isolated scope. That solves the problem without a need for long names. But that leads to an interesting predicament. What happens if you need to interact with another process? This is something that comes
Starting point is 00:12:22 up pretty often. You might have a process that checks to see if any data is coming in on the serial bus. As data comes in, the process would make a note of that event and store it in some sort of buffer. On its own, maybe not the most useful thing in the world, but it can be part of a larger program. To get to that data, you need some mechanism for communicating between processes. Tasks in Ada don't share variables, so how can you get access to that buffer? How can you talk with this special isolated process? The solution is called, at least in early documentations, the rendezvous. It's a point where two normally isolated tasks can touch, like ships passing in the night. This requires another nasty thing to think about,
Starting point is 00:13:13 synchronization. Normally, a task just kind of does its own thing. It runs code, it gets variables, it wears a rut into your hard drive. That all happens at the task's own pace. One way to communicate between tasks is to get them to sync up at a certain point. Ada implements this using entry and accept statements. I know that's a lot of nomenclature, but let me explain. Each task is structured as a module. That means there is a visible part and an internal part. The visible part, also called the declaration, can include a list of entries. To an external program, these just look like normal functions. Those entries match up with accept blocks inside the task module. So you might have an entry called getSerialEvent that lines up with an accept of the same name.
Starting point is 00:14:12 When your program needs to get data from the bus, you just call serialTask.getSerialEvent, or getSerialData, which in your code looks like a normal function call. However, the entries act a little bit differently than a normal run-of-the-mill function. Once you make that call, your program will pause and wait for the serial task to reach its matching accept block. Once that happens, the rendezvous occurs. The serial task passes you back the data you ask for, and both of you can continue on your way. Tasks can also synchronize in the other direction, where a task will wait once it reaches an accept block. This is a pretty slick way to handle multitasking. In most other languages, you have to really mess around to get another process up
Starting point is 00:15:06 and running. Ada, on the other hand, presents a very simple interface. This interface matches established syntax. So if you know how to write a single tasking program in Ada, you can pretty quickly transition to writing multitasking code. Just as Steelman prophesied, there's a consistency of semantics here that you gotta love. There is more to the tasking model in Ada, but it all follows a structure similar to what we've laid out here. This brings us to the next stop on our tour. Objects.
Starting point is 00:15:42 Or maybe the path towards them? on our tour. Objects. Or maybe the path towards them? So we need to have the object-oriented talk before we really get into this. And I know, Advent of Computing is a pretty technical show, but I try to tailor it for a somewhat general audience. If you're already a smooth OOP-erator, then this will be some simplified review for you. Object-oriented program, or OOP, friends call it OOP, can have a pretty steep learning curve. This is partly due to the fact that OOP encompasses a lot of smaller ideas. It's a lifestyle, almost. The big core concepts here are objects and classes. To the external observer, an object is simple. It's a collection of data and code. You can think
Starting point is 00:16:36 about an object, at least superficially, as a really fancy structure. You have a variable that can hold other variables, it can hold functions, and it can even hold other objects. It's pretty slick. On the surface, that seems very simple, but I assure you, there's a lot more going on. An object is, in reality, an instance of a class. A class, when you strip away all the fancy stuff and all the theory, is pretty much a template. It says how an object should look, how it should act, what kind of shape it will take. Maybe you're working in widget manufacturing and you need a way to automate widgets. You might create a new widget class to handle that. Each widget has a power source, mass, volume, and optionally some sub-widgets tucked inside.
Starting point is 00:17:33 A widget also needs functions to turn it on and turn it off. That can all be represented in data and code, so it's very easy to work up a nice little class. code, so it's very easy to work up a nice little class. When you need to create an actual widget, you grab that class and you make an object. You instantiate the class. That's just a fancy way of saying you fill out the template. You can then go ahead and load up the widget's vital statistics, maybe give it some fancy functions, or maybe just stick with the functions that were in the original template. That's kind of level one of OOP. The next step up from there is inheritance, and I don't mean the kind that can be taxed. The idea with inheritance is that you create a new class that's related to an existing class. These new child classes are most often extensions or specializations of their parents. So let's get back to the widget factory. I've been creating
Starting point is 00:18:33 some new machines that are very similar to widgets, except they have a special feature. They have an extra button that will dazzle the user. With OOP, this is pretty easy to handle. I just need to make a new class called FancyWidget and set it to inherit from the older widget class. The new FancyWidget class gets everything from its parent automatically, so I don't have to replicate any of the code that was already written in the widget class. All I have to do is add one function and call it dazzle, and I'm done. This not only reduces the amount of code I need to write, it also makes maintenance a lot easier. If I need to add a safety interlock to all widgets, then I only need to modify the parent class. The change will automatically translate down to any child classes.
Starting point is 00:19:26 This overall package, objects, classes, and inheritance, makes up the most basic core of object orientation. These features are really powerful, especially when it comes to large projects. Ada, as we discussed last time, was designed for use on large-scale projects, so OOP seems like a good feature set to support. Moreover, object-oriented languages already existed by the time the DoD was looking for a new language. Simula, often cited as one of the earliest OOP languages, was also one of the tongues that the DoD investigated during the early parts of their Higher Order Language Project. So the idea, oop, was already out there. Now, the history of objects in Ada, that gets a little complicated.
Starting point is 00:20:22 To quote from Whitaker's History of Programming Languages 2 presentation, quote, Ada was designed to support and encourage object-oriented design, and it does. Ada packages can be used to create, and this is in quotes, objects, end quote. Ada does support objects, just not by name. That can lead to a little confusion. It also doesn't help that Ada 95 improved support for OOP. But some articles just report that Ada 95 introduced object-oriented programming to the language. The truth is that Ada initially supported object-oriented programming to the language. The truth is that Ada initially supported object-oriented programming, but lacked certain traditional features.
Starting point is 00:21:12 We've already met objects. In Ada terms, a package or a task module is basically equivalent to an object. The Ada version of a class is called a generic. Generics are kind of what they sound like. This gives programmers a way to make templates for variables, packages, and tasks. That may sound similar to full-on classes, but there are some distinct differences. And to be clear, we're going to be talking about Ada as described in the rationale for the design of eta, not newer versions. This is 1978 to 1979 eta. Eta is a statically typed language. That means each variable starts out as a certain type and stays that way.
Starting point is 00:22:02 An integer is defined as an integer and can only hold an integer for its entire lifetime. Similarly, operations in Ada are tied to data types. The language won't let you multiply an integer by a string, since, I mean, come on, that's utter nonsense. The core design philosophy carries over into user-defined code. When you write a subroutine, aka a subprogram, you have to define its arguments. Each of those arguments must have a defined type. Maybe you can see where this goes a little sideways. I'm going to be shamelessly stealing the example that's used in Rationale, since it works very well for our purposes. It's almost like the designers of Ada knew how to
Starting point is 00:22:53 describe the language. The example they use is a stack, which is a really common type of data structure. In an OO environment, it's common to implement a stack as its own class, since you need a way to store data and also a way to manipulate that data itself. For a basic stack class, you would have some type of array, a pointer pointing to where you are on the stack, then functions to push data onto the top of the stack and pop data off of the top. data onto the top of the stack and pop data off of the top. Now, this class approach presents a special issue for languages as strictly typed as Ada. What do you do if you want a stack class that works for any data type? One way would be to create a separate class for each possible kind of stack. But that just sucks. You'd wind up with a pile of classes that use almost identical code,
Starting point is 00:23:49 save for a variable type name. I could give you a whole list of reasons that's bad, but just trust me, it's a bad design. The Ada solution to this issue is the generic, and it works almost like a Mad Lib. Anything in Ada can be specified as a generic, but I'm going to focus here on generic packages and task modules. Instead of writing a package that handles, say, an integer stack, you can just say it handles some generic data type. And I mean that literally. You start by defining a variable type as generic. You might say something like stack element is type generic.
Starting point is 00:24:34 Anywhere you'd use that new data type, well, it treats it generically. To actually use a generic package, you have to instantiate it. In other words, you have to fill in the blanks of the matlib. In the stack example, you can simply tell Ada to make a new stack and that your generic variable is actually an integer, or a character, or any other variable type you want. The same can be done with operations, which I think is really cool. You can have a package that performs multiplication and then later specify if you're doing vector or scalar operations.
Starting point is 00:25:13 That's some neat stuff to me. Now, that's the short explanation of generics. The way I see it, generics are just a different interpretation of classes. You don't have inheritance in the more familiar sense, so you don't end up with these family trees of related classes. Instead, you get to refine and specialize your classes at runtime. Once again, slightly different, but it's in the same spirit. It's the same type of feature. Those were the two big features that
Starting point is 00:25:46 I wanted to talk about. So what were people actually doing with all of this? I think it's high time that we start talking application of all this theory. I will warn you right at the start that we're not talking the best or the coolest applications of Ada, we're talking about some applications that I think are particularly neat. Maybe that doesn't jive with fantastic case studies, but I think you'll enjoy this. The first route to tackle here is going to be the MIL-STD-1750A. This will take us into the realm of aerospace, so we're literally entering rarefied air. The 1750A was a processor standard developed by the Air Force during the tail end of the 1970s. The spec was finalized in 1980, which puts us firmly in the early days of Ada. This also puts us in something
Starting point is 00:26:48 of a tricky situation. I really like to cover spooky federal stuff because, I mean, let's face it, it feels cool. It's neat to use sources that used to be classified. When I can put together a story from these types of sources, I feel like I'm uncovering some grand secret, like I'm trespassing somewhere I'm not supposed to be. So these avenues of investigation, well, they can be very rewarding and, dare I say, thrilling. The tricky part is that the fed boys don't like to give up their secrets. Usually I can find a lot of declassified sources when I'm dealing with older events. Ada, however, just isn't quite old enough for this to work out.
Starting point is 00:27:34 The 1750A processor is still in use in certain places. Some of those applications are military. The chip shows up in computer control systems in the F-16 and F-18 fighter jets, as well as IBM's AP-102 flight computer. Now, the US military isn't exactly excited to share the operating details of active fighter jets. Go figure. Let me just add some fun color to the story here. The AP-102 is used in, among other things, the F-117 Nighthawk. That's one of those UFO-looking stealth bombers. Now, I've actually seen one of these planes in person. These planes were retired from active duty in 2008, so there are a few examples floating around at museums. I caught a glimpse of one at the Hiller Aerospace Museum near Salt Lake City, Utah.
Starting point is 00:28:31 Although impressive, the jet was merely a shell. Many of the finer details of the plane are still hotly guarded secrets. That goes as far as its paint job. The F-17s that have made it into museums have had their exterior paints removed and redone. I guess there's some secret tech in their coatings, so they can't be displayed to just anyone. If we don't get to see the paint, then we definitely don't get to see the details of its software. What we are left with are bits and pieces of the puzzle. From what I've been able to gather, I don't think Ada was in use with the AP-102 specifically, so I think we have to rule out the dream of a Nighthawk flying in stealth mode through
Starting point is 00:29:22 the sky with a fancy tasking model. The 1750A was intended to run Ada, but also Fortran and this weird language called Jovial. I believe that the AP-102 was mostly running Jovial at the time. I'm pulling this from the 1. lack of discussion of Ada on the computer, and 2. an article about upgrades made to Australian-owned F-111s. That Australian article has a single paragraph that mentions the 111's upgraded computer ran jovial, and that's the best sourcing I could find. A bit of a ramble, but the point is the theoretical military applications of Ada are a little too new for us to probe into. I guess that also illustrates why I try to stay firmly in the past. It can make certain sources just plain easier to get. Luckily, there is another
Starting point is 00:30:20 avenue that is much better documented. We just have to drop the arrow from aerospace. One of the most high-profile uses of the good ol' 1750A was the Cassini Huygens probe, may it rest in peace. And best of all, this is a confirmed application of Ada. I guess I should start out with a bit of an admission of guilt here. In the past, I have worked in the same office as someone who was on the Cassini project, so I may have a little bias. Keep that in mind when I tell you that Cassini-Huygens was very, very cool. It's one
Starting point is 00:30:59 of the coolest things we've ever put into space as a species. To start with, the probe is massive. It's over 20 feet tall and weighed around 5,000 pounds. I've actually stood next to a full-scale engineering model that's on display at the Aerospace Museum at the LA Science Center, and let me tell you, it is fittingly impressive. And yes, I do, in fact, go to every aerospace museum I see. The probe was launched in 1997 on a mission to Saturn and its moons Titan and Enceladus. The larger mission, the Cassini part of the probe, did flybys of both moons while orbiting Saturn. The Huygens part of the probe was a lander that successfully touched down on Titan, The Huygens part of the probe was a lander that successfully touched down on Titan, a moon with a very dense atmosphere and lakes of liquefied hydrocarbons.
Starting point is 00:31:51 I can't really understate how fascinating the Cassini-Huygens mission was. I mean, the entire thing was powered by a giant radioactive battery. There's just so much to say. So let's take an esoteric approach and skip all the cool stuff. Sound good? Cassini was initially part of a larger program called Mariner Mark II. This was planned as a follow-up to the early Mariner program, which sent probes all over the solar system. The design of these Mark II probes started at the Jet Propulsion Laboratory, better known as JPL. The part we're interested in, the software side, is well described in Cassini Altitude Control Flight Software, From Development to In--flight operation, a paper written by Jay
Starting point is 00:32:46 Brown at JPL. I want to open our discussion with this little preamble. Quote, while it seems like all the excitement started at launch, there were prior decades of ingenious planning and work to get to the launch pad. NASA's Mariner Mark II program began in 1987. In 1989, the first two spacecraft missions were initiated from this program, Cassini-Huygens to Saturn and Titan, and the Comet Rendezvous-slash-Asteroid Flyby. Both missions, with very different payloads and scientific objectives, were to be very similar in design goals. Both spacecraft were to encounter Now we're really talking. Space probes might all seem different, but they do have a lot in common. They all need to do the same things.
Starting point is 00:33:51 They need to steer around in space, communicate with Earth, and take readings off a bunch of sensors. There will be differences between probes. Instruments can change, bus systems could be upgraded, but there will always be fundamental similarities. You'll probably be using the same communications protocol when connecting to ground control, for instance. So, wouldn't it be nice if you could prevent duplication of effort? The plan for Mariner Mark II was to leverage Ada and object-oriented programming so the code could be shared between different missions. I think that's pretty easy to understand. This is a very classical use case for Ada. When it actually came down to programming one of these probes, you might end up with a different module for each instrument, but you could have shared modules for communications protocols, or firing thrusters.
Starting point is 00:34:45 That's a pretty basic and abstract example, but it illustrates the point. JPL started using Ada in this program partly to save time and effort. However, that was almost a side benefit. The same feature set that allows Ada programmers to share code also allows for this thing called encapsulation. Packages can be written as these isolated chunks of code and data that are only loosely coupled with the outside world. On the surface, that might seem like an annoyance. But in practice, encapsulation is surprisingly important. But in practice, encapsulation is surprisingly important. For this to make any sense in context, I want to walk you through Cassini's computer systems.
Starting point is 00:35:35 And hey, I'll take any excuse to talk about space-bound computers. It's easiest for us to look at Cassini as a giant, highly specialized computer. This may sound surprising, but when you break it all down, Cassini functions almost the same as any run-of-the-mill desktop. The differences all come down to specialization. Instead of normal peripherals like webcams and printers, Cassini just happens to have gravimetric sensors and cosmic dust analyzers. gravimetric sensors, and cosmic dust analyzers. The heart of this machine is the 1750A. Now, an assumption I made going into this was that Cassini's computer systems would be closer to an embedded system than a fully-fledged computer. That is to say, a machine with a processor and peripherals that loads a single program up from ROM and just hammers away at one task.
Starting point is 00:36:26 Defining what an embedded system is can be a little tricky, but that's the rough feel of these machines. We're talking smart toaster stuff, not exactly mainframe level of complexity. Cassini kind of blurs the line between an embedded and general-purpose computer. Let me try and explain my thinking here. The main onboard computer was called the AACS, the Altitude and Articulation Control System. That first boots up from a program that's stored in read-only memory. Internally, that program is called RTX, the Real-Time Executive. This is actually
Starting point is 00:37:07 a full-on operating system, the component that manages Cassini's computing resources. Embedded systems sometimes use operating systems, but this type of software complexity is more common in larger machines. RTX then loads up Cassini's actual software from secondary storage. Just this loading process feels more akin to a desktop than an embedded system. Now, if the 1750A was Cassini's brain, then its nervous system was the so-called AACS bus. This was a networked interconnect that ran throughout the spacecraft. The bus allowed for communication between onboard systems and the computer. It very much functioned
Starting point is 00:37:54 as a network. Components could communicate with one another, and each device on the bus had its own address. I don't bring this up to ruminate on details, more to show that Cassini was a big machine. Its software component didn't just have to manage rockets and instruments. It also had to deal with all the trappings of a normal computer. It had to do resource management in space. The final piece of the hardware side was, well, an entire other hardware side. Cassini had to be as fault-tolerant as possible. If something went wrong, you couldn't really fly a technician out to Saturn to fix the issue. We wouldn't be sending robots around if that was something we could just do. The probe had to fend for itself.
Starting point is 00:38:46 To that end, most of its onboard systems were redundant. That means two computers, two buses, and, in general, two-a-everything. If something went wrong with a specific part of the system, Cassini could switch over to a backup. The mechanism for this switch gets us kind of back into the topic at hand. It's fair to say that Cassini was complicated. Encapsulation ends up being a really nice feature when tackling this kind of complicated program. First of all, it protects your code from itself. When dealing with a lot of code, it's very easy to step on your own feet. When dealing with a lot of code, it's very easy to step on your own feet. Memory has to be used for storing code as well as data.
Starting point is 00:39:34 You could just assume that each part of your program will be polite and not dip into someone else's memory, but that is not a good assumption. This is especially true when a large team of programmers are working together. You can't just keep a big list of memory addresses around and expect everyone to check before allocating space. That's impractical. The JPL approach was to group related data into objects and bundle that up with the functions used with that specific data. These variables were totally internal. The adder packages just kept them all private. Functions were the only things exposed to the outside world. So you might be able to ask the com protocol object to put a message in its buffer,
Starting point is 00:40:15 but you couldn't actually touch the buffer yourself. That's a nice, reasonable level of encapsulation. But what if things went wrong? What happens if some of your nice, encapsulated code goes rogue? Or what if one of the AACS buses gets fouled up? Who knows, maybe an alien with some wire cutters boards the probe at some point. This is where fault checking comes into play. Cassini had a set of classes devoted purely to fault detection and recovery. Running tasks would report performance metrics and other information to this fault detection system. That system would then check to see if those metrics
Starting point is 00:40:56 indicated a problem. If all systems looked good, then nothing would happen. The probe would go along on its way. If an issue was encountered, then Cassini would kick into recovery mode. Different sets of conditions corresponded to different solution scripts. If a degradation in performance showed that the AACS bus was malfunctioning, for instance, then the script might tell Cassini to switch over to the backup bus. That's the basic rundown of the probe, but Cassini got more out of encapsulation than just some vague kind of fault tolerance. This is yet another neat part to put on the list. Each class was only loosely coupled to the rest of the program. You could only run function calls. One class could never see what was going on inside any other.
Starting point is 00:41:46 To put it another way, you didn't have to load a class in a full system. You could just grab the communications protocol class, or the class that managed the stellar detection sensors. The flashy use case here is, of course, sharing code. But that's very surface-level stuff. The same approach made testing and debugging a lot easier. Think about this in terms of the fault recovery system. How would you test a scenario where Cassini is flying by Mars and a stray alien decides, you know, I'm gonna hop aboard, I'm gonna take out my scissors and start cutting up some of these wires. going to hop aboard, I'm going to take out my scissors and start cutting up some of these wires.
Starting point is 00:42:30 You can't fly a test mission just to see how your code would handle that scenario. You could try doing it on the ground, but that would involve some grubby software developer taking wire snippers to a very, very expensive machine. I think that's how you actually start a fight inside a clean room. So, what's a poor programmer to do? The cool workaround here actually involves no hardware whatsoever. Well, none of Cassini's precious hardware. Thanks to how each object is encapsulated, it's possible to instantiate a class into a test environment. To put it another way, you can build a dummy program to act out your test scenario. Need to test a bus failover? That's easy. Just write a program that loads up the failover system, give it signals that indicate a bus problem, and see how
Starting point is 00:43:19 it reacts. You can even go further, making whole test classes that other classes can call. As long as you provide a class with the right name and the right functions, the internals can do whatever you want. Thus, testing becomes much easier. This is a case where Ada's specific qualities made it a great fit for this environment. The 1750A kind of fades into the background, since it was really only the platform. It wasn't very tailored to Ada in the first place. In Cassini, the processor is just less of a huge factor. I think it's more the case that the 50A happened to have good support for Ada, plus it came in radiation-hardened models, so maybe the chipset doesn't matter at all.
Starting point is 00:44:07 Whatever way you cut it, this is a really cool example of how Ada was used effectively. But what about a slightly different case? What about a processor that was molded more explicitly to fit this military-strength language? It's time to move on to our last case study of the evening, the Intel IAPX432. Like with the 1750A, I already have some coverage of this chip in the show's archive. The last few episodes in my Intel series discussed the 432. As such, I don't want to dive deep into the chip's history, just the relevant parts. Sound good? Cassini was an ambitious project. That's pretty self-evident.
Starting point is 00:44:53 The 432 was also ambitious, but in a more disastrous sort of way. I can think of no better example than a beautiful trademark filed in 1981. This is for the word Micro Mainframe. The full name of the chip we're discussing is the IAPX432 Micro Mainframe Trademark. At least, that was Intel's marketing. The processor was pitched as an entire mainframe computer on a wafer of silicon. Was that ad true? No, it simply was not.
Starting point is 00:45:33 The 432 represented a radical departure from Intel's other processors. Up to this point, the semiconductor manufacturer had been making very traditional chips. These were register-based machines, meaning that they operated by moving data around between memory and an internal working space. This is just, well, very traditional. It's how most computers had worked and how most still work today. This type of architecture is nice and approachable for the programmer. That is, for the assembly language programmer. You know, true computer enjoyers. You got all the hits like
Starting point is 00:46:14 add, or move byte from memory to register, or jump to address, or check if a register's value is zero. jump to address, or check if a register's value is zero. Actually, that just about describes a full instruction set architecture. These types of processors are easy to program for simple tasks. Adding two numbers can be accomplished with two or three instructions. It's really simple. But that's assuming that the numbers can fit into the processor's registers. A processor only has so many registers, and each register has a fixed size.
Starting point is 00:46:56 With the 8086, for instance, those registers are 16 bits wide, so they can only store 16-bit numbers. You get about six registers you can use. So looking at that another way, each of those registers can store a value between 0 and FFFF in hexadecimal. So as long as you're adding two 16-bit numbers, and as long as the result of that operation is itself a 16-bit number, you're golden. You only need to load up your registers and run the add operation.
Starting point is 00:47:25 Done. But what happens if your numbers are too big? The specifics depend on your processor, but no matter what, you have to write more code. You need to check the sizes of your operands, deal with overflows and underflows, manipulate some data, and probably run one more add operation. manipulate some data, and probably run one more add operation. That's bad enough on its own, but take a wider view of this issue. Adding large numbers is a very common operation, so every time an assembly programmer needs to do that, they end up writing extra code. That can come in the form of a library, or it can come as just something you memorize and puke up onto a file. Let's kick this up a notch. What about string operations? You know, for when you need to deal with more than a single character. This is another super,
Starting point is 00:48:19 super common use case. You know, anytime you use a computer. Anytime you see text, that means there's some code around there somewhere that has to deal with it. Processors usually don't have instructions specifically for doing things to strings. You just get math operations and some simple logical stuff. Once again, the intrepid assembler has to write their own code to handle strings or use a library. I personally just have my favorite string algorithms for things like length or copy or search. Whenever I start a new project, my first task is to vomit up that code that I have memorized or copy it in from a more recent project. It works, but it's annoying and repetitive. I don't really love it. We can keep building up this problem. There are always more layers to good
Starting point is 00:49:14 issues. Once we move up to compiled languages, then all these issues are just automated. If you compile a program that deals with strings, well, it has to have that low level code for handling that specific data type. This means that each program you write has to have this little chunk of code, little stub thrown in. That takes up precious space and memory. It's also executed a lot, so there is a theoretical performance hit you have to take. executed a lot. So there is a theoretical performance hit you have to take. And this is just talking about data types. Once we get to things like multitasking or memory protection, we see more and more of this repeated code that has to just be tacked onto everything. The overall issue here is one of complexity. A normal processor is a pretty simple machine,
Starting point is 00:50:03 all things considered. So to make it do complex tasks, well, you end up needing a lot of extra code. But what if you could flip the script? What if that complexity was moved over to the processor, thus allowing programmers to write more simplified code? This was one of the main goals behind the creation of the IAPX432. The chip's design brought things like multiprocessing, memory management, memory protection, and even complex data types onto the silicon level. The development cycle of this chip ended up being a nightmare. Part of this was due to the genuinely ambitious and forward-thinking designs. Part of this was due to the fact that the 432 was so different than anything Intel had made up to this point.
Starting point is 00:50:52 The processor was finally released in 1981. It was a 32-bit stack machine, something pretty unique in the field at the time. Instead of using registers, the 432's instructions manipulated a stack of data. It went direct to memory. There are a number of reasons to go this route. As I said, it allows for direct manipulation of in-memory data, so you don't need to shuffle things around. It also lets you get around the size limitations of registers. Normally, for a string operation, you have to load each character into a register and do your task one character at a time. With the 432, you keep that data in memory and work directly on the string. Better still, the processor had built-in support for strings as a data type. You heard right.
Starting point is 00:51:48 The 432 supported hardware data types. That can be a little wild to wrap your head around. Basically, since everything was in memory, there was no reason to limit the size of your operands. Just point an instruction to a string and tell it to run. No need to shuffle around bytes at all. We've already hit on some of the secret sauce that made this possible. The 432 was a stack machine, meaning that instead of storing working data in registers, the 432 used a memory structure called a stack. In practice, this gets pretty complicated, though. The 432 actually broke memory up into segments and offsets.
Starting point is 00:52:27 Each segment could be configured in different ways, and there were pointers all over the place, but the short explanation is that the processor manipulated data in memory directly. Intel used this to great effect. Not only were fancy data types like strings and big numbers supported, Intel went a step further. The 432 had hardware-level support for objects. Everything is already resident in memory. The chip is able to flip around data, follow pointers, and do all kinds of advanced memory tricks. That laid the foundation for object support. memory tricks. That laid the foundation for object support. The other factor to keep in mind is that the 432 was envisioned as part of a larger, interconnected system. The processor
Starting point is 00:53:12 was only one level which would be extended upon by a language and an operating system. All three of these components were to be built in concert. To quote from the 432's introductory manual, if an architecture, a high-level language, and an operating system are all based on the same methodology, then the boundaries between them begin to blur. It becomes hard to tell where one begins and the other leaves off. Functions can be moved from the operating system to the architecture, or from the language to the operating system. Basically, the whole system has a kind of geometrical integrity, constructed as it is
Starting point is 00:53:52 out of a set of common building blocks. End quote. Taken as a whole, we get this trifecta of object-based systems. The IAPX 432 with hardware support for objects, an operating system built in some object-oriented language, and then some language that fills in this gap. Intel's choice of language makes a lot of these factors really make sense to me. There's actually a lot of geometric integrity, so to speak. You see, the main language for the 432 was planned to be Ada.
Starting point is 00:54:33 Hardware-level objects on the 432 support data protection. You can configure an object to have private data and public data. Just like in Ada, you're able to hide information inside packages. The 432 also handles tasking. Tasks are defined as objects, once again, in a way very similar to how Ada handles tasks. The processor even lets you define new data types at execution time. Once again, this is another feature we saw in Ada. Processes on the chip communicate through message passing, much like rendezvous in Ada. Many of these big fundamental language features have one-to-one support on the hardware of the 432. A lot of this alignment, at least in theory, was serendipitous.
Starting point is 00:55:24 This gets kind of weird, but the 432 project started in 1975. That's prior to Ada even being named. John Dvorak points this out in an article called Whatever Happened to the IAPX 432. So maybe Ada was just the object flavor of the day when the chip was released, or maybe there was some convergent evolution here. The development process of the 432 is a little shrouded in mystery, so we may never know. Now, I do want to stop here for a second because I don't want us to get wrapped up and lose perspective. and lose perspective. Going from an 8-bit register machine like the 8080 to the 432 was, quite frankly, a jump into a whole nother universe. The 8080, the last major processor Intel had released before the 432 started, well, that didn't support anything near this feature
Starting point is 00:56:22 set. The 8086, the chip Intel built as a stopgap while the 432 was under development, had almost no shared DNA with the object machine. This is like going from arithmetic straight to tensor calculus. Along with this hyperspace jump comes a truly staggering level of complexity. Now, I'll be completely honest here. This is a safe space, after all. I don't really understand how a processor works. By that, I mean if you sat me down at a light table,
Starting point is 00:56:58 I couldn't draft up a new processor that would actually do anything. I don't get the silicon level. That said, I understand how they work in concept, and I get how they're used. If you show me a block diagram of an 8086, I can puzzle it out. I can make sense of it. You have decoders, you have buses,
Starting point is 00:57:18 stuff gets shuffled around to different parts of the processor. There's something about a pipeline, maybe. Some operations don't work because of how buses are wired out. I can work stuff out. This would be the moment where, if we were talking face-to-face, I'd look at you with bloodshot eyes. I'd stare right into your soul. Now, believe me when I tell you, I don't get how the 432 works at all. The introductory manual is 78 pages long. The fuller reference is over 400. To put that into some kind of context, I have a book on the shelf right behind me that's an architectural reference for the Intel 8086, 8088, 186, 286, 386, and 486. That book's just over 600
Starting point is 00:58:10 pages, and it fully describes five and a half processors. Call it a hundred pages per processor. The 432 is just a lot more complicated. We're talking hundreds of pages of diagrams and flowcharts that describe other diagrams and flowcharts. It's on a whole other level. That's not even mentioning the fact that the 432 can coordinate with other 432s, so there's some stacking going on. Looking through the reference for the 432, to me, it looks a lot more like the reference to a programming language or an entire programming environment than a processor. It's frankly staggering. Suffice to say, this is a super-powered chip. At least, on paper, it is.
Starting point is 00:59:04 The project was slow, and that begat a very slow processor. The complexity also made it unwieldy to program. There was only one really big program for the 432, at least, only one I've been able to find any information on. That's Intel's own IMAX 432 operating system. Let me open up with a mission statement for IMAX, or rather, the core mission statement around the entire 432 project in general. From a 1981 paper submitted to the ACM covering IMAX, quote, Of paramount concern in this system is the uniformity of approach among the architecture,
Starting point is 00:59:45 the operating system, and the language. Some interesting aspects of both the external and internal views of IMAX are discussed to illustrate this uniform approach. Ah yes, the uniformity of it all. And here's the thing. IMAX is so closely tied to the 432 that we've basically already covered it. A lot of the operating system is just there to connect up the 432's features. The processor provided ways to represent tasks and deal with task objects. IMAX had the code to decide when and how to switch tasks. The processor had all the tools for creating objects and dealing with memory protection. IMAX stitched that together with a programming interface. This is, I think, another one of the underwhelming parts of the IAPX experience. IMAX was meant as an operating environment
Starting point is 01:00:46 for programmers. Some nerds often in office were expected to write Ada programs that ran on top of IMAX. The operating system provided all the hardware management and runtime goodness you need, but you had to do everything else, you know, on your own. iMacs had no user interface, for one, so you can't even really program in iMacs. You'd have to use a separate computer and cross-compile with Intel's own Ada compiler. I can track down very few actual applications of iMacs and the 432. What I can find are papers written by Intel employees, theses on IMAX software, and benchmark reports. The benchmarks are disappointing, by the way. Ultimately, it's kind of a shame. The 432 was designed as a really cool chip,
Starting point is 01:01:47 shame. The 432 was designed as a really cool chip, but it may have just been too ambitious for its own good. That said, it gives us a fascinating example of how software and hardware can be made to work together, even though it doesn't really show a whole lot of the practice in reality. All right, that does it for this episode. At this point, I think I'm good on Ada coverage for at least a bit. It's a really neat language with a really full feature set. But more than that, it was built for all the right reasons and in all the right ways. I want to close this out with a final observation. The 1750A and the IAPX432 are really two ends of the spectrum when it comes to hardware support for languages. The military spec chip doesn't really do anything special. I think its most exotic feature is that it has some flexibility in its stack pointers.
Starting point is 01:02:46 It's not really an object-oriented chip. Partly, I think that's because Ada was only one language it supported. Jovial and Fortran, the chip's other suggested tongues, don't really do objects. That said, it has some physical characteristics that make it a good fit in the aerospace niche. Companies made radiation-hardened versions of the chip, for instance, that were used in space. That, combined with the DoD's push towards adoption of Ada, well, it made the 1758 something like a de facto Ada chip, at least in a bureaucratic sort of way. This put Ada into some unexpected places. The 432, on the other hand, was a chip purpose-built for object-oriented programming. The timeline might not line up perfectly, but by the time the 432 was released, it was tied closely to Ada. The processor might
Starting point is 01:03:39 be a bit heavy-handed, but it provides a wild level of support for the language. Some design issues did make the chip a little bit underwhelming in practice, but at least on paper it was exciting. This gives us two very different approaches to hardware-level language support. One is by mandate, and the other by feature set. I think it's interesting to see a case where good theory and features don't live up to the hype, while a more pragmatic approach gets a lot further. There's some kind of geometrical integrity here, to steal a phrase. Somewhere between the lofty theory and grounded practice, we stumble upon Ada,
Starting point is 01:04:18 a very practical language built with some solid theoretical backing. Thanks for listening to Adren of Computing. I'll be back in two weeks' time with another piece of computing's past. And hey, if you like the show, there are now a few ways you can support it. If you know someone else who'd be interested in the history of computing, then please take a minute to share the show with them. You can also rate and review the podcast on Apple Podcasts. And if you want to be a super fan, you can support the show through Adren of Computing merch or signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction of the show, and bonus content. You can find links to everything on my website, adrenofcomputing.com. If you have any
Starting point is 01:05:01 comments or suggestions for a future episode, then go ahead and shoot me a tweet. I'm at Adjunct of Comp on Twitter. And as always, have a great rest of your day.
