Advent of Computing - Episode 154 - ACTing Up

Episode Date: March 30, 2025

The LGP-30 is one of my favorite computers. It's small, scrappy, strange, and wonderous. Among its many wonders are two obscure languages: ACT-I and ACT-III. In this episode we are exploring the ACTS,... how the LGP-30 was programmed in practice, and why I've been losing sleep for the last few weeks.

Transcript
Discussion (0)
Starting point is 00:00:00 I got into an argument recently, well, more of a heated discussion, let's say. Someone asked me what, in my opinion, was the first personal computer. That can be a dangerous topic. Now, when we talk about early personal computers, there are some stock answers and some common machines that people go to. Many will argue for the Xerox Alto from 1973. It was the first machine built for graphics and able to actually be user-friendly. I mean, the thing even had a mouse.
Starting point is 00:00:32 On the cons side, the machine was never really sold as a product. It was more a very, very sophisticated experiment inside the walls of Xerox PARC. Some will argue for the Altair 8800 itself from 1974. That has a slightly different line of reasoning. The Altair was the first pre-assembled microcomputer that was widely available. It was relatively cheap too. You could buy one, unwrap it, drop it on a desk, and be up and running. But the Altair wasn't a super capable or really useful machine on its own, and its default interface was a panel of switches and blinking lights. That's not the most personal experience.
Starting point is 00:01:16 There are also various arguments for other microprocessor and similar systems that get thrown around. Maybe it was the IBM 5100 with its handle and built-in CRT display. I can think of one time traveler who would probably vouch for that machine. If you really want to go for a deep cut, you could probably argue that it was the Honeywell Kitchen Computer from 1969. That machine was designed in the shape of a kitchen counter and would help you cook in your very own home. Now I believe that none of these are truly the first personal computer.
Starting point is 00:01:55 My candidate checks all the boxes you could ever hope for. It was powered by a normal wall outlet when other machines required very custom and high power hookups. It required no external cooling when other devices needed actual forced air conditioning. It's designed to be used by a single person. It's also highly interactive, so much so that it has a built-in keyboard. It's portable, or at least it has integrated caster wheels for pushing it around, and it's cheap when compared to contemporary machines. What is this wondrous machine?
Starting point is 00:02:34 Why, it's the LG P30, a computer that was first sold in 1956. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 154, Acting Up. I'm going to be back to the classic programming language episode today, and back to one of my favorite computers. Today we'll be discussing the ACT or ACT family of languages developed for the venerable LGP 30. For those of you who aren't familiar, let me learn you something. The good old LGP 30 was actually one of the coolest computers ever built. If you want to learn more then I have a full episode on the machine, episode 114 in the archive, that covers the computer itself and its history. The basic rundown is
Starting point is 00:03:31 that the LGP-30 was a small computer that entered mass production in 1956. About 500 of them would be built. It was cheaper, smaller, and more simple than anything else out there. It could be moved around on caster wheels. It used a normal wall plug for power, and it was designed to be operated by a single person in a highly interactive mode. It is the first personal computer, fight me. Even with all these futuristic features,
Starting point is 00:04:00 the LGP-30 was still a very early computer. This is why it's so interesting. It's built before this whole random access memory thing is figured out, so it uses a spinning drum to store data. It also doesn't have much memory, so there are major limitations on what it can do. The LGP-30 is also a very important machine. It's not just this weird blip.
Starting point is 00:04:24 It sees a lot of use for over a decade. The predecessor to BASIC, called DOPE, is developed on the LGP-30. It's manufactured in large numbers and sold all over the world. The story of MEL, one of the most beloved pieces of hacker folklore, is partly set on an LGP-30. It's also positioned such that it gives us something very, very interesting. The 30 is designed just as high-level programming languages are created. We get Fortran 1 in 1957.
Starting point is 00:05:00 There are experiments prior to that point, but by and large, languages don't exist. You're expected to write machine code, maybe assembly language. That was the initial interface for every computer, the LGP30 included. But then we have these totally new things called programming languages. There are a number of them developed for the LGP30 specifically. I know one of them, Dope, very very well. I wrote a modern implementation of that language a few years ago. Dope is a strange and quirky language partly due to the hardware limitations of the LGP30. There are a few
Starting point is 00:05:41 other languages that surround the 30 that I've always heard of, but never found much information on. Those are Act 1 and Act 3. These were developed internally for the LGP-30, maybe. At least one of them is a first party language, and they even had compilers and everything. This episode, we'll be looking at these two, maybe three, languages? Questions abound.
Starting point is 00:06:10 These are very early programming languages developed for a very early computer. That on its own is exciting. But I also wanna look at the interplay between hardware and software. The 30 is limited, but it's also particular. Drum-based machines are very different than modern computers. So how different does that make a language written for one of these machines?
Starting point is 00:06:40 I want to be frank and upfront with you. I feel I'm always upfront with my listeners. I have lost more sleep over this episode than any in recent memory. The reasons for that should become clear as the episode progresses, but it all starts with the first language we're going to be discussing. That's Act 1. Now already there are issues. The name here is stylized a few different ways. I've seen it as ACT1 the number, ACT1 as in the Roman numeral, a capital I, and ACT-capital I.
Starting point is 00:07:19 That makes looking for information a little bit challenging. But I digress. The language appears in 1959. Now, I say appears because we have very, very little information on its development phase. We just have a manual for its compiler, which is stamped 1959, and a few articles that show up in trade journals in 1959.
Starting point is 00:07:47 So to start with, let's take a step back and set the stage. What was the state of programming languages in 1959? In a word, primitive. This is the year that Lisp is first implemented. That's often called the second high level programming language, so that should give you an idea for the antiquity here. The first committee meeting that leads to Algole starts in 1958, but those are closer to technical papers than, I don't know, a concrete language. No one's writing Algold and running it in 1959. As far as comparison points, we kind of just have two. We have Fortran and we have Lisp,
Starting point is 00:08:34 the two pillars of old-school programming. Then we have this mix of algebraic languages. These were much more simplistic and not always turning complete. By that I mean they didn't all have conditional execution. Some were just systems for translating equations to machine code. This is so early, in fact, that the term programming language isn't even the only one in town. There's still this linguistic diversity. Some of these early languages were actually called automatic coding systems still. Act 1 is not nearly as complex as Fortran, and not nearly as elegant as Lisp.
Starting point is 00:09:17 It is, however, distinctive for quite a few reasons. In preparing for this episode, I, and this is the point where I first started losing sleep, decided to write an interpreter for the language since I want to get a feel for it. We don't have a preserved copy of the compiler and the state of LGP30 emulation is a little hit or miss. So the only way to really experience Act 1 for me was to make a tool to run code snippets. My analysis here is based in large part off my experience hacking together that interpreter
Starting point is 00:09:55 and then writing a little bit of Act 1. I think that should qualify me to judge. Now Act 1 feels old. This can be hard to quantify sometimes. There are some languages that, to me, just have a certain old-school feel to them. I know I've said that before about some versions of Algol, but with Act 1, I can quantify why I have that feeling pretty easily. So, let me count the ways. First off, the language doesn't use any sort of block or structured code. Act 1 doesn't have anything like curly braces to hold chunks of code, scope, or even begin-end
Starting point is 00:10:38 statements. That makes the language look very simple. And, well, it is. Each line is a single statement. The only caveat is you can do weird compound math equations in certain situations. Now, this is actually very similar to BASIC if you're familiar with that language. In BASIC, each line is a statement. There aren't blocks of code except for later, more sophisticated versions of BASIC. Perhaps that's not a surprise because BASIC does grow out of the LGP 30. Now, back to Act 1. It's a very easy language to parse, computationally speaking. That means it's easy to write software
Starting point is 00:11:25 that can look at a line of act one and figure out what it's trying to do. Each line translates into a single action. So it's pretty easy to translate that into machine code. A line might be multiple machine instructions, but it's still a single action. It's still an addition, a jump to a location in the program,
Starting point is 00:11:46 or a bit of a looping statement. In my case, that makes it really easy to evaluate a program. I was writing an interpreter, not a compiler. The difference there is subtle, but it's worth pointing out. A compiler will take a program and convert that into machine code. So it has different concerns. It has to care about how software runs and what the computer actually looks like. An interpreter, what I was writing, just looks at a line of code and then figures out how to run that at the time. You don't have the same machine considerations that a compiler has.
Starting point is 00:12:23 Since you're really just looking at code and running it, it's called evaluation instead of compilation. And like I said, evaluation of Act 1 is pretty straightforward. I just used very basic and crude pattern matching to determine what a line is asking to do, and then I have a mapping between the patterns and evaluation functions. That's a lot more simple than, say, a parser for Fortran. That would have made developing the first Act 1 compiler, which did run on the LGP 30 itself, a much less daunting process. So perhaps that's the first way we can see that the language was influenced by its target computer.
Starting point is 00:13:06 The typing system in Act 1 is also, well, it does exist. You get two options. A variable can either be an integer type or a floating point type. You can also define arrays in both flavors. Here's what makes it weird. The typing is technically determined at compile time. You can select the Act 1 compiler to output code for floating point values or integer values.
Starting point is 00:13:37 That means that you don't define type in your code. You just switch the compiler to the mode you want. If you've never programmed before, there is nothing normal about what I just said. In most languages, if you have different data types, you would say, oh yes, here's my variable x, it's an integer, here's y, it's a float. And when it was compiled, you could have both of those talking to each other They'd be in the same program in the same binary file having a compiler switch to go between integer and float math That ain't normal What makes us more strange is that there are different operations for integer and float mathematics?
Starting point is 00:14:21 So even in float mode you could do an integer division. But in integer mode, you can't use any of the float operations. I'm not sure what happens if you try to do a float division in integer mode, and I'm not sure I want to find out. The manual just kind of says don't. So then how do we define a variable in Act 1? That's an interesting question. Most early languages are somewhat static in manner. That means you have to declare the variable before you start using it.
Starting point is 00:15:02 In Fortran, you literally have a special section of your code just for stating which variables you want to use. Act 1, however, doesn't follow that practice, at least doesn't follow it entirely. When it comes to single value variables, scalars, things like just a number, everything's handled implicitly. When you first assign a value to a variable, Act1 picks up that you want to create a new variable. It then does its magic and handles that for you, so you never have to declare anything. That should sound very familiar if you're used to Python or PHP or JavaScript or any other dynamically typed languages.
Starting point is 00:15:41 This is very similar in feel to that type system, except with the caveat that you're not gonna be changing around floats and integers and strings and characters, since that concept of a type doesn't really exist in the same way. The syntax, however, will be much less familiar. Let's say you wanna make a new variable called x and set its value to one.
Starting point is 00:16:07 In most languages, you would say x equals one with the variable name or symbol on the left-hand side and the value on the right-hand side of that equal sign. Act one uses the opposite convention and it uses a different symbol. You write 1 colon x. That's the equivalent. That might sound a little strange.
Starting point is 00:16:33 The colon part I can explain. In formal logic, assignment is written as colon equals. That means that the left-hand side is defined as the right hand side. Early computer aficionados are by and large, Mathematic nerds. So when they're trying to figure out what to do with computers, they are often trying to adapt mathematical notation to somehow working on a digital machine. The issue is character sets never ended up having a colon equal symbol. So they ended up choosing one aspect of that symbol, and most chose the equal sign.
Starting point is 00:17:13 It looks to me like Act 1 just went with the other half of that symbol. As for direction, I got no idea. I don't know why it's backwards. I think that's just an artifact of age. There's not some profound reason for the order one way or another. It's just that we've all agreed that variables always go on the left-hand side and values on the right. Act ones so early that there's just not a convention. At least that's my guess. Arrays introduce a weird hiccup though.
Starting point is 00:17:46 One of the most important advances in language design has been consistent syntax. Lisp ends up being one of the best examples of this. Early versions of Lisp have this weird split between syntax used for defining data and for defining code. As the language develops, that split disappears, leading to one consistent syntax for everything. There are very fancy words to describe all this. I like to think of it as the principle of least surprise. When you read about a language and reach a new feature, you should never be shocked. You should never go, well,, you should never be shocked. You should never go, well, I would have never expected that. You should ideally just go, oh yeah, given
Starting point is 00:18:31 what I've learned so far, that makes sense and I understand how they got there. This is fine. I don't need outside context to guess at why this is happening to me. Arrays in Act 1 do not follow that principle. They use their own separate syntax and they're actually declared. Let's say you want to make an array called A and it has 16 elements. You would write that as DIM A16. dim a 16. That tells Act 1 to set aside 16 locations in memory and name them a. Specifically, Act 1 calls that a region. And fine, whatever, you can learn that. It's just a little inconsistent with the rest of the language, but the naming's reasonable. Dim, if you don't know, is short for dimension, which is what Fortran uses to describe array identifiers. That's nice, but once again, it's kind of a surprise.
Starting point is 00:19:35 If you didn't know Fortran, you'd never understand why we were doing this. Now, in a normal high-level language, arrays are treated specially. They're kind of their own data type in a normal high-level language, arrays are treated specially. They're kind of their own data type in a way. But Act 1 is a little primitive. It calls arrays regions because that's how they're implemented.
Starting point is 00:19:56 The compiler literally allocates a region of memory for you to store your data and gives it a little name. That can be used as an array or you can use it as a reference to something else in memory. Let's start with the array case. Then I'll move to the wider implications of this feature in a little bit. So you have your new array. How do you access an element inside that list? You have to declare an index.
Starting point is 00:20:27 Now, this is something that messed me up when I was working on my implementation. You declare an index, let's call it i, by literally typing out index i. That's simple. Once again, it's kind of weird that normal variables aren't declared, but arrays and now indexes are declared, but whatever. Index i creates an almost variable that's
Starting point is 00:20:54 always treated as an integer. That makes sense, right? You can only have integer indexes in arrays. You can get elements 0, 1, 2, and 3, but never element 0.74 or element 9 and 3 quarters. That's a no-go. That doesn't make any sense to the computer. Here's the trip up. The access pattern, the actual syntax you use to access a certain index in an array. In normal languages, you have very specific syntax that says, hey, I'm accessing an element of an array. In Fortran, it's the parentheses. So you'd say array, and then in parentheses,
Starting point is 00:21:35 you put the index you wanna access. In many other languages, it's square braces. You write out something like array, and then in square braces, i, to get the i-th element of the list. Act 1 doesn't play that way. Instead, you just write an array's name followed by the index's name with a little space between. So that nice syntax becomes array space i. Now, dear listener, ya boi don't like that. I really don't like that. Most of my work implementing Act 1 was dealing with this behavior. Basically, what you have to do is keep track of all region names and all index names. Then
Starting point is 00:22:24 you need to scan for an array followed by an index and treat that specially. That's actually kind of complex to write. I implemented this as a two-pass system. That means that my interpreter looks at the code twice. The first pass, it gets all symbols and tracks them. That includes array names, index names, variables, and labels used for jumping and such.
Starting point is 00:22:51 The second pass actually evaluates the code, keeping in mind all of these array and index names. Now, this is fine. Many interpreters and compilers take two passes or even more. The Fortran compiler takes, I think, like five or seven. But that makes for more complicated software. Implementing that on the LGP 30, well, that'd be a real challenge. The actual array access, however, is simple once you identify it.
Starting point is 00:23:23 A region is just a chunk of memory, and the index is always an integer, so it acts kinda like a pointer. That's only half of what regions do. The other half is, well, it's an old approach to things. This is also where my implementation ends. It's where I come up short. Early languages can't do everything. This is for a combination of reasons. One is just that they're primitive. You don't have very fine-grained control over hardware in Fortran. The other reason comes down to trust. It would take a while for programmers to trust
Starting point is 00:24:01 these new-fangled languages. It was common practice to allow programmers to execute machine code directly from a language, or to include assembly language in their code. You can still do this in something like C today, but early languages had low level access built really deep into them. For simple tasks, Act 1 would let you write assembly language directly into your code, or at least something similar. The language literally has commands for manipulating the register and doing math on a very low level. If you want to do something more, you could use regions. Act 1 can call a region as a subroutine. This transfers control to whatever information
Starting point is 00:24:46 or whatever program is stored in that region. Remember, a region is just a chunk of memory with a name. It can hold data, that's how it's used as an array, but it can also hold code. In other languages, this is called static linking. You name a known address in memory, you load some external code into that region, then you call it up as needed. When you compile your Act 1 program, you have to take a few steps to make sure the region
Starting point is 00:25:16 is properly filled with code. That was a pretty manual process, but it apparently worked. To round things out and make the language actually turn incomplete, Act 1 has labels. They're just implemented as a prefix in front of a line of code. You can then jump to that, you can do unconditional jumps, and there's also a built-in loop which lets you repeat a certain label multiple times. That's the entirety of Act 1. If this was a more recent language,
Starting point is 00:25:48 I'd almost call it a systems programming language. Act 1 has a blend of some high-level features while allowing for low-level machine access, like BCPL or Bliss or C. But in this case, this design choice wasn't an aid of some specific task. It was just because you had to have some kind of system access in 1959. In order to be practical, especially on such a small machine, you had to make compromises. Now, I'm kind of done with my implementation. If someone wants to use
Starting point is 00:26:24 it, I can bundle it up and put it on GitHub. It's not very good code, but it will run at one. However, it doesn't implement any of the low level features. That would basically require writing an entire LGP 30 emulator, you know, in JavaScript. And uh, who would even think who would consider doing such a thing especially in JavaScript So then let's go to the big question Where does act one come from? Well, I'll start with what we know
Starting point is 00:27:01 Because it's a bit of a mess and we don't know very much what we know because it's a bit of a mess and we don't know very much. First off, the Act 1 compiler was written by a man named Melvin Kaye. That's spelled K-A-Y-E. He's more better known as Mel, as in the story of Mel. That Mel, if you aren't familiar, Mel is something of a programming folk hero. The story of Mel is about a blackjack program that he wrote while working at Libroscope, how he refused to mangle it even when the president of the company asked him, and how a later programmer could barely comprehend the genius of that software. Drum machines are devilishly subtle beasts, and Mel knew exactly how to make them work. The trick, in part, was some very smart self-modifying code. Mel wrote the Act 1 compiler probably a little bit after his first version of that fabled
Starting point is 00:28:03 Blackjack program. He was hired by Libroscope, the manufacturer of the LG P30 in 1956, so he was basically with the computer since the beginning. This is also a good spot to point out something very important but also very technical. That's how the LGP-30 actually functioned. Drum memory machines are notoriously difficult to program effectively. That's because of the spin of the drum. And I don't mean that as a joke, that is actually why. Each operation takes a variable amount of time depending on what memory locations it needed to access, or even what register it's dealing with.
Starting point is 00:28:51 If you write without that spin in mind, then your program will be wasteful and slow, since you'll be waiting for the right sector on the drum to spin around to the read head. The issue affects everything. That includes instruction fetching. When the computer runs an instruction, it first pulls that instruction from memory so it can start decoding it and trying to figure out what it means. During normal execution, a computer starts at the first instruction in memory and simply increments up to the next one and so on and
Starting point is 00:29:25 so on and so on. You fetch instruction one, you operate on it, and then you fetch instruction two, and so on. On a drum, that's very wasteful. Think about it this way. If you have your drum and each column on the drum is an operation, then when you finish one operation, you won't necessarily be ready to pick up the next one. The drum will be at some random location in its cycle, which depends on how long that first operation took to execute. When you go on FetchAddress2, you might need to wait for the whole drum to spin around, or it might be right under
Starting point is 00:30:05 the read head. If you write naive code, then you have no idea. So you have to get that rhythm worked out. You have to kind of internalize how the drum spins and how long everything takes. Because of this, drum memory machines usually have a very weird machine code. You'll give it an instruction, followed by the address of the next instruction you want to execute. In the story of MEL, it's described as every instruction ending in a go-to.
Starting point is 00:30:36 Run this addition, then go to this other instruction somewhere. That allows for a type of drum optimization, since you can calculate where the drum will be in its cycle after your instruction runs. Then you can say, oh, I want my next instruction to be right under the read head. That makes it so if you're smart about your go-tos, the computer can effectively just go fetch run, fetch run, fetch run, without ever waiting for the drum to spin around to the right location. A lot of these machines even came with tables and these little circular slide rule things for calculating how to do this address trick.
Starting point is 00:31:17 The LGP-30 is one of the best known drum machines, but it does NOT implement this trick. Let me just reiterate myself to be clear. An lgp30 instruction doesn't include the address of the next instruction to execute. That's a common misconception that even I fell into. I only spotted it once I got back to looking at the 30s machine code. The reason for this misconception, at least for me, traces back to the story of MEL. It talks about how MEL was great at optimizing for drum time
Starting point is 00:31:53 by using those built-in go-tos. But the story is actually about two computers, the LGP30 and the RPC4000. That later machine uses the explicit go-tos. The LGP 30 does not. When Mel's making all of these optimizations using these jumps, that's for a different machine. I bring this up partly to clear up some confusion I've seen, but also because this has an impact
Starting point is 00:32:24 on Act 1. The Act 1 compiler is not an optimizing compiler. When Fortran was developed, care was taken to make its output program as fast as possible. But with Act 1, I don't think that would even be possible. There are a bunch of tricks you can play with the LGP30, but you can only do so much to speed things up. This also impacts lower level features of the language, so check this out. Act 1 will accept assembly language instructions and turn those into machine code.
Starting point is 00:33:01 But it's not a one-to-one thing. The LGP30's machine code is somewhat mnemonic. Its instructions, called orders, are each represented as single letters. B for bring, D for divide, that sort of thing. It's common to write them out shorthand that way. Act 1 doesn't use that notation. Instead, it presents much better names. Bring is bring, divide is div. To add x to the accumulator, you write add x. Clear and to the point, Act1 will even resolve variable names into addresses. You will note there is no address at the end of those instructions.
Starting point is 00:33:45 If the LGP-30 supported those chain go-tos, then I'd expect that to carry over into Act 1. But as we can see, it just doesn't. Act 1 also complicates the story of Mel, which I love. Once again, check this out. Here's an excerpt from the story as passed to us by the jargon file via CatB.org. Quote, Mel had written, in hexadecimal, the most popular computer program the company owned. It ran on the LGP30 and played blackjack with potential customers at computer shows. Its effect was always dramatic. The
Starting point is 00:34:27 LGP30 booth was packed at every show, and the IBM salesmen stood around, talking to each other. Whether or not this actually sold computers was a question we never discussed." That sounds macho. And that's kind of the point of the story. That Mel was a different breed of programmer, one that wrote directly in a hexadecimal, never needing any crutches or tools. But that's not true. Mel wrote the compiler for Act 1. He wrote a supremely complex tool for this computer. That's more impressive than just writing in hexadecimal and having a strong ethical compass. There's one more Mel fact that I want to drop before we continue. That's that during this time, 1959, Mel was only 28.
Starting point is 00:35:19 He's not some wise and old sage. He's a young guy who was able to work up a fairly complex compiler with a lot of features on a very difficult to pilot machine. Okay, so enough about folk heroes. Who else was involved with Act 1? Well, we get two names and that's about it. The author of the manual, and who I think designed the language, is named Clay S. Boswell. He's another Libroscope employee. There's not much to go on with Boswell himself, which is pretty frustrating.
Starting point is 00:35:54 But it gets worse. In the opening of the manual, in addition to thanking Mel, Boswell thanks a man named Dr. Henry Bolden of the National Carbon Company. Specifically, Bolden is thanked because he quote, offered some of the basic concepts, end quote. Now, I would like to know what basic concepts we're talking about here. At the end of the day, Act 1 is a first-party compiler written and put out by Leverscope, but beyond that, we don't have very much provenance at all. As we venture deeper into the action, we must first deal with a complex matter.
Starting point is 00:36:40 That is, corporate history. I need to really get out of this rut before I consider a business degree. The chain of custody around the LGP-30 isn't too gross, but it does add a layer of confusion to the story. Normally we could ignore this, we're talking about software after all. This time, however, we kinda need to look at the business side because it even confused me while I was going through documents. The other reason to address this comes down to tracing people. The LGP30 changes hands a number of times.
Starting point is 00:37:18 Each time that change isn't noted very publicly, but we do see changes in things like copyright statements and manuals, so let's start at the beginning and weave up the spider's web. Stan Frankel designs the machine independently from anyone else in 1956. He turns around and sells the designs to a company called Libroscope. Libroscope is itself owned by a company called General Precision Instruments, or just General Precision. These are both military contractors. They become interested in computers, from what I gather, incidentally.
Starting point is 00:37:56 This is also where the machine gets its name. LGP stands for Librascope General Precision. Once again, military contractors, so not much imagination to go around. There's also this typewriter company called Royal McBee involved in sales, but we can kind of ignore them. In 1968, GP is bought by Singer, the sewing machine company. That's the same Singer that buys up the Flexowriter's maker in the same period. The 30 and its variants, although kinda old at this point, are still seeing some use, so there is some documentation that mentions Singer and these machines in
Starting point is 00:38:39 the same breath. There are also a handful of defense companies in the mix that are owned by GP at different points. That means that it can be hard to tell when a document is first party or third party. Might say Librascope, might say LGP, might say GP, might say Singer, might say a number of corporate entities. We have one other player. That's POOL. P-O-O-L.
Starting point is 00:39:06 This is the Librascope user group founded in 1957. As with all things Librascope, there is scant information on this group. From what I can tell, POOL operated like any other user group. Its members met up to give talks, form committees, and work on projects. POOL distributed user-submitted software. We have scans of some of that software and the documentation that was published with it. From those, it appears that POOL
Starting point is 00:39:39 was sanctioned officially by Royal McBee. That's the company that sold, distributed, and handled support for the LGP-30. Librascope is also involved with this because McBee and Librascope had formed a partnership for sales and support. This means we see some weird documents. Among those is a set of manuals for the language called Act 3. That's Act with three I's. It comes officially bound in a manual that looks very similar to the manual set for the LGP-30 itself. There's a General Precision copyright statement printed on its pages. The manual says it was distributed by POOL. But, and here's the whammy, its author is listed as Henry J. Bolden,
Starting point is 00:40:35 doctor, of Union Carbide. This is one of those documents that threw me for a loop at first. I was poring over old newsletters trying to figure out when Librascope had sold to Union Carbide or when General Precision had owned Union Carbide. It turns out, never. Bolden, an employee of Union Carbide, was also an active member of POOL. The software was submitted by him, so it's technically third-party software, distributed kind of like first-party. You kind of gotta love how convoluted it was for old software to get around, right? So we've reached Act 3, but hold on a
Starting point is 00:41:20 minute. You may be asking, what about Act 2? I have found no information that such a program ever existed. If it did, then nothing about it was ever written down or preserved. It could be that Bolden jumped to Act 3 as some kind of joke that I just don't understand, or that Act 2 was the name of some temporary project that was worked on but never completed. Thus we actually reach Act 3. Released in 1961, it claims to be an expansion to Act 1. But where exactly is all this coming from? We know that Bolden was thanked for his contributions to Act 1, but who even is Bolden? Well, and get this, he was a physicist. Prior to the acts, he published on semiconductor physics. The acts, however, appear to be the first time he has any connection with computers. Once again,
Starting point is 00:42:26 early days, so that's not too surprising, right? Once we reach the mid-60s, Bolden becomes involved in the Algol community. He even publishes a number of papers on the language. I've actually read some of his papers before without realizing it. He wrote one of the Algol chapters in ACM's History of Programming Languages, Volume 2. But as far as Act 3 and any prior work go, the man's an enigma. What we do know is that Act 3 wasn't a first-party program. It was distributed via POOL, but it wasn't code from Librascope itself. But it does fall into a bit of a weird zone, since Bolden had some kind of influence on Act 1. It's clear to see there's a dialogue going on here, it's just not clear what the dialogue was.
Starting point is 00:43:22 We have no preservation on that topic at all. Anyway, what we do have is a manual. And oh man, what a manual it is! Act 3 is a more complex language, so I'm not about to implement it. But from reading alone, I've gotten a little concerned. Let me start off with a note on notation. There's this weird thing in Act 1 where a space is technically a special character. A variable's name is allowed to have a space in it. That, dear listener, that is cursed for a number of reasons. In almost every programming language, the space is what's known as a token separator.
Starting point is 00:44:13 It's used by compilers and interpreters to extract meaningful information from source code. When you say 1 plus 1, well that's actually 1 space plus space 1. You're using a space to separate out your tokens. In this case, the tokens are the two 1s and the plus sign. Those tokens have special meaning to the programming language. The space, though, is just there to keep everything clear and isolated. It's one of those underpinnings of the art that, well, it's very, very basic to how things work. Allowing spaces in variable names is very uncommon. I don't think I've ever seen another language that rolls that way.
Starting point is 00:45:02 Sure, it's handy to have spaces and names, but we programmers have traditional ways of naming things without using spaces. There's the beloved camel case, where you smash together words and capitalize each adjoining word, which forms humps and bumps. There's snake case, where you just add underscores where a space would be. But those are traditions. Those take time to form. The acts are developed before there was such a rich tradition. What are the implications, linguistically, of spaced variables? Well, it means you can't use space as a token separator. You have to have some way to separate things out. You can't get around that and still have a recognizable language.
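Since my emulator work is in JavaScript anyway, here's a tiny JavaScript sketch of the problem. The variable names and the apostrophe-for-cond-stop spelling are my own illustration, not actual ACT syntax:

```javascript
// A toy ACT-style expression: "gross pay" is ONE variable whose name
// contains a space. Tokens are separated by the cond stop, printed as '.
const source = "gross pay'+'overtime";

// Splitting on spaces mangles the spaced name into two bogus tokens:
const bySpace = "gross pay + overtime".split(" ");
// ["gross", "pay", "+", "overtime"] — "gross pay" is lost.

// Splitting on the cond stop keeps each token intact, spaces and all:
const byCondStop = source.split("'");
// ["gross pay", "+", "overtime"]
```

Once spaces can appear inside a name, some other character has to do the separator's job.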
Starting point is 00:45:53 Act 1's token separator is a special character called the conditional stop, or the cond stop. This gets us into a whole mess of trouble. So, the LGP-30's character codes are dictated by its main input-output device. That is a customized Friden Flexowriter. That's where numeric codes get translated into printable characters and hammered down onto paper. That's where printable characters typed on the typewriter are translated into numeric codes. This machine is hardwired into the LGP-30, meaning some buttons do wild and very powerful things. Those special buttons sit above the keyboard on their own small panel built into the Flexowriter. Once again, this is a customized device.
Starting point is 00:46:50 One of those buttons is labeled Cond Stop, the conditional stop. If the LGP-30 is running and it's in an input mode, then pressing the Cond Stop will cause the Flexowriter to print what looks like a single quote. That tells the Flexowriter to stop working and pass control back to the computer, and it tells the LGP-30 to start running if it isn't already. It's a powerful thing! That's what was chosen as the token separator for Act 1, and it carries over into Act 3.
Starting point is 00:47:28 That's the token separator. It has a very special power to the LGP-30. So why use such a special character? Well, I hope I've already explained that, linguistically speaking, just looking at the language in a vacuum, it doesn't make much sense. The reason for using the cond stop comes down to implementation, which means it may have been one of Mel Kaye's tricks. I think it's time to admit to something. I've been losing sleep because I've been writing an LGP-30 emulator in JavaScript.
Starting point is 00:48:09 It's a silly and challenging project, but in doing so, I've been learning how to program the machine, and, well, let's just say its reputation is well earned. The 30 is a beastly and mind-bending computer. One particular difficulty is handling strings of characters. The reasons behind this are highly technical and boil down to how the computer stores data as 30-bit numbers, and how you can move data around. Not very easily. I don't have the source code for Mel's Act 1 compiler, and if I did,
Starting point is 00:48:48 I think reading it would actually kill me. Viscerally. I can only spend so many nights without sleeping. So I'm going to speculate here, with my short background in programming the machine and with the manual and my emulator core in hand. Here's what I think is going on. When the LGP-30 takes input, it throws data directly from the Flexowriter into its accumulator register. That's the only way it can grab data from the outside world. That holds 32 bits of data, which is five characters worth of code. That's the largest chunk
Starting point is 00:49:28 of information the compiler or the LGP-30 can take in at a time. So I think what's going on is it gets five characters, hits the cond stop, and then it has some kind of code that pulls that into a buffer, examines what it's holding, updates some variables, and then gets ready to pull in the next token. This way, you'd only ever need to hold one token in the machine's memory at a time. That explains why the cond stop is used. It also means that the cond stop is an implementation detail, one that's made because of how the LGP-30 works, and it's not part of the language's design. Like I said, it doesn't make a whole lot of linguistic sense.
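That speculated read loop can be sketched in a few lines of JavaScript. To be clear, this is my guess at the mechanism, not Mel's actual code, and the token contents are made up:

```javascript
// Speculative sketch of the ACT-1 read loop: the accumulator holds at most
// five character codes, so each token between cond stops must fit in one
// register-sized buffer.
function* readTokens(tape) {
  let acc = ""; // stands in for the 32-bit accumulator (5 chars max)
  for (const ch of tape) {
    if (ch === "'") { // cond stop: hand control back, emit the token
      yield acc;
      acc = "";
    } else {
      if (acc.length >= 5) throw new Error("token overflows the accumulator");
      acc += ch;
    }
  }
}

const tokens = [...readTokens("s1'dim'a'500'")];
// ["s1", "dim", "a", "500"] — only one token ever lives in "memory" at once
```

Under this scheme the cond stop isn't a design choice at all; it's the machine's input mechanism poking through the language.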
Starting point is 00:50:12 One issue this leads to is readability. The Act 1 manual gets around this in a very simple way. When it presents code, it replaces all cond stops with spaces. Then it tells you later, when it describes how to prepare a program, that you have to replace all spaces with a cond stop. It's annoying, but it means that all code examples are actually readable. It looks like normal software. I think this is also more evidence for my speculation
Starting point is 00:50:42 that cond stop is just a thing you have to do for the computer to understand your code, and not part of the language's overall design. Act 3, that does things differently. It uses cond stop as a token separator, and the manual uses cond stop in all its code examples. That... that just kind of sucks. It's made worse by the fact that Act 3 has way more tokens than Act 1 and supports more compound expressions. So a single line of Act 3 code as presented in the manual can have a dozen or more single quotes
Starting point is 00:51:28 mixed in. It's frankly pretty hard to even read. I don't just mean this as a dumb dunk on the language, oh, Act 3 is an ugly programming language. No, I mean this in a very practical sense. There's a bit of a smell test that happens whenever a programmer sees a new language. It's one of those first impression things. So let's say you pick up the Act 3 programmer's manual. That's what's intended to be your first impression of the language. We can believe we're rational and that we don't form biases, but in reality we do. That first impression may determine if you pick up a new language or stick with something you
Starting point is 00:52:11 already know. The first impression you get of Act 3 is very messy at best. The manual leads you into this program for calculating the roots of a quadratic equation. It's classic stuff. And it has all the good bits you'd want to show off in a language. There's all the mathematical operations, plus a square root. You even have an excuse to show off numeric input and output. The manual sets up the program, shows a block diagram, and talks about which variables it uses. Then you turn the page to source code
Starting point is 00:52:46 for the program, which is almost unreadable. The cond stop notation is an immediate issue. It makes the code look very, very busy, and it fills the page almost entirely with black ink. The feeling is enhanced by a new feature of Act 3, string printing. In Act 1, you could only output numbers, but Act 3 can output characters. This is done, perhaps, in one of the more annoying ways it could be accomplished. There aren't string variables in Act 3. Your types are either floating point,
Starting point is 00:53:27 integer, boolean, or null. Strings are a special type of argument you pass to the print command. Act 3, for some reason, treats each character as a token. That means there has to be a marker between every character. And that marker is the single quote. But that's not where it ends. So check this out. In order to print an uppercase letter, you have to place a control character, UC2, into your string. Then to go back to lowercase, you have to use another control character called LCL, or maybe LC1. It's impossible to tell the difference because the LGP-30 treats 1 and lowercase L as the same character. The manual uses the phrase DAPRT for this. It's short for Direct Alphabetic Print.
Starting point is 00:54:35 So a Hello World program reads as DAPRT, stop, UC2, stop, H, stop, LC1, stop, E, stop, L, stop, L, stop, O, stop, space, stop, UC2, stop, W, stop, LC1, stop, O, stop, R, stop, L, stop, D, stop, stop. That is, that's not good for readability. So why in the world did Bolden go this route? Well, I think I can answer this one. And it comes down to precisely how the LGP-30 functions. It's another example of how hardware has really influenced the acts. The only output instruction for the LGP-30 is P, which is short for print, and it's a
Starting point is 00:55:26 direct print. Most instructions on the computer are formatted as an instruction code followed by a memory address. Add will add the value of an address in memory to the accumulator. Store will move the value of the accumulator to an address in memory. Print, however, is different. It sends the value of the upper six bits of the argument to the Flexowriter. That can be a character or a control code, which means something special to the Flexo. This means that to print a variable,
Starting point is 00:56:01 you have to actually modify the print instruction. The LGP-30 actually expects you to self-modify your code in a lot of spots. Luckily, there is an instruction specifically to modify the argument part of another instruction. What can I say? It's an elegant machine for a less civilized time. But there's an unfortunate fact buried in this.
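Here's a rough JavaScript sketch of that self-modification dance, using my emulator-style objects rather than real 30-bit words. The helper name and word layout are mine, not the LGP-30's actual encoding:

```javascript
// Sketch: patch the address field of a stored print instruction so it points
// at the variable we want printed. Words are simplified to small objects.
const drum = [];
drum[0x10] = { order: "p", address: 0x000 }; // the print instruction to patch
drum[0x20] = 42;                             // the variable we want printed

// Stand-in for the LGP-30 order that overwrites only the address portion
// of another instruction, as described above.
function storeAddress(memory, target, newAddress) {
  memory[target] = { ...memory[target], address: newAddress };
}

storeAddress(drum, 0x10, 0x20);
// drum[0x10] is now { order: "p", address: 0x20 }: print what's at 0x20
```

The instruction's operation stays put; only its argument gets rewritten before it runs.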
Starting point is 00:56:29 I said print only sends out six bits of data at a time. That means you only get 64 possible characters and control codes. That's not enough for a full character set with upper and lower cases. The solution is a special set of control codes, one that flips the Flexowriter into uppercase mode, and one that flips it into lowercase mode. Act 3's direct print command is a very thin wrapper over the LGP-30's print instruction.
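To make the DAPRT mess concrete, here's a hypothetical JavaScript helper that renders text the way the manual seems to want it: one token per character, UC2/LC1 case shifts mixed in, and cond stops (shown as apostrophes) between everything. The function and the lowercase token spellings are my own; only UC2, LC1, and DAPRT come from the manual:

```javascript
// Hypothetical helper: render text as an ACT-3-style DAPRT argument,
// one token per character, with case-shift tokens inserted as needed.
function daprt(text) {
  const tokens = ["daprt"];
  let upper = false; // assume the Flexowriter starts in lowercase mode
  for (const ch of text) {
    const needUpper = /[A-Z]/.test(ch);
    if (needUpper && !upper) { tokens.push("uc2"); upper = true; }
    if (!needUpper && upper && /[a-z]/.test(ch)) { tokens.push("lc1"); upper = false; }
    tokens.push(ch.toLowerCase());
  }
  return tokens.join("'") + "''"; // trailing cond stops close the statement
}

daprt("Hello World");
// "daprt'uc2'h'lc1'e'l'l'o' 'uc2'w'lc1'o'r'l'd''"
```

Even a two-word greeting ends up with over a dozen cond stops, which is exactly the wall of single quotes the manual's pages are filled with.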
Starting point is 00:57:03 This is just kinda how Act 3 rolls. It's a language with almost no abstraction. Its implementation details tend to blur the line between the language itself and the LGP-30's hardware features. Maybe I'm just seeing that because I've been so deep into the LGP-30 lately, but I think it's pretty visible. So let me give you some more
Starting point is 00:57:30 examples. One is the weird floating point representation that Bolden uses and how that leaks into the language's manual. This is a bit of a wild one to me, especially as a modern programmer. So Act 3 can handle floating point and integer arithmetic at the same time. It does, however, keep them separate. You can't do float math on integers without first converting them to floats. Floats are first-class citizens here, which is actually the opposite of Act 1. The plus operator assumes floating point numbers. To do integer addition, you have to use a special integer plus operator. Strange, but not that far outside the scope of what's expected. But floats are tricky. Act 3 represents them in scientific notation. So 1.0 would be 1e0 or 0.1e1, right? Well, no, you actually can't represent a 1.0. To quote from the manual, and just to note, whenever
Starting point is 00:58:40 I say a small chunk of code, I've taken out the conditional stops. Just assume that they're always present. Quote, there are two floating-point constants in the program in Chapter 2: .2e1 equals 2.0 on line 25, and .4e1 equals 4.0 on line 27. The constant 1.0 should be written as .9999999990, and similarly for other powers of 10, for most accurate representation. Whereas .1e1 is represented as 1.0000002." That's, well, it's at least annoying. So what's going on here? It's called floating point imprecision. Many languages have this problem, but this one seems particularly frustrating, especially
Starting point is 00:59:48 since Act 3 has elevated floats to being first-class citizens. You literally can't write an Act program that simply adds 1 plus 1 without understanding its operator-enforced typing system and the caveats of how it stores floating point numbers in memory. The implementation is leaking out. This is also seen in how Act 3 handles labels. It uses the same notation as Act 1: the letter S followed by a number. So your first label might be S1, another might be S2, but there's no reason to be sequential. You could start with S8 or S58. But there's an interesting note in
Starting point is 01:00:31 the manual. It says S1 is equivalent to S0001. Why would that be? Well, I thought about this for a while. I had gotten stuck in the modern programmer mindset. When I implemented labels for Act 1, I reached for my favorite tool, a string indexed array, or in JavaScript, an object. When my own Act 1 encounters a label, it takes its name as a string
Starting point is 01:01:03 and uses that to store a reference in an object. It's stored by name. When I ran into this warning about s1 equaling s0001, I thought, well, that's a bit of annoying extra work for the compiler. You'd have to split the s from the number, then remove any leading zeros, then check for a label in your structure. But no, that's not what's going on here. That's modern me making assumptions. If you strip off the S, you get a number. That can be the index for an array, a table, if you will.
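A sketch of how that could work, in JavaScript terms. The function name is mine; the 0-to-190 bound comes from the manual:

```javascript
// The "s1 is equivalent to s0001" note falls out naturally if labels live
// in a plain array indexed by the number after the s. Sketch only.
const labels = new Array(191).fill(null); // manual says labels run 0–190

function setLabel(name, address) {
  const index = Number(name.slice(1)); // "s0001" → 1, "s1" → 1: same slot
  if (!(index >= 0 && index <= 190)) throw new Error("label out of range");
  labels[index] = address;
}

setLabel("s0001", 0x420);
// labels[1] now holds 0x420 — and looking up "s1" lands on the same entry
```

No string handling, no leading-zero stripping logic needed: `Number()` does the collapsing for free, and on the LGP-30 the equivalent would just be reading the digits as a number.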
Starting point is 01:01:45 In other words, that little detail tells us how Bolden is using a simple array to store labels. We also know how big this array is. The manual tells us that labels can range from 0 to 190 inclusive. So, he has an array that's 191 entries long. Once again, implementation comes to the surface. Why am I spending so much time hammering on this? It comes back to fundamentals. The point of a programming language is to provide abstraction on top of a computer. It shelters the programmer from the details of their machine, which is supposed to make programming easier and more flexible. Take Fortran as the canonical example. The language is a contemporary of the
Starting point is 01:02:26 acts. It provides integer and floating-point variables. Its manual talks about the range of numbers that each one can hold, but it doesn't have a section where it flatly says it can't represent 1.0 and that you have to use little tricks to get around the issue. Fortran provides printing, but it does so in a much cleaner and abstracted way. You give it formatted data to print, and you let Fortran figure out how to drive the printer for you. What's strange is the difference here. Act 1 feels like it's almost a toy language, but Act 3 feels strange. It feels much more official in presentation, but the
Starting point is 01:03:11 language itself doesn't. It doesn't feel tight, almost like it was overreaching what was possible. The language itself still feels like a toy or a side project, just one that grew a little too big, which to be fair, it likely was. Bolden was still working his day job at Union Carbide after all. Maybe this is just the quality of code that we can expect out of some user groups.
Starting point is 01:03:38 Or maybe this is just a manifestation of how limited the power of the LGP-30 was. That's the general rundown of the acts. They're interesting but flawed languages. They're tied very closely to their host hardware, so they have taken on some of its quirks. I want to close out by looking at programming the LGP-30 in general and the actual process that was used. We have a lot of really good documentation around the machine and its programmers, and I've been wading pretty deep into those waters lately.
Starting point is 01:04:15 Since I took on this emulator project, I've been reading a lot about how the LGP-30 worked internally and how it was used in practice. It turns out we have a lot of preserved LGP-30 software, in part thanks to POOL. The programs they distributed came with documentation on how to load and run those programs. That was needed because the loading process on the 30 was complex. Old machines in general just work differently, and we're talking about a very old machine here. First there's the actual process to bring the machine up. You make sure the manual entry button toggle is on, you hit the power button,
Starting point is 01:04:58 the machine starts to warm up, then you hit the operate button which starts the next step of the warm-up phase, which takes about 50 seconds. You wait for the standby to operate light to go out, and then you're ready to roll. During that time, it's doing two things. It's physically warming up the vacuum tubes, and it's spinning up the drum. The 30 uses honest-to-goodness tubes and those need to be brought to a certain temperature to properly operate. Its
Starting point is 01:05:30 memory drum also needs to be spinning at the proper speed to be read or written to. That's a little weird but understandable, right? We've all had to wait after turning on a computer, so we get it, it's warming up. Today it's a metaphor; back then it was very real. But that's just how things go. From here, though, things start to get different. The first difference has to do with memory. The LGP-30 used a magnetic drum for memory. That's technically non-volatile storage. Bits are stored as regions of magnetization, just like on a hard drive platter. That leads to something weird. Now, I haven't seen this written anywhere explicitly, but this should mean that the LGP-30 starts up in an unknown state. Whatever was last in memory will still be resident on the drum.
Starting point is 01:06:28 The same goes for data in registers, since registers are also stored on the drum. In theory, you could turn on your LGP-30 and start up a program from where you left off. These days, computers use volatile memory. On basically any silicon computer, memory has to have electrons flowing to still store data. Information isn't retained once those electrons stop flowing,
Starting point is 01:06:55 once the machine's turned off. That means each boot will be the same, roughly speaking. Memory will always be empty, but that wasn't the case with the 30. So then, how do we get that machine into a workable and known state? How would one run a cool blackjack program? Well, you have to bootstrap the machine. That's a word that should sound familiar. It's the automatic process that computers go through to turn on. Except in this earlier era, that process was manual.
Starting point is 01:07:30 The LGP-30 doesn't start with usable software, at least not necessarily. Maybe you could save a program and reuse it, but the jury is still out on how actually reliable that would be. So you have to manually enter what's called a bootstrap program, or a loader program. Programs for the 30 were stored on paper tape, but the computer on its own, well, it didn't know how to load one of those programs. To even get to that step, you had to enter in a simple loader program. That was a bit of a process. The details come down to buttons and blinking lights. The LGP-30 had two main points of human contact.
Starting point is 01:08:14 its Flexowriter and its front panel. The Flexo we've covered a lot. It's a glorified electric typewriter with some very special data encoding and extra buttons. The front panel is a little more simple. That's where things like the power and operate buttons live, along with about a dozen others. In general, all data flows into the LGP-30 via the Flexowriter, while control of the computer itself was handled from the front panel.
Starting point is 01:08:43 For the bootstrap to make sense, I need to explain a little bit about how the 30 functions in detail. So you have three registers in the machine. A is the accumulator, R is the instruction register, and C is the counter register. A is used for math and is the only register that you can directly modify. R is where instructions are stored as the computer executes them. C is the instruction pointer. It's the address of the instruction that the LGP-30 is currently running. The basic operation loop goes: read the
Starting point is 01:09:21 instruction at address C into register R, run the operation in register R, adjust C, and then go back to the top. The front panel has a few buttons that let you manually go through that process. One, clear counter, sets C to zero. That just tells the computer to start over at the very beginning of memory. Another, fill instruction, transfers the value in the accumulator into the instruction register. The final button, execute instruction,
Starting point is 01:09:52 well, that just runs whatever instruction is currently in the instruction register. We can combine this with something I mentioned earlier and another button. The manual input button puts the computer into this manual mode, which sends keystrokes from the Flexowriter directly into the accumulator.
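In emulator terms, that whole dance is only a few lines of JavaScript. This is a sketch based on my reading of the machine, with the instruction decoder stubbed out and all the names my own:

```javascript
// Minimal sketch of the LGP-30's fetch/execute cycle and the three panel
// buttons. The array stands in for the drum; decoding is stubbed out.
const machine = { A: 0, R: 0, C: 0, memory: new Array(4096).fill(0) };

const clearCounter = (m) => { m.C = 0; };      // "clear counter" button
const fillInstruction = (m) => { m.R = m.A; }; // "fill instruction": A → R
const executeInstruction = (m) => { /* decode and run m.R (stub) */ };

function step(m) {
  m.R = m.memory[m.C];   // fetch the instruction at address C into R
  executeInstruction(m); // run it
  m.C += 1;              // advance the counter (a jump would overwrite this)
}
```

The manual-mode trick is just those buttons composed by hand: type a word into A, press fill instruction, press execute instruction, repeat.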
Starting point is 01:10:14 This means you can effectively type machine code directly into the computer. You can program the thing in hexadecimal using nothing but the keyboard and a few toggle buttons. Of course, there's a neat trick going on here, which took me a while to catch, so I think it's worth explaining. When the Flexowriter sends a character to the LGP-30, that character is encoded as a 6-bit number. But in normal operation, the LGP-30 only puts the first 4 bits of data into the accumulator. Why?
Starting point is 01:10:50 It's because of this thing called the order code. Each order, each instruction you can tell the 30 to carry out, is encoded as a 4-bit number. Manuals show a mapping between the actual binary representation and a mnemonic for each of these orders. P is short for print, which is actually 1000. A is short for add, which is stored as 1110. But that doesn't line up with the Flexowriter's character encoding. P should be some 6-bit number, same with A.
Starting point is 01:11:24 The trick is truncation. When the LGP-30 is handed a character, it chops off the last two bits, the least significant bits. That is, unless it's instructed not to. So normally, the P will enter as just 1000. This truncation makes the interface for the machine, well, not exactly user-friendly, but perhaps more manageable. It makes it so that if you press a 0 on the Flexowriter, the computer reads the value 0000. The first four bits of every number line up with its numeric value, a zero is a zero, a one is a one, and so on. That lets you type numbers in more or less directly. Orders follow the same pattern. This is so central, in fact, that the Flexowriter even uses different colored keys to mark out numbers and orders. Okay, so you can enter a program in by hand. That's only half the
Starting point is 01:12:27 battle. Since the bootloader is punched in manually, it can be, well, almost anything. It's not always standard. Plus, there's this incentive to have the smallest bootloader possible. You don't want to be punching in dozens of instructions if you can get away with six. Thus, bootloaders end up being these terse and kind of mysterious programs. You have to be careful to match the bootloader with the program it's supposed to load. The bootloader determines where the program is loaded and also how it's loaded, and that can have major ramifications.
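A bootloader's whole job really is small enough to sketch. Here's a JavaScript version of the idea, with the tape as a string of hex digits, apostrophes standing in for cond stops, and four-bits-per-character packing as a simplification of the real tape format:

```javascript
// Sketch of a bootloader's inner loop: pull characters into the accumulator
// until a cond stop, then store the assembled word and advance the address.
function loadTape(tape, memory, startAddress) {
  let address = startAddress;
  let accumulator = 0;
  for (const ch of tape) {
    if (ch === "'") {                // cond stop: word complete
      memory[address] = accumulator; // store it, advance, reset
      address += 1;
      accumulator = 0;
    } else {
      accumulator = accumulator * 16 + parseInt(ch, 16); // shift in 4 bits
    }
  }
  return address - startAddress; // number of words loaded
}

const mem = new Array(4096).fill(0);
loadTape("00000001'0000000a'", mem, 0x100); // loads 2 words at 0x100
```

Change `startAddress` and you've changed where the program lands, which is exactly why a bootloader has to match its tape.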
Starting point is 01:13:04 Once boot is up, you can load a tape. This is still done at the Flexowriter, and it's actually kinda automated, at least somewhat. Bootloaders usually implement a small loop that grabs a chunk of data from tape, throws it in memory, and then moves on to the next chunk. The process here works the same as entering the bootstrap, it's just under computer control. Tapes are formatted as 8 characters, followed by a conditional stop. The LGP-30 pulls those 8 characters into the accumulator, then when the cond stop is reached, the bootloader takes back over and moves the data into memory. That operates exactly like we saw with
Starting point is 01:13:46 the acts. In both cases, the program is using the LGP-30's very specific behavior to handle data transfer. This continues until the tape runs out, at which point the program is ready to run. The final step is to actually run the program. This is where you go back to manual mode. You have to instruct the LGP-30 to jump to the start of the loaded program. That's done by manually entering an unconditional transfer instruction, in hexadecimal, into the machine. I did say this was a different kind of computer, right? After all that, your program is running. If you were going to, say, compile an Act 1 program, then there would be more steps to come. You'd have to load in another tape containing your source code, then set a blank tape to do
Starting point is 01:14:35 a punch-out, but in general, that's the rundown. I want to pull us back to what's kind of become the touchstone of the episode. The Story of Mel opens like this, quote, Real programmers write in Fortran. Maybe they do now, in this decadent era of light beer, hand calculators, and user-friendly software. But back in the good old days, when the term software sounded funny and real computers were made out of drums and vacuum tubes, real programmers wrote in machine code. Not Fortran, not RATFOR, not even
Starting point is 01:15:12 assembly language. Machine code. Raw, unadorned, inscrutable hexadecimal numbers, directly." The process I've outlined is what's being described here. The Story of Mel is written about working at Librascope on one of these old drum machines. As such, I think we can now point out a possible misinterpretation. Programming in hexadecimal numbers, directly, wasn't outside the norm in this era. It was the norm. Programmers didn't necessarily like it. They probably didn't view themselves as being more macho for doing so.
Starting point is 01:15:56 The evidence is in the fact that programming languages emerged. That some of the best programmers to ever live, from Rear Admiral Grace Hopper to Mel Kaye himself, were involved in the development of programming languages and tools to make programming easier. The folk that wrote in inscrutable hexadecimal were trying to move to a more decadent era. They're the reason we can drink light beer. There's also something more subtle here. When you think of programming in hexadecimal today, your hair probably stands on end. Computers now are insanely complex. The 64-bit PC chips have variable instruction lengths, with instructions up to 15 bytes long. That's not bits. That is bytes.
Starting point is 01:16:48 We're looking at thousands of operation codes, huge amounts of memory, not to mention multi-threading and multi-processing. That would be truly horrifying to program in hexadecimal. The LGP-30 and its contemporaries, on the other hand, were built for this kind of programming. You had to program them in raw hexadecimal, it was unavoidable. As a result, there are all kinds of tricks and design choices built into these machines to make that as easy as possible. You can see it right on the LGP-30's keyboard. So Mel programmed in unadorned, inscrutable hexadecimal numbers.
Starting point is 01:17:29 At least, sometimes. Okay, that does it for this episode. Maybe I should just title it my second love letter to the LGP-30. So what have we learned? ACT-I and ACT-III, which, yes, I will always call the ACTs, are quirky languages. That may be putting it mildly, in fact. They are rough, they're weird, and they aren't exactly powerhouses. Looking at these languages gives us a unique view into programming on early machines. The ACTs are so closely tied to the LGP-30 that we can see the machine's lower-level
Starting point is 01:18:11 features bleeding through. And this wasn't just a matter of early programming languages either. This was particular to the acts. Remember that this is after Fortran was developed, and that's a pretty abstracted language. Act 1 is developed at the same time as Lisp, and that is a wildly abstract tongue. But there is a larger lesson here. It's easy to fall into this trap about the machoness or purity of earlier periods. It's this weird contradictory, right? Programmers in the past were at once more powerful and more versed in the arcane arts, yet their work was more simple, their challenges
Starting point is 01:18:55 more attainable and more pure. Ah, to be writing Fortran in a machine room instead of being stuck in a cubicle writing PHP. But that's wishing for a past that never existed. Programming the past wasn't necessarily better or worse. It was different. Computers were different, tools were different, education and training was different. I'm sure if you asked Mel K to program a modern machine in unadorned hexadecimal, you know, like in the good old days when men were men, when light beer didn't exist, well, he'd probably throw up his hands just as quickly as any of us. But I'd bet that he'd take to modern languages and modern tooling just fine. While it's fun to think of the past as a mystical place, we
Starting point is 01:19:42 have to recognize when that view leans heavy on the mystic. I think it's a lot more interesting and compelling when you realize how much more we share with early programmers and their struggles, instead of ascribing them with some superhuman prowess. As for me and my emulator, well, I'm working on it. I've learned quite a bit about how to program the LG P30. I'm not the best, but I'm getting passable. And yes, it is an unadorned hexadecimal and all that. I'm to the point where the emulator can almost load old programs on tape, but there are some issues. Right now I'm in the process of rewriting everything and I'm publishing my updates to GitHub.
Starting point is 01:20:25 I have future plans with this thing, so stay tuned. Until then, well, I'm tired, so we're just doing a short sign-off so I can get back to either sleep or more TypeScript. Thanks for listening to Advent of Computing. You know where to find me, and I'll be back in two weeks. As always, have a great rest of your day.
