Advent of Computing - Episode 130 - ALGOL, Part II

Episode Date: April 21, 2024

This is a hefty one. I usually try to keep things as accessible as possible, but this time we have to get a little more technical than usual. We are picking up in 1964, with the first proposals for a new version of ALGOL. From there we sail through the fraught waters of ALGOL X, Y, W, and finally 68. Along the way we see how a language evolves over time, and how people and politics mesh with technical issues.

Selected Sources:

https://dl.acm.org/doi/pdf/10.5555/1061112.1061118 - Successes and Failures of the ALGOL Effort

https://sci-hub.se/10.1109/MAHC.2010.8 - Cold War Origins of IFIP

https://archive.computerhistory.org/resources/text/algol/algol_bulletin/ - The ALGOL Bulletin

Transcript
Starting point is 00:00:00 One of my favorite realizations this year is that programming has always kind of sucked. I found evidence of that back a few episodes in a paper on early computer games written by Alan Turing himself. He explained, in better terms than I ever could, that any program actually represents a great achievement because programming is very, very difficult. That was in the 1950s, just a few years into the discipline. I like this because I think programming is still pretty hard. It also gives us the beginning of a fun narrative. I think it's pretty valid to look at the history of programming languages as this long struggle to make programming easy. Has it succeeded? Well, that's hard to say. And I don't mean that as some kind of joke or as a way to sound
Starting point is 00:00:53 contemplative. I honestly mean it's hard to say. Part of solving the programming puzzle has always been identifying what problems actually exist. It's very easy to just say programming is hard. It's more difficult to say why it's so difficult. There has to be something of an iterative approach here. You identify a problem, attempt to solve it, and then see how it goes. Sometimes that leads to, sadly enough, new problems. Sometimes the solution is actually worse than the initial state. And this all changes over time. Take, for instance, memory management. Back in Turing's day, there wasn't an easy way to get a chunk of memory to work with.
Starting point is 00:01:39 On those early machines, you just had to know which parts of memory were free for use. That was done by knowing your program and your computer inside and out. Flash forward to today, and we have all kinds of solutions to this. Some languages let you allocate memory manually. You actually say, hey computer, I need a thousand bytes of RAM. But for that to work, you have to also manually free up that memory when you're done using it. There's a whole class of bugs caused when you forget to free. That's a new problem created by an attempted solution. We can actually find a whole host of these types of problems and solutions and problems. They're easy to find if you know where to look.
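To make that class of bug concrete, here is a minimal sketch in C, a language we'll meet again later as a direct relative of ALGOL. The function names are mine, purely for illustration:

    #include <stdlib.h>
    #include <string.h>

    /* Ask for a thousand bytes of RAM, use them, give them back. */
    void well_behaved(void) {
        char *buffer = malloc(1000);
        if (buffer == NULL) return;  /* allocation can fail */
        strcpy(buffer, "hello");
        free(buffer);                /* the manual step people forget */
    }

    /* The classic bug: call this in a loop and the program slowly
       eats all the memory it is ever given. */
    void leaky(void) {
        char *buffer = malloc(1000);
        if (buffer == NULL) return;
        strcpy(buffer, "hello");
        /* no free(), so the thousand bytes are stranded */
    }

    int main(void) {
        well_behaved();
        leaky();
        return 0;
    }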
Starting point is 00:02:23 And if this does match up with my larger overarching narrative, it ties us into the long march of improving programming, or at least trying to improve programming. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 130, Algol Part 2. Today, we're truly entering the belly of the beast. We'll be discussing, primarily, the road to Algol 68, and in doing so, looking at a lot of the features of that language itself. This is, I think, one of the most inaccessible piles of sources I've dealt with in quite a while. Now, let me be clear here about
Starting point is 00:03:13 what I mean when I say inaccessible. There are some topics where sources are just scarce, or hard to come by, or even, you know, physically difficult to access. But that's not the case with Algol. There is a glut of sources here. We have everything down to meeting minutes. The inaccessibility here is purely in the nature of those sources. The word that I keep having in my head is hermetic. The ALGOL 60 spec and its associated papers were technical, but not that far out of line with contemporary papers on programming. Then we reach this new era, something like 1964 onward, where the language around ALGOL and the notation around it is only vaguely recognizable. Maybe this is on me. I'm not
Starting point is 00:04:08 actually a computer scientist. I'm just an industry ghoul who has an interest in this kind of stuff. But whatever the case, the texts around ALGOL 68 read as if they were coming from a different programming tradition. As if they were written by, well, some hermetic order of digital monks. Algol 68 was slowly developed over a series of drafts. Each draft, well, really each proposal paper submitted to the group that was working on Algol, is numbered. So when Algol people are talking about the development of the language, they speak in terms of these draft numbers. These histories are full of sentences like, MR-87 had some good ideas, perhaps just as good as W-2.
Starting point is 00:04:56 Those ideas would find their way to MR-92, but it wasn't until MR-99 that hipping and dereffing really fell into place. That's not to mention all the colloquialisms. This is something I've actually encountered professionally, but it's weird to see in written works actually published in academic journals and actually in physical print. I used to program Perl professionally. That's Perl 5, of course. I personally think it's a very beautiful and misunderstood language. Anyway, I'd call the community around Perl pretty insular. Maybe even hermetic, to keep up the analogy.
Starting point is 00:05:39 The Perl community is so well-connected and the language is so complex that there are all these little tricks and patterns that have been developed over the years. Most of these tricks have names. Just off the top of my head, I can remember using the Orcish maneuver all the time. The Venus, Inchworm, and Spaceship operators too. That's not to mention all the weird community activities like golfing and poetry. Perl has its own material culture. The difference is that Perl culture is primarily online. You can read about it on forums,
Starting point is 00:06:13 mailing lists, and websites like PerlMonks. Or, I guess, some places on CPAN. But with Algol, things feel a little different. You find all these same cultural features in academic papers, in manuals, and in technical specifications. Just as an example, there's this thing called Jensen's device. It's this trick you can pull as a consequence of how ALGOL 60 passed parameters. It's something that was developed by a programmer named Jensen. It is not a feature of the language, at least not explicitly, but it was picked up and used by the community of ALGOL 60 programmers. If you're just reading that ALGOL 68 had to include support for Jensen's device, well, that sentence makes no sense. Unless, that is, you are
Starting point is 00:07:08 initiated in the arts. Or you've read a lot of this stuff. The point is, this all makes Algol past 1964 particularly challenging to discuss. So if I take some missteps, then please, I mean no offense. I really do feel like an observer here, and as such, I'm trying to strip out a lot of the obtuse language where I can. At least, I want to make this episode accessible, if at all possible. In this episode, we're going to be peering into the strange world. Last time, we looked at the early era of algol, its issues, and how those issues were addressed over time. We'll be continuing that cycle by examining how Algol adapted after 1964. Which ideas were folded into the new language, which were discarded, and what new
Starting point is 00:07:59 problems arose. We'll also be talking about the perils of committees. This is something that Peter Naur, one of the luminaries of early Algol, wrote on at length. So, let us begin our exploration. I'm using 1964 here as the start of the next phase of Algol's development, because in that year, we start to see serious discussions about a new ALGOL language. Like last episode, my guiding star here is C.H. Lindsay's ALGOL 68 session from The History of Programming Languages 2, or HOPL 2 as I will call it throughout this episode. The HOPL series really is a wealth of information for any of this kind of stuff. I highly recommend it if you can track down a copy or find the scans on the internet archive.
Starting point is 00:08:54 1964 is also, roughly, where we left off last time. Algol X is proposed as a conservative modification to Algol 60, a fast facelift to keep things moving along. ALGOL Y is proposed as a more radical change, which will take longer but be better in the end. Y is the easier of the two languages to address because, well, it's a dead end. There are a few papers that mention the language, but it just kind of dies out. What we do know is that Y would have either been a homoiconic language or just a more powerful meta-language. What ends up happening, from what I gather, is that there was movement on Algol X and none on Y. For this to make sense, I need to explain a few of the more political aspects
Starting point is 00:09:46 of ALGOL. I need to get into something that I've been dreading. That's IFIP and WG 2.1. If that sounds confusing, well, welcome to the weird world of ALGOL. IAL, or ALGOL 58 if you prefer, was made rather informally. It was developed by a committee composed of, as Lindsay explains, quote, half nominated by ACM and half by various European institutions, end quote. That's all very self-organized, which is fine for this kind of effort, at least as long as everyone is invested and plays by the rules. This structure also lacked consistent funding and, as Lindsay puts it, authority. It's just a random working group of programmers at this point. Things change in 1960. That year, UNESCO founded IFIP, the International Federation for Information Processing.
Starting point is 00:10:48 This was a fully international organization meant to oversee and facilitate international collaboration on, well, programming and data processing. That lines up exactly with the goals of ALGOL. The story goes that in 1960, as the ALGOL 60 spec was finalized, the project was passed off to IFIP. Now, this is where the structure of this organization starts to matter. IFIP is composed of technical committees, which themselves are composed of working groups. All capital letters, I assure you. ALGOL fell under the purview of TC2, the Technical Committee on Programming Languages. More specifically, the ALGOL effort would be handled by Working Group 2.1, or WG 2.1, all very official, very structured, and very much funded with international backing.
Starting point is 00:11:47 WG 2.1 was composed of both new and old faces to the ALGOL effort. For instance, Peter Naur would stick around. A key difference in this new regime was that everything adhered to a more rigid and bureaucratic structure. This would slow development to, in some cases, a standstill. One of my favorite critiques here is from Peter Naur's Successes and Failures of the Algol Effort. To quote directly, Perhaps we should count it as a major failure of the Algol 60 effort that it had created the false impression that this could be done fairly easily by an act of will. The attitude within IFIP Working Group 2.1 became dominated by the desire to produce
Starting point is 00:12:33 a monument, an urge that I consider extremely harmful. It has distorted the idea of what was and what was not achieved in the Algol 60 report. One of his refrains is that IFIP needed ALGOL, but ALGOL didn't necessarily need IFIP. The original ALGOL spec had been developed in two years, and it only had a few committee meetings during that time to produce that spec. It was a small and agile project. The team moved fast and broke things, to borrow an overused adage. IFIP wasn't able to do that. The new committee plodded slowly down the path to a new ALGOL.
Starting point is 00:13:20 The refrain about the needs of ALGOL versus the needs of IFIP is something to look out for here. ALGOL doesn't actually need that much. It needs funding for meetings, it needs funding for print and paper, it needs backers that want to program it, and that's about it. That could have been accomplished by the smaller ad hoc committees of the ALGOL 60 project. Now, this is where I'm going to enter some speculative territory, so the usual cautions apply. IFIP, of course, was not formed in a vacuum. It was created in a very specific context, that being the Cold War.
Starting point is 00:14:01 When I say that IFIP was intended to promote international collaboration, that does mean international, with a capital I. It was meant to bring in computer scientists from both sides of the Iron Curtain. Ksenia Tatarchenko examined this aspect in a 2010 paper titled, fittingly, The Cold War Origins of IFIP. I'm pulling some of my evidence from this paper. Tatarchenko notes the tension between the West and the USSR had escalated in the lead-up to IFIP. Sputnik had launched in 1957, which made attempts at collaboration a bit more difficult. The spirit of the time had shifted away from collaboration in the sciences and more towards competition. For IFIP to work, and for it to continue receiving funding,
Starting point is 00:14:53 it had to prove that collaboration was again possible. More than that, IFIP needed victories. It needed something like ALGOL. This meant that the community around ALGOL and IFIP were working toward slightly different purposes. Programmers wanted a newer, better language. They wanted I.O. options. They wanted more features and less ambiguity. IFIP wanted, to crib from Naur, a monument. Something to hold up to the world that proves collaboration was possible and powerful and that IFIP should continue to receive funding. Tatarchenko's paper also points out something I
Starting point is 00:15:32 initially missed, so check this out. The ALGOL specs were always developed by a large committee, but the actual papers were produced by a principal editor. The editor for Algol 60 had been Peter Naur. While technically Naur was just a cog in the machine, the position of editor did give him some power over the rest of the committee. There were a few different editors during the IFIP era, but by the time Algol 68 is produced, they had settled on one editor, Aad van Wijngaarden. I'm so sorry on all these pronunciations, I am doing my best. Aad had been part of the ALGOL 60 group initially,
Starting point is 00:16:18 but there's a bit of a maze going on here. Technically, IFIP was first founded in 1959 under the name ICIP, with Aad as one of its vice presidents. The org became IFIP in 1960. In 1962, Aad, once again, was elevated to a vice presidential seat. Then, in 1964, he became a trustee. One of the critiques that Naur brings is that once someone is enshrined in IFIP, it was nearly impossible for them to be ousted. So Aad, who started out in a position of power, was able to retain that power, no matter what the rest of the working group thought. As editor during the Algol 68 era, and as a high-ranking member of IFIP, he had a wild amount of influence on the language.
Starting point is 00:17:08 Naur seems to imply that this was to the detriment of ALGOL. Now, I realize I've kind of jumped over the chronology here, but I wanted to get this political background explained before we move any further. Once IFIP takes over ALGOL, things get messy. Maybe there wasn't a way around that. For ALGOL to grow, it needed some permanent home. That need, combined with the international nature of the project, may have put the language on a short track for conflict. This is all a bit of a long-winded way to speculate at why ALGOL-Y disappeared. At IFIP, the ALGOL Working Group fell into a series of regularly scheduled conferences.
Starting point is 00:17:52 The May 1967 conference was planned to cover ALGOL-Y, but things didn't really go to plan. As Lindsay explains, the conference instead focused on ALGOL-X, even producing a draft proposal for the language. The last mention of ALGOL-Y is in the beginning of that conference in 1967. That's the last time it shows up. So I think it's likely that Y was dropped because X showed some promise. So even if Y, at least ideologically, may have been planned as a more sophisticated, more powerful language, that didn't really matter to the committee. It was more important to produce a monument than maybe choose the exact best programming practices.
Starting point is 00:18:40 But once again, that is partly speculation based off Naur's papers and Tatarchenko's paper. Okay, my time skipping is done, at least for right now. Let's get back to 1964 and look at how Algol X was shaping up. We've already determined that Y was a dead end, so what was the actual path forward? For that, we can look to the Algol Bulletin, a newsletter put together by IFIP. The Algol Bulletin is fully archived, and it's huge. We're dealing with a truly wild amount of information here. What I'm pulling from, specifically, are some of the first articles on ALGOL X. That will give us a good understanding of the early state of the new language.
Starting point is 00:19:30 These early papers, in general, are proposals for features needed in any new ALGOL. As such, there's a certain experimental feeling going on here. I think that's perfect for examining how languages develop. These papers are contributed by people who have used ALGOL 60, want very specific changes made, but might not necessarily know where those changes will lead. One of these early experimental features is the pseudoconstant. This is a weird one, and I point it out specifically because it's weird. This first shows up in a 1964 paper penned by Gerhard Seegmüller, but he cites a proceedings paper by Dijkstra, so the idea might be a little older.
Starting point is 00:20:19 Let me quote straight from the proposal for an explanation here. Quote, The intention is to freeze the value of a variable over a certain period during computation, primarily for efficiency reasons. No assignment to this quantity is allowed during that period. Our proposal starts with the observation that the importance of pseudoconstants is coupled with their appearance in statements of repetitive nature. End quote.
Starting point is 00:20:55 This is a really, really left-field idea. At least, as a modern programmer, it feels just totally out of the blue. The idea here is that a pseudoconstant is just a normal variable that is temporarily set to read only. This is actually a feature in another later language that I know about. That language being INTERCAL. In INTERCAL, the IGNORE statement turns a variable into a constant. This can be undone later using the REMEMBER statement. Now, of course, I should tell you something here. INTERCAL is a joke language, an esoteric programming language. It was designed to poke fun at programming and perhaps frustrate some people that are too serious about computers. You could call its IGNORE-REMEMBER statements a version of pseudoconstants. At least, this was the first thing that sprang to my mind. Now, to be fair, INTERCAL is later than
Starting point is 00:21:55 this Algol period. It shows up in 1972. But we must ask the question, what's the actual purpose of a pseudoconstant? If we take the INTERCAL reading, then this is a dumb feature, something that should be ridiculed for its complexity and, really, the fact that it's not needed. As described in this paper, pseudoconstants are a way to solve a very specific problem with scope. Remember scope from last episode? If not, then let me explain, since this is crucial here. This is actually getting to some very foundational computer science issues.
Starting point is 00:22:35 Scope is a fancy way of saying what data your code can see, or what other parts of the program your code can see. In ALGOL, scopes are very specific. Code is broken up into nested blocks. The scope of what a block can see is restricted to itself and any blocks that it's nested inside of, any of its parents. In other words, a nested block can only ever see variables that are defined above it, but nothing defined below or next to it. This works really nicely with procedures. A procedure is defined using a block, so it has its own local scope. This allows you to have local variables that exist only inside that procedure and can't be touched by the outside world. Those variables are safe. It's a nice little way to keep
Starting point is 00:23:26 procedure data isolated. I think this is very, very logical. If you have data in a procedure, then it needs to be kept away from everything else. It's that simple. But this can break down on the edges. Case in point, the for loop. A loop is just a way to repeatedly execute a statement. It works by writing a header that defines some iterating variable, by tradition the letter i, then telling Algol how you want to step and when you want the loop to end. For example, you might say you want to count from 1 to 10 in steps of 1. Then you say what expression you want to loop, which can be defined as a block. The statement that defines that iterating variable is within the block's scope.
Starting point is 00:24:17 It's right outside the block, it's in the parent, so you can see i within that block. That block will be executed with i equals 1, i equals 2, and so on until it reaches i equals 10. It's very convenient, very simple, and very useful. But here's the weird thing. The variable i is within the block's scope. So there's no reason that the block couldn't modify i. You could, say, write some code that during each iteration sets i to 1. That would make an infinite loop, since once the actual statement tried to run the block again, it would see that i is 1, increment to 2, then the block would set it back to 1. i would never reach 10, so you'd be essentially resetting the loop every time. That, my friend, is some gnarly code. There is a way, though, to prevent variables from
Starting point is 00:25:21 being modified. You can declare that a variable is a constant, which makes it read-only. That means that you can never reassign its value. But then the loop wouldn't work at all. The variable could never be incremented. You would once again get an infinite loop. You'd be back to the same gnarly code. Hence the proposal for pseudoconstants. This would be some mechanism for specifying that a variable was constant only for certain blocks. You could modify the variable in the for part of your code, but not inside the block itself. I think that in that context, this makes a whole lot of sense.
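Since ALGOL 60 compilers are thin on the ground these days, here is that gnarly loop sketched in C, which inherited the same visibility rules. A hedged sketch, not anyone's real code. Note that C's const is the all-or-nothing version of this idea; a pseudoconstant would sit somewhere in between:

    #include <stdio.h>

    int main(void) {
        /* count from 1 to 10... or so we intend */
        for (int i = 1; i <= 10; i = i + 1) {
            printf("iteration %d\n", i);
            i = 1;  /* the block can see i, so it can reset it: infinite loop */
        }
        return 0;
    }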
Starting point is 00:26:05 It would also have some uses with procedures, but we're going to talk about that way later. I think this pseudoconstant thing is fascinating because it really speaks to how early 1964 still is. High-level programming languages are still very young. Fortran was 7 years old at this point. Variable Scope, as a named concept, has only been around for 4 years. We're still at this point where programming is not a solved problem. Or at least, it's a lot more unsolved than it is today. We can watch in real time as problems arise and are worked through, and some of these problems still remain. The for loop thing is still an issue in many languages.
Starting point is 00:26:52 C is an easy example and a direct relative of ALGOL, so I think it's a good example. A for loop in C functions almost identically to a for loop in ALGOL. You name an iterator, you call it i due to decades of tradition, you say how you want to step, and you set an upper bound. Scope in C works roughly the same, so the iterator is within the loop's scope. You can have the same type of bugs as in an ALGOL loop. Where this really becomes a problem is when you have nested loops. If you have one loop inside another loop, and they use the same variable name, then they will interfere with each other.
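Here is what that interference looks like, again as a rough C sketch. I'm using a single, old-style counter declared outside both loops, since that is the shared-scope situation being described:

    #include <stdio.h>

    int main(void) {
        int i;  /* one counter, visible to both loops */
        /* the outer loop wants three passes, but the inner loop
           leaves i at 4, so the outer condition fails after one pass */
        for (i = 1; i <= 3; i++) {
            for (i = 1; i <= 3; i++) {
                printf("inner pass %d\n", i);
            }
            printf("outer pass done\n");
        }
        return 0;
    }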
Starting point is 00:27:31 The traditional solution is to just use different names. You use i, then j, then k. Sometimes this is the best solution, since you may need to access all the iterators in the innermost loop. But, crucially, that's a practice thing, and not something enforced by the language. There's no protection for two loops using i and having a bad time. You just kind of have to know that traditionally you use the iterators i, then in the next one j, then k, then l, m, n, o, p, and onward. The whole matter of pseudoconstants isn't interesting just because it has weird loop implications. Rather, I think it gives us a
Starting point is 00:28:13 fascinating window into how the sausage was being made. Programming was still so young as a discipline. Languages were so new that problems were being unearthed. Totally new problems. Scope is one of those fundamental things today, but in 1964, it was still very new. Algolers were still getting used to the idea, and were still finding all these weird little issues around its edges. So, let me backtrack a little. I said pseudoconstants solved a few problems. The whole loop and block scope is one thing. Another issue comes with procedures. As discussed earlier, a procedure has its own scope. Procedure names are also limited by scope. So, here's the weirdness. Let's say you have a procedure that's defined in the broadest possible
Starting point is 00:29:06 location, right at the beginning of your program. That means that, scope-wise, any block could access it. So what happens when a deeply nested block calls that procedure? What data can that procedure access? So who's ready for some heavy-duty comp sci? And I do swear, I'm trying to slowly ratchet up the complexity here. In ALGOL 60, there were two ways to pass parameters to a function. Pass by value and pass by name. In general, that's the only way you're getting data in and out of a function. It's not going to ever be touching an outside scope, which is a little bit different than the other views of scope. But let's push that aside for now.
Starting point is 00:29:54 If you read the actual documentation, then these two types of passing are termed call by value and call by name, which is more correct, but I think it's confusing. So I'm going to use the term pass here instead, since when you call a procedure, you pass it arguments. That just makes more sense to me. Pass by value is simple. If you call a procedure and give it an argument, it will make a copy of that argument's value. That way the procedure can do whatever it wants with that data and there are no implications to the larger scope of your program. Let's say you have a procedure called double. It takes an argument named x.
Starting point is 00:30:36 It multiplies x by 2 and then returns the value of x. If you are passing that variable by value, then when you call double on x, ALGOL takes the value of x and copies it into a local variable. This is nice because there are no side effects. You can't accidentally do something weird to x. This protection is enforced by the language itself. It also makes it easy to do something like, say, double x plus one.
Starting point is 00:31:07 You technically always pass in an expression, so that's legal code. In this case, Algol would see the argument is x plus one, it would think, great, I need to evaluate that, so it would get one more than x, then it would copy down its value to a local variable. Once again, you don't even have to think about weird implications. x is always kept safe.
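C inherited pass by value, so here is a hedged sketch of that double procedure. The names are made up for illustration:

    #include <stdio.h>

    /* x is a local copy; nothing done here touches the caller's variable */
    int double_it(int x) {
        x = x * 2;
        return x;
    }

    int main(void) {
        int x = 5;
        int result = double_it(x + 1);  /* the expression is evaluated first */
        printf("result = %d, x = %d\n", result, x);  /* result = 12, x = 5 */
        return 0;
    }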
Starting point is 00:31:58 Pass by name, on the other hand, does have weird implications. This isn't something we really use anymore, and it's a little confusing, so I'm only going to touch on it. If you pass a parameter by name, something that's specified when you set up the procedure, then ALGOL doesn't evaluate that argument. At least, not at first. It notes that it has some argument that has a specific name. Whenever you use that argument, it gets evaluated. This happens every time you use that argument. Basically, pass by name is like turning an argument into a tiny function. It's useful for some neat tricks, like Jensen's device, but I'm going to steer clear of it for now, because we don't really see that in modern programming. In either situation, a procedure in ALGOL 60 is kept very isolated. The scope situation is simple. A procedure only touches local scope. I think technically a procedure could see global variables, but I'm not entirely sure on the status of globals in ALGOL 60 in general,
Starting point is 00:32:44 so don't quote me. Once again, outsider perspective. The upside here is that procedures are kept isolated. You don't have unexplained side effects. Assuming you don't do something weird like a go-to that jumps outside of the procedure, but that's a totally different pain point. Now, what if you want side effects? Or what if you just want more flexibility? This same proposal that offers up pseudoconstants has another interesting idea. References. Now, to be fair, references didn't start here. They were already in use by other languages. They hadn't, however, been part of the ALGOL 60 spec. A reference for the uninitiated and unafraid is a variable that
Starting point is 00:33:34 points to another variable. I know, that may sound kind of dumb on the surface. Why would you want a way to point at a variable when you could just, you know, have the whole thing? It all comes down to side effects. In most cases, side effects are viewed as a bad thing. One school of thought is that if you run a procedure, nothing about the larger program state should change. You should only be able to pass in data and be handed back a result. But, let's say, you want to do something more sophisticated. Maybe you want to write a procedure that swaps elements inside an array. Maybe you want to remove the last element
Starting point is 00:34:11 of an array while also returning its length. References let you do that kind of stuff. It gives you a structured way to create side effects. Instead of passing in a variable or an expression, you're passing in a pointer to some chunk in memory. That way, you can operate on the actual data, and those changes will be permanent. You're actually touching memory outside of your procedure's local scope. It's not a local copy or even an evaluated expression, but an actual reference to memory.
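In C, the reference idea survives as the pointer, so here is a minimal sketch of that swap procedure, deliberate side effects and all:

    #include <stdio.h>

    /* a and b refer to the caller's memory, not to copies */
    void swap(int *a, int *b) {
        int tmp = *a;
        *a = *b;   /* a structured, intentional side effect */
        *b = tmp;
    }

    int main(void) {
        int x = 1, y = 2;
        swap(&x, &y);             /* pass references, not values */
        printf("%d %d\n", x, y);  /* prints 2 1 */
        return 0;
    }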
Starting point is 00:34:59 This also solves another problem. In ALGOL 60, data structures were relatively simplistic. You had numeric types, and you had the array. With that, you could construct a lot of somewhat sophisticated structures, but there were limits. You basically go up to things like matrices and some types of trees. References allow for much more flexibility. By adding just this one feature to Algol, you would get the whole family of linked lists, which opens up a whole new level of complexity. You'd get more sophisticated trees. You could even actually build something like a database structure. References plus just a few more modifications could even send ALGOL down the path of objects, but that's a separate discussion. The other broad category of proposals for ALGOL X are new data types. This should be, perhaps, unsurprising. As more programmers used ALGOL, they ran into more data types they needed to represent.
Starting point is 00:35:46 You can get by with just numbers and lists, but it gets tricky. Our friendly 1964 proposal lists a pile of new types. We have complex, character, string, bits, and label. We actually ran into the call for strings and labels last episode. The Cleaning Up Algol 60 paper suggested adding those two types, so I won't agonize over the details. The slight difference is the introduction of characters. This is a type that represents, well, you can probably guess, it's a single character.
Starting point is 00:36:24 Strings are composed, in this proposal, of an array of characters. The complex type here doesn't mean complicated, but rather complex numbers, as in numbers with a real and an imaginary part. That's a sign that math and physics nerds are using your language. There are whole fields of physics that rely on the use of complex numbers, and supporting them as a first-class data type would greatly simplify that kind of code. Back to the central thrust. These are early days, so the proposals are just a little off. For instance, one recommends that types should include explicit sizes. As in, you don't just say you want a complex number. You say you
Starting point is 00:37:07 want a complex number with 10 digits. Or you say you want a character defined in 6 bits. This isn't exactly how things work today, but I think we can see where Algolers were coming from. Despite being designed as hardware-independent, there was still the issue of hardware. Explicitly sized data types solved that issue, but it would have made for a more complicated and ugly-looking language. Imagine trying to deal with differently sized integers, the mind reels. Those are, in very broad strokes, the types of proposed features for Algol X. But there's one other thing during this transitional era that I want to discuss. That's orthogonality. This is an ideology that emerged during this liminal period in the middle
Starting point is 00:37:58 of the 1960s. To quote directly from Aad van Wijngaarden, quote, ... If that didn't really hit, then allow me to translate. A Cartesian product is the big table you get when you pair every value from one list with every value from another. Aad is saying that all features of a language should be able to work with one another. A language should be made up of simple features that interact in expected ways. Those can be used to build up more complex features. This concept is called orthogonality.
Starting point is 00:38:44 In HOPL 2, Lindsay explains this in terms of references, so I'm going to just steal that example. Adding references means that to be truly orthogonal, you need to be able to make references to any data type. That cascades down through the entire list of features. You must be able to pass references to procedures. You must be able to make lists of references. You must support references of references, and so on. On the surface, this approach is really slick as all get-out. It means that if you only know a few things about a language, you can kind of guess the
Starting point is 00:39:24 other parts. You might know about strings, and you might know about arrays of numbers. That's enough that you can guess you can make an array of strings, even if you were never explicitly told that you could do that. That makes the language easier to pick up and easier to use. That's the shallow view, but it goes a lot deeper. There are some implicit rules to orthogonality. Features of an orthogonal language have no side effects. Adding two integers gives you a result, but it doesn't do anything to memory. Same for assignment. Setting the third element of an array will only affect that element. Language features, or concepts as Aad termed them, should also be unique.
Starting point is 00:40:07 You should only have one way to do something, not a bunch of special purpose ways. We already saw this with the transition from ALGOL 58 to 60. The earlier language had functions and procedures, while the 1960 revision dropped functions and ran with only procedures. This simplifies the language greatly, and it also makes the language easier to change. If there's only one way to call a chunk of code, then if calling conventions change, you only have to make that change in one place. You don't need to think about how arguments can be passed to procedures versus functions, and if both should support references or not. So, let me put this together. An orthogonal language uses a small collection of simple features which can be combined to make more complex features. Those simple features, sometimes
Starting point is 00:40:56 called primitives, are all unique and have no side effects. Each primitive can be used with each other primitive in some logical way. This approach dramatically simplifies a language. Instead of defining every possible feature, you just need a smaller core of features. Lisp, yet again, is the prime example here. Lisp can be fully defined using only nine primitives. The rest of the language, including things like math operations, are defined in terms of those primitives. If you write a Lisp compiler, you just need to handle those nine primitives, then everything else is just written in Lisp in terms of those primitives.
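Lisp aside, the everything-combines-with-everything rule is easy to see in C, where ALGOL's references became pointers. A small sketch:

    #include <stdio.h>

    int main(void) {
        int x = 42;
        int *ref = &x;                  /* a reference to an integer */
        int **refref = &ref;            /* a reference to a reference */
        int *list[3] = { &x, &x, &x };  /* an array of references */

        /* every combination leads back to the same 42 */
        printf("%d %d %d %d\n", x, *ref, **refref, *list[0]);
        return 0;
    }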
Starting point is 00:41:38 Orthogonality is one of those funny things that just kind of appears. As near as I can tell, it's first named in 1965 by Aad himself. But Lisp and other languages were doing it since earlier in the decade. In fact, a lot of Algol was already somewhat orthogonal. Just look at Backus-Naur form. That starts with simple primitives and uses them to build up complex definitions of the language. Perhaps you've clued into the other interesting thing here. Orthogonality lines up really well with Algol's core ideals. An orthogonal language would be simple, logical, and clean. It would be easy and intuitive to learn.
Starting point is 00:42:27 It would be simple to define and simple to implement in hardware. ALGOL was already targeting all of those features, so orthogonality could almost be seen as the philosophy of ALGOL itself. But perhaps I'm getting ahead of myself. This is all the lead-up to the main event, after all. So far, we've been setting the stage, throwing together all these ideas that don't fit into a clean timeline. We have a taste of what Algol programmers want, and the theory around programming is becoming more refined. So, how do we go from X to a new language? Let's continue with the critical year of 1966. That year, Niklaus Wirth and Tony Hoare dropped a little paper called A Contribution to the Development of ALGOL. The paper itself was edited by Donald Knuth, so we have a pile of Algolers accounted for right there.
Starting point is 00:43:28 This paper describes a language that would later be known as Algol W. And I know, we now have W, X, and Y, and that's not even to mention Algol N. Anyway, the 66 paper is notable as a stepping stone towards the final ratified ALGOL 68, but it's not all the way there yet. Hence why it's known as ALGOL W. It also kind of breaks off and makes its own implementation later on, but I'm just going to focus on this paper as described. The proposal had been a while in the making. In late 65, Wirth and Hoare had submitted an early draft to Working Group 2.1, which was rejected because, quote, it was felt that the report did not represent a sufficient advance on ALGOL 60, either in terms of
Starting point is 00:44:20 the manner of language definition or in the content of the language, end quote. This was at the same meeting where Van Wijngaarden submitted his orthogonal design paper. That paper actually has a full proposal for a new language, which was also rejected. That said, the working group did see promise in both of these approaches. A compromise of sorts was struck. The Algol W proposal had all these features going for it, and the orthogonality paper had some cool design ideas. So why not merge the two? Working Group 2.1 asked Wirth and Hoare to do just that.
Starting point is 00:45:01 The early draft was rewritten to be orthogonal. That takes us up to the 1966 paper, the final proposal for W. Now, there is something else that came out of the van Wijngaarden paper that we need to address. That is Van Wijngaarden grammar, or W-grammar. I might be mixing up the chronology a little here. The contributions paper from 1966 doesn't seem to include W-grammar, but I read passages and citations that say it does adopt the new grammar. This paper was also published in different forms and has different drafts, so I might just be viewing a form that doesn't explicitly use W-grammar in the text, but whatever the case, we still need to talk about this grammar. Algol 60 was defined using BNF, Backus-Naur Form. This name on its own is a little misleading.
Starting point is 00:46:01 BNF is a metalanguage, a language used to describe another language. It could also be called a grammar. W-grammar was meant as an improvement to BNF. The main upgrade, if you want to call it that, was context. BNF is what's known as a context-free grammar. You can define the shape and format of a chunk of text, but that description doesn't have any concept of context. It can't do something like verb agreement. You can get around this issue by writing just a lot of rules, basically cases for different contexts. But that's inefficient and really just skirting around the larger problem. A BNF rule can't reference the context around itself. It can't make one part of a definition depend on information established somewhere else. Ideally, you should be able to make one rule that describes, say, a variable declaration with assignment. That rule
Starting point is 00:47:01 should be able to tell the difference between assigning an integer and a string value. BNF can't easily handle that. Aad's solution was to create a context-dependent grammar. This, on the surface, is gonna seem pretty simple. W-grammar retains the normal context-free form of BNF. You define classes of symbols and how those symbols can be arranged to form more complex symbols. The difference is that W-grammar introduces attributes. These are symbols that are defined as lists. Basically, you name an attribute,
Starting point is 00:47:39 then you say all of its legal values. So you might say a digit can be any item in the list, 0, 1, 2, 3, and so on. So here's the dig. Grammars like BNF aren't just a theory thing. They have a practical purpose. You can use BNF to parse a language. That's how some compilers for ALGOL worked. You'd set up a program that could parse data based off a set of BNF rules. Simple, easy, and effective. How that's actually implemented, let's push that to the side. The point is, BNF does need to be executed in some way.
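As a rough illustration of executing a grammar, here is a toy recursive descent routine in C for the BNF-style rule number ::= digit | digit number. The rule and the code are mine, not from any actual ALGOL compiler:

    #include <ctype.h>
    #include <stdio.h>

    /* number ::= digit | digit number
       returns a pointer just past the parsed number, or NULL on failure */
    const char *parse_number(const char *s) {
        if (!isdigit((unsigned char)*s)) return NULL;
        while (isdigit((unsigned char)*s)) s++;
        return s;
    }

    int main(void) {
        const char *rest = parse_number("1964 onward");
        if (rest != NULL) printf("parsed a number, leftover: \"%s\"\n", rest);
        return 0;
    }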
Starting point is 00:48:28 To use W-grammar, you had to first expand its rules. That is, you take your context-free rules and apply each possible combination of attributes. That gives you this automatically generated pile of context-free rules that can then be applied like any normal BNF. The upside here should be obvious. W-grammar is much more sophisticated than the older BNF. It allows for context-dependent parsing rules, which are a big deal. It can also be reduced to context-free rules, so you could have the best of both worlds. But there's a catch. W-grammar is Turing-complete thanks to the fact that it supports recursion. Now, under normal circumstances, that's a good thing. If a language is Turing-complete, then it can be used to compute anything computable. It connects you up to this robust set of theories and proofs.
Starting point is 00:49:18 That said, not all proofs are good. The important issue here is the so-called halting problem. The halting problem states that given a program in a Turing-complete language and an arbitrary input, there's no general way to decide whether that program will ever terminate. In that way, the halting problem is a type of undecidability problem. You can't decide what the program is going to do. You can't decide if it will end. In practice, that means you can't be given a W-grammar rule and say with any certainty
Starting point is 00:49:52 that it will return a string or even if it will ever stop. For normal programming, this is fine. It kind of sucks, but we put up with it and we forget about it. But W-grammar is special here. It's not meant to be a programming language. It's a meta-programming language. It's meant for parsing normal code. So if you're using W-grammar, you have to add all these restrictions and rules to make sure it does return data,
Starting point is 00:50:25 to make sure it doesn't get stuck in some weird recursive loop. W-grammar was also meant to be used for defining ALGOL moving forward. That means that it will be used for things like proofs or other rigorous definitions, like the spec. But it's Turing-complete. So you have to have these caveats and restrictions when using W-grammar that BNF doesn't suffer from. The other downside here is that W-grammar is kind of hard to read. It requires expansion based off its list of attributes. You can't just
Starting point is 00:51:01 read it. You need to do the expansions in your head to understand which combos are legal and how they work together and maybe even deal with recursive definitions. It's a lot more sophisticated, it's a lot better, but it tends to work against the spirit of a simple and understandable language. So, okay, W-grammar on its own introduces some complex issues to ALGOL, but what about the actual language? What's new in W and what makes it into 68? You probably know where I'm going to start, and that's types. I know I can be formulaic, but there is some interesting detail here.
Starting point is 00:51:39 We get the expected booleans and reals and integers and arrays. From there, things get, well, they get neat. References are fully adopted from the Algol X proposals. They work roughly the same. Bits come over, so do characters and strings. That gives us the full set of types. A little addition here is the long modifier. You can now make a larger integer by declaring it as a long integer. This is similar in spirit to the old Algol X proposal for explicitly sized variables. But the new implementation is much cleaner, at least I think. Notable in its absence is the label data type. That, honestly, kind of surprised me. The argument for label as a first-class data type made a lot of sense. It allowed for some
Starting point is 00:52:34 streamlining of the language. Part of this, I think, was done in an attempt to phase out labels. This goes hand-in-hand with a change made to switch. Specifically, the switch statement was replaced with case as this big multi-condition conditional. To quote from the W paper, The case construct extends the facilities of the ALGOL conditional to circumstances where the choice is made from more than two alternatives. Like the conditional, it mirrors the dynamic structure of a program more clearly than go-to statements and switches, and it eliminates the need for introducing a large number of labels in the program. End quote. And yes, Dijkstra was part of Working Group 2.1. The other clue here comes from orthogonality. With the introduction of references, it was now
Starting point is 00:53:27 possible to make a reference to a procedure. That's just a variable that points to somewhere in your program. That worked the same as a label variable, but was more flexible since it exploited a simple primitive feature. Thus, keeping labels around as a data type would have been redundant. That's not to say that orthogonality was mastered in 1966. Far from it. We also get some weird non-orthogonal stuff in the proposal. Let's go back to two of the new data types, bits and strings. So check this out. Think about how you would take the primitives we have and implement a string data type. We have arrays, we have characters, and strings are really just a list of characters in sequence. It would make good ideological sense
Starting point is 00:54:21 for strings to be arrays of characters. Same with the bit type. We have booleans, we have arrays. Binary data is just a list of true and false, so bits should be an array of booleans. Simple, clean, orthogonal. But ALGOL W doesn't go that way. It instead implements strings and bits as a special data type called a sequence. These are distinct from arrays in that you can't use them like arrays. They're only used to implement bits and strings. And while that's fine, it works, it's messy. It totally breaks from established rules of orthogonality. But to be clear,
Starting point is 00:55:08 this is only the second Algol proposal that uses orthogonal design. Programmers were still trying to figure out how to do this. There's one other contribution from this proposal that I have to discuss before I move on. Up to this point, Algol only had simple data types. You had arrays and all the types that can be made with those. Adding references opens up the possibilities of more complex interlinked types of data. However, that's still pretty limited. ALGOL-W addressed this by introducing the record. You can think of records as very primitive objects. In fact, Van Wijngaarden called them objects in later proposals. A record is a way to define a new
Starting point is 00:55:52 kind of data type. It describes a table in memory where each field has a name and an associated data type. You may also know them by their later name, structs or structures. This is a very foundational feature of many programming languages, but once again, this is a feature that's in flux. The tricky part here seems to have been deciding how fields should be accessed. In Algol W, you define records at the top of your program. That's where you name and type each field. You can then create new records and fill them with data. To access a field, you type out something that looks like a function call.
Starting point is 00:56:39 The canonical example seems to be a record called Person. So if you had a field called Name and a record instance called Sean, you'd get the name by saying name and then in parentheses Sean. Now, I will say, when I read this, I kind of recoiled, and I really mean that. If you've been programming as long as I have, you can usually spot when something will hurt. This is one of those things. Getting a field from a record uses the same syntax as calling a procedure. This means you can't have a procedure with the same name as a record field. I don't think that violates anything about orthogonality, but it's gross and it hurts to see. It's painful. You would run into some real issues with this implementation.
Starting point is 00:57:24 For instance, with that person record, since it has a field named name, you could never have a procedure called name. They would collide. Over the next few years, ALGOL is further refined. ALGOL W just gives us a snapshot into where things were going. There are more committees, more proposals, and more gloomy days. This eventually leads to ALGOL 68, and I will get back to the timeline in a minute, but first I want to jump ahead and look at what 68 changed. I'm kind of in the linguistic mindset right now, so I'll close out with history, I swear. 68 cleans up records in, I think, a really, really cool way. First off, they're changed from being called records to structs. The syntax here is changed to be its own thing distinct from procedure calls. To get the name of a person, you would say,
Starting point is 00:58:21 and this is just quoting actual text, name of person, where person is your structure instance. There are no special characters involved. If you think way back, ALGOL is actually full of these really wordy lines of code, so this fits in pretty well with the rest of the language. And while that's cool, that's not really cool. The actual upgrade here is that ALGOL 68 applied the whole orthogonal treatment to structures. Complex numbers are actually defined as a struct. It's just a structure with a real part and an imaginary part. That right there, that's some real powerful kung fu.
Starting point is 00:59:08 It means the complex isn't actually a primitive, so your set of primitive features is made even smaller. The same trick is applied to strings and bits. Instead of these being their own weird thing, they're just implemented as arrays. Strings are arrays of characters and bits are arrays of booleans. This is made more convenient by the addition of so-called flex arrays. These are just arrays that can be resized. It's a common feature in modern languages, but was somewhat controversial at the time. The result, once again, is a reduction in the overall complexity of the core language. Fewer primitives are used to make more complex features.
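To see where all this landed, here is a hedged sketch in C, which inherited the record under its later name. The person example is the same hypothetical one from earlier, and complex really can be built as a plain struct, the same trick ALGOL 68 pulled:

    #include <stdio.h>

    /* a record: every field has a name and a type */
    struct person {
        char name[20];  /* a string really is an array of characters */
        int age;
    };

    /* complex built from primitives instead of being a primitive */
    struct complex {
        double re;
        double im;
    };

    int main(void) {
        struct person sean = { "Sean", 38 };
        struct complex z = { 1.0, -2.0 };

        /* field access gets its own syntax, so a procedure called
           name() would not collide with the field called name */
        printf("%s is %d, z = %g%+gi\n", sean.name, sean.age, z.re, z.im);
        return 0;
    }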
Starting point is 00:59:50 Now, let me be clear, this isn't to say that ALGOL 68 is some ultra-clean, ultra-lean language. It has some idiosyncrasies. Some call them flaws. I'm not going to give an exhaustive list, but I do want to hit on some of the weirdness. The first big one that sticks out to me is type coercion. Lindsay actually points out that coercion had existed prior and separate from ALGOL for years. But as with many things, ALGOL named the feature and put it on a larger, more critical stage. This is one of those features that, for me, is a bit of a red flag. That's not to say that type coercion is bad, just that it leads to some complex and
Starting point is 01:00:32 sometimes counterintuitive behaviors. Let me explain. ALGOL is a strictly typed language. That means any variable you declare has an explicit type. If you declare x as an integer, it will be an integer forevermore. It can't change, it can't be reinterpreted, it can't shift or slide. It is an integer. There are pros and cons to this approach, but that's just how ALGOL is. I think it makes good sense with such a standards-focused language. This leads to a consideration. What do you do when types collide? What if, say, someone makes a real number called r and tries to say that r equals x plus 1?
Starting point is 01:01:17 Well, x is an integer, and 1 doesn't have a decimal point, so that is implicitly an integer as well. The sum of two integers is an integer. But the programmer wants to store that integer inside a real variable. Should that be allowed? And if so, how should it be handled? Algol's solution is type coercion. In certain circumstances, Algol will detect that the user is trying to do something weird with types, and the language will automatically convert values.
Starting point is 01:01:53 In the example above, the result x plus 1 would be coerced from an integer into a real. That's called type widening, since real is a physically larger data type than an integer. You just take an int, slap on a decimal, and add a few zeros. That's simple and, I think, very straightforward. It makes total sense. You may want to convert an integer into a decimal number, and that conversion is actually trivial. You can't go the other way, since turning a real into an integer would mean losing information. That, too, is natural, normal, and simple.
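C kept this widening rule nearly verbatim, so here is a small sketch of the same situation, along with the explicit cast we'll come back to at the end:

    #include <stdio.h>

    int main(void) {
        int x = 5;
        double r = x + 1;  /* an integer sum, widened to 6.0 on assignment */
        printf("r = %f\n", r);

        /* the narrow direction loses information, so it gets spelled
           out with an explicit cast: 6.9 truncates down to 6 */
        int back = (int)6.9;
        printf("back = %d\n", back);
        return 0;
    }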
Starting point is 01:02:46 Things get weird once you look at other types of coercion. To be clear, I'm pulling directly from Lindsay in HOPL 2. They have a whole section on type coercion that makes a lot more sense than reading the actual reference material here. Anyway, references are the first stumbling block. If you have a reference to a variable, then you don't actually have the variable's value. To get the value, you have to follow that reference, a process called dereferencing. ALGOL 68 handles that implicitly. The language tries to guess if you want the reference or the value the reference points to
Starting point is 01:03:22 and acts accordingly. This is all decided by type, hence why it's a form of coercion. If you declare a new variable that's not a reference and you assign it the value of a reference, then ALGOL automatically dereferences that variable for you. It's maybe not the weirdest thing in the world, I grant you. It makes sense in many circumstances. But again, this is implicit. Algol guesses what you want based off the types involved. There are also coercion rules for procedures, unions, special matrix-like arrays, and a few oddities for dealing with issues in the typing system. The most concerning one for me is procedure coercion. It's called
Starting point is 01:04:07 deproceduring. I know, slick name. I'm not even going to be talking about voiding and hipping this episode. Anyway, deproceduring. Normally, a procedure is called, invoked, by typing its name, an open parenthesis, its arguments, and a closed parenthesis. It's perfectly valid to have a procedure with no arguments, in which case you just have an open then a closed paren. They touch, but it's 100% legal for parentheses to touch. ALGOL 68 has a special case for procedures with no arguments. You can implicitly call them without parentheses at all. Let's say you have a procedure called readchar that reads a character from standard input and returns it as a character data type.
Starting point is 01:05:00 You could legally write char c := read char. No parens, no arguments, nothing. ALGOL 68 would call read char and set your variable to its output. That's deproceduring. On the other hand, if you had said your variable was a reference to a procedure, then there would be no coercion. You'd end up with a pointer to the procedure itself.
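A minimal sketch of both cases. The name read char comes from the episode; the definition here is my own stand-in, not necessarily what the standard library provides:

    PROC read char = CHAR: ( CHAR ch; read (ch); ch );   # a procedure with no arguments #
    CHAR c := read char;        # context wants a CHAR: read char is called implicitly #
    PROC CHAR p := read char;   # context wants a routine: no call, p holds the procedure itself #

The only difference between the last two lines is the mode on the left-hand side.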
Starting point is 01:05:53 And this, dear listener, concerns me greatly. I get the point. It's really a nice way to streamline code. It matches up with ALGOL 68's larger theme of implicit coercion, which, side note, might be a pretty cool band name. If anyone wants to make a punk band called Implicit Coercion, I think I'd buy a t-shirt. This seems dangerous to me for two reasons. Reason one is that deproceduring is a special case. Many of these coercions are actually special cases. That, in my head, doesn't really match up with the core tenet of orthogonal design. Special cases mean complexity. It means that things don't actually all plug together nicely. You have to have special rules for these very specific circumstances. That makes me a little queasy. I don't like that. It makes things a lot more complex to deal with. Reason two is that this implicit stuff doesn't always work. I'm willing to accept that this is just me. That this is my opinion. That I just don't like implicitness in languages. It takes
Starting point is 01:06:40 control away from the programmer. It means that the language is guessing at what the programmer wants. Granted, it sounds like ALGOL 68 would guess correctly most of the time, but I've been burned by this before. Implicitness is one of my main gripes with Python and JavaScript and a whole pile of other languages. Even my beloved Perl does some weird implicit behavior that only makes sense if you know exactly how the language works on a really deep level. Those languages, though, don't guess very well.
Starting point is 01:07:16 They don't always guess what I want them to guess. So I end up going through all these contortions to either force their hand or make them guess something right. You have to think about the language when you're programming, which, for me, is a hallmark of poor design. An alternative approach here is called typecasting. It's used by other languages, although some, like C, use a combination of coercion and casting. Basically, casting makes the programmer explicitly say, I would like to turn this integer into a float, or please treat this procedure as a pointer. That can give the programmer much more control over the process.
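Worth noting for contrast: ALGOL 68 itself refuses to guess in the other direction. Narrowing, where information would be lost, has to be spelled out. A quick sketch using the language's standard ROUND and ENTIER operators:

    REAL r := 2.7;
    INT i;
    i := ROUND r;    # explicit: nearest integer, here 3 #
    i := ENTIER r;   # explicit: largest integer not exceeding r, here 2 #

A bare i := r is simply a type error; there is no implicit coercion from REAL to INT.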
Starting point is 01:07:59 It's nice to have that implicit stuff, but only up to a point. I think it's easy to overdo it, and ALGOL 68 seems to cross that point, at least for me. In general, I think we can see how ALGOL 68 is a refinement of ALGOL W, itself a refinement of X, and so on. You get the idea, even if not all those names are actual proposed languages, even if some are just snapshots in time. There's more to ALGOL 68, but I think this is enough to get the point across. Features were refined, created, invented, and dropped as the committee moved from proposal to proposal. The actual development process was a slow burn of progress. I promised some history, and I'm going to close out with that.
Starting point is 01:08:49 Once again, I'm turning to Naur's Successes and Failures for this part. It's really just a spicy paper. This passage I'm going to read comes from Naur's description of how Working Group 2.1 dropped the ALGOL W proposal and moved ahead with a more complex language. There is truth to that allegation. ALGOL 68 builds on everything prior, but it is a much more complex language. Also to note, MR93 is one of the ALGOL 68 proposals. Once again, we're in the world of ALGOLers, so beware. Quote,
Starting point is 01:09:22 Van Wijngaarden was allowed to continue alone along the tangent of formalized description. The latest result of this is the report MR93 of the Mathematical Center in Amsterdam. I feel that the direction taken in the report MR93, viewed within the context of a committee established to work on programming languages of broad common utility and appeal, is completely wrong. Instead of learning from the experience gained in the work leading to ALGOL 60, it amplifies the failure of that effort. It sets out to prove the ultimate formality of description, a point where ALGOL 60 was strong enough. In doing so, MR93 sets a new record of lack of appeal to human readers. In fact, it makes an attempt to create not only its own special
Starting point is 01:10:13 terminology, but a linguistic universe wholly of its own, and requires, to quote Mike Woodger, that the reader will have his normal reading instincts thoroughly suppressed. See? Spicy. The adoption of W-grammar and the free rein given to Van Wijngaarden made ALGOL 68's specification unwieldy. Now, I don't think we can blame any one person for this. IFIP is partly to blame, since it was notably difficult to oust members. Wirth, Hoare, Dijkstra, Naur, and a host of other ALGOLers are partly to blame. They would all quit the project at various times or disappear from multiple meetings at a stretch. Perhaps more involvement from them could have changed the shape of ALGOL 68. Lindsey describes this later period in the development of 68, when Van Wijngaarden was
Starting point is 01:11:13 basically left alone. Working Group 2.1 would convene, review the latest papers off his desk, submit suggestions, then pass them back for revision. There's this wild passage in HOPL II where Lindsey explains just how pervasive this was. Van Wijngaarden worked using an IBM Selectric typewriter. Those machines had these spherical printheads that could be swapped out, thus changing the font. They were called golf balls. From Lindsey, quote, Every time Van Wijngaarden acquired a new golf ball, he would find some place in the report where it could be used. As such, the hermetic world of ALGOL during this period became more and more insular. Part of this is just the structure of IFIP itself driving away members.
Starting point is 01:12:02 It makes it really easy for that to happen. Part of it, though, may have just been the disconnect from the outside world. IFIP needed ALGOL, but ALGOL didn't need IFIP. It needed programmers, and it needed passion. By the time we reach the latter years of the development of ALGOL 68, it almost sounds like there's been a brain drain. And that would have some strange effects on the language itself. By the time the ALGOL 68 spec was published, Working Group 2.1 had transformed. It wasn't operating like it did in the early 60s.
Starting point is 01:12:46 Many of the original ALGOLers had departed. Maybe some of the magic was just lost once the international language found a home. Maybe design by committee had led to a language that its creators never imagined. Or maybe the thinning of the committee led to design issues. I've seen all these hypotheses posed online. Ultimately, however, all of those cast ALGOL 68 as a total failure. I don't think that's necessarily the case. Even the HOPL II paper that I've been living out of has an entire section called What Went Wrong. And sure, there are problems with the language. Its spec is
Starting point is 01:13:27 actually massive. It's 237 pages long, written primarily in W-grammar. A human can't really read that unaided. The project took 8 years. The language was never widely adopted. It was never wildly popular. There are only a few compilers for ALGOL 68. That's usually a sign that not many folks are using your language. All of those could be counted as symptoms of failure. But you could take a different perspective. Over the years, ALGOL functioned as a laboratory for new ideas. By following the history of ALGOL, we can watch new concepts in programming being created, refined, or even destroyed outright.
Starting point is 01:14:14 The effort itself, the first effort to really investigate programming language design, bore fruit. You can look at any family tree for this. Most languages are, somehow, connected to ALGOL. Be it a lineage back to IAL, ALGOL 60, X, W, or even ALGOL 68. Programming would have developed very differently if it weren't for the successes and failures of the ALGOL effort. Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another piece of computing's past.
Starting point is 01:14:50 And hey, if you like the show, there are now a few ways you can support it. If you know someone else who'd be interested in the history of computing, then please take a minute to share the show with them. You can also rate and review the show on Apple Podcasts and Spotify. If you want to support the show directly, you can sign up as a patron on Patreon or buy Advent of Computing merch. Links to everything are available at my website, adventofcomputing.com. And actually, right now I'm running a poll for a new bonus episode on Patreon, so if you want to get in on that, you have a few more weeks before that closes. If you have any comments or suggestions for a
Starting point is 01:15:24 future episode, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.
