Advent of Computing - Episode 136 - Getting On TRAC
Episode Date: July 21, 2024

Have you ever formed a bad first impression? Way back when, I formed a hasty impression of this language called TRAC. It's been called a proto-esoteric language, and for good reason. It's outlandish, complex, and confounding. But, after the urging of some listeners, I've decided to give TRAC a second look. What I've found is, perhaps, more confusing than I ever imagined. This episode we are looking at the wild history of TRAC, how it actually pioneered some good ideas, and why it feels so alien.

Selected Sources:

https://dl.acm.org/doi/pdf/10.1145/800197.806048 - 1965 TRAC paper

https://github.com/gmilmei/trac64 - TRAC64 processor in "modern" C

https://dl.acm.org/doi/pdf/10.1145/365230.365270 - 1966 TRAC paper, with more code!
Transcript
Programming is one of those fields that's simply too big to fully understand.
It's also highly subject to the whole Dunning-Kruger effect.
If you're at the start of the learning curve, it's really easy to think you've figured it all out.
But let me assure you, I have yet to figure it all out.
I don't think I ever will figure it all out, and I don't think anyone has figured it all out.
There may be some
notable exceptions with very early programmers, but that understanding wouldn't have lasted very
long. The field really blew up really quickly. There are about as many ways to construct a
programming language, and as many ideas and ideologies about programming as there are
programmers. This has become especially true as the field
has become more specialized. There are whole swaths of languages that I've never heard of.
There are whole fields of the discipline that I will never come into contact with,
and I may live my whole life without ever even knowing about. This makes it almost impossible
to speak in absolutes. What's the best language for the job?
Well, there may be one out there hiding.
Or that language may not yet exist.
My favorite example of this is the Stuxnet computer virus.
That virus was written in a totally unique and new language.
One that wasn't known until researchers got ahold of the virus's binary and started
to decompile it.
It's this very complexity, this wild unfolding landscape, that makes programming so neat
to me.
It feels like there's always something new to discover.
Sometimes what we discover is amazing.
It can open us up to totally new ideas and in the process make us better programmers.
Sometimes what we discover is more concerning.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 136, Getting on Track, in which I will face another of my personal demons. Before we get started, I need to throw in an announcement.
In just under two weeks, I'm going to be speaking at VCF West in Mountain View. That's August 2nd.
I'm actually speaking at noon, so easy time to remember.
If you haven't been to a VCF before, a Vintage Computer Fest, they're a wonderful experience.
I highly recommend it, and this is even at the Computer History Museum, one of my favorite places
in the world. So if you're in the Bay Area, I very much suggest stopping by. I'm going to be there
the whole weekend, at least Friday and
Saturday when the show's running. My talk is going to be on edge-notched cards. It's going to be kind of
a state-of-the-research project. It'll be similar in feel to the talk that I gave at VCF SoCal
about the state of research with cryotrons. It should be a good time. Now, this also means I'm getting a little bit busy. My plan is to record
my talk at VCF so I can put it into the podcast feed. And I actually have an audio
recorder coming in the mail, a portable one. So it's easier for me to record and hopefully upload
kind of quickly. But I'm going to need the next week, maybe week and a half or so to get ready
for the talk at VCF. So I'm not going to be releasing a normal sized episode in two weeks.
My plan is to record something small to put on the feed just to tide you over until I can get
the live show out. So if you see anything weird going in the RSS feed, that's why.
It's because I'm getting ready for VCF West.
Alright, so back on topic.
Way back in episode 78, titled Intercal and Esoterica, I discussed this language called TRAC.
That's T-R-A-C.
That was in the context of esoteric programming languages, languages written as
jokes. One of the common features of esoteric languages is their obtuseness. Most esoteric
languages are almost impossible to read or understand. They function using some backwards
or intentionally confusing mechanisms, and are just hostile in general. For the joke to really stick, these
languages need to be intentionally bad. Intercal is the premier example here. It's a wild nonsense
language meant to parody more serious languages. During that episode, I went on my usual hunt
checking for prior art. Intercal is the first esoteric programming language, but I wanted to be sure there weren't
earlier forms of this dark art.
TRAC is sometimes cited as an unintentional esoteric language because it's obtuse, operates
on strange mechanisms, and is uncomfortable to think about.
It's not really like many other languages.
During episode 78, I was a little harsh on it.
Perhaps that was unjust. When I went to VCF SoCal earlier this year, I ran into a number of
listeners. It was, let me assure you, a wild experience and a profound honor. It's just a
weird thing I never thought would happen in my life. One of those listeners actually took me to task on my assessment of track.
He assured me that the language isn't just an oddity and that I need to take another look at it.
That's kind of been kicking around in the back of my head for a while now.
What brought it to the forefront is simple.
I recently received a manual for the language.
Someone sent it to me in the mail.
That manual
is the beginner's guide to TRAC. And, well, I myself am a beginner after all, so I figured now
would be as good a time as any to take a deeper dive into the language. In this episode, I'm
learning track. I'm going to look at what made me so uncomfortable with my first experience with the language, and I'm going to try to understand why it is how it is.
I don't want to just rag on track.
I truly want to see what good ideas are at play here,
and try to figure out where it fits in the larger historical context.
I think that it will undoubtedly be more productive
than just looking at a language, going, ew, and moving on.
So with that, let's get into TRAC.
Since this episode is something of a reassessment, I think it's only fair to start off with
my old impressions of track, which, to be fair, are still my initial impressions returning
to the language. Track
isn't really like anything else I've encountered. And I know, I haven't used every language in the
world. I have a buddy who brings up a new language I've never heard of almost every time we talk.
That said, I've been programming ages and ages at this point. I've worked with a huge swath of
different languages. So I'd wager
I know the broad strokes pretty well. So when I say TRAC is, in my experience, unique, well,
I'd like to think that has some weight. To start with, TRAC is a string-oriented language. By that,
I mean the fundamental data type in TRAC is the string.
It's also the only primitive data type.
When I say orientation here, what I mean is TRAC is all about strings.
This on its own isn't an uncommon idea.
Lisp is famously a list-oriented language.
It's all about lists.
Forth is a stack-oriented language. It's all about the stack.
You get the idea. Certain languages are built around specific data types or data structures.
For track, that's the string. That has a few major implications.
The first that we need to discuss is my favorite $5 word, homoiconicity. It isn't just a cool term, it's also going to be central to the whole episode
today. The term homoiconicity was actually coined by Calvin Mooers, the creator of TRAC,
specifically to describe how TRAC functions. So yeah, it's a bit of a big deal for the episode.
Homoiconic roughly means same representation.
In the modern sense, homoiconic languages are able to treat their own code as data,
and treat data as code. The language can write and edit itself. Lisp, Forth, and Prolog, three of the coolest languages I've ever talked about on the show, are all homoiconic.
I think the easiest one to explain here is Lisp. In that language, code and data are both
represented as lists of elements. That representation is maintained both internally
and externally. As in, you write your code in the form of a list, and the Lisp interpreter breaks
your code down into a list on the inside. You also get all these tools for manipulating lists
of data. Thus, you can use Lisp to write and edit more Lisp. But TRAC? Well, TRAC is different.
Here I'm going to quote directly from Mooers' 1965 paper on TRAC,
specifically where he coins the term homoiconic. To quote,
Because TRAC procedures and text have the same representation inside and outside the processor,
the term homoiconic is applicable. End quote. Note, this is a slightly different version than what I described.
It all hinges on representation. I mentioned that in Lisp, there's a difference between internal
and external representation of the program. Actually, if we break things down, there are
many different ways to represent a program. You have the program you keep in your heart and mind, what Naur would call the theory of a program. Then you have the code you write, the string
representation of your code that you actually type out into your editor. It's the human-readable part
of the program. That representation gets fed into the computer, which almost always uses that code to make a new internal representation.
In the case of Lisp, the string code is transformed into this thing called a parse tree. It gets
turned into a format that the computer understands, but might not be easy for a human to read.
The computer then works with that internal representation. It might simplify some things.
It might walk around the code and even execute it to further transform it.
In many cases, that gets turned into an executable, a totally new type of representation for the
computer itself.
The key part here is that the computer has its own way of representing your program,
and that's separate from your source code.
It's separate from the string representation of your program.
And for that matter, it's separate from the secret program that one keeps in their own heart.
Anyway, in TRAC, there is no internal representation.
What you put into the computer, your code, as an unadorned string, is what the
computer works with directly. It doesn't make some internal state. It doesn't compile anything.
It works straight from your source code. That's what Mooers means by homoiconic.
The representation is the same as the code. That has a few major implications
that initially scare me. They still kind of scare me, but I'm just at the start of my journey,
so let's see where it goes. Implication the first: TRAC works via self-modification.
In order to execute your code, TRAC modifies that code.
Let me walk you through what that means and why that spooks me.
TRAC is an expression-based language.
Code is broken up into these discrete expressions.
As a track program runs, it evaluates these expressions.
Since expressions can be nested, it has to start at the innermost expression and work its way out.
Each expression can, optionally, evaluate to some value.
This is also the point where I should mention that TRAC is kinda Lisp-shaped.
Expressions have their operator first, followed by arguments,
all enclosed in parentheses.
So you get expressions like add 1, 2, which evaluates to 3.
There are other expressions, like ones for storing variables, that don't evaluate to anything,
but that's a separate matter that we'll consider way later in the episode.
Once an
expression is evaluated, if it returns a value, then TRAC replaces the expression's string with
its value. Remember, we do not have any internal representation here, so TRAC actually modifies
the string code, the code that you wrote, and plugs in the value of the expression.
It repeats this process until there are no expressions left to evaluate. In other words,
as TRAC runs, it's slowly rewriting your program. Mooers calls this an evaluative language because
as TRAC runs, it's continually evaluating expressions. Really, that's all it does.
You can also think of this as a reductive language.
As TRAC evaluates, it's reducing expressions,
slowly working down until there's nothing left to reduce.
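To make that rewriting concrete, here is a tiny sketch of the kind of reduction I'm describing. The primitives are real TRAC, ad for add, ml for multiply, ps for print string, but the step-by-step trace is my own illustration, checked only against the trac64 processor from the show notes, so treat it as a sketch rather than anything out of Mooers' manual:

#(ps,#(ad,#(ml,2,3),4))'

The processor finds the innermost expression first and splices its value straight back into the program text. So the line above becomes #(ps,#(ad,6,4)) once the multiply is done, then #(ps,10) once the add is done, and finally ps prints 10 and leaves nothing behind. The trailing single quote is TRAC's go signal, which comes up again a little later.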
That may, at first, seem like a very foreign concept.
However, there is precedent here.
Languages like Lisp work in a very similar way,
reducing expressions from the inside out. The key difference being the fact that Lisp keeps
an internal representation that it works on, whereas Track is working directly on your code.
This was one of my big first concerns, that track modifies code as it runs.
But really, I think I can look past that.
It's very similar to how other languages function, but there's less sleight of hand involved.
It's just the guts are more open, easier to see.
Perhaps that's concerning, but I don't think it's all too scary when you think about it
that way. Okay, so implication number two. Track is all strings all the time. This is a smaller
corollary, but one worth addressing. Track only has one data type, the string. Its code is written
as strings. Its state is represented as a string. Really,
its state is just the current code, but whatever. Its variables are also all stored as strings.
Your code manipulates strings. Do you see where we're going with this? It's all strings.
This is the real brass tacks of being homo-iconic, at least in the earlier sense of the word.
Track can modify itself at any point, and it has all the tooling to do so.
How that can be used to your advantage, well, that's one of the big things that I'm slowly
trying to work out myself.
And then implication three, your code can be wildly flexible. The combination of self-modification during evaluation and homoiconicity means that TRAC
can, in theory, do things that other languages simply cannot.
If I understand correctly, you should be able to, say, reach into your program at a specific
step in execution and replace a result with a function call.
You should be able to do just some wild things that you can't even imagine in something like
C. Then we come to the matter of syntax. Oh, the syntax. When it comes to esoteric languages,
one of the classic jokes is always bad syntax. A good esoteric language uses all kinds of wild
characters, weird formatting, and just bad visual design. Intercal is a classic in this regard,
using things like the tilde, the set sign, and the number sign as operators. Other languages like
Malbolge use so many symbols in such nonsensical ways
that you can't actually read the code. These all give esoteric languages a certain feel.
You give them a glance, you can't make heads or tails of the code, and you have a nice little
sensible chuckle. This can also have the reverse effect. If you see a language with weird and outlandish syntax,
maybe a language you can't read,
you may assume it's esoteric, have a chuckle, and move on.
Track falls victim to this classic blunder.
It just plain looks weird.
I said it's lisp-shaped.
Well, that's really only the broadest outline.
In track, expressions start with a number sign, which indicates you want to evaluate the expression.
That's followed by the expression itself, enclosed in parentheses.
The first part of the expression is the operation, followed by arguments, all separated by commas.
You can nest expressions, which means you have to have a number sign
buried inside your expressions. If you want to delay evaluation, you have to use so-called
protective parentheses, which means you add an extra set of parentheses around your expression.
So you wind up with lines of code that are a mess of parentheses, commas, and hash signs.
You even in some cases have expressions that start with two hash signs. It's
honestly really, really ugly when you look at it. To make matters worse, or weirder,
we have the primitive set. All languages are technically composed of a set of primitive
instructions, the most basic instructions that are built into the system. Those are used to build up everything.
TRAC has 34 primitives, and each is denoted with two characters. That means that the primitive
for adding two numbers is AD, for instance. Other languages do the same kind of primitive
mangling. Lisp, as the example that I've fallen into today,
has really confusing primitive names like cons, cdr, and car. However, Lisp only has
six primitives to remember. That makes it easier to deal with. Track, however, has 34.
Until you get up to speed, you have to read programs with a table in one hand.
There's one other small quirk that I just want to mention. When you actually want to run an
expression or a script, you type a single quote. That's right, just a single quote,
unpaired on its own. That just kind of hurts. At this point, I hope it's clear that TRAC is pretty unique.
It has similarities to other languages, notably Lisp, but that's only really on the surface.
Then we must ask, where did this language come from?
In doing so, I hope we can start to understand why TRAC is so unique, and why it looks how it does.
TRAC was developed by Calvin Mooers.
Supposedly, development of TRAC started in 1959.
At least, that's according to an oral history with Mooers himself.
The first official paper trail I can find for TRAC is an article in ACM from 1965.
The timing here really, really matters.
If TRAC was really devised in 1959, that would explain a lot about its strangeness.
That's early enough that programming languages didn't really exist. There were only very few early tongues. The earliest information I can find on TRAC at all is a 1962 article titled Wanted, a Reactive Typewriter.
That puts us in very early days, but still at a point where there were other languages around to influence TRAC.
Right away, this puts us in weird territory. This report, this 1962 paper, is a report that was paid for by the U.S. Air Force.
Specifically, Moores was under contract with the Office of Aerospace Research.
So this isn't just federally funded research, it's funded directly by the Department of Defense.
So, okay, what kind of research are we talking about?
Well, it's actually my favorite topic.
Information retrieval.
You see, Mooers isn't really a programming guy.
He's one of those early digital sages that worked around multiple fields.
Ted Nelson would call him a generalist, I'm pretty sure.
Mooers' first high-profile break was this data encoding scheme called Zatocoding. It's actually pretty similar
to my near and dear edge-notched cards. By 1947, he had worked up this entire system for storing
and managing data using unique ID coding. By 1962, Mooers was well-positioned as an expert in the field.
He had actually formed a company called Zator
that manufactured and sold Zator-coded cards and equipment.
His work for the Air Force, from what I understand,
was in relation to these systems.
He was consulting on information management.
Initially, this would have all been
manual stuff. Zator cards are, like edge-notched cards, just slips of paper with special patterns
on the edges. They aren't really electronic and digital. But it was becoming increasingly apparent
that computers were going to be the future. The 1962 article gives us a bridge between those two worlds.
It explains, I think kind of unintentionally, how Moore's went from manual data cards to track.
Let me tell you that this is one of those times where I feel like I've gotten really lucky here.
Many people have a totally different voice when writing. It's the whole tailoring the message to the audience kind
of thing. Academic papers, for instance, tend to be in this rigid, formalized language. That means
that when researching the past, you often lose something of the author's flair. I think of it
as viewing a story in black and white. Formal language means you lose a whole dimension of the story. However, I've run into some people who don't do that whole voice shift thing.
Either they haven't developed a formal voice, or they just don't want to use it.
Grace Hopper wrote that way.
Even her academic papers drip with jokes and color.
Ted Nelson is another example.
His writing is almost a stream of consciousness, which
really answers any questions you could ever have. Luckily, Moore's also falls into this category.
The 1962 report, sent to the Air Force as completion of a DOD contract, reads almost
like a conversation with an excited colleague.
The other thing I'll say is that these folk tend to write really punchy stuff.
They like to use exclamation points, and they'll even call out researchers by name.
And, in general, these folk feel like true believers.
Nelson and Moores both do things like coin new words with total confidence.
They even cite themselves to make a point or two.
There are points in this one Moore's book I've been reading where he just has a quotation from himself,
not from another paper, just something he thought up that's inset with quotes around it.
No reason other than to prove a point.
These are the kinds of papers that I actually think are a lot of fun to read. I've laughed out loud reading some of this stuff
before. This is a feature of Moore's work that will serve us really well this episode.
Even manuals for track follow this convention. He'll straight up stop a narrative to add
asides and explanations as to why he did certain
things a certain way and why he doesn't like doing things another way.
You get a feel for his thought process, which is more than I get from most sources.
So what's up with the specific 1962 article?
Well, here's the best summary I can give and it comes from the text itself.
Quote,
I need a reactive typewriter.
You need one.
Let us work together to get reactive typewriters for all.
End quote.
And that does end with an exclamation point.
When Mooers says reactive typewriter here,
he's more or less describing a teletype terminal connected to a computer.
This paper on its own is a really fascinating period piece.
Mooers is making the argument that the next step in information retrieval, and really for any kind of knowledge work, is to put terminals all over the place.
The specific story he covers is the need for libraries to be wired up with terminals and
central computers in order, specifically, to prevent data duplication and enable better
research.
The root of this argument is actually a refutation of the information problem, or, as Mooers
puts it, the quote, information crisis.
This is one of the few arguments I've seen that actually
goes against the information problem, so it's a pretty interesting read. Basically, Mooers claims
that we don't have a problem caused by too much information. Rather, the real crisis is that
researchers have focused too much on the wrong solutions for information management. That electronic systems
have, up to that point, been a dead end. Mooers' solution is this reactive typewriter concept.
It's very similar in a lot of ways to something like Xanadu or even the Mundaneum. This would
be a system where remote terminals connected over phone lines would give users
access to a number of services.
Crucial for the information crisis would be centralized catalogs, something like a digital
Mundaneum.
You can think of it as a digitized and supercharged bibliography system, allowing users to pull
up information on books with ease.
Mooers also envisions systems for typesetting and text editing to go along with this.
Crucially, he explains that these systems would, by necessity, be developed using reactive
typewriters.
That librarians would use a larger system to build up that central bibliography.
So we aren't just looking at data retrieval.
This is something a lot closer to a giant data management system.
It would allow authoring data, editing data, and searching data.
Once again, it's like Xanadu or some other proto-internet.
Little wonder why Ted Nelson likes the guy.
Here's where the bridge leads us. Moores didn't view this as some special purpose system. Rather, he took a wide view.
His approach was to construct a generalized way to manage data, string data specifically.
The very heart of this reactive typewriter system, the thing that gave it a spark of life,
was TRAC. If we're calling this a proto-internet, then TRAC would be something like TCP/IP,
HTML, and JavaScript all rolled into one. This gets us to the root of what TRAC actually is.
It's a string-based language because it was intended to handle
library-style data. It was intended to work with text rather than numbers. As Mooers himself explains,
in 1962, there weren't really options for that. Programming languages were, by and large,
geared towards numeric work. Mooers points out that the only other possibility was this language called COMIT,
but that was a much earlier language, and from what I've seen, pretty rough.
For Track to work with this reactive typewriter idea, it had to be interactive.
It had to be straightforward, and it had to be very flexible.
Those are the core design goals that Mooers would stick with.
He would describe it as a, quote, keyboard language, which is how he wraps up all the
user-friendly features he was going for. Something that's interesting to note and think about, at
least in regard to this first paper, is that Mooers is writing the same year that Doug Engelbart
publishes Augmenting Human Intellect.
Many view that as one of the classic works on computer-human interfaces,
and here we have Moores printing similar ideas about user interfaces.
Just, instead of being visual and largely generalized, they're text-based user interfaces.
I'm inclined to guess there's some connection. I want one to be
there, but I don't think that's backed up by the sources. It's possible that Moores read Bush's
As We May Think, but if he did, he just doesn't talk about it and he doesn't cite it anywhere.
Rather, it seems like Moores was just arriving at a similar conclusion from his own starting point
at a similar time.
That, in itself, is fascinating to consider.
But whatever the case, this does tie back into the larger lineage I talked about in
episode 134.
And I'm sorry for the self-citation, but this is something that has really been in
my head for a while.
In that episode, I made the argument that large-scale data handling came into the realm
of computing from an outside tradition.
That there was this separate tradition of indexing and bibliographic data that existed
outside number crunching.
Data indexing, as opposed to data crunching, entered into the
computer realm very slowly as machines became more powerful, as they got more memory and more
text capabilities. Well, this is a perfect example of how that worked in practice.
Mooers even points out that current machines aren't quite powerful enough to handle string processing at scale, but the terminals may be
slow enough to hide that lack of power. In fact, this highlights something that's missing from
TRAC's design goals: performance. Mooers argues that for TRAC, performance doesn't really matter.
The reason for this is both technological and biological. In the early 60s, data links just weren't very fast. Only so many
characters of text could be transferred over a phone line every second. Since TRAC was supposed
to be this remote-operated text language, modem speeds put a huge restriction on its upper
performance limit. Moores also takes typing and reading speed into account. Track is meant to be human-facing.
Humans aren't mind-melded with computers.
They have to type out their requests on a keyboard and read the results, in this case,
off a printed paper feed.
That introduces another bottleneck.
As a result of these factors, Mooers just ignores performance.
This is one of the reasons that Track can get away with its weird execution model.
For this point to make sense, allow me to introduce another language.
That is BASIC.
I think this makes for a good comparison since BASIC was intended to be used at remote terminals,
it was meant to run on a central shared computer, and it was designed to
be user-friendly. Specifically, I'm going to be making a comparison to Dartmouth Basic, the 1964
original. That has a few quirks that I think make it worthwhile to compare to TRAC.
One huge difference is that Dartmouth Basic was intended to be used for numerical calculations.
In that capacity,
performance matters a little more than for track. There was a slick sleight of hand designed to deal
with possible performance problems. The first version of BASIC was actually a compiled language.
When a user entered a program, it was first transformed into executable code, which was then ran by the
computer directly. That gave BASIC two huge advantages. The first was that small snippets
of code ran pretty quickly. The compile time for just a line or two is short, so you get these neat
gains. The second was that large programs didn't need to be recompiled or reinterpreted.
If a user was running a program that had already been compiled, well, BASIC just pulled up
and executed that program.
When you're dealing with larger programs, compile or interpretation times become the
bottleneck.
So this approach saves a lot of time over multiple runs of the same program.
The downside, of course, is added complexity. Dartmouth Basic had to do some basic bookkeeping
to keep track of what programs had been compiled, and it had to move around temporary data when
doing ad hoc compile jobs for single lines of code. This complexity was enough that later
versions of Basic, those not made at Dartmouth, tended
to drop the whole compilation step.
Most BASICs end up becoming interpreted.
That's much easier to implement, but much less performant, especially when rerunning
existing code.
TRAC never really cared about performance.
That means it could be implemented as an interpreted language. The fact
that Moore's never considered compiling TRAC means that the whole runtime replacement trick could be
a core feature of the language. This is a fantastic example of how design goals impact the overall
approach of a language. Just to drive that point home, Mooers says in the Beginner's Guide that TRAC is
inherently incompilable. That's because of this runtime replacement and how it handles recursion
and a few other features. He was freed up to implement those features in whatever way he
wanted because he never had to even think of compiling the language. So in 1962, we can see that Moores has a very firm idea of what TRAC is going to be.
Maybe he was thinking about it as early as 1959, but we don't really have a paper trail for that.
Over the next two years, the actual details of TRAC are ironed out.
This is all leading up to a 1965 paper published in ACM,
which actually describes TRAC for the first time.
To fill in this gap, we actually have Moore's own words as recorded by the Charles Babbage Institute during an oral history session.
Once again, this is one of those very well-documented stories.
At least, it appears to be well-documented.
The story goes something like this.
Moores develops an initial specification for TRAC.
He was assisted by Eugene Ferguson, an engineer who would go on to be instrumental in the ASCII character encoding standard.
By 1964, he thinks he has something, so he pitches the whole reactive typewriter idea to ARPA. I don't have details
on the contract, but according to Moores, it resulted in further development and refinement
of TRAC. It was during this period that the first version of TRAC was implemented. This was done by
one L. Peter Deutsch, who I think would have technically been an employee of Mooers' Zator company at this point,
but this might be right around the time that Mooers had the actual company that sells TRAC.
It's a little messy chronologically.
Deutsch himself is an interesting figure.
At the time, he would have been either 18 or 19.
In 1963, he wrote an implementation of Lisp for the PDP-1.
Now, it gets weirder than just a 16-year-old writing an implementation of Lisp. The paper
describing that code is co-authored by Edmund Berkeley. That's the same Edmund Berkeley that wrote Giant Brains, the first book about
computers. And remember, Deutsch is 16 at the time. He was very much starting off embedded in the field.
Deutsch did the first TRAC implementation on a PDP-1 at BBN, that's a government contracting outfit. So, once again, TRAC is itself embedded in this world
of federal spookery. Mooers would later explain that many of the finer details of TRAC were ironed
out by Deutsch in this period. I can't help but think that the LISP-like appearance was only
intensified by that influence. Now, it's important to note that this is a full implementation of
TRAC. That includes remote operations. Mooers actually had a Model 33 teletype set up in his
office and connected to the PDP-1 over a phone line. This was the real deal, a real reactive
typewriter. He could sit miles away from the actual computer and program
as much track as he wanted. The vision of a reactive typewriter was actively coming true.
This takes us up to the next big step, a publicly published description of track itself.
For this, I'm going to pull a longer quote from Moore's oral history because, well, it's funny,
and I like it,
and it's also important for the rest of the story. To quote,
I wrote up a descriptive account of TRAC in 1965 for the Communications of the Association of
Computing Machinery. The editors, when it came in, evidently were amazed since they had never heard
of me. Of course they hadn't heard of me. I hadn't talked about programming languages. I wasn't one of the big names. So I came in with this finished piece
of work and they sent two of their big wheels out to look me over. One of them was Carlos Christensen
and the other was Robert Floyd, a big wheel in the parsing programming languages. They came to
visit me at my office to find out who this guy Mooers was and
how come they'd never heard of him. I brought them into the office and took them into the back room
and turned on the teletype, and we were in remote communications, with a remote computer at BBN on
which TRAC was running, and I demonstrated it. So they were all ready to deflate a hoax. Laughs.
Quite different was the fact of the matter, so my paper was published. End quote. You have to love this image, right?
Your editor literally shows up to your door to make sure you're a real person and you aren't trying to trick them.
If that ever happened to me, I think I might have a heart attack.
But this is Mooers we're talking about. He played it
cool. He showed them the actual working software. He showed ACM just how cool track was. At least,
that's how he tells it. The underlying feel to this story is that the ACM couldn't believe that
some nobody like Mooers appears out of nowhere with such a sophisticated
and genius language. But I think it's probably not that cut and dry. I'd wager there was some
genuine confusion with the paper. I mean, this is track we're talking about. It's a very unique
language. If this happened, I think it was less that some people at ACM had their feathers rustled and more
that they were probably just confused. The story is also useful because it further positions Mooers
as an outsider in the industry, which, when you get down to it, seems like how Mooers wants to be
viewed. He wants it to be clear that he came into the world of programming as some maverick with wild and fresh ideas that rocked the foundation of the industry.
This is, of course, all assuming the story actually happened.
This part gets complicated.
Mooers publishes two papers on TRAC with the ACM.
The first paper is in ACM's Proceedings of the 1965 National Conference. The second
is in Communications of the ACM. That's the paper the story is about, I think? The complication
here is that the Communications paper is published in 1966, and that's after Mooers had already
published with ACM.
So, so what? Maybe he messed up the years.
Maybe he's actually talking about the proceedings paper.
Well, that leads to its own issues.
This is a proceedings paper we're talking about.
It's a paper that's published alongside a talk at a conference.
The paper was presented at the National ACM Conference of 1965. That means he would have been exposed to the very belly of the ACM. It means that Moore's would have been a
somewhat known quantity. He would have been put on some docket. He would have had to deal with
paperwork, maybe even become an ACM member, depending on how the conference was run back then.
He's not some nobody. He would have
had a speaking credit before he even published this paper. This is a little nitpicky kind of
thing, but I think it's evidence enough that I have to be careful about trusting everything
Mooers says on its face. In 1965, we get the first public details about TRAC that may or may not have involved an office
visit. This is another one of those wonderful non-academic academic works. It doesn't just
describe TRAC, it describes how and why certain features of TRAC were chosen. It gives us a deep
look into the ideology of the language. So let me pick through this paper for you.
One of the first big surprises is the sheer amount of thought that went into TRAC. That,
on its own, kind of kills the whole esoteric allegation. It's not just amateurish or bad
design. It's not meant as a joke. It's a very deliberately made system with actual thought behind its choices. And that thought
is good. I agree with a lot of the choices that Mooers describes in this paper. Further,
Mooers positions TRAC as a step forward in technology and actually backs that up with
fine detail. TRAC was intentionally designed to be a new kind of language. This starts as low down as
its data representation. Up to this point, most languages, with very few exceptions, would have
been written and stored on punch cards. The exception being, of course, Lisp in some circumstances.
TRAC was designed to only be usable from a keyboard. It's a keyboard language.
Mooers conceptualizes it as a stream of expressions, something that exists totally
outside of punch card media. Why does that matter? Well, with punch cards, code tends to be more
discrete. You have to have these packets of code between 0 and 80 characters long.
A language like Fortran handles this by using a single card to represent each line of code.
That means the language, at a very basic level, was designed around this constraint.
Even modern Fortran compilers have this default check for 80 character lines.
Track, on the other hand, is based around nested expressions.
There isn't even a concept of line numbers. Rather, you have a collection of expressions
that relate to each other either by nesting or by name. There are no limits on expression size,
the only limits are enforced by the size of your machine's memory. Lisp works in a very similar way.
You may have noted that I was a little cagey about that earlier. Well, there's a reason for that. The 65 paper makes
this weird comment about Lisp having a, quote, dual-language problem. That it was poorly designed
because the programmer had to use both M-expressions and S-expressions to write Lisp.
This, uh, this baffles me quite a bit.
Lisp did have a, quote, dual language problem way back in the day.
Very early versions of Lisp had different syntax for defining code and data.
The unification of those syntaxes into one, the S-expression, is what made Lisp such a triumph. That would happen very early in the
history of Lisp, probably around 1961 or so, maybe 1960. Notably, the version of Lisp that
Deutsch implemented used only S-expressions. I think Mooers may have
been working with some out-of-date information here. Some very early versions of Lisp, prior
to the syntax conversions, used punch cards for input. I think it's possible that Moore's just
was not keeping up with the Lisp world, and maybe wasn't listening closely to
Deutsch? Either that, or he wrote his 65 paper way earlier than I thought. Either way, it gives
me the feeling that Moores was just kind of blazing his own trail, even if he was duplicating
prior art. So, he might have been positioning himself as an outsider maverick because
he wasn't super familiar with parts of the field. This is also the point where Moores drops the M
word. This is a word that I think is crucial to understanding track, and it's something I didn't
really consider when I first saw the language. That word is macro. And maybe I was a little
down for not noticing this connection before. I was actually alerted to this omission at VCF SoCal.
So if you're out there, you told me about this, you know who you are. Big ups.
A macro is a replacement rule. These are pretty common in all kinds of computer applications,
from spreadsheets to word processors all the way up to programming languages. You set up some kind
of name or pattern, you add in some arguments, optionally of course, and then you have a rule
for what that pattern gets replaced by. I'm most familiar with this
from assembly language macros. Most assemblers can handle macros, which makes programming
much easier. For instance, let's say you were writing some x86 assembly. There are common
calling conventions for functions, basically how you begin and end a function.
Those conventions are always the same. You push some things to the stack, you move some things around, then when you're done, you move the stack back into the proper place. All functions have to
do the same instructions, the only difference being how much popping and pushing needs to get done.
You can do this long form, or you can use macros. I have macros for function enter and function exit. They take options for how many
arguments the function expects, since that dictates how much data needs to be moved.
When I call up those macros, my assembler, the NetWide assembler, actually copies in the code for those macros. It edits
the text of my program. It replaces every instance of function enter with a few predefined instructions.
That, my dear friends, is almost exactly how TRAC functions. Most modern assemblers can even handle nested macros. The key difference is that TRAC is
much more sophisticated, but it's still doing text replacement based on rules, just like any
mundane macro system. This isn't just a superficial connection either. Mooers explicitly states that one of his main inspirations for TRAC
came from early macro assembly systems. It was already known at this point that macros
could be Turing-complete, so there was nothing stopping someone from making a macro-based
language. From that point of view, Mooers is really just taking a logical step forward.
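Just to make the macro comparison concrete, here is a little sketch in TRAC itself. The primitives are real, ds to define a string, ss to segment it, cl to call it, but the exact session is my own illustration, only checked against the trac64 processor, so take it as a sketch:

#(ds,greet,(Hello X!))'
#(ss,greet,X)'
#(ps,#(cl,greet,world))'

The ds call stores the text under the name greet, wrapped in the protective parentheses we'll meet again later so nothing gets evaluated on the way in. The ss call marks every X in that stored text as a parameter slot, and the cl call fills the slot with the word world, so the last line prints Hello world! That is macro expansion, just wearing slightly different clothes.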
To, uh, kind of condense down the little ramble there,
track makes a lot of historical sense in context.
It's not this wild and isolated language.
It's not unintentional esoterica.
It was at this point in the 1965 paper that I really started to get that.
But, as I said, this is a great source. It's one of those papers that just keeps on giving. It really lays out explanations for everything. So I'm going
to forge on a little bit more on this thread. TRAC looks like LISP, with parentheses around
expressions, because Mooers figured that was the best way to handle functional grouping of data.
The reason for that is simple. It was the best option at the time, and it aligned with Trac's
larger design goals. He wanted to avoid block structures that languages like ALGOL and FORTRAN
used. When you group code using an explicit begin and end statement, you fall into the classic blunder of
discrete lines of code. TRAC was very anti-line number, so that couldn't be used. The only other
viable option was the parentheses, so parentheses were the choice. However, parentheses couldn't
stand on their own. This gets at one of the strange features in TRAC. In order to do
actual homoiconic tricks, TRAC needs a way to represent code as data. Under normal operations,
any expression will be evaluated and replaced. All macros get executed until there are no macro
patterns left. So you need some way to protect code from execution, and some way to
signify code for execution. In TRAC, there are two types of expressions, active and neutral.
Sometimes Mooers calls these active and neutral strings, but that's more generalized. An active
expression starts with a single number sign or a hash.
So you have a hash followed by your expression enclosed in parentheses.
When track sees that, it knows it's ready to be executed.
You can also have a protected expression.
That doesn't have a hash.
You just have parentheses, whatever code you want to run, close parentheses. Those are the so-called protective parentheses in TRAC. So you can have a situation where you have active code inside
parentheses. There are rules for how those get stripped off and how execution can begin, but
in general if you have open close parentheses without that hash, it's not going to execute.
Now, that's all roughly normal.
But there is another mode here. That's the so-called neutral expression, as distinguished by an extra leading hash.
So, hash, hash, expression.
Now, you may guess that this tells TRAC to not execute the expression. You would
be wrong. That's just the protective parentheses. This is something different, and it's more
complicated. A neutral expression tells TRAC to execute the expression, but treat its output as a neutral string. In other words, work out its value,
but do not execute the result. And I know, I just said TRAC isn't esoteric, and that sounds like a
pretty esoteric feature. Why would you want to use this? Well, the example that Mooers uses throughout
everything I've read is user input. You can easily write a function
that accepts input from the keyboard and prints it out. If the user were to type a track expression,
then that will get executed immediately. If you want to avoid that, you can wrap the input
function with an extra hash. That reads in the data as a raw string of characters, which allows
you to then operate on it.
You could strip out any expressions, you could do string replacement, or even add in some
expressions and then choose to execute the string. Taking that added step to protect the input
function gives you those options. It gives you that kind of flexibility. You could also use this
feature to protect code you are actively working on.
So while it may sound a little weird and convoluted at first,
there is thought behind the feature.
It's not just a random addition to make it more confusing.
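Here is a minimal sketch of that input trick. The rs primitive, read string, is real, and the behavior is my reading of the manual, only tried against the trac64 processor, so take it as a sketch:

#(ps,#(rs))'
#(ps,##(rs))'

With the first line, whatever the user types gets rescanned before anything prints, so if they type a TRAC expression it runs and you print the result. With the second line, the double hash tells TRAC to take the input as a plain, neutral string, so whatever was typed comes back out literally, expressions and all, ready for you to inspect or rewrite before you decide whether to run it.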
I think that's the bottom line here.
Track has all these seemingly weird features, all this weird syntax.
It makes the language look pretty incomprehensible, at least at first.
When you look a little closer and take the historical context into account,
the language starts to make more sense. Now, is it actually usable? That is a different question altogether. When face-to-face with such dark arts, one will often ask, is it possible to learn this power?
The answer for track is... kinda?
From my studies, I've been working out of the Beginner's Manual for TRAC Language.
It's a book that was graciously sent to me by a listener.
It's one of the many semi-cursed objects that I keep in a special closed bookshelf. The text isn't available online,
or really anywhere as far as I can tell. I think that inaccessibility has a very specific cause.
That being the standardization of TRAC. Here I'm going to quote directly from the beginner's
manual. Quote, careful standardization of a language and its processors in this fashion
doesn't just happen. End quote. He goes on to explain that, in order to carry out standardization and to protect users against
confusion and uncontrolled dialects, he has set the standard down in a published reference manual.
You were only ever able to get a TRAC processor from Rockford Research Incorporated,
the company that Mooers
founded specifically to sell TRAC and TRAC accessories. Mooers didn't standardize the
language by creating some open and public specification. He didn't make some reference
document or an international standard. Rather, he enacted central control through intellectual property law.
This is most apparent whenever you see official TRAC documentation. The name TRAC is always
followed by a little R with a circle, because TRAC is a registered trademark. The documentation
for TRAC, which includes the actual manual I've been working from,
is also copyrighted.
Why would Moore's go that far?
Partly it was money.
TRAC was meant as part of a business.
But it was also an honest attempt to keep TRAC standardized.
At least, so Moore's claims.
The Beginner's Guide starts with this whole section about standard
controls. The concept boils down to this: Mooers and Rockford Research have intellectual property
rights to the TRAC name and its documentation. That's the only place TRAC is fully described.
To create a full implementation of TRAC, you need those manuals, which can only be bought from
Mooers. Thus, there's a central point of control. This was a move that, I think,
kind of killed TRAC in the womb. Intentions aside, this made TRAC a fully commercial language.
The classic source here is the first volume of Dr. Dobb's Journal, an early computer
magazine. Mooers even names and shames the issue in his oral history interview. What's usually
omitted is the article's title: Copyright Mania. It's mine, it's mine, and you can't play with it.
If you listen to Mooers, you get one story, that Dr. Dobb's
Journal publishes this awful smear piece about him and TRAC. But the journal says otherwise.
To pull straight from Dr. Dobb's Journal of Computer Calisthenics and Orthodontia, volume 1, issue 5,
quote, During the past year or so, People's Computer Company has received several letters
with enclosures from one Calvin N.
Mooers of Rockford Research Incorporated in Cambridge, Mass. We initiated the rather
unfortunate contact by asking him for information about an interesting but relatively obscure
programming language that he had developed called TRAC. Note, TRAC is, at least, a registered
trademark and probably patented, copyrighted,
and marked with infrared dye to boot. What we have since received from this person, however,
appears to primarily be concerned with copyrights, patents, trademarks, and the like. We don't really
know because we didn't take the time to wade through all of it. End quote. It continues on
in that fashion. The article basically explains that they got copyright blasted by Moores,
that he threatened lawsuits if they printed details about track,
and that he had already won in many such cases.
Notably, this is printed in 1976.
This is into the micro era,
during which such figures as Bill Gates would claim that all computer
hobbyists are thieves and criminals. Needless to say, threats touched a bit of a sore nerve.
My favorite part here is that the journal calls track a, quote, relatively obscure language.
That totally contradicts what Moore's had to say on the matter. Here's his rationale for the copy-trade-patent-mark debacle from his oral history interview.
Quote,
What I was interested in doing was, in one way or another, to make an economic capability
out of TRAC, which was a clever creation.
It still is.
What happened, in fact, was that TRAC probably became the most widely bootlegged
computer intellectual property that existed. In other words, it was terribly easy to program,
and it was programmed at one place after another. For instance, Dartmouth had a version called CART.
TRAC intentionally spelled backwards. Professors assigned programming TRAC as a project.
And there were all sorts of other implementations called by various things, including TRAC.
I was trying to market it, and since I could not use copyright nor patent,
I was trying to use trademark.
End quote.
TRAC was just so popular and profound, it was widely copied,
despite all the legal protections.
Now, I don't know about that. I haven't been able to find anything
about Dartmouth's CART or really much about TRAC clones at all. That makes me think that if these
clones did exist, they weren't widespread and they weren't very high profile. They probably
weren't making any money at it. Some of that may have been people trying to hide from Moores himself. We can probably
chalk some of it up to the fact that track was already somewhat obscure, so a derivative would be
even more so. What makes this weird is languages aren't usually protected like this. Languages
spread by people using them. When you have these really hard copyright or trademark or patent or whatever
controls, languages tend to stagnate or get stuck. C would never have spread if it wasn't so open.
Neither would have Lisp or any other number of successful languages. In that sense, once again, TRAC is relatively unique.
That's kind of a long-winded explanation for why it may be logistically difficult to learn TRAC.
That said, there are resources.
I've been using this TRAC64 processor written in C by one Gerard Milmeister.
It's distributed as source code, but easy to compile. I don't think it even has
any dependencies. So you can actually run track today, but would you want to? In a word, my short
review is no. Track is a pretty hostile language to learn, at least in my experience it is.
One little detail that held me up longer than I'd like to admit is the so-called
meta-character. This is a character that, when entered, begins execution of a script.
The default meta-character is a single quote. So to actually run code, you need to hit the quote
key. To make things more confusing, the meta character can actually be changed at runtime.
There's a primitive just for changing the character.
That, at best, is odd.
At worst, confounding.
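For the curious, the primitive in question is cm, change meta, at least as I understand it from the trac64 processor I've been using, and the argument form below is an assumption on my part:

#(cm,;)'

After that runs, the single quote is retired and the semicolon becomes the new go signal, so every script from then on has to end with ; instead of '.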
Just the primitive set alone is strange.
Like I mentioned earlier, there are 34 primitives. Each is abbreviated down to two
characters. You have classics like DS to define a string, PS to print a string, AD to add two numbers,
GR to check for greater than, all that jazz. Earlier, I said it's too many primitives. Well,
after using the language, I think it may also be too few.
Allow me to explain. Track is missing some pretty basic features. However, that's not to say track
isn't Turing complete. It actually is. That means it's possible to do anything you want with track.
It's just that some common things are a little, how do I put this, counterintuitive, at least.
To me, they're very counterintuitive. Loops are one example. There are no primitives for loops
in track. You also don't have the usual tricks of conditional jumping. In track, there's no concept
of a label or a line number,
so even if there was a primitive to jump, there wouldn't be anything to jump to. Rather,
you implement a loop using recursion. If you've been paying attention, you may be able to see
where this is headed. So, to do a loop, you write a track function that ends by calling itself.
That means after the loop does whatever it needs to do, it calls itself and does everything
all over again, and then it calls itself and does everything all over again, on and on.
The problem here is that track isn't like other languages.
For most languages, when you call a function, you're actually just telling the computer way deep down in the circuits to jump back to some point in memory and continue execution
from there.
But like I said, there's no concept of a point in TRAC.
There's no label.
TRAC evaluates expressions and replaces them with their value.
So when you call the loop, TRAC actually replaces that call with the full text of the loop.
That includes the part of the loop that says, call this loop. Every time the loop executes,
it expands the program's text. In other words, the loop gets unwound into a series of expressions.
Some of those expressions will evaluate down to nothing, but some do leave behind text. Crucially, any formatting text will stay behind. That means
that, at the very least, new lines and spaces will accumulate as the loop continues. In practice,
a lot more than that will accumulate into the
computer's memory, because most primitives evaluate to a string value. This means that as the loop
runs, you're filling up memory. Each iteration adds more data to memory. Mooers even has a very
explicit warning about this in the manual. Eventually, a loop will exhaust memory,
grinding the machine to a halt. So in practice, a loop can only be repeated so many times,
and that depends on how much RAM you have and how complex the loop is. Funnily enough, this is
where we actually reach the edge of what Turing completeness can give us.
Part of that theorem assumes unlimited memory.
That is, if you have unlimited memory and unlimited time,
then a Turing-complete language should be able to compute anything computable.
When I hear that, I usually think about running out of space for the actual executable.
Sure, you can write anything you want in, say, BF. That's Turing complete. But you reach a point where you need way too much code for it to be reasonable. You
run out of space in your memory. There's just too much code. In TRAC, you have that same limit,
with the twist that your code can expand during execution. There's a hard and fast limit to how complex a TRAC program can get.
This is especially true in the early era, when RAM was a much more limited resource.
Now, other languages have this problem, but with TRAC, the problem's especially acute. Loops
are almost too costly for TRAC to handle. What's even more mind-warping is the finite loop.
When I sat down to learn TRAC, I figured I'd do some basic example program.
My first thought was FizzBuzz, that's the classic.
But TRAC only does integer math, and I didn't want to work up a function to do modulo, so
forget that.
My next thought was a number-guessing game, but TRAC can't do random numbers.
Once again, I didn't want to write my own cursed RNG, so I decided on the bare minimum.
A loop that counts down from 10 to 0, then prints boom.
That's a classic if I've ever seen one.
Well, that turns out to be non-trivial. Incrementing variables in
TRAC is a little strange, for one, but the larger problem is how to make a loop that terminates.
Mooers has a few examples in his manual, but they either rely on very specific language tricks or
don't seem to work in my TRAC processor. The solution that I arrived at
was cobbled together from these examples. It works, but I don't like why it works.
The trick is to delete the loop. That's the exit condition, as recommended in the beginner's manual
for the TRAC language. You use a logical check to see if the iterator variable is in the right
spot. If it's greater than zero, then you call loop again, but if it's less than or equal to zero,
you delete the loop function. You remove the name loop from memory, or whatever you're using
as your function name. As I understand it, that deletion prevents the final execution of the loop,
since TRAC may see the call to loop and evaluate it,
even if it's nested inside a logical check.
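For the curious, my countdown ended up looking roughly like this. It's a sketch pieced together from Mooers's examples, so treat the details as approximate: ds defines the loop, ss turns every X in the body into a parameter slot, and cl kicks it off with 10.

#(ds,loop,(#(ps,X )#(gr,X,0,(#(cl,loop,#(su,X,1))),(#(ps,boom)#(dd,loop)))))'
#(ss,loop,X)'
#(cl,loop,10)'

Each pass prints the current number, and the gr check either calls loop again with X minus one, or prints boom and deletes the loop's definition with dd, which is the delete-the-loop exit I was just describing.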
So, it's a weird language, yes,
but you can actually write TRAC in the 21st century.
I, uh, I just don't really recommend trying it, though.
Okay, that's where I'm gonna end this episode.
But really, there's a lot more we could discuss about TRAC.
I usually feel some kind of closure when I complete an episode, but
not this time, not at all. TRAC is like a fractal that I've only glimpsed part of. Every time I look
closer at part of the language or part of its history, there's so much more for me to see.
If you really want to get caught up, then I'll link a few papers in the description and I'll
try to find a link to that TRAC processor I've been using.
So where does this leave us?
Is TRAC actually an esoteric language?
Is it a serious language?
Or is it just something else?
That, of course, is very complicated.
It's definitely not esoteric.
There is reason behind its construction.
It has some genuinely interesting
and useful ideas wrapped up inside it. That said, it has some features that later esoteric languages
would mock. The weird, limited math operations are one example. So are the three different kinds of
expressions, which include syntax for expressions that should not be evaluated, expressions whose results should not be re-evaluated, and even combinations thereof.
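To give you a flavor of what I mean, here's a sketch, assuming the semantics from Mooers's 1966 paper: a sharp sign starts an active call whose result gets rescanned, a double sharp sign starts a neutral call whose result does not, and plain parentheses protect text from being evaluated at all.

#(ds,code,(#(ad,1,2)))'
#(ps,#(cl,code))'
#(ps,##(cl,code))'
#(ps,(#(cl,code)))'

If I have that right, the first ps prints 3, the second prints the stored text #(ad,1,2) without running it, and the third just prints the protected text #(cl,code) verbatim.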
Even the syntax of TRAC is strange and easy to laugh at, but there is a reason to it all.
Mooers explains every choice in wild detail, and in a lot of cases, I do agree with his reasoning and choices.
One of the bigger problems with assessing these older languages is the simple fact that we exist
outside of their historical context. We've escaped through time, so to speak. I'm more used to working
with modern languages, with all kinds of handy features and wildly different design philosophies.
That makes snap judgments really easy. There are a lot of features of TRAC that feel odd just because it's old. The limited primitives, the worries about memory space. That's not to mention
its strange string handling that I don't even have time to get into. It's an old, old language. More and more, I really do think it almost feels like it was designed in 1959. In that case, it's less of a frustrating language and more just a
period piece. That said, I think I've been able to put my finger on what makes TRAC so bizarre to me.
As far as I can tell, TRAC has no standard library. You have to work with primitives.
It doesn't have some set of higher-order functions, built out of those primitives, that you
automatically start out with. Mooers talks about users building up complex and powerful TRAC
programs, but all he ever shows are scripts written from primitives. This makes TRAC stand
out compared to contemporaries. I mentioned that Lisp also has a very small and limited set of
primitives. But when you use Lisp, you have access to a whole library of higher-order functions
defined from primitives. TRAC doesn't have that, which makes TRAC a lot harder to use.
I've been starting to think that this omission may have been intentional, at least partly.
Recall that Calvin Mooers is, low-key, an early hypertext guy. His early descriptions of reactive
typewriters, of what became the TRAC system, are very similar to early descriptions of hypertext systems.
It sounds a lot like Xanadu, or maybe even NLS.
So, here's the rub, and here's the last idea that I want to get out of my head before I close out.
Very early hypertext systems were meant to be completely personalized.
They were a way to augment your brain by offloading part of your thought process to the computer.
A corollary to that is that those systems must be personalized, and that has to be done by the user.
You start with a blank page and start building up your data, all custom, all tailored to your brain.
TRAC feels like that.
You go in with this blank space and you have to use the most basic building blocks to make a super personalized programming system.
All these factors, for me, mean TRAC isn't a joke.
It's a totally unique approach to programming, one that most of us have just
never been exposed to. Thanks for listening to Advent of Computing. I'll be back in two weeks
with another piece of computing's past. And hey, if you like the show, there are a few ways you can
support it. If you know someone else who'd be interested in the history of computing, then
please share the podcast with them.
You can also rate and review the show on Apple Podcasts and on Spotify.
If you want to support the show directly, you can do so through Advent of Computing
merch or signing up as a patron on Patreon.
Patrons get early access to episodes, polls for the direction of the show, and bonus content.
You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead and contact me.
I'm at adventofcomp on Twitter, and my email address is somewhere on my website.
As always, have a great rest of your day.