Advent of Computing - Episode 70 - The oN-Line System, Part 2
Episode Date: November 29, 2021

NLS, or the oN-Line System, is often looked at as a mile marker in the development of modern computing. It was the first system to use a mouse, one of the first functional examples of hypertext, pioneered remote collaboration, and so much more. But how much do you know about NLS itself? In this series of episodes I'm picking apart the system behind the legend. In Part 2 we are looking at the development of NLS itself. Along the way we talk timesharing, strange custom hardware, and complex programming practices. Does NLS live up to the hype? You'll have to listen to find out.

Selected Sources:
https://dougengelbart.org/content/view/374/ - Go watch the Mother of All Demos
https://www.dougengelbart.org/content/view/140/ - 1968 NLS progress report
http://web.archive.org/web/20160210002938/https://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/EngelbartPapers/B2_F5_ARNAS1.html - 1966 progress report
Transcript
Do you think the internet has made humanity better or worse?
Now, I think we should look at this as a pure thought experiment, if at all possible.
Do you think that rapid access to all the collective data of humanity has helped us?
Have improved and easy-to-use computer interfaces been a benefit or a detriment to our development?
Has the easy mass communication afforded by the internet
solved any big problems?
Now, I don't know how to answer all of those questions.
These are the kind of things that keep me up at night.
But I think figuring them out would go a long way
towards understanding how computers have changed us.
But there is one thing I can say for certain:
computers have changed all of us,
just by the fact that they exist, even if we don't use them.
That said, I do think there are some people who would be able to answer these tricky internet questions
pretty quickly and perhaps pretty decisively.
If you asked Doug Engelbart, I'd wager that he would say yes, the internet has made humanity better. That systems
designed to augment human thought, to organize vast amounts of data, and to facilitate remote
collaboration have changed us all for the better. I think he'd be so positive because Doug Engelbart
was working towards just such a system all the way back in the early 1960s. Welcome back to Advent of Computing. I'm your
host, Sean Haas, and this is Episode 70, The Online System, Part 2. Today, we're going to
be looking at the development details and a little bit of the downsides to NLS.
As you may be able to tell, this is a follow-up to last episode.
Last time, we examined the context leading up to the online system.
So, if you haven't listened to that episode, I suggest you do so before listening to part two.
As always with these series, I try to make it so each episode isn't strictly necessary for the next one,
but I do highly recommend it if you want the full story.
We left off with the 1962 publication of Doug Engelbart's seminal work,
Augmenting Human Intellect.
That paper laid out a framework for how humans may think,
and, more importantly for Advent of Computing,
how computers could
be used to improve that process.
You know, it's not really Advent of Human Thought, anyway.
Crucially, the paper described an augmentation system with three key features.
A live, full-screen text editor, hypertext, and timesharing.
As far as I'm concerned, many other details are just set-dressing.
This episode, we're going to be looking at how Engelbart worked to make these three plans a
reality. Speaking in the larger context, I also want to examine how NLS connects up to later
hypertext systems. Engelbart's work is seen as a watershed moment for hypertext, for computer-human interfaces,
personal computing, and a whole slate of other inventions.
The 1968 Mother of All Demos was the first public display of not only hypertext, but
also other fundamental technologies like the computer mouse.
It is a fundamental moment in computing history. Engelbart's work is nothing short of groundbreaking,
but there's something here that I don't think gets addressed enough.
You see, I hate to be the one to break it to you, but we aren't using NLS today. This is sort of a pattern that we see with a lot of early hypertext systems. For the time, they were
revolutionary, they spurred on innovation, and they inspired a lot of future work. However,
the systems themselves often failed to catch on outside of small communities or niches.
NLS is a fantastic example because it's put on such a high pedestal. Despite its promise,
the system itself wouldn't make it very far outside of Engelbart's lab.
Either way you cut it, if we're looking at successes or failures,
we have to get deeper into the details of the online system.
So, let's get into it.
Last episode was pretty theoretical, I will give you that.
I don't think we even touched on an actual computer.
This episode, I'm changing that around.
We're going to be talking about CCTVs, very large-scale computing installations,
and long-forgotten programming languages.
To kick things off, I want to talk about something that was conspicuously absent from last episode.
Well, it was conspicuous if you're as familiar with
Engelbart's body of work as someone like me. That's the concept of bootstrapping. It doesn't
come up in augmenting human intellect itself, at least not explicitly. Bootstrapping was Doug's
idea that in pursuit of better augmentation systems, he and his team should be primary consumers of their own
research. Every time they develop some new method or program, they should be the first to put it to
use. The rationale here is, I think, pretty ironclad. It allowed Engelbart to test out new
tools really extensively. Some issues or realizations only crop up during long-term use of a system.
It would also, in theory, allow Engelbart and his co-conspirators to continually increase their
productivity. Assuming every step was in the right direction, then every new step would speed up their
pace of development. All in all, pretty good deal. Bootstrapping was especially potent in a group
context. You see, Engelbart wasn't working alone, at least not for very long. I won't go through the
entire funding history here since, well, it's kind of complicated. Essentially, after leaving UC
Berkeley, Engelbart went to work at the Stanford Research Institute. Now, there is some confusion
here that I should sort out for completeness sake. SRI is kind of unrelated to Stanford the
university. The institute was originally founded as a separate think tank-like group by the
university's trustees. In the 70s, it was fully separated as an entirely independent entity.
But SRI was mostly quasi-independent from the start.
It's a little confusing.
The overall point here is just to keep SRI and Stanford University separate in your head.
Different places, different projects.
Anyway, Engelbart had some funding secured while he was writing Augmenting Human Intellect.
In 1963, about a year after the paper's publication, there was finally enough funding
and institutional backing for him to found the Augmentation Research Center, or ARC,
inside of SRI. Funding was from SRI itself, NASA, ARPA, and the U.S. Air Force. The bottom line here is that by 1963,
Engelbart has a roadmap, he has funds, a location, and a team starting to form.
With all the boring administrivia working its way towards a resolution, another, more pressing
problem came into view. An issue that would haunt ARC during its lifespan is that Engelbart's ideas
were always a little bit beyond the cutting edge. This was exacerbated by the fact that ARC was,
for all its big-name backers, an independent research outfit. This independence was a bit
of a double-edged sword, as it usually is. Engelbart's augmentation research was, by its very nature,
interdisciplinary. He had to draw from multiple fields to try to make a balanced approach to human-computer interfaces. When dealing with something as squishy and complicated as humans,
there really isn't any other way to go. The downside is that there wasn't any existing lab or company where Engelbart
could accomplish his grand goals. Beyond that, there were few colleagues to be found.
Quoting from a history piece that Engelbart wrote years later,
I first tried to find close relevance with established disciplines. For a while,
I thought that the emergent AI field might provide me with
an overlap of mutual interest. But in each case, I found that the people I would talk with would
immediately translate my admittedly strange-for-the-time statements of purpose and possibility
into their own discipline's framework. And when rephrased and discussed from those other
perspectives, the augmentation pictures
were remarkably pallid and limited compared to the images that were driving me.
End quote.
This left Engelbart, for better or for worse, to blaze his own trail.
He had to form his own lab, and he had to secure his own funding and find his own employees.
There just wasn't anywhere that existed where he could nicely slot in.
Now, if you'll indulge me for a moment, I like to think about all the what-ifs that surround
this critical juncture. What if Engelbart had been brought on by, say, an IBM? He may have had
access to exactly the technology he needed. Hardware would have been plentiful, and a well-funded lab connected
to a large manufacturer, well, that could have allowed him access to all the custom hardware
and software he could ever dream of. Working from ARC, he would have to be content to be on the
outside. There was a benefit here, though. Engelbart had almost complete freedom to pursue augmentation
however he saw fit, but I want to
wallow in the downsides a little bit more. I think those are a little more interesting than the broad
idea of creative freedom. Engelbart would start his implementation adventure with a relatively simple
CDC-160A computer. At the time, 1963 at this point, that computer was already a three-year-old machine.
The 160A falls into a sort of middle category of machine called a minicomputer. This is basically
a step down from a mainframe in all regards. For instance, the model Doug started with had a tiny
eight kilobytes of RAM. That's just not a whole lot of space to live in.
RAM and all the other specs aside, the interface options are where things really get bleak.
This machine didn't have anything like a graphics terminal. It also lacked any kind of remote
terminal just in general. The 160A was designed as a quote single user computer,
but we aren't talking a personal computer here. At the time, most mainframes were used via batch
processing. Multiple users would queue up jobs to be run in batches. The smaller CDC machine
was designed to be used by a single person sitting at a Flexowriter, which is essentially a really fancy electric typewriter that was hardwired directly into the machine.
The only other input option was paper tape.
Not really the most interactive of devices.
Now, this should go without saying, but those are very sequential input devices.
You can't really construct links on a paper feed.
A less readily apparent issue was that these small computers were rather fragile,
at least in the software sense.
As Engelbart would point out,
quote,
If the system crashed, you had to load the application program from paper tape
and the most recent dump of your working file before you could continue. End quote. The point being that the CDC-160A was not suited
to human augmentation. You had to wrestle with the computer too much to get actual work done.
However, this was a starting point. It was also on the 160A where the NLS acronym, as strange as it is, originated
from. You see, the oN-Line System, that's where the N in NLS comes from, was originally a pair of systems. NLS
would go through a number of revisions and total rewrites, so calling it by any one name brushes that away, but it was originally called NLTS,
the Online Text System. It coexisted with FLTS, the Offline Text System. The F, of course,
coming from the second letter of OFF. Really thrillingly constructed acronyms. Now, there are precious few details about how FLTS actually worked or was used.
In general, this early phase of NLS's development isn't super well documented online.
The tragedy here that I keep kicking myself for is I know where the papers are that should describe this stuff. It's listed in the
finding guides for the special collections at Stanford. When I was down at that collection,
I was really only looking at an earlier period, so I didn't get into these specific details.
Maybe that's going to be worth another trip later on. Anyway, before I ramble on too much, here's what we do
know. FLTS was, as the name suggests, used without a direct connection to the computer. Remember,
in this era, online means on a physical line into the machine. The early online users would sit at
the single Flexowriter and type away, while FLTS users would instead punch
commands onto paper tape. That tape would then be fed into the CDC-160A in batches, allowing the
minicomputer to, at least in a limited sense, pretend to be a mainframe for a few minutes.
These offline tapes contained strings of commands telling FLTS how to perform some type of
manipulation on the user's text data. That could be essentially, or at least as near as I can guess,
anything that a user of NLTS could command. We also know that at this point, ARC had developed
a very simplistic hypertext-ish system. They were still living
out of a Flexowriter, so there wasn't anything too breathtaking, but the bones seemed to have
been there. Engelbart describes it as, quote, structured file editing, implying that there was
at least the idea of relationships between data. The million-dollar question, or for me, the road trip down to
Stanford dollar question, is what the interface actually looked like. I think I can make a few
educated guesses that will at least allow us to put some meat on them there bones.
I'd wager that NLTS and FLTS used similar, if not identical, commands. It's easy to imagine a situation where the only
difference really came down to the input method. But that leads to an interesting point. So far,
we're only dealing with simple textual inputs. That should go without saying, but I think it's
also important to point out. In Augmenting Human Intellect, Engelbart envisioned
a very graphical system with custom input devices. This vision would have to be compromised,
at least in the beginning, to traditional keyboard inputs and simple lines of text.
This is also important because, in a strange way, this simple text command input system
may be close to something like a version control system.
Engelbart describes a series of inputs operating on data in a cumulative manner.
One command would, say, move a chunk of text. Another might change where a link is pointing,
or even splice in a totally new line of data. Eventually, you arrive at your final data,
and as a byproduct, you have a record of how
that final data was created.
The ARC team wasn't setting out to make version control, but we can kind of see the
shades of that technology showing up here.
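You can sketch that idea out pretty easily. The Python below is a toy model only; the command names and their syntax are invented, since FLTS's real command language isn't well documented:

```python
# Toy sketch of the FLTS idea: a sequence of text commands applied in
# order, producing both a final document and a record of how it was made.
# Command names here are invented, not FLTS's actual syntax.

def apply_command(lines, command):
    """Apply one (operation, *args) command to a list of text lines."""
    op, *args = command
    if op == "insert":      # insert a new line at a given position
        pos, text = args
        lines.insert(pos, text)
    elif op == "move":      # move a line from one position to another
        src, dst = args
        lines.insert(dst, lines.pop(src))
    return lines

# The command tape, as it might be punched offline
history = [
    ("insert", 0, "NLS stands for the oN-Line System."),
    ("insert", 1, "FLTS was its offline sibling."),
    ("move", 1, 0),
]

doc = []
for command in history:
    doc = apply_command(doc, command)

# The history doubles as a record of how the final text came to be
assert doc == ["FLTS was its offline sibling.",
               "NLS stands for the oN-Line System."]
```

The version-control flavor comes from that last point: keep the command tape around and you can replay, audit, or truncate the edit history at will.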
FLTS wasn't too long for this world, but I do think it's relevant to keep this idea
of text-command-driven interfaces in mind.
Luckily, ARC was soon
able to upgrade. This came in the form of a brand new shiny CDC-3100 mainframe, which arrived to
the lab sometime in 1964. Judging by the release date of this particular computer, it would have
to be very late 1964. This is when NLS, née NLTS, really kicked into gear. The particular
machine used at ARC came packed with a full 16 kilobytes of RAM. Not a huge amount by any measure,
but definitely an improvement over the earlier eight. The team also got their hands on a disk
pack. This was a primitive relative of the more common hard drive,
which opened up a whole new world of possibilities.
At the time, hard drives of really any type were pretty new and pretty expensive.
It was more common to use magnetic tape for primary data storage.
But here's the dig.
Hard drives are random access.
There's that big scary R word again.
Sure, there are some implications about spinning the disk and seeking the head, but this is negligible compared to the hard and fast sequential nature of paper or magnetic tape. The randomness
here meant that Engelbart could, for the first time, treat a computer's storage just like he treated edge-notched cards.
He could make a reference to some location on disk, and then pull that reference out
of the disk almost instantly and really easily.
You just can't do that without random media like a hard disk or, back a few years, edge-notched
cards.
The pièce de résistance that brought this all together,
and really the signature feature for ARC, was a new graphics terminal. The only issue here
was that ARC couldn't really go out and buy a ready-built graphics terminal. Those
didn't exist on the market, or they at least didn't exist in their budget. So they had to build one. Surviving
pictures of this early terminal are suitably sci-fi for the era. The screen itself was a large,
round cathode ray tube, probably the kind meant to be used in an oscilloscope. It's even
dished out a little bit, so it almost looks like a fishbowl. This was housed in a giant metal box
with some big visible bolts fastening all the panels together.
It really does look like it came off a science fiction set.
This giant display housing was then attached to a desk, and it sat in front of a trio of input devices.
A keyboard, a mouse, and a chorded key set.
The keyboard is the odd one out here. It was just an ordinary keyboard. There's nothing
too fancy there. We all know them. We all love them. ARC developed the mouse especially for use
with NLS, and perhaps surprisingly, it's really close to the mice we still use today. So once
again, we know it, we love it. Simple. Finally, we have the chord set, sometimes called a chorded keyboard, a key set, or a chorded
key set. This particular device would see a bit of evolution over the years. Eventually, the device
settles down to a simple five-key device, but some early examples have 10 or even more keys.
It's used by typing chords, pressing combinations of multiple keys which allow you to quickly
enter data with a single hand.
The traditional configuration appears to have been for the left hand to chord while the
right fiddles with the mouse or maybe types.
In theory, an operator doesn't ever need to use the keyboard.
The keyset and mouse can do all the work.
With a 5-key chord set, you get a total of 32 possible
chords, but there's a bit of a learning curve here. You have to memorize this weird
5-bit binary encoding system to use a chord set effectively. Engelbart saw it as a much
faster way to stream data into a computer, and if you're practiced, then that very
well may have been the case, but it does take a bit of practice.
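To put some numbers on that 5-bit encoding, here's a quick Python sketch. To be clear, the key-to-character mapping below is entirely made up for illustration; I don't have a source for ARC's actual chord assignments.

```python
# Hypothetical sketch of 5-key chord encoding. Each key is one bit,
# so 5 keys give 2**5 = 32 combinations; one of those is "no keys
# pressed", leaving 31 usable chords. The character mapping below is
# invented, not ARC's real assignment.

def chord_value(keys_down):
    """Map a set of pressed keys (0 = thumb .. 4 = pinky) to a 5-bit code."""
    value = 0
    for key in keys_down:
        value |= 1 << key
    return value

def chord_to_char(value):
    """A simple a-z style mapping: chord 1 -> 'a', chord 2 -> 'b', ..."""
    if 1 <= value <= 26:
        return chr(ord('a') + value - 1)
    return None  # remaining codes left over for punctuation, commands, etc.

# Pressing keys 0 and 1 together gives code 1 + 2 = 3, i.e. 'c'
assert chord_value({0, 1}) == 3
assert chord_to_char(chord_value({0, 1})) == 'c'
```

The learning curve the episode mentions is exactly this: an operator has to internalize which finger combination maps to which binary code, with no labels to fall back on.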
Anyway, that's the primary hardware side of things. In this early era, we're still looking
at a one-user, one-computer affair. As near as I can tell, that meant that the custom terminal
with all its input devices was wired directly into the CDC mainframe. As far as the software side of things goes, well, that gets a
little bit more messy to deal with. Engelbart and his team at ARC were working with a really
customized rig, so they were locked into the customized lifestyle, so to speak. It's hard to
pin down the minute changes in NLS over the years. The whole bootstrapping paradigm meant that
software had to move fast. I'm going to start us off with a snapshot of NLS in 1966, as presented
in one of the few digitized progress reports. At this point, NLS had been ported fully from
its nascent form to the CDC-3100. Here's where the custom parts start to come in. And by that I mean almost
immediately. The ARC squad had to modify the 3100's assembler. Specifically, they added a
CRT-based debugger. Now, this is just a small detail, but I think it highlights a trend to be aware of. So, sure, a CRT-based debugger is really slick. It's a new
enough idea, and it definitely would have made debugging easier. And plus, it falls into the
framework of bootstrapping. You gotta write tools to get better. They had a CRT workstation,
so they should be using it. However, this also meant side work. It meant extra work on top of
an already daunting project. This work was totally off the straight path to NLS, and it was necessitated
by the fact that ARC was working with new and custom hardware. I'm sure there was also the
operational reality of just needing better debugging tools and wanting a project
to help familiarize the team with the new workstation and new computer. This was all
probably worth it, but this pattern of needing custom tools and hardware will only continue.
It's part of the price that Engelbart was paying for being on the cutting edge.
The other interesting part here is that the new assembler debugger package, which was
being called COPE, also allowed for symbolic debugging.
The 66 report doesn't really go too far into the weeds here, probably a good choice,
but apparently this was all accessible from inside NLS.
So one, that's darn handy, and two, we're starting to see these shades of almost an
integrated development environment. And I think besides the pattern of custom software and
hardware, there's something interesting about really modern sensibilities showing up inside
NLS, at least modern in part. Now, the other fact to take note of on this programming front is the choice
of language, or rather, languages. In 1966, NLS was written in a combination of COPE-modified
assembly language and FORTRAN. That's right, we are talking about FORTRAN yet again. Once again, the superfine details aren't entirely
clear, and I suspect that's because Fortran was a bit of a tool of convenience. It may have been
that the CDC-3100 just didn't have another compiler quite yet, but it's clear that Engelbart
wanted to move away from this venerable language as soon as possible. What I find
interesting is that NLS was supplementing Fortran with assembly language. Now, this is a perfectly
valid method, it's just not something I've run across before in the sources. Basically, the
assembly side of things was structured so functions within could be called by Fortran. This was probably used
to drive all the custom hardware, and maybe to handle more complicated data types. Fortran has
never really been well known for having complicated or useful data types. In the report, Engelbart goes
on to mention that the team had been looking at writing a SNOBOL compiler for the 3100. Now, despite the name,
this has nothing to do with the more well-known COBOL. SNOBOL is short for the StriNg Oriented symBOlic Language. It's a bit of a forced acronym. I'm definitely not an expert on this
language, but it seems to have been pretty early to the whole string-manipulation game. I'm not bringing this up so much to start off a comparison between Fortran and SNOBOL,
more to point out how Engelbart and ARC are operating. They're relatively tool-agnostic.
The report demonstrated that they're willing to switch to a better solution once it becomes
available. This is an
important tack to take when working on a really long-term project, and this becomes especially
possible thanks to Engelbart's view that augmenting human intellect will be a pretty darn long-term
endeavor. I think this also makes NLS tricky to talk about because, like I said, it's not
necessarily any one program, but an idea
or maybe you could even call it a really loose specification. Anyway, the final important takeaway
from the 66 report is that the team at ARC was actively using NLS. Engelbart explains that,
quote, very detailed documentation has been developed for NLTS. This documentation
involves 11 separate memos in linked statement form, each of which will represent a file on disk,
end quote. So the team was able to at least write and debug COPE from NLS and write a set of
documents for NLS on NLS itself.
If that's not bootstrapping, then I don't know what is.
Plus, I'm pretty sure the report that I've been pulling all of this from
was also written using an early version of NLS.
There's some signs of the hypertext treatment.
Notably, each paragraph starts with a characteristic identification number
that I'll explain a
little later on. It's safe to assume that after this point in 1966, any resources you read that
are detailing NLS from inside ARC were written on NLS itself. Now, I've avoided covering the
finer details of NLS's interface so far. That's with some good reason.
Sources on early NLS aren't very descriptive if they exist at all. Plus, if we fast forward just
a few years, then we get into some of the best sourcing possible. In 1968, Engelbart and his
co-conspirators at ARC presented NLS at the Fall Joint Computer Conference in San Francisco, California.
This event is most commonly called the mother of all demos, and luckily for us, it was all
recorded on video.
That last part is, as far as I'm concerned, the most important, at least for me.
When looking at these old computer systems, it's rare to get so much as a photo.
Sometimes you get a crude screenshot taken with a physical camera,
but usually just a description and maybe a diagram of the interface.
NLS, at least the 1968 incarnation, has the demo.
The film runs just under two hours, and it shows all the highlights of the system.
Of course, as a source, we have to be
careful here. It's a public demonstration, so it's only showing the best and flashiest parts.
Engelbart doesn't stop to explain parts of NLS that don't work, or failings of the backend. In
that sense, it's a highlights reel, but it gives us a really good understanding of the machine's
interface. So I think this is a good time to check some things off the list of major features.
We're looking for a text editor, hypertext, and timesharing.
Running over these features should give us a pretty good idea of where NLS is as far
as Engelbart's grand plans go.
What I've been calling a text editor may as well just be the entire interface.
After all, the main interface of NLS was designed for editing text, so there is some bleed over here.
This is also the first place we can see a certain schism going on inside NLS.
The entire interface was graphical.
You could even have graphics sitting right next to text on the same screen.
The mouse, perhaps the most quintessential part of any graphics system, was designed for NLS.
The demo even shows a sleek three-button number in use. You even get a little cursor on screen
that's shaped like an arrow that moves around via mouse input. So with all that, you should be imagining a pretty modern
interface, right? Well, not exactly. This is something that a lot of coverage of NLS either
skims over or just gets wrong. It's a revolutionary system with the most modern interface of any
computer in 1968, but that doesn't mean that it would be familiar or
even accessible to a modern computer user. Despite the focus on graphics, the bulk of NLS's interface
was still text-based. To do just about anything, you had to type in a command. This was so central
to NLS that it dictated how the screen was even laid out.
The very top line of the display was dedicated to command inputs. The rest of the screen
displayed hypermedia. For things like copying and pasting text, changing views, saving files,
and even jumping to links, you had to first enter in the proper command. But this wasn't
all just keyboard work. NLS was still a good deal more advanced than
its true contemporaries. This is where the canonical trio of interfaces comes into play.
In the demo, Engelbart types commands using the chord set almost exclusively. It really does look
impressive to see someone who's practiced using the interface. It's like he's playing a tiny five-key piano.
You could also enter commands using the keyboard
if you perhaps weren't as proficient.
Arguments for those commands can be entered using the mouse.
If you wanted to, say, copy a word,
you would first chord in the copy command,
then select a word using the mouse.
What we're looking at is a strange hybrid between text and
graphics. It's not really a point-and-click type of system. The clicking here is used in conjunction
with much more traditional text-based operations. This is a really fine detail, but I think it bears
repeating. NLS is often held up as the mold that all user interfaces are cast from.
In a lot of ways, that is true, but we still have to look at NLS as at least a little bit of an
archaic system. The missing point and click here is a good example. To navigate around NLS, you
have to use two, maybe three separate input devices. It was still all oriented around textual input. The graphics,
backed up by the mouse, came second to text. That's most of the interface slash text editing
stuff, so what about the actual text you were editing? What about the hypertext? This is another
spot where NLS shows us a mix of modern and archaic features.
The low-hanging fruit here is the link.
That's what we all recognize as a core feature of hypertext,
and as with a lot of early systems, NLS does links a little bit differently.
In general, compared to modern internet-based hypertext,
NLS's links just plain do more.
These links are bidirectional, at least they can be.
As near as I can tell, NLS could do one- and two-way links. This is the same camp that Ted Nelson and Project Xanadu fall into. There's a lot of subtlety that comes along with two-way links
as opposed to the one-way links that we're used to. What really matters here, at least in the
NLS context, is that bidirectional links tell you more about how chunks of data are connected.
You can traverse up and down a chain of links. Put more simply, this gives us a richer form of
hypertext. Links were also fairly granular, meaning you could jump to very specific points or just to chunks of text.
A link in NLS could point to a large chunk of data, a document, or even a specific word.
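To make the one-way versus two-way distinction concrete, here's a little Python sketch. This is just a toy model of the concept, with invented names; it says nothing about how NLS actually stored its links.

```python
# Toy model of one-way vs two-way links between chunks of data.
# An illustration of the concept only, not NLS's actual file format.

class Statement:
    """A chunk of linkable data, in the loose NLS sense of the word."""
    def __init__(self, text):
        self.text = text
        self.links_out = []   # statements this one points at
        self.links_in = []    # statements that point at this one

def link(src, dst, bidirectional=True):
    src.links_out.append(dst)
    if bidirectional:
        # A two-way link also records the back-reference, so you can
        # ask "who points here?" and walk a chain of links in reverse.
        dst.links_in.append(src)

a = Statement("memex article")
b = Statement("NLS progress report")
link(a, b)

# With the back-link, b "knows" it is referenced by a
assert a.links_out[0] is b
assert b.links_in[0] is a
```

A one-way web link gives you none of that: the target page has no idea who points at it, which is exactly the richness the two-way camp was after.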
But that's all relatively mundane.
Links have been around since Vannevar Bush, probably earlier if we're being honest.
They've been a mainstay of anything resembling hypertext all the way into
the modern day. Let's get to the weird and the wild part, though. The most visible hypertext
feature that sets NLS apart wasn't really text. It's graphics. You see, NLS supported full-on
hypermedia. In NLS, documents were broken up into chunks of data called statements.
And each chunk didn't have to just be text alone.
It could be text, it could be an image, or it could be a mix of text and images.
This was made possible thanks to the ultra-customized workstations ARC built.
The workstations were backed by independent text and vector generators.
A text generator is a circuit that just turns text data into the actual pixels or scanline information or what have you
that can be displayed on actual CRT monitors.
A vector generator does roughly the same, but for vector data.
You pass in something like a list of lines,
and it outputs the necessary video signals. That's all on the backend. The result of this
configuration was that NLS could display crisp graphics right next to text. Since this is all
vector graphics, we aren't talking about pictures so much as line drawings and diagrams, but this
is still a huge step in the right direction.
The ability to handle mixed media documents was, and still is, very important. In that regard,
NLS knocked it out of the park. And, of course, you could link right into graphics.
Next on the tour, we get to what I think is another big difference between NLS and other similar systems. This all comes down to the
matter of hierarchy. Today, it's most common to work with unstructured files of some sort. So
common, in fact, that us computer nerds even have slang for it. We call these flat files, since,
you know, they're just kind of flat. They just lie there. Above that, you have the choice
to work with structured data, at least of some kind. Often this is handled using something like
a database. In this context, you aren't really accessing files directly. You're using a database
system that somewhere down the line is dealing with a grouping of files. NLS was built around structured files. This is
something that, at least for me, is a little bit hard to initially wrap my head around.
I'm used to either having a flat file directly that you can edit, or handling data via some
intermediary like a database. The best way I can explain it is that NLS fits somewhere between these two ends of the
spectrum. Under this venerable system, files were treated themselves as structured data,
and files were edited and viewed through something like an intermediary layer. But,
well, it gets a little more complicated than that. Each file was internally structured as a rigid
hierarchy of statements. Remember, statement is just NLS talk for chunk of data. Each statement
had a single parent and, optionally, children. The technical name for this is a tree structure.
It's a really useful way to structure and handle data, and it's how Engelbart believed people worked best.
So how did this work in practice?
Well, it comes down to lots and lots of little headers.
Within a file, each statement was given an implicit identifier, basically a unique name.
These weren't just incrementing numbers. There's a little more going on here.
Remember, NLS is all about richness of
data and connections. IDs encoded the statement's position in the file's overall hierarchy.
For example, a statement labeled 2B would be the second child of the second statement.
2B1 would be that statement's first child, and so on until you get IDs that look something like
3C2H1D or something kind of gross. In practice, these ID numbers weren't really meant for human
consumption. They were just used to describe file structure within NLS. As a bit of a callback,
it's similar to the unique numeric identifiers that Engelbart used on his edge-notched note decks.
You could point a link to the gross 3C2H1D statement, and NLS would know exactly what you were talking about.
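To make that ID scheme concrete, here's a toy sketch in Python, based only on the examples above. The function name and the 1-based path representation are my own illustration, not anything from NLS itself; levels alternate between numbers and letters, just like the episode's "2B1" example.

```python
# Toy sketch of NLS-style statement IDs (hypothetical helper, not NLS
# internals). Levels alternate between numbers and letters: statement 2's
# second child is "2B", that statement's first child is "2B1", and so on.

def statement_id(path):
    """Build an ID from a 1-based path of child positions, e.g. [2, 2, 1] -> '2B1'."""
    parts = []
    for depth, pos in enumerate(path):
        if depth % 2 == 0:          # even depths use numbers: 1, 2, 3...
            parts.append(str(pos))
        else:                       # odd depths use letters: A, B, C...
            parts.append(chr(ord('A') + pos - 1))
    return ''.join(parts)

print(statement_id([2, 2]))           # 2B
print(statement_id([2, 2, 1]))        # 2B1
print(statement_id([3, 3, 2, 8, 1, 4]))  # 3C2H1D, the "gross" one
```

The nice property, and presumably why NLS used something like it, is that the ID alone encodes the statement's exact position in the tree.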
Statements also had short names. These were limited to a small number of characters.
I've seen five mentioned in some memos, but I imagine
it probably fluctuated as the project changed. Names defaulted to the first five letters of
the statement, but they could be edited. I'm not entirely sure how duplicate names were handled.
In the demo, Engelbart even shows a file with multiple statements just named
Word, so maybe names weren't the most reliable way to refer to statements.
Now, this should all sound a little out of the ordinary. Well, at least at first. When I was
going over NLS memos and reading more and more about this rigid hierarchy, I kept thinking,
oh wow, I've never worked with anything like this. This is totally foreign to me.
Well, I was being a little bit stupid there. There's no better way to put it.
In case you haven't put two and two together, or you aren't a web programmer,
there is an easily accessible modern equivalent.
HTML, and more broadly speaking, XML, is also structured as a rigid hierarchy.
HTML is what's used to write webpages.
It's composed of tags, something roughly equivalent to statements.
Each tag has exactly one parent and can have multiple child tags.
That's a structured file.
In fact, it's even a tree structure.
HTML is even used in the modern system that helps
augment human intellect. The similarities here are fascinating and undeniable. Even so,
I think this makes the differences all the more important. On the modern, flashy HTML-based web,
we only have two view options. You can look at the source code,
or you can look at a fully rendered page. Each view has its own unique uses. Web denizens find
viewing rendered pages nice since, you know, you can actually read and enjoy them. Pros like myself
spend a lot more time on the source code side of things because that's really the only way you can
edit HTML. How does this view situation look on NLS? Simply put, NLS offers a lot more options.
Many of them come down to which data you want to display. You can select and modify views to show
just the top most items in a file, or expand specific statements, or even view every single
substatement.
You could show a view that shows all statements flat, running together as a single file, or
you could choose to just see all antecedents of a very specific line.
The key word here is flexibility.
With NLS, you had fine-grained control over how data was displayed and traversed. Crucially, under any
of these views, you were able to edit your data. This is a little bit of a bleed over between
hypertext and text editing, but hey, these were never really going to be hard and fast categories.
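The view idea can be sketched in a few lines of Python. This is my own minimal illustration of depth-limited views over a statement tree, with made-up function names and sample data, not a reconstruction of NLS code.

```python
# A minimal sketch of NLS-style view control over a statement tree.
# Names and structure are my own illustration, not NLS internals.

def make_statement(text, children=None):
    return {"text": text, "children": children or []}

def view(statements, max_depth, depth=0):
    """Return lines of text down to max_depth; deeper statements stay collapsed."""
    lines = []
    for s in statements:
        lines.append("  " * depth + s["text"])
        if depth + 1 < max_depth:
            lines.extend(view(s["children"], max_depth, depth + 1))
    return lines

doc = [
    make_statement("Introduction", [make_statement("Background")]),
    make_statement("Design", [
        make_statement("Hardware", [make_statement("Displays")]),
        make_statement("Software"),
    ]),
]

print(view(doc, max_depth=1))  # top-level statements only
print(view(doc, max_depth=3))  # every substatement, fully expanded
```

The same tree yields many different displays just by changing one parameter, which is the kind of flexibility being described here.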
The final wrench that I want to throw into this whole structured file thing comes down to how it coexisted with
links. You see, internal file structure and links were pretty unrelated under NLS. You could have a
link inside any statement, and you could link to a point in any statement, but that was about it.
Under NLS, the link was one feature that was part of a larger system.
The demo, as well as much of the writing coming out of ARC at the time,
stressed structure over the link.
This does make sense in context.
Augmenting human intellect is, in large part,
Engelbart's attempt to explain how humans may think.
That's formulated as hierarchical structures,
connected ideas held in this prescriptive and precise framework. Links play a role in this theoretical work,
but read as just part and parcel of structured data. Now, once again, this is pretty in line
with modern HTML-based systems. A link is just one type of tag in a larger structure of web documents.
Links are positioned as important, it's how you navigate, but they only make up a small part of
an overall structured web. This is one more way that Engelbart's work really presaged the modern
internet. This is also where NLS set itself apart from its inspiration
and other contemporary systems. Vannevar Bush never talked about internal document structures,
or if he did, it wasn't ever put in writing. The Memex just had links between amorphous
pages of information. In that system, the link is supreme. NLS was heavily influenced by Bush's work.
Augmenting Human Intellect even has a whole section talking about Bush.
But that was only up to a point.
The treatment of links makes this much really clear.
So, here's the strange part.
Well, at least it shook up my way of thinking about early hypertext.
Earlier in the series, I ran an episode on Ted Nelson and Project Xanadu.
I'd recommend checking it out if you find this episode interesting.
Nelson is a visionary in this field, and came to the problem with an outsider's perspective.
You see, he was never a programmer or a computer scientist.
Because of that, much of his work on hypertext has
been theoretical, but also breaks hard from any kind of traditional approach. But despite being
an outsider, Nelson's work has been very influential. I mean, he coined the term hypertext
for starters. It can be easy to look at Engelbart as the old guard when it comes to hypertext.
He's working on the fringes of the industry, to be sure,
but he's at a dedicated lab with government funding.
SRI is a pretty big and recognizable name.
He's backed by a large team of researchers,
so you'd expect systems he created to be a continuation of some older, stuffy academic tradition.
Well, that's not the case at all.
Whereas when you look at Nelson's work, you see a consummate outsider.
Even his academic papers, when he writes them, are outside the box.
Nelson's best-known book, called Computer Lib Slash Dream Machine,
is closer to a piece of contemporary art than a formal treatise.
But all that being said, the hypertext system that Nelson proposed was much more in line with established work.
In Xanadu, internal structure exists, but it's second to the link.
Chunks of data are connected and structured using links. Just as with Bush, the link is supreme. This has been a bit of a tangent,
but it's important to the larger picture I'm trying to paint. We can't look at NLS as this
perfectly modern system that just happened to show up in the late 1960s.
By the same token, we can't look at it as a purely archaic system either. Some contemporaries like
Nelson's theoretical Xanadu and eventually real hypertext editing system are closer to successors
of earlier work. But NLS doesn't fit into that lineage either. It's a response to earlier work
like the Memex, but not a descendant. NLS is very much its own thing. Does that muddy up the
picture enough for you? Simply put, structured files are key. Structure is key. So how was this all achieved? We've looked at how the interface
and hypertext of NLS worked in practice, but what's making it tick? Well, in my opinion,
that's where we enter into some dangerous territory. The short story is that NLS,
and keep in mind this is the 1968 version we're talking about, was massively complex.
I know this is an audio medium,
but when I say complex, just give that an underline and some bolding in your head.
One quick way to start our dive into this side of NLS
is to look at the third core feature, timesharing.
That is, how NLS was able to service multiple concurrent users.
By the time of the demo, ARC had upgraded computers yet again to an SDS-940 mainframe.
This was an important upgrade because the 940 had hardware support for timesharing.
Simply put, timesharing lets you share a computer's time and resources among multiple programs and,
by extension, multiple users. This is done by a trick where a computer can suspend and switch
between tasks really quickly. Switch fast enough, and to us slow humans, it looks like the computer
is running multiple programs at once. Add in some fancy code to deal with multiple terminals,
and you have timesharing. In practice, it's not really that simple. Nothing ever is.
The fancy SDS-940 helps the process along by providing hardware support for memory protection.
Basically, it has features to help keep running programs isolated.
It can also, conceivably at least, support multiple terminal sessions at once.
Normally, that would make for
a really good out-of-box solution. Just get your big new mainframe, a few teletypes, and you settle
in for a pleasant experience. But that's not good enough when you're on the bleeding edge.
The core problem came down to graphics terminals. NLS needed graphics workstations and support for custom input devices.
The 940, as fancy as it was, didn't have those out of the box. So once again, we're looking at
custom hardware and custom software. Just looking at the hardware side of things will give us a
taste for the complexity in just the display system alone. A memo released roughly contemporary to the demo gives this short summation.
Quote,
The display systems consist of two identical subsystems, each with display controller,
display generator, six CRTs, and six closed-circuit television systems.
End quote.
This may not be the most clear description in the world, but this is how ARC was explaining
NLS itself, so I think it's a fair starting point. A single workstation was configured something like
this. In the mainframe room, hooked directly into the big SDS-940 were the text and vector
generators. These were then connected to small high-resolution displays. The interface was wired straight into the 940's memory bus, a pretty common practice
for I/O devices in general.
Basically, NLS's software just had to drop what it wanted to display into the right region
of memory, and then it would show up on one of these hardwired displays.
The 1968 setup had two banks of six displays for a grand total of 12 independent display systems.
But here's the wrench in the works.
The online system was designed for remote access, and these hardwired displays couldn't be very remote.
Now, I haven't been able to find an explicit explanation as to why the signals from ARC's custom graphics generator couldn't have
just been passed down some really long wires to a distant monitor. My best guess is either
logistics, as in it would take too many wires, or maybe signal degradation, as in the signal
wouldn't survive on a very long wire. So a solution was built. Each of these local displays was placed in a light-proof box.
On one end of the box, you had this hardwired display. On the other side was a black and white
closed-circuit TV camera. This worked as kind of a strange converter of sorts. The digital data
from the SDS-940 was blasted in one side, and an analog CCTV signal came
out the other.
The signal was then fed through a series of mixers and multiplexers on its way out to
remote workstations.
On the actual desk side, I guess we could call it the client side, was just a normal
black-and-white TV set.
That sat behind the standard inputs, a keyboard, mouse, and corded keyset. Inputs were
sent back on their own data channels. For more remote sessions, this was done over a modem.
Now, this does solve the problem of mainframe access at a distance. You could, conceivably,
run a CCTV channel down a pretty long wire. Same goes for input data down a phone line.
These two data channels were both tried and true technologies,
just being used for a more cutting-edge purpose than usual.
The issue here is complexity.
Each workstation had a lot of moving parts to it just to get a signal to flow properly.
The other, perhaps larger issue, is the matter of scalability. Each workstation had to have a matching CCTV camera and local high-res graphics display back in the mainframe room. One of the upsides that
Engelbart touted was that remote workstations were relatively cheap. They'd just use an ordinary
television and some custom input hardware. But in practice, to add another workstation,
you had to add a matching install back on the mainframe side.
This greatly limited expandability.
That all said, there were some interesting upsides to this strange arrangement.
NLS was, perhaps, the first system to support remote collaboration.
This was backed up with, believe it or not, webcams. Well,
sort of webcams. Let's just call them online cameras. Like I mentioned earlier, CCTV handling was a well-known quantity at this point. There were existing techniques for mixing and compositing TV
signals, so teleconferencing was implemented by simply mixing the feed from a
front-facing camera with the normal terminal feed. In practice, you get something like a ghostly face
cam of your collaborators. The interesting thing here is that when Engelbart explains this feature,
he always frames it as more of a happy coincidence of the system, and not really a reason for its initial design.
Neat video tricks aside, this complicated graphics system was primarily needed for time-sharing.
By 1968, time-sharing was on this big list of other known quantities. The first implementations
of time-sharing had shown up in the late 1950s or so. However, those were primarily text-based
affairs. There are always exceptions; just for completeness, the PLATO project at the University
of Illinois offered timesharing and graphics. But the main point here is that there wasn't
stock technology for remote graphics terminals. NLS had a neat way around that problem, but it was decidedly limited.
The next issue I want to examine comes down to the matter of programming. I know, talking about
implementation details can be a bit of a slog, so I promise I won't use too fine of a magnifying
glass here. I've already mentioned that NLS is a bit of a moving target. I think it's probably better to call NLS a project,
or, as I have been doing, just a system instead of a program,
since, when you get down to it, NLS changed a lot over its lifetime.
The 1966 version that we talked about earlier
was written in a combination of Fortran and assembly language.
Well, by the time of the 1968 demo,
NLS had been totally rewritten. The team at ARC was always pretty technology agnostic after all,
so when the new mainframe came in, it was time to get hammerin'. Now, here's the deal. I can't
actually pin down what the bulk of NLS was written in. One reason for this is the fact that
the software parts of NLS were written in multiple programming languages. Another reason is,
like I keep driving home, NLS evolved pretty radically over time. It's just hard to pin down
a snapshot. The final reason, which dovetails nicely into a major critique,
is that NLS was mainly developed using custom programming languages. Yeah, plural on the
languages here. The big language, the one that enabled the development of NLS in general,
was called TreeMeta. As near as I can tell, development on TreeMeta started sometime in late 1966 or early 1967. The first manual for the language that I can find
was printed at the end of 67, so I figure we can just pad that date out a little to make an educated
guess. Anyway, you may be wondering what's with the name? Why the meta?
For that matter, why the tree?
Well, the tree part is simple.
Tree meta is packed full of support for tree structures and hierarchical data.
Makes sense.
NLS is all about those trees, after all.
But the meta part, that's a little more interesting.
Tree meta is what's known as a metaprogramming language.
It's a language used to describe other programming languages. You take one of these descriptions and
pass it through the metacompiler to get a new compiler for the described language. Now, that's
a bit of a tongue twister. In other words, TreeMeta is a tool specifically for creating new programming languages and
compilers.
Setting aside all the twists and turns wrapped up in that last sentence, I think this is
a relatively measured approach.
NLS required very specific software, and the project tended to switch platforms often. I mean, between 1962 and 1968, NLS had run on no fewer than three totally different computers.
I'm sure the ARC crew was a little sick of rewriting the same software over and over again.
Building up a toolchain for developing new compilers makes a lot of sense in this context.
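To give a taste of the metacompiler pattern in miniature: feed in a description of a language, get back a compiler for it. This toy Python sketch is nothing like TreeMeta's actual grammar notation; it only shows the shape of the idea, with a plain table of infix operators standing in for a real language description.

```python
# Toy taste of the metacompiler idea, not TreeMeta itself: a language
# "description" goes in, a compiler for that language comes out.
# All names here are my own illustration.

def metacompile(op_table):
    """Return a compiler for a tiny infix language described by op_table."""
    def compile_expr(source):
        left, op, right = source.split()
        # emit stack-machine style "instructions" for the described language
        return [f"PUSH {left}", f"PUSH {right}", op_table[op]]
    return compile_expr

# Describe two different little languages...
arith_compiler = metacompile({"+": "ADD", "*": "MUL"})
logic_compiler = metacompile({"&": "AND", "|": "OR"})

# ...and get two different compilers out, from the same machinery.
print(arith_compiler("a + b"))   # ['PUSH a', 'PUSH b', 'ADD']
print(logic_compiler("p | q"))   # ['PUSH p', 'PUSH q', 'OR']
```

Porting this scheme to a new machine would only mean reimplementing the instruction-emitting core; every described language then comes along for free, which is the portability win ARC was after.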
Next time the lab got a new computer, they would only need to port
TreeMeta's metacompiler, then everything else could just be recompiled. I actually just covered
a pretty similar scenario a few episodes ago. If you listened to the episode on Zork, then TreeMeta
should sound at least a little familiar. There's a bit of a ring there. Zork didn't use a metacompiler,
but eventually it was rewritten in a custom programming
language better suited for the game. In general, this is a relatively savvy approach for developing
complex software that has to be portable. That said, the case of TreeMeta is a little bit more
high-octane than what the implementers of Zork were working with. This is taken from the 1967 TreeMeta manual.
Quote,
TreeMeta translates directly from a high-level language to machine code.
This is not for the faint of heart.
There is a very small number of users, approximately three.
All are machine language coders of about the same level of proficiency.
End quote.
What I'm trying to highlight here
is that TreeMeta was built for a highly specific niche.
It was intended to be used by the programming team on NLS,
more specifically by a subset of that team.
So you have a custom programming language
used for making compilers,
and that language can only really be debugged
and managed by a small
part of your team. That's a little annoying, but maybe not the worst thing in the world.
So what was TreeMeta being used for in practice? For one, the TreeMeta metacompiler was self-hosting,
meaning it was itself written in TreeMeta. That's another one of those mouthful sentences.
Going further, we start to unfurl layers of NLS kind of like an onion.
The next step up from TreeMeta is the ControlMeta language.
This was another meta language used for defining grammars and programming languages,
but instead of being a general-purpose language, it was meant for defining user
interfaces. The 1968 NLS progress report describes it thusly, quote,
The Control Meta-Language Translator can process a file containing such a description to produce
a corresponding version of an interactive system which responds to user actions exactly as
described in the file, end quote. This is how the
sausage is really made. The user interface that we talked about earlier with its typed commands and
mouse-selected options was written in the control meta-language. Once again, I think this is a pretty
smart approach. Engelbart makes it clear that this meta-language approach was taken to
make experimentation easier. The interface could be tweaked by making small modifications to maybe
one or, at the most, a few files. Since the interface was built from a formal description,
things could be kept relatively consistent. This must have also been really convenient with all the custom hardware at Arc.
Need to change how mice work?
Just drop down to the control meta language, make a few tweaks, and recompile.
For a quickly evolving system, especially one using so much new technology, this all
seems like a really good choice.
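The description-driven interface idea can be sketched like this. It's a loose, hypothetical illustration in Python, not the Control Meta Language's actual notation: the "description" is a table mapping command verbs to behaviors, and the generic handler is built from it.

```python
# A loose sketch of the description-driven interface idea: the UI is
# generated from a declarative description, so tweaking the interface
# means editing the description, not the interpreter. All names here
# are my own illustration, not Control Meta Language syntax.

UI_DESCRIPTION = {
    "insert": lambda doc, arg: doc + [arg],
    "delete": lambda doc, arg: [s for s in doc if s != arg],
}

def make_interface(description):
    """Build an interactive command handler from a description table."""
    def handle(doc, command_line):
        verb, _, arg = command_line.partition(" ")
        if verb not in description:
            raise ValueError(f"unknown command: {verb}")
        return description[verb](doc, arg)
    return handle

handle = make_interface(UI_DESCRIPTION)
doc = handle([], "insert hello")
doc = handle(doc, "insert world")
doc = handle(doc, "delete hello")
print(doc)  # ['world']
```

Changing how a command behaves, or adding a new one, only touches the description table, which is the kind of easy experimentation Engelbart was after.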
And hey, it's in line with bootstrapping. That all said, I think this is
one of the places where NLS got itself into some trouble. The system had deep-seated hierarchy at
its very core, not just in how it handled data. To use NLS, you had to learn the control language,
that's the series of commands and clicks needed to navigate and modify data.
But let's say you want to program NLS.
Let's say you're a new hire on the Arc team.
Well, then you really have your work cut out for you.
Depending on what level you're programming,
you either need to learn the control meta language,
tree meta, machine language, or any combination of those. Two of the three of those are totally custom and new programming languages that you can't know from any other work outside of Arc.
Now, I've been going back and forth on how I feel about all this. It's all wickedly complicated to
be sure. I don't know if anyone worked as a full-stack developer for NLS,
but that would have definitely taken a particularly strong constitution.
I wouldn't want to be a new hire, let's just say that. Just trying to make sense of the different
layers on its own can be a little confusing. But I can also see the upsides here. First of all, you get some wicked portability.
Arc could pretty easily migrate to a new computer. By the same token, it would be easy to deal with
new input and output devices. Just change some code at the exact right layer and you're done.
Simple. As long as you're familiar with all these layers of the NLS onion, then you're good to go.
This could have also made teamwork easier. Now, I haven't seen sources describing this part of the project, so
bear with me here. Each level of the metalanguage boogie had its own distinct area of responsibility.
The tree meta side of things was all about lower-level development, basically the stuff
needed for the overall toolchain.
The control meta language layer was all about the user interface and front-facing features.
There were a few other layers that I haven't even gotten into here because there's so
little information on them, like the machine-oriented language, but that also had its own niche.
The point here is that different teams could be responsible for
widely different parts of the system. Each team would have its own isolated environment, its own
niche programming language, and its own set of goals. Once everything was set up, once all the
plethora of languages were developed, I can see how this would have been a pretty slick workflow.
Differentiation and specialization really do help make software development faster and better. It
allows you to focus on just one task. It's just getting everything up to that point seems, well,
like a Herculean feat. But hey, I guess that's what bootstrapping is all about.
Alright, that does it for this episode.
And that also does it for our impromptu NLS November.
For me, at least, here's the key takeaway.
NLS is a complicated system with a complicated story.
It's a lot more than just the mother of all demos.
It wasn't some fully formed modern system that just showed up out of nowhere in 1968.
And it also wasn't some fully archaic old school system that's just been overhyped.
NLS lives somewhere in that daunting liminal space,
which makes discussing it in detail all the more important.
Now, by this point, you've already listened to two hours of me talking about NLS. To be fair, that did include a lot of lead-up last episode, but it's still a lot of
content. This is just scratching the surface. There is a lot more that I'm realistically going
to come back to eventually. I'm especially interested to look at some of the later software and applications of NLS
and maybe doing a deeper examination of metaprogramming in general, but that has to
come later. I need a break from this. And hey, this is Advent of Computing, after all, not Advent of NLS.
Before I sign off, I have a little announcement, since I guess the end of the show's become the announcement corner.
I'm right now gearing up for a new bonus episode over on Patreon. It should come out sometime in December, but leading up to that, I have a poll to help me decide what my listeners want to hear.
So if you want to get in on the poll for next month's bonus episode and get my other bonus
episodes, I think there's four or so now, then you can head
over to my Patreon. All the links to everything are on my website, adventofcomputing.com.
The show is fully listener supported. That's something that I don't say as much as I should.
So if you want to help with the support, then head over to adventofcomputing.com,
click on the link to Patreon, and go and donate. You can also support the show through buying merch or by just telling your friends to listen. If you have any comments
or suggestions for a future episode, then go ahead and shoot me a tweet. I'm at adventofcomp
on Twitter. I love hearing from listeners and I love talking to you guys. Y'all are really what
makes the show special. Anyway, I'll be back in two weeks' time with another episode.
Thanks for listening to Advent of Computing, and have a great rest of your day.