Advent of Computing - Episode 61 - FRESS and Practical Hypertext
Episode Date: July 25, 2021

Hypertext has really become a core offering of daily life, and defined the face of the Internet for decades. But the links and formatting we know so well only make up part of the story. Today we are looking at FRESS (the File Retrieval and Editing SyStem), a hypertext system developed at Brown University at the tail end of the 60s. What makes FRESS so crucial in the history of hypertext is that it was extensively studied. Multiple experiments were carried out to test if FRESS, and hypertext in general, had a place in classrooms.

Some useful sources from this episode:
https://sci-hub.do/10.1162%2F109966299751940814 - 1999 paper on FRESS and hypertext in general by Andries van Dam
https://archive.org/details/VanDamFinalReport1976 - Final experimental report
https://archive.org/details/AndyVanDamHypertextFilm - Short film on the FRESS experiment
Transcript
Hypertext has become this strangely casual part of our daily lives, at least for any
denizens of the internet.
We don't really think about the underlying mechanics at play, it's just always there.
We see an underlined line of text and poof, we're on a new page.
We just don't question the technology, why would we?
But perhaps more importantly, have you ever questioned if hypertext is actually
useful? Now, I will admit, on the surface, that may seem like a ridiculous question,
but I think it's a really important one to ask, and I think it's become especially relevant since
2020. With so many schools and universities switching to remote learning, hypertext has become even more crucial in classrooms. Now, luckily, if you know where to look at least,
we do have a concrete answer. Hypertext is, in fact, a helpful technology. At least,
that's the short answer. More specifically, it has been shown to be an effective teaching tool. And I'm not talking
in broad generalities and anecdotes here. This is backed up with actual science. However, the
research might be a lot older than you think. It's definitely older than the internet. So let me
introduce you to FRESS, a hypertext system developed in 1968.
This system provided the first concrete proof that hypertext was, in fact, a practically useful technology.
But like I said, that's just the short answer.
The long answer has a lot more interesting twists and turns.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 61, FRESS, a practical hypertext. We're finally getting back to my techno-utopian series.
Last time, we looked at Ted Nelson, a visionary that spawned,
almost fully formed, the modern concept of hypertext decades before the internet ever existed.
His work on Xanadu, a theoretical hypertext system, is nothing short of miraculous.
It really makes you think that the dude had a crystal ball somewhere. Even though his writing may be a little inaccessible to a casual audience, Nelson was somehow able to see the future of networked
information with amazing clarity and accuracy. And crucially, Nelson saw hypertext and hypermedia
as a way to build a better future for all humanity. In his mind, it was a possible utopia rendered in binary digits.
Xanadu has also been called the greatest vaporware program of all time, mainly because it's never
been fully realized or released in any usable form. So Nelson's own work remains firmly in
the realm of theory. However, we do have some idea of what a real-life Xanadu may look like.
Today, we're going to be looking at the File Retrieval and Editing System, or FRESS.
Now, those last two S's are both taken from system in the acronym, so that's a bit of some fun computer nerd stuff.
This project was heavily influenced by Nelson and, I mean, really heavily.
The key, crucial difference between Xanadu and FRESS is that the latter system didn't only exist, but it saw actual use.
Developed at Brown University, FRESS was in service for something verging on a decade.
In that time, it was used to write and typeset books, manage libraries of data, and teach classes.
But what really gets me excited about FRESS is that we have an honest-to-goodness scientific study that was conducted to determine its efficacy in teaching.
One of Nelson's huge points about hypertext was that it would expand the horizons of the human mind.
Another early practitioner of the linked arts, Doug Engelbart,
would say that these types of systems were made for, quote,
augmenting human intellect.
With FRESS, we get a chance to see how those theories work in practice,
and not just casually, but systematically.
So, let's get back into the
realm of hypertext. Just how close was FRESS to a living example of Xanadu? Where does Nelson
connect up into the whole project? And does FRESS prove that hypertext is all it's cracked up to be?
Before we get into FRESS proper, we need to talk more about Ted Nelson. For a deep dive,
I'd highly recommend you go back and listen to my episode on Xanadu, since this isn't going to be
the complete story. Instead, I just want to cover some key details that will matter for our
discussion of FRESS today. Nelson may or may not have invented the idea of hypertext.
Now, it gets complicated with chronology and such,
but he did coin the term.
That much is crystal clear.
It first shows up in a 1963 paper where Nelson describes a system of linked and enriched textual data.
Once again, this isn't the first time we run into discussions
of linked webs of information,
but Nelson provides an amazingly
modern and full description of hypertext, name included. A description is about as far as Nelson
would get, at least on his own. You see, he wasn't really a computer person, per se. At least Ted
wasn't a programmer. He preferred to think of himself as a generalist. While this gave him the freedom to explore really futuristic ideas and really divorce himself from the existing biases in the industry,
it also meant he had to find help to implement any of those ideas.
The first major partnership Nelson would forge was with Andy Van Dam, a professor of applied mathematics at Brown University.
Don't let the title fool you, Van Dam was very much a computer person.
This was all occurring in the 1960s, which, as strange as it may sound,
was still really early for the field of computer science.
Van Dam had recently graduated from the University of Pennsylvania with a PhD in comp sci.
At the time, only one other person had graduated from that school with a doctorate in that discipline.
So, while his title at Brown said mathematics,
he was, for all intents and purposes, a computer scientist.
Nelson and Van Dam had been classmates during their undergraduate studies.
In 1967, the two reconnected at the Spring Joint Computer Conference,
and they got to talking about
hypertext. From everything I've read, I don't think it's ever been possible to talk to Ted
Nelson without it turning into hypertext, especially in those early days. Van Dam was
quick to see the use of such a system, and he invited Nelson to come up to Brown to test out
his ideas. This collaboration would lead to the hypertext editing system,
better known as just HES. Now, this is where Nelson's precise vision of hypertext starts to
really matter. The modern-day internet shows us just one take on hypertext. The OG formulation
is actually a good deal more complex and capable. In general, Nelson conceptualized hypertext as information
organized in such a way that it would be impossible with physical media. He's looking at computers as
a way to transcend the printed word. That meant links, but also more advanced features.
To rapid fire in no particular order, we have bidirectional linking, version control,
embedded data from remote documents, full data searching, and write and access control mechanisms.
Nelson was envisioning a way to handle data totally divorced from existing methodology.
But that's all theory. HES was the first place where the rubber really hit the road.
The research group at Brown
implemented a lot of his magic formula, but not all. HES had one-way links between chunks of text.
There was a back button, but that wasn't full bidirectional chains of links. Data from one
file could be referenced and inserted into another. But that was about the extent of things.
We aren't dealing with a full
hypertext system that would live up to Nelson's vision. That being said, HES was far from a
failure. Van Dam was interested in hypertext for a specific use case, education. A timeless fact,
and one that I run into every time I research about computers in education is that current teaching
methods are somehow flawed. You can go to any time period and you'll find someone saying that
almost verbatim. Around the 50s and 60s, researchers and teachers were trying to figure
out how to throw computers at some of these issues. HES was one of many such attempts. So,
accordingly, the core function of HES was to manage data in the classroom.
This took two main forms, presenting educational material to students and providing tools for
authoring hypertext.
I mean, editing is in the name after all.
This first part is the simplest use case.
HES allowed students to browse hypertext documents. They
could bounce from text to citations, references, and footnotes with relative ease. In theory,
an entire library could be encoded into HES. The benefit here was that students didn't need to stop
what they were doing and go track down a reference to another work. The unproductive task of scrounging
up other sources was automated,
thus allowing for more engaging media. It also meant that you could have just more sources and
citations. If it's easier to pull up a source, then why not add as many as possible? Hypertext
on HES also allowed for some things that you just couldn't do with books. Trails of connections could
be built up from one topic to another. In the era of the internet, this is a no-brainer. Of course you can link one
page to another and on down to infinity. But in 1968, this was a really fresh idea. It broke from
the printed page. According to Nelson, it brought hypertext more in line with how humans naturally
organized thought. By ditching the entire concept
of printed media, a lot more was just possible. The other side of HES was, of course, editing.
A user could create their own hypertext documents, complete with links and embedded data.
And they could do that all within HES itself. This self-hosting nature was another huge step
forward. You didn't need to ever leave your
terminal, or even load a different program. HES was a one-stop shop for creating, editing,
and consuming hypermedia. This editing set of features was also where friction arose in the
project. You see, HES was able to handle typesetting. You could format hypertext for
later printing.
And there were rules for how different types of links and structures were translated when
printed on a page.
Is that a controversial feature set?
I mean, to most people, it's just more minutiae of data editing systems.
Of course, users will eventually want to print something out.
In 1968, it wasn't exactly easy to get computer time,
so it just makes practical sense to be able to print out what you're working on for later use.
Typesetting is also a powerful feature for publishing.
It made HES usable for formatting everything from magazines to books to paper and back up to hypertext.
But for Nelson, this was nothing short of a betrayal. His vision of
hypertext was all about transcending existing media, especially printed media. Including
typesetting in HES, at least in his mind, massively compromised the system. The overall design still
had connections to printed media, no matter how tenuous they were.
There were other contributing factors, but the entire to-print-or-not-to-print debate
would help drive Nelson out of the project. Ted Nelson wasn't the only one who was displeased
with HES. Van Dam had his own issues with the system. HES was controlled by a combination of light pen taps
and typed commands. In theory, it was intuitive, but there was a limit to that. Van Dam would recall
in a later ACM lecture, quote, We had already come to the point where Ted, who designed it,
was able to go through the hypertext pretty well, but some of the rest of us had difficulty
following it. It was not
exactly obvious where you were. This, of course, is the classical lost-in-hyperspace problem,
which has been mentioned by one and all. I won't elaborate on it here because it's amply discussed
in the proceedings. We already started getting the notion that the richer the hypertext,
the greater the navigational problem." You could pack a lot of information into HES. The entire thing was designed to support massive
troves of data, disk space allowing, of course. But as the library increased in size, as links
were more aggressively used, it became harder to manage information. Nelson was fine because one, he designed the system,
and two, he basically thought in hypertext. Throw someone without any experience at the system,
like a student, and they were bound to get lost in its intricacies.
From context, it seems that another part of the issue was, well, context. HES implemented one-way links. That's what we use today on the internet. A link points to some target destination but doesn't include information about its source.
There was a back button, which helped, but you couldn't tell where you were without navigating
back and forth. Adding to the issues, HES was built on Nelson's concept of a freewheeling system of pointer-based data.
While that gives users unparalleled flexibility,
it meant that there was no inherent hierarchy in HES.
There was no predefined structure that you could fall back on.
One document may be linked together as a hierarchical tree of data,
another could be linked up to form a chain of random consciousness.
You had to bring all your own structure to the table.
And if you didn't know how a document was set up and how nodes were linked, you couldn't guess.
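Just to make that navigation problem concrete, here's a tiny sketch of one-way linking in Python. To be clear, this is my own illustration of the idea, not anything from HES itself; all the page names and structure are invented.

```python
# Sketch of one-way linking with a back button. Each link points at a
# destination only; nothing records where a link came from. The back
# stack is your only breadcrumb trail, which is part of why it was so
# easy to get lost. All page names and structure invented.

pages = {
    "home":     ["poem", "critique"],   # each page lists outgoing links only
    "poem":     ["critique"],
    "critique": [],
}

history = []      # the back button's memory
current = "home"

def follow(target):
    global current
    history.append(current)    # we remember where we were...
    current = target           # ...but the target page never learns its sources

def back():
    global current
    if history:
        current = history.pop()

follow("poem")
follow("critique")
back()
print(current)   # -> 'poem'; lose the stack, and 'critique' offers no way home
```

The point of the sketch: the back stack is the only record of where you've been, because the pages themselves never know their sources.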
This leads us into one of the huge touchstone events in computing history.
In 1968, Doug Engelbart presented the mother of all demos.
That's what it's actually called in the literature. Van Dam was present to catch a preview of the
future at this show. Sometimes shortened to just the demo, this was the first time a hypertext-like
system would be shown off to a large audience. Engelbart had been working on a very similar
line of research to that of Ted Nelson, but he took a different approach. The system he produced,
called the Online System, or NLS, was a lot more than just hypertext alone.
NLS was a massive and expansive system, one that I've covered a little bit before, but
I definitely need to come back to in this
series. If I had to summarize, then I'd say at its core, we have three big features. Hypertext,
graphics, and timesharing. The hypertext part, or I guess more accurately, hypermedia,
is something we've already been looking at. A key difference was that NLS presented more structured data.
Everything was in some kind of hierarchy, and there were limits to the size and complexity
of entries inside that larger structure. Not as freewheeling as HES, but perhaps easier to wrap
your mind around. The graphics component was where NLS was really dazzling. The 1968 demo was the first public outing of the computer mouse,
and I don't mean some early primitive example of the species.
The mouse used on NLS is the basis for what we use to point today.
It's a palm-sized puck with a series of buttons along the top edge.
Dragging the device across a desktop moves a pointer on a graphics display,
clicking the different buttons fired off different actions. For the first time,
you could point and click your way through hyperspace. And this wasn't just restricted
to text. You could embed images and even draw inside documents on NLS. The system was designed
to really get the most out of these new graphics displays.
The final big category, timesharing, may be a little less flashy in comparison.
However, as far as I'm concerned, this is the real meat of the matter.
NLS was a multi-user system.
All users shared a single mainframe between them.
In 1968, timesharing wasn't exactly fresh off the blackboard,
but it was still a pretty new idea. There were other people doing timesharing, on a much larger scale. But what makes timesharing on NLS particularly interesting is when you mix it with
the other factors at play. Collaboration has gone hand-in-hand with hypertext since the early days
of theory. It was never really described as a solo activity. Sure, hypertext is useful for keeping
notes and personal files, but the real power of the medium comes from sharing information.
If hypertext is supposed to be this transcendent active medium, then you need someone to transcend with.
The earliest expression of this idea goes back, surprise surprise, to Vannevar Bush and his
theoretical memex. Bush described storing linked pages of data on microfilm, then swapping reels
with friends. Engelbart took this idea into the digital age. NLS supported live collaborative document editing.
Multiple users could, in real time, view and modify the same document.
This wasn't just clumsy collaboration where a file could be shared, either.
It was a full-on collaborative workflow.
You could watch someone type in new data.
Ancillary hardware even let users chat over a video feed while
working on the same page on NLS. The demo is a watershed moment because it brought all this
technology together for the first time in public. Engelbart was able to give a demo of the future.
It was shocking, and it was inspiring. Van Dam would go back to Brown University with a healthy
injection of new ideas. Nelson was out of the picture, but Van Dam would start to form his
own hypertext system. Alright, we've hit this transition point, so I think it's a good place
to take a quick sidebar and address some minutiae before we continue. Hypertext systems may initially sound like
text-based software, but that's not really the case. NLS is a good counterexample of that thought.
It's resplendent with simple graphics and even a pointer. You have to have more than just a text
display for that kind of stuff. HES may have sounded like a purely text mode affair, but
it also fits more into this realm of graphics. That being said, to the casual observer, HES,
or even NLS, looks remarkably text-based. The bulk of the system is hypertext, which is text.
Most of what you view and edit is made out of just raw text characters. That said,
these systems deviate heavily from contemporary text-based terminal systems. In 1968, the normal
interface was a text-based command line. Emphasis here on the word line. On a teletype terminal, commands had to be entered one line at a time.
Files were edited one line at a time.
And crucially, data was printed out one line at a time.
You can't rewind a paper feed and change something that's already been printed.
All outputs are set in ink.
Even as we start to see a transition away from these hard copy
terminals that physically print on rolls of paper, we still are using really similar interfaces.
CRT terminals can theoretically go back up a line, but all the software is still written for these
old teletypes with physical paper. Both HES and NLS used specialized graphics display terminals
to get out of this catch. This opened up a new horizon of possibilities, but it came with its
own set of problems. Advances always seem to come with problems. For HES, this meant it needed
special terminal hardware to operate. A teletype would not work with HES.
Even a fancy new Glass TTY wasn't cut out for the task. The other downside, and one that I don't
see mentioned often but I think is worth bringing up, is that HES and NLS required new programming
practices. Van Dam and his colleagues didn't just print out lines of text.
They had to manage data in two dimensions for the first time. You're no longer able to just say
print this text. You're telling a computer to render characters at a given location in two-dimensional space. So just keep that in mind as we continue. Early hypertext systems weren't just conceptually new, they were also a programming challenge in a very real way.
Now, with that tangent aside, let's get back to the hard and fast chronology.
Van Dam started to adjust his vision of hypertext after seeing the demo.
In late 1968, early 1969, he started on a second try, and this is where FRESS proper begins.
To kick things off, Van Dam had a simple plan, from the same ACM lecture I used above,
quote,
My design goal was to steal or improve on the best ideas from Doug's NLS and put in
some things we really liked from the hypertext editing system, a more freeform editing style, no limits to statement size, for example. So there's two goals. We can bring that up to a trifecta by adding in another little implicit
goal. That's to make the system well-suited to an educational environment.
From these foundations, FRESS would start to take shape.
I want to start on the technical side of things because, well, I like the technical side of things.
Plus, we just finished talking a lot about hypertext's more theoretical aspects, so
some hard-and-fast implementation details could do us all a bit of good. One of the quirks of HES was that it ran on what's known as a partition. That's fancy
ye olde IBM lexicon for a virtual machine running inside a mainframe. In other words,
HES wasn't a program that ran under some operating system. It was kind of its own thing. While not
the worst design choice, this meant that HES didn't really play nice with anything else.
Adding to the mix, we get the input-output routines. HES was designed specifically to
work with an IBM 2250 graphics display unit. This device gave all the graphics and input functionality the team
needed, but we're dealing with a really specific device. HES only functioned in this really slim
space. This was a distinction shared with NLS. Engelbart's system did run on an off-the-shelf
mainframe, but there were extensive modifications
to make everything work. Closed-circuit TV feeds had to be laid to connect up graphics terminals.
All software down to the operating system was custom. It just lived on its own little island.
For HES, that was fine, but it limited the system's mobility. It was able to live as this neat, isolated prototype.
To be frank, that left HES as a dead end.
This was one of the bad parts of HES that Van Dam wanted to do away with.
But how does that contribute to the larger design goals of FRESS?
Well, it had to do with that implicit third one that I brought up.
For FRESS to be used in a classroom environment, it had to be accessible. And I don't just mean
that it had to be easy to use and have a slick interface. I mean accessible as in, you know,
physically being able to get access to FRESS. To make that happen, Van Dam and the team ditched the old standalone
codebase and started fresh. FRESS was designed from the beginning to run as an application under
IBM's VM/CMS. That's an old multitasking mainframe operating system. Just that choice of environment
would have a huge impact. It meant that FRESS could more easily handle
multiple user sessions since, you know, it had support from an existing operating system that
could do timesharing. You just have to write less code if you can call out to VM/CMS or
whatever for some help with basic functionality. The other huge technical difference between FRESS and other contemporaries
was how Van Dam chose to handle inputs and outputs. The design for HES was very deeply tied
to one specific IBM terminal. NLS used highly specialized custom hardware. Even roughly similar systems like PLATO tended to focus on special-purpose
I.O. devices. FRESS didn't follow that path. It was designed to handle somewhat generic
input and output devices. A lot of different terminals could work just fine with FRESS.
Now, of course, that doesn't mean that a text mode terminal would function the same as a big, expensive graphics display device.
A normal glass TTY was able to display a simple window into hyperspace, with commands and cursor movement all handled by a keyboard.
Moving up to the more sophisticated hardware, you get more features.
The cutting edge that FRESS took the most advantage of was the Imlac
PDS-1. That was a pretty tricked-out graphics machine that was complete with a keyboard,
light pen, CRT display, and even a small computer to keep everything running.
This compatibility was accomplished using what's known as a virtual terminal.
Now, we've talked about abstraction before. The basic idea is that
abstraction allows you to deal with some underlying hardware in a more generic manner. It's also a
great $5 word to throw around to impress your friends. Programming languages are full of
abstractions. You usually don't tell a computer to specifically plot this series of bitmapped pixels to a specific place in graphics memory.
Instead, you just say print hello world.
In FRESS, virtual terminals were an abstraction used to gloss over the specific differences of terminal hardware.
To the system, each logged-in user just had some kind of terminal assigned to them.
Input would come from keyboards,
light pens, maybe both. Output would just be sent to this virtual terminal device that was set up
somewhere inside FRESS. It was only at the final stage, just as signals were making their way to
the user's screen, that the output would be modified and tailored so the terminal could understand it.
But in general, each terminal looked the same to FRESS from the inside. Printing to an Imlac or some
generic CRT used all the same functions. The practicality of the system should be apparent.
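If you want to picture how that kind of abstraction looks in code, here's a minimal sketch of the virtual terminal pattern. This is my own illustration, not actual FRESS code; FRESS obviously wasn't written in Python, and all the class and method names here are invented.

```python
# Minimal sketch of the virtual terminal pattern: core code writes to one
# generic interface, and a device-specific driver translates the output at
# the last moment. All names invented for illustration.

class VirtualTerminal:
    """What the core system sees: just some kind of terminal."""
    def show_text(self, row, col, text):
        raise NotImplementedError

class GlassTTY(VirtualTerminal):
    """A plain CRT terminal: a simple window into hyperspace."""
    def show_text(self, row, col, text):
        print(f"[tty] cursor to ({row},{col}), write {text!r}")

class ImlacDisplay(VirtualTerminal):
    """A fancy graphics terminal: same call, richer rendering."""
    def show_text(self, row, col, text):
        print(f"[imlac] draw {text!r} at the pixels for ({row},{col})")

def display_page(terminal, lines):
    # This code never knows which hardware it's actually talking to.
    for row, line in enumerate(lines):
        terminal.show_text(row, 0, line)

# Each logged-in user just has *some* terminal assigned to them.
for users_terminal in (GlassTTY(), ImlacDisplay()):
    display_page(users_terminal, ["Ode on a Grecian Urn", "Thou still unravish'd bride..."])
```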
An Imlac or an IBM terminal wasn't exactly cheap. Imlacs sold for over $8,000 each, and that's in 1960s money.
I don't need to do an inflation calculation to tell you that's a lot of scratch.
You aren't really going to see a classroom tricked out with these machines.
To be fair, you probably didn't see many classrooms full of terminals in general in this period,
but my point stands. The flexibility to connect any terminal to FRESS opened up a lot of doors. In practice, a fancy Imlac could be sitting
in a computer lab with scheduled access hours, while multiple cheaper terminals could be set up
in easier-to-access locations around campus. So there we have actual accessibility. So let's say you finally
hop on one of these cool terminals. What exactly are you going to see? Going forward, I'm going to
be working off the Imlac example since that gets us into the cooler future bits of FRESS. And also
most of the docs are written about graphics display terminals. Anyway, this is where we really start to see the influence of NLS.
And sadly, this is also where we get into an annoying gap in the sources.
Back to the old Advent of Computing standby.
By far, the most useful resource to me has been Brown University's archives.
It has everything from notes about the
development of hypertext to manuals covering FRESS. What it doesn't have, and what I haven't
been able to track down, are clear and well-taken screenshots of FRESS in action. There's a few
frames in a documentary that I'll touch on later, and there's a handful of pictures of a screen running HES. By far, the best view we have are these kind of charming little character art diagrams
in one of the FRESS manuals. The diagrams are fine, but I don't know, it just kind of bugs me.
I like big glossy photos sometimes, and diagrams can only show us so much. But here's
the big reason that these diagrams matter. FRESS was a multi-windowed environment, and
I work better off visuals. Like I mentioned, this is partly drawn from NLS. Engelbart's
system could also split the screen up into multiple sections, each displaying
data independent of the others. The FRESS implementation is interesting because Van Dam
was able to cut some corners. As a rule, these types of systems aren't the windowing systems we
use today. FRESS, and for that matter NLS, displayed tiled windows. They didn't overlap, you couldn't really drag them around, the screen was just split into subdivisions.
In FRESS, you could have up to four windows plus a lower section used for keyboard commands.
However, you couldn't put your windows anywhere on screen.
FRESS provided selections of presets for window placement, each with their own number.
For instance, if you wanted one big window, then you could switch to the 1A configuration. 4A would
break the screen up into four equal quadrants and so on. On the surface, this sounds kind of annoying, and I'm sure for some students it was.
However, I think this is a fine example of the types of compromises that make FRESS so interesting as a system.
Just the idea of windows was pretty new, and applying that to an already cutting-edge
hypertext system took things up to another level.
FRESS could have probably handled arbitrary
window setups. There was this really flexible text rendering system that was programmed into FRESS,
so it could wrap and display text any way you wanted. It was flexible enough that you could
have any shape or size windows you could think of. But that would add another layer of complexity for Van Damme,
for programmers, and for students. It was just a lot easier for a student to type out a command
to switch to the 1A configuration than trying to guess at the row column with height parameters
needed to properly display the page. The result is a mix of bleeding-edge features and pragmatic compromises. That's
something that Van Dam appeared happy to do in order to pursue his larger goals. But I'm not
entirely sure that an idealist like Ted Nelson would have been comfortable doing this.
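Before we leave the windows behind, here's a toy sketch of why presets are the pragmatic choice. The preset names 1A and 4A come from the sources; the geometry and everything else here is my own invention for illustration.

```python
# Toy sketch of numbered window presets. A student types one short code
# instead of juggling position and size parameters. Tiled windows only:
# no overlap, no dragging, just subdivisions. Geometry invented.

SCREEN_ROWS, SCREEN_COLS = 40, 80

# Each preset maps to a fixed list of (top, left, height, width) tiles.
PRESETS = {
    "1A": [(0, 0, SCREEN_ROWS, SCREEN_COLS)],        # one big window
    "4A": [(0, 0, 20, 40), (0, 40, 20, 40),          # four equal quadrants
           (20, 0, 20, 40), (20, 40, 20, 40)],
}

def set_windows(preset_code):
    if preset_code not in PRESETS:
        raise ValueError(f"unknown window configuration: {preset_code}")
    return PRESETS[preset_code]

print(set_windows("4A"))   # -> four quadrant rectangles, no guesswork
```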
Most of FRESS was controlled via this magic fifth window, a single line at the bottom of the screen dedicated to command line inputs.
It'll still be a number of decades before we get away from keyboard-heavy systems.
Anyway, from there, a user could load files, start new drafts, change the windowing configuration, and a whole lot more.
There were also facilities to create macros, which will be important later.
A user could also use one window as a so-called command window. This was almost exactly what it
sounded like, a window with a simple list of commands. Pointing and pressing the light pen
would execute the selected option. While that's interesting and there was a lot of depth to these
commands, it's not really new territory.
This is just showing off more of the compromises and what FRESS pulls in from old systems to make everything possible.
The actual cool part is what went on inside the upper four windows.
That is, FRESS's implementation of hypertext.
Everything so far has been nice set dressing around the main course. In a 1999 article,
Van Dam explained that there are four key issues that any hypertext system must address.
That list goes like this, quote:
1. The size and internal structure of nodes or documents.
2. The availability of alternate views of information.
3. Bidirectionality of links.
4. Link classification as a means towards a rhetoric of linking.
End quote.
Now, first off, I think this is really showing that hypertext isn't just something that came
out of nowhere. The development of hypertext systems in general was very systematic,
there was a whole lot of theory behind it,
and just a whole lot of work.
The internet isn't just some aberration that appears from nothing.
For our purposes right now,
I want to use that list as a way of examining hypertext in FRESS.
I think that should help us
restrict and focus the scope of our discussion a little. So let's start with one, and we'll
meander our way through the list. First off, FRESS was designed such that each node in a hypertext
web could be any size. That was a holdover from the freewheeling days of HES, but there was a purpose to it.
I'm paraphrasing a little bit here, but both Van Dam and Nelson explain time and time again how a good hypertext system shouldn't impose limits on users, at least not without a good reason.
A computer can handle files of any size, so why limit yourself? Internal structure is also crucial here.
That is to say, what does each node look like on the inside? If you did a byte-for-byte accounting, then most of the hypertext in FRESS would be composed of, what else, but text. This is
supplemented with bits of markup, or at least something close to markup.
These commands are where we get to the formatting instructions and hyperlinks, or as FRESS calls them,
structure and jumps. We need to be sure to check our expectations. We aren't dealing with something
like HTML here. FRESS used a much simpler structure, and in general just had a different approach
than modern hypertext.
FRESS just doesn't have underlined links,
frames, and divs,
but there are some familiar faces.
In general, structure was denoted in FRESS
by commands that were sandwiched
between percent signs.
A jump, the rough equivalent of a link, started with
percent percent j. So while a lot different, we're already seeing this tagged structure forming.
Now, jumps were bi-directional, meaning that they tracked your starting page as well as your
destination. That also handily ticks number three off the big list of hypertext issues.
Additionally, jumps carried keywords and explainer information.
The explainer part is pretty simple.
It basically just explained where the jump was going to take you
without having to follow the jump.
Keywords are a little more interesting.
In practice, anything between percent signs was a keyword. These keywords
were generally displayed on screen, but they didn't have to be. You could assign multiple
keywords to a jump or to any structure in FRESS. It's roughly equivalent to the class attribute
in HTML if you're familiar with that system. The cool part is how these keywords were actually used.
In the command line, you could specify a query based on keywords,
and then use that query to fetch results, manipulate, or display data.
So let's say you have a document that lists everyone in your class.
Each student has a few jumps to some of their papers,
all with keywords set for the
assignment name and student name. With a simple command, a teacher could pull up every paper with
the week one keyword, or pull up every paper written by Sean. To put it another way, we're
dealing with a rudimentary database that just exists by virtue of designing hypertext. I think that should really give us a
hint at how complicated these systems were. Now, I don't know about you, but it sure sounds like
keywords also solve hypertext problem number four. Classifying links in a database-like structure
goes a long way towards forming some consistent rhetoric.
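Since those keyword queries basically turn the hypertext web into a little database, the idea is easy to sketch in code. Here's a rough Python approximation of the classroom example from above; the data model is my own guess for illustration, not FRESS's actual internals.

```python
# Sketch of jumps that are bidirectional and carry keywords, plus the
# keyword query from the classroom example. Structure invented.

from dataclasses import dataclass, field

@dataclass
class Jump:
    source: str                 # bidirectional: the jump knows its origin...
    destination: str            # ...as well as its target
    explainer: str              # where this jump leads, without following it
    keywords: set = field(default_factory=set)

jumps = [
    Jump("roster", "sean-essay-1", "Sean's first essay", {"week one", "Sean"}),
    Jump("roster", "sean-essay-2", "Sean's second essay", {"week two", "Sean"}),
    Jump("roster", "ada-essay-1", "Ada's first essay", {"week one", "Ada"}),
]

def query(keyword):
    """Pull up every jump tagged with the given keyword."""
    return [j.destination for j in jumps if keyword in j.keywords]

print(query("week one"))   # -> ['sean-essay-1', 'ada-essay-1']
print(query("Sean"))       # -> ['sean-essay-1', 'sean-essay-2']
```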
Of course, jumps weren't the only structure in FRESS. Footnotes and annotations offered a simple way for users to slot in some extra data.
Any jump, annotation, footnote, or really any structure that would lead to more data could be
opened in any window on your display. So let's say you wanted to use window 3 for footnotes
and keep the main text in window 1.
You could simply do that by specifying
which window you wanted the footnote to appear in.
That would be done on the command line.
Once again, the system is really showing off that it's flexible.
The final relevant structure is the splice.
These were hidden jumps that were automatically followed. Using splices, it was possible to link together nodes to form a larger
document, or you could import a passage from another document altogether. Now, this is something
that I really, really want in HTML. I don't mean iframes or templating systems,
I mean native web functionality to stitch together a larger document from parts all over the web.
You can kind of do that, but you have to do it in a bit of a hacky way on the modern internet.
This is also a feature that's straight out of Xanadu,
so chalk another point up for Ted Nelson.
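To show what makes splices special, here's a small sketch of the stitching behavior: hidden jumps that get followed automatically at render time. Once again, this is an illustration of the concept, with invented names, not FRESS's actual implementation.

```python
# Sketch of splice resolution: a node is a list of parts, where a part is
# either literal text or a hidden jump to another node. Rendering follows
# splices automatically, stitching a larger document from pieces.

nodes = {
    "essay":    ["Keats opens with: ", ("splice", "stanza-1"), " That image recurs."],
    "stanza-1": ["Thou still unravish'd bride of quietness..."],
}

def render(node_id, seen=frozenset()):
    if node_id in seen:                  # guard against circular splices
        return "[circular splice]"
    out = []
    for part in nodes[node_id]:
        if isinstance(part, tuple) and part[0] == "splice":
            out.append(render(part[1], seen | {node_id}))  # follow the hidden jump
        else:
            out.append(part)
    return "".join(out)

print(render("essay"))
```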
Number two on the big list of hypertext, alternative views of information, is another feature that modern systems lack.
Accordingly, it may sound a little bit strange to internet users.
FRESS let users choose how hypertext was rendered.
What I've been describing so far was a so-called WYSIWYG mode, as in what you see is what you
get.
In this mode, splices are followed.
Links and tags are displayed as items waiting to be selected, and all information is fully
rendered and formatted.
You could also drop out of the fully rendered mode to view
structure. This let you view the formatting instructions and easily modify them. The more
exciting view was FRESS's outline. In this mode, the actual text part of hypertext was hidden.
Instead, users were presented with just the overall structure of their document web.
This is another case where I really, really want some screenshots. Van Dam describes it as a
straight NLS ripoff, so we can get a good idea by looking at that contemporary system.
In NLS, the outline view functioned like an automatically generated table of contents.
It showed headers for each node of a hypertext web.
You could expand those headers to view any structures underneath it,
so you could plainly see the hierarchy of your web.
The key difference here is that NLS structured everything under this rigid tree-shaped hierarchy.
Every node had to have a parent.
FRESS was more freewheeling. Nodes didn't have to follow any structure inherently.
So Van Dam's version of an outline view would have looked a little different, but
I just don't have the pictures to show how. In practice, this outline view plus the multiple
windows and bidirectional links
helped FRESS solve the lost in hyperspace problem.
You didn't have to lose your place to follow a link.
You could always follow links back up the chain.
And if you were still lost, you could bring up a map of all your pages.
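For a rough sense of what a map like that involves, here's a sketch of generating an outline from a freewheeling link graph. This is my own guess at the general shape, with invented data; without those screenshots, I can't claim it looks anything like the real FRESS outline view.

```python
# Sketch of an outline or map view over a freewheeling link graph. With
# no inherent hierarchy, the "outline" is just a traversal printing node
# names with indentation. Data invented for illustration.

web = {
    "Syllabus": ["Ode on a Grecian Urn", "To Autumn"],
    "Ode on a Grecian Urn": ["Critique A", "Student comments"],
    "To Autumn": ["Critique A"],
    "Critique A": [],
    "Student comments": [],
}

def outline(node, depth=0, seen=frozenset()):
    print("  " * depth + node)
    if node in seen:        # links can form cycles; don't loop forever
        return
    for target in web[node]:
        outline(target, depth + 1, seen | {node})

outline("Syllabus")
```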
Now, I want to touch on one final piece that goes beyond Van Dam's four key hypertext issues.
FRESS had an undo operation that worked anywhere.
In fact, it's likely, and it's been reported widely, that FRESS was the first editing system with an undo.
How did it handle that?
Well, this takes us full circle back to Ted Nelson.
You see, FRESS had a secret version
control mechanism. As you edited hypertext, a copy of your file would be periodically saved.
Hitting undo would just revert to the most recent copy, thus blowing away any unwanted changes.
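Mechanically, that kind of undo is simple to sketch: keep periodic copies, and undo means restoring the latest one. Here's a rough Python illustration, with all the details invented.

```python
# Sketch of snapshot-based undo: periodically save a copy of the text,
# and undo reverts to the most recent saved copy, blowing away any
# unwanted changes made since. Details invented for illustration.

class Editor:
    def __init__(self, text=""):
        self.text = text
        self.snapshots = []

    def checkpoint(self):
        # In the real system this happened periodically; here it's manual.
        self.snapshots.append(self.text)

    def edit(self, new_text):
        self.text = new_text

    def undo(self):
        if self.snapshots:
            self.text = self.snapshots.pop()

doc = Editor("Ode on a Grecian Urn")
doc.checkpoint()
doc.edit("Ode on a Grecian Urn -- my ill-advised rewrite")
doc.undo()
print(doc.text)   # -> back to the saved version
```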
On its own, undo is a cool feature. It's definitely really useful.
It speaks to how Van Dam was trying to make FRESS a user-friendly system. In context,
it shows just how close FRESS was getting to Ted Nelson's vision of hypertext. We can even
count the ways that the two intersect. FRESS had bidirectional links, version control, splices, no implicit hierarchy, and unlimited document sizes.
Those are all key features of the theoretical Xanadu system that Nelson was constantly after.
This doesn't just show how Nelson really hit the hypertext nail on the head,
but it also speaks to his direct influence on FRESS. What I'm getting at here is that despite Nelson not having any really direct involvement with the development of FRESS, this may be the
closest thing to Xanadu that was actually built in this time period.
This takes us to the next big question mark in this saga. We have FRESS, which is close enough
to Xanadu in all but name. If anything was going to put Nelson's ideas to the test, it would be
FRESS. So what was it like to use? Did it actually augment human intellect like NLS was planned to?
Was it a liberating tool like Nelson theorized? In a lot of cases, this would be where we run
into conjecture. Sources on day-to-day use of many computer systems, especially from so far in the
past, are really hard to come by. I mean, we don't usually take notes about our computer use and preserve them for posterity.
The same goes for people in the 60s or even 70s or 80s.
At least, that usually doesn't happen.
FRESS is a very special case, and this is a huge reason why I wanted to talk about this
system in particular.
You see, Van Dam conducted a few honest-to-goodness scientific studies on the use of
FRESS in the classroom. I'm talking grants, I'm talking experiment design, and I'm talking
quantitative results. So we can start to actually answer some questions with real scientific rigor.
There were a few experiments carried out using FRESS, but we're going to focus on a study that
was planned in 1974 and then carried out through the 75-76 term at Brown. It has the most sources
to go off, so I just think it's the best option to look at. Incidentally, the character art diagrams
I mentioned earlier came from this study's paper trail. The specific goals of this study were to
test if FRESS, used in conjunction with traditional teaching methods, could improve student outcomes.
The first big hurdle was finding someone willing to teach using a computer.
At the time, that was a pretty scary prospect.
But strangely enough, Van Dam found a sympathetic ear in the English department.
Van Dam started reaching out to interested parties in 1974 and eventually came into contact with Dr. Robert Scholes, an English professor.
In later interviews, Scholes explained that he
was growing concerned with a drop in student literacy, and he worried that existing teaching
methods weren't as useful as they used to be. So he was eagerly drawn to the hypertext banner.
The target of this new technology was set as English 16, an elective course at Brown that covered poetry comprehension
and criticism. The key parts here are that this course wasn't needed for a degree, it sounds like
it was just an extra elective. Plus, the course was about understanding poetry, an activity that
involved heavy reading but not that much writing. So let's move on to the experiment design. Van Dam,
Scholes, and a handful of other co-conspirators selected one of the English 16 sections that was
currently taking place. They specifically chose a class with a wide mix of majors and a good mix
of students with and without computer experience. The class was then split into a control and an experiment group.
The report spends actually a whole lot of time explaining that inclusion in the experiment group
was totally informed and voluntary. Each student was only a victim of their own choice.
English 16 was broken up into multiple units, each covering a single poem. Students would get three sessions with that poem.
In the first session, they got to do a dry reading and compose some comments and analysis.
In the second, students were given more background information and context for the poem.
They made a second reading and then followed prompts to reassess their analysis.
The final session introduced critical material about the poem
and guided the students through writing a final response. To add some variety, there were a few
poetry assignments in class. Students would get the chance to write their own poems and then
critique the work of other classmates. That's all pretty boring syllabus stuff, no offense to any
teachers out there. It just sounds like a run-of-the-mill
English class, which was kind of the point. The class had to be able to operate both with
printed paper and in hyperspace. The less complicated things got, the better. The control
group got to experience pretty much verbatim what I just described. Poems were presented on
printed sheets of paper. Responses
were either handwritten or typed on a typewriter and then turned in for credit. Everything was done
in a purely analog world. The experimental group got to experience something totally different.
For each session, the test students were allocated one hour on an Imlac terminal. Of course, this introduced
some fun logistical overhead since that time had to be scheduled, but they made it work. All the
material for the course was accessible on FRESS, complete with hyperlinks to related sources and
critiques. In all, the equivalent of 1,000 pages of text were loaded into FRESS at the beginning of the semester.
Now, this is where we can circle back to Van Dam's number four issue from the big bad list
of hypertext issues. That's the one about logic and rhetoric of linking. I'm coming back to this
part because, well, now we have a concrete example of what links and structure were being used for in practice.
The boring part here is just normal links.
To prepare for the experiment, Scholes and two teaching assistants
transferred the normal material used in English 16 to FRESS.
Links were added between poems, sources, and critiques.
In that capacity, links were just functioning as fancier footnotes
and citations. They provided a lot of context, but not much else. The interesting bit occurs
after the start of the class. Students in the experimental group were asked to add comments
and questions to poems in the form of FRESS structures. Macros, like I mentioned earlier, were set up to
make it easy to add in tags with predefined question and comment keywords. These macros
also embedded the student's login name as a keyword. Going a step further, students were
asked to read and respond to other students' comments and questions. A web of links was being used here to create discussions.
The logical connections were those of a conversation, not just a citation.
FRESS, and more broadly speaking, the use of hypertext,
was allowing for a new type of structured classroom engagement.
This was all possible thanks to how FRESS handled links. It also got us into
some weird terminology. Van Dam describes modern HTML linking as quote-unquote chunky, as in
chunky peanut butter. By this he means that in our current hypertext system, links default to
connecting large chunks of data. You can use markup to link
to any part of a page, but in general, that takes extra preparation. When you follow a link on the
internet, it takes you to the top of some big chunk of text. You have a nice substrate that
connects things up, but you still have to deal with chunks of peanuts. By contrast, Van Dam
describes the linking system in FRESS as quote-unquote creamy. A link in FRESS can specify
any location within a node without the need for prior setup. You don't have to tag a location in
a document with an anchor, you can just jump right to it. Students were able to reference specific locations
even down to the letter, all without having to get too deep into the technical side of FRESS.
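The chunky-versus-creamy distinction is easy to see in data structure terms. Here's a hedged sketch: a chunky link can only name a node or a pre-placed anchor, while a creamy link carries an exact position. The representation is mine, invented for illustration, not FRESS's.

```python
# Sketch of chunky vs. creamy linking. A chunky link can only target a
# node (or a pre-made anchor in it); a creamy link carries an exact
# character offset, pointing anywhere, down to the letter, with no prior
# setup. Representations invented for illustration.

document = "Thou still unravish'd bride of quietness, Thou foster-child of silence..."

# Chunky: you land at the top of the node unless someone added an anchor.
chunky_link = {"node": "ode", "anchor": None}

# Creamy: the link itself names the exact spot.
creamy_link = {"node": "ode", "offset": document.index("foster-child")}

def follow_creamy(link):
    start = link["offset"]
    return document[start:start + 12]   # jump straight to the referenced letters

print(follow_creamy(creamy_link))   # -> 'foster-child'
```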
So, this gets us to the big payoff. How did the experiment shake out? It was initially thought
that FRESS would help students more easily traverse sources and critiques, and the comments and questions would make a dialogue between students easier.
Additionally, Van Dam hoped that students would blaze their own trails of links between poems and sources.
That last part was key because building webs of links,
in other words, creating personal trails of thought,
that was a core feature of hypertext. It goes all the way
back to Bush and the memex. So if hypertext worked the way everyone thought, then students
should be going wild with links. Well, here's the thing. The results didn't really turn out as anyone expected. Students made some links, but Van Dam didn't see anywhere near the quality or quantity of linking that he expected.
This wasn't from a lack of understanding of FRESS. It turned out to be really easy for students to learn the system.
A large proportion of the experiment group was composed of students who had never used computers before.
But after an initial lesson, there were very few complaints about FRESS itself.
Students only seemed to complain about the school's mainframe having downtime for maintenance.
I'm sure partway through the semester, Van Dam was feeling a little concerned.
Sure, FRESS was working fine, but it didn't look like it was
doing much beyond what would have been accomplished in a normal class. But then he noticed something
strange. FRESS was starting to eat through a lot of disk space, a lot more than they expected.
Looking into the matter, Van Dam realized that while students may not have been making many new
links, they were creating a lot of comments, questions, and responses to each other.
By the time the semester was over,
students in the experimental group had written more than twice as much
as students in the control group.
The study's final report makes it clear that this was a wildly unexpected result.
English 16 was designed as a critical reading course.
There were assignments that involved writing, but that wasn't the main focus. So what was going on?
Primarily, students were engaging in dialogue with each other. FRESS was being used as a tool
to enable better communications, as a conduit to connect students and form a virtual community. More than just idle
chatter on the terminal, these dialogues were helping learning outcomes. In the final report,
Scholes wrote this, and just to make things a little clearer, it starts with a quote from a student.
You know, several of the students in the class said things about this poem that were just as helpful to me as anything I read in the professional critics.
End quote.
Scholes continues,
Now, when we reached that point, I thought we had succeeded, not just because one student had made a very good response,
but because the other students had realized that a student could make a response that was just as valuable
as the response made by professional critics to the poem. For me, this was an extraordinary moment,
and I think a very important one, and it says something about what can be accomplished with
this kind of course that can hardly be accomplished in any other way. End quote.
Of course, by this kind of course, Scholes means a hypertext course.
Remember that Scholes is primarily concerned with literacy.
From that standpoint, we can see how FRESS was helping students understand poetry.
But that's a very microscopic view of the technology and of the results, really. So let's zoom out and close with a few links of our own.
The FRESS study partly vindicated what Engelbart was saying about NLS.
By adding timesharing and more complicated framing,
hypertext could be turned into a collaborative medium.
In the Brown study, students weren't working simultaneously,
but FRESS did support that kind of operation.
So, that makes our first link.
FRESS showing that hypertext could improve collaboration in a real-world setting.
Now, there was more to this project than just a single study.
Part of the grant funding that Van Dam and Scholes received was used to make a short film
about the project. It basically rehashes parts of the study's report with fancy shots of students
and teachers. Plus, there's a few short interviews. One of those interview clips is with a student
from the experimental group. I'm paraphrasing once again here, but they basically explain how
they were initially intimidated by computers.
They didn't think a computer belonged anywhere near the humanities. Using FRESS changed their mind.
By the end of the experiment, this student was seeing how computers could be used in a wider range of ways. In other words, they were becoming a casual computer user. This ties us back to Ted Nelson.
One of his many theories of the utopian future
was that friendly hypertext systems would allow anyone to use a computer.
We're seeing this transition occur in microcosm at Brown.
So, that's our second link.
FRESS as a tool to allow non-experts to benefit from computers.
Finally, I want to bring this back to something I brought up in the first part of this series.
I basically spent the entire introduction in that episode talking about utopianism in general.
Specifically, the working definition that I had in that episode followed the tradition of
19th century utopian movements.
The idea was that by creating a controlled community and following an exacting protocol,
some perfect society could be formed. Now, this may be a bit of a stretch for some, but I wager
that FRESS was following that same line of thinking. We can see this in the English 16 experiment really well.
We have a controlled communal environment. In this case, that was the digital world inside
FRESS. We have an exacting protocol composed of the class's structure and the very structure of
hypertext. We don't have a perfect society, but we do have an improvement of the human condition in a very small way.
Students in the experimental group at least felt they were improving thanks to FRESS.
The shared space they lived in during that class, as virtual as it may have been, allowed for better communication and better learning outcomes.
So, we might have to squint our eyes a little bit,
but we can see how FRESS is following its own utopian program.
And that's our final big link.
FRESS as an inheritor of the utopian philosophy of thinkers like Ted Nelson
and even all the way back to Vannevar Bush.
All right, that brings us to the end of this episode. It also closes out the latest installment of my Hypertext Utopia deep dive.
We saw how hypertext continued to develop at Brown University after Ted Nelson left.
HES had its problems and was reborn as FRESS and then entered into practical service.
Most importantly, FRESS was rigorously studied to see if hypertext actually had a place in
education, and more broadly, if hypertext actually had a use. FRESS would continue in
service at Brown for about a decade, running through 1978. During that period, it was used more as
a text editor than a hypertext system. A handful of books were drafted and typeset on FRESS,
as well as innumerable reports and papers. Ultimately, FRESS ran into issues that many
similar hypertext systems faced. It just failed to spread. Van Dam's system would remain part of student
life at Brown, but it didn't see widespread use outside the school. With that being said,
FRESS offers an important building block on the road towards modern hypertext.
Here, we see early proof that hypertext is, in fact, a valuable tool. That it can bring communities together, forming something
bigger than a single person. It can be used as a medium to augment human intellect. And,
with all that progress forward, FRESS stayed tied to the utopian roots of this technology.
Thanks for listening to Advent of Computing. I'll be back soon with another piece of the story of computing's past.
And hey, if you know someone else who'd be interested in the show,
then why not take a few minutes to share it with them?
You can also rate and review on Apple Podcasts.
If you want to be a super fan,
then you can support the show directly through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to
episodes, polls for the direction of the show, and assorted bonus content. You can find links
to everything on my website, adventofcomputing.com. If you have any comments or suggestions for a
future episode, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.