Advent of Computing - Episode 55 - Project Xanadu
Episode Date: May 2, 2021

Project Xanadu, started in 1960, is perhaps the oldest hypertext system. Its creator, Ted Nelson, coined the term hypertext just to describe Xanadu. But it's not just a tool for linking data. Nelson's vision of hypertext is a lot more complicated than what we see in the modern World Wide Web. In his view, hypertext is a means to reshape the human experience. Today we are starting a dive into the strange connection between hypertext, networking, and digital utopianism. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes and bonus content: https://www.patreon.com/adventofcomputing
Transcript
The computer world is not just technicality and razzle-dazzle.
It is a continual war over software politics and paradigms.
With ideas which are still radical, we fight on.
We hope for vindication, the last laugh, and recognition as an additional standard.
Electronic documents with visible connections.
That's the chunk of text you're greeted with when you visit Xanadu.com, the homepage for Project Xanadu. Although that
may as well be the opening to a software manifesto. Without context, that should leave
you with some questions. What exactly is a Xanadu? What does this have to do with electronic
documents? And perhaps most importantly, why would someone have such strong feelings over software?
Well, you see, Project Xanadu is a hypertext system, but it's not the World Wide Web we know. In fact, it predates the recognizable internet by
decades. It even predates the ARPANET. In fact, we may just want to call Xanadu THE hypertext system.
Its creator, Ted Nelson, coined the word hypertext after all. That should all sound surprising, but at first glance innocuous
enough. So why would you get so worked up over hypertext? Well, there are a lot of reasons behind
that, but for Nelson, there's one big factor. When he started working with hypertext, all the way back in 1960, he realized that the
technology stood a good chance at changing the entire world.
To Nelson, and others who have come over to his way of thinking, hypertext is a means
to create a new utopia. Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is Episode 55, Project Xanadu.
Today, we're going to look at a hypertext system that started development in the 1960s and, at least depending on how you look at it, is still in pre-release development today.
According to some, it's a bold vision that's still ahead of its time.
To others, Xanadu is the oldest and longest-lived vaporware program ever conceived.
But we aren't just looking at Xanadu alone.
With the conclusion of my Intel processor series about five episodes ago,
I've been looking for another topic that I can really sink my teeth into,
something I can do another long-running series on.
I like that format, and it's gotten a lot of listens,
so it seems that you like it too. I'm sticking with it,
and it's a lot of fun for me.
What I finally settled on isn't really a
lineage of chips or devices. It's more a lineage of ideas. My grand plan, at least for the time
being, is to kick off a series on techno-utopianism. Now, that may sound a little left-field, but
hear me out for a second. There's something really interesting here. The basic idea of
techno-utopianism is that technology can be used to somehow better the human condition,
eventually leading to a perfect world. Really, it's a bit of an overkill name for a simple idea,
and it's a pretty old idea. With such a broad definition, anyone with some new technology and a dream of a better life
could be counted as part of this ideology. I mean, even an early human with a sharp stone axe and a
dream of a wooden house would fit the bill pretty nicely. As we get closer to the modern day,
especially after the utopian movements start in the 19th century, we get a more focused definition. In general, we see an exacting program that, if followed to the letter, is supposed to usher in some new perfect world. By following
a core set of beliefs, this state of utopia can, in theory, be reached. The other factor in most
utopian programs is isolation, or at least some type of controlled environment.
This usually meant compounds, communes, or some other way to gather up adherents to the cause in one place where they can all carry out this utopian program.
Once technology advanced, and especially once computers enter the picture, the techno part of techno-utopian becomes
a lot more emphasized. In a lot of ways, computers match up really well with the idea of a utopian
program. A digital machine represents the perfect tool to carry out a rigid program.
It's a great way to just smooth out the details. I mean, after all, they are built for
programs. Believe it or not, utopianism actually comes up a whole lot in my research for the show.
While it's often not as rigid and institutional as 19th century utopian communes, it's this
unexpected undercurrent in a lot of development. Sometimes it's more just
people talking about how a new technology is going to change the world and utterly upend how we live.
Other times it's really explicit. And from running into it so much, I've started to notice a bit of
a pattern. For whatever reason, and I don't fully understand why,
hypermedia and networking always seem to have some utopian idealist nearby. Those technologies
seem to form a kind of digital replacement for a physical commune. And given how important
hypermedia and networking are to modern life, I think it really behooves us to understand why utopianism is such a big deal in their early development.
My plan for this series is similar to my coverage of Intel's processors.
Episodes won't be back-to-back, but more interspersed throughout my podcast feed.
I like how that flow worked, so I'm going to stick with it.
This also won't be a strictly chronological story, more of an anthology of stories all with the same
utopian themes. To kick off this new adventure, I thought, what better place than Project Xanadu?
Its creator, Ted Nelson, coined the terms hypertext and hypermedia specifically to explain his ideas around Xanadu.
In that sense, this is where the modern conception of hypermedia really starts.
But more than that, Xanadu is also a synthesis of good ideas that Nelson picked up over the years.
What makes the project so interesting is that it started in 1960, and there still hasn't
been a completed release. As of 2014, a version called Open Xanadu is available, supposedly.
I can't find any downloads for it. There are scant demos, but nothing that I can really sink my teeth
into. So this episode, we're going to be
looking into the story of Xanadu and Ted Nelson. Like I keep repeating, this is the start of a
slow-burning series, so we're going to start off with theory. Today, we'll see what exactly Xanadu
is, how Ted Nelson came to coin hypertext, and how his vision differs from hypertext's use today.
We'll also take a look at some early attempts to make Xanadu a reality, and what his ideas
may have looked like in practice.
Nelson himself describes programming as a form of art.
As a programmer myself, I have to agree.
It takes a combination of skill, experience,
and creativity to write good code.
Plus, it makes me feel fancy to think of myself as an artist.
Anyway, in this context, we could call Nelson an outsider artist.
He's not a programmer, not a computer scientist, or a mathematician, or an engineer.
In 1962, Nelson graduated from Harvard with a master's in sociology.
After graduation, Nelson spent a number of years as a photographer and videographer
at a communications research center,
so not really the kind of person you expect to show up on an episode about computing.
Throughout his early life, Nelson was interested in media,
and specifically how people interacted with and understood media. This started in childhood.
Nelson's father was a TV director and his mother was a Broadway actress. This upbringing gave him
an inside view of a few huge avenues of consumption. And growing up, he wanted to follow in a similar industry.
Nelson pictured himself as a future film director. But this experience, and Nelson's experiences in
college, didn't really push him in any one direction. He was always interested in interdisciplinary
studies, or as he put it, he wanted to be a generalist. Even while pursuing his master's degree,
Nelson kept an open mind, looking for new stimuli. And it was during his master's program,
armed with a wide-reaching curiosity, that Nelson decided to take a course on computer science.
It wouldn't have been part of his degree path, but in the very early 60s, it would have been
a really cutting-edge offering. It's easy
to imagine that it was something far too interesting to pass up, and it was during that class that
Nelson saw something that would change his life. During the course, he came face-to-face with a
computer-driven CRT display for the first time. I haven't found exact details on what model of computer he met, but it would have been
primitive. For some context here, Spacewar!, the first computer video game, was written the year
Nelson graduated from Harvard. That game displayed simple vector-ish graphics on a round, oscilloscope-like
CRT. So the state of the art that Nelson saw had to be a little more basic
than that. It was probably something along the lines of text displayed on a small green
phosphor tube. But just that taste set Nelson's mind whirling. As he saw it, he had just seen the future inside a cathode ray crystal ball.
That basic notion stuck around, in no small part because he was already interested in visual media.
Like I said, Nelson initially wanted to be a director, so he immediately recognized the value of digital displays.
But that was just half of the equation.
There was a deeper realization. Nelson would
articulate it in a number of ways throughout the years. I think the clearest form appears in his
book Dream Machines. There he wrote, in all caps for Cruise Control, quote, responsive digital
display systems can, should, and will restructure and light up the mental life of mankind.
End quote.
The huge jump here, and the one that I find so remarkable, is that Nelson was early to the PC game.
Of course, the full vision wasn't really there.
No one would get a full vision of personal computing for
decades, but he was able to recognize that the hulking beasts of the 60s wouldn't be hulking
forever. He was able to recognize that computers, if properly applied, could be a tool for reshaping
humanity. In other words, in the early 1960s, Nelson started thinking about a future digital
utopia. But here's the weird part about Nelson's career that makes him stick out to me. I've
covered a lot, and I mean a lot, of people on this podcast that have a similar early life.
They go to college, run into computers, get bit by the bug, and then they reshape their
degrees and lives around computers. They switch majors, or sometimes just get a whole other degree,
and devote themselves to deeply technical work. Well, that is not what Nelson did. Despite his
epiphany, he didn't change degree programs. He completed his sociology
degree. He didn't throw himself at programming. He didn't get deep into a computer lab. He didn't
specialize. He stayed on his initial course, but his ideas about the coming digital revolution
kept churning in his mind. Taking a quick tangent, I think this is what makes a lot
of Nelson's work a little inaccessible. He is an outsider, and he presents himself as such very
proudly. Throughout his writings and talks, Nelson embraces an almost adversarial attitude to traditional academia and industry alike. So reading up on his work can feel
a little bit prickly at first. It's part of the double-edged nature of his stance.
As Nelson sees it, this outsider position gives him a unique view into computing, and I don't
think he's totally wrong about that. As we will see, Nelson does have a very unique and
valuable insight into the field. But at the same time, his unique position often puts him at
odds with the field that he's trying to advance. Anyway, there was another big transformative
moment in Nelson's early career. You see, inquiring young minds tend to be drawn towards new ideas. The more far out the idea, the better.
While he was reading philosophy papers and more radical underground texts,
Nelson was exposed to something that would also be life-altering.
That was Vannevar Bush's As We May Think.
This 1945 article is something that we've talked about at length on the show,
and we're going to talk about it some more, so get ready.
To summarize without covering way too much ground, in As We May Think, Bush presents
a concept called the quote, information problem.
The core idea is that we are fast approaching a point where the sum total of human knowledge
is simply too great for any one
person to comprehend or analyze. Once we hit this data singularity, scientific progress will grind
to a halt. Now, the caveat here is that it was too much data for any one person to
understand without help. The solution that Bush presents is a theoretical device called the Memex.
It's a machine that can store, organize, and edit the sum total of human knowledge.
Bush designed his Memex to mimic human thought, at least in one way. Data is stored as pages on
microfilm, and each page can link to another. The idea is that this trail of links
forms something like a train of thought, allowing humans to store their ideas in a more natural and
accessible way. The Memex could be used as a personal device, simply a way to organize your
own data, or microfilm reels could be passed around to share your ideas with colleagues.
Eventually, the Memex could be used to organize everything that humanity knows.
According to Bush, it would all be possible using 1945 technology.
Now, in an interesting way, Bush and Nelson were kind of kindred spirits.
Both were something of digital outsiders. Bush had been instrumental in the development of analog computers during the interwar period. And as World War II began,
Bush started working for the federal government to coordinate scientific research for the war effort.
But Bush never really embraced digital computers. He saw their utility firsthand, but didn't become personally active in their development.
He preferred to stay on the administrative side of things.
After the start of the Manhattan Project, Bush didn't really do research himself.
He helped plan and direct research.
In that sense, Bush's career lined up nicely with Nelson's dream of being a generalist.
But just as important for us, As We May Think presented a vision of a type of utopia.
The text is partly presented as science fiction. It has this narrative about a researcher using
their Memex, and that's tied into a larger conversation about how technology can
better the human condition. This was powerful stuff, and it really seemed to click with Nelson.
Just like his exposure to interactive computer displays, Nelson easily saw the value of Bush's
ideas. He also recognized that it wasn't complete. It was just the first glimpse of something bigger.
After graduation and during his photography gig, Nelson started to pull these bigger ideas together.
The first was easy.
He realized that a digital computer with a CRT display could be used to create a real-life Memex.
However, there were some issues with Bush's conception. In Nelson's view,
at least, the Memex wasn't radical enough. It was stuck in the pre-digital world. And it was stuck
in older ways of thinking. So with an eye for media and human interaction, Nelson started to
rework the Memex into a new form. In 1965, he presented his ideas at that
year's Association for Computing Machinery Conference. This presentation comes down to
us as a conference paper titled, A File Structure for the Complex, the Changing, and the Indeterminate.
It's a bit of a strange and vague title for a pretty strange paper.
Remember that Nelson isn't a computer scientist.
He has chosen to cast himself as an outsider, and this shows up in his writing style.
It's more relaxed than other academic papers, and I'm pretty sure it didn't get him any extra points with the ACM,
but it definitely adds to his street cred. There's
something really cool about publishing such a conversational paper in a journal full of stiffs.
Anyway, content-wise, what are we looking at? The title doesn't give away much fine detail.
This paper gives the first full description of hypertext. In fact, it's the paper that coined the term hypertext.
Quote, let me introduce the word hypertext to mean a body of written or pictorial material
interconnected in such a complex way that it could not conveniently be presented or represented
on paper. End quote. In other words, Nelson's defining hypertext as non-sequential text,
an organization of data that exists outside conventional media.
At first look, this seems in line with the Memex, but this isn't entirely the case.
Nelson's hypertext is a refinement of Bush's earlier work,
an update that takes new media
and tools into consideration. The Memex dealt with data in terms of pages. That was just a
limitation of Bush's proposed design. The smallest unit of data was a single microfilm slide.
From there, trails of ideas were built as links from one slide to another.
For the time, that was a revolutionary idea, but it's not the most flexible approach.
The end goal for Nelson, and for Bush for that matter, was a system that could mimic
human thought.
Something that worked as an extension of the human mind.
I don't know if I'm alone in this, but I certainly don't always
have ideas that are the same size. Nelson's 1965 paper proposes a system that can manage
variable-length data, and that's a departure. Also, as a word of warning, Nelson coins a lot
of terms for himself. Hypertext is an example of one that's stuck around, but going forward,
we will get into some that don't reach common parlance. The nuts and bolts of Nelson's proposal
are actually pretty easy to understand. That's one of the nice byproducts of his non-standard
approach. Everything comes down to a data structure called ELF, or the Evolutionary List File. It's a bit of a grandiose name to be
sure, but not all that complicated. Essentially, an ELF is just an ordered list of data. Each entry
has some type of identification number and can either contain a chunk of raw data, a link to
some other entry in another ELF, or a reference to another
list contained in another ELF.
If we want to be fancy, we could say an ELF is a type of linked list that supports recursive
definitions.
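Just to make that concrete, here's a minimal sketch of what an ELF-like structure might look like in Python. To be clear, Nelson's 1965 paper describes the concept in the abstract; every class, field, and name here is my own invention, purely for illustration.

```python
# A hypothetical sketch of Nelson's Evolutionary List File (ELF).
# Nelson's paper is abstract, so these names are invented for illustration.

class Entry:
    """One entry in an ELF. Each has an identification number and holds
    one of: a chunk of raw data, a link into another ELF, or a reference
    to a whole other list."""
    def __init__(self, entry_id, data=None, link=None, sublist=None):
        self.entry_id = entry_id  # some type of identification number
        self.data = data          # raw data, of any length
        self.link = link          # (other_elf, entry_id) pointing elsewhere
        self.sublist = sublist    # another ELF, referenced in its entirety

class ELF:
    """An ordered list of entries: in effect, a linked list that
    supports recursive definitions."""
    def __init__(self, name):
        self.name = name
        self.entries = []         # ordered, like a linked list
```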
So, why does a really fancy list matter?
What does this actually get you?
Well, we're dealing with a very generic way to define
connections between data. And as we all know, when something is defined in a very general way,
it tends to be flexible and powerful. The most simple application here is, of course, generating
linked data. Simply define some data elements and then define a set of links between those elements
and you can duplicate the functionality of the Memex. That's the baseline. Also, just as an
interesting aside, in Nelson's paper he calls these links between ideas, quote, bush trails.
And I gotta say, I kinda like that more than hyperlink. Now, the variable data length means that Nelson's link implementation is actually more flexible
and useful than the formula used in As We May Think.
A link in an ELF can be used to connect entire articles, add footnotes, or even just for
attributions.
The loose structure of the ELF means you can link entities one-to-one
in what Nelson calls a zippered list, or you can construct a net of interwoven connections.
Links were also bi-directional, meaning that a user couldn't just follow a chain one way,
they could also follow the trail of links back to the starting point and see where a link was
originally referenced. Already, we can see Nelson reshaping the Memex idea to take full advantage
of digital media. But Nelson's not just rehashing old ideas. In fact, it goes a lot deeper than
just links. Going further into the paper, we get to so-called list patching and text patching.
This is where we go well beyond the hypertext that most of us know. List patching essentially
allows an ELF to reference another ELF. Once everything is resolved, you end up with a list
built from multiple other lists. Since this is mediated through references, a change in one
list is reflected in the combined output. Text patching is where things become, I think, really
interesting. This takes list patching another step, pulling in data from each referenced entry.
With the recursive nature of ELFs, this means that it could take a few steps, since each
ELF could in theory reference another, deeper, hidden ELF.
The final result is that you construct a much larger structure from a series of small chunks
of data.
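In modern terms, text patching is a recursive resolve step. Building on the hypothetical classes sketched above, it might look something like this; again, this is my own loose interpretation for illustration, not anything specified in Nelson's paper:

```python
def resolve_text(elf, seen=None):
    """Loose sketch of 'text patching': flatten an ELF into one body of
    text by recursively pulling in the data behind every reference.
    (Plain links could be resolved the same way; omitted for brevity.)"""
    if seen is None:
        seen = set()              # guard against circular references
    if elf.name in seen:
        return ""
    seen.add(elf.name)
    parts = []
    for entry in elf.entries:
        if entry.data is not None:
            parts.append(entry.data)    # a plain chunk of text
        elif entry.sublist is not None:
            # a referenced list may reference deeper lists of its own,
            # so resolution can take several steps
            parts.append(resolve_text(entry.sublist, seen))
    return "".join(parts)
```

Because the combined text is rebuilt from references at read time, editing one of the underlying lists automatically shows up in every document that patches it in.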
And here's the thing that's so interesting about this.
Let's say you're writing a book or a script for a podcast.
Or really any text.
Often when preparing the show, I end up pulling quotes from other texts. It's just how my life
goes. Instead of copying and pasting in a block of text I want and then adding a little link,
I could just define a text patch. Then I don't have to have a pile of text thrown into my document.
Instead, I have that text plus the linkage to its source. I maintain easy access to the passage's
original context. If I want to see where in an article a quote came from, I can just follow the
link. It also maintains attribution kind of automatically.
By virtue of how a text patch is structured, I have a predefined record of where that patch
came from. The other important component of Nelson's hypertext system was what he called
PRIDE, the Personalized Retrieval Indexing and Documentation Evolutionary System.
Once again, maybe not the most reasonable name, but it works.
Nelson describes PRIDE as a set of tools and conventions for managing ELF data.
This would be the system that interpreted and stitched together a patchwork of ELF files.
On top of that, Nelson planned PRIDE to handle version control.
So as ELF data changed, old versions would be stored.
Thusly, a user could always roll back their edits or deletions.
Nothing was lost.
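In modern terms, that's an append-only version history. Here's a toy illustration of the rollback idea; PRIDE was never specified at this level of detail, so this is just my own shorthand for the concept:

```python
class VersionedDocument:
    """Toy model of PRIDE-style versioning: every edit appends a new
    version, so nothing is ever lost."""
    def __init__(self, text=""):
        self.history = [text]  # old versions are kept, never overwritten

    def edit(self, new_text):
        self.history.append(new_text)

    def rollback(self, steps=1):
        # restoring an old version is itself recorded as a new version,
        # so even a rollback can be rolled back
        self.history.append(self.history[-1 - steps])

    @property
    def current(self):
        return self.history[-1]
```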
This is the core of Nelson's conception of hypertext.
Over the coming years, the details will shift and features
will be added, but here we see the early basics forming. Already, this should sound a little bit
alien. The hypertext system that we know best, the internet, has some of these features. The big one
is the hyperlink. That's just a core feature of the modern net that Nelson described.
But beyond that, a lot of what Nelson laid out in 1965 is still way ahead of the curve today.
Patched texts aren't really a thing, at least not in a transparent and widely used way.
Version control is commonly used by software developers, but systems like Git are standalone. There is no
one system that incorporates all of Nelson's ideas into one package. In fact, there never has been one.
In the years following his 1965 paper, Nelson would refine his ideas. He ditched the slate
of names and put everything under one banner. He started calling
his combined system Project Xanadu. Like the rest of this hypertext system, the name sounds a little
unfamiliar. Nelson picked this from a Samuel Taylor Coleridge poem called Kubla Khan. Specifically,
the passage, quote,
In Xanadu did Kubla Khan a stately pleasure-dome decree,
where Alph, the sacred river, ran,
through caverns measureless to man,
down to a sunless sea.
But despite such radical ideas,
and Nelson's white-knuckle determination and commitment,
the real-world Xanadu wouldn't be the site of a
pleasure dome anytime soon. Once Nelson started thinking about hypertext, he just never stopped.
His overarching career goals turned into one vision, to make Xanadu a reality. Now, that's
nice to say, but the fact was that Nelson found himself in a bit of a bind.
Well, over time, he'd find himself in a progressing series of binds.
The core issue was always that Nelson is an outsider.
He never operated inside the world of formal computer science, and he never really worked
in the industry. So from the outset,
he was fighting with a handicap. There are a pile of sagas throughout this battle, all boiling down
to Nelson's attempt to find the right place and right people to make Xanadu real. Sometimes this
was a simple matter of gaining funds, sometimes it was an actual collaboration,
and other times it was just looking for the right place to build a pleasure dome.
For today, we will focus on just one of these attempts that I think is the most interesting and relevant.
In 1967, Nelson became acquainted with Andries van Dam.
The two had technically met previously during
their undergraduate studies, but they reconnected at the Spring 67 Joint Computer Conference.
At this point, van Dam had a doctorate in computer science and was a professor at Brown University.
The two got to talking, catching up, and Nelson started to work some magic.
You see, one of the remarkable things about Ted Nelson is that he has this ability to convey his vision of the future, and he does it really well.
You can see it plainly in his body of work, and I imagine it's even more apparent in person.
Nelson couldn't just convince people that hypertext was possible,
he could convince them that it was the most important technology of the future.
Van Dam put it like this, quote,
Nelson's vision seduced me. I really loved his way of thinking about writing, editing,
and annotating as a scholarly activity, and putting tools together to support that. I hadn't heard of Engelbart. I hadn't heard of Bush and the Memex. So after meeting quite by
accident at this computer conference and talking about what we were doing, we somehow got onto the
topic. I had this wonderful graphics display, and I was working on various graphics applications at the time.
He talked me into working on the world's first hypertext system, and that sounded cool."
Now, there's a lot to unpack there, but on the surface, we're starting to see this magnetic quality Nelson has.
Another important piece here is that Nelson was still right on the cutting edge of the next
big wave of hypertext technology. This was in the wake of As We May Think. Bush's ideas were
starting to become viable in the digital world. Other researchers such as Doug Engelbart were
starting to create implementations of hypertext-like technology. But importantly, no one had really
cracked the hypertext code yet. The idea of freeform data structured as a trail of associations
was bubbling just under the surface. Nelson not only had a name for it, but he also had a simple
and generic outline for how to implement it. This is also a place where context
plays a huge role. In the middle of the 60s, computers were very different from the data
munching beasts that we deal with today. I don't just mean in terms of size and power, but just
what computers were built to do. In the more modern epoch, computers deal with a lot of textual data.
I don't have hard numbers, but I'd wager that most of the bits and bytes that pass through a processor are somehow associated with text.
Whether that be a book, a webpage, email, or some other pile of characters, it's still text.
But in the 1960s, that wasn't really the case. Notice all the
applications I listed were pretty modern. Folk didn't really browse the web back in 1967.
What they did do came down to either crunching numbers or writing programs to crunch numbers.
That was part perception and part reality. Computers were
seen as really fancy calculators, and they just so happened to also be used as really fancy
calculators. Just one fun example, and one that came up a lot in my recent C-series,
comes down to how memory was laid out. Characters are usually encoded as a single byte in memory,
but these early computers didn't address memory by byte. Memory in the 60s was usually addressed
by words, often composed of multiple bytes. So text manipulation required a lot of extra code
to pack and unpack memory locations. These things just weren't meant to deal with a whole lot of text.
A computer could handle text, but it wasn't the primary function.
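To give a feel for that busywork, here's a rough sketch of the kind of packing and unpacking a 1960s text program had to do. The 32-bit word and 8-bit character sizes are just assumptions for the example; real machines of the era varied wildly:

```python
CHARS_PER_WORD = 4   # assuming 8-bit characters packed into a 32-bit word

def pack(text):
    """Pack a string into word-sized integers, several characters each."""
    words = []
    for i in range(0, len(text), CHARS_PER_WORD):
        # pad short final chunks so characters stay left-aligned
        chunk = text[i:i + CHARS_PER_WORD].ljust(CHARS_PER_WORD, "\0")
        word = 0
        for ch in chunk:
            word = (word << 8) | ord(ch)   # shift left, OR in the next char
        words.append(word)
    return words

def get_char(words, index):
    """Fetch one character: find its word, then shift and mask it out."""
    word = words[index // CHARS_PER_WORD]
    offset = index % CHARS_PER_WORD
    shift = 8 * (CHARS_PER_WORD - 1 - offset)
    return chr((word >> shift) & 0xFF)
```

Every single character access goes through that shift-and-mask dance, which is exactly the kind of overhead a number-crunching machine was never designed around.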
Talking about storing the sum total of all human knowledge on a computer,
and then linking it up and having facilities to edit that? Well, that must have felt more than a little bit subversive.
Sure, a computer could do it, theoretically. Hypertext seemed possible. And as strange as it
sounds, it would actually push computers in a pretty new direction. A contributing factor in
this case was van Dam's position at Brown. Or, really, I guess this could have been anyone's
position as a professor. Van Dam wasn't just interested in hypertext for its broad applications.
He wanted to apply it to classrooms. The important detail here is that he didn't just want to use it
in his CS classes. Van Dam envisioned hypertext as a tool that could be used in just about any
class taught at Brown University. Just like Nelson, van Dam saw the utility of computers
outside of pure number crunching. So when he got a glimpse of Nelson's grand plans,
well, he just couldn't resist. Van Dam invited Nelson to join him at Brown. There, the two started in on the first implementation of hypertext. It was to be called the Hypertext Editing System, or HES, and from
the beginning, the system took a bold path. HES was designed to be used by non-computer scientists.
That was at the core of its design.
By leveraging hypertext, the duo planned to build a system for storing, managing,
and connecting textual data. That system could then be used by students, faculty, or really
anyone on campus. The goal was for HES to become a central part of classes all over campus. It wouldn't replace conventional
teaching, but it would supplement it. Think of HES as a way to augment existing classes.
Over the span of a year and some change, HES took shape on one of Brown's IBM System/360 mainframes.
Nelson must have been over the moon. He finally had a place to test out his
theories in real life. And, to sweeten the deal, Brown had a number of graphical terminals.
That was key. The pairing of digital technology with graphics-capable screens was crucial to
Nelson's version of hypertext. HES was positioned to make those dreams possible.
The core of HES was a text editing and navigation system.
That shouldn't be all that surprising.
Any user would spend most of their time either reading or writing text.
This was implemented in a really smart way
that makes the best use of hypertext possible, really.
The entire project is described in a 1969 paper titled
A Hypertext Editing System for the 360.
Now, you can tell it's a more serious paper because we're back to names that kind of make sense, anyway.
Within, the team explains their approach to HES as, quote,
Our philosophical position is essentially that the writer is engaged in a very complicated pursuit,
and that this work legitimately has a freewheeling character that should not be encumbered with irrelevant restrictions on size and structure of text or operations.
Ideally, anything goes as long as it's well-defined.
Once again, we can see Nelson's writing style leaking in here. What's important is that HES
is falling in line with the tradition of systems designed for less experienced computer users.
In the past, we've talked about that in regards to BASIC and
other high-level programming languages. All these tools seek to abstract away the idiosyncrasies of
a computer in order to make a more easily used system. By doing so, more people could more easily
be brought into the fold, and computers could be turned into a more universal tool.
It's a path towards a digital world. Even in this short passage, even without hitting any technical details, we start to see an interesting trend. The simple fact is that hypertext is very
different from printed text on a page. But in this early period, it had to define itself in relation
to more traditional written word. This 1969 paper even admits that the goal of HES was to allow for
the same freeform expression possible on a blank sheet of paper. At first, this shouldn't seem
major. Hypertext transcends normal media, but is still deeply connected to it.
However, keep this in mind as we look at HES in more detail. This is something that we will come
back to. So the goal was a freewheeling hypertext system with some caveats. And actually, HES gets
really close to attaining that goal.
I've dug through the handful of papers and interviews and talks I can find,
and I think the best way to approach HES is to start with what a user actually sees and then go from there.
Entry was simple, and in most cases, mandatory.
A professor would set up part of their course on HES and then assign work that had to be carried out using the hypertext system.
That lucky student would then find some open time and make their way down to a lab that held the university's IBM 2250 graphics display terminal.
Sitting down and logging in, they were greeted with a surprisingly complex display.
The screen was
broken up into three sections. A large pane at the top was used to view and browse the actual
hypertext document. Below was a smaller section for viewing annotations, then at the bottom of
the screen was a section for inputting commands. So far, not all that weird compared to more modern systems. For the time, this was a step in
the exact right direction, and really a step in the winning direction. HES was one of the first
systems to take advantage of a graphics terminal for displaying large, continuous chunks of text.
From that upper pane, a user could scroll through a really impressive number of characters
and easily edit and manipulate any displayed text.
Crucially, this was all done using a big window that could display lines of text in context.
Contemporary text editors worked on a single line-at-a-time basis.
They were actually geared towards paper-feed teletype terminals, where you can only
display one line at a time. But Nelson and van Dam were dropping the older system in favor of
really flexing a graphic display's abilities. However, we're still talking late 1960s,
so we still get some wild differences showing up. Input devices are a fun example of
that. On HES, you get the usual keyboard, something with a similar QWERTY layout to a modern computer
keyboard. That's fine. For commands, there is a separate keypad, something like a macro pad,
that sat on one side of the keyboard. Not all that standard nowadays, but recognizable.
The final device, the pointing device, is where things get fun.
The mouse technically existed.
Engelbart had patented the computer mouse in 1967.
But the HES crew didn't know about Engelbart,
and they didn't know about his computer input research.
But they did have easy access to an existing pointing device, a light pen.
This was a small pen-shaped detector that was wired up to the terminal.
It was stock IBM equipment, so easy enough to set up.
In practice, a user would simply point the pen at the CRT display and zap to make a selection.
I guess we could look at it as a mediated touchscreen. You couldn't use a finger, but you could touch the light pen to the display.
It's intuitive, but a little clunky since you need to move your hand off the keyboard and grab
the pen to use it. Anyway, via keyboard, keypad, and light pen, a user could edit and traverse hyperspace on HES.
In general, this would have been a pretty seamless experience.
A student was able to scroll through large documents on a nice phosphor display,
and they could even write using the system.
But that was just the most simple use case.
Going further, we go to links and branches. Links are, well, the same hyperlinks that we know and love, with a few minor tweaks. In HES, text was relatively freeform. It was
stored as variable-length strings. That is, you could have a single line that was anywhere between
one character and an entire page worth of words. On screen, text was displayed in what was called a
quote area. At least, that's the term used in the 1969 paper. It sounds like a div in HTML if you're
used to web stuff, or something like a text box with a specific ID or name. To make a link, you
just had to specify where you wanted to insert the link and the area
or chunk of text that you wanted to link to. This was stored as a pointer, something like an entry
in one of Nelson's ELF data structures. On screen, a user would see a mark where the link was; as
near as I can tell, it was a little percent sign. Pointing the light pen at this mark selected the link. Then you could edit, inspect, or jump to the link location.
HES also stored where you jumped from,
so it was trivial to go back up any trail you went down.
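That jump-and-return behavior maps neatly onto a stack. As a loose sketch of the idea (HES itself was System/360-era software, so this is only a modern restatement, with invented names):

```python
class HypertextBrowser:
    """Toy model of HES-style navigation: following a link records
    where you came from, so any trail can be retraced."""
    def __init__(self, start_area):
        self.current = start_area
        self.trail = []               # the areas we jumped from

    def follow_link(self, target_area):
        self.trail.append(self.current)
        self.current = target_area

    def go_back(self):
        if self.trail:                # trivially walk back up the trail
            self.current = self.trail.pop()
```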
With what I've described so far, that's editable text and links,
we already have a living Memex.
Well, something really close to a Memex. You can't store
images in HES, which is a strike against it, but on the other hand, HES isn't limited to linking
singly between full pages. I just think it's interesting to note that it didn't take all that
long for Bush's theoretical paper to turn into reality. HES wasn't technically the first system to achieve
this, but it's in a group of systems all rushing towards the same goal in the same era. Now,
going beyond As We May Think is where HES starts to shine and where we get some more
high strangeness. Links were just one part of the hypertext web going on at Brown.
Another structure, one that we don't really use today, was the branch.
I've been trying to find a more serious way to explain branches.
I keep coming back to the fact, though, that a branch is just a digital implementation of one of those choose-your-own-adventure novels.
In HES, a branch linked the end of one text area to the start of
another area. You could have single branches which just linked one page to another. But,
if you wanted to get fancier, you could have multiple branches at the end of an area.
Each branch said where it linked and could be annotated to give a short explanation of the branch. That means that,
conceivably, someone could write a novel on HES and end each chapter with a choice that branched
to a different chapter. Now, I just think there's something delightful about that feature.
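If you squint, a branch is just an annotated pointer hanging off the end of a text area, so the data for a choose-your-own-adventure is almost trivial. A toy example, in the same invented style as the sketches above, and emphatically not HES's real storage format:

```python
# Each area of text ends in zero or more branches; each branch names a
# target area and carries a short annotation explaining the choice.
areas = {
    "chapter-1": {
        "text": "You stand before the pleasure dome...",
        "branches": [
            ("chapter-2", "enter the caverns measureless to man"),
            ("chapter-3", "follow the sacred river instead"),
        ],
    },
    "chapter-2": {"text": "The caverns swallow you whole.", "branches": []},
    "chapter-3": {"text": "Down to a sunless sea.", "branches": []},
}
```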
Annotations or explanations are another key feature that we should address more fully.
In general, anything displayed on screen could have a tag or annotation.
These showed up in a special section on the graphics display,
and were essentially non-printable chunks of text.
Something like a digital footnote.
These notes couldn't have links inside them, but they could be tacked onto links.
This is how you implemented short explanations for branches or links
that could be displayed without jumping to the actual link.
Rounding out the system, we have typesetting.
HES wasn't just a tool for reading and editing hypertext.
It was also used to prepare manuscripts and papers for printing.
The software implemented this using IBM's own Text360 typesetting language. Essentially, an HES user could
format their text and then seamlessly send it out to be printed. Formatting was
preserved on print, so what you saw on screen was relatively close to what you
were going to get out the printer. Some annotations could
even be set up to print as numbered footnotes. Of course, links weren't followed, but if my reading
is correct, then singly branched areas were stitched together into a full text. Now, that
should sound pretty innocuous. You're editing text. It stands to reason that you may like to print it out at
some point. There wasn't even some requirement to print everything you edited on HES; it was an
important, but entirely optional, feature. Simple, boring, not all that surprising, right? Well, this whole
feature set ended up becoming very contentious. At least,
it became contentious for Ted Nelson. HES was already a subset of Nelson's full hypertext vision.
That, on its own, is forgivable. HES was really just a first pass at a full Xanadu. However,
this printing thing became a big
source of friction. It's a little subtle, but important to note. The printing facilities
inside HES meant that, at least at some level, there was still a connection between how it
handled data and the old-school printed page. Sure, HES wasn't the revolutionary system Nelson
had been plotting,
it was a step in the right direction,
but it remained intrinsically tied to non-hypertext.
That meant that features in HES had to have some mapping back to printed paper.
There had to be rules for how an area of text should be converted to typeset text.
Rules for how branches and links should be treated by a printer.
In other words, HES was built with a way to turn hypertext into normal, boring, useless text.
To Nelson, that was a corruption of his vision.
He'd stay on the Brown University team with van Dam for a number of years,
but eventually left in search of a better place to create Xanadu.
That said, HES was a huge step forward.
It proved that a Xanadu-like system wasn't just possible, but could be practical.
In the coming years, van Dam would upgrade and permute the system,
eventually creating a program he called the File Retrieval and Editing System. That program, shortened to FRESS, would serve Brown
University for decades. It became what HES was planned to be, central to interdisciplinary
studies. Nelson was on the right track, his ideas were sound, but in his eyes, no one had yet seen the
full potential of Xanadu. Of HES, Nelson wrote, quote, I see this as only the beginning. My
Xanadu system will go much farther. I think of Xanadu as the fundamental text system of the future,
the magic carpet of the mind. The basic idea is that the computer should be able
to hold your writings and thoughts and at least the complexity that you have in your mind.
Over time, Nelson's vision of Xanadu shifted. Now, I think that's true of anyone with a good idea.
The core was always hypertext as described in his 1965 paper. But changes and additions crept in as he
gained more experience and new technology emerged. And, of course, Nelson never stopped writing about
Xanadu and looking for ways to make it a reality. This brings us to what is often cited as the
seminal work on the system and what got me into this episode in the first place.
Nelson's magnum opus, Computer Lib slash Dream Machines.
This text is important for a few big reasons.
Published in 1974, it shows how Xanadu was changing into something even more prophetic.
It was also positioned in a perfect time and place. I've
mused before about how the 1970s was probably one of the fastest moving decades for the development
of the computer. This was an era where personal computing went from an idea to a product. The
ARPANET was just getting its sea legs, and over at Xerox, the modern user interface was invented.
Nelson published a book on his utopian vision smack in the middle of all of this.
The text was well-placed to seep into the growing revolution.
That all being said, this is a really weird book.
Computer Lib slash Dream Machines is actually two books in one binding.
Flip to the cover labeled Computer Lib and you're at the start of the first book. To read that part,
you just have to read the right-hand pages of the book. Flip it over to the cover labeled Dream
Machines and you get to the second book. This part is printed on just the left-hand
pages, and it's flipped 180 degrees so it all works. So in other words, if you're sitting in
front of the text, there's always one page that's right side up and one that's upside down, and
you're only ever reading one page on each side. Now, I've been reading some scans of the book,
so luckily I can zoom in and stay on just the proper part of the scan.
However, that's just the start of the weirdness here.
Once you actually look at the content, or at least how it's laid out,
things get even more interesting.
Nelson called it a, quote, magazine layout, but that
makes it sound like neat columns with glossy print photos. Every page has some variable number of
columns. Some have just two, others have, well, an indeterminate amount. A single page may have a
section with two columns, then break and have another section with
three, or any permutation or mix thereof. Some chunks of text are rotated at odd angles.
Let's just say the typesetting is complicated. Interspersed between columns are figures,
pictures, clippings, and somewhat related quotes from other texts.
Sometimes there's just a poem in the middle of a page.
Headers, titles, and even a few short passages are all handwritten by Nelson in a flowing, stylized script.
There are a multitude of hand-drawn graphs, figures, and even notations in margins.
This adds to the overall freewheeling experience of the text.
For me, reading Computer Lib slash Dream Machines is a pretty disorienting experience.
I have to say, I haven't read it cover to cover.
I've read a lot of it, but it's just not the kind of book you can sit down
and read one page after the other, flip it, and then do the same in one sitting.
Adding to the weird presentation, the actual content can be scattered.
Sometimes Nelson takes a few pages to dive deep into a single topic.
In other places, he only uses a few paragraphs.
Ideas are sometimes continued or restarted in a different framing somewhere else
in the book. That should sound like a bit of a fever dream of a text, and on first examination,
it kind of is. This was my first introduction to Nelson, and it was initially very off-putting,
to say the least. However, that first impression was wrong. As I've sat with the book, turning it over
in my head and reading deeper into Nelson's body of work, I've started to glimpse the bit of genius
going on inside this wild, freeform double book. So what's the actual content of the book? As the
name suggests, you're supposed to start with Computer Lib. The overall message of
that half is clear. It's written right across the cover, quote, you can and must understand
computers NOW. The NOW there is in all caps, just to drive the point home. Most of the cover is
taken up by a drawing of a raised fist, so from that plus the weird
formatting, it's pretty clear that Nelson is positioning his book as a subversive text.
That's a theme that really carries throughout all of his works, especially this book.
The general thesis of Computer Lib is that everyone, and Nelson really does mean everyone,
should understand what a computer is
and why computers are important. Quoting from its introduction,
Computers are simply a necessary and enjoyable part of life. Like food and books, computers are
not everything. They are just an aspect of everything, and not knowing this is computer illiteracy, a silly and dangerous
ignorance. End quote. For the era, this was a truly subversive stance. Computers had been
becoming more accessible, but not really to everyday people. You had options to join the
digital future, sure. You could either work in a job where computers were used,
get a college education in an associated field, or learn through electronics hobbyist books and magazines. So sure, computers are accessible if you're already in some technical niche.
There were, of course, exceptions, but there were no widespread ways to become familiar with computing, at least not
casually. You had to delve deep into it. This view also makes a lot of sense when we look back at
Nelson's influences. As We May Think discussed a near future where some kind of machine would help
to augment human ability. It would become an ever-important part of life that would improve
the human condition. It would lead to a digital utopia. Computer Lib is Nelson's response.
By his analysis, that future is here, and that machine is going to be some type of computerized
system. The next step, and really the last important step, was to get the public on board, to get other people to see that the future was ready.
The formatting is partly a way to appeal to a wider audience.
Like I mentioned, you don't really sit down and read this book cover to cover.
Computer Lib is best enjoyed in chunks, like a magazine.
You're presented with bite-sized articles. In Computer Lib,
these slowly explain what a computer really is, the current state of the field, and the current
issues in computing. Everything is presented in Nelson's non-academic style. It's conversational
and light while presenting big new ideas to the reader.
The hand-drawn figures and headers plus margin notes are weird formatting, but they all add to this.
In the introduction, Nelson says he's publishing the text in a draft state.
That could explain some of the handwritten notes, but I think that's just a bit of a cover story.
The entire book feels intimate and
personal. It's almost like it was a pile of notes handed to you by a friend. But the real meat and
potatoes, at least for us, comes when you flip the book over to Dream Machines. If Computer Lib
explains how computers are, then Dream Machines explains how computers should be.
It follows the same free-flow format, but it covers Nelson's vision of the near future.
This is where we can see a more mature version of Xanadu, and where the final pieces of this
strange text start to make some sense. Of course, a big part of Nelson's description of Xanadu, you could say his main
thesis for this book, is that printed text is a limited medium. Nelson's hypertext, just like
Bush's Memex, is designed as a means to store human ideas better than mere paper. Dream Machines
spends a lot of ink restating and explaining this idea.
It's the whole point of Xanadu, after all.
With that in mind, I think the weird contents of the text start to make a little more sense.
Computer Lib slash Dream Machines is a really good case for why hypertext is a better medium.
I mean, the book almost screams out for links. Every section has
some kind of context that could improve it. This isn't just a case of an oddly constructed book.
It's a book that should have been written in hypertext. I don't know if it's explicit or not,
but Nelson is really showing where paper starts to break down. At least, that's the feeling that I get the more I read and think about the text.
Okay, I swear I'm done analyzing the medium, so let's get back to the content.
By 1974, the world had seen ARPANET.
Networking was appearing on college campuses and in research labs, and Nelson took
note. Of course, there were earlier examples of networks, but with ARPANET, we see a nearly modern
system. This new technology was really a natural extension for Xanadu. Nelson's project was all
about democratizing data, finding a way to store, share, and transmit human thought. So networking fast
became a central feature deep inside his proposed system. Nelson's networked Xanadu was a pretty
simple idea. Machines running Xanadu would be connected to a larger computer network.
Then anyone could connect up and use the hypertext system remotely. Basically, he was talking World Wide
Web before the web ever existed. Nelson also realized how impractical this approach was,
at least how hard it would be to get it off the ground. From the text, quote,
But who will pay for it? To build the kind of capacity we're talking about, all those disks, all those
many computers in a network, won't it take immense amounts of capital? End quote. It's a fair point.
We've seen in other episodes how funding can really make or break a product, especially when
it comes to networked services. Nelson offers a few options for how to get around this. The first one, and
probably the most recognizable, is framing Xanadu as a pay-per-minute service. In this model,
Nelson explains that a user would only need to pay as they were using Xanadu. The user would
only need to connect up from a terminal in their own home or office, thus keeping Xanadu as a central
part of normal life. The other model Nelson proposes, and one that's a little more wild,
is the idea of Xanadu as a franchise. Now, I think this is partly inspired by Nelson's
mixed attempts to find a single source of funding. The general idea is that Xanadu would be funded by a
grassroots collective of small shops. Each would chip in a franchise fee to pay for equipment and
time on the network. Users could come to a brick-and-mortar shop to use terminals connected
to the Xanadu network. Of course, users would pay a fee to their local franchise. Thus, funding could come from a
large network of backers, each covering only a small chunk of a large overall cost. At first,
this may sound like a total miss, but we see shades of these models in the modern internet.
No one huge entity owns the internet; it's spread out across a network of smaller organizations.
Some are larger, but you don't have to be huge to become a player on the web.
The overall infrastructure and servers are owned by a wide cast of players. Single users pay ISPs
for access, and often pay more for access to specific services. It isn't really a franchise model,
but the internet has been able to grow so large
because so many people have chipped in on the large bill.
The simple fact is that Nelson's ideas were really prescient of the future
and, in a lot of cases, are still predicting an even further-flung utopia.
We don't have Xanadu embodied in any one modern system,
but pieces of it are in a multitude of programs.
With the current track record being what it is,
we should see more ideas from Project Xanadu making appearances in the future.
All right, that brings us to the end of today's discussion of Project Xanadu.
We've seen another one of the tendrils of As We May Think spread out. For Nelson,
early exposure to Bush's writing, coupled with a glimpse of a digital CRT display, drove him to carve out a radical new course. This led to his work on
hypertext. And while Nelson didn't outright create the new medium, he codified it in an enduring way.
He did come up with the name, after all. Even today, we have yet to see a system that fully
implements his vision of hypertext. But that's really just the micro-scale view. The big,
overarching reason Nelson chose to devote his life to hypertext was simple. He had seen a vision of
a better future. He was trying to build a utopia with the cornerstone made out of better data
management. It may sound silly, but this thread of hypertext and networking forms a
base for many utopian visions. Like I said, this is just the start to a larger series on digital
utopianism. I'm treating this like the Intel series I just wrapped up. Episodes in the series
will be spread out over time. Next episode's gonna be unrelated, but in a few months I'll be dipping back into the utopian waters.
I think this look at Xanadu gives us a good foundation to move forward. Next time, I'm
planning to touch back on some of the latter developments in Project Xanadu, examine its
shortcomings, and then get into what you might call a sibling system. Along the way, we'll
keep trying to work out why hypertext has this
strange place of privilege. Thanks for listening to Advent of Computing. I'll be back soon with
another piece of the story of the computer. And hey, if you like the show, there are now a few
ways you can support me. If you know someone else who'd be interested in the story of computing's
past, then why not take a minute to share the show with them?
You can also rate and review on Apple Podcasts.
And if you want to be a superfan, you can support the show directly through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to episodes, polls for the direction of the show, and bonus content.
You can find links to everything on my website, adventofcomputing.com. If you have any comments or suggestions for a future episode, then go ahead
and shoot me a tweet. I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.