Advent of Computing - Episode 79 - ZOG: Military Strength Hypertext
Episode Date: April 3, 2022

We're getting back to my hypertext series with a bit of an obscure tale. ZOG is a hypertext system that was first developed in 1972 at Carnegie Mellon University. It then stagnated until the latter half of the 1970s when it was picked back up. By 1983 it was cruising on a US Navy aircraft carrier. ZOG presents a hypertext system with some very modern notions. But here's the part that gets me excited: ZOG was developed after Doug Engelbart's Mother of All Demos. So, in theory, ZOG should take cues from this seminal event. Right? ... right?

Selected sources:

https://www.campwoodsw.com/mentorwizard/PROMISHistory.pdf - History of PROMIS

https://apps.dtic.mil/sti/pdfs/ADA049512.pdf - 1977 ZOG Report

https://apps.dtic.mil/docs/citations/ADA158084 - 1984 USS Carl Vinson Report
Transcript
Have I ever talked to y'all about trends and forces?
I know I've mentioned it, it comes up on the show with some regularity, but I don't
think I've ever sat down and given you the talk, as it were.
Now, I should preface this with the fact that I'm not a real historian.
If you are a real historian, please don't get too mad at me for butchering things.
There are two main ways to look at who has agency when
discussing history. One approach is the so-called great man theory, or maybe you should rephrase it
as the powerful people theory just to cast a wider net. The theory holds that change is exacted by
singular actors, that the agency to drive history forward is held in the hands of a select
few great people. Crucially, these great people are somehow different than us slovenly masses,
somehow gifted with a greater historical power level. You don't get America without George
Washington being born at the right time, as the right person, under the right stars.
You don't get the link unless Vannevar Bush is born at the right time, with just the right powers of agency.
There are a whole lot of easy criticisms to make about this view of history.
It does kind of presuppose that some class of superhumans exist, which that's pretty easy to refute.
The theory also implies that without these superhumans, history would not advance.
No Washington, no America. No Bush, no Link.
Not only is this a really boring way to look at history, I think it's plain to see that
it's wrong. The great man theory is especially bad for describing the history of technology.
Now, on the other side of things, we have trends and forces. While not really a codified theory
in the same sense, it is a common alternative. This is a more holistic approach
to history, that you have to consider a wider range of factors that lead to events. You don't
have some superhumans with extra agency and special power, you have a cast of actors that are,
broadly speaking, interchangeable. So-called great men are simply the result of historic necessity.
They were created by trends and forces. Now, I tend to fall into the trends camp,
and there's a big reason for that. The great man theory can't explain independent invention
of technology. What happens if you have two superhumans with plenty of agency to spare,
each living in the same era and each working on the same problem in isolation?
What if each of those agency holders arrives at the same solution?
What if they never knew the other existed?
Sounds to me like maybe they weren't influenced by some great person.
Maybe they were influenced by trends. I used the example of Vannevar Bush and the link deliberately.
You've fallen into another one of my sneaky traps. Would you believe that there's another
disconnected lineage of hypertext, one that does not start with Vannevar Bush
and As We May Think? Would that make you lose faith in superhumans?
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 79, Zog, Military Strength
Hypertext. This episode, we're heading back into the realm of hypertext and utopianism with
a somewhat obscure system. Today, we're talking about Zog, a hypertext system with a totally
arbitrary name. And I'm very serious here. Zog doesn't stand for
anything, and it's not short for anything. Primary sourcing verifies the name was actually picked
because it was short and easy to remember. Initial development of Zog started in 1972,
stalled, and then restarted in the second half of the 70s. By this point, a lot of the tropes
of hypertext were well in place, at least much of the technology that would create the internet was
already kicking around. ARPANET was pretty active, the mouse had made a public debut,
we even had early wireless networking in the form of ALOHAnet. As far as hypertext, we had already hit both NLS and FRESS, two early and
arguably influential examples of the art. So where was there to go? I think Zog represents an
important stage in the development of hypertext for a few reasons. Part of this has to do with
context. This is after the mother of all demos. Development of Zog starts
after the world had seen NLS. It starts after the world had seen the mouse. Or at least a lot of
people had seen NLS. Did this impact Zog's design? I mean, how could it not? There's also the matter
of personnel. One of the early contributors to Zog was Advent of Computing alumnus Allen Newell.
For those of you who are just tuning in, Newell was an early force in the development of the
field of artificial intelligence.
He was working in the field before it even had a widely accepted name.
During the 1950s, Newell co-developed a language called IPL, which would later influence
Lisp, the mother tongue of AI itself. So what was he doing touching hypertext?
And finally, we get back to the matter of funding. We've covered two main hypertext systems in this
series so far, NLS, the online system, and FRESS, which itself grew out of
an earlier system called HES. I know, nice and confusing. NLS had a series of backers and was
developed in a dedicated research group. The system was meant for widespread use. HES and
FRESS were developed at Brown University, funded by said institution, and used by, well, said institution.
ZOG, the nice silly named one, was funded by the Office of Naval Research.
Among other applications, ZOG would be used on an American aircraft carrier.
So, how exactly does hypertext fit in between bulkheads? How does that
happen in the first place? Those are the big questions that I want to examine this episode
while keeping an eye out for that utopian angle. That is, how did Zog further the betterment of
humanity? How did it help us be better? With that out of the way, I think it's time for us to link
out to the bigger picture. The story of Zog doesn't exactly start with Zog. Well, kinda.
The early days of this project get kinda convoluted, so I'm going to put that off for a
minute. Instead, we need to get into another hypertext system called PROMIS, the Problem-Oriented
Medical Information System.
And yeah, the R here and the P are both taken from problem.
That's how you can tell the programmers were, in fact, involved with this.
The PROMIS project started in 1967 as a simple grant proposal.
Use a database-like system to improve medical
record-keeping. What I think makes Promise interesting is that the database system they
wanted to create wasn't based on past computational research. In fact, the plan was to implement a
newly proposed medical note system using a computer. The system Promise wanted to adapt was,
quite fittingly, the Problem-Oriented Medical Records System, first described a few years
earlier by Dr. Lawrence Weed. Now, Weed wasn't the kind of doctor we usually talk about here
on Advent of Computing. He was actually a medical doctor. Nothing weird going on here.
He saw patients, he carried out medical research, and he went to medical conferences like a normal medical doctor.
The irregular part is the research.
Weed's research didn't concern some illness or new surgery.
He was interested in the more meta aspects of practice.
Weed's research was focused on finding better patient record-keeping methods. In 1964, Weed published Medical Records, Patient Care,
and Medical Education. Really a banger of a title. And while Weed may not have been a computer
programmer, the method he outlined in his 64 paper is surprisingly ripe for digital
adaptation. A main chorus in the paper is the fact that poor record-keeping practices were
common in the medical profession. Weed's biggest complaints were that records were often inconsistent,
missing, or inaccessible. Now, he outlines a number of factors at play here, the biggest one being complacency
and laziness. You see, doctors are like the rest of us when you get down to it.
The third problem in the list I think bears special mention. Inaccessibility here came in
a number of forms. As Weed explained, some doctors would keep records in such a personalized manner that they were useless to anyone else.
Other doctors simply wouldn't share notes when requested.
Some wouldn't open up their records to patients, even.
All these problems worked together to make medical records less than useful.
Weed didn't see this as some impossible issue, more a new frontier that had yet to be
examined. Quoting from the paper, much has been written in general terms about medicine as a
science and the doctor as a scientist. Very little is done in specific terms to audit physicians or
medical institutions on a routine basis to determine just how scientific medical practice on all the patients really is.
End quote.
So, what's to be done?
According to Weed, the answer is easily within reach.
Just apply the same rigor used in the scientific side of medicine to regular patient practice.
But hey, that wouldn't be enough to
make a paper. Weed goes on to propose a very specific system tailored to patient medical
records. He calls the system problem-oriented records. Central to Weed's system was, well,
some type of centralization. He argued that all patient records should be grouped together.
So if I went in for an x-ray, the results of those scans would get thrown into a big
Sean folder.
Get a blood draw, results are dropped into the same folder.
Same goes for if I called the doctor complaining about too much blood being drawn.
A note on that would go right into the growing Sean folder. From there, Weed
moves into a five-rule regime for well-formatted notes. To reduce some actually important research
to a few sentences, Weed's method is all about organizing and tracking data points. Humans are
messy. We're not always quantifiable, so Weed adds in some needed wiggle room for that.
The problem-oriented part is that Weed's notes were organized by specific patient problems.
Each problem was assigned either a name, a numbered header, or some way to track it,
which was then used to keep information about the overall problem and progress connected.
Let's say I had an earache from maybe editing too much bad audio.
If I went to see Dr. Weed, then he'd enter in that problem, that my ears ached, as problem number one.
In the next chunk of his notes, he would write up impressions, information gathered, and
data about my past history.
This would still be related to my earache, so it would be marked as related to problem one. Weed would continue on
in this manner for diagnosis, treatment plans, and any quantitative data. Then, let's say I come
back for my earache. Same problem, the issue number one is still used, so Weed would notate any changes
under that same section. One of the immediate benefits here is that these records would be
usable by any doctor. If I moved on or Dr. Weed retired to more digital life, then my medical
records could just be passed off to a new provider. Tucked inside is the full history of my earache, how it was treated, how it improved,
and even little graphs showing any numeric data that was collected.
It's clear, efficient, and readable.
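If it helps to picture Weed's scheme in more concrete terms, here's a minimal sketch of a problem-oriented record as a data structure. To be clear, Weed described a paper system, not a program; the class names and fields here are my own invention for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical names for illustration; Weed's 1964 paper describes a paper
# record-keeping method, not software.

@dataclass
class Note:
    kind: str        # "impression", "data", "treatment plan", ...
    text: str

@dataclass
class Problem:
    number: int      # the numbered header that ties notes together
    title: str
    notes: List[Note] = field(default_factory=list)

@dataclass
class PatientRecord:
    patient: str                                  # the big "Sean folder"
    problems: List[Problem] = field(default_factory=list)

    def add_problem(self, title: str) -> Problem:
        problem = Problem(number=len(self.problems) + 1, title=title)
        self.problems.append(problem)
        return problem

# Every visit about the earache files notes under the same problem number.
record = PatientRecord("Sean")
earache = record.add_problem("earache")                      # problem #1
earache.notes.append(Note("impression", "likely too much bad audio"))
earache.notes.append(Note("treatment plan", "edit less, rest more"))
# A follow-up visit just appends to problem #1 instead of starting a new file.
earache.notes.append(Note("data", "symptoms improving after one week"))
```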
With a discerning eye, we can also see that Weed's approach is really predisposed to a computerized way of thinking.
Let's just drop any time-period-based restrictions.
In 1964, Weed is talking about centralized data.
The numbering scheme, if we squint our eyes a little bit,
sounds like links of some kind.
You have some central page for a patient with links out to other pages for each problem.
Each problem could then link to impressions, treatments, subsequent visits, and so on.
It's forming a network of linked documents.
The kicker here is that Weed's approach is all based on uniform data.
Everything is kept following a set of rules.
Information is put into the proper little boxes.
Together, this is a very digital approach. So, perhaps it's no surprise that researchers would try to bring Weed's system into the computerized fold.
This brings us back to Promise.
The project was started by Jan Schultz during his PhD research,
and would bounce around campuses for a number of years.
It's most closely associated with the University of Vermont, but it actually didn't start at that college.
In 1966, Schultz was studying a mix of mathematics and computing at Case Western Reserve University.
That's where he met Dr. Weed.
Schultz would later recall the start of
this collaboration in an article published by the ACM. Quote, he was interested in exploring
the implications of automating the new organization of medical records. We decided to work together
for six months and see if during that time a long-term plan of collaboration could be worked
out. When we began our work,
other medical records research groups took the dictated or written words of physicians
and other medical personnel and manually entered them into a computer. We decided from the beginning
to interface the information originator directly to the computer and develop techniques and tools. End quote. So, early on, a second problem was identified.
Weed's initial work was all about data organization.
Schultz added in the wrinkle of data origination.
I actually really like the term Schultz uses here, information originator.
That is, the person where the data is actually
coming from. In the case of medical records, the information originator was the healthcare provider,
be them a doctor, a nurse, a radiologist, or anyone up the chain. The issue Schultz and Weed
identified with contemporary attempts to digitize records was this information gap between data originator and machine.
So why is that important? There are a few ways that we can take this. First off, it just slows
down the process. Imagine if you had to type through some intermediate party by explaining
what you needed written. You might first notice how slow everything is, but over time, more insidious issues become apparent.
You get inaccuracies that are made during this telephone game.
Your chosen intermediary may mishear or misinterpret what you want written.
Even if you're handing them a written page, they could still misread your handwriting.
They could misinterpret your directions. That
could be annoying if you were, say, writing an email. That could be dangerous if you were writing
a medical note. The trick would be developing some computer system that was easy to use, at least
one easy enough to be used by anyone connected to the medical field. Nurses, doctors, x-ray techs, researchers, and residents
are all going to have different levels of exposure to computers. In the 60s, chances are none of them
actually have much real exposure to computers at all, so the lowest common denominator has to be
pretty darn low. The next jump is where we run into a strange question.
Schultz would come up with a solution very similar to hypertext, but he doesn't call it by that name. Now, you know me as a hypertext
fiend, I'm all about connections. So the biggest question for me is if Schultz's work was connected
to any other early hypertext developments.
I think it's safe to say it wasn't influenced by Ted Nelson or Xanadu.
My logic here is that Schultz doesn't use the word hypertext, which is a Nelson original term.
At least, in my head, hypertext is a pretty quick and sleazy mark of some connection to Nelson.
I also haven't found any reference to Nelson in Schultz's writing either. The timeline here is also in my favor.
Promise starts development around 1966-67. Nelson published Computer Lib slash Dream Machines in
1974. That's kind of Ted Nelson's magnum opus of his early years. Prior to that, Nelson was working
sort of underground. At least, he wasn't making big waves. He would first use the term hypertext
in a paper in 1965. That paper would make it into the Proceedings of the ACM. But then we're
getting into this realm of speculation. Did Schultz or Weed see a single
paper in a proceedings publication in 1965? I just don't know. But did they follow up the
possible lead and talk to Nelson or use the word hypertext? On that, I can pretty soundly say no.
The next possible connection is our other big player, Doug Engelbart.
In 1962, Engelbart publishes Augmenting Human Intellect. In 68, he debuts his vision to the
world with the mother of all demos. Promise is nestled right between those two events.
Once again, I think that if there's any connection here, it's tenuous at best.
Schultz doesn't talk about augmenting humans in any explicit way.
He doesn't use the fancy keywords that Engelbart used to describe hypertext.
The closest I can get to a connection is J.C.R. Licklider, who is quoted in Schultz's 1988 History of Promise.
Licklider, through his work at ARPA,
seems to have helped fund both projects at different stages of development. So, at best, we have one degree of separation through a funding agency.
That's not really an ideological link.
With that, I think it's safe for us to say that promise is
another independent origin
point for hypertext.
At least, it seems that Schultz and Weed worked in relatively pristine conditions.
For me, that makes the details of Promise all the more important.
What are those details?
What did the system look like?
Well, it all revolves around the lowest common denominator interface that I
mentioned earlier. Promise was designed to use a touchscreen as its primary method of input.
Already, that sets it apart from Engelbart's mice and Ted Nelson's early attempts using light pens.
This choice also informs the final look and feel of Promise. Now, early on, Engelbart's lab determined that touchscreens just weren't for them.
The primary problem was one of ergonomics.
Think about using a touchscreen mounted perpendicular to a desk for any period of time.
To use the screen, you have to reach out with your arm and then use fine motor control to select options.
During a long session at a screen, your arm gets tired.
I mean, you're basically holding out your arm the whole time.
That fatigue limits how long a user can surf the screen.
This would lead Engelbart to discount touchscreens as a primary input device.
However, off in Promise land, things were different.
In my head, at least, this all comes down to how Promise was used.
Instead of being some general-purpose hypertext system,
Promise was focused around traversing so-called frames.
Each frame took up a single screen of text,
and contained information and links to other frames.
Promise itself, at least on the touchscreen side of things, wasn't designed with editing in mind.
Everything was built around jumping your way through a network of these frames.
There was also a focus on speed of operation.
That is, how fast frames could be traversed and how quickly a job could
be completed. Users weren't expected to spend all day browsing Promise with a single arm
outstretched. It was meant as a tool for rapid and convenient use. These two factors, the frame-based
nearly read-only nature of the system and its speedy operation made the touchscreen a
much more viable choice. In theory, a user would never reach the point of fatigue. The frames were
structured in a pretty familiar way. You see, to my eye, Promise looks like a really rough and
primitive web browser. The touchscreen display was split into three parts. The
topmost region held information about the current frame, things like its title and numeric ID data,
and sometimes messages generated by the current frame. At the bottom were some control options.
These menu options were dependent on the current frame. If you were in a series of frames, you
might have menu items for going forward and going backwards in the series, for instance. The middle region was where the
actual frame data was displayed. This was defined in something not entirely dissimilar to a markup
language. So we get this nice separation between internal data and the actual representation of that data that the
user would see. Promise's markup was handled by a program called the Selection Element Translator,
or SETRAN. This was, as near as I can tell, a pretty simple programming tool for defining and
editing frames. For markup to make sense, we have to talk about how Promise organized data.
Frames in Promise served two purposes. First was for entering medical record data, and second was
for browsing information. In their first use, a set of frames functioned like a multi-page form.
By making multiple selections and adding some optional data via the keyboard, healthcare workers could build up new entries in a patient's records.
Each click of a link was a choice used to fill in information about some healthcare problem.
Using the earache example, one frame might ask how long the patient has displayed symptoms.
That selection, let's say the link saying for a few days was touched, would then populate a field in a growing medical record. For these data entry
frames, each click not only advanced to another frame, thus serving as a link, but also sent some
data to the backend. Schultz made the system sound like a primitive database, one that was capable of
storing both numeric and textual data.
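Here's a rough sketch of how I picture one of those data-entry frames working: each touch both records a value in the growing record and names the next frame to jump to. The frame IDs, field names, and wording are all invented for illustration; this isn't PROMIS's actual format.

```python
# A toy model of a PROMIS-style data-entry frame: every choice writes a value
# into the patient record and links to another frame. All frame contents here
# are made up for illustration.

frames = {
    "earache-onset": {
        "title": "ONSET",
        "field": "onset",
        "choices": [
            ("for a few hours", "earache-severity"),
            ("for a few days", "earache-severity"),
            ("for over a week", "earache-severity"),
        ],
    },
    "earache-severity": {
        "title": "SEVERITY",
        "field": "severity",
        "choices": [
            ("mild", "done"),
            ("severe", "done"),
        ],
    },
}

def select(frame_id, choice_number, record):
    """Touching choice N fills in the frame's field and returns the next frame."""
    frame = frames[frame_id]
    text, next_frame = frame["choices"][choice_number - 1]
    record[frame["field"]] = text          # the data sent to the backend
    return next_frame                      # the "link" part of the touch

record = {}
current = "earache-onset"
current = select(current, 2, record)       # touch "for a few days"
current = select(current, 1, record)       # touch "mild"
print(record)   # {'onset': 'for a few days', 'severity': 'mild'}
```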
The second case, simply browsing information, worked the same way but just didn't send
information back to some central database. Markup and Promise provided the actual text
to render for each choice. That text could be accompanied with a one-way link off to another
frame. If the frame was used for data
entry, then things were a little more complicated. From what I've seen, Promise had a primitive way
to build forms. The one example that Schultz includes in his 88 paper with ACM is for
describing the onset of chest pain. The frame has the field at the top, onset, followed by a list of options.
Each option looks to be formatted like a link.
What's interesting to note is that the links are all coded as numbers,
so reading the SETRAN code isn't super useful unless you have some way to render it.
Now, here's where we get to what I think is the interesting part.
Schultz describes the overall set of frames as a network. Initially, PROMIS was only for entering and viewing medical records, but as the project
progressed, general medical information was also entered into the system. This allowed for a library
of medications and surgical procedures, for instance. Speaking on the benefits of the system, Schultz explained,
quote,
The user can operate from a universe larger than what can be remembered
and has only to recognize the correct selection and not recall it.
These features provide for a unique and very effective human-computer interface.
End quote.
I want to point out here, again, Schultz is operating outside of the hypertext lineage
we've covered so far.
But right here, Schultz is describing how this network of frames, this hypertext, can
be used to augment human intellect.
It's a tool to expand what humans are capable of,
to solve the information problem.
This is some classic Vannevar Bush stuff right here,
but Schultz never mentions As We May Think.
Weed and Schultz are facing down the same problems
that Bush was dealing with, namely the information problem.
Humans can only keep so much in our heads,
so how can we work around that? Bush's solution was hypertext. It was the link.
Schultz and Weed arrived at that same solution. We're looking at a case of independent development,
and I think that speaks to something profound about the soundness
of hypertext as a solution. The final piece I want to drop, and something that may be more
cool than important, is that Promise would become a networked application. As the project expanded,
it became convenient to move operations to a group of mini-computers instead of just one
big computer. So by the end
of the 60s, we're looking at a network of computers that worked together to serve hypertext in a
hospital. It's definitely a niche application, Promise is specifically tailored to this one use,
but I think it's interesting to see how similar this is to the modern net.
So that's one approach to hypertext
that would find some success. Promise lived out a relatively long life in a few hospitals.
But now, let's push forward to the early 1970s. At Carnegie Mellon, Allen Newell, Herb Simon,
John Hayes, and Lee Gregg, among a group of other researchers, were cobbling together their own system. This
project was called Zog. The name stood for nothing. The early days of Zog are a little obscure, but we
do have information about it from a single report on a June 1972 workshop at CMU. Now, here's the weird part. Zog wasn't initially a hypertext system, at least not in a traditional sense. You see, Zog started out life as a type of incidental software. So let's talk about workshops and why Zog came to be.
Workshops were one common form of collaboration during the early days of computer science.
I mean, how else are you going to get researchers from multiple institutions together prior to widespread networking?
In 72, the ARPANET was just over two years old and didn't quite reach every campus in the United States.
So you can't really just have everyone hop online and start a conference call.
At this point, it was almost possible, but we're just not quite there. So workshops were still the order of the day. There was a particularly robust tradition of workshops around artificial
intelligence and cognitive sciences specifically. This started way back in 56 with the summer
workshop at Dartmouth. That's where AI got its name, just for starters.
Similar collaborations would continue for decades.
Usually, things worked like this.
A college would decide it wanted to host a workshop over the summer.
They would draft up a plan, make a proposed guest list, and then apply for a grant to fund the whole thing.
Attendees would get paid something to make it worth their while, and funding would also be used for computing resources. During the
sessions, there would be lectures, demonstrations, and sometimes collaborative software development.
We're looking at something similar to a pop-up think tank. The 1972 workshop at Carnegie Mellon University was following a somewhat new approach.
The idea was to gather up the usual interdisciplinary group of AI nerds. We have
computer scientists and cognitive scientists. Instead of working on some collaborative project
or each participant lecturing about their own topic, the plan was to show off existing AI and AI-adjacent software.
As the summary report puts it,
The programs are all specific systems that do something of cognitive interest.
Further, they require substantial interaction since they can be specified to do a variety
of interesting tasks within their general domain.
End quote.
Newell, Simon, and their collective colleagues selected seven programs to be featured. Almost all of those had been developed
at CMU with one program from off-campus. In theory, this would be a way to introduce interested
researchers to some of the latest software-based exploration of the human mind. Hopefully, this would spark some new ideas and help move the field of AI forward.
However, there was a bit of a hiccup with this plan.
You see, these workshops tended to be very interdisciplinary.
One of the issues that introduced was that not everyone in attendance
would know how to use a program, or even how to use a computer.
And the interactive
component of these programs used, well, a computer terminal. So how do you get all attendees up to
speed in a relatively short amount of time? There was an all-encompassing initiative in the lead-up
to the 72 workshop to try and at least mitigate this issue. On the lower levels, the seven programs
that were going to be featured prominently had to be whipped into shape. Better fault tolerance,
better error messages, and better documentation were all a must. Software improvements were
actually factored into the workshop's budget, which I just think is neat in a planning kind of way.
Moving up a level is where we run into Zog. The staff at CMU built Zog to be the digital hub of this specific workshop. It would
contain all documentation as well as facilities to run outside programs. That would all be wrapped
up into a simple unified interface. This would shield attendees from a normal command
line and hopefully make things run a little more smoothly. The weapon of choice was a structured
and connected network of data. Crucially, the workshop team didn't have funding for custom
hardware. Touchscreens are just out, so are graphics displays. Zog was all text and all keyboard taps.
From this, a user was able to traverse a tree of data.
Each node on this tree represented a section of software documentation,
and each section could optionally have subsections.
The interface here is dead simple.
You start at the root node, which describes how to use Zog itself.
From there, you can list subnodes, each with a number. To enter that node, you just type the
number in question and you're there. Once again, we've landed at the big bad T word. I swear,
I've been trying to get away from talking about trees and AI, but I guess I've just kind of hit a weird rut lately.
All I want to point out here is that trees, a fancy form of lists, are crucial in artificial
intelligence. They are an important data structure for a lot of early AI software. Specifically,
Newell and Simon were big tree enthusiasts. Call them arborists, if you will. Back in the late 50s,
they designed a language called IPL, which was built to support list processing and, by extension,
tree data structures. So a tree popping up here at least rhymes with the work surrounding Zog.
On a foliage-free note, the tree structures here are built out to somewhat
resemble the table of contents of a book. The 72 Report calls it the Zog Index, but it's a lot more
like the table of contents. Maybe I've just become accustomed to this form a little too much from
typesetting. I think this bears mentioning because it shows how, despite a digital coat of paint,
I think this bears mentioning because it shows how, despite a digital coat of paint, Zog was mimicking a physical medium.
In fact, the Zog tree was also presented in physical form.
A backup printout was made, which was called, quite fittingly, the Zog Book.
You just gotta love these names.
Zog also had built-in facilities for editing the Zog tree itself.
This part of the program was called BZog for Builder's Zog,
and it used the same interface as the rest of the system.
Expanding the tree was simply a series of menu options away. Now, the 72 report is scant on detail, but I have a radical guess about how BZog worked.
I think it used the final big feature of the overall system. A zog node could execute outside code. Like I mentioned earlier, each zog
node contained a few key chunks of data. It had a blob of text, optional children, and an optionally associated action. At a press of a single key, the
exclamation point key, that action could be executed. The idea here is that a workshop
attendee could read a section about how, say, the memory and perception program works,
and then they could launch right into a demo of the actual program by just hitting a single key.
If this action feature was implemented well, I could see it being used in BZOG.
That way, everything is expressed in the ZOG tree itself,
but that part is just my programmer's intuition speaking.
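To make that concrete, here's a minimal sketch of the Zog-1 style interaction as I read the 72 report: a node has a blob of text, numbered children, and an optional action fired by the exclamation point key. The node contents and the demo callback are invented for illustration, not taken from the report.

```python
# A minimal sketch of a Zog-1 node and its interaction loop: text, numbered
# children, and an optional action tied to the "!" key. Everything concrete
# here is a stand-in.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class ZogNode:
    text: str
    children: List["ZogNode"] = field(default_factory=list)
    action: Optional[Callable[[], None]] = None   # fired with the "!" key

def browse(node: ZogNode) -> None:
    while True:
        print(node.text)
        for i, child in enumerate(node.children, start=1):
            print(f"  {i}. {child.text.splitlines()[0]}")
        key = input("> ").strip()
        if key == "!" and node.action:
            node.action()                          # launch the associated demo
        elif key.isdigit() and 1 <= int(key) <= len(node.children):
            node = node.children[int(key) - 1]     # descend the tree
        elif key == "q":
            return

root = ZogNode("ZOG index: how to use ZOG", [
    ZogNode("Memory and perception program",
            action=lambda: print("[launching demo...]")),
    ZogNode("Another featured program"),
])
# browse(root)   # uncomment to walk the tree interactively
```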
To put a nice bow on it, the team even had a name for the person responsible for editing the Zog tree.
They were called The Forester. This doesn't add anything to the technical details, but I just really like the name. Does your tree need some changing? Well, you better contact
The Forester. Zog would help make the 72 workshop a success.
The actual report puts it better than I can.
The idea of the Zog interface proved successful.
Essentially, all the participants were on the machine within the third hour after starting the workshop on Wednesday morning.
Several had never been online with a large system before,
but Zog provided ways in which small responses by the participant led to meaningful responses by Zog,
which then permitted interaction to explore the Zog tree and execute a program or two.
End quote.
Zog was able to lower the barrier to entry.
Instead of just dropping participants on a big fancy computer,
this system gave them
access to a subset of the machine. It provided a safe space for attendees, and it directed them
towards meaningful experiences. I think this brings us to a question. Was this early version
of Zog really hypertext? I think this is something that we need to grapple with for a few reasons.
First of all, I mean, I'm trying to chronicle hypertext here. Ancillary tech is cool, but that
can bog us down. And secondly, this is the second system I've hit this episode that is unrelated to
existing hypertext narratives. That disconnect matters because, like I mentioned
earlier, it may speak to something fundamental about hypertext. So we have to be careful about
calling a goose a duck here. Let's look at what Zog's actually doing. At the most basic level,
the program is providing a set of data, and that data has explicit connections between each item.
That, dear listener, is a link. We could call it a hyperlink. I think that's enough of a qualifier
on the technical side, but Zog also falls in line with the intent of other hypertext systems.
NLS, HES, and FRESS were designed to allow easy access to more information
than a person could remember. That's a design goal that Promise shared, and while Zog wasn't
explicitly planned to fill this role, I don't think it's a stretch to say that it did fill this role.
Sure, in 72, Zog was only planned as a small-scale system for one workshop,
but it has all the hallmarks I look for in a hypertext system.
Then what happened to this quirky little program after the summer of 72?
Well, nothing, it seems.
This is where we run into a bit of a timeline issue.
You see, there was some kind of connection going on between Promise
and Zog. When it started is a little bit ambiguous. This interchange of ideas may have started as
early as 1972, which means that the first pass at Zog may well have occurred right before contact
with Promise. However, I don't have any hard dates here, so we have to guess a little.
But we can safely say for sure that Newell and one of his colleagues, George Robertson,
came into contact with Promise at some point in the early to mid-70s.
Going back to the 1988 History of Promise article from Schultz,
quote,
Our funding agency was devoting a large percentage of their budget to our development efforts
and wanted to be assured that we were up to the task. An advisory committee was put together to
oversee our development. Schultz continues, Newell and Robertson had previously developed
a menu selection system called ZOG, which sensitized them to our work. They applied the Promise
interface ideas of a rapid response, large network system to ZOG." The most simple story is probably
the most accurate. Newell and Robertson had just come off the summer workshop at CMU, or at least
the workshop was in the recent past. They were called up to sit on an advisory committee for Promise.
When they got back to their home campus, they brought some dangerous new ideas with them.
The Zog Front stays quiet until 1977.
At least, that's the next time I can find a paper trail.
That year, a report called Zog, A Man-Machine Communication Philosophy,
is written by Robertson, Newell, and Kamesh Ramakrishna.
This is probably a good time to address the matter of authorship. We already know Newell,
he was a public enough figure at this point. Robertson and Ramakrishna are a bit of a blank
for me. Robertson had been involved with the 72 Cut of Zog. I have seen a few short bios under
the same name for a programmer who taught at CMU, then moved to Xerox, and after a few more jumps
now works at Microsoft. They mention hypertext, but not Zog. Regardless, we aren't getting anything
directly from Robertson about Zog. Ramakrishna fits a similar mold. I can
find other papers he authored and co-authored. In 1981, he published a thesis related to Zog,
so my best guess is he was a student involved with this second pass of the Zog project.
To nicely tie things up, we get back to Newell. He has a much larger paper trail than Robertson and
Ramakrishna combined. However, I haven't been able to find any interviews with Newell that address
Zog or hypertext, so we're kind of left in the dark about some of the finer details here.
All answers have to come from published reports and published scientific articles. Luckily, those are in
abundance. I just really wish the narrative had a bit more color to it. Anyway, getting back on
track, what's the deal with the 77 Zog report? What's with the fancy name? There's actually a
whole lot to unpack here. The 77 report outlines more than just software. It actually comes closer to an experimental design paper.
The authors are clear about their inspiration.
In fact, Promise is the centerpiece of the entire study.
This is what's kind of interesting to me.
The paper is setting up a study to see if Promise is actually useful, or put a fancier way, quote,
Though they have designed a system running for almost five years now, its impact on the
development of man-machine interfaces generally has been minuscule. We do not know of another
system with the same essential features. The paper continues, We decided to attempt to extract the scheme from its habitat in the PROMIS application
so as to study and exploit it as a general communication interface.
Our goal is to find out whether this interface does indeed have the potential it appears to have,
to demonstrate it, and to study its parameters in order to understand and optimize it.
End quote.
So right there, we have some fun details.
The Zog team, or I guess the Zog2 team if we want to be picky,
doesn't have exposure to other hypertext systems.
They saw Promise, thought it held some...
Promise.
And wanted to generalize and test that system.
As near as I can tell, Zog2 doesn't take any code from Promise.
It's simply inspired by the earlier software.
But Robertson et al. aren't just copying Promise wholesale.
They're copying it with a very critical eye.
They want to find out if Promise is a good idea,
or if it just looks like a good idea.
I think that's kind of unique.
There's also the matter of funding to touch on.
You see, ZOG was now a federally funded project.
We have a rogues gallery of support here.
The Office of Naval Research and DARPA both put up money, and Zog's development
was monitored by the Air Force Office of Scientific Research. So, accordingly, most of the papers
on Zog come from progress reports sent to the U.S. Air Force. At first glance, this might sound
like a strange arrangement, but this is something I've run into pretty frequently with federally funded research. Often, there will be multiple agencies chipping in money. Think of it
as tax-backed investors. Usually, one agency will technically administer or monitor the project for
the sake of simplicity. The net result is a complex-sounding funding statement that works out to, this project was paid for
and administered by the American taxpayer. Now, more specifically, why was Zog federally funded?
I think this is a fun question because it ties into a lot of other research. From the 50s on,
the feds have been interested in finding better ways for humans to interface with these
new computer things. The canonical example is always SAGE, because that was an early and massive
project that saw actual use. SAGE was funded by a number of sources, but the US Air Force was
nominally in charge. NLS, and broadly speaking, Doug Engelbart's research, was also federally funded, with money coming from a similar web of sponsors.
While Zog's shady overlords may seem unique, well, that part isn't too strange.
So, what was Zog actually like?
In general, it was just a ripoff of Promise, but there are some features that differentiate it.
Instead of doing a full point-by-point compare and contrast, I'm going to examine the 77 Report's basic features, or at
least what the Report claims are Zog's, quote, basic features. I think that will give us a good
outline of what made Zog, Zog, and how it compares to other hypertext systems. This is a 10-item list,
so not every feature is going to be covered in depth,
but it's worth at least touching on what the Zog team thought was important to the system.
Kicking things off, we have one.
Rapid response.
For Zog, rapid response means that new frames should load instantly,
or at least fast enough that the monkey hammering keys doesn't
notice any loading time. This is actually an idea that I've seen come up a lot in early human
interface projects. In educational systems like PLATO, it's called instant feedback. The core
idea is that humans tend to get easily distracted or frustrated, so you have to offer fast and seamless interactions. It also speaks
to this idea of the digital illusion, that you don't have to actually be instant, you just have
to be faster than a human can perceive. You can easily pull that trick on us flesh folk.
What's interesting is that the Zog team actually puts numbers to how fast instant is.
Promise also emphasized this idea of rapid feedback with numeric measures.
From the 77 Zog report,
How fast instantly must be in seconds is not fully known.
Promise operates at 0.25 seconds 70% of the time.
It is not likely to be much slower than this. Zog is targeted
at 0.05 seconds 70% of the time in order to permit exploration of this parameter. End quote.
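A target like a quarter of a second, 70% of the time, is really just a percentile cutoff. Here's a small sketch of how you might check a set of measured response times against that kind of target; the sample timings below are made up.

```python
# Check whether measured response times meet a target of the form
# "X seconds, 70% of the time" -- i.e., at least 70% of responses land at or
# under X. The sample timings are invented for illustration.

def meets_target(samples, threshold_s, fraction=0.70):
    within = sum(1 for t in samples if t <= threshold_s)
    return within / len(samples) >= fraction

timings = [0.04, 0.06, 0.03, 0.20, 0.05, 0.04, 0.07, 0.03, 0.05, 0.30]
print(meets_target(timings, 0.25))   # the PROMIS-style 0.25 s target: True here
print(meets_target(timings, 0.05))   # ZOG's tighter 0.05 s target: False here
```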
I've never thought of load times as a free parameter, as something that could even be
explored. I think this is speaking to the root
of what this phase of Zog was all about. The team is trying to critically assess promise, to see
what they can do with the system. Item 2 is in a similar vein, simple selecting. Once again,
this is a really basic thing, I don't know if I'd even call this a feature per se. But hey, Zogland is
all about assessment, so let's just go with it. Zog's interface differed slightly from Promise.
In Zog, you still have the same rough layout. The top line of the screen gave some frame
information, the large middle section had the actual frame, then the bottom few lines had some menu options.
A difference is that Zog could be driven 100% with the keyboard or a touchscreen.
Each item in a frame, each link, had a number, so you could touch the link on the screen with your finger, or you could just as easily use that same finger to hit the number on the keyboard.
Each menu item also
had a hotkey that you could use, so the user was given options. I think this would also mean that
Zog wouldn't need a touch-equipped terminal to operate. As with Rapid Response, simple selection
was intended to make Zog more engaging and easy to use. I think the design philosophy goes a long way here.
We can stack this against the biggest hitter of all, NLS, or the online system. NLS did not have
simple selection. You had to call up the command to jump to a link and then select a link as an
argument. Those are added steps that, for a lot of users, you don't need. Like with a lot
of features, this simplicity is borrowed straight from Promise. It's simple, easy, and it may be a
little bit limited in some cases, but for most users, simple selection is an easy win.
On to item three, large networks. This is where we hit another one of those unexpected free variables,
that is, how many frames do you need before Zog is useful? You could also rephrase that as,
how many links are needed to make hypertext viable? That's a really important question,
and one that's easy to gloss over. Luckily, the Zog crew comes in with the hard numbers yet again.
By 77, Promise was seeing actual use in at least one hospital.
So that could count as a quote-unquote useful hypertext system.
According to the report, Promise had 30,000 frames at this point,
with planned expansion up to around 100,000 frames.
So Zog is targeting somewhere in that neighborhood at least. The trade-off here is that
as the number of frames increases, response times, at least in theory, could degrade, so a balance
had to be found. That's why the size of the network is actually a free variable subject to testing.
Items 4 and 5, frame simplicity and transparency, are smaller points, so I won't go too deep into them.
Basically, Zog was targeting uncluttered frames with easy-to-digest data.
The transparency aspect is just that the system should be easy to understand,
or, as the report puts it, it should, quote,
appear completely controllable and non-mysterious.
You want hypertext to make sense and not be magic.
Easy. Item 6 is where we start getting beyond Promise. The communication agent. Now,
that's kind of a bad name, but I can't think of a better way to put it. The basic idea here is that
Zog can function as a communication agent between outside software or data sources.
This is a callback to Zog1, where nodes could have associated actions. That feature had just
been adapted and fleshed out for the follow-up. Zog is able to collect information from the user
to pass to an external program, run that program, and then present any outputs or results as
frames of new data.
In that sense, Zog could be used as the front end for really any other program, at least
within reason.
This specific feature kind of tripped me out for a bit.
The immediate use here is that Zog can help provide a consistent and centralized interface
for software.
That's cool and all.
It can go a long way towards making powerful programs more accessible.
What really got me is that this communication agent idea
is similar to how server-side software and browsers interact today.
In a modern web browser, you can click on a link
that will fire off some code sitting somewhere on a server.
That program exists outside your browser, or outside your frame if you like. It will run,
make some outputs, and then those outputs are shaped into something that can be rendered back
in your own browser. Just like in Zog, more complex actions are handled by a combination of front-end interface and back-end power.
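Here's a loose sketch of that communication agent idea: a frame's action runs some outside program, and whatever comes back gets wrapped up as a new frame of text. The command and the frame format here are stand-ins for illustration, not Zog's actual mechanism.

```python
# A loose sketch of the "communication agent" idea: the front end gathers
# input, hands it to an external program, and turns the output into a new
# frame. The command used here (the system's own `date`) is just a stand-in.

import subprocess

def run_action(command):
    """Run an outside program and package its output as a displayable frame."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "title": "Output of " + " ".join(command),
        "body": result.stdout.strip() or result.stderr.strip(),
        "links": [("back to previous frame", "previous")],
    }

frame = run_action(["date"])
print(frame["title"])
print(frame["body"])
```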
On to 7.
Subnet Facilities
This is another quick one.
Everything in Zog is organized into a hierarchical network of frames.
You can label a chunk of a network as a, quote, subnet to make navigation more convenient.
Item 8 is a good deal more interesting.
Personalization. As the 77 report
puts it, quote, the user can modify and augment the network to suit himself. The goal of transparency
by itself probably requires that the user be able to make small changes easily to any existing
network, representing his own understanding and preferred way of dealing with the material of the net. End quote. Now, they use one of my favorite words here, augment. I think it's telling that
they're using it differently than Engelbart would. To Engelbart, hypertext systems were a way to
augment human capability. His thinking verged on human-computer symbiosis. A computer would become
an integral part of our daily life. Hierarchical trees of data could be continuously adapted to
better suit humans throughout that process, and to help expand what you could think about.
But Zog took a different approach. To Zog's creators, hypertext was a tool to be leveraged. True, it would increase what
intended to change the game. Zog was intended to be a better screwdriver. To that end, Zog had to
be personalizable. A user needed a way to tweak and add to the network, to augment
the network with new data as they saw fit. This was done in a similar way to Zog1. The new Zog
had a frame editor built right into the program. Once again, this is in line with contemporary
systems, just developed in isolation. NLS combined viewing and editing into a unified interface, and so did Zog
in its own way. It just seems like that was the gold standard from the start. And when you think
about it, this is probably the best way to go. If you can edit Zog within Zog, that will save a lot
of time. You don't have to drop out to some other interface you may not even be familiar with. You can just stay in the nice, fast, and simple land of Zog.
Approaching the end, we have item 9, external definition.
This is where networking finally enters the picture,
and we see a more fully server-client type of vision.
The actual data held within Zog, what the authors called the Zognet, had to be portable.
The data structures were represented in such a way that an external program, be that another Zog
instance or some other software, could grab frames or even surf the entire Zognet. So, at least in
theory, it would be possible to have third-party software that could traverse ZogNet by accessing
data served by Zog. That's starting to sound a lot like a modern network. And finally, we hit item 10,
uniform search. This is another one that I don't have much to say about. A core requirement was
that Zog frames be searchable in some reasonable way. It's a nice feature that makes life easier.
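Items 9 and 10 together suggest something like the following: if frames live in a plain, portable representation, any outside program can load, traverse, or search the Zognet. This is a sketch only, with JSON standing in for whatever representation Zog actually used, and with invented frame contents.

```python
# Sketch: frames in a plain, portable representation that any outside program
# could load, traverse, or search. JSON is a stand-in -- the report doesn't
# say what ZOG's external format actually looked like.

import json

zognet = {
    "root": {"text": "Ship's manual", "links": ["flotation", "elevators"]},
    "flotation": {"text": "Life vests are stored in compartment 2B", "links": []},
    "elevators": {"text": "Weapons elevator maintenance training", "links": []},
}

serialized = json.dumps(zognet)            # hand this to any other program

def search(net, term):
    """Uniform search: return the IDs of frames whose text mentions the term."""
    return [fid for fid, frame in net.items()
            if term.lower() in frame["text"].lower()]

print(search(json.loads(serialized), "life vests"))   # ['flotation']
```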
Now, I'd also like to drop my own core feature of Zog. Call it Item 11. Zog is kind of a
programming language itself. This is something that comes up repeatedly in the 77 report in
a number of ways. On the surface level, Zog had its own language for representing data in the overall Zognet.
It's similar to the simple markup language used by Promise.
But on a more meta level, Zog itself can be looked at as a language.
Think about it this way.
Each frame encodes a choice the user can make, and each choice has some outcome, some consequence.
It can fire off a command,
generate some data, or lead to another frame. By building up a network of these frames, you are,
in effect, developing a program. That affords Zog a lot more flexibility than one would first suspect. Development of Zog at Carnegie Mellon would continue for about six years, all the while
working off this general outline. During this time, a number of studies were published about the effectiveness of the
system. Remember, at this point, Zog was all about testing out if Promise could be generally
applicable. I'm going to skip all this hard work because there's a more shiny object just in the
distance. After six years of development, in 1983, Zog moved into field
testing aboard the USS Carl Vinson, a nuclear-powered aircraft carrier operated by the U.S. Navy.
I think the immediate question is clear. What is Zog doing on an aircraft carrier?
We actually have a really good and really satisfying answer for this.
The Navy was facing down the information problem. A report of these events was compiled in 1984,
and this is a massive report, it's almost 300 pages. It goes over the entire test run of Zog
in deep detail, and it kicks off with this juicy tidbit. Quote, The Navy is facing the problem of managing and operating ships that are extremely complex
and sophisticated.
Multiple interrelated weapon and sensor systems place a great demand on the information processing
capability of senior shipboard personnel.
The complexity of shipboard evolutions are taxing not only day-to-day decision-making,
but also long-range planning activities of management personnel.
End quote.
Big boats were getting more and more complicated.
There was just so much information on ships.
The information had reached a critical mass where a single person just couldn't know it all.
The 84 report doesn't use the word, but this is 100% the information problem that Vannevar Bush described way back in the 40s. The solution arrived at, or
at least the attempted solution, was to pull in some type of computer automation.
In 1982, during the construction of the Carl Vinson, the ship's executive officer found out about the Zog project.
This hypertext-like system sounded like a good fit for the Navy's needs, and it was already funded in part by the Navy.
Some small changes were made to the ship's design, and new plans were laid out.
The Carl Vinson was scheduled to depart for an initial cruise in 1983, and it would bring a prototype Zog network along for the
ride. The word prototype here is crucial. The cruise would act as a nice excuse to test Zog
in the field, to see if it could actually solve any issues or would even be used. That's the core
question at hand in this naval report. How much will Zog actually be used? It turns out that Zog was
predisposed to this kind of research, probably by design as we've seen. The Navy didn't want to be
constantly pestering sailors to check how they felt about Zog and, oh, have you logged in today?
How many frames have you seen in the past 24 hours? No one wants to answer those questions,
especially when you're doing a job.
Instead, they just had to follow the trail of data left behind by interactions with the system.
Most data collection was already automated,
with some in-person interviews conducted to augment that dataset.
The initial installation itself was on a pretty grand scale.
The Carl Vinson was tricked out with Ethernet wiring embedded into the ship's hull. This, in effect, turned the carrier into its own isolated network. While not
entirely ARPANET, this kind of technology was very much adjacent to ARPANET's development.
So we're seeing the impacts of the larger network on this ship. The actual power of the network was composed of 28 PERQ minicomputers.
Now, this is where I have to make a real quick diversion.
I'd feel bad if I didn't.
PERQs are fascinating machines.
They were developed by Three Rivers Computing Company,
which was founded by a group of researchers, incidentally,
connected to Carnegie Mellon's computer science department.
These were early graphics workstations modeled very closely after machines built at Xerox.
A PERQ looks eerily similar to a Xerox Alto computer. The PERQ sports a full graphical
environment with windows, icons, menus, and pointers. They can even be used with a mouse
or a graphics tablet. So we're dealing
with something that looks modern-ish. This matters because the new platform affected Zog.
The hypertext system was initially developed on DEC hardware back at CMU. You'll note that DEC
and Three Rivers computers aren't compatible. Different companies, different machines. So Zog must have had to be rewritten,
at least somewhat. The upside here came in the form of new features. PERQ workstations use a
vertical high-resolution display. They're roughly the size and shape of a piece of printer paper.
Thanks to this strange display layout, the PERQ version of Zog was able to display two frames at once, one on the top half
of the display and another on the bottom half. In practice, users could traverse two parts of
the Zognet at once, or could keep a reference frame open while delving deeper into some subnet.
The other fancy new feature was Ethernet support. Zog at CMU had some networking capabilities, but on the Carl Vinson, those
capabilities increased. The entire Zognet used on board was spread across the ship's PERC computers.
Now, this was slated to be 28 machines, but that number fluctuated. Some machines would break down,
others didn't see actual use, so in practice something approaching 28 computers made up the network.
What we are seeing here is a tiny distributed hypertext network prior to the modern internet.
There were also downsides to this new platform. The biggest problem was the work required to get Zog up and running on the Carl Vinson's systems. This ended up being enough of an issue that it would mess up
the project's schedule. Zog was up and running by the time the Carl Vinson left for its first
cruise in the Caribbean. But the networking stack, that still needed some work. So programmers from
CMU ended up staying aboard to finish some code. The fun part here is that these programmers got to
work on a nuclear-powered ship. I just
think that's cool and it also speaks to the dedication to making Zog work. Zog was fully
working by the time the USS Carl Vinson left the Caribbean. From there, the carrier made a full
circuit around the world. All the while, Zog was shooting data around its hull. Specifically, it was managing planning and
training. From the 84 report, quote,
When the project was initiated, three shipboard functions were targeted for using the ZOG system.
Planning and evaluation, the ship's organization and regulations manual,
and weapons elevator maintenance training. End quote.
All these required documents were converted into
frames. Evaluation forms became trees similar to the patient evals of the old PROMIS system.
An interesting side benefit was the fact that data held in ZOG could be really dynamic.
The report points out that it was common for changes to be made to the organization and
regulation manual while the Carl Vinson was getting underway. And it makes sense, you have a new ship hitting the
open ocean for the first time. Some finer details might need shifting as real-world experience is
gained, or hey, maybe the life vests need to be moved to another compartment for easier access.
Zog made these operational changes easy to handle. Let's say you did move those life vests.
Now you just have to head over to the nearest computer, get to the section of the ship's manual
on flotation devices, and make a quick edit to that frame. Because of this, the ship's manual,
call it the documentation, could always be up to date. That not only saved time, but would help the Carl Vinson operate better.
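To make that a little more concrete, here's a rough sketch of what a frame-based manual could look like as a data structure. To be clear, this is just my own illustration in Python. The field names, frame names, and compartment labels are all made up, not ZOG's actual format.

```python
# A rough sketch of a frame-based manual, written for illustration only.
# The field names, frame names, and compartment labels are made up;
# this is not ZOG's actual data format.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str                   # unique identifier for the frame
    title: str                  # heading shown at the top of the screen
    text: str                   # the frame's body text
    selections: dict[str, str] = field(default_factory=dict)  # menu label -> target frame name

# A tiny two-frame slice of a hypothetical ship's manual.
zognet = {
    "safety-index": Frame(
        name="safety-index",
        title="Safety Equipment Index",
        text="Pick a topic below.",
        selections={"Flotation devices": "flotation"},
    ),
    "flotation": Frame(
        name="flotation",
        title="Flotation Devices",
        text="Life vests are stored in compartment A.",
        selections={"Back to index": "safety-index"},
    ),
}

# Moving the life vests means editing one frame in place.
# Every link pointing at that frame still works, and the next
# person to pull it up sees the current information.
zognet["flotation"].text = "Life vests are stored in compartment B."
```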
The final blue water use of Zog that I want to touch on is a program called AirPlan.
This was another system developed by CMU for use by the US Navy, but it wasn't a hypertext system.
Airplan was what's known as an expert system. This is something I haven't really gone into detail on the podcast before,
since it's a little nebulous.
Basically, an expert system is designed as an aid for human decision-making.
It's kind of augmenting human intellect adjacent.
The actual decision-making can be done in a number of ways,
ranging from artificial intelligence to just big databases and fancy
equations. In practice, a human operator plugs in information, some wheels spin somewhere aboard
the ship, and the expert system outputs a plan of attack. AirPlan was designed specifically for
managing the small fleet of planes aboard the Carl Vinson. Operators would plug in information
on planes, planned flights,
fuel levels, and other relevant parameters. AirPlan would help with scheduling, planning
landing sequences, and just generally managing flights. It was a big program that required a
lot of power. On the Carl Vinson, five PERQs were dedicated just to AirPlan's operation.
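I can't show you AirPlan's actual internals, but here's a toy sketch of the general shape of an expert system: structured inputs go in, some rules fire, and a recommendation comes out. Everything here, from the single rule to the callsigns, is invented for illustration.

```python
# A toy rule-based sketch of what "expert system" means in general.
# This is not AirPlan; the rule and the data fields are entirely made up.

def advise_landing_order(aircraft):
    """Rank aircraft for recovery: lowest fuel lands first,
    with anything reporting a fault bumped to the front."""
    def priority(plane):
        fault_penalty = 0 if plane["fault"] else 1
        return (fault_penalty, plane["fuel_lbs"])
    return [p["callsign"] for p in sorted(aircraft, key=priority)]

# The operator "plugs in information"...
inbound = [
    {"callsign": "Tomcat 101", "fuel_lbs": 4200, "fault": False},
    {"callsign": "Intruder 502", "fuel_lbs": 2600, "fault": False},
    {"callsign": "Hawkeye 600", "fuel_lbs": 6000, "fault": True},
]

# ...and the system outputs a plan of attack.
print(advise_landing_order(inbound))
# ['Hawkeye 600', 'Intruder 502', 'Tomcat 101']
```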
So here's where Zog comes in, and here's where I think Zog really shines. One issue with AirPlan was figuring out how to leverage it as a resource
efficiently. By that, I mean how to get the most out of all its number-crunching and planning
potential without overloading or, you know, crashing their computers. It's a fine line to walk. Being on an aircraft
carrier means that flights are kind of a big deal. You want everyone on board to be on the same page
about who's taking off and who's landing. But you also don't want everyone running queries and firing
new jobs all the time, since if the system goes down, well, that could lead to a dangerous situation.
The solution was to leverage Zog.
As AirPlan generated new plans, those were converted into frames and then added to the larger Zognet.
That way, anyone could reach a PERQ terminal, hop on, and get the latest information from AirPlan.
Crucially, these plans were pre-generated.
Pulling up a new frame, that doesn't connect out to AirPlan,
so the expert system wouldn't actually be running.
You're just viewing outputs.
In the biz, we call this caching.
It's a common approach to minimizing server resource utilization.
If you have a dynamic file that users pull often
but isn't necessarily
updated all that often, you can cache it. Just generate a normal boring file on a regular
interval and leave it there for anyone who wants it. This is often done with podcast feeds, for
instance. On my feed, I post new episodes every two weeks, but podcast apps will check my feed a
lot more often than that. So to limit resource
utilization, the file just gets updated every two weeks instead of every time a listener goes to
access it. It's smart design, but it's not new. Zog, despite being somewhat isolated from the
larger hypertext conversation, was already hitting a lot of the big-ticket items that we've come to expect.
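If you want to see the shape of that pattern in code, here's a minimal sketch. It's not the shipboard implementation or my actual feed setup, just the generic regenerate-on-an-interval idea, with hypothetical function names.

```python
# A minimal sketch of the caching pattern described above, not actual
# shipboard code. Names like generate_plan_frames() are hypothetical.
import time

CACHE_TTL_SECONDS = 15 * 60   # regenerate at most every 15 minutes
_cache = {"frames": None, "generated_at": 0.0}

def generate_plan_frames():
    # Stand-in for the expensive step: running the planner and
    # converting its output into ZOG-style frames.
    return ["frame: recovery order", "frame: fuel summary"]

def get_plan_frames():
    """Serve pre-generated frames; only rebuild them when they're stale."""
    now = time.time()
    if _cache["frames"] is None or now - _cache["generated_at"] > CACHE_TTL_SECONDS:
        _cache["frames"] = generate_plan_frames()
        _cache["generated_at"] = now
    return _cache["frames"]

# Every reader calls get_plan_frames(); the expensive planner only
# actually runs once per interval, no matter how many people ask.
```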
Alright, that brings us to the end of this episode.
Really, I could keep going on about Zog, but I have to reach the end sometime.
This has been a big episode already, so I'm going to try to wrap this up with a short coda.
PROMIS and Zog prove something fundamental. These were hypertext systems that were developed without any explicit connection
to prior art. By that, I mean PROMIS wasn't inspired by Doug Engelbart or Ted Nelson.
Jan Schultz didn't reference Vannevar Bush, and neither did the Zog crew.
Here's why that matters so much.
PROMIS was designed as a solution to the information problem. This, I'd argue, is a universal problem. Managing medical records was taking too much work. There was too much data for
humans to handle, so they needed some type of augmentation. We saw this later with the Navy.
Handling new aircraft carriers took too much data.
More than a single person could handle.
They needed a tool to increase their capacity.
The solutions to both of these issues,
PROMIS and Zog,
were hypertext systems.
They organized this mountain of data as smaller ideas
linked together to form chains of thought.
To form networks of data. For me, these systems prove the thesis that Vannevar Bush laid out in As We May Think.
In that paper, Bush argued that humans form chains of ideas, that we make connections
between smaller thoughts to form larger structures of information. He proposed that the solution to the information problem
was to make machines that structured data as linked ideas.
That we had to structure data in the same way we think.
That led to the link, and that led to hypertext.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of computing's past.
And hey,
if you like the show, there are now a few ways you can support it. If you know someone else who would be interested in the history of computing, then why not take a minute to share the show with
them? You can also rate and review the podcast on Apple Podcasts. And if you want to be a super fan,
you can now support the show directly through Advent of Computing merch or signing up as a
patron on Patreon. Patrons get early access
to episodes, polls for the direction of the show, and bonus content. There's about five bonus episodes
up now, so it's a good time to get in. You can find links to everything on my website,
adventofcomputing.com. If you have any comments or suggestions for a future episode, then go ahead
and shoot me a tweet. I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.