Advent of Computing - Episode 47 - ITS: Open Computing
Episode Date: January 11, 2021

Modern operating systems adhere to a pretty rigid formula. They all have users with password-protected accounts and secure files. They all have restrictions to keep programs from breaking stuff. That design has been common for a long time, but that doesn't make it the best solution. In the late 60s ITS, the Incompatible Timesharing System, was developed as a more exciting alternative. ITS was built for hackers to play, there were no passwords, and anyone who could find ITS was welcome to log in. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes and bonus content: https://www.patreon.com/adventofcomputing
Transcript
What do you expect when you sit down at a computer?
Just for a minute, for the sake of discussion, let's leave graphics and mice out of the equation.
You'll usually log in, giving the machine your proper username and password.
After a moment, you're dropped into your account.
You have all your files tucked away neatly in your own personal and secure digital space.
Staring in wonder at files is kind of nice and all, but it's a lot better to
run some programs. If you're getting any work done, then that usually turns into a whole lot of
different programs running at once. To the user, this is a pretty seamless process. You just fire
up a new program, maybe get a little distracted. Remember, you have to get some work done and fire
up another one. But actually, there's a whole lot going on to make that happen and make it happen so seamlessly.
On its own, a processor isn't all that powerful. Sure, it can crunch numbers really well. That's
what it was built for. And it can crunch those numbers faster and faster with each coming year.
The issue is the processors are inherently serial devices.
They can only actually do one thing at once.
So why can we multitask on modern computers?
It all comes down to timesharing, a method developed in the 50s and 60s that makes a
computer appear to do two things at once.
The operating systems we use today are just the latest evolution of this technology.
Systems like Multics, CTSS, and Unix were really zeroing in on a recognizable experience
more than 50 years ago.
But what we expect today, what we recognize, is really just what became popular.
It wasn't the only vision for the future.
Even within the field of timesharing, there were divergent views.
One particularly interesting take on the technology was ITS,
an operating system with no passwords, no file security, and no walls.
It was an open environment built to foster hackers and programmers,
an environment where culture and software seamlessly met.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 47, ITS, Open Computing.
This episode, we're back to the more technical side of things. We're exploring
the incompatible timesharing system. And as it turns out, this runs pretty close to last episode's
topic. Last time, I dove into some of my favorite hacker folklore as chronicled in the jargon file.
That file started, in large part, at MIT's Artificial Intelligence Lab. That's the same place where ITS was developed.
That means that today, we will be jumping into the source of a lot of that hacker culture.
Now, I assure you, this was not due to my good planning.
But hey, sometimes things work out on their own, so I'll take it.
Development on ITS started in the late 1960s.
That makes it an early and very influential time-sharing operating system.
There are a pile of technical advancements that were made in the service of expanding ITS.
But, as with everything, context makes ITS all the more interesting to me.
You see, ITS has really similar roots to the much better known Unix.
The two operating systems were both made for somewhat similar reasons, but in very different environments.
Unix started as somewhat of a side project at Bell Labs, whereas ITS was built as a core component of MIT's AI Lab.
Both were developed by groups of hackers.
Both filled a roughly similar
role, but things diverged greatly from there. Today, just about everything runs off Unix
or Unix-like operating systems. While Unix became a general-purpose system that could
run on anything and everything, ITS took a very different route. It was developed as
the perfect environment for hackers, programmers, and really any stripe
of curious onlooker.
ITS is the root of the modern free software movement.
In this episode, we're going to be looking at the environment that spawned both Unix
and ITS, how ITS took shape, and how the hacker ethic was enshrined in code.
So let's dive in and see how a group of hackers built their very own digital cathedral.
Over here on Advent of Computing, I find that I talk about timesharing a whole lot.
But I think we better start talking about it again.
In general, timesharing is a method where you can split a computer's resources
between multiple programs and multiple users.
This is done by careful allocation of memory, storage, and processing time.
Essentially everything is broken down into slices that are then shared between multiple
processes.
In practice, this means that a user will think that they're just using their own small computer,
when really the computer is switching really, really fast between multiple tasks.
With fast enough switching, you form the illusion of running multiple processes at once.
It's fake parallelism, if you want to get technical.
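The round-robin slicing described above can be sketched in a few lines of Python. This is just a toy cooperative model — real timesharing preempts tasks with clock interrupts rather than waiting for them to yield — and the names `process` and `round_robin` are mine, not anything from ITS.

```python
from collections import deque

def process(name, steps, log):
    """A toy 'process' that records one unit of work per time slice."""
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # give the CPU back at the end of each slice

def round_robin(tasks):
    """Run each task one slice at a time, cycling until all finish."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run one time slice
            ready.append(task)  # send it to the back of the line
        except StopIteration:
            pass                # this task is finished

log = []
round_robin([process("A", 2, log), process("B", 2, log)])
print(log)  # ['A0', 'B0', 'A1', 'B1'] -- the two tasks interleave
```

Switch between tasks fast enough and, from the outside, A and B look like they're running at the same time.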
A timesharing operating system is just, well, it's what it sounds like.
It's an operating system that uses timesharing.
Think of it as the implementation of the theory. These
fancy operating systems sit somewhere between a program and the computer. They handle allocating
resources and do all the timesharing magic. To be 100% clear, we use timesharing today,
just by another name. More often it's called multitasking, but it uses the same core concept.
It's why you can have more than one window open at the same time.
The point I'm trying to make here is that we benefit from timesharing every day.
It lets us do more with computers, and that's really the key point of timesharing.
It just lets you squeeze more out of the resources you have.
Timesharing was first theorized in the 1950s as a way to do just that, get more out of
limited computer hardware.
Mainframes of the day had power to spare if a good way could be found to use that power.
A single human being just banging away at a keyboard can only do so much with a computer.
It would simply be more efficient if you could have two humans banging away at two keyboards, or maybe three at three keyboards, or onward.
Timesharing was proposed as a solution to this bottleneck. Conceptually, it's pretty simple.
You just break up a computer and share it. But really, that simplicity is a bit of a facade.
It turns out that timesharing is wickedly difficult to implement in practice.
Programmers ran into issues that were totally new in pursuit of this goal.
How can you make a computer switch what it's doing without destroying the current program's state?
How do you keep programs from stomping all over each other?
And maybe most importantly, how can you do this efficiently?
The illusion here is the important part.
If you implement a slow and buggy version of timesharing, then everything falls apart.
You need to ensure that you can actually get productivity gains from the system.
So everything has to run to pretty tight tolerances.
This isn't to say that programmers have been sloppy, just that timesharing introduced
an unexplored frontier of pain and tribulations. CTSS, the Compatible Timesharing System, was one
early project that aimed at solving these problems. It was developed at MIT in the very early 1960s,
partly as an experiment. CTSS would work out really well for the college, and over time became
a mainstay on campus. But that was just the beginning. The biggest player was Multics,
a massive project that started a few years after CTSS. And when I say massive, I really mean
massive. From its earliest stages in 1964, Multics was planned to be essentially the total realization of timesharing.
The project was initially a collaboration between GE, MIT, and Bell Labs.
Funding was provided by the Advanced Research Projects Agency, that's the predecessor to DARPA and the agency that would eventually create the ARPANET.
By the time it was completed, over 2,000 researchers, developers, planners, and administrators had been involved with Multics at some step.
So suffice it to say, Multics is the big player in the field in the 1960s.
The grand plan for Multics was to create a computing utility,
something that would function like a water company, but delivering information instead of water.
By leveraging timesharing, a fleet of mainframes could service a vast number of users.
Anywhere there was a phone line and a terminal, a user could connect.
For the everyday user, this would revolutionize their life.
In the eyes of ARPA, Multics
could be used as a backbone for government and military operations. The
system's design would stem from these lofty goals and extrapolate out from
there. At its core, everything in Multics was about reliability and security. Users
each had accounts that were secured with passwords and they kept private files safe from prying eyes.
A complex array of paging, virtual memory, and segmentation kept running programs isolated from one another.
To cap everything off, piles of error-handling code kept the system running in the event of a disaster.
On paper, Multics was the solution to an entire slate of problems.
By folding in earlier work on timesharing and working towards a highly reliable and
available system, Multics was really set to dominate computing for decades to come.
But that was only on paper.
The actual development of Multics was mired with problems.
It turns out that bringing such a complicated system into existence
was not a trivial task, so development ended up being pretty slow going. Last episode,
I brought up a complaint from one of Multics' developers, but I think it bears repeating.
Van Vleck, a programmer from way back in the CTSS days, mentioned that he spent well over
half his time writing code for error handling. Having a
reliable Multics was important, but that reliability took time to implement. Now, as should be expected,
not everyone was happy with how slow things were going. As the 60s dragged on, Bell Labs would start
to reduce their involvement with Multics, officially leaving the project
in April of 1969.
Bell had joined the project because Multics sounded promising; they wanted a reliable
timesharing system to use internally.
Really I think everyone on the project wanted Multics to succeed so they could use it.
But after years of work, Multics still wasn't ready for prime time.
A group of Bell's ex-Multics programmers wound up spinning off their own project.
This eventually turned into Unix, a system that took cues from Multics while skewing
more towards the practical side of things.
Bell Labs wasn't the only one unhappy in the project.
You see, it turns out that the hackers embedded in MIT's artificial
intelligence group also had a bone to pick with the operating system. Or more accurately,
they hated Multics. They thought it stood for everything that a good programmer should
hate. Officially, the AI group, later known as the AI Lab, was a research group investigating
artificial intelligence.
As in, they were trying to build machines that could think.
It wasn't timesharing, but it also lived on this new horizon of possibilities.
As it turns out, this kind of research attracted and nurtured a very specific type of programmer,
what we would call today a hacker.
That is, people who were bitten really hard by the computer bug. They were highly dedicated to programming, almost to the point of obsession,
and they were looking for new ways to push technology further. It also turned out that
both their AI work and more general programming projects didn't really fit well with Multics, or CTSS, or even timesharing for that matter.
There were technical and ideological reasons for this. Culturally speaking, Multics was anathema
to the hacker ethic. The view of Multics as a utility was one key issue. Like any other public
utility, the plan was to charge for use of Multics. Bills would eventually be calculated based on disk use, processor time, memory use, basically
all resources that you were consuming.
That also meant that quotas on resources were baked into Multics.
In other words, Multics incentivized using a computer as little as possible, and if you could, as efficiently as possible.
That really went against how hackers thought.
The AI lab's denizens wanted to be wired in at all hours of the day.
They didn't want to be on the clock, they wanted to be on the computer.
The very rigid structure of Multics also caused a lot of friction with hackers at the AI lab.
Security was a core component of the operating system, which, yeah, that's usually a good thing.
But for the more dedicated programmer, it really became stifling.
This even started before you got your hands on a keyboard.
A new user had to get an account created and get a password issued,
so you couldn't just step into an open lab and hop onto the system to get some stuff done. Once logged in, your account was
confined to a very small, dedicated private directory. Sure, you could share files or access
other user files, but only after explicitly sharing them. If you wanted to poke around and
see what other people were up to, well,
you can't. You're just out of luck. Multics didn't really do poking around. Going further,
let's say you want to write a program. Well, there are some restrictions there too. Programs on Multics were isolated to their own virtual memory space. In other words, they were given
a chunk of dedicated memory to use and nothing else.
Once again, normally, this is a good thing. It keeps programs playing nice together.
But for more competent users, this really limits what you can do. It's just another wall.
Programs also ran at a set level of privilege. That is, Multics imposed limitations on what a user's program could do depending on the
level the user was at.
Biggest among those limitations were input-output routines.
As a normal user, you couldn't really do I/O outside of what Multics decided was safe.
Once again, this is a fine idea unless you want to do something fun. The key here is that Multics encouraged a very narrow path for users.
It was a useful system, don't get me wrong, but it wasn't an environment built for pushing boundaries.
It was an environment full of boundaries.
And when hackers did push a little too much, the results were usually a system crash. So, suffice to say, the people programming Multics also didn't like hackers being on their system.
But the issues didn't stop at mere violations of the hacker ethic, oh no. There were also
very real technical flaws with Multics that made it hard for the AI lab to use. The lab
wasn't just dealing with pure programming.
There were also a slate of devices in use.
These ranged from robotic arms to cameras, all wired into computers.
Of course, these weren't for show.
Researchers were working on ways for computers to detect objects via camera input and manipulate
objects with robotic arms. All things considered, these are pretty high performance tasks, and once again, Multics
didn't really do high performance.
Setting aside all of the I/O restrictions, let's just look at how a program ran on
Multics.
A user would fire off a job.
It loads into its own private memory space, then starts running. After a few ticks
of a system clock, Multics has to pause that program and switch to another user's task.
Then a few ticks later, the initial program can resume for just a bit. Then Multics pauses
things again and moves on to another program. That's just how timesharing works. Every switch between tasks eats into
precious processor time. For a normal user and a normal task like, say, editing a file, that's not
a problem. But for something like, you know, a computer looking for an object to grab with a
robotic arm, well, then we have issues. Most users aren't building intelligent robots,
so the crew developing Multics didn't really tailor their code for that. So for the lucky
few in the AI lab, Multics didn't fit the bill. It couldn't do the real-time kind
of tasks that they needed. The fact was, a lot of hackers didn't even trust timesharing
as a viable option. Projects like CTSS and Multics
really poisoned the well for them. Stifling restrictions plus the inability to handle
real-time tasks made timesharing seem unusable in the lab. So for a while, the AI lab just
didn't use timesharing. Their main computer in this period was a lone PDP-6. For the time, it was a useful
machine. But without timesharing, they ran into the good ol' throughput problem. Researchers had
to line up to schedule computer time, and with a lab full of hackers vying for a single machine,
wait time was awful. This wasn't a good long-term arrangement, so a crew of researchers within the AI lab started looking for answers.
At the top of the list was timesharing, but the only way to make that viable was with a totally new system.
One tailored to their specific needs and work ethic.
That system would be known as ITS. The drive towards incompatibility started in 1967 with Stu Nelson and Richard
Greenblatt with the full backing from the AI lab. Well, at least from Ed Fredkin, then the AI lab's
director. At first, there was a level of animosity about the project. Certain hackers were violently
against the idea of timesharing in their lab.
And with only one computer in the lab, some felt that timesharing was being forced on them.
But as tinkering turned into functional code, opinion would slowly shift.
More programmers would hop onto the bandwagon, partly for the fun of the hack,
and partly to aid in development for a sorely needed tool.
Tom Knight, one of the early developers on the project,
actually came up with the name Incompatible Timesharing System.
It's a bit of a jab at MIT's earlier work.
This was going to be the anti-CTSS.
It set out to be a totally new thing,
divergent from the prevailing winds on campus and in the field of computing at large.
The goal was to find a way to make the lab's computer more accessible to more people without any kind of compromise.
In other words, ITS had to milk every ounce of magic out of the timesharing illusion.
A user should feel like they had a one-on-one connection to the lab's computer, even when
they didn't.
The environment should be open
for experimenting, and, most of all, ITS needed to encourage people to get under the hood and explore.
From the outset, some simple but effective choices were made. Now, this is where we get
into an interesting trend. ITS wasn't really written by the book, so to speak. Not every choice in its
development was made for technical reasons. Many were made to preserve the hacker ethic
inside of ITS. Even basic technical decisions were tinged by the culture within the AI lab.
One of those was the language used to develop the core components of ITS. Machine language. As in, the raw bits and bytes
that the lab's PDP-6 spoke. Imagine, if you will, a handful of highly educated hackers trying to
write one of the more complicated programs possible in raw octal machine code. It must have
been a bit of a painful beginning. There are a few good reasons that ITS was written at this low of a level.
Multics was one of the first operating systems written in a high-level programming language.
Their tool of choice was PL/I.
Multics was also slow, or at least slow enough to aggravate hackers.
To make ITS as fast as possible, the team chose to go with a lower
level language. That way, they had much more control over the actual instructions their machine
was running. They could make optimizations and pull some tricks to speed up their code that you
normally couldn't. This approach is also very in line with the hacker ethic. Machine language is
as close to the computer as you can get. It requires
a level of skill and determination that many programmers just don't possess. I'd like to count
myself in the ranks of programmers that don't possess that level of dedication. So writing ITS
in machine language really added a certain cool factor to the project. What this core group of hackers put together was impressive to say the least.
Not every feature was a totally new innovation, but the sum made up a remarkable system.
We have all the usual components of timesharing.
Each user has an account with a dedicated home directory.
The file system is built up as a hierarchical tree of directories. Each user connects via terminals, can fire off processes, and each process is isolated from other running programs.
But there are deviations that make ITS a very different and unique beast.
One of the big ones, and a feature that any Unix user should find familiar, is that ITS managed processes as a tree.
That is, each process could spawn its own child processes.
Those children could spawn their own children, and on down the line.
There's really not a theoretical limit to it.
Why does something like this matter?
Well, it all comes down to flexibility.
At the most basic level,
this meant a user could fire off multiple processes at the same time, or
in ITS lingo, procedures. That's useful, but there's something really cool going
on here. At its core, a time-sharing system is all about parallelism. That's
just a fancy $5 word for doing more than one thing at once.
A system like Multics uses the illusion of parallelism to make a computer service more
than one user at once. But the only code aware of this is Multics itself. In ITS, a savvy programmer
could take advantage of timesharing in a much more useful and really a much more direct way.
On Multics, the system itself was really the only thing that spawned new tasks.
You didn't get to do that.
But on ITS, anyone could.
Add in some extra code to handle inter-process communication, and presto.
A programmer can now write programs that run in parallel.
It's a tweak to the established formula that opens up a whole lot of new territory.
Now you can have a program spin up, fire off an extra program to do something in the background,
and still do your initial task.
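In modern terms, that pattern looks something like the sketch below: spawn a child, keep working, then collect the child's answer. ITS procedures worked very differently under the hood, and these APIs obviously didn't exist yet; this is just the shape of the idea in today's Python.

```python
from concurrent.futures import ThreadPoolExecutor

def background_job(n):
    # Something for the child task to chew on in the background.
    return sum(range(n))

with ThreadPoolExecutor() as pool:
    future = pool.submit(background_job, 10)  # spawn a child task
    parent_note = "parent kept working"       # the parent carries on with its own work
    child_result = future.result()            # then collects the child's answer

print(parent_note, child_result)  # prints: parent kept working 45
```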
Another very important change was how users connected to ITS, or rather, what they could connect to ITS with. Sure, you could dust off an old teletype
terminal with its clacking head and paper feed, but why bother with that? ITS supported these
newfangled glass TTYs, and even graphic terminals. It might not sound exciting today, but it was a big step.
As a later progress report published in 1972 put it,
quote,
end quote.
Really, this goes back to the same reason that process trees were a big deal. Support for CRT and graphic terminals let hackers just do more.
It provided a whole new frontier of usefulness. Sure, you could use a graphics terminal before
ITS, but on ITS you could get a lot more done with that terminal. There were a number of ways to plot graphics to a connected terminal, but one particularly
interesting method was called the "fictitious display processor."
It's something that we'd probably call a virtual terminal today.
It sounds pretty fancy because, well, being fictitious is pretty fancy.
This gets into how ITS could handle hardware.
Hackers could use direct input-output without restrictions, so take that, Multics.
But there was a cooler way to get this done.
Within ITS, so-called software devices were the coolest way.
It's just what it sounds like, a software-defined device,
something that's defined within
the code of ITS itself.
One of these devices was called IDS, the Interpreted Display, and this is where we get into the
whole magic of being fictitious.
Basically, a programmer could write to IDS as if they were communicating with a real
graphics terminal.
Really, they were just talking to a software-defined terminal that's hiding inside ITS.
Why is this cool?
Well, as with a lot of features in ITS, it's all about being flexible,
finding new ways to do old things.
A programmer wrote to the interpreted display using a set of instructions,
telling ITS where to move a cursor, how to plot a section, and so on. ITS did the heavy lifting of turning that
into an image inside memory. Then, if an actual terminal was hooked up, ITS converted that
image into a signal the terminal could understand. A programmer just had to know the generic
way to produce graphics on the IDS device, then the operating
system could deal with sending it to a number of different types of terminals. Or the buffer could
just be sent to a remote terminal out on a network. Or you might not even send it to a terminal at all.
You could just plot it or pass around the image as data to another program. This is just another way that ITS created more
possibilities for programmers, instead of walling them in like Multics. All that being said, the
core of ITS shared a whole lot in common with CTSS, Multics, and really any other system that
these early hackers hated. Part of that was customized hardware. Modern processors, well, really,
let's just say slightly more modern processors, usually have built-in support for multitasking.
Honestly, it's a testament to how important multitasking has become. But when timesharing
was just starting out, programmers didn't have that luxury. An unadorned computer can't do all
that much in the grand
scheme of things, and timesharing pushed those capabilities right up to the limit.
Hardware was just as important to timesharing as software. Multics originally ran on a modified
GE mainframe, and in that tradition, ITS didn't run on a totally stock PDP-6.
The first upgrade was a massive memory expansion.
The hackers at the AI lab built a 1MB memory module as an add-on for the computer.
For the time, that was huge, both physically and in terms of raw storage.
The expansion was housed in its own refrigerator-sized cabinet.
But that was just the start. After a few years
of life, the AI lab upgraded to a DEC PDP-10, and this is where things really took off.
As the 60s turned into the 70s, the hackers continued to extend their hardware. The most
important upgrade, and one that made ITS much more viable, was custom memory management
hardware. This is the type of thing
that helped programmers in the background, but really it was for ITS only. The problem the AI
lab ran into was that they were running out of memory. As more people used ITS and more complex
programs were developed, space kind of naturally became an issue. Adding more memory could only go
so far, so some slick tricks had to be pulled to make the most of what they had.
This is where memory management hardware really comes into play. Multics went with a similar
tactic. This wasn't just an ITS problem, after all. To get the most out of your memory,
sometimes you need to shuffle stuff around. Sometimes a programmer wants access to a few bytes of memory, sometimes a few kilobytes.
As programs start and stop, memory gets taken up or freed up.
That can lead to weird chunks of unusable RAM.
So you have to do a bit of a RAM shuffle to clean things up.
But that's not a fast process.
You need to do one operation for each byte of memory you need to move.
More if you consider code used for figuring out where to put stuff.
The solution was dedicated hardware.
Think of it as a box built just for dancing the RAM two-step.
Separate hardware meant the PDP-10 could offload a lot of that work.
Memory could be used more effectively without degrading performance.
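The "RAM two-step" is basically compaction: slide the live chunks together so the free space becomes one contiguous, usable block. Here's a toy software version of the idea; the point of the lab's dedicated hardware was to do this kind of copying without burning the main processor's time.

```python
def compact(ram, free=None):
    """Slide live cells to the front, leaving all free space contiguous."""
    live = [cell for cell in ram if cell is not free]
    return live + [free] * (len(ram) - len(live))

# Fragmented memory: three live chunks with unusable gaps between them.
ram = ["A", None, "B", None, None, "C"]
print(compact(ram))  # ['A', 'B', 'C', None, None, None]
```

After compaction, the scattered gaps become one run of free memory big enough to satisfy a larger allocation.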
The other way ITS hackers maximized memory was by a neat trick called virtual memory.
Once again, this wasn't new.
Multics used virtual memory.
The reason the AI lab adopted this older technique is because it worked, and it worked really
well. Virtual memory
is a method that allows the computer to temporarily store chunks of memory on a disk drive,
then load them back into memory as needed. With some careful finagling, this means you can run
at over 100% memory utilization. When a program asks for some RAM to use, ITS just says, sure, I got some right here.
It could be an actual chunk of RAM, or it could just be a file on some disk somewhere.
The key is that when the program actually needs that memory, ITS ensures it's in RAM.
But when not in use, ITS can just drop it into a disk for safekeeping.
It's a really cool trick that works thanks to dedicated hardware.
If it was all in software, that would be pretty slow.
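A toy demand pager shows the shape of the trick: a few "RAM" frames, everything else on "disk," and a swap whenever a program touches a page that isn't resident. The sizes and the evict-the-oldest policy here are illustrative only, not what ITS actually used.

```python
from collections import OrderedDict

class Pager:
    def __init__(self, ram_frames):
        self.ram = OrderedDict()   # resident pages, oldest first
        self.disk = {}             # evicted pages live here
        self.frames = ram_frames
        self.faults = 0

    def touch(self, page):
        """Access a page, swapping it in from 'disk' if needed."""
        if page in self.ram:
            return self.ram[page]          # already resident: no work
        self.faults += 1
        if len(self.ram) >= self.frames:   # RAM full: evict the oldest page
            old, data = self.ram.popitem(last=False)
            self.disk[old] = data
        # Reload from disk, or conjure fresh contents on first touch.
        self.ram[page] = self.disk.pop(page, f"data-{page}")
        return self.ram[page]

p = Pager(ram_frames=2)
for page in [1, 2, 3, 1]:
    p.touch(page)
print(p.faults)  # 4: pages 1 and 2 load, 3 evicts 1, then 1 must come back
```

The program just "touches" pages; whether the data was in RAM the whole time or took a round trip to disk is invisible to it, which is exactly the illusion virtual memory sells.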
The other interesting piece that I want to touch on is what happened to the AI Lab's PDP-6.
You see, it stuck around in a really neat way.
As it happened, the PDP-6 and PDP-10 were roughly compatible computers.
Binary from the older machine could run on the new model. Not willing to let a processor go to waste, the lab's hackers
figured out a cool way to use the machine. With some tinkering, they wired up the two computers
so the PDP-10 could read and write the PDP-6's memory. This allowed the two machines to communicate at really high speeds.
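A rough sketch of that arrangement: one machine drops a job into memory both can see, and the other polls for it, runs it, and writes the answer back. Everything here — the dict standing in for shared memory, the function names — is invented to illustrate the shape of the idea, not how the actual wiring worked.

```python
# A dict stands in for the PDP-6 memory that both machines could touch.
shared = {"job": None, "result": None}

def dispatch(job):
    """The 'front' machine writes a job into shared memory."""
    shared["job"] = job

def poll_and_run():
    """The 'back' machine notices a pending job, runs it, posts the result."""
    if shared["job"] is not None:
        shared["result"] = shared["job"]()
        shared["job"] = None

dispatch(lambda: 6 * 7)
poll_and_run()
print(shared["result"])  # 42
```

Because both sides read and write the same memory, there's no slow I/O channel in between; that's what made the arrangement so fast.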
With just a little bit of code to glue everything together, ITS users were able to dispatch jobs
off to the older machine as needed. It was an arrangement that was really close to asymmetric
multiprocessing, and once again, just gave more power and more flexibility to users on ITS. Now, those are
all the pertinent technical details, but there's still the cultural side of things. ITS enshrined
the hacker ethic in code, and there's some strange manifestation of that ethic that makes ITS totally
unique. The most readily apparent one is that ITS didn't have passwords. You
heard right, there was no notion of security on the system at all. That should sound pretty
divergent from the norm, but there is reason for this. Partly it was a response to
Multics, the ultra-secure take on timesharing, but this is more than just a knee-jerk reaction.
The 72 Progress report on ITS had this to say on the matter.
Quote,
No password mechanism currently exists in ITS to stop anyone from using the system.
Initially, there was no restriction to any user storage of files on secondary storage.
Watchfulness and moral suasion have controlled unauthorized machine usage
while producing no unnecessary barriers to short demonstration uses,
unauthorized use when logging in under more than one name,
or other advantageous flexibilities.
End quote.
The fact was, MIT's hackers and, later, anyone who could get access to ITS,
didn't have anything to hide. They also didn't need protection from each other. If you were logging into ITS, you at least
knew something about computers. You were on the system to program, hack away, and explore. Sitting
down at a terminal was supposed to feel like an adventure. The fewer walls in your path, the better.
But here's the other big reason for an utter lack of security. The prevailing view around the AI lab,
and later in hacker culture at large, was that no one owned code. Software should be free. Free as in freedom, if you will. Authorship still mattered, but it wasn't everything. This growing collective of
hackers believed that software should be shared, it should be open. If you wrote a particularly
nifty hack, you should share it with others. On the flip side, if someone wanted to improve their
skill, then they should be able to read through code written by experts. It's a flow of experience,
good ideas, and slick code that's essential to the hacker ethic.
Things like file access control and passwords, those just stymied that process, so ITS didn't need them.
This also made developing ITS itself a lot easier.
For one, there was just a larger talent pool.
The code for ITS was easy to access and easy to read through. That meant anyone with terminal access could look inside ITS and see what made it tick,
and maybe even make important changes.
This was in contrast to systems like CTSS,
where only a very privileged user could look into the code.
It was open-source software before that term existed,
and the core tenets of that later movement
were built into ITS itself.
The final piece that made ITS eminently relevant in its heyday actually came from outside MIT.
In 1969, the first packets were sent over ARPANET.
By 1970, MIT was wired into the network. Partly by lucky happenstance and partly by masterful
design, ARPANET and ITS were a perfect match. The 1972 progress report that I keep mentioning
brings up that fictitious displays were added specifically to facilitate ARPANET connection.
ITS was also quick to adopt early networking protocols, making it one of the first
operating systems that supported ARPANET natively. But just as important was the culture around
ARPANET, or rather the sites that ARPANET was connecting. Nearly all users on the early network
were at universities, and the vast majority shared in MIT's hacker ethic. They wanted to explore
computers, see how far they could go with silicon. The rollout of ARPANET onto campuses
must have felt like getting onto an interstate highway for the very first time. Suddenly,
you could explore other computers. The digital world was expanding at a wild pace. And just
on the horizon, at one of the easternmost sites
reachable by wire, there was this thing called ITS. If you could connect to ARPANET, then you
could become an ITS user. And you too could feel the hacker ethic manifested in source code.
The best word I've seen to describe this was networked tourism. ITS had accounts open for anyone to
log in and look around. Upon logging in, they found something profoundly different. All code for ITS
and really all code on the AI lab's mainframe was open to scrutiny. You could run any program on the
system. Users could communicate with one another regardless of how remote their terminal actually was.
You could even take a peek at another user's screen.
This level of connectivity helped foster the hacker community,
and eventually led to it codifying into its own culture.
Last episode, we talked extensively about the jargon file.
Well, that file got started really close to ITS,
and its contents were influenced heavily by
the culture around ITS. The file was first uploaded to a machine at Stanford's Artificial
Intelligence Lab. That computer was running WAITS. It's an operating system written at Stanford that
was heavily influenced by ITS. The acronym has a slightly contested meaning, but the more common reading is the
West Coast Alternative to ITS. Soon, the jargon file was passed over the ARPANET to MIT's lab and entered ITS
proper. The early ARPANET users that frequented the AI lab's machine came into contact with the
jargon file, and from there the manifesto of the hacker culture spread. More than just code or fleeting
digital communication, the jargon file helped to cement ideas of early hackerdom. It helped to turn
the culture around ITS users into a larger cultural phenomenon. What else did hackers on ITS get up to?
Well, as it turns out, a whole lot of Lisp. That is, the programming
language Lisp. This shouldn't be a huge surprise if you're familiar with the language. Lisp was
developed at MIT in the 1950s as a language specifically for artificial intelligence.
And ITS was developed in MIT's Artificial Intelligence Lab, so it makes good sense that the two would
go together. Lisp was the real meat and potatoes of a lot of research going on at the lab. It was
the code that actually controlled all the robotic arms and digital cameras. ITS was host to standard
Lisp compilers and development tools, but a slate of new dialects were also developed on the system.
These ranged from Scheme to Microplanner to MDL. One of the new dialects, called Mac Lisp,
would eventually evolve into Common Lisp. That's the standardized language most often used today.
As the field of AI grew and evolved, ITS offered a safe and open environment to try new things, so it
makes sense that there'd be a whole lot of Lisp on it. While AI is pretty cool, I think there's
another thread that better exemplifies what ITS was all about. That's the development of Zork,
what would become a massively popular text-based adventure game. The story starts with the release of Colossal Cave Adventure,
one of the first text-based adventures ever written. Well, I say release, but that's not
100% accurate. Colossal Cave Adventure was developed by William Crowther and later expanded
on by Don Woods. In 1976, this expanded version of the game started spreading over the ARPANET,
eventually making its way over to MIT.
In Adventure, you spelunk a colossal and very mystical cave.
You solve puzzles and fight monsters in an attempt to recover treasure.
It really mirrors the netsurfing adventures that many hackers were already familiar with,
so it's no surprise that the Colossal Cave became a popular
new frontier. Tim Anderson, a hacker at MIT and eventually one of the programmers behind Zork,
explained the mania as such. Quote,
When Adventure arrived at MIT, the reaction was typical. After everybody spent a lot of time
doing nothing but solving the game (it's estimated that Adventure set the entire computer industry back two weeks), the true lunatics began to look for how they could do it better.
End quote. The beauty of Adventure was that the source code was readily available. It was written
in Fortran, a language that most programmers knew, and its files were easy to find. The benefit from
this was twofold.
Adventure was easy to get up and running on your campus' mainframe.
Just compile the code and you're good to go.
Anywhere the ARPANET reached was a single step away from the colossal cave.
But for those who wanted to do more and wanted to explore more, the source code offered kind
of a final adventure.
Everything you needed to know
to make your own adventure game was within reach, so naturally hackers wanted
to improve on the original. This led to a group of ITS hackers at MIT's dynamic
modeling group rolling their own text-based adventure. It would come to be
known as Zork. A small group of programmers started in on the effort, but
this wasn't going to be a
straight clone or even a simple expansion. This was on ITS after all, so Zork took a slightly
different approach. Initial programming was done by Dave Lebling in MDL, that's a dialect of Lisp.
The key difference between MDL, sometimes called MUDL, and LISP comes down to string handling. You see,
MDL was designed to be good at parsing and understanding text. In the context of an AI
research group, that meant making sense out of sentences. So from its outset, Zork was built
to deal with more complicated commands from users. It was made to feel a lot more realistic and a lot
more alive. Once a simple game engine was written, Mark Blank and Tim Anderson, two more MIT hackers,
threw together a demo map. It was simple, just a handful of rooms. Not much to look at, but as a
proof of concept, it worked really well. Soon, Bruce Daniels, yet another hacker, joined the crew.
More formal designs were made for a game world. Puzzles were written, and a digital adventure formed in earnest.
They worked in person, but the added open nature of ITS made it easy for their collaboration to
continue in the digital realm. Zork was a team effort, with ITS as a more secretive member of the crew making everything
work together.
What I think is even more interesting is how Zork gained popularity.
Adventure was easy to get running anywhere, but that didn't actually work for Zork.
At the time, MDL was only native to ITS, so if you weren't on ITS, you couldn't compile Zork from source code.
That should have been an issue, and under normal circumstances, it would have slowed things down.
Ah, but remember, this is ITS, and ITS isn't normal. Quoting from Anderson again, quote,
No one ever officially announced Zork. People would log into DM, see that someone was running
a program named Zork, and get interested. They would then snoop on the console of the person
running Zork and see it was an adventure-like game. From there, it only took a little more
effort to find out how to start it up. For a long time, the magic incantation was
mark semicolon Zork. People who had never heard of ITS, DM, or PDP-10s somehow heard that if they got connected
to something called host70 on ARPANET, logged in, and typed in the magic word, they could
play a game.
End quote.
Getting onto ITS and running Zork really just became another puzzle in the game.
The development, the spread, and the community
that popped up around Zork, all of that came courtesy of ITS. On any other timesharing system,
this wouldn't have been possible. Or at least it would have come in a vastly different form.
The kind of digital tourism and collaboration that led to Zork and really countless other programs wasn't just a quirk of ITS. It was a planned feature.
Like everything on ITS, there were no fences, no walls, but there were rules. Well, I guess it's
more of a social contract. With no security, it wasn't possible to enforce rules. Hackers on ITS
had played well with one another, so it became expected that guests
would do the same. The AI lab even had an official document that detailed their policies on tourists,
and how a tourist could keep in the good graces of the lab. In general, the AI lab was pretty
permissive about access, so their policies are really more suggestions than anything. The idea was that
tourists were accepted but not the lab's number one priority. To that end, outsiders were asked
to stay off ITS during peak hours, not to interfere with running programs, and definitely
not to try and crash the mainframe. The AI lab was still functioning as, you know, a research lab,
so while tourism was welcome, it was hoped that tourists would play nice.
Anyone could just log into ITS, no strings attached.
However, it was highly recommended you get an actual account if you planned to stick around.
There were even pipelines set up for this.
As soon as you were issued an account, the process really began. From the lab's policy doc,
quote, the first time a tourist logs in, he will get a reminder to run the inquire program
so that he can fill in his entry in the online registry of ITS users. A skeleton entry is made
when a tourist account is granted,
but it is the tourist's responsibility to finish filling it in, end quote.
From there, a curious user could enter deeper into the ITS fold. Tourists were usually assigned
an advisor from among the researchers in the AI lab. That was the human point of contact for new users. Someone to guide them in their
journey and help indoctrinate them. But this wasn't just an ad hoc arrangement. It was baked
into the software. The command loser, that's spelled like user with an L in the front, would
call up the tourist's advisor or another more experienced hacker. Sure, it's a bit of a jab at new users, but
Loser is another example of how the culture on ITS was codified in software. Need help?
Want to learn? Well, just ask away. We even have a program to help you with that.
Info, another ITS program, offered an alternative way for tourists or new users to get help. It didn't cover all the
details of ITS, but Info did offer an interactive way to traverse help files. Between this program
and human resources around the lab, a new user could get up to speed pretty easily. Now, that's
not to say that ITS was resoundingly user-friendly. It's still controlled via a command prompt, and a lot of the quirks of the system are hard to grasp.
But there were mechanisms in place, both in software and culture, to bring new hackers into the system.
I think this is one of the ultimate expressions of the hacker ethic.
Labgoers called tourists losers, sure.
The code even backed them up on that, but it was all
in fun. ITS was open to anyone who could find it, and the hackers at the AI lab were excited to
share in their achievement with all who arrived. Alright, this brings us to the end of our exploration of ITS.
The Incompatible Timesharing System started off mainly as a reaction to Multics,
but in a very directed way.
The team at the AI lab had an enumerated list of grievances
with contemporary timesharing systems.
These ranged from technical issues to much more nuanced ethical standings.
And as ITS grew, it took on a life of its own, creating a system unlike anything else. It was
open, friendly, and a little dangerous all at once. A user could crash the lab's mainframe as
easily as they could compile a program. But ITS wasn't just about code.
At its core, it was about community.
The system was built by hackers for hackers.
The modern open-source movement takes a lot of cues from ITS.
So does hacker culture in general.
So if it's so great, why aren't we using ITS today?
Why do we expect passwords and protected files on our
computers? And I guess getting to the root of these questions, why did Unix become the general
purpose operating system and not ITS? I think it all boils down to one fact. ITS wasn't meant for
everyone. Through this episode, we've seen how ITS filled a very specific role, and it filled it
very well. It was the home for hackers, a place for interested parties to explore, and a tool for
programming research. Unix, on the other hand, never had that same kind of focus. Thompson,
Ritchie, and all their Bell Labs colleagues created kind of an ad hoc system,
one that grew to take on new tasks.
ITS would also grow, but it would stay in that niche, right where it thrived.
It was never designed to take over the world, but it was designed for fun.
Before I go, I want to give a big thanks to Lars Brinkhoff.
During my preparation for this episode, I got a lot of leads from him for sources. He actually recommended that I look into ITS in the first
place. Lars has been heading up a continued effort to preserve ITS, including maintaining
buildable source code for the entire operating system and a lot of its software. Now, if
my scheduling works out as planned, I should be dropping an interview with Lars in the coming weeks, so stay tuned for a talk about how ITS lives on in the 21st century.
Thanks for listening to Advent of Computing. I'll be back soon with the next piece in the history of the computer.
And hey, if you like the show, there are now a lot of ways you can support it.
If you know someone else who is interested in computing, then why not take a minute to share the show with them? You can rate and review me
on Apple Podcasts. And if you want to be a superfan, then you can support the show directly
through Advent of Computing merch or signing up as a patron on Patreon. Patrons get early access
to episodes, bonus content, and polls for the direction of the show. You can find links to
everything on my website, adventofcomputing.com. If you have any comments or suggestions for a
future episode, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.