Advent of Computing - Episode 30 - Coherent Is Not UNIX!
Episode Date: May 17, 2020

In the current day, Linux is the most widely used UNIX-like operating system. Its rise to prominence has been an amazing success story. From its humble beginnings, Linux has grown to power everything from supercomputers to car stereos. But it's not the first UNIX clone. A much earlier system existed, called Coherent. And as it turns out, both Linux and Coherent share a lot of similarities. The biggest difference being that Coherent was closed source.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Important dates in this episode:

1949: DOJ Sues AT&T Over Antitrust Violations
1973: AT&T UNIX V4 Goes Public
1975: AT&T UNIX V6 Released
1977: First Version of BSD Circulates
1977: XYBASIC Released by Mark Williams Company
1980: Coherent Released for PDP-11
1983: Coherent Comes to the IBM PC/XT
1995: Mark Williams Company Closes
Transcript
Linux is probably one of the greatest hidden success stories that I can think of.
Most people don't even know it exists, but it's a core component of our daily life.
Linux is an operating system that's basically a collection of programs that sit between a user
and the underlying computer hardware. To do much of anything with a computer, you need to have some
kind of operating system loaded up and running, and Linux has become one of the most popular options behind the scenes. It was originally
developed as a hobby project by Linus Torvalds, a grad student at the University of Helsinki in
the 1990s. Fast forward a few decades, and it's now everywhere. The modern internet is basically
just a bunch of servers running Linux all wired up together.
All the most powerful supercomputers run on Linux.
But don't get it twisted. Linux isn't just software for big iron.
Most embedded devices like smartwatches, smart speakers, car stereos, or routers, just to name a few, run the same software.
Linux's ability to span the divide from the smallest system-on-a-chip
devices to mainframes, that really fascinates me. In theory, software written for a supercomputer
can be pretty easily modified to run on someone's car stereo. And even better, all code for Linux
is freely available, so there's no reason not to get your car computer up and crunching data today.
It's a silly way to look at it, but the point is that Linux has unified a once-fractious field
in a very interesting way. There are a lot of reasons for Linux's success. It would take a
whole podcast just to cover everything. But this is a computer history podcast, so I personally think a key factor lies in computing's past.
Linux was modeled after a much earlier operating system called Unix.
Linux was able to bring the functionality of Unix, a system designed for large computers, to basically anything with a processor.
But Linus wasn't the first person to attempt this feat.
That honor goes to Coherent, a proprietary Unix-like clone written in the 1980s by a
tiny startup on the outskirts of Chicago.
It brought Unix's power to home computers over a decade before Linux hit the scene.
But it would take a very interesting and very different route to get to that point.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 30.
Coherent is not Unix.
Today, we'll be talking about a relatively obscure topic, one that I think deserves to be better known. That's the story of Coherent, a from-the-ground-up clone of Unix that was written
in the early 1980s, and it would survive in a very particular niche until the mid-90s.
Looking at this story from our vantage point today makes it all the more interesting, I think.
This episode takes place right in the middle of
an event known as the Unix Wars. It's a bit of a dramatic name, but you know, us computer nerds
tend to be a little bit dramatic. Anyway, if you don't know much about Unix, then I'd highly
recommend pulling up the show's archive and checking out episodes four and five. They cover
the earliest origins of Unix. If you don't want to brave the early episodes of
Advent of Computing, then let me give you a quick summary. Unix is an operating system. It's
basically the software that manages a computer's resources and, ideally, it gives you a useful
environment for other programs to run on top of. This type of software is very important because,
well, it makes a computer actually usable.
Without an operating system, you'd have to use a computer by poking together wires and flipping switches.
Not really the best experience I can think of.
Operating systems are also wickedly complex programs.
Developed in the 1960s, Unix wasn't the first operating system, not by a long shot.
But it has become one of the most influential.
It was originally written at AT&T's Bell Labs to run on DEC minicomputers, but it spread out pretty quickly.
Unix is important today because, with the exception of Windows PCs, almost every computerized system runs something very similar to Unix.
Macs and iPhones all run an operating system that's descended directly from Unix.
Android phones and nearly every server run Linux,
an operating system developed to mimic Unix very closely.
So the history of Unix plays a big role in understanding the origins of a lot of modern software.
Coherent fits into a strange spot in the wide field of Unix-related software.
It's kind of the odd one out. Coherent is probably most similar to Linux. Both operating systems were
designed for a high level of Unix compatibility, but use none of the original code from Unix itself.
They're like disconnected branches on a family tree. They're similar to Unix, but not directly related.
The similarities between Linux and Coherent, though, they kinda end there.
Linux is open source and totally free software, whereas Coherent was developed as a commercial
product.
Coherent was produced by the Mark Williams Company.
Linux is maintained by a community of software developers and hobbyists. Coherent also
predates Linux significantly. So today, we're looking at what I think is one of the most interesting
Unix-like systems. But to talk about Coherent properly, we'll first need to get to know its
competition a little bit better. To understand what makes Coherent interesting, we need to talk about the Unix wars a little bit.
The 1973 release of AT&T's Unix v4 was a huge deal. In fact, huge might be a little bit of an understatement. On the surface, the new operating system was a feature-rich and incredibly useful
piece of software. Over the years, Unix has earned a reputation as a great environment for programmers, and that started really early on.
The system was originally developed by Dennis Ritchie and Ken Thompson, two computer scientists working at Bell Labs at the tail end of the 1960s.
The early development of Unix was informed heavily by their personal tinkering as well as experience working on similar mainframe software.
To be more specific,
the two had previously contributed to an operating system called Multics,
which was designed to securely share a mainframe
between many users all at once.
But Unix always had more to offer
than just its core functionality.
There's this certain feel to the system.
Some call it the Unix philosophy.
Each command and program bundled with the system does one simple task.
This can be something like just listing the files in a directory, or even counting the
number of lines in a text file.
On their own, these aren't really that useful.
But UNIX is built in such a way that it's easy to chain commands together.
The outputs from one command can be turned into the inputs for another.
That feature, called pipes, combined with the glut of programming tools bundled into
Unix from day one, made it a fantastic system for programmers.
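To make the pipe idea concrete, here's a minimal sketch, not from the episode, of the plumbing a Unix shell sets up when you type something like ls | wc -l, where the file listing from one program becomes the line-count input of the next. The command names are just examples; the pipe, fork, and exec calls are the standard POSIX ones.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Wire "ls" into "wc -l": roughly what the shell does for ls | wc -l. */
int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {               /* first child runs ls */
        dup2(fd[1], STDOUT_FILENO);  /* its stdout goes into the pipe */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);                  /* only reached if exec fails */
    }
    if (fork() == 0) {               /* second child runs wc -l */
        dup2(fd[0], STDIN_FILENO);   /* its stdin comes from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }
    close(fd[0]); close(fd[1]);      /* parent closes both ends */
    while (wait(NULL) > 0)           /* wait for both children */
        ;
    return 0;
}
```

The whole trick is that neither ls nor wc knows it's talking to a pipe; each just reads and writes its standard streams, which is exactly what makes small single-purpose tools composable.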
And just from the get-go, we get a sort of feedback loop.
Programmers like Unix, so they write a lot of software for Unix, and that makes the system
bigger and better and more enticing to other programmers and to businesses. Version 4 would
bring about some massive changes, and these updates would set the stage for the Unix wars.
The new version was, at least mostly, written from the ground up in C, a brand new language
that Ritchie had developed specifically for Unix.
There's a whole lot to be said for C.
It's been one of the most popular programming languages for decades at this point,
and it's influenced just about every other language that came after it in one way or another.
What matters for our purposes is that C code can be easily adapted to run on just about any computer.
The language makes very few assumptions about the underlying hardware that it's running on.
Unix was originally written in assembly language,
in which just about every line has to be tailored for its host computer.
But now that the entire system and all its utilities were written in C,
it was possible to port Unix to any suitably powerful computer. While portability was a big
part of the update, there are some side effects that can't be overlooked. The biggest implication
is that it was relatively easy to modify Unix. The entire thing, sans some hardware driver code,
was written in C. Compared to earlier systems written in assembly language, this was easy to understand
and easy to change. And since Unix has all the tools to write and compile C, that makes
modifying any part of the system, well, just super easy and super convenient.
The other reason v4 was a milestone is because this is the very first version that would
find its way outside of Bell Labs. Previous iterations had been exclusively
used within Bell. Releasing Unix to a wider audience was an exciting prospect, but there'd
be some major problems with the rollout. You see, AT&T wasn't exactly allowed to sell Unix, at least
not legally. This goes back to an antitrust battle that started all the way back in 1949.
And to be honest, the legalese part here is a little bit confusing to me.
The gist of the situation, as I gather it, is that Bell, which was owned by AT&T, had an almost complete monopoly on phone systems in America.
During the 1940s, they conspired with Western Electric, the company that manufactured most equipment used by Bell, to maintain that monopoly.
And turns out, that is illegal. It's a violation of long-standing U.S. antitrust laws.
So, the Department of Justice came down pretty hard on the companies involved.
There were two big stipulations to the suit that matter for us.
First, moving forward, AT&T was only allowed to carry out business directly related to telecommunications. They could only make money on telecom.
And no matter how you split hairs, Unix isn't really communications software or services.
Secondly, AT&T was legally obligated to license its non-telecom technology to anyone who asked. This led to a strange situation where AT&T
couldn't really turn Unix into a big part of their business, but they also had to license it to
anyone who wanted. For the team at Bell, things got a little bit complicated due
to this. The overall result was that Unix became almost free software, at least for some users.
In 73, Bell Labs started to ship copies to universities and research institutions that
asked on an at-cost basis. If you could afford a tape or a disk and the shipping, then you could get a
copy of Unix. This decision was partly spurred on by a 1973 talk on the operating system that was
published by Ritchie and Thompson. Up until then, Unix had been relatively unknown outside of Bell.
But with knowledge of the software getting out, it was only a matter of time before Bell got requests.
And thanks to the Department of Justice, they legally could not refuse those requests.
By the end of the year, a dozen copies of Unix had been shipped out.
According to some folklore, Ken Thompson would put together each package himself, complete with a thank you note signed,
Love, Ken.
But what's crucial here is that initially, AT&T wasn't
shipping out ready-to-go install disks for Unix. Instead, each of these packages contained a full
copy of Unix's source code. This was either due to the earlier antitrust suit or simply a product
of convenience. I can't really tell either way. If you're one of the lucky ones to get a copy of
Unix, then you first had to figure out how to compile that code into a workable program and
load it into your computer of choice. To further complicate things, AT&T gave users no support.
Once you got the code, you were on your own. This obviously wasn't within the reach of a consumer,
but for researchers, it was definitely attainable.
Some of the early adopters of Unix in this era were universities like UC Berkeley, the University of Toronto, Yale, and the University of Waterloo, just to name a few.
The strange release of Unix ended up targeting a very specific type of user.
Most early copies made their way into computer science departments at universities.
That's where the main interest was early on. Just to get Unix up and running, you kind of had to be
a programmer, since invariably there would be some part of the code that you'd need to tweak
to work properly with your department's specific computer setup. This leaves you with a user base
of computer science students who, in general, tend to be avid programmers and insatiably curious.
And with access to the full source code for Unix, and all the programming tools they could ever want, well, it's easy to guess where things would go from here.
Pretty quickly, new software and improvements for Unix would start to pop up.
Computer enthusiasts of all stripes, given that they were on a university campus at
least, churned out a lot of new code. This ranged from new commands to games to fixing bugs present
in Unix itself. And, as academics like to do, this new code was shared widely. There were even
ambitious efforts to move Unix onto new systems that it wasn't initially designed for.
The source code for Unix would even become a teaching tool in classrooms,
further pulling in new and eager software developers.
A new community was starting to form, composed of Unix users.
It's the root of certain traditions that we still see today,
principal among those being the idea of free and open-source software.
I can't stress this point enough.
Starting from the very early days,
a precedent was set that Unix-like systems were home to open-source software.
But like I said, the circumstances around licensing Unix were very complex,
especially so in these early days.
While some were able to get their hands on free versions of the software, AT&T would
still sell some licenses. Starting with the release of version 6 in 1975, companies were able to
purchase a commercial license for Unix. As near as I can tell, this became possible thanks to the
looming breakup of AT&T into multiple separate companies. Back in 74, the Department of Justice had filed a new antitrust suit against AT&T.
They really just can't catch a legal break here.
This would eventually end with the company breaking into nearly a dozen smaller firms.
AT&T agreed to this under the condition that the restrictions brought on by the earlier
antitrust suit were dropped.
The case did result in AT&T now being able to get into the computer business.
But the overall timeline is somewhat contradictory.
While there are records of Unix being licensed as a commercial product as far back as 75,
the breakup wasn't finalized until 82.
It may have been the case that the ongoing suit just gave the company a little bit of wiggle room,
but I'm not totally sure on that.
With the legal caveats aside, Unix would hit the ground as a commercial product in 75.
While it was essentially free for universities, industry was a whole nother story.
A commercial license would run you $20,000.
That's nearly $100,000 when adjusted for inflation.
It's not a very cheap product to get your hands on. It's a whole lot of money. But at the time,
the only customers that could really use Unix would already be invested in mainframe computers,
so price may not have been the largest problem for them. However, this leads to a really
interesting situation. If you're in education, you could get
a copy of Unix for basically a song. And since 1973, the entire source code for the operating
system had been floating around in the wild. It was only a matter of time before AT&T would see
competition. One of the earliest and still surviving spin-offs of Unix is the Berkeley Software Distribution,
more commonly just called BSD.
In 1977, a group of programmers at UC Berkeley started passing around their own distribution of Unix.
The core system was composed of code directly from AT&T,
with some modifications.
Added on were new software packages like text editors,
a new command line interface,
games, networking utilities, and much more.
All these additions had been developed at Berkeley.
The story of BSD deserves its own time in the spotlight,
but I bring it up here to show the precedent that was being set.
BSD was free software.
Anyone could get a copy of the source code,
and it was freely passed around within the community.
In the latter part of the 70s, that meant universities or labs big enough to have mainframes.
But as time went on, interest would grow outside of this niche.
The appearance of competition and eventual expansion outside of academia would turn into what today we call the Unix Wars.
So now, let's change gears and talk about home computers.
Personal computing already existed in the 1970s. Machines like the Apple II were paving the way towards getting a computer
on every desk. But the fact remains that these systems weren't all that capable. For that reason,
Unix would stay in the realm of academics, research, and business for most of its first decade of life.
It had been designed for mainframe-class computers, and consumer-grade hardware didn't come close to that level of sophistication.
Unix was made for systems that were shared between many users at once.
It was built to run more than one program at once.
Simply put, there was no need for something like Unix for personal computers.
At least not yet. As the 80s started, things would change considerably. Computers would find their
way into more homes, and every day, more businesses adopted computers. The market for smaller computers
had existed before, don't get me wrong, but it expanded considerably in the 1980s. Besides there just being more computers in the wild,
these newer systems were a whole lot more powerful. Now, on this podcast, we've talked a
lot about the IBM PC, and with good reason. It's a really important part of the heritage of modern
computing. But as with all things, there is a much larger context at play here. The PC was part of a line of computers manufactured
by IBM. In fact, it was the lowest-end model of that line. On the higher end of the family
was the IBM PC-AT, and I think that system is a much better example for our purposes today.
Most home systems in the 70s were built around 8-bit processors. Now, the bittiness of a processor has a lot of
implications, but one big one is the memory address space. An 8-bit processor operates on 8-bit numbers
and typically has a 16-bit address bus, and since each location in memory has to have an address,
the result is that an 8-bit processor can only access a small amount of RAM, usually 64 kilobytes.
There are some tricks to deal with larger amounts of memory, but they don't
really help a whole lot. The IBM AT came with a 16-bit processor, the Intel 80286, with a 24-bit address bus. Older systems
may have had kilobytes of RAM to work with, but the AT could top out at 16 megabytes. That's not
just numbers. More memory meant more space for running programs. The other factor at play was
the ready availability of hard drives. Earlier systems relied heavily on floppy drives for all
their storage needs. You could get hard drives as add-ons for machines in the 70s, but it was
expensive and pretty limiting technology. The AT would come stock with a 20MB hard drive.
Instead of swapping disks to load new programs, everything could just live inside the computer,
available for use at any time.
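As a quick sanity check on those memory figures, the address space is just two raised to the width of the address bus. A tiny C snippet, assuming the usual 16 address lines on 8-bit-era micros and the 80286's 24:

```c
#include <stdio.h>

/* Address space = 2^(number of address lines), in bytes. */
int main(void) {
    unsigned long eight_bit_era = 1UL << 16;  /* 65,536 bytes = 64 KB */
    unsigned long ibm_at_80286  = 1UL << 24;  /* 16,777,216 bytes = 16 MB */
    printf("8-bit era micro: %lu bytes\n", eight_bit_era);
    printf("IBM AT (80286):  %lu bytes\n", ibm_at_80286);
    return 0;
}
```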
Better storage, more memory, and a larger market led to a lot of opportunities to innovate.
The AT wouldn't be released until 1984, but it's emblematic of the direction that
things were going in the 80s.
More powerful machines with capabilities much more in line with those of mainframes
were making their way out into the wild.
So now we can get to the main course.
This has been a lot of work to set the stage,
but the main character is finally arriving.
One of the many groups that was riding the tides into the 1980s
was the Mark Williams Company.
They weren't one of the usual suspects as far as computing goes. They weren't an Apple or an IBM or a Microsoft.
Formed in Chicago in 1949 by William Mark Swartz, the company was originally named the Mark Williams Chemical
Company. And they weren't initially a software company. In fact, their original product was a drink called Dr. Enough,
a, quote, dietary supplement contained in a delicious lime and lemon flavor.
Yeah, it's not really the bona fides that most computer companies tout.
It wouldn't be until 1977 that the Mark Williams company
would transform itself into a software house.
By this point, the company had largely passed on to Swartz's son, Robert Swartz.
But it seemed like the younger man wasn't so much interested in soft drinks and chemistry
as he was in computers.
He had recently graduated from the University of Waterloo's computer science department,
where he had been one of the many students exposed to Unix.
But that wasn't the
extent of his interests. The younger Swartz was an early convert to the wonders of the personal
computer, and while he would have had a background with larger systems from college, he saw the
promising future of smaller, in-home systems. With the reforming of the Mark Williams company,
Swartz brought along a group of college colleagues,
also CS grads, who shared a similar vision.
They knew personal computing was where they wanted to be,
but the time period they were in was a little bit of a strange, liminal space.
The real explosion of personal computing systems was just on the horizon.
For the time being, the market for software in the home wasn't all that huge.
Businesses, meanwhile, would have either had no computers at all, or they'd be on mainframes.
So on the smaller end, you have simple machines like the Altair 8800 in the hands of mainly
hobbyists. While on the exact opposite end, you have large companies and institutions that are
rocking expensive DEC or IBM behemoths.
There was some early crossover of personal computing in smaller businesses,
but in general, the market for software was broken into two large categories.
This is all to say that Swartz and company found themselves in a very interesting time.
Mark Williams would be soundly on the personal computing side of the fence,
but they didn't totally ignore Big Iron.
I think one of their first products, XY Basic,
can shed some light on the space that they were working in.
This program would hit shelves in 1977,
targeted at Intel 8080-based computers such as the Altair 8800.
At first glance, it would be easy to discount this as a clone of existing software.
I mean, just two years prior, Microsoft became an overnight success in the hobbyist community
with the release of Altair BASIC. Sure, Mark Williams did write a version of BASIC for the
same system, but there's a little bit more at play here just below the surface. Let me read a quick
passage from the manual for XY Basic.
Quote, Congratulations! You are about to discover the unique and powerful properties of XY Basic,
the only BASIC interpreter specifically designed for process control, data acquisition,
and real-time applications with 8080-based microcomputer systems. End quote. It's BASIC, but with a twist.
Microsoft's early version was targeted squarely at the hobbyist or home user,
but XY BASIC was a slightly different beast.
It was designed for process control systems.
In other words, we're looking at a more industrial product.
So you get all the standard BASIC instructions for math,
printing to the screen,
taking inputs, and flow control, alongside more advanced, niche features. What blows me away is the
fact that XYBASIC has software interrupts and events. These are two key features for any
industrial control system or any computer system that interfaces with other physical hardware.
Essentially, a software interrupt tells
your computer to wait for a certain signal, say a reading comes in over a sensor, and then once
received, run a pre-specified chunk of code. Usually, interrupts are handled at the lowest
level of a computer's hardware. Even today, it's common to write interrupt-handling code in assembly
language. But here in 1977, XY Basic can handle
a very similar type of functionality in a clear and concise BASIC program. Software interrupts,
plus a bevy of other features, put XY Basic in this interesting space where it's able to handle
more direct control of the underlying hardware than its competition could. The other interesting
factor is that most process
control systems used specialized computer hardware. Normally, a factory would have something akin to
a mainframe tricked out with expanded interface hardware. Obviously, that kind of hardware doesn't
come cheap, so computerization had only really reached larger factories and industry, or some
larger research labs. With the rising
availability of smaller systems at the end of the 70s and into the 80s, it was becoming possible
for smaller companies to buy into industrial control systems. And XY Basic was positioned
to slot right into that new market. It brought features usually associated with larger and more
powerful hardware to much smaller machines.
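The episode doesn't quote XY Basic's actual interrupt syntax, so as a rough modern analogue, here's the same wait-for-a-signal-then-run-a-handler pattern sketched in C using POSIX signals, with SIGUSR1 standing in for a sensor interrupt:

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Set when the "interrupt" (our stand-in for a sensor signal) arrives. */
static volatile sig_atomic_t got_reading = 0;

/* The pre-specified chunk of code that runs on the signal. */
static void on_reading(int sig) {
    (void)sig;
    got_reading = 1;
}

int main(void) {
    signal(SIGUSR1, on_reading);  /* register the handler up front */

    for (;;) {
        pause();                  /* sleep until any signal arrives */
        if (got_reading) {
            got_reading = 0;
            printf("sensor reading received, handling it\n");
        }
    }
}
```

The point of the comparison is just the shape of the feature: the program declares a handler, then gets on with its main loop, and the handler fires whenever the outside world pokes it, which is exactly the kind of thing that normally lived in low-level assembly code.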
You can even see this kind of thinking at work in the systems XY Basic was compatible with.
For the most part, any computer with an Intel 8080 processor and a good amount of RAM
could be turned into an industrial control system.
XY Basic was only the beginning for Mark Williams,
but it set a pattern that would play out in a larger way
later on. And now we arrive at Coherent, the much larger project that was brewing at the time.
Just as a note, this is where we venture into slightly sparse territory. There really isn't
a single comprehensive source I'm working off for this section. What follows is a best-as-I-could-put-
together story from archived Usenet posts, man pages, source code listings, and accounts from employees of the Mark Williams company.
There's not that much contemporary data to go on.
The timeline for the early development of the system is also a little foggy.
Most accounts I've seen state that initial development started in 1977, but I haven't been able to track down if that was while Robert Swartz was at the University of Waterloo or just as he took control of the Mark
Williams Company. The goal was to offer Coherent as an alternative to Unix that retained a level
of compatibility. So why exactly did Swartz want to bring a product like this to market?
Well, we've already looked at one possible angle
in XY Basic, and that's partly applicable to Coherent. It was positioned to fill a new niche.
Unix was firmly a big iron type of product, something that cost a lot of money and could
only really be used on expensive computers. But with the oncoming tide of personal computers,
a market for something like Unix, just on a much smaller scale, was about to open up. Sure, most users wouldn't need a multitasking, multi-user operating system,
but businesses and some home users would definitely be interested. A version of Unix
for smaller computers could go a long way towards bringing features from mainframes
onto the desktop. The other side of the coin came down to practicality. A license for Unix from AT&T was
prohibitively expensive, and there would be some red tape around Mark Williams getting a license
to the code and then turning around and selling it to consumers. Around this time, there were
companies attempting to do just that, notably Microsoft and their Unix variant called Xenix.
But that was Microsoft, a relatively large company
with some money and leverage to play around with.
A soda manufacturer turned software house
on the outskirts of Chicago
couldn't really throw around the same kind of weight
that Bill Gates could.
So the most readily available alternative
would be to make your own Unix
without using any of AT&T's code.
If you think that sounds like a lot of work, then you're absolutely right.
A project like Coherent is ambitious on a massive scale,
and it would take a lot of time and effort to bring that to market.
Another big reason to copy Unix came down to exposure.
Remember that starting as far back as 73,
the source code for Unix was being used as a teaching tool in computer science courses.
It's very likely that Swartz and his fellow students at the University of Waterloo had seen the inner workings of the operating system during their degree program.
Adding credence to this theory is the fact that the University of Waterloo was on one of the first mailing lists for an early Unix user group.
Coherent's ultimate targets were home computers, but it wasn't feasible to develop directly for that quite yet.
The market in 77 was fragmented, to say the least, and even the more capable systems at the time
weren't really powerful enough to get much use out of Unix. They'd have to bet on the future.
So work would start on a more usable
machine. Development began on larger hardware and worked its way down from there.
The machine of choice was a DEC PDP-11, and it was a reasonable pick as a starting point.
For one, the PDP-11 was a popular and eminently available minicomputer at the time. Since most of
the early members of the Mark Williams company came directly from the University of Waterloo, it's extremely likely that they were
familiar with this machine also. The other big reason to target the PDP-11 is that, historically,
this computer has been the homeland of Unix. If you used Unix in this period, then more than
likely you were on a DEC minicomputer. In fact, there wouldn't be an official AT&T release for any other hardware until 1978.
Put these factors together and the PDP-11 really becomes an obviously good place to start.
The early phase of development is one place that I really wish I had some more information about.
Since Unix source code was widely available and the crew behind Coherent had to have been exposed to it at some point,
making a legally defensible product would be tricky to say the least.
The problem comes down to how to clone software without using or referencing any of the original source code.
The most well-documented case of this kind of development has to be Compaq's work to clone the IBM PC's BIOS software.
This is something that we've talked about multiple times on the podcast, so I'll keep it brief.
The gist of the story is that Compaq used a method called clean room reverse engineering.
This is where you have one team work up a specification for the program you're trying
to reverse engineer. Then another team, one with no prior exposure to the code base, uses that spec
to implement a clone. The idea here is that the process gives you a little bit of legal freedom.
With a proper paper trail, you can prove in court that you haven't stolen and sold someone else's
code. I can't be 100% certain, but I'm pretty sure that Mark Williams wasn't quite as rigorous as Compaq.
As I keep hammering home, the source code
for Unix was everywhere, especially in academia. It would probably be hard to find a good C
programmer who hadn't seen any of AT&T's code at this point. The other reason for my suspicion
comes from Tom Duff, a programmer who worked at the Mark Williams Company very briefly. When looking
back at his time with the company, he had this to say,
quote,
When I arrived, it was pretty clear
that the kernel was pretty much taken care of,
though it wouldn't be running well enough
for daily use until after I'd left.
But nobody was working on user space stuff.
So I opened up the 6th edition manual to page 1
and started implementing commands.
In the three months that I was there,
I think I did A through M. End quote. So I think it's the most likely case that Coherent was built using a more
fast and loose version of clean room reverse engineering. Instead of writing a spec in-house
and keeping a clean team isolated from any proprietary code, programmers just sat down
with some reference docs and got to work. That would
include all the core components like the Unix kernel, as well as every utility and program
you'd expect to get with a normal install of Unix. Now, as Duff alludes to in that passage,
Coherent would take about three years to get up and running. This would still have all been done
on the PDP-11 version. And now, this is where we get to one of the big advantages that Coherent had over Unix at the time. The crew at AT&T had been working towards
a portable system since the early 70s, but the first official port of Unix wouldn't be released
until 78. Part of the reason for the slower rollout was that Unix is just plain big, and
replacing all the computer-specific parts of the system
without breaking anything would take a lot of time. Swartz's rewrite didn't have that legacy
issue at all. All the code was new, so his team was able to build from the ground up with these
types of considerations in mind. By 1980, Coherent was available for the PDP-11, but ports were on
the horizon. Pretty quickly, Mark Williams started shipping versions recompiled for the IBM PC. Since everything was in C,
moving to a new platform was relatively easy. I can't find an exact year for the PC release,
but it was somewhere between late 81 and early 83. Initially, a user couldn't buy Coherent.
Instead, a computer manufacturer would license
Coherent to ship alongside their new computers. This was similar to the arrangement that companies
like Microsoft had regarding their early releases of BASIC. They didn't sell direct to consumers.
While workable, this is a pretty limiting sales model. So in 1983, Coherent went fully public,
and for the first time, users could bring home their own copy of the Unix clone for their existing computer.
The first public release of Coherent was for the IBM PC/XT, or the bevy of 100% compatible clones that came with hard drives.
A license would set you back about $500, or in today's money, $1,200.
That's not exactly cheap, but let's try to see that in
context. Competition such as Microsoft's Xenix came in at an eye-watering $1,350. That's just
under $3,500 today. A license from AT&T also wasn't cheap, so actual ports of Unix were priced accordingly. We also have to keep in mind
that neither Mark Williams nor Microsoft was targeting their Unix-like software at everyday
consumers. These were somewhat niche products. So what exactly did you get for $500? The package
deal is firmly in the realm of other personal computing software. Coherent came
on a set of 7 floppy disks and had a comprehensive user manual. All software was already compiled
for the PC, so you didn't get any source code. Installing it was just as easy as installing
more traditional operating systems like Microsoft DOS. Just load up the first install disk
and follow on-screen prompts with a little bit of help from the manual.
Anyone who had used a computer before could have been up and running with Coherent in under an hour.
But Coherent wasn't exactly meant for any computer user.
Just like XY Basic, it had a very specific niche.
PC Magazine ran an article called A Good Buy on Unix that gives us some insight into Coherent.
Quote, those who already have a stockpile of Unix applications ready to compile and go are in good
shape with Coherent. However, for the rest of us, the question of how much application software is
available arises. End quote. Sure, a new user could jump right into Coherent, but that wasn't really the best fit.
It was a much better option for existing Unix users. Say you needed to use Unix at work,
or school, or you were running a business that used Unix. For just $500, you could be up and
running with Coherent, which, for the most part, would look and feel similar. But it was accessible
for smaller computers, and it was
a lot cheaper than the competition. On the base level, that's a pretty good offering,
but the PC Magazine article brings up another big plus. Coherent didn't just look and feel like
Unix. It could run Unix software. Or at least it could with a little bit of coaxing. This is where
I think some of the strangeness in Coherent really comes out.
A big part of any Unix installation comes down to the programming tools.
And the Coherent build is no different.
It comes packaged with a C compiler, an assembler, and everything you need to build your own software.
Just as an aside, like everything else in Coherent, this new C compiler was built totally in-house. That's
nothing to sneeze at. That's also a massive feat of programming in and of itself. Normally, you
don't really buy Unix software. This comes down to the tradition of open source software that I
mentioned earlier in this episode. It's vastly more common to either find or be passed some source
code for the program you want. Instead of running
an installer, you end up needing to compile that code into something you can run on your own
computer. But here's the thing. All of Coherent's software was distributed pre-compiled. It was a
commercial product after all, so it was totally closed source. At the same time, you could easily compile existing open source software for use
with Coherent. This type of compatibility was definitely a big draw for users. You can use
all the same programs you use with Unix. At the same time, I think this really shows the strange
split-brain nature that we see in Coherent. One other important feature that Coherent brought to smaller computers was the
idea of a shared multi-user system. Like any other variety of Unix, Coherent had time-sharing at its
core. It was able to run multiple programs simultaneously, even on the limited PC hardware
that consumers had access to. To complete the package, Coherent could run with multiple users simultaneously.
This was mainly done using serial terminals.
With a reasonably specced out PC, you could support up to three users at once, two over
terminals and one locally.
There were some practical limits imposed by RAM size, but in general, this let you get
much more use out of your hardware investment.
This type of operation was common on mainframes,
so once again, it's a feature being reduced and revamped for smaller systems. Overall,
Coherent made for a compelling product for those interested in Unix who didn't have access to
expensive hardware or software. As Coherent was catching the attention of the press and
computer users, it was also catching a little bit of unwanted attention. Really, it was only a matter of time before the wolves would come knocking at the door.
Now, we don't really have an exact date, I'd hazard a guess of '83 or '84, but sometime around that time,
a delegation from Bell Labs would make their way to Chicago. At the head of this party was none
other than Dennis Ritchie, one of the original programmers behind Unix.
Now, AT&T was concerned that the Mark Williams Company was stealing their software, and they had good reason to be suspicious.
A compatible Unix-like system gets released and the company producing it doesn't have an agreement with AT&T.
It looked like an open-and-shut case of intellectual property theft, from AT&T's
perspective at least. After some back and forth behind the scenes, it was agreed that Bell Labs
could come to Mark Williams for a demo, but they wouldn't have access to Coherent's source code
without a court order. So AT&T went for the next best thing. Ritchie was extremely familiar with
Unix. That may be a little bit of an understatement.
He and Ken Thompson had written the majority of the code for the operating system. They knew it
inside out and then some. If anyone could sniff out stolen code without seeing the source,
it would have to be one of the two. But that's not to say that Ritchie was an eager participant.
He recalls the visit like this,
quote,
From their point of view, we were like the IRS auditors coming in.
From my point of view, I felt the same.
Except that playing that role was a new and not particularly welcome experience.
What I actually did was play around with Coherent and look for peculiarities, bugs, etc. that I knew about in the Unix distributions of the time.
It was undoubtedly a stressful and uncomfortable situation for all parties.
Swartz knew that he was in the right. Coherent was as clean from AT&T code as he could make it.
But a lawsuit from AT&T, win or lose, would mean disaster. As for Ritchie, it's clear he wanted no part of this. He
definitely didn't like the prospect of playing software cop. Luckily, the situation was defused.
Quoting from Ritchie again, quote, I concluded two things. First, it is very hard to believe
that Coherent and its basic applications were not created with considerable study of the OS code and details of
its applications. Second, that looking at various corners convinced me that I couldn't find anything
that was copied. It might have been that some parts were written with our source nearby,
but at least the effort had been made to rewrite. If it came to it, I could never honestly testify
that my opinion was that what they generated was irreproducible from the manual.
End quote.
And just like that, Coherent was in the clear.
Without the source code for comparison or testimony from Ritchie, AT&T could not build a strong case.
But while Coherent was saved from ruin this time, it would only be a reprieve.
The Mark Williams company continued to develop
Coherent into the 90s. Eventually, its price would be cut back to $99, making it by far
the cheapest way to get access to a Unix-like system. By the 90s, the computer market had once
again changed considerably. Mark Williams had done well in the shifting and turbulent 1980s.
Software like Coherent was
almost tailor-made for this type of interim period. However, the company never really struck
it huge. Ultimately, Coherent would lose market share to more true-blooded Unix variants,
BSD being one that comes to mind. Without the backing of a huge company like AT&T or an
institution like UC Berkeley, the smaller company
wouldn't be able to make it through the lean times. And in 1995, Robert Swartz announced this,
quote, it is my sad duty to announce that Mark Williams Company has gone out of business.
There are many reasons for this decision. Some are due to mistakes that we have made,
some to changing business conditions, but the bottom line is that we must shut our doors.
With that, I think it's time to wrap up this episode.
This has been a bit of a long one, but I think the story is more than worth the time.
Coherent would have a 15-year lifespan, filling a very specific niche. During that time,
it stood as a testament to some wicked programming skills, while acting as a bridge between
larger computers and the emerging PC scene. The 1990s brought about some big changes in the world of
computing, especially when it came to Unix-like systems. Ultimately, Coherent was just one player,
but I think it's a unique and important
one. I want to close this out on this note. While digging up information on Coherent, I leaned
heavily on archived Usenet posts, basically a popular internet forum in the 80s and 90s.
During that time, there was a whole board dedicated to Coherent, fittingly called
comp.os.coherent. So it startled me to come across this post from
1992. Quote, this is a blatant plug for my own Unix-like kernel, as I'm always interested in
more beta testers. It should be self-explanatory, although interested persons should note that
version 0.12, still beta, will be out in a week or so, end quote. That was posted by Linus Torvalds,
while he was still working on early versions of Linux. It turns out that he was somewhat active
on the coherent board during the 1990s. There are even some posts where he helps coherent users with
their migration to Linux. Now, most Linux users wouldn't have been pitched the system personally,
but over time, Linux would win out and become the most widely used Unix-like operating system.
It interests me to really see this process going on in the microcosm of these old forum posts.
Coherent and Linux are, ultimately, very similar. Both accomplish the same goal of replicating Unix without using
any of AT&T's code, but each with a very different philosophy. In the end, Coherent
did make a huge difference if for no other reason than it showed it was possible to replace Unix.
Ultimately, it fell out of favor, but I think its story gives us an insight into the turbulent period of computing's past.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with a new piece of the story of the computer.
And hey, if you like the show, there are now a few ways you can support it.
If you know someone else who's interested in computing, then why not take a minute to
share the show with them?
You can rate and review me on Apple Podcasts.
And if you want to be a super fan,
then you can support the show through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and assorted perks.
You can find links to everything on my website,
adventofcomputing.com.
If you have any comments or suggestions for a future episode,
then shoot me a tweet. I'm at Advent of Comp on Twitter. And as always, have a great rest of your
day.