Advent of Computing - Episode 135 - XENIX

Episode Date: July 7, 2024

In 1984 SCO released PC XENIX, a port of UNIX that ran on an IBM PC. To understand why that's such a technical feat, and how we even got here, we have to go back to the late 1970s. In this episode we are taking a look at how Microsoft got into the UNIX game, and how they repeatedly struggled to make micro-UNIX work for them. Along the way we run into vaporware, conspiracy, and the expected missing sources!

Transcript
Starting point is 00:00:00 Unix has always fascinated me. Back in my younger years, I was stuck on a 286 machine running DOS, so I had this whole period where I became a little bit obsessed with the idea of Unix. It could run multiple processes at once. It didn't even have Windows. Imagine that. Perhaps it's little wonder that as soon as I got a more modern computer, I instantly became a Linux devotee.
Starting point is 00:00:31 This was one of those events that planted something in my psyche. The idea of running Unix on small and underpowered machines has always kind of stuck with me. Could I have taken my old rusty 286 and booted it into a Bash shell? The answer to that, I found, was definitely yes. The install media may have been hard to get a hold of, I may have had to pirate a few things, but it would have been possible. It would have even been possible to get Unix running on a much smaller machine. There is, of course, the bigger picture here. Unix historically has been in the realm of mini-computers and timesharing. It was born on these somewhat larger machines to do larger tasks.
Starting point is 00:01:14 But somehow, it makes the jump to small machines. It went from a big, multi-user system to a more, well, personalized experience. Now, the young Sean would have had no problem explaining that leap. Of course someone would want Unix on their home computer. I mean, come on, it's Unix. It's cool as all get out. But what's the truth of the matter? Why did Unix make that jump down, and how was it even possible for smaller computers to handle that much software? Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 135, Xenix.
Starting point is 00:02:07 Today, we're going to be talking about the smaller side of Unix and looking at some software that, really, I think I've been thinking about since I was a teenager. Before we get started, I have an announcement and I also have a warning for this episode. The warning is, it is finally summer here in the home office, and my home office is kind of hot. Don't get me wrong, I love my little office, but it does the whole greenhouse thing as soon as it gets warm outside. I don't really like it, my computer doesn't really like it, so you might hear some fans this episode. I'm doing my darndest to make sure they don't get into the mix, but if you hear that, please forgive a poor sweaty man sitting at his little desk.
Starting point is 00:02:51 The announcement, on the other hand, is much more exciting than me complaining about the heat. I'm going to be going to VCF West this year. VCF West is happening this August. It's August 2nd and 3rd. That's a Friday and Saturday at the Computer History Museum in Mountain View. It's a fabulous event, and the Computer History Museum is a fabulous place to go. Better still, I'm on the speaker schedule this year. I'm going to be giving a talk about Doug Engelbart, Edge Notch Cards, and my research into the very early origins of hypertext. So if you want to come out, say hello, and hear me talking about my weird hypertext theories, then please do come out. It's a fabulous event no matter what.
Starting point is 00:03:39 Highly recommend the trip. Anyway, with that out of the way, we have an episode to do, and that episode's about Unix. So, for the uninitiated, Unix is one of the most important operating systems probably in history. It was first developed in 1969 by Ken Thompson and Dennis Ritchie while they were working at AT&T's Bell Labs. Unix is where we get the C programming language. It's what leads to Linux and to macOS and BSD and a whole host of other operating systems that are in use everywhere. Its descendants literally make the world work. If you're using anything digital and it's not running Windows, it's probably running some flavor or derivative or descendant of Unix. In fact,
Starting point is 00:04:27 Unix is so prevalent and so styled on, there's a collective word for all these variations of the operating system. They're called NICs, stylized as asterisks NICs in IX. You also might hear them called Unix-likes or Unisees. The reason for this spread and growth is pretty simple. Unix was a very capable operating system, and due to a series of circumstances involving antitrust law, its source code was made widely available. That source code, written in C, spread everywhere. As a result, there's some Unix-like thing for just about every computer in the world. But the story of how Unix-like operating systems made it to microcomputers, well, that is very complicated. Despite being such a portable beast, such a digital chameleon, Unix wasn't compatible with many early microcomputers.
Starting point is 00:05:28 The timeline makes that especially strange. Unix most famously ran on the DEC PDP-11. That was its home platform for a number of years in the early 1970s. By the early 80s, microcomputers were more powerful than the PDP-11. But still, Unix wasn't running on very many micros at the beginning of that decade. However, that wouldn't stop folk from trying. In the 1980s, we start seeing honest-to-goodness ports of Unix onto microcomputers. Now, I want to be clear about a few things here.
Starting point is 00:06:05 onto microcomputers. Now, I want to be clear about a few things here. Today we're going to be talking about ports, as in someone taking the source code for Unix and getting it running on another microcomputer. There is a little coercion needed there, a little convincing via code, but the result is Unix running with actual Unix code. There's a whole other class of Unix-compatible software in this era, most notably Coherent. I've covered that software elsewhere, and I want to cover more compatible Unix-like things later, but this episode, I just want to look at real-life ports. To do that, I'm taking a case study approach. We're going to be exploring Microsoft's Zenix. The reason here is, well, it's pretty simple. I could say it's a case study, but that belies how distracted I get with shiny things.
Starting point is 00:06:56 You see, there's a port of Zenix that does the impossible. It runs on an IBM PC. That may not sound impressive at first, but believe me, it's a feat. In this episode, I'll be laying out why this is so cool, how Microsoft did it, and why the PC even wanted Unix in the first place. There's a lot to cover, so let's kick things off. Welcome to the smaller side of Unix. kick things off. Welcome to the smaller side of Unix. Never has there been a greater demand for software that is easy to use and maintain, and independent of the hardware on which it runs.
Starting point is 00:07:36 As the price of software rapidly outpaces that of computers, the need to increase software productivity and reduce duplication of effort has become paramount. Microsoft's Zenix operating system offers one solution to the software crisis developing in the microcomputer world. End quote. That's the introduction from a 1981 article about Zenix that appeared in Byte Magazine. That gives us a short answer for our big question, right? Why would anyone want to run Unix on a microcomputer? It also gives us a jumping-off point to examine the details of that question.
Starting point is 00:08:16 That article was written by one Robert Greenberg, himself a Microsoft employee involved with the Zenix project. So we can view this as really a bit of extended ad copy. That said, he does point out something crucial, the need for portability. This is a super period argument, which makes it all the more interesting. Allow me to explain. The first versions of Zenix hit the scene around August of 1980. That's almost exactly a year before the IBM PC. That puts us a few years before the full-blown x86 monoculture. This is a period of very rapid and violent flux in the microcomputer world. Part of this was due to
Starting point is 00:09:01 the transition from 8 to 16-bit processors. The 8-bit generation of microcomputers didn't have any one unifying architecture. There were three main chips on the market in that period. The Intel 8080, the Zilog Z80, and the MOS Technologies 6502. Most mainstream computers used one of those processors, so there were at least three instruction sets in common use. There were some machines that used the same chip. The Apple II and Commodore PET, for instance, both used the MOS 6502 processor, but those machines had very different designs. Memory on the PET had a different layout than the Apple II, and they had different character sets, not to mention graphics capabilities.
Starting point is 00:09:51 There was no commonality here, which started to show in the software of the time. If a company wanted to sell software to the microcomputer market, they needed to sell many different versions. VisiCalc, for instance, would come out in a handful of different iterations. Due to hardware differences, these weren't all direct ports, so you end up having to write the same program over and over again. It's not very practical, you're duplicating your effort. At the end of the 1970s, the 16-bit generation was just beginning. It looked, once again, like there would be no clear frontrunner. We can see this in trade and popular press publications of the time.
Starting point is 00:10:34 My usual source here is 16-bit microprocessors, published by Blacksburg in 1981, literal months prior to the IBM PC. It gives a rundown of the latest and greatest processors, which comes out to seven different chips, ranging from the well-known 8086 to the Z8000 series, LSI 11, and even the TMS99000, my personal favorite. All these chips are totally incompatible, take a very different approach to computing, and were already being used in some machines. During the 8-bit era, there had been a unifying force. At least, kinda. In part, let's say. It was called CPM, an operating system that could run on any 8080 or Z80 machine, in theory. This was accomplished by breaking out the code for managing hardware into a separate module. In practice,
Starting point is 00:11:34 CPM could be ported to any computer that used an 8080 or Z80 processor. You just had to rewrite that hardware module. So, if you were on a platform that supported CPM, you could run any CPM software. Of course, this comes with caveats about weird hardware quirks. In practice, there are many incompatible disk drives and incompatible file formats across different versions of CPM, not to mention weirdness with character sets. formats across different versions of CPM, not to mention weirdness with character sets. But anyway, in 1979, it was announced that CPM would make the jump to the 16-bit era with one port to the 8086. That could have led to a nice state of affairs, but only for a small handful of machines. Remember, the 8086 wasn't really ascendant until the PC hits the mainstream,
Starting point is 00:12:28 until we get the clone market. So, looking from 1979, it would appear that the 16-bit era would have had the same major compatibility issues we saw in the 8-bit era. Unix could be positioned to solve that problem. It was an established operating system, very well established at that. There was an ever-increasing amount of software being developed for Unix. It was highly portable. Unix, recall, is written in C, a high-level language that was built specifically for portability. Systems like CPM were written in assembly language, which tied them very closely to one processor type.
Starting point is 00:13:09 Its code was literally one processor instruction after another. That's one of the reasons that CPM stuck to Intel-like things in the 8-bit era. But UNIX, well, that could do the whole chameleon trick. To port Unix to a new platform, you only needed to rewrite the C compiler and add in a little bit of hardware-dependent code. Then, in theory, you could just compile Unix to run on the new platform. And you're done. The port is finished. That new platform would then be able to run any Unix software,
Starting point is 00:13:44 since it would have its own C compiler now. What helped was that Unix software was, itself, passed around as code. So once you had your C compiler, the sky was the limit. It hooked you straight into the Unix superhighway, so to speak. Now, of course, the real world isn't quite that nice. Unix can't just run on anything. If you know much about Unix, then you've probably heard one of its huge advantages is simplicity. Now, that's usually meant in comparison to contemporaries.
Starting point is 00:14:17 It's meant as a comparison to timesharing operating systems. Big operating systems meant to run on big computers with multiple users and many multiple concurrent processes. If you compare that to CPM, well, the differences are somewhat noticeable. CPM is only really a file manager with a few neat tricks and features. You can read and write to a floppy disk, load programs off that disk, and do some keyboard and screen stuff. Plus, there are functions for handling serial and parallel I.O. You can even use a printer if you want. If you call Unix simple, then I don't know what you would call CPM. It's something below simplicity. That's also pretty indicative of the operating systems running on
Starting point is 00:15:06 these 8-bit machines. Unix needs a few things to actually function. For microcomputers, the biggest hurdle here is something called a memory management unit, or MMU. This is essentially a circuit that sits between the processor and RAM and lets you control how memory is treated. In its most basic form, it can remap addresses. Making a request from the processor for one address actually returns a different address from physical memory. To do that, you build up this table of logical addresses, the address your program asks for, and physical addresses, the address it actually gets routed to in your RAM chips. The common trick, and the one that Unix needs, is memory protection. Unix is a time-sharing system, meaning it's juggling multiple processes at the same time.
Starting point is 00:15:59 It does that by splitting up the processor's time. It runs one process for a little bit, and then switches to the next. One of the big problems with that kind of system is preventing programs from messing with one another. The solution that Thompson and Ritchie used involves using an MMU to isolate each process's memory space. Essentially, Unix tricks each program into thinking it has access to any memory it wants. It does so by setting up the MMU to map requests from that program to some isolated chunk or chunks of physical memory. Put perhaps a little more simply, it traps a process in this fake memory space. It's like a little cage around the program. That's enforced by hardware, not software. So to run Unix, you need an MMU. Otherwise, you have performance issues,
Starting point is 00:16:53 you have all these complexities. It isn't really worth it. You need that special memory circuit. Now, this doesn't actually have to be very sophisticated. The PDP-11's MMU is notable for being very simple. It really only supported very basic mapping. The PDP-11 port of Unix, as I understand it, basically exploits this really simple segmentation mapping that the 11 does to make a very hacky implementation of memory protection. But it does work. You don't need a whole lot here. The MMU goes a little deeper than pure program isolation. Unix also needs a mechanism to keep programs from modifying Unix itself or going directly to the hardware and mucking things up. On most machines, that's accomplished by implementing what are called privilege modes
Starting point is 00:17:43 at the processor level. The simple version is that you have these different modes with different levels of hardware access. These modes are usually called, maybe unsurprisingly, privileged and unprivileged. In privileged mode, you have full access to all memory, all peripherals, and every single processor instruction. That's for things like Unix itself. It gives you ways to, for instance, control the MMU. If a process could change those mapping tables, then it could do anything it wants. You don't want that. Processes, which must be kept in isolation, run in unprivileged mode. Then there are a set of instructions for switching modes to support this type of setup. Then there are a set of instructions for switching modes to support this type of setup.
Starting point is 00:18:31 The goal here is really just to prevent a process from going rogue and crashing Unix itself. It also enforces this separation of concerns. A process shouldn't just be able to connect to a keyboard and do anything it wants. It has to ask Unix to handle that for it. That's the overall philosophy here. Something to note is it's not necessary that modes are needed to make Unix software run, it's that Unix software is written specifically to operate like this. The big issue with porting Unix to micros came down to these hardware features, MMUs, and protection modes. This was a very well-known issue at the time. In 1978, researchers
Starting point is 00:19:07 at Bell actually did port Unix to run on an Intel 8086-based computer. This was something I had never actually heard about before, and serves as a really interesting side note and a wild example of hardware issues. The port is described in this paper titled Unix Operating System Porting Experiences. The paper itself covers a few ports, but the 8086 one is what interests us here today. AT&T actually built their own 8086 machine, full of custom hardware, just to run Unix.
Starting point is 00:19:44 This is where we see the first signs of struggle. When looking at a microcomputer, the processor is just part of the larger deal. There's this whole collection of chips around the CPU which are called the chipset. You know, since they're a collection of chips, a set of chips that go with the processor. set of chips that go with the processor. Anyway, each manufacturer, be they Intel, Zilog, TI, or MOS, designs and sells this handful of support chips to be used with their fancy processors. Nowadays, most features are pushed into the CPU maybe with one big support chip. But back in this period, most of those features got discrete chips, hence chipset. If we look at the 8086's chipset, we can start to see the full picture of what the larger platform
Starting point is 00:20:31 could do. We have DMA controllers for moving around data, bus latches and peripheral controllers, a timer and interrupt controller, keyboard and display controller, clock, and a floppy disk controller. But there's no MMU. The bottom line is that the 8086 didn't offer anything like hardware memory management, and neither did its chipset. There was no concept of any kind of mode. So out of the box, it couldn't really run Unix. Hence, when AT&T went to make their own 8086 computer, they had to build their own memory management circuits. The MMU they developed
Starting point is 00:21:14 was pretty sophisticated, all things considered. It allowed Unix to build up these tables mapping the 8086's address space into physical banks of memory. It supported 2 megabytes of RAM, that's twice as much as a stock 86 machine could have addressed. Protection was accomplished using that same circuit. Ultimately, the crew at Bell was able to get Unix running on their machine, but it's all custom stuff. They even ended up with these wild peripheral processor units that had their own built-in Z80 processors. This wasn't a simple little computer, it was a serious complex machine. Once again, Unix is sometimes called simple, but that's a very relative term.
Starting point is 00:21:59 This was serious software being adapted to run on bleeding edge technology. This is one of the earliest uses of the 8086 I've ran across. It's about at the same time that Xerox was making this wild portable graphics machine with the same chip. This should give us an idea about just how hard it would be to actually sell Unix on a microcomputer. You would need to have a machine that existed and had a user base, plus an MMU and some concept of protection modes. This does open up a possible business model, but it all hinges on the 16-bit era being as chaotic as the 8-bit days. If there was one platform that reached dominance, then, well, the whole portage problem would evaporate. Dominance, then, well, the whole portage problem would evaporate.
Starting point is 00:22:46 Compatibility wouldn't really matter. Thus, we have this thin sliver of time where micro-UNIX may have made real sense. There's this pretty cool contributing factor to UNIX proliferation. Technically, AT&T wasn't allowed to sell Unix. Due to a long history of antitrust trials, AT&T was, to put it mildly, told to stay in their lane. They could only sell and profit from the telecom business. Unix was software, not telecom anything, was software, not telecom anything, so AT&T wasn't allowed to sell Unix directly. There's a pile of caveats and weird outcomes to that ruling that you gotta love. One is that AT&T had to license Unix to anyone who asked for a license, at a reasonable cost. That license ended up covering the source code. Hence, Unix gets passed around as source and gets modified very often. Most early licensees were colleges and research outfits,
Starting point is 00:23:55 but as time went on, more businesses started coming to AT&T asking to license Unix. Now, before any real Unix heads get mad at me, I know technically AT&T themselves weren't licensing Unix. It was a company related to AT&T, and it kind of shifted around at different times and in different places. But I'm going to just be simplifying things here and saying that AT&T was licensing Unix. That avoids a lot of confusion and annoyance. It's a legal matter that we might discuss another day,
Starting point is 00:24:25 but not right now. Microsoft negotiated a license for Unix in 1978. It's crucial to remember the timeline here. At this point, Microsoft is a relatively small software company. Their main product is MSBasic, a basic language environment that's sold on to hardware manufacturers. They don't sell direct to consumer, they use a business-to-business model. A company comes to Microsoft to buy software that is then sold to the consumer or bundled up with hardware. Sometimes that software is left stock, fully intact. Sometimes Microsoft agrees to modify it for the client, or sometimes the client just takes the code and modifies it themselves.
Starting point is 00:25:10 That all though depends on the licensing terms. In 1981, IBM approaches Microsoft for a version of this BASIC and an operating system. That OS, eventually known as MS-DOS, is for the IBM PC. Once Microsoft clinches that deal, and really once clones start spreading in the market, Microsoft becomes a PC shop. But that doesn't happen until the early 80s. I've seen 1983 given as the real breaking point for the rise of the PC, so let's just go off the assumption that that's the year Microsoft makes the full switch to the new platform. Keep in mind, that doesn't necessarily mean that Microsoft becomes a DOS-only company. So in 1978, we can look at Microsoft as a company that's looking for the next big thing. They decide, Bill Gates specifically, to bet on Unix. They get the license, technically from a company called Western Electric that's a subsidiary
Starting point is 00:26:08 of AT&T, but it's AT&T, so they can't sell Unix directly, they can only license. The plan here is that Microsoft will act in their usual B2B capacity. They'll be something like a distributor. And this is probably the point where we need to talk about what a distribution is, right? Unix itself is just an operating system. When you get a license, you get the source code. It's then up to you to turn that into running software. You have to compile it, tweak it for your system, make disks, and add in whatever other useful software you want. That final package, with binaries, extra software, disks, all that jazz, is what makes up a distribution.
Starting point is 00:26:53 If you've ever heard of BSD, well, that is a distribution of Unix. It's the Berkeley Software Distribution. Linux also works this way. You have Linux itself, which is actually just a kernel, then you have distributions like Debian or Fedora that add in everything else you need to actually use Linux. Distributions are what actual users are running at the end of the day. Microsoft wanted to make their own distribution, and it would become known as Xenix. The target platform for that new distribution, at least the first one, was the PDP-11, the traditional home of Unix itself.
Starting point is 00:27:36 That may sound kind of weird at first, and it kind of is, but this all has to do with the whole antitrust debacle. Microsoft would be selling software outright, something AT&T couldn't do. In that sense, Zenix would be filling a gap in the market. There were many PDP-11 users that wanted to run Unix, but didn't have the time or know-how to spin up their own distribution. BSD wasn't technically public yet, so there weren't really options for a grab-and-go Unix. Xenix would be fitting that exact market gap. At least at first. The ultimate goal was to get Xenix onto microcomputers. In theory, a PDP-11 distro would go a long way towards that goal. By 1975, DEC had released the LSI-11, a PDP-11 on a single microchip.
Starting point is 00:28:31 In the coming years, a spate of desktop-sized PDP-11s would hit the market. But that's kind of cheating. What I actually mean is that PDP-11's architecture was pretty similar to many microcomputers. Let me go back to that Bell Labs paper on the experience of porting Unix to new computers. When Unix was initially ported to the 8086, there were only a handful of changes needed. Discounting the whole custom MMU situation for a second. The PDP-11 and 8086 are architecturally somewhat similar. They treat memory in similar ways. They both have a similarly sized set of registers. They're even both 16-bit
Starting point is 00:29:15 computers. The port had to account for a small number of differences in how the 8086 treated byte ordering, some differences in how the microprocessor handled its stack, and an adjustment for system calls had to be made, but that was about it. The last one is kind of neat to me, so there's a sidebar. Apparently the PDP-11 version of Unix, at the time, used self-modifying code to handle passing arguments during a system call. So when you actually called out to Unix for help with something, there was a little memory trick used to pass around data. The 8086 port, on the other hand, passed data to system calls inside registers, or a pointer inside a register. Specifically,
Starting point is 00:29:59 and technically speaking, the paper says that the AX register was used to specify the operation and DX for the argument. That's an awful lot like how calling conventions work on the PC. When you make the equivalent of a system call under DOS or to the PC BIOS, you usually load up some call number in an AX register and an argument in another register. That other register is often DX because AX is used and CX and BX are technically used for certain other operations. You then trigger an interrupt which actually handles the system call and reads those registers and does what it needs to do. I'd be willing to bet that 86 Unix was probably using just that same calling convention.
Starting point is 00:30:49 Anyway, that's some deep lore stuff that I think is just fascinating to think about. But, um, let's get back to the point at hand. Working with a PDP-11 version of Unix would have set up Microsoft to make the jump to some other 16-bit machine. According to the Byte article I quoted at the top, Microsoft planned to port Xenix from the PDP-11 to, quote, DEC LSI 11-23, Zilog's Z8001 and Z8002, Intel's 8086 and 286, and Motorola's MC68000, end quote. Some of those are a little more surprising than others. As I mentioned earlier, the LSI11 is just a PDP11 on a chip, so that already has all the parts needed. But the 8086 and 68000 are notable for not having MMUs. The Zilog chips are notable for being almost completely
Starting point is 00:31:42 unknown in the modern day. The first microprocessor port of Zenix would actually be to one of those Zilog chips, which makes matters, well, it makes matters weird. The story goes that in 1980, the Zenix team at Microsoft started working on a port to the Z8000 series. The specific platform they were using was the Codata CTS200. What exactly was that computer? Well, it was a computer. I know it was a multibus system, basically a more modern rendition of the old S100 bus-based computers. It would have been built up from these different expansion cards. A card would have everything from memory up to the processor itself. From what I understand, Multibus was kind of a
Starting point is 00:32:31 technology. A lot of very early 16-bit machines, pre-PC, used this kind of design. The exciting part is the Z8000 series had an MMU. It was a chip called the Z8010. It's a lovely name for a lovely piece of silicon, I know. Adding a Z8010 to a system would, in theory, give it all the hardware oomph that Unix needed. But in the absence of a nice datasheet, there's a problem. The MMU was released to market after the first Z8000 processor. Specifically, it was at least 9 months late. While the Z8001 and Z8002, the first of Zilog's 16-bit chips, hit production in 1979, the MMU would have only been out in the latter part of 1980. That means that the timeline here is pretty tight. But whatever the case, I think the mysterious CTS-200 did have some kind of MMU. My evidence here is from a deep dive
Starting point is 00:33:35 into some old Usenet posts and a couple of brochures. It turns out, Codata was one of those companies that was founded by defectors. Specifically, in 1980, a number of engineers left another company called Onyx to make their own computers. Onyx also made multibus microprocessors that also used Zilog chips. Supposedly, the CTS-200 was mostly based off one of Onyx's machines. This matters because Onyx was a Unix shop. Their name even ends in an X. Their early 1980 offering, the C8002,
Starting point is 00:34:14 used a Z8002 processor and, drumroll please, a custom MMU circuit. They even brag about it in their sales brochures. So I'd wager the CTS-200, whatever it may be, had some type of MMU. That means that Microsoft's first port would still be going onto a system with the proper features. As you may be able to tell, this is a bit of a mystery period for Xenix. I've been going off a few different timelines I found on some old Web 1.0-style sites. These source to blogs, Usenet posts, magazine articles, and press releases. One of the theories I've seen crop up a number of times is that the Zilog ports of Xenix may have been vaporware.
Starting point is 00:35:06 dialogue ports of Xenix may have been vaporware. I don't know if I've ever actually discussed vaporware by name on the show, so I'm going to give a quick description before I continue, so we're on the same page. In short, vaporware is software that's been announced but does not actually exist. This can come in a number of flavors. In some cases, the manufacturer never plans to actually make that software. They just want to announce it for some 4D chess reasons. In other cases, the software's coming, it's just delayed. Or, alternatively, the project dies on the vine well before its launch date. In any event, the symptoms are pretty clear. Vaporware has press about it, but no software and no actual
Starting point is 00:35:46 sales literature. That's exactly what I've been seeing with the Z8000 ports of Xenix. There's a lot of press from Microsoft and OEMs. The Z8000 port was supposed to ship in 1980, then the date slips to 1981, then Zilog seems to disappear from any press around Xenix. So there's the symptom. But what about extant software? Well, I haven't had any luck. I've gone through all my usual tricks, looked in all the usual and unusual places, but I draw up nothing. I haven't found any disk images, no reviews, no nothing, not even a photo of a floppy disk or a user manual. All we have are articles saying that Xenix will be sold on various Z8000 machines, and entries in trade magazines and books saying which companies
Starting point is 00:36:38 will offer Z8000 machines with Xenix. This is one of the many cases where I'd love to be proven wrong. I will put the call out officially: if you have a disk that has a Z8000 port of Xenix, if you have any of the software manuals, if you even have a receipt, some kind of physical evidence that Xenix was actually running and sold for a Z8000 machine, please get in touch. I will quite literally make an update episode just to set the record straight. But without any evidence, this leaves me at a bit of a conundrum. It seems that Z8000 Xenix may be vaporware. I think we can guess why that would be the case, right? 1981 is the year of the PC. That's when Microsoft clinches their huge deal with IBM, DOS ships everywhere,
Starting point is 00:37:34 and Microsoft starts becoming an x86 shop. But does that happen overnight? Does Microsoft just instantly drop everything just to write DOS software, just to work for IBM? Maybe, but also maybe not. I don't know. On one hand, some histories explain the PC as a success, but not a runaway success until the clone market really heats up in 83 or so. Others just say the PC was an instant smash hit straight to the stratosphere, nothing could stop it. The question comes down to how Microsoft viewed the whole debacle. Did Microsoft's top brass see the PC as an instant hit, or were they more careful? Is Z8000 Xenix's transparent appearance any indication of their view?
Starting point is 00:38:28 I can further confuse the narrative here, and I will continue to do so this entire episode. Remember, the key question for understanding this whole vaporware situation is to figure out at what point Microsoft went all in on the x86. So we must reckon with another chip, the Motorola 68000. When Xenix was first announced, Microsoft listed the 68000 as a future-supported platform. So did that chip get the same treatment as Zilog's chip once 1981 rolled around? The answer is no. Not exactly, at least. There are actual honest-to-goodness releases of Xenix for 68000 machines, the most well-known perhaps being Xenix for the Apple Lisa. I know, an Apple machine running Unix. Who could imagine such a travesty? How we get to that point is a little strange. It has less to do with Microsoft themselves, and more to do with the complex web that's always circled around Unix. By 1980,
Starting point is 00:39:32 there were a number of companies trying to bring a 68000-powered computer to the market. This was a powerful chip for the era, and would end up in a pile of very important systems, from the original Macintosh up to Sun's machines. But early on, it seems it was tricky for companies to stick the landing. There is some strange press in 1981 about MicroDaSys and another company called CM Technologies, which both announced upcoming 68000 machines. They also both announced they'd be running Xenix. But MicroDaSys folds, and by 1982, there are articles claiming that CM Technologies was going to launch with BSD. Microsoft would eventually get Xenix on the 68000,
Starting point is 00:40:20 but it was accomplished using outside help. In 1982, Microsoft contracted with two companies. The first was SCO, the Santa Cruz Operation. The second was HRC, Human Computing Resources Corporation. This trio would jointly develop and port Xenix to new platforms. Both HRC and SCO would release versions of Xenix. Most well-known is probably SCO's port for the Apple Lisa, itself a 68000 machine. It's unclear exactly how much work Microsoft themselves put into that port, especially since the label on the box said SCO. It's also important to note that that software shipped. We have disk images. We can run it today. There's also the matter of Tandy Radio Shack and
Starting point is 00:41:06 the Tandy 16. This was another 68,000 machine that did eventually ship with Xenix. The software is also preserved, no vapor involved. From what I've read, I think the TRS Xenix was ported in-house by Microsoft, but I could be wrong. It's also possible that the Tandyenix was ported in-house by Microsoft, but I could be wrong. It's also possible that the Tandy version was ported by SCO or HRC or even Tandy themselves. The point is, the story here is very muddy. By 1983, Microsoft was leaning on those outside companies, so-called second sources, to port Xenix to new platforms. That may have been done to free up Microsoft. It may have been done to keep Microsoft away from non-x86 machines. I'm just not entirely
Starting point is 00:41:52 sure. But this leaves us with only one platform unexplored, and I think it's the cool one. Somehow, Xenix ends up running on the IBM PC. This is an exciting platform for one big reason. The MMU. The PC did not have any kind of memory management hardware. No tricks, no gimmicks, nothing. Yet, Xenix would be released for the PC in 1984.
Starting point is 00:42:28 I think it's high time I introduce a fun document to the mix. The Microsoft Xenix Operating System OEM Directory from 1983. This is kind of a hard document to find these days, but it is worth it. It's a fabulous read. It's a directory of everyone that had a Xenix license from Microsoft. These licenses are broken down by platform, PDP-11, Z8000, 8086, and 68000. And then they each give a brief rundown of what each OEM is doing with their license. From this, we can see that in 83, there were versions of Xenix running on Intel hardware. However, there's a big caveat. It always comes back to the MMU. One company named Altos, for instance,
Starting point is 00:43:14 had a license for 8086 Zenix. Their machines all had proprietary MMU circuits. The same is true for other license holders like Intel, Microbar Systems, and even Nabu from up in Canada. There are some machines that don't mention MMUs in the directory, but these may have also been vaporware. There's a mysterious unnamed Zintec computer, which I can find exactly zero information on anywhere. There's the TRW-IWS and AWS machines, which, once again, may or may not have existed. And then we get to the Seattle Computer Products Gazelle 2. That machine may have had no MMU, but I'm not entirely sure. I've been staring at photos of old S100 cards trying to see what lines up with the address bus on the processor, but I decided that was a bit too much
Starting point is 00:44:12 even for me, and I'm going to leave SCP's machine as a question mark. What we're left with is this. In 1983, Xenix was running on the 8086, but it still needed special hardware. That means part of the job of porting Unix to the PC was already done, but we weren't all the way there. The jump was to somehow get Unix running without all the needed hardware support. This would happen in 1984. And this is where we reach yet another twist. You may expect that Microsoft would have ported Xenix to the PC themselves. After all, Microsoft was turning into the PC shop that we know today.
Starting point is 00:44:57 But here's the kicker. The PC port was actually handled by SCO. That's right. When it came time to get Xenix on the premier platform, Microsoft passed the task off to an outside company. SCO's PC Xenix launched in either late 83 or early 84. This actually came out just before SCO's PC-AT version of Xenix. And this is really worth explaining, since it makes sourcing kind of confusing. If you're reading these period pieces about Xenix, you'll run into two versions, mapped and unmapped. Nice and descriptive, right? This basically boils down to with or without an
Starting point is 00:45:43 MMU. The reason for the wording is pretty simple. One of the things that Unix uses the MMU for is mapping logical addresses to physical addresses. We cover that. We know that. We're all mapping geniuses. This means when we're talking about PC Xenics on the original IBM PC, that was unmapped Xenics. There's no MMU, so no tricks can be pulled. No memory can be mapped.
Starting point is 00:46:09 But in 84, we get the PC-AT, which rocked a newer Intel 80286 processor. That computer got a mapped version of Xenix. The 286 actually hit the market back in 1982, so we have this usual delay before it ships inside production machines. What's cool about this processor is it adds a feature called protected mode. That's a mode that should ring a few bells and sound pretty exciting. The chip stays 16-bit, but protected mode expands its address bus to 24 bits and adds features for memory protection and mapping. In other words, you can flip a switch and turn on an MMU. I need to cover the 286 on its own sometime, but until then, let me give you a taste of the weirdness. I said the protected mode is like a switch. I mean that very literally. When you boot up a 286,
Starting point is 00:47:07 it's in so-called real mode. In that mode, it functions almost exactly like an 8086. It's 16-bit, has a smaller memory bus, and no protection or MMU features. That provides backwards compatibility with 8086 software. It also means you need special software to unlock the power of the 286. All x86 processors still work like this today. PC-AT Xenix, 286 Xenix, didn't run in real mode. It ran in protected mode. That's why SEO sometimes just calls it mapped Xenics. It used memory mapping, protection, the whole nine yards, everything. The timeline here matters a lot because, well, PC Xenics, in all its unmapped glory, may have been something of a second thought.
Starting point is 00:48:00 Part of this is backed up by some possible hacky software, but we'll get to that. Let me have my buildup here first. One theory that I've seen in a few spots was that Microsoft was butting heads with IBM, which leads to all the weirdness around Unix on x86 processors. Here I'm going to be cribbing pretty heavily from softpanorama.org, one of the handful of Web 1.0 sites that have been a goldmine for Xenix information. I'm combining that with some sources listed there and some of my other experience from reading about Xenix and Microsoft in this era. Part of this argument
Starting point is 00:48:38 comes from a 2002 article in The Register. The title of that article is Bill's Vision for the Future of the PC. It talks about the continued belief in a Unix future. Crucially, it points out a rumor. The author pulls this quote from an unnamed quote-unquote grunt at Microsoft. I think Gates first tried to sell Xenix to IBM, who, afraid of what post-breakup AT&T could do to their markets, wanted nothing to do with it. The author then adds, This seems plausible, as Gates would surely try to sell something he already had, rather than something he was going to have to grab quick.
Starting point is 00:49:19 Nice footwork, though, Bill. End quote. I did say I'm bad with shiny objects, right? And all the excitement I forgot part of the larger context here. Microsoft already had a license for Unix by the time IBM came looking for an operating system. The traditional story is that men in blue suits came to talk to Bill Gates. They asked for an operating system for a new 8086-based computer. Gates said sure, all the while knowing they didn't have any software up to the task within Microsoft. Once the blue suits left, Gates ran down and licensed 86 DOS from Seattle Computing Products.
Starting point is 00:50:00 That operating system, with a few changes, becomes MS-DOS, which ships on the IBM PC as PC-DOS in 1981. This is usually told as a way to show off just how much Microsoft needed that IBM contract, how it was their big break, and how scrappy Gates was willing to get. But the Unix aspect, well, that ruins that story. In that meeting, Gates would have known that Xenix was in the tank. They would have had the source code for Unix in their very office, maybe even in the next room over. Maybe Gates agreed to supply an operating system because he knew he had Xenix. He was ready to roll.
Starting point is 00:50:43 Maybe IBM, after signing the contract, said no to Xenix at a later meeting. That would explain why Microsoft went shopping for whatever 8086 operating system they could lay hands on. Instead of Bill Gates being this scrappy yes-man, he might have actually gone to buy 86 DOS in a sheer panic. As late as the 1990s, Gates was still on the Xenix bandwagon, which makes this whole Xenix twist to the whole DOS saga feel more believable. In 1990, he still believed in a Xenix future, but things just weren't working out. When the PC-AT comes out, there was this great opportunity to switch over to Xenix. The AT could have shipped with Xenix as a stock option, which would have
Starting point is 00:51:32 unlocked the full power of a 286. Instead, it shipped with DOS, which still ran in 16-bit real mode. It was still single-tasking. It wasn't using the full power of this new machine. In this context, the whole SEO thing makes, I think, a little more sense to me. Microsoft was continually trying to get Zenix to be the future, but there were road bumps. Their largest contract, IBM, wasn't keen on the whole Unix thing, so Microsoft kept some aspects of Zenix at arm's length. I've seen it speculated that SCO was given the 8086 port so Microsoft could focus on new software for the upcoming PC-AT. If the earlier story is to be believed, then maybe SEO was also past the 286 version of Zenix after a round of failed Microsoft-IBM negotiations. Whatever the case, SEO ends up developing all the x86 versions of Zenix.
Starting point is 00:52:37 The 286 and later versions get fully mapped ports of Unix, so that's normal. You're just running normal Unix. But for the PC, the machine about to be replaced, we get unmapped Xenix. So how does it work? That's the crux of everything, right? Luckily, I have an answer. At least, in part. In 1984, at the UniForum conference, there was a presentation given by John Hare and Dean Thomas of SCO. The title? Porting Xenix to the Unmapped 8086. That, my friends, is the exact document we need, and I don't think it actually exists anymore. So here's the skinny. When you speak at a conference, you sometimes are given the opportunity to publish
Starting point is 00:53:25 what's called a proceedings paper, like proceedings of the ACM, that kind of journal paper. From what I gather, Uniform didn't really have a directly attached proceedings journal. Rather, we get some extended abstracts. Supposedly, there was a full paper that went along with this talk, which was published in ComUnixations. It's a great, fun name for a journal. I can't find anything about that. If the paper still exists, it's not archived. I can only find a few issues of ComUnixations that are barely archival grade online. They're like phone camera photos of a couple pages. What we do have, the extended abstract, gives us just enough info to work with.
Starting point is 00:54:13 In short, some compromises were made which were needed to get Unix running. The first has to do with the 8086's memory models. This is a very super technical subject, so as always, I'm going to try and simplify this as much as I can. Under Unix, an executable is traditionally broken into two segments, text and data. These are confusingly named, as is tradition. The text segment is where your executable code lives. Data is where things like variables and constants live. To your program, this is all one chunk of memory, but we have to consider the MMU here.
Starting point is 00:54:52 In practice, text and data could be kept anywhere in RAM. This is all for the sake of flexibility. It lets Unix squeeze more programs into more memory and be more efficient. It also lets you play some cool tricks. If you have a large text segment and a small data segment, it may be more efficient to stick your data segment in a slim slice of free memory, shift things around, and then open up a larger space for your text segment somewhere else. The 8086 complicates matters because its address space is segmented. The 8086 has 16-bit wide registers. It can only operate on 16-bit numbers.
Starting point is 00:55:30 Yet it has a 20-bit wide address bus. Each address is a 20-bit number. That's too big for a single register, so in order to address all of RAM, it has to use two registers, a segment register and a base register. Physical addresses are calculated by shifting the segment register and adding it to the base register. The result is that you can technically break memory up into segments. However, each
Starting point is 00:55:57 segment can be at most 64 kilobytes. They can also technically overlap. It's not a very reasonable addressing model. So here's the first trick that made PC Xenix work. Each program simply assumes it's being loaded into the bottom of memory. Address 0 is the start. Xenix loads up that program and then sets the segment register to point to a free segment of RAM. That way, the program tries to access memory, the segment is applied, and it winds up pointing to the right place.
Starting point is 00:56:30 So a program may think it's located at 0, 0, 0, when really, Xenix put it at something like 2, 3, 4, 5. That's a trick that has some real legs to it, but it falls apart under some very basic scenarios. This imposes a lot of restrictions on what the program can do. Your code kind of just has to play nice for this to work. If your program uses more than 64 kilobytes, you need multiple segments. That forces you into this alternate memory model, which gets more complex for Zenix to handle. So small programmers only. You also have to make sure the program never alters the segment register.
Starting point is 00:57:11 Segments can be calculated and to make this trick work, they have to be calculated using a segment register. But that's just a register. You can write code that changes it. If you do that, your program would kill Zenix. There are, of course, more issues, so check this out. Unix is able to do this really cool trick called swapping. It's one of the coolest tricks around. Basically, when Unix runs low on memory, it will take a program that's not actively running and save that program to disk.
Starting point is 00:57:50 That includes code and data the program's working on and the current processor state. Everything. Then, when it's time for the program to run again, Unix loads it from disk back into memory. Crucially, Unix doesn't have to load that program into the same space in memory. With an MMU, this is a dream. This is really easy. Unix sets up a mapping table and the program will never be able to tell the difference. But without an MMU, you have a bad time. Programs can reference memory in two ways,
Starting point is 00:58:22 relative and absolute. A relative address takes your current location in memory into account. It's like saying you want to get a number that's five bytes up from you. Absolute is when you use an actual numeric address, as in give me the number at address 1234. With an MMU, you can use whatever address you want. The computer maps everything around anyway. But what if you don't have an MMU? maps everything around anyway. But what if you don't have an MMU? To quote from the Unmapped Xenix paper, another problem was swapping. If, say, absolute addresses are on the stack, and the process gets swapped out and then swapped back in at a different address, it's in trouble. Either some means must be given to detect and fix these at swap-in time, End quote. Note the X are actually written in the paper.
Starting point is 00:59:29 If that's too obtuse, let me unwind this a little. Swapping back in could lead to all kinds of issues if you have any absolute addresses. Notably, during a function call, the 8086 puts your absolute return address on the stack. That's an absolute address. That sucks. SEO got around this by preventing swapping. In the event that a swap does occur, PC Xenics has to put the program back in the same place in memory. It has a so-called fix-up table for trying to fix some addresses, but the main solution is just put it back in the same place.
Starting point is 01:00:06 addresses, but the main solution is just put it back in the same place. Put another way, PC Xenics can't swap. If you run low on memory, you kind of die. So that one trick's just dead. Swapping here is so restrictive that it doesn't really help you. The other trick is memory protection, and once again, let me pull in a quote here. A malicious user can still trash the system, though. The kernel checks certain data structures for changes on each system call, and if they have changed, assumes the worst, issues a segmentation violation, and kills the process. End quote. In other words, there is no memory protection. All programs just have to play nice together, have no bugs, and generally be good little programs. What struck me about this is the mention of memory checks.
Starting point is 01:00:58 This is actually something I have very direct experience with. Back when I was really into x86 kernel development, I wrote a lot of code in real mode, only the purest 16-bit assembly language for me. Protection was a constant issue because, simply put, programs rarely act as expected. Under Unix, you don't really have to worry about that, but on a PC, without an MMU, without protection, you get this whole new class of bugs that you have to watch out for. At a certain point, I was having a lot of issues with memory overlap, so I ended up writing this debugging routine, which would occasionally check certain delicate parts of memory and send out an error if things had been overwritten.
Starting point is 01:01:42 It sounds like SEO was using a very similar solution, but in a commercial product. It's honestly kind of a hack. It just isn't very good. So the final determination here is that PC Xenix is kind of a trick in itself. It's not really Unix as such. It's missing the key features that make Unix stable and powerful. Now, as to getting Xenix running, that's still a bit of an issue. PCJS.com actually has a working install you can play with in your browser, but it's a very basic install. The most complex thing it has is Vi, the standard text editor. I've had a little luck getting Xenix running using PCM, another emulator on Windows and Linux, but it's really fickle software.
Starting point is 01:02:31 You have to have a 10 megabyte hard disk image or it breaks. I was fiddling around with this because I was trying to see just how flaky Xenix is. And it's primitive. You don't have Bash. You have SH. So you don't have history or tab complete. You're missing a whole lot of features because this is an older version of Unix. But my big question, which I think I'm going to hack around a little bit more, is how multitasking is handled. You can't use the hotkeys to multitask. Normally in Unix, you can use Ctrl-Z or some other hotkey combination to suspend a process, and then you can bring it back to the foreground. You can use that to do primitive multitasking. I haven't been able to do that with Zenix, or at least with the versions I've been running. Something I'm going to try to do in the coming
Starting point is 01:03:20 weeks and days is get a compiler working on an install of Xenix because then I should be able to fork. I should be able to actually try writing some multitasking programs. I may report back, especially if some more details come about the Zilog versions of Xenix. But until then, it really seems to me like PC Xenics is kind of cursed. Alright, we've reached the end of our dive into Xenics. What I didn't expect was to find so many old sites dedicated to the operating system, its history, and conspiracies around it. And well, I guess I could probably do a whole episode on the weird corkboard conspiracies around Xenix. I honestly was taken in by a lot of these unsubstantiated theories floating around about this Unix operating system. That was something
Starting point is 01:04:20 that I did not expect at all for this topic. Xenix starts off as a grand plan to port Unix to all kinds of computers. By the time we hit 1984, part of that plan had come to pass. Xenix is running on PDPs, x86s, 68000 machines, and maybe a Z8000 somewhere. Although, if it ran on 16-bit Zilog hardware, those disks have not been preserved. The timeline makes the whole story all the more interesting. It also answers a question that's been in my head for years. Back when I first learned to use Microsoft DOS, I remember reading that it had certain Unix-like features.
Starting point is 01:04:59 I remember teenage me just kind of subliming that information into my skull. Yeah, Unix! I like Unix! It's taken me a while to understand what that actually meant. DOS, at least versions after 1.0, has features like pipes, redirects, and even a hierarchical file system. As for the why, well, I guess I always just assumed it was some vague appreciation for Unix on the part of Microsoft. But no, that's not really it. Unix-like features made their way into DOS because of Zenix.
Starting point is 01:05:33 Microsoft, for years, was trying to be a Unix shop. Those years lined up perfectly with the early period of Microsoft DOS. It's one of those facts that, with some context, just really makes sense. Anyway, that's Xenix. And once again, remember my challenge of the episode. If you or a loved one has concrete proof of the existence of Z8000 Xenix, please let me know. I'm talking disks, let me know. I'm talking disks, price sheets, receipts, actual physical evidence that a copy of Xenix for a Z8000 machine sold. If someone can show me that it's not vaporware, I 100% will issue an update and an apology to Zilog. But until then, thanks for listening to Advent of Computing. I'll be back in two weeks' time with another piece of computing's past.
Starting point is 01:06:26 And hey, if you like the show, there are a few ways you can support it. If you know someone else who'd be interested in the history of computing, please take a minute to share the show with them. You can also rate and review the podcast on Apple Podcasts and Spotify. You can support the show directly by buying Advent of Computing merch or signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction of the show, and bonus content. You can find links to everything on my website, adventofcomputing.com. If you have any comments
Starting point is 01:06:55 or suggestions for a future episode, then please get in touch. And as always, have a great rest of your day.
