Advent of Computing - Episode 118 - Viral Dark Ages
Episode Date: October 15, 2023

It's finally Spook Month here on Advent of Computing! To kick things off I'm tackling a bit of a mystery. Between 1972 and 1982 there is only one well-documented virus. This period is bookended with... plenty of sources and, yes, even viruses. But this decade long span of time has almost nothing! Was this era truly safe from the grips of malicious code? Or is there a secret history lurking just beneath the surface?

Selected Sources:
https://dl.acm.org/doi/pdf/10.1145/358453.358455 - Worms at Xerox PARC!
https://archive.org/details/crimebycomputer0000park - Crime by Computer
https://archive.org/details/dr_dobbs_journal_vol_05_201803/page/n89/mode/2up - Programming Pastimes and Pleasures
Transcript
I'm sure we've all heard of the Dark Ages.
It's this period in European history directly after the fall of the Western Roman Empire.
Now, I hope you already know my gripes about historical periods.
Lines tend to get so blurry as to make distinction, well, kind of meaningless.
This is something that's come up on the show a lot when talking about periods in computer history.
It just doesn't make sense with the technology.
It's something that's slowly evolving, not jumping in fits and starts.
But disregard that.
Let's just go with the assumption.
There was some period in Europe that starts in, say, 476 AD, lasts about 900 years, and historians call it the Dark Ages. The actual meaning of the term
has shifted a lot over the years. It was first used to describe a period of decline after the
fall of empire. As Rome fell back from the frontiers, its civilizing effect was lost,
leading to a total loss of culture and knowledge. People were eating sticks and mud for almost a millennium.
This view of the period is definitely evocative.
It must have appealed to a lot of folk for a long time.
Either this period was some type of backwater hellscape,
or it was a more romantic, simpler time.
It was an era of destruction and regression,
or an era where myth and fantasy
had their last gasp. Perhaps, then, it's little wonder that so many legends and stories come out
of this period. Depending on where you put the bounds around the era, everything from Beowulf
to Faust comes out of the Dark Ages. Even later legends like the Pied Piper of Hamelin are set firmly
in the Dark Ages. The actual reality of the Dark Ages is something that's been debated for, well,
centuries at this point. In more recent years, it's become academically accepted that this wasn't
really a period of total regression. The period itself has fuzzy lines, for one. Rome didn't just disappear
and take all its trappings and all of its citizens with it. There is evidence of Roman structures
being repaired and used for decades, if not centuries, into the so-called Dark Ages. You know,
when folk were eating rocks and building houses out of sticks. That really doesn't jibe with
stonemasonry skills. Culture survived,
as did knowledge. It's not the total backwater that some historians believed it to be.
With this reappraisal, the term itself has taken on, I think, a more interesting meaning.
Currently, the Dark Ages are characterized as a period with scarce surviving sources. This dark age isn't so much dark as it is quiet. We know what we do about this period because of precious finds, delicate manuscripts, and rumors. It's not enough to flesh out all the details, but it's enough to see the shape of
things. It's enough to see that more is going on than we first assumed. In a concerning twist, a quiet age isn't really a unique phenomenon to this one swath of time in one part of Europe.
It's actually a pretty common occurrence.
For one reason or another, sources aren't preserved, or ideas are never written down in a sensical way.
Items are lost, stolen, or even destroyed. We're forced to
rely on precious scraps of information and rumors to build a picture. Now, dear listener, would it
surprise you to know that one of these periods took place smack in the middle of the 1970s?
Ooh, welcome to a very spooky episode of Advent of Computing.
I'm your host, Sean Haas, and this is the first episode of Spook Month 2023.
This is episode 118, Viral Dark Ages. Now, you might notice that this is a little bit late.
I'm actually recording this a week ahead of my usual release schedule. I apologize for my tardiness, but this is with good reason. You see, this is a classic case of your boy Sean gets wrecked by scant sources and piles of rumors.
I got stuck in a bit of a research tar pit while constructing this episode,
so I had to push out the release in order to, I think, better address the story.
Personally, I think that was worth it.
I'm a lot happier with this product, and I hope you agree.
Now, long-time listeners will know the drill for spook month here on Advent of Computing.
Halloween is, of course, my favorite time of the year.
So to celebrate, I like to throw together these mostly ghostly episodes here on the podcast. When it comes to computing, the frights are less, well, frightening, and usually more frustrating.
In the past, we've been spooked by viruses, creepy games, giant, possibly killer robots,
email spam, and the creme de la creme. Debugging. This year, for the fourth annual spook month, we're starting out with my
traditional offering, viruses, or technically malicious software in general. So far, we've
gone from the first virus, called Creeper, all the way up to early Microsoft DOS viruses.
Along the way, we've seen everything from pulp sci-fi to academic treatises to really
mean-spirited jokes. But there's something that's really bugged me. I will admit that I've been
sticking to a pretty traditional timeline in these virus episodes. Creeper is unleashed on the ARPANET
in 1971. It's soon followed in 1972 by Reaper, a program that surfs the proto-net to find and
destroy the Creeper. Then, well, the timeline just kind of jumps up to 1982 and Elk Cloner,
the first virus to appear on home computers. We get a decade-long gap that's filled with
the occasional paper, short story, and one virus, kind of. So what gives?
Did spreadable software pop up in the early 70s then fall dormant for the next 10 years?
Personally, I don't think that's the case. For my evidence, I'd like to point to the traditional
timeline itself. Elk Cloner, the classic virus for the Apple II,
the program everyone wants, was created by a single high school student named Rich Skrenta.
It was developed as an elaborate prank totally disconnected from the larger viral context.
This kind of attitude is common among hackers and computer nerds of all kinds.
In this, Skrenta wasn't some oddity.
He was following a well-established pattern of mayhem and merrymaking.
So there should be a larger swath of destruction here.
There is also one well-known virus that falls right in the middle of this dark age.
That virus, called Animal, is another show alumnus. It masqueraded as a simple guessing game that, when executed, spread itself around on disk drives. Once again, it was
built more for merrymaking than real evil. We know there were at least a few folk thinking
about viruses during this period. So then, where are all the viruses? Where have all the good worms gone?
To make this even more vexing, once we hit the 1980s, there's an explosion of viruses.
By 86, we have dozens to pick from. So there has to be something in this era.
In this episode, I'm going to really try to get to the bottom of this.
More than anything, I just want to flesh
out the timeline a little. My gut says there has to be something going on between 1972 and 1982.
What kind of spreadable code can we find in this period? Will some kind of pattern emerge?
Or are we truly looking at a dark age for viruses and worms? Before we get started,
I want to put a quick plug in here. If you haven't heard already, I'm going to be at the Intelligence Speech Conference
on November 4th of this year. That's 2023 if you're listening in the future. I'm going to
be speaking in one of the afternoon sessions. The theme this year is contingency, or backup plans,
which works out great for me because one of my favorite stories
having to do with intel is all about how a contingency ended up becoming the actual plan.
I've also been given a quick sizzle reel. So let me, um, let me find my tape here. Okay, here it is.
My name is Sebastian Major. Sebastian Major is great.
It's a continental podcast.
Partial histories.
The history of Persia.
QST of a child podcast.
At Intelligent Speech.
Go to intelligentspeechonline.com to get your tickets.
November 4th.
It'll be a doozy.
I won't mince words. It'll be a doozy. You can get 10% off with my promo code AOC, and you can get tickets at intelligentspeechonline.com.
Now, with that out of the way, let's get into the viral dark ages.
I have to start off with, as is often the case, a little bit of egg on my face. I tend to play
fast and loose with the law of terms because, when it comes to technical terms, the actual
definitions don't usually matter. The real truth of the matter is often an edge case.
And in the past, I will admit, I've been a little bit fast and loose with the definition of
a virus. It's mainly been done to save brain cells on my part, and also just due to the
weirdness of the historical context.
Strictly speaking, a virus is a program that will infect and hide inside another program.
The name comes from an analogy to a human virus that infects the cells inside our body.
When it comes to the digital type of virus, the host isn't a cell, but someone else's program.
Historically, the term is a little more
loose. It's codified in the middle of the 1980s, but just know that it's in use earlier on and in
a lot of different contexts. The first use of the term virus comes from a 1970 short story called
The Scarred Man by Gregory Benford. In that text, a program called VIRUS spreads from mainframe to mainframe over phone lines. The VIRUS, in all caps, can only be defeated by a program called VACCINE. Again, all caps. It's computers of the era; everything has to be in all caps. If you want more detail, the first virus episode I did actually covers the
Scarred Man in, well, more detail. Now, already, there is something weird that I want to point out.
The traditional story of the virus goes like this. Benford writes The Scarred Man in 1970.
The story is then published and reaches a very limited audience. Later, the term virus is re-coined a number of times, both in other works of fiction and
eventually by bona fide researchers in the 1980s.
The Scarred Man is then rediscovered decades later and now sits at the very beginning of
the viral timeline.
But, dear listener, this isn't entirely the case. People
did read The Scarred Man, it just may not have been the right kind of people. In 1973, there
was this spate of newspaper articles that discussed the new idea of a quote, computer virus. They all
start roughly the same way, so I'm guessing they originated from some kind of
wire service. I want to share an excerpt from one such article in the Morning News from Wilmington,
Delaware. This was published on June 20th, 1973. Quote,
The computer crime wave. There is a science fiction fantasy making the rounds these days about a computer disease.
So far it hasn't come true, maybe it never will,
but it's the sort of story that keeps computer operators worried.
It goes like this.
The disease is a program called VIRUS, in all caps, written by a disgruntled programmer.
Virus is fed into a computer with the ability to make telephone calls to other
computers. The program wills the computer to drop all of its other work and random dial the telephone
until it reaches another computer, pass the virus program onto it, and then go back to writing
checks or balancing books or whatever. Therefore, so the fantasy goes, virus skips across the
country from computer to computer.
If the epidemic becomes serious enough, the same programmer could devise another program called
vaccine and, for a fee, vaccinate against virus. End quote.
This is actually just an uncited plot summary of The Scarred Man. Like I mentioned, very similar articles show up in a number of newspapers in 1973. This particular Morning News article is interesting because it uses the framing
of this virus program to discuss early cybercrime. It gives a quick rundown of digitally-aided
embezzlement schemes. One even has to do with a programmer cooking company books by
changing around some select lines of code. The Scarred Man was being read, at least by someone,
and its ideas were getting out there, which leads me back around to something very interesting.
There are a number of rumors of computer viruses in this era. One actually comes from Benford himself. In the year 2000, The Scarred Man was reprinted in an anthology collection of Benford's short stories. That book made it around for reviews, which in turn prompted Benford to write a blog post about the story. Now, Benford wasn't just an author. He was also a computer dude. He worked at labs connected to
the ARPANET back in the day, which is where he became inspired to write The Scarred Man.
He mentions this rumor in that blog post.
Another piece to this puzzle comes from an issue
of Dr. Dobbs' journal. This was a magazine targeted at early microcomputer adopters.
During this era, home computers were still pretty new, so many early users were also familiar with
large computers. That's kind of how they got into computers as a hobby.
In 1979, the journal ran an article titled Programming Pastimes and Pleasures by Charles Wetherell. The article presents four fun programming challenges and some idle chatter
about each. Each of these challenges is presented as a famous program for an aspiring software developer to cut their teeth on.
The fourth task is to construct a virus using that very word, and it's followed by this specific idle chatter.
Quote,
I know of one rumored virus.
Back in the early days of computing, the 7094 operating system stored all object programs waiting to be punched onto a single tape.
The virus would read the last object file off the tape, modify it to include a copy of the virus, and then write the object back onto the tape.
When the object, which would be a single routine from some other program, was loaded and executed, the virus would propagate again.
End quote. This is a rumor, true, but it's a very, very specific rumor. It even comes with implementation details. You could write this virus.
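Just to make that concrete, here is a minimal sketch of the propagation loop Wetherell describes, written in modern Python rather than anything that ever ran on a 7094. The tape, the object decks, and every name below are invented for illustration; only the mechanism comes from the rumor.

```python
# Toy simulation of the rumored IBSYS object-tape virus.
# The "tape" is just a list of object decks waiting to be punched out,
# and "infection" means a copy of the virus rides along with a deck.
# Everything here is hypothetical -- an illustration of the mechanism
# Wetherell describes, not a reconstruction of any real code.

def infect_last_deck(tape):
    """What the virus does while its host deck runs: grab the last
    object file still waiting on the tape, splice a copy of itself
    into it, and write the deck back in place."""
    if tape:
        tape[-1]["infected"] = True

def run_deck(deck, tape):
    """Loading and executing an object deck. If the deck carries the
    virus, the payload fires first, then the real routine runs."""
    if deck["infected"]:
        infect_last_deck(tape)   # propagate before doing the real work
    print(f"running {deck['name']} (infected={deck['infected']})")

if __name__ == "__main__":
    # One infected deck on the tape, plus a stream of new jobs landing
    # behind it as other users submit their Fortran programs.
    tape = [{"name": "job0", "infected": True}]
    for i in range(1, 6):
        tape.append({"name": f"job{i}", "infected": False})
        run_deck(tape.pop(0), tape)
```

In this toy run, every deck that comes off the tape ends up carrying a copy, because each infected deck hands the infection to whatever job happens to be queued behind it.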
Wetherell describes one other virus in this article. It's a virus that hides itself inside a compiler. The virus
uses that place of privilege to insert security vulnerabilities into any software it compiles,
specifically a login program. This second virus, the compiler-based one, is of special note.
In one of my past viral episodes, I covered the paper
Reflections on Trusting Trust by Ken Thompson, himself of Unix fame. This paper describes a
virus that embeds itself in a compiler in order to sneak in security vulnerabilities to anything
it compiles, specifically to login programs. I think it's clear to see the similarities here.
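To show the shape of the trick that both Wetherell and Thompson describe, here is a toy sketch in Python. Thompson never published his actual code in the paper, so everything below is hypothetical: the "compiler" is just a function that tags source strings, and the backdoor is a comment line. The two moves, though, are the ones from the story: booby-trap the login program, and re-plant the trapdoor whenever the compiler compiles itself.

```python
# Toy model of the compiler trapdoor described by Wetherell and Thompson.
# "Compiling" here just means tagging a source string with extra lines,
# which is enough to show both tricks: backdoor the login program, and
# re-insert the trapdoor whenever the compiler is compiling itself.
# None of this is Thompson's real code; all the names are invented.

BACKDOOR = "# secret: also accept the master password"
TRAPDOOR = "# secret: the compiler re-inserts its own trapdoor"

def compile_source(source: str, program_name: str) -> str:
    """A booby-trapped 'compiler': output is the input plus a marker,
    plus whatever the trapdoor decides to smuggle in."""
    output = source + f"\n# compiled: {program_name}"
    if program_name == "login":
        output += "\n" + BACKDOOR    # the hole goes into the login program
    if program_name == "compiler":
        output += "\n" + TRAPDOOR    # the trapdoor survives recompilation
    return output

if __name__ == "__main__":
    clean_login = "def check_password(user, password): ..."
    clean_compiler = "def compile_source(source, program_name): ..."

    print(compile_source(clean_login, "login"))
    print()
    # Recompiling the compiler from perfectly clean source doesn't help:
    # the running, already-infected compiler puts the trapdoor right back.
    print(compile_source(clean_compiler, "compiler"))
```

The nasty part, and the reason both accounts treat this as a near-perfect attack, is that second branch: the clean source you audit never shows the backdoor, yet every binary the infected compiler produces carries it.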
Crucially, Thompson presents On Trusting Trust in 1984. That's well after the viral dark ages.
Here's where it gets spicy. If you go to the end of the paper, Thompson has the obligatory
acknowledgements section. It reads, quote,
I first read of the possibility of such a Trojan horse in an Air Force critique [4] of the security of an early implementation of Multics. I cannot find a more specific reference to this
document. I would appreciate it if anyone who can supply this reference would let me know. End quote. In classic fashion, there is an in-text citation here.
That citation, magical number four, reads,
Unknown Air Force document.
That's, uh, that's not really a lot to go off of.
Now, taken separately, these are all fun little rumors,
just something for the rumor mill.
Together, however, I think this points to something larger.
The idea of a virus was out in the public sphere as early as 1973, probably sooner.
The scarred man wasn't as isolated as some seem to believe.
We have secondhand accounts of infections.
These rumors come with a lot of detail, a truly concerning amount of detail on how to make a
virus. In the case of the compiler infection, we even have corroborating stories five years apart
from one another. So let's use these rumors as a jumping off point.
First, I want to work out what this 7094 thing is. So let's get some details straight to begin with.
The possible time range here is a little fiddly. The 7094 was a big, powerful IBM mainframe,
which first came into use in 1962. The machine was technically
a souped-up version of the earlier 7090 mainframe, itself first delivered in 1959. So by the time the
virus rumor hits print, the supposedly affected platform is about 20 years old. That said,
it's likely that at least some of these mainframes were
operating well into the 1970s. These machines were big investments for institutions. They were built
to last. For evidence, look no further than many fancy scientific papers. You can find researchers
still using 7090s and 94s into the 1970s. That puts us well within the viral dark
age. Right off the bat, this leaves us at a great disadvantage. We have a pretty wide and vague time
period to search through. The primary operating system that ran on the 7090 series of computers
was called IBSYS. This was a tape operating system, meaning its primary purpose in life was dealing
with tape drives. It's an exciting existence to say the least. IBSYS wasn't really interactive
as such. It more just sat there and handled incoming compiling jobs or executed programs
into the ground. Now, as near as I can tell, the system would have operated as
described by Wetherell. Fortran code would be loaded in a punch card hopper, which would then
be read into the machine. IBSYS would start churning using tape as storage. Specifically
for Fortran jobs, this tape would be used to store object code or object programs. This is mostly compiled
code that's just waiting around to be finalized and then splashed into a larger program. When
that object code was needed, IBSYS would then go find it on tape and dump it onto a fresh stack of
punch cards. But, and here's the kicker, there were a number of operating systems for the 7090 series of machines.
In addition to IBSYS, there were FMS, CTSS, the SHARE Operating System, plus probably some more esoteric IBM mainframe operating systems.
I think we can discount CTSS from research since that's actually a timesharing operating system.
It did use tapes
for storage, but it wasn't really made for outputting data to punch cards. The share
operating system, or SOS, has been discounted for purely logistical reasons. It's almost impossible
to search for anything around share because, well, search engines kind of suck. AI also kind of sucks. It
doesn't help searching for this. It's a matter of namespace collision. Share, SOS, IBM SOS,
IBM shares 1970, IBM shares 1960, or even just IBM share, however you shift things around,
you run into bad results. Another contender here, FMS, the Fortran monitor
system, may have been too simple to be susceptible to a virus. It's basically just a fancy Fortran
compiler interface. Plus, it doesn't actually show up in much literature, so I doubt that's
what we're looking for. The neat thing is that we haven't really narrowed down the field by
very much. The 7090 series was very popular and very long-lived. Everyone and their mother used
these machines, it seems. In any case, I even have a photo of my grandfather who was not a programmer
standing by a 7094 console, so maybe I should say everyone and their grandfather used
a 7094. IBSYS was just the default environment, so that also doesn't narrow things down. We don't
have any specific indicators here. Add in the fact that during this dark age, the word virus existed, but just barely.
Malicious code has shown up in some sci-fi stories and books.
We have maybe like 2.5 virus-like programs and a smattering of newspaper articles.
But that's it.
The phraseology was a little loose.
Now, after extensive searching, and I do mean extensive, I have yet to turn up any
corroborating evidence for this rumor. No published papers about some security vulnerability in IBSYS, no newspaper articles about some dastardly programmer, not even an oral history that mentions
a tape drive prank. If there's anything in the extant record, well,
it's pretty well hidden. This is definitely a case where, if you know something, then please
do say something. An IBSYS virus from this period would be actually really interesting to say the
least. It would blow a lot of things wide open. So, if you or a loved one was wrecked
by some tape drive tricks back in the 1970s or late 60s, please get in touch. Until someone can
habeas me some corpus, I'm gonna have to just assume that this virus is a pure rumor. Now,
at first, that may sound disappointing, but it doesn't have to be.
There's a question here that I like.
Why would someone repeat this kind of rumor?
Context really adds to the mystery.
Wetherell isn't just some random person writing into a computer magazine.
He was a professor at UC Davis in this period.
He had just published a book called Etudes for Programmers. It's this interesting set of case studies and exercises for budding
software developers. There's some level of assumed credibility there, well placed or not.
So why spread a rumor like this? This is actually one of my favorite aspects of the digital realm.
Computers seem to be predisposed to rumors and folklore.
My personal theory, as someone warped by the machines himself,
is that it has something to do with the dual nature of computers themselves.
A computer is this perfectly logical machine built up from mathematics' first principles.
It's a monument to order and reliability.
And yet, computers are so complicated as to be totally incomprehensible.
They can be inexplicably capricious, just kind of doing whatever they want for no apparent reason at all.
Poor and illogical flesh folk, well, we tend to get ground up between this chaos and its
underlying immovable, flawless logic.
The result is this almost mythical realm surrounding machines and the software they run.
The result is rumor and folklore, stories told over drinks
after work or speculation shared in glossy print. It's a way of coping or perhaps even making sense
of the vast digital realms. Rumors persist in part because of this environment. Besides,
a good rumor about a computer is really easy to believe.
There's still something remarkable about this specific rumor.
It's still a really early use of the term virus in print.
At least, with this specific meaning.
As a side note, I had no idea how much virology research was going on on mainframes in the 1970s. There are a whole lot of papers
about that. It's something that is also incomprehensible to me and I kind of want to
learn more about. Anyway, it's important to point this out because, once again, this breaks the
timeline a little bit. According to most histories, the term is recoined and properly defined by Fred Cohen in the early 1980s.
1984 or so.
But here, we have an article in 1979 that very casually uses the term.
I mean, Wetherell's article is presenting a virus as a fun project based off a famous program.
It's a literal exercise to the reader. That speaks to a level
of familiarity with the term and the software. The bottom line is that even if the 7090 virus
is only a rumor, the fact that it's being so casually discussed means something. This is the
printed tip of an iceberg. Imagine how many programmers were talking about viruses around the
office, or telling stories of infected files at a bar near their office. There's something here,
there just has to be. After many years have slipped by, the leader of the Greeks,
opposed by the fates and damaged by the war, built a horse of mountainous size through Pallas's divine art,
and wove planks of fir over its ribs. They pretended it's a votive offering, this rumor spreads. They secretly hide a picked body of men, chosen by lot, there in the dark body, filling the belly and the huge cavernous insides with armed warriors. Humans have always had a certain fascination with, and maybe a predisposition for, trickery.
Some of our earliest stories are about lies, tricks, and deception.
Computers have automated and really streamlined
so much of our lives, including the trickery bits. It's really little wonder that we have
modern Trojan horses. Perhaps it was even inevitable that these automated tricks would
adopt an ancient name. Digital Trojan horses, most often just shortened to Trojans, are a weird category of
malicious code. A Trojan isn't exactly a virus, but it's also not really not a virus. Basically,
a Trojan is a program that presents itself as harmless, but it includes some secret payload. That payload can be anything, really. Now,
as a programmer, I don't super like this as a category of dangerous software.
All code comes with a secret payload. They happen to be called bugs. I guess the distinction is that the little travelers inside a Trojan have been placed there intentionally. The malicious
case here is easy to understand. Trojans exploit trust in a very basic way. A computer user goes
out looking for new software. As an example, let's take an unfortunate programmer at the largest
software corporation I can think of, Widget Co. Ltd. LLC. Maybe they need a new compiler,
a very common tool that any programmer uses on a daily basis. It, you know, compiles your code.
It's how you're able to run your code. You have to have it. They find the program they want. It's
a file that purports to be the latest version of the C compiler. And that's perfect for the job.
So they set to work, writing and compiling whatever software their heart desires.
You know, just doing their job.
A few months later, there's an incursion.
Someone was able to get onto our programmer's computer.
This intruder circumvented password checks and all standard security procedures.
They seemingly broke in
with no resistance. Nothing was broken, per se, but some files had been copied and transferred
to a far-off machine on another network. The only sign of entry was a mysterious record of
a late-night login and some associated network chatter. This intruder now has all of WidgetCo's financial records
for the past seven years, plus the designs for this year's holiday run of doodads.
It's truly a grim outcome. It could result in an IRS audit, or it could just result in
losing the holiday market this year. In the fallout, an audit happens. Widget Co. calls in security experts to tear through drives and the poor programmer is interviewed. During this process,
no one would think to look at the new C compiler. I mean, it's just a tool. If someone broke into
your house, you wouldn't start looking through your tool drawer, you wouldn't check your screwdrivers for defects or look for cameras in your hammers. You trust
your tools, but in the digital wild west, those tools can be turned against you. Your trust can
be violated. After an excruciating process, it's determined that something is off with the new C compiler.
The program itself is actually the wrong size.
That was the only tip-off.
There are a few extra kilobytes of code just tucked somewhere in the program.
It's a Trojan horse, and a subtle one.
The code has been implanting security vulnerabilities into select programs it compiled.
Eventually, someone recompiled a login program.
That was the point of no return.
This is the rough outline of the virus, or Trojan, that Ken Thompson presents in Reflections on Trusting Trust.
And, as mentioned earlier this episode, the story is corroborated by Wetherell's article in Dr. Dobbs' journal. Wetherell's description is eerily similar to Thompson's, right down to the attribution. Thompson cites a forgotten Air Force document and something about Multics, while Wetherell adds, quote,
I heard this tale far removed from the perpetrator. I suspect it to be apocryphal, but have you checked your compiler lately?
Should I still be trusting rumors written down by Wetherell?
Perhaps not, but I'm going to do it anyway.
Besides, I can't help but be fascinated by two sources describing something so wildly
specific. Especially when both sources end with, eh, I don't really know where the idea came from,
but, you know, it's probably trustworthy. This is where the rabbit hole begins.
We actually have evidence of Trojan horses during the Dark Decade. The term is first applied to tricky software in
the early 1970s. The origins are a little, well, tricky. There's actually a name drop in a 1971
version of the Electronic Manual for Unix. In this file, colloquially called the Man Pages,
there is a short line about how permissions on files help prevent
quote Trojan horses. Then in 1972, the phrase shows up again in a report titled Computer Security
Technology Planning Study by James P. Anderson. He specifically cites a colleague named Daniel
Edwards. Now, there is a fun bit of shuffling going on here.
According to the jargon file, Edwards was a quote, MIT hacker turned NSA spook,
and the one that should receive credit for coining the term Trojan horse. Just to fill in the gaps
from what I can tell, Anderson was working as a contractor in this period. The
security study specifically says it was prepared by James P. Anderson & Co. for a division of the
U.S. Air Force. Usually, a company named after yourself is a sign of an independent contractor,
at least in this context. This report wasn't classified, it was cleared for public use,
and was even discussed at public security conferences. The first Unix manual, on the
other hand, was a much more exclusive kind of file. 1971 is before C was designed, so we're
dealing with a very early version of Unix. Thus, the manual was probably only ever seen by its creators and their colleagues.
Namely, we're talking about Ken Thompson, Dennis Ritchie, and maybe a handful of other
programming nerds at Bell Labs out in New Jersey.
I think this is a case where one source may have come first, but wasn't widely available,
so didn't have any kind of impact.
Alright, with that pretzel thoroughly tied, what does Anderson's report have to say about the
matter? Quote, Trojan horse (see footnote). This rather interesting attack is directed to placing
code with trapdoors into a target system. It attempts to achieve this by presenting
the operators of the system with a program so useful that they will use it even though it may
not have been produced under their control. An ideal gift of this kind would be a text editor
or other major system function that requires access to user files as part of the function.
End quote. That footnote is,
This attack was identified by D.J. Edwards.
This appears to be one of the ways that the phrase Trojan horse is popularized amongst computer users.
This is also one of those great times where the initial meaning is, more or less, the same as the modern meaning.
We do not get so lucky with the term virus, sadly.
Something interesting to note here is the idea of a Trojan as a gift.
This actually fits something we've talked about before.
Animal, the breakout virus of 1975, is kind of a Trojan.
That program presented itself as a fun game where the computer tries to guess what animal the user is thinking about.
The program would ask you a series of yes-no questions, eventually arriving at a guess.
All the while, Animal was copying itself to all media the user had access to.
It was a neat little gift that hid some unintended consequences.
But a Trojan doesn't have to come from the outside. It doesn't have to be a gift at all.
The phone call can come from inside the house, so to speak. This gets us to one of the earliest reported instances of an honest-to-goodness Trojan. I'm talking an actual new virus inside the Dark Age.
At least, kinda. For my caveat to make some sense, I need to weave a bit of a tale for you.
It starts with one Don Parker. He was one of the first people to take serious interest in
computer crime, and he came to it from a somewhat strange angle,
at least it's unique. Parker was a programmer with an interest in ethics. Right off the bat,
there aren't a lot of folk like that. Most programmers, especially in this early era,
tend to be ruffians, so to speak. As a consummate professional, Parker was also a long-time
member of ACM, the Association of Computing Machinery. He held many positions in the ACM
over the years, eventually leading to the post of chairman of the Professional Standards and Practices
Committee. During his tenure, Parker started the huge undertaking of developing a professional code of ethics for ACM members.
This is a pretty normal thing for any big field to have.
Medical folk, for instance, all have professional and ethical standards they are bound to.
The same is true of scientists, bankers, even boiler makers in most cases.
But computer scientists, at least up to this point,
were ethically unbound. The ACM as a centralized professional organization didn't offer much
guidance on ethics or professionalism. Parker set about changing that. During this process,
Parker started to see a very disturbing pattern. There were actually programmers behind
bars. Worse still, some of them were ACM members. Parker never says as much, but I can imagine that
some were even paying dues from the penitentiary. This is part of a larger trend that starts to
become visible in the early 1970s. Computers had left the realm
of pure research and entered into less rarefied air, namely business. Mainframes and minis could
be found not only at colleges, but also at banks, hospitals, really you name it. Even some small
businesses were starting to get small computers. This presented a very interesting
new type of crime. Computer crime. It's during this period that we see the first arrests related
to digital activity. To be clear, these weren't arrests for things like hacking or writing evil
code. Legislation always lags behind progress. There were no explicit laws against
messing up a program or getting into the admin's files. The charges that we see in this era
are related to fraud or theft that just happens to be perpetrated with a computer. For instance,
there was a somewhat famous case of a programmer working at a telecom company.
They switched around some code so that they could order all the telephone equipment they wanted
and have it sent directly to their home.
All on the company's dime, of course.
This programmer was eventually busted, but wasn't charged with hacking or anything of that nature.
It was a case of old-fashioned fraud that just happened to be facilitated by a computer.
Parker, ethical dude that he was, took a keen interest in this budding problem.
He actually started meeting with early computer criminals.
As he explained in an oral history interview with the Babbage Institute,
these criminals were eager to talk.
Parker was one of the first people to start documenting all of this in a
centralized and systematic way. He was slowly building up and documenting digital crime.
An interesting side note is that Parker implicitly breaks these ne'er-do-wells into two camps,
those committing run-of-the-mill crimes, and hackers. He presents it as a matter of motive. Many computer criminals were operating from a very normal headspace. Parker describes cases where a company is underperforming, so
someone goes in and fudges some numbers on a projection program. Another employee needs a
little more cash, so they go in and change their salary in a file somewhere. This is all real-life stuff that only incidentally
involves a computer. Hackers, on the other hand, why, they commit their acts for the sake of the
act itself. In 1971, Parker publishes his first paper on the subject, titled Computer Abuse.
In the following years, he lectures and publishes on his growing crime files,
leading to the 1976 book simply titled Crime by Computer. This is where we get one of the
earliest outlines of a Trojan horse, but it's a little sparse on details. I can't corroborate
the story because of that, but I have read a whole pile of Don's work,
and I think we can take him at his word.
The story, as presented in Crime by Computer, starts like this.
One of the few attacks of this kind ever discovered happened at a university computer center.
A timesharing service is provided for hundreds of students, professors, and researchers
through online
terminals connected by telephone circuits. One day, a systems programmer was tracking down a
software bug after a system failure had occurred. He obtained a printed image of the contents of the
part of storage containing the operating system programs. Some clues from the way the system
failed led him to search several of the 400 pages.
Buried in the middle of a program that just happened to be familiar to him was some strange
code that he did not recognize. Curiosity overtook his interest in bug chasing. End quote. From here,
I'm paraphrasing from Parker in order to save some space and add commentary. This programmer was
well-versed with the software at hand. It's implied that he wrote at least this program and maybe
more of the system. At this point in time, that kind of familiarity wasn't uncommon. We're dealing
with smaller teams in general. We don't have as many tools for managing programming, so you have
to keep your wits about you when working on a big project. This familiarity allows the programmer to spot something strange.
Buried in those 400 pages is a small chunk of code that doesn't fit. Once he finds this
interloping code, the problem is escalated. A larger investigation takes place. It's determined that the new code adds a security
vulnerability to the system. It creates a backdoor that circumvents the usual security
checks and system privileges. If someone knows that door is there, well, they have the keys to
the kingdom. Somehow, this unnamed college is able to track down the source of the bad code. It came from inside the computer itself. Or that's the implication, at least. Someone that Parker just calls a quote
hacker had developed a handy tape utility program. This hacker most likely had an account on the
campus computer and probably an account with low-level access. That meant that they
couldn't access all the files on the computer, and probably had limitations on what types of
software they could run. These kinds of limitations, well, they don't sit very well with computer folk.
Judging by Parker's interpretation of hackers, I'm gonna say that the motive here was pure fun.
The hacker probably wanted more access for the sake of, you know, getting more access.
They wanted to pick a lock that was placed in front of them.
So they developed this handy tape utility program.
On the surface, it looked like an ordinary tool, just a new hammer to put in the tool belt.
But there was a special payload inside.
This was a Trojan horse. Whenever
a user ran this utility, the program checked what privilege that user had. On these shared
kinds of systems, different users had different levels of access. The Trojan was looking for the
most coveted user account of all, an administrator. For lowly salt-of-the-earth users, the Trojan did
nothing. But for administrators, the digital gentry, the Trojan sprang to life. Administrators
usually have free rein over all files. That includes the files of the operating system itself. The Trojan used that elevated privilege to modify
system files. It implanted the small back door. From there, the hacker would have been able to
elevate themselves to the level of an admin.
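Parker gives no implementation details, so what follows is only a guess at the control flow of that tape utility, sketched in Python. The account names, the stand-in "system file," and the backdoor line are all placeholders I made up; the point is just the check-then-implant pattern he describes.

```python
# A guess at the control flow of the "handy tape utility" Trojan that
# Parker describes. Every name here is a placeholder: the privileged
# account list, the file standing in for an OS configuration file, and
# the backdoor entry are all invented for illustration.

import getpass

ADMIN_USERS = {"operator", "sysadmin"}    # hypothetical privileged accounts

def do_useful_tape_work():
    """The advertised feature -- the part users actually wanted."""
    print("copying tape... done.")

def implant_backdoor():
    """The payload. It is only reachable when an administrator runs the
    tool, because only an administrator can touch the system's files."""
    with open("system_access_list.txt", "a") as config:  # stand-in file
        config.write("allow_login=hackers_account\n")

def main():
    if getpass.getuser() in ADMIN_USERS:
        implant_backdoor()       # the digital gentry get the payload
    do_useful_tape_work()        # everyone gets the useful tool

if __name__ == "__main__":
    main()
```

For a lowly user nothing interesting ever happens, which is exactly why a tool like this could float around a campus for ages before the right victim ran it.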
We don't get a year for this event, but we can guess at one. It has to be before Crime by Computer went to print, so pre-1976.
Call it pre-75, since the book probably took more than a year to write.
The first timesharing system that served a university campus was CTSS, which began full
operations in 1963.
Now, to be clear, I'm not suggesting this Trojan was running on CTSS, just that CTSS gives us a lower time boundary. In other words, this could have been an event that predates the first well-documented virus. We don't know much for sure because the sources are a little… scarce, at least online.
The University of Minnesota has a collection of Don Parker's
papers, including his crime files. If this story interests you, then please hit me up.
There's a really wide open avenue for research here that I don't have the time to go into right
now. Just suffice to say, there's a lot more information here that I think would be accessible with a
little bit of work. But for now, I'm going to leave it at that. We have a lot more ground to
cover today. So how does this connect up to On Trusting Trust? Simply put, the Trojan described by Parker is very similar to the Trojan described by Thompson and Wetherell. Both masquerade as a trusted
program. Both implant a security vulnerability, and both are hard to track down. This also gets
to why I don't super like keeping Trojans as a separate class of malicious code. Ultimately,
most Trojans infect another program. That's some classic virus-style activity. The Trojan part is
really just the means of entry. But anyway, that's neither here nor there. Let's get back to the
rumor at hand. We've established that Trojans existed and were being discussed all the way back
in the early 70s. So, do we have a Trojan that fits the rumor described by both
Thompson and Wetherell? I was actually able to track down the origins of this rumor.
It comes from Multics Security Evaluation: Vulnerability Analysis, a 1974 report funded by the U.S. Air Force.
This is a pretty heavy-duty report.
It details a prolonged security audit of Multics,
this big time-sharing system that was partly funded by the U.S. government.
The end goal was for Multics to be the operating system for the Feds,
and as such, it had to be secure.
This report is part of that
larger goal. It covers how Multics could be exploited and where its security was lacking.
This actually includes a few theories on possible attack vectors for viruses, or Trojans if you
prefer that term. I'm working up to a passage here, but I have to start with some background.
The report is pretty early in the chronology of computer security. As such, we get some fun,
antiquated language. A large section of the report is concerned with so-called trap doors.
These are what we'd probably call back doors these days. They're holes in security that are purposefully placed by an attacker or left in by some other dubious actor. One specific trapdoor discussed is the object code trapdoor.
Object code here doesn't mean object-oriented. Instead, it means pre-compiled libraries that
are often used by multiple programs. It's a pretty common practice both then and now. Before you get any ideas, no, I don't think this has any connection to the object code and tape virus of the IBM 7094.
Multics wasn't really for IBM hardware, so I don't think this hooks back, sadly. I personally was
very excited for that possible loop.
Now, the specific type of attack here is interesting in its own right.
The idea is that a vulnerability could be hidden inside a useful library.
So instead of a Trojan hiding inside something like a handy tape utility,
it could be lurking inside a library for controlling, say, a new graphics terminal,
or printing fancy text. Any programmer writing code for that terminal would, invariably,
reach for that slick library. They would call that object file from their own code,
thus compromising their software. Even more specifically, this report calls out the danger of a trapdoor being added
to object code ex post facto. In other words, an attacker would modify an existing library.
Libraries don't change all that often, so a modified one seems innocuous. The only way to cure this type
of virus is to replace the infected library. This is often done by
recompiling code, taking your known, good, safe code and converting it into a fresh object file.
That is, if you can even find the issue in the first place. It's once again like hiding inside
a hammer. No one thinks to check their tools. That brings us to this passage, which is a bit of a long one, but come on, it's going
to be worth it.
Quote,
It was noted above that while object code trapdoors are invisible, they are vulnerable
to recompilations.
The compiler or assembler trapdoor is inserted to permit object code trapdoors to survive
even a complete recompilation of the
entire system. In Multics, most of the ring 0 supervisor is written in PL1. A penetrator could
insert a trapdoor in the PL1 compiler to note when it's compiling a ring 0 module. Then the compiler
would insert an object code trapdoor in the ring 0 module without listing the code in the listing.
Since the PL1 compiler is itself written in PL1, the trapdoor can even maintain itself when the compiler is recompiled.
In Multics speak, ring 0 means the highest level of access. It's like having an administrator account.
The software that runs in Ring Zero is stuff like The Kernel, or other all-encompassing code that
requires total power and authority. This is also the exact type of virus described by Thompson and
Wetherell. The only change is that in the later stories, the virus targets login systems specifically.
Now, there is a caveat here.
This is all theory stuff that stems from an Air Force audit of Multics.
The report does explain that a few example trapdoors were developed.
There's even some listings, but there isn't a report of a compiler trapdoor in the wild.
In other words, it's a ghost story.
That said, it's still very notable that this is occurring during our Dark Age.
Have you ever heard the tale of the programmer who lost control of their own code?
It's happened a number of times.
I've even seen it with my own eyes.
I want to share a particular story of this occurrence with you.
It comes from 1979 at Xerox PARC, one of the most technologically advanced outfits of the age.
There were these two programmers, John Shoch and Jon Hupp,
who were experimenting with an early version of the Ethernet.
They were in the right place to do it, too.
PARC was outfitted with a small fleet of Alto computers.
These were graphical machines that were built in-house.
They were all wired together with one of the first Ethernet networks.
In fact, Ethernet had been developed in this very lab a few years prior.
The two Johns weren't planning a normal test. There wasn't a need
for the ordinary. PARC employees had been using Ethernet for things like email and file sharing
for a number of years. This test would be grand. A few years back, the two were made aware of this
program called MCROSS, the Multi-Computer Route-Oriented Simulation System. It was a distributed
program, a type of program that ran on multiple machines. These machines would all communicate
and coordinate towards solving a larger problem. In this way, an almost unlimited amount of power
could be thrown at a task. The team thought that Xerox PARC offered an interesting opportunity
for testing distributed programs. It was networked, for one, and its computers were only used during
the day, for two. The core of the idea was simple. Write a program like McRoss, but automate it.
The program would automatically find new computers that weren't being used by anyone.
It would copy itself over and start crunching numbers. Thus, the duo set to work creating this
new program. They called it a worm. Its functioning was simple. When the worm started up, it was just
a single computer, a single segment. From there, it would go looking for idle machines. The segment
would then copy itself over to the new computer. After that step, there were two segments looking
for idle machines. Then four, then eight, on until all free resources could be brought to bear.
At that point, John and John could carry out all kinds of useful computations.
They knew this kind of thing could get out of hand, so they were careful.
The worm program would never write itself to disk, only living in memory.
Each segment was somewhat dangerous, since it only took one segment to spread,
but the computer could be cleansed by a simple reboot.
Just flip a switch and you were safe.
But here's where things went wrong.
The worm was set free to roam and grow at night, with the knowledge that in the morning, workers would just reboot their
computers and start, well, working. One morning, as the office was coming back to life, something
strange happened. A perfect storm was brewing. Altos were randomly crashing. A machine would come up, start to run, then it
would glitch out and die. The duo figured it must have to do with their worm program. That was
the only unknown quantity in the lab, after all. Something had gone wrong. Something had gone rogue.
There was a segment out there wreaking havoc with the network. But where? A posse was assembled to scour the lab.
Programmers went office by office, cubicle by cubicle, looking for the worm. But they quickly
ran into a major problem. Not everyone was in the office. Some were out sick, some were on vacation,
and some offices were just plain out of use. Most of those offices were locked,
rendering the computers within out of reach. Any one of those machines could have the rogue
segment on it. The worm could literally be hiding in a locked room. Luckily, the Johns had prepared
for this kind of problem. They had implemented a kill switch in each segment. By issuing a secret
command, the worm would be set to self-destruct. And just as luckily, the rogue worm was still
bound by this order. Disaster was averted at the cost of a morning of work. That said, disaster was
very close at hand. If the worm had chosen to disregard its command, well, things could have
turned out very differently. On the surface, that might sound like another rumor. There are just
enough specifics to make it sound credible, but things are left vague enough that it can't be
fact-checked, right? However, dear listener, this one is all true. At the tail end of the 1970s, a worm did go rogue at Xerox PARC.
It did crash their network of altos,
and it was stopped by a pre-programmed kill switch.
This rumor also isn't a rumor at all;
it comes from an academic paper.
Worms are a third category of malicious code
that we haven't directly discussed on the show.
These are the most interesting out of all the classes.
A worm is a distributed program.
It spreads segments across multiple computers.
Those segments can work in concert, spreading the worm further and working towards some grander goal.
This is another one of those programs that was presaged by science fiction.
The first depiction of a worm, at that point called a tapeworm, is in the 1975 book The Shockwave Rider by John Brunner. The text describes multiple tapeworms that spread over a primitive network,
becoming more advanced as they grow. The book also served as inspiration
for Shoch and Hupp. In 1979, spurred by The Shockwave Rider and the aforementioned McROSS,
the two Johns set to work creating a worm of their own. The entire saga is chronicled in
The Worm Programs: Early Experience with a Distributed Computation.
To start with, I need to explain some things about the workings of the Xerox Alto.
This really was a wildly advanced machine for the time. Not only did it run a fully
graphical environment, it also came with built-in Ethernet support. In fact, Ethernet had been developed at PARC a scant handful of years before Shoch and Hupp started working on their program. In modern parlance, we'd probably call
the Alto a networked workstation. While graphics were a big glossy feature, networking was another
huge deal for the platform. The Alto could boot from a local disk drive like any
computer. It had this neat thing called a Diablo drive, which used a removable hard disk cartridge.
The Alto could also boot from the network. In this configuration, the disk wasn't even touched,
at least not initially. The machine would turn on and get in contact with a server over the
Ethernet. That server would then send over a file, which the Alto would load into memory and execute.
An Alto could also be remotely instructed to boot off the network, so you could bring up a machine
from anywhere at Xerox PARC. As explained by John and John, this remote boot feature was mainly used for diagnostic
purposes. At night, when machines weren't doing anything, they would be instructed to run a memory
diagnostic procedure. Some server would reach out over the network and tell all the altos not in use
to download and begin the diagnostic. When someone wanted to use an Alto,
well, they just had to sit down and reboot the machine.
That would clear the diagnostic from memory and operations could go on as normal.
The experimental worms didn't start off as a way to infect computers,
but rather as a research tool.
Shoch and Hupp wanted to test out a means
of automatically bringing up a distributed computer network.
During the night, all those computers just kind of sat around doing nothing of actual use.
It's wasted cycles, or as the authors put it themselves, quote,
We once described a computational model based upon the classic science fiction film The Blob,
a program that started out running on one machine, but as its
appetite for computing cycles grew, it could reach out, find unused machines, and grow to encompass
those resources. In the middle of the night, such a program could mobilize hundreds of machines in
one building. In the morning, as users reclaimed their machines, The Blob would have to retreat
in an orderly manner, gathering up the
intermediate results of its computation. Holed up in one or two machines during the day, the program
could emerge again later as resources became available, again expanding the computation.
This affinity for nighttime exploration led one researcher to describe these as vampire programs. End quote. You gotta love this kind of stuff. I personally find it really evocative of the headspace these programmers must have been in.
They were steeped in sci-fi and fantasy, and before them were the tools to make some of these
wild stories into reality. It's this beautiful mix of playfulness and skill that I find really compelling.
I think it also speaks to the fact that we're in a very early era for this kind of code.
Shoch and Hupp aren't citing other works on viruses or worms or Trojans; they're instead
describing their program in terms of science fiction. Their closest citations are movies and pulp sci-fi novels. It's also worth noting,
partly as an aside and partly as a serious observation, that a quote vampire program
would actually be an automation of an old-school hacker practice. Back in the early days of time
sharing computers, time was hard to get. It was either expensive, scarce, or both. Often, it was easiest
to get uninterrupted access at odd hours of the day, namely, very late at night and very early
in the morning. Computer labs would be less crowded, and sometimes hourly rates would be
cheaper. Many hackers became nocturnal in order to take advantage of these schedules. These
obsessives would program late into the night and then on into the next morning, becoming
software vampires themselves. That's one of the reasons that a lot of hacker stories either take
place very early in the morning or very late at night. I think it's fitting that these early
worms were conceived of as an automatic software vampire.
Now, in addition to leveraging resources,
Shoch and Hupp also wanted to study the idea of a real-world worm.
How would it propagate over a network?
What kinds of issues would they run into?
Are there gotchas to look out for?
Once again, we aren't dealing with a real-world infection.
Instead, this is a testbed for new ideas.
These early worms were, very fittingly, primitive.
Ethernet was much more simple back in those days.
Altos didn't even have IP addresses, they just had numbers.
So you literally could have a network address of one. The most basic worm worked
something like this. The first segment would start up. Once running, it would determine if
it needed to spread. This was one of the free variables that Shoch and Hupp experimented with.
A segment would keep a table of other known segments, which was refreshed at regular periods.
There was this
heartbeat signal going back and forth between each segment, so when one dropped off from the network,
each segment would know about it. The data was used to decide when a segment should spread and,
in the larger worm, which segment should do the spreading. Spread here was simple. The segment
would reach out on the network and scan for idle Altos. This was done by starting at address 0 and working its way up to the highest numbered address on the network. If asked, an Alto would totally tell you if it was idle or in use. Once an idle machine was found, the segment would reach out again and tell that Alto, hey, it's time to boot, and then the segment would pass itself to the target Alto. The target would then dutifully download the segment and start running, which would make the worm longer. Now, there's a new segment.
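If it helps to see the shape of that loop, here's a rough sketch in modern Python of how I picture the spread step. To be clear, this is my own illustration and not the actual Alto code: names like find_idle_machine, send_boot_request, and send_segment are placeholders I made up, and the real worm spoke raw Ethernet rather than anything this tidy.

    # Hypothetical sketch of one segment's spread step, not the real PARC code.
    # Altos are addressed by plain numbers, so the scan just counts upward.

    MAX_ADDRESS = 255  # assumed highest host number on the network

    def find_idle_machine(is_idle):
        """Scan addresses from 0 upward and return the first idle host, or None."""
        for address in range(MAX_ADDRESS + 1):
            if is_idle(address):  # an Alto will happily report whether it is in use
                return address
        return None

    def spread(segment_image, is_idle, send_boot_request, send_segment):
        """One spread attempt: find an idle Alto, boot it, and hand it a copy of this segment."""
        target = find_idle_machine(is_idle)
        if target is None:
            return False  # nothing idle right now; try again later
        send_boot_request(target)            # "hey, it's time to boot"
        send_segment(target, segment_image)  # the copy starts running and the worm grows by one
        return True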
Right away, there were some fun issues with the worm. I've already detailed one problem. Once in a blue moon, the transfer process would mess up.
Ethernet was pretty primitive back in the day, and the Alto was still a bit of an experimental
machine despite Xerox's best attempts. At one point, a segment was damaged in a transfer.
We don't know exactly how it was damaged. The net result was that when it attempted to spread to a
machine, the new segment program would be corrupted. It would simply crash, causing the Alto to reboot. Once rebooted,
well, the Alto was technically idle, so it was ready to be crashed again, and again, and again,
and again. Another noted issue was network isolation. Most worms were programmed to only
spread to a set number of computers. Each segment was keeping track of other segments, partly to ensure that worm size stayed in check.
Recall that this information was spread by a heartbeat.
Active segments would send little notes to each other at regular intervals, saying all was well.
If one segment was killed off, then the notes would stop.
The other segments would assume the computer had been rebooted, and then the worm would decide where to pick up a new segment. But what if that
missing segment was still running? What if instead of being rebooted, the machine was just inaccessible?
That would lead to some weird issues. And recall that Ethernet is still in its infancy at this
point. A situation would crop up where an interlink
between two parts of the office went down. Thus, it was broken into two smaller networks.
Each of these new networks contained a few segments of a worm. When the network bisected,
these segments were isolated. After a few missed check-ins, each segment just assumed that the
worm had shrunk. Some of the Altos had rebooted, that's normal, so these two worms would start to grow independently. Once the network was repaired, Shoch and Hupp were faced with two worms where they only expected one. I can only imagine how
confused the poor worms were. That's the gist of the platform, the code that spread and managed
the worms.
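Before we move on, here's a small sketch of how I picture that heartbeat bookkeeping. Again, this is illustrative Python of my own, the interval and size numbers are invented, and the comment in should_spread points at the partition problem that bit Shoch and Hupp.

    import time

    HEARTBEAT_INTERVAL = 5.0  # seconds between "all is well" notes (an assumed value)
    MISSED_LIMIT = 3          # check-ins a peer may miss before we write it off
    TARGET_SIZE = 5           # how many segments the worm tries to hold itself to

    class SegmentTable:
        """One segment's local view of the rest of the worm. Purely illustrative."""

        def __init__(self):
            self.last_heard = {}  # peer address -> time we last got a heartbeat

        def record_heartbeat(self, peer):
            self.last_heard[peer] = time.monotonic()

        def prune_dead_peers(self):
            """Forget peers that have gone quiet; presumably their Alto was rebooted."""
            cutoff = time.monotonic() - HEARTBEAT_INTERVAL * MISSED_LIMIT
            self.last_heard = {p: t for p, t in self.last_heard.items() if t >= cutoff}

        def should_spread(self):
            """Grow only while the worm looks smaller than its target size.
            This is also where the partition problem bites: if the network splits,
            each half stops hearing the other, prunes it, and grows back to full
            size on its own, leaving two worms where there used to be one."""
            return len(self.last_heard) + 1 < TARGET_SIZE  # +1 counts this segment itself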
So what were these worms actually doing?
Well, in general, just kinda messing around.
Recall that the whole point was to experiment with distributed computing.
The platform part, all the fun wormy bits, was a means to test propagation and communication
over the network.
The actual payload, the mischief that the worms got up to, was pretty mundane.
The first worm described in the paper just announced its presence by throwing a note up
on the screen. I don't think it was, I'm the creeper, catch me if you can, but I get a similar
vibe from the paper, so I'm gonna assume it's just, I'm the creeper, catch me if you can.
From there, Shoch and Hupp upped the ante by displaying an image on infected machines.
Not that exciting, but definitely more complex software.
But we can get more interesting.
The next most complex worm in the hierarchy was an Ethernet diagnostic program.
This is actually a pretty neat use of a worm. The program spread over the network,
then started monitoring traffic and sending test packets. This was used to measure error rates and response times between pairs of machines. It's the kind of widespread testing that's necessary,
but really tedious to manage in practice. The worm turned out to be a slick tool to automate away the tedium.
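To make the measurement side a little more concrete, here's a toy sketch of what one segment's probing could boil down to. This is my own stand-in code, not anything from the paper, and send_test_packet and wait_for_echo are hypothetical callables you'd have to supply.

    import time

    def probe_peer(peer, send_test_packet, wait_for_echo, attempts=100):
        """Send test packets to one peer and return (error_rate, average_round_trip)."""
        errors = 0
        round_trips = []
        for _ in range(attempts):
            start = time.monotonic()
            send_test_packet(peer)
            if wait_for_echo(peer, timeout=1.0):  # did the echo come back in time?
                round_trips.append(time.monotonic() - start)
            else:
                errors += 1  # dropped or mangled somewhere on the wire
        average = sum(round_trips) / len(round_trips) if round_trips else None
        return errors / attempts, average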
This, though, was all a warm-up for the big bad worm, the animation worm.
Now, once again, we are dealing with experimental software.
That said, this is a pretty cool experiment.
The animation worm enlisted computers as a rendering farm.
A central program would delegate rendering tasks out to each segment,
and then compile the finished frames into a finalized animation.
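To give a flavor of that kind of delegation, here's a toy coordinator that hands out frame numbers and stitches the results back together. Once more, this is just a sketch under my own simplifying assumptions: send_task and receive_result are placeholders, and the real worm also had to cope with workers vanishing when someone sat down and rebooted their Alto.

    def render_animation(frame_count, workers, send_task, receive_result):
        """Hand out frame numbers round-robin, then gather the finished frames in order."""
        for frame_number in range(frame_count):
            worker = workers[frame_number % len(workers)]
            send_task(worker, frame_number)
        finished = {}
        while len(finished) < frame_count:
            frame_number, image = receive_result()  # blocks until some segment reports back
            finished[frame_number] = image
        # Stitch the frames back into order for the final animation.
        return [finished[i] for i in range(frame_count)]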
In order to handle more complex worms, Shoch and Hupp created a central control program. This is probably where the kill signal was sent from when things went wrong. This control program could also display vital information about the worm's spread and health. From the program, Shoch and Hupp could also fire
off multiple worms. The network diagnostic program, for instance, was actually a set of
small worms that wove their way through Xerox PARC. The animation worm is especially cool here
because, I mean, come on. This is actually distributed computing.
Shoch and Hupp were building this ad hoc cluster of machines,
farming out smaller tasks and handling all the communications necessary for the task.
I've actually written and maintained this type of software both for fun and professionally.
Distributed computing requires a whole lot of communication between nodes and a lot
of overhead for setup. So believe me when I tell you, the idea of automatic distributed computing
like this, that's really cool. Just the fact that these Xerox worms were able to self-organize
is impressive. The animation worm also presages something more recent. That is, the botnet.
These are programs that, similar to worms, propagate across many computers. The virus then uses these machines as resources for some nefarious goal. It could be running a denial-of-service attack, mining Bitcoin, or sending spam mail to spread further. It's interesting to note
that this early animation worm is so similar to something that's this modern. I think when you get
down to it, a lot of these viruses, either experimental, rumored, or very real, have modern
equivalents. We can really see reflections of the past in the present day in these.
Okay, I think that's probably enough for a while. So let's bring this back to the central line of inquiry for this episode. Why is there this big hole in the timeline of viruses from 72 to 82? Why do we only see one virus, ANIMAL, in the middle of this dark age? This has been a bit of a hard-fought episode. That's the big reason for it being late.
Once again, I'm sorry. The bottom line is that there were viruses in this period. It's only a
dark age insofar as the sourcing goes. The viruses that did exist weren't
necessarily called viruses. I think that's the biggest reason for this dark age, or quiet age.
The term virus existed on the fringes, but it wasn't super codified. It wasn't in any sort of
common use. That said, there were viruses by other names. Worms, aka tapeworms, and Trojan horses were known quantities.
Worms weren't in the wild yet, but they were being researched.
Trojans, on the other hand, were very much in play.
Donn Parker's work provides enough evidence of real-life Trojan horses, as vague as that evidence may be.
When I started this episode, I had a gut
feeling that there would be accounts of viruses in this era, but they'd be hard to find. I'd already
read enough rumors that I could see an outline. I think I've managed to fill in part of that
outline, but there's a lot more work that could be done. If this stuff interests you, then once
again, please hit me up and I'll pass you off
a whole lot of leads. With a little diligence, I think this case could be cracked wide open,
and that'd be a huge benefit to the field at large. Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of computing's past. And hey, if you like the show,
there are a few ways you can support it. If you know someone else who'd be interested in the history of computing, then
please take a minute to share the podcast with them. You can also rate and review on Apple
Podcasts. If you want to be a super fan, you can support the show directly through Advent of Computing merch or signing up as a patron on Patreon. And I will give you a little hint,
those patron funds recently came in very clutch
for securing a really, really cool and possibly one-of-a-kind source. So the support does matter.
In exchange, patrons get early access to episodes, polls for the direction of the show,
and bonus content. You can find links to everything on my website, adventofcomputing.com.
If you have any
comments or suggestions for a future episode, then go ahead and shoot me a tweet. I'm at
adventofcomp on Twitter. And as always, have a great rest of your day.