Advent of Computing - Episode 100 - Updates and Mysteries
Episode Date: January 23, 2023. Advent of Computing has finally reached 100 episodes! Today we are taking a break from the usual content to discuss the show, its arc, and some of the mysteries I have yet to solve. ...
Transcript
I've probably told this story before, either on this show or on some other podcast, but
you're going to hear it again.
I want to tell you all how I started Advent of Computing.
It was early 2019, and I was feeling a little restless.
I was relatively fresh out of college and had been doing the whole industry thing for
a while.
It may come as no surprise that I've never been a big
establishment kind of dude. I'd always prefer to stick it to the man. As such, the capital A
style of academia, well, it never really worked out for me that well. At one point, I was close
to heading off to grad school. I wanted to become either a professor or some type of researcher.
But I decided that I was kind of done with school in general.
It just didn't fit.
I didn't want to deal with all the administrivia and poor treatment,
which, I was told, only got worse once you were above undergrad.
Since that time, I've had multiple friends confirm these rumors were
true. I've always liked learning. I've always enjoyed more scholarly pursuits. But the trappings
just weren't my thing. So I decided to go my own way, brave a new path, and start a podcast.
I know, very original of me. By 2019, I was something of an industry insider already. I'd been working at a CDN for years, and part of that job had to do with podcast production
and hosting. I've always been a big computer nerd. During college, I majored in physics, but
basically all of the research that I was involved in was computational.
So computers and me, well, we go way back.
When I started planning a podcast, I knew exactly what I wanted to cover.
It also helped that no one was really doing the same kind of academic treatment of computer
history that I was interested in.
I listened to a lot of podcasts, usually history
podcasts, and I just didn't see exactly what I had in mind. I've always been fascinated by the
history and heritage of computers, not just how they work, but why they work in a certain way.
So things just fell into place, and I decided why not make a kind of hardcore computer history
podcast. I started reading up on how to actually, you know, make a podcast. I then recorded an awful
first episode. I read some more, I got a cheap microphone, and I learned how to actually use
audio editing software. Episode 1 of Advent of Computing
was published at the start of April 2019. Today, I'm hitting publish on episode 100.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is a truly monumental episode of the show.
We've finally done it.
We've hit episode 100 of the podcast.
I honestly never thought I'd be here.
I started Advent of Computing, what, almost four years ago now.
I assumed that maybe it would be
a fun little side project that a few dozen people might even listen to. But the response,
especially in the last year, has really blown all my expectations out of the water. So I just want
to start this episode off by saying thank you to everyone listening. Advent of Computing couldn't have come this far without you.
Today isn't going to be a normal episode. I've decided I'm going to do a bit of a victory lap.
So if for some reason you're a first-time listener, please stop what you're listening to,
go back and listen to a few other episodes. This isn't very indicative of the usual show content.
For episode 100, I've decided to draw back the curtain a little bit.
I have two topics on the docket today.
First is the state of the podcast after its first 100 episodes.
I don't really talk about the podcast on the podcast very often.
There is good reason for this.
I want Advent of Computing to be an evergreen kind of show.
I want each episode to be relevant far into the future, and more than anything, I try
to come off as a little bit professional.
I'm not a personality podcaster, so I tend to shy away from talking about myself and
talking about the show.
We got into some of that territory back when I did the 50k Q&A special, but that's been
a while.
I figure you're all due for an update.
Second is a segment that I'm going to call Computer Mysteries.
After nearly four years of production and research, I've run into a number
of nuts that I haven't yet been able to crack. But hey, you know me, your intrepid host. I'm not one
to wallow. I prefer to scheme. I keep track of all these topics in a special Computer Mysteries
spreadsheet. So here's the deal. I want this to serve as something of a call to action.
We're going to be going over a handful of these mysteries and how I think they could be solved.
I'm hoping that someone out there will know something or have access to documents that
can help me. This is very much an if you or a loved one knows kind of thing. Besides just a call for help, I think these histories are fascinating in their own right.
They speak to some of the more strange or just obscure sagas in the torrid history of
the computer.
So, let's dive in.
Welcome to episode 100.
To kick things off, what's the current state of the Advent of Computing podcast?
Well, pretty good.
Thanks for coming to the show.
I'm kidding, there's a lot more to say than just that.
The podcast is doing quite well for itself.
We can just start by looking at the numbers.
Advent of Computing is getting really close to 200,000 all-time listens.
I think we're going to break that in the next month or two.
That's spread over 99 full-length episodes, plus a handful of public bonus episodes and interviews.
That's a lot by any standard.
We're not the biggest podcast in the world, but, you know, we're up there.
We're a lot bigger than most shows around.
According to ListenNotes, one of the many sites that attempts to track podcast popularity,
our humble history show is in the top 2% of all shows. That's nothing to sneeze at.
The show also has a Patreon page that's been doing pretty well. Right now, we have 74 patrons that support the
show financially. That money goes exclusively to the show. It's all reinvested. We're talking
hosting fees and the cost of research, mainly. That said, I am talking to you through hardware
bought with patron donations. My audio interface and my microphone were both funded, in part at least,
by patron funds. Things can really add up after a while, and I'm thankful for all your contributions.
You've helped me get my hands on a bunch of out-of-print books and scans of documents buried
in archives. If you need some more concrete examples, I can offer a few of my favorite uses of these donations.
Back when I ran my BASIC episode, one of the main sources I used was a book called Back to BASIC.
Nice punny name.
It was written by Kurtz and Kemeny, the researchers who developed BASIC in the first place.
That book was only available in physical form and was really the best accounting of the birth and development of the language.
A more recent example is the ACM's History of Programming Languages 2.
This is this big, hardbound tome of conference proceedings from, well, ACM's second History of Programming Languages conference.
What's really cool about this conference is that the speakers were all involved with the development of the languages they were discussing.
So you have people like Dennis Ritchie talking about and even doing a Q&A session about the
development of the C programming language.
Everything is packed into this book, from papers to transcripts
of panel discussions. They even have little slide photos. The proceedings for the first conference
are easy to find. They're digitized and available for free online. The third conference, HOPL III,
was recorded, and you can find videos of it on YouTube. But HOPL II has not been digitized. Or if it has, it's hidden somewhere
outside the long arms of Sean. In the last few weeks, I was finally able to track down a physical
copy of the second proceedings, which is now actually sitting right behind me on my sources
shelf. I'm going to get a whole lot of good episodes out of that. The final example I want to give is more archival. One of the recurring characters on the podcast is
the one and only Vannevar Bush. He pops up any time we're dealing with hypertext or user interfaces or,
strangely enough, analog computing. In 1970, he wrote an autobiography called Pieces of the Action that covered his
really far-flung life and contributions. He was involved in everything from analog machines to
the birth of hypertext to the atomic bomb. It's kind of staggering when you think about it.
Anyway, in preparing for that book, Bush sat for a series of oral history interviews.
Those were never published.
Instead, parts of them were adapted into Pieces of the Action.
Well, it turns out, those interviews were preserved at MIT's archives.
Thanks to patron donations, I now have a copy of those transcripts.
It's a few gigabytes of PDFs and
has become a really valuable tool for figuring out just what Bush was actually involved with.
I've also been saving up donations for a big research trip. Nothing's set in stone yet,
but there are a few archives around that I've been trying to visit. I live out in a pretty
rural part of Northern California, so it's not always easy to get to
archives. I get to plan these longer-term projects thanks to my patrons. So hey, I guess this can
also be a plug for the Patreon. If you want to support the show, then go sign up. You can find
a link at my website, adventofcomputing.com. Patrons get episodes as soon as I'm done editing them, which
is usually a day or two before anyone else. They also get these bonus bite-sized episodes every
few months, and they get to vote on the topic of those episodes. I actually need to get another
one of those sorted out pretty soon, so now's a great time to join. Alright, so the other big
state of the pod topic is, well, the non-podcasting thing I'm working on.
That's Notes on Computer History.
That's my project to start up a non-academic journal covering the history of computing.
The core idea is to provide a place for anyone to write on and research the kind of stuff I cover here on Advent of Computing.
One of the struggles I've really run into is finding a place to publish some of my side research. Believe it or not,
I do run into some topics that are better done in print with big glossy photos, or some topics that
I think are too technical for the general audience that I try to make Advent of Computing fit with.
Since I'm not an academic,
I don't work for a college, and I don't have an advanced degree, I can't reliably publish in
academic outlets. I'm a bit of an outsider. I'm a renegade, if you want to be somehow romantic
about not having a PhD. Doing freelance writing also just kind of sucks, so I don't do much of that.
Over the years, I've kept thinking about how I wanted a place that was somewhere between
freelance and academia that I could send articles to. There really isn't a spot like that for
computer history specifically, so I decided last year to try and make that place. That's notes on
computer history in a nutshell.
Right now, we have six articles that are ready to roll, and we have a pretty good staff of editors,
I think. I want to get another four or five before I finalize things and get the first issue ready to
publish. The biggest hurdle right now is finding authors. I've gotten a lot of emails from interested parties, but not all of them have
sent in finished drafts. I was kind of ready for the first issue to be a bit of a slog.
I think things will get a lot better once the journal has a track record. But for now,
I implore you, please shoot me your submissions. If you're listening, then you could become an author. Yes, you,
dear listener. For the full rundown, you can go to history.computer, which I think is the
best domain name I've ever been able to use. Issue 1 will come out sometime this year.
Mark my words. While I was putting together this segment, I was thinking about just going down a
bullet list of what's
been going on with the show, but that's kind of boring. A lot of that would be kind of dumb
technical stuff that only I care about. So instead, I posted around asking for some questions from
listeners. I figure that's a way that I can actually pass along information that you are
curious about. This is in no particular order, and some of these questions may be mixed in later on in the episode.
So, to start with, one listener asks,
The show sounds part scripted and part ad-lib.
It would be interesting to hear how you script it up and what a recording session looks like.
Well, I can answer that very deftly because I am in the middle of a recording session right now.
I'm actually really pleased with this question.
So one of the things that I try to do with the podcast is make it sound personable.
I don't want to have it be just me reading from a script.
That's boring.
And it turns out that written text is different than spoken word.
You know, who would have thought?
What I've found works for me is I do fully script the episode,
and I do go off script quite a bit,
but I've kind of dialed in the ability to write how I talk, if that makes sense.
If you were to read my script, it kind of just matches up with my speaking patterns.
It does make it hard when I write for print, but when I'm doing scripting, it's really convenient.
I know my mannerisms and how I want to phrase things when I speak.
So, I'm glad that's coming across.
Alright, the next question.
How do you decide what topics to cover?
Will you ever discuss the home computer revolution, both 8-bit and 16-bit?
Another good question.
My decision-making process is somewhat of an enigma even to me.
So I keep a big sheet with every topic I can think of. Whenever I come across an article or
a source or a reference to something that sounds neat, I put it in the sheet.
So I'm a spreadsheet man. It's all in the big spreadsheet. When I need a new topic,
I'll usually pick something off of that list. It also includes listener contributions,
so I will get emails from people saying, hey, you should cover this topic. So usually what ends up happening is I just
cover whatever sounds interesting to me in the moment. I'll finish recording and I have Sunday
off of my production schedule. So I'll sit around, look at the spreadsheet and go, yeah, keyboards, or hey, Unix sounds cool this week.
The other part of my calculus is I try to not put similar topics near each other.
So if I do a programming language episode, I'm not going to do another language episode right after that. I might wait a month or two since I don't want this to just be advent of computing, the programming language
month or quarter. That's not the most fun thing. I prefer a little variability in my life.
The second part of that question, will I ever discuss the home computer revolution?
I kind of, I think I do a little bit. I will admit I'm a bit of a PC head.
I grew up with an old IBM AT clone.
So the PC is really the platform that I think about when I think about the home computer
revolution.
And that's kind of been borne out by my coverage.
That said, I really do need to talk more about the 8-bit part of the revolution and the 16-bit
part.
I haven't really hit the Atari ST at all, and that is 16-bit.
Anyway, the short answer is yes.
I will discuss more of the home computer revolution.
I just need to find the time in the schedule and really find the specific angles that I'm interested in talking
about. Especially with 8-bit computers, there is a whole lot of really good coverage about that in
the historical scene and also in the retrocomputing scene. So for me to cover something like, say,
the Sinclair Spectrum, I'm gonna need to do a bit of research and find an angle that I think would
be fresh and would actually add to the conversation instead of just being, you know, rattling off
facts. The rest of the submitted questions are kind of woven in to the rest of the episode's
narrative, but there is one more that I want to quote directly. You did a great job answering some of this in previous
episodes, but I do have a crazy question, dot dot dot. Given your reluctance to go beyond, say,
1984 or so, which I share and understand, dot dot dot, do you think you'll ever run out of material
for the podcast? Now, this is something that I've actually been asked quite a few times at this point.
The short answer is no, I don't think I'll ever run out of topics.
One lesson I've learned time and time again is that computer history is a shockingly dense subject.
A lot of episodes that I thought would be simple have really ended up devolving into huge rabbit holes.
Whenever I get into a topic, I find all these little nooks and crannies that are just
asking to be probed. It's the kind of thing that could take a lifetime. I don't know if I'll go
that far, but it definitely gives me material for a lot of episodes. The long answer to this question, well, I think that brings us nicely to the
unsolved computer mysteries. For these rambling tales to make sense, I think we should start with
a little introduction and discussion about how I handle research. You've probably noticed that
I tend to go for more obscure topics. At least, I rarely go down well-trodden paths.
There are some good reasons for this.
I'm not just trying to be cool.
I like covering lesser-known stories because I like to think I'm trying to fill in gaps
in the larger narrative.
There's been a lot of good work done on things like the rise of Microsoft and Apple,
and countless biographies of well-known figures.
I'm not going to ever do a show that's just about Bill Gates, for instance. Other people have done
that better and more exhaustively. It's this drive towards obscurity that kind of puts me in a weird
position. I often get asked which books on computer history I can recommend, or which books I use for the show.
I usually can't answer that very well because, at least when it comes to the podcast, I don't use a whole lot of books.
That might sound like an oversight on my part, so let me try and explain.
I have a hierarchy of sources in my head.
At the very top of my source pyramid are primary sources.
Those are texts written by people involved with an event at or near the time of the event.
That's the gold standard as far as I'm concerned.
Primary sources let you see how people were talking about a subject at the time, in their words.
That's as close to a time machine as we can get, at least for now.
In this category, we're looking at notes, memos, scraps of paper, napkins, contemporary articles,
and contemporary interviews. These sources are great, but they can be kind of scarce. They can
be hard to find and hard to get my hands on. Lower down, we get to interviews
and recollections after the fact. These kinda sit between primary and secondary sources, at least
in my head. Time can dull and mix up stories, but these are still great sources to work from as long
as you consider possible inaccuracies. This is where things like oral
histories, one of my all-time favorite types of sources, come in. These are just interviews
conducted with someone who was present for the event. The Computer History Museum in Mountain
View has this wonderful treasure trove of histories that I work from pretty often.
This is also the tier where we hit books that I actually do like to use. On occasion, involved parties will write books years after events. The BASIC book by
Kurtz and Kemeny is one example from my growing collection. Another source that comes to mind is
iWoz. That's an autobiography written, strangely enough, by Steve Wozniak.
These types of books will answer all of your questions from a, hopefully, reliable narrator.
That is another one of those things that you have to keep in mind.
These primary and a half sources are also neat because they show how someone wants to portray their earlier work.
That can tell you a lot about a person and their motivations, but it can make interpreting these sources a little bit tricky.
You have to read with an eye for nuance.
The final layer is where we reach secondary sources.
Books and articles written by other people.
These can range from really, really good to garbage and everything in between.
I have been burned a number of times with these secondary sources.
The main reason I try to avoid these really comes down to interpretation.
In these texts, someone has already gone to the primary sourcing, at least hopefully they have, and done the research. They then interpret and synthesize these sources into
a story. That's exactly what I do on the podcast every week. So secondary sources will always have
some kind of angle to them. Once again, it's what I do on the show. I always have some kind of angle
I take, even if I'm trying to be unbiased. That's just how we work as people. Another consideration
I make is if there is already a secondary source, then someone's already done the job for me.
I don't want this to be a book review podcast, so I'll often forego a topic if I know there are
existing texts that cover it really well. As the show's gone on, I like to think I've gotten
better at avoiding the book review style episodes. That said, there are still some secondary sources
that I do like to use. This usually happens when the primary sources are inaccessible to me or
utterly impenetrable, but I still want to discuss the topic. Sometimes I think I can add to the
discussion instead of just doing a book rundown. Minitel: Welcome to the Internet is a great example.
That book covers a French computer network. It just so happens I don't actually speak French,
so I can't read the primary sources. I know, it's a travesty. The Friendly Orange Glow is
also another great example. That text is all about the Plato series of educational terminals.
It's the definitive work on the subject, and primary sources are hard to find.
There are also many books that contain interviews in them, usually verbatim. Last episode was on
BSD, and my main source for that episode was a book called A Quarter Century of Unix.
That text is absolutely packed with verbatim interview transcripts, which makes it a fantastic
source to work with.
I guess this is just a really rambling way of saying that I have my preferences.
I prefer to use primary sources, I usually use those weird primary and a half sources,
and sometimes I have to pull from secondary.
My primary source preference is partly due to my educational journey. I went to college to become a physicist.
I even happen to have a Bachelor of Science in the subject. One of the things that was drilled
into us was intense incredulity. Don't trust a paper just because it says something. To believe a claim,
you have to back it up. You have to prove it. So when it comes to computer history, I take
kind of a similar systematic approach. I want to be able to prove something happened in a certain
way at a certain time, or at least present evidence that points to a probable series of events.
That's just what feels right to me.
But this leads to a problem in my life.
Unsolved mysteries.
During the course of normal research,
I tend to hit these spider webs of connected events.
I will usually follow up on a lot of these leads.
It's just how I am as a person. I'm easily distracted by little hints. One great example is an episode I did on the Viatron.
I first saw a mention of this weird company when I was reading an oral history with Chuck Peddle.
That was for an episode on the 6502 processor. Now, during that interview, Peddle mentioned in
kind of an offhanded way that one of his old co-workers had gone off to work at this
Viatron place. He'd thrown in a few details, then said the company fell apart when everyone
realized it was actually just a massive stock fraud. That small mention led me down a tangent which eventually ended up with a
full hour-long episode. I'm afraid, dear listener, that that's only the best case example for these
weird nooks and crannies. Sometimes I hit dead ends. I try to keep track of those for later,
at least the ones that I think are worth solving.
So, allow me to introduce you to a selection of these mysteries, why I think they're worth solving, and how they may eventually be untangled.
Like I mentioned at the top, this is actually a call for help.
Help me!
If you or a loved one knows anything about these computer mysteries, then please get in touch.
You can find my email and all my contacts at adventofcomputing.com.
Let's start things off with a light mystery, just to whet the palate.
There are a lot of questions we can ask about the history of computers that simply have no clear-cut answers.
However, there is one fundamental question that legally does. Who invented the first electronic digital computer? According to Honeywell v. Sperry,
a lawsuit over the patentability of ENIAC, the creator of the electronic digital computer is none other than Dr. John
Atanasoff. That also means that when we talk about the earliest digital machines, we have to deal
with four Johns: Atanasoff, von Neumann, Mauchly, and Eckert. It's a bit of a pack to keep track of.
There are some more caveats here, of course, nothing is ever clear-cut for me.
Specifically, Atanasoff was ruled, in a court of law, the first to have claim over the idea of
using digital logic circuits to construct an electronic computer. There are more legal things
involved, but that's the summation as well as I can understand it.
This stems from work that Atanasoff and one of his grad students, Clifford Berry, did in 1942.
They actually built an entire functioning computer.
The machine was called ABC, the Atanasoff-Berry Computer.
While not programmable in any modern sense, it did perform complicated mathematics using digital logic circuits. It even had memory. It was able to
solve matrices, which really isn't the easiest thing in the world to do. The story of the ABC
is well known and well covered. Atanasoff sat for many interviews, and there are older primary sources from the 40s that we can go off of.
But there's this part to the story that's stayed in the back of my mind.
You see, Atanasoff cracked the digital code late at night in a quote-unquote honky-tonk
somewhere near the border of Illinois and Iowa.
That's the birthplace of the computer, as far as I'm concerned. Now, I know that a lot more went
into digital machines, but I just like the story that computers were birthed in a bar late at night
in the Midwest. Here's the full story from Atanasoff himself, to quote.
After months of work and study, I went to the office again one evening,
but it looked as if nothing would happen. I was extremely distraught.
And then I did something that I did in those days, but I've had to stop lately.
I got in my automobile and started to drive.
I drove hard so I would have to give my attention to driving and I wouldn't have to worry about my problems.
And whenever and once in a while I would commence to think about my efforts
to build a computer, and I didn't want to do that, so I drive harder so I wouldn't.
Here, I drove towards the east.
I was driving a Ford V8 with a south wind heater.
I don't suppose you know what a south wind heater is.
Pretty warm, but the night was very cold.
It was in the middle of the winter in 1937 and 38.
When I finally came to earth, I was crossing the Mississippi River, 189 miles from my desk.
You couldn't get a drink in Iowa in those days, but I was crossing into Illinois.
I looked ahead and there was a light.
Of course, it was it.
And I stopped and got out and went in and hung up my coat. I remember that coat.
And sat down at the desk and got a drink.
And then I noticed that my mind was very clear and sharp.
And I knew what I wanted to think about,
and I went right to work on it and worked for three hours,
and then got in my car and drove slowly back to Ames.
And I had made four decisions in that evening in the Illinois Roadhouse.
Use electricity and electronics. That meant, of course, vacuum tubes in those days. Use base two in spite
of custom for economy. Use condensers but regenerate to avoid lapses. Compute by
direct action not by enumeration. That's some fundamental stuff that was decided late at night in a bar.
I mean, we're still using binary.
And really, you just gotta love these kinds of stories.
John was frustrated, so he decides to go on a reckless nighttime drive.
Things got a little out of hand.
He winds up in another state, sees a roadside bar,
which in most other tellings he calls a honky-tonk, and then decides, hey, you know, a drink
might be nice. This leads to the conception of the digital computer. The tenets that he laid out in
the bar are at the core of every machine we use today in some shape or form. There's been an evolution,
but Atanasoff really cracked the code at that honky-tonk. Better still, his ideas were miles
ahead of the first generation of computers that were built at the tail end of World War II.
Sure, the ABC might not have been able to do as much as ENIAC, but it was based off more advanced theory.
That theory was thought up while drinking in a roadside tavern over a hundred miles away from a lab.
So where is this mythical honky-tonk?
Where can we all travel to venerate the birth of the first computer?
Therein lies the mystery. Atanasoff
even mentioned in a few other interviews that he can't remember exactly where he went. What we do
know is that he was driving east on Highway 30 at high speed in the dead of winter. He recalls
crossing the Mississippi River at Davenport, Iowa. A few miles later, he stopped at the fabled watering hole.
Then we meet back up with the known story.
He drives back to Ames.
He builds a computer with one of his grad students.
It goes into cold storage for years.
It's eventually found by some lawyers and rebuilt and brought into a courtroom.
I think it's clear why this is an important mystery to solve.
I mean, it's not earth-shattering stuff.
It's more around the level of maybe a nice historical plaque.
Plus, how cool would it be to go for a drink in the bar where the computer was born?
So how do we go about solving this one?
The biggest problem, I think, is that towns and roads shift around a little bit. The other is that Atanasoff's recollections may be a little shaky
here. Highway 30 doesn't actually go through Davenport, Iowa. It crosses the border in a town called Clinton. So I think the most probable location,
looking at some older maps, is a town called Fulton, Illinois. If he did cross in Davenport,
then the target would be Moline, Illinois. The trick here is to find some old maps of those
towns that show businesses. Maps from the late 30s, maybe the really early
pre-war 40s. We're looking for a roadside bar or restaurant that would have been visible from the
highway. I'm assuming the bar will have some Wild West-ish name, since Atanasoff usually called it
a honky-tonk. It's the kind of place that would have a piano and serve whiskey.
Now, I haven't really had much luck finding good maps online,
or I'm just bad at searching for maps.
It's a little outside my usual realm of expertise.
I think the best place to look for this would be a local historical society.
So if you happen to live in this part
of the world, then I could see this being a pretty fun weekend project. The bar probably
doesn't exist anymore, the building may have even been torn down and replaced. But if it does exist,
well, I think it would be really worth a trip out to Illinois.
Alright, on to the next mystery.
This is going to be another small one, but definitely an obscure bit of lore, a deep cut.
Have you ever seen the music video for Money for Nothing by Dire Straits?
If you haven't, then go take a look.
The bulk of the video is very early computer-generated imagery.
You know, it's bad CGI.
It shows these robot-looking dudes kinda jukin' around movin' simple machines like refrigerators and, yes, also color TVs.
It's boxy, colorful, and kinda crude. But hey, it was pretty cool for the mid-80s.
How was CGI created in that era? Well, it really depends. One approach was to use
dedicated hardware, machines built for generating the proper images. That's the approach that was used for this Dire Straits
video. There were two machines involved here. One was the Quantel Paintbox, a relatively well-known
device that was used in a lot of video production in the 80s and 90s. The Paintbox could do things
like compositing, image overlaying, and other fancy video trickery. It's a well-documented
machine, so that's not the mystery. That's boring. The second machine in the toolchain was the Bosch
FGS-4000. And that is the mystery. There is very little I can dig up on this machine. It was some type of custom device built by Bosch. It had a
68000 processor, plus some kind of in-house TTL logic-based processor for the actual rendering
side of things. It was used to produce 3D renders. How did it work? What were its full capabilities?
We don't really know for sure.
I ran into this mystery all the way back in episode 17, which covered the BBC Domesday Project.
This was a really cool program developed at the BBC. It used a special Laserdisc player,
coupled with an upgraded BBC Micro, to present an interactive survey of the United Kingdom.
Part of the Domesday Project was an interactive 3D museum. A user could, quote-unquote, walk through this museum and look at various artwork on its walls.
These exhibits linked out to other parts of the program, so
this was kind of a 3D hub world for part of the Domesday discs. The Bosch FGS-4000 was used to
render this environment. I think between Domesday and Money for Nothing, we can say that the FGS-4000
is a somewhat important machine. It's maybe a step above a footnote.
Despite the relevance here, the machine itself remains pretty enigmatic. I can find one other
video that was rendered by a 4000. It's an 11-minute demo animation that apparently won
a technical Emmy, but I can't find more information
on that award. There's just some mention of it in the sources that we do have. So I guess that's,
what, three sources and some contemporary articles that say an Emmy was won?
So here's the thing. We have no documentation about this Bosch computer. We have a few threads
on Twitter and the Vintage Computer Federation forums, some print ads, and an ACM SIGGRAPH
webinar that includes a Bosch employee. I actually just found a recording of the webinar, so
that's a new source here. We get two main issues with discussing the FGS-4000.
The first is, of course, the lack of solid technical information. We have a thread on
the VCF forum where someone who worked with the 4000 explains the machine in very broad terms,
but we don't have real documentation.
That's a shame, because it sounds interesting.
I mean, it had a stock microprocessor paired up with custom logic.
I really want to know more about how that worked.
It sounds kind of like a GPU-CPU setup, but I just have no idea.
I don't think anyone really does unless they work with this machine. How would
one go about solving this mystery? Well, this is the next level up, so it's a little more difficult.
The FGS-4000 sounds like a pretty niche, industry-specific kind of machine. Worse still,
Bosch still exists as a corporate entity. I found that industry-specific stuff tends to be poorly preserved.
I think it's just that no one outside the niche really cares enough to scan or archive documents.
That's doubly so when you're dealing with a live company.
When a company goes under, you can get lucky.
Troves of internal information can leak, papers can find
their way into archives, that sort of thing. But few operating companies want to open their file
cabinets to researchers such as myself. There are a few possible routes here. One would be to try
and deal with Bosch directly. I haven't gone down that path yet, partly because I already have a whole folder of
please stop contacting me emails from other companies, but hey, maybe they'd be responsive?
I just have no idea. The second route, and I think the most promising, is to track down someone who
worked with an FGS-4000. People like to scuttle away documents, and I'm willing to bet that someone out there
has all the details on the machine just sitting on a shelf or, you know, maybe in a storage
unit.
Once again, solving this mystery won't be earth-shattering, but it would certainly be
an interesting machine to learn more about.
Our next mystery is something that's actually vexed me since high school, if you can believe it.
When I was a teenager, I got really deep into the world of operating system development.
For me, that meant countless hours bashing out assembly language and tinkering with old PCs.
It was the type of mania that only the young can sustain.
One of my favorite activities during this period was searching for strange operating systems that I could try out myself. I figured it was a good
way to see what had already been done and maybe see if there were some good ideas that I could
learn from. At the time, we had dial-up at home. Remember, rural California. But my school, they had access to broadband.
Whenever I had free time during school hours, I'd browse old FTP servers,
scramble over the net, and slowly acquire new disk images. After a while, I'd built up a pretty
solid collection of oddities. Everything from small early projects like Loose Those to bigger productions like QNX.
Even some weird ones like a DOS program that could load you into a very minimal Linux distro.
That was one of my favorites to mess with.
There was one system that caught my attention like nothing else.
In my misspent youth, I came across something
called Pick OS. It was an operating system, and it was also a database. It ran on some of the
earliest x86 hardware, and perhaps best of all, its history reads like a bad piece of internet fiction.
Allow me to tell you the tale.
In the middle of the 1960s, a programmer named Richard Pick worked for IBM.
But all his friends called him Dick Pick.
He was assigned to this project called the Generalized Information Retrieval Language System, or GIRLS.
This new database had been designed to help build attack helicopters.
But projects run their course, and eventually Dick Pick had worked on GIRLS as much as he could.
Dick left IBM and started his own company. At this new outfit, Dick developed the Pick operating system for the IBM XT and 100%
compatibles. Pick was similar to GIRLS, and it was a fancy database. The key difference is that Pick OS,
well, it was an entire standalone operating system. It let you boot directly into a database manager.
Its native interface was called English.
So, for the first time, programmers could simply work in English, all thanks to Dick Pick and his foundational research on GIRLS. Now, I know this all sounds really dumb, but it's all real.
Dick Pick actually existed.
His database software actually made a big splash, at least in some circles.
The technology would be licensed out, so there are still these descendants of the Pick database in use today.
Pick OS was popular enough that it showed up in newspaper and magazine
articles. So there is a contemporary paper trail. It existed. We have manuals, programming guides,
even third-party books like Exploring the Pick Operating System. We have, really, by my standards,
at least, a lot of information about Pick OS.
There's probably enough for me to do an episode.
I'm actually pretty sure I could put something together.
That said, we're missing one key thing.
The actual software itself.
Now, I've looked high and low for ages and turned up nothing.
I've contacted people online who claimed to have boot disks and only received DOS programs. That's actually another issue here: there was Pick OS,
and there was also a DOS program that was the Pick database, but not the operating system. It makes it hard to sort things out when you do find them.
I've scoured eBay for years, and the best I've found, in another aggravating twist,
is a binder for Pick OS that has empty sheets where the floppy disks would go. I swear this
thing exists somewhere. The reason I find this so vexing
is that there are more obscure operating systems that exist in better states of preservation.
Once we hit the PC era, it becomes much easier to track down software. I mean, there's all kinds of
strange Unix derivatives that you can pull up on a whim and still run today. But
Pick OS? Nowhere to be seen. Why is this the case? Well, I think this is another manifestation of
the industry curse. I can see a scenario where, much like the FGS-4000, Pick OS was only used by a small set of folk.
It was only used for a niche application.
I mean, it's a database operating system, after all.
That's pretty niche.
So, there were never enough people to really leak disk images.
My other conjecture is that there may be copyright fears.
Since PIC-like systems still exist,
some may be concerned that releasing a disk image would awaken some sleeping lawyers in some cavern.
I think Pick OS is old enough to be abandonware at this point,
but I'm no law-knower, so I could be very wrong.
I'm pretty sure the only way we'll ever see Pick OS running is if
someone out there with a disk decides to speak up. So, I will ask again. If you or a loved one
has a working Pick OS boot disk, then please get in touch. I really want to see what this operating
system was like. I want to fill in this final gap in the story. Alright, next one. So far,
we've been looking at pretty low-stake mysteries, so I think it's time to up the ante a bit. Allow
me to introduce you to a mystery that I've been actively trying to solve for quite a while now.
The enigmatic Project Lightning. This is one of the few topics on the show that
might actually get me put on another list. Now, I learned about the existence of Project
Lightning when I produced episode 31, Road to Transistors part 1. That episode dealt with,
among other things, the Cryotron. If you want the full advent of computing take, then just go listen
to the episode. The short explanation is that the cryotron was an early competitor to the transistor,
at least when it came to digital circuits. During the 50s, there was a push to find a logic element
to replace the vacuum tube. Researchers were looking for something faster, smaller,
less hot, more reliable. Really, it wasn't hard to be better than a vacuum tube, in this era at least.
The eventual winner was, of course, the boring transistor, which is basically a semiconductor
switch. It's a stack of different semiconductors that can be switched on or off
using an electrical impulse. But the transistor didn't take over right away. At first, these new
devices were actually hard to manufacture and not that reliable. The process for making transistors
is also highly toxic and poisonous, but that's a different thread to pull. The cryotron was
an alternative option. These are superconductive switches. Cryotrons were invented at MIT in 1953
by Dr. Dudley Allen Buck. Its principle of operation is, well, really, really cool. Pun very much intended here.
You start with a superconductive wire, which has to be very cold to function. Superconductors are
fascinating materials. They're perfect conductors, materials with zero resistance. There are caveats, but for anyone who's not a physicist,
it's a wire with zero resistance. That's cool. This leads to all kinds of weird properties.
For our purposes, they function as perfect conductors. That is, unless they're exposed
to a magnetic field. That actually kills the superconductor. It straight up makes it
stop conducting electricity. That is a switch. Buck figured out that you could exploit this
property to make a logic element. You take a superconductive wire, wrap it in just a normal
wire, you chill that assembly down, and then you have a nice conductor.
You can pass current through the superconductor to your heart's content.
But as soon as current goes through the coil, the superconductor stops working. It turns off.
The overall function here is identical to a transistor or a vacuum tube, at least when it comes to digital operations.
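If it helps to picture the switching behavior, here is a tiny toy model in Python. This is purely my own illustration, not a circuit from the episode or from Buck's papers; it just treats the cryotron as a gate wire whose conduction is suppressed whenever the control coil carries current.

```python
# Toy model of a cryotron as a controlled switch (illustrative only).
# A real cryotron is a superconducting "gate" wire wrapped by a control coil;
# current in the coil creates a magnetic field that quenches superconductivity,
# so the gate wire stops conducting.

def cryotron_conducts(control_current_on: bool) -> bool:
    """Return True if the superconducting gate wire still conducts."""
    # Control current on -> magnetic field -> superconductivity destroyed -> gate goes resistive.
    return not control_current_on

for control in (False, True):
    state = "on " if control else "off"
    print(f"control current {state} -> gate conducts: {cryotron_conducts(control)}")
```

Digitally, that simple on/off behavior is all you need for the cryotron to stand in for a vacuum tube or transistor in a logic circuit.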
These newfangled cryotrons also offered distinct advantages to their competition.
They switched very quickly, they allowed for fast signal propagation, and were relatively
easy to manufacture. I mean, they started out as just a wire wrapped with another wire. How much more
simple can you get? You can mass produce that. The only downside is that cryotrons only work at
really low temperatures. At that time, that meant that a cryotron circuit had to be submerged in
liquid helium. That particular element, helium, is actually a pretty scarce resource here on Earth.
So, there are some impractical aspects here, but everything could be remedied with some more
research. Buck would actually take the cryotron pretty far. By the late 1950s, he and his colleagues were
even producing integrated cryotron circuits. The best part,
at least in my opinion, is that the cryotron was explicitly digital technology. Both vacuum tubes
and transistors started out as analog amplifiers. They were adapted to be digital. The cryotron
was all digital from the beginning. This is some homegrown computer magic.
At this point, though, we all know that the transistor would win. I mean,
when was the last time you had to top off your computer's liquid helium tank?
The question is, how far did the Cryotron go? This leads us to a pretty wild story.
There are two pieces to this.
I'm going to start out with the tragedy before moving into the spooky part.
Buck actually died very young.
I'm going to quote directly from dudleybuck.com, since that site gives a better summary than
I could.
Quote,
In April of 1959, a delegation of seven top Soviet computer scientists visited facilities and staff at IBM and MIT.
Included on the tour was a meeting with Dr. Dudley Buck.
Vice President of Lockheed Missile Systems scientist Dr. Louis Ridenour,
in the previous December,
told President Eisenhower to appoint Dudley Buck to his National Security Agency Scientific
Advisory Board. The NSA-SAB consulted directly with President Eisenhower on matters such as
the future of the U-2 spy plane, the Corona spy satellites, and early detection of Soviet attacks.
Dr. Ridenour and Dudley were scheduled for two days of meetings in late May 1959. Three days before this meeting
was to begin, 32-year-old Dudley Buck died of a mysterious illness. On the very same day,
47-year-old Dr. Ridenour would die. End quote.
Buck and Ridenour had been friends and collaborators for years,
and that makes their deaths all the more bizarre and tragic.
There's also this implication that it may have been more than random chance.
Buck suddenly took ill at his offices at MIT.
He went home complaining about symptoms, then died the next morning.
Ridenour died of a brain hemorrhage on the same day in a hotel room.
It's the kind of thing that kind of begs to be read into, as morbid as that is.
It sounds like the end of a Cold War thriller.
The full story of Buck's work has been compiled into a book called The Cryotron Files. This is actually one that I strongly recommend reading. One of the co-authors
is Buck's son, Douglas, so there's a very personal motivation behind the research.
This also gets us into the mystery. The Cryotron files briefly mention this thing called Project Lightning.
It was an NSA-funded project dedicated to the creation of an ultra-fast computer.
A number of technologies were examined during the project's inception.
The final decision was to focus on cryogenics, to build a cryotron-based computer.
There are scant sources on lightning. We have one ACM paper that's
somewhat related, and two or three declassified reports from the NSA. The NSA reports were
declassified as part of a Freedom of Information Act request, and I'm willing to bet the request
was filed by Douglas Buck himself. Anyway, we know next to nothing about Project Lightning.
We just have the most cursory details.
It was funded by the NSA, it involved contracts with IBM, RCA, and maybe other companies.
There may have been a test machine delivered.
That's about it.
At least, that was the state of things at the beginning of 2022.
I've actually been grinding on this one. My biggest hope right now rests in the tender
arms of the NSA themselves. I have an outstanding Freedom of Information Act request with them.
It's been accepted and I even have a case number, so it's just a matter of waiting around.
My request is pretty broad, so it might take a while, but eventually I should have a pile of
internal documents about Lightning. But that's a waiting game, so there's no information on that
front. But check this out. I actually have evidence now that Project Lightning bore fruit. At least,
bore circuits. Last year, I talked one of my research associates, read: long-suffering friends,
into hitting up an archive for me. You see, I'm stuck here in NorCal. I can't exactly
jet-set around the world. I have to hold down a day job. At least, I can't
jet-set around the world yet. I found that the Charles Babbage Institute, located at the
University of Minnesota, actually held some promising-looking boxes in their archive.
This unnamed friend, you know who you are, happened to be passing through the area, so
I made some calls and had them pay U of
M a visit for me. Right now I'm sitting on about 800 or so pages of reports on Project Lightning.
These are from IBM, RCA, and RAND. I've been holding off on going through these reports
in depth. I'm planning to go through everything once my FOIA request comes in.
I kind of don't want the suspense, I'd rather solve the mystery all at once if possible.
That said, I have read some of these, I just couldn't help myself. What I've found so far
shows that Lightning was a lot closer to reality than I initially assumed. Some of these reports have schematics for
components of the Cryotron computer. I'm talking logic elements, math circuits, and memory.
That last one, memory, I think that would have been the biggest hurdle.
Anyway, we're really close to solving this mystery of Project Lightning. I have a whole
pile of sources and more on the way. That said, there's still a lot more information out there.
I haven't followed all the leads yet, mainly because those leads drop me back into libraries
I can't easily get to. So here's the call to action. If you live near the University of
Minnesota, then get in touch with me. I think if I had someone in the area to help facilitate things, we could scrape the archives
clean, since there are still more progress reports at the Babbage Institute. The other avenue that
could bear fruit might be the National Archives or maybe internal documents from the contracts
in question. This is the kind of mystery that I think will ultimately be solved by slow, hard work.
As far as importance goes, I would rank Project Lightning really high up there.
There was a point where transistors weren't a clear winner,
they weren't a clear replacement for vacuum tubes.
The Cryotron could have been the way of the future. Lightning
represents a totally different way to build a computer. That on its own is really neat.
It's a pivotal project that's remained hidden inside classified documents in beige folders.
It's definitely worthy of seeing the light of day. Now, I know this episode's running a little long,
but this is a party, and I have one more mystery to share.
This brings us to perhaps the biggest mystery of my life.
If you're a long-time listener, you can probably guess where this is going.
It's my obsession, my quest, the one and only edge-notched card.
This is a mystery that I've been working on since the early days of the show, and one that I think
is also eminently solvable. I first learned about edge-notched cards back in the single-digit
episodes. I ran across an article on Hackaday that
just mentioned their existence. They sounded neat, I did some research, and I fired off episode 6.5,
a mini-episode about this technology. It's like 15 minutes if you want to listen to it.
Edge-notched cards are essentially a physical form of a database. Each card is just a piece of cardstock, some stiff
paper. The perimeter of the card is perforated with holes that can be cut to form notches.
That simple schema is used to encode metadata about what you write on the face of the card.
This metadata can range from simple categories to numerical data and all the way up to complex character
encoding schemes. The point is that edge-notch cards provide a way to tag, categorize, and
even sort information. On its own, I mean, I getcha, that sounds really boring, right?
It's the application of this technology that's, I think, where the interesting part is.
The best example comes when you give each card a serial number.
Of course, this is notched on the card's edge.
You can select for notched cards by placing a so-called sorting needle,
just a long, thin rod, into a given position.
By lifting up, any card with a notch in that position will fall out of the pack.
This is a big O(1) operation, meaning, in algorithm terms, that no matter how many
cards are in the deck, it takes a single operation to get a result. By the cunning application of this
technology, you can do something really cool. You can make hypertext. You can make links.
This is a big deal. You give each card a unique identifier. Then you can link out to it by writing
that number on the face of another card. Doug Engelbart, one of the first people to really
dig into this whole hypertext thing, well, he used this exact schema while he was conducting
his early research. That puts edge-notched cards into a very crucial historical context.
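If you want the mechanics spelled out, here is a minimal sketch in Python. This is entirely my own illustration rather than anything from Engelbart's notes or the episode: the card fields, category codes, and sample deck are all hypothetical. The point is just that one needle pass selects every notched card regardless of deck size, and that serial numbers written on a card's face act as links to other cards.

```python
# A minimal sketch of an edge-notched card deck (illustrative, hypothetical data).
# Each card has notch positions cut along its edge; one "lift" of the sorting
# needle drops every card notched at that position, no matter how big the deck is.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Card:
    serial: int                                        # unique ID, notched on the edge in practice
    text: str                                          # what is written on the card's face
    notches: set[str] = field(default_factory=set)     # notched positions, e.g. category codes
    links: list[int] = field(default_factory=list)     # serial numbers written on the face

def needle(deck: list[Card], position: str) -> list[Card]:
    """One pass of the sorting needle: every card notched at `position` falls out."""
    return [card for card in deck if position in card.notches]

# Hypothetical example deck.
deck = [
    Card(1, "Memex memo",        {"HYPERTEXT"},             links=[3]),
    Card(2, "Payroll survey",    {"OFFICE", "SURVEY"}),
    Card(3, "Augment interview", {"HYPERTEXT", "INTERVIEW"}, links=[1]),
]
by_serial = {card.serial: card for card in deck}

for card in needle(deck, "HYPERTEXT"):                     # category selection
    targets = [by_serial[s].text for s in card.links]      # follow the face-written links
    print(card.text, "->", targets)
```

In the physical version there is obviously no dictionary lookup; following a link meant selecting on the notch positions that encode the target's serial number. The dictionary here is just a shortcut to keep the sketch short.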
Engelbart even mentions notched cards in Augmenting Human Intellect, that's his seminal work on user
interfaces and hypermedia. That alone should establish the importance of the humble cardstock.
However, no one has really written about edge-notched cards in the historical sense.
These cards are something of a missing link, if you'll pardon the pun, between pre-digital data organization
and hypermedia. That's a big deal, and a big omission from the scholarship.
I've been working to correct that. Over the last few years, I've been tracking down
everything I can about edge-notch cards. I've scoured archives, collected rare books, and even tracked
down a number of complete sets of notched cards. As I'm writing this, I actually have a few stacks
of the cards sitting pretty close by. Perhaps my greatest discovery in this research has been
Engelbart's own note cards. Back in 2020, I made the trip down to Stanford's archives to find and scan these cards.
I've actually held some of the earliest links that Engelbart mentions in his papers.
Last year, I even gave a talk at ACM's Hypertext '22 conference on this research, so I'm starting to get some traction.
But there's still a lot of missing information in the story.
I guess I can just come out and kind of say it at this point.
I've been working on a book manuscript that will cover the full history of the Edge Notched card.
It's in a pretty rough state.
I think I've restarted and scrapped and rewritten a draft four times at this point,
but it's slowly coming together.
Just don't expect a release party
anytime soon. I'm at the phase where I've conducted enough research and worked up enough of an
outline and written enough sections that I know what's missing and I know what I need to find.
I have the broad strokes of the story down well. Edge-notched cards evolve out of very early punch
cards around the 1890s. Quickly, the punched
part is dropped, leaving cards with perforations along the edge and open spaces on their face.
These cards are adopted by offices, insurance companies, and some researchers. Data is
squirreled away in special filing cabinets for later retrieval. It's a very back-office kind of tech. It was used
for notes, short-term records, and quick data tabulation. You know, that kind of boring stuff
that people usually want to forget about. And therein lies a major issue with this research.
This isn't a glamorous technology. Notch cards were a workhorse of the everyday office.
The written works I can find on these cards are usually trade publications. Something like
Punched Cards: Their Applications to Science and Industry. Not the most accessible text.
Most of the data stored on these cards were short-term in nature. They were used for financial records that were often only kept for a few years at a time,
or for compiling survey data.
I have a few papers that I've dug up that discuss using edge-notch cards
as an intermediate step in generating numeric results, even.
None of this is the kind of stuff you keep around,
so I really have to scour for temporary notes that just happen to be preserved.
The other fun wrinkle is that archives don't often know they have notch cards in them.
Usually an archive categorizes what they store by medium.
You'll have audio recordings on tapes, racks of microfilm, VHSs or DVDs, and then boxes upon boxes upon boxes of paper products.
They don't separate out edge-notched cards, so a little brute force searching is required.
The result here is that I have very few complete sets of notched cards to examine,
and you really need a complete set with a description, since there's a lot of complicated
reasons, but notched cards oftentimes end up being a really personalized storage medium.
So herein lies the first major mystery.
How far were edge-notched cards pushed?
I've read about some wild encoding schemes, but I don't have very many real examples
to back them up with.
The biggest question here, at least in my head, is if Engelbart's linking schema was truly his own creation.
Were other notched-card practitioners constructing linked decks of cards?
I don't know.
And the only way to find out is to conduct an exhaustive search.
The popularization of the medium occurred in the 40s and 50s.
There are two companies involved here, Copeland-Chatterson and McBee, sometimes Royal McBee.
Cope-Chat was based out of England, and McBee's central office was in Ohio.
They both produced cards based on the same patent.
My assumption is they were both licensing out the patent, but I do not know.
I just don't have evidence yet.
In general, there's a big gap in the sourcing right here.
This is the second mystery.
It's a little more broad than the first.
What exactly is the deal with McBee and Cope Chat? Their corporate histories are, well, poorly documented in this period. There are
archives to deal with here, so we can start to tease out some information. Many of McBee's records
are held at the Athens Historical Society in, surprise surprise, Athens, Ohio. Copeland-
Chatterson's files are archived in England's National Archives in Gloucestershire. If you're
near any of those places, then please, you too can become one of my research associates. I'm hoping
to fly out to one of these in the near future, but we'll see how things develop as the year goes on.
The other lead I have is a book.
It's titled, and this one's a doozy,
The Story of the Part that Copeland-Chatterson Limited has played in the development of loose-leaf systems in Canada and abroad from 1893 to 1943: 50 Years of Service, by Robert J. Copeland. There is one copy
in the American library system, and it's for on-site use only. If you have this book or somehow
have access to it, then once again, please get in touch. If I can get my hands on this, I would be over the moon.
The good news here is, once again, I think that the edge-notched mysteries can be solved.
It's just going to be a really slow process.
It's going to require a lot of active searching and probably a trip abroad.
But I think this mystery will be revealed in good time.
Alright, thus we reach the end of a rather unorthodox episode of Advent of Computing.
I just want to end things by repeating myself a little.
Thank you.
To everyone who's listened to the show, to everyone who's told a friend about this weird computer podcast, and to everyone who's donated to my strange journey, thank you. I never
dreamed Advent of Computing would hit 10 episodes, much less 100, so here's to many more. I'll be
back in two weeks' time with a more normal show. I'm thinking about going back to my roots a bit. So we'll probably be
in the realm of programming languages, but don't hold me to that. That's always been a fun topic
for me. Anyway, you can find links to everything at my website, adventofcomputing.com. You can
reach out to me on Twitter, I'm at adventofcomp, or hey, you can email me directly too. I'm
adventofcomputing at gmail.com.
If you think you can help solve any of these mysteries, then I would love to hear from you.
And as always, have a great rest of your day.