Advent of Computing - Episode 127 - Nim
Episode Date: March 11, 2024

This is going to be a wild rambling ride. In 1939 a computer called Nimatron was made. It was one of the earliest digital electronic computers in the world. It did one thing: play a game called Nim. Over a decade later, in 1951, another Nim machine hit the scene. This computer, called Nimrod, was designed to demonstrate how computers worked... by playing a game of Nim. These machines, humble as they may sound, end up deeply complicating the history of computing. Join me as I, once again, muddy the long arc of progress.

Selected Sources:

https://archive.org/details/faster-than-thought-b.-v.-bowden - Faster Than Thought

https://www.goodeveca.net/nimrod/NIMROD_Guide.html - NIMROD Guide
Transcript
There's this mountain ascent that my buddies and I have been trying to crack for about
the last year and some change.
I've talked about it on the show before, it's kind of my nemesis lately.
I've been stuck in the snow out there, I've had close encounters with a bear, and I've
worn myself ragged.
The funny thing is, it's not even a particularly large mountain.
There's even a road that goes directly to the summit.
But there's supposedly an eastern ascent from a nearby riverbed directly to the top. We've been trying to figure that route
out for quite a while. It was only recently that we were able to make the full ascent.
The route would be instantly familiar to you if you live near any type of old lumber land. You follow an old decommissioned
lumber road for miles. Eventually, it ends at a large clearing. This one happens to be right on
the edge of a sheer cliff that looks into this really pretty sheltered valley. Another road
runs atop a ridgeline above the clearing. That road leads directly to the summit. But the clearing and the road are not connected
in any obvious way. That is, unless you know exactly where to look. During the hike, the
terrain gets more and more rocky until you're walking past boulders and exposed stone.
Cut into that stone, in one of the cliffs above this clearing, there is a trail that forms a link between that
clearing and the final summit road. It's an old trail. It's easy to miss. I've missed it many
times before. But it's undoubtedly there if you look at just the right spot. It's this slender
cut that snakes up to the ridgeline. It respects older, larger stones. It turns into switchbacks
as it climbs further up. We first found this trail late last fall, but we didn't follow it
all the way. When we got to it, the sun was already setting and we didn't want to go back
down it in the dark, so we turned back early. During the winter, right after the first snow fell on these parts, we made our final
bid. Our day started out in that familiar riverbed and ascended up, the snow piling up every foot we
climbed. We hit the secret trail and, to our surprise, it just opened up to us. We had been
worried that the trail would peter out before the ridge, or that snow would make
it impassable or impossible to find.
But that simply wasn't the case.
The tiny trail, this route that we didn't really think existed, was there.
It was clear as day.
We had a few moments where we thought the trail was about to disappear, but those moments
were fleeting.
We'd turn a corner or push aside some brush,
and the trail would be right there in front of us. It almost invited us up the mountain.
It was a very weird experience after failing this summit so many times before.
While this was very exciting, it only opened up more questions. The secret trail had no reason to be there. The two roads it connects
aren't really walk-in roads. They were meant for big logging equipment and trucks. After the ascent,
and after I worked out all the aches and sore muscles, I went looking for answers. Luckily,
I happened to have a neighbor who used to be an archaeologist for the state. He's surveyed and dug in most parts of the county, and had actually driven up both of these very roads before they
were reclaimed by nature. He had been to the clearing, but he had never seen the secret trail.
He never saw the spot where it starts. He had no idea how old it may be, or why it was there.
We both checked through old maps and old reference
materials, and there just hasn't been anything. There are ancient trails in the area that go from
the coast to the interior, but those run miles to the west and they don't cross any summits or
ridgelines. Not all connections, not all links are readily apparent.
The trail is there, we can see it and we can walk along it,
but we don't know where exactly it came from.
We don't know when or why.
But it is there.
For me, this makes this trail particularly special.
It's a type of mysterious connection, something inexplicable that I've
passed over before without even noticing. Today, we're going to be looking for a similar connection,
a faint link that's easy to overlook and may have some strange and wonderful implications.
Because you see, it's those hard-fought connections, those difficult-to-find links,
that I think are the most exciting.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 127, Nim. Today, we're talking about video games. Or,
at least, we're talking about something that passed as a video game in the very early digital
days. An explanation here is definitely in order, because this is, uh, this is going to be a bit of
a ramble and a doozy of an episode. Over the course of covering old games, I've developed a yardstick that I use to measure
when something becomes an actual video game.
According to my logic, which I assure you is 100% sound and rational, a video game must
use the digital medium in order to create something that can't be done with any other
medium.
It also has to, you know,
be a game, but that's a nice nebulous qualifier. This means that things like Spacewar, where you
dogfight spaceships around a black hole, are video games. The main reason I use this yardstick is to exclude things like poker and blackjack programs from consideration.
Now, this may sound a little dumb at first, but there is a good reason for this.
We actually keep finding older and older programs that simulate card games.
In fact, we keep finding older and older programs that simulate all kinds of real-world games in general.
Thus, this rule lets me avoid a pile of um-actuallys, and it also helps focus my discussion on early video games. It, as an ancillary effect, prevents me from doing
exhaustive searches of sources that are about to change or may or may not be accessible by
human hands. The secondary reason for this rule is
simply the fact that old games, and here I mean really old games, just feel different.
Now, I wasn't able to properly articulate that difference until very recently.
I was actually helped in this by our main man, Dr. Alan Turing.
If you happen to attend VCF SoCal, then I'm sure you heard me ramble on about my extensive library.
Since starting the podcast, and really since before then, I've been building up a collection of rare books and reference materials.
After a few choice acquisitions at VCF SoCal, I decided that I needed to do a little bit of archival work.
So I've been going through the process of wrapping up some of my books in archival plastic.
I assure you, it is acid-free.
I've never really been too good with my hands, so it's a bit of a learning curve. I'm taking it slow and working up to the more precious tomes that I keep on my shelves.
The first book I wrapped was called Faster Than Thought.
It's a collection of essays from 1953 that cover computer history and the state of computing in that era.
Sadly, my copy is a third edition, to my great shame.
I picked this book up because it was cited by something else, and frankly,
it's kind of just a wild book. I mean, it has computer history essays from 1953.
Computers are eight years old at that point. It would be like writing a history of World War II in, well, 1953. It's just neat. Near the back of the volume, once we
get out of the history section and into the current topic section, is an essay titled
Computers Applied to Games. It was penned by none other than Alan Turing himself. It's a fascinating little piece that heavily inspired this episode.
So, hey, thanks Alan. Anyway, in the essay, he drops this, quote,
The reader might very well ask why we bother to use these complicated and expensive machines in
so trivial a pursuit as playing games. It would be disingenuous of us to disguise the fact that the principal motivation which
prompted the work was the sheer fun of the thing, but nevertheless, if we are ever to justify the
time and effort, we could quite easily make a pretense of doing so. We have already explained
how hard all programming is to do, and how much difficulty is due to the incompetence of the
machine at taking an overall view of the problem which it is analyzing.
This particular point is brought out more clearly in playing games than in anything else.
End quote.
These very early games aren't entirely games.
They're closer to applied mathematics.
They're closer to research programs, because really at the time, any program would be
a research program. The fun is there, but making fun is somewhat secondary to just the main struggle
of making a program, making a computer work at all. Now, I initially intended to do a fun and
light-hearted survey of some early games. Turing even gives us a nice outline
to work with. But, well, best laid plans and all that. I got stuck on one game in particular,
and that opened up a number of connections that had been bugging me for a while.
Today, we're looking at the game of Nim, its digital solutions, and how that leads into the larger narrative of computer history.
On the surface, that probably does sound like a fun, light-hearted affair, but trust me, there's some weird stuff here.
There's some surprisingly deep things going on.
Nim actually complicates the history of computing in a very fascinating and, I think, fundamental way.
Before we talk about the solutions and problems with adapting Nim to a computer,
I should probably ask you, have you ever played a game of Nim before?
I will say I had not until very recently.
Like, as I'm writing the script recently. It's one of those
games with just enough complexity for some interesting strategy. There are a number of
variants, but I'm going to explain the simplest version. The setup is easy. You get 16...
things. They can be anything you want. The most recognizable items used are matchsticks, but
you could use pebbles or dice or candies or coins. You lay them out in four heaps,
organized like a pyramid. So the first heap has one item, the second has three,
the third heap has five, and the fourth heap has seven. As a quick note, these are called heaps for
some reason I do not understand.
The nomenclature made Nim a little hard for me to understand at first just by reading about it.
In a game of Nim played with matchsticks, these heaps are usually laid out as rows,
so it might be easier to think of them that way.
The game is played with two players.
On each turn, a player must remove one or more items from a single heap. The goal of the game is to force your opponent to take the final item.
It sounds simple at first, but like I said, there's just enough depth for a strategy.
Nim is also simple enough that there is a fully mathematical solution to the game. This is coming straight from Turing's article, but he's citing earlier work.
In general, though, just know I'm using Turing's description of a well-known solution.
The trick is to represent each stack in binary notation.
That will sound a little confusing, so just bear with me and I'm going to try and explain this as best I can.
In Turing's paper, he uses a graphic to show this trick.
He makes a table where each row is one of the heaps.
In each row, you write out the number of items in that heap, but in binary.
You end up with a table of ones and zeros, where each column represents a place in those
binary numbers. So the first column would be the ones place, the second would be the twos,
the third would be the fours, and so on. Actually, it would just end at the fours. You only have
three-bit numbers. Next, you add up each column. Now, here's another annoying, tricky part.
This isn't a binary addition, but a decimal addition.
Very gross.
But what you end up with is a final row that shows the sum of each column.
For a new game, that row would read 2, 2, and 4.
That's also an example of a winning position, where all the numbers are even.
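To make the table concrete, here's what it looks like for the opening 1-3-5-7 layout, with each heap written as a three-bit binary number. This is my own rendering of the kind of figure Turing uses, not a reproduction of it:

```
              4s  2s  1s
heap 1:        0   0   1    (one item)
heap 2:        0   1   1    (three items)
heap 3:        1   0   1    (five items)
heap 4:        1   1   1    (seven items)
              ------------
column sums:   2   2   4    (all even: a winning position)
```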
Now, I do not understand the math here, but stick with me.
There are proofs if you like that sort of thing.
Just know that to have a winning position for this given rule set,
all the columns in that binary
table have to add up to even numbers. The trick to winning a game of Nim is to always move into
a winning position. If you run the sums and you see that a column is odd, then you need to make
a move that evens out that column. In the game, that actually means you just need to take the right number of
matchsticks from the right place. You can use this mathematical theory to build up a more
humanistic strategy. In the game of Nim, there are only a fixed number of winning positions.
You just have to memorize the winning positions, and if you're passed a board that's not a winning
position, you take items to make that into a winning position. You can extrapolate all those winning positions using
that binary table method that I described. This means that Nim, unlike chess or a more complicated
game, is a totally solved problem. There are not infinite variations. There aren't subtle psychological strategies.
If you know the winning positions, or the theory and proofs, then you can always win at Nim.
That is, unless you make a mistake or you're handed a position that's already winning for your opponent. It actually turns out that with this 1-3-5-7 layout, the starting column sums are already all even, so the player who moves second and makes no mistakes will always win under this rule set. Caveats aside, the strategy here is very simple. In fact, it's simple enough you could even automate it. And, well, the strategy is oddly digital, after all.
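Since the strategy really is that mechanical, here is a minimal sketch of it in Python. The names and example heaps are my own invention, not anything from Turing's essay or the machines we'll get to later. Checking that every binary column sums to an even number is equivalent to checking that the exclusive-or, the so-called nim-sum, of all the heap sizes is zero:

```python
# A minimal sketch of the column-sum strategy described above.

def winning_move(heaps):
    """Return (heap_index, new_size) that leaves every binary column
    summing to an even number, or None if no such move exists."""
    nim_sum = 0
    for size in heaps:
        nim_sum ^= size
    if nim_sum == 0:
        return None  # already a winning position; any move spoils it
    for i, size in enumerate(heaps):
        target = size ^ nim_sum
        if target < size:  # a legal move can only remove items
            return (i, target)
    return None

print(winning_move([1, 3, 5, 7]))  # None: the opening layout is already all even
print(winning_move([1, 3, 5, 6]))  # (0, 0): take the lone item from the first heap
```

One caveat: that's the strategy for normal play. The misère version played here, where taking the last item loses, needs a small tweak in the endgame, once a move would leave nothing but one-item heaps. The broad strokes are the same, though.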
But let me back up. It wouldn't be Advent of Computing if I was direct
with my approach. There's this book called The Dream Machine that was my first entry into the world of
computer history.
It's one of those really far-spanning popular history books that starts at the beginning
of computing in order to eventually discuss Project Slade and Licklider.
Early on, when it's just starting the digital era, the book drops this idea that's
kind of stuck in my head ever since. The specific section is about the first binary logic circuits.
Basically, how the jump was made from analog math on mechanical machines to binary math
on electronic machines calculated using logic gates. To make a long story short, this was a case of independent invention.
In the latter half of the 1930s, at least two people stumbled on this idea.
Claude Shannon, working under Vannevar Bush at MIT,
writes a thesis paper on logic circuits.
That thesis includes electronic digital circuits
that can perform binary math using logic gates. That's a
core, a very primordial element of all later computers. In the same exact time period,
George Stibitz steals a handful of relays from his office at Bell Labs and makes a binary adding
circuit on his kitchen table. He goes on to develop more sophisticated digital
circuits for binary mathematics while on company time. The Dream Machine uses this story to argue
that there's something fundamental about binary electronic computing. That binary data representation
is natural for electrical circuits. Given enough time, anyone would arrive at that conclusion.
This idea has always jived with me because, in a way, it speaks to a type of digital predeterminism.
We use electronic digital machines because that's the best combination of options. It's a solution
that can be proven perfect by first principles and the forces of history.
We can up the ante a little bit.
This independent invention has occurred more than twice.
In the late 1930s, the same exact period as Stibitz and Shannon,
John Atanasoff arrived at the same conclusion.
Around midnight, in a roadside, quote-unquote, honky-tonk, probably
fueled by a little bit of ethanol, Atanasoff worked out the simple fact that a computer had
to use binary numbers and that logic gates could be used to do binary math. He then returned to
his office at Iowa State College and built a computer. So we have binary computers invented in a lab at MIT, a kitchen table,
and a roadside bar. This, to me, speaks to something profound. We were always destined
to stumble on computers. Maybe we were just cursed as a species the first time someone
brought a copy of the I Ching a little too close to a Galena crystal.
Over the run of Advent of Computing, I try to keep an eye out for independent invention because it scratches this whole itch for me. It adds more evidence to humanity's digital curse.
I go looking for this kind of stuff often, but it's rare that I stumble on it by accident.
So, check this out.
In the late 1930s, engineers at Westinghouse were working with these new digital scaling circuits.
These were used in Geiger counters.
The detector tube of a Geiger counter is a very simple device.
It creates a current whenever it's exposed to radiation.
You get a series of pulses out of the tube that correspond to detection events,
hence the clicking.
At a certain point, however, your detection resolution kind of gets trashed.
Once you hit a high level of radiation, the pulses become nearly continuous.
All you can say is that you've maxed out. You're beeping, but you can't say how high one maxed reading is when compared to another maxed out reading.
They're just off the charts. One solution is the so-called scaling circuit.
This is a circuit that takes the pulsing signal and essentially computes a running average.
It's also best if you can adjust how that average is calculated, or the scale factor. Think of the circuit as taking a chunk
of that pulse train, broken up into some time range, and then dividing that by a set value.
That allows you to change the resolution of your detector on the fly. Now you can turn the continuous hum of a
max reading into a discernible series of pulses. You can actually tell if one maxed out reading
is higher or lower than another max reading. You can work up a scaling circuit using resistors,
but that only works for analog readouts. For digital, for beeps and clicks, you need a digital circuit.
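To put the idea in modern terms, here's a toy model in Python. The episode describes the circuit as computing a running average with an adjustable scale factor; a common way these Geiger-counter scalers actually worked was as divide-by-N counters, and that's the assumption this sketch makes:

```python
# A toy model of a divide-by-N scaling stage: pass along one pulse for
# every `factor` input pulses. Real scalers of the era were relay or
# tube counters; this Python version is just for illustration.

def scale(pulse_times, factor):
    """Emit every `factor`-th pulse from a train of detection events."""
    kept, count = [], 0
    for t in pulse_times:
        count += 1
        if count == factor:   # the counter "overflows"...
            kept.append(t)    # ...and sends one pulse downstream
            count = 0         # ...then resets for the next group
    return kept

# A hot source firing every tick becomes one countable click per 8 ticks.
print(scale(range(32), 8))  # [7, 15, 23, 31]
```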
There were a number of designs for this.
The engineers at Westinghouse were using a relay-based circuit for their scaling needs.
And let me assure you, this is a digital circuit.
It takes a series of pulses, ons and offs, and sends out another series of pulses.
That's pure digital.
One of the researchers working with these new counter circuits was Edward Condon, nuclear scientist extraordinaire. During the
1940s, he would work under the Manhattan Project, later running the National Bureau of Standards and
making countless contributions to the field of quantum physics. This is, straight up, a name that I know
from textbooks back in college, so he's kind of a big deal. But in 1939, he had yet to reach the
highest highs of his career. He was working research at Westinghouse and spending a little
bit of time at their booth at the New York World's Fair. The old World's Fairs were wild events. If I had a time machine
and really just wanted to have a good time, I might try and attend one. The one that matters
for us today is the 1939 New York World's Fair. Now, despite the name, this event actually took place in 1939 and 1940. It started in April of 1939 and ended in October of 1940. Over 40 million people
attended the fair. Westinghouse had an entire building to exhibit their latest and greatest
machines. It was in late 1939 that Condon comes into the picture. To quote from an oral history conducted with the AIP,
quote, during that winter between 39 and 40, the company was looking for ideas to freshen up their
exhibit for the second year, and I thought of building this machine, and so we did build it
and exhibited it, end quote. It was that simple. Westinghouse wanted something new to spice things up for the new year.
Condon decided a game would be just the thing.
So in late 1939, he and a group of relay engineers started work on the Nimatron,
a machine he would later call his greatest failure.
In principle, we're dealing with a pretty complex device.
The Nimatron was able to play a game of
Nim against a human opponent, following the same rule set that I described earlier. Four heaps,
whoever takes the last item loses. Instead of matches, the Nimatron used a grid of lights to
represent the game state. If a light was turned on, then that represented an item. If the light
was off, well, that meant the item had already
been taken off the board. The Nimatron was meant to be a cool display and a novelty for visitors.
As such, it was a pretty flashy cabinet. You could probably call it one of the first arcade
cabinets, actually. On one side of the machine were the controls, a small grid of lights,
buttons for taking items from each heap, and a button to pass the turn to the machine. Atop the cabinet was a cube studded with more lights.
Those upper lights showed the same game state as the console, so that a crowd could watch a game
of Nim being played. Rounding out the machine was a set of counters that showed total games,
human wins, and machine wins. According to Condon,
out of 100,000 games played by the Nimatron, the machine won 90% of the time.
On the surface, this seems like a cool novelty. This was how it was intended.
But here's the thing. The Nimatron was a computer. It was a binary electronic computer built in 1939.
This means that we can add a fourth event to the pile. Binary machines were invented at
a lab at MIT, a kitchen table, a roadside bar, and the 1939 World's Fair.
The internals of the Nimatron are fully described by a patent.
It covers everything from the circuit schematics up to a full explanation of how it works.
Actually, I think you could just rebuild a Nimatron using this patent.
We can totally dig into the patent to get all the gory details, all the proof that this is a computer,
or we can stay on the surface. So check this out. In 1942, Condon wrote this passage for a newsletter.
Quote, the Nimatron is a machine which is very skillful at playing the game of Nim.
Unlike other mathematical machines, the Nimatron serves no other useful purpose than to entertain,
unless it be to illustrate how a set of electrical relays can be made to make a decision in accordance with a fairly simple mathematical procedure. End quote.
I hope my emphasis on the last part of that quote came through in the audio. I have it in big, bold letters in the script.
Here, Condon is saying that the Nimatron uses relays,
digital logic elements,
to make decisions and follow a mathematical procedure.
It does math.
It makes choices.
And it's digital.
That's all backed up by patent claims.
But it goes deeper than this.
The Nimatron is a fully binary machine.
The display lights aren't just for show.
It thinks in terms of ones and zeros, ons and offs.
In fact, it uses the same Nim solution that I outlined earlier, or at least the math backing it.
This machine took a binary representation of the game state, which it stored internally,
did some manipulations, and used that to choose the next move.
Now, it was a little simplified.
According to Condon, the Nimatron actually only knew about a dozen moves,
but that was apparently enough to win 90% of its games.
Once again, the binary orientation is backed up by the patent.
The machine's operations are all described in terms of base-2 mathematics.
So, let's round up.
We're looking at a machine that operates on binary data.
It internally stores and manipulates a state.
It takes inputs, gives outputs, does math operations, and makes decisions based off the results of those operations.
That is a computer just as much as any contemporary machine. Which, I guess,
brings us to a bit of a conversation about our main man Turing. For a machine to be a computer, it needs to be Turing complete.
That means it can carry out a series of operations with conditional branches.
It can do something, then do something else, depending on the results of that first something.
This is often shortened to conditional looping, meaning that you can repeat operations until you reach a given state.
If a machine is Turing-complete, then it can do anything that any other computer can do.
I'll leave that proof as an exercise to the reader.
Turing-completeness is most easily understood in stored program computers.
That is, computers that can be programmed and save that program in memory.
That's because your decision can be something like,
if the stack overflows, then go to line 10,
or add to the register until we reach the number 100,
and then jump to instruction 50.
But none of these early machines are stored program computers.
They are, well, there's not a nice universal word for them.
Their programming is in their wiring. The Nimatron only knows how to play Nim,
but it can do a loop. We see that with the overall game cycle. Make a move, process the player's move,
repeat from the top until there's a winning condition, and then stop looping.
It calculates sums of tables, and then branches based off the results.
That's a program, but it's not expressed in code.
The program isn't stored in memory.
That means that these non-programmable machines aren't flexible,
but they're still doing everything that later machines can do.
I just wish there was a better term for these machines than non-stored program computers.
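To make that loop concrete, here is the whole cycle written out as Python, purely as an illustration. The function names and the stand-in random player are hypothetical, my own sketch of the behavior described above rather than anything from either machine:

```python
import random

# A loose sketch, in code form, of the hardwired game cycle described
# above. The real machines expressed this loop in relay and tube
# wiring, not instructions.

def play(heaps, machine_move, human_move):
    """Run the misere game cycle: whoever takes the final item loses."""
    while sum(heaps) > 0:                # conditional looping
        heaps = machine_move(heaps)      # make a move
        if sum(heaps) == 0:              # conditional branch
            return "machine took the last item, human wins"
        heaps = human_move(heaps)        # process the player's move
        if sum(heaps) == 0:
            return "human took the last item, machine wins"

def take_one_at_random(heaps):
    """A stand-in player: remove one item from a random non-empty heap."""
    i = random.choice([j for j, h in enumerate(heaps) if h > 0])
    return heaps[:i] + [heaps[i] - 1] + heaps[i + 1:]

print(play([1, 3, 5, 7], take_one_at_random, take_one_at_random))
```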
Anyway, in 1939, Condon is sitting face-to-face with a computer.
I'd argue this is a Turing-complete computer,
meaning that it is an honest-to-goodness computer, no ifs, ands, or buts.
And he just takes it to the world's fair.
It beats the pants off of attendees. It gets some press. It baffles and excites onlookers. And then it's pushed off to a museum. Condon moves on with his life.
This is why Condon called the Nimatron one of his greatest failures. He had a digital computer in the palm
of his hand, a new type of machine that only a brave few had even dreamed of. Just like Atanasoff
and Stibitz and Shannon, Condon had almost stumbled into this machine. But unlike the rest,
he didn't build this machine for math. It was a novelty, so he just kind of let it slip away.
If he had viewed the Nimatron in a different light, he could have become one of the founding
fathers of computing. Instead, his machine faded into obscurity without a second thought.
It was only later that he realized what he had made.
Now, here's where we hit another twist. You see, the Nimatron may have spawned a lineage. Supposedly, Condon's machine inspired another Nim-playing computer.
This latter machine was also made for an exhibition. The difference is this time, the new machine was explicitly
meant as a computer. To get up to the second machine, to this supposed child of the Nimatron,
we need to fast forward to 1951. In those 11 years, the digital landscape has totally changed.
We've reached a point where there are honest-to-goodness
stored program machines. We even have computers with recognizable architectures. We're well into
the digital age. But despite these changes, we're still in the early days of the discipline.
Computers aren't well-known or well understood, especially by the public.
This, however, was beginning to change.
EMCC, the first computer company, is founded in 1946.
Existing companies in related fields soon flocked to this new market.
IBM, Remington Rand, GE, and in England, a little company called Ferranti.
Their first computer,
the Ferranti Mark I, was set to launch in 1951. That same year would also be the Festival of Britain, a more localized celebration similar to the World's Fair. Ferranti planned to set up an
exhibit, and I can't help but think they planned this as a way to drum up business
for their upcoming machine. The exhibit they submitted was a computer called Nimrod. According
to Replay: The History of Video Games, this computer was a slapdash affair. Supposedly,
Ferranti dropped the ball and threw together a machine in a few weeks. The main designer of Nimrod was one John Bennett, and
according to the book, he was inspired by the Nimatron itself. However, this wouldn't be a one-to-one
recreation. This was a totally new digital age. The plan was for Nimrod to demonstrate how these
new computers worked. Festival goers would be able to play Nim,
watch how the computer worked, see a demo explaining all the ins and outs, and could even
buy a pamphlet explaining the deeper inner workings of the machine. That pamphlet is where
much of the information for this section comes from. So let's start at the beginning. Nimrod is supposed to be more than just a neat game. It
was meant as a way to demonstrate how a computer works. As such, Nimrod is a full-on computer.
Sort of. It is Turing complete, but it's not a programmable machine. It's a special-purpose
machine, like the Nimatron. In practice, this means that it only
has one program hardwired into its circuits. It also uses the same solution to Nim that we saw
earlier. But a key difference is that Nimrod uses a fully generalized solution. Instead of a dozen
moves, Nimrod can play any game of Nim. It has the full solution wired up inside.
If you wanted a shallow experience, then you could walk up to Nimrod and lose a game of Nim.
It had switches and buttons for controlling the game and big lights to show the board state,
but that's just the pedestrian view of Nimrod, and we aren't mere pedestrians around here.
The machine is nearly covered in lights, and that's where a lot of the cooler details hide.
These lights are broken into panels. The most prominent shows the game state,
but another panel shows a block diagram of the computer and the machine's current internal state.
A third panel explains the quote-unquote program.
A lower panel is studded with the actual vacuum tubes that are used by the machine.
This program panel is actually a neat sleight of hand.
Nimrod wasn't a stored program machine.
It didn't have software or operation codes or even instructions.
But for demonstration purposes, it acted like it
was programmable. Or at least, the people showing off Nimrod acted that way. The program panel
listed seven operations, the broad steps that Nimrod went through while it was deciding a move.
Each was numbered and a set of lights displayed which operation the computer
was carrying out. Normally, this would go by too quickly to notice, but Nimrod could be kicked over
into a step-by-step mode. In the larger world, these types of step-by-step modes are used for
debugging, but here it was used as a way to explain and show how the machine worked. This worked in concert with the block schematic
panel. That panel showed the data in binary that was stored in memory and in Nimrod's arithmetic
logic unit. I did say this was a computer, right? It has memory, just not a whole lot. We're talking
about a few dozen bits. And it does have an ALU, just a very simple one. It can't
even multiply, just do some basic, well, increment and decrement operations, but that is an arithmetic
logic unit. What's neat about this panel is it's laid out just like the weird binary tables used to solve Nim. It shows a very cool visual representation of the
program. You get heaps 0, 1, 2, and 3 decomposed into binary numbers and displayed as a grid.
The columns line up just like the tables we discussed earlier, so you can see right on the
machine how the sausage is being made. Another neat touch is that this panel displays two counters.
Nimrod's program works as a number of loops. It considers each heap and each column in order
by looping two counter registers. The value of those registers are displayed using lights.
So in step-by-step mode, you can see exactly where the program is and what it's doing.
Honestly, I wish all debugging interfaces were this clear.
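As a rough sketch of what that deliberation boils down to, again in Python: two nested counters walk the binary columns and the heaps, totaling each column as they go. The names and the three-bit width are my own guesses, not details from the pamphlet.

```python
# A rough illustration of the two-counter scan described above. The
# pamphlet describes the loops, not this exact code.

def column_parities(heaps, bits=3):
    """Total each binary column across the heaps, keeping only parity."""
    parities = []
    for column in range(bits):                 # one counter register
        total = 0
        for heap_index in range(len(heaps)):   # the other counter register
            total += (heaps[heap_index] >> column) & 1
        parities.append(total % 2)  # an odd column is what the machine must fix
    return parities

print(column_parities([1, 3, 5, 7]))  # [0, 0, 0]: every column is even
```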
What you get is a very practical demonstration of a computer.
You can watch, in real time, as Nimrod beats you at Nim.
You can see where it is in its deliberation and watch it adjust its internal memory.
And then you see the result of its move on the game board. I think this is a fascinating contrast to the Nimatron.
Here we have a computer that was designed and built to demonstrate how a computer works. The
demo was chosen as Nim because that had a very handy digital solution, and it was a well-known game at the time. It made for a fantastic example that the public could understand.
And that was kind of the point.
Computers were powerful and new.
They were complicated, but they could be understood.
To quote the pamphlet that Ferranti sold next to Nimrod,
Automatic computers are often described as mechanical or electronic brains. The use of the
word brain can lead to great confusion and therefore we prefer to avoid its use. In the
first place, it gives one the impression that automatic computers can think for themselves,
which is not true of any machine that has been made so far. They do think after a fashion,
but only in the manner that their designer and the
person controlling the machine allow. Second, the word brain, together with other words,
has been used in literature giving the impression that these machines are extremely complex and
quite beyond the understanding of an ordinary person. It is true that some of the individual
details are complicated, and also that one must learn to appreciate the individual steps in a thought process, but there should be little difficulty
in grasping the broad principles of operation of the machine, and thereby getting some insight
into the computing methods.
End quote.
You'll have to excuse the extended quote, but I really like this kind of stuff.
It reminds me of the tagline to Ted Nelson's Computer Lib.
That reads, you can and must understand computers now. Nimrod really was a very approachable and
reasonable way to understand computers. It showed that computers could be understood by lay people,
and that computers weren't these evil electronic brains, they just were a little bit
faster at thinking than the rest of us. This sentiment was reflected in the name of this
mysterious pamphlet I keep referring to. You see, it was actually called Faster Than Thought.
The book that inspired me to start this episode was named in honor of this very specific pamphlet.
How's that for a weird connection? The point is that computers aren't more complex than humans,
they're just more powerful than humans. They're just faster than us, at least at some things.
But the intended message didn't exactly resonate. There's this beautiful recording from a BBC broadcast
that's been passed down by the webmaster over at goodeveca.net.
In the broadcast, a reporter describes Nimrod as a terrifying metal brain.
Instead of explaining the demo,
he simply tells listeners that it's huge, complex, and incomprehensible,
that it plays Nim, but does so unlike anyone has seen before, and you should be very afraid of that.
Now, admittedly, this seems to have been a very extreme reaction. I found some other press
coverage and papers that are a lot more restrained, let's just say. Basically, the worst they do is call it a
big electronic brain. So, a little counterproductive and against the spirit of the
machine, but nothing awful. The actual sad part is the public reception at the festival.
According to Bennett, most festival goers just wanted to play Nim. They didn't sit for a demo,
they didn't buy the pamphlet, and they didn't take the time to
understand the machine. So, in a way, Nimrod was reduced to the same role as the older Nimatron.
That's the general rundown of Nimrod's history, so I want to zoom out. Let's refocus the
conversation on the matter of context and lineage, because I think that's where something interesting is going
on. As I keep saying, Nimrod was everything that the Nimatron wasn't. Nimrod was built explicitly
as a computer. It wasn't a machine that just happened to be a computer. It was meant as a
demonstration of a computer, not just an entertaining game. In that capacity, it falls
much more into what Turing was talking about in Faster Than Thought, the book, I mean. At least,
in a manner of speaking. One of the striking aspects of Turing's article is the explanation
that early game programs were basically all groundbreaking research. Chess programs were
on the cutting edge of data
processing. To play a game of chess, a computer had to twist its circuits into a pile of little
metal pretzels. Data had to be manipulated in new ways. Fancy new types of data structures had to be
formed or improved. It was a huge programming challenge. At publishing time in 1953, the best chess program could solve
a two-turn chess puzzle, but that took nearly 15 minutes to process. Nim, on the other hand,
was a resoundingly simple game. It had a mathematical solution that, although not
exactly straightforward, was well-defined and quick to calculate.
So Nimrod wasn't so much groundbreaking in terms of programming or computer design.
Nimrod isn't using some super-arcane math model that no one knew about. It's not even a very
impressive machine by 1951 standards. It had almost no memory, and it could only carry out a few very basic mathematical operations.
It's not even fully programmable.
I'd argue, rather, that it was groundbreaking in another way.
Nimrod is one of the earliest educational machines I have ever heard of.
This is something that I don't think I've seen properly addressed before.
Nimrod was designed as a means to demonstrate how a computer works.
It was meant to teach what a computer was, what a computer did, and roughly how it did it.
It says so right in Faster Than Thought, the pamphlet this time, not the book.
Tuck that thesis behind one ear for a second, because there's another
thread that I want to pull on. Nimrod was made in a fully digital context. At the time, Ferranti
was developing their first computer, the Mark I, which was a fully binary digital machine that was
programmable. We can also see this context on a more personal level. John Bennett,
the primary designer of Nimrod, had been on the team that built EDSAC, one of the first
stored program computers. He was steeped in this digital tea. So there's this personal
and professional connection between Nimrod and other more fully-featured machines.
Nimrod, however, isn't a technical step forward.
In fact, it's kind of a step backwards.
It's more specialized, more simplified, and less powerful.
That evolutionary regression, however, serves a purpose. On the one hand, it made Nimrod easier to make.
Ferranti did kind of hammer this thing together in a few weeks after all,
so a simplified design was necessary.
But on the other hand, it made Nimrod uniquely suited as an educational computer.
It's at least easier for a novice to understand than, say, a full stored program computer.
It introduces concepts that are used in more complex designs,
but does so at a very basic and simplified level. That's something unique, especially for the time.
I think it's wild how circumstances led to this weird little machine with such an educational
potential. This has kind of turned into one of those episodes where I'm wearing a lot of my
influences on my sleeve. So I'm going to give you another citation, then I'm going to ramble and try
to weave together a few threads here that I think we have just the edges of. There's this paper by
Doron Swade titled Forgotten Machines: The Need for a New Master
Narrative. I know it from a larger book called Exploring the Early Digital. This is one of the
many papers that kind of lives rent-free in my head. I'm just going to read from the abstract
to kick this off. Quote, History of computing seeks, among other things, to provide a narrative
for the overwhelming success of the modern
electronic digital computer. The datum for these accounts tends to be a finite set of machines
identified as developmental staging posts. The reduction of the datum to a set of canonical
machines rips out of the frame other machines that were part of the contemporary context,
but which do not feature in the prevailing
narrative. End quote. The paper discusses two machines that don't fit that narrative,
but do fill places in the larger context. One machine is the automatic totalizator that I ran
an episode on three years ago, two years ago, way back in the archive. Anyway, the general argument is that
computer history needs an adjustment, that forgotten machines need to be incorporated
into the story for the story to actually make sense, for it to be true to life. It's a sentiment
that I try to keep in mind when producing Advent of Computing. I really like this paper because,
besides feeling a little bit academically
subversive, it verges on discussing trends and forces. Let me try and explain this a little
better. It's very easy to try and describe history in terms of big events and big figures.
It's simple to look at how one event had a huge impact on history.
ENIAC hits the scene and computing changes forever. No
one ever uses an analog computer again. FORTRAN is released and programming becomes a new discipline.
No one ever even thinks about machine code for the ensuing hundred years. It's simple, clean,
and fast. But the fact of the matter is that events are slow things.
Fortran hits the scene, and a handful of IBM users start toying with a high-level language,
on occasion, sometimes.
ENIAC becomes operational, and the Manhattan Project has a nice new toy.
Over the course of years, those events echo, their meaning is reshaped, and progress moves forward.
So while it's easy to just look at big events, that misses the actual slow grind of progress.
It misses the ripples and all the smaller rocks that the ripples roll around.
The general idea of a trends and forces view of history is that the entire arc of humanity can be explained by overwhelming historical forces. Take the development of binary logic circuits as a
perfect example. There are multiple independent inventions, at least four, probably more than that.
That points less towards one genius strolling into a lab with a handful of
vacuum tubes, and more towards some type of overwhelming force. There was a drive to make
more and more complicated mathematical machines. There was some critical mass of tools and research.
It happened that by the late 1930s, everything was right, all the forces lined up, and binary
logic circuits appear.
Atanasoff would ride these waves in a honky-tonk.
Just as Stibitz would ride them on a kitchen table, Shannon would catch a wave in a lab,
and Condon rode them at the World's Fair.
Looking at computer history in terms of a finite set of points, really, well, it just
misses the point.
Nimrod is another piece of tantalizing evidence that breaks the traditional narrative.
It's regressive.
It doesn't fall nicely into this grinding timeline of progress.
It doesn't fit into the larger narrative of progress towards some predetermined goal
that we are sitting at today.
But despite falling outside the narrative, Nimrod offers an important piece of the larger story.
There is the fact that it was a very early educational machine. That in and of itself
is important. But there's something else here. I think many forgotten machines serve as interesting
ways to examine these larger historical forces.
The computers that we use today are stored program machines.
You will also see them referred to as general-purpose computers in earlier literature.
You have a processor that understands a set of instructions, and those instructions can be very easily changed.
More specifically, instructions are stored in memory,
and the computer can operate on memory.
It's a wonderfully flexible design, and it's a relatively old one.
It dates, once again, to the late 1930s.
Turing drafted theoretical designs for these types of stored program computers
as early as 1939.
The idea was out there,
but stored program machines wouldn't actually appear for over a decade.
In 1945, the first draft of the EDVAC report is leaked, which contains full designs for a
stored program computer. That gets adapted into a number of machines. The first, the Manchester Baby, becomes operational in 1948. There is a huge lag time
between theory, design, and practice here. Even once the Baby is operational, not everyone is
on board with general-purpose computing. It doesn't instantaneously travel the world.
We know, looking back, which designs win out. We can construct a nice timeline that shows when all the quote-unquote
good features were invented.
But that misses all of these transitionary periods.
It misses all the actual work that went into improving features,
or even just proving they were useful.
In the late 40s and early 50s,
there was still debate over the place of general-purpose machines
versus special-purpose machines.
In the book Faster Than Thought, there's a chapter called Special-Purpose Automatic
Computers.
It makes the argument that special-purpose computers are useful alongside general-purpose
computers.
This is a very period argument, and I think it's a fascinating one. It goes
something like this. Not all problems warrant a full-on programmable computer. That's a lot of
power and a lot of money. Even then, a general purpose machine may not be very efficient at
certain tasks. Special purpose computers can be adapted to fill very specific
niches, tailor-made for specific jobs. This not only saves money and time, but can end up being
more computationally efficient. Their example is, maybe unsurprisingly, Nimrod. The chapter
breaks down special-purpose machines into three main traits.
I.O., storage, and quote, the computer itself. The I.O. part here is fairly straightforward.
It's just how the machine talks to the outside world. In the case of Nimrod, this is very
specialized. It has an input panel designed to play Nim. It has one output for displaying a Nim board and
another output for displaying internal information. Contrast that with a general-purpose machine that
might have something like a full control panel and punch card reader for input
and output to a printer or maybe a punch card puncher. Storage, in this era, actually means memory. In the normal sense, we're talking
about just basic random access memory, or at least what passed for random access in 1953.
You have discrete addresses that are connected over buses to different locations inside the
machine. Nimrod uses more specialized storage. It has just a few bits of memory that are wired up to only a few circuits.
Then we turn to...
the computer itself!
I can't help but say that dramatically.
This is where the rubber really meets the road.
By creating a custom, highly specialized machine,
you can make much better use of your weird
I.O. and storage setups. Nimrod connects this all together with highly specialized circuits.
It's completely purpose-built. It can only increment and decrement a two-counter system,
and it can manipulate a few locations in memory. It can only branch in a few ways
and to a few points in this quote-unquote program. The specialization does make Nimrod limited,
but it also makes it a more efficient machine. It's relatively cheap to design and produce,
it uses very few parts, and it can very efficiently bang bits, as long as we're talking about some very specific bits.
A general-purpose computer can totally do all of this.
A general-purpose machine can be wired up to special inputs and outputs.
In fact, many computers have interfaces designed for just that.
You can totally get a mainframe to display a Nim board
on a grid of lights. You can disregard actual special storage and just use RAM for handling
data. That's easy enough. As far as the special computer, well, that's just a matter of code.
Thanks to the Church-Turing thesis, we know that any computer can work as any other computer.
So with a good enough language,
you can program a big mainframe to act exactly like Nimrod. You don't actually need a special purpose computer. A general purpose machine, a big stored program computer, can do anything a
special purpose machine can. But that's a very modern way of looking at things. These days, you can buy a microcontroller
chip for less than a penny, wire it up to a few cents worth of LEDs, and even program it in C++
or Python. General-purpose computers are cheap, plentiful, and available. That simply wasn't the case in 1953. At that point, there were maybe a few
hundred computers in the world. We were just starting to transition from the period of
unique machines to mass-produced machines. But numbers were still very low. Computers
weren't just expensive, they were scarce. You aren't going to take one of these priceless
machines and reduce it to a tiny little
demo computer that can only play Nim. That would be a huge waste of money and computing power.
Doing so, pulling an entire machine out of circulation, would have probably had a physical
and noticeable impact on the pace of research. There's another factor at play here. Programming in this era was difficult.
In Faster Than Thought, Turing explains that programming is so hard that anything you write
is a challenge. That's one of the reasons that he characterizes early game programs as productive
research. Any program could count as productive research at this early stage, but there's a line
between research and
squandering resources. The simple fact was that stored program computers were too important to
be reduced to special-purpose machines. That's why it could be argued that special-purpose machines
had a niche. Something like Nimrod took the technology used in these groundbreaking computers
and used it for a more specific task.
This was not only a viable solution in the early 1950s, it was also a very smart choice.
It solved the scarcity problem while also reaping the benefits of digital technology.
But that niche was a fleeting thing. In the years after Nimrod, mass production would actually kick off.
And I mean in earnest. Remington Rand, Bull Gamma, and IBM all started to ship machines in the
thousands. Better programming tools appeared. Digital technology became cheaper and actually
available. Suddenly, it made sense to take a general-purpose computer
and adapt it for a special use case. This is when we see computers controlling factories and
machining equipment, routing trains, or tracking mail. As machines moved from largely theory to
practice, well, there were more machines to go around than anyone would have guessed.
Alright, that brings us to the end of an, admittedly, pretty rambling episode.
Thanks for sticking with me here.
As far as conclusions, well, this is a classic case where the end actually opens up a lot of new beginnings.
1939's Nimatron is a wild machine,
especially given the era and context. It gives us another instance of independent evolution of electronic binary computers. It's one of those events that I think really shows how complex
history can get. This is the kind of thing that draws me into computer history in general.
We have enough details, events are recent enough, that we can use a very fine-toothed comb to pull out some wild things. There's something about binary, something about logic, and something
about electronics in the 30s that just clicked into place. It really makes me think that there
are more hidden binary machines out there.
1951's Nimrod complicates the narrative in another way.
You could call it an example of a dead-end methodology,
a special-purpose machine created just as general-purpose computers flooded the world.
In that capacity, Nimrod serves as an interesting way to talk about that transition.
Viewed in another way, Nimrod is a very early educational machine.
But no matter how you look at it, we're dealing with a fascinating computer.
Its story, just like the Nimatron's, really leaves me wanting to know more. What other kinds of special-purpose machines are tucked away in this transitionary period?
That, uh, I think may be a very fruitful avenue of
research for a later episode. Thanks for listening to Advent of Computing. I'll be back in two weeks
time with another piece of computing's past. If you like the show, there are a few ways you can
support it. If you know someone else who'd be interested in the history of computing, then
please take a minute to share the show with them. You can also rate and review the show on Apple
Podcasts.
If you want to be a super fan,
then you can support the show directly through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
You can find links to everything on my website,
adventofcomputing.com.
If you have any comments or suggestions
for a future episode,
then please feel free to
shoot me a tweet.
I'm @adventofcomp on Twitter.
And as always, have a great rest of your day.