Advent of Computing - Episode 91 - Whirlwind
Episode Date: September 18, 2022

Whirlwind represents a fascinating story of transition. The project started in the middle of the 1940s as an analog machine. As times changed it became a digital device. By 1951 it was perhaps the fastest computer in the world, filled to the brim with new approaches to design and new technology. It may have even been host to the first video game.

Selected Sources:

https://apps.dtic.mil/sti/pdfs/AD0896850.pdf - Report on MIT's storage tubes

https://sci-hub.se/10.1109/MAHC.1983.10081 - An interview with Jay Forrester

https://ohiostate.pressbooks.pub/app/uploads/sites/45/2017/09/retro-hurst.pdf - Screenshots and info about the Bouncing Ball

https://www.retrogamedeconstructionzone.com/2021/07/the-whirlwind-bouncing-ball-simulator.html - Play the Bouncing Ball Program for yourself!
Transcript
How would you describe old computers?
Now, I don't mean old machines from the 80s and 90s.
I mean really old computers.
I'm talking machines from the 40s and early 50s.
The real old school stuff.
The first words that I'd reach for would be big, expensive, and complicated.
But are those entirely fair descriptors? The big and expensive part,
I don't think anyone can really argue with. These early machines were huge. They also
cost a small fortune. On occasion, they could cost a pretty big fortune, actually.
Computers started as research machines, after all. We barely had the technology and know-how to breathe life into these new digital beasts. That's going to lead to big, expensive machines,
undoubtedly. But what about complicated? I think that this might be a bit of a misconception.
The first generation of machines were, relatively speaking, simplistic. They didn't have a lot of features. These were bare-bones affairs.
By comparison, modern computers are some of the most complicated devices ever built by humans.
I couldn't begin to understand all the layers of a modern microprocessor,
but an old machine like ENIAC only has so many moving parts to come to terms with. Now, I'm not trying to say that early computers are easy to understand.
The technology in use is totally different from anything we're used to.
The principles of operation are similar-ish, but early computers were designed for totally different tasks than modern machines. You're dealing with a different beast, plain and
simple. The saving grace here is that early machines just tend to be less complicated.
There are fewer components. It's just that each component happens to be physically large.
There are fewer features you have to wrap your head around.
Here's a fun example that I think illustrates my point.
Most machines in this first operational class performed serial operations.
That means that they could only really do one-bit math, adding a 1 or a 0 to another 1 or a 0.
Larger operations were built up from that basic building block.
Thus, the math circuitry is remarkably simple.
You can point to the handful of tubes for addition, for instance, and know what kind of data is flowing through them.
Today, you'd probably need a microscope to do that. I think that should give you an idea of
how alien early computers are, and also how much simpler they are. But there was a crossover point,
a step in the path where computers started looking more like machines we know.
Where speed and capability expanded.
Or, in a less charitable reading, where machines started to get truly complicated.
One candidate for that crossover point that I'd like to present today is Whirlwind.
Whirlwind.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 91, Whirlwind.
You could also call this episode, How I Learned to Stop Worrying and Love the Vacuum Tube.
I'd like to be able to say that today is different than my usual highbrow fare,
but who am I kidding? I'm on my usual stuff today.
This episode will be covering the development of the Whirlwind 1 computer, and specifically, I'm going to be working up to some controversies.
Some nerdfight-level controversies.
I want to believe I'm above a good argument,
but that's clearly not the case. Project Whirlwind, in general, gives us a fascinating view of the transition into the digital world. You see, Whirlwind was part of a Navy contract
that MIT was filling. That contract didn't say, build us a computer. The
Navy wanted to run bomber simulations for training pilots and conducting experiments.
MIT decided the solution turned out to be a computer. At first, this was an analog machine,
but that design was ditched in favor of a newfangled digital device. The project started
in the late 40s, with the first numbers being crunched maybe around 1951. There's a bit of
variation because Whirlwind changed a lot during development. So today will be in the extreme early
days of the digital. The final design of Whirlwind is, on its own,
really neat. It was one of the first machines to use bit-parallel operations. That is,
Whirlwind could run multiple one-bit operations at a time. The result was that adding two large
numbers, for instance, only took one processor cycle. Nice and fast. The speed was leveraged
to allow for real-time computing. Whirlwind was interactive during an era of batch processing.
Now, the interactivity part here I find really fascinating on an ideological level.
There's this enduring view of early computers as secret altars to electronic gods,
that only special practitioners could ever touch such a machine. A programmer, despite their deep
connection to the computer, had to pass off programs to these digital clerics in order to
facilitate actual execution. You had to have your code run in a queue. Whirlwind, at least in theory,
subverts part of that vision. I think that's something that we just have to examine. But
I did say there would be controversy. The plan is to conclude this episode with a dive into this.
Whirlwind was the first computer to use magnetic core memory. As a fan of complicated
history, this definitely perks up my ears. There are multiple claimed inventors of the magnetic
core. One of them is Jay Forrester, the researcher in charge of Project Whirlwind.
There are maybe three or even four others that claim independent invention of the technology.
So we get a nice mess to untangle here.
I covered this a while back when I did a full episode on magnetic core memory.
I landed on Forrester as the first person to push magnetic cores into practical use,
not the real inventor of the technology.
But what did that practical use
look like? Why did he start using this relatively new technology? And will my old conclusions hold
up under closer scrutiny? There's one other matter that we will need to investigate. Games.
There were a number of video games developed for Whirlwind.
I think that makes good sense on the surface.
I mean, it's an interactive computer.
What else is it really good for?
The catalog even included a recently discovered and restored Blackjack simulator.
But we also have claims that Whirlwind had the first graphical computer game.
Which, big if true. But is there any meat to this argument? Well, let's see if I can dig anything up. Really quickly, before we get
into the actual episode proper, I have my usual announcement. Notes on Computer History is coming
along nicely. The first few articles are in editing
right now, which is really exciting. I have a pretty good team formed for the project. But that
said, we're still looking for more authors. We need quite a few more authors. So if you have any
interest in writing about the history of computing or any computer-related topics, then shoot me a line or
go to history.computer. It's my favorite top-level domain these days. Anyway, as far as who we're
looking for, that's a frequently asked question. The answer is anyone who wants to write about
computer history. We're trying to cast a wide net and really open up
the journal to anyone interested in the topic. So if that sounds like you, which if you're listening,
I think that is you, go ahead and shoot me an email or go to history.computer and read more
about the submission process. So with that out of the way, let's get into today's episode and
our discussion of Whirlwind.
Before we get too deep into Project Whirlwind itself, I want to set the stage.
It's very important for us to remember that computers were initially developed during World War II.
They came of age in the war's shadow.
That's something that's easy to overlook, but it's central to the story of the computer.
The year Project Whirlwind really starts going is 1947.
At that time, World War II is barely over.
Hostilities ended in the fall of 1945 following the atomic bombings of Hiroshima and Nagasaki
and the subsequent surrender of Japan.
That's traditionally the end date for the war. However, the war wasn't fully over for a number of years.
You could argue that its effects have never really ended, but that's a different discussion.
The paper trail for World War II terminated with the signing of the Paris Peace Treaties in early 1947,
and then the Treaty of San Francisco in 1951. Those are the documents that officially wrap
up the conflict in Europe and the Pacific, respectively. The Allied coalition that ended
the war was breaking apart even before the documents were signed. Hostilities between
Western powers and the USSR were mounting.
This brewing conflict, the so-called Cold War, is also more nebulous. There isn't necessarily
a start date, but there are milestones on its trajectory. In the fall of 1949, the USSR
detonated their first atomic bomb. For the US, this was a horrifying revelation, but
not really an unexpected turn of events. Other countries had been pursuing atomic weapons for
years. Notably, the Manhattan Project, the US effort to create a bomb, was started after
intelligence sources claimed that Nazi Germany had their own atomic weapons program.
America's days as the only atomic superpower were numbered from the very start.
So, what does all this have to do with our story?
Well, there are certain realities about atomic weapons in this early era
that influence the direction of military research and development.
The radioactive nature of atomic bombs is more of a side product of the main show, albeit
this side product can be just as devastating and much more devious.
The big draw here is that atomic weapons pack a higher energy density, more bang per pound,
so to speak. The first atomic bombs packed the same power as thousands of conventional bombs. Now, I'm not trying to linger or be grisly
here, but I do think that a comparison is important. We should always remember how dangerous these weapons are. And I think that that's something
we need to keep in mind, the actual implications of packing more power per pound, and how that
changes what weapon systems are like and what we need to do to deal with weapon systems.
In March of 45, the U.S. launched a conventional bombing raid on Tokyo.
The raid was composed of 279 B-29 bombers.
Each of them carried thousands of pounds of traditional bombs.
The estimated death tolls were over 100,000 lives.
The bombing of Hiroshima occurred just months later.
This entailed one plane with one bomb supported by three other planes. That one bomb had a higher death toll, upwards of 140,000
lives. That's discounting the lingering deaths from radiation exposure. But just looking at the cold hard numbers, it's plain to see that
an atomic weapon is more effective. This isn't the good kind of efficiency, to be clear. This
is the kind of efficiency that has reshaped the world for the worse. Most of the casualties from
both of the raids were civilians. A key difference is that the atomic bomb was better designed to eliminate
bystanders. The destructive density meant that conventional weapons platforms were made obsolete.
It was now possible to replace hundreds of bombers with a single plane. The delivery system
was totally different, and as such, defenses also had to change. The US was now in this strange
position where they needed an all-new slate of aircraft in order to adapt to the post-war
reality. This was something that would have been planned for because the Manhattan Project had been
ongoing for years. This was a reality that the US had to have been working towards for years at this point.
During World War II itself, a wide array of new planes were rolling off runways, but as the Cold War set in, a new flock of birds were entering their design phase.
A subsidiary problem became training.
You know, since planes can't fly themselves.
So how do you train up a lot of new pilots to use a lot of new planes? And how do you do so without crashing a lot of state-of-the-art
airplanes and killing a lot of pilots? The main option at the time was a mechanical device called
the Link Trainer. Now, don't be fooled. Link here isn't a descriptor, it's a name.
The trainer was designed by Ed Link. The best way I can think to describe the Link Trainer is
a very fancy carnival ride. This definition is helped along by the fact that early trainers were, in fact, rented out as carnival sideshows.
It's essentially a miniature airplane cockpit that a trainee sits in.
Some trainers even have these cute little wings and a short tail.
The entire rig is mounted on a set of four bellows that can angle the plane in all directions.
As the pilot moves around the flight yoke,
the fake plane banks and climbs just like the real thing. Link had come up with the idea for
this trainer in the 1920s, shortly after earning his pilot's license. The practice at the time was
to use specialized trainer aircraft. These were usually planes with two seats, one for the instructor who would be a
skilled pilot, and one for the student, the newbie. It works, but it's expensive and it's really
dangerous. A crash in one of these trainers could kill a skilled teacher as well as the student,
plus it would destroy your very expensive aircraft. That's not a good situation.
The solution, in Link's mind, was to devise some way to train students on the ground.
Prior to learning to fly, Link had worked in his family's organ and piano company,
so he drew on somewhat musical technology in constructing his teaching device.
I think that's why we end up with a somewhat whimsical machine.
Link was granted a patent on his trainer in 1929, and from there he used it as a tool in his own flight school.
The trainer languished in relative obscurity until the Second World War.
It was quickly adopted by the US military as a tool for getting fresh
pilots up to speed quickly. This was also when the trainer was further complicated with the
addition of instruments. Planes in this era were becoming more complicated in general,
and the military increasingly needed to train pilots to fly by instruments. That is, to be able to rely on the gauges and
dials that peppered the cockpit. This is essential for controlling larger and more complex planes,
but also for flying in poor weather conditions. You can't always rely on eyes and a feel for
flying alone. The Link Trainer saw large-scale adoption, but it wasn't the only machine that filled this
niche. Even companies like Bell Labs got in on this training business. But there was a problem
with these mid-century flight trainers. They were purely mechanical. Once built, a mechanical trainer
could only really simulate a single plane. You couldn't flip a switch and go from a bomber to
an interceptor. So if you wanted to train a pilot on five different planes, you needed five different
trainers. That's not the most efficient proposition. This was a known issue, but there was a more
interesting problem related to the limitations of mechanical trainers.
Luis de Florez, at the time a commander in the US Navy, was the first to attempt to tackle this
issue. De Florez had a vision, a configurable flight trainer. A trainer that could be used to
simulate any kind of plane. This dream trainer wouldn't just be used to get new pilots up to
speed, but could also be used as a tool for research and development. You see, with a
configurable trainer, you don't have to stick to real planes. The idea is that you'd only need a
set of parameters that describe how the plane flies
and handles.
Those could be acquired using methods like wind tunnel tests.
Thus, you could simulate a plane that was still in development.
You could get feedback from pilots before the plane was actually manufactured.
Design flaws could be caught earlier in the process, thus saving money.
You could even, in theory, train up pilots for a plane that wasn't even being built yet.
All of this sounds fantastic, in theory at least.
Mechanical trainers just weren't up to this task.
So de Florez went shopping for a solution.
This is where Project Whirlwind really
starts. The Navy's Special Devices Division, headed by de Florez, contracted out with MIT to design
this new machine. It was informally called a Trainer Analyzer. The contract was inked in early 1944, but things were slow going for the first few years.
Project Whirlwind starts out as part of MIT's Servomechanisms Lab.
This was basically a tiny war department within MIT.
This was the lab where MIT researchers developed new weapons control systems.
It made for a good location for this new trainer project.
At first, the term project, well, that may have been a little bit much, though. Whirlwind started
out as a single researcher, one Jay Forrester. The early days of Project Whirlwind are a little
obscure. Most of the information, notes, and memos are sequestered
away in the MIT archives. That takes a little bit of time to get to. But luckily, there have
been some historians that made the trip out to campus. That saves me a lot of legwork here,
and reinforces the fact that Advent of Computing is really built from out-of-print books and archival records.
The out-of-print book in question today is Project Whirlwind: The History of a Pioneer Computer. The text was written by Kent Redmond and Thomas Smith. Most of the finer points of
this section come from that text, which references directly to archival sources. So this is as good
as it gets. At this point in the story, something like October or November of 44, Whirlwind was
really just that small project. Forrester was in the feasibility stage. He was figuring out if this
magic trainer analyzer could even be made. It would take nearly
a year for Jay to arrive at the answer. It turned out the answer was no, not with current technology.
Such a device was basically impossible to construct. Forrester worked out that it would
take around 33 equations to properly simulate the handling
of an aircraft. There was a similarly high number of parameters and variables required.
The trainer would need some way to calculate each of these equations in real time,
feed the results to the pilot, and then take more inputs. Those inputs would signal the start of another execution, write, read kind of loop.
That sounds an awful lot like a computer system, but digital computers didn't really exist yet.
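To make the shape of that loop concrete, here's a minimal sketch in Python. None of this is Whirlwind code; the flight model is a stand-in for the roughly 33 coupled equations Forrester estimated, and the function names, state variables, and 50-millisecond frame budget are all made up for illustration.

```python
# Hypothetical sketch of the read-compute-write loop a trainer analyzer needs.
# The state update is a stand-in for the coupled flight equations.
import time

def read_pilot_inputs():
    # Placeholder: a real trainer would sample the yoke, pedals, and throttle.
    return {"stick_pitch": 0.0, "stick_roll": 0.0, "throttle": 0.5}

def step_flight_model(state, inputs, dt):
    # Stand-in for solving the equations of motion each frame.
    state["pitch"] += inputs["stick_pitch"] * dt
    state["roll"] += inputs["stick_roll"] * dt
    state["speed"] += (inputs["throttle"] - 0.5) * dt
    return state

def drive_cockpit(state):
    # Placeholder: would move the bellows and update the instruments.
    pass

state = {"pitch": 0.0, "roll": 0.0, "speed": 100.0}
frame_time = 0.05  # 50 ms budget per update, an arbitrary illustrative deadline
for _ in range(10):
    start = time.monotonic()
    inputs = read_pilot_inputs()
    state = step_flight_model(state, inputs, frame_time)
    drive_cockpit(state)
    # The whole read-compute-write cycle has to finish before the next frame.
    time.sleep(max(0.0, frame_time - (time.monotonic() - start)))
```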
This is where we get to run down the list of the usual non-programmable machines.
Was there a solution Forrester could draw on? Bell Labs had
some promising relay computers, but those weren't quite up to the task. Bell's machines at this
point were just slow and they weren't really programmable. There were some simple electric
analog computers, but those were basically fancy calculators.
And then we have the vast realm of mechanical analog computers.
MIT actually had one of the most complicated of these machines on campus,
Vannevar Bush's differential analyzer.
Forrester's initial pass at the trainer used a hybrid machine.
Part mechanical, part electronic, but all analog.
While this was certainly some kind of approach, it proved to be not a very viable one.
Analog computers just weren't flexible or fast enough to handle the rigors of whirlwind. For a simulation
to be believable, it would need to give almost instant feedback. As a pilot moves the flight
stick, the trainer needs to respond. You don't want to be in a situation where you turn the wheel,
you wait, and then you bank and get knocked out of the thing.
Forrester was finding that analog calculations just weren't fast enough for that.
There is also the problem of accuracy.
A small adjustment on the plane's yoke can have huge implications on the craft's trajectory.
This is impacted by current system variables.
By this I mean things like the velocity of the aircraft, headwinds, throttle level, and so on.
It's a very complicated and self-referential math system.
Modeling this type of system relies on feedback, which means that small inaccuracies or errors will build up. A single equation may be calculated slightly wrong at one step.
It may be off by 0.01 to start with.
That bad result will get folded into other equations,
which in turn will take on a measure of error.
Issues will compound.
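Here's a toy Python illustration of that compounding. The 5% feedback gain and 1% per-step error are arbitrary numbers, chosen only to show how quickly a small, repeated error drifts away from the exact answer.

```python
# A toy illustration of how a small per-step error compounds in a feedback
# calculation. The numbers are arbitrary; the point is the growth.
value_exact = 1.0
value_noisy = 1.0
error_per_step = 0.01  # each pass introduces a 1% relative error

for step in range(1, 11):
    value_exact *= 1.05                          # some feedback gain
    value_noisy *= 1.05 * (1 + error_per_step)   # same gain, plus the small error
    drift = abs(value_noisy - value_exact) / value_exact
    print(f"step {step:2d}: relative drift = {drift:.3f}")
# After 10 steps the drift is roughly 10%, even though each step was only off by 1%.
```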
Since the first pass at Whirlwind was analog,
that introduced all kinds of spots for these errors to occur.
In an analog computer, numbers are represented by some continuous value,
like a voltage on a wire or rotation on a shaft or a gear.
A fluctuation in that value can occur for
any number of reasons. Pulleys or gears can slip, causing rotational values to be thrown out of whack.
An electrical surge can change your precious voltages, or a part that has a 0.1% tolerance might actually have been manufactured to a 0.4% tolerance.
Thus, your calculations are ruined because your physical materials are faulty.
Putting everything together, I think it's fair to say that Forrester had found a use case
that pushed a little bit too far past the frontier of analog calculations.
Analog can be really slick for certain applications,
but solving 33 highly coupled equations and handling real-time feedback,
well, that's not exactly safe territory for analog devices.
Nonetheless, Forrester was able to show that a trainer analyzer may be possible if a couple new analog computing circuits could be devised.
This was promising enough for Whirlwind to secure more funding.
This is also around the time that World War II was coming to its conclusion,
which put the Servo Lab in a bit of a weird location.
As the Project Whirlwind book describes it, the cessation of hostilities caused many companies and labs to go back to peacetime projects.
There weren't the same incentives to keep up wartime production when there was no more war.
This benefited Forrester and Whirlwind because, by and large,
the Servo Lab was happy to keep their contracts current.
The end of World War II had another effect.
It allowed certain classified projects to be brought out into the open.
In August, the Manhattan Project,
at least in part, was unshrouded. It was impossible to hide the effects of two nuclear bombs, after all. MIT's Differential Analyzer, Harvard's Mark I, and the University of Pennsylvania's ENIAC
all started to come out from the shroud of classification. While not necessarily topics of public debate,
this loosening of wartime classification
meant that more cross-institution collaboration could occur.
Compartmentalization was starting to break down,
and there were benefits to that.
It didn't take long for Forrester to hear about this new ENIAC thing.
In the fall of 45, a colleague, Perry Crawford, informed Forrester about the machine.
The timing here worked out pretty well.
That very month, there was a planned conference to discuss the state of computing and mathematical aids,
which would include discussion of the ENIAC.
In Redmond and Smith's text, it sounds like there was an almost religious conversion inside Forrester. Once he learned of electronic digital
computers, he was hooked. All the timing here just worked out really nicely. The dropping of the
classification veil, the end of the war, the continuation of MIT
contracts, and increased funding for Whirlwind. These forces combined to lead to a new direction.
Analog was dropped, the existing project was scrapped, and Forrester started pushing his
new team into the digital realm. One reason for this transition was what we've already covered. Analog kinda
sucked. It just wasn't up to the task. So if Whirlwind required all new and improved analog
devices, then why not just jump to a different technology? It would end up taking the same kind
of effort. This was pure R&D after all. There was also the pull factor of a general-purpose
computing machine. This thinking is best shown by how Forrester set to work staffing the rebooted
Project Whirlwind. Forrester himself was a product of the electrical engineering department at MIT,
so it would be reasonable to assume he just tried to hire on promising
electrical engineers. But that's not the case. Forrester pulled talent from a slightly wider
range of sources. He did grab up some engineers and electrical engineers, but he also courted
physicists and mathematicians. One issue with analog machines that I don't see discussed, at least not often,
is that these devices weren't really programmable. I mean, it kind of goes without saying, right?
You could configure an analog computer, but that entails rebuilding or rewiring large parts of the
device. Physical artifice means you lack a certain flexibility. That's the root of this
issue. Some early machines suffered from a similar problem. ENIAC is the prime example here. To
program ENIAC, you had to swap around a network of patch cables. You had to rewire the machine.
ENIAC was built at least somewhat in the image of earlier analog devices.
It was kind of a hack job, so there is a lineage of inflexibility.
Setting up a problem on ENIAC might take days, weeks, or even months.
The kicker here is that while these machines were being configured, they couldn't be used for anything else.
That setup time required full access to the computer.
Put another way, it wasn't easy to allocate time on these machines.
You couldn't share resources with other projects in any real way.
A real computer, an electronic digital computer with stored programs,
could be shared much more easily.
That's why Forrester's drive to include more disciplines in the project is especially interesting and especially important.
He was planning for a machine that could be shared as a larger resource. Whirlwind was more than just
a trainer analyzer. It was moving beyond just a weapon of war, but to be fair, it would still be owned in part by the U.S. military.
Thus, we enter 1946 and the total rebrand of Whirlwind.
This is also where MIT's machine starts to differentiate itself.
Forrester and his team are still working towards
that Navy contract, but to do so, they're using totally new technology. This means that the
overall goals of the project are still the same. The new digital whirlwind needs to be able to
quickly solve all the math needed for flight simulation. It also needs a way to take inputs from a simulator
and provide outputs, and that all has to be done pretty quickly. The best comparison we have to go
off as far as prior art is EDVAC, the theoretical successor to ENIAC. This was kind of the cool new computer design in the immediate post-war years.
At the time, EDVAC was really just a draft report, but still, little steps.
The big innovation here was the notion of stored programs in a shared memory space.
You used encoded instructions to control the computer, and those instructions
were in the same memory that stored data. Code and data were, to the computer, the same.
The EDVAC report was really popular when Forrester first got his hands on it. In the coming years,
a handful of vacuum tube-based computers would be built based off the design in this
report. I covered SEAC a while back, a vacuum tube machine built by the National Bureau of Standards.
That's just one such machine. It copied the overall architecture presented in the EDVAC draft,
but that was only part of the connection. This report also outlines some of the technology
used to implement the spec, or at least make recommendations. It's all very abstract,
but you can't get away from some of the physical realities. The EDVAC report talks about memory in
terms of delay elements. At the time, that could only be practically implemented using
mercury delay lines. Logic circuits are described using these abstract E elements, which, you know,
they're vacuum tubes. I guess they could be relays, but those wouldn't be fast enough. The draft's author, John von Neumann,
is really trying to stay abstract. It's just that in 1946, there weren't any alternative options as
far as physical implementations. So it's good to be abstract. That's important for future work.
But there aren't other options. That is, unless you want to get really creative.
You see, EDVAC-style computers suffered from a few bottlenecks.
One is the aforementioned delay line memory.
Mercury delay lines are, for all intents and purposes, some high strangeness.
It's the kind of technology that is nearly indistinguishable from magic,
but the magic just isn't that good. It would be like watching a really convincing magic trick
where someone pulls a bunch of handkerchiefs out of their mouth. Neat, perhaps inexplicable,
but you could do better. This kind of memory works by sending acoustic waves down a long tube filled with mercury.
These waves are generated and received using polished crystalline transceivers bonded to electrical contacts.
That's the magic part.
These are literally sound waves in a liquid metal that are propagated thanks to specifically cut and honed crystals.
The lame part is that mercury delay lines, and really any delay lines, are a sequential memory.
You can't ask a tube of mercury for the value at a certain address in RAM. You have to have specialized circuitry that waits for the proper
time to make reads and writes. Essentially, you're storing data as these acoustic wavefronts.
Sometimes you get lucky and ask for data at the right time, right as the wavefront is hitting
that receiving crystal. Other times, you have to wait for the wave to propagate down the
entire tube. As a consequence, delay line memory tends to be slow, it tends to be complicated,
and it has this weird quirk where a read or write operation doesn't always take the same amount of time. The last bit is perhaps the most important and most insidious.
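A rough sketch of that timing quirk, with made-up numbers: the circulation time, word count, and helper function below are purely illustrative, not the specs of any real delay line.

```python
# A rough sketch of why delay-line access time varies. Data circulates as
# acoustic pulses; you can only read a word when it passes the transducer.
circulation_time_us = 1000   # one full trip through the line, in microseconds
words_per_line = 32          # words stored end-to-end in the line
slot_us = circulation_time_us / words_per_line

def wait_for_word(target_slot, current_slot):
    """Microseconds until the requested word next passes the read crystal."""
    slots_away = (target_slot - current_slot) % words_per_line
    return slots_away * slot_us

print(wait_for_word(target_slot=5, current_slot=4))   # lucky: about 31 us
print(wait_for_word(target_slot=5, current_slot=6))   # unlucky: nearly a full trip
```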
What happens if you need tight timing?
Well, you're kinda just out of luck.
In this case, the actual implementation of memory will affect how your program functions.
And I'm not just talking about performance, but fundamentally how your program
will execute. In many cases, that effect is small or manageable. In the case of a feedback system
like an aircraft trainer, this effect would definitely be noticed. You could hit a button
and the response could come 10 milliseconds, 20 milliseconds, maybe 100 milliseconds later.
There's no way to tell how long it will take. There was an alternative, but it was even newer
technology. In a 1947 project report, Forrester explained it like this. It's a bit of a longer
passage, but I think it nicely sets up where we're going.
Quote,
Storage of controlling orders and numerical quantities is, of course,
one of the most important functions of the digital computer.
Storage for the Whirlwind 1 and Whirlwind 2 computers is being planned in the form of electrostatic storage.
By proper control of electrode voltages and beam
current, it is possible to store electrostatic charges on the dielectric surface and to later
read the polarity of these charges in the output circuit. Storage for satisfactory periods of time
has been observed and good output signal level has been obtained. Many research problems
principally associated with the control of secondary electron redistribution still remain
to be solved. End quote. You see, Project Whirlwind was investigating a type of memory that
we really haven't talked about on the show yet. This is a technology called electrostatic memory. Now,
there's a good reason for this omission. Electrostatic memory is relatively short-lived,
and it seems like it was relatively disliked. From accounts I've read, it seems like a finicky
and annoying medium to work with. But it did have its place, so I think it's a good spot to take a look at
these tubes. The big brand name in this field was the Williams tube. This was the most common
type of implementation, so it bears at least a little discussion. Now, any kind of computer
memory has to leverage some memory effect, something that can change
state and then stay in that state for some amount of time.
Delay lines leverage, well, a delayed wave propagating down a medium.
Magnetic core memory leverages the magnetic hysteresis effects of certain types of materials.
The Williams tube and all of these electrostatic tube memories leverage the electrical properties of glowing
phosphor on glass. If you've ever used an old phosphor CRT display, then you'll be familiar
with this effect. It works on green, amber, blue, or any other color display.
These kinds of CRTs tend to show after images,
a glow that remains for a fraction of a second after the image changes.
That's a small amount of data stored in a medium, so it's something that can be scaled up and automated.
The actual effect here is a little subtle and more than a little strange.
A CRT is a somewhat simple device.
It's a cathode ray inside a tube.
A cathode at the small side of the tube spews electrons out towards the larger end of the tube.
That large part, the actual screen we look at, is coated on the inside with phosphor. When the beam strikes
that coating, a region of the phosphor starts to glow. This happens because the phosphor has
been excited. Its internal energy state has been raised thanks to some free electrons.
To lower back down to its base state, which it has a compulsion to do, it must release energy. And it does so in
the form of photons, light. That release takes a little bit of time, so this makes for a very
simple memory element. As with most memory, writing is simple. A Williams tube stores data
as a grid of dots, each dot representing a single bit of storage. To write
a bit, all you have to do is blast that electron beam in the proper location. Steering is accomplished
by what's known as a magnetic yoke, just like in a normal CRT. Reading data is a little more
complicated. A dot on a CRT carries a slight positive charge. This is a little
contradictory to me since, you know, a dot is produced in the first place by blasting a spot
with electrons. Electrons happen to be negatively charged. From what I've read, this has to do with
the movement of electrons, and the charge is slight, so we just have to stick with this.
What's notable here is that the surrounding regions aren't charged.
This is thanks in part to the fact that glass is an insulator, a dielectric medium.
The tube itself isn't spreading charge around.
This means that when you write a dot to the screen, you're actually creating a small change in electric charge.
A very tiny changing electric field.
And it's only in one spot.
Now, one rule that I do remember from my university days is that a changing electric field induces a magnetic field, and vice versa.
So here's the trick.
You put a metal sheet in front of the CRT.
This cover goes right over the part you'd normally look at.
This sheet is, of course, conductive, as all good metals are.
Here's what's cool.
The magnetic field created by flipping a dot on can pass through the tube's glass.
Glass is a dielectric material, but it's not a magnetic shield.
It doesn't stop magnetic fields.
Anyone who's played with a magnet in front of a TV can tell you that much.
The magnetic field here is transient.
It changes.
So we get to apply our rule. A changing magnetic field will create an electric field. So we get something that may sound underwhelming.
When you plot a dot on a Williams tube, it creates a small current on its readout plate.
So in practice, it really just looks like you're sending an electron
down a tube and out another wire. A read operation takes advantage of this small induced current.
To read a location on the screen, you blast an electron beam near the spot you want to read,
but not precisely on it. This has the effect of adding some negative charge to
the region around the spot you're trying to read. If there's already a spot there, this new blast
will destroy the small positive charge. In other words, it induces a change in charge at the spot
you're trying to read. That change is a well-defined quantity. It's something the tube is tuned for, so you
know when this phenomenon occurs. That charge gets picked up by the readout plate. If there's not a
spot at the location you're reading, then you don't get a flip in charge. You don't get a
characteristic bump that you can record. So, no current on the plate. I did tell you this was complicated, right?
Well, there's one more complication with the Williams tube. The hysteresis effect here is
pretty short-lived. Phosphor doesn't just keep glowing for very long after it's struck.
So, you have to have a refresh cycle.
At set intervals, the circuit needs to read each location in memory.
If it's a 1, then it has to write that value back to the CRT.
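As a minimal sketch of what that refresh cycle looks like, here's a toy model in Python. The grid size and the read/write helpers are invented for illustration; a real tube's read is destructive and analog, which this glosses over.

```python
# A minimal sketch of the refresh cycle an electrostatic store needs.
GRID = 32  # a 32 x 32 grid of charge spots, 1,024 bits per tube

# Fake storage: True means a charged spot (a stored 1).
spots = [[False] * GRID for _ in range(GRID)]
spots[3][7] = True  # pretend a program wrote a bit here

def read_spot(x, y):
    # Reading is destructive in a real tube; here we just return the value.
    return spots[x][y]

def write_spot(x, y, value):
    spots[x][y] = value

def refresh_pass():
    # Walk every location; anything that reads as a 1 gets rewritten
    # before the stored charge decays away.
    for x in range(GRID):
        for y in range(GRID):
            if read_spot(x, y):
                write_spot(x, y, True)

refresh_pass()  # this has to run over and over, stealing time from real work
```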
This was a point of frustration for Whirlwind. The rewrite cycle meant that a normal Williams-style tube was a slow device.
It was truly random access.
You could request any location in memory,
but it had to be refreshed constantly. That meant a big hit to performance, which
Forrester could not abide. There were thoughts of using standard tubes, but that appears to
have been dropped pretty quickly. Whirlwind was all about fast real-time responses. That was the central design
goal. And these tubes couldn't provide that. So where does this leave MIT's computer? Well,
I think this is where I need to deal with a small question that's easy to ask. Why not use vacuum
tubes for memory? You can make a flip-flop circuit using vacuum tubes, which is a one-bit
memory element. So why not just use a lot of those? I think this is an important tangent
since it illustrates another issue lurking just below the surface. No one really wanted to use
vacuum tubes for anything. Vacuum tubes were hot and unreliable. That much is kind of
notorious if you're at all familiar with the history of computers. But there's something else
in this early era that hurt even more. They weren't yet specialized digital vacuum tubes.
This is something I never thought about until I picked up my copy of
Redmond & Smith. Vacuum tubes were analog audio devices. They were specialized to work in that
analog application. They were meant as analog amplifiers and analog rectifiers. They were also built for relatively short-term use. A radio wasn't
usually running 24-7. If it was and a tube burnt out, well, that's not a big deal. A radio might
have a handful of tubes that's easy to diagnose and replace. And if a tube was starting to wear
out, the radio probably wouldn't stop working. It might just get quieter or a little fuzzy.
For a computer, you need to drive tubes hard and for a long time.
These early vacuum tubes were built for this analog application, which practically meant they were designed for a range of voltages.
Most of the time, a radio operates within the middle or lower part of that range,
so the tube isn't driven very hard. You don't always crank a radio up to 11 and just leave it
there. So during normal operation, electrodes don't really break down. There just isn't much
load. Well, in a computer, you're either all the way on or you're all the way off.
You also have to cycle from fully on to fully off in rapid succession.
We're talking pulses that last microseconds.
That is the worst case scenario for these kinds of devices.
The primary failure mode here was something called cathode poisoning.
It's not like tubes physically burst.
Thrashing tubes causes chemical breakdown of their internal contacts.
The heat and current inside the tube helps to draw out impurities like oxygen and silicon compounds,
which degrade the metal of the cathode.
That makes the cathode more resistive. It causes
a tube to output less power and thus become useless. In a digital system, you're looking for
some set voltage as your transition threshold. But as a tube gets more resistive, it can no longer
hit that voltage. So it just doesn't work. It stops.
A flip-flop takes two vacuum tubes.
That's two points of failure.
Multiply that by the number of bits you want to store.
For a kilobit, you get 2,048 points of failure.
Multiply that by 8 for a kilobyte.
That is not practical.
You'd have way too much downtime. That's not to mention power consumption and heat. Vacuum tubes kinda just sucked. Now, that's not to say there
was no tube-based memory. Registers were usually implemented using vacuum tubes, but we're talking
bytes of data, just a small number of tubes here.
Another factor is, of course, cost, but I personally think that's less interesting to
explore. Vacuum tube-based memory requires a lot of vacuum tubes, a lot of support components,
and a lot of wiring, so of course it's going to be expensive. So of course you want to look for
a cheaper solution. But that still leaves Forrester and co. in a bad position.
They can't use flip-flops, they can't use Williams tubes, and they can't use delay
lines.
So what to do?
Electrostatic memory was the closest workable solution, and the nerds over at MIT decided
to take a swing at improving the Williams tube.
At the time, that seemed like the only viable course of action.
There's an interesting report made to the Navy covering this part of Project Whirlwind.
I'll throw a link in the description if you want all the details.
Essentially, the MIT Radiation Lab was brought in to try and make a custom electrostatic memory tube.
The goal was performance, to somehow arrive at better read and write speeds.
These custom tubes would eventually get as fast as 10 milliseconds per read.
That's nothing to sneeze at, but that's still measured in bits.
A quick conversion shows that you could read 12 bytes in a second, assuming modern 8-bit bytes, so it's still not very fast. Despite refinement,
there was an inherent problem with all this tube-based storage, even if the radiation lab
could save the day with a faster tube.
This brings us to the second key aspect of Whirlwind that we need to discuss.
MIT was building a bit-parallel machine.
That means that instead of working on a single bit at a time,
Whirlwind was designed to operate on multiple bits simultaneously.
Why?
Well, it all comes down to the core tenet of Whirlwind, speed.
Some early machines used a bit serial architecture. That is to say, they operated on data one bit at a time. Let's look at addition, for example. That's the simplest case. Let's say you want to add two 16-bit numbers. In a bit
serial computer, you would load up the first bit of each number into your arithmetic logic unit,
or ALU. That's the circuit that usually does the math. Then you would run the operation
and get back a result plus a carry flag. Next, you'd load up the next bit and run the same operation.
This would have to be done 16 times in order to complete a full addition.
One way to discuss the speed of these old computers is in terms of cycles.
How many internal clock ticks it takes to run a complete task.
Each basic operation inside the machine takes a cycle, so giving a number of cycles for an algorithm gives a rough idea of how complex a job is.
In this hypothetical bit serial machine that I've described, one bit addition takes one cycle.
That's the basic operation here. So adding two 16-bit numbers would take 16 cycles. The machine's clock would have to tick 16 times, with each tick
firing off a single one-bit operation. It should be clear to see that bit serial is pretty inefficient.
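Here's a short sketch of that bit-serial process, with the cycle counting made explicit. The word width and the test numbers are arbitrary; the point is the same one-bit adder being reused sixteen times.

```python
# A sketch of bit-serial addition: one single-bit add per "cycle",
# with the carry chained from cycle to cycle.
WIDTH = 16

def bit_serial_add(a, b):
    result = 0
    carry = 0
    cycles = 0
    for i in range(WIDTH):            # one pass per bit, low bit first
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry     # the one-bit adder
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
        cycles += 1                   # each one-bit add costs a clock tick
    return result, cycles

total, cycles = bit_serial_add(1234, 5678)
print(total, cycles)   # 6912, 16 -- sixteen cycles for one 16-bit addition
# A bit-parallel machine like Whirlwind does all sixteen one-bit adds at once,
# so the same addition takes a single cycle.
```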
So why would a machine use this design?
Simple. Simplicity.
My hypothetical serial machine here has a radically simplified ALU.
Every math operation deals in single bits, so I only need a one-bit adding circuit. Same goes for my subtraction,
multiplication, and maybe even division circuits. All in all, this machine only requires a handful
of vacuum tubes to flesh out its mathematical functions. That's pretty good design. You can
extend this design out to memory access. A fully serial machine would only be able to fetch data one bit at a time.
That's slow and inefficient, but it means that I don't need many wires.
I don't need big, wide buses or complicated multiplexing and demultiplexing.
I just need a single circuit for moving one bit from a memory address into a register. Sure, it's slow,
but it's really simple to implement. I've seen claims that all early computers were serial
machines, but that's actually laughably wrong. This comes down to a problem with applying broad
labels to this pack of early computers. ENIAC, although it was something of a
strange architecture, had some parallel parts to it. I mean, if we want to get really old school,
then Babbage's analytical engine included parallel operations at a basic level in its design.
When you say that early computers were serial, however, there is one big one that matters.
EDVAC.
The initial designs, the draft spec that many other machines worked off of, called for a
bit-serial machine.
Now, we don't really use bit-serial architectures anymore.
Most computers are bit-parallel.
That means that instead of working with one bit at a time, they'll work with a set of bits all at once. Instead of adding two
16-bit numbers one bit at a time, they'll just use 16 smaller adder circuits to run all those operations in parallel. Therefore, it takes one cycle to run a 16-bit addition.
Just as an aside, this is kind of related to the whole bit-ness designation used to describe
computers. If you have, say, a 16-bit computer, that means you can probably process 16 bits in
parallel when you do math operations or memory access.
I've been sticking with the addition example here because, at least in my mind, that's the most simple manifestation of bit parallel design.
However, that's just one example.
Forrester was pursuing a parallel architecture because he wanted to find a way
around performance bottlenecks. Parallel math operations are far more efficient than serial
operations, but that's only one of many tasks a computer has to perform. To fully take advantage
of a fancy parallel ALU, you need to be able to move data around in parallel. This can kind of fade
into the background, but when we talk about buses having a width, that width exists because they're parallel buses. You need a wire for each bit to travel down. That leads to
some interesting implications for memory. So let's go back to MIT's memory tubes.
Electrostatic memory is, by its very design, a serial storage device. A tube can only read
one bit at a time. You can only write one bit at a time. For a fancy parallel computer,
that represents a bottleneck. The way around that was, well, just to use more tubes.
This is a strategy called banking.
Not banking as in saving up money,
but banking as in constructing banks of identical devices.
Whirlwind was a 16-bit computer,
so a memory bank was made up of 16 identical electrostatic tubes.
A memory operation would then source a single bit from each of these tubes.
So while the actual device was serial,
a bank could be treated as a parallel device.
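A minimal sketch of the banking idea, with a Python list standing in for each serial storage tube. The word width matches Whirlwind's 16 bits, but the address count and helper names are made up.

```python
# A sketch of banking: sixteen bit-serial storage devices, one per bit of the
# word, all addressed together to produce one 16-bit value.
WORD_BITS = 16
LOCATIONS = 256  # addresses per tube, an arbitrary illustrative size

# One flat bit store per bit position in the word.
bank = [[0] * LOCATIONS for _ in range(WORD_BITS)]

def write_word(address, value):
    # Each tube stores exactly one bit of the word at this address.
    for bit in range(WORD_BITS):
        bank[bit][address] = (value >> bit) & 1

def read_word(address):
    # All sixteen tubes are read in step, so the word comes out "in parallel".
    value = 0
    for bit in range(WORD_BITS):
        value |= bank[bit][address] << bit
    return value

write_word(42, 0xBEEF)
print(hex(read_word(42)))  # 0xbeef
```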
This gets around some of the bottleneck,
but there's still the problem that these tubes make for a slow and unreliable memory.
One of the fun problems that Forrester and co. kept running into came down
to manufacturing. Think the Williams tube sounded complicated? Well, MIT's homespun memory tubes
were even more so. The glass for the tube had to be blown, electrodes had to be cast, and they had
to use exceedingly pure materials. Coatings had to be applied to
tight tolerances. And then, finally, the tube had to be sucked down to a vacuum. By the end of the
1940s, the Rad Lab could turn out a tube in about a week and a half at the cost of a little over
$2,000. And even then, it wasn't guaranteed a tube would work. The lab went through
more than 20 revisions trying to make a more reliable device. But something as simple as a
stray gas molecule inside a tube could destroy all that work. There were a lot of hurdles,
but by 1949, Whirlwind 1 was mostly operational. For the time, it was
probably the fastest machine in the world. But what about the controversy? What about the ruffled
feathers? Before we get on to the bigger ticket claims, I want to finish up all this bit parallel
stuff. It's often claimed that Whirlwind was the first parallel computer.
It's a bold claim, and I think it's a wrong claim. At least, things are more complicated
than a single sentence. First off, there's the simple fact that parallel devices existed well
before Forrester ever received a military contract. Look no further than Charles Babbage. His mechanical
computers were capable of operating on multiple digits at once. I mean, sure, there's a world of
difference here, but I think it's important to make the comparison. Babbage's engines stored
numbers as decimal digits on gears. Those gears had a clutch system that allowed you to operate
on multiple digits at once. An addition operation, for instance, engaged every gear in a register at
the same time. Adding 10-digit numbers took the same amount of time as adding one-digit numbers.
That's parallel. But hey, that's from the non-electronic world. So let's look for some
contemporaries. EDSAC was another early stored program computer that was constructed in England
around this same time. It has a birthdate sometime in 1949. EDSAC used mercury delay line memory,
but there were tricks involved. You see, EDSAC banked mercury delay
lines in a way that's very similar to how Whirlwind banked storage tubes. That computer
used an 18-bit word, so each bank consisted of 18 identical tubes of mercury. Read and write
operations were carried out by the word. There was still waiting involved,
but to the rest of the computer, memory appeared as a parallel device. Internally,
EDSAC handled math using older bit-serial methods, but there is some parallelism here.
As far as I'm concerned, there are two key points that give Whirlwind a claim to this title.
First is the parallel nature of its arithmetic circuits.
That seems to be unique to Whirlwind in this era,
at least when it comes to stored program electronic digital computers.
Big caveat.
The parallelized memory is more of a wash here,
since there are machines doing similar things at similar times.
That said, Forrester and his team were the first to combine parallelized memory with a larger bit-parallel architecture.
The combination of these two factors made Whirlwind as fast as its namesake.
A larger point here is that it gets hard to apply these wide labels to early
computers. In this first generation of machines, you get weird mixes of features. So is Whirlwind
the first parallel computer? For the most part, yes, but there were earlier machines that fit
part of the definition. MIT's computer was the first to put all the pieces together in
a practical mold. This will be something of a theme here. Alright, so now onto the main controversy.
Magnetic core memory. You see, Whirlwind didn't stick with tubes for very long.
Running in parallel with the Rad Lab's work was another project. Forrester and a group of researchers were trying to build a new type of memory.
This new storage medium would be fast, it would be natively parallel, and it would be data dense.
Forrester was working towards magnetic core memory.
But was he the first on this tip?
Forrester started down this path in 1947.
It sounds like it was more of a personal project than officially part of Whirlwind,
at least at the start. Maybe call it project whirlwind, in all lowercase.
What I find interesting here is that Forrester was coming at this project in a sort of backwards way, at least compared to other memory developments. Other researchers had started with a one-bit memory
element and then scaled up from there. Forrester started with a vision of the scaled-up storage
medium, then worked backwards. He observed that, given the technology of the 40s, there was a certain dimensionality to computer memory.
Delay lines worked as a one-dimensional data store.
Everything was on a line.
Each bit had a location on a line indexed by one value.
Electrostatic tubes presented a two-dimensional version of the same technology.
Each bit existed on a 2D plane, addressable by two values.
The difference between these two mediums, all things being equal, came down to density and wasted space.
Sure, neither mercury delay lines nor Williams tubes were particularly small,
but the 2D medium could pack more data into the same physical space. So what would happen if we went up a level? What about 3D storage?
Here I'm working off an interview Forrester gave to the IEEE Annals of the History of Computing in 1983, within which he describes his jump in thinking from 2D to 3D.
This has always struck me as a bit of a shallow thought. On its own, it sounds like something an outsider would say about computers, someone who knows of technology but not much detail about it.
Oh, the 2D executables are cool and all, but I'm already thinking in three dimensions. You
just need to figure out how to code in 3D. Imagine the gains. Just think about it. Hey, maybe I'm a
little jaded here. I shouldn't let my cynicism creep in too much. Forrester definitely knew what
he was talking about, and he was seeing a similar jump right in front of him.
He had watched memory go from 1D up to 2D. Sure, tubes weren't the best storage medium,
but they had their advantages. So why not take it a step further? I think that's a reasonable
idea to entertain. From that IEEE interview, quote,
I had contemplated the solution for some time, but of course,
the idea depended on the existence of a suitable element that could be used at the intersection of these three axes.
It was clear that we needed a nonlinear element at the three-dimensional intersections.
The element had to retain a 1 or a 0, the two binary stored digits.
It had to retain either digit in spite of whatever we might do in switching
while picking other elements in the array and storing or reading out those other elements.
The nonlinear device would have to operate at high speed and be very, very reliable.
End quote.
The trick to making 3D memory possible was finding a suitable
memory element, something that can store one bit of data. Forrester's first attempt was using a
pretty arcane type of technology, a plasma bulb. Certain gas mixes actually exhibit a hysteresis
effect under specific conditions.
In these mixes, the amount of energy required to strike an arc is higher than the energy required to maintain an arc.
This allows for an arc to be flipped on, then flipped off, while maintaining its state via some intermediate energy level.
In other words, it acts as a one-bit memory element.
The specific property here is called a nonlinear hysteresis loop. If you plot out the energy put into the system versus some observable property like energy out or resistance, then you get a
little rectangular loop with these sloped sides. Forrester was able
to construct small 3D grids of these plasma bulbs thanks to this specific hysteresis effect.
The grid was always energized at a base level. By flipping intersecting wires to a higher voltage or
a lower one, it was possible to flip bulbs on and off. Reading was
accomplished in a similar manner. Forrester built up 2D grids of these bulbs with tiny glass ampoules
at each wire intersection. Simply stack those grids and you're at three dimensions. The only
problem was that the underlying storage element still sucked.
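The selection trick itself can be sketched in a few lines. This toy model assumes made-up drive levels and a made-up threshold; it just shows why only the element sitting at the crossing of two energized wires gets enough of a push to flip, while every other element on those wires holds its state.

```python
# A toy model of coincident selection: every element sits at the crossing of
# an x wire and a y wire, and only the element that sees *both* drive signals
# crosses its switching threshold. Thresholds and drive levels are invented.
SIZE = 4
THRESHOLD = 1.5          # it takes more than one half-drive to flip an element
HALF_DRIVE = 1.0         # what a single energized wire contributes

grid = [[0 for _ in range(SIZE)] for _ in range(SIZE)]  # all elements off

def write_one(x_select, y_select):
    for x in range(SIZE):
        for y in range(SIZE):
            drive = 0.0
            if x == x_select:
                drive += HALF_DRIVE
            if y == y_select:
                drive += HALF_DRIVE
            # Only the single crossing point sees 2.0 and exceeds the threshold;
            # every other element on those wires sees at most 1.0 and holds.
            if drive > THRESHOLD:
                grid[x][y] = 1

write_one(1, 2)
print(grid)  # only grid[1][2] has flipped to 1
```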
Plasma bulbs suffered from all the same sorts of problems we saw with larger storage tubes.
Each bulb is a tiny evacuated glass container with contacts inside.
The bulb is backfilled with a small amount of some inert gas like argon or xenon.
It's easy to mess up the gas mixture, or not pull the proper vacuum first,
or have a hole in your bulb, or have impurities in the electrodes. You don't get any reliability
gains by using tiny bulbs, just more points of failure. By the tail end of the 40s, Forrester was pretty bummed out. This plasma bulb memory was parallel, but it wasn't very, very reliable.
It wasn't even normal reliable.
Then we come up to 1949.
Forrester would happen upon something very interesting.
Quoting from IEEE again,
I was reading a technical journal one evening,
just leafing through the advertisements in the magazine Electrical Engineering,
when I saw an advertisement for a material called DeltaMax,
which had a very rectangular hysteresis loop.
It had been developed for magnetic amplifiers.
Forrester continues,
When I saw this nonlinear rectangular magnetic hysteresis loop, I asked,
Can we use it as computer memory?
The answer to that final question was an unequivocal yes.
This loop stuck out to Forrester so much because it looked like the hysteresis loop measured in the very breakable plasma bulbs he was working with.
Soon, Forrester was back in the lab, armed with a new path forward. A corner of the lab and a
handful of researchers became dedicated to getting this new technology off the ground.
There were ups and downs, proper materials had to be selected, and everything had to be tested. The initial shakeout only took a few months. Before 49 was out, Forrester had his hands on a 2D plane of magnetic core memory,
each bit stored as a magnetic field in a tiny ferrite donut. It took until 1951 for the
technology to be put to use in Whirlwind. Whirlwind was the first computer to use magnetic core memory.
Therefore, this was the first full application of the technology. And core was well-suited to
the specific job. It's a fully parallel medium, at least once it's set up properly. It's fully
random access, and it's fast. It's also reliable and cheap to make. There were just no
downsides. Best of all, it meshed really well with Whirlwind's overall parallel architecture.
Truly a match made in digital heaven. Or, I guess, in Massachusetts, in this case.
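If you want a mental model of how a core stack like that gets addressed, here's a rough sketch. It's a conceptual toy, not Whirlwind's actual drive and sense circuitry; the plane size, word width, and the write_word and read_word helpers are all mine, and the restore-after-read step is my simplified take on the fact that reading a core wipes it.

# Conceptual sketch of a 3D core stack: one plane per bit of the word,
# and an (x, y) address selects the same spot in every plane at once.
# Sizes are arbitrary; this ignores the real drive and sense electronics.
WORD_BITS = 16
SIDE = 32      # a 32 x 32 plane of cores

planes = [[[0] * SIDE for _ in range(SIDE)] for _ in range(WORD_BITS)]

def write_word(x, y, word):
    for bit in range(WORD_BITS):
        planes[bit][x][y] = (word >> bit) & 1

def read_word(x, y):
    # Core readout was destructive: sensing a core flips it,
    # so the value has to be written back afterwards.
    word = 0
    for bit in range(WORD_BITS):
        word |= planes[bit][x][y] << bit
        planes[bit][x][y] = 0      # the read wipes the core...
    write_word(x, y, word)         # ...so restore it
    return word

write_word(3, 7, 0b1010_0000_1111_0101)
assert read_word(3, 7) == 0b1010_0000_1111_0101

Every plane responds to the same (x, y) pair at the same time, which is exactly why core meshed so nicely with a bit-parallel machine.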
But was Forrester really the inventor of this new technology?
Well, say it with me now, it's complicated.
Forrester holds a patent on a, quote,
multi-coordinate digital information storage device.
That's his cube of magnetic cores.
But he wasn't the first person to use little metal
donuts to store data. In fact, he was probably about the third. This is something I covered
extensively on my past episode on magnetic core memory. If you want the full story of the
development of the core prior to Forrester, then go check it out. The gist of it
is that a number of researchers were on a similar path around the same time. Each of these projects
was separate and started in a similar way. Dr. An Wang was probably creating the most similar
device to Forrester's core cube. Wang had started his investigation with the memory element. He knew
of the specific nonlinear hysteresis effect that metal core transformers exhibited. Those DeltaMax magnetic amplifiers that Forrester stumbled upon? Well, those were used as electrical transformers first and foremost. Wang probably wasn't working with
DeltaMax, but he had experience with similar materials. He independently discovered that
you could induce a magnetic field in these donut-shaped transformers, and you could then
switch it. It was also possible to read that magnetic field, hence data.
Data was written by inducing a field in a certain direction,
then it was read out by trying to flip that field.
This is the same implementation that Forrester landed on.
The key difference is Wang, well, he wasn't thinking in 3D. He was back in the one-dimensional world.
He used magnetic donuts to make a shift register, a one-dimensional type of memory similar to delay lines. Not as fancy
as Forrester, but still magnetic memory that took advantage of a very specific physical phenomenon.
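For contrast, here's a tiny sketch of the shift register idea, the 1D arrangement Wang was working toward. Again, this is my own illustration, not his circuit, and the register length is arbitrary: bits march through in order instead of being picked out by address.

# Rough sketch of a shift-register style memory, the 1D arrangement
# built from magnetic cores. Bits move through in sequence; you wait
# for the one you want rather than addressing it directly.
from collections import deque

register = deque([0] * 16)         # 16 one-bit stages, size is arbitrary

def shift_in(bit):
    register.appendleft(bit)       # a new bit enters one end...
    return register.pop()          # ...and the oldest bit falls out the other

# Push a few bits through and watch them come out in order.
for b in (1, 0, 1, 1):
    shift_in(b)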
Crucially, Wang filed a patent on this technology in 1949. The bottom line here is that Forrester didn't invent magnetic core memory. There were previous examples of the art that he
may or may not have known of. The jury is still out on that. However, Forrester definitely made Magnetic Core practical. He was the first
to put these tiny metal rings into full effect. That's enough memory talk, so to wrap things up,
I have one more possible controversy to share. Games. This is really something that I just have
to slip in here because, honestly, I don't know where else I get to talk
about this. I'm calling this coda the bouncing ball debate. This is kind of a small thing, but
it actually managed to rustle my jimmies something fierce. While researching for this episode,
I ran into a handful of claims that the first video game was developed on Whirlwind. Specifically, the first
graphical computer game. I think that's close enough to just a plain video game that we can
call it that. The claim traces back to a demo program called Bouncing Ball. Whirlwind, as an
interactive computer, had a simple graphics display, not much more than a fancy oscilloscope.
This let programmers plot data and simple images on a screen. Early on, sometime in 1949,
a set of demos were written to show off this display. Later, Norman Taylor would explain
these demos during a SIGGRAPH panel. In this presentation, Taylor described a
program that showed an animated ball. Quote, Charlie Adams, the original programmer, decided
that we'd better go beyond static curves, and he invented what we called the bouncing ball program,
the solution of three different equations. He continues. A little later, Adams and Gilmore decided to make the first
computer game, and this was also in 1949. This is a more interesting display. You see the bouncing
ball finds a hole in the floor, and the trick was to set the frequency such that you hit the hole
in the floor. This kept a lot of people interested for quite a while, and it was clear that man-machine
interaction was here to stay. Anyone could turn the frequency knobs. End quote. This took me quite
a while to track down. When I first ran across this claim, it was a scant few mentions that
said Whirlwind had some early video game written on it. I initially dismissed the idea. Space War was
the first video game, after all. But reading this account has actually changed my mind.
This all comes down to how we want to define a video game. I don't really like the, oh,
I'll know it when I see it approach. I prefer to have a bit of a wiggly yardstick. My usual measurement for these old games comes in two parts. One, does the game have a concrete goal or win condition? Is there a
way to succeed and fail? And two, does it allow you to do something you normally couldn't do?
I think rule one is self-explanatory here. Rule two is there to
exclude things like computerized tic-tac-toe or digital card games. I don't think those types of
programs really use the medium in any special way. You can play Solitaire on a screen, but
you can also pull out a deck of cards and play it on the same desk that you
have your terminal on. That's not interesting. That's not really a video game to me. In past
episodes, I've used this logic to exclude things like Bertie the Brain, a large electronic tic-tac-toe
playing machine. The same goes for random blackjack simulators.
Normally, this means that the first video games are things like Tennis for Two or Space War or the Sumerian game.
Tennis for Two is an interesting case here.
That game used an analog computer to simulate tennis.
What pushes this into the realm of a full game, at least for me,
is the fact that Tennis for Two allowed for multiple gravity settings. You could simulate
playing tennis on Jupiter, for instance. So, sure, you can't normally play tennis in a lab or at your
desk, but the gravity setting really makes this pass rule two without any questions.
That's definitely a video game. So is the bouncing ball program a video game?
By my own logic, yeah. It has a goal. You're trying to adjust three parameters to get the
ball into a target. And you can do things within that medium that you can't do in
real life. Namely, you can adjust the simulation's gravity. I guess that was a recurring theme with
these old math-based games. Here's the interesting part to me. According to Taylor's account,
Bouncing Ball was developed in 1949. I think that jibes with the larger timeline here.
Taylor makes note about how the program was annoying to write due to Whirlwind's weird
electrostatic storage tubes. That places it before early 51. So even if Taylor's date is slightly off, that means the maximum drift here is three years. I'm inclined to just believe Taylor as a source, so let's stick with 1949 as the date. Tennis for Two, probably one of the earliest
video games out there, was developed in 1958. Space War, the usual claimant to the throne for
first graphical computer game, was written in 1962. There was
a spate of math simulation games developed for business training in the late 50s and early 60s.
Eventually, that lineage leads to the Sumerian game in 1964. The traditional nexus point for
when video games first become a thing is usually in this range. We're talking
late 50s into the early 60s. Bouncing Ball kind of throws that out the window. It's from 1949,
maybe 1950 if we want some wiggle room. The next things that are even close to a video game
are tic-tac-toe simulators. Bertie the Brain shows up in 1950, and another simulator, OXO, follows in 1952. Those aren't really video games, more electrified simulations,
but it's something. The fact here is that Bouncing Ball appears to be a video game, and it's much earlier than other examples of the art.
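For what it's worth, here's a crude Python sketch of the general idea, so you can see why twiddling the knobs was fun. Everything in it, the constants, the damping rule, the hole position, the bounce function itself, is invented by me; it's not the original program, just the flavor of it.

# Crude sketch of a Bouncing Ball style toy: a ball falls under an
# adjustable gravity, loses a bit of energy on each bounce, and the
# goal is to have it drop into a hole in the floor. All constants are
# invented; this is not the original Whirlwind program.
def bounce(gravity=9.8, hole_x=18.0, hole_width=1.0):
    x, y = 0.0, 10.0        # starting position
    vx, vy = 2.0, 0.0       # starting velocity
    dt, damping = 0.01, 0.8
    while x < 30.0:
        vy -= gravity * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:
            if hole_x <= x <= hole_x + hole_width:
                return True                  # ball fell into the hole: you win
            y, vy = 0.0, -vy * damping       # otherwise it bounces, a bit slower
    return False

# Fiddle with gravity until the ball drops in, much like turning the knobs.
print(bounce(gravity=9.8), bounce(gravity=6.5))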
What does this mean? Well, in the grand scheme of things, not a whole lot. But for me,
this is kind of a big deal. It makes me wonder how many more early games have flown just under the radar. Okay, this is a bit of a long one, but that does it for our
dive into Whirlwind today. This is another one of those topics that I have to come back to sometime.
There's a lot of early software on this machine that I want to talk about, but that's going to take more time and space.
Whirlwind offers a fascinating glimpse at transition.
I think that's the most important part of this story.
The project starts out in the analog era, or at least Forrester thinks he's still in an analog world.
Thanks to a little information asymmetry,
we're able to see some of the pitfalls within the analog way of computing.
We can see the technology straining under greater and greater demand.
Then, with the end of World War II, a veil is lifted.
Knowledge of digital technology starts to come out,
and Forrester never looks back.
That's just the first transition, though. During development, Forrester and company decided to eschew a more established design. They added in bit-parallel operations, something that would
become the de facto choice in later years. They dropped serial memory entirely. Eventually, this led to the
practical application of magnetic core memory for the first time. Once again, something that would
become the de facto choice in computing. Whirlwind established a strong pattern for later machines to
build from. I think that's an important legacy no matter the caveats about
firsts. So let's close this out with one last thing. The bouncing ball program is still really
messing with me, and I think it should mess you up too. I think it's a neat complication that
throws a lot of thinking into question. I'll include a link to where you can actually give
the game a whirl, or at least a simulation of it. This is starting to make me think that early computer
games might bear some more examination in the near future. Anyway, this has been a long one,
so I'm going to keep the outro short. Thanks for listening to Advent of Computing. I'll be back
very soon with another episode. Now, we're moving into October, so that means the somewhat
spooky episodes are coming back for a month. If you like the show, then please share it with a
friend or leave a review on Apple Podcasts, Spotify, or wherever you're listening. You can
also contact me pretty easily. I'm at AdventOfComp on Twitter, and I always love hearing from
listeners. If you want to get more episodes and
support the show directly, then you can support me on Patreon. You can find links to everything
on adventofcomputing.com. And as always, have a great rest of your day.