Advent of Computing - Episode 80 - The Analytical Engine
Episode Date: April 17, 2022

When people talk about early computers, Babbage's Analytical Engine is bound to come up. Designed back in the 1830s, it's definitely older than any other example of the art. But it also has a lot of strikes against it. The machine was purely mechanical. It only really did math. It stored numbers in decimal instead of binary. Worst of all, it only ever existed as designs on paper. So should we call this beast a computer? Or is it something else entirely?

Selected Sources:

https://www.fourmilab.ch/babbage/sketch.html - Sketch of the Analytical Engine, and Lovelace's Notes

https://web.archive.org/web/20210226094829/http://athena.union.edu/~hemmendd/Courses/cs80/an-engine.pdf - Bromley's low-level description of the engine

https://sci-hub.se/10.1007/978-3-642-61812-3_2 - On the Mathematical Powers of the Calculating Engine, by Charles Babbage

https://archive.org/details/bub_gb_Oi3IhTZyVCAC/mode/1up - The Ninth Bridgewater Treatise, Babbage
Transcript
You ever hear nerds pine for the so-called good old days?
Computers these days are just so boring.
Software, well, that's all too bloated.
Back in my day, we had 8K of RAM and we were happy and, frankly, lucky to even have that.
High definition? Well, that's just too many pixels.
Web 3.0? I say they should bring back Blink and Banner instead.
Now, I know I've been party to
some of these complaints myself. Maybe it's an expression of nostalgia, maybe it's actually
warranted critiques, or maybe us computer folk just like to complain a little, you know, to take
the edge off. If we believe Wu-Tang, then maybe this is something more akin to masochism. But,
for the sake of argument, let's accept this premise.
Computers were better back in the day.
What happens if we take this argument to its logical conclusion?
Someone in the 2020s saying, oh, I really miss the good old days,
might be talking about the 1990s.
Someone in the 90s saying the same thing might mean the 70s.
Now, it's kind of weird to think about,
but we will reach a point where there were no more good old days. Logically speaking,
that should be when computers were at their apex, their most pure, before the state of computing
degraded and slid into decadence. We can actually put a date on this. For computing, the ultimate good old days were sometime in the 1830s,
when real programmers had to punch cards by hand,
when machines were powered by cranks.
They didn't even have those dumb addressed memory cells.
They had gears and rods instead.
And hey, I hear they didn't even use pointers.
It's the real rugged era of computing.
Everything since has really just gone downhill.
I say this, of course, with a knowing wink, but this isn't just a bad joke.
These are actually the characteristics of a real machine called the analytical engine.
Perhaps the oldest computer ever conceived, and hey, perhaps it was the apex
of computing. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 80,
the analytical engine. Today I'm taking on a task that I've historically
avoided, talking about really old computers. If you take a glance at the archive, you can see
that most of my episodes actually take place in this sweet spot. It's somewhere between the
mid-50s and the mid-1980s. That's a swath of time that I'm really comfortable with,
mainly because once we get past the really early years of computers, machines start to look and feel modern.
I mean, sure, I've never used a PDP-11 for any extended period of time, but my experience on newer machines really helps me puzzle out how those kinds of older systems worked.
You could call this coverage range the accessible past,
since knowledge of modern computing translates pretty well back to this time range. The details are different, sure, but the differences aren't fundamental. We still have logic gates and memory
and keyboards. Programmers still live in caves and cry when they run out of coffee. It all has
a certain familiarity to it. If we're not seeing the exact same things, then, as the saying goes, history at least rhymes. Go further back and
things get, let's just call them weird. Some early computers didn't use binary to store numbers,
they used decimal. Some systems didn't even store discrete numbers,
instead opting for analog ranges of data. Some computers, well, they weren't even electronic,
or at least not really. It gets a lot more difficult for us, so steeped in modern digital
marvels, to relate to someone working on a computer that's made out of gears and operated by hand.
This has always been my immediate reaction
when I hear about Charles Babbage's analytical engine. It was designed in the 1830s, it was
never built, and according to some, it's the first computer. It's the granddaddy of all, so to speak.
It was totally mechanical, stored numbers in decimal form, and could in fact be programmed.
The engine itself was Turing-complete, meaning that using later theorems,
it can be proven that Babbage's computer could function just as well as much later machines.
The programming aspect has also been something that's always confounded me.
The analytical engine never actually existed. It
was designed, but not built. However, programs were written for the machine. Babbage, as creator
of this beast, obviously wrote programs for it. The more high-profile programmer on this theoretical
platform was Countess Ada Lovelace. Many call Lovelace the first programmer because of this.
But what does that actually mean in this context?
How does one exactly program for a purely theoretical machine?
How do you even program a mechanical machine?
It's all just so far out of my comfort zone that I don't even know where to begin.
So all of this brings
me to the crux of my confusion. Should we consider the analytical engine a real computer? And I don't
just mean on a technical level, I'm also talking about the broader historical context. How does
this downright archaic machine stack up to really anything modern? I mean, can we even make comparisons with something
this old? I know I have a lot of questions here, but we are slipping back into some pretty distant
territory. Born in 1791 somewhere in England (the exact location is actually contested), Charles
Babbage would become a mathematician early in life. Now, abbreviated
biographies follow a very generic genius childhood kind of trope. The young Charles was infatuated
with math. He read all about it in his free time, and by the time he was in higher education, he was
bored to tears. But at this point, we've already reached some of the less relatable aspects of the story.
We all probably have a pretty good idea of what a mathematician is in the modern day,
or even in the past 50 or 100 years.
You study math, and that's usually it.
If you expand outside the field, it's probably some related discipline.
A mathematician like Alan
Turing got into computers and cryptography. Maybe on the fringes of this, some math nerds might have
a dueling interest in psychology. But that's all firmly in the sciences. We're talking about
numbers and methods and rigor, right? Well, Babbage, and indeed many of his contemporaries didn't adhere to these modern
notions. Sadly for them, they didn't have someone from the 21st century there to put
expectations on their actions. If that did happen, well, that might be kind of weird.
The romantic word that some people would use here is polymath, as in someone who isn't restricted to any one field,
but is adept at many. Now, I don't really like that term because I think it's a bit of a cop-out.
It's an attempt to wave away a more complex picture. I think it's better to just drop in
an example. In 1837, Babbage published a book called The Ninth Bridgewater Treatise. If I may, I think outlining some of
the chapter headers will paint a pretty vivid picture for us. Just to list a few of my favorites.
On the account of creation in the first chapter of Genesis. Of the desire of immortality.
A priori argument in favor of the occurrence of miracles. Reflections on free will.
And on the calculating engine.
So, simple question here.
What was Babbage's degree in again?
And no cheating, just go off that list I gave you.
We have some definite theology going on here.
The immortality chapter could be related to psychology.
Maybe reflections on free will will slot nicely into a philosophy magazine.
And then we have on the calculating engine.
That has to be some math or engineering something, right?
Well, Babbage had a degree in mathematics.
Like I said, he was a trained mathematician. Keep in mind, these are all chapters in a larger cohesive text. Calling
someone a polymath, at least to me, makes it sound like they just practice a lot of different
disciplines. Maybe they teach math most of the time, but, you know, on Sundays they like to give theology lectures.
Sometimes they end up substituting for sick professors in the philosophy department.
But in this case, Babbage isn't slipping between classrooms.
All these separate fields, separate beliefs, and separate works are intimately blended together.
The theology relies on mathematical principles.
Take the chapter Argument in Favor of the Design from the Changing of Laws in Natural Events as an example. This is
Babbage's argument for divine creation of the universe, or at least some greater hand defining
the nature and natural laws of the world. How does he back this up? Well, he argues by analogy using his
calculating engine as an example. I know, it sounds weird and reading the whole thing,
it's still pretty weird. The actual argument is long and drawn out, so instead of pulling
from the text ad nauseum, I'm going to give a simplified rundown. Babbage's argument hinges on the idea of natural
law, that there is some grand governing principle behind the world that us humans can't fully
understand. The cycle of life and death, the erosion of stones, the movement of the waves in
the heavens. To us, it may seem like these are separate processes, but they are all dictated by one core law.
The process may look different, but that's only because we can't observe the deeper machinations
at play. Here's where the calculating engine comes in. Babbage asks the reader to imagine
that they're standing in front of a row of dials, each displaying a different digit.
Read across all the dials, you get some long number. Now,
Babbage walks us through a thought experiment. Quote,
Let the figures thus seen be the series of natural numbers, 1, 2, 3, 4, 5, etc., each of
which exceeds its immediate antecedent by unity. Now, reader, let me ask how long you will have
counted before you are firmly convinced that the engine, supposing its adjustments to remain
unaltered, will continue whilst its motion is maintained to produce the same series of natural
numbers. End quote. You gotta love the language here. Unity, if you're not aware, is just a fancy word for one.
So slip that in next time you're talking some math.
Of course, the answer that he's trying to lead us towards is some low number.
Maybe you get the pattern after the machine counts to ten.
From there, you formulate some self-consistent law that the machine must be following.
But you've fallen
for Babbage's tricky machine. He goes on to explain that after counting up to one million,
the pattern changes. The machine stops counting in ones and instead starts counting in triangular
numbers. As an aside, because this will matter later, triangular numbers are a type of series.
It starts with 1, 3, 6, 10, 15.
And it goes on from there.
The point that Babbage is trying to make with this is you can't immediately see the pattern
unless you study it rigorously.
Triangular numbers are defined as the number of dots that can be arranged in an equilateral triangle.
It sounds like a fun kind of math game for kids, but this will prove to be important to the discussion eventually here.
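Since these numbers come back later, here's a quick modern sketch of the series (my illustration, obviously not anything Babbage wrote): the n-th triangular number is just 1 + 2 + ... + n, which works out to n(n+1)/2.

```python
def triangular(n):
    """The n-th triangular number: dots arrangeable in a triangle with n rows."""
    return n * (n + 1) // 2

# The series Babbage's rigged engine switches to: 1, 3, 6, 10, 15, ...
series = [triangular(n) for n in range(1, 6)]
print(series)  # [1, 3, 6, 10, 15]
```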
Anyway, the machine you are observing starts counting differently.
It appears to start following some new law that you haven't seen before.
If you're simply observing the machine, Babbage argues, you don't really
understand what's going on. It looks like the thing broke, or that there are multiple distinct
laws governing its action. However, that's not the case. Babbage actually rigged up his
calculating machine to follow one grand unified law. Since we're on the outside, we don't know
what that law is, and we're left to guess on
what's happening. The allegory here is that, according to Babbage, this is how the intelligently
designed world works. Natural laws seem complex only because we can't fathom their deeper
machinations. The alternate reading, and the wrong reading that I like the most, is that Babbage is actually claiming to be some almighty machine god, crafting the laws of the universe within his tiny box.
The point I'm getting at here is that for Babbage, these separate fields were all intermingled.
Biology tied into physics, which tied into theology, which tied into math. We could
delve into that for some deeper meaning to his actions, or we could take a more shallow approach.
The net result here is that Babbage's accounts of some of his computing-related work are tied
in with a larger tableau of research. So, what is this calculating machine that Babbage uses to prove
the divinity of creation? Well, to start with, it really isn't any one machine. Calculating machine
is what Babbage uses to refer to his proto-computers, if we can call them that. This includes
the analytical engine, but also its immediate predecessor, Babbage's difference
engines.
To have anything like a comprehensive understanding of the later analytical engine, we have to
start with these earlier machines.
The difference engine was a single-purpose machine.
It was built to calculate tables of values.
Why would you build such a device?
Well, according to Babbage, the idea came in a dream.
From his autobiography, Passages from the Life of a Philosopher, quote,
One evening, I was sitting in the rooms of the Analytical Society at Cambridge,
my head leaning forward on the table in a kind of dreamy mood, with a table of logarithms lying
open before me. Another member coming into the room and seeing me half-asleep called out,
Well, Babbage, what are you dreaming about?
To which I replied,
I am thinking that all these tables might be calculated by machinery.
End quote.
I know, it's kind of a hammy story.
But even at this early point, we're starting to see some interesting connections.
This episode would have to happen sometime in the early 1820s, well over a century before the
digital computing era. ENIAC was initially designed to help generate artillery firing
tables for the US military. The need for automation in the ENIAC case seems pretty obvious. Firing tables
took a lot of repetitive work to create, and humans are easily burnt out. Worse, we're all
pretty susceptible to errors, especially after long hours of running the same calculations over
and over again. Babbage's machine had a slightly different target. Mathematical tables are a mostly antiquated tool that was used primarily back in the day.
By back in the day, I mean well before automatic computers and good calculators really hit the scene.
But I will admit that I've used them before.
Sometimes they come in handy with really nasty functions that you can't compute very well.
A mathematical table is really just a labor-saving device,
if you can call a flat piece of paper a device at all.
The table shows some input to a function or algorithm on one column,
and the results in the other.
One example is the table of logarithms that Babbage was falling asleep on back at Cambridge.
Calculating a logarithm by hand is tedious, but this function shows up in a lot of equations.
Logs are especially prevalent in physics and astronomy, so if you're crunching numbers in
either of those fields, then you're going to need to run some logarithms. You could keep calculating the natural log of 5.5 over and over again, or you could reach
for a table. Now, that's the most simple case, just a piece of paper with two columns printed on it.
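If it helps to see the idea in modern terms, here's a tiny sketch of that two-column table (using Python's standard math library, which is of course my stand-in, not anything from Babbage's era): pay the computation cost once up front, then just look values up.

```python
import math

# Build a two-column "table": input value -> natural logarithm,
# precomputed once for inputs 1.0 through 10.0 in steps of 0.1.
log_table = {x / 10: math.log(x / 10) for x in range(10, 101)}

# Later, instead of recomputing ln(5.5) by hand, just look it up.
value = log_table[5.5]
```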
Moving up into the deeper realms of math land, we reach entire books of mathematical tables.
Functions and equations can take multiple arguments, so
tables will often have multiple inputs. Sometimes this is expressed as a big grid, other times just
as multi-column tables with arguments grouped together in some sort of logical structure.
The result here is that mathematical tables are often pretty sophisticated tools. At least, that was the case prior to computers.
Of course, there's a fun undercurrent here. How are these tables being produced? In Babbage's time,
it was very slowly, very tediously, and by human hands. This is a case where effort has to be put
in up front to save time at the other end of the process.
And crucially, this human process has its own failings. It takes a long time, and it can be
prone to error. Sure, out of a thousand-line table, maybe there are only one or two errors,
but imagine if one of those errors lands on an entry you actually need. Now, all of a sudden, your
calculations are wrong, and you can't figure out why. This is the point where Babbage would burst
in saying, there has to be a better way. And Babbage's solution was a machine that he called
the Difference Engine. Now, we already have some fun going on here. There were actually two difference engines, the number one and number two.
There were also a number of test machines, or at least parts of machines,
built in the lead-up to number one, so the whole thing is a bit of a morass.
Anyway, the idea for Babbage's difference engines dates back to at least 1819.
Construction of the difference engine number one
started in 1821, so what exactly did this machine do? The trick is in the name. Difference here
doesn't necessarily refer to subtraction, but instead the method of differences. To further
muck things up, this is sometimes called Newton's divided differences interpolation.
A lot of math stuff just has multiple names, so you're going to have to deal with that.
The method here is a little complicated.
It's not the worst, but it's not good for an audio format,
so I'm going to leave the full description as an exercise to the reader.
Hey, this isn't just a math lecture series, you know. The general idea is that using the method of differences,
you can compute the value of a function using only addition. Of course, there are caveats. I mean,
it would be too good to be true otherwise. You have to have a function, let's call it f of x, that's a polynomial.
By that, you should be thinking about something like the quadratic equations from back in
algebra class.
You know, f of x equals ax squared plus bx plus c.
This method works with polynomials of any order, so as long as you have some sum of terms of the form a times x to some power,
then you can use this formula. You also have to be tricky about how you handle your data.
The method of differences depends on the value of previous calculations, so if you're trying to get
F of 5, you also need F of 4. The other trick is that, technically speaking, you need other
operations besides addition, but it's easily shown that other operations can be represented
using only addition. If you meet all those restrictions, then congratulations! You can
use the method of differences. This might sound a little, well, restrictive. However, polynomials are really a
workhorse in the world of math. For starters, kinematics, that's the physics of normal motion,
can be described using mostly polynomials. So something like the trajectory of an artillery
shell, for instance, can be expressed as a polynomial. It gets better. Going further into
the world of funky math, we get approximations. That is, methods for generating equations that
are close enough to other equations. There are entire classes of approximation methods that only
use polynomials. All these characteristics make the method of differences particularly well-suited
to generating tables of data. You will always have past values since that's kind of your final
product. You can also take special advantage of approximations. All approximations come with some
amount of error. It's usually expressed as the number of decimal places the approximation holds for and some range of
values that good enough covers. You can control how big your table is, and you can control how
many decimal places you report. So a lot of times an approximation will, in fact, be good enough.
And just like that, you have a really good trick to make tables easier to handle.
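For the curious, here's a small sketch of the method of differences in action, in modern code rather than gears (the function and its coefficients are just my example, say f(x) = 1 + 3x + 2x²). After the initial differences are seeded, every further table entry comes from additions alone.

```python
def difference_table(coeffs, count):
    """Tabulate f(0), f(1), ..., f(count-1) for a polynomial using only
    additions after setup, in the spirit of Babbage's Difference Engine.
    coeffs: [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ..."""
    degree = len(coeffs) - 1
    f = lambda x: sum(c * x**k for k, c in enumerate(coeffs))

    # Seed the "registers": f(0) and its forward differences, worked out once.
    column = [f(x) for x in range(degree + 1)]
    diffs = []
    while column:
        diffs.append(column[0])
        column = [b - a for a, b in zip(column, column[1:])]

    table = []
    for _ in range(count):
        table.append(diffs[0])
        # One crank of the engine: each register absorbs the one above it.
        for k in range(len(diffs) - 1):
            diffs[k] += diffs[k + 1]
    return table

# f(x) = 1 + 3x + 2x^2 gives 1, 6, 15, 28, 45, ...
print(difference_table([1, 3, 2], 5))  # [1, 6, 15, 28, 45]
```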
Babbage pulls another really neat trick here that, combined with everything else,
would make it possible to automate this task really quite efficiently.
His design of the difference engine called for a special kind of addition.
How do you normally add two multi-digit numbers?
Well, at least for me and I think for most people, you end up doing it digit by digit.
You add each digit.
If the result is greater than 9, then you have to carry a 1 over to the next digit.
Then you repeat the process.
You'll note that that's a lot of steps.
At best, this is one step for each digit.
If addition results in a carry, then this is more like two steps per digit.
That means that the time it takes to carry out this process increases as you work with larger and larger numbers.
So the first hundred entries of a table might be quick to crunch, but wait until you get
to the ten-thousandth.
Babbage's solution was what he called a two-step addition, and I think it shows the kind
of adaptation that was needed on the way towards automation. Instead of calculating each digit
separately, Babbage's two-step shuffle just adds all the digits across at once. You could call this
parallel addition, since each digit is added in parallel at the same time. This is possible only in the land of automation.
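As a rough sketch of both steps in modern code (mine, not Babbage's mechanism): step one sums every digit column at once, ignoring carries entirely; step two then sweeps the carries through.

```python
def two_step_add(a_digits, b_digits):
    """Babbage-style 'two-step' addition on equal-length lists of decimal
    digits, least significant digit first."""
    # Step one: every digit position is summed "in parallel", no carries yet.
    sums = [a + b for a, b in zip(a_digits, b_digits)]

    # Step two: apply the carries to the results of step one.
    result, carry = [], 0
    for s in sums:
        carry, digit = divmod(s + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 479 + 354 = 833, with digits stored least significant first.
print(two_step_add([9, 7, 4], [4, 5, 3]))  # [3, 3, 8]
```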
All Babbage had to do was create parallel adding devices, one for each digit. Step two of the
shuffle is handling carries, which are just applied to the result of step one. I'm pointing
out this special addition here because it shows us how Babbage had to adapt normal mathematics to work in the mechanical realm. I think this is another interesting connection
between Babbage's work and later digital computers. One dirty little secret is that
normal math doesn't actually translate that well into the digital world. Sometimes that means
approximations, sometimes that means that certain equations
are just out of the question. But with some careful design, you can do radically new things
using this kind of machinery. Now, we also have the little matter of how the difference engine
expressed values. If you want to try and draw more analogies, what kind of memory did this thing sport? This gets a little weird. So,
the Difference Engine didn't have memory in the addressable sense. It had something more like
registers, I guess? Now, to start with, I'm not a mechanical engineer, so the deeper machinations
of Babbage's machines are kind of an enigma to me. At least, it's outside my area of expertise.
So, if I don't go into the details enough for your taste, I'm sorry, I'll link out to more resources that go further in-depth on the gear side of things.
The big point here that I can address is that the difference engine was digital, not analog.
For a mechanical device, that might sound a little
weird. I know I'm used to seeing analog mechanical computers as the main mechanical type of computers.
Numbers in the difference engine were represented as rotation on a shaft, but these shafts were
geared. Each tooth on the gear corresponded to a natural number between 0 and 9,
with no intermediate values allowed. So you couldn't encode 1.5 on a single gear.
Just that simple fact that these machines were digital puts us in a very interesting arena.
I can't say that this was comparable to, for instance, analog machines
like Vannevar Bush's differential analyzer. Babbage's engine was working on fundamentally
different principles. But we still aren't to the point of a computer. The difference engine can't
handle conditional execution. It doesn't even really have something to execute. It's a single-purpose
machine. You put in initial parameters, you turn a crank, it prints a table. There was also the fact
that the difference engine didn't completely exist. Babbage's plans called for a pretty big machine.
Initial drafts called for 20 registers, which would allow the engine to crunch really big polynomials.
In 1823, Babbage received governmental funds to build the engine, but that wouldn't go as planned.
There was a debacle involving craftsmen stealing tools, which set it back. There's also the matter of just not having reliable enough gears.
By 1832, he had managed to construct a six-register model.
At that point, it was determined that making a full-scale model would be too expensive
and maybe not even possible with how reliable parts were at the time.
But Babbage was looking forward, even while trying to get his first machine off the ground.
Going back to his 1837 treatise, we run into this interesting section just kinda tucked
away in his chapter on intelligent design.
Quote,
The first engine must be susceptible of having embodied in its mechanical structure that
more general law of which all the observed laws were but isolated portions, a law so complicated that analysis
itself in its present state can scarcely grasp the whole question. The second engine might be
of far simpler contrivance. It must be capable of receiving the law impressed upon it from without, but it is incapable, by its own intrinsic structure, of... End quote. The first engine here is, of course, the difference engine.
A clockwork machine that can only do one task, that task being built into its gears and shafts.
But what if you had some simplified device? One that could understand
commands that you could describe a task to? That, dear listener, sounds a lot more like a computer.
The development of the analytical engine, this second engine that Babbage mentions,
came as a direct extension of his work on the difference engine.
Specifically, Babbage claims in his autobiography that the path towards a more general-purpose
machine came thanks to this overall organization of the difference engine.
Quote, The circular arrangement of axes of the difference engine, round, large, central wheels, led to the most extended prospects. The whole arithmetic now appeared within the grasp of mechanism. A vague glimpse of even an analytical engine at length opened out, and I pursued with enthusiasm the shadowy vision. End quote.
This circular wheel thing seemed a bit weird to me at first.
It took me a bit to get it.
I've actually seen two difference engines on display personally.
The number two, built later into the 19th century, can be seen at the London Science Museum.
There's also a modern recreation of the difference engine on display at the Computer History Museum in Mountain View.
These are big steel and brass monsters, housed in the obligatory glass display case so, you know,
people like me don't go try to turn the cranks and gears. The first thing you notice is that they look like giant boxes, at least from the side. You're viewing all the registers, or axes as Babbage called them,
from the side. Each is mounted vertically, and they all are squished up together in these big
boxes. I've started to think that this is the wrong way to display these machines.
Babbage's big architectural drawings of his engines all show a top-down view,
as if the reader was looking down on the machine from above.
From there, the difference engine doesn't really look like a big box.
It looks like a series of interlocking circles.
Viewing Babbage's engines this way, I think, makes a lot more sense.
Whether we're talking about the difference engine or the analytical engine, Babbage built everything up in identical layers. In his notes, these were called
cages. Each cage contained a single digit for each axis, so one number for each register,
plus all the parts necessary for moving, shifting, and modifying those digits.
Viewed side-on, you can see all the numbers.
They're etched on the side of the gears, but everything is just repeated cages. You can't
see how the machine worked, but from the top-down view, you can start to pick out the connections.
You can actually see the axes and wheels that Babbage wrote about. You can see how data would flow around.
Anyway, that's just my museum rant for the episode. Now, we can put together a compelling
reason for why Babbage moved ahead with a bigger machine. Pulling from his '37 treatise,
we get our first motivation. Babbage wanted a more generalized and more simplified
device. I think it's clear to see that the difference engine was a dead end. What if you
need to make a table of functions that could not be approximated as polynomials? What if you needed
to, say, do something besides simple math? Then you'd need to make a difference engine model 3 that could run some other class of
equations, or a model 4 that could, say, handle loops. The difference engine was hardwired,
so to speak, maybe hard-geared, to work for a single task. Simplicity might sound a little
strange in this context, but think of it this way. What if Babbage wanted to create a
difference engine that could combine multiple tasks? One that could be used for the method
of differences, that could calculate trig functions without approximations, that could
integrate and could plot charts. That would essentially take four different machines just
wired or tied together. It would be a lot more simple to design a machine
that could do a handful of simple operations, then chain those operations together via something like
a program. That could accomplish the same task as a limitless number of custom single-purpose
machines. And I think we could argue that Babbage was on the same tip as Turing here.
A programmable machine could accomplish the task of any single-purpose machine.
It just makes good economic sense.
That all sounds like a nice theoretical foundation,
but what does this translate to in reality, or at least in design?
Calling the analytical engine a complicated machine would be a mistake.
It's a very complicated machine. My general understanding of this, really, beast comes
from a paper written by Alan G. Bromley in 1982. The paper is titled Charles Babbage's Analytical Engine, 1838. Nice and to the point. In this paper, Bromley works
from Babbage's notes to describe how the engine was actually designed. It's basically from the
gears up. Bromley points out that if the analytical engine was built, it would have been around the
size of a train engine. Like, the whole car. Looking at Babbage's plans, I think that may
have even been an understatement. The largest chunk of the machine was its storehouse, or store,
which is roughly analogous to memory. Keep that roughly in mind here. While the analytical engine
will have many components that look similar to parts of modern computers,
there are significant and important details that are different. Bromley makes a special note of this multiple times in his paper, and I think I just have to echo the sentiment here for safety.
The analytical engine's store was similar to the registers used in Babbage's earlier engine.
The main differences come down to scale and structure.
The new axes were massive.
Each was planned to hold 40 digits,
and some of Babbage's drafts called for as many as 1,000 axes.
Bromley estimates that each of these axes would have stood about 10 feet tall.
Babbage's axis store here stacks up nicely with other types
of computer memory. I mean, not speed or convenience-wise, but there's a certain
familiarity to its operation. Reading from an axis was a destructive operation.
This is where I do need to talk a little bit about gears. I've been trying to avoid this since it gets complicated,
but I think it's worth examining at least in a restricted sense.
Each gear used on an axis was hollow.
On the outside, the gear had, well, teeth for rotating it into position.
Each had ten teeth.
It was also engraved with numbers so you could read off the current value
at a glance. The inside of the gear was hollow, with a single pin projecting towards the center
of the gear. Writing a value to the digit was a simple operation, just rotate the outer teeth
until you arrive at the desired value. Reading took two steps. First, a rod that sat inside the axis had to be moved into place.
This rod had a pin that could be aligned with the digit's inner pin. Once aligned, the rod rotated,
driving the gear backwards once it came into contact with the gear's inner pin. The rod was
set so that its sweep would reset the digit gear to a value of zero.
Along the way, the gear's current value would be given off, as Babbage put it.
In other words, if the gear had previously been rotated to a value of three,
it would now rotate backwards the same amount.
That rotational energy was then carried along what Babbage called the rack.
It's a series of gears and some sort of drive chain that moved data around the analytical engine.
The issue at hand here is that every read operation resets an axis to zero.
Many forms of electronic memory have the same problem.
For example, reading from magnetic core memory will set a bit back to the zero state. The same goes for more modern silicon-based memory, interestingly enough.
The usual solution for electronic memory, at least, is known as the read-write cycle. After
read operation, you check if the read value is a 1. If so, then you immediately go back and write a 1 to the bit
you just read. Thus, data is preserved. However, that takes some special tooling.
You need to be able to either store a value for a split second or do a quick comparison.
It's a very electro-digital kind of approach. With wires and electrons, you can easily add
in little tricks like these.
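To make the read-write cycle concrete, here's a minimal sketch in Python. The class and function names are mine, purely illustrative, and this models core-style memory in the abstract rather than any specific hardware.

```python
# A toy model of destructive-read memory and the read-write cycle.
# Reading a core-style bit always leaves it at zero, so the value
# must be written back immediately to preserve it.

class CoreBit:
    """One bit of core-style memory: reading always resets it to 0."""
    def __init__(self, value=0):
        self._value = value

    def destructive_read(self):
        value = self._value
        self._value = 0          # the act of reading wipes the bit
        return value

    def write(self, value):
        self._value = value

def read_with_restore(bit):
    """Read the bit, then immediately write the value back."""
    value = bit.destructive_read()
    if value == 1:
        bit.write(1)             # the restore half of the cycle
    return value

bit = CoreBit(1)
assert read_with_restore(bit) == 1
assert read_with_restore(bit) == 1   # the value survives repeated reads
```

Without the restore step, the second read would return zero, which is exactly the problem Babbage had to solve mechanically.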
In the mechanical realm, everything has to be connected physically by something. That something,
be it gears or belts or pulleys or chains, has physical characteristics that will limit its
action. So the solution has to be simple, and it has to work within this framework.
Babbage decided to just double up every axis. So, in theory, an engine that could store a thousand numbers would need
two thousand actual axes. I know it sounds overkill, but this is a satisfyingly simple
solution. The paired axis, call it a shadow axis if you will, was connected to the main axis via
a transfer gear that could be engaged or disengaged. When you go to read the value off an axis,
the value that's given off will also rotate the transfer gear, thus setting the shadow axis to
the read value. That value can later be fed back into the main axis using the same process.
It's all very symmetrical, which I do enjoy.
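Here's a hedged sketch of that paired-axis scheme, assuming my reading of Bromley is right. The names are mine, not Babbage's, and the gears are reduced to plain integers.

```python
# A sketch of Babbage's shadow-axis solution: reading the main axis
# zeroes it, but the rotation given off drives a transfer gear that
# sets the shadow axis, which can later feed the value back.

class Axis:
    def __init__(self, value=0):
        self.value = value

def read_axis(main, shadow):
    """Destructive read: main resets to zero, shadow catches the value."""
    given_off = main.value      # rotation given off as the gears reset
    main.value = 0
    shadow.value = given_off    # the transfer gear sets the shadow axis
    return given_off

def restore_axis(main, shadow):
    """Feed the shadow's value back onto the main axis."""
    main.value = shadow.value

main, shadow = Axis(3), Axis()
assert read_axis(main, shadow) == 3
assert main.value == 0          # the read wiped the main axis...
restore_axis(main, shadow)
assert main.value == 3          # ...but the shadow restored it
```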
These axes were only intended to store numeric data.
This is basically a big chunk of gears used for variables.
Code was handled elsewhere.
So, to come in hot with the anachronisms, this could be loosely called a Harvard architecture machine.
Code and data were kept in physical isolation. They're just in distinct locations in the machine.
I don't think this is all that surprising, since, as we've seen, Babbage isn't really reaching very good data density in the storehouse. I can't even begin to calculate a comparison. 10 feet for 40 digits isn't really
comparable to much of anything. Imagine how quickly you'd run out of memory. This is also
in line with many early computers. EDSAC was the first machine to store code and data in the same
memory. Code didn't start executing on that machine until 1949, so I don't think Babbage
is really behind the times here. It's also important to point out that the engine store
was not addressable in the same way as modern memory. The machine couldn't handle something
like, say, indirect addressing. A result of which is that Babbage didn't work with pointers or arrays. Now,
this is something I'm going to come back to when we talk about programming this machine.
I just want to make sure that we're on the same page here. The storehouse was similar to memory.
Babbage even faced problems that later researchers would also run into, but it was still a distinctly different kind of device.
Next, we arrive at the mill. Not the building, but the component. Once again, I think it helps
us to start with an analogy. The analytical engine's mill is similar to a modern processor.
And I really do mean roughly similar. The mill is where actual math occurred.
This is also where we hit our first big historical roadblock.
I've been alluding to this, but it's time to be more explicit.
The analytical engine was never built.
Babbage drafted plans for the machine during the 1830s.
He would also return to the project in the 1850s.
These plans are highly detailed.
We have engravings and explanations for every part of the machine.
Explanations of how everything worked down to the gears.
However, there are issues when it comes to discussing the particulars
of how the engine would have actually been used.
Some of this is flux in the design.
Babbage's ideas change over time, and we don't have a finished machine for a finalized reference.
Bromley, for instance, focused his 1982 paper on the engine as it stood in 1834.
That's one way around this particular problem.
We also lack explicit explanations from Babbage for which operations the machine would have supported. The details are spread
throughout Babbage's notes. Unlike later machines, there's no table of instructions to work off.
There's no documentation. As a consequence of that, it can be a bit fiddly to run down the capabilities of
the mill in full. The closest summary I've come across is an 1837 paper written by Babbage himself
called On the Mathematical Powers of the Calculating Engine. But we have a fun caveat here.
This paper was never published by Babbage. It was left unfinished and eventually found its way into the Museum of the History of Science's collection in Oxford. The paper was eventually published in a larger manuscript
written by Brian Randell in 1973. So, we have something that's basically a primary source,
but was probably not seen by any contemporaries. Anyway, from Babbage,
quote, The operations which can be performed in the mill are the following.
Addition, subtraction, multiplication, division, extraction of roots.
These five processes are all which it will be necessary to consider in the present volume.
Others might be added if required, and the additional mechanism would be small.
New barrels with studs differently arranged would be nearly the whole change.
End quote.
Already, this sounds different than what we're used to.
Where are all the operations to move around data?
What about addressing?
There's no bitwise anything here.
And crucially to the whole computer thing,
the list makes no mention of comparisons or
branching operations. The trick lies in the oblique reference to barrels with studs.
Here's the best way I can explain it. The analytical engine had two distinct instruction
sets. One set was for the user, the programmer, if you will.
That set was concerned primarily with running math operations. The other set operated on a
lower level, telling the analytical engine how to carry out more complex instructions.
This lower instruction set was stored on a series of studded barrels. You could call it microcode, I've seen
the comparison often made, but once again, we're dealing with a different beast altogether.
I think it's better to look at the engine's barrels as a substitute for fancy electrical
wiring. Remember that we're in mechanical land. Everything has to be physically connected. You can't just run a bus anywhere, the way a wire can reach.
Instead, you have to have some way to engage physical linkages. You have to have clutches.
This is essentially what the engine's barrels accomplished. Each barrel was broken up into a
set of verticals, basically columns of studs. The pattern of studs encoded an operation.
I don't have a nice list of these operations, this being deep internal machinations of the engine after all. Babbage gives some short
lists, usually with the caveat that there are other commands the barrel can encode. It's just
one of those fun things where there is a little bit of ambiguity. In general, the barrels
told the analytical engine how to move around data. To get why this matters, we need to address
the overall structure of the engine. The mill had three axes that are important here. The two
ingress axes and the one egress axis. These could be seen as similar to the registers of a modern processor.
When called up to perform an operation, the mill took values on the ingress axes as operands,
and the result was pushed out to the egress axis. So, an addition might be explained as
egress equals ingress1 plus ingress2. To facilitate that operation, the barrels would
initiate a shuffling around of data. So if a user program specified to add the value of two of the
store axes, that would actually fire off a set of commands stored on the barrels. The studs would
engage clutches to fetch data from the store, transfer it over to the rack, and then load it into the ingress axes. That's just the most simple operation the barrels performed. They could read
data from different sources into the mill. Babbage designed a secondary input method called
number cards to be used in conjunction with dynamic storage. These were mainly for constants,
unchanging variables that could be fed directly
into the mill as instructed by the barrels. A barrel could also initiate the mill to rotate,
engage, or disengage any other barrel, including itself. The designs I've seen show three barrels,
but conceivably an engine could have more. The idea here is that the set of barrels functioned as their own program.
These rotations, as well as other operations initiated by the barrels,
could be affected by certain conditions within the mill.
Bromley gives the example of an overflow if the result of an operation exceeds 40 digits.
If that condition was reached, then a barrel might initiate a
conditional advance to a special vertical. This ability to act based off a condition, call it a
conditional branch if you want to be fancy, is what makes the engine more than a calculator.
It makes it a computer. Now, there was one other set of operations that the barrels handled. This is
also a part that I've been avoiding so far. In Babbage's 1837 description, he lists these two
actions that the barrel can invoke. Quote, a variable card to be turned, an operation card
to be turned. End quote. That's among a larger list of operations.
You see, the barrels also controlled the flow of program execution.
Those programs were in the second user-facing instruction set that I mentioned.
Now, I probably don't need to tell you that programming the analytical engine was strange.
That should be self-evident
if you've been listening at all to this episode. Everything about Babbage's beast appears
qualitatively similar to an electronic computer, but closer examination shows unexpected differences.
Programming is an especially interesting case here. The machine read in programs from punch cards.
Programs consisted of a series of instructions.
That sounds broadly familiar, right?
Isn't that how most old computers were programmed?
Well, dear listener, the differences start to pile up as soon as we take a closer look.
Right out the gate, we are not talking about
the standard 80-column cards used in the 20th century. These are somewhere between the cards
used by automatic looms and the strange early punch cards used by Hollerith. By that, I mean
Babbage's cards were strung together to form a long, tape-like medium. And there is a distinct lack of complex encoding. At least,
encoding was kept to a minimum. From Babbage's writing, it's clear to me that the pattern of
holes on a card was taken pretty literally by the machine. One hole might instruct the mill
to run an addition, while the hole next to it might designate subtraction. This is in contrast to even early computers,
where encoded data was used to represent instructions.
In the more modern regime, a 1 might represent addition,
while a 2 could be used for subtraction and so on.
I think this just comes down to complexity.
For Babbage, it wouldn't have made much sense to devise some type of binary encoding
scheme and then build a mechanism and a device for decoding it.
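To illustrate the contrast as I understand it, here's a quick sketch. The hole names and opcode numbers are invented for illustration; the point is just that a literal hole needs no decoding mechanism, while a numeric opcode does.

```python
# Babbage-style: each hole position directly IS an operation.
# There's nothing to decode; the hole physically selects the action.
def decode_literal(holes):
    if "add_hole" in holes:
        return "add"
    if "subtract_hole" in holes:
        return "subtract"
    raise ValueError("no recognized hole")

# Modern-style: a number stands in for an operation, so the machine
# needs a decoding step (here, a lookup table) before it can act.
OPCODES = {1: "add", 2: "subtract"}

def decode_numeric(opcode):
    return OPCODES[opcode]

assert decode_literal({"add_hole"}) == "add"
assert decode_numeric(2) == "subtract"
```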
There were three types of punch cards in play.
I've already mentioned number cards, the cards used for feeding constants into the
mill.
While useful, that's not where the real oomph is.
The magic is held in the operations and variable cards.
Now, this is a strange and arcane type of magic. The actual code, the operations that tell the
mill what to do, are stored on a feed of cards separate from the variables to operate on.
At first, that may sound unworkable. I know it freaked me out when I started reading
about the engine. However, it does make some kind of sense. Describing addition is the easiest way
for me to try to pick this apart. To kick things off, you need two operation cards. Technically
speaking, all operations on the analytical engine are signed and fixed point.
Each operation card here is punched to specify addition and the sign of its corresponding operand,
so if you're adding 1 and 1, then each card will be punched for positive signed addition.
The first card is read, which fires off the barrels. The barrels instruct the mill to get ready for signed addition,
and that another operation card is needed,
so the card reader advances on the tape to the next card.
The barrels then instruct the mill to start reading in variable cards.
This is the strange part to me.
Addition on the analytical engine takes three variables, two inputs and one
output. So for each addition operation, you need three variable cards lined up and ready to be read.
The input cards here specify where data should be loaded from, either a numbered axis in the
storehouse or a number card. The output card serves the same purpose for,
well, outputs. You can specify either an axis on the store or that a number card should be punched.
Babbage isn't super clear about this, but as near as I can tell, number cards were also arranged as
a tape, so there's some nice sequential access going on here for constants. Let's say you wanted to add the values in axis 2 and a constant number on a card,
then store the result in axis 3.
You would encode three variable cards in that order, input, input, output.
Then you need to make sure the engine's card readers are synchronized.
Once the two operation cards have been read in,
the barrel then instructs the mill to read the first two variable cards and load those values
into the ingress axes. Once addition is complete, the barrels will instruct the mill to read the
next variable card and then transfer the value of the egress axis accordingly. There are two main things that I want to point out here.
First, the variable cards and operation cards have to be in proper order to work.
Initially, I was just thinking of this as some weird archaic mechanism.
I mean, I kind of get why Babbage had to do this.
He wasn't using some type of encode-decode scheme
to pack operations. There wasn't a precedent that told him he should do that for a computer.
It's a lot more convenient to have these two distinct card readers and two card feeds.
That said, I've started to see something neat here. If you squint a little, the variable cards kind of look like a stack. Now,
technically speaking, the feed of variable cards can be wound either way. The mill can go forward
a card or back a card. Usually a stack only goes one way. Normally, you push an item onto the top
of a stack. Then, when you want that item, you pop it off. But let's just take the simple
example for a moment, the example where the variable cards always move forward when read.
When you feed a program into the engine, you have to build up your stack of variables.
If you know your program starts with an addition, then you need to make sure you have three variable cards lined up and ready for use.
Here's where it clicked for me.
I'm a low-level kind of guy, I like programming assembly language as a recreational activity.
You know, for my health.
And I know I say this a lot on the show, if I made an advent of computing bingo card,
then talking about assembly language would have to go somewhere on the grid.
Anyway, down in assembly land, you get to think about how everything is implemented.
That includes calling conventions.
That is, you have to figure out how arguments will be passed to functions.
The easiest solution is just to put arguments in registers, the tiny chunks of memory that
exist inside the CPU.
Calling out to a function that adds two numbers? Well, then you just put your operands in register
0 and register 1. Done. The issue here is that registers are a really scarce resource. A
processor might only have a handful of registers. x86 processors, the chip family that runs most PCs, have six generally usable registers.
So then, how would your PC handle a function with seven arguments?
The professional solution is to use the stack.
The x86 architecture has pretty nice support for a stack.
Instead of putting arguments into registers,
you push them onto the stack. Then, when a function is called, it can pop its arguments
off that stack. And presto, you can now pass as many arguments as you want. At least, within reason.
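Here's a toy model of that convention, with the stack reduced to a Python list. This is a sketch of the idea, not any real ABI: actual x86 conventions also deal with return addresses, saved registers, and cleanup, all of which are omitted here.

```python
# A toy model of stack-based argument passing. The caller pushes
# arguments onto a shared stack, then the callee pops them off.

stack = []

def call(func, *args):
    for arg in args:            # caller pushes every argument...
        stack.append(arg)
    return func(len(args))      # ...then transfers control

def add_all(argc):
    total = 0
    for _ in range(argc):       # callee pops its arguments back off
        total += stack.pop()
    return total

# Seven arguments, no registers needed:
assert call(add_all, 1, 2, 3, 4, 5, 6, 7) == 28
```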
Let's think about this in the context of the analytical engine. The variable cards function like a pseudo-stack,
so maybe it makes more sense to talk about the mill's operations as functions that take
arguments off of that stack. For me, at least, that makes the engine sound a lot more approachable.
It's in terms I know and terms that are actually applicable to this older machine.
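Putting the two card feeds together, here's a sketch of a single addition with the variable cards treated as that pseudo-stack. The card formats and the dict-of-axes store are my own invention, not Babbage's notation.

```python
# One addition on the engine, sketched: the mill consumes three
# variable cards in order — input, input, output — just like popping
# arguments off a stack.

store = {2: 5, 3: 0}            # axis number -> stored value
variable_cards = [("in", 2), ("in", 2), ("out", 3)]

def run_addition(store, cards):
    _, axis1 = cards.pop(0)     # first variable card: first operand
    _, axis2 = cards.pop(0)     # second variable card: second operand
    ingress1, ingress2 = store[axis1], store[axis2]
    egress = ingress1 + ingress2            # the mill does the math
    _, axis3 = cards.pop(0)     # third card says where the result goes
    store[axis3] = egress

run_addition(store, variable_cards)
assert store[3] == 10           # axis 2 (5) added to itself, into axis 3
```

If the cards are out of order, the wrong axes get read or written, which is exactly why the feeds have to stay synchronized.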
Okay, I promised a second point about variable cards, and here it is.
If we want to use modern language,
then we could say that variable cards can only reference static addresses.
A card can, say, grab the value of axis 4, for instance.
It can't, however, tell the mill to grab the value from axis 4 and then treat that value as an address to pull from somewhere else in the store. In other words, the engine can't support
pointers. As we will see, the analytical engine can loop, but you'll never see a program that
loops through, say, an array. Lists and arrays just can't be implemented
on this machine, not in any meaningful way. This is a big part of what I mean when I said that the
store wasn't really memory. It's close, but some more powerful machinations are missing.
The final cards that we need to address are called the combinatorial and index cards.
These are technically operation cards, but with a twist.
In the words of good ol' CB, combinatorial cards are used, quote,
to govern the repeating apparatus of the operation and variable cards,
and thus to direct at certain intervals the return of those cards to given places
and to direct the number and nature of the repetitions which are to be made by those cards.
End quote.
This is the all-important conditional branching that makes the engine a true computer.
When the machine runs into one of these cards, the flow of operation is changed.
Depending on certain conditions,
the barrels can instruct the mill to run the operation tape forward or backwards a number of
cards. The index card mechanism is a little more vague. Babbage only explains that index cards are
used to fire off some counting mechanism. My assumption is that they are used like labels,
as a way for a computer to keep track of where
jumps need to go.
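Here's my guess at how that looping could work, sketched as code. The card format is entirely invented; the real mechanism is mechanical and far messier. The only point is the control flow: a combinatorial card winds the operation tape backwards while a condition holds.

```python
# A sketch of combinatorial-card looping: run an operation tape, and
# when a combinatorial card is hit, back the tape up while some
# condition on the store still holds.

def run_tape(tape, store):
    position = 0
    while position < len(tape):
        card = tape[position]
        if card[0] == "add":                 # ("add", src_axis, dst_axis)
            store[card[2]] += store[card[1]]
            position += 1
        elif card[0] == "back_while":        # ("back_while", cond, n_cards)
            if card[1](store):
                position -= card[2]          # wind the tape backwards
            else:
                position += 1                # condition failed, move on
    return store

# Repeatedly add axis 1 into axis 2 until axis 2 reaches at least 10.
store = run_tape(
    [("add", 1, 2), ("back_while", lambda s: s[2] < 10, 1)],
    {1: 3, 2: 0},
)
assert store[2] == 12
```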
I also have one final bit of fun to interject before moving forward.
In Babbage's unpublished manuscript, he suggests that programming the engine would,
in fact, be a bit of a pain.
Programming has always sucked.
So he offers up a solution.
Color coded cards. The scheme is also really simple. Each
operation is assigned a color. Variable cards used for those operations take on the same color,
with output cards always being black. So if you ran an addition, the two operation cards would
be on white cardstock, the two input variable cards would be on white, and the single output would
be on black cardstock. I think this might be the earliest attempt at computer debugging I've seen
yet. So what did programming this beast look like in practice? Well, you know what I mean,
semi-practice. We're still dealing with a machine that never actually existed.
There are some examples
of early programs for the theoretical engine in Babbage's notes and writings. However, most of
that went unpublished. For actually published programs, we have to turn to Countess Ada Lovelace.
Now, I have to burst some bubbles to start with, or maybe I have to complicate the picture.
Lovelace was not the first person to write a program for the analytical engine.
As the designer of the system, Babbage would have written the first scraps and eventually first complete programs during, you know, the design process.
But the story here isn't entirely straightforward.
Lovelace met Babbage in 1833 through a mutual connection. At the time, Babbage was still working on the Difference Engine,
but plans for the Analytical Engine were brewing. Lovelace and Babbage would stay in touch during
the development phase. A number of years later, in 1840, Babbage made a trip to Turin, Italy.
While there, he gave a lecture on the analytical engine at the, excuse my pronunciation,
Accademia delle Scienze di Torino.
One of the attendees was Luigi Federico Menabrea, at the time a mathematician and later the Prime Minister of Italy.
Two years later, 1842, Menabrea would compile his notes from that lecture into a paper describing the analytical engine. The paper was
in French and published in Switzerland. You know, as one does. It's also perhaps the first widely
circulated description of the machine. Lovelace would get a hold of this paper,
translate it to English, and expand it with a series of notes.
Menabrea's initial paper included a number of simple example programs. So, strictly speaking,
those would be the first published computer programs. I mean, if we call the analytical
engine a computer, and if we call instructions to a theoretical machine a program.
These programs would have been taken from Babbage's lecture, so that's still a point for Babbage.
But there's the matter of Lovelace's additions.
The translated article was titled Sketch of the Analytical Engine.
Lovelace added seven notes to it, labeled notes
A through G. I think it would be better suited to call these commentaries or maybe essays, since
the notes are actually longer than Menabrea's initial paper. Menabrea's section on the engine
is interesting, but it kind of loses its luster now that we have access to some of Babbage's
unpublished work.
Babbage just explains his own machine better than a third party could.
Lovelace's notes are much more interesting because they provide a commentary on the engine,
or, put another way, an analysis of the first computer.
I think that's some pretty important stuff. Crucially, Lovelace's notes show a degree of separation from the gear level that Babbage was working at. There's a level of
abstraction to the Countess's commentaries that isn't as present in Babbage's own writings. In
that sense, I think it's probably less realistic to call Ada Lovelace the first programmer.
That's a little reductionist.
Instead, maybe we should look at her as the first computer scientist.
This is going to be a bit of a long quotation, but I think it's emblematic of Lovelace's notes.
In note A, she gives a broad explanation of Babbage's engines,
pointing out all the crucial advantages
that the analytical engine has over the difference engine. An important point she makes is that the
engine doesn't necessarily have to be used for mathematics, but shows how operations in general
could be automated. From note A, quote, the operating mechanism can even be thrown into action independently of any object
to operate upon. Again, it might act upon other things besides numbers, where objects found whose
mutual fundamental relations could be expressed by those of the abstract science of operations.
She continues, supposing, for instance, that the fundamental relations of pitched sound in the science of
harmony and of musical composition were susceptible of such expression and adaptations,
the engine might compose elaborate and scientific pieces of music of any degree of complexity or
extent. End quote. To my ear, this is starting to sound similar to the Church-Turing
thesis. That a computer can compute anything computable. That math isn't the sole purpose
of these new machines. There are some wide-sweeping caveats to what counts as computable.
Even with restrictions, the computing domain is wide open. The idea that a
computer could be more than a calculator would still be cutting edge all the way up to the 50s,
maybe even beyond. Here we're seeing the first seeds of that cutting edge idea, and it's happening
around the first computer. That's a big deal. I don't know how else to put it.
Now, that all said, most of Lovelace's notes are concerned with math. I mean,
the analytical engine was still a number-crunching machine at this point. We can't break into the
domain of text, images, or sound quite yet. The note slowly builds in complexity. By note F, we hit something that
I want to pause on. Lovelace starts talking about loops. Quote,
It is obvious that this mechanical improvement is especially applicable wherever cycles occur
in the mathematical operations, and that, in preparing data for calculations by the engine,
it is desirable to arrange the order
in combination of the processes with a view to obtain them, as much as possible, symmetrically
and in cycles, in order that the mechanical advantages of the backing system may be applied
to the utmost. End quote. I'm never going to stop hammering this point home. Conditional loops are what make computers computers.
That's the crucial element that allows a machine to compute anything computable.
Outside theory, there is a more practical reason to have loops.
As Lovelace points out, you can take advantage of loops, or cycles as she calls them,
to simplify operations.
There's a certain interplay here going on between traditional math and the analytical engine. Some equations or algorithms are cyclic
in a sense. Take the triangular number series from back in the difference engine days. New entries
in that series could be defined in terms of the previous entry. So yeah, you can hamfist
everything out quite easily. If you want the first 100 triangular numbers, you could write out 100
operations that would get you there. Or you could take advantage of the cyclic nature of the series.
The difference engine did this by design. That was all the engine could do. But the analytical engine could be instructed to
cycle operations. You could construct a loop in any way you wanted. That means you might be able
to arrive at a more elegant solution than Babbage initially designed the machine for.
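The triangular numbers make the cyclic trick easy to see in code. This is a modern sketch of the idea, not engine notation: each entry is just the previous entry plus a counter, so one short loop replaces a hundred written-out operations.

```python
# Triangular numbers via a cycle: T(n) = T(n-1) + n.
# One loop does the work of arbitrarily many unrolled operations.

def triangular_numbers(count):
    results = []
    total = 0
    for n in range(1, count + 1):
        total += n               # reuse the previous entry
        results.append(total)
    return results

assert triangular_numbers(5) == [1, 3, 6, 10, 15]
```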
Note G is where the rubber meets the road. When people say that Lovelace wrote the first computer program,
they're talking about Note G.
In this note, Lovelace presents a program for calculating Bernoulli numbers.
This is another series of numbers that, similar to triangular numbers,
are used in certain mathematical equations.
So we're still in the realm of generating mathematical tables.
The difference is that the equation for calculating Bernoulli numbers is pretty complicated. Some definitions are recursive, or you can represent the series
using summations. However you cut it, Bernoulli numbers are annoying to calculate, so this makes
a good target for automation. Lovelace provides a walkthrough of how these gross definitions can be broken down
into a series of simple operations. Then, she drops the final result, the actual program.
The program isn't in a familiar form. Lovelace represents her programs as a table. There is some
precedent for this. Menabrea's section of the Sketch uses tables to present example programs,
so we know that Babbage was doing the same.
However, Lovelace's tables are structured in a unique way.
These tables are broken up into five sections.
Number of the operation, nature of operation, variables for data,
working variables, and result variables.
The first two chunks are simple. Number of
operation is basically our line number. Nature of operation is just the operation to run, so plus,
minus, divide, multiply. The variable chunks are more interesting. Each of these sections has
columns labeled with axis numbers, so each column represents a single storage axis. The variable
for data part is where initial inputs are described. These are the values that were fed
into the analytical engine before operations began. In the Bernoulli numbers program, one of
these inputs was n, which represented which Bernoulli number you wanted to calculate.
The working variables are, once again, just what they sound like.
These are temporary variables used during calculations.
It's active storage, or heap, or just allocated memory.
As operations were carried out, their results were dropped into one of these axes.
This leads us to the final section, result variables.
These were, well, final results. Once all calculations
ceased, you could read off your new Bernoulli numbers on those axes. What I'm getting at here
is these tables are less like source code and more like diagrams showing the flow of data within the
engine. Lovelace makes special note that there are distinct types of variables.
Sure, each axis is identical, save for its position,
but you as the programmer get to choose what each axis represents.
What she's talking about is another kind of abstraction.
A programmer has to go into a computer and place meaning on its numbers.
Here, Lovelace is providing simple structure that can be used to that end.
The final piece, the topper to the program, is the fact that Lovelace uses nested loops.
In order to reduce the number of operation cards needed, she has a big loop that has two smaller loops inside it. That's a classic programmer
move. Given the context, perhaps we should call it THE classic programmer move. This is about the
most complicated program the analytical engine could have run.
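For a sense of why Bernoulli numbers want nested loops, here's a modern sketch using the standard recursive identity. To be clear, this is not Lovelace's exact algorithm from Note G, just the same general shape: an outer loop for each new number, an inner loop summing over the earlier ones.

```python
# Bernoulli numbers via the identity:
#   sum_{j=0}^{m-1} C(m+1, j) * B_j = -(m+1) * B_m,  with B_0 = 1.
# Exact rational arithmetic keeps the results clean.

from fractions import Fraction
from math import comb

def bernoulli(count):
    """Return the list B_0 .. B_count."""
    b = [Fraction(1)]
    for m in range(1, count + 1):        # outer loop: each new number
        acc = Fraction(0)
        for j in range(m):               # inner loop: sum over earlier ones
            acc += comb(m + 1, j) * b[j]
        b.append(-acc / (m + 1))
    return b

b = bernoulli(4)
assert b[1] == Fraction(-1, 2)
assert b[2] == Fraction(1, 6)
assert b[4] == Fraction(-1, 30)
```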
Alright, that brings us to the end of today's episode.
This is one of those topics that I was initially concerned about.
I was worried that it would be too inaccessible for me, but I'm glad I was wrong.
The analytical engine is a fascinating machine.
Although it was never built, we have enough information about it that we can figure out more or less exactly how it operated.
The engine is undoubtedly a computer.
But the first computer?
I think as long as we have proper caveats, that statement can be true.
Babbage designed the first digital computer.
It used base 10 instead of base 2.
It was mechanical instead of electrical, and it was never built.
As long as we're precise here, then the analytical engine was some kind of first.
It also fits into the framework of later machines.
We just have to be careful about certain bits.
Some of these differences are due to technical limitations.
You can't have a very big memory space if memory takes up a lot of physical
space. Other limitations, like the weird multi-feed programming, were probably more due to a lack of
precedent. Babbage was striking out on his own, so some weirdness is to be expected. To close out,
we have the question of who counts as the world's first programmer. If we say that the
analytical engine was the first computer, then Babbage wrote the first computer program. I think
that's a simple answer. However, Countess Lovelace was the first person to develop software for the
machine as an activity separate from the engine's physical implementation. As long as we restrict programming to being
separate from the actual computer design and construction, then we can definitely call
Lovelace the first programmer. But I think that's a little bit of a reduction. Lovelace's notes on
the analytical engine are less programming and more an example of computer science. Sure, they contain an early and very complex program for the engine, probably the first published program for the engine that's more than just notes.
Anyway, the point I'm trying to make here is the story's a lot more complicated
than just slapping a label on things.
We have to be careful when we put modern labels on something this old.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of computing's past.
And hey, if you like the show, there are now a few ways you can support it.
If you know someone else who'd be interested in computer history,
then why not take a minute to share the show with them?
You can also rate and review on Apple Podcasts.
And if you want to be a superfan, you can now support the show directly through Advent of Computing
merch or signing up to be a patron on Patreon. Patrons get early access to episodes, polls for
the direction of the show, and bonus content. You can find links to everything on my website,
adventofcomputing.com. If you have any comments or suggestions for a future episode, then go ahead
and shoot me a tweet. I'm always happy to hear from listeners.
I'm at AdventOfComp on Twitter.
And as always, have a great rest of your day.