Advent of Computing - Episode 86 - Fluidic Computing
Episode Date: July 10, 2022

What is a computer? A miserable pile of electrons! But... not necessarily. I have yet to find a fully satisfying definition for "computer" that encompasses the full grandeur of calculating machines. This episode we are further complicating that quest by adding fluid-based computers to the mix. We will be looking at 3 machines that crunched numbers using nothing but fluids and tubes. There's actually a rich tradition of fluidics to talk about.

Selected sources:

https://archive.org/details/electronicbrains0000hall/page/186/mode/2up - Electronic Brains chapter on MONIAC

https://archive.org/details/ACFELANALYTICALSTUDIESOFFREEZINGANDTHAWINGSOILS1953/LUKYANOV%20-%20Hydraulic%20Apparatus%20for%20Engineering%20Computations%20%281955%29/ - Translated paper on the water integrator

https://www.gwern.net/docs/cs/computable/1964-gluskin.pdf - FLODAC!
Transcript
If you've ever taken a course in electrical engineering or electronics in general,
then you've probably heard this analogy before.
Electrical current is like the flow of water.
A battery is similar to a pump that just moves electrons around.
Wires are like tubes.
Resistors are like choke points that constrict the free flow of electrons.
Capacitors, well, those are just
big tanks full of sloshing electrons. Inductors are... well, let's not worry about inductors for
a minute. The analogy is a fun little teaching toy. It's easy to understand on a very basic level, though it doesn't really hold up when you move into more advanced topics. But what if you really riffed on the whole hydraulic analogy? What if you took
things to their logical conclusion and actually, you know, made one of these hydraulic circuits?
You can actually do just that. At least, it works for many electrical components.
You can quite easily model a circuit just using tubes full of water, or compressed air, or really any other kind of fluid.
All you have to do is make sure your fluid restrictions are the right size and everything is pumped at the right pressure.
You should even be able to create your model accurately enough to run calculations on it.
The hydraulic analogy can actually be quite quantitative if you get down to it.
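To show just how quantitative, here's a minimal sketch in Python. Everything here is my own illustration, not anything from the episode: pressure difference stands in for voltage, volumetric flow stands in for current, and a flow restriction obeys a hydraulic Ohm's law.

```python
# Hydraulic analogy sketch: pressure difference plays the role of voltage,
# volumetric flow plays the role of current, and a flow restriction plays
# the role of a resistor (delta_p = flow * resistance, like V = I * R).
# All names and numbers here are invented for illustration.

def flow_through_restriction(delta_p_pa, resistance):
    """Flow (m^3/s) through a restriction given a pressure drop (Pa)."""
    return delta_p_pa / resistance

def series_resistance(restrictions):
    """Restrictions in series add, just like resistors in series."""
    return sum(restrictions)

def parallel_resistance(restrictions):
    """Restrictions in parallel combine just like resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in restrictions)

# A "pump" (the battery) driving two restrictions in series:
pump_pressure = 10_000.0                  # Pa, analogous to 10 "volts"
r_total = series_resistance([2e6, 3e6])   # Pa/(m^3/s), analogous to ohms
print(flow_through_restriction(pump_pressure, r_total))  # the loop "current"
```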
But can we go further?
If we use the hydraulic analogy to model circuits,
we should be able to model just about anything, right?
Can we model math?
Can we even, perhaps, build an entire computer using nothing but tubes?
That has to be too far-fetched, right?
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 86, Fluidic Computing.
Today, we're going to be looking at one big question. It's one that keeps me awake at night and one that I keep coming back to.
What even is a computer?
Of course, we do have a guiding principle thanks to Alan Turing. The generalized definition boils down to a machine that can
execute conditional branching. A true computer has to be able to compare values and then change
its execution based on the outcome of that comparison.
That's worked well as a goalpost for modern machines and for modern programming languages,
but I think it breaks down a little bit once we look at the historical context.
This general guideline, usually just called Turing completeness, doesn't really work for analog machines.
It also doesn't work well for some mechanical computers. So when we start going back to the dawn of the digital era and even further,
we need to get a little more loose with our definitions. This episode, we're going to be
diving into a computing technology that's obscure, strange, and outside
the well-defined Turing complete realm. We're talking about water and fluid-based computers.
And that's right, that is actually a thing. So let's start by breaking down this completeness idea and, well, removing its biggest requirement.
Never mind the whole reacting based off comparisons thing for a second here.
A computer has to have some way to represent numbers and some way to operate automatically
on those numbers.
The automatic bit here I think is crucial.
You need a computer that can automate away some work.
Otherwise, it's just a calculator.
Now, you could say that computers really just automate math.
Or, if you want to be philosophical, you could say that a computer simulates human mathematics.
Current digital computers are boring, since they represent numbers as
a series of discrete electrical pulses. You get on and off, one and zero, lame and more
lame. Some analog machines represent data as rotation of a drum, which is a little more
interesting and has more of a cool factor. Going further afield, we get to the fluidic machines.
These computers store numbers as volumes of water.
We're talking encoding via milliliters here.
That is far cooler than ones and zeros.
As far as operations go, well, that follows directly from this very flexible encoding scheme.
It's possible to perform mathematical operations on these vats of fluid using pumps, pipes,
and valves. As far as automatic, look no further. Equations can literally be modeled as the flow of a fluid through pipes. Now, jokes aside, fluid-based
machines can actually implement many of the features of their more solid counterparts.
In fact, if carefully constructed, they can implement every feature that a normal,
run-of-the-mill electronic computer has. But do they have any advantages?
Well, that gets a little murky.
Today, we're going to be taking a very broad survey of fluidics.
This broad approach is partly due to sourcing.
There aren't exactly volumes and volumes written about water calculators.
I also think that this will give us
a pretty well-rounded foundation for understanding these strange machines. To that end, we're going
to be looking at three computers in particular. MONIAC, the water integrator, and FLODAC.
Between these machines, I think we should be able to draw some conclusions.
Why would you want to use fluids to model mathematics? How do fluidic machines relate to the larger field of computing? And finally,
the subtle one, do these strange machines tell us anything about the human drive towards computers?
This is a question that I tend to mull around in my head. After looking at the
sheer diversity of machines that have been built, it kind of makes me wonder if it was inevitable
that us flesh folk would look towards some more hard, automated counterpart. I know that may sound
a little poetic, but it's a serious idea to consider. Were we somehow pre-wired to build
computer-like machines? Or are computers just so simple that we keep reinventing them?
Now, before we get into the actual show, I want to just slip in one quick announcement, if I may.
I happen to know the producers, so I think they'll let me get away with it. I'm currently working
on a project that's being tentatively called Notes on Computer History. It's going to be
something between a magazine and an academic journal discussing the history of computing.
Now, my big goal with this is to provide an accessible venue for people to write about their research in computer history.
I don't want this necessarily to be a place for just academics. I want it to be a community
project that a lot of people from many different backgrounds and different skill levels can
contribute to. The impetus is there's not really a place for that right now. I mean, there are academic publications that discuss computer history,
but those are hard to get into.
You have to go through a peer review process, for instance.
So Notes on Computer History is going to be an accessible place
where all of us have a chance to contribute.
To that end, I'm looking for contributors.
Right now, I have a few people that have already said they'll help me do editing for submissions,
but I could still use some more editors.
If anyone listening is good at typesetting and LaTeX, I could use maybe one or two more
people that know LaTeX besides myself on the team.
And more than anything, I'm looking for submissions.
So if there's something about computer history you're interested in,
or something you've been looking into, maybe a neat machine you've run across,
then please get in touch.
I'd love to feature you in this magazine.
You can reach out to me on Twitter at Advent of Comp, or just email me directly.
I'm adventofcomputing at gmail.com.
I'm planning, once I get a domain for this project,
to have a fancier email, but right now, just send something to my personal email and we'll talk.
I'll get you in touch with an editor. With that quick announcement out of the way, let's get into
the actual content for today. To start with, we need to talk about economics. We also need a disclaimer.
I am no economist.
I don't know which theories are correct, and I don't really know that much about the field.
So, please, don't send me hate mail if I malign your favorite or chosen school of economics.
Rest assured that I don't exactly know what I'm talking about.
With that said, let's talk some economics.
Specifically, we need to talk about two economists, William Phillips and John Maynard Keynes.
The last name here is probably the most familiar.
Keynes is the progenitor of the so-called Keynesian school of economics.
Now, this is all outside my usual area of expertise, but as I understand it,
immediately after the Depression, Keynes published a hot tract called The General Theory of Employment,
Interest, and Money. This would become the foundation for his tradition of economic theory.
The general idea is that markets should be well-regulated in order to flourish,
and that small fluctuations on the micro scale can build up and lead to fluctuations on the macro scale. One final piece to note is the employment part of Keynes' general theory. The text deals,
in large part, with how the rate of employment ties into other economic factors.
Another thing to keep in mind just in general about economics is that it's something of a flexible science.
It's not entirely a hard science; it doesn't work from first principles like math or physics,
but it's also not a totally soft science like history, for instance.
It's somewhere in between.
Economics tends to use math as a way to describe very human-driven processes.
It's the last bit here that adds in some mushiness.
The point is that economics can, in fact, benefit from calculations.
At least, mathematics can make for a guide to navigate that mushy human factor.
At the most, that can be used for quantitative purposes.
Different schools of economics will tend to have different uses for mathematics, or maybe interpret equations in a different way, but numbers are key here.
The fancy word, if you want to sound impressive, is model.
As in, why yes, this all conforms to my latest model.
In general, a model is just an equation that's used to represent some phenomenon or system.
The game Oregon Trail, for instance, uses a model to calculate how food is
consumed during a trek. In that example, a model is used to simulate a phenomenon, to figure out
what's happening at any given step. You need some way to calculate how much food you have once you
reach Oregon in the first place. Models can also be predictive. That's the more
exciting type of model for economists, say: the ability to model and then predict economies.
As long as your model is sound, then there's no reason you can't just run it forward a week,
or maybe even a year. The Keynesian school of economics rose to prominence
after the publication of the general theory. It garnered many followers and the theories and
models laid out by Keynes were expanded over the years. One of these devotees was the other
economist I mentioned, William Phillips. He got to economics via something of a circuitous route. You see,
Phillips was initially trained as an electrical engineer. Phillips was born and raised in New
Zealand back when that country was still held by the British Empire. Quick throwaway fact about our
main man Bill here. At one point, he worked as a crocodile hunter. So, needless to say, he had a mixed
background. Phillips eventually found his way to the UK, where he was trained as an electrical
engineer via a correspondence course. This was just prior to the outbreak of World War II.
After the war, he returned to the UK and resumed his education.
Now, there's something here I want to highlight.
That's the danger of multidisciplinary studies.
Of course, I mean danger in the best possible way here.
Phillips had initially been fascinated by engineering and electronics.
Here I'm drawing from the book Electronic Brains: Stories from the Dawn of the Computer Age by Mike Hally.
The text has a chapter covering Phillips' exploits. Hally chose to start that passage
by describing how Phillips, a teenager at that time, fixed up a dead car so he could get to
school more easily. From there, things sort of snowballed. Like many others, Phillips was profoundly changed by the Second World War.
He fought for the British first in China, then in Java, all the while fleeing the advancing Japanese troops.
Phillips was eventually captured, becoming a prisoner of war.
But his engineering savvy wouldn't rot away in some prison camp.
Phillips would work on a number of clandestine radio sets that were
cobbled together with the help of his fellow inmates. Hally also relates how Phillips built
a tiny electric stove for boiling tea. Tiny creature comforts like this were one way to
survive the horrors of internment, but they also must have helped keep Phillips' mind sharp.
We don't know the specifics, but during this period,
Phillips seems to have become interested in organized human relationships.
I say that we'll never know the specifics here because, in general,
prisoners of war don't talk much about their experiences.
I have a little bit of a personal connection here, too.
My grandfather was also interned in Japanese prison camps during the Second World War.
He very rarely and very reluctantly discussed his internment with the rest of the family.
It's an experience one wants to forget.
When pressed, Grandpa would bring up light anecdotes,
like swapping a watch with a guard for a bottle of beer, but he never really gave us the full story on anything.
It appears that Phillips was much the same. So, we have a few anecdotes, a few small details about
the better moments, but we don't have the full story. Hally's chapter on Phillips was constructed primarily from interviews with those who knew Phillips personally.
From that, Hally suggests that Phillips was intrigued by the social connections formed between inmates.
After the end of the war, Phillips returned to England and, armed with this new fascination, enrolled in a sociology program.
So, evidently,
a bit of a transformation occurred. However, he quickly realized that he wasn't all that interested in sociology itself. Part of his degree program involved taking a general course
in economics, and it was at this lower-level class that he found his true new passion.
Heather Sutton, one of Phillips' classmates, describes the transformation like this.
Quote,
We were walking around Lincoln's Inn Fields one day,
and he explained to me that he had realized that economics was just simple hydrodynamics,
and he proceeded to wave his arms around and tell me all about it. Not that I
knew anything about hydrodynamics, but he really engaged my attention and I never forgot that
meeting." Specifically, Phillips had become engrossed with Keynesian economics. The hydraulic
analogy here does seem pretty sound.
Economic models usually represent how funds flow from one category to another,
and in the Keynesian school, that flow is controlled and restricted by some regulatory agency.
You might be describing how money flows out of your wallet into things like rent, food, and internet hosting fees. All the while,
a small trickle is diverted off for taxes. In a traditional model, that flow would be expressed as an equation. Something like total expenses equals tax rate times the sum of rent, food,
and services. That works great quantitatively, but it has some conceptual issues. The equation on its
own represents a relationship which is pretty abstract. If you plug in values, then you get
actual results, but you only get one snapshot in time. Maybe the result is $50. That might be a cool value to look at, but it doesn't really describe how the relationship
works. It doesn't provide much understanding. Another option is to generate a graph of the
model. For this, you need to pick your variables. A 2D graph, the kind you can draw on a chalkboard,
will have one free variable, so you might end up
needing a number of graphs in order to show how expenses change as each category fluctuates.
That's not all that intuitive and possibly time-consuming for more complex models.
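To make that snapshot-versus-curve distinction concrete, here's a minimal sketch of the toy expenses model from a moment ago. The equation is the one stated above; the dollar figures and the tax-rate sweep are invented for illustration.

```python
# Toy model from above: expenses = tax_rate * (rent + food + services).
# Plugging in one set of numbers gives a single snapshot value; sweeping a
# variable gives the kind of curve you'd otherwise draw on a chalkboard.

def total_expenses(tax_rate, rent, food, services):
    return tax_rate * (rent + food + services)

# One snapshot in time (all figures invented):
print(total_expenses(0.10, rent=300.0, food=150.0, services=50.0))  # 50.0

# A "graph": how expenses change as the tax rate fluctuates.
for rate_percent in range(5, 30, 5):
    print(rate_percent, total_expenses(rate_percent / 100, 300.0, 150.0, 50.0))
```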
The hydraulic point of view that Phillips stumbled on offers a totally new way of representing this model.
Plugging in every number to that model gives you a discrete value, $50.
Plotting a graph gives you something more like continuous values, a curve or a line.
The hydraulic model gives you something entirely different.
It shows a fully dynamic representation and,
depending on how you set things up, allows you to inspect different aspects of that model in
real time as they change together. Here, we see the threat of interdisciplinary studies
reach its full bloom. It's something that shows up surprisingly often in my coverage.
Someone will have a
background in one field and interest in a second and go on to do great things. A diverse background
tends to, in my admittedly amateur analysis, open a person up to new and unique ways of thinking.
For Phillips, his engineering background gave him a new insight into economics.
It also drove him further.
If this hydraulic modeling was a passable theory, then why not build a machine?
It sounds like these fluidic dreams had come to Phillips near the end of the school year.
He was already kind of done with sociology.
He'd been slacking in order to draft plans for a strange device.
Once school was out for the summer, he set to work. In his landlady's garage, he started building a strange machine. A fascinating point here is that Phillips had already secured a little bit
of funding. An economics professor had seen the value of this hydraulic model, so kicked in some money to help construct the device.
His landlady also provided some funds.
From what I've read, it sounds like Phillips had a sort of magnetism to him.
He was able to make people believe in his ideas.
The machine he built would be christened MONIAC.
As folklore goes, it was a joke on the name ENIAC, which had been recently unveiled.
As far as the timeline goes, this does check out. Phillips constructed MONIAC in 1949,
about four years after ENIAC came to life, so there's plenty of leeway in there for him to hear
about the more real computer. Specifically, MONIAC was designed to model the
English economy, at least initially. It was a simulation of the Keynesian view of macroeconomics.
In that sense, it was a series of equations modeled via the flow of water from tanks.
That all sounds pretty simple. In practice, it's also relatively straightforward.
But the trick is getting everything to flow around just right.
MONIAC consisted primarily of a set of tanks connected by tubes. Now, I want to approach
MONIAC as if it were a computer. That's going to be a primary assumption here. And by computer,
I just mean a machine that represents numbers and automatically operates on them. So forget
the whole comparison business. The reason to take this framework is that it gives us a nice
list to move down while discussing MONIAC's technical features. It will also make it easier to point out its
idiosyncrasies, I think. So let's start with a very basic rundown of the machine.
MONIAC was composed of clear tanks and clear tubes, plus a pump and a few valves. Dyed water
started out in a reservoir at the bottom of the machine, which represented the total market
you were modeling. A pump carried that water up to the top, where the fluid flowed down under
gravity. All the stuff in the middle, all the paths for the working fluid, represented the model.
Looking at how MONIAC handled numeric representation gets us, well, into some weird territory pretty quickly.
Phillips had built an analog machine, so we get some baggage here. In general, analog computers
represent data as continuous values. You don't have a distinct 0 and 1. You have a rotating
shaft that can represent any possible value between 0 and 1.
Results of calculations also exist in this continuous state.
As your gears grind, you'll see physical movement.
Dials will sweep through multiple values on their way to a final answer.
Certain numeric values in MONIAC were represented as the water level in small tanks.
These were particular values of interest.
A 2007 article by Chris Bissell called The MONIAC: a hydromechanical analog computer of the 1950s provides some insight on the machine, plus useful diagrams, which when you're dealing with analog
computers is very important. As Bissell explains, these tanked values were significant variables
in economic models. MONIAC could be set up in a few different configurations, but the most
documented was a simple simulation of national economies.
That's the model that I'm working off here.
This specific model had variables for income taxation, surplus balances, federal reserve,
domestic expenditures, foreign-held balances, and the ominous International Monetary Fund.
You know, all the usual fields you might have in your household budget spreadsheet.
These were all variables that could be monitored, numbers that could be read out and watched for changes.
Call them registers if you want to make the comparison to a modern computer.
However, some numbers were represented as flows. For instance,
there's a big pipe at the very top of MONIAC labeled taxation. That is a variable that would show up in an economics model, in a mathematical model specifically. But it's not a measurable value for MONIAC. These flow, or piped, variables are constants in the actual equations. Many of them are also adjustable, though probably only before MONIAC starts running; I think moving a valve mid-run might upset the model's integrity. The taxation tube, for instance, can have its flow restricted by a valve.
So this is still a type of numeric representation, it's just more of an input than an output,
if that makes any sense here. Numbers were operated on by the simple flow of water
downward through these labeled tubes. But there were some catches that modified that flow.
Some tanks would drain freely. The Federal Reserve tank, for instance, was allowed to flow free.
Phillips also implemented these interesting proportional flow slots. These are tapered
slots that restrict the flow out of the tank such that they drain at a rate proportional to their fill
level. They look like slim pyramids. If I understand correctly, then the more full the tank
is, the higher the flow rate. Some valves in MONIAC were coupled to floating switches, which
acted as water level gauges. The idea here was to create feedback that would change how freely water flowed through certain tubes.
In this economic model specifically, imports are affected by domestic expenditures and foreign-held balances.
Accordingly, there's a valve on the imports tube that is opened and closed by corresponding floats in the proper tanks.
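Here's a minimal numerical sketch of that proportional-drain behavior, under the simplest possible assumption (outflow proportional to fill level); the tank level, rates, and time step are all invented.

```python
# Sketch of a MONIAC-style tank with a proportional flow slot: the fuller
# the tank, the faster it drains. With outflow proportional to level, the
# level decays toward a steady state. All constants here are invented.

def simulate_tank(level, inflow, drain_coeff, dt, steps):
    """Step a tank forward: d(level)/dt = inflow - drain_coeff * level."""
    history = [level]
    for _ in range(steps):
        outflow = drain_coeff * level        # the proportional flow slot
        level += (inflow - outflow) * dt
        history.append(level)
    return history

# Watch the level drift toward inflow / drain_coeff (25.0 here), the kind
# of self-correcting equilibrium Phillips wanted to make visible.
for lvl in simulate_tank(level=100.0, inflow=5.0, drain_coeff=0.2, dt=0.1, steps=10):
    print(round(lvl, 2))
```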
In other words, MONIAC hits all our key
points. It had a way to show numbers, kinda. It operated on those numbers automatically.
That's pretty close to a computer as far as I'm concerned. There's even a small element of
feedback and choice in there. MONIAC could also create a data product.
Phillips would later expand MONIAC by building plotters that connected to a series of floats,
that way the value of variables could be recorded.
But here's the thing.
MONIAC is sometimes called a computer, but it wasn't ever a serious computer. It's not a big suit-and-tie
kind of machine. The first outing of MONIAC was a demonstration, but not as a hardcore calculating
device. No, MONIAC was always designed to be used as a way to visualize economic models.
You could call it a teaching aid, but I have some problems going that
far. MONIAC was first shown off at a seminar at the London School of Economics. The crowd was
composed primarily of grad students and professors. The machine wasn't being used as a teaching aid
to show first-time students how economics worked. It was positioned more as a way
to visualize a complex system. This was a difficult task given the state of the art in 1949.
Phillips claimed that MONIAC was accurate within a few percent of mathematical models, but
this machine wasn't really producing quantitative results. The computer was more a way to aid in a qualitative understanding.
Ultimately, a number of MONIACs were actually built in varying configurations.
It proved useful in this economic education niche.
But this isn't the end of the episode.
We can keep going down this weird fluid rabbit hole.
What if we actually wanted to produce quantitative results using water? Well, it has been done,
we just need to go a little further back in time. To get into this truly wacky hydraulic world,
we need to head to the USSR in the 1930s. In general, it seems the computing
developments behind the Iron Curtain are somewhat unknown in the West. Of course, I have to give an
obligatory shout out to my friend Kristaps over at the Eastern Border Podcast. He did an episode
on Soviet computers a while back. It's a good show to check out. Anyway, this obscurity becomes doubly true
once we get into the era of analog computers. I mean, American analog computers are pretty
darn obscure to Americans already. Part of this lack of coverage comes down to accessibility.
Analog machines work on totally different principles.
The language barrier makes studying Soviet analog computers especially difficult.
But luckily for us, some of the suits over in Washington have translated a few documents.
This is actually where the story gets even weirder.
In the early 50s, the Army Corps of Engineers was looking to build a hydraulic
analog computer of their own. Yeah, this technology isn't a one-off thing that Phillips dreamt up.
We'll get back to the provenance of this translation later, but I want to get to the
actual document first. Vladimir Lukyanov was working in construction during the early era of the Soviet Union.
More specifically, Lukyanov worked as an engineer for the National Rail and Road System.
During this period, the late 1920s into the 30s, the railroad was rapidly expanding.
The state in general was rapidly industrializing, which demanded more infrastructure.
But there was a weird problem.
Cement.
I know, once again, I bring you the mundane parts of history.
But, my friend, that's really what computing is all about.
Here's something you've probably never thought about.
Cement doesn't always cure evenly.
Different parts cure at different rates,
interior regions cure differently than exterior regions, and these rates are all affected by
temperature. Now, in some situations, this isn't a big deal. Each chunk of a sidewalk doesn't have
to harden up at exactly the same time. But if not accounted for, this unevenness
can spell doom for cement structures. Think bridges, tunnels, foundations, and even roads.
Uneven curing can, in some cases, lead to cracks that weaken overall structural integrity.
So your bridge might look great, your pylons are all set. The roadbed is being laid.
What would happen if one of your pylons had cured inconsistently? Maybe the shape that was chosen
didn't allow for internal regions to harden or dry or heat up at the same rate. There's a gradient somewhere, a small stress forms. That could cause
internal cracks, and those cracks could propagate, leading to a disaster.
The temperature component here is especially insidious. The USSR covered a massive region
of the planet, from warm coastal waters in some southern regions to
some of the coldest parts of the world up in Siberia. A concrete structure might cure just
fine down in Odessa, but in Vladivostok, a cold winter might make your cement crumble.
These kinds of integrity issues aren't always immediately noticeable either. It might take months, years,
or even decades for anything to actually go wrong. The way around the situation was to design
structures and mix cement in a way to minimize cracking, but that's not all that easy.
Surprisingly, this involved a lot of math. If you know much physics, you may have spotted the
red flag here. I mentioned temperature. That means this is a thermodynamics problem. For those
uninitiated lucky few, thermodynamics is one of the nastier parts of physics. The math involved is frustratingly complex. You end up needing to handle vector math,
and you end up running a lot of partial derivatives. If you don't know what that means,
well, let me just put it this way. The math involved takes a lot of work. This is especially
true when you need to solve thermodynamic equations. For instance,
if you want to try to figure out what temperatures will cause a slab of cement to crack.
That's not just plugging in numbers, that's rearranging and then plugging in numbers.
It's a small difference, but it ends up making things a lot more complicated.
Lukyanov's solution was to automate the process. Now, I'm working with limited
documents, so I'm not entirely sure where Lukyanov got this idea. I don't have a diary entry from the
Lukyanov papers, for instance, that explains everything nicely. That said, I have some
details from a translation of an article in the Russian journal Science and Life.
According to that piece, Lukyanov was drawing from earlier analog computing efforts in the USSR.
The big connecting fiber here would have been the work of the physicist Mikhail Kirpichev,
and I'm sorry for all the mispronunciations. Kirpichev developed a method that Lukyanov called
the hydraulic analogy. Or, at least, Kirpichev laid
the groundwork for this analogy. So, what exactly is the hydraulic analogy? This is going to be based off my physics degree plus my somewhat tight-ish grasp of
Lukyanov's machine. Let me start by introducing you to a cool little equation. PV equals nRT.
This is a nice equation that chemists seem to really love and is also applicable to any fluid dynamics work. It states that the product of pressure and
volume is proportional to the number of moles of a fluid times that fluid's temperature.
Moles, in this case, are just a way to measure the amount of a substance, and the r here is a
constant that makes the numbers work out. There are a few ways to read
this depending on what you're interested in, and it only holds for certain substances that don't
care about compression, but we're not going to talk about that. That's a go-take-a-chemistry-class
kind of discussion I don't want to have. Here's the reading that matters for concrete and water.
If pressure and number of moles are kept constant,
then a change in volume is directly proportional to a change in temperature.
If the substance's volume goes up,
then its temperature will go up a proportional amount.
The opposite also holds. Here's how this
becomes useful. Using this equation, you can turn an equation that cares about temperature
into one that cares about volume. In other words, you can make a thermodynamic equation into a
volume-o-dynamic equation, or whatever you'd call that. I guess hydrodynamic? Anyway,
you can turn a model that talks about temperature into one that talks about volume. That's the gist of this equation. What makes the Lukyanov paper a little hard to parse
is that he gives most equations in terms of a variable h.
That represents the height of water in a tube, or the height of a column of water.
That works out to something that approximates volume. Just put a pi and a size of the tube
somewhere near it. What you end up with is a set of equations that relates a change in the height of water in a
column to a change of temperature in another system. It's a neat little math trick, and if
you like this kind of stuff, then might I recommend taking a few physics courses?
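For those who want the trick spelled out, here's my algebra, taking the episode's PV = nRT framing at face value (this is not Lukyanov's own notation):

```latex
% Hold pressure P and mole count n constant:
PV = nRT \;\Rightarrow\; \Delta V = \frac{nR}{P}\,\Delta T

% For a vertical tube of cross-sectional area A, volume is V = Ah, so:
A\,\Delta h = \frac{nR}{P}\,\Delta T
\;\Rightarrow\;
\Delta h \propto \Delta T
```

A change in water height in a tube tracks a change in temperature in the system you're modeling, which is the whole equivalence in one line.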
This is the theoretical foundation for Lukyanov's water integrator. It follows from there that the central element of this machine
would be vertical tubes filled with what else but water. Technically speaking, the translated paper
doesn't say explicitly that it's filled with water, so it could be something else, but come on,
it's probably water. So we're in a similar realm to MONIAC, but there are notable differences in the principle
of operation.
Both MONIAC and the water integrator represent numbers as the height of water.
Call it volume to be technically accurate.
The water integrator is grounded in more generalized mathematics, while MONIAC was built for a very small set
of applications. I'm going to be focusing on the one-dimensional water integrator, since
as complexity goes up, the machine gets much more difficult to understand. Lukyanov did build
two- and three-dimensional machines, but those are a little above my pay grade for right now.
Besides, the 1D solution is just easier to explain in audio form.
For this to make sense, I have to touch on one more math thing.
Sorry, but this is going to be the last of the math.
The class of equations that Lukyanov was dealing with are called differential equations, specifically partial
differential equations, that is to say, equations that include derivatives. These express the change
of certain values over the change of another value, aka the slope of a curve. Think of something like
the change in volume over the change in time,
for instance. Lukyanov wanted to solve these, which means an integral, hence the name water
integrator. The device was physically integrating equations. Now, the integral is really just a way
to calculate an area under a curve. There are calculus proofs that show how the integral is
how you reverse a derivative, but I can leave that as an exercise for the reader, so to speak.
To numerically solve an integral, as in the way computers handle it, you have to break a curve
down into chunks. If you're integrating against the variable x, for instance,
then you might call these chunks delta x, as in, a little change of x. It'll give you some cool
street cred if you do so. For each of these chunks, you then calculate the area under the curve in
that region. Then you add up each chunk to get the total area. Lukyanov's integrator does this in a very visceral manner.
And I just dig it.
Each delta x is represented by a single glass tube.
As water enters the integrator, these tubes fill up to the proper height.
Taking the sum of those heights gives the total integral.
If you just look at the height of water in each of these tubes, which are set side by side, then you can actually see the
slope of the curve. You can see a nice little curve, any shape you want. It's really neat to
see diagrams of this. Now, the actual curve and the value under it are two useful data products right there.
And remember that height of water is equivalent to temperature, so this is actually solving a thermodynamics problem just with water.
I can't stress how cool that is to me.
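Here's a minimal sketch of that idea in code, under the simplest reading: one tube per delta-x chunk, the water height as the curve's value there, and the integral as the sum of heights times the chunk width. The example function and all the numbers are mine.

```python
# Riemann-sum sketch of the water integrator's trick: each glass tube is one
# delta-x chunk, its water height is the curve's value there, and summing
# height * width over every tube approximates the area under the curve.

def water_integrate(f, a, b, n_tubes):
    dx = (b - a) / n_tubes
    heights = [f(a + (i + 0.5) * dx) for i in range(n_tubes)]  # midpoint rule
    return heights, sum(h * dx for h in heights)

# Invented example: f(x) = x^2 on [0, 1]; the exact integral is 1/3.
heights, total = water_integrate(lambda x: x * x, 0.0, 1.0, n_tubes=10)
print(heights)  # the "curve" you'd see across the row of tubes
print(total)    # approaches 1/3 as you add more tubes
```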
Of course, the actual machine is more complicated than just filling some tubes.
Just like with MONIAC, there are contrivances used to fill in the details of the model.
One interesting aspect of the integrator is its system of floats.
This is a system of adjustable rubber tubes and small floating outlets
that help keep the actual readout tubes from emptying past a certain point.
This allows an operator to set a lower bound for each delta x. The integrator also has a way to
model resistances. I'm not talking electrical resistances, but temperature resistances.
Each layer of concrete in Lukyanov's model will have some resistance to the transfer of heat energy.
Thanks to PV equals NRT, this can be modeled as hydraulic resistance.
In other words, little tubes that make it harder for water to flow.
To set up a model properly, small glass tubes had to be inserted between each of the vertical readout tubes.
These came with different levels of hydraulic resistance that had to be matched up carefully
with the mathematical model. The final bit of fun are the so-called prismatic inserts.
The name is kind of cooler than the actual device. These are pieces of metal that are curved on one side and tapered on the other.
These prismatic inserts are, well, inserted into the vertical readout tubes.
This allows an operator to adjust the cross-sectional area of the tube to correct for temperature change.
Remember that temperature must remain constant for the whole equivalency thing to hold.
From this, I think it's clear to see that the water integrator was a very serious scientific
tool. As for the actual use of the integrator, well, that gets a little tricky. This is where
I hit that sourcing barrier again. I'm dealing with scant translations here. There are some sources,
but I don't trust them all that much. I've seen it mentioned that versions of Lukyanov's machine
were used up to the 1980s, but I don't have primary documentation backing up that claim.
The actual truth of the matter is probably in some untranslated scientific papers.
That said, I can believe that the water integrator would have seen use into the near-modern era. Many pre-digital
technologies were in use up into the 1980s, or 1970s more conservatively, only dying out with
the rise of commodity microcomputers. It helps that the water integrator was doing something slightly different from mainframes and other big iron machines.
Real silicon and electron computers, at least ones that existed prior to the rise of PCs, filled a different niche.
Lukyanov's integrator was operating like a really fancy desktop calculator,
while big computers were better for more complicated research or operational tasks.
We can also factor in resource contention here.
Some vacuum tube machines were used well into the Silicon era because there simply weren't enough computers around.
Everyone needed computer time, but resources just couldn't allow for that.
If you were working in a Soviet lab and just needed to run some integrals,
then it may have been hard to get access to a fully-fledged computer. However, there was a
more hydraulic alternative kicking around. I guess this brings us up to a lead that I've kind of been keeping buried.
Why is there a translation of Lukyanov's article anyway?
I mentioned that the translation was part of an Army Corps of Engineers contract, but there's a bigger story here.
You see, the Corps of Engineers wanted a hydraulic computer for their own possibly shady, or maybe just cool, purposes. This is part of a
larger series of projects called, quote, Frost Investigations. It's a good name for a coded
operation, perhaps. Anyway, the main goal was to model how the ground freezes and thaws.
Turns out that that model is very, very close to the concrete cooling model
that Lukyanov was cracking back in the 1930s.
But the rabbit hole gets deeper and stranger.
To quote from one of these Frost investigations,
Apparently hydraulic analog computers originated almost simultaneously in the United States and in Russia in 1935 and 1936. Moore, in the United States, used his hydraulic apparatus to study the flow of heat
through building walls. At the same time, Lukyanov built a hydraulic apparatus in the Russian Institute of Roads and Construction, which he used to study a variety of problems, including
freezing in soil. End quote. The Moore in question here was Professor Arthur D. Moore of the University
of Michigan in Ann Arbor. His machine operated on the same principle as Lukyanov's, but
had a slightly more complicated construction. His readout tubes could tilt in order to change
their cross-sectional area. So, once again, water-based computing isn't some one-off invention.
I gotta add in one more weird wrinkle. This is a part of Lukyanov's report that, when I read it, kinda shocked me.
It kinda messed me up a little bit.
It's easy to fall into this mind trap where you see analog computers as evolutionarily
separate from digital computers.
I mean, what does an IBM PC have to do with a pile of wet tubes?
Admittedly, not a lot, but there is a through line.
Lukyanov's translated article that I've been working off was published in 1939.
The history of the concept of Turing completeness is a little complicated.
There are precursor theories that are similar.
There are earlier machines that
exhibited similar properties. Turing starts describing so-called Turing machines in the
late 30s into the 40s. That's where the idea of Turing completeness really comes from,
very broadly speaking. So it's possible that Lukyanov read about Turing, but I think that's somewhat unlikely.
Turing was a math student and Lukyanov's an engineer. They're related fields, but they're
different. Different journals and maybe even a language barrier. Keep this in mind as I read
this quote from Lukyanov's 1939 paper. Quote, the author believes that, in some cases,
the hydraulic method can even be extended to problems beyond the limit of equations.
Undoubtedly, the principles of this method permit a very wide application to problems, but
in order to realize these possibilities, it is imperative to be able
to create a machine which can be applied to several kinds of problems instead of to just one.
The water integrator was just modeling a single equation. It could be reconfigured,
but fundamentally it was always solving a differential equation. It was always running an integral.
It was tied to this world of very concrete mathematics. To get past that, you need something more algorithmic. A machine with the ability to follow instructions, adapt, and solve multiple
types of problems. You need some generalized machine. To crib from only slightly later works,
you need something like the universal Turing machine, a device that follows simple instructions,
operates on data, and can solve any type of problem. Lukyanov even goes on to describe how
his water integrator could break past equations if it could dynamically change
while it was crunching numbers. That, my friends, sounds an awful lot like a Turing complete machine.
In this small passage, we can see that even back in the analog days, researchers wanted more.
Better still, they basically knew what they wanted. It wasn't just
some, you know, ethereal, oh, I wish this machine was better. No, they wanted computers very
specifically. Lukianov is stating, quite plainly, that it would be much better to build a machine
that can go beyond math. In other words, a machine you can program, a fully-fledged computer.
For me, that goes a long way towards establishing some ideological line between a pile of wet tubes and an IBM PC.
Both machines are striving towards the same dream.
One just happens to be further along the path.
Now, if you'll permit me, I'm going to make the story even more strange. This one is to go further beyond, so to speak. So far, we've been talking
about analog computers that existed prior to digital machines, or just around the advent of recognizable computers.
What about after the digital boom?
What about, say, after the transistor?
Surely the fluid-based computer was done once transistors arrived, right?
Let me introduce you to a little machine called FLODAC, designed by none other than Univac themselves. The year? 1964. Stranger still, FLODAC is described as a binary computer. That's right,
binary doesn't have to be electric. Now, just a quick disclaimer. FLODAC isn't strictly a
water-based machine. The final model ran using compressed air. That said, all the theory that
makes FLODAC possible would work with any fluid. So what I'm getting at here is that water binary computers are, in fact, very possible. Everything we know
about FLODAC is relegated to a single paper. This article was written by three Univac employees,
R. Gluskin, M. Jacoby, and T. Reeder. As far as I can tell, this is their only publication, which is a bit strange. The paper in question, FLODAC,
a pure fluid digital computer, was published in the American Federation of Information Processing
Societies' proceedings. Big mouthful, but that means that the computer was either shown off at
a conference or there was at least a talk about the computer.
Someone knew about FLODAC. Someone had to. There just isn't much chatter outside of this one paper.
Source complaints aside (this is Advent of Computing, after all), the FLODAC paper offers a fascinating alternative to boring electronic digital machines.
Better still, the authors actually present a very persuasive argument.
The core of any electronic digital computer is the triode.
That's a specific type of amplifying component that takes two inputs and gives a single output. Back in the
day, these were vacuum tubes. Those were replaced with transistors, and those were eventually
replaced with doped silicon. This means that, at least in theory, anything that can be used as an
amplifier can be used to create logic elements. From there, you get what I like to call the big computing proof backdoor.
If you can create certain types of logic gates, then you can plug your work into the larger body
of computer science proofs. Usually, you need either a NOT-AND gate, aka NAND, or a NOT-OR gate, aka NOR. You can build an entire computer using
only NAND gates or only NOR gates, and like I mentioned, there's a lot of proofs showing how
that's done. So if you can get up to that step, you win. You're done.
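Just to make that backdoor concrete, here's the generic textbook construction in Python, not Univac's actual circuit: with NOR alone you can build NOT, OR, and AND, and from those whatever logic you like.

```python
# Everything from NOR, the gate FLODAC's designers settled on. This is the
# standard textbook construction, not anything from the FLODAC paper itself.

def NOR(a: int, b: int) -> int:
    return int(not (a or b))

def NOT(a: int) -> int:
    return NOR(a, a)            # NOR a signal with itself

def OR(a: int, b: int) -> int:
    return NOT(NOR(a, b))       # invert a NOR

def AND(a: int, b: int) -> int:
    return NOR(NOT(a), NOT(b))  # De Morgan: a AND b = NOT(NOT a OR NOT b)

# A one-bit half adder, the seed of a binary adder, out of nothing but NORs:
def half_adder(a: int, b: int):
    carry = AND(a, b)
    total = AND(OR(a, b), NOT(carry))  # XOR via OR, AND, NOT
    return total, carry

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```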
There are quite a few ways to construct basic triodes, but most of the well-known approaches are electronic. It turns out you can construct an analogous component for fluids,
using some careful geometry and basically nothing else. These are called fluid amplifiers,
or fluidic amplifiers. In principle, they're really similar to electronic amplifiers, but, you know, they don't really care about electrons.
So a normal electronic amplifier works kind of like a valve.
You have some large input current that is then switched by the signal you want to amplify.
This allows you to control a larger current with a comparatively weak signal. These are called triodes because they have three
legs, the input, the trigger or signal, and the output. The output leg is where you get the
amplified signal. Fluidic amplifiers do the same thing, but just for fluids. You have an input
port that takes a strong current, like a jet of water. Then you have a trigger or gate port that
accepts a weaker signal. You then get an amplified signal out your output port. One of the neat
things about these fluid amplifiers is that they have no moving parts.
The input current is physically deflected by the incoming signal, and both of those
signals, both of those currents, are just fluids.
Usually this is done by placing the signal jet perpendicular to the input jet, and then
carefully shaping the output nozzle.
That redirection is called momentum transfer and allows a relatively weak stream of fluid to redirect a much stronger stream.
There is one other fun effect that we can work with, the wall effect. A fluid stream can get
stuck, for lack of a better word, on a nearby wall under certain circumstances.
This all depends on geometry, but in general, you can trap a stream on a wall. The stream can then
be unstuck if it's given a push in the correct direction. That sounds a lot like memory.
The FLODAC paper starts from this point, fluidic amplifiers and a few basic principles,
and builds from there. The first stop is actually an inverter, a NOT gate. This is a fluid component
that will invert inputs. Once again, this is all accomplished using basic geometry,
just like with the basic amplifier. The actual NOT gate is a channel cut
in a block of some material. The final component operates just like a normal electronic NOT gate.
Putting in a 1 will output a 0 and vice versa. It's just that in this case, a 1 is represented
as a pulse of fluid and a 0 as the absence of fluid.
From there, the paper builds up a few more gates before we hit the big one, the NOR gate.
The geometry here is a little funky.
It's a weird asymmetrical Y with a little V stuck to the side.
I'll link to the actual article so you can see it yourself, since I can't really
describe it very well. Anyway, we're up to the NOR gate. Thus, we've reached the magical proof
back door. We're approaching a computer. And here's where Univac really delivers.
What follows is what I can only describe as a digital fever dream. FLODAC was never meant as a very
powerful computer, let's just say. It's a tech demo, a proof of concept. A model was actually
built, and that's what the paper describes. FLODAC was composed of around 200 fluidic
NOR gates. As I've mentioned before, the working fluid wasn't water, but instead compressed
air. In theory, you can use any fluid, air just turns out to be the most convenient in this case.
Each NOR gate was injection-molded plastic. The physical implementation is a little odd, but
once we get away from that, FLODAC is remarkably mundane, if small, for a computer.
Let me lay out the machine a little just to drive that point home.
FLODAC has only four instructions.
Transfer, add, conditional jump, and halt.
Those instructions were encoded as binary numbers with arguments also as binary numbers.
Data was represented in binary.
Immediate working data was stored in an accumulator register. More data, as well as the
actual program, was stored in random access memory. The actual memory bank was tiny. FLODAC only had
four words of storage available, but hey, it's a research machine, so it can be forgiven.
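For flavor, here's a toy interpreter for a FLODAC-shaped machine. The four instructions, the accumulator, and the four-word memory match the description above; the opcode numbering and the sample program are entirely my invention.

```python
# Toy interpreter for a FLODAC-shaped machine: four instructions, one
# accumulator, and four words of memory holding both program and data.
# The instruction set is from the paper; this encoding is invented.

TRANSFER, ADD, JUMP_IF_ZERO, HALT = 0, 1, 2, 3

def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]
        if op == TRANSFER:        # load the accumulator from a memory word
            acc = memory[arg][1]
        elif op == ADD:           # add a memory word into the accumulator
            acc += memory[arg][1]
        elif op == JUMP_IF_ZERO:  # the conditional branch, the Turing bit
            if acc == 0:
                pc = arg
                continue
        elif op == HALT:
            return acc
        pc += 1

# Invented demo: load word 3, add word 3 again, halt. The data (21) rides
# in the last word, since we only have four words to play with.
program = [(TRANSFER, 3), (ADD, 3), (HALT, 0), (HALT, 21)]
print(run(program))  # 42
```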
This is, by all measures, a computer.
The conditional jump even means that it fits the whole Turing definition.
Now, that's all very run-of-the-mill.
We have a digital machine that executes instructions and encodes data as binary numbers.
Getting to the inputs and outputs drops us back into the more fluid territory.
The input panel is probably my favorite part of the whole machine.
Instead of switches, FLODAC uses a series of bleed holes.
These are just holes that air normally rushes out of, bleeding off pressure. To actuate these holes, you block them with your fingertips. That causes pressure to build up and a flip is
flopped somewhere. This means that to input data to FLODAC, you play it just like a recorder or
a penny whistle with a series of holes that you seal with your fingers.
There's something whimsical about that that I just love.
Output is where we reach the only moving part of FLODAC proper.
Data is displayed via a series of clear tubes with little plastic balls inside.
If there's air flowing into the tube, aka a one, then the ball's pushed up. Otherwise,
the ball stays down. So for the entire computer, the only actual moving parts are a few small
indicator beads. That's pretty slick. Now, that's the computer itself and the theory.
It's a neat machine, but it leaves us with one big question. Why? Why exactly build
a fluidic computer? The authors over at Univac actually have a four-part answer that I find
fascinating. To quote, reliability, environmental immunity, low cost, and absence of RF radiation.
End quote. The low cost part is kind of the least interesting, but I think it's worth addressing. The argument goes that since each logic element
is just injection molded, a computer like FLODAC could be made really cheaply. In general, you
could use any material to build one of these fluid machines. You just need to mold specific shapes.
Or engrave.
You can use any manufacturing process that can make the properly shaped channels.
I don't think this rationale has really aged super well.
FLODAC was designed back in the 60s, when a computer meant something very different
from a modern computer.
The same year the FLODAC paper was published, 1964 for those keeping track,
IBM unveiled their System/360 series of machines.
These computers cost hundreds of thousands of dollars.
With inflation, some models would have been past the million-dollar mark.
The idea of a computer built from cheap plastic parts would have been really exciting. But nowadays, we have the microprocessor, which
fits the bill for cheap, reliable computing without requiring some new operating principles.
Reliability and environmental immunity offer more interesting justifications.
In theory, a fluidic computer could operate indefinitely.
I really want to underline that indefinitely part.
This is thanks to the fact that you can use any combination of material and fluid to build these machines.
Just pick a hard material that can't be eroded by the working fluid and
you're done. You have an unkillable machine. The only failure point would be a compressor or some
kind of pump needed to circulate fluid, but clever design can solve that. Something like a pulser
pump can move water using gravity alone, and it has no moving parts. So in theory, the
entire device could be static. No moving parts at all. As far as environmental immunity, this
stems from that same design flexibility. A normal computer is limited to conductors and electrons.
Sure, you can choose exotic conductors, maybe get some germanium up
in there, but you fundamentally have to be pushing electrons around. Conductors tend to interact with
things like heat or acidic compounds. Copper, for instance, will corrode and melt relatively easily.
Metals will melt at high temperatures. Metals also lose conductivity at higher temperatures.
Temperature cycling between hot and cold can cause cracking in certain materials.
These are definitely edge issues, but they do matter for things like space-bound computers.
A satellite could be subjected to extreme cold and extreme hot, swings between those temperatures plus the vacuum
of space. For that reason, computers used in space are often specifically designed for that use case,
or heavily protected. But with fluidic computers, you get to pick all materials involved. You can
make a machine with tungsten elements that uses liquid gallium as a working fluid.
You could use some type of heat differential pump to move around the liquid metal.
The machine could withstand temperatures in excess of 2,000 degrees centigrade.
That machine could, in theory, survive being submerged in magma.
Good luck dunking an IBM System/360 into a volcano.
That won't last very long.
Fluidic computers are also immune to another silicon killer,
radiofrequency radiation.
That's another huge problem for space-based computers.
Radiation plays havoc with traditional machines.
More specifically,
radiation can induce charges in conductors. It can also damage certain materials. Fluidic computers, depending on the materials used, are actually impervious to radiation. These machines
don't rely on electrons like electric computers do. Radiation? Who cares? Another small note here is that
traditional electric computers also create RF radiation. It's just a product of moving currents
down wires and through junctions. Since fluidic computers don't deal in electrons, they don't
produce radiation. Once again, this helps make fluidic computers almost
inert objects. This really captures a certain sci-fi feeling I get about fluidic computers just
in general. FLODAC fits into something like the computer science uncanny valley. It's binary,
functions like a normal computer, but does all its work
using compressed air and pipes. It stores numbers as a gentle breeze and runs calculations using a
series of tubes. It makes the mind reel. One of the reasons that I'm so taken with FLODAC comes down to the possibility of extraterrestrial
life.
And hey, I see you out there, stick with me, I swear I'm going somewhere with this.
I've been a bit of a space nerd for most of my life.
I mean, I do have a degree in astrophysics.
During undergrad I also interned at the SETI Institute, so I do have
some inside baseball knowledge about the search for ETs. It's kind of an underlying passion of
mine. As we're reaching the end here, I want to introduce a bit of a spooky idea. It's called
the Fermi Paradox. This paradox is relatively simple to lay out.
It's obvious that life must exist outside of Earth.
There are various ways we can reach that conclusion,
most of which come down to the fact that the building blocks for life
exist in abundance all over the universe.
Add in the age of the universe,
and we arrive at the fact that life must have also evolved
elsewhere in the cosmos. That's kind of just a fact. If you disagree, then it's the podcast
stance that you're kind of wrong. So if life exists out there in the inky black, then why haven't
we made contact? Why haven't we seen concrete evidence of extraterrestrial life? That's the paradox.
There are a number of proposed solutions, ranging from aliens just don't like us
all the way up to most worlds wipe themselves out via war before they can travel the stars.
One of the factors that SETI stresses is that alien lifeforms probably communicate in a way
that would be unrecognizable to us Earth folk.
I personally like that idea, that there are alien signals out there, we just can't make sense of
them yet. Much of the search for aliens comes with assumptions about their environment and
technology. A habitable planet should be something like Earth. It should have liquid water, it
shouldn't be bathed in radiation,
and the civilization there should emit some kind of radio signals. Any civilization advanced enough
to, say, build computers, should be sending data around as RF radiation. Sans that, there will at
least be some electronic signature, right? Some techno-signature, as it's called. Well, what if
humans are the odd ones out? What if we just happen to be more obsessed with electrons than
other lifeforms? I'm sure someone could write a whole book about how our situation on Earth has
informed this obsession. Maybe it's the elements present on the planet, or the fact that our eyes
see in a particular part of
the spectrum, or maybe even the heat of our local star. I bring this up to point out that maybe,
just maybe, a world exists out there where the local apex predator doesn't really care about
electrons. Maybe they're more interested in the flow of fluids. So hey, let's just pick a planet
and flesh out some details. Let's go with Venus. It's hot
and has a thick atmosphere, so that blocks a lot of light and causes intense pressure at the planet's
surface. The Venusians never evolved eyes like ours. There just isn't much to see, and the arid
atmosphere would dry them out pretty quick. The population lives underground in grand cities.
They had an industrial revolution, but it didn't end in electronic power, since they didn't need
lights for their underground homes. Maybe at one point they even experimented with gear-based
computers, but those fell out of fashion once binary became the way of the future.
The Venusian Industrial Revolution required a lot
of math, so eventually a grand calculating machine was devised. It used convenient materials,
pressurized carbon dioxide and volcanic glass. The glass channels were, of course,
made via injection molding. It's all built on an assembly line, after all.
These alien fluidic computers could
work off the same principles of binary logic we use on Earth, but use no conductors. No
semiconductors, either. They wouldn't produce any recognizable signals. No signature that we could
glean. Let's say millions of years pass by. Society on Venus has crumbled, or maybe they've left.
Earthlings finally land on the planet.
Any metallic machines have long since corroded away, or at least ceased to function.
The only record that's left is stored in the Grand Venusian Computer,
a giant cavern filled with intricately molded volcanic glass.
If we came face to face with this kind of machine, how many computer nerds would think, oh yes, I've seen this before?
We definitely couldn't detect its output from space, even as the computer was still crunching
away. My guess is it would get written off as some kind of religious site. Alright, that does it for our very literal exploration of the flow of data.
I think we've charted the breadth of fluid-based computers in this episode.
From the wishy-washy demos of MONIAC to the brass tacks of the water integrator,
and then on to the uncanny valley that is FLODAC.
At the end here, there are a few small details that I want
to highlight. I think it's important for us to all remember that electronic digital computers
offer just one mode of computing. A machine doesn't have to be built that way. It just so happens that
the current state of the art uses binary logic represented as currents on wires and
semiconductor junctions. Computing has seen a lot of variety over time, and I think it will see a
lot of variety in the future. The current regime, the microprocessor world, only came about in the
mid-70s. That's barely 50 years old. Who's to say what the next 50 years will bring? Maybe it won't be fluidics,
but we should keep our eyes peeled for left-field solutions. This sort of connects up to my big
philosophical question this episode. Was the creation of the computer inevitable? Were we
somehow predestined or predisposed to build automatic number crunchers? I mean, just look at the sheer number and variety
of computing machines that we've made over time. Fluidic computers offer a great example of the
lengths people will go to in order to automate math. In this instance, I think the water
integrator is the best possible example. It shows how something as mundane as tubes of water can be used to solve
physics equations.
If you'll allow me the indulgence, I'm going to get a little out there with it. The fact that we can model physics with devices like the water integrator points to some
kind of satisfying symmetry in the world. As soon as we figured out physics, we started looking for ways to exploit
it. On one end of the spectrum, we have binary computers, a very abstract way to model
physical phenomena. On the other end, we have analog machines like Lukyanov's water integrator,
a very direct approach to calculating and modeling. For me, it seems like as soon as physics became
a consideration, we as a species were on a crash course for computing. Just the ability to
manipulate and swap around equations that describe the world, well, that can really start you
thinking. I think it can lead us earthlings to some very interesting places.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of Computing's past.
And hey, if you like the show, there are a few ways you can support it.
If you know someone else who'd be interested in the history of computing,
then why not take a minute to share the show with them?
You can also rate and review on Apple Podcasts.
If you want to be a super fan,
then you can support the show directly through Advent of Computing merch or by signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction of the show,
and bonus content. You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet.
I'm at AdventOfComp on Twitter.
And as always, have a great rest of your day.