Advent of Computing - Episode 75 - A Cybernetic Future
Episode Date: February 7, 2022

Cybernetics is broadly defined as the study of control and communications, with a special emphasis on feedback-based systems. Put another way: cybernetics is the study of the flow of data. Predating computer science by decades, cybernetics offers up an interesting view of computing. But of course, there's a lot more to the picture than just computers. This episode we are looking at Project Cybersyn, an attempt to automate Chile's economy via cybernetics. To talk about this specific case we are going to dive deep into the history of cybernetics itself.

Selected Sources:

https://sci-hub.se/10.1086/286788 - Behavior, Purpose, and Teleology
https://sci-hub.se/10.1057/jors.1984.2 - The Viable System Model, by Beer
https://web.archive.org/web/20181222110043/http://ada.evergreen.edu/~arunc/texts/cybernetics/Platform/platform.pdf - Beer on Cybersyn
https://web.archive.org/web/20200619033457/https://homes.luddy.indiana.edu/edenm/EdenMedinaJLASAugust2006.pdf - Designing Freedom, Regulating a Nation, by Eden Medina
Transcript
On September 11th, 1973, Chile's democratically elected government was toppled in a coup.
A faction of the Chilean military, led by Augusto Pinochet and backed by the CIA, violently seized power.
Why was the U.S. involved in this coup?
Well, there's a simple answer.
The deposed president, Salvador Allende, had been attempting to transition Chile into socialism. Here we see
yet another battlefield in the Cold War. Immediately following the coup, Pinochet's
regime set about tightening their control on the country and removing or reshaping the mechanisms
of power used by Allende. During this process, in the days or months following the coup,
someone from the new regime must have found themselves face-to-face with a pretty strange sight.
In a building in Santiago, probably near the Capitol building, was a hexagonal chamber.
It was populated with swiveling chairs, each with control panels built into the armrest.
The room's walls were studded with displays, readouts, lights,
and projector screens. Somewhere nearby, a mainframe hummed away, feeding data to this
cadre of readouts. This was the control center for Project Cybersyn, the technologic heart of
Allende's new socialist economy. From this futuristic room, the entire nationalized part of Chile's economy
could be monitored, profiled, and adjusted. For around two years, it had shown some promise,
but Pinochet and the CIA weren't really interested in some new kind of planned and
automated socialist economy. Cybersyn would be a slightly later victim of that same September coup.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 75,
A Cybernetic Future. This episode, mark my words, is going to be pretty
light on the actual technical details. I think I've kind of burned myself out a little bit with
the last month of research, so we're taking a break from the exacting brass tacks of language
design and memory implementation. This episode's going to be more theory than anything. So this time,
we're taking a look at a topic that kind of lurked behind swaths of computer history.
That's the study of cybernetics. Now, I don't know if I'd go so far as to say that cybernetics
is a misunderstood field. Rather, I think it's just little understood. Most people probably don't
even think about it. The easiest summary is just to say that cybernetics was a competing discipline
to computer science. Computer science as a rigorous field won out, and cybernetics fell
by the wayside as an outmoded way of viewing machines. That's a nice little explanation. It even makes us feel all fancy and superior with
our CS textbooks in hand. However, I don't think that's a good explanation. It's not even all that
correct. Cybernetics, very broadly speaking, is often described as the study of regulatory systems.
That's really broad, but it's going to have to do to start with.
The practitioners of this art, sometimes called cyberneticists, come from a very diverse set of
backgrounds. The field is seen as interdisciplinary by nature, which makes it all the more difficult
to define. Cybernetics doesn't necessarily have to do with computers, but computers became a very
important area of study within cybernetics as the 1940s dragged on.
What makes this older field of study interesting, at least to me, is that last part.
Cybernetics just plain has a different relationship to computing than computer science does.
It's like looking at the world with a different pair of glasses.
Some things look the same, others may be slightly warped.
Maybe you'll even see totally new sights off on the horizon.
One of the more interesting new sights here comes in the form of politics.
Computer science can be contentious, but it's more in
terms of nerd fight contention. Folks often argue about whether CS is an actual hard science, if it's applied
math, or if it should be looked at as a relative of engineering. Sure, that's very broadly speaking
political, but at a pretty low-stakes kind of level.
Cybernetics, on the other hand, that's another story.
There's this interesting back-and-forth relationship between cybernetics and communism or socialism.
At certain points, the study of cybernetics was actually outlawed in the Soviet Union.
At other times, cybernetics was seen as a way to reshape the Soviet economy.
I covered some of this way back in the early days of the podcast when I talked about attempts to
network the USSR. Some of those projects drew heavily from cybernetics. And then we have Project
Cybersyn, an attempt by the Chilean government to harness cybernetics to
manage their new economy. If the whole USSR-cybernetics connection was obscure,
then Cybersyn exists on a whole other level of esoterica. That said, I think that investigating
Cybersyn gives us a unique opportunity. The system was only in place for a short time, around two years or 18 months depending on how you measure it. That time frame includes its design,
implementation, and a little bit of its use. It also had a more limited scope than network
systems in the USSR. What we end up with is a very well-defined case study for cybernetics.
It's bite-sized, but I think that makes it a lot easier to discuss in total. So let me lay down our roadmap for today.
Call it our very own one-hour plan. We're going to be taking a look at cybernetics with a very
specific eye to where it differs from computer science. I think that's where a lot of interesting discussion lies.
Should we be looking at cybernetics as a dead end, or are there some lessons that we can learn?
Then we'll examine how these broad principles of cybernetics were applied in Project Cybersyn,
looking specifically for the pros and cons of the cybernetic approach.
To kick things off, we need a more stringent
working definition for cybernetics. Now, this can be a little bit of a slippery target because of
the interdisciplinary aspects of the field. Interdisciplinary studies, that is, research
that draws from multiple formal disciplines, often run into a unique set of problems.
Nebulous-sounding goals are one of those issues,
but we'll see more issues as we continue. One of the early foundational texts for the field
is just called Cybernetics. It was written by Norbert Wiener in 1948, and I think it serves
as a good starting point. Also, just as an aside, this specific text was banned in the USSR
for some number of years. Anyway, Wiener describes the field like this, quote,
We have decided to call the entire field of control and communications theory, whether in
machine or in the animal, by the name cybernetics, which is from the Greek kybernetes, or steersman.
Now, poor pronunciation aside, this gives us some firm footing to jump off of. Fundamentally,
we're looking at an area of research concerned with command and control, or put another way,
the flow of information.
Cybernetics is also a relatively old field.
Well, kinda.
This is gonna be one of those episodes where we have a lot of caveats.
Wiener explains that the field wasn't fully formed and named until around 1947, but examples of related research exist going back into antiquity. One of the first
principles of cybernetics is feedback, the idea that a system can self-regulate through some kind
of self-monitoring. The connection here might seem a little tenuous at first. Look at it this way.
The flow of information is self-contained. The system, whether that be an animal or some type of machine,
is just keeping track of information about itself and using that information to make decisions.
An example of this is the mechanical governor.
This is a pretty simple and really, I think, kind of nifty device that can regulate the speed of a motor or engine.
This is usually accomplished by a set of hinged swinging weights connected to a switch or valve.
As the weights spin faster, they'll be drawn outward by inertia.
As they swing out to a certain point, corresponding to some maximum speed,
a switch is triggered.
The switch controls the inputs to the engine, so in the case of a steam engine, when the governor trips, the flow of steam would be cut off or reduced.
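If it helps to see that loop written out, here's a minimal sketch in Python. The toy physics and all the numbers are invented for illustration; the point is just the shape of the feedback loop: measure, compare, act, repeat.

```python
# A minimal sketch of the governor's feedback loop. The numbers and the
# linear "physics" are made up; only the feedback structure matters.

def governor_step(speed: float, max_speed: float = 100.0) -> bool:
    """Return the new valve state based only on the measured speed."""
    if speed >= max_speed:
        return False  # weights swing out, the switch trips, steam is cut
    return True       # below the limit, steam flows freely

speed = 0.0
steam_open = True
for tick in range(10):
    # Toy dynamics: steam accelerates the shaft, friction slows it down.
    speed += 15.0 if steam_open else -10.0
    steam_open = governor_step(speed)
    print(f"tick={tick} speed={speed:.0f} steam={'on' if steam_open else 'off'}")
```

Run it and the speed climbs, overshoots the limit, gets reined back in, and settles into a band around the maximum. That's feedback doing its job.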
Now, that's a pretty rudimentary example, but we can see all of the pieces that cyberneticists
like. There's a flow of data, in this case the rotational velocity of a shaft. That data is
monitored by a sensor, even if the sensor is
just some weights in a switch. Then decisions are made as a result of that monitoring. Should the
flow of steam continue or should it be adjusted? Like I said, this is a simple example, but it
encapsulates what cybernetics is all about. These kinds of feedback systems have technically always existed. We can find
them in nature, and nature, believe it or not, is pretty old. And here we run back into that pesky
interdisciplinary issue. You see, if you think about it, many natural phenomena can be described as self-regulating.
Atoms self-regulate their energy states by releasing radiation if too much energy enters the system.
That's feedback.
Most of the stuff we can see is made up of atoms,
so therefore, the entire universe could be explained in terms of a self-regulating process, right?
Maybe you see where this is going.
I've noticed this tendency to retroactively claim earlier research as part of cybernetics in general.
And, sure, why not?
That shows up in a lot of fields. The work being done on thinking machines was easily brought under the umbrella of artificial intelligence once a better defined field formed.
Programs like Logic Theorist and Turochamp were written before the term AI was coined, but we can all agree those are examples of artificial intelligence.
With cybernetics, things are a little different.
The interdisciplinary nature of the field means that you can claim a whole lot of earlier research was actually cybernetics.
Galileo? Orbits kind of self-regulate, so he was obviously a cyberneticist.
Plato? Well, that's an easy one.
Political systems are all about the flow of information, so perhaps he was the first real cyberneticist. Now, of course, those are kind of tongue-in-cheek examples. My point is this.
Cybernetics can seem daunting and inaccessible because of how broad it is. I can't tell if this
is something that crept into the field over time, or if it was there since the
40s, but it's something that we have to contend with. That said, Wiener's text plays its cards
pretty close to its chest, so while the field will spread out quite a lot, Wiener at least
presents a more curated view. According to him, one of the earliest real examples of cybernetics was a paper written by
James Maxwell in 1868. That name may sound familiar to some. This is the same Maxwell of
Maxwell's equations that govern electromagnetism. Well, a few years after those more impactful
contributions, Maxwell published an article simply titled On Governors. See, there is some method to
my madness here. What makes On Governors a reasonable first entry in the cybernetics canon
is the fact that Maxwell presents a real, like, systematic examination of feedback effects. He's
not just saying feedback exists and leaving it at that.
The paper describes a few different types of governors, then shows derivations for equations
that dictate their action. Maxwell is actually applying physics and math to characterize a
feedback-based system. That's actual science, at least in the broad sense. So I think it's pretty reasonable
to say that cybernetics really starts with this one Maxwell paper. So how does this connect up
to the early days of computing? Well, it's a story we've already heard a couple of times on the show.
Cybernetics proper, at least the named field, emerges from the same motivation that led to the first digital computers.
We're talking World War II, more deadly artillery, and firing tables.
Improvements in artillery in the lead-up to the Second World War meant that weapons took more fine-tuning to be used in the field.
to be used in the field. You could have different combinations of conditions, guns, charges,
a handful of other variables, and for the first time, you might actually need to hit an airplane.
To deal with these complications, the US military started producing books of firing tables,
charts that let artillery operators simply plug in variables and get out the parameters needed to target their guns.
Those tables required a lot of complicated math to produce.
Early on, this was all either done by hand or using analog computers.
But it became apparent that this was not a sustainable solution.
ENIAC is the most obvious example of a better solution. It was originally planned as a
tool to produce improved firing tables. However, ENIAC wasn't the only project tackling this issue.
Another approach was to replace firing tables with automated firing systems. This is where Wiener himself enters the actual historical picture.
During the war, he was a professor at MIT. Specifically, Wiener was a mathematician.
Wartime offered a strange opportunity at these larger colleges. The US government was willing
to pump funds and human resources into a really diverse set of research projects, basically anything that could help the
war effort. At the University of Pennsylvania, this led to ENIAC. For Wiener, this would lead
to an epiphany. Wiener had previously published works on a set of equations called the Wiener-Hopf
method that could be used to predict future movements of objects based on past observations.
Now, that's not really what the equations themselves do.
Mathematically speaking, the Wiener-Hopf method is used to solve certain integral equations.
But, you see, Wiener wasn't really a pure mathematics kind of guy.
He preferred to apply math to the real world.
Integrals are often used in physics
for describing motion. This Wiener-Hopf method could be used for purely abstract purposes,
but that's kind of lame. It could also be used as a tool for applied physics. Already,
we're seeing a certain level of interdisciplinarity creep in.
Wiener's first jump was to apply this method to the design of a new type of automated firing system.
We're looking at 1942-1943 here, so we aren't to the point where slick digital computers are a factor at all.
Instead, Wiener looked to the work of Vannevar Bush. In other words, Wiener looked to
analog computers. The artillery system he envisioned worked by monitoring targets with radar.
That radar data was then fed into an analog machine that solved for upcoming locations
using the Wiener-Hopf method. Those solutions were used to automatically steer the big gun.
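The real Wiener-Hopf machinery is far heavier than anything I can sketch here, so take the following as a deliberately naive stand-in for the idea: use past radar observations to predict where a target will be when the shell arrives. The constant-velocity assumption and all the numbers are mine, not Wiener's.

```python
# A toy stand-in for prediction-based fire control: estimate a target's
# future position from past radar samples. Real fire control used far
# more sophisticated math; this just shows the idea of leading a target.

def predict_position(observations: list[tuple[float, float]], lead_time: float) -> float:
    """Extrapolate from the last two (time, position) samples,
    assuming constant velocity over the shell's flight time."""
    (t0, x0), (t1, x1) = observations[-2:]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * lead_time

# Radar samples of an aircraft along one axis: (time in s, position in m).
samples = [(0.0, 1000.0), (1.0, 1080.0), (2.0, 1160.0)]
aim_point = predict_position(samples, lead_time=3.0)  # shell flight time
print(aim_point)  # 1400.0 -- aim ahead of the target, not at it
```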
This would be particularly useful for surface-to-air defense, since it could, at least in theory,
compensate for the velocity of enemy aircraft. Of course, Wiener didn't stop there. We should
be able to see now a familiar pattern forming. Wiener is recasting the problem of fire control from an operations issue to an exercise in
the flow of information.
Information flows from the radar to the analog computer and then back to the actual artillery.
There's even room here to add in more sophisticated forms of feedback.
So we have another somewhat simple system that seems
to fit this cybernetic mold. Wiener recognized that this approach, recasting a problem in terms
of the flow of information, may be more generally applicable. He wasn't calling it cybernetics,
not yet, but he was heading in that general direction. Soon after, in 1943,
Wiener would drag two other researchers into this newly forming field.
That year, Wiener, Arturo Rosenblueth, and Julian Bigelow published a paper titled Behavior, Purpose, and Teleology.
We're already starting to see a wide range of researchers interested in early cybernetics.
Wiener was, of course, a mathematician by training.
Rosenblueth was a professor of physiology at Harvard's medical school.
That's pretty far removed from math.
Bigelow was an electrical engineer at MIT.
We get this broad spectrum of expertise all working towards a collective goal. So what
makes this paper so important as a step towards cybernetics? Well, I'd kind of be lying if I said
I really liked Behavior, Purpose, and Teleology. It's all very conceptual and all very vague,
two things that I don't usually enjoy together.
I think that vagueness is partly because the trio of researchers are still trying to figure out how best to fit their disparate fields of study together.
There are no mathematical theorems in this text, for instance.
There are also no drawings of biology.
However, this is where we see the next big theme of cybernetics show up.
The argument presented in the paper starts by defining purposeful behavior, that is,
behavior in the service of some larger purpose that is driven by observable inputs.
The purpose part here is a little weird. It's not necessarily something that can be known or understood by an outside entity.
So the text describes purpose as the opposite of random action.
Basically, it's the difference between signal and noise.
Once that's all defined, the text launches into a discussion of feedback.
This is mostly what we've already hit on,
the idea that
a system can change how it operates based off its own observations. The paper even throws out a new
name for these feedback-based systems. They call them teleological. It's a fun word, but I don't
think it adds anything but confusion to the conversation, so we'll be sticking with cybernetics.
Now, the new piece here is that the text draws a connection between natural phenomena
and machines. This is, I think, where cybernetics really sets itself apart. You see, the cyber trio
here take the assumption that one of these feedback systems can be described as a black box.
The authors admit they don't know any of the finer details about how a dog or a cat works,
but they can make observations to show that they hunt prey based on feedback. They may not exactly
know how a governor works, but they can see that it works towards a purpose based on feedback. The argument here
is that there is something fundamental connecting anything that presents this kind of feedback-based
behavior. In other words, there's something fundamentally similar between lifeforms and
machines. I'd like to remind you that this fell out of research into firing control systems.
That's the same place that digital computers come from. So we are looking at a branch of
research that, from the start, shares its roots with computing. Remember, cybernetics is going
to be a different lens we can use to look at computers.
That establishes where cybernetics starts and how it shares some DNA with computing,
but that's still a little vague for my tastes.
We're still dealing with an era just prior to the digital machines we know and love.
So what happens once honest-to-goodness computers hit the scene?
What do cyberneticists think of these new machines? Well, we can turn back to Wiener's Cybernetics to get our answer.
The important overarching theme here is that, for cybernetics, computers serve as just one
area of study, one arrow in a larger quiver. It was recognized early that computers exhibit all the
properties of cybernetics, everything that cyberneticists were concerned with. Moreover,
computers could be used to simulate or build feedback systems. Now, of course, that comes
with a pile of implications. Artificial intelligence just kind of pops out right from the basic assumptions of
cybernetics. Treat systems as black boxes, simulate them on a computer, and you can have a thinking
machine. It even has an easy three-step plan. Sure, it's a bit of a reduced approach, but it
works within the framework. This approach, at least on the technical level,
was based off von Neumann's theories on cellular automata. These are simple cellular simulations
that create output based off some input. In other words, a perfect tiny model of cybernetics.
The next order up is the neural net, a collection of interconnected cells that could, at least in theory, think.
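Von Neumann's automata were far more elaborate than anything I could sketch here, but a one-dimensional elementary automaton shows the flavor. Each cell's next state is a pure function of its local neighborhood, and the computation is nothing but information flowing between cells. The choice of rule 110 below is just a classic example, not anything von Neumann himself used.

```python
# An elementary 1D cellular automaton: each cell's next state depends
# only on itself and its two neighbors. Information flowing between
# cells *is* the computation -- a tiny model of the cybernetic view.

RULE = 110  # the rule number encodes the 8-entry neighborhood lookup table

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (mid << 1) | right
        out.append((RULE >> index) & 1)
    return out

row = [0] * 31
row[15] = 1  # a single live cell in the middle
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```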
Now, I have to make a bit of a weird distinction here.
Within the framework of cybernetics, this isn't really artificial intelligence.
I don't think the word artificial really fits.
A neural net is doing the exact same thing a brain does. It just happens
that a brain uses proteins and a neural net uses some harder stuff. But that all falls inside the
black box. In cybernetics, that's just intelligence. Alright, so that's our crash course on the very
rough basics of cybernetics, but there's still one
more thing we need to cover before we can move on. How does cybernetics compare to computer science?
It might be a little weird to think about, but computer science as a formal discipline
is very young. It's younger than computers themselves, actually, and certainly younger than cybernetics.
The first computer science degrees weren't awarded until the 1960s.
The name doesn't even show up in journals until 1959.
That's a weird amount of lag time.
Far from being a deterrent, I think this actually helped to create a more focused field of study.
Although having a large
spread of applications, computer science is primarily concerned with, what else, the study
of computers. In general, we're talking theoretical here, so maybe that's better put as the abstract
study of computation. We can get into arguments about whether computer science is a hard science that uses the scientific method,
a relative of math, or something in between, but the point remains that it's focused and
dedicated to computers. That's the core of the field. The next contrast we can draw
is the treatment of information. Cybernetics is all about the flow of data, while computer science is more about how data changes.
Now, I might get some PhDs mad at me for this,
but that's not going to stop me.
Basically, I see this as a matter of what's being operated on.
Much of computer science is dedicated to the study of algorithms
and mechanisms for computing.
Whatever the case, data is being operated on.
The information itself is changing or being used to create new information.
Compare that to cybernetics where information is used to render an outcome,
but the information doesn't have to change within the system.
In the case of feedback, cybernetics is only concerned with the change of information
outside the system. The internal state is simply treated as a black box.
We can also see this difference in how computer science models computing. The canonical model,
the one that always shows up in CompSci, is the Turing
machine. I also think this is actually three episodes straight now where I've mentioned this
model, so I'll probably back off for a while after this. Anyway, a Turing machine consists of a long
tape of cells where each cell contains some value and a moving head. The tape head can sweep up and down
the tape, read the value of a cell, and change the value of the cell. The tape head is controlled by
a series of instructions somewhere. In Turing's model, the internal state, the actual data held
on the tape, is everything. It's what the machine starts with, it's the end product of the computation,
and it's the record of every step along the way. The primacy of data is key. Compare that to a more cybernetic model like cellular automata, and we see a very clear difference. An automata model
is more concerned with how data travels from one cell to another. The flow of data is of prime importance here.
Information is acting as a motive force to drive outcomes. Both models can be used to
fully describe computing, but they treat data in a very different manner.
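To make the contrast concrete, here's a minimal sketch of Turing's model. Notice where the data lives: the tape is the starting state, the final result, and the record of the whole run. The bit-inverting machine itself is just a trivial example I picked, not anything from Turing.

```python
# A minimal Turing machine: a tape of cells, a head that reads and
# writes, and a table of instructions. The tape holds everything --
# input, output, and the record of the computation. This particular
# machine just inverts a bit string.

tape = list("10110")
head = 0
state = "scan"

# (state, symbol) -> (write, move, next_state)
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
}

while state != "halt":
    symbol = tape[head] if head < len(tape) else None
    if (state, symbol) not in rules:
        state = "halt"  # ran off the end of the input: stop
        continue
    write, move, state = rules[(state, symbol)]
    tape[head] = write
    head += move

print("".join(tape))  # 01001
```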
I want to be sure I underline that cybernetics isn't some entirely analogous field to computer
science. We can't just swap out a few
theories to switch from one framework to the other. By that same token, cybernetics isn't
simply an older attempt at formalizing the study of computers that fell out of fashion.
This is a field that has close connections to computer science. There's a good deal of overlap, especially when we look at the early days of comp sci. But broadly speaking, these are two
different fields with two different understandings of computers. In computer science, computers are
the center of the field, at least in practical terms. In cybernetics, computers are a part of a larger constellation of systems, useful but
not necessarily central.
So, this episode is about the destruction of a mainframe in Chile in September of 1973,
right?
What's the bridge here?
So far, we've been looking at cybernetics in a very micro-focused way.
Cells, neural nets, animals,
and even computers, despite the size of some early machines, are all pretty small in the grand scheme of things. The next step, the next pylon in our bridge to 1973, is a subfield called management
cybernetics. The undisputed father of this particular genre of the discipline
was Stafford Beer. That said, he wouldn't have been able to blaze his own path without a lot
of help. One thing I've noticed while researching for this episode is how small the overall field
of cybernetics really was in this era. Seriously, there are around 10 or 12 big-name practitioners who all co-authored papers together
or had some informal mentor-mentee connection. Beer stands out from this admittedly small pack
for one big reason. He didn't have a formal education, at least he never completed even
an undergraduate degree. Most cyberneticists were PhD nerds of some kind, so in that sense,
Beer was almost an outsider's outsider. I like to think that this gave him a little bit more
freedom in such an already fluid field. I definitely get some vibes here similar to
Ted Nelson's ideas on being a pure generalist. Beer benefited from some of these informal mentor relationships that
I mentioned. In the 1950s, he read Cybernetics, the book I keep citing, and eventually befriended
Wiener himself. That would open doors for more connections to cyberneticists. Beer soaked in
all the information he could get his hands on, but at the same time brought his own take to cybernetics.
Beer wasn't so much interested in math or biology or even electrical engineering. He was more
interested in business and management. Now, this is where I have to throw in some more of those
caveats. Stafford Beer offers an interesting research rabbit hole. The man wrote a lot, there's just no other way to put it.
The bulk of his output was in the form of books, and we're talking 500-plus page books here,
the kind of hardbound tomes that could kill you if dropped from the right height.
His books often cite his earlier texts, so you get into this circle of unending books with similar sounding
names, and there's about a dozen of them. This can make Beer somewhat difficult to get into.
He also has a particular writing style that doesn't really click for me. I don't know,
that's more of a matter of personal taste. But the bottom line here is there's a whole lot of information
about Beer that he wrote himself, and it's also really hard to go through because of that.
Beer's career starts at United Steel in England. He joined a branch of the company in 1956,
sometime around his introduction to cybernetics. Shortly thereafter, he would be heading up the new
cybernetics and operational research branch of the company. For Beer, cybernetics wasn't just
some abstract area of study. It was a tool that itched to be used. And, perhaps more importantly,
he believed that cybernetics could be applied to systems of any size. At United Steel, he brought in a
mainframe, a team of programmers, and started to wire up factories and foundries. At the heart of
Beer's work was the idea that cybernetics could be used to improve industry. This means, among
other things, modeling factories after organisms and controlling systems via feedback loops.
Instead of a nervous system, Beer employed a network of sensors and terminals.
What couldn't be monitored directly could be punched into the mainframe via cards or buttons.
Reports were generated from the incoming wave of data and then acted upon to improve factory outcomes.
That's Beer's broad sweeping approach. Let's use it as
a skeleton that we can drape maybe some muscle and skin on. The core of the operation was, of course,
the mainframe. This is where data had to be somehow synthesized into meaningful results.
In the simplest case, this meant daily reports, but the true goal was to generate actionable
information.
Sure, you could throw out a list of which furnaces and teams were producing the most
steel, but that can only go so far.
Beer wanted his system to aid in high-level decision-making.
Reports need to provide predictions for future numbers, and recommendations for how to achieve
better results.
So, how do you go about making these kinds of decisions?
Well, as we know, the word decision is usually left to the realm of humans, or in some cases,
artificial intelligence.
But we aren't talking about AI here.
We're talking about a different type of approach.
When in doubt, just remember that cybernetics is computer science through a different lens.
To solve this issue, Beer started to construct what he called the viable system model.
As he put it in a summary paper in 1984, quote,
The quest became to know how systems are viable,
that is, how they are capable of independent existence,
as the dictionary has it. End quote. We can hammer out a very basic roadmap to Beer's research right
here. One, figure out what makes a system viable. Two, find some way to measure viability. Three,
collect data on a working system. And four, adjust variables to
push it towards viability. We can see all the cybernetic goodies we know and love right there.
We have feedback, the centrality of communications, and even using data as a motive force for change.
This is all just scaled up and applied to the real world. By
following this prescription, the whole matter of how to make a decision falls by the wayside.
It becomes a simple calculation. Now, I'll just say that the viable system model, or VSM,
is confusing. And I think a lot of that, at least in part, is thanks to how Beer writes.
He uses a lot of analogies, often multiple in the same description, to try and explain the VSM.
When he gets into the fine details of his model, he brings in a lot of new terms that he's coined
himself, or cites heavily from earlier work without explaining further. It's a bit of a web trying to parse out what's going on,
so here's the system as best as I can untangle it.
First off, the VSM is defined as a recursive system.
A viable system can be composed of other smaller viable systems.
If you're looking at a business using VSM, then this makes a lot of sense. You
have a larger organization that must be viable. Then under that, you have departments that can
follow the same framework and on down to individual employees and machines. Beer, of course, takes this
further, talking about organs and cells as their own tiny viable systems all following the same framework.
But hey, that's part of the complicated web that I'm trying to steer us clear of.
VSM describes systems as adhering to a five-system structure.
Beer modeled this after the contemporary understanding of the nervous system, or at least his understanding of the human nervous
system. Now, I don't know how accurate that is or how close this mimics true reality. We just have
to take this as we see it. In this structure, we have System 1, which is where the actual work is
done. System 2, Beer describes as a, quote, local regulatory system. This stage monitors
system 1 and controls data flow between it and later stages. System 3, quote, inside and now,
self-organization, regulation, end quote. This is a more global-level governor. It controls rules on how the overall system works.
Those are the three lower levels of the VSM. Think of them as the internal part of the system,
the deeper workings. The last two tiers are where things get more complicated. We start with System
4, quote, outside and future, self-reference, simulation, planning,
end quote.
At this level, we're actually getting to management decisions.
This is where analysis of the overall process starts.
Then we finally have System 5, which is usually just labeled as policy.
In some diagrams, Beer puts two smaller processes in System 5.
We have 3-4 homeostasis and algedonics.
That last word, algedonics, is one of those Beer terms.
It's from the Greek, meaning something along the lines of pain and pleasure.
And I think it's kind of dumb.
I just have no other way to put it.
Beer uses algedonics to just mean alerts. That's it. There's nothing special to his formulation,
just the name. When something bad happens on a lower level, it raises an algedonic alert.
It's just an alert. I don't know why he has to use a special Greek-derived name for this. Now, the homeostasis part, that is actually interesting. There's
actually some intelligible theory under the hood. The word just means equilibrium, or a state of balance. Specifically, 3-4 homeostasis means that the VSM needs to balance the system's internal state with its outputs.
Or, put a more philosophical way, the VSM seeks to balance the now and the future.
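Beer never expressed the VSM as code, and I wouldn't dare claim the following captures his model. But here's one loose caricature of the recursive structure: operational units report upward, a regulator checks them against bounds, and anything out of line escalates as an algedonic alert toward planning and policy. All the names and numbers are invented.

```python
# A loose caricature of the VSM, not Beer's formalism: System 1 units
# nest recursively, Systems 2-3 regulate and filter, and out-of-bounds
# readings escalate as "algedonic" alerts toward Systems 4-5.

from dataclasses import dataclass, field

@dataclass
class Unit:                      # System 1: where the actual work happens
    name: str
    output_ratio: float          # e.g. actual output / potential output
    subunits: list["Unit"] = field(default_factory=list)  # recursion

def regulate(unit: Unit, low: float, high: float, alerts: list[str]) -> None:
    """Systems 2-3: monitor each unit, raise alerts upward, recurse down."""
    if not (low <= unit.output_ratio <= high):
        alerts.append(f"algedonic alert: {unit.name} at {unit.output_ratio:.0%}")
    for sub in unit.subunits:
        regulate(sub, low, high, alerts)

factory = Unit("steelworks", 0.85, [Unit("furnace A", 0.90), Unit("furnace B", 0.40)])
alerts: list[str] = []
regulate(factory, low=0.60, high=1.00, alerts=alerts)
print(alerts)  # Systems 4-5 (planning, policy) would act on these
```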
While Beer references this five-stage VSM model a lot, we only need this last part to start understanding his brand of cybernetics.
Homeostasis is going to be the guiding principle here.
The reason I want to bring up the full model is to show how complex Beer's work is, and,
in general, how complex these applied cybernetic systems end up being.
Now, there are a few other observations we can draw from this morass.
Nowhere in Beer's description of VSM do we find explicit reference to computers.
There isn't really even any math.
This is a generalized and conceptual model.
In theory, you could apply VSM to anything. Beer often points out that VSM can be used to describe natural phenomena, but in practice, the system only made
sense when using a computer or some other automated system. Sure, you can apply this all to a business
using pen and paper. It would take a lot of work, and the overhead would probably ruin your operation.
So while the generalization here is cool, I think it detracts from the practicality of the model.
Feedback plays a big role in VSM, although it's a little obscured at first. This is tucked inside
the concept of homeostasis. To keep a system in balance, or to drive it towards balance,
you have to use feedback. This is a smaller point, but it gives some more connective tissue to earlier cybernetics work. Then we have the isolation of the internal parts of the system.
This, I think at least, is where we run into a systematic flaw in Beer's model.
VSM presupposes that lower levels, the whole System 1 class here, operate in something like controlled isolation. The data only flows
between these nodes via higher stages in the model. The assumption is that all data paths can be neatly modeled
and categorized. Beer does away with the variance on these low levels by just modeling them as a
black box. They take inputs and they send outputs up the pipeline. Now, I will admit I have some
trouble grappling with this stuff. I'm not sure that the black box approach used in cybernetics
is 100% applicable to the real world. For instance, in later works, Beer addresses that there could be a breakdown in the system,
that incorrect data could be pushed up from these black boxes.
He kind of just brushes that away, saying, oh, proper programming allows for anomaly detection.
But it's not hard to imagine a case where the anomalies could be so small or
grow so slowly that issues would propagate. If a black box breaks, you don't always have a way to
know, much less a way to fix it. But hey, that's probably getting a little bit too far into
speculation for right now. The point here is that VSM gives us a description of a working and self-sustaining
system. The quantitative goal is always homeostasis to balance your inputs and your outputs, with
the actual data and calculations used to measure that varying from system to system. The tool for
the job, the thing that makes this all possible, is a computer, no matter how Beer puts it.
So while technically computing is a central feature of Beer's vision,
the theory is separated from binary machines.
After applying this model to United Steel, Beer would move up to a grander stage.
In 1961, he founded a consulting firm called Sigma, based out of Paris.
The firm helped clients apply Beer-style cybernetics to solving management problems.
During the 60s, Beer would bounce around from contracting job to contracting job, publishing
a few more books along the way.
Now, I haven't done much digging on this phase of the story.
Secondary sourcing is scarce, and all the primary stuff is,
frankly, buried somewhere in Beer's massive literary output. The more interesting part,
for me at least, is Beer's next big gig. As early as 1962, Beer was involved with Chilean industry.
This would eventually lead to Cybersyn.
I think the man himself gives the best introduction to this turn of events.
In Brain of the Firm, Beer wrote,
It seems to me that the posture of a, quote,
neutral scientific advisor became untenable after the experiences of World War II, and especially since the full circumstances
surrounding the Holocaust and Japan in 1945 became known. This book has already tried to
demonstrate that the role of System 4 is in cybernetic principle part of the command axis,
and if it is not, then in political practice, nothing will happen.
Thus, I do not understand the outlook of the scientific overlords, in Britain for instance,
who happily survive in government for a professional lifetime while parliaments of opposite tendency come and go.
End quote.
Now, I know that's a bit of a long quotation, but I think it's important for setting the stage.
First of all, Beer's conception of cybernetics was explicitly political. I think in general, there is a political tone to most cybernetics. Theories in the field are generalized to the
point where they can describe any system, from cells to computers to nations, and those theories make implicit
judgment calls. Beer used VSM to make factories run at peak efficiency. He also believed VSM
should be used to make countries run at peak efficiency. This isn't really something you see in computer science very much.
Beer's connection with Chile and Cybersyn, as he puts it, is complex and total. Here I'm pulling
from Eden Medina's Designing Freedom, Regulating a Nation, which gives a really good rundown of the lead-up to Cybersyn. In 1962, Sigma, Beer's firm, was contracted to help modernize Chile's steel industry.
A group of consultants, sans Beer, made the trip to Chile to kick things off.
As the project grew, local students were hired to help things along.
From that, another twisting web of cyberneticists formed.
Some students, like Fernando Flores, would go on to become professors, all the while
retaining a fascination and background in cybernetics.
Flores specifically would become involved in the popular movement that eventually led
to Allende's election in 1970.
In the wake of that election, Allende started major governmental and economic reforms.
Importantly for us, this involved the nationalization of major companies.
This wasn't done just for the sake of enlarging government coffers, but primarily to break
up monopolies and address a growing wealth inequality.
Nationalization efforts went forward at a shockingly fast pace.
Nationalization efforts went forward at a shockingly fast pace.
Allende's government was attempting to selectively ease into more widespread nationalization.
That was hampered in kind of an odd twist by massive public support.
Workers started taking over their factories and turning the keys over to the government. It was a very grassroots seizure of the means of production.
government. It was a very grassroots seizure of the means of production. The issue was that this dropped a lot of contested resources at the Chilean government's feet. Their new holdings
had grown faster than they planned for, and that swamped the system. How could all these new
resources be effectively managed? In July 1971, Flores reached out to his third-hand mentor. He had never actually met Beer,
only Sigma employees had actually visited Chile back in the early 60s, but Flores had read Beer's
work. And believe me, just reading Beer takes some dedication. Flores outlined the management
problem facing Chile as more of an opportunity for Beer to try out his theories on a national scale.
Beer couldn't resist the opportunity.
In August, Flores flew to England to hammer out more details with Beer.
By November, Beer was in Santiago, prepared to pitch a wild project to President Allende.
Now, we can finally get to the main event, Project Cybersyn.
Of course, there's a bit of a complication I need to get out of the way first.
I haven't been able to really work with governmental documents here.
I just can't find them.
I think there's a good chance many documents were destroyed during the 1973 coup,
and even if I
could find the usual government sources that I'd like, I probably couldn't use them since, you know,
I'm an Anglophone, I don't really speak or read Spanish very well. So I'm going to be pulling
from two major sources. First is Beer, as I keep saying, he wrote a lot about everything and, luckily for me, it's all in English.
Second is Medina's article on cybernetics in Chile. It's just a wonderful source.
With that in mind, let's get down to business. Project Cybersyn was technically under the purview
of CORFO, the Corporación de Fomento de la Producción. As I understand it, CORFO is roughly equivalent to
the U.S. Department of Commerce. It's a government organization dedicated to helping the economy
grow. In the 70s, Flores was the general technical manager of CORFO. So technically speaking,
Cybersyn was working under Flores.
Beer was just a contractor, albeit one with a lot of sway.
So, I think we have a good understanding of Beer's approach to cybernetics.
At least the theory part.
So what exactly did Flores want this model to be applied to?
The initial plan for Cybersyn was actually pretty confined. Flores and Beer set out to implement the system just for the nationalized part of Chile's industrial sector.
In practice, that meant mainly factories and some mines. Now, there's an interesting detail here
that I think is worth addressing for later on. Allende, Beer, Flores, and the rest of
the project team were working from a Marxist-Leninist view of communism. Now, Advent of Computing is not
really a political podcast, so I'm not going to get too far into the politics here. That would
really explode the scope of this episode by a lot. What we need to know here is that communism
centers around the worker, the proletariat. This specific brand of communism has a surprisingly
restrictive view of what counts as the proletariat. Specifically, we're talking about
manual laborers in the industrial sector, usually in an urban setting. So right off the bat,
Cybersyn is not a general system. It has restrictions based off political ideology
that Beer aligns with, and it also has restrictions just based off what was possible.
Anyway, back to the system itself. Cybersyn would be composed of, perhaps unsurprisingly, a five-layer
system. It's just all applied VSM from top to bottom. System 1 in Cybersyn was the actual
industrial unit, the factory floor. This is the only place where production was actually occurring.
All other levels are administrative. That's worth pointing out because, as we'll see,
there's a lot of bureaucratic overhead here. Most of the system is bureaucracy, but everything is
built up from this lowest level. The entire project started with Corfo employees heading
out to factories and drawing system flow charts. The point here was to identify how each factory
functioned, how inputs flowed into outputs, and from there draw a set of system variables for
use by higher levels. These variables measured things like gross inputs and outputs, you know,
the basics. This was taken alongside other readings, such as the number of employees clocked in during each shift.
In theory, these metrics were to be sent up the chain in real time,
but in practice, this was only communicated daily, which brings us to System 2.
Some of these levels have fun names.
System 2 was also called Cybernet.
It was the network that connected industrial
units to the rest of the nation. Once a day, a manager at each industrial unit would head
over to the locally installed terminal, type in the day's readings, and send it off to
a central office. Cybernet was a telex system, this is the usual phone line and modem setup
that we see everywhere during this era.
Numeric and textual data was sent down existing phone lines,
so the actual installation and upkeep of Cybernet was pretty low cost.
Plus, Telex allows for two-way communications,
so signals and requests could be sent back down to the industrial units from higher levels.
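I don't know the actual record format Cybernet used, so treat this as a hypothetical sketch of the daily routine: a manager keys a handful of numbers into a terminal, and a terse line of text heads down an ordinary phone line toward Santiago.

```python
# A hypothetical sketch of a Cybernet daily report. The field layout is
# invented; the point is the shape of the routine: a few numbers packed
# into a short, telex-friendly line of text, sent once a day.

def encode_daily_report(unit_id: str, produced: int, potential: int,
                        present: int, total_workers: int) -> str:
    """Pack the day's readings into a single low-bandwidth line."""
    return f"{unit_id} {produced} {potential} {present} {total_workers}"

line = encode_daily_report("TEX014", produced=820, potential=1000,
                           present=96, total_workers=120)
print(line)  # "TEX014 820 1000 96 120" -- terse, numeric, cheap to send
```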
The system of set variables and networking worked as a bit of a double-edged sword. Restricting the variable space, or filtering as Beer called it,
is crucial for problem analysis. I mean, you can't really monitor everything, that's simply not
possible. So by reducing an entire factory to a set of variables, you can start to analyze the system and tackle problems.
On the other hand, this simplification severely limits your view of the situation.
Medina makes the point that the system couldn't account for political friction or other internal disputes at the factory level.
Beer would try to capture this by tracking the number of employees present versus the number of total employees,
but that's a pretty abstract measurement.
It doesn't count the number of employees engaging in, say, a slowdown strike.
Moving further up the chain, we get to System 3, Cyberstride.
Of all the made-up names this episode, I gotta admit, this is my favorite.
This is the level where we actually
come into contact with a computer. Cyberstride was a set of programs designed to analyze incoming
industrial data and generate daily reports. It initially ran on an IBM System 360, but was
eventually moved to a Burroughs mainframe. As far as I'm concerned, this is where the rubber really meets the road.
Looking at the entire system, we see that Cyberstride was the place that maintained
homeostasis, at least in an automated sense. Everything else required human intervention.
This is what should be doing the most work to keep everything in check. So how did it work?
It all came down to ratios and historic data. Industrial units were
sending in raw numbers which had to be dealt with somehow. Beer's approach was to turn these all
into ratios, so each data point had a corresponding maximum of some sort. I already introduced one
ratio, the number of employees who clocked in divided by the total number of employees.
Beer called this ratio the, quote, employee absenteeism. Similar ratios were calculated for production efficiency, total production potential, and a handful of other factors.
The idea here was to take very abstract numbers like, say, one million pounds of grain and turn
those into ratios, like 80% of workers showed up
today. Beer argued that people just worked better with ratios, and I mean, that makes some conceptual
sense. The check for homeostasis was accomplished pretty simply. When industrial units were first
checked into Cybersyn, upper and lower bounds for crucial ratios were determined.
When a ratio crossed these boundaries for some set amount of time, an alert would be raised.
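Here's a small sketch of that idea, with invented thresholds: turn raw readings into ratios, compare them against per-unit bounds, and only raise an alert once a ratio has been out of bounds for a few consecutive days.

```python
# A sketch of the Cyberstride idea under stated assumptions: ratios are
# checked against per-unit bounds, and an alert is raised only after the
# ratio has been out of bounds for several consecutive days. All the
# thresholds and readings here are invented for illustration.

def check_homeostasis(daily_ratios: list[float], low: float, high: float,
                      patience: int = 3) -> bool:
    """Return True if the ratio has been out of bounds for `patience`
    consecutive days at the end of the series."""
    streak = 0
    for ratio in daily_ratios:
        streak = streak + 1 if not (low <= ratio <= high) else 0
    return streak >= patience

attendance = [96/120, 90/120, 60/120, 55/120, 50/120]  # clocked in / total
if check_homeostasis(attendance, low=0.65, high=1.0):
    print("alert: attendance out of bounds, escalate up the chain")
```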
Alerts and daily ops data were passed up to the next level, and this is where humans enter back
into the picture. System 4 contained two major features. The computationally interesting part
was Checo. This part of the system was
intended to be a simulation of the entire Chilean economy. It would use current and historic data
from Cyberstride to construct future projections. It would also be a way for managers of the system
to test out possible solutions. Basically, they'd be able to feed in new data points and see how adjustments would
propagate into the future. It sounds like a pretty sweet system, but it was never actually implemented.
I think this came down to time and funding constraints, but there's also the possibility
that Checo may have been too complex to actually create with the current technology.
In cybernetics, there's this idea of a cycle or oscillation, that feedback-based systems will go through oscillations as their course is corrected towards homeostasis.
So, in theory, you have to be careful how you provide and respond to feedback.
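A toy example makes the worry visible. The controller below nudges a value toward a target in proportion to the error; crank the gain too high and it overshoots and swings back and forth instead of settling. The gain values are made up, but the oscillation is the real phenomenon.

```python
# A toy feedback controller: correction is proportional to the error.
# A modest gain eases toward the target; an aggressive gain overshoots
# and oscillates around it. Both gain values are invented.

def run_controller(gain: float, target: float = 100.0, steps: int = 8) -> list[float]:
    value = 40.0
    history = []
    for _ in range(steps):
        error = target - value
        value += gain * error  # the feedback step
        history.append(round(value, 1))
    return history

print(run_controller(gain=0.5))  # settles smoothly toward 100
print(run_controller(gain=1.8))  # overcorrects: swings above and below 100
```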
This is especially important in systems where you can control that feedback,
like, say, a computer-based economic management system. I could see a situation where Beer's view
on feedback made implementing Checo a little too complex. Whatever the case, Cybersyn was never
fully simulating Chile's economy. The second feature at this level, and the most flashy of the entire system,
was the ops room.
When you look up Cybersyn, this is what you'll see.
It's the big, front-facing component of the project.
The ops room itself looks like something pulled straight out of a sci-fi movie.
Inside this hexagonal room were seven swiveling chairs, each molded out of plastic.
The armrest of each chair had a handful of big buttons. Along the room's six walls were CRTs
and projector displays that showed the latest data fed up from Cyberstride. If you think this
sounds like the bridge from the original Star Trek series, then you aren't
far off.
The whole thing really screams mid-century futurism.
Unlike the bridge of a starship, the ops room wasn't really meant for continuous use.
Instead, it was intended to be used only when Cyberstride raised an alert.
A group of seven managers would rush into the room, quickly assess the
situation, and then try to make adjustments. Decisions could be transmitted from the ops room
down through the rest of Cybersyn. During this process, Checo could be used to check exactly
which changes could avert disaster. Now, there's something else kind of subtle about the ops room.
The lack of keyboards. Medina brings this up explicitly in their discussion of the ops room, and I think this is where we run into
some more of the caveats inherent to Beer's conception of cybernetics. The eventual point
of the ops room was that workers would be able to manage the Chilean economy themselves.
Beer had plans for this on all levels.
Factories would have smaller and less technically advanced versions of this
where workers would be trained in how to helm any level of the overall system.
This makes sense in the broader context of socialist reforms that Allende was championing.
The means of production were being given back to the workers,
and that would have to include the means of managing that production.
But here's where this goes off the rails.
Medina notes,
quote,
Beer recognized that the men sitting in the chairs would not possess skills as typists,
an occupation typically
performed by female secretaries. Therefore, in lieu of the traditional keyboard, the ops room
team designed a series of large, big hand buttons as the input mechanism that one could thump to
emphasize a point. Beer felt this design decision would allow the technology to facilitate communication,
eliminating, quote, the girl between themselves and the machinery, end quote.
Medina continues, quote, Beer claimed the big hand design made the room appropriate for eventual use by workers, end quote.
Once again, we're seeing a kind of weird and unnecessary restriction placed on who counts as a worker.
The means of production here were being seized specifically for those who were male, urban, industrial workers.
That's not really the entire population.
Beer's also taking this strangely patronizing stance.
You can see in the ops room design that the armrest and buttons definitely look sturdy.
Beer's explanation for that?
Well, you know, workers are ruffians that don't know how to use machines. We have to make sure that they don't break the ops room during passionate discussions and thumping of buttons.
There's a whole lot of commentary that could be drawn from this by people who are admittedly smarter and more well-versed
than myself. What I'd like to throw out there is this. Here we see an example of what happens
when you mix political ideology too deeply and indiscriminately with your science. Maybe that's why Beer's take
on cybernetics seems so complicated and confusing. Sure, he is approaching part of his work as purely
scientific, but there are some decisions informed by his political ideology. Maybe that led to extra
wide armrests, or maybe it led to specifically centralized mechanisms
of control.
Now, just like Checo, the ops room was never fully completed.
There was a prototype built.
When you look around for photos of Cybersyn, that's what you'll see.
The final layer in Cybersyn was CORFO senior management.
The idea being that if Cyberstride and the ops room both failed to
solve a problem, they could call in a manager to solve it. In this way, ultimately, Cybersyn was
built as a way to leverage centralized power. There's just no other way to look at it.
Everything percolated up to some higher management layer. So what are we left with?
By the time of the September coup,
Cybersyn was still unfinished. There just wasn't enough time to complete the entire system, and then time ran out very abruptly. That said, components of Cybersyn would see use.
Cybernet in particular was key in maintaining Allende's government in the face of industrial
strikes. It was used to coordinate relief efforts in factories and manage shipping in the country
during a 1972 strike. But Cybernet, despite the name, was probably the least cybernetic-y part
of the system. Really, I think this speaks more to the importance of network communication than anything.
Was Cybersyn a failed project?
In a lot of ways, I gotta say yes.
But at the same time, the cards were stacked against it. Given more time, Beer may have shown the world a cybernetic utopia.
Or at least a utopia for a subsection of that world.
Alright, that brings us to the end of this episode. This has been a bit of a strange journey for me. Cybernetics is something that's been showing up in my research from time to time.
It's a bit of a secret undercurrent in the early history of computing. I think taking a closer look at the field has definitely been worth my time,
but the thing is, I have mixed feelings about cybernetics.
If you're looking for the best way to talk about computing,
then computer science is 100% the way to go.
Cybernetics isn't primarily about computers or computing. Digital machines are part
of the discipline, but exist as either tools or just another type of system to examine.
That generalized approach can be interesting, it can be useful in some cases, but I think it's
also a weakness. Computer science, despite its wide range of applications, is a very focused field.
It's about computing and data.
That's what you get.
Cybernetics spans everything from math to psychology to philosophy to biology to politics.
Not all of those influences are scientific,
but cybernetics tries to pull everything under this umbrella of science.
Sometimes that works, sometimes it
doesn't. I personally prefer consistency over generality. So to close this out, I'd like to
ask one more question. Is cybernetics still relevant today? There are a few ways to tackle
this one, so I'm going to be taking a bit of a weaselly way out of it.
I think cybernetics as an independent field isn't really that relevant anymore. Like, I keep saying
we have comp sci, which is just kind of better. The interdisciplinary aspect of cybernetics is
also a bit less relevant nowadays, at least when it comes to computing. Most fields of study have been reshaped, in part or sometimes in total, thanks to computers.
The easiest example I can think of is physics.
When I was an undergrad in the physics department,
almost every class had some computational component, usually programming.
All the research I did involved programming or computational data analysis.
I have chemist friends who had a similar experience. Computational X is really where
the sciences are going. The result here is we have chemists, physicists, biologists, and any
scientists you can think of who are intimately familiar with computers.
Sure, programming isn't technically computer science, but it's only a hop, skip, and a jump away.
I think there are parts of cybernetics that are still relevant, but I have to stress here
parts.
It offers a different way to look at systems.
For us, or at least I'd assume for most of you listening,
what matters here is that it offers a different way of looking at computing.
On occasion, it's valuable to be able to step back and look at a computer as a cog in a larger
system, or to compare computers to an organism. That can be a powerful approach. It's plain as
day. Neural nets, the quintessential cybernetic approach to
intelligence, are still in use today. Similar examples abound. Genetic algorithms are one that
I can think of off the top of my head. So maybe it is relevant to turn to cybernetics from time
to time and pick out some of the good parts. Just watch out for all the pitfalls along the way.
Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another
piece of computing's past. And hey, if you like the show, there are now a few ways you can support
it. If you know someone else who'd be interested in the history of computing, then why not take
a minute to share the show with them? You can also rate and review on Apple Podcasts. And if
you want to be a superfan, you can support the show directly through Advent of Computing merch
or sign up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
You can find links to everything on my website,
adventofcomputing.com.
If you have any comments or suggestions for a future episode,
then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.