Advent of Computing - Episode 49 - Numeric Control and Digital Westerns
Episode Date: February 8, 2021

Saga II was a program developed in 1960 that automatically wrote screenplays for TV westerns. Outwardly it looks like artificial intelligence, but that's not entirely accurate. Saga has much more in common with CNC software than AI. This episode we take a look at how the same technology that automated manufacturing found its way into digital westerns, and how numerically controlled mills are remarkably similar to stage plays. Clips drawn from The Thinking Machine: https://techtv.mit.edu/videos/10268-the-thinking-machine-1961---mit-centennial-film Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
Well, can you show me where a computer can do anything original?
I think that might help to convince me.
Well, it depends an awful lot on what you mean by original.
Would you regard writing a television western as being original?
Do I have to answer that?
What do you mean to tell me that a computer can really write a play?
Well, we can write pretty good plays.
That's just a short clip from a 1961 CBS documentary called The Thinking Machine.
It was part of a larger series celebrating MIT's
100th anniversary. There's something wonderfully campy about the whole video. It's framed as a
professor trying to explain artificial intelligence to a somewhat reluctant presenter, with the aid of
lots and lots of cool demos. Of course, everything has this sheen of mid-century state
of the art to it. It's a really interesting piece of period media that shows how scientists
were trying to explain computers to the public. We can see all the cool projects that show what
one of these newfangled computer machines is capable of. But tucked away in this hour-long program is one particularly strange
demo. The program is called Saga 2, and it writes western screenplays. Even more surprising,
these TV scripts are intelligible enough to be used. In theory, Saga 2 could create an infinite
number of western plays, but The Thinking Machine only shows three.
Each short vignette is captured in live action, using real human actors, but strange inhuman
writing.
These are pretty short scripts, only a few minutes long if we're being generous.
Everything is simple, there's no dialogue, and the play is only populated
with two people.
Saga 2 was showing that a computer, a device that was still seen as a number crunching
machine, well, it was able to encroach on something that we thought was fundamentally
human.
So the question is, was Saga 2 actually creative? Or are we looking at some digital smoke and mirrors?
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 49, Numeric Control and Digital Westerns.
Now, this is another one of those episodes where my initial planning was thrown around quite a bit,
and really, my expectations going into this were all wrong.
At first, I planned to just cover Saga 2, a program that wrote Western screenplays. It was developed at MIT, so there's a connection to the eventual
founding of the AI lab. The program was written on TX0, the first transistorized computer, so
there's some interesting context, right? In my head, that was a simple bingo-bango-bongo on the
way to a fun episode. But after making a few outlines, I realized that there was something a lot more interesting here and something that I didn't expect.
Ultimately, Saga wasn't very sophisticated artificial intelligence.
And it doesn't really matter what hardware it ran on.
Saga existed close to the home of AI research, but it lived in a slightly different realm. For Saga, it all comes
down to automation, computer-aided design, and a programmer named Doug Ross. Now, if you're like me,
you don't really spend all that much time thinking about manufacturing or automation.
Same goes for CAD, aka computer-aided design. I've used it before; for me, it's mostly been KiCad for drafting circuits.
I know that it's important software for industrial design.
And that's about it.
This episode, we will be dealing primarily with Doug Ross,
one of the programmers behind Saga 2.
More commonly, he's referred to as the father of a
language called APT, and one of the masterminds behind the development of computer-aided design.
Basically, Ross had a hand in the creation of computerized manufacturing. His career spanned
from analog computers to vacuum tubes to transistors. Working everywhere from universities to government labs to the private sector.
At each step, Ross was looking for a way to save time and effort.
So, how did a very serious programmer help create the first software-defined screenplay?
How does the development of CAD and computerized milling factor into Saga 2?
Believe it or not, the two have a lot more similarities than you may initially think. Our story starts, where else,
but at MIT in 1951. That year, Doug Ross and his wife Patricia Ross moved to Cambridge. Both had
recently graduated from the math program at Oberlin College.
Doug was accepted to MIT's grad program, so he was bound for a few years of teaching and research.
Patricia, on the other hand, was heading into a job at the newly formed Lincoln Lab. Her job title
was something that we don't use for humans anymore. She was bound to be a computer. Lincoln Lab had been
founded earlier that same year as part of this ever-growing private-public partnership. The lab
was founded by the U.S. Air Force and eventually the Department of Defense added in funding.
The goal was modernizing America's air defense. With the Cold War deepening,
the federal government was really eager to throw money at new technology,
and Lincoln Lab was working on turning that new technology into a sort of nation-sized shield.
In 1951, computers did exist,
and New England was a hotbed for the new technology,
but digital machines weren't really readily available yet.
There were just too few electronic computers to go around,
and the ones that existed were relatively limited in what they could do.
So research sites like the Lincoln Lab relied on this mishmash of humans and analog computers.
That's where Patricia Ross and other workers like her came into
the picture. While a large swath of the lab was working on theory and engineering, Patricia Ross
and other computers handled the brass tacks of turning theory into concrete numbers. Part of
this work was done with the, you know, good old standby. Pen, paper, and countless hours of work. Human computers would be
given pages and pages of equations. Then they would set to work, breaking those down into smaller
operations, running numerical analysis, and eventually producing results. But that was only
one part of the job. The other part was operating their analog computer counterparts. In this period,
analog systems filled this kind of weird gap between hand calculations and full digital systems.
They worked, they were a little faster than manual number crunching, and they were often
less error-prone than doing things by hand. That being said, they were still largely manually
operated machines. You just didn't program an analog computer in the same way that you would
later program a more modern computer. It didn't understand code, and it definitely didn't operate
automatically. An operator had to swap out gears, shafts, and pulleys to model an
equation. Then, they had to very carefully input their parameters and watch for proper outputs.
Patricia Ross was one of those operators. One of the big tasks that she handled was
correlation functions. The Servomechanisms Lab, part of the larger defense project,
was working on targeting mechanisms for air defense systems.
In other words, they were developing the brains that helped point a gun at a moving target.
It requires a pile of calculations to get these kinds of systems operational.
At least it did before the use of more modern digital computers.
Patricia Ross was one of the crew of computers that handled this specific
task. And it was really tedious work. Fiddling with machinery, tracing trajectories, compiling
results, and eventually double-checking everything took hours of concentration. But by and large,
it was a repetitive job. Numbers were crunched for one possible trajectory, then
inputs were changed, and the next trajectory was analyzed. It's the kind of job that has a purpose,
but can be frustratingly dull. Evidently, it bugged Patricia enough that she brought the
problem home after hours. I'd guess this started as idle griping, the kind of after-work debrief that most partners
share with each other. But Doug started to take more of an active interest in what he was hearing.
He and Patricia were on the opposite side of the same field. It was a matter of theory versus
practice. The cross-correlation calculations that Patricia was running sounded really similar to
some mathematics that Doug had been researching with his students. So he naturally wanted to learn more. He asked for
some details, and Patricia was happy to help. Quoting from Doug, quote,
So she brought back a seven-page purple ditto description of the principles of the machine,
how to run it. Came summer, I called them up. I have no idea where I got the gall to propose doing this,
but I just looked in the phone book and called up the executive officer
of the Servomechanisms Laboratory and said,
I'm a math graduate student and would like a summer job.
If you could find me an electrical engineering student,
I'm sure by the end of the summer we could make you an electrical calculator
that would beat the pants off that little mechanical thing. End quote. So there it is. We have a simple
cycle that we've seen before. Someone has a problem, someone identifies a solution, and a plan forms,
and eventually work starts. There were accuracy issues with the analog machines that Patricia and her fellow computers worked with.
The mechanical devices were also tedious to use.
This sort of situation just begs for an upgrade, right?
Well, after getting in touch with the right people,
Doug found out that no one actually cared to upgrade right now.
The cycle stopped there, but admin did offer Doug a job. So come summer,
he went to work with Patricia as a human computer. This rejection would actually be a pivotal moment
for Doug Ross. He spent that summer crunching numbers for the Servomechanisms Lab, getting
first-hand experience with the state of the analog computing art. As he suspected, and as Patricia had told him,
there was a lot of room for improvement.
So he set to work gumming up the operation as best he could.
Doug figured that, you know, maybe he could reorganize
how his co-workers processed data.
He tried to work up some kind of pipeline,
or data assembly line, if you will.
It was while trying to
restructure his co-workers that Doug first heard of Whirlwind. It should come as no surprise that
Doug wasn't the only one who took issue with analog computers. By the time he offered to
electrify the Servomechanisms Lab, researchers were already attempting to tackle the problem.
This project was called Whirlwind, an effort to replace all these analog systems in the lab with a vacuum tube-based computer. One day in 1952, after a few summers of grinding away on
analog computers, a researcher in the Servo Lab recommended that Doug head down the hall a little bit and check out what was being built.
After a short walk, Ross came face-to-face with Whirlwind, a machine that instantly made everything he was struggling with obsolete.
Before the summer was over, Doug had filed for computer time and pored over the reference documents for Whirlwind.
He wasn't the computer's first programmer,
but he was an early convert to this new digital way of life.
Programming wouldn't just take over his free time.
It also became his primary focus when school was back in session.
When it came time to complete his master's degree, he knew where to look.
For his thesis, Doug wrote a Fourier transform
library for Whirlwind. By the time he graduated, Ross had moved from a mathematician to a fully
fledged computer scientist. After graduation, Ross stayed very close to Whirlwind, and he was
swept up in changes around the lab. In the next few years, the SAGE project took over Lincoln Lab.
This was a government-funded initiative to build a nationwide defense network. The project
encompassed a huge range of technology, spanning from networked radar dishes and fire control to
aerospace tracking. Ross worked on the number crunching and display side of things, basically what
most users would see during day-to-day operations. His code managed running correlations between
curves. Now, that must have felt like a nice culmination of his early career. Going from
hearing about cumbersome analog machines to experiencing them and then implementing a
solution must have felt really satisfying.
Ross's other big contribution to SAGE came in the form of vector analysis.
One of the cooler pieces in this giant government project was actually its user terminals.
Each terminal had a large circular radar screen in its center,
a smattering of buttons and inputs, and a light gun.
An operator could point to the radar printout with the light gun, pull the trigger, and then send the location off to the main computer for use.
The idea was to make marking targets easy.
See a dangerous and suspicious-looking blip on your display?
Just blast it and then let the computer
figure out what's going on. Just as a bit of an aside, the terminals themselves have this
wonderful mid-century sci-fi feel. Everything is steel and gray, and to top everything off,
the case even includes a cigarette lighter and ashtray. So yeah, technology in the 50s, a little bit
different than what we expect today. Roughly speaking, these radar terminals operated
something like a touchscreen. Some computer off in the heart of a sage installation sent an image
to the display. Users gave inputs via light gun. So kinda like touching a screen.
Then the computer processed those inputs and updated the display.
Rinse, repeat, and you get this interactive system.
Ross helped with the software for processing those inputs.
This involved figuring out where the gun was pointing, registering a pull of the trigger,
and handling the actual data that came back.
Operators could use multiple clicks to build up shapes on the display,
so Ross's code got a little bit complicated.
It got into the realm of vector analysis,
that is, figuring out what to do with a series of dots and directions.
Going from clicks on a display to a shape can get surprisingly complicated.
You have to take things like angles of separation and display characteristics into account.
The details are confusing, so let's just leave it at a complicated task that involves graphics.
That is, a complicated task that Ross became deeply familiar with.
Over the first half of the 1950s, Ross went from 0 to 100,
so to speak, moving up one rung at a time until he reached the state of computing art.
And then he helped push it forward. Whirlwind changed him from a pure mathematician to an early computer scientist, and SAGE would push him into a tighter specialization.
In 1956, he got to put those skills to the test.
That year, MIT started up a new government project.
Ross traded in his light gun for a numerically controlled milling machine.
Numeric control is a bit of a strange topic to think about.
It's the NC in CNC machine, but it's also widely applicable to a whole host of devices.
At its core, numeric control is just a technique to control a machine via a series of numeric
inputs.
Most commonly today, we see this type of thing used for automatic mills or 3D printers, basically devices
that turn a series of inputs into motion and eventually create something. If you want to get
a little glib about it, then the first numerically controlled machine was either Bouchon or Jacquard's
automatic loom. These were looms that took a series of cards as inputs, each card described
how to move some strings to weave a pattern,
and, after repeated application, produced a finely patterned textile.
It may be best to think of numeric control as a type of dumb program.
It can't do math, it can't make decisions, it's just a series of step-by-step instructions.
But this only appears simple on the surface.
Moving forward, we're going to be looking at mills as a specific example,
but numeric control can be used for anything.
It's just that mills are one of the more common applications.
A simple mill consists of a flat bed that holds whatever material you want to carve.
An arm is positioned above the bed.
This arm has a spinning bit for cutting.
Then via a series of gears and cams, this arm can move in three dimensions.
X, Y, and Z.
More complicated mills sometimes have moving beds.
Those add another two or three dimensions to the mix.
You operate one of these mills by just sending it a series of instructions.
You can do things like moving the cutting head in all three dimensions at variable speeds,
spin up or stop the bit, and even some fancy mills let you change the bit out while it's
in operation.
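To make that "series of instructions" idea concrete, here's a rough modern sketch in Python. The command names and tuple format are invented for illustration (loosely modeled on modern G-code, not on any 1950s tape format), but the key point matches the description above: the program is just a dumb, ordered list of moves, with no math and no decisions.

```python
# A minimal sketch of numeric control as a "dumb program": just an
# ordered list of motion instructions, no arithmetic, no branching.
# Command names here are hypothetical, for illustration only.

def run_program(program):
    """Walk a list of (command, args) pairs and track the tool head."""
    x, y, z = 0.0, 0.0, 0.0
    spindle_on = False
    log = []
    for command, args in program:
        if command == "MOVE":       # move the head in three dimensions
            x, y, z = args
        elif command == "SPINDLE":  # spin the bit up or down
            spindle_on = args
        log.append((command, (x, y, z), spindle_on))
    return log

# One straight cutting pass: spin up, plunge, cut across, retract, stop.
straight_pass = [
    ("SPINDLE", True),
    ("MOVE", (0.0, 0.0, -1.0)),
    ("MOVE", (10.0, 0.0, -1.0)),
    ("MOVE", (10.0, 0.0, 5.0)),
    ("SPINDLE", False),
]
trace = run_program(straight_pass)
```

Every run of the same tape produces exactly the same sequence of motions, which is the whole appeal for mass production.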
In theory, this means that you just have to work out the proper sequence of instructions
to draw your shape and cut out your part.
But this isn't the world of theory.
An NC mill is a very real device.
It has to adhere to the rules of the real world.
A bit can only spin so fast.
The arm has limits to how it can accelerate. Even
the material you're working with has certain tolerances you have to be aware of. You can't
just instantly bring everything to a full stop and then start moving at full speed in another
direction. You have to take things like turning radius into account. Those are just a few considerations, there's a lot more involved.
Running a mill is a tricky process, so automating it is also fairly tricky. So then why on earth
would anyone want to use an NC mill if they're such a pain to deal with? It all comes down to
mass production and mass reproducibility.
Once someone works up the specific set of instructions to carve out a widget, then you
can make as many exact duplicates as you want.
Each widget will be exactly the same, down to the tooling marks and paths used to make
them.
You can even swap out materials to get the same part.
Want a fancy wooden widget for testing but also need a few made from aluminum or steel?
Easy, just plop down whatever you want to carve up,
and as long as everything's set up properly, the mill will cut out your widget.
The only problem here is, of course, going from your nice, well-drawn-out designs to a massive series of step-by-step instructions encoded as numbers.
This process started with a draft of the part that you wanted to produce.
Basically a blueprint. You have all the dimensions and particulars about the design.
Then some poor, poor person would set to work on the toolpath needed to make the part.
All of this was done by hand.
After that, the instructions were punched onto a reel of paper tape, fed into the mill, and production started.
If there were problems, then the part you were designing would get destroyed, and you'd have to go back to the drawing board.
What I find really interesting here is NC mill operators were facing the same issue as programmers in this era.
At the same time, around the middle of the 1950s, the only real way to program was via machine code or assembly language. Rough plans for programs were sketched
up, slowly converted into bits and bytes, and then fed manually into a computer for debugging.
Once everything was done, you could get a lot of use out of a computer. But the bottleneck was
really programming. For NC Mills, the same bottleneck was generating machine instructions. At Lincoln Lab,
these issues weren't just some theoretical shortcoming. There was a lot of money involved.
The U.S. Air Force had a long-running contract with MIT to develop numeric control systems.
Specifically, the Air Force wanted a way to mass-produce planes and weapons.
The fear of falling behind technologically in the Cold War meant that there was plenty
of federal money to throw at projects like NC Milling.
On the flip side, these fears also meant the military wanted effective solutions as fast
as possible.
It wasn't just a matter of making a better bomber than what the Soviets had.
It was also a matter of making that better bomber first. As Lincoln Lab's work on SAGE
neared completion, Doug's workload started to clear up. He made the transition from SAGE to the
Numeric Control Project sometime in 1956. The overall goal was simple. Find a way to make NC systems easier to use. It may
sound strange at first, but Ross was uniquely prepared for this task. Automatic mills and
radar systems probably don't sound very similar. In general, they aren't, but the
vector processing that Ross dealt with on the SAGE project is
the key here.
The Servomechanisms Lab was already trying to use computers to automate NC machines,
but only in a limited sense.
Programmers in the lab had built up simple libraries for producing snippets of NC instructions.
Imagine a program that spits out the instructions to cut a circle
and you're there. Using these programs like puzzle pieces, a larger part could be formed a little
more quickly, but it didn't solve the full problem. What was needed was some way to simply and easily
describe a part's geometry, then automatically turn that shape into the instructions
needed for the tool's path. That first part is where vectors come in. The lab needed a way to
turn a series of dots into a vector. That's nearly the same code that Ross had written for SAGE.
So vector processing and analysis was an important piece of the puzzle, and it gave
Ross a bit of a foot in the door to tackling the problem.
But that only made up part of the larger issue.
The other puzzle piece didn't come from Sage, but the larger computer science community.
In 1956, the hottest technology on the block was a little thing called automatic programming.
That name should sound strange because we don't use it anymore.
Today, we call that a high-level programming language.
This early and tumultuous era of programming has been something I've covered a lot on the podcast,
so if you want the really fine details, check out the archive.
I'd highly recommend my episodes on Fortran or Jovial as probably the best place to learn more about this
era of programming. Anyway, automatic programming is, exactly as the name suggests, a way to
automate creating code. I honestly like the older name better here since it describes exactly what
the technique does. Using one of these fancy languages like Fortran, for instance, a programmer
can really quickly generate a program. Fortran is a great example because it predates Doug's work,
and it really codified how automatic programming would work. All you have to do is write something like A equals 1 plus 1,
and then the Fortran compiler reads that in,
turns that into a series of machine code instructions,
and lets you run that on the computer.
The human may just write one line of code,
but that can turn into dozens of machine instructions.
It's fast to write, easy to use, and more than anything, it was exciting.
A certain subset of researchers really resonated with this idea,
so they started looking for ways to automate away their particular problems.
Some MITers were even experimenting with the new dark art.
One programmer, Arnold Siegel, had even tried his
hand at applying the technique to NC Mills. As near as I can tell, this would have happened in
1955, while Fortran was still in development and just before Ross joined the project. Siegel's work was
essentially a proof of concept showing that automatic programming techniques could be applied to numeric control.
The central idea here was a new language.
But instead of describing a full program, this language described shapes.
It had all the expected trappings, a very simple math system, variables, and statements.
Siegel's new language let a user describe a part that they wanted to build
in terms of curves and lines. By specifying dimensions and where curves and lines met,
you could build up a shape you wanted to cut. The important piece of software here was a
compiler. Well, I guess compiler is probably the best word we have here, but it's not
exactly right. This program took your code and turned it into
a series of numeric instructions for a mill. So it wasn't compiling code into a binary file in the
usual sense, but it's doing something roughly analogous, so we're sticking with compiler.
This first pass was primitive. It could only handle 2D shapes, and could only define lines and circles.
But here's the big deal.
A user never had to even think about the mill they were programming for.
Siegel's compiler just dealt with that for you.
It not only handled working out the toolpath needed, but it also took the mill's limitations into account.
The program was smart enough to avoid impossibly tight turns and even adjust for bit size. All the
operator had to do was think about what they wanted the part to look like. In that sense,
Siegel's early NC language cut the worst step out of the overall manufacturing process.
But it also added a new possibility.
It was now easy to adjust a part.
Need to double its size?
Just change a few variables and rebuild it.
Putting everything together, the parallels become clear.
Siegel's solution may have had a different application, but it was closely following the same path as other
early programming languages.
When you get down to it, NC automation programs fit really nicely next to the development
of languages like Fortran or COBOL.
That's something that I did not expect to learn when I got into this episode.
But as cool as Siegel's work was, it was only a proof of concept. The language
was terse and limited. In practice, its restriction to 2D made it unusable. But the proof was there.
This could be done. Numeric control could be changed. And manufacturing could be revolutionized.
It just needed a little bit more of a push.
And that brings us up to Doug Ross' arrival on the project,
and really a total change in the project's goals.
Ross and a large team of researchers started in on a totally new language.
They called it Automatically Programmed Tool, simply abbreviated as APT.
In a 1958 progress report,
Doug explained the project's goals like this. The research program must recognize that the disparate characteristics of the human programmer, the general-purpose computer, and the numerically
controlled machine tool cannot be considered one at a time, but must be linked together in an overall system concept. It was
also felt that the most practical way to limit the objectives of the effort without stifling
its vitality was to establish a hierarchy of successive APT systems, each one being characterized
by more and more sophistication in the human input language. End quote. Now, that last part is a little obtuse
at first, but crucial. Ross shifted how the problem was being viewed. It wasn't just a matter
of finding an easy way to program an NC tool. It was a matter of integrating the human, computer,
and mill. One of the crucial parts here was adding the human as a key component.
That was also a big deal with SAGE. In the SAGE program, a human operator and human inputs were
just as important as a radar dish. So really, I think Ross is cribbing from that past experience.
APT had to be usable by people. That was the whole point of an NC programming language.
So how would he get to that goal?
What does Ross mean by a hierarchy of successive systems?
In a word, he's talking about abstraction.
And this cuts to the heart of what made automatic programming such a revolutionary idea,
and why programming languages remain so important today.
In the context of a normal language,
abstraction just means something that keeps you from manually dealing with bits and bytes.
Features like variables save a programmer from addressing memory locations by hand.
Fancy math systems mean a programmer never has to remember which register
on a computer can be used in multiplication and, you know, which ones are only good for addition.
You just have to write 1 plus 1 and the compiler does the rest. Abstraction, when done right,
saves time and headaches. It frees a programmer to do more with less code, and it lets you tackle
totally new problems. Ross was applying abstraction wholesale to NC machine programming, but in a
slightly unusual way. When dealing with Fortran, for instance, you're only ever one step away from
machine code. You write a Fortran program, the compiler does its magic,
and then you have a working executable file. APT works using multiple layers of abstraction.
Each layer has its own number. At the bottom was APT1, the least abstracted and closest to
NC instructions. This level dealt in points, contained vestiges of the old path
libraries, and ran calculations and rule checks. The language itself described where to move the
mill's bit point by point. Compiling an APT1 program gave you the raw numeric control instructions that
a mill could understand. Above that was APT2, and this ended
up being the most used layer, so this is what we're going to spend the most time talking about.
APT2 was all about paths, shapes, and intersections. As should probably be expected by now,
due to the early date and strange application of APT2, it doesn't really look like any
other language I've seen before.
It has assignments and variables, but it doesn't really have math facilities.
No conditionals either.
Really it's a highly simplified language.
The lack of conditionals alone means that it's not even general purpose.
But you see, APT was never meant to be general
purpose. The core part of APT 2 comes down to defining geometries. You get four types of shapes
to build with. Points, lines, circles, and planes. From that, you can build up pretty much any design
you want. It's easy to define any of these shapes.
The syntax is a little weird, but it's simple.
If you want to write a line, just say,
my line equals line slash point one comma point two.
Circles, points, planes are all defined the same way.
Besides the slash, it's not even that counterintuitive. You say which geometry you
want, then give its location and characteristic size. What makes APT2 so powerful and actually
useful for designers is that you can define shapes in relation to other shapes. Now,
this goes beyond saying, oh, a line starts at some predefined point. You can tell APT2 to generate a circle that
intersects with a plane at some point, or a line tangent to a curve. And it's all done pretty close
to English. Well, about as close as you can hope for. This comes down to the usability aspect of
APT2. It wasn't meant for computer scientists, not really at least. In the field,
it would be used by engineers, fabricators, or factory workers, none of which necessarily had
degrees in mathematics or computer science. So the language had to make sense to a large set of
people. Now, bear in mind this is still the 1950s we're talking about, and it's still a programming language.
But APT2 statements are generally readable as plain English.
A line like circle int of line 1 line 2 radius 5 makes a circle at the intersection of two lines with a set radius.
To me, it reads kind of similar to BASIC, where each keyword is close enough to
English that you can actually speak out code. In practice, this made going from blueprints to
NC instruction tape really easy. It must have felt like talking to a computer. You just
explain in words what your part looks like, and then, using a few funny spellings, you're there.
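To make those relational definitions a little more concrete, here's a rough sketch of the idea in modern Python. To be clear, the class and function names below are my own invention, not real APT spellings; this is just a toy model of "define a shape in terms of other shapes."

```python
# A toy sketch of APT2-style relational geometry, in Python.
# Real APT used spellings closer to LINE/P1,P2 and
# CIRCLE/INTOF,LINE1,LINE2,RADIUS,5 -- everything here is invented.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Line:
    # Stored as coefficients of ax + by = c, built from two points.
    a: float
    b: float
    c: float

    @classmethod
    def through(cls, p1: Point, p2: Point) -> "Line":
        a = p2.y - p1.y
        b = p1.x - p2.x
        return cls(a, b, a * p1.x + b * p1.y)

@dataclass
class Circle:
    center: Point
    radius: float

def intersection(l1: Line, l2: Line) -> Point:
    """Solve the 2x2 linear system where the two lines cross."""
    det = l1.a * l2.b - l2.a * l1.b
    if det == 0:
        raise ValueError("lines are parallel")
    x = (l1.c * l2.b - l2.c * l1.b) / det
    y = (l1.a * l2.c - l2.a * l1.c) / det
    return Point(x, y)

# "a line equals line slash point one comma point two"
line1 = Line.through(Point(0, 0), Point(10, 10))
line2 = Line.through(Point(0, 10), Point(10, 0))

# "circle int of line 1 line 2 radius 5" -- a circle defined in
# relation to other shapes: centered where the two lines cross.
circ = Circle(center=intersection(line1, line2), radius=5)
print(circ.center)  # Point(x=5.0, y=5.0)
```

The point isn't the arithmetic, it's that the circle is never given coordinates directly; like in APT2, it's defined entirely by its relationship to shapes you already declared.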
While really cool and really important, geometry only accounts for one part of APT2.
Toolpaths were also defined in the language.
That is, general instructions for how the mill should cut out your part.
Even the fancy software really couldn't figure out how to cut everything
on its own, at least not quite yet. This set of commands, called motion commands, came
after your geometric code. Once again, it followed the same close-to-English syntax
as the rest of APT2. To move the mill's cutting bit along a line, just say, from A go to B. Simple as that.
By using more complex commands, APT2 would generate instructions to cut along curves,
carve out divots, or create any other shape you wanted. All instructions could be given in terms
of definite numbers, or in terms of your predefined geometry, along with the ability to cut defined paths in terms
of tangents, intersections, and really any geometric configuration you can think of.
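The motion side can be sketched the same way. Here's a toy Python model, again with invented names rather than real APT syntax, of a cutter replaying "from A go to B" style commands:

```python
# A toy model of APT2 motion commands: the cutting head starts
# somewhere, and each command moves it to a new position. The
# function name and command format are my own, not real APT.

def run_toolpath(start, commands):
    """Replay GOTO-style commands, returning every position visited."""
    position = start
    path = [position]
    for target in commands:
        position = target  # "from A go to B"
        path.append(position)
    return path

# Cut along three sides of a unit square, starting at the origin.
path = run_toolpath((0, 0), [(1, 0), (1, 1), (0, 1)])
print(path)  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

In the real system the targets wouldn't just be raw coordinates — they'd be the predefined lines, circles, and tangent points from the geometry section.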
The final piece of the language, the glue that makes everything work, is a set of auxiliary
commands.
This is where you specify things about your NC setup, what type of tool you wanted to
control, the size of the cutting bit, how fast you wanted
to move the machine's head. These instructions come right in the middle of a normal APT2 program.
It's after your geometry is defined, but before the tool path information. Essentially, each
command dictates how APT interprets paths and shapes. If you have a half-inch cutting bit,
then the path needs to be adjusted to give proper clearance.
If your movement speed, called the feed rate in APT, is really high, then the system needs to take corners a little bit wider.
It's a concise way to define a rule set that has to be observed for a successful cut.
Now, here's something that's easy to miss.
APT does a lot of things implicitly,
and I don't just mean in terms of variables or definitions. This gets into one of the places where Ross couldn't abstract away everything. And, well, it's something I find really fascinating.
But it's a little technical. Ultimately, an APT2 program was controlling
some type of numerically controlled device. I've been sticking with mills as our example,
but it could be a drill press, a lathe, really any smart tool. Those are physical machines.
Any operation on them will spin up a motor, move something, or make something stop moving.
Now, if you want to get theoretical here for a second, which I do, so we're going to get theoretical here for a second,
we can call an NC mill a finite state machine.
It has a finite set of states it can be in, described by the XYZ location of its cutting head and a few other simple parameters.
It moves between those states in discrete steps or operations.
So we're looking at a state machine.
This is the weird part.
APT2 is used to program that state machine, but you can't always directly control its state. The APT system
handles some of the automation and rules for you. That means that at every line of code, at least
while defining paths, there's some implicit state. You can't easily access that state within APT2,
it's just somewhere deeper in the system. This means that order of operation
in APT2 is exceedingly important. Each path command puts the NC mill's cutting head at some
location. It moves the machine into some state. So you have to be careful your commands are in the
right order or you'll wind up with some wild and unexpected
state. It's a great example of how underlying hardware, no matter how far removed, impacts
the language heavily. All right, so that's the language in general and my little digression
about state machines. But there's also the implementation side of things. That's how APT was used in the lab.
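Before moving on, that order-of-operations point can be made concrete. Here's a toy Python sketch — invented names and commands, not real APT — of how implicit state means the same commands, reordered, cut a different part:

```python
# Sketch of why command order matters on an NC machine: the mill
# carries implicit state (here, whether the bit is lowered into the
# stock), so reordering identical commands changes what gets cut.

def simulate(commands):
    lowered = False      # implicit state: is the bit in the stock?
    position = (0, 0)
    cuts = []            # segments actually carved into the part
    for cmd, *args in commands:
        if cmd == "plunge":
            lowered = True
        elif cmd == "retract":
            lowered = False
        elif cmd == "goto":
            target = tuple(args)
            if lowered:
                cuts.append((position, target))
            position = target
    return cuts

good = [("plunge",), ("goto", 1, 0), ("retract",)]
bad  = [("goto", 1, 0), ("plunge",), ("retract",)]
print(simulate(good))  # [((0, 0), (1, 0))] -- one segment cut
print(simulate(bad))   # [] -- same commands, nothing cut
```

Each command moves the machine into a new state, and that state silently shapes every command after it — exactly the hardware leakage described above.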
There was an APT3, but in practice, APT2 was the workhorse.
To go from code to NC instructions, a user first compiled their APT source code into APT1.
Then, that APT1 program could be converted into machine instructions for actual use.
Once again, that's not how most programming languages work.
In the world of Fortran, you just compile your code once,
the compiler spits out an executable file, and the process is over.
So why does APT use this multi-step process?
The technical reason comes down to debugging.
By producing the intermediary
code, a programmer could see what the compiler was doing before the process was complete.
It was a way to catch bugs and correct for systematic mistakes. Ross and his team even
worked up a suite of debugging programs for use in this process. The coolest, and the one most
connected to Doug's SAGE days, was a path visualizer. This program took a final toolpath and then plotted it in 3D
space on a fancy vector screen. So yeah, tucked away inside a CNC programming package were
some cool 3D wireframe graphics. A savvy operator could see a simulation of the part they were designing
before they even turned on a mill.
It's a touch that made APT as a system all the more approachable and really usable.
There is one other reason that APT employed this weird two-step shuffle.
Now, I didn't see this explicitly stated in any of Ross's writings,
so bear with me for a moment while we step into the realm of conjecture and educated guess.
The intermediary output of APT1 code made the compiler more transparent. A programmer could
actually look inside and see what the compiler was doing.
Sure, it wasn't a full view, but it showed the workings of a really big step in the process.
Here's the dig.
As cool as programming languages were, many programmers were still resistant to their adoption.
This is something that comes up whenever I look into programming in the 50s and early 60s.
Thomas Kurtz, the co-creator of BASIC, is one great example. He flat out refused to learn Fortran at first. Kurtz was convinced it would be slow and inefficient, a wasted exercise in his
mind. He didn't change his mind until he was stuck on an assembly language program and decided to give Fortran a try.
After that, he became a devotee.
Performance wasn't the only issue in the minds of programmers.
A high-level language abstracts away a lot of the mind-numbing minutiae of programming, but it also takes away a lot of control.
You simply have to trust that your code will be turned into some
reasonable blob of binary data. This is where Ross and APT come into play. APT1 is verbose,
but it is human-readable. If a programmer has concerns about the APT2 compiler, then
they can just pop open the output file and have a look around.
It's the kind of design choice that preempts a lot of questions and concerns. Generating APT1 code is just part of the process. It's a way to open the doors and help a resistant programmer really grow
some confidence in Ross's work. For Ross and the sizable team he managed, APT would smash any expectations. Development would continue basically indefinitely, both inside MIT and in the larger field, making the
computer a more central part of the factory. Even today, in 2021, APT and its derivatives are still in use. It has multiple descendants, multiple standardized versions, and widespread
industry support. But this isn't the end of the episode. You see, APT is just the preamble to our
main course. We finally arrive back where this episode started. 1961
marked 100 years since the founding of MIT. Among other festivities planned were a series of films
produced by CBS to celebrate the latest and greatest research going on at the university.
The idea was simple, have some hosts part the academic veil around MIT, show off some
cool demos, explain some new research, and throw in a few scientists talking about their work.
We're basically dealing with a really fancy, high-budget open house on reel-to-reel film.
Now, here comes the frustrating part for me. Supposedly, this was a series of videos called
CBS's Tomorrow. I can only find one part of the series. In fact, I can only find CBS's Tomorrow
referenced in a single video. So, maybe it was a series, maybe it was just one film,
I'm not entirely sure. My guess is there's some other reels tucked away in
archives that have yet to be digitized. However, the single video I was able to find was just
enough to send me down a really weird rabbit hole. The video is titled The Thinking Machine,
and it's all about MIT's computer-related research. As a modern observer, The Thinking Machine is just plain
strange. This episode started with a snippet, but here's another one just to give you a taste.
With me tonight is Professor Jerome B. Wiesner, Director of the Research Laboratory of Electronics
at MIT. Dr. Wiesner, what really worries me today is what's going to happen to us if machines can think.
And what interests me specifically is, can they?
Well, that's a very hard question to answer.
If you'd asked me that question just a few years ago, I'd have said it was very far-fetched.
And today, I just have to admit, I don't really know.
I suspect if you come back in four or five years, I'll say, sure, they really do think.
Well, if you're confused, Doctor, how do you think I feel? So, from the jump, it's clear there's a narrative going on here.
The film has two hosts, the actor David Wayne and MIT's Dr. Jerome Wiesner.
The duo sit in a pretty comfortable-looking 1950s study, smoking cigars and drinking for the whole
hour-long runtime. Wayne is confused and concerned about the imminent rise of artificial intelligence
and computing in general. Wiesner tries to answer Wayne's questions and assuage his fears,
of course with the help of a series of pre-recorded demos.
To me, it feels like CBS is trying to ride this thin line
between sensationalism and actually showing modern research.
So maybe I should revise things a little.
It's like an open house with more
restrictions. Each demo has to fit into the overarching narrative of progress towards AI
that is ultimately non-threatening. The demo can't be overly technical or have any kind of
interactivity. And more than anything, they have to be visual. It has to fit the medium of a film.
We don't have a lot of information about how The Thinking Machine was planned and produced.
My guess is MIT put out a call to researchers asking for possible demos or projects to film.
Among the scientists to take up this challenge were Doug Ross and Harrison Morse, another programmer at the college.
Their submission was a program called SAGA, referred to in some documents as SAGA II.
It gets confusing.
It's unclear if SAGA was actually under development prior to the production of The Thinking Machine.
I found a memo that describes the project from late 1960, a little
under a year prior to the documentary's debut, so it's a distinct possibility, but this whole
phase of development is unclear. With that aside, what exactly is Saga? Well, simply put,
it's a program that could automatically generate western screenplays. Punch in a few parameters, run Saga, and out the other side came a printout.
This is 1960s computing technology, so we aren't dealing with a very complicated script,
but it's still a feat to behold.
Theoretically speaking, Saga can produce an infinite number of unique screenplays,
but they all follow a rigid pattern.
There are only two characters, the bandit and the sheriff. The overall structure is also highly
simplified. The bandit is on the run from the sheriff after stealing some cash. The sheriff
eventually catches up to the bandit. A shootout may ensue with one party victorious. The output itself was also
simplistic. It reads like a series of curt instructions. One of the printouts shown in
the thinking machine starts as, quote, the robber is at the window. Go to door. Open door. Go through door. Close door. Go to corner. Put money down at corner.
So it's not very nuanced, but Saga did produce reasonable plays. That is, the scripts were even
good enough that they could be understood by humans. These scripts were also good enough to
be turned into short films. CBS's documentary shows
three examples, each carried out by two actors in full western costume. There's some interpretation,
but by and large, they're just following what the computer tells them to do. In one scene,
the sheriff wins a shootout. In another, the robber escapes. A final scene rounds things out by demonstrating an infinite loop.
That last scene is more interesting thematically than anything.
Something to show that computer intelligence is just funny.
There's nothing dangerous there.
It's a spectacle, to be sure.
It's the kind of thing that's great in a TV documentary.
But the details.
That's where I think Saga gets really interesting.
My ham-fisted delivery may have given it away early, but functionally, how Saga works is
shockingly close to APT. So what do simulated Wild West plays have to do with numeric control?
The most important factor comes down to how Ross and
Morse modeled their Western play. I know it feels a little weird to talk about a screenplay
as a model, but that's a totally valid way to look at it. A 1960 memo describing the program
makes this really clear. They're simulating a Western as a simple state machine. I think this is a really smart
approach. A narrative, really any narrative, plays out as a series of steps. Sentences,
actions, or what have you. After each step, the overall state of the narrative changes.
Each future step is somehow impacted by this changed state. If a sheriff drops their gun, then they can't fire until they pick it up, for instance.
Instead of the NC machine analogy with its XYZ coordinates,
Saga has a more niche set of variables that describe its current state.
In all, there are 16 general state variables, plus a few more that are baked into the code.
These range from how far away the sheriff is from the robber, to whether the sheriff can see the robber, or the robber the sheriff, whether either character has been shot, and, most importantly, a quote-unquote inebriation factor.
That's right, in Saga, one of the possible outcomes is that the robber or sheriff gets
too drunk to shoot straight.
The other components that make Saga tick are so-called switches.
Now, I get why Ross called these switches, but I think a more reasonable term would be
conditional operations.
These are the operations that act on the state machine, changing the overall state and
moving the story forward. Switches are also where Saga differs slightly from APT. In that system,
each statement is deterministic. That is, typing GOTO11 will always move the mill's cutter to the
same position. That's how a programming language should be.
But Saga isn't a programming language. It's a semi-intelligent automatic playwright.
You don't give it instructions on how to make a western, it just figures things out on its own.
And it does so using a random number generator and a table of weighted outcomes to operate these switches.
Each switch represents a possible action a character in the play can take.
Saga alternates turns between the sheriff and the robber. On each turn, it decides which
operation to run, and this is based on probability. The total probability of any outcome is calculated as a combination of a constant
called an A-factor and Saga's current set of state variables. These so-called A-factors are
just constants set at runtime. So if Ross or Morse wanted to, they could make a trigger-happy robber
or a drunk sheriff very easily. Factoring in the state variables prevents
impossible actions. This means that for some states, certain operations can't occur, certain
switches can't be taken. For instance, if the sheriff can't see the bandit, then the chance of
placing a shot is set to zero. Ross and Morse programmed a series of switches for each action
that could occur in the world
of Saga.
That makes up the whole set of possible operations.
Then with careful selection, a pack of A-factors were calculated.
Together these two components created a full rule set.
They prevented Saga from doing something that would be impossible in a Western setting.
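As a rough illustration of how switches and A-factors might combine, here's a toy Python sketch. The action names, weights, and gating rules are all invented for illustration — the real program used 16 state variables and its own hand-tuned tables:

```python
import random

# Toy sketch of a Saga-style "switch" system: each possible action
# has a constant A-factor, and the current state variables can gate
# an action's probability, all the way down to zero for impossible
# moves. Everything here is illustrative, not Saga's actual rules.

def pick_action(state, a_factors, rng):
    weights = {}
    for action, a in a_factors.items():
        # State gating: you can't shoot what you can't see, and a
        # drunk character is less likely to get a shot off.
        if action == "shoot" and not state["can_see_opponent"]:
            weights[action] = 0.0
        elif action == "shoot":
            weights[action] = a * (1.0 - state["inebriation"])
        else:
            weights[action] = a
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions])[0]

rng = random.Random(1960)
a_factors = {"shoot": 5.0, "move": 2.0, "draw_gun": 1.0}

# With the opponent out of sight, "shoot" can never be chosen, no
# matter how trigger-happy the A-factor makes this character.
state = {"can_see_opponent": False, "inebriation": 0.3}
picks = {pick_action(state, a_factors, rng) for _ in range(200)}
print("shoot" in picks)  # False
```

The weighted random draw is what separates this from APT's deterministic statements: the rule set constrains what can happen, but chance decides what does.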
What's under the hood is a little different,
but this kind of rule set also exists within APT. That system's smart enough to prevent
tight turns or impossible NC mill maneuvers. Saga is smart enough to prevent reality-breaking
scenarios. Now, throughout this, I've been avoiding calling Saga artificial intelligence.
The program does appear to show some kind of intelligence and even creativity in a limited sense.
There's something else at play here, though.
In The Thinking Machine, Doug Ross gives a short explanation of the Saga project.
In that, he says the project's goals were to show that, quote,
intelligent behavior is rule-obeying behavior.
Saga isn't some artificial intelligence that likes to write Western plays.
It has an element of trickery to it, but it demonstrates one of the many steps made towards AI.
Saga is more than a flashy demo.
It shows how existing programming knowledge in the 60s was being leveraged to
pursue AI.
In the latter half of the 50s, AI was already taking shape.
A new field was forming.
Saga came from a slightly different part of computing, but it had a similar outward appearance.
Alright, that does it for this episode.
Saga turned out to be one of the more strange histories I've looked into thus far.
From its presentation in The Thinking Machine, Saga is definitely sensationalized.
It's a program that can write TV westerns, even display creativity,
but scratching below the surface
takes us to a more interesting story. Saga looks like AI, but it didn't come from the same
background as other AI projects. It was built by programmers who were used to working with
industrial manufacturing machines and vector drawing routines. The internal tooling, what
made Saga look like
an intelligent program, well, that was remarkably similar to tools that Doug Ross developed for
automating numeric control. In that sense, Saga writes plays in almost the same way that APT
carves out a part. Ultimately, Saga's most high-profile appearance was in The Thinking Machine.
It didn't spread campus to campus like some influential programs of the time,
and it didn't spark a larger project.
My best guess, and once again this is conjecture so take it with a grain of salt,
is Doug Ross viewed Saga as a fun side project.
He was working on more serious programs. APT was spreading
quickly within research and industry. In the years following Saga, Ross became crucial in the
development of computer-aided design, also known as CAD. The work he contributed to in the 50s and 60s revolutionized manufacturing.
Saga wasn't a revolution, but it was a fascinating byproduct of the same technology.
The bottom line is this.
The next time that you find yourself watching a Western, remember,
it's really just a dressed-up CNC machine.
Thanks for listening to Advent of Computing. I'll be back soon with another piece of
the story of computing. And hey, if you like the show, there are now a few ways you can support it.
If you know someone else who would be interested in the show, then why not take a minute to share
it with them? You can also rate and review on Apple Podcasts. And if you want to be a super fan, you can
now support the show directly through Advent of Computing merch
or by signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
Right now I'm working on a patron-voted episode
on the strange connection between
the supposed Majestic 12 conspiracy
and Vannevar Bush,
so if you want to get in on that, you can head over to Patreon and sign up.
Links to everything are on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.