Advent of Computing - Episode 87 - The ILLIAC Suite
Episode Date: July 24, 2022

Can a computer be creative? Can we program a machine to make art? It turns out the answer is yes, and it doesn't even take artificial intelligence. This episode we are diving into the ILLIAC Suite... a piece for string quartet that was composed by a computer. Along the way we will examine the Markov Chain Monte Carlo method, and how methods used to create the hydrogen bomb were adapted to create music.

Selected Sources:

https://archive.org/details/experimentalmusi0000hill/page/n5/mode/1up - Experimental Music

https://web.archive.org/web/20171107072033/http://www.computing-conference.ugent.be/file/12 - Algorhythmic Listening (page 40)

https://www.youtube.com/playlist?list=PLEb-H1Xb9XcIyrrN5qauFr2KAolSbPi0c - The ILLIAC Suite, in 4 parts
Transcript
There are many different reasons that I find digital computing so interesting.
I'd say the biggest single one comes down to flexibility.
Now, I don't mean flexibility of certain hardware or flexibility of specific software.
I mean the flexibility of the idea of digital computing itself.
Take the universal Turing machine, for instance. I know, I harped
on it last episode, but hey, we're always close to theoryville over here. Essentially, there's a
whole class of digital machines that are sufficiently complicated that they can do the task of any other
computer. That level of sophistication is actually a pretty low bar. So, in practice, a computer can
do just about anything. That's how we get cool but mundane applications. I'm talking mathematics,
spreadsheets, networking, you know, the kind of stuff that holds the world together but
doesn't necessarily spark the imagination of all who find it.
Most of the use cases for computers fit very well within these comfy boundaries.
It's once we get out towards the borderlands that things really start to get cool.
Can a universal Turing machine be creative?
Can a computer break into the world of art? Well,
here's the fascinating part. Here's a wonderful example of why I find digital technology so
interesting. You can program a computer for creativity. It's actually a fairly easy task.
The trick is to borrow from more mundane applications.
For instance, some of the first computer-generated music was adapted from chemistry simulation
software. Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 87, The ILLIAC Suite.
Today, we're going to be dealing almost purely in the realm of music, math, and how computers can tie the two together.
But before that, I want to squeeze in a quick announcement.
I mentioned this on the last show, and I've been pushing it around on all my information
channels, so to speak.
But I'm moving forward with Notes on Computer History.
That's this new publication that I'm trying to get off the ground.
It's community-driven, and it's based around the history of computing.
And crucially, this is the big one, I want to invite everyone to contribute to it.
The idea here is that I want to make Notes on Computer History a space for anyone who wants
to write about the history of computers. A space where anyone can contribute regardless of if
you're an academic, a hobbyist, or just someone who's fascinated by computing. I also now have a website set up for the project.
It's the wonderful domain history.computer.
I will admit I'm a little rusty on that side of web tech, so I didn't realize that .computer
is a top-level domain now, but hey, I think it's pretty neat.
So go over to history.computer right now, there's
no .com needed, and read about the Notes on Computer History publication that I'm working on,
and please do submit. I've gotten a handful of submissions already, and people who have said
they will write for the publication. So we are getting somewhere, but we need more submissions for the
first issue. So just hop on over and write about anything in the field of comp history that you
find interesting. With that out of the way, let's start to ease into the episode. But also before
we get started, I do want to throw in a bit of a clarification. I do enjoy music myself.
I actually play a number of instruments.
But I'm not planning to play anything on the show today.
I know, I'm kind of ruining a perfect opportunity.
I was actually planning to play a section of the ILLIAC Suite,
but the specifics of the topic kinda ruined that idea.
The ILLIAC Suite is a very long composition for string quartet. The issue here for me specifically
is one of compatibility, so to speak. I am, very notably you could say, one person.
I also don't play string instruments, I play brass.
If you're not familiar, there is a wide swath of difference between the two.
So I can't play the suite for myself, but we will be listening to a few excerpts from the actual composition near the end of the episode.
So anyway, let's talk brass
tacks for a second. What exactly is the ILLIAC Suite? Simply put, it's a composition that was
written in 1956 by a computer, specifically by ILLIAC, a vacuum tube machine built and operated
at the University of Illinois.
The piece was subsequently performed by a string quartet on a number of occasions.
This is one of those notable attempts to apply a computer to a very creative pursuit,
a very human-specific kind of task.
It's also important to note that the ILLIAC Suite wasn't written using artificial intelligence.
We aren't talking about a program that learned how to write music or examined hundreds of pieces,
but instead algorithmically generated music.
It's a distinction that will become important as we continue on.
Most of all, the ILLIAC suite is just neat. There's something satisfying about seeing, or hearing in this case, a machine replicate
a very human task.
There's also something a little scary about that, but we're going to focus on the positives
here.
Advent of Computing is, in most cases, a very uplifting and positive program, I like to
believe.
So that leads us to the big question for this episode.
How can you make a computer act creative?
Music, especially the composition of music, requires a lot of creativity.
It requires a sense of aesthetic.
For something to sound nice to the human ear, its composer needs, well,
a human ear, right? Maybe not. The ILLIAC Suite offers an interesting case study of how careful
analysis can render a seemingly impossible problem eminently solvable. But, of course,
there are some caveats. So are we dealing with a secret digital Mozart? Or perhaps something closer to smoke and mirrors? I want to start by getting
us into the right frame of mind. Music is such an emotional and really experiential thing that
I feel compelled to throw in some set dressing. We're going back to an era that
I usually characterize with one special word: the. During the dawn of digital machines,
computers often took this definite article. Programmers would say something like, oh,
I work on the ENIAC or the ILLIAC. Machines were, for the most part, unique.
Each had their own way of doing things.
Each was incompatible, with some notable exceptions.
Just as a very quick side note I'm throwing in here,
ILLIAC actually had a few roughly compatible machines out there,
but that's a separate topic for another time.
I think the most exciting part here is that each machine must have had its own feel to it.
Now, I don't just mean technically, I mean a real sense of place. Take ILLIAC, for instance,
since that's going to be the setting that we'll be in this entire episode. ILLIAC was
built in 1952 at the University of Illinois. For its entire operating life, which lasted a whole
10 years, it occupied the same physical location on the U of I campus. These early machines also
had a bit of lineage to them. The ILLIAC-1, the first of a larger series of
somewhat similar machines, was based on a draft designed by John von Neumann. His draft, in turn,
was based off plans for a computer called EDVAC. This consistent setting, long life,
and close network of predecessors must have formed a real sense of identity for a computer like ILLIAC.
You're not just sitting down with a random desktop.
You're sitting down with the ILLIAC.
The computer has papers.
So when we're talking about ILLIAC,
I want you to try and place yourself in this setting.
In a high-ceilinged room, the air
is probably a little colder than comfortable due to the machine's cooling system. You
may see a faint glow from ILLIAC's vacuum tubes, especially if the lights are off at
night. You may hear a faint hum. It was in this room in 1955 that Lejaren Hiller and
Leonard Isaacson, two chemists, started teaching a computer how to
compose music. Of course, this wasn't the first attempt at non-human music. In fact, it could be
argued that computers by their very nature are musical. Now, I know that might be a little
surprising and also might sound a little pretentious, the idea of computers
playing or writing music or being fundamentally musical, but maybe there is a deeper connection
here. Recently, I read a fascinating article by Miyazaki Shintaro called Algorhythmic Listening,
1949-1962: Auditory Practices of Early Mainframe Computing. The word algorhythmic here is a pun that doesn't really work that well in audio form.
Somewhat ironically.
Miyazaki spells the last part of the word as rhythmic, as in keeping a beat or cadence.
Computers are kind of predisposed to a certain musicality. I think
many of us are at least a little familiar with this idea. Digital machines tend to make a lot
of sounds, from beeps and boops on down to humming transformers. On older machines, you could even
tell that the hard drive was in use thanks to the
clicks and clacks made by the head scanning over the disk's platters. Don't even get me
started about floppy drives, those are a totally different cacophonous beast.
Miyazaki argues that this connection between rhythmic sound and computers goes even further back. As he explains, computers have always made
sound. I know, not a huge revelation, but check out where this goes. A computer is supposed to
be this predictable, number-crunching beast. It should always act the same. It should be
deterministic. A big part of programming and debugging really
comes down to recognizing patterns in the machine, making sure your code has the expected outputs,
takes the expected steps, that sort of thing. Back in the day, as in the 1940s and 50s,
we didn't really have as many debugging tools. So some absolute mad lads would just listen to the machine for issues.
In some cases, you could actually just hear the hums and clunks of the components.
This was mostly true on early relay-based computers.
But this could go further.
One example that Miyazaki draws from is a computer called
BINAC. An operator figured out that the machine's circuits produced a radio signal. So if a radio
was placed nearby and turned to the right frequency, you could hear packets of data
running through BINAC's circuits. Careful listening could reveal what the computer was doing. Miyazaki goes on
to drop more examples. On some machines, like Univac-1, actual speakers could be connected
up to certain wires. An operator that was in tune with the computer could literally hear problems
as they occurred. The expected patterns, the rhythms of the machine, would be slightly off.
You'd normally hear one set of beeps and boops, but if something went wrong, you'd be able to
notice that the beeps were in the wrong place. The reason for these rhythms is simple. From their
earliest days, electronic digital computers were designed around clock circuits. That is, a circuit that pulses at
a predetermined rate. You could call it a beat that plays to a well-defined rhythm, or you could
call it a computerized heartbeat. These clock circuits are important because most digital
computers are serial machines, as in, they run instructions one after another
after another.
There may be some parallelism deeper down in the nest of wires, but fundamentally, each
instruction is run in series.
That's a core tenet that makes things like programming languages possible.
You have to have some concept of order of operation, and that's
maintained by following the beat of a clock, by playing to a rhythm. Of course, these so-called
algorhythms are really just incidentally musical. It's like how the sound of shoes on pavement
technically makes a beat. What about purposeful musical sounds produced by computers? Well,
those would grow out of two separate traditions. The earlier form comes from these debugging
beepers. During the early 50s, a group of researchers working on the CSIR computer in
Australia started making really basic tunes. That machine had a speaker connected
up for sending out debug pulses. There was even a bit of a special instruction for buzzing the
beeper. The instruction name is lovely. It was called Hoot. By hooting, a programmer could be
alerted that something was going on. But this was a little
more complicated than just hoot and done. This instruction sent a single pulse to the speaker,
so in practice, you had to hoot a few times in a row to make an auditory buzz.
Once again, this drops us right into the usual programmer tropes. We tend to be curious folk, and we like
to abuse cool hardware and software. Never show us a closed door, we'll always try to break into it.
Programmers on CSIR realized they could play specific tones by hooting at proper intervals.
Soon, they were playing scales. Very soon after that, they were playing simple monophonic
tunes. On the other end of the spectrum, we hit Bell Labs. During the middle part of the 50s,
researchers at Bell were experimenting with computer-generated waveforms. Using a combination
of custom hardware and software, it's possible to replicate sounds
via computer.
You actually build up the physical sound waves and then play them back.
This type of research would eventually lead to full-on computerized digital synthesizers.
However, there was a problem early on.
Computers back in this period weren't very fast.
They definitely weren't fast enough to
generate complex audio waveforms and then somehow play them back in real time. Bell's solution was
to pre-generate waveforms, store them in an intermediate format, and then play them back
later. This got around the speed issue, but made this form of computer music a little indirect.
You couldn't play the computer like an instrument, for example. Both of these traditions are interesting in their own
right, but we won't dwell on them much longer. Take this as a bit of background as we move into
a discussion of a third approach. You see, there is a limiting factor with Bell's waveform generation methods.
The same goes for CSIR's hooting ways.
Both of these approaches were seeking to use computers as a kind of overblown instrument.
They were playing existing songs or taking direct instruction from humans.
They were all directed by the human ear. But what if we flipped
this? Instead of a human telling a computer what to play, what notes, why not have a computer tell
a human what to play? Is that even possible? In 1955, that was still an open question,
one that Hiller and Isaacson would soon solve. Now, a quick digression to talk
about music. I just want to drop some quick fundamentals here so we're all on the same page
moving forward. A musical composition is usually expressed as a score, sometimes just called sheet
music. Just think sheets of paper with wacky lines and dots everywhere. More specifically,
a score will have parts for multiple instruments expressed in a standardized notation. Scores are
of particular interest because, in general, music is a communal activity. You don't often play in
total isolation. This is especially true when playing instruments that can only produce one note at a time,
so-called monophonic instruments.
That's your horns, reeds, and certain string instruments.
A score gives you a particular way to notate polyphony with multiple instruments,
as in, get the trumpet to play a melody and have a trombone
that harmonizes with it. You end up getting more than one note at a time, which allows for more
complex sound. Now, here's where we can start our trek into the more technical realm. Sheet music
is really just a type of encoding. However, it's more subtle than something like ASCII or binary.
Musical notation allows for the encoding of at least four types of data.
Tonal data, rhythmic data, dynamics, and playing instructions.
Tonal data is just which notes to play.
Notes are placed on a set of lines called the staff, and
their vertical location encodes their pitch. Higher on the staff corresponds to higher pitches.
Each note is a circle connected to a vertical line, the tail. That tail's shape, plus if the
note's body is filled in or left empty, is used to encode rhythm. Different tails correspond to how many fractions
of a bar each note should be played for, or another way of looking at it is how many beats
a note should be played for, or how many fractions of a beat. Rhythm's a little bit confusing.
Dynamics and playing instructions are slightly less formalized. Both are a type of marginalia that
accent the rest of the score. Dynamics tell the musician how loudly or quietly they should be
playing. Playing instructions include things like where to breathe or if a note should be accented
in a specific way. There are other types of marginal notation and flow control stuff, but let's just stick with
these four main data streams. Using this encoding scheme, one can describe anything from Beethoven's
symphonies to the laid-back sounds of Coltrane. It's a very robust schema that's been in use for
centuries at this point. But what we've looked at here is really just the
syntax, the nuts and bolts of how music is encoded. While you can express wonderful music this way,
it's actually easier to encode garbage. And I'm not just talking about lame songs, I mean
actual nonsense sounds that, well, sound bad. There's a sense of aesthetic that goes into any data encoding,
and that's especially true when it comes to scoring music. It would be trivial to write
a program that randomly places notes and dynamics on a staff. The result, however,
would be on par with a page full of random letters. It may technically be music,
but just barely. In order to write music, you need a sense of the semantics involved.
You have to actually know how to speak the language, not just how it's written.
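To put that syntax-versus-semantics idea in programmer terms, here's a tiny Python sketch of what a single note carries. The field names and values are my own invention, just the four data streams expressed as a record, not anything pulled from an actual scoring system.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: str         # tonal data: which note to play, e.g. "C4"
    beats: float       # rhythmic data: how long to hold it, in beats
    dynamic: str       # dynamics: how loud, e.g. "p", "mf", "f"
    marks: tuple = ()  # playing instructions: accents, breath marks, etc.

# One bar of a melody: syntactically valid, aesthetically unchecked.
bar = [
    Note("C4", 1.0, "mf"),
    Note("E4", 1.0, "mf", ("accent",)),
    Note("G4", 2.0, "f"),
]
```

Nothing in that structure stops you from filling it with garbage, which is exactly the point.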
This is where we come back around to the main thread today. While the Iliac Suite was a team
effort, I'm going to be mainly focusing on Hiller.
The reason for this comes down to sourcing. I just don't have the same amount of resources
on Isaacson. What we do know about Hiller is more than enough to flesh out the story.
The primary source that we're dealing with here is a book titled Experimental Music,
Composition with an Electronic Computer.
It was written by both Hiller and Isaacson.
To that, we can add a number of interviews that were conducted with Hiller over the years.
Experimental Music is of particular importance because it acts as a primer to music theory and history
while also covering the actual development of the ILLIAC Suite. However,
it doesn't go into details on the author's backgrounds. This background is important
because it informed the approach outlined in Experimental Music. You see, this is a really
dense book. There are actual subsections explaining, in very mechanical detail, how music is supposed to make you feel.
It's a fascinating read, but also one of those texts that sucks some of the wonder out of its
subject. This academic rigor comes from the author's roots. They were both chemists. Both
were professors of chemistry at the University of Illinois. Both had PhDs in the subject.
These are the alpha nerds, so to speak.
During an interview, Hiller gives this short synopsis of how the suite came to be.
Quote,
Leonard Isaacson and I did this ILLIAC suite completely as a bootleg job at night on the ILLIAC-1.
The programming came about because I actually adapted some of the
rubber molecule programming to the writing of Counterpoint. In other words, I had an idea
one day when I was hanging around the chemistry lab just doing I don't know what, when I thought,
well, you know, if I change the geometrical design of this random flight program I've written,
which had gotten quite complicated, changed the parameters,
the boundary conditions, so to speak, I can make the boundary conditions strict counterpoint
instead of tetrahedral carbon bonds, end quote. While this may not be the most comprehensive or
good explanation, I think it gives us a starting point. It gives us some details to pick apart. It also helps us
explain something about Hiller. He was fascinated by music. By trade, Hiller was a chemist. That's
what his primary degree was in. After earning his PhD, Hiller worked at DuPont Chemicals for years,
only leaving in 1952. After DuPont, he signed on as a professor with the University of Illinois.
Throughout this whole series of events, Hiller was looking for a musical outlet. He had learned
to play music in college, even taking a few classes covering the theory side of things.
While working at DuPont, Hiller was composing music. Then, once at the University of Illinois,
he actually would earn a master's degree in music.
The whole time, he had been trying to break into the contemporary scene as a composer.
But, as he describes it, this was a frustrating process. No one would see a chemist as a serious
musician. Now, there is the rubber software to get out of the way.
From Hiller's quote, this may sound like a throwaway line, but this appears to be a pretty consistent story.
Another mention shows up in Experimental Music, this time with a citation.
So, we do know specifics here.
In 1953, doctors Wall and Hiller published a spicy paper titled
Statistical Computation of Mean Dimensions of Macromolecules.
It's not just a chemistry paper, it's a computational chemistry paper.
Perhaps the nastiest genre possible.
The paper discusses the composition of polymers, as in long chains of
similarly structured chunks of molecules. As a neat trivia word, these smaller chunks that make
up polymers are called monomers. Polymers are interesting compounds for a number of reasons.
They can form strong yet flexible compounds,
they can be engineered in the lab, and they are somewhat chaotic. Some simple molecules
always take the same physical form, as in the molecule has the same shape, but that's only the
case for a select few compounds. It's more common for a molecule to have multiple possible shapes,
each called a conformation. Each conformation is still the same molecule. It still has the
same chemical structure. It's just that their physical shapes are slightly different.
Polymers are able to take on a near limitless number of conformations since the bonds between
monomers can come in a number of possible angles.
They can also be any length. You can have any number of monomers. It all depends on really
complicated physics. You basically get a number of possible bond angles for each monomer.
But the result is that polymer strands can form these long, tangled, chaotic messes.
Many properties of the final material are determined by
the properties of these long polymer chains. So it's important to have some idea about how
polymers are shaped. But the actual conformations of polymers are basically random. At least,
nearly random. Back in the day, it would have been impossible to profile
polymer conformations. That is, until we reach the computing era. Statistical computation of
mean dimensions of macromolecules, despite the mouthful of the name, is actually a straightforward
paper. Wall and Hiller describe a program that can simulate the growth of polymer chains.
This program allowed the authors to generate statistical profiles for polymers
given a set of starting conditions.
The program itself is a restricted random walk, at least that's the term Wall and Hiller use.
More commonly, or more precisely, you'd call this the Monte Carlo
method. How does this method lead to music? Well, it all comes down to math. The Monte Carlo method
is one of those terms that sounds daunting, but is actually pretty reasonable. The method allows
you to sample a set of possible outcomes, and it's usually used with pretty complex systems.
It works great for polymers. I actually have a little experience using the Monte Carlo method as part of a simulation of how light scatters off clouds of dust. I like the dust example
especially because it's illustrative of a more casual name for these methods, the random walk.
Let's say you have a cloud of randomly placed dust
particles. They're kind of hard to find. But nonetheless, you want to figure out how a ray
of light would traverse the cloud. To start with, you need to set up some conditions for the model.
We can make the simplifying assumption that each grain of dust will perfectly scatter light,
as in, once the ray
hits a piece of dust, the light gets redirected in a random direction. So we start with a ray
shining into the cloud, and then we start computing steps. We take the starting direction, then check
if the ray is hitting a grain of dust. If it is, it's time for a new step. So we roll the dice and randomize the direction
of the ray of light. Then we do the process again. We keep stepping, checking for hits,
and randomly scattering light until some end condition. In this case, we go until the beam
is free from our dust cloud. Computationally speaking, this is a really simple algorithm.
But the results are a pretty true-to-life model of how dust scatters light.
Run this for some huge number of light rays and you have a fantastic result to work with.
That's something that can't really be solved by normal mathematical means.
Hence, these models are sometimes called non-numeric.
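Here's a minimal Python sketch of that dust walk, flattened down to two dimensions. The cloud size, step length, and hit chance are all made-up parameters, but the shape of the algorithm is the same: step, check for a hit, scatter, repeat until the ray escapes.

```python
import math
import random

def trace_ray(cloud_radius=10.0, hit_chance=0.2, step=1.0):
    """Walk one ray through the dust cloud, scattering at random.
    Returns how many times the ray scattered before escaping."""
    x, y = 0.0, 0.0                              # start at the center of the cloud
    angle = random.uniform(0, 2 * math.pi)       # initial direction
    scatters = 0
    while math.hypot(x, y) < cloud_radius:       # end condition: the ray gets free
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        if random.random() < hit_chance:         # did we hit a grain of dust?
            angle = random.uniform(0, 2 * math.pi)  # perfect scatter: new random direction
            scatters += 1
    return scatters

# Run it for a huge number of rays and look at the statistics.
samples = [trace_ray() for _ in range(100_000)]
print("mean scatters per ray:", sum(samples) / len(samples))
```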
Wall and Hiller's polymer model worked in a similar way.
They start with one monomer and then go to add another.
The addition here is the random step.
The orientation of the bond is randomized.
There's a check to make sure that this addition doesn't overlap with an existing monomer.
If that's all good, then the model moves on to the next step.
The end condition is a little specific here. The first monomer was located at the origin of this virtual space, 0, 0, 0. The simulation completed once another monomer was placed on the origin.
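Here's a rough Python sketch of that loop-forming walk. I've put the monomers on a simple cubic lattice, which is my simplification to keep the geometry trivial; Wall and Hiller's actual program dealt with tetrahedral carbon bonds.

```python
import random

MOVES = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def grow_loop(max_steps=1000):
    """Self-avoiding random walk that ends when it returns to the origin.
    Returns the number of steps in the closed loop, or None if the
    chain overlapped itself (scrap it and try again)."""
    pos = (0, 0, 0)
    occupied = {pos}                             # monomers already placed
    for length in range(1, max_steps):
        dx, dy, dz = random.choice(MOVES)        # the random step: bond orientation
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        if pos == (0, 0, 0):
            return length                        # end condition: a closed loop
        if pos in occupied:
            return None                          # overlap: scrap this conformation
        occupied.add(pos)
    return None

loops = [grow_loop() for _ in range(50_000)]
sizes = [n for n in loops if n]
print("loops formed:", len(sizes), "mean size:", sum(sizes) / max(len(sizes), 1))
```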
In other words, the program took random steps until it formed a closed loop. The actual
data product was an analysis of the size of these randomly formed loops. The result, once again,
is a computationally simple model that produces uniquely complex data. This all works thanks to the boundary conditions put in place, aka the rules over
at Monte Carlo. In the dust example, the rule is just to scatter at random until light gets out,
not much of a boundary condition really. Whereas Wall and Hiller's polymer program had more complex
restrictions. But either way you cut it, the actual computational load per step
is relatively low. So once the code is set up, you can run it for hundreds, thousands,
or even millions of steps. That's just what Wall and Hiller did for their polymers,
and their digital weapon of choice was the ILLIAC. It was during this project that Hiller
had his flash of insight. He realized that the Monte Carlo method could, in theory, be adapted to compose music.
That may sound like a bit of a leap in logic, but this is actually a pretty small step.
The connection is explained in exhaustive detail in Experimental Music.
Here's the best summary I can give, and I think it's relatively cogent.
Music, in general, is composed from a random set of possible data. That data is just those
four data streams that I talked about earlier. Tonal, rhythmic, and then added dynamic and
playing instructions. It's the job of a composer to draw from that random set and build an intelligible
tune. This kind of sounds like some wishy-washy musical philosophy, but it makes good sense in
the simulation framework we're working towards. Think about the polymer simulation, for instance.
In general, there is an unlimited set of random polymer chains. You could have two monomers in a straight
line, or you could have a twisted loop of hundreds of monomers. Wall and Hiller's program, when you
gloss over the implementation details, is selectively drawing random polymer rings out of
this huge bag. That's roughly equivalent to what a composer does. Their job is to select a series of notes
from the overall set of possibilities, specifically looking for a series that sounds good.
In the polymer case, this selection criteria is simple. Make sure you get a loop without overlaps.
In the musical case, well, selecting good music is a little more complicated.
This is the basis that Hiller and Isaacson would work off of.
They would conduct a number of different experiments, the most important being the four separate
tests that would become the four movements of the Iliac Suite.
For the sake of brevity, I'm going to only really focus on the work corresponding to movements 1 and 4.
The middle movements are interesting, but they serve more as a bridge to the really experimental stuff at the end of the suite.
Once again, Experimental Music really provides great information on the, well, experiments in music.
As a warm-up, we can start with the first compositional experiment,
generating a simple melody.
I think this will give us a good jumping-off point
to start looking at more complicated simulations.
The first attempt was called the try-again method.
That's also the method by which I tend to operate.
This is the closest to the rubber polymer simulation.
Hiller and Isaacson encoded notes as simple numbers. ILLIAC's memory layout made it easy
to encode numbers between 0 and 15, which gave the program 16 notes to work with.
The duo chose to use this limited space to encode only natural notes, as in notes without sharps or flats.
Those correspond to the white keys on a piano.
This is a pretty smart choice, but it's also a big restriction right off the bat.
Without flats or sharps, you miss out on a lot of subtlety.
You aren't going to be playing much jazz that way, for instance.
The upshot is that composing with
only natural notes is a lot simpler. If you've ever sat down and noodled on a piano, then you probably
know what I mean. You can hammer out something that sounds remarkably like music by just hitting
the white keys. The program worked like this. Each melody started on middle C. The program would then pick a random
note from a series of 16 possibilities. It then checked if the new note violated any boundary
conditions. If so, then the whole melody must be trash, so it was scrapped and started over.
The natural-note thing here is more of an implicit boundary condition imposed by data representation.
Something to keep track of, but more part of the overall setup.
We get four explicit restrictions that are actually programmed for.
The actual program wouldn't start checking for boundary violations until at least three notes were picked.
That was, I think, partly due to the first boundary restriction.
No tritones. A tritone is an interval spanning three whole tones; with only the white keys in play,
that's the jump between F and B. The next rule is no sevenths, as in, you can't follow a note with the
seventh of that note's scale. So if the program just picked a C, then it can't pick a B.
For the final two, I'm pulling right from the text. Quote, three, the melody must start and
end on middle C. And four, the range of the melody from its highest to lowest note must not exceed
one octave. End quote. These all might sound like music nerd stuff,
but there is method here. Each of these rules is in place to ensure a nice sounding melody.
Tritones and sevenths are dissonant. A tritone doesn't make a nice chord, and a seventh can't
make a chord with its tonic. Dissonance can be used as a cool trick in music,
but it takes more subtlety than a random walk can muster on its own.
You have to resolve dissonance for it to sound good.
Otherwise, it just sounds unresolved.
Rule three is about resolution.
A melody sounds nice when it starts and ends on the same note. It makes it
sound finished. That's the whole point of resolving things. The final rule is to prevent extreme highs
and lows. Although with 16 notes you aren't really getting a lot of range anyway. The simulation
completed once it got back around to middle C.
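To make the shape of the try-again method concrete, here's a loose Python sketch. The encoding, with middle C parked at index 7, and the exact rule checks are my own reading of the four rules, not the duo's actual ILLIAC routines, and I start checking from the second note rather than the third just to keep things short.

```python
import random

WHITE = [0, 2, 4, 5, 7, 9, 11]              # C D E F G A B, in semitones

def semitones(i):
    """Semitone value of white-key index i (0-15), middle C at index 7."""
    return WHITE[i % 7] + 12 * (i // 7)

def violates(melody):
    """A loose reading of the boundary conditions."""
    interval = abs(semitones(melody[-1]) - semitones(melody[-2]))
    if interval == 6:                        # rule 1: no tritones
        return True
    if interval in (10, 11):                 # rule 2: no sevenths
        return True
    if max(melody) - min(melody) > 7:        # rule 4: stay within one octave
        return True
    return False                             # rule 3 lives in the driver below

def try_again(max_len=12):
    """Generate whole melodies from scratch until one passes every rule."""
    while True:
        melody = [7]                             # rule 3: start on middle C
        for _ in range(max_len - 1):
            melody.append(random.randrange(16))  # pick one of the 16 encoded notes
            if violates(melody):
                break                            # one bad note scraps the whole melody
            if melody[-1] == 7 and len(melody) >= 4:
                return melody                    # rule 3 again: resolve back to middle C
        # scrapped, or never resolved: loop around and try again

print(try_again())
```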
The final result was actually promising.
Hiller and Isaacson had a few short melodies generated by a computer.
But it was only a small step.
For one, this wasn't a very efficient program.
The try-again method made more attempts than it needed to.
That had to be corrected.
It was also highly simplified.
ILLIAC was only spitting out tiny monophonic tunes. A neat party trick, but not really impressive,
not something you can really write a paper about. The next experiment, what we can call Experiment 1, was far more ambitious than Try Again. From the text, quote,
In Experiment 1, the two major objectives were to develop a technique for the composition of a simple but recognizable type of melody, and to achieve simple polyphonic writing. End quote.
This is going to be the bare minimum for an interesting output. The first part here, the recognizable
type of melody, doesn't just mean a catchy tune. Hiller and Isaacson wanted ILLIAC to follow a
well-established musical form. This would give them a structured goal to program towards,
as well as offering a body of existing work to draw from. The form they chose was called strict counterpoint.
This is also where we reach another one of these patented Sean Gets Wrecked sections.
I've played music for years, basically since grade school, but I'm not very good at music
theory. I prefer to learn and play by rote, aka by ear. I can read sheet music just fine, but I kind of prefer to just pick up an instrument and
whistle out a tune.
I can't really explain my preferences, as you know, I'm usually in my head about stuff,
but this is just how my mind works when it comes to music.
Theory tends to kind of go in one ear and right out the other with me. So we're going to be learning about counterpoint together.
In general, counterpoint is a type of composition that shows up in concert music.
This is more commonly just called classical music, if you must.
Counterpoint is a way of describing and building polyphony.
I've heard that word pronounced a couple of different ways,
but this is just music that contains multiple tones at once. You could call that a score,
but counterpoint's a little more specific. The framework was formalized back in the 18th century, so it has a rich tradition and a lot of scholarly work written about it. I think that makes it count as a recognizable type of melody.
The basic idea with counterpoint is you build a song from a set of so-called cantus firmi,
or short melodic phrases.
These phrases might be a handful of notes, or they might be up to a dozen.
Then, working within a series of rules, you write out phrases
to play with the initial cantus firmus. What makes counterpoint useful here is that it contains
specific rules for how you develop those subsequent parts, those counter-melodies.
That can be translated into a program quite trivially. Experiment 1 starts with the four
basic rules of the earlier try-again test, but a few alterations. Melodies must stick to a single
octave. The cantus firmus must start and end on C. No sevenths, and tritones are now actually
allowed, but they have to be resolved. So you have to go like, ba ba ba, ba. You can't just go,
ba ba ba, since, you know, that leaves the listener hanging. In all, there are 16 total
rules, so I'm not going to hit all of them, just some highlights. Counter melodies had to resolve
to a note in the cantus firmus's key. That means that a counter
melody could start on, say, a D, but it had to end on a C, E, or G. The main melody was always in C, so
that made things easier. There's another fun rule that the text calls the, quote, forbidden repeat of climax. Each phrase had to have a climax note,
the highest note in the line. The program was only allowed to hit that note once unless it was a C,
then it could be repeated. There is also a rule about resolving skips. If the next note skipped
too many notes, then it had to be resolved to a note between the
skip. All of these rules are in place to create a nice set of phrases, and they're imported directly
from the existing rules of counterpoint. There is also a specific set of rules, eight of the total
16, just for handling harmony, as in to prevent the counterpoint melodies from clashing.
The actual Monte Carlo program was also beefed up a little bit to improve efficiency.
The first alteration was to drop the total try-again methodology. Instead, the program
would pick a random note and then test if it fit. If not, then another note would be chosen.
A melody was only scrapped if no suitable note could be found. Total randomness was also replaced with a more subtle note selection method. This new
program was smart enough to detect if it needed a truly random note, or needed a note to resolve a
skip or a tritone. Thus, it wasn't always drawing from total noise.
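In code, that first alteration is a one-function change. Here's a hedged sketch, with `fits` standing in for whatever battery of rule checks you're running; both names are hypothetical helpers of mine, not anything from the actual ILLIAC program.

```python
import random

def pick_note(melody, fits):
    """Try candidate notes in a random order instead of scrapping the
    melody on the first bad draw."""
    candidates = list(range(16))
    random.shuffle(candidates)        # one random ordering per decision
    for note in candidates:
        if fits(melody + [note]):
            return note               # first note that passes the rules
    return None                       # no suitable note at all: scrap the melody
```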
That smarter selection scheme is interesting because I think it speaks to the hardware limitations at play.
The Monte Carlo method is wonderful on these older computers because it doesn't necessarily
require too many calculations. The only real calculations are the random number generation
and then any checks you run.
Well, for the ILLIAC suite, all the checks are just comparisons.
At most, there's a subtraction or addition operation.
So randomness was the real heavy computational lift.
Therefore, it was in Hiller and Isaacson's best interest to limit the number of calls to generate random
numbers. The final part of Experiment 1 is, of course, the matter of rhythm. This is one of the
few places where Experimental Music is a little lacking on details. I do have the full scoring
for the entire ILLIAC Suite, so I can confirm that Movement 1 has rhythmic data to it. It's
not complicated, but it does have notes of different lengths. So how were these rhythms
chosen for Experiment 1? This is something I've agonized over. I even went as far as going back
to some of the texts that Hiller and Isaacson cite when discussing the history of counterpoint. That turned out to be a bit too deep for me. Once again,
formal music theory tends to be a little beyond my intellectual reach. But here's my educated
guess based off what's implied in Experimental Music. It seems like the duo were pulling from some strict rhythmic format,
probably some part of counterpoint that I'm not familiar with. I haven't seen it spelled out
explicitly in the text, so it probably is one of those things that's just too obvious to mention
for those conversant in composition. Hiller and Isaacson do explain that the overall layout of the score was chosen to
highlight the progression of the experiment itself. It goes from a single voice to two voices and on
up to all four parts. But rhythm isn't really mentioned in Experiment 1. A contributing reason
for my logic here comes down to Experiment 3. Experiments 1 and 2 were all
about tonality, harmony, and actually generating something that didn't sound like noise. Rhythm
isn't even discussed until Experiment 3. And by this, I mean that Experiment 3 was,
in large part, about introducing rhythm, period. To quote directly from the beginning of the section on this experiment,
quote,
Rhythm was perhaps the most important musical element we felt had to be treated
if a fundamental compositional technique utilizing computers was to be developed.
Our objective in considering rhythm as a musical entity to be treated by computer processing was in accord with a recognition of this condition... basis for the further elaboration of rhythmic devices in more complex contexts. End quote.
So, two things from that passage. One, this quote may have clued you in to why I've been
citing sparsely. This is a pretty dense book. And two, rhythm is important. I think this importance
is probably why the first two experiments lack much discussion
of rhythm. The team was working up to that task. It turns out that rhythm really complicates things.
Instead of lingering on experiment three, I'm just going to jump forward to the ultimate data
product. Experiment number four. This is where ILLIAC was set loose to generate more
open-ended compositions. Rhythm can be a bit of a pain to deal with. The actual notation for rhythm,
at least in the Western tradition, was developed later than tonal notation. It doesn't help that
the standard encoding scheme is a bit... complicated.
A piece of music is broken up into measures, also called bars.
Each measure is some set number of beats long,
the length being determined by the key signature.
Usually this is four beats, but it can be anything you want.
Notes are then described as being a fraction of a measure. So a whole note
will be four beats long, in other words, a full measure. A quarter note is actually one beat,
or a quarter of a measure. An eighth note is an eighth of a measure, which is also half a beat.
However, like I said, this is all subject to change. You don't have to compose in a time signature with four beats per measure.
In fact, you can even use multiple time signatures in one song.
You can see how notating this could get confusing.
How do you port this scheme over to a computer?
Well, the MIDI standard, one of the more flexible ways to describe music digitally,
uses a bit of a workaround.
Instead of actually notating rhythm, MIDI provides the user with two simple commands, note on and note off.
By timing the on and off, you create rhythms.
That's a nice solution because it's flexible and you don't have to account for time signatures at all.
The computer can just beep.
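For a taste of what that looks like, here are a few hand-written events in the spirit of MIDI. I'm using plain tuples instead of a real MIDI library, and assuming 480 ticks per quarter note, which is a common resolution.

```python
# (ticks_to_wait, message, note_number): rhythm lives entirely in the timing.
events = [
    (0,   "note_on",  60),   # middle C sounds...
    (480, "note_off", 60),   # ...and stops one quarter note later
    (0,   "note_on",  62),   # D starts immediately
    (240, "note_off", 62),   # held for an eighth note
]
# No time signature appears anywhere; on and off timing is the whole encoding.
```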
MIDI is much more recent, but I think we can find a parallel here.
Hiller and Isaacson also used a workaround for rhythm notation.
However, ILLIAC would gain a special benefit from this workaround.
Starting in Experiment 3, we see rhythm encoded as a set of possible measures. Instead of allowing ILLIAC to pick individual notes,
Hiller and Isaacson provide a set of 30 possible measures, each with a predefined rhythm.
Experimental Music provides a table with all these possible rhythms. This made it much easier to actually store and generate beats, since ILLIAC just had to pick a number.
Thus, the normal Monte Carlo schema could be adapted to a new use.
The measure table also let Hiller and Isaacson perform a type of pre-selection.
The actual rhythm table could only include beats that, you know, made sense.
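Here's the flavor of that trick in a few lines of Python. My table entries are invented stand-ins; the real table in Experimental Music has 30 of them.

```python
import random

# Each entry is one measure's worth of note lengths, in beats. Every row
# already adds up to a full 4/4 bar, so any pick "makes sense" by construction.
RHYTHM_TABLE = [
    [4.0],                      # one whole note
    [2.0, 2.0],                 # two half notes
    [1.0, 1.0, 2.0],            # quarter, quarter, half
    [1.0, 0.5, 0.5, 1.0, 1.0],  # a little syncopation
]

measure = random.choice(RHYTHM_TABLE)   # ILLIAC just had to pick a number
```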
This also helped in a more mundane way.
The output.
This is an aspect of the ILLIAC Suite that I've only brushed on so far,
so I guess it's time to actually get into it.
ILLIAC actually had a pretty robust set of outputs for the time.
By the 60s, Don Bitzer was getting ILLIAC to send out simple graphics to remote terminals.
There were also the more traditional punched paper mediums.
But in 1955, there was only one real human-readable output, the ever-vexing printer.
But this medium introduced its own problem. Back in the day,
printers were strictly textual affairs. They printed characters using some kind of impact
method. These are basically computer-automated typewriters. Useful for most things, true,
but the ILLIAC Suite was a little too far afield. Traditionally, music is written by hand.
You usually start with a sheet that's had a staff printed on it.
You grab a pen, and you get down to business.
More recently, nice digital packages for composition have hit the scene,
but even then, you need a capable printer to output physical music.
Either that or a screen to display said music.
Experimental Music does mention some typewriter-like machines for music composition,
but that topic is quickly dropped as a dead end. The solution Hiller and Isaacson settled on was
rudimentary, to put it lightly. They had ILLIAC dump out numeric data straight from their simulation.
So the actual output from these programs was a sheet of paper crammed with numbers.
Each voice would be on its own line, so you got groups of four lines.
This worked out pretty simply for purely tonal experiments.
The addition of rhythm made things more complicated.
The final output became kind of a mess of numbers that had to be decoded. This becomes a larger morass when you
add in the last two data streams. Experiment 3 added code for handling dynamics and playing
instructions. Once again, this is all numerically encoded. Once printouts were done, Hiller and Isaacson had to translate
this numeric pile into human-readable notation by hand. That had to have been slow going.
While these printouts must have been nasty, they pale in comparison to Big Number Four.
Experiment Four is where things go fully off the rails. As the text describes it, this experiment was an attempt to generate a novel musical form using a computer.
In other words, no more crutches.
Sharps and flats were thrown in.
The ability to change key was added.
Up to this point, everything was in C major.
We have the full four-part data stream and the crowning piece.
We have the Markov chain.
Thus, the ILLIAC suite ends with a full Markov chain Monte Carlo simulation,
aka the MCMC method.
Now, as another weird aside, the text spells Markov as Markoff, with a double F. It's more properly
spelled Markov, with a V. I don't fully know what's up with that. My best guess is there were some
translation issues with papers they were reading. Anyway, the core concept of a Markov chain is actually pretty simple.
It's a series of finite states where each move from one state to another has an associated
probability.
This probability is, in turn, dependent on the current state of the overall system.
Usually this works out to something kind of like a rigged random number generator, where
the result of the dice roll is based off the current system's state.
The technique was initially developed by, who else, but Andrei Markov.
These chains were first used for some statistics that's a little too heavy-duty for me.
However, shortly after all that math, Markov would turn his chain to another task, poetry.
He used his new method to analyze a set of poems.
So, even at this really early juncture, Markov chains are being used to characterize non-numeric data.
This method really gets supercharged once computers enter the picture.
Adding a Markov chain to a random simulation
allows you to better model reality. Not all processes are 100% random. Thus, the Monte
Carlo method has a certain limit to its usefulness. Take the dust scattering example from earlier.
A ray of light won't scatter in a totally random direction. It's probably more likely to scatter
off at 90 degrees or so. It's highly unlikely that the incident ray of light would reflect
straight through a grain, a full 180. You can start to build up a Markov chain from this.
You can create a table of possible scattering angles based off the incoming angle of light.
In other words, a set of possible new states with probabilities based off the model's current state.
One of the first simulations to make use of this combined approach, the MCMC method,
was developed at Los Alamos.
The details were actually published in the Journal of Chemical Physics in 1953.
The article was titled,
Equation of State Calculations by Fast Computing Machines.
This may have been where Hiller and Isaacson heard of the method, but the paper doesn't mention Markov or Markoff.
There's this little quirk that this method is so ubiquitous, or at least so simple to stumble upon,
that it's not always called the Markov method. The paper is also a bit of a red herring. It's a discussion of the
use of an augmented Monte Carlo method for calculating multidimensional integrals. You see,
the MCMC method is good at sampling a so-called probability space, or prior space.
If you have an equation that you can't outright solve,
you can use the MCMC method to make a bunch of educated guesses.
Make enough guesses using, say, a computer,
and over time you approach the proper solution.
The 53 paper says that this can be used to solve nasty integrals.
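To make that concrete, here's a bare-bones Metropolis sampler in modern Python. This is the general shape of the 1953 idea, not the paper's actual hard-sphere code: propose a random step, accept it with a probability based on how likely the new state is, and let the guesses pile up.

```python
import math
import random

def metropolis(log_p, x0=0.0, step=1.0, n=100_000):
    """Random-walk Metropolis: accept each proposal with probability
    min(1, p(new)/p(old)), so the chain ends up sampling from p."""
    x, samples = x0, []
    for _ in range(n):
        proposal = x + random.uniform(-step, step)
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal              # accept the move
        samples.append(x)             # a rejected move repeats the old state
    return samples

# Educated-guess integration: E[x^2] under a standard normal is exactly 1.
draws = metropolis(lambda x: -x * x / 2)
print(sum(d * d for d in draws) / len(draws))   # should land near 1
```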
However, its authors list give away its true intentions. It was penned by Metropolis,
Rosenbluth, Rosenbluth, Teller, and Teller. Four of the five authors were working at Los Alamos
National Lab at the time. The Tellers on the byline are particularly telling.
One of those, Edward Teller, is sometimes called the father of the hydrogen bomb.
So, you see, the truth is, the MCMC method first stretched its proverbial legs,
making a literal doomsday weapon. They were using this at Los Alamos to simulate nuclear fusion.
Go forward a few years, and ILLIAC is using that same method to generate music.
How does this heavy-duty algorithm apply to the ILLIAC Suite? Experimental Music explains this
best for tonality, but it was used for all of the other data streams. The first trick was to stop
describing music in terms of discrete notes and instead examine the transition between those notes.
Call it reading between the tones. The transition, the jump from one note to another, represents a
change of state. In a chemical system, a change of state has some
associated energy requirement, a barrier that has to be met. In music, a transition has an impact
on the overall melody. Certain notes don't fit together, others blend nicely, thus certain
transitions are less likely to occur. So, we have a change of state with an associated consequence, and those
consequences are based off the current state of the model. This is right where a Markov chain
really makes sense. Hiller and Isaacson created a table of probabilities for different transitions,
for different intervals between notes. The table represents how good a note sounds when played after a certain note.
Interestingly enough, the most likely note according to the literature is actually a repeat.
So if the tune started on a C and was only following the Markov chain, then it would most
likely just play C, and then C, and C, and C, and C on forever. But by applying the larger set of rules to the system,
something more interesting happens.
The computer actually creates its own music.
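Here's a toy version of that setup in Python. The transition weights are invented, though I've made the repeat the heaviest, as the text describes, and my `allowed` check is a trivial stand-in for the full rule set.

```python
import random

# Weight of each melodic step, in white-key indices, given the current note.
# Invented numbers; the repeat (step 0) is the single most likely move.
STEP_WEIGHTS = {0: 5, 1: 4, -1: 4, 2: 2, -2: 2, 3: 1, -3: 1}

def next_note(current, allowed):
    steps = list(STEP_WEIGHTS)
    weights = [STEP_WEIGHTS[s] for s in steps]
    while True:
        step = random.choices(steps, weights)[0]   # the rigged dice roll
        candidate = current + step
        if allowed(current, candidate):            # the larger rule set on top
            return candidate

in_range = lambda prev, note: 0 <= note <= 15      # stand-in for the real rules
melody = [7]                                       # start on middle C again
for _ in range(12):
    melody.append(next_note(melody[-1], in_range))
print(melody)
```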
With the experiment design explained,
let's actually close out by listening to some tunes.
The output of Hiller and Isaacson's four experiments
were transposed to sheet music and then prepared for
string quartet. That's a group composed of two violins, a viola, and a cello. Experimental
Music points out that ILLIAC actually produced way more music. Pieces had to be arbitrarily
selected to form the final scores. The idea was to present a representative sample instead of cherry-picking the best tunes.
The suite was first performed in August of 1956 at the University of Illinois.
It was part of a larger concert.
The program notes give a very cogent and detailed explanation of the project, but we've heard all that already.
I prefer the press release that the university put out a few days later.
To quote,
A musical suite composed by an electronic brain was introduced by a string quartet last night,
but some listeners didn't dig the beat.
Why, it does away with the need for human composers, one woman music lover remarked glumly.
According to that same press release, the cellist that was playing in the quartet also weighed in.
Maybe not a ringing endorsement, but not that bad.
So, was the output really so concerning?
Well, we can hear for ourselves.
Here's what Movement 1, aka Experiment 1, sounds like.
Keep in mind, this was the most restrictive experiment.
And unlike Hiller and Isaacson, I'm going to cherry pick a nice selection for us to listen to. That's pretty, isn't it?
It sounds like any other string quartet.
If you didn't know it was composed by a computer,
you probably wouldn't even try to critically listen to that piece.
And I think that's what makes the ILLIAC Suite so interesting. Right out of the gate, it's recognizable as concert music, as classical music.
It doesn't sound robotic, it just sounds... normal. I think that's really neat.
Part of the success here is thanks to the fact that Hiller and Isaacson went with a human quartet.
Computers at this point couldn't really produce this kind of sound. They could beep, but that was about it.
In theory, you could program in samples, but that technique was still in its infancy.
One solution would have been to use existing analog synthesizers or go the human route. I think
real instruments help to blend the ILLIAC Suite into the larger musical context.
But of course, we can go further afield. The Suite starts out recognizable, then
goes beyond that comfort zone. Here's an excerpt from Movement 4,
the most experimental form of the ILLIAC Suite. I'm going to assume you can hear the difference.
In the final experiment, ILLIAC was let loose.
It still followed all the rules of composition that Hiller and Isaacson had programmed in,
but it had more freedom to
build rhythms and choose notes and keys. What we get is a tune that's technically music.
It's technically very musical. It's not breaking any rules of composition. It just doesn't sound
all that pleasant to the human ear. Maybe the result here is that creative pursuits do
still require a bit of a human touch, but with some smart guidance, you can program a computer
to help out. Alright, that brings us to the end of this very musical episode. We've examined how
a computer can be set to the task of generating
music, but ultimately, the ILLIAC suite was only a starting point. I think it's plain to see, or
rather hear, that vacuum tube machines weren't going to be really replacing composers anytime
soon. I want to point out here that the ILLIAC suite isn't really an example of artificial intelligence. At least,
I wouldn't say so. One of the reasons that I have to bring this up is because during my research,
I ran into a number of modern articles that called this the first AI-composed music.
Well, that's simply not the case. Not only is that inaccurate, but I think it takes away some of the coolness
of the ILLIAC suite. I find that many times when you say something is AI-generated,
the actual program becomes a sealed box. Yeah, it works. Somehow, the computer figures it out for
you. What Hiller and Isaacson actually did was adapt a very serious computing method to a more fun purpose.
The Monte Carlo method and the MCMC method in general are both crucial to current scientific pursuits.
Better still, this is an approach to simulation and study that is only viable with a computer.
These are mathematical tools that become useful because
of the computing revolution. This is part of the digital revolution that we need to keep in mind.
Yes, computing has revolutionized how we work. It's revolutionized the world. It's touched all
our lives on a very personal level. But before any of that happened, computers earned
their salt by revolutionizing mathematics. Algorithms like the MCMC method were part
of that revolution. In some research, the MCMC method led to more powerful bombs.
In other applications, it led to some captivating music.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of computing's past.
And hey, if you like the show, there are now a few ways you can support it.
If you know someone else who'd be interested in the history of computing,
then why not take a minute to share the show with them?
You can also rate and review on Apple Podcasts.
And if you want to be a superfan,
you can support the show directly
through Advent of Computing merch
or signing up as a patron on Patreon.
Patrons get early access to episodes,
polls for the direction of the show,
and bonus content.
You can find links to everything on my website,
adventofcomputing.com.
If you have any comments or suggestions
for a future episode,
then go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter.
And as always,
have a great rest of your day.